Loop equations and topological recursion for the arbitrary-$\beta$ two-matrix model

We write the loop equations for the $\beta$ two-matrix model, and we propose a topological recursion algorithm to solve them, order by order in a small parameter. We find that to leading order, the spectral curve is a "quantum" spectral curve, i.e. it is given by a differential operator (instead of an algebraic equation as in the hermitian case). Here, we study the case where that quantum spectral curve is completely degenerate: it satisfies a Bethe ansatz, and the spectral curve is the Baxter TQ relation.

The case of one random matrix belonging to an arbitrary β ensemble, although less studied than the hermitian case, has received a lot of attention. In particular, loop equations have been known for a long time, and their recursive solution was recently proposed in [17,18,19,46].
For the hermitian case, it turned out that a 1-matrix model was not general enough: for instance, the spectral curve of the 1-matrix model is always quadratic, it cannot have higher degree, and the critical behaviors are related to only a small subset of all possible conformal field theories. A 2-matrix hermitian model turns out to be much more general: it allows one to reach spectral curves of any degree, and any conformal minimal model [21,26,27,56]. Moreover, as an application to counting discretized surfaces, a 2-matrix model allows one to count discrete surfaces carrying colors [25,54].
Unfortunately, generalizing the 2-matrix model to arbitrary β seemed very difficult, and was almost never studied. One reason is that the joint eigenvalue distribution was not known, because the integration over the angular part of the matrices was not known (properties of angular integrals for hermitian matrix models can be found for example in [68]).
Recently, some progress was made [6] on those angular integrals, and although we do not yet know how to compute angular integrals for arbitrary β, we already know that those angular integrals have to satisfy some differential equations, and this is sufficient to derive loop equations. This is what we do in this article: we derive the loop equations, and solve them perturbatively by expanding in some small parameter.

The hermitian two-matrix model
The hermitian two-matrix model is given by the partition function:
Z = ∫∫ dM_1 dM_2 e^{−(N/T) Tr(V_1(M_1) + V_2(M_2) − M_1 M_2)}   (1.1)
where M_1 and M_2 are N × N hermitian matrices and the measure dM_i is the corresponding Lebesgue measure associated with all independent entries of the matrices. V_1 and V_2 are called the potentials; we shall assume in this article that V_1 and V_2 are polynomials. The parameter T, often called the temperature, is redundant: it can be absorbed by a change of variable M_2 → T M_2 and a redefinition of V_2; however, we prefer to keep it for convenience, and because of its future geometric interpretation. The hermitian 2-matrix model was introduced in particular as a formal series to study the Ising model on a random surface, i.e. the Ising model coupled to 2D gravity [26,44,54].
It is very well known [61] now that this integral can be rewritten in terms of an eigenvalue problem as (1.2):
Z ∝ ∫ dX ∫ dY ∆(X)² ∆(Y)² e^{−(N/T) Tr(V_1(X) + V_2(Y))} I(X, Y)
where X = diag(x_1, ..., x_N) and Y = diag(y_1, ..., y_N) are diagonal matrices representing the eigenvalues of M_1 and M_2, ∆(X) = ∏_{i<j} (x_j − x_i) is the Vandermonde determinant, and I(X, Y) is the Itzykson-Zuber integral [51,52], which corresponds to the integration over the angular variables:
I(X, Y) = ∫_{U_N} dU e^{(N/T) Tr(X U Y U†)}
where U_N is the unitary group equipped with the Haar measure. Relatively to the Itzykson-Zuber integral, one can also define the quantities:
M_{i,j}(X, Y) = ∫_{U_N} dU |U_{i,j}|² e^{(N/T) Tr(X U Y U†)}
which determine the Itzykson-Zuber integral by the formula:
I(X, Y) = Σ_i M_{i,j}(X, Y) = Σ_j M_{i,j}(X, Y).
Note in particular that the r.h.s. does not depend on the second index i or j. Technically, the last formulas are obvious since Σ_i |U_{i,j}|² = Σ_j |U_{i,j}|² = 1 in the unitary group. Eventually, in the hermitian case, it is also known that the M_{i,j}'s satisfy the Dunkl equation [15,16,24]. We will see in the next sections that these properties can be extended uniquely in the case of the β-deformation of the hermitian two-matrix model.
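The independence of the r.h.s. on the second index follows from unitarity alone. A minimal numerical illustration (not from the paper; the Haar sampler via QR with phase fixing is a standard construction):

```python
import numpy as np

# The formula I(X,Y) = sum_i M_{i,j}(X,Y) is independent of the second index
# because the rows and columns of any unitary matrix are unit vectors:
# sum_i |U_{i,j}|^2 = 1 = sum_j |U_{i,j}|^2.  Check on a Haar-random unitary
# (sampled by QR decomposition with phase fixing, a standard construction).

rng = np.random.default_rng(0)
N = 5
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
Q, R = np.linalg.qr(A)
U = Q * (np.diag(R) / np.abs(np.diag(R)))   # rescale columns by unit phases

col_sums = np.sum(np.abs(U) ** 2, axis=0)   # sum over i, one value per j
row_sums = np.sum(np.abs(U) ** 2, axis=1)   # sum over j, one value per i
print(np.allclose(col_sums, 1.0), np.allclose(row_sums, 1.0))   # True True
```

Any integrand then satisfies Σ_i |U_{i,j}|² f(U) = f(U) pointwise, which is all the identity uses.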

Generalization to arbitrary β two-matrix models
Similarly to what happens in the one-matrix model, we would like to generalize the hermitian two-matrix model to other β-ensembles of matrices. As in the case of one-matrix models, the usual generalization is made from the diagonalized version of the problem (1.2), by raising the Vandermonde determinants to the power 2β (1.7), where I_β(X, Y) is a generalized version of the Itzykson-Zuber integral that we will describe later. Note that with our convention, the hermitian case corresponds to β = 1 (the notation used in work on the AGT conjecture [2] and on the Laughlin wave function), whereas sometimes in the literature it is normalized to β = 2 (Wigner's notation [62,67]). We prefer the first notation in order to avoid unnecessary powers of 2 in all formulas. The diagonalized version (1.7) corresponds to the idea of the generalized matrix integral (1.8), where E_{N,β} would be an ensemble of matrices such that the diagonalization of M_1 and M_2 gives (1.7). Note that this definition is only formal, since apart from β = 1, 0, 1/2 or 2, where E_N corresponds to hermitian, diagonal, real-symmetric or quaternionic matrices, no other ensemble is known presently (it would be interesting to see if the β-matrix ensembles introduced by [35] also reproduce the I_β term).

The angular integral
We shall now define the angular integral I_β(X, Y) so that it coincides with the actual angular integral for the matrix cases β = 1, 0, 1/2, 2; in fact we shall use the angular matrix integral defined in [6].
Following [6], we first define a good generalization of the M_{i,j}, and from (1.5) we will define the generalized version of the Itzykson-Zuber integral. In [6], the authors claim that the natural generalization of the M_{i,j}, denoted here M^{(β)}_{i,j}, is uniquely determined by a set of conditions generalizing the hermitian ones (in [6], an explicit solution for those M^{(β)}_{i,j} is provided as an integral, or is given by a recursion on N). Moreover, all these properties are standard results in the hermitian case β = 1, and they hold for β = 1, 1/2, 2. In particular, note that M_{j,i}(Y, X) gives another formulation of (1.9). Similarly to what happens in the hermitian case, it is logical to define the generalized Itzykson-Zuber integral by (1.11). Note again that this definition recovers the known cases when β = 1, 1/2, 2. Eventually, from this definition one can prove (see [6]) that the generalized Itzykson-Zuber integral I_β(X, Y) satisfies the Calogero-Moser equation (1.12), where H^{(β)}_X is the Calogero-Moser Hamiltonian [16]. Note again that I_β(X, Y) is well known to satisfy (1.12) in the β = 1, 1/2, 2 cases. Now that we have introduced a proper generalized two-matrix model, we can try to solve it in the large N limit and in various regimes of the parameters. In particular, since no (bi)-orthogonal polynomial techniques are known in the case of an arbitrary exponent β in the Vandermonde determinant (although some work has been done in [34]), we will use in this article the method of loop equations to determine the correlation functions of the model.

Summary of the main results
Our goal is to compute the asymptotic (possibly formal) expansion of the 2-matrix model integral and of its correlation functions, i.e. the cumulants (subscript c) of expectation values of products of traces of resolvents:
W_n(x_1, ..., x_n) = ⟨ Tr 1/(x_1 − X) ⋯ Tr 1/(x_n − X) ⟩_c.
We want to expand them in powers of a small parameter ǫ. The small parameter ǫ is a combination of 1/N and β. There can be several asymptotic regimes of interest, which we discuss in detail in section 4.

Main results
• In section 3 we derive the loop equations for the 2-matrix model with arbitrary β.
• In section 5 we solve the loop equation to leading order for the resolvent W_1, i.e. we compute W_1^{(0)}. We show that W_1^{(0)} is given by the logarithmic derivative of a function ψ(x), where ψ(x) is a solution of a linear differential equation with polynomial coefficients, E(x, ŷ)·ψ(x) = 0, where E(x, y) is a polynomial in 2 variables.
• This equation for ψ can be thought of as a "quantum spectral curve". We recall that for β = 1 one gets instead an algebraic equation E(x, y) = 0 for the resolvent.
• This equation can also be seen as a Baxter T-Q equation, where ψ(x) is the Q function. Its zeros s_i (ψ(s_i) = 0) are thus the Bethe roots, and the equation E(x, ŷ)·ψ = 0 can be translated into a Bethe Ansatz equation for the s_i's; we do it in (6.17). The Bethe equation for S = diag(s_1, ..., s_N) can be written in the following way: find a matrix B satisfying a set of algebraic relations with S, where e = (1, 1, ..., 1)^t.
• In section 6.6 we show that this Bethe ansatz equation can also be written as the extremization of a Yang-Yang action: at the extremum, the variable S equals diag(s_1, ..., s_N). The leading order F_0 of ln Z is the value of the Yang-Yang action at its extremum, and the first subleading term F_1 is half the log of the determinant of its Hessian.
• In section 6.7.1 we compute the leading order of the 2-point function W_2^{(0)} in terms of the Hessian H = S′′ of the Yang-Yang action. In a similar way, we compute the leading order of the 3-point function W_3^{(0)} in terms of H = S′′ and S′′′ in (6.7.2).
• In section 7 we provide a "topological recursion" algorithm to compute every subleading correction to any n-point function, i.e. W_n^{(g)}. However, in contrast with [48], the recursion kernel K(x_0, x) is here a matrix of dimension d_2 = deg V_2′. We derive the topological recursion in section 7.4, and then we describe the kernel K(x_0, x) in more detail in section 7.5 and in the appendix.

Loop equations for the β-deformed two-matrix model
The loop equations, also called "Schwinger-Dyson" equations, are a very powerful tool to study random matrices perturbatively, in the expansion in some small parameter (for the hermitian case, the small parameter is usually 1/N in the large N limit), see [3,23,26,40,53,66]. In short, Schwinger-Dyson equations exploit the fact that an integral is invariant under a change of variables; an alternative way to obtain the same equations is integration by parts.
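The integration-by-parts mechanism can be seen on a toy one-dimensional integral: for a measure e^{−V(x)} dx, integrating the total derivative (f e^{−V})′ gives ⟨f′⟩ = ⟨f V′⟩. A quick numerical sketch (the potential and the test function are illustrative choices, not from the paper):

```python
import numpy as np

# Schwinger-Dyson equations are integration by parts in disguise: for a
# measure e^{-V(x)} dx on the real line, 0 = ∫ (f(x) e^{-V(x)})' dx implies
#   <f'(x)> = <f(x) V'(x)>.
# Toy check with V(x) = x^4/4 + x^2/2 and f(x) = x^3 (illustrative choices).

x = np.linspace(-8.0, 8.0, 200001)
dx = x[1] - x[0]
w = np.exp(-(x**4 / 4 + x**2 / 2))            # unnormalized weight e^{-V}
Z = w.sum() * dx

mean_fprime = (3 * x**2 * w).sum() * dx / Z            # <f'>
mean_fVprime = (x**3 * (x**3 + x) * w).sum() * dx / Z  # <f V'>
print(mean_fprime, mean_fVprime)
```

The two numbers agree to quadrature accuracy; the matrix loop equations below are the multi-variable analogue of this identity.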

Notations for correlation functions
For the hermitian two-matrix models, the loop equations are well known [20, 41-43, 47, 48, 56, 65] and they are very useful to compute the large N expansion of correlation functions W_n^{(g)} and symplectic invariants F_g, presented in a more general context in [48].
In the one-matrix model, it is also known [46] that the loop equations method can be generalized to arbitrary β quite directly from the hermitian case. Indeed, we remind the reader that we can see loop equations as integration by parts, or Schwinger-Dyson equations, for the diagonalized problem (1.7). In the present β-deformed two-matrix model case, we will see that we can also get loop equations by following the same approach. Before writing down these equations, we need to introduce some notations (the same as in [20,41,42]): • The potentials are assumed to be polynomials. • The correlation functions are defined by (3.2), where the brackets indicate that we take the expectation value with respect to the measure defined by (1.7) for the random variables X = diag(x_1, ..., x_N) and Y = diag(y_1, ..., y_N) (3.3). The index c stands for the connected part (also called cumulant) of the correlation function. In order to have more compact notations, we will sometimes simply denote W(x) = W_1(x) for the first correlation function.
• We can define similarly the second type of correlation functions (3.5), and denote W̃(y) = W̃_1(y) for the first one.
• To close the loop equations we will need to introduce the functions U_n (3.6). Note that they are polynomials in the variable y, and remember that M^{(β)} depends on X and Y and is also a random variable. In a similar way, we also introduce the P_n (3.7). Note that this time the P_n(x, y; z_1, ..., z_n) are polynomials in both x and y.
The loop equations method is a powerful way of deriving an infinite set of equations connecting the correlation functions and the functions P_n and U_n introduced before. There are several ways to derive the loop equations, and we choose here integration by parts, which is the most convenient in our setting. In the case of hermitian, real-symmetric or quaternionic matrix models, these loop equations are already known and can be derived directly from the matrix integral (before any diagonalization). They can be found in many different places in the literature [13, 20, 21, 26, 41-43, 48, 54, 56, 65]. In our arbitrary-β setting, we do not have the initial matrix integral properly defined, thus we must adapt the derivation of the loop equations to our definitions (1.9) and (1.11). This is done in successive steps that are fully detailed in the next sections.

First step: deriving an auxiliary result
We start by evaluating the following integral. The result is clearly null if we choose an integration path which goes to ∞ in directions in which e^{−(N/T) V_1(X)} and e^{−(N/T) V_2(Y)} vanish exponentially, because we integrate a total derivative. The right hand side yields 3 different contributions:
• Acting on the exponential, we find (3.10).
• Acting on the Vandermonde determinant, we find (3.11).
• Eventually, acting on M^{(β)}_{i,j} and using the Dunkl differential equation satisfied by the M^{(β)}_{i,j}'s (1.10), we find a third contribution.
Then we see that (3.10) cancels with the last part of (3.11), so that we have our first equation (3.13).

Second step: finding the loop equations
Then consider the integral (3.14), whose r.h.s. yields 4 contributions:
• (i) Acting on the exponential, we find:
• (ii) Acting on the Vandermonde determinant, we find:
• (iii) Acting on 1/(x − x_i), we find:
• (iv) Acting on M^{(β)}_{i,j} and using (1.9), we find:
Now we observe the following identities. First, in (3.15) we can perform a rewriting so that we have (using the notations U_0 and P_0 introduced in (3.6) and (3.7)): Secondly, we can split (ii) in the following form: Note that (ii)′′ is the same as the last terms of (iv), so that it cancels out. We can also split (ii)′ into a sum over i, k minus the case i = k, which is identical to (iii) except for a factor β. Therefore we can regroup (ii), (iii) and (iv) to get: Observe now that we have: Then we also have: We recognize here the second term of (3.21): Eventually, we are left with (iv). We perform the change y_j ↔ y_j − y + y and split it into two parts, using the fact that Σ_i M^{(β)}_{i,j} = I_β and that from our first step we have (3.13), so that eventually: Now we can put everything back together to get the loop equation, which can be rewritten (performing a multiplication by −T/(Nβ)) as the master loop equation (3.28), where we have defined ℏ = (T/N)(1 − 1/β), which we prefer to write ℏ = g_s (√β − 1/√β) (3.29).

Higher order loop equations
With the same approach as before, one can deduce higher order loop equations for U_n(x, y; ξ_1, ..., ξ_n). Indeed, one first considers the analogue of the auxiliary integral, and then the second step can be carried out using the corresponding integral. First, we find the same terms as before (the new factor ∏_{p=1}^n Tr 1/(ξ_p − X) being treated as a constant). However, the main difference is that now the derivative also acts on ∏_{p=1}^n Tr 1/(ξ_p − X), which gives an additional contribution. Then, using two further identities, the action of the derivative can be rewritten, and the computation ends with the combination V_1′(x) U_n(x, y; ξ_1, ..., ξ_n) − P_n(x, y; ξ_1, ..., ξ_n) (3.39). These loop equations (3.28), (3.39) are exact equations satisfied by the correlation functions; they hold for every N and every β. Now, in order to solve them, we want to consider some limit, and an expansion around that limit, in which they can be solved order by order.

Relationships between correlators
The loop equations which we have written involve the resolvent W(x), as well as the higher correlation functions W_n, and also some auxiliary functions U_n and P_n. However, all of them are related.
First, notice that P_n is the large-x polynomial part of V_1′ U_n:
P_n(x, y; x_1, ..., x_n) = (V_1′(x) U_n(x, y; x_1, ..., x_n))_+ (3.40)
Second, notice that the large-y behavior of U_n is related to W_{n+1}. Then, the loop insertion operators allow one to increase the index n by 1. These operators are formal, and their main interest is that they connect the correlation functions W_n(x_1, ..., x_n) to the next correlation functions W_{n+1}(x_1, ..., x_n, x_{n+1}). A similar relation holds for the U_n's.
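The mechanism behind loop insertion can be sketched on a toy one-dimensional model: perturbing the potential V → V + ε v changes ⟨f⟩ at first order by −⟨f v⟩_c, which is how differentiating with respect to V_1 turns an n-point correlator into an (n+1)-point one. A quadrature check (the potential and the evaluation points x_0 = 10, x_0′ = 12 are illustrative choices, not from the paper):

```python
import numpy as np

# Perturbing the potential V -> V + eps*v changes <f> at first order by
# -<f v>_c = -(<f v> - <f><v>): differentiating with respect to the potential
# generates the connected 2-point correlator.  Finite-difference check on a
# 1D toy integral; f and v play the role of resolvents evaluated outside
# the integration range.

x = np.linspace(-8.0, 8.0, 100001)
V = x**2 / 2 + x**4 / 8
f = 1.0 / (10.0 - x)    # "resolvent" at x0 = 10
v = 1.0 / (12.0 - x)    # insertion at x0' = 12

def mean(g, pot):
    w = np.exp(-pot)
    return (g * w).sum() / w.sum()

eps = 1e-4
deriv = (mean(f, V + eps * v) - mean(f, V - eps * v)) / (2 * eps)
connected = mean(f * v, V) - mean(f, V) * mean(v, V)
print(deriv, -connected)
```

The central finite difference matches minus the connected correlator, which is the content of the loop-insertion property.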

Topological expansion(s)
One sees that the loop equation (3.28) involves U_0 and U_1, and similarly (3.39) involves U_n and U_{n+1}. This is why loop equations can only be solved perturbatively, order by order in some small parameter, in a regime where the U_{n+1} term is subleading compared to the U_n term.
We thus need to expand all our observables in powers of some parameter, and this expansion is usually called topological expansion. Most often for hermitian matrices, the small parameter is chosen to be 1/N where N is the size of the matrices, i.e. the topological expansion is an asymptotic expansion for large random matrices.
Such a power series expansion does not always exist for convergent matrix models, but by definition it does for formal matrix models, which is our context here (we do not concern ourselves with convergence here). For the subtle differences between convergent and formal matrix models in the hermitian case, we invite the interested reader to refer to [11,39].
We thus need a small parameter in which to compute an expansion. For the hermitian case, the good choice is an expansion in powers of 1/N in the large N limit. However, for β ≠ 1, we can already see on the loop equations that the most interesting regime (in which loop equations are most conveniently solved) is a regime where ℏ = O(1), and so is not simply N → ∞. In fact, depending on which application of random matrices one is interested in, several limit regimes can be interesting. Fortunately, all of them are related to one another, and knowing one expansion allows one to recover the others. Let us present 3 main regimes. Before doing so, let us introduce some usual notations:
g_s = T/(N √β),  ℏ = (T/N)(1 − 1/β) = g_s (√β − 1/√β).
Those notations are the ones used in different applications of matrix models, in particular string theory [10,28,29] and in relation to the now famous AGT conjecture [2].
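As a sanity check of these notations (with the definitions g_s = T/(N√β) and ℏ = (T/N)(1 − 1/β) as written here), the identity ℏ = g_s(√β − 1/√β) and the vanishing of ℏ at the hermitian point β = 1 can be verified directly:

```python
import numpy as np

# Consistency of the notations: with
#   g_s = T/(N*sqrt(beta))  and  hbar = (T/N)*(1 - 1/beta),
# one has hbar = g_s*(sqrt(beta) - 1/sqrt(beta)), and hbar vanishes at the
# hermitian point beta = 1.  (T and N are arbitrary illustrative values.)

T, N = 2.0, 7
for beta in [0.5, 1.0, 2.0, 10.0]:
    g_s = T / (N * np.sqrt(beta))
    hbar_a = (T / N) * (1 - 1 / beta)
    hbar_b = g_s * (np.sqrt(beta) - 1 / np.sqrt(beta))
    print(beta, hbar_a, hbar_b)
```

This also makes explicit why the hermitian expansion in 1/N is recovered at β = 1, where ℏ = 0 and g_s = T/N.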

Topological regime
This is the regime where we expand in powers of g_s at fixed ℏ. This regime is the interesting one for applications to geometry and topological strings. In this regime, we wish to expand:
ln Z = Σ_g g_s^{2g−2} F_g(ℏ),  F_g(ℏ) = Σ_{k=0}^∞ ℏ^k F_{g,k}.
Each coefficient F_{g,k} has a geometric interpretation. Notice that F_{g,k} depends neither on N nor on β (but of course it depends on the potentials V_1, V_2 and on T). In topological strings, F_{g,k} is the generating function counting equivariant Gromov-Witten numbers [12].

Large N regime
This is an expansion in powers of N, at fixed β. This is the regime of interest for large random matrices. Most often, one is interested in specific values of β, like β = 1/2 or β = 2. We shall write the coefficients of this expansion as F̂_g(β). This expansion is related to the topological expansion: for example, F̂_0 can be expressed in terms of the F_{g,k}, and more generally the relation is given by (4.4).

WKB large β regime
This is an expansion in powers of β at fixed N. This is the regime we shall consider below. It can be computed with standard WKB techniques, but we shall analyze it here with loop equations. We expand in powers of g_s = T/(N√β) at fixed N. This regime can also be related to the topological regime.

Notations for the expansion
In this article, we want to study perturbatively the WKB regime where β → ∞ and N is fixed, and we write g_s = T/(N√β). We thus expand all our correlation functions as series in g_s. Note that the functions W_n^{(g)} still implicitly depend on N. Eventually, we define the numbers f_g by the expansion of the logarithm of the partition function itself in powers of g_s. Each f_g(N) is also implicitly a function of T, V_1 and V_2, and we recall that f_g(N) is related to F_g(ℏ), which gives the usual topological expansion.
As we will see in the next sections, most of the formulas get nicer if we shift the first correlation function by the derivative of the potentials. Therefore, for convenience, we introduce the shifted functions accordingly. It is also useful to redefine a shifted version of the polynomial P^{(0)}(x, y), which is a polynomial in both x and y (and it depends on T, N, V_1, V_2 but not on β).

Spectral curve and loop equations as an ODE
In this section, we shall see that the loop equation satisfied by the leading order W^{(0)} can be written as an ODE, E(x, ŷ)·ψ(x) = 0, where E(x, y) is, up to some shifts, the polynomial introduced above in (4.12). This ODE is called the "spectral curve" and plays a central role.

Loop equations in the g_s expansion
The leading order at large β and fixed N of the loop equation (3.28) (using the g_s-expansion of W_1) gives an equation for W_1^{(0)}, which is equivalent to (see (4.12)): This is the master loop equation, also called the "spectral curve". It generalizes the hermitian 2-matrix model's spectral curve [13,41,54,56,65].
If we expand in powers of y, equation (5.2) reads: Remark: equation (5.5) with k = d_2 + 1 gives us that: In the same way, since P is only a polynomial in y of degree d_2 − 1, we also have:

The linear differential system
In order to find the lower coefficients U^{(0)}_k(x), we introduce the function ψ(x) defined, up to a multiplicative constant, as the exponential of a primitive of W_1^{(0)} (suitably normalized), which is equivalent to saying that the logarithmic derivative of ψ reproduces W_1^{(0)}. By definition, the function ψ(x) is only determined up to a global multiplicative constant, which can be fixed by specifying the lower bound of the integral in the last formula.
The first loop equation (5.5) can now be rewritten as a linear differential equation (ODE), and using ψ_{d_2}(x) = −t_{d_2} ψ(x), this leads to a set of linear equations that can be written in matrix form (equivalent to (5.10)). Therefore, we have obtained a system of first order linear ordinary differential equations given by a companion-like matrix C(x). The determinant of the matrix C(x) is given by the standard formula for the determinant of a companion matrix; that is to say, the characteristic polynomial of the matrix C(x) is exactly our function E(x, y).
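The fact that a companion matrix has a prescribed characteristic polynomial is easy to check directly; the following minimal sketch uses a plain numeric companion matrix (the coefficients are illustrative, and the actual C(x) of the text additionally depends on x through the V_2 coefficients):

```python
import numpy as np

# For p(y) = y^3 + c2*y^2 + c1*y + c0, the companion matrix below satisfies
# det(y*Id - C) = p(y); this is the structure that makes the spectral curve
# E(x, y) appear as the characteristic polynomial of C(x).

c0, c1, c2 = 2.0, -3.0, 1.0
C = np.array([[0.0, 0.0, -c0],
              [1.0, 0.0, -c1],
              [0.0, 1.0, -c2]])

char = np.poly(C)   # characteristic polynomial coefficients, leading first
print(char)         # equals [1, c2, c1, c0] up to rounding
```

So knowing the polynomial coefficients is the same data as knowing the companion matrix, which is why the ODE system and the spectral curve carry the same information.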
There is no known notion of integrable system associated with the arbitrary-β two-matrix model, but the fact that the spectral curve turns out to be a characteristic polynomial (also called a spectral determinant) is very suggestive from the point of view of integrability [4,7,8]; this could be a promising route, and needs to be further investigated.

ODE satisfied by ψ(x)
The matrix equation (5.11) can be used to find a linear ODE satisfied by the function ψ(x). Indeed, we can rewrite it in the following way: Multiplying line k on the left by (V_1′(x) − ∂_x)^{d_2−k} and summing the whole set of equations gives: which can be rewritten as: Since ŷ does not commute with x, we must specify the position of the ŷ's compared to x: in the last formula, it is implicitly assumed that powers of V_1′ − ŷ are always to the left of all powers of x. Eventually, we find that ψ(x) must satisfy a linear ODE of order d_2 + 1, with polynomial coefficients, given by the quantized spectral curve E(x, ŷ). This generalizes the Schrödinger equation of the arbitrary-β one-matrix model [46].

Quantum spectral curve
The spectral curve E(x, Y(x)) = 0 of the hermitian case gets replaced by the differential equation E(x, ŷ)·ψ(x) = 0. This can be interpreted as having now a quantum spectral curve.

Non-uniqueness of the solutions of the loop equations
In order to identify a solution of the loop equations, it remains to compute the polynomial coefficients E_k(x) of this ODE, and to choose a solution ψ. Then, as soon as ψ(x) is chosen, we can find ψ_{d_2}(x) by using ψ_{d_2}(x) = −t_{d_2} ψ(x). The other functions ψ_k(x) follow by a descending recursion with the help of (5.10). Eventually, the connection between ψ_k(x) and U^{(0)}_k(x) is given by (5.8), so that the polynomial U^{(0)}(x, y) is known.
One may think that the knowledge of the loop equations would be enough to determine all the correlation functions by solving them properly. But we remind the reader here that in general the loop equations (even in the hermitian case) have infinitely many solutions.
Even if (5.11) gives us some linear differential equations with polynomial coefficients, the main problem, also present in the hermitian case, is that the function E(x, y) is not known at the moment. Indeed, in the definition of E(x, y), we face the polynomial P^{(0)}_0(x, y) whose coefficients are unknown. The coefficients of the polynomial P^{(0)}_0(x, y) are not given by the loop equations; they have to be determined by some other information which we have not used so far.
Therefore, in order to pick the proper solution of the loop equations, one needs to understand the missing information in the loop equations. When dealing with convergent matrix models, the answer is quite simple: the missing information is the choice of the contour of integration of the model. Indeed, we have assumed here that the eigenvalues of the matrices were real, but one could take a different one-dimensional contour and get the same loop equations as the ones we have derived here (as long as there are no hard edges, i.e. no boundary terms in the integration by parts). Nevertheless, different integration paths would lead to different correlation functions and thus to different solutions of the loop equations. In the case of formal matrix models, the path of integration and the associated convergence are meaningless, and other considerations have to fix the choice of the solution of the loop equations (in the hermitian case it is the notion of "filling fractions"). In this article we will restrict ourselves (specifically at equation (6.5)) to a specific kind of solution of the loop equations, because it corresponds to the least technical case.
Let us just mention that, in the hermitian case, the homology space of possible integration paths has exactly the same dimension as the number of unknown coefficients of the polynomial P^{(0)}_0(x, y), and basically, every choice of P^{(0)}_0(x, y) is acceptable and corresponds to a given integration path for the eigenvalues. However, finding the integration path from the knowledge of P^{(0)}_0(x, y) is not so easy in general. In the non-hermitian case β ≠ 1, it is not even clear what the homology space of possible integration paths is, and a good understanding of the missing information, of its connection to the initial model, and of the generalization of the notion of "filling fractions" to our case is still missing and is postponed to a future article.

The Bethe ansatz

WKB approximation
Here, we work at large β and fixed N (working at large β or at small g_s when N is fixed is completely equivalent, since they are related by g_s = T/(N√β)), so that our integral can be computed by a standard saddle point approximation. In particular, we find that W_1^{(0)}(x) is a rational function with N poles x̄_i, which are the saddle points for X; therefore ψ(x) must be a polynomial of degree N. This is the case we will develop below, and it corresponds to the generalization of the solution presented in [46] in the case of a Schrödinger equation.
However, we would like to stress again that this is not the only possible regime. Indeed, we could also try to study directly the "topological regime" where N → ∞, T is fixed, and β → 0 in such a way that ℏ = (T/N)(1 − 1/β) remains finite. This regime is more interesting for applications to string theory and the AGT conjecture [1,2,10,28,29]. In this case, ψ(x) would not be a polynomial; instead, the coefficients of E(x, y), i.e. the choice of W_1^{(0)}(x), would be dictated by asymptotic behaviors in accordance with the regime studied. This is more challenging; we have done it for the 1-matrix model case [18,19], and we plan to do it for multi-matrix models in the near future.
Therefore, in the rest of the article, we restrict ourselves to the "Dirac" case (the terminology "Dirac" is used here to recall that the density of eigenvalues is a discrete measure, i.e. a sum of δ-functions, i.e. a "Dirac comb"), where N is fixed and β → ∞. In particular, it corresponds to assuming that ψ(x) is a polynomial (whose degree is the number N of eigenvalues).

Introduction of the ansatz in the equations
So, let us look for a solution where ψ(x) is polynomial in x. Then it is clear from the recursion relation (5.10) that all the ψ_k(x)'s are also polynomials. We will use the following notation: ψ(x) = ∏_{i=1}^N (x − s_i). Here we have simply assumed that ψ(x) is a monic polynomial of degree N (remember that ψ(x) is only determined up to a multiplicative constant) and labeled (s_i)_{i=1,...,N} its complex zeros. Note that this ansatz is very restrictive, since usually a linear ODE of the kind (5.15) does not admit polynomial solutions if the coefficients E_k(x) are generic. Then, from the definition of U_0(x, y), it is also clear that the U^{(0)}_{0,k}(x) have simple poles at the s_i's, with coefficients u_{k,i} that are at the moment unknown.

Computation of the s_i's and of the u_{k,i}'s
Putting this ansatz back into the recursion relations (5.10) for the U^{(0)}_{0,k}(x), and identifying the coefficient of 1/(x − s_i), we get a relation for all k and i (note that the double poles cancel). In order to solve this recursion, we introduce some N × N matrices and define the vectors u_k = (u_{k,1}, ..., u_{k,N})^t. Then the previous set of equations turns into a matrix recursion. Remember again that we have some extra knowledge for k = d_2 and k = d_2 − 1 (cf. (5.6) and (5.7)). Starting from u_{d_2−1} = (T/N) t_{d_2} e and using the recursion relation, all the previous relations can be summarized in the matrix form (6.13), or in components. Eventually, using the fact that −B u_0 = (T/N)(t_0 − S) e, and the definition of B, we obtain a system of equations determining the roots (s_i)_{1≤i≤N} (6.17). The last system of equations (6.17) gives us a complete set of equations to compute all the s_i's. In fact, this corresponds to a Bethe ansatz [5], like the one found in [46] for the arbitrary-β one-matrix model (where the authors obtained a similar system in the same context).
Remark: (6.17) is a set of algebraic equations which determine the s_i's, and the solution is in general not unique. The choice of a solution is also a choice of a saddle point for computing our eigenvalue integral at large β, and thus it is related to a choice of integration path for the eigenvalue integral. The number of solutions of (6.17) coincides with the dimension of the homology space of all possible integration paths.
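For comparison, in the one-matrix analogue [46], a Gaussian potential reduces the Bethe-like system to the classical Stieltjes relations, which are solved by the zeros of Hermite polynomials. A numerical check (the normalization V(x) = x², with the expansion constant set to 1, is an illustrative choice, not the paper's):

```python
import numpy as np
from numpy.polynomial.hermite import hermroots

# For V(x) = x^2 (physicists' normalization, expansion constant set to 1),
# the Bethe-like saddle equations for the zeros s_i of psi reduce to the
# Stieltjes relations   sum_{j != i} 1/(s_i - s_j) = s_i,
# solved by the zeros of the Hermite polynomial H_N.  Check for N = 6:

N = 6
s = hermroots([0] * N + [1])        # the 6 zeros of H_6
lhs = np.array([sum(1.0 / (s[i] - s[j]) for j in range(N) if j != i)
                for i in range(N)])
print(np.allclose(lhs, s))
```

The system (6.17) plays the same role here, with the matrix B encoding the coupling to the second matrix.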

Rewriting
With the previous results, we have that: Moreover we have: so that: Hence, in the context of the Bethe ansatz, we can express easily the function U_0(x, y) in terms of S and B. We start with equation (5.2), which gives: We can compute the l.h.s. with the help of the definition (6.7): Note that we have used the fact that u_{d_2,i} = 0. With the help of the recursion relation for the u_{k,i}'s (6.8), we find: Observe now that we have the identity: Moreover, we also have (using (6.19)): so that we are left with: Eventually, we can summarize the results in the following way. With (6.17) we can compute all the s_i's, and then using the relations (6.13) we can get every u_{k,i} (0 ≤ k ≤ d_2 and 1 ≤ i ≤ N). Hence we have found an algorithm to compute explicitly all the U^{(0)}_k(x)'s, or equivalently every ψ_k(x) (and ψ(x)); that is to say, we have the solution of the loop equations at dominant order in the studied limit. In particular, assuming that ψ(x) is polynomial in x is sufficient to determine it completely, which indicates that the structure of the loop equations is very rigid and hides an integrable structure.

Yang-Yang variational approach to the Bethe ansatz
The Bethe ansatz equation (6.17) can also be obtained from a variational approach with a Yang-Yang action. Consider the following functional, where S and S̃ are diagonal matrices of size N × N, A is an N × N invertible matrix, u is an N-dimensional vector, and remember that e^t = (1, ..., 1). We now look for the extrema of S. First, taking the derivative with respect to u gives: Performing the derivative with respect to S = diag(s_1, ..., s_N) yields: Performing the derivative with respect to S̃ = diag(s̃_1, ..., s̃_N) is similar: Performing the derivative with respect to A = (A_{i,j})_{i,j} is more difficult; consider an infinitesimal variation A → A + δA, which yields: thus, since A is invertible and A e = e: But since S and S̃ are diagonal, we have [ASA^{−1}, S]_{i,i} = 0, and the diagonal part of (6.33) gives ∀i: 1 = (e u^t)_{i,i}, i.e. we have ∀i: u_i = 1, that is to say u = e. Then a non-diagonal element of the relation (6.33) gives: Combining (6.30) and (6.34) shows that at the extremum we must have ASA^{−1} = B. Then, we can rewrite (6.32) as: Therefore, the extremum equation δS/δA = 0 also gives: Again, since S and S̃ are diagonal, we have [S, A^{−1}SA]_{i,i} = 0, which implies that ∀i: (e u^t A)_{i,i} = 1. Since at the extremum u = e, the last equation gives us that at the extremum: Let us now introduce the matrix B̃ = A^{−1}SA. Then, taking a non-diagonal element of (6.36), we find: We can now rewrite (6.30) as: Therefore, if we combine all the previous results, we find that an extremum of S must satisfy the following identities:
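The least familiar step above is the derivative with respect to the matrix A. As an illustration of the matrix calculus involved (the trace term below is an assumed stand-in, not the actual Yang-Yang action), one can verify the variation formula for a term of the form Tr(S̃ A S A⁻¹) by finite differences:

```python
import numpy as np

# Matrix-derivative check: for f(A) = Tr(St @ A @ S @ A^{-1}), using
# d(A^{-1}) = -A^{-1} dA A^{-1}, one gets
#   df = Tr( dA @ [ S A^{-1} St - A^{-1} St A S A^{-1} ] ).
# (St, S diagonal; the trace term is an illustrative stand-in.)

rng = np.random.default_rng(1)
n = 4
S = np.diag(rng.normal(size=n))
St = np.diag(rng.normal(size=n))
A = rng.normal(size=(n, n)) + np.eye(n) * n   # well-conditioned, invertible

def f(A):
    return np.trace(St @ A @ S @ np.linalg.inv(A))

Ainv = np.linalg.inv(A)
grad = S @ Ainv @ St - Ainv @ St @ A @ S @ Ainv   # df = Tr(dA @ grad)

dA = rng.normal(size=(n, n))
eps = 1e-6
fd = (f(A + eps * dA) - f(A - eps * dA)) / (2 * eps)
exact = np.trace(dA @ grad)
print(fd, exact)
```

The same calculus, applied term by term to the actual action, produces the relations (6.33) and (6.36) quoted above.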

Computation of W_2(x, x′) via the variational approach
The correlation functions are obtained by successive applications of d/dV_1, and since we know W_1 we can deduce W_2 and W_3. We will see later (section 7) another way of getting formulas for these functions, which can be extended by recursion but has the main disadvantage that the symmetry of the functions is not obvious at all.
Applying d/dV_1(x′) we get:
It remains to compute the quantities ∂s_i/∂V_1(x′). For that, we use the fact that the s_i's are extrema of the functional S(s, s̃, A, u).
We introduce R = (s, s̃, A, u) as a "global" variable to avoid treating each case separately. It is a vector of dimension N + N + N² + N = 3N + N². The functional S(R) is thus a functional of R whose extremum R_extr gives the solution of our model. We then have, for every variation δ and every component i: The first equality comes from the fact that ∂S/∂R_i (R = R_extr) = 0, since we place ourselves at the extremum. Let us introduce the Hessian of the functional at the extremum point: It is a (3N + N²) × (3N + N²) matrix whose shape is given in appendix A. The last identity gives us: Let us now specialize this formula to the case where the variation δ is ∂/∂V_1(x). Since the potential V_1 only appears in the term Tr V_1(S) in the functional S, this gives: Since it only depends on S (and not on S̃, A or u), we see that in the last sum of (6.43), j only runs from 1 to N, corresponding to the variables s_j (i.e. ∀ j > N: ∂²S/∂R_j ∂V_1(x) = 0). Therefore we find (taking 1 ≤ i ≤ N): Note that this last result is only a small fraction of the results contained in the formula, depending on the choice of the variation δ and of the component i. With the help of formula (6.45), it is straightforward to compute W_2(x, x′): In particular, in this formalism it is clear that W_2(x, x′) is a symmetric function, since H and its inverse are symmetric matrices.
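The mechanism used here, differentiating the extremum R_extr with respect to a parameter via the inverse Hessian, is the implicit function theorem: δR_extr = −H^{-1} ∂²S/∂R∂(parameter). A minimal toy check (not the paper's Yang-Yang functional; the function S below is invented for illustration):

```python
import numpy as np

# Toy functional S(R; t) = exp(R0) + exp(R1) - t*(R0 + 2*R1), NOT the paper's S.
# Its extremum is R*(t) = (ln t, ln 2t), with Hessian H = diag(t, 2t) there.
def extremum(t):
    return np.array([np.log(t), np.log(2.0 * t)])

t0 = 0.7
H = np.diag([t0, 2.0 * t0])              # Hessian of S at R*(t0)
mixed = np.array([-1.0, -2.0])           # mixed derivative d^2 S / dR_i dt
dR_formula = -np.linalg.solve(H, mixed)  # implicit-function-theorem formula
eps = 1e-6                               # finite-difference cross-check
dR_fd = (extremum(t0 + eps) - extremum(t0 - eps)) / (2.0 * eps)
```

Here both components of dR*/dt equal 1/t0, and the finite-difference derivative of the explicit extremum agrees with the −H^{-1}(mixed derivative) formula, exactly the structure of (6.43)-(6.45).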

Computation of W_3(x_1, x_2, x_3) via the variational approach
We use the same method, starting from W_2(x, y). For any variation δ and every component k, we have: We choose δ = d/dV_1(x″), and thus Then we easily find: where the C_{i,j,k}'s are linked with the third derivative of the functional S: It follows that W_3(x_1, x_2, x_3) is a symmetric function of its variables.

The topological recursion
In the previous sections we have seen how to compute the U_n's. Thus, for now, we have all desired quantities at dominant order (i.e. g_s = 0), but we still miss the subleading corrections g ≥ 1. The purpose of this section is to present a recursive algorithm giving all the subleading corrections.
The algorithm is very similar to the "topological recursion" considered in [40,45,48], and is in fact more comparable to another version of the topological recursion for the hermitian 2-matrix model, described in [49].

Higher loop equations
Evaluating the coefficients of g_s^{2g} in (3.28) gives the higher loop equations for g ≥ 1: We can also project the higher loop equations (3.39) onto g_s^{2g} (with n ≥ 1) to find (writing ξ = {ξ_1, . . . , ξ_n} for short): We shall also write these equations for each power y^k.

Matrix form of the loop equations for g ≥ 1
The goal of this subsection is to rewrite the loop equations (7.1) (0 ≤ k ≤ d_2 + 1) in a matrix form suitable for our algorithm. First, the cases k = d_2 + 1 and k = d_2 give: Now let us introduce the following d_2 × d_2 matrix: Then we can rewrite the loop equations (7.5) in the matrix form: In the same way, we can rewrite the projection of the higher loop equations (7.2) onto y^k as: Evaluating the coefficients at k = d_2 + 1 and k = d_2 leads to the linear system:
Observe now that the r.h.s. of the last two systems (7.8) and (7.11) is composed of lower orders in g and of an unknown polynomial vector P^{(g)}_{n,k}. The l.h.s. is always the same, at every order in g and for every n. Therefore, if we can find a way to get rid of the unknown polynomials P^{(g)}_{n,k} when inverting the system, we can set up a recursion and obtain all the corrections recursively. We will do so in the following sections by introducing a suitable kernel K(x_0, x) and using residue methods (which automatically eliminate the polynomials).
Inversion of the linear system: kernels K(x_0, x) and G(x_0, x)
Suppose that we can find matrices K(x_0, x) and G(x_0, x) of size d_2 × d_2 such that:
• they satisfy the relation:
• the function G(x_0, x) has the form: where the A_j(x_0)'s are d_2 × d_2 matrices;
• the function x → K(x_0, x) is analytic at every x = s_i.
If we can find such matrices, then we claim that we can invert any system of the form (7.8), where v(x) is assumed to be known up to a polynomial and u(x) is assumed to have poles only at the s_i's and to behave like O(1/x) when x → ∞. Indeed, under these assumptions we have: Note that in the last equality the polynomial part of v(x) does not contribute; only the terms in v(x) which have poles at the s_i's contribute. Hence we have successfully inverted our system:
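The key point, that a residue at s_i of (analytic kernel) × (polynomial) vanishes while the pole parts of v(x) survive, can be checked symbolically on a scalar toy kernel (1/(x_0 − x) here is only a stand-in, not the paper's matrix K):

```python
import sympy as sp

x, x0, s = sp.symbols('x x0 s')
K = 1 / (x0 - x)            # stand-in scalar kernel, analytic at x = s
poly = 3 * x**2 + 1         # polynomial ambiguity in v(x)
pole = 1 / (x - s)**2       # pole part of v(x) at x = s

res_poly = sp.residue(K * poly, x, s)   # polynomial part: residue vanishes
res_pole = sp.residue(K * pole, x, s)   # pole part survives: 1/(x0 - s)^2
```

This is exactly why the residue formulation eliminates the unknown polynomials P^{(g)}_{n,k}: they are analytic at the s_i's, so they drop out of every residue.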

The topological recursion
Therefore, with the help of the results of the last section, we can compute all correlation functions by the recursion (7.16), or, more generally for higher order correlation functions:

Determination of the kernel K(x 0 , x)
The determination of the kernels K(x_0, x) and G(x_0, x) is presented in appendix B, where we give in detail recursive formulas for the various K(x_0, s_i) and the derivatives K′(x_0, s_i), . . . , K^{(n)}(x_0, s_i). The results are the following. Define the following vectors (of size d_2): Then the matrices A_i(x_0) defining G(x_0, x) are connected to the K(x_0, s_i) by: The matrices K(x_0, s_i) are determined as the solution of the following system: where the 1/(x_0 − s_i) terms in f_q are in position Nq + i (note that f_{d_2−1} does not contain these terms),
and where the matrices B_k, Id and 0 are symmetric matrices of size N × N given by: where θ(p > q) is the Heaviside function, equal to 1 if p > q and 0 otherwise. When q = d_2 − 1: Eventually, the higher derivatives in x of K(x_0, x) evaluated at x = s_i are obtained recursively with the formulas (B.30), (B.32), (B.33), (B.39) and (B.40) derived in detail in the appendix. Note also that, to complete these formulas, we need to prove that the system of equations is consistent; this relies on a special identity proved in appendix C. As a conclusion, we have obtained a recursion computing all correlation functions U^{(g)}_n(x, y; x). This is a generalization of the topological recursion [40,48] to the arbitrary-β two-matrix model studied here. Moreover, when d_2 = 2, one recovers the arbitrary-β one-matrix model developed in [46].

Leading free energy
From the saddle-point approximation we clearly have: where the Yang-Yang action S is evaluated at its extremum. The link (4.8) between the WKB expansion and the standard topological expansion then gives:

Subleading g = 1
At first subleading order, our topological recursion gives (7.31), and we recall that the quantity we are after is the (d_2 − 1)-th component of the vector U^{(1)}_0. In order to perform the computation, we first need U^{(0)}_1(x; x), which is also given by the topological recursion, (7.32), i.e.
and we need it at x_0 = ξ, i.e.
Notice that K(x, s_i) may have double poles at x = s_j. From the WKB approximation, we know that we must have where A is the action (see (6.2)), which is approximated by the Yang-Yang action S in the limit β → ∞, and H is the Hessian of the Yang-Yang action S; this leads us to conjecture that We find: where we could explicitly verify the absence of a simple-pole term. The coefficients B_j are given by: The technical details of this computation (and the definitions of the quantities involved in the last formula) can be found in appendix E, together with other possible expressions for B_j, see (E.68).
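The conjecture relating the g = 1 correction to −(1/2) ln det H is the usual Gaussian-fluctuation term of a saddle-point expansion. A one-dimensional numerical toy check (the quartic action below is invented for illustration, it is not the paper's A):

```python
import numpy as np
from scipy.integrate import quad

# For Z = int exp(-A(x)/h) dx, Laplace's method gives
#   ln Z = -A(x*)/h + (1/2) ln(2*pi*h) - (1/2) ln A''(x*) + O(h);
# the -(1/2) ln A''(x*) piece is the scalar analogue of -(1/2) ln det H.
def A(x):
    return 0.5 * x**2 + 0.25 * x**4      # saddle at x* = 0, A''(0) = 1

h = 1e-3
Z, _ = quad(lambda x: np.exp(-A(x) / h), -1.0, 1.0, points=[0.0])
lnZ_num = np.log(Z)
lnZ_wkb = -A(0.0) / h + 0.5 * np.log(2.0 * np.pi * h) - 0.5 * np.log(1.0)
```

Without the Gaussian-determinant term the approximation misses ln Z by O(ln h); with it, the error is only O(h), consistent with the conjectured form of the first subleading correction.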

Example
As an example, we present explicit computations of the method in the case N = 1 in appendix D. In this case we are able to determine W^{(1)}_1(x) explicitly and to compute the corresponding F_1; we also show that it satisfies the equation:

WKB computation
The WKB approximation consists in expanding the integrand in the vicinity of a saddle point. To leading order one gets a Gaussian integral, and the corrections are moments of that Gaussian integral, computed by Wick's theorem. In principle, one can write: and expand: where the Hessian A″ is the quadratic form of second derivatives of lim_{g_s→0} A, and δA can be computed from its Taylor expansion in g_s. We thus get and (7.45), and in principle a standard WKB computation using Wick's theorem should allow one to find f_2, f_3, . . . explicitly. Then W^{(g)}_n can be recovered by applying d/dV_1 repeatedly. However, this is long and tedious, and difficult to do in a systematic way. One of the difficulties is that one has to compute the large-β expansion of I_β(X, Y), which is not very well known at present.
Our loop equation method is thus an alternative to WKB: it can be performed systematically and gives all orders of the g_s expansion. Notice also that Wick's theorem gives an expansion of Z in powers of g_s, whereas our method directly gives the expansion of ln Z.
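The Wick-theorem moments mentioned above are simple pairing counts; the Z-versus-ln Z remark is the usual moments-versus-cumulants distinction. A minimal sketch for a unit-variance Gaussian (purely illustrative, independent of the matrix-model setup):

```python
from math import factorial

# Wick's theorem for a unit Gaussian: <x^n> equals the number of pairings,
# (n-1)!! = n! / (2^(n/2) (n/2)!), each pairing contributing sigma^2 = 1.
# Moments are what an expansion of Z produces; expanding ln Z keeps only
# connected contributions (cumulants): e.g. <x^4> = 3 but <x^4>_c = 0.
def wick_moment(n):
    if n % 2:
        return 0
    return factorial(n) // (2 ** (n // 2) * factorial(n // 2))
```

For instance wick_moment gives 1, 3, 15 for n = 2, 4, 6, the double-factorial pairing counts that drive the WKB corrections f_2, f_3, . . .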

Conclusion
We have presented the loop equations of the arbitrary-β two-matrix model and a recursive way to solve them for a specific polynomial Bethe ansatz. This work generalizes the arbitrary-β one-matrix model with the same kind of ansatz developed in [46]. It is also part of the "quantum algebraic geometry" project started for the one-matrix model in [18,19,46]. In particular, we have shown that the arbitrary-β two-matrix model gives rise to a "quantum curve" (5.15) of arbitrary degree. For now we have only solved this model under the assumption of a polynomial ansatz for the solution, corresponding to a specific limit of the model (N and T fixed while g_s → 0, i.e. β → ∞). However, we expect the notions developed in [18,19] to generalize to the arbitrary-β two-matrix model, as has been the case here. Work on generic solutions of the "quantum curve" (5.15) is currently in progress. One may then wonder whether the integrability properties and the spectral invariants [8,20,48] known in the hermitian case can be generalized to arbitrary β.
A Hessian matrix of the Yang-Yang function
Using the notation of the paragraph on the variational approach and using (6.30), we have, ∀ 1 ≤ i, r, s ≤ N: where we have used: We can do the same kind of computation starting from (6.31): The computation with respect to u is easy: Eventually, the most technical one is the double derivative with respect to A: The shape of the matrix is then: where the matrix G is given by: and the matrix G̃ is its dual: The matrix I_{N²×N} is an N² × N matrix with N entries −T/N in each column and zeros elsewhere.
If we sort the A_{r,s}'s in the following order: (A_{1,1}, . . . , A_{1,N}, A_{2,1}, . . . , A_{2,N}, . . . , A_{N,N}), then the matrix looks like: The other matrices X, Y and Z follow from the previous computations, but since they do not have a compact or enlightening form we do not write them here.
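The only structural property of H used in the main text is its symmetry (which made W_2 manifestly symmetric). A generic numerical sketch, with a toy functional standing in for the Yang-Yang S (the function S_toy below is invented for illustration):

```python
import numpy as np

# Central-difference Hessian of a scalar functional; for any smooth S the
# result is symmetric, which is the structural fact used for H above.
def num_hessian(f, R, eps=1e-5):
    n = len(R)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            Rpp = R.copy(); Rpp[i] += eps; Rpp[j] += eps
            Rpm = R.copy(); Rpm[i] += eps; Rpm[j] -= eps
            Rmp = R.copy(); Rmp[i] -= eps; Rmp[j] += eps
            Rmm = R.copy(); Rmm[i] -= eps; Rmm[j] -= eps
            H[i, j] = (f(Rpp) - f(Rpm) - f(Rmp) + f(Rmm)) / (4.0 * eps**2)
    return H

# toy stand-in for the Yang-Yang functional (NOT the paper's S)
S_toy = lambda R: np.exp(R[0]) * R[1] + R[1]**3 - 2.0 * R[0] * R[2]
H = num_hessian(S_toy, np.array([0.1, 0.2, -0.3]))
```

Such a finite-difference Hessian is also a convenient numerical cross-check of block-by-block analytic computations like the one in this appendix.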
B Finding kernels K(x 0 , x) and G(x 0 , x)

B.1 Decomposition of the matrix D t (x)
Before presenting the way to find the kernels K(x_0, x) and G(x_0, x), we need some notation. First, since D^t(x) is going to play an important role, we decompose it in the following way: where we have defined the vectors and matrices of dimension d_2: Note that all the matrices defined here are already known, since they only involve leading-order quantities determined earlier. In particular we have: and: where the coefficients u_{k,i} are known and satisfy (6.13). In the following we will also need a series expansion of U around x = s_i. It is therefore natural to introduce the vectors: which correspond to the first terms of the expansion: Remember that u_{d_2−1,i} = T/(N t_{d_2}), so that we have the following important relations:

B.2 Solving the system: finding G(x_0, x) and K(x_0, x)
In order to apply our recursion scheme we do not need to compute K(x_0, x) explicitly, but only to be able to determine K(x_0, s_i), K′(x_0, s_i), and so on (where a prime denotes a derivative with respect to the second variable: K′(x_0, x) = ∂_x K(x_0, x)). Indeed, if we know these quantities, we can compute all the residues we need in our scheme and invert the differential system. This is consistent with the fact that K(x_0, x) is not uniquely defined, being a solution of a differential equation. We will first determine the kernel G(x_0, x) by computing the matrices A_i(x_0) introduced in (7.13). Note that, in order to have simpler expressions, we will label the components of the matrices K(x_0, x) and G(x_0, x) from 0 to d_2 − 1 instead of the standard 1 to d_2.
The strategy is the following: we identify the coefficients of order (x − s_i)^k in equation (7.12).
A straightforward computation gives: Multiplying on the left by w_i^t and using (B.8), we find: so that: Therefore, the knowledge of K(x_0, s_i) is completely equivalent to the knowledge of A_i(x_0). In terms of components, only the last line is special: We need to go to the next order to determine K(x_0, s_i), or equivalently A_i(x_0): Looking at the constant coefficient of the expansion gives: Terms in (x − s_i)^1: We see that the first equation determines the first lines of K(x_0, s_i) as soon as we know the last line of K(x_0, s_i). In the last equation, we can get rid of the second derivatives by multiplying on the left by r w_i^t and using w_i^t r = T/N. This gives: The last equation is only interesting on the last line, because all other lines trivially give zero (since we multiply on the left by r). Now we note the following identity for the component (d_2 − 1, q): where we have used the relation (6.8) for the u_{k,i}'s. Therefore, we can simplify (B.16) considerably: This time, all the derivatives K′(x_0, s_i) have disappeared. Observe that we can replace A_j(x_0) by the K(x_0, s_i)'s using (B.9).
Taking the component (d_2 − 1, q) of the last equation, we have: Remember that from (B.14) we have a recursion for the other lines, ∀ p < d_2 − 1: Therefore, we can group (B.20) and (B.21) into a matrix form. Let us introduce the following vectors: where the 1/(x_0 − s_i) terms are in position Nq + i (note that f_{d_2−1} does not contain these terms),
and where the matrices B_k, Id and 0 are symmetric matrices of size N × N given by: Then we can rewrite equations (B.20) and (B.21) in the compact matrix form: Hence, by a simple matrix inversion, we can find all the K(x_0, s_i) and then, using (B.9), the kernel G(x_0, x). Note in particular that the last column of K(x_0, s_i), and hence of G(x_0, x), only has terms in 1/(x_0 − s_i). In particular, if we define the "Bergmann kernel" B(x_0, x) = −(1/2) ∂_x G(x_0, x), we observe that it is a symmetric function. Finally, note that in the case d_2 = 2 (i.e. the one-matrix model) we recover the results of the one-matrix model described in [46], since we then have a scalar problem corresponding to the component (d_2 − 1, d_2 − 1) of our problem.
B.3 Computation/choice of K′(x_0, s_i) and K″(x_0, s_i)
Now that the K(x_0, s_i)'s and the kernel G(x_0, x) have been determined, the next step is to find the higher derivatives K′(x_0, s_i), K″(x_0, s_i), . . . in order to be able to apply the residue formula (7.15). Unfortunately, since K(x_0, x) is defined as a solution of a differential equation, it is not determined uniquely. For instance, we can write the loop equation projected at orders (x − s_i)^0, (x − s_i)^1 and (x − s_i)^2. Using the notation (B.5) we find: order 0: order 1: Remember that multiplying on the left by r w_i^t kills the second-derivative terms. We clearly see that, now that the K(x_0, s_i)'s are known, we have some arbitrariness in solving this set of equations. Moreover, it is not obvious that this set of equations is compatible with the determination of the K′(x_0, s_i)'s in the last section; fortunately, we prove in appendix C that it is. For our residue scheme, we can choose whatever kernel K(x_0, x) we want, provided it satisfies the assumptions described earlier. Here we will choose it so that: We claim that this choice is compatible with our previous set of equations. Indeed, since the matrices K′(x_0, s_i) have vanishing first d_2 − 2 lines, we can extract the missing components from (B.27): This expression is also equivalent to the following (multiply (B.27) on the left by r w_i^t and use (B.17) to get rid of K(x_0, s_i)): Then we can compute the first lines of K″(x_0, s_i) using (B.28), ∀ p < d_2 − 1: (B.33) where the r.h.s. is known. We also see that in (B.28) the last line of K″(x_0, s_i) vanishes identically, so it cannot be determined, and we choose it to be zero to simplify the computations. These degrees of freedom were also present in the one-matrix model [46], where K″(x_0, s_i) (which was scalar) remained unknown.
Moreover, if we count the degrees of freedom, we see that we have exactly d_2² unknowns in the determination of K(x_0, x), which corresponds to an unknown matrix of size d_2 × d_2, as expected for a linear differential equation of the form (7.12).
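The mechanism exploited in the next subsection, that once the first Taylor data of a solution of a linear differential system are fixed, all higher derivatives follow recursively, can be seen in a scalar toy case (y′ = x·y below is an invented stand-in, not the system (7.12)):

```python
from fractions import Fraction
from math import comb

# Scalar analogue of determining K^{(n)}(x0, s_i) from the differential
# system: for y'(x) = a(x) y(x), the Leibniz rule gives the recursion
#   y^{(n+1)}(0) = sum_k C(n,k) a^{(k)}(0) y^{(n-k)}(0).
# Toy case: a(x) = x, y(0) = 1, so y = exp(x^2/2).
def taylor_derivs(n_max):
    a = [Fraction(0), Fraction(1)]        # a(x) = x: a(0) = 0, a'(0) = 1
    d = [Fraction(1)]                     # initial data y(0) = 1
    for n in range(n_max):
        d.append(sum(Fraction(comb(n, k))
                     * (a[k] if k < len(a) else Fraction(0))
                     * d[n - k]
                     for k in range(n + 1)))
    return d

derivs = [int(v) for v in taylor_derivs(6)]
```

The recursion reproduces the derivatives of exp(x²/2) at 0, namely 1, 0, 1, 0, 3, 0, 15; in the matrix setting of appendix B the same bookkeeping is carried out line by line, with the extra subtlety of the non-invertible projector r w_i^t.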
B.4 Recursion to determine K^{(n)}(x_0, s_i)
Now that we have the matrices K(x_0, s_i), K′(x_0, s_i) and K″(x_0, s_i), we can perform a recursion to get the higher derivatives. To obtain them, one needs to look at the terms of order (x − s_i)^n in the expansion. The main problem here is not conceptual but notational. We introduce the following expansions: we also have: We regroup them into the following vectors: Note for example that, in these notations, we have: We will also need an expansion in (x − s_i)^p of the function x → G(x_0, x): It is now easy to write down the equality of the coefficients of power n ≥ 1 of the differential equation. Terms in (x − s_i)^n: We can project this equation on the first d_2 − 1 lines; any term with an r does not contribute. We find, for p < d_2 − 1: Note that the r.h.s. is completely known by recursion, since it involves derivatives only up to order n. Then, since we know the first lines, we can compute the missing last one by projecting the equation on the last line: Again, the r.h.s. is known by recursion since it involves only derivatives up to order n. Moreover, the coefficient (n − 1)T/N is never zero when n > 1, so that we can compute K^{(n+1)}(x_0, s_i)_{d_2−1,q}. Therefore, with both (B.39) and (B.40), we can determine all the components of K^{(n+1)}(x_0, s_i) as soon as we know the lower orders. Since we know K(x_0, s_i), K′(x_0, s_i) and K″(x_0, s_i), we can compute every derivative K^{(n)}(x_0, s_i) by recursion, and thus implement the scheme given by (5.10) to determine all the functions U^{(g)}_{0,k}(x) and then, by taking the last component of this vector (7.6), the correlation functions.

C Compatibility of the system at order 0
In this appendix, we show that the equations given by the identification of the coefficient of (x − s_i)^0 above do not add any new constraints.
Indeed, from equation (B.14) we get: A possible worry is that one could eliminate the term K′(x_0, s_i) in the previous equation by multiplying on the left by r w_i^t, and thereby obtain a closed system of equations for the K(x_0, s_i)'s different from the one presented before. We prove here that this does not happen. Let us first project the equation by multiplying on the left by r w_i^t. The first term disappears because of the relation (B.17). Therefore we find: From there, we can extract r w_i^t K′(x_0, s_i) and put it back into the first equation. We get a closed relation for the K(x_0, s_i): Some simplifications occur and we find: We can project this relation onto components. Note that the first d_2 − 2 lines do not give any new information, since they are the same as (B.14): The projection on the last line is more interesting: In the last term, the case k = d_2 − 1 cancels the previous one: We can put this system in a d_2 N × d_2 N matrix form. We introduce: where the matrices C_k, Id and 0 are of size N × N, given by: with a special case for C_{d_2−1}: Then the two sets of equations are equivalent to the matrix equation: The problem now is that this system is different from the one obtained before, so the system seems overdetermined. Fortunately, the matrices M and P and the vectors f_q and g_q have the same first d_2 − 2 lines. The main difference is that f_q has double poles at x_0 = s_i on the last line, whereas g_q has simple poles. Writing: and looking at the last component, we see that the two matrix systems are compatible if and only if the last column of P M^{-1} is null.
From the (companion-like) form of the matrices we have: It is an easy computation to see that the last column can be split into blocks of size m: where: so that: Multiplying on the left by P shows that the two systems are compatible if and only if: Note that we also have the determinantal identity: valid for all matrices A_i and any matrix X of size N × N. This condition looks completely nontrivial given the definitions of B and of the C_k, but we show below that, from the definition of the u_{k,i}'s, it is automatically satisfied. Note also that, because of (C.17), we obtain at the same time that P is not invertible, and thus only (B.26) gives us the K(x_0, s_i)'s. Let us now prove that we always have: From the definition of the u_{k,i}'s we have: We then obtain: Note that: so that: We then need to add the last term: From the definition of u_{d_2−1,2} we also have: and thus: (C.22), proving that only (B.26) gives the correct formula to determine all the K(x_0, s_i)'s.

D Example: Only one root s
In order to illustrate our algorithm, we present here the case where there is only one root s (we omit the subscript of s_1). In this case, almost all the computation can be carried out analytically, and in particular we show how to obtain W^{(1)}_1(x) and F_1 in this context. The results presented here are direct applications of the formulas of the article. First, from (6.14) we have: Then: The action S is given by: The extremum is obtained for u = 1, A = 1, V′_1(s) = s̃ and V′_2(s̃) = s = V′_2(V′_1(s)). The Hessian matrix is: At the extremum this gives: which gives: so that at the extremum: Hence, with (6.45): giving the formula for W_2(x_1, x_2): We can now compute the kernels G(x_0, x) and K(x_0, s). The d_2 × d_2 matrix M is given by (D.10), and the linear equation giving K(x_0, s) reads, for q = d_2 − 1: The expression of M^{-1} is quite complicated, but fortunately its last column (which is the only one we need for the computation of W^{(1)}_1) is the simplest one; we find easily that: We can then compute U_1(x, y; z): Projecting onto the k-th power of y gives: Therefore, with (7.16), we have:
We want to compute W^{(1)}_1(x), that is, we only need the last component of the previous vector. In components: The contribution from U^{(0)}_{0,k}(z) is easy to compute, since it only involves a double pole at x = s. Since the K′(x_0, s)_{p,q} are null for p < d_2 − 1, there is only one term involved, and an easy computation leads to: The contribution from U_{1,k}(z; z) is technically more involved. Indeed, since U_{1,k}(z; z) has poles up to order 4, we need the derivatives up to K‴(x_0, s), and we only need the last column of these matrices. Since m = 1, the formulas for the derivatives simplify considerably: which gives: Then (B.29) reduces, for p < d_2 − 1, to: The last component is a bit more involved; we can obtain it by multiplying (B.29) by r w^t in order to get rid of K″(x_0, s): which gives in components: We can now assemble W^{(1)}_1(x_0). First, we compute the term coming from the 1/(x − s)^3 part of (D.15) in (D.17): Therefore we find: Eventually, putting (1) and (2) together with the contribution from ∂_z U^{(0)}_{0,k}(z), we get (D.29), i.e.: A straightforward computation then gives F_1, defined by ∂F_1/∂V_1(x) = W^{(1)}_1(x), with: where F_0 is given by −S, which reduces when N = 1 to:

E Computation of W^{(1)}_1(x) in the general case
We have: where: The equation to solve is (E.6), involving U^{(1)}_0(x, x, ŷ) and an unknown polynomial −P^{(1)}(x, ŷ) defined in (E.7). Since we must perform an expansion in y, we write (E.8): and, with Φ_{i,k} = (T/N) Σ_j A_{i,j} E_{j,k}(s̃_j, t_n) (E.10): Since: we obtain: We have the following properties: We calculate: The system to be solved is: We have The term involving the derivatives is easy to compute; indeed, it gives: and from (B.32) we have: so that the term involving the derivative is: Note in particular that it contributes a simple pole at x_0 = s_i with residue N/T, where the kernel K(x_0, x) satisfies and the matrices α_i and Id are d_2 × d_2 matrices. We expand around s_j (E.34): and also Consequently, from the expansion of K(x_0, x) around x = s_j: The expansion of D^t around x = s_j is: where we have denoted: We also expand the kernels: We define the matrices (E.45): and expand M_j D^t − (T/N) d/dx K(x_0, x) = M_j G(x_0, x), identifying the powers of (x − s_j): and We note that The other equations give Finally,