Supereigenvalue models and topological recursion

We show that the Eynard-Orantin topological recursion, in conjunction with simple auxiliary equations, can be used to calculate all correlation functions of supereigenvalue models.


Introduction
It is well known that Hermitian matrix models satisfy Virasoro constraints. As a consequence, correlation functions of Hermitian matrix models obey a system of equations known as loop equations. In turn, these loop equations can be solved recursively to calculate all correlation functions of Hermitian matrix models. In fact, this recursive system can be generalized beyond matrix models [5,17,26,27]. The resulting abstract recursive formalism has become known in the literature as the Eynard-Orantin topological recursion. It can be used to calculate enumerative invariants in a variety of contexts, such as Gromov-Witten invariants and Hurwitz numbers (see for instance [13,15,21,25,28,30,31,35] and references therein).
An interesting question arises: can we generalize this triumvirate of Virasoro constraints, loop equations, and topological recursion to the supersymmetric realm? A natural starting point is supereigenvalue models, originally introduced in [6]; see also [1,7,9,18-20,32,34,36,38-40]. These were constructed precisely so that they satisfy super-Virasoro constraints. It was also shown that correlation functions in supereigenvalue models obey super-loop equations. Our objective is to complete the triad and recursively solve the super-loop equations.
Our main result is that in fact, there is no need to introduce a "super-topological recursion". Indeed, the Eynard-Orantin topological recursion, in conjunction with simple auxiliary equations, is sufficient to calculate all correlation functions of supereigenvalue models. This is perhaps unexpected. But as we will show, it follows because supereigenvalue models are highly constrained. The free energy of supereigenvalue models is at most quadratic in the Grassmann parameters, and in fact it is completely determined in terms of the free energy of Hermitian matrix models [9,36]. It is for this reason that the Eynard-Orantin topological recursion is sufficient to solve the super-loop equations.
However, the structure of the auxiliary equations hints at a new interpretation of the recursive structure in terms of super-geometry. Indeed, the Eynard-Orantin topological recursion starts with the geometry of a spectral curve. For supereigenvalue models, the starting point is also a spectral curve; that of the corresponding Hermitian matrix models.

JHEP04(2018)138
However, it must be supplemented with a Grassmann-valued polynomial equation, which can be thought of as a super-partner of the spectral curve. Together they form a "super spectral curve" (see for instance [18-20]). It is not clear, however, how to reformulate the topological recursion and the auxiliary equations as a single recursive structure living on the super spectral curve. We leave this for future work.
This paper is organized as follows. We first review Hermitian matrix models, Virasoro constraints, loop equations and the Eynard-Orantin topological recursion in section 2. This section is meant to be a (hopefully) pedagogical introduction to these topics, from the viewpoint of formal Hermitian matrix models. Then, we turn to the study of supereigenvalue models in section 3. We show that the Eynard-Orantin topological recursion, in conjunction with auxiliary equations, is sufficient to calculate all correlation functions of supereigenvalue models in section 4. We also consider the super-Gaussian model as a simple example of the recursive formalism. We end with a brief discussion in section 5.
In this section we introduce formal Hermitian matrix models, and review the connections between Virasoro constraints, loop equations and topological recursion. A detailed discussion can be found in [24].

Formal Hermitian matrix models
The question of convergence of matrix integrals is a rather complex one. However, for many applications of matrix models in physics and enumerative geometry, convergence is not really necessary. More precisely, in this context, even though it is not always explicitly mentioned, we are often interested in so-called formal matrix models, rather than convergent matrix models. The difference between the two is well explained in [24].
In this paper we focus on formal matrix models. An important feature of formal matrix models is that the quantities of interest, such as the partition function, the free energy and the correlation functions, all possess a well-defined 1/N expansion. Let us now define formal matrix models.

Partition function and free energy
Let H_N be the space of Hermitian N × N matrices and M ∈ H_N. The partition function of a formal Hermitian 1-matrix model is given by the formal series Z(t, t_3, ..., t_d; T_2; N) = (2.2) The normalization of the measure (2.2) is chosen such that the partition function is exactly one when all coupling constants t_k = 0. The free energy is then defined as F(t, t_3, ..., t_d; T_2; N) = log Z(t, t_3, ..., t_d; T_2; N). (2.3) In contrast, the partition function of a convergent Hermitian 1-matrix model is defined with the order of summation and integral in (2.1) switched: where V(x), defined in (2.5), is called the potential. Since summation and integral do not commute in general, formal matrix models (2.1) are different from convergent matrix models (2.4). Our interest in this paper lies only in formal Hermitian 1-matrix models. For simplicity, however, we often omit the arguments (t, t_k; T_2; N), and also denote a formal Hermitian matrix model by with the understanding that the summation should be taken outside of the integral. Hermitian matrix models possess a U(N) gauge symmetry, M → U† M U, where U is an N × N unitary matrix. If we fix the gauge freedom such that M is diagonalized, the partition function is given up to normalization by where ∆(λ) = ∏_{i<j} (λ_i − λ_j) is the Vandermonde determinant.
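Since the eigenvalue measure involves the squared Vandermonde determinant ∆(λ)^2, it may help to check numerically that the product formula agrees with the determinant definition. The following sketch is our own illustration (not from the paper; numpy assumed), with arbitrary sample eigenvalues; the two conventions for ∆ differ only by an overall sign, which drops out of ∆^2:

```python
import numpy as np

def vandermonde(lam):
    """Delta(lambda) = prod_{i<j} (lambda_j - lambda_i)."""
    n = len(lam)
    out = 1.0
    for i in range(n):
        for j in range(i + 1, n):
            out *= lam[j] - lam[i]
    return out

# arbitrary sample eigenvalues
lam = np.array([0.3, -1.2, 2.5, 0.9])
# Vandermonde matrix V[i, k] = lam[i]**k; its determinant equals the product formula
V = np.vander(lam, increasing=True)
assert np.isclose(np.linalg.det(V), vandermonde(lam))
```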

Correlation functions
As usual we define the expectation value of a function f by

and we denote by ⟨Tr f(M)⟩_c the corresponding connected expectation value. We are interested in the expectation values: It turns out to be convenient to collect all the T_{l_1···l_n}(t, t_k; T_2; N), for all nonnegative integers l_1, ..., l_n, in a single expression. We define the following generating functions, known as correlation functions: (2.10) The last equality is often used as a definition of the correlation functions; these should be understood as generating series in the variables 1/x_i.

1/N expansion
For formal matrix models the free energy and correlation functions have a nice 1/N expansion. It follows from the definition of the partition function (2.1) that the free energy (2.3) has an expansion where the F_g(t, t_k; T_2) do not depend on N. It can also be shown that the F_g(t, t_k; T_2) are in fact power series in t [24]. A similar 1/N expansion also holds for correlation functions: with the W_{g,n}(t, t_k; T_2; x_1, ..., x_n) independent of N. These are also power series in t. For simplicity of notation we will often drop the dependence on t, t_k and T_2. In fact, the expectation values T_{l_1···l_n}(t, t_k; T_2; N) can be interpreted in terms of ribbon graphs; we refer the reader to [24] for more details. It follows from the ribbon graph interpretation that they themselves have a 1/N expansion of the form: thus we can write, order by order, (2.14) The T^{(g)}_{l_1···l_n}(t, t_k; T_2) are power series in t. Furthermore, it follows from the ribbon graph interpretation that if we collect the terms in the summation over l_1, ..., l_n by powers of t, for each power of t only a finite number of terms are non-zero. In other words, order by order in t, W_{g,n}(x_1, ..., x_n) is polynomial in the variables 1/x_i, i = 1, ..., n [24].
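The 1/N expansion can be seen concretely in the Gaussian case. In a convenient normalization (density ∝ exp(−Tr M²/2); an assumption of this sketch, not the paper's conventions), ⟨Tr M⁴⟩ = 2N³ + N, where 2N³ counts planar (genus zero) ribbon graphs and N the genus one ones, while the connected correlator ⟨Tr M² Tr M²⟩_c = 2N². A quick Monte-Carlo check:

```python
import numpy as np

rng = np.random.default_rng(0)
N, samples = 8, 20000
tr2, tr4 = [], []
for _ in range(samples):
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    M = (A + A.conj().T) / 2          # GUE matrix with <|M_ij|^2> = 1
    M2 = M @ M
    tr2.append(np.trace(M2).real)
    tr4.append(np.trace(M2 @ M2).real)
tr2, tr4 = np.array(tr2), np.array(tr4)

# genus expansion of the one-point moment: 2 N^3 (planar) + N (genus one)
assert abs(tr4.mean() - (2*N**3 + N)) / (2*N**3 + N) < 0.05
# connected two-point function of Tr M^2
assert abs(tr2.var() - 2*N**2) / (2*N**2) < 0.1
```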

Virasoro constraints and loop equations
A fundamental result in the theory of formal matrix models is that Hermitian matrix models satisfy the so-called Virasoro constraints. This implies a set of relations between correlation functions known as loop equations. Let us now review these properties of matrix models.

Virasoro constraints
In order to study the Virasoro constraints it is more convenient to extend the potential (2.5) from a polynomial to a power series We will use this generalized potential to derive the Virasoro constraints and loop equations, but in the end we will set the coupling constants as in (2.16) to recover a polynomial potential as in (2.5).
With this generalized potential we can obtain the correlation functions W_{g,n}(x_1, ..., x_n) from the free energy by acting with the so-called loop insertion operator, which is defined by Then W_n(x_1, ..., x_n) and W_{g,n}(x_1, ..., x_n) are obtained by: We now define a sequence of operators {L_n}, for n ≥ −1: Note that the third term is defined to be zero if n = −1. One can show that these operators are generators for the Virasoro subalgebra: A fundamental result is that the formal Hermitian matrix model partition function with a generalized potential (2.15) is annihilated by the Virasoro operators. These are the Virasoro constraints: It can be shown that these two constraints are sufficient to uniquely determine Z, which is none other than (2.1). See [2-4] for more details. We set T_2 = 1 for simplicity for the remainder of this section.

Loop equations
From the Virasoro constraints satisfied by Hermitian matrix models one can derive a set of relations between correlation functions known as loop equations.
We start with the formal series in 1/x: where the first equality holds due to the Virasoro constraints. We can rewrite (2.24) using the fact that the correlation functions can be obtained by acting on the free energy with the loop insertion operator (2.18). After some manipulations we obtain the loop equation where V′(x) denotes the derivative of the potential with respect to x. See appendix B.1 for the derivation. Further, by acting an arbitrary number of times with the loop insertion operator on the loop equation, one obtains the general loop equation: where we introduced the notation J = (x_1, ..., x_n), and P_{n+1}(x; J) is defined by (2.28)

If we insert the 1/N expansion in (2.27), the coefficient of (N/t)^{2−2g−n} gives the expansion of the loop equation: where P_{g,n+1}(x, J) is defined by (2.30) If we now set the potential to be a polynomial of degree d, that is, the coupling constants are chosen as in (2.16), then the P_{g,n+1}(x, J) defined in (2.30) become polynomials in x; note however that they are not necessarily polynomials in x_1, ..., x_n. More precisely, P_{0,1}(x) has degree d − 2, while all other P_{g,n+1}(x, J) are polynomials in x of degree d − 3. This is because we can rewrite where Z̃ does not depend on g_0, and therefore (2.32) It then follows that the highest degree term x^{d−2} in P_{g,n+1}(x, J) is only non-vanishing for (g, n) = (0, 1).

Topological recursion
The loop equations (2.29) provide a set of relations between correlation functions. However, each relation depends on a polynomial P g,n+1 (x, J), see (2.30), which cannot a priori be calculated from the matrix model. Fortunately, there is a method for recursively solving the loop equations for the correlation functions W g,n (x 1 , . . . , x n ), without first knowing the polynomials P g,n+1 (x, J). This recursive method can in fact be generalized beyond matrix models to the broader setup of algebraic geometry [5,17,26,27]. The resulting abstract recursive formalism has become known in the literature as the Eynard-Orantin topological recursion. Henceforth we will refer to the recursive method for solving loop equations by this name as well.
For completeness, let us review in this section the recursive method for solving loop equations in the context of formal Hermitian 1-matrix models.

Planar limit and spectral curve
The loop equation (2.29) for g = 0, n = 1 is (2.34) or equivalently so that (2.35) can be rewritten as The loop equation (2.29) can also be rewritten in terms of y(x). We obtain: The 1-cut Brown's lemma [16,22,23], which applies to formal Hermitian 1-matrix models, implies that (2.37) defines a (potentially singular) genus zero hyperelliptic curve of degree 2d − 2: Lemma 2.1 (1-cut Brown's lemma). There exists a polynomial M(x) in x of degree d − 2 whose roots α_i are power series in t, and a pair a, b of power series in √t, where a + b and ab are power series in t, such that with non-zero constants α_i^0.
A key point here is that while everything so far was defined as a formal series in t, in (2.39) all the t-dependence is in a, b and the α_i. In fact, it follows from the 1-cut Brown's lemma that the coefficients of the degree 2d − 2 polynomial in x on the right-hand-side of (2.39) have a well-defined power series expansion in t. We can even go further and "re-sum" the power series; that is, we think of the coefficients as Taylor expansions of actual functions of t. In other words, we think of (2.39) as defining a t-dependent family of (potentially singular) genus zero hyperelliptic curves of degree 2d − 2.
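As an illustration of the planar limit (in an assumed normalization where W_{0,1}(x) ~ t/x at large x, with Gaussian potential V(x) = x²/2; this is our own sketch, not the paper's general case), the g = 0, n = 1 loop equation reduces to the quadratic equation W² − V′(x)W + t = 0, and the large-x expansion of its solution counts planar ribbon graphs via Catalan numbers. Both statements can be verified with sympy:

```python
import sympy as sp

x, t, u = sp.symbols('x t u', positive=True)

# Planar one-point function of the Gaussian model, branch with W ~ t/x at infinity
W01 = (x - sp.sqrt(x**2 - 4*t)) / 2

# It solves the planar loop equation W^2 - V'(x) W + P = 0 with P_{0,1} = t
assert sp.simplify(W01**2 - x*W01 + t) == 0

# Expansion at x -> infinity (x = 1/u) generates Catalan numbers:
# W_{0,1} = sum_k C_k t^{k+1} / x^{2k+1}
series = sp.expand(sp.series((1 - sp.sqrt(1 - 4*t*u**2)) / (2*u), u, 0, 9).removeO())
catalans = [sp.catalan(k) for k in range(4)]            # 1, 1, 2, 5
assert [series.coeff(u, 2*k + 1) / t**(k + 1) for k in range(4)] == catalans
```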

This hyperelliptic curve, which is called the spectral curve, plays a fundamental role in the topological recursion. In fact, we will want to interpret the correlation functions W_{g,n}(x_1, ..., x_n) as "living" on the spectral curve. Let us be a little more precise.
Since (2.39) has genus zero, we can parameterize it with rational functions: where z is a coordinate on the Riemann sphere. We can think of x : C∞ → C∞ as a branched double covering. Its two simple ramification points are at z = ±1, which are the two simple zeros of the one-form dx(z). The hyperelliptic involution that exchanges the two sheets of x : C∞ → C∞ is given by z → σ(z) = 1/z, with x(σ(z)) = x(z), y(σ(z)) = −y(z). (2.44)
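These statements can be checked symbolically in the simplest symmetric one-cut case. The parameterization x(z) = γ(z + 1/z) below is an assumed special case for illustration (the general curve has an additional shift):

```python
import sympy as sp

z, gamma = sp.symbols('z gamma', positive=True)
xz = gamma * (z + 1/z)   # Zhukovsky-type parameterization (assumed special case)

# the involution z -> 1/z preserves x(z) ...
assert sp.simplify(xz.subs(z, 1/z) - xz) == 0

# ... and dx has simple zeros exactly at the ramification points z = +1, -1
dx = sp.factor(sp.diff(xz, z))                   # gamma (z-1)(z+1)/z^2
assert sp.roots(sp.numer(dx / gamma), z) == {1: 1, -1: 1}
```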

Multilinear differentials and pole structure
We now want to understand the correlation functions as living on the spectral curve. More precisely, we define new objects, ω_{g,n}(z_1, ..., z_n), which are multilinear differentials on the Riemann sphere, and functions of t. In other words, they are multilinear differentials on the spectral curve. For g ≥ 0, n ≥ 1, and 2g − 2 + n ≥ 1, we define them such that ω_{g,n}(z_1, ..., z_n) = W_{g,n}(x_1, ..., x_n) dx_1 ··· dx_n, (2.45) where we defined x_i := x(z_i). By this equality, we mean that the Taylor expansion near t = 0 of the multilinear differential on the left-hand-side recovers the formal series of the correlation functions on the right-hand-side. For the two remaining cases, we define and (2.47) The ω_{g,n} are now honest multilinear differentials on the spectral curve, so we can study their properties. The most important aspect for us will be their pole structure. Let us start with ω_{0,2}(z_1, z_2).
First, we note that ω_{0,2}(z_1, z_2) cannot have poles at poles of x(z_1). Let us now consider the zeros of y(z_1) that are roots of M(x(z_1)). From (2.48), ω_{0,2}(z_1, z_2) can have at most poles of the form 1/(x(z_1) − α_i) there. But by the 1-cut Brown's lemma, we know that α_i = α_i^0 + O(t) with a non-zero constant α_i^0. Therefore, if we do a Taylor expansion near t = 0, the constant term would have the form where we used x_1 = x(z_1) for clarity. It would then contribute an infinite series in 1/x_1 for a fixed power of t, which contradicts the statement that ω_{0,2}(z_1, z_2) should recover a formal expansion in t with coefficients that are polynomials in 1/x_1. Therefore ω_{0,2}(z_1, z_2) cannot have poles at the roots of M(x(z_1)). This argument does not work, however, for the ramification points, which are simple zeros of y(z_1). But dx(z_1) also has a simple zero there, hence ω_{0,2}(z_1, z_2) does not have poles at the ramification points.
All that remains are the coinciding points z_1 = z_2 and z_1 = σ(z_2). As z_1 → σ(z_2), y(z_1) → y(σ(z_2)) = −y(z_2), and the double pole of the first line in (2.48) cancels against the double pole of the second line. It thus follows that the only pole of ω_{0,2}(z_1, z_2) is a double pole at z_1 = z_2.
In fact, there is a unique bilinear differential on the spectral curve with a double pole at z_1 = z_2, no other pole, and that satisfies (2.49): on our genus zero curve it is simply ω_{0,2}(z_1, z_2) = dz_1 dz_2/(z_1 − z_2)^2. This is the normalized bilinear differential of the second kind, which can be defined uniquely for Riemann surfaces of arbitrary genus [29]. The normalization is of course trivial here since the Riemann surface has genus zero.
For ω_{1,1}, the loop equation (2.38) can be rewritten in terms of differentials as The two terms on the right-hand-side are clearly invariant under z_0 → σ(z_0), hence As for ω_{0,3}, (2.38) can be rewritten as (2.55) The first two terms on the right-hand-side are exchanged under z_0 → σ(z_0), while the remaining terms are invariant. Therefore We now prove (2.52) by induction. Assume that it is true for all (g, n) such that 0 ≤ 2g − 2 + n < k. We show that it must then be true for 2g − 2 + n = k. Assuming the induction hypothesis, for 2g − 2 + n ≥ 1 we can rewrite (2.38) in terms of differentials as and all other terms on the right-hand-side are also invariant. Therefore and, by induction, this must hold for all (g, n) such that 2g − 2 + n ≥ 0.

Let us now study the pole structure of ω_{g,n+1}(z_0, J) in terms of z_0; since the correlation functions are symmetric, the result will hold for all other z_i, i = 1, ..., n as well. The only possible poles are at zeros of y(z_0), at coinciding points z_0 = z_i and z_0 = σ(z_i), i = 1, ..., n, and at poles of x(z_0). First, there is no pole at poles of x(z_0) since P_{g,n+1}(x(z_0), x(z_1), ..., x(z_n)) has degree d − 3. Second, there is no pole at coinciding points z_0 → z_i by the loop equation (2.38), and no pole either at z_0 → σ(z_i) by the antisymmetric involution relation (2.52). All that remains are zeros of y(z_0). By the same argument as for ω_{0,2}(z_1, z_2), there cannot be poles at zeros of M(x(z_0)); otherwise the ω_{g,n+1} would have expansions in t with coefficients that are not polynomials in 1/x(z_0). The only remaining possible poles are at the ramification points of x : C∞ → C∞, that is, z_0 = ±1. In contrast to ω_{0,2}(z_1, z_2), these poles can be of higher order, and dx(z_0) is not sufficient to get rid of them.

Topological recursion
We are now ready to solve the loop equations recursively to determine all ω_{g,n+1} from ω_{0,1} and ω_{0,2}. Let us start with the loop equation (2.57), rewritten as with J = {z_1, ..., z_n}. It is clear that the third line of the expression has no pole at the ramification points in z_0. Thus, if we evaluate the residue of the right-hand-side at the ramification points, the third line does not contribute. We now take advantage of this fact to construct the so-called topological recursion. Let us introduce the normalized differential of the third kind ω_{a−b}(z), which has simple poles at z = a and z = b with residues +1 and −1 respectively. It is given by This object can in fact be defined for Riemann surfaces of arbitrary genus as the integral (in the fundamental domain) of the normalized bilinear differential of the second kind. Let α be a generic base point on the Riemann sphere, and consider ω_{z−α}(z′). While it is a one-form in z′, we can also think of it as a function of z. (Note however that this is only true on the Riemann sphere; on higher genus Riemann surfaces it is only defined as a function of z in the fundamental domain.) It then follows that Σ_{a ∈ all poles} Res_{w=a} ω_{w−α}(z_0) ω_{g,n+1}(w, J) = 0. (2.61)

For 2g − 2 + n ≥ 0, the only poles of the integrand are at w = z_0 and at the ramification points w = ±1. The residue at w = z_0 gives It then follows that and, substituting (2.59) in the right-hand-side, we obtain the topological recursion: This is a recursive formula which calculates all ω_{g,n+1}(z_0, J), 2g − 2 + n ≥ 0, from the initial data of a genus zero spectral curve and a bilinear differential (2.67) We note that the Eynard-Orantin topological recursion is of course much more general than this. It can be defined for (almost) arbitrary spectral curves, not just (singular) hyperelliptic genus zero curves [5,11,12,26,27]; in fact, it was also recently reformulated in terms of quantization of Airy structures in [8,10,33]. However, in the context of formal Hermitian 1-matrix models the formulation given here is sufficient to calculate all correlation functions.
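To make the recursion concrete, here is a minimal sympy sketch of its first step for the Gaussian spectral curve with t = 1. The curve x(z) = z + 1/z, y(z) = (z − 1/z)/2 and the overall sign of the recursion kernel are assumptions of this sketch (sign conventions vary in the literature); it computes ω_{1,1}(z_0) as a sum of residues at the ramification points z = ±1:

```python
import sympy as sp

z, z0 = sp.symbols('z z0')

# Gaussian spectral curve at t = 1 (assumed normalization):
x = z + 1/z
y = (z - 1/z) / 2
sig = 1/z                                        # hyperelliptic involution

# Normalized bilinear differential on the sphere: B(z1,z2) = dz1 dz2/(z1-z2)^2.
# Its integral from sigma(z) to z in the second variable:
intB = 1/(z0 - z) - 1/(z0 - sig)

# Recursion kernel; the sign convention is fixed here so that omega_{1,1}
# comes out with the standard sign.
K = intB / (2 * (y.subs(z, sig) - y) * sp.diff(x, z))

# First step: omega_{1,1}(z0) = sum of residues at z = +1, -1 of
# K(z0,z) * omega_{0,2}(z, sigma(z)); pulling back by sigma brings d sigma/dz.
w02_on_diag = (1 / (z - sig)**2) * sp.diff(sig, z)
integrand = sp.together(K * w02_on_diag)
w11 = sum(sp.residue(integrand, z, a) for a in (1, -1))

# Known closed form for this curve: omega_{1,1}(z0) = z0^3 dz0 / (z0^2 - 1)^4
assert sp.simplify(w11 - z0**3 / (z0**2 - 1)**4) == 0
```

Expanding this ω_{1,1} back in 1/x reproduces the genus-one Gaussian moments appearing in the ribbon graph expansion.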
The idea of supereigenvalue models is to construct a partition function that is annihilated by differential operators that are generators for a super-Virasoro subalgebra in the NS sector. The resulting partition function is not a matrix model, but it can be understood as a supersymmetric generalization of Hermitian matrix models in the eigenvalue formulation (2.7), hence the name supereigenvalue models.

From the super-Virasoro constraints one can also derive super-loop equations satisfied by correlation functions. The missing link then is whether there exists a recursive method for solving the super-loop equations. We show that, in fact, the standard Eynard-Orantin topological recursion, combined with simple auxiliary equations, is sufficient to calculate all correlation functions in supereigenvalue models.

Supereigenvalue models and super-Virasoro constraints
Let us start by defining supereigenvalue models.

Partition function and free energy
Let V(x) be a power series potential (2.15): and define a fermionic potential Ψ(x) as where the ξ_{k+1/2} are Grassmann coupling constants. We define the partition function of the formal supereigenvalue model as where the measure is with the θ_i Grassmann variables. ∆(λ, θ) will be determined shortly. One should keep in mind that this is a formal model, that is, the summation should be understood as being outside the integral. Similarly to formal Hermitian matrix models, it can be shown that Z is given by a formal power series in t_s. The free energy F of the supereigenvalue model is defined as usual by Remark 2. We will denote objects in supereigenvalue models, such as the partition function, free energy, and correlation functions, with curly letters Z, F and W_n to differentiate them from their Hermitian counterparts.

Super-Virasoro constraints
We want the partition function above to be annihilated by a sequence of differential operators that are generators for a closed subalgebra of the N = 1 superconformal algebra in the Neveu-Schwarz (NS) sector. Let us first define such operators, and then show that we can uniquely fix ∆(λ, θ) such that the partition function is annihilated by these operators.

We define the super-Virasoro operators L_n, G_{n+1/2} for n ≥ −1 as where the sums Σ_{k=0}^{−1} and Σ_{k=0}^{−2} are defined to be zero. These operators are generators for the super-Virasoro subalgebra [6]: We now want to impose the super-Virasoro constraints, that is, we want the partition function Z to satisfy the requirement that First, we note that the condition L_n Z = 0, n ≥ −1, is automatically satisfied if G_{n+1/2} Z = 0, n ≥ −1, by the super-Virasoro algebra (3.8). So we only need to impose the fermionic condition.
It is straightforward to show that there is a unique choice of ∆(λ, θ), up to overall rescaling, such that G_{n+1/2} Z = 0, n ≥ −1: (3.10) It is now clear why we defined the partition function (3.3) with 2N eigenvalues; if the number of eigenvalues is odd, the eigenvalue integral with the ∆(λ, θ) above vanishes. Hence we must require an even number of eigenvalues.

Super-Virasoro constraints and free energy
The super-Virasoro constraints are the requirement that the partition function Z satisfies the equations G_{n+1/2} Z = 0, L_n Z = 0, n ≥ −1. (3.11)

In this section we formally manipulate these equations to rewrite them in terms of the fermionic expansion of the free energy F = log Z. This will be useful later on.
Let us do a power series expansion of F in the Grassmann coupling constants ξ_{k+1/2}. First, we know that only terms with an even number of Grassmann coupling constants are non-vanishing in the expansion, since F is a bosonic quantity. We then introduce the notation where F^{(2k)} denotes the term of order 2k in the Grassmann coupling constants. For instance, F^{(2)} is quadratic in the ξ_{k+1/2}.
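To make the fermionic expansion concrete, here is a minimal Grassmann-algebra sketch (our own illustration; the class and its methods are not from the paper). It encodes anticommutation and nilpotency of the generators and extracts the order-2k part of an element, mirroring the expansion F = Σ_k F^{(2k)}:

```python
from itertools import product

class Grassmann:
    """Elements stored as {frozenset(generator indices): coefficient}."""
    def __init__(self, terms=None):
        self.terms = {k: c for k, c in (terms or {}).items() if c != 0}

    @staticmethod
    def gen(i):
        return Grassmann({frozenset([i]): 1})

    def __add__(self, other):
        t = dict(self.terms)
        for k, c in other.terms.items():
            t[k] = t.get(k, 0) + c
        return Grassmann(t)

    def __mul__(self, other):
        t = {}
        for (a, ca), (b, cb) in product(self.terms.items(), other.terms.items()):
            if a & b:                 # repeated generator squares to zero
                continue
            # sign from sorting the concatenated index word
            sign = (-1) ** sum(1 for i in a for j in b if j < i)
            t[a | b] = t.get(a | b, 0) + sign * ca * cb
        return Grassmann(t)

    def part(self, degree):
        """Terms of the given order in the Grassmann generators."""
        return Grassmann({k: c for k, c in self.terms.items() if len(k) == degree})

xi1, xi2 = Grassmann.gen(1), Grassmann.gen(2)
assert (xi1 * xi2 + xi2 * xi1).terms == {}     # anticommutation
assert (xi1 * xi1).terms == {}                 # nilpotency
F = Grassmann({frozenset(): 3}) + xi1 * xi2    # a toy "F^(0) + F^(2)"
assert F.part(2).terms == {frozenset({1, 2}): 1}
```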
The condition G_{n+1/2} Z = 0, n ≥ −1, rewritten in terms of the free energy F, becomes Identifying term by term in the expansion in the Grassmann coupling constants, we get the system of equations (3.14) for l ≥ 1.
The other Virasoro constraints, L_n Z = 0, n ≥ −1, become, in terms of F,

Order by order in the Grassmann coupling constants, we get for l ≥ 0.

Quadratic truncation and relation to Hermitian matrix models
Let us now come back to the formal supereigenvalue model. A remarkable fact, originally proven in [36], is that the free energy F of the formal supereigenvalue model contains the Grassmann coupling constants ξ_{k+1/2} only up to quadratic order. This is highly non-trivial. In the notation above, it means that (3.17) For completeness, we provide a proof of this truncation for supereigenvalue models in appendix A.
Remark 3. While we have not investigated this closely yet, we do not expect the truncation of the free energy at quadratic order to hold in multi-cut supereigenvalue models; we expect non-vanishing higher order terms. This is because the proof is based on a careful permutation of the indices of the λ_i, θ_i, which cannot be done freely in multi-cut models. In the formal language, this means that the truncation only holds for formal supereigenvalue models, in which case the spectral curve has genus zero. We do not expect it to hold for formal "multi-supereigenvalue models" for which the spectral curve would have higher genus.
It turns out that this truncation of the fermionic expansion of F implies that F is closely related to the free energy F of the formal Hermitian 1-matrix model. More precisely, setting t_s = 2t, we get the following relation, which was proven in [9,36]: Note that the free energy on the left-hand-side is that of the formal supereigenvalue model, while the free energy on the right-hand-side is that of the formal Hermitian model. In other words,

This relation is fundamental. What it says is that the free energy of the formal supereigenvalue model is completely determined in terms of the free energy of the formal Hermitian matrix model.
Let us now provide a proof of this formula from the super-Virasoro constraints. Our proof is different in flavour from the original one in [9]. It is purely algebraic; we show that the relation is a direct consequence of the super-Virasoro constraints, assuming the existence of a solution that is quadratic in the Grassmann coupling constants.
which is the case for the free energy of formal supereigenvalue models. (3.14) for l = 2 becomes For n = 0, we use the fact that Z = e^{−2N^2 g_0/t} Z̃, where Z̃ does not depend on g_0, to see that F^{(2)} does not depend on g_0. Thus we get On the one hand, (3.23) means that for some A^{(1)} which is linear in the Grassmann parameters ξ_{k+1/2}. On the other hand, (3.24) says that for some Ã^{(1)} that is also linear in the ξ_{k+1/2}. Therefore where F̃^{(0)} = F̃^{(0)}(t, g_k; T_2; N) is some unknown function of t, g_k, T_2 and N, which is independent of the Grassmann parameters ξ_{k+1/2}. Let us now consider (3.16) for l = 1 and n = 0. We get:

where we used the fact that F^{(2)} is independent of g_0. Substituting (3.27), we get We will need this equation soon.
Let us now consider (3.14) for l = 1 and n = −1. We have Let us now multiply by ξ_{k+1/2} on the left, apply ∂/∂g_{k+1}, and sum over k. We get: (3.32) By (3.29), the first term is zero. Therefore, we conclude that In other words, the free energy of the formal supereigenvalue model can be written as Note that so far we have only used the super-Virasoro constraints for n = −1 and n = 0. What remains to be shown is that F^{(0)}(2t, g_k; T_2; 2N) = 2F(t, g_k; T_2; N), where the right-hand-side is the free energy of the formal Hermitian matrix model. We go back to (3.16) for l = 0 and arbitrary n: We substitute (3.33): Using the fact that ∂F^{(0)}/∂g_0 is a constant, this simplifies to: Let us rewrite this equation in terms of F̃(t, g_k; T_2; N) = (1/2) F^{(0)}(2t, g_k; T_2; 2N). We get These two constraints are sufficient to determine that F̃(t, g_k; T_2; N) is the free energy of 1-cut formal Hermitian matrix models (see Remark 1). Thus, we conclude that F̃(t, g_k; T_2; N) = F(t, g_k; T_2; N), that is, the free energy of the formal supereigenvalue model takes the form (3.41)

We now set T_2 = 1 for simplicity. With this under our belt, we can define the 1/N expansion of the free energy. Since t_s = 2t, it is natural to define the 1/N expansion of F as (3.42) (3.43)

Correlation functions
Since the free energy of supereigenvalue models is completely determined in terms of the free energy of the Hermitian matrix model, we expect a similar statement to be true for correlation functions. The correlation functions of formal Hermitian 1-matrix models can be obtained by acting with the loop insertion operator (2.17) a number of times on the free energy, as shown in (2.18). We can define correlation functions in supereigenvalue models in a similar way.
We define the following bosonic and fermionic loop insertion operators: (3.44) Correlation functions are then obtained by: where J = {x_1, ..., x_n} and K = {X_1, ..., X_m}. We dropped the dependence of the correlation functions on the coupling constants for clarity. As usual, the correlation functions inherit from (3.42) a 1/N expansion: We can further expand the correlation functions in terms of the fermionic coupling constants ξ_{k+1/2}. Since F is at most quadratic in the Grassmann parameters, i.e. F = F^{(0)} + F^{(2)}, we see that the only non-vanishing correlation functions have 0 ≤ m ≤ 2. Further, we get where, as usual, the superscript denotes the terms of a given order in the Grassmann parameters. Moreover, by (3.43) we expect all these correlation functions to be determined in terms of correlation functions of the Hermitian matrix model. For instance, it is clear that W^{(0)}_{g,n|0}(J|) = 2W_{g,n}(J). (3.50) Let us now study the other non-vanishing correlation functions.
We start with W^{(0)}_{g,n|2}(J|X_1, X_2). We have: We can simplify this further. Recall from (2.32) that ∂F_g/∂g_0 = 0 for all g ≥ 1. (3.52) Thus we can rewrite It thus follows that W^{(0)}_{g,n|2}(J|X_1, X_2) = 2(X_1 − X_2) W_{g,n+2}(X_1, X_2, J). Let us now turn to W^{(1)}_{g,n|1}(J|X_1). We do not need to do much work here. We note that (3.55) But since W is the identity operator. Hence, we get For W^{(2)}_{g,n|0}(J|), we get: We can use the same residue trick as for W^{(1)}_{g,n|1}(J|X_1). It then follows that It is easy to see that the right-hand-side is 2W^{(2)}_{g,n|0}(J|), and we obtain To summarize, we obtain the following relations between the correlation functions of supereigenvalue models, which can be thought of as a consequence of Proposition 3.1 for correlation functions:
The important point here is that all correlation functions of formal supereigenvalue models are determined in terms of the correlation functions W_{g,n}(J) of formal Hermitian matrix models.

Super-loop equations
Let us now turn to the study of the super-loop equations. There is more than one type of loop equation in supereigenvalue models, depending on the order in the Grassmann coupling constants. We call loop equations with an even (resp. odd) dependence on the Grassmann parameters "bosonic" (resp. "fermionic"). We simply state the equations here and leave their derivations to appendices B.2 and B.3.

Fermionic loop equation
The derivation of the fermionic loop equation starts with the following formal series. After a few manipulations we obtain the fermionic loop equation (3.62). We now expand it in terms of 1/N and act an arbitrary number of times with the bosonic loop insertion operator on it. Collecting terms order by order in the Grassmann coupling constants, we get the following two fermionic loop equations, where we defined P_{g,n|1}(J|X), which, by (3.43), has an expansion P_{g,n|1}(J|X) = P^{(1)}_{g,n|1}(J|X) + P^{(3)}_{g,n|1}(J|X). (3.68)

Bosonic loop equation
To get the bosonic loop equation, we start with the formal series: We manipulate the above equation to obtain the bosonic loop equation:

JHEP04(2018)138
where we defined (3.71). Before performing a 1/N expansion, let us study the dependence on the Grassmann parameters. Eq. (3.18) implies that the dependence of the bosonic loop equation on the Grassmann parameters is at most of order 4. Since P_{1|0}(x|) depends on the Grassmann parameters at most quadratically, the order-4 terms in the bosonic loop equation directly yield the condition stated below. Let us now study the bosonic equation at orders 2 and 0. We act an arbitrary number of times with the bosonic loop insertion operator on (3.70), and then perform a 1/N expansion. We also collect terms according to their order in the Grassmann parameters. We obtain the two following equations, involving V′(x)W^{(2)}_{g,n+1|0}(x, J|) − P^{(2)}_{g,n+1|0}(x, J|) and W^{(1)}_{g−h,n−m|1}(J\I|x), where we defined

Topological recursion
The correlation functions of supereigenvalue models are fully determined by (3.60).
In this section we reformulate these equations in the geometric language of the Eynard-Orantin topological recursion.
To this end, we now restrict ourselves to polynomial potentials. That is, we choose the coupling constants g_k such that g_0 = g_1 = g_2 = 0 and g_j = 0 for all j > d. For the fermionic couplings, we set ξ_{k+1/2} = 0 for all k > d′. Note that d and d′ are independent integers.

Then we showed that: 1. there exist two meromorphic functions x(z) and y(z) on the Riemann sphere such that ω_{0,1}(z) = y(z)dx(z), with M(x) a polynomial of degree d − 2. We call this hyperelliptic curve the spectral curve of the matrix model. We generally choose the coordinate z on the Riemann sphere as given by the parameterization of the hyperelliptic curve.
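For later reference, the Eynard-Orantin recursion referred to as (2.64) takes the following standard shape (stated here in its usual normalization; the kernel K is built from ω_{0,1} and ω_{0,2}, and precise prefactors follow the general literature rather than necessarily this paper's conventions):

```latex
\omega_{g,n+1}(z_0,J) \;=\; \sum_{a}\ \operatorname*{Res}_{z \to a}\ K(z_0,z)\left[\,\omega_{g-1,n+2}\big(z,\sigma(z),J\big) \;+\; \sum_{\substack{g_1+g_2=g\\ I_1\sqcup I_2=J}}^{\prime} \omega_{g_1,1+|I_1|}(z,I_1)\,\omega_{g_2,1+|I_2|}\big(\sigma(z),I_2\big)\right],
```

where the sum runs over the ramification points a, σ is the local involution exchanging the two sheets of the x-covering, and the primed sum excludes all terms containing ω_{0,1}.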

Supereigenvalue multilinear differentials
We would like to extend these results to supereigenvalue correlation functions. More precisely, we would like to construct (Grassmann-valued) multilinear differentials on the spectral curve in a similar way. For 2g − 2 + m + n + p ≥ 1, we want to construct multilinear differentials Ω^{(p)}_{g,n|m}(z_1, ..., z_n|w_1, ..., w_m) = (1/2) W^{(p)}_{g,n|m}(x_1, ..., x_n|X_1, ..., X_m) dx_1 ⋯ dx_n dX_1 ⋯ dX_m, (4.7) where x_i := x(z_i) and X_j := x(w_j). As usual, the equality here means that after Taylor-expanding the left-hand side near t = 0 we recover the formal series of the correlation functions on the right-hand side. Note that the factor of 1/2 is simply there for convenience.
For the "unstable" cases (2g − 2 + m + n + p ≤ 0), we modify the definitions slightly as for Hermitian matrix models. We define: Our goal is to show that we can construct such multilinear differentials.

Spectral curve
As for Hermitian matrix models, we know that there exist two meromorphic functions x(z) and y(z) on the Riemann sphere such that Ω^{(0)}_{0,1|0}(z|) = y(z)dx(z), where M(x) is a polynomial of degree d − 2. We choose our coordinate z on the Riemann sphere as in (4.5).
Eq. (4.15) can be thought of as a superpartner to the spectral curve (4.12). Together they form a super spectral curve; see [18-20] for more on this. Ultimately, one would like to reformulate the recursive structure as living on this super spectral curve. We leave this for future work.

Stable cases
Eq. (3.60) of course plays a crucial role here. Let {ω_{g,n}(J)} for 2g − 2 + n ≥ 1 be the set of multilinear differentials for formal Hermitian matrix models obtained from the Eynard-Orantin topological recursion (2.64). Then (3.60) implies that the stable multilinear differentials for (m, p) = (0, 0) and (m, p) = (2, 0) are determined by (4.20). Note that, as shown in (2.58), all stable ω_{g,n}(J) are odd under the hyperelliptic involution z → σ(z) = 1/z. Thus, this must hold for the stable Ω^{(0)}_{g,n|0}(J|) and Ω^{(0)}_{g,n|2}(J|w_1, w_2) as well; in particular, Ω^{(0)}_{g,n|2}(J|w_1, w_2) = −Ω^{(0)}_{g,n|2}(J|w_1, σ(w_2)). (4.21) We now turn to the case (m, p) = (1, 1). To obtain the differentials in this case, we would like to recast (3.60) as a residue formula on the Riemann sphere. Notice that we can rewrite the third equation in (3.60) as a residue, since W^{(0)}_{g,n|2}(J|X, X_1)dX is regular at X → ∞. Then, in terms of differentials on the Riemann sphere, and using the Grassmann-valued function γ(z) introduced earlier, we obtain an expression for Ω^{(1)}_{g,n|1}. Remark that the residue here still makes sense: both γ(w) and Ω^{(0)}_{g,n|2}(J|w, w_1) are odd under the hyperelliptic involution w → 1/w, hence the integrand itself is even, and therefore a well-defined differential form on the base x(w).
We can rewrite this expression as a residue on the Riemann sphere itself: 2 Res_{x(w)=∞} γ(w)Ω^{(0)}_{g,n|2}(J|w, w_1). Finally, we notice that (4.20) ensures that the integrand can have poles only at the ramification points of the x-covering (i.e. at w = ±1) and at the poles of x(w) (i.e. w = 0 and w = ∞), since all stable ω_{g,n}(J) have poles only at the ramification points. Using the fact that the sum of all residues of a differential form on the Riemann sphere vanishes, we arrive at the residue formula for Ω^{(1)}_{g,n|1}.
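The key analytic fact used here, that the residues of a meromorphic differential on the Riemann sphere sum to zero once the point at infinity is included, can be checked explicitly on a rational example. The differential below is an arbitrary illustration, not one appearing in this paper:

```python
import sympy as sp

w, u = sp.symbols('w u')

# A rational function f(w); f(w) dw is a meromorphic differential on P^1.
f = (3*w**2 + 1) / ((w - 1)**2 * (w + 1) * w)

# Residues at the finite poles w = 0, 1 (double pole), -1.
res_finite = sum(sp.residue(f, w, p) for p in [0, 1, -1])

# Residue at infinity: Res_{w=inf} f(w) dw = -Res_{u=0} f(1/u)/u^2 du.
res_inf = sp.residue(-f.subs(w, 1/u) / u**2, u, 0)

# The total residue of a differential on the Riemann sphere vanishes.
total = sp.simplify(res_finite + res_inf)
print(total)  # prints 0
```

This is exactly the mechanism that trades the residue at x(w) = ∞ for residues at the ramification points w = ±1 in the argument above.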

Following the same reasoning, we can turn the fourth equation in (3.60) into a residue formula on the Riemann sphere. We obtain a formula involving Res_{w=a} γ(w)Ω^{(1)}_{g,n|1}(J|w), (4.27) which is again valid for 2g + n ≥ 1. In terms of correlation functions of the Hermitian matrix model, we get an explicit formula for Ω^{(2)}_{g,n|0}(J|). Therefore, all correlation functions of supereigenvalue models can be determined using the topological recursion on the spectral curve (4.12), in conjunction with auxiliary equations defined in terms of the Grassmann-valued polynomial equation (4.15). To summarize, we get: Theorem 4.1. Starting from the spectral curve (4.12), the Eynard-Orantin topological recursion constructs a sequence of multilinear differentials ω_{g,n}(J). The correlation functions of supereigenvalue models are then encoded in the following Grassmann-valued multilinear differentials on the spectral curve (4.12).

Super-Gaussian model
As an example, let us consider the super-Gaussian model, which is the simplest supereigenvalue model. We calculate the spectral curve and the associated Grassmann-valued function. Then by applying Theorem 4.1 we compute a few multilinear differentials.

Spectral curve
The super-Gaussian model is defined by the following potentials: (4.31) Then we have (4.32). We know the first terms from (2.32) and (3.18). To calculate the last term in (4.32), we use the following trick. Note that the first equality is due to (3.43) together with (2.32), and we shifted λ → λ + g_1 in the third equality. Thus, we get the Grassmann-valued polynomial equation.

These polynomial equations are obtained from the super-loop equations without using the relation (3.18) with Hermitian matrix models. Thus, one might expect that this super spectral curve provides the right initial conditions to go beyond the one-cut case. One would probably need to reformulate the recursive structure in terms of differentials living on the super spectral curve; at this moment we do not have a clear understanding of how to proceed. There are two other research directions that we are currently investigating. Firstly, it is interesting to ask whether one can define supereigenvalue models that satisfy super-Virasoro constraints corresponding to the super-Virasoro subalgebra in the Ramond sector, and then see whether the appropriate correlation functions can be computed recursively.
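As a hedged reminder of the bosonic half of this computation: for a purely Gaussian potential V(x) = x²/2 with 't Hooft parameter t, the standard one-cut spectral curve and its rational parameterization read (normalizations may differ from the conventions used in this paper):

```latex
y(x) = \tfrac{1}{2}\sqrt{x^2 - 4t}, \qquad x(z) = \sqrt{t}\left(z + \tfrac{1}{z}\right), \qquad y(z) = \tfrac{\sqrt{t}}{2}\left(z - \tfrac{1}{z}\right),
```

with hyperelliptic involution σ(z) = 1/z and ramification points at z = ±1, consistent with the poles and branch points used in section 4.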
We have found such a supereigenvalue partition function, and derived super-loop equations (see also [20], where such supereigenvalue models are also derived from the point of view of quantum curves). We are currently investigating a recursive solution to these super-loop equations [37].
Secondly, recently Kontsevich and Soibelman reformulated the Eynard-Orantin topological recursion in terms of so-called "Airy structures" [8,10,33]. From this viewpoint, the initial data is a sequence (finite or infinite) of quadratic differential operators that generate a Lie algebra. A natural question then is whether one can define "super-Airy structures", in terms of quadratic (bosonic and fermionic) differential operators that generate a super-Lie algebra. This is indeed possible, as we show in [14]. An open question then is whether the partition function corresponding to super-Airy structures has an interesting enumerative geometric meaning. Could it be related to enumerative geometry on the moduli-space of super-Riemann surfaces?
A Derivation of (3.17)

Here we give a proof of (3.17), the statement that the free energy of supereigenvalue models is at most quadratic in the Grassmann parameters. We include the proof here for completeness; it follows along similar lines to the original proof in [36].
Setting t_s = 2t, the partition function (3.3) of supereigenvalue models can be written as follows. We drop the "formal" superscript in this appendix for clarity. We would now like to integrate over the 2N Grassmann variables θ_i. Recall the rules of Grassmann integration, where σ ∈ S_{2N}. The first equation ensures that terms with an odd number of ξ_{k+1/2} vanish, hence the partition function is expanded as
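The Grassmann (Berezin) integration rules invoked here are, in standard conventions (orderings and signs vary between references):

```latex
\int d\theta_i \, 1 = 0, \qquad \int d\theta_i \, \theta_i = 1, \qquad \int d\theta_{2N} \cdots d\theta_1 \; \theta_{\sigma(1)} \cdots \theta_{\sigma(2N)} = \operatorname{sgn}(\sigma), \quad \sigma \in S_{2N}.
```

The first rule is what forces terms with an odd number of Grassmann couplings to vanish upon integration.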

where the superscript denotes the order in the Grassmann couplings ξ_{k+1/2}. Note that the highest possible order in ξ_{k+1/2} is 2N, no matter what the degree of the Grassmann potential Ψ(x) is: there are only 2N Grassmann variables θ_i to be integrated. More precisely, we have (A.4). We can now evaluate the integral over the Grassmann variables θ_i; it is not too difficult to see that (A.5) holds. Next, the Vandermonde determinant in (A.4) can be expressed as a sum over permutations. By plugging this and (A.5) into (A.4), we get (A.7). Since every λ_i is integrated, for each permutation σ ∈ S_{2N} we can rename λ_{σ(i)} → λ_i. As a result, each term in the summation over σ ∈ S_{2N} gives the same integral, and we get:

We now introduce a 2N × 2N antisymmetric matrix A and a Grassmann-valued 2N-vector ζ with the following components. We can then rewrite Z^{(2K)} neatly in terms of A_{τ(2j)τ(2j−1)}. (A.12) Next, recall that the Pfaffian of a 2N × 2N antisymmetric matrix A is defined by (A.13). Thus, for K = 0, we get directly that Z^{(0)} = (2N)! pf(A). (A.14) To study the K > 0 case, we need to say a little more about Gaussian Grassmann integrals. Let M be a 2N × 2N antisymmetric matrix and θ a Grassmann-valued 2N-vector. Then the Gaussian Grassmann integral evaluates to a Pfaffian; this follows by expanding the exponential and integrating directly over the Grassmann variables. Moreover, just as for ordinary Gaussian integrals, we can also calculate shifted Gaussian Grassmann integrals. Let M be a 2N × 2N antisymmetric matrix, θ a Grassmann-valued 2N-vector, and η a Grassmann-valued 2N-vector. Then the shifted integral evaluates as stated. In other words, the free energy F = log Z of formal supereigenvalue models takes the form given above; hence it is at most quadratic in the Grassmann coupling constants ξ_{k+1/2}.
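The defining property of the Pfaffian used in (A.13)-(A.14), namely pf(A)² = det(A) for an antisymmetric matrix, is easy to verify numerically. The sketch below computes pf(A) directly from its expansion over perfect matchings; it is purely illustrative (and exponential in N), not part of the derivation:

```python
import itertools
import random

def pfaffian(A):
    """Pfaffian of an antisymmetric matrix via the recursive expansion
    over perfect matchings (exponential cost: small sizes only)."""
    n = len(A)
    assert n % 2 == 0 and all(A[i][i] == 0 for i in range(n))
    def rec(idx):
        if not idx:
            return 1.0
        i, rest = idx[0], idx[1:]
        total = 0.0
        for k, j in enumerate(rest):
            # Pair index i with j; (-1)**k is the sign of moving j next to i.
            total += (-1) ** k * A[i][j] * rec(rest[:k] + rest[k + 1:])
        return total
    return rec(tuple(range(n)))

def det(A):
    """Determinant via the Leibniz formula (fine for small matrices)."""
    n = len(A)
    total = 0.0
    for perm in itertools.permutations(range(n)):
        inv = sum(1 for a in range(n) for b in range(a + 1, n) if perm[a] > perm[b])
        prod = 1.0
        for i in range(n):
            prod *= A[i][perm[i]]
        total += (-1) ** inv * prod
    return total

# Random 6x6 antisymmetric matrix with small integer entries.
random.seed(1)
n = 6
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        A[i][j] = float(random.randint(-3, 3))
        A[j][i] = -A[i][j]

print(abs(pfaffian(A) ** 2 - det(A)) < 1e-9)  # prints True
```

For 2 × 2 this reduces to pf(A) = A[0][1], and for 4 × 4 to the familiar A01·A23 − A02·A13 + A03·A12.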

B Derivation of the loop and super-loop equations
In this appendix we present a derivation of the loop and super-loop equations from the Virasoro and super-Virasoro constraints. An alternative derivation of the super-loop equations, in terms of reparameterizations of the matrix integral, is discussed in [40]. Note that we set T_2 = 1 for simplicity in this appendix.

B.1 Loop equation for Hermitian matrix models
The derivation of the loop equation starts with the following formal series where the equality holds due to the Virasoro constraints. Let us first consider the third term in (B.1). This term vanishes for n = 0, hence we can shift indices:

On the other hand, the first two terms can be rewritten as follows, where V′(x) denotes the derivative of the potential with respect to x. Let us denote the last two terms by P_1(x); this is a power series in x (and becomes a polynomial in x of degree d − 2 if we set g_k = 0 for k > d). Putting all this together, we obtain the loop equation (2.25):
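The resulting loop equation, in a common normalization (shown here only as a hedged reminder; precise signs and factors depend on the conventions of section 2), reads:

```latex
W(x)^2 \;+\; \frac{1}{N^2}\,W_2(x,x) \;=\; V'(x)\,W(x) \;-\; P_1(x),
```

where W and W_2 denote the connected one- and two-point resolvents. Expanding in 1/N then yields the recursive hierarchy solved by the topological recursion.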

B.2 Fermionic loop equation for supereigenvalue models
The fermionic loop equation is derived from the following formal series:

B.3 Bosonic loop equation for supereigenvalue models
The bosonic loop equation is derived starting from the following series:

Again, the equality holds due to the super-Virasoro constraints. The first line is the same as (B.1), except for the 1/2 in the third term. Thus it can be written as a series whose x^n coefficient is Σ_{k≥0} (n + k + 2) g_{n+k+2} ∂F/∂g_k.
(B.12) We manipulate the first term in the second line of (B.11) to get:

For the second equality we used the identity above. Putting all this together, we obtain the bosonic loop equation (3.70): (B.17)

Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.