1 Introduction

Symplectic aspects of the monodromy map for Fuchsian systems were studied starting from [3, 25, 33]; in these papers it was proved that the monodromy map is a symplectomorphism from a symplectic leaf in the space of coefficients of the system to a symplectic leaf in the monodromy manifold. The non-Fuchsian case was considered in [11, 13, 15, 19, 41]. Remarkably, the simplest non-Fuchsian case of the Painlevé II hierarchy was treated in the paper [19] in 1981, about 15 years before the Fuchsian case was studied in detail in [3, 25, 33]. The key object associated to any Fuchsian or non-Fuchsian system of linear ODEs is the tau-function introduced by Jimbo, Miwa and Ueno [30]; until now its significance in the framework of the monodromy symplectomorphism has remained unclear. The main goal of this paper is to fill this gap and prove a recent conjecture by Its–Lisovyy–Prokhorov [28].

Our interest in this subject stems from the study of the monodromy map of a second-order equation on a Riemann surface; such a map was also proven to be a symplectomorphism [9, 10, 32, 34]. Remarkably, the generating function of the monodromy symplectomorphism (the “Yang–Yang” function) plays an important role in the theory of supersymmetric Yang–Mills equations [37]; several steps towards understanding this generating function were made in [9].

In this paper we address the question of the role of the generating function of the monodromy symplectomorphism in the context of Fuchsian equations on the Riemann sphere. The conclusion we arrive at is somewhat unexpected: this generating function can be naturally identified with the isomonodromic tau-function; moreover, this interpretation allows one to define the dependence of the tau-function on the monodromy data.

The version of the monodromy map for Fuchsian systems used in the current paper is slightly different from the monodromy map considered in [3, 25, 33]. This version is standard in the theory of isomonodromy deformations [39] and it was also considered in [11, 29] from the symplectic point of view.

To describe the monodromy map we recall the basics of the theory of solutions of Fuchsian systems of differential equations on \({{\mathbb {P}}^1}\), following [39]. Consider the equation

$$\begin{aligned} \frac{\partial \Psi }{\partial z}= \sum _{i=1}^N \frac{A_i}{z-t_i} \Psi \;,\qquad \Psi (z=\infty )=\mathbf{1 }, \end{aligned}$$
(1.1)

where \(A_i\in sl(n)\), \(t_j\ne t_k\) for \(j\ne k\), and \(\sum _{i=1}^N A_i=0\). Assume also that the eigenvalues of each \(A_j\) are simple and, furthermore, do not differ by an integer. Choose a system of cuts \(\gamma _1,\dots ,\gamma _N\) connecting \(\infty \) with \(t_1,\dots ,t_N\) respectively, and assume that the ends of these cuts emanating from \(\infty \) are ordered as \((1,\dots ,N)\) counter-clockwise (Fig. 1). The normalization condition \(\Psi (\infty )=\mathbf{1 }\) is then understood in the sense that \(\lim _{z\rightarrow \infty } \Psi (z)=\mathbf{1 }\) where the limit is taken in the sector between \(\gamma _1\) and \(\gamma _N\).

The set of generators \(\sigma _1,\dots ,\sigma _N\) of the fundamental group \(\pi _1({{\mathbb {P}}^1}\setminus \{t_j\}_{j=1}^N,\infty )\) is chosen such that the loop representing \(\sigma _j\) crosses only the cut \(\gamma _j\), and its orientation is chosen so that the relation between \(\sigma _j\) takes the form \( \sigma _N\cdot \dots \cdot \sigma _1=\mathrm{Id}\) (Fig. 1).

Fig. 1 Choice of cuts and generators of the fundamental group

The solution \(\Psi \) of (1.1) is single-valued in the simply connected domain \({{\mathbb {P}}^1}\setminus \{\gamma _j\}_{j=1}^N\). Denote the diagonal form of the matrix \(A_j\) by \(L_j\), \(j=1,\dots , N\) (the matrices \(A_j\) are diagonalizable due to our assumption about their eigenvalues). Then the asymptotics of \(\Psi \) near \(t_j\) has the standard form [39]:

$$\begin{aligned} \Psi (z)=(G_j+O(z-t_j)) (z-t_j)^{L_j} C^{-1}_j\;. \end{aligned}$$
(1.2)

The matrix \(G_j\) is a diagonalizing matrix for \(A_j\):

$$\begin{aligned} A_j=G_j L_j G_j^{-1}\;. \end{aligned}$$
(1.3)

The matrices \(C_j\) are called the connection matrices. Notice that the matrices \(G_j\) and \(C_j\) are not uniquely defined by Eq. (1.1) since a simultaneous transformation \(G_j\rightarrow G_j D_j\) and \(C_j\rightarrow C_j D_j\) with diagonal \(D_j\)’s changes neither the asymptotics (1.2) nor the Eq. (1.1).

Analytic continuation of \(\Psi (z)\) along \(\sigma _j\) yields \(\Psi (z) M_j^{-1}\), where the monodromy matrix \(M_j\in SL(n)\) is related to the connection matrix \(C_j\) and the exponent of monodromy \(L_j\) by:

$$\begin{aligned} M_j=C_j \Lambda _j C_j^{-1}\ ,\qquad \Lambda _j:= \mathrm{e}^{2i\pi L_j}. \end{aligned}$$
(1.4)

Alternatively, the matrix \(M_j\) can be viewed as the jump matrix on \(\gamma _j\): orienting \(\gamma _j\) from \(\infty \) towards \(t_j\), the boundary values of \(\Psi \) on the right (“\(+\)”) and left (“−”) sides of \(\gamma _j\) are related by \(\Psi _+=\Psi _- M_j\). Our assumption about the ordering of the branch cuts \(\gamma _j\) and generators \(\sigma _j\) implies the relation

$$\begin{aligned} M_1\cdots M_N=\mathbf{1 }\;. \end{aligned}$$
(1.5)
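As a purely illustrative aside (not part of the argument of the paper), relation (1.5) is easy to test numerically: since \(\sum _{i=1}^N A_i=0\), the point \(z=\infty \) is regular, so parallel transport of \(\Psi \) along a large circle enclosing all poles must return the identity. A minimal Python sketch with arbitrary sample data (pole positions, residue matrices and step count are ad hoc choices):

```python
import numpy as np

# Purely illustrative data: three poles and traceless 2x2 residues with A1+A2+A3 = 0,
# so that A(z) = sum_i A_i/(z - t_i) is regular at z = infinity.
rng = np.random.default_rng(0)
t = np.array([0.0, 1.0, 2.0 + 1.0j])
A1 = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
A1 -= np.trace(A1) / 2 * np.eye(2)
A2 = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
A2 -= np.trace(A2) / 2 * np.eye(2)
A = [A1, A2, -(A1 + A2)]

def Az(z):
    return sum(Ai / (z - ti) for Ai, ti in zip(A, t))

def F(s, Y, c=1.0, R=6.0):
    """Right-hand side of the transport equation along the circle z(s) = c + R e^{is}."""
    z = c + R * np.exp(1j * s)
    return (1j * R * np.exp(1j * s)) * (Az(z) @ Y)

# Fixed-step RK4 transport of Psi around a circle enclosing all poles, starting from the identity.
steps = 4000
h = 2 * np.pi / steps
Y = np.eye(2, dtype=complex)
for k in range(steps):
    s = k * h
    k1 = F(s, Y); k2 = F(s + h / 2, Y + h / 2 * k1)
    k3 = F(s + h / 2, Y + h / 2 * k2); k4 = F(s + h, Y + h * k3)
    Y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Since z = infinity is a regular point, the total monodromy around all finite poles is
# trivial, in agreement with (1.5); the printed norm should be tiny.
print(np.linalg.norm(Y - np.eye(2)))
```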

The monodromy map introduced in [39] sends the set of pairs \((G_j,L_j)\) to the set of pairs \((C_j,\Lambda _j)\) for a given set of poles \(t_j\).

The map between the set of coefficients \(A_j\) and the set of monodromy matrices \(M_j\) is a different version of monodromy map associated to Eq. (1.1); the symplectic aspects of this version of the monodromy map were studied in [3, 25, 33].

To describe our framework in more detail we introduce the following two spaces. The first space is the quotient

$$\begin{aligned} {\mathcal {A}}=\bigg \{(G_j,L_j)_{j=1}^N:\ G_j\in SL(n), \ L_j \in {{\mathfrak {h}}}_{ss}^{nr}\ \ \forall j=1,\dots ,N,\ \ \sum _{j=1}^N G_j L_j G_j^{-1}=0\bigg \}\big /\sim \end{aligned}$$
(1.6)

where \({{\mathfrak {h}}}_{ss}^{nr}\) denotes the set of diagonal matrices with simple eigenvalues not differing by integers (non-resonant). The equivalence relation is given by the SL(n) action \(G_j\mapsto S G_j\) with S independent of j.

The second space is the quotient

$$\begin{aligned} {\mathcal {M}}=\bigg \{\{C_j\,,\,L_j\}_{j=1}^N,\;\ C_j\in SL(n), \ L_j \in {{\mathfrak {h}}}_{ss}^{nr} :\ \prod _{j=1}^N C_j e^{2\pi i L_j} C_j^{-1}=\mathbf{1 }\bigg \}\big /\sim \;. \end{aligned}$$
(1.7)

Similarly to (1.6), the equivalence is given by the SL(n) action \(C_j\mapsto S C_j\) (with the same S for all j’s).

For a fixed set of poles \(\{t_j\}_{j=1}^N\) we denote the “monodromy map” from the \((G,L)\)-space to the \((C,L)\)-space by

$$\begin{aligned} {\mathcal {F}}^t:{\mathcal {A}}\rightarrow {\mathcal {M}}\;. \end{aligned}$$
(1.8)

Poisson and symplectic structures on \({\mathcal {A}}\) and dynamical r-matrix. Let \({\mathcal {H}}:= SL(n,{\mathbb {C}})\times {{\mathfrak {h}}}_{ss} = \{(G,L)\}\). Here \(G\in SL(n)\) and \(L=\mathrm{diag}(\lambda _1,\dots ,\lambda _n)\) is a diagonal traceless matrix with \(\lambda _j\ne \lambda _k\) for \(j\ne k\) and \(\lambda _1+\dots +\lambda _n=0\). Consider the following 1-form on \({\mathcal {H}}\):

$$\begin{aligned} \theta =\mathrm{tr}(L G^{-1} \,\mathrm{d}G)\;. \end{aligned}$$
(1.9)

We prove in Proposition 2.1 that the form \(\omega = \,\mathrm{d}\theta \) is non-degenerate, and therefore, is a symplectic form on \({\mathcal {H}}\). For any matrix M we use the following notation for the Kronecker products

$$\begin{aligned} \mathop {M}\limits ^1 = M\otimes \mathbf{1 }\;,\qquad \mathop {M}\limits ^2 =\mathbf{1 }\otimes M. \end{aligned}$$

Then the Poisson structure on \({\mathcal {H}}\) associated to the symplectic form \(\omega \) is (see Proposition 2.2):

$$\begin{aligned} \big \{\mathop {G}\limits ^1,\mathop {G}\limits ^2\big \} =-\mathop {G}\limits ^1\mathop {G}\limits ^2 r(L)\;, \qquad \big \{\mathop {G}\limits ^1,\mathop {L}\limits ^2\big \} =-\mathop {G}\limits ^1 \Omega \;, \end{aligned}$$
(1.10)

where

$$\begin{aligned} r(L)=\sum _{i<j} \frac{E_{ij}\otimes E_{ji}-E_{ji} \otimes E_{ij}}{\lambda _i-\lambda _j} \end{aligned}$$
(1.11)

and

$$\begin{aligned} \Omega = \Omega _{{{\mathfrak {sl}}}(n)}=\sum _{i=1}^n E_{ii} \otimes E_{ii} - \frac{1}{n} \mathbf{1 } \otimes \mathbf{1 }\;; \end{aligned}$$

we use the standard notation \(E_{ij}\) for the matrix with only one non-vanishing element equal to 1 in the (i, j) entry. The matrix r(L) is the simplest example of a dynamical r-matrix [17]. Theorem 2.1 shows that the bracket (1.10) induces the Kirillov–Kostant Poisson bracket for \(A=GLG^{-1}\).
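For concreteness, the objects (1.11) and \(\Omega \) are easy to assemble explicitly; the short Python snippet below (an illustration only, with n=3 and an arbitrary choice of \(\lambda \)'s) also checks the antisymmetry \(r^{21}(L)=-r^{12}(L)\) which is used in Sect. 2.

```python
import numpy as np

n = 3
lam = np.array([0.7, -0.2, -0.5])        # arbitrary distinct eigenvalues, summing to zero

def E(i, j):
    m = np.zeros((n, n)); m[i, j] = 1.0
    return m

# dynamical r-matrix (1.11) and the matrix Omega appearing in (1.10)
r = sum((np.kron(E(i, j), E(j, i)) - np.kron(E(j, i), E(i, j))) / (lam[i] - lam[j])
        for i in range(n) for j in range(i + 1, n))
Omega = sum(np.kron(E(i, i), E(i, i)) for i in range(n)) - np.kron(np.eye(n), np.eye(n)) / n

# permutation operator P on C^n (x) C^n, P(x (x) y) = y (x) x
P = sum(np.kron(E(i, j), E(j, i)) for i in range(n) for j in range(n))

print(np.allclose(P @ r @ P, -r))         # r^{21}(L) = -r^{12}(L)
print(np.allclose(P @ Omega @ P, Omega))  # Omega is symmetric under the flip
```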

The bracket (1.10) can be used to define the Poisson structure on the space \({\mathcal {A}}\) as follows. Denote first by \({\mathcal {A}}_0\) the space of pairs \(\{(G_j,L_j)\}_{j=1}^N\) with the product symplectic structure, or, equivalently, with the following Poisson bracket:

$$\begin{aligned} \{\mathop {G_{j}}\limits ^1,\mathop {G_{k}}\limits ^2\} =-\mathop {G_{j}}\limits ^1\mathop {G_{k}}\limits ^2 r(L_k) \,\delta _{jk}\;,\qquad \{\mathop {G_{j}}\limits ^1, \mathop {L_{k}}\limits ^2\}=-\mathop {G_{k}}\limits ^1 \Omega \,\delta _{jk}\;. \end{aligned}$$
(1.12)

The moment map corresponding to the group action \(G_j\rightarrow SG_j\) (\(S\in SL(n)\)) on \({\mathcal {A}}_0\) is given by \(\sum _{j=1}^N G_j L_j G_j^{-1}\). The space \({\mathcal {A}}\) (1.6) inherits a symplectic form from \({\mathcal {A}}_0\) via the standard symplectic reduction [4]:

Theorem 1

(see Theorem 2.3). The Poisson structure induced on \({\mathcal {A}}\) from the Poisson structure (1.12) on \({\mathcal {A}}_0\) via the reduction on the level set \(\sum _{j=1}^N G_j L_j G_j^{-1}=0\) of the moment map, corresponding to the group action \(G_j\rightarrow SG_j\), is non-degenerate and the corresponding symplectic form is \( \omega _{{\mathcal {A}}}=\,\mathrm{d}\theta _{\mathcal {A}}\), where the symplectic potential \( \theta _{\mathcal {A}}\) for \(\omega _{{\mathcal {A}}}\) is given by

$$\begin{aligned} \theta _{\mathcal {A}}=\sum _{k=1}^N\mathrm{tr}(L_k G_k^{-1} \,\mathrm{d}G_k)\;. \end{aligned}$$

This symplectic structure appeared in [11], but its connection to the dynamical r-matrix and the associated Poisson structure was not known until now.

Symplectic structure on \({\mathcal {M}}\). Define the following 2-form on the space \({\mathcal {M}}\):

$$\begin{aligned} \omega _{{\mathcal {M}}}= \frac{1}{4\pi i}(\omega _1+\omega _2) \end{aligned}$$

where

$$\begin{aligned}&\omega _1= \sum _{\ell =1}^{N}\mathrm{tr}\left( M_\ell ^{-1} \,\mathrm{d}M _{\ell } \wedge K_{\ell }^{-1} \,\mathrm{d}K_{\ell } \right) +\sum _{\ell =1}^N \mathrm{tr}\left( \Lambda ^{-1}_\ell C_{\ell } ^{-1}\,\mathrm{d}C_{\ell }\wedge \Lambda _\ell C_\ell ^{-1}\,\mathrm{d}C_\ell \right) \\&\omega _2=2\sum _{\ell =1}^N \mathrm{tr}\left( \Lambda _\ell ^{-1} \,\mathrm{d}\Lambda _\ell \wedge C_\ell ^{-1} \,\mathrm{d}C_\ell \right) \end{aligned}$$

with \(K_{\ell }=M_1\cdots M_\ell \) and \(\Lambda _j=e^{2\pi i L_j}\).

The form \(-\omega _1/2\) coincides with the symplectic form on the symplectic leaves \(\Lambda _j=const\) of the SL(n) Goldman bracket (see (3.14) of [2] in the case \(g=0\)). The first result of this paper (see Theorem 3.2 and its proof in Sect. 3) is that given a set of poles \(\{t_j\}_{j=1}^N\) and a point \(p_0\in {\mathcal {M}}\) in a neighbourhood of which the monodromy map is invertible, the pullback of the form \(\omega _{\mathcal {M}}\) under the map \({\mathcal {F}}^t :{\mathcal {A}}\rightarrow {\mathcal {M}}\) coincides with \(\omega _{\mathcal {A}}\):

$$\begin{aligned} ({\mathcal {F}}^t)^{*} \omega _{\mathcal {M}}= \omega _{\mathcal {A}}\;. \end{aligned}$$
(1.13)

This statement implies that (see Corollary 3.1 and its proof) the form \(\omega _{\mathcal {M}}\) is closed and non-degenerate, and, therefore, defines a symplectic structure on \({\mathcal {M}}\).

For a given set of monodromy data the monodromy map is invertible outside of a locus of codimension 1 in the space of poles [12]. Since the form \(\omega _{{\mathcal {M}}}\) is independent of \(\{t_j\}\), this form is always non-degenerate on the monodromy manifold.

The equality (1.13) generalizes the results of [3, 25, 33], where it was proved that the monodromy map between the “smaller” spaces—the space of coefficients \(A_j\) with fixed eigenvalues and the symplectic leaf of the GL(n) character variety of the N-punctured sphere—is a symplectomorphism; formula (1.13) was proved in a different way in [11].

Time dependence. To describe the dependence on the \(t_j\)’s (the “times”) we extend the spaces \({\mathcal {A}}\) and \({\mathcal {M}}\) to include also the coordinates \(\{t_j\}\):

$$\begin{aligned}&\widetilde{{\mathcal {A}}}=\Big \{(p,\{t_j\}_{j=1}^N)\;,\; p\in {\mathcal {A}},\; t_j\in {\mathbb {C}},\; t_j\ne t_k\Big \}\;, \end{aligned}$$
(1.14)
$$\begin{aligned}&\widetilde{{\mathcal {M}}}=\Big \{(p,\{t_j\}_{j=1}^N)\;,\; p\in {\mathcal {M}},\; t_j\in {\mathbb {C}},\; t_j\ne t_k\Big \}\;. \end{aligned}$$
(1.15)

The monodromy map \({\mathcal {F}}^t\) then extends to the map \({\mathcal {F}}:\widetilde{{\mathcal {A}}}\rightarrow \widetilde{{\mathcal {M}}}\;\). The locus in \(\widetilde{{\mathcal {M}}}\) where the map is not invertible is usually referred to as the Malgrange divisor. Denote the pullback of the form \(\omega _{\mathcal {A}}\) from \({\mathcal {A}}\) to \(\widetilde{{\mathcal {A}}}\) by \(\widetilde{\omega }_{\mathcal {A}}\) and the pullback of the form \(\omega _{\mathcal {M}}\) from \({\mathcal {M}}\) to \(\widetilde{{\mathcal {M}}}\) by \(\widetilde{\omega }_{\mathcal {M}}\) (notice that the forms \(\widetilde{\omega }_{\mathcal {A}}\) and \(\widetilde{\omega }_{\mathcal {M}}\) are closed but degenerate). Now we are in a position to formulate the next theorem (see Theorem 3.1).

Theorem 2

(see Theorem 3.1 together with Theorem 3.2). The following identity holds between two-forms on \(\widetilde{{\mathcal {A}}}\)

$$\begin{aligned} {\mathcal {F}}^{*} \widetilde{\omega }_{\mathcal {M}}= \widetilde{\omega }_{\mathcal {A}}-\sum _{k=1}^N d H_k\wedge dt_k \end{aligned}$$
(1.16)

where

$$\begin{aligned} H_k=\sum _{j\ne k} \frac{\mathrm{tr}A_j A_k}{t_k-t_j}\;,\qquad k=1,\dots ,N \end{aligned}$$
(1.17)

are the canonical Hamiltonians of the Schlesinger system.

We recall that the Schlesinger equations [12] consist of the following system of PDEs for the coefficients \(A_j\) of the connection \(A(z)=\sum _{i=1}^N \frac{A_i}{z-t_i}\):

$$\begin{aligned} \frac{\partial A_k}{\partial t_j} = \frac{[A_k,A_j]}{t_k-t_j},\ \ j \ne k\,;\qquad \frac{\partial A_j}{\partial t_j} = -\sum _{k\ne j} \frac{[A_k,A_j]}{t_k-t_j} \end{aligned}$$
(1.18)

and they define the deformations of the connection A(z) which preserve the monodromy representation. They are Hamiltonian equations with respect to the standard Kirillov–Kostant Poisson bracket, with the time-dependent Hamiltonians \(H_k\) given by (1.17).
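As an illustrative sanity check of (1.18) (the matrices, pole positions and step size below are arbitrary choices, not data from the paper), one can integrate the \(t_1\)-flow numerically: each \(A_k\) evolves by a commutator, so its spectrum is preserved, and \(\sum _k A_k\) stays constant.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 3, 2
t = np.array([0.0, 1.0, 2.5], dtype=complex)

def traceless(M):
    return M - np.trace(M) / n * np.eye(n)

A = [traceless(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) for _ in range(N)]

def rhs_t1(t1, A):
    """Right-hand sides of (1.18) for the flow in t_1, with t_2, t_3 frozen."""
    tt = t.copy(); tt[0] = t1
    dA = [None] * N
    for k in range(1, N):                       # dA_k/dt_1 = [A_k, A_1]/(t_k - t_1), k != 1
        dA[k] = (A[k] @ A[0] - A[0] @ A[k]) / (tt[k] - tt[0])
    dA[0] = -sum(dA[k] for k in range(1, N))    # dA_1/dt_1 = -sum_{k != 1} [A_k, A_1]/(t_k - t_1)
    return dA

det0 = [np.linalg.det(Ak) for Ak in A]          # fixes the spectrum of each traceless 2x2 matrix A_k
S0 = sum(A)

t1, h = 0.0, 1e-3                               # fixed-step RK4 in t_1 from 0.0 to 0.4
for _ in range(400):
    k1 = rhs_t1(t1, A)
    k2 = rhs_t1(t1 + h/2, [a + h/2 * d for a, d in zip(A, k1)])
    k3 = rhs_t1(t1 + h/2, [a + h/2 * d for a, d in zip(A, k2)])
    k4 = rhs_t1(t1 + h,   [a + h * d for a, d in zip(A, k3)])
    A = [a + h/6 * (d1 + 2*d2 + 2*d3 + d4) for a, d1, d2, d3, d4 in zip(A, k1, k2, k3, k4)]
    t1 += h

print(max(abs(np.linalg.det(Ak) - d0) for Ak, d0 in zip(A, det0)))   # isospectrality: should be tiny
print(np.linalg.norm(sum(A) - S0))                                   # sum of residues preserved
```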

Tau function and generating function of the monodromy map. The above theorem allows us to establish the relationship between the isomonodromic tau-function and the generating function of the monodromy map. Namely, consider some local symplectic potential \(\theta _{\mathcal {M}}\) for the form \(\omega _{\mathcal {M}}\) such that

$$\begin{aligned} \,\mathrm{d}\theta _{\mathcal {M}}=\omega _{\mathcal {M}}\end{aligned}$$

on the space \({\mathcal {M}}\) (globally \(\theta _{\mathcal {M}}\) can be defined on a covering of \({\mathcal {M}}\)) and denote its pullback to \(\widetilde{{\mathcal {M}}}\) by \(\widetilde{\theta }_{\mathcal {M}}\). Denote by \(\widetilde{\theta }_{\mathcal {A}}\) the pullback of \(\theta _{\mathcal {A}}\) under the natural projection \(\widetilde{{\mathcal {A}}} \rightarrow {\mathcal {A}}\). Then \(\widetilde{\theta }_{\mathcal {A}}\) is a potential of the form \(\widetilde{\omega }_{\mathcal {A}}\) on \(\widetilde{{\mathcal {A}}}\), and (1.16) implies the existence of a locally defined generating function \({\mathcal {G}}\) on \(\widetilde{{\mathcal {A}}}\).

Definition 1.1

The generating function (corresponding to a given choice of the symplectic potential \(\theta _{\mathcal {M}}\)) of the monodromy map between spaces \(\widetilde{{\mathcal {A}}}\) and \(\widetilde{{\mathcal {M}}}\) is defined by

$$\begin{aligned} d{\mathcal {G}}=\widetilde{\theta }_{{\mathcal {A}}}-\sum _{k=1}^N H_k dt_k -{\mathcal {F}}^{*} \widetilde{\theta }_{\mathcal {M}}\;. \end{aligned}$$
(1.19)

A different choice of \(\theta _{\mathcal {M}}\) (and hence of its pullback \(\widetilde{\theta }_{\mathcal {M}}\)) adds a \(\{t_k\}\)-independent term to \({\mathcal {G}}\) i.e. it corresponds to a transformation \({\mathcal {G}}\rightarrow {\mathcal {G}}+f(\{C,L\})\) for some local function f on \({\mathcal {M}}\).

The dependence of \({\mathcal {G}}\) on \(\{t_j\}\) is, however, completely fixed by (1.19). Namely, locally one can write (1.19) in the coordinate system where \(\{t_j\}_{j=1}^N\) and \(\{C_j,L_j\}_{j=1}^N\) are considered as independent variables. Then the derivatives of \(G_j\) with respect to \(\{t_k\}\) at constant \(\{C_j,L_j\}_{j=1}^N\), i.e. at constant monodromy data, are given by the Schlesinger equations of isomonodromic deformations in the G-variables:

$$\begin{aligned} \frac{\partial G_k}{\partial t_j}= \frac{A_j G_k}{t_j-t_k}\;,\;\;\; j\ne k\,; \qquad \frac{\partial G_k}{\partial t_k}=-\sum _{j\ne k} \frac{A_j G_k}{t_j-t_k}\;. \end{aligned}$$
(1.20)

Equations (1.20) imply (1.18), but not vice versa. In this paper we obtain the following Hamiltonian formulation of Eq. (1.20):

Theorem 3

(see Theorem 2.2). Equations (1.20) are Hamiltonian,

$$\begin{aligned} \frac{\partial G_k}{\partial t_j} = \{H_j, G_k \} \end{aligned}$$
(1.21)

where \(\{.,.\}\) is the quadratic Poisson bracket (1.10) and the Hamiltonians are given by (1.17).

A direct computation shows that in \((t_j,C_j,L_j)\) coordinates the part of \(\widetilde{\theta }_{\mathcal {A}}\) containing \(\,\mathrm{d}t_j\)’s is given by \(2\sum _{j=1}^N H_j \,\mathrm{d}t_j\); together with (1.19) this implies

$$\begin{aligned} \frac{\partial {\mathcal {G}}}{\partial t_j} = H_j\;. \end{aligned}$$
(1.22)

Therefore, we get the following theorem:

Theorem 4

(see Theorem 3.3). For any choice of symplectic potential \(\theta _{\mathcal {M}}\) on \({\mathcal {M}}\) the dependence of the generating function \({\mathcal {G}}\) (1.19) on \(\{t_j\}_{j=1}^N\) coincides with the \(t_j\)-dependence of the isomonodromic Jimbo–Miwa tau-function. In other words, \(e^{-{\mathcal {G}}}\tau _{JM}\) depends only on monodromy data \(\{C_j,L_j\}_{j=1}^N\).

The above theorem shows that the generating function \({\mathcal {G}}\) can be used to define the tau-function not only as a function of positions of singularities of the Fuchsian differential equation but also as a function of monodromy matrices. The ambiguity built into this definition corresponds to the freedom to choose different symplectic potentials on different open sets of the monodromy manifold.

The symplectic potential we use in this paper was found in [8] using the coordinates introduced by Fock and Goncharov in [21] (in the \(SL(2,{\mathbb {R}})\) case these coordinates, called shear coordinates, are attributed to Thurston, see [16, 20]; see also [14], where the complex analogs of the shear coordinates were used for the explicit parametrization of an open subset of full dimension of the \(SL(2,{\mathbb {C}})\) character variety of the four-punctured sphere).

Definition 1.2

The SL(n) tau-function \(\tau \) on \(\widetilde{{\mathcal {M}}}\) is locally defined by the following set of compatible equations. The equations with respect to \(t_j\) are given by the formulas

$$\begin{aligned} \frac{\partial \log \tau }{\partial t_j}=\frac{1}{2}\mathop {\mathrm{res}}\limits _{z=t_j}\mathrm{tr} A^2(z)\;, \end{aligned}$$
(1.23)

where

$$\begin{aligned} A(z):= \sum _{i=1}^N \frac{A_i}{z-t_i}. \end{aligned}$$
(1.24)

The equations with respect to coordinates on the monodromy manifold \({\mathcal {M}}\) are given by

$$\begin{aligned} \,\mathrm{d}_{\mathcal {M}}\log \tau =\sum _{j=1}^N \mathrm{tr}(L_j G_j^{-1} \,\mathrm{d}_{\mathcal {M}}G_j) -\theta _{\mathcal {M}}[\Sigma _0] \end{aligned}$$
(1.25)

where \(\theta _{\mathcal {M}}[\Sigma _0]\) is a symplectic potential (5.34) for the form \(\omega _{\mathcal {M}}\) defined using the Fock–Goncharov coordinates corresponding to a ciliated triangulation \(\Sigma _0\) (see Sect. 5.4).

Explicit formulas for derivatives of \(\tau \) with respect to Fock–Goncharov coordinates will be given in Sect. 6.
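The compatibility of (1.23) with (1.22) rests on the elementary residue identity \(\frac{1}{2}\mathop {\mathrm{res}}\limits _{z=t_j}\mathrm{tr} A^2(z)=H_j\), which can be confirmed symbolically; the following sympy sketch (with arbitrary traceless \(2\times 2\) matrices and arbitrary rational poles) is purely illustrative.

```python
import sympy as sp

z = sp.symbols('z')
ts = [sp.Integer(0), sp.Integer(1), sp.Rational(5, 2)]
A = [sp.Matrix([[1, 2], [3, -1]]),                 # arbitrary traceless 2x2 matrices
     sp.Matrix([[0, 1], [4, 0]]),
     sp.Matrix([[2, -3], [1, -2]])]

Az = sum((Aj / (z - tj) for Aj, tj in zip(A, ts)), sp.zeros(2, 2))
trA2 = (Az * Az).trace()

for j in range(len(ts)):
    # Schlesinger Hamiltonian H_j of (1.17)
    H_j = sum((A[j] * A[k]).trace() / (ts[j] - ts[k]) for k in range(len(ts)) if k != j)
    assert sp.simplify(sp.residue(trA2, z, ts[j]) / 2 - H_j) == 0

print("1/2 res_{z=t_j} tr A^2(z) = H_j for every j")
```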

Conjecture by A. Its, O. Lisovyy and A. Prokhorov. Theorem 2 reveals a close relationship with the recent work [28], where the question of the dependence of the Jimbo–Miwa tau-function on monodromy matrices was also addressed. In particular, the relevance of the Goldman bracket and the corresponding symplectic form on its symplectic leaves was observed in [28] in the case of a \(2\times 2\) system with four simple poles (the associated isomonodromic deformations give the Painlevé VI equation).

Moreover, the authors of [28] introduced a form which we denote by \(\Theta _{ILP}\) (this form is denoted by \(\omega \) in (2.7) of [28]). This form appeared in [28] as the result of a computation involving the 1-form introduced by Malgrange in [35], which also plays a central role in the present work; in our notation it is given by

$$\begin{aligned} \Theta _{ILP}=\sum _{j<k}^N\mathrm{tr}A_j A_k \,\mathrm{d}\log (t_j-t_k)+\sum _{j=1}^N \mathrm{tr}(L_j G_j^{-1} \,\mathrm{d}_{{\mathcal {M}}} G_j) \end{aligned}$$
(1.26)

where \(\,\mathrm{d}_{{\mathcal {M}}}\) denotes the differential with respect to the monodromy data. Proposition 2.3 of [28] shows that the exterior derivative of the form (1.26) is a closed 2-form independent of \(\{t_j\}_{j=1}^N\). Furthermore, in Section 1.6 of [28] the authors formulate the following

Conjecture 1

[Its–Lisovyy–Prokhorov] The form \(\,\mathrm{d}\Theta _{ILP}\) coincides with the natural symplectic form on the monodromy manifold.

There are two natural versions of this conjecture:

  • The “weak” ILP conjecture. In this version \(\,\mathrm{d}_{\mathcal {M}}\) means the differential on a symplectic leaf \(\{\Lambda _j=\mathrm{const}\}_{j=1}^N\) of the SL(n) character variety of \(\pi _1({{\mathbb {P}}^1}\setminus \{t_j\}_{j=1}^N)\) (we denote this symplectic leaf by \({\mathcal {M}}_\Lambda \)). The canonical symplectic form on \({\mathcal {M}}_\Lambda \) is given by inverting the SL(n) Goldman bracket [23] and can be written explicitly in terms of monodromy data as shown in ([2], formula (3.14) for \(g=0\) and \(k=2\pi \)). By the “weak” ILP conjecture we understand the coincidence of \(\,\mathrm{d}\Theta _{ILP}\) (1.26) with Goldman's symplectic form on the symplectic leaves.

    The problem with this formulation is that the choice of matrices \(G_j\) should be such that they satisfy the Schlesinger equations (1.20); this requirement is not natural from the symplectic point of view.

  • The “strong” ILP conjecture. In this version the differential \(\,\mathrm{d}_{{\mathcal {M}}}\) in (1.26) means the differential on the full space \({\mathcal {M}}\) (1.7) which contains both the eigenvalues of the monodromy matrices and the connection matrices. Then (omitting the pullbacks) the strong ILP conjecture states that

    $$\begin{aligned} \,\mathrm{d}\Theta _{ILP}=\omega _{\mathcal {M}}\;. \end{aligned}$$
    (1.27)

The weak version of the ILP conjecture can be derived from known results of [3, 25] or [33], as shown in Sect. 4.

The strong version of the ILP conjecture is equivalent to our Theorem 2. To see this equivalence it is sufficient to write (1.19) in coordinates which are split into the “times” \(\{t_j\}\) and some coordinates on the monodromy manifold \({\mathcal {M}}\). Then the “t-part” of the form \(\widetilde{\theta }_{\mathcal {A}}\) is given by \(2\sum _{k=1}^N H_k d t_k\) (this follows from the isomonodromic Eq. (1.20) for \(\{G_j\}\)) and the monodromy part coincides with the second term of the form (1.26), where the differential \(\,\mathrm{d}_{{\mathcal {M}}}\) is understood as the differential on \({\mathcal {M}}\). Now, taking the exterior derivative of (1.19) we come to (1.16), where the right-hand side coincides with the form \(\,\mathrm{d}\Theta _{ILP}\) of [28]. Finally, we notice that the formula (1.19) allows one to interpret the generating function \({\mathcal {G}}\) as the action of the multi-time Hamiltonian system, according to Conjecture 2 of [26] (see also [27]).

Summarizing, the main results of this paper are the following:

  1. We give a new Hamiltonian formulation of the Schlesinger system written in terms of \((G,L)\)-variables; this formulation involves a quadratic Poisson structure defined by the dynamical r-matrix (Sect. 2).

  2. We prove that the monodromy map for a Fuchsian system is a symplectomorphism between the \((G,L)\) and \((C,\Lambda )\) spaces (Sect. 3).

  3. We prove the “weak” (Sect. 4) and “strong” (Theorem 3.2) versions of the Its–Lisovyy–Prokhorov conjecture about the coincidence of the exterior derivative of the Malgrange form with the natural symplectic form on the monodromy manifold.

  4. We introduce defining equations for the Jimbo–Miwa–Ueno tau-function with respect to Fock–Goncharov coordinates on the monodromy manifold (Definition 6.1 and formula (6.5)).

  5. In the SL(2) case we derive equations which define the monodromy dependence of \(\Psi \), \(G_j\) and the tau-function (Theorem 6.1, Corollary 6.1 and Proposition 6.3).

2 Dynamical r-Matrix Formulation of the Schlesinger System

In this section we describe the Hamiltonian formulation of the Schlesinger equations. We start by considering the GL(n) case and then indicate the modifications required in the SL(n) case.

2.1 Quadratic Poisson bracket via dynamical r-matrix

Let us introduce the space

$$\begin{aligned} {\mathcal {H}}:= GL(n,{\mathbb {C}}) \times {{\mathfrak {h}}}_{ss} \end{aligned}$$
(2.1)

where \({{\mathfrak {h}}}_{ss}\) is the space of diagonal matrices with distinct eigenvalues. We denote an element of \({\mathcal {H}}\) by (G, L), where \(G\in GL(n)\) and \(L\in {{\mathfrak {h}}}_{ss}\).

Proposition 2.1

Consider the following one-form on \({\mathcal {H}}\):

$$\begin{aligned} \theta := \mathrm{tr}(L G^{-1} \,\mathrm{d}G)\;. \end{aligned}$$
(2.2)

Then the 2-form \(\omega =\,\mathrm{d}\theta \) given by

$$\begin{aligned} \omega = \mathrm{tr}( \,\mathrm{d}L \wedge G^{-1} \,\mathrm{d}G ) -\mathrm{tr}(L G^{-1} \,\mathrm{d}G \wedge G^{-1} \,\mathrm{d}G ) \end{aligned}$$
(2.3)

is symplectic on \({\mathcal {H}}\).

Proof

The form \(\omega \) is obviously closed; to verify its non-degeneracy we consider two tangent vectors in \(T_{(G,L)}{\mathcal {H}}\) and represent them as \((X_i,D_i)\in gl(n) \oplus {{\mathfrak {h}}}\) (\(i=1,2\)) where \({{\mathfrak {h}}}\) denotes the Cartan subalgebra of gl(n). Then

$$\begin{aligned} \omega \big ( (X_1,D_1), (X_2, D_2) \big )= \mathrm{tr}\big (D_1 X_2 -D_2 X_1 - L[X_1,X_2]\big )=\mathrm{tr}\big (D_1 X_2 -\big ( D_2 + [X_2,L]\big )X_1\big )\;. \end{aligned}$$
(2.4)

Suppose that \(\omega \) is degenerate, i.e. the vector \((X_2, D_2)\) can be chosen so that (2.4) vanishes identically for all \((X_1,D_1)\). Then, choosing \(D_1=0\), we have \(\mathrm{tr}(( D_2 + [X_2,L]) X_1)=0\). Since \(X_1\) is arbitrary, we have \( D_2 + [X_2,L]=0\); since L is diagonal, the commutator \([X_2,L]\) has vanishing diagonal part, and hence \(D_2 =0\) and \([X_2,L]=0\); since the eigenvalues of L are distinct, it follows that \(X_2\) must be diagonal.

Then, choosing \(X_1=0\) and \(D_1\) arbitrary we see that the diagonal part of \(X_2\) must vanish as well. Thus the pairing is nondegenerate and the form \(\omega \) (2.3) is symplectic. \(\square \)
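As a numerical illustration of the non-degeneracy argument (not needed for the proof), one can tabulate the pairing (2.4) on the standard basis of \(gl(2)\oplus {{\mathfrak {h}}}\) for an arbitrary L with distinct eigenvalues and check that the resulting \(6\times 6\) antisymmetric matrix has full rank:

```python
import numpy as np

n = 2
L = np.diag([0.8, -0.3])                          # arbitrary diagonal L with distinct eigenvalues

def E(i, j):
    m = np.zeros((n, n)); m[i, j] = 1.0
    return m

# basis of the tangent space gl(n) + h: vectors (E_ij, 0) and (0, E_kk)
basis = [(E(i, j), np.zeros((n, n))) for i in range(n) for j in range(n)] \
      + [(np.zeros((n, n)), E(k, k)) for k in range(n)]

def omega(v1, v2):                                 # the pairing of formula (2.4)
    (X1, D1), (X2, D2) = v1, v2
    return np.trace(D1 @ X2 - D2 @ X1 - L @ (X1 @ X2 - X2 @ X1))

W = np.array([[omega(a, b) for b in basis] for a in basis])
print(np.linalg.matrix_rank(W), "of", len(basis))  # full rank 6 of 6: omega is non-degenerate
```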

The corresponding Poisson structure is given by the following proposition.

Proposition 2.2

The nonzero Poisson brackets corresponding to the symplectic form \(\omega \) are

$$\begin{aligned} \{ G_{bj},G_{c\ell }\} =\frac{ G_{b\ell } G_{cj} }{\lambda _j- \lambda _\ell }\ , \ \ j\ne \ell \;,\qquad \ \ \{ G_{bk}, \lambda _\ell \} = -G_{bk} \delta _{\ell k}\;. \end{aligned}$$
(2.5)

Proof

The form (2.4) defines a map \(\Phi _{(G,L)} : T_{(G,L)}{\mathcal {H}}\rightarrow T_{(G,L)}^\star {\mathcal {H}}\) given by

$$\begin{aligned} \left\langle \Phi _{(G,L) }(X_1,D_1) , (X_2, D_2) \right\rangle := \omega \big ((X_1 ,D_1), (X_2, D_2)\big ) \end{aligned}$$
(2.6)

for all \((X_2, D_2)\). Then (2.4) implies

$$\begin{aligned} \Phi _{(G,L) }(X,D)= \bigg ( -D - [X,L],X^{^D}\bigg ) \in T^\star _{(G,L)} {\mathcal {H}}\end{aligned}$$
(2.7)

where \(X^{^D}\) and \(X^{^{OD}}\) denote the diagonal and off-diagonal parts of the matrix X, respectively and the identification between a matrix and its dual is defined by the trace pairing. We denote

$$\begin{aligned} Q= -D - [X,L]\;,\qquad \delta =X^{^D}\;. \end{aligned}$$
(2.8)

Given now \((Q,\delta )\in T^\star _{(G,L)}{\mathcal {H}}\) we observe from (2.8) that \(D = -Q^{^D}\) and \(X = \delta + \mathrm{ad}^{-1}_L (Q^{^{OD}})\). The inverse of \(\mathrm{ad}_{L}(\cdot ) = [L,\cdot ]\) is given explicitly by

$$\begin{aligned} \mathrm{ad}^{-1}_L(M)_{ab} = \frac{M_{ab}}{L_{aa} - L_{bb}}, \qquad a\ne b \end{aligned}$$
(2.9)

as a linear invertible map on the space of off–diagonal matrices.

Thus \(\Phi _{(G,L)}^{-1}: T_{(G,L)}^\star {\mathcal {H}}\rightarrow T_{(G,L)} {\mathcal {H}}\) is given by

$$\begin{aligned} \Phi _{(G,L)}^{-1} (Q,\delta ) = \bigg (\delta + \mathrm{ad}_L^{-1} (Q^{^{OD}}), - Q^{^D}\bigg ) \end{aligned}$$
(2.10)

where \(Q^{^{OD}}\) and \(Q^{^D}\) denote the off-diagonal and diagonal parts, respectively. The Poisson tensor \({\mathbb {P}} \in \bigwedge ^2 T_{(G,L)} {\mathcal {H}}\simeq ( \bigwedge ^2 T^\star _{(G,L)} {\mathcal {H}})^\vee \) is defined by

$$\begin{aligned} {\mathbb {P}}_{(G,L)} \bigg ((Q_1,\delta _1), (Q_2, \delta _2)\bigg ) :=\omega \bigg (\Phi _{(G,L)}^{-1} (Q_1,\delta _1),\Phi _{(G,L)}^{-1} (Q_2,\delta _2)\bigg ). \end{aligned}$$
(2.11)

Using the definition (2.4), (2.6) we get

$$\begin{aligned}&{\mathbb {P}}_{(G,L)} \bigg ((Q_1,\delta _1), (Q_2, \delta _2)\bigg ) \\&\quad =\mathrm{tr}\left( -Q_1^{^D}\left( \delta _2 + \mathrm{ad}_L^{-1} (Q_2^{^{OD}})\right) +Q_2^{^D}\left( \delta _1 + \mathrm{ad}_L^{-1} (Q_1^{^{OD}})\right) \right. \\&\qquad \left. + \mathrm{ad}_L \left( \delta _1 + \mathrm{ad}_L^{-1} (Q_1^{^{OD}})\right) \left( \delta _2 + \mathrm{ad}_L^{-1} (Q_2^{^{OD}})\right) \right) \end{aligned}$$

which is equal to

$$\begin{aligned} \mathrm{tr}\Bigg (Q_2^{^D}\delta _1-Q_1^{^D}\delta _2 +Q_1^{^{OD}} \mathrm{ad}_L^{-1} (Q_2^{^{OD}})\Bigg ) \;. \end{aligned}$$

To obtain the Poisson bracket between the matrix entries of G and L we now write \(Q = G^{-1} \,\mathrm{d}G\) and \(\delta = \,\mathrm{d}L = \mathrm{diag} (\,\mathrm{d}\lambda _1,\dots , \,\mathrm{d}\lambda _n)\).

Choosing \(Q_1 = {\mathbb {E}}_{jk}, \delta _1 =0\) and \(Q_2 = 0, \delta _2 ={\mathbb {E}}_{\ell \ell }\) we have

$$\begin{aligned} (G^{-1})_{jb}\{ G_{bk}, \lambda _\ell \} = {\mathbb {P}}( (G^{-1} \,\mathrm{d}G)_{jk} , \,\mathrm{d}\lambda _\ell ) =- \delta _{jk} \delta _{\ell k}\ \Rightarrow \ \{ G_{bk}, \lambda _\ell \} =- G_{bk} \delta _{\ell k}\; . \end{aligned}$$

Choosing \(Q_1 = {\mathbb {E}}_{ij}, Q_2 = {\mathbb {E}}_{k\ell },\ \ \delta _1= \delta _2 = 0\) we have

$$\begin{aligned} {\mathbb {P}}_{(G,L)} \big ((G^{-1} \,\mathrm{d}G)_{ij},(G^{-1} \,\mathrm{d}G)_{k\ell } \big ) =(G^{-1})_{ib} (G^{-1})_{kc}\{ G_{bj},G_{c\ell }\} = \frac{\delta _{jk} \delta _{i\ell }}{ \lambda _j - \lambda _\ell }\;. \end{aligned}$$

\(\square \)

Proposition 2.3

Introduce the GL(n) dynamical r-matrix ([17], p.4):

$$\begin{aligned} r(L)=\sum _{i<j} \frac{E_{ij}\otimes E_{ji}-E_{ji} \otimes E_{ij}}{\lambda _i-\lambda _j}\;. \end{aligned}$$

where \(E_{ij}\) is an \(n\times n\) matrix whose (ij) entry equals 1 while all other entries vanish. Introduce also the matrix

$$\begin{aligned} \Omega = \Omega _{{{\mathfrak {gl}}}(n)} :=\sum _{i=1}^n E_{ii} \otimes E_{ii}\; . \end{aligned}$$

Then the bracket (2.5) can be written as follows:

$$\begin{aligned}&\{\mathop {G}\limits ^1,\mathop {G}\limits ^2\} =-\mathop {G}\limits ^1\mathop {G}\limits ^2 r(L) \;, \end{aligned}$$
(2.12)
$$\begin{aligned}&\{\mathop {G}\limits ^1,\mathop {L}\limits ^2\} =-\mathop {G}\limits ^1 \Omega \;. \end{aligned}$$
(2.13)

The proof is a straightforward computation. Notice that the formula (2.13) can alternatively be written as follows:

$$\begin{aligned} \{G,\lambda _j\}=-G E_{jj}\;. \end{aligned}$$
(2.14)

The Jacobi identity for the brackets \(\{\{\mathop {G}\limits ^1, \mathop {G}\limits ^2\},\mathop {G}\limits ^3\}\) implies (taking into account that \(\displaystyle \mathop {r}\limits ^{ij}=-\mathop {r}\limits ^{ji}\)) the classical dynamical Yang–Baxter equation (see (3) of [17]):

$$\begin{aligned} {[}\mathop {r}\limits ^{12},\mathop {r}\limits ^{13}] +[\mathop {r}\limits ^{12},\mathop {r}\limits ^{23}] +[\mathop {r}\limits ^{23},\mathop {r}\limits ^{31}] +\sum _{i=1}^n\left( \frac{\displaystyle \partial \mathop {r}\limits ^{12}(L)}{\partial \lambda _i} {\mathop {E}\limits ^3}_{ii}+\frac{\displaystyle \partial \mathop {r}\limits ^{23}(L)}{\partial \lambda _i} {\mathop {E}\limits ^1}_{ii} +\frac{\displaystyle \partial \mathop {r}\limits ^{31}(L)}{\partial \lambda _i} {\mathop {E}\limits ^2}_{ii}\right) =0\;. \end{aligned}$$
(2.15)
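Equation (2.15) can also be verified mechanically. The following sympy sketch (illustrative only; n=2 is shown, and larger n works in the same way but more slowly) assembles \(r(L)\) in the triple tensor product and checks that the left-hand side of (2.15) vanishes identically.

```python
import sympy as sp

n = 2
lam = sp.symbols('lambda1:%d' % (n + 1))

def E(i, j):
    return sp.Matrix(n, n, lambda a, b: 1 if (a, b) == (i, j) else 0)

def kron2(A, B):
    p, q = A.shape; r, s = B.shape
    return sp.Matrix(p * r, q * s, lambda a, b: A[a // r, b // s] * B[a % r, b % s])

def kron(*Ms):
    out = Ms[0]
    for M in Ms[1:]:
        out = kron2(out, M)
    return out

I = sp.eye(n)

def r_pair(p1, p2):
    """r(L) with first leg in tensor factor p1 and second leg in factor p2 of C^n x C^n x C^n."""
    total = sp.zeros(n**3, n**3)
    for i in range(n):
        for j in range(i + 1, n):
            f1 = [I, I, I]; f2 = [I, I, I]
            f1[p1], f1[p2] = E(i, j), E(j, i)
            f2[p1], f2[p2] = E(j, i), E(i, j)
            total += (kron(*f1) - kron(*f2)) / (lam[i] - lam[j])
    return total

r12, r13, r23, r31 = r_pair(0, 1), r_pair(0, 2), r_pair(1, 2), r_pair(2, 0)

comm = lambda a, b: a * b - b * a
lhs = comm(r12, r13) + comm(r12, r23) + comm(r23, r31)
for i in range(n):
    E1 = kron(E(i, i), I, I); E2 = kron(I, E(i, i), I); E3 = kron(I, I, E(i, i))
    lhs += sp.diff(r12, lam[i]) * E3 + sp.diff(r23, lam[i]) * E1 + sp.diff(r31, lam[i]) * E2

print(lhs.applyfunc(sp.simplify) == sp.zeros(n**3, n**3))   # True: (2.15) holds
```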

Remark 2.1

We did not find the construction of this section in the existing literature. In the special case of the SL(2) group, the Poisson algebra (2.12), (2.13) appeared in the work [1] in the context of classical Poisson geometry of \(T^{*}SL(2)\), see formulas (2),(3) in loc.cit.

As was mentioned to us by L. Feher, the Poisson structure (2.12), (2.13) can be obtained from the canonical Poisson structure on \(T^{*} SL(n)\) as follows. Consider an element \((G,A)\in T^{*} SL(n)\) and denote by L the diagonal form of the matrix \(A\in sl(n)\) (on the open part of the space where the matrix A is diagonalizable). The condition that A is diagonal, i.e. \(A=L\), is then a second-class constraint in Dirac's classification. The computation of the Dirac bracket for the pair (G, L), starting from the canonical Poisson structure on \(T^{*} SL(n)\), leads to the Poisson structure (2.12), (2.13), similarly to a computation given in [18].

2.1.1 Reduction to SL(n)

To reduce to SL(n) we observe that the proof of Proposition 2.1 holds also if we assume \(\mathrm{tr}L=0\) and \(\det G=1\). To compute the corresponding Poisson bracket we recall that inverting the restriction of a symplectic form to a symplectic submanifold is equivalent to the computation of the Dirac bracket.

Let \(h_1 :=\log \det G\) and \(h_2:= \mathrm{tr}L\); the Dirac bracket is then

$$\begin{aligned} \{F, H\}_D = \{F, H\} - \sum _{j=1}^2 \{F, h_j\} A_{jk} \{h_k, H\} \end{aligned}$$

where \(A_{jk}\) is the inverse matrix to \(\{h_j, h_k\}\): in our case we have

$$\begin{aligned} \{\log \det G, \mathrm{tr}L\} = -n\ \ \ \Rightarrow \ \ A = \frac{1}{n} \left( \begin{array}{cc} 0 &{}1\\ -1 &{}0 \end{array}\right) \;. \end{aligned}$$

Moreover a simple computation using (2.5) shows that

$$\begin{aligned} \{G_{jk}, \det G\} =0\ ,\ \ \ \ \ \{G_{jk}, \mathrm{tr}L\} = -G_{jk} \;. \end{aligned}$$

Then (we denote by \(\{\}_{SL(n)}\) the Dirac bracket restricted to \(\det G =1, \ \mathrm{tr}L =0\))

$$\begin{aligned} \{G_{bj}, G_{c\ell }\}_{SL(n)}= & {} \{G_{bj}, G_{c\ell }\} \;, \nonumber \\ \{\lambda _j, \lambda _k\}_{SL(n)}= & {} \{\lambda _j, \lambda _k\}=0\;, \nonumber \\ \{G_{bk}, \lambda _\ell \}_{SL(n)}= & {} \{G_{bk}, \lambda _\ell \} + \frac{1}{n} \{G_{bk}, \mathrm{tr}L \} \{ \log \det G, \lambda _{\ell }\}\nonumber \\= & {} - G_{bk} \delta _{\ell k} + \frac{1}{n} G_{bk} = G_{bk} \left( \frac{1}{n}- \delta _{\ell k}\right) \;. \end{aligned}$$
(2.16)

Equivalently the SL(n) bracket is written as

$$\begin{aligned} \{\mathop {G}\limits ^1, \mathop {G}\limits ^2\}_{SL(n)} =-\mathop {G}\limits ^1\mathop {G}\limits ^2 r(L)\;, \qquad \{\mathop {G}\limits ^1, \mathop {L}\limits ^2\}_{SL(n)} =-\mathop {G}\limits ^1 \Omega \end{aligned}$$
(2.17)

where now the matrix \(\Omega \) is given by

$$\begin{aligned} \Omega := \Omega _{_{{{\mathfrak {sl}}}(n)}}= \sum _{j=1}^n E_{jj} \otimes E_{jj} - \frac{1}{n} \mathbf{1 } \otimes \mathbf{1 } = \sum _{j,k=1}^{n-1} ({\mathbb {A}}^{-1})_{jk}\;\alpha _j\otimes \alpha _k \end{aligned}$$

and \(\alpha _j = \mathrm{diag}(0,\dots , 1,-1,0,\dots )\) are the simple roots of SL(n) and \({\mathbb {A}}\) is the Cartan matrix of SL(n);

$$\begin{aligned} {\mathbb {A}} = \left( \begin{array}{cccccc} 2 &{} \quad -1&{}\quad 0 \dots \\ -1&{}\quad 2&{}\quad -1&{}\quad 0\dots \\ 0 &{}\quad -1 &{} \quad 2&{}\quad -1&{}\quad \dots \\ 0 &{} \quad 0 &{}\quad \ddots \\ \dots &{}\quad &{}\quad &{}\quad -1 &{}\quad 2 \end{array}\right) \;. \end{aligned}$$
(2.18)
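The last equality can be checked directly for small n; the following numerical sketch (n=4, purely illustrative) compares the two expressions for \(\Omega _{_{{{\mathfrak {sl}}}(n)}}\).

```python
import numpy as np

n = 4
E = lambda i, j: np.eye(n)[:, [i]] @ np.eye(n)[[j], :]          # elementary matrix E_ij

Omega_sl = sum(np.kron(E(j, j), E(j, j)) for j in range(n)) - np.kron(np.eye(n), np.eye(n)) / n

alpha = [E(j, j) - E(j + 1, j + 1) for j in range(n - 1)]       # alpha_j = diag(0,...,1,-1,...,0)
Cartan = np.array([[np.trace(alpha[j] @ alpha[k]) for k in range(n - 1)] for j in range(n - 1)])
Cinv = np.linalg.inv(Cartan)                                    # inverse Cartan matrix of sl(n)

rhs = sum(Cinv[j, k] * np.kron(alpha[j], alpha[k]) for j in range(n - 1) for k in range(n - 1))
print(np.allclose(Omega_sl, rhs))                               # True
```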

2.1.2 Relation to the Kirillov–Kostant bracket

The Kirillov–Kostant bracket on GL(n), in tensor notation, takes the form

$$\begin{aligned} \{\mathop {A}\limits ^1,\mathop {A}\limits ^2\}=[\mathop {A}\limits ^1,P] =P(\mathop {A}\limits ^2-\mathop {A}\limits ^1). \end{aligned}$$
(2.19)

Here P is the permutation matrix of size \(n^2\times n^2\) given by

$$\begin{aligned} P=\sum _{i,j=1}^n E_{ij}\otimes E_{ji}\;. \end{aligned}$$
(2.20)

The regular symplectic leaves are the (co)adjoint orbits of diagonal matrices L with distinct eigenvalues, and on the orbit passing through L the symplectic form of the Kirillov–Kostant bracket (2.19) is equal to (see [5], pp. 44, 45):

$$\begin{aligned} \omega _{KK}=-\mathrm{tr}\left( L G^{-1} \,\mathrm{d}G\wedge G^{-1} dG\right) \end{aligned}$$
(2.21)

where G is any matrix diagonalizing A, i.e. \(A=GLG^{-1}\). The form \(\omega _{KK}\) is invariant under the transformation \(G\rightarrow GD\), where D is a diagonal matrix which may depend on G; such a transformation leaves A invariant.

Theorem 2.1

The map \((G,L)\mapsto A= GLG^{-1}\) is a Poisson morphism between the Poisson structure (2.5) and the Kirillov–Kostant Poisson structure on A;

$$\begin{aligned} \{\mathrm{tr}(AF) ,\mathrm{tr}(AH) \}_{_{KK}} = \mathrm{tr}\bigg ( A[H,F]\bigg ) \;,\qquad \forall F, H\in {{\mathfrak {gl}}}_n\simeq {{\mathfrak {gl}}}_n^\vee \end{aligned}$$
(2.22)

or, equivalently,

$$\begin{aligned} \{\mathop {A}\limits ^1,\mathop {A}\limits ^2\}=[\mathop {A}\limits ^1,P]\;. \end{aligned}$$
(2.23)

Proof

We have

$$\begin{aligned} \,\mathrm{d}A = [\,\mathrm{d}G\, G^{-1} ,A] + G \,\mathrm{d}L\, G^{-1} = G\bigg ([X,L] +\,\mathrm{d}L \bigg ) G^{-1}\;,\qquad X:= G^{-1}\,\mathrm{d}G\;. \end{aligned}$$

Then

$$\begin{aligned}&\{(GLG^{-1})_{ab}, (GLG^{-1})_{cd}\} = {\mathbb {P}}_{(G,L)} \bigg (\,\mathrm{d}(GLG^{-1})_{ab}, \,\mathrm{d}(GLG^{-1})_{cd}\bigg )\nonumber \\&\quad =\sum _{i,j,k,\ell =1}^{n}G_{ai}(G^{-1})_{jb} G_{ck} (G^{-1})_{\ell d}{\mathbb {P}}_{(G,L)} \nonumber \\&\qquad \bigg (\left( [G^{-1}\,\mathrm{d}G,L] + \,\mathrm{d}L \right) _{ij}, \left( [G^{-1}\,\mathrm{d}G,L] + \,\mathrm{d}L \right) _{k\ell }\bigg )\;. \end{aligned}$$
(2.24)

From the Poisson bracket (2.14) we have

$$\begin{aligned}&{\mathbb {P}}( (G^{-1} \,\mathrm{d}G)_{jk} , \,\mathrm{d}\lambda _\ell ) =- \delta _{jk} \delta _{\ell k}\nonumber \\&{\mathbb {P}} \big ((G^{-1} \,\mathrm{d}G)_{ij},(G^{-1} \,\mathrm{d}G)_{k\ell } \big ) = \frac{\delta _{jk} \delta _{\ell i}}{\lambda _j - \lambda _\ell }\ ,\ \ j\ne \ell \;,\nonumber \\&{\mathbb {P}}( \,\mathrm{d}\lambda _j , \,\mathrm{d}\lambda _\ell ) =0\;. \end{aligned}$$
(2.25)

Plugging (2.25) in (2.24) we see that the only terms giving non-trivial contributions are the following:

$$\begin{aligned}&\sum _{i,j,k,\ell =1}^n G_{ai}(G^{-1})_{jb} G_{ck} (G^{-1})_{\ell d}{\mathbb {P}}_{(G,L)}\\&\quad \bigg (\left( [G^{-1}\,\mathrm{d}G,L] + \,\mathrm{d}L \right) _{ij}, \left( [G^{-1}\,\mathrm{d}G,L] + \,\mathrm{d}L \right) _{k\ell }\bigg ) \\&\quad =\sum _{i,j,k,\ell =1}^n G_{ai}(G^{-1})_{jb} G_{ck} (G^{-1})_{\ell d} (\lambda _i - \lambda _j)(\lambda _k -\lambda _\ell ) {\mathbb {P}} \bigg ( (G^{-1}\,\mathrm{d}G)_{ij},(G^{-1}\,\mathrm{d}G)_{k\ell }\bigg )\\&\quad = \sum _{j,\ell =1}^n G_{a\ell }(G^{-1})_{jb} G_{cj} (G^{-1})_{\ell d} (\lambda _\ell - \lambda _j) = A_{ad} \delta _{b c} - A_{cb}\delta _{ad}\;. \end{aligned}$$

This expression coincides with the Kirillov–Kostant Poisson bracket (2.22). \(\square \)

A slight modification of this computation shows that the quadratic Poisson bracket (2.5) implies the Kirillov–Kostant bracket in the SL(n) case as well.
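As an illustration of Theorem 2.1 (a check on random data, not a replacement for the proof), the bracket (2.5) can be extended to arbitrary functions of the entries of G and of the \(\lambda \)'s by the Leibniz rule, and the bracket of the entries of \(A=GLG^{-1}\) compared with the Kirillov–Kostant bracket \(\{A_{ab},A_{cd}\}=A_{ad}\delta _{cb}-A_{cb}\delta _{ad}\):

```python
import sympy as sp

n = 2
g = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'g{i}{j}'))
lam = [sp.Symbol(f'lam{j}') for j in range(n)]
coords = list(g) + lam

def coord_bracket(x, y):
    """Coordinate brackets (2.5) on GL(2) x h_ss."""
    if x in lam and y in lam:
        return sp.Integer(0)
    if x in lam:                                   # antisymmetry
        return -coord_bracket(y, x)
    s = str(x); b, j = int(s[1]), int(s[2])
    if y in lam:                                   # {G_bk, lam_l} = -G_bk delta_{lk}
        return -g[b, j] if lam.index(y) == j else sp.Integer(0)
    s = str(y); c, l = int(s[1]), int(s[2])
    if j == l:
        return sp.Integer(0)
    return g[b, l] * g[c, j] / (lam[j] - lam[l])   # {G_bj, G_cl} = G_bl G_cj/(lam_j - lam_l)

def PB(F, H):                                      # Leibniz extension to functions of the coordinates
    return sum(sp.diff(F, x) * sp.diff(H, y) * coord_bracket(x, y) for x in coords for y in coords)

A = g * sp.diag(*lam) * g.inv()                    # A = G L G^{-1}
point = dict(zip(coords, [3, 1, 2, 5, 7, 2]))      # arbitrary rational point, det G != 0, lam distinct
delta = lambda a, b: 1 if a == b else 0

ok = all(sp.simplify((PB(A[a, b], A[c, d])
                      - (A[a, d] * delta(c, b) - A[c, b] * delta(a, d))).subs(point)) == 0
         for a in range(n) for b in range(n) for c in range(n) for d in range(n))
print(ok)   # True: (2.5) induces the Kirillov-Kostant bracket on A = G L G^{-1}
```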

2.2 Hamiltonian formulation of the Schlesinger system in G–variables

Consider the Schlesinger system written in terms of the matrices \(G_j\in SL(n)\):

$$\begin{aligned} \frac{\partial G_k}{\partial t_j}= \frac{A_j G_k}{t_j-t_k}\;,\;\;\; j\ne k\;; \qquad \frac{\partial G_k}{\partial t_k}=-\sum _{j\ne k} \frac{A_j G_k}{t_j-t_k} \;,\qquad \frac{\partial L_k}{\partial t_j}=0 \end{aligned}$$
(2.26)

where

$$\begin{aligned} A_j=G_j L_j G_j^{-1}\;. \end{aligned}$$
(2.27)

The matrices \(L_j\in {{\mathfrak {sl}}}(n)\) are diagonal and the eigenvalues of each \(L_j\) are assumed to be distinct.

The Poisson structure of the Schlesinger system (1.18) written in terms of \(A_j\) is known to be linear: it is based on the Kirillov–Kostant bracket. On the other hand, the hamiltonian formulation of the system (2.26) involves the quadratic bracket defined by the dynamical r-matrix.

The following theorem can be checked by direct calculation:

Theorem 2.2

Denote by \({\mathcal {A}}_0 \) the space of matrices \(\{(G_j,L_j)\}_{j=1}^N\) where \(L_j\) are diagonal matrices with distinct eigenvalues. Then the system (2.26) is a multi-time hamiltonian system with respect to the Poisson structure on \({\mathcal {A}}_0 \)

$$\begin{aligned} \{\mathop {G_{j}}\limits ^1,\mathop {G_{k}}\limits ^2\} =-\mathop {G_{j}}\limits ^1\mathop {G_{k}}\limits ^2 r(L_k)\,\delta _{jk} \;,\qquad \{\mathop {G_{j}}\limits ^1,\mathop {L_{k}}\limits ^2\} =-\mathop {G_{k}}\limits ^1 \Omega \,\delta _{jk}\;. \end{aligned}$$
(2.28)

where \(\delta _{jk}\) is the Kronecker delta. The Hamiltonian defining the evolution with respect to “time” \(t_k\) is given by

$$\begin{aligned} H_k=\sum _{j\ne k} \frac{\mathrm{tr}A_j A_k}{t_k-t_j}\;,\qquad k=1,\dots ,N\;. \end{aligned}$$

We notice that for the Schlesinger system for matrices \(A_j\) (1.18) the Hamiltonians \(H_j\) are the same as for the system (2.26).
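The “direct calculation” behind Theorem 2.2 can be reproduced mechanically. The sympy sketch below (illustrative only: \(sl(2)\), \(N=2\) poles, an arbitrary rational point) extends the brackets (2.28), in the form (2.16), by the Leibniz rule and verifies that \(\{H_1,G_k\}\) reproduces the right-hand sides of (2.26) for the flow in \(t_1\).

```python
import sympy as sp

n, N = 2, 2
t1, t2 = sp.symbols('t1 t2')
G = [sp.Matrix(n, n, lambda i, j, p=p: sp.Symbol(f'g{p}{i}{j}')) for p in range(N)]
lam = [sp.Symbol(f'lam{p}') for p in range(N)]                 # L_p = diag(lam_p, -lam_p)
A = [G[p] * sp.diag(lam[p], -lam[p]) * G[p].inv() for p in range(N)]
coords = [x for p in range(N) for x in list(G[p])] + lam

def parse(x):
    s = str(x)
    return ('lam', int(s[3:]), None) if s.startswith('lam') else ('g', int(s[1]), (int(s[2]), int(s[3])))

def coord_bracket(x, y):
    """Brackets (2.28)/(2.16) of the coordinates: entries of G_p and lam_p (sl(2) case)."""
    kx, px, ix = parse(x); ky, py, iy = parse(y)
    if px != py or (kx == 'lam' and ky == 'lam'):
        return sp.Integer(0)                                    # different poles Poisson-commute
    if kx == 'lam':
        return -coord_bracket(y, x)
    b, j = ix
    if ky == 'lam':                                             # {G_bj, lam_p} = G_bj (1/n - delta_{j,first column})
        return G[px][b, j] * (sp.Rational(1, n) - (1 if j == 0 else 0))
    c, l = iy
    if j == l:
        return sp.Integer(0)
    lj = lam[px] if j == 0 else -lam[px]                        # eigenvalue attached to column j
    ll = lam[px] if l == 0 else -lam[px]
    return G[px][b, l] * G[px][c, j] / (lj - ll)                # {G_bj, G_cl} = G_bl G_cj/(lam_j - lam_l)

def PB(F, H):                                                   # Leibniz extension to functions of the coordinates
    return sum(sp.diff(F, x) * sp.diff(H, y) * coord_bracket(x, y) for x in coords for y in coords)

H1 = (A[0] * A[1]).trace() / (t1 - t2)                          # Hamiltonian H_1 of (1.17) for N = 2

rhs = [A[1] * G[0] / (t1 - t2),                                 # dG_1/dt_1 = -A_2 G_1/(t_2 - t_1)
       A[0] * G[1] / (t1 - t2)]                                 # dG_2/dt_1 =  A_1 G_2/(t_1 - t_2)

point = dict(zip(coords, [3, 1, 2, 5, 1, 4, 2, 7, 5, 3]))       # arbitrary point, det G_p != 0, lam_p != 0
ok = all(sp.simplify((PB(H1, G[k][c, d]) - rhs[k][c, d]).subs(point)) == 0
         for k in range(N) for c in range(n) for d in range(n))
print(ok)   # True: {H_1, G_k} reproduces the Schlesinger flow (2.26) in t_1
```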

2.3 Symplectic form and potential

In the sequel we shall use the symplectic form associated to the bracket (2.28). A direct computation using the Poisson bracket (2.16) shows that the matrix \(A= GLG^{-1}\) has the following Poisson brackets with G and L:

$$\begin{aligned} {\{A_{ab}, G_{jk}\} = G_{ak} \delta _{bj} - G_{jk} \frac{\delta _{ab}}{n}\;,}\qquad \{A_{ab}, \lambda _k\}=0\; . \end{aligned}$$
(2.29)

Thus \(\{\mathrm{tr}(X A), G\} = X G\) for any fixed matrix \(X\in {{\mathfrak {sl}}}(n)\) and, therefore, the matrix \(A = GLG^{-1}\) is the moment map for the group action \(G\mapsto S G\) on the space \({\mathcal {H}}\). A similar statement, of course, holds for GL(n) using the Poisson bracket (2.5) instead.

Consider now the diagonal group action on \({\mathcal {A}}_0\) given by

$$\begin{aligned} \{G_j,L_j\}_{j=1}^N\rightarrow \{S G_j,L_j\}_{j=1}^N \end{aligned}$$
(2.30)

where S is an SL(n) matrix. The previous computation shows immediately that the moment map corresponding to the group action \(G_j\rightarrow SG_j\) on \({\mathcal {A}}_0\) is given by

$$\begin{aligned} \{G_j,L_j\}_{j=1}^N\rightarrow {{\mathfrak {m}}} =\sum _{j=1}^N G_j L_j G_j^{-1}\;. \end{aligned}$$
(2.31)

The space \({\mathcal {A}}\) is defined by (1.6) as the space of orbits of the action (2.30) in the zero level set of the moment map (2.31). This implies the following theorem, proven via the standard symplectic reduction [4]:

Theorem 2.3

The Poisson structure induced on \({\mathcal {A}}\) from the Poisson structure (1.12) on \({\mathcal {A}}_0\) via the reduction on the level set \(\sum _{j=1}^N G_j L_j G_j^{-1}=0\) of the moment map, corresponding to the group action \(G_j\rightarrow SG_j\), is non-degenerate and the corresponding symplectic form is given by

$$\begin{aligned} \omega _{{\mathcal {A}}}=\sum _{k=1}^N\Big (\mathrm{tr}(\,\mathrm{d}L_k \wedge G_k^{-1} \,\mathrm{d}G_k)-\mathrm{tr}(L_k G_k^{-1} \,\mathrm{d}G_k \wedge G_k^{-1} \,\mathrm{d}G_k)\Big )\;. \end{aligned}$$
(2.32)

A symplectic potential \( \theta _{\mathcal {A}}\) for \(\omega _{{\mathcal {A}}}\) is given by

$$\begin{aligned} \theta _{\mathcal {A}}=\sum _{k=1}^N\mathrm{tr}(L_k G_k^{-1} dG_k)\;. \end{aligned}$$
(2.33)

3 Monodromy Symplectomorphism via Malgrange’s Form

We start by introducing the Malgrange form associated to a Riemann–Hilbert problem on an oriented graph and by discussing some of its properties, following [6, 7, 35]. From now on we work with the SL(n) case.

Let \(\Sigma \) be an embedded graph on \({\mathbb {C}}{\mathbb {P}}^1\) whose edges are smooth oriented arcs meeting transversally at the vertices. We denote by \(\mathbf{V}\) the set of vertices of \(\Sigma \). Consider a “jump matrix”, i.e. a function \(J(z):\Sigma \setminus \mathbf{V} \rightarrow SL(n)\), satisfying the following properties:

Assumption 3.1

  1. In a small neighbourhood of each point \(z_0\in \Sigma \setminus \mathbf{V} \) the matrix \(J(z)\) is given by a germ of an analytic function;

  2. for each \(v\in \mathbf{V} \), denote by \(\gamma _1,\dots , \gamma _{n_v}\) the edges incident to v within a small disk centered at v. Suppose first that all these edges are oriented away from v and enumerated in counter-clockwise order. Denote by \(J^{(v)}_j(z)\) the analytic restriction of \(J\) to \(\gamma _j\). Assume that each \(J_j^{(v)}(z)\) admits an analytic extension to a full neighbourhood of v and that these extensions satisfy the local no-monodromy condition

    $$\begin{aligned} J_1^{(v)}(z)\cdots J^{(v)}_{n_v}(z)=\mathbf{1 }\;. \end{aligned}$$
    (3.1)

    If the edge \(\gamma _j\) is oriented towards v then \(J_j^{(v)}(z)\) is taken to be the inverse of \(J(z)\).

Suppose now that the jump matrices form an analytic family depending on some deformation parameters and satisfying Assumption 3.1, and consider a family of Riemann–Hilbert problems on \(\Sigma \).

Malgrange form for an arbitrary Riemann–Hilbert problem. Let \(\Phi (z):{\mathbb {C}}{\mathbb {P}}^1 \setminus \Sigma \rightarrow SL(n)\) be a matrix-valued function, bounded everywhere and analytic on each face of \(\Sigma \). We also assume that the boundary values on the two sides of each edge of \(\Sigma \) are related by

$$\begin{aligned} \Phi _+(z)=\Phi _-(z) J(z)\;, \ \ \ \forall z\in \Sigma \setminus \mathbf{V} \ ,\qquad \Phi (\infty )=\mathbf{1 }\; . \end{aligned}$$
(3.2)

where the \({+/-}\) boundary value is from the left/right, respectively, of the oriented edge.

Definition 3.1

[35]. The Malgrange 1-form on the deformation space of Riemann–Hilbert problems with given graph \(\Sigma \) and jump matrices \(J\) is defined by

$$\begin{aligned} \Theta [\Sigma ,J]=\frac{1}{2i\pi } \int _{\Sigma } \mathrm{tr}\left( \Phi _-^{-1}\frac{\,\mathrm{d}\Phi _-}{\,\mathrm{d}z} \,\mathrm{d}J(z) J^{-1}(z)\right) \,\mathrm{d}z \end{aligned}$$
(3.3)

where \(\,\mathrm{d}J\) denotes the total differential of \(J\) in the space of deformation parameters for fixed z.

In ([7], Thm. 2.1) the following formula for the exterior derivative of (3.3) was proved:

$$\begin{aligned} \,\mathrm{d}\Theta [\Sigma ,J] = - \frac{1}{2} \int _{\Sigma } \frac{\,\mathrm{d}z}{2i\pi }\mathrm{tr}\bigg (\frac{\,\mathrm{d}}{\,\mathrm{d}z} (\,\mathrm{d}JJ^{-1}) \wedge (\,\mathrm{d}JJ^{-1}) \bigg )+\eta _{_{\mathbf{V}}}\end{aligned}$$
(3.4)

where

$$\begin{aligned} \eta _{_{\mathbf{V}}}=-\frac{1}{4i\pi }\sum _{v\in \mathbf{V} } \sum _{\ell =1}^{n_v-1} \mathrm{tr}\bigg ( (J_{\ell }^{(v)})^{-1} \,\mathrm{d}J_{\ell }^{(v)} \wedge \,\mathrm{d}J^{(v)}_{[\ell +1:n_v]} (J^{(v)}_{[\ell +1:n_v]})^{-1} \bigg )\;. \end{aligned}$$
(3.5)

Here the notation \(J^{(v)}_{[a:b]}\) stands for the product \(J^{(v)}_{a} \cdots J^{(v)}_b\) for any two indices \(a<b\).

In [7] the formula for \(\eta _{_{\mathbf{V}}}\) is written in a slightly different form and can be recast as the above expression by using the conditions (3.1).

Malgrange form and Schlesinger systems. Let us now discuss how the form (3.3) can be used in the context of the Fuchsian equation (1.1) and the associated Riemann–Hilbert problem.

Let \({\mathbb {D}}_j\) be small, pairwise non-intersecting disks centered at \(t_j\), \(j=1,\dots , N\).

In order to define the inverse monodromy map unambiguously, we need to fix the determination of the power in (1.2). To this end, fix a point \(\beta _j\) on the boundary of each of the disks \({\mathbb {D}}_j\) and declare that, within the disk \({\mathbb {D}}_j\), the power \((z-t_j)^{L_j}\) stands for \( |z-t_j|^{L_j} \mathrm{e}^{i\arg (z-t_j)L_j}\), where the argument is chosen between \(\arg (\beta _j-t_j)\) and \(\arg (\beta _j-t_j) + 2\pi \). In particular the logarithm \(\ln (z-t_j)\) is assumed to have the branch cut connecting \(t_j\) with \(\beta _j\) and the determination implied by the above.

Choose now a collection of non-intersecting edges \(l_1,\dots , l_N\) connecting \(\infty \) with each of the \(\beta _j\)’s (\(l_j\) is assumed to be transversal to the boundary \(\partial {\mathbb {D}}_j\) at \(\beta _j\)). Denote by \(\Sigma \) the union of all the circles \(\partial {\mathbb {D}}_j\) and the edges \(l_j\). Denote by \({\mathscr {D}}_\infty \) the “exterior” domain which is the complement of the union of the disks \({\mathbb {D}}_j\) and the graph \(\Sigma \).

The solution \(\Psi (z)\) of (1.1) is a single–valued matrix function in \({\mathscr {D}}_\infty \) normalized by \(\lim _{z\rightarrow \infty } \Psi (z) =\mathbf{1 }\), where the limit is taken along directions in the sector between the edges \(l_1\) and \(l_N\). Within each disk, with the above choice of determination of the logarithm, the analytic continuation of \(\Psi \) has the local expression (1.2). The “connection matrices” \(C_j\) are uniquely determined by a choice of \(G_j\) and the determination of the logarithm.

We have therefore defined the (extended) monodromy map

$$\begin{aligned} \left\{ (G_j, L_j):\ \ \ \sum _{j=1}^{N} G_jL_j G_j^{-1} =0\right\} \rightarrow \left\{ (C_j, L_j): \ \ \prod _{j=1}^N C_j \mathrm{e}^{2i\pi L_j} C_{j}^{-1} = \mathbf{1 }\right\} . \end{aligned}$$
(3.6)

Although this monodromy map depends on \(\Sigma \) and on the determinations of the logarithms, we are not going to indicate this dependence explicitly.

Fig. 2 Graph \(\Sigma \) and jump matrices on its edges used in the calculation of the form \(\Theta \)

An example of the graph \(\Sigma \) is shown in Fig. 2; the graph looks like N “cherries” whose “stems” are attached to the point \(z=\infty \). Introduce the following matrix, piecewise analytic on the faces of \(\Sigma \):

$$\begin{aligned} \Phi (z) = \left\{ \begin{array}{cc} \Psi (z)\;, &{} z\in {\mathbb {D}}={\mathbb {C}}{\mathbb {P}}^1\setminus \Sigma \setminus \bigcup _{j=1}^N {\mathbb {D}}_j\;; \\ \Phi _j(z) := \Psi (z) C_j (z-t_j)^{-L_j} \;, &{} z\in {\mathbb {D}}_j\; . \end{array}\right. \end{aligned}$$
(3.7)

The function \(\Phi \) solves a Riemann–Hilbert Problem on \(\Sigma \) with the jump matrices on its edges indicated in Fig. 2:

$$\begin{aligned} J = \left\{ \begin{array}{ll} M_j = C_j \mathrm{e}^{2i\pi L_j} C_j^{-1},&{} \quad z\in l_j\\ C_j (z-t_j)^{-L_j}, &{} \quad z\in \partial {\mathbb {D}}_j \end{array}\right. \end{aligned}$$
(3.8)

where \(l_j\) is the “stem” of the jth cherry.

The matrix function \(\Phi \) given by (3.7) is the unique solution of the Riemann–Hilbert problem with jump matrices (3.8):

$$\begin{aligned} \Phi _+(z) = \Phi _-(z) J(z)\; , \qquad \Phi (\infty )=\mathbf{1 }, \end{aligned}$$
(3.9)

The solution of the Riemann–Hilbert problem exists for a generic set of data \(\{C_j,L_j,t_j\}\); this solution provides the inverse of the map in (3.6). We emphasize that the inverse monodromy map depends on the isotopy class of \(\Sigma \) and on the choice of the branches of the logarithms.

In the context of Fuchsian systems the general Malgrange form in Definition 3.1 specializes to the following definition:

Definition 3.2

The Malgrange one-form \(\Theta \in T^{*}_p \widetilde{{\mathcal {M}}}\) is defined by the expression (3.3), where \(\Phi \) is the solution of the Riemann–Hilbert problem (3.8), (3.9).

It is known [35] that the form \(\Theta \) is a meromorphic form on \(\widetilde{{\mathcal {M}}}\); the set of poles of \(\Theta \) is called the “Malgrange divisor”; on this divisor the Riemann–Hilbert problem fails to have a solution. Moreover, the residue along this divisor is a positive integer [35].

The deformation parameters involved in the expression (3.3) for \(\Theta \) are \(C_j, L_j\) subject to the monodromy relation \(\prod _{j=1}^N C_j \mathrm{e}^{2i\pi L_j} C_j^{-1}=\mathbf{1 }\), and the locations of the poles \(t_1,\dots , t_N\).

Theorem 3.1

The form \(\Theta \in T^{*}\widetilde{{\mathcal {M}}}\) (3.3) and the potential \(\widetilde{\theta }_{{\mathcal {A}}}\in T^{*}\widetilde{{\mathcal {A}}}\) are related by

$$\begin{aligned} \Theta =(\widetilde{ {\mathcal {F}}}^{-1})^{*} \left( \theta _{\mathcal {A}}-\sum _{j=1}^N H_j \,\mathrm{d}t_j \right) \end{aligned}$$
(3.10)

where \(H_j\) are the Hamiltonians (1.17). Denote now by \(\partial _{t_j} \in T\widetilde{{\mathcal {M}}}\) the vector field of differentiation w.r.t. \(t_j\) keeping the monodromy data constant. Then the contraction of \(\Theta \) with \(\partial _{t_j}\) is given by

$$\begin{aligned} \Theta (\partial _{t_j}) = H_j\,. \end{aligned}$$
(3.11)

Equivalently, the contraction of (the pullback via the inverse monodromy map of) \(\widetilde{\theta }_{\mathcal {A}}\) with \(\partial _{t_j}\) equals \(2H_j\).

Proof

The simplest way to prove (3.10) is via the localization formula [28] using the Riemann–Hilbert problem defined on the graph \(\Sigma \) shown in Fig. 2. To simplify the notation we will not indicate explicitly the pullbacks, but simply consider the matrices \(G_j\) as functions of times and monodromy data via the inverse monodromy map.

In the formula (3.3) the function \(\Phi _-\) coincides with the boundary value of the solution, \(\Psi \), of the ODE (1.1) in the domain \({\mathbb {D}}\). Therefore, denoting \(\,\mathrm{d}\Phi /\,\mathrm{d}z\) by \(\Phi '\) we have:

$$\begin{aligned} \mathrm{tr}\left( \Phi _-^{-1} \Phi _-' \,\mathrm{d}JJ^{-1} \right) =\mathrm{tr}\left( A(z) \Phi _-\,\mathrm{d}JJ^{-1}\Phi _-^{-1} \right) \;. \end{aligned}$$
(3.12)

Here we have used the fact that \(\Phi _-\) coincides with \(\Psi \) and therefore \(\Phi _-'\Phi _-^{-1} = A(z)\). Moreover we have

$$\begin{aligned} \Phi _-\,\mathrm{d}JJ^{-1}\Phi _-^{-1} = \,\mathrm{d}\left( \Phi _- J\right) J^{-1}\Phi _-^{-1} - \,\mathrm{d}\Phi _- \Phi _-^{-1} = \,\mathrm{d}\Phi _+ \Phi _+^{-1} -\,\mathrm{d}\Phi _- \Phi _-^{-1} \end{aligned}$$

since \(\Phi _+ = \Phi _- J\). Thus (3.3) can be equivalently written as follows

$$\begin{aligned} \Theta = \frac{1}{2\pi i} \int _\Sigma \mathrm{tr}\left( A(z)(\,\mathrm{d}\Phi _+ \Phi _+^{-1} -\,\mathrm{d}\Phi _- \Phi _-^{-1} )\right) \,\mathrm{d}z\; \end{aligned}$$
(3.13)

and further represented as

$$\begin{aligned} \Theta = \frac{1}{2\pi i} \int _{\partial {\mathbb {D}}} \mathrm{tr}(A(z) \,\mathrm{d}\Psi \,\Psi ^{-1}) \,\mathrm{d}z+ \frac{1}{2\pi i} \sum _{j=1}^N \int _{\partial {\mathbb {D}}_j} \mathrm{tr}(A(z)\,\mathrm{d}\Phi _+ \Phi _+^{-1})\,\mathrm{d}z\; . \end{aligned}$$
(3.14)

The first integral in the r.h.s. of (3.14) vanishes since the integrand is holomorphic in \({\mathbb {D}}\). Thus (3.14) reduces to (this is the expression that also appears in [28], formula (1.11)):

$$\begin{aligned} \Theta = \sum _{j} \mathop {\mathrm{res}}\limits _{z=t_j} \mathrm{tr}\left( A(z) \,\mathrm{d}\Phi _j(z) \Phi _j^{-1}(z) \right) \,\mathrm{d}z\,. \end{aligned}$$
(3.15)

The expression (3.15) can be further evaluated in the coordinate system given by \((C_j,L_j,t_j)\). Namely, the contribution of derivatives with respect to monodromy data \((C_j,L_j)\) into (3.15) is obtained by evaluation of \(\,\mathrm{d}\Phi _j(z) \Phi _j^{-1}(z)\) at the poles \(t_j\) which gives the monodromy part of \(\widetilde{\theta }_{{\mathcal {A}}}\) in (3.10).

A straightforward local analysis using (3.7) shows that:

$$\begin{aligned} \partial _{t_k} \Phi _j(z) \Phi _j^{-1}(z)\bigg |_{z=t_j} = \partial _{t_k} G_j G_j^{-1} - \delta _{kj} \partial _{t_k} G_kG_k^{-1} -\delta _{kj} [A_k, \Phi _j'(t_j)\Phi _j(t_j)^{-1}]\;. \end{aligned}$$

Thus

$$\begin{aligned} \Theta= & {} \sum _{j} \mathop {\mathrm{res}}\limits _{z=t_j} \mathrm{tr}\bigg ( A(z) \,\mathrm{d}\Phi _j(z)\Phi _j^{-1} (z) \bigg ) dz\\= & {} \sum _{j} \mathrm{tr}\bigg ( A_j \,\mathrm{d}G_jG_j^{-1} \bigg ) - \sum _{j} \,\mathrm{d}t_j \mathrm{tr}\bigg (A_j \partial _{t_j} G_j G_j^{-1} \bigg )\;. \end{aligned}$$

Finally, due to the Schlesinger equations for \(G_j\) (1.20) we get

$$\begin{aligned} \Theta = \sum _{j} \mathrm{tr}\left( A_j \,\mathrm{d}G_jG_j^{-1} \right) - \sum _{j} \,\mathrm{d}t_j \sum _{ k\ne j}\frac{\mathrm{tr}A_j A_k }{t_j-t_k}\;. \end{aligned}$$

Recalling that the Jimbo–Miwa Hamiltonians are given by \(H_j=\sum _{k\ne j}\frac{\mathrm{tr}A_j A_k }{t_j-t_k}\) and that the first term equals the potential \( \widetilde{\theta }_{\mathcal {A}}\) on \(\widetilde{{\mathcal {A}}}\), we arrive at (3.10).

As a corollary of the Schlesinger equations (1.20) the contraction of \(\widetilde{\theta }_{\mathcal {A}}\) with a vector field \(\partial _{t_j}\) (for fixed monodromy data) is

$$\begin{aligned} \widetilde{\theta }_{\mathcal {A}}(\partial _{t_j}) = 2 H_j\;. \end{aligned}$$

Therefore, the total \(\,\mathrm{d}t_j\)-part of the form \(\Theta \) for fixed monodromies equals \(\sum _{j=1}^N H_j\,\mathrm{d}t_j\). \(\square \)

Symplectic form on the monodromy manifold. We start by defining the two-form on the monodromy manifold which is one of the central objects of this paper.

Definition 3.3

Define the following 2-form on \({\mathcal {M}}\) (1.7):

$$\begin{aligned} \omega _{\mathcal {M}}=\frac{1}{4\pi i}(\omega _1+\omega _2) \end{aligned}$$
(3.16)

where

$$\begin{aligned}&\omega _1= \sum _{\ell =1}^{N}\mathrm{tr}\left( M_\ell ^{-1} \,\mathrm{d}M _{\ell } \wedge K_{\ell }^{-1} \,\mathrm{d}K_{\ell }\right) + \sum _{\ell =1}^N \mathrm{tr}\left( \Lambda ^{-1}_\ell C_{\ell } ^{-1}\,\mathrm{d}C_{\ell }\wedge \Lambda _\ell C_\ell ^{-1}\,\mathrm{d}C_\ell \right) \;,\nonumber \\ \end{aligned}$$
(3.17)
$$\begin{aligned}&\omega _2=2\sum _{\ell =1}^N \mathrm{tr}\left( \Lambda _\ell ^{-1} \,\mathrm{d}\Lambda _\ell \wedge C_\ell ^{-1} \,\mathrm{d}C_\ell \right) \end{aligned}$$
(3.18)

and \(K_{\ell }=M_1\dots M_\ell \).

On the monodromy manifold \(M_1\dots M_N=\mathbf{1 }\) the form \(\omega _{\mathcal {M}}\) is invariant under the simultaneous transformation \(C_j\rightarrow S C_j\), where S is an arbitrary SL(n)-valued function on \({\mathcal {M}}\).

Remark 3.1

The restriction of the form \(-2i\pi \omega _{\mathcal {M}}\) to the leaves \(\Lambda _j = \) constant (under such restriction \(\omega _2=0\) and hence \(-2i\pi \omega _{\mathcal {M}}= -\omega _1/2\)) coincides with the symplectic form on the symplectic leaves of the GL(n) Goldman bracket found in [2] (formula (3.14); the case of this formula relevant for us corresponds to \(k=2\pi \) and \(g=0\) in the notation of [2]).

As we prove below in Corollary 3.1, the form \(\omega _{\mathcal {M}}\) is non-degenerate on the space \({\mathcal {M}}\), which is a torus fibration (with fiber the product of N copies of the SL(n) torus of diagonal matrices) over the union of all the symplectic leaves of the Goldman bracket. The fact that \({\mathcal {M}}\) is a torus fibration is simply due to the fact that the fibers of the map \((C_j,\Lambda _j) \rightarrow M_j = C_j \Lambda _j C_j^{-1}\) are obtained by multiplication of the \(C_j\)’s on the right by diagonal matrices.

Let us trivially extend the form \(\omega _{\mathcal {M}}\) to the space \(\widetilde{{\mathcal {M}}}\) (1.15) which includes also the variables \(t_j\). This extension is denoted by \(\widetilde{\omega }_{\mathcal {M}}\).

Relation between the forms \(\Theta \) and \(\omega _{{\mathcal {M}}}\). The following theorem was stated in [6] in slightly different notation, without a direct proof. The proof is given below.

Theorem 3.2

The exterior derivative of the form \(\Theta \) is given by the pullback of the form \(\widetilde{\omega }_{\mathcal {M}}\) (3.16) under the monodromy map:

$$\begin{aligned} \,\mathrm{d}\Theta =[\widetilde{{\mathcal {F}}}]^{*}\widetilde{\omega }_{\mathcal {M}}\;. \end{aligned}$$
(3.19)

Proof

Let us apply the formulas (3.4), (3.5) to the graph \(\Sigma \) depicted in Fig. 2 with the indicated jump matrices. The integral over \(\Sigma \) in the formula (3.4) then reduces to a sum of integrals over the \(\partial {\mathbb {D}}_\ell \)’s because the jump matrix J(z) on the cuts is constant with respect to z. We denote by \(\beta _\ell \) the three-valent vertices where the circles around \(t_\ell \) meet the edges going towards \(z_0\). Let us consider the contribution of one of the integrals over \(\partial {\mathbb {D}}_\ell \) to (3.4).

We will drop the index \(_\ell \) for brevity in the formulas below. Notice also that \(\,\mathrm{d}L \wedge \,\mathrm{d}L=0\) because the matrix L is diagonal. Letting \(J(z) =C (z-t)^{-L}\) we get

$$\begin{aligned}&-\frac{1}{2} \oint \frac{\,\mathrm{d}z}{2i\pi }\mathrm{tr}\left( \frac{\,\mathrm{d}}{\,\mathrm{d}z} \Big ( \,\mathrm{d}J(z) J(z)^{-1} \Big )\wedge \,\mathrm{d}J(z) J(z)^{-1} \right) \nonumber \\&\quad =\frac{1}{2}\, \mathrm{tr}\left( \frac{\,\mathrm{d}t\wedge L\,\mathrm{d}L}{(\beta -t)} +\,\mathrm{d}L \wedge C^{-1}\,\mathrm{d}C\right) \;. \end{aligned}$$
(3.20)

In the course of the computation we have used that

$$\begin{aligned} \int _{\beta }^\beta \frac{\,\mathrm{d}z}{2i\pi }\frac{\log (z-t)}{(z-t)^2} = -\frac{1}{\beta -t} \end{aligned}$$

where the integration goes along the circle \(|z-t|=|\beta -t|\) starting at \(z=\beta \). We now turn to the evaluation of the term \(\eta _{_{\mathbf{V}}}\) (3.5). The set of vertices \(\mathbf{V} \) consists of \(\mathbf{V} = \{z_0, \beta _1,\dots , \beta _N\}\). The contribution coming from the vertex \(z_0\) is precisely the first term in \(\omega _1\) (3.17) (in (3.17) this term is simplified using the local no-monodromy condition (3.1)).

To evaluate the contribution of the vertex \(\beta =\beta _\ell \in \mathbf{V} \) we observe that this vertex is tri-valent and the jump matrices on the three incident arcs are

$$\begin{aligned} J_1 = C {\Lambda ^{-1}} C^{-1} ,\qquad J_2 = C (\beta -t)^{-L} \ ,\qquad J_3 = (\beta -t)^L\mathrm{e}^{2i\pi L} C^{-1} \end{aligned}$$

where \(\Lambda := \mathrm{e}^{2i\pi L}\). In the definition it is assumed that \((z-t)^L\) is defined with a branch cut extending from t to \(\beta \). Since \(J_1J_2J_3 =\mathbf{1 }\) the contribution of the vertex to (3.5) reduces to the term

$$\begin{aligned} \frac{-1}{4i\pi } \mathrm{tr}\left( J_1\,\mathrm{d}J_2\wedge \,\mathrm{d}J_3\right) =\frac{-1}{4i\pi } \mathrm{tr}\left( J_2^{-1}\,\mathrm{d}J_2\wedge \,\mathrm{d}J_3J_3^{-1}\right) \;. \end{aligned}$$

Recall that \(L, \Lambda \) are diagonal; we have then

$$\begin{aligned} J_2^{-1} \,\mathrm{d}J_2= & {} (\beta -t)^L C^{-1} \,\mathrm{d}C (\beta - t)^{-L} +(\beta -t)^L \frac{L \,\mathrm{d}t}{\beta -t} (\beta - t)^{-L} - \,\mathrm{d}L \log (\beta -t)\;,\nonumber \\ \,\mathrm{d}J_3J_3^{-1}= & {} \frac{-\,\mathrm{d}t L}{(\beta - t)} + \left( \log (\beta -t) + 2i\pi \right) \,\mathrm{d}L - (\beta -t)^L \Lambda C^{-1} \,\mathrm{d}C \Lambda ^{-1} (\beta -t)^{-L}.\nonumber \\ \end{aligned}$$
(3.21)

Then a straightforward computation gives

$$\begin{aligned}&\frac{-1}{4i\pi }\mathrm{tr}\left( J_2^{-1}\,\mathrm{d}J_2 \wedge \,\mathrm{d}J_3J_3^{-1}\right) \nonumber \\&\quad = \frac{-1}{4i\pi }\mathrm{tr}\bigg (C^{-1} \,\mathrm{d}C \wedge \Lambda ^{-1}\,\mathrm{d}\Lambda -C^{-1} \,\mathrm{d}C \wedge \Lambda C^{-1} \,\mathrm{d}C \Lambda ^{-1} +2i\pi \frac{L \,\mathrm{d}t}{\beta -t} \wedge \,\mathrm{d}L\bigg ) \;. \end{aligned}$$
(3.22)

Summing up (3.20) (the contribution of the integral) with (3.22) (the contribution coming from the vertex \(\beta =\beta _\ell \)), and using \(\Lambda ^{-1}\,\mathrm{d}\Lambda =2i\pi \,\mathrm{d}L\) (so that the terms proportional to \(\,\mathrm{d}t\) cancel), we get

$$\begin{aligned} (3.20) + (3.22) =\frac{1}{4i\pi }\mathrm{tr}\bigg (-2C^{-1} \,\mathrm{d}C \wedge \Lambda ^{-1}\,\mathrm{d}\Lambda +C^{-1} \,\mathrm{d}C \wedge \Lambda C^{-1} \,\mathrm{d}C \Lambda ^{-1}\bigg )\;. \end{aligned}$$

Then summing over all contributions from vertices \(\beta _\ell \) leads to (3.16).

Summarizing, the first term in (3.17) corresponds to the N-valent vertex. The second term in (3.17), together with the term (3.18), arises from the contributions of the cherries and the three-valent vertices formed by the cherries and their stems. \(\square \)

This theorem immediately implies the following corollary, which can also be deduced from previous results of [11].

Corollary 3.1

The form \(\omega _{\mathcal {M}}\) (3.16) is closed and non-degenerate on the monodromy manifold \({\mathcal {M}}\).

Strong version of the Its–Lisovyy–Prokhorov conjecture. Theorem 3.2 proves the “strong” version of the ILP conjecture (1.26). To state this conjecture in the present setting we consider the form (1.11) or (2.7) of [28], which we denote by \(\Theta _{ILP}\) to avoid confusion with the notations of this paper (see also the identity (4.20) below):

$$\begin{aligned} \Theta _{ILP}=\sum _{j<k}^N\mathrm{tr}A_j A_k d\log (t_j-t_k)+\sum _{j=1}^N \mathrm{tr}(L_j G_j^{-1} \,\mathrm{d}_{{\mathcal {M}}} G_j)\;. \end{aligned}$$
(3.23)

The conjecture from Section 1.6 of [28] refers to the restriction of this form to the symplectic leaves \(L_j=\mathrm{const}\). We refer to this as the weak Its–Lisovyy–Prokhorov conjecture; in this formulation \(\,\mathrm{d}_{{\mathcal {M}}}\) refers to the differential with respect to the connection matrices \(C_j\) only. This “weak” version of the conjecture is proved on the basis of the known results [3, 25, 33] in the next section.

The statement of Theorem 3.2 is the strong version of the above conjecture: in this version the differential \(\,\mathrm{d}_{{\mathcal {M}}}\) is with respect to all monodromy data including the \(L_j\)’s.

Generating function of the monodromy map. Since \(\omega _{\mathcal {M}}\) is closed, a symplectic potential exists locally. Denoting any such local potential by \(\theta _{\mathcal {M}}\) (such that \(\,\mathrm{d}\theta _{\mathcal {M}}=\omega _{\mathcal {M}}\)) we define the (local on \(\widetilde{{\mathcal {M}}}\)) generating function \({\mathcal {G}}\) as follows

$$\begin{aligned} \,\mathrm{d}{\mathcal {G}}=\sum _{k=1}^N\mathrm{tr}(L_k G_k^{-1} \,\mathrm{d}G_k)-\sum _{k=1}^N H_k \,\mathrm{d}t_k - \widetilde{\theta }_{\mathcal {M}}\end{aligned}$$
(3.24)

where \(G_k\) and \(H_k\) are considered as functions on \(\widetilde{{\mathcal {M}}}\) under the inverse monodromy map.

Equation (3.24) can be used to extend the definition of the Jimbo–Miwa tau-function to include its dependence on monodromies. Irrespective of the choice of \(\theta _{\mathcal {M}}\), the formula (3.11) implies the following theorem

Theorem 3.3

For any choice of symplectic potential \(\theta _{\mathcal {M}}\) on \({\mathcal {M}}\) the dependence of the generating function \({\mathcal {G}}\) (1.19) on \(\{t_j\}_{j=1}^N\) coincides with the \(t_j\)-dependence of the isomonodromic Jimbo–Miwa tau-function. In other words, \(e^{-{\mathcal {G}}}\tau _{JM}\) depends only on the monodromy data \(\{C_j,L_j\}_{j=1}^N\).

In Sect. 6 we are going to use this theorem to define the isomonodromic tau-function as the exponential of the generating function \({\mathcal {G}}\) under a special choice of the symplectic potential \(\theta _{\mathcal {M}}\) based on the use of Fock–Goncharov coordinates.

Remark 3.2

“Extended” character varieties with non-degenerate symplectic form were considered in the ’94 paper [29] and later in the paper [11]. In ([11] Corollary 1) it was proven that the pullback of a symplectic form from the extended monodromy manifold coincides with a symplectic form on the \((L_j, G_j)\) side. The description of the corresponding Poisson bracket, the construction of symplectic potentials, the Malgrange form, the tau-function and the coordinatization in terms of Fock–Goncharov parameters were not considered before, to the best of our knowledge.

4 Standard Monodromy Map and Weak Version of Its–Lisovyy–Prokhorov Conjecture

Here we show that the weak version of the Its–Lisovyy–Prokhorov conjecture can be derived in a simple way from previous results of [3, 25] or [33], where it was proved that the monodromy map is a symplectomorphism between the space of coefficients \(\{A_j\}\) of the Fuchsian equation (1.1) with a given set of eigenvalues and a symplectic leaf of the Goldman bracket.

First, consider the submanifold \({\mathcal {A}}_L\) of \({\mathcal {A}}\) such that the diagonal form of each of the matrices \(A_j\) is fixed:

$$\begin{aligned} {\mathcal {A}}_L=\left\{ \{A_i\}_{i=1}^N,\;\; A_i\in {{\mathcal {O}}} (L_i)\;,\;\;\sum _{i=1}^N A_i=0\right\} /\sim \end{aligned}$$
(4.1)

where \(\sim \) is the equivalence under the simultaneous adjoint transformation \(A_i\rightarrow SA_iS^{-1}\) of all \(A_i\) with \(S\in SL(n)\); \(L=(L_1,\dots ,L_N)\) where \(L_j\) is the diagonal form of \(A_j\) and \({{\mathcal {O}}}(L)\) is the (co)-adjoint orbit of the diagonal matrix L. We assume that the diagonal entries of each \(L_j\) do not differ by an integer.

Consider similarly also the space \({\mathcal {M}}_L\) which is the subspace of the SL(n) character variety of \(\pi _1({{\mathbb {P}}^1}\setminus \{t_j\}_{j=1}^N)\) such that the diagonal form of the matrix \(M_j\) equals \(\Lambda _j=e^{2\pi i L_j}\).

The Kirillov–Kostant brackets (2.19) for each \(A_j\):

$$\begin{aligned} \{{\mathop {A}\limits ^1}_j,{\mathop {A}\limits ^2}_k\} =[{\mathop {A}\limits ^1}_j,P]\;\delta _{jk} \end{aligned}$$
(4.2)

can be equivalently rewritten in the r-matrix form

$$\begin{aligned} \{\mathop {A}\limits ^1(z){\,,\, }\mathop {A}\limits ^2(w)\} =\frac{1}{z-w}\,[P, \mathop {A}\limits ^1(z) + \mathop {A}\limits ^2(w)]\;. \end{aligned}$$
(4.3)
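
For illustration, the equivalence of (4.2) and (4.3) follows from the identity \({\mathop {A}\limits ^1}_j P=P\,{\mathop {A}\limits ^2}_j\) (so that \([{\mathop {A}\limits ^1}_j,P]=[P,{\mathop {A}\limits ^2}_j]\)) combined with the partial fraction decomposition of \(\frac{1}{(z-t_j)(w-t_j)}\):

$$\begin{aligned} \sum _{j=1}^N \frac{[{\mathop {A}\limits ^1}_j,P]}{(z-t_j)(w-t_j)} =\frac{[\mathop {A}\limits ^1(z),P]-[P,\mathop {A}\limits ^2(w)]}{w-z} =\frac{1}{z-w}\,[P, \mathop {A}\limits ^1(z) + \mathop {A}\limits ^2(w)]\;. \end{aligned}$$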

The Schlesinger equations for \(A_j=G_j L_j G_j^{-1}\) which follow from the system (1.20) for \(G_j\) take the form:

$$\begin{aligned} \frac{\partial A_k}{\partial t_j}= \frac{[A_k, A_j]}{t_k-t_j}\;,\qquad j\ne k \;; \qquad \frac{\partial A_j}{\partial t_j}=-\sum _{k\ne j} \frac{[A_k, A_j]}{t_k-t_j}\;. \end{aligned}$$
(4.4)

These equations are Hamiltonian,

$$\begin{aligned} \frac{\partial A_k}{\partial t_j}=\{H_j, A_k\}\;, \end{aligned}$$

with the Poisson structure given by (4.3) and the (time-dependent) Hamiltonians \(H_j\) defined by (1.17). Notice that these Hamiltonians commute, \(\{H_k,H_j\}=0\), and satisfy the equations \(\partial _{t_k} H_j = \partial _{t_j} H_k\).
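
As an elementary illustration of the last property, for \(N=3\) a direct computation using (4.4) and the cyclic invariance of the trace gives

$$\begin{aligned} \partial _{t_2} H_1 =\partial _{t_1} H_2 = \frac{\mathrm{tr}\,A_1 A_2}{(t_1-t_2)^2}\;, \end{aligned}$$

all the terms containing the triple products \(\mathrm{tr}(A_1A_2A_3)\) and \(\mathrm{tr}(A_1A_3A_2)\) cancelling after reduction to a common denominator.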

After the symplectic reduction to the space of orbits of the global \(Ad_{GL(n)}\) action and restriction to the level set \(\sum _{j=1}^N A_j=0\) of the corresponding moment map one gets a degenerate Poisson structure; its symplectic leaves coincide with \({\mathcal {A}}_L\) [25]. The symplectic form on \({\mathcal {A}}_L\) can be written as

$$\begin{aligned} \omega ^L_{{\mathcal {A}}}=-\sum _{k=1}^N\mathrm{tr}(L_k G_k^{-1} dG_k\wedge G_k^{-1} dG_k) \;. \end{aligned}$$
(4.5)

The form (4.5) is independent of the choice of matrices \(G_j\) which diagonalize \(A_j\); moreover, it is invariant under simultaneous transformation \(A_j\rightarrow S A_j S^{-1}\) and thus it is indeed defined on the space \({\mathcal {A}}_L\).

The SL(n) character variety is equipped with the Poisson structure given by the Goldman bracket defined as follows (see p.266 of [24]): for any two loops \(\sigma ,\widetilde{\sigma }\in \pi _1({{\mathbb {P}}^1}\setminus \{t_i\}_{i=1}^N)\) the Poisson bracket between the traces of the corresponding monodromies is given by

$$\begin{aligned} \Big \{\mathrm{tr}M_\sigma ,\;\mathrm{tr}M_{\widetilde{\sigma }}\Big \}_G =\sum _{p\in \sigma \cap \widetilde{\sigma }} \nu (p) \,\left( \mathrm{tr}(M_{\sigma _p\widetilde{\sigma }}) -\frac{1}{n} \mathrm{tr}M_\sigma \mathrm{tr}M_{\widetilde{\sigma }} \right) \;. \end{aligned}$$
(4.6)

where \(\nu (p)=\pm 1\) is the contribution of point p to the intersection index of \(\sigma \) and \(\widetilde{\sigma }\).

The space \({\mathcal {M}}_L\) is a symplectic leaf of the SL(n) Goldman bracket; Goldman’s symplectic form on \({\mathcal {M}}_L\) coincides with \(-\frac{1}{2}\omega _1\) [2], where \(\omega _1\) is defined in (3.17). We define

$$\begin{aligned} \omega _{\mathcal {M}}^L=\frac{1}{4\pi i} \omega _1\;. \end{aligned}$$
(4.7)

The study of the symplectic properties of the map (1.8) was initiated in [3, 25, 33]. In [3, 25] two different proofs were given of the fact that the monodromy map \({\mathcal {F}}^t\) is a symplectomorphism i.e.

$$\begin{aligned} ({\mathcal {F}}^t)^{*} \omega _{\mathcal {M}}^L = \omega _{\mathcal {A}}^L\;. \end{aligned}$$
(4.8)

In [33] the brackets between the monodromy matrices themselves were obtained starting from (4.3); the result is given by

$$\begin{aligned}&\{{\mathop {M}\limits ^1}_i, {\mathop {M}\limits ^2}_i \}^{*} =\pi i\,P({\mathop {M}\limits ^1}_i{\mathop {M}\limits ^1}_i -{\mathop {M}\limits ^2}_i{\mathop {M}\limits ^2}_i)\;, \end{aligned}$$
(4.9)
$$\begin{aligned}&\{{\mathop {M}\limits ^1}_i , {\mathop {M}\limits ^2}_j\}^{*} =\pi i \,P \,\Big ( {\mathop {M}\limits ^1}_j {\mathop {M}\limits ^1}_i +{\mathop {M}\limits ^2}_i {\mathop {M}\limits ^2}_j -{\mathop {M}\limits ^1}_i {\mathop {M}\limits ^2}_j - {\mathop {M}\limits ^1}_j {\mathop {M}\limits ^2}_i \Big )\;, \qquad i< j \end{aligned}$$
(4.10)

where P is the permutation matrix of the two factors. The brackets (4.9), (4.10) were computed for the basepoint \(z_0=\infty \) on the level set \(\sum _{j=1}^N A_j=0\) of the moment map; thus the algebra (4.9), (4.10) does not satisfy the Jacobi identity. However, the Jacobi identity is restored for the algebra of Ad-invariant objects, i.e. for traces of monodromies; moreover, for any two loops \(\sigma \) and \(\widetilde{\sigma }\) we have ([40]; see also Thm. 5.2 of [14] where this statement was proved in the case \(n=4\), \(N=2\)):

$$\begin{aligned} \{\mathrm{tr}M_\sigma ,\mathrm{tr}M_{\widetilde{\sigma }}\}^{*} =-2\pi i\{\mathrm{tr}M_\sigma ,\mathrm{tr}M_{\widetilde{\sigma }}\}_G \end{aligned}$$
(4.11)

which gives an alternative proof of (4.8).

Let us now show that (4.8) implies the weak version of the Its–Lisovyy–Prokhorov conjecture. Similarly to (1.14) and (1.15) we introduce the two spaces

$$\begin{aligned} \widetilde{{\mathcal {A}}}_L= & {} \Big \{(p,\{t_j\}_{j=1}^N)\;,\; p\in {\mathcal {A}}_L, \; t_j\in {\mathbb {C}},\; t_j\ne t_k\Big \}\;, \end{aligned}$$
(4.12)
$$\begin{aligned} \widetilde{{\mathcal {M}}}_L= & {} \Big \{(p,\{t_j\}_{j=1}^N)\;,\; p\in {\mathcal {M}}_L, \; t_j\in {\mathbb {C}},\; t_j\ne t_k\Big \}\;. \end{aligned}$$
(4.13)

Denote the pullback of the form \(\omega _{{\mathcal {A}}}^L\) with respect to the natural projection of \(\widetilde{{\mathcal {A}}}_L\) to \({\mathcal {A}}_L\) by \(\widetilde{\omega }_{{\mathcal {A}}}^L\) and the pullback of the form \(\omega _{{\mathcal {M}}}\) with respect to the natural projection of \(\widetilde{{\mathcal {M}}}_L\) to \({\mathcal {M}}_L\) by \(\widetilde{\omega }_{{\mathcal {M}}}^L\).

Proposition 4.1

The following identity holds between two-forms on \(\widetilde{{\mathcal {A}}}_L\):

$$\begin{aligned} \widetilde{{\mathcal {F}}}^{*} [\widetilde{\omega }_{{\mathcal {M}}}^L]= \widetilde{\omega }_{{\mathcal {A}}}^L -\sum _{k=1}^N \,\mathrm{d}H_k\wedge \,\mathrm{d}t_k \end{aligned}$$
(4.14)

where \( H_k\) are the Hamiltonians (1.17).

Proof

Denote by 2d the dimension of the spaces \({{\mathcal {A}}}_L\) and \({{\mathcal {M}}}_L\). Introduce some local Darboux coordinates \((p_i,q_i)\) on \({{\mathcal {A}}}^L\) for the form \(\omega ^L_{\mathcal {A}}\) (4.5) and also some Darboux coordinates \((P_i,Q_i)\) on \({{\mathcal {M}}}^L\) for the form \(\omega ^L_{\mathcal {M}}\) given by (4.7).

We are going to verify (4.14) using coordinates \(\{t_j\}_{j=1}^N\) and \(\{P_j,Q_j\}_{j=1}^d\). Let us split the operator \(\,\mathrm{d}\) into two parts:

$$\begin{aligned} \,\mathrm{d}=\,\mathrm{d}_t + \,\mathrm{d}_{\mathcal {M}}\end{aligned}$$

where \(\,\mathrm{d}_{\mathcal {M}}\) is the differential with respect to \(\{P_j,Q_j\}_{j=1}^d\). Then relation (4.8) can be written as

$$\begin{aligned} \sum _{j=1}^d \,\mathrm{d}P_j\wedge \,\mathrm{d}Q_j=\sum _{i=1}^d \,\mathrm{d}_{\mathcal {M}}p_i\wedge \,\mathrm{d}_{\mathcal {M}}q_i\;. \end{aligned}$$
(4.15)

The right-hand side can be further rewritten using the Hamilton equations \(\frac{\partial {p_i}}{\partial {t_k}}=-\frac{\partial H_k}{\partial q_i}\); \(\frac{\partial {q_i}}{\partial {t_k}}=\frac{\partial H_k}{\partial p_i}\) (where the Hamiltonians \(H_k\) are given by (1.17)). Using

$$\begin{aligned} \,\mathrm{d}_{\mathcal {M}}p_i =\,\mathrm{d}p_i+ \sum _{k=1}^N \frac{\partial H_k}{\partial q_i} \,\mathrm{d}t_k \; \qquad \,\mathrm{d}_{\mathcal {M}}q_i =\,\mathrm{d}q_i- \sum _{k=1}^N \frac{\partial H_k}{\partial p_i} \,\mathrm{d}t_k \end{aligned}$$

one gets

$$\begin{aligned}&\sum _{i=1}^d \,\mathrm{d}_{\mathcal {M}}p_i\wedge \,\mathrm{d}_{\mathcal {M}}q_i=\sum _{i=1}^d \,\mathrm{d}p_i \wedge \,\mathrm{d}q_i + \sum _{k=1}^N \,\mathrm{d}t_k\wedge \sum _{i=1}^d \left( \frac{\partial H_k}{\partial q_i} \,\mathrm{d}q_i +\frac{\partial H_k}{\partial p_i} \,\mathrm{d}p_i \right) \nonumber \\&-\sum _{\ell <k=1}^{N} \sum _{i=1}^d \left( \frac{\partial H_\ell }{\partial q_i} \frac{\partial H_k}{\partial p_i}- \frac{\partial H_\ell }{\partial p_i} \frac{\partial H_k}{\partial q_i}\right) \,\mathrm{d}t_\ell \wedge \,\mathrm{d}t_k\;. \end{aligned}$$
(4.16)

To simplify the second sum in (4.16) we recall that

$$\begin{aligned} \,\mathrm{d}H_k= \sum _{i=1}^d \left( \frac{\partial H_k}{\partial q_i} \,\mathrm{d}q_i +\frac{\partial H_k}{\partial p_i} \,\mathrm{d}p_i \right) +\sum _{\ell =1}^N \frac{\partial H_k}{\partial t_\ell }\Big |_{p_i,q_i=const} \,\mathrm{d}t_\ell \;; \end{aligned}$$

thus the second sum can be written as

$$\begin{aligned} \sum _{k=1}^N \,\mathrm{d}t_k\wedge \,\mathrm{d}H_k+\sum _{l,k,\, l<k} \left( \frac{\partial H_k}{\partial t_l}\Big |_{p,q}-\frac{\partial H_l}{\partial t_k} \Big |_{p,q}\right) \,\mathrm{d}t_l\wedge \,\mathrm{d}t_k\;. \end{aligned}$$

Adding all the terms in (4.16) we obtain

$$\begin{aligned} \sum _{j=1}^{d} \,\mathrm{d}P_j\wedge \,\mathrm{d}Q_j= & {} \sum _{i=1}^d \,\mathrm{d}p_i\wedge \,\mathrm{d}q_i +\sum _{k=1}^N \,\mathrm{d}t_k \wedge \,\mathrm{d}H_k \\&-\sum _{\ell <k} \left( \frac{\partial H_\ell }{\partial t_k}\Big |_{p,q} -\frac{\partial H_k}{\partial t_\ell }\Big |_{p,q} + \big \{H_k, H_\ell \big \}\right) \,\mathrm{d}t_\ell \wedge \,\mathrm{d}t_k\;. \end{aligned}$$

The coefficient of \(\,\mathrm{d}t_\ell \wedge \,\mathrm{d}t_k\) vanishes because the Hamiltonians satisfy the zero–curvature equations implied by the commutativity of the flows with respect to \(t_k\) and \(t_\ell \); in fact in this particular case they satisfy a stronger compatibility: \(\{H_k ,H_\ell \}=0\) and \(\partial _{t_\ell } H_k = \partial _{t_k}H_\ell \). Therefore we arrive at (4.14). \(\square \)

Let us show that (4.14) implies

Proposition 4.2

(Weak ILP conjecture). The following identity holds on the space \(\widetilde{{\mathcal {M}}}_L\):

$$\begin{aligned} \,\mathrm{d}\Theta _{ILP}^L = \widetilde{\omega }_{{\mathcal {M}}}^L \end{aligned}$$
(4.17)

where

$$\begin{aligned} \Theta _{ILP}^L=\sum _{j<k}^N\mathrm{tr}A_j A_k \,\mathrm{d}\log (t_j-t_k) +\sum _{j=1}^N \mathrm{tr}(L_j G_j^{-1} \,\mathrm{d}_{\mathcal {M}}G_j) \end{aligned}$$
(4.18)

and the matrices \(G_j\) diagonalizing \(A_j\) are chosen to satisfy the Schlesinger equations (1.20); \(\,\mathrm{d}_{\mathcal {M}}\) denotes the differential with respect to the monodromy coordinates. The form \( \Theta _{ILP}^L \) is the “weak” version of the form (1.26). The form \(\widetilde{\omega }_{{\mathcal {M}}}^L \) is the pullback of the Alekseev–Malkin form (4.7) from \({\mathcal {M}}_L\) to \(\widetilde{{\mathcal {M}}}_L\).

Proof

The symplectic potential for the form \(\tilde{\omega }_{\mathcal {A}}^L\) can be written as

$$\begin{aligned} \tilde{\theta }_{\mathcal {A}}^L=\sum _{j=1}^N \mathrm{tr}[L_j G_j^{-1} (\,\mathrm{d}_t+\,\mathrm{d}_{\mathcal {M}}) G_j]\;. \end{aligned}$$
(4.19)

We notice that the potential \(\tilde{\theta }_{\mathcal {A}}^L\), in contrast to the form \(\tilde{\omega }_{\mathcal {A}}^L\) itself, is not well-defined on the space \(\widetilde{{\mathcal {A}}}_L\) due to the ambiguity \(G_j\rightarrow G_j D_j\), with diagonal \(D_j\), in the definition of \(G_j\). Under such a transformation \(\tilde{\theta }_{\mathcal {A}}^L\) changes by an exact form. Therefore, for the purpose of proving (4.17) one can pick any concrete representative for each \(G_j\). The most natural choice is to assume that the \(G_j\) satisfy the system (1.20). Then the “t”-part of the potential (4.19) can be computed using (1.20) and the definition of the Hamiltonians (1.17) to give

$$\begin{aligned} \sum _{j=1}^N\mathrm{tr}(L_j G_j^{-1} \,\mathrm{d}_t G_j) = 2\sum _{j=1}^N H_j \,\mathrm{d}t_j\;. \end{aligned}$$
(4.20)

Therefore, the relation (4.14) can be rewritten as

$$\begin{aligned} \widetilde{{\mathcal {F}}}^{*} [\widetilde{\omega }_{{\mathcal {M}}}^L]= \,\mathrm{d}\left( \sum _{k=1}^N H_k\,\mathrm{d}t_k +\sum _{j=1}^N\mathrm{tr}(L_j G_j^{-1}\,\mathrm{d}_{\mathcal {M}}G_j)\right) \end{aligned}$$
(4.21)

which coincides with (4.17). \(\square \)
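
Note that, by the definition (1.17) of the Hamiltonians, the first sum in (4.18) equals \(\sum _{k=1}^N H_k\,\mathrm{d}t_k\):

$$\begin{aligned} \sum _{j<k}^N\mathrm{tr}A_j A_k \,\mathrm{d}\log (t_j-t_k) =\sum _{k=1}^N \,\mathrm{d}t_k \sum _{j\ne k}\frac{\mathrm{tr}A_j A_k }{t_k-t_j} =\sum _{k=1}^N H_k \,\mathrm{d}t_k\;, \end{aligned}$$

so that the right-hand side of (4.21) is precisely \(\,\mathrm{d}\Theta _{ILP}^L\).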

Comparison of weak and strong ILP conjectures. In spite of the formal similarity, there is a significant difference between the statements of the weak and strong ILP conjectures. In the strong version the form \(\sum \mathrm{tr}(L_j \,\mathrm{d}G_j G_j^{-1})\) is a well-defined form on the phase space \({\mathcal {A}}\) as well as on its extension \(\widetilde{{\mathcal {A}}}\).

In the weak version the same form is not defined on the space \({\mathcal {A}}^L\): to get the equality (4.17) one needs to take the residues \(A_j\) (which are given by a point of \({\mathcal {A}}^L\) up to conjugation) and then diagonalize each \(A_j\) as \(G_j L_j G_j^{-1}\) in a way which is non-local in the times \(t_j\): the matrices \(G_j\) themselves must satisfy the Schlesinger system (1.20). This requirement cannot be satisfied staying entirely within the space \({\mathcal {A}}^L\), and thus the \(G_j\) cannot be chosen as functionals of the \(A_j\) only; their choice encodes a highly non-trivial \(t_j\)-dependence which fixes the freedom in the right multiplication of each \(G_j\) by a diagonal matrix, which can also be time-dependent.

The strong version of the ILP conjecture (Theorem 3.2) is a stronger statement since the form \(\theta _{\mathcal {A}}\) is a 1-form defined on the underlying phase space.

5 Log–Canonical Coordinates and Symplectic Potential

Here we summarize the results of [8] where the form \(\omega _{\mathcal {M}}\) was expressed in \(\log \)-canonical form on an open subspace of highest dimension of \({\mathcal {M}}\) using the (extended) system of Fock–Goncharov coordinates [21]. This allows us to find the corresponding symplectic potential and use it in the definition of the tau-function.

5.1 Fock–Goncharov coordinates

To define the Fock–Goncharov coordinates we introduce the following auxiliary graphs (see Fig. 3):

  1.

    The graph \(\Sigma _0\) with N vertices \(v_{1},\dots , v_{N}\) which defines a triangulation of the N-punctured sphere; we assume that each vertex \(v_j\) lies in a small neighbourhood of the corresponding pole \(t_j\). Since \(\Sigma _0\) is a triangulation there are \(2N-4\) faces \(\{f_k\}_{k=1}^{2N-4}\) and \(3N-6\) edges \(\{e_k\}_{k=1}^{3N-6}\); the edges are assumed to be oriented.

  2.

    Consider a small loop around each \(t_k\) (the cherry) and attach it to the vertex \(v_k\) by an edge (the stem of the cherry). The cherries are assumed to not intersect the edges of \(\Sigma _0\). The union of \(\Sigma _0\), the stems and the cherries is denoted by \(\Sigma _1\).

    The graph \(\Sigma _1\) is fixed by \(\Sigma _0\) if one chooses the ciliation at each vertex of the graph \(\Sigma _0\); the ciliation determines the position of the stem of the corresponding cherry.

  3.

    Choose a point \(p_f\) inside each face \(f_k\) of \(\Sigma _0\) and connect it by edges \({\mathcal {E}}_{f}^{(i)}\), \(i=1,2,3\) to the vertices of the face, oriented towards the point \(p_{f}\). We will denote by \(\Sigma \) the graph obtained by augmenting \(\Sigma _1\) with these new edges. It is the graph \(\Sigma \) which will be used to compute the form \(\omega _{\mathcal {M}}\).

We will make use of the following notations: by \(\alpha _i\), \(i=1,\dots , n-1\) we denote the simple positive roots of SL(n); by \(\mathrm{h}_i\) we denote the dual roots:

$$\begin{aligned}&\alpha _i:= \mathrm{diag}\big ( 0,\dots ,0,\mathop {1}\limits ^{i\mathrm{-th}},-1,0,\dots ,0\big ), \nonumber \\&\qquad \mathrm{h}_i := \left( \begin{array}{cc} (n-i)\mathbf{1 }_i &{} 0\\ 0&{} -i\mathbf{1 }_{n-i} \end{array}\right) , \qquad \mathrm{tr}(\alpha _i \mathrm{h}_k) = n \delta _{ik}\;. \end{aligned}$$
(5.1)

For any matrix M we define \(M^\star := \mathbf{P}M\mathbf{P}\) where \(\mathbf{P}\) is the “long permutation” in the Weyl group,

$$\begin{aligned} \mathbf{P}_{ab}= \delta _{a,n+1-b}\;. \end{aligned}$$

In particular \(\alpha _i^\star = -\alpha _{n-i}\), \(\mathrm{h}_i^\star =-{\mathrm h}_{n-i}\;.\) Let

$$\begin{aligned} \sigma =\mathrm{diag} (1,-1,1,-1,\dots ) \end{aligned}$$

be the signature matrix.

Introduce the \((n-1)\times (n-1)\) matrix \({\mathbb {G}}\) given by

$$\begin{aligned} {\mathbb {G}}_{jk}=\mathrm{tr}(\mathrm{h}_j \mathrm{h}_k)=n^2\left( \mathrm{min}(j,k) -\frac{jk}{n}\right) \;. \end{aligned}$$
(5.2)

The matrix \({\mathbb {G}}\) coincides with \(n^2 A_{n-1}^{-1}\) with \(A_{n-1}\) being the Cartan matrix of SL(n).
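
For example, in the first two cases (5.2) gives

$$\begin{aligned} SL(2):\ \ {\mathbb {G}}=(2)\;;\qquad SL(3):\ \ {\mathbb {G}}=\left( \begin{array}{cc} 6&{}\quad 3\\ 3&{}\quad 6\end{array}\right) \;, \end{aligned}$$

in agreement with \(n^2 A_{n-1}^{-1}\).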

Fig. 3

The support of the jump matrices J. The graph \(\Sigma _0\) is in black (the triangulation)

The full set of coordinates on \({\mathcal {M}}\) consists of three groups: the coordinates assigned to vertices of the graph \(\Sigma _0\), to its edges and faces. Below we describe these three groups separately and use them to parametrize the jump matrices of the Riemann–Hilbert problem on the graph \(\Sigma \).

Edge coordinates and jump matrices on \(e_j\). To each edge \(e \in E(\Sigma _0)\) we associate \(n-1\) non-vanishing variables

$$\begin{aligned} {\varvec{z}} = \varvec{z}_e = ( z_1, \dots , z_{n-1} ) \in ({\mathbb {C}}^\times )^{n-1} \end{aligned}$$
(5.3)

and introduce their logarithmic counterparts:

$$\begin{aligned} \mathbf{\zeta }=\mathbf{\zeta }_e = (\zeta _{1},\dots , \zeta _{n-1}) \in {\mathbb {C}}^{n-1}\;, \qquad \zeta _j= \frac{1}{n}\log z_j^n\;. \end{aligned}$$
(5.4)

The jump matrix on the oriented edge \(e\in \mathbf{E}(\Sigma _0)\) is given by (see [38])

$$\begin{aligned} S(\mathbf{z}) = \mathbf{z}^{-\mathbf{h}}\,\mathbf{P}\,\sigma \end{aligned}$$
(5.5)

where \(h_i\) are the dual roots (5.1). For the inverse matrix we have

$$\begin{aligned} S^{-1}(\mathbf{z}) =\sigma \mathbf{P}\mathbf{z}^\mathbf{h} = (-1)^{n-1} \mathbf{z}^\mathbf{h^\star } \mathbf{P}\sigma \;. \end{aligned}$$

The notation \(\mathbf{z}^\mathbf{h}\) stands for

$$\begin{aligned} \mathbf{z}^\mathbf{h} = z_1^{h_1}\dots z_{n-1}^{h_{n-1}}\;. \end{aligned}$$
(5.6)

The sets of variables (5.3), (5.4) corresponding to an oriented edge e of \(\Sigma _0\) and the opposite edge \(-e\) are related as follows:

$$\begin{aligned} {\varvec{\zeta }}_{-e} = (\zeta _{e,n-1} ,\dots , \zeta _{e,1}) \;; \qquad \mathbf{z}_{-e} :=(-1)^{n-1} (z_{e,n-1}, \dots , z_{e,1})\;. \end{aligned}$$
(5.7)

Face coordinates and jump matrices on \({\mathcal {E}}_{f}^{(i)}\). To each face \(f\in F(\Sigma _0)\) (i.e. a triangle of the original triangulation) we associate \(\frac{(n-1)(n-2)}{2}\) variables \({\varvec{\xi }}_f = \{\xi _{f;\,abc}:\ \ a, b, c\in {\mathbb {N}},\ \ \ a + b + c= n\}\) and their exponential counterparts \(x_{f;\,abc}:= \mathrm{e}^{\xi _{f;\,abc}}\) as follows.

The variables \(\xi _{f;\,abc}\) define the jump matrices \(A_{i}({\varvec{\xi }}_f)\) on the three edges \(\{{\mathcal {E}}_{f}^{(i)}\}_{i=1}^3\), which connect a chosen point \(p_f\) in each face f of the graph \(\Sigma _0\) with its three vertices (these edges are shown in red in Fig. 3). The enumeration of the vertices \(v_1\), \(v_2\) and \(v_3\) is chosen arbitrarily for each face f. Namely, for a given vertex v and a face f of \(\Sigma _0\) such that \(v\in \partial f\) we define the index \(f(v) \in \{1,2,3\}\) depending on the enumeration that we have chosen for the three edges \(\{{\mathcal {E}}_{f}^{(i)}\}\) lying in the face f. For example, in Fig. 3, for the face f containing the point \(p_i\) we define \(f(v_\ell )=1\), \(f(v_k)=3\) and \(f(v_s) = 2\).

The matrices \(A_{1,2,3}(\varvec{\xi }_f)\) are defined following [21]. First, the matrix \(A_1\) is defined by the formula

$$\begin{aligned} A_1(\mathbf{x}) =\sigma \left( \prod _{k=n-1}^1 N_k\right) \mathbf{P}\;, \end{aligned}$$
(5.8)

where \(E_{ik}\) are the elementary matrices and

$$\begin{aligned}&F_i= \mathbf{1 } + E_{i+1,i}\;,\qquad H_i(x) := x^{-\mathrm{h}_i} \nonumber \\&=\mathrm{diag} (\overbrace{x^{i-n},\dots , x^{i-n}}^{i\ \mathrm{times}}, x^i,\dots ,x^i) \; ,\ \ \ \ \ i=1,\dots , n-1\;; \end{aligned}$$
(5.9)
$$\begin{aligned}&N_k= \left( \prod _{k\le i \le n-2}H_{i+1}(x_{n-i-1, i-k+1, k}) F_i\right) F_{n-1}\;. \end{aligned}$$
(5.10)

The matrices \(A_2\) and \(A_3\) are obtained from \(A_1\) by cyclically permuting the indices of the variables:

$$\begin{aligned} A_2({\varvec{x}}) = A_1(\{x_{bca}\})\;, \qquad A_3({\varvec{x}}) = A_1(\{x_{cab}\})\;; \end{aligned}$$
(5.11)

the important property of the matrices \(A_i\) is the equality

$$\begin{aligned} A_1 A_2 A_3 =\mathbf{1 } \end{aligned}$$
(5.12)

which guarantees the triviality of total monodromy around the point \(p_f\) on each face f. In the first two non-trivial cases the matrices \(A_i\) have the following forms:

SL(2): there are no face variables and all matrices \(A_i=A\) are given by

$$\begin{aligned} A=\left( \begin{array}{cc} 0&{}1\\ -1&{}-1 \end{array} \right) \;. \end{aligned}$$
(5.13)
SL(3): there is one parameter \(x=x_{111}\) for each face. The matrices \(A_1,A_2\) and \(A_3\) coincide in this case, too; they are given by

$$\begin{aligned} A(x) = \frac{1}{x} \left( \begin{array}{ccc} 0&{}\quad 0&{}\quad 1\\ 0&{}\quad -1&{}\quad -1 \\ {x}^{3}&{}{x}^{3}+1&{}1 \end{array}\right) \; . \end{aligned}$$
(5.14)
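
As a quick consistency check of (5.12) in these two cases: for SL(2) one finds

$$\begin{aligned} A^2=\left( \begin{array}{cc} -1&{}\quad -1\\ 1&{}\quad 0 \end{array}\right) \;,\qquad A^3=\mathbf{1 }\;, \end{aligned}$$

while for SL(3) a direct multiplication of (5.14) gives \(A(x)^3=\mathbf{1 }\) for every \(x\ne 0\).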

Jump matrices on stems. The jump matrix on the stem of the cherry connected to a vertex v is determined by the triviality of the total monodromy around v.

For each vertex v of \(\Sigma _0\) of valence \(n_v\) the jump matrix on the stem of the cherry attached to v is given by

$$\begin{aligned} M_v^{0} = \left( \prod _{i=1}^{n_v} A_{f_i} S_{e_i} \right) ^{-1} \end{aligned}$$
(5.15)

where \(f_1,\dots , f_{n_v}\) and \(e_1,\dots e_{n_v}\) are the faces/edges ordered counterclockwise starting from the stem of the cherry, with the edges oriented away from the vertex (using if necessary the formula (5.7)). Since each product \(A_{f_i} S_{e_i}\) is a lower triangular matrix, the matrices \(M_v^0\) are also lower–triangular. The diagonal part of \(M^0_v\) will be denoted by \(\Lambda _v\) and parametrized as follows:

$$\begin{aligned} \Lambda _v = \mathrm{diag} \left( m_{v;1} , \frac{m_{v;2}}{m_{v;1}}, \dots , \frac{m_{v;n-1}}{m_{v;n-2}}, \frac{1}{m_{v;n-1}}\right) . \end{aligned}$$
(5.16)

Notice that the relation (5.16) can also be written as \(\Lambda _v=\mathbf{m}_v^{\varvec{\alpha }}=\prod _{j=1}^{n-1} m_{v;j}^{\alpha _j}\) where \(\alpha _j\) are the roots (5.1).

In order to express \(\Lambda _v\) in terms of the \(\zeta \)- and \(\xi \)-coordinates, we enumerate the faces and edges incident at the vertex v by \(f_1,\dots , f_{n_v}\) and \(e_1,\dots , e_{n_v}\), respectively. We assume the edges to be oriented away from v using (5.7). We also assume without loss of generality that the arc \({\mathcal {E}}_{f_j}^{(1)}\) is the one connected to the vertex v for all \(j=1,\dots , n_v\). Then (see (4.20) of [8]) we have

$$\begin{aligned} \Lambda _v = \mathrm{e}^{2i\pi L_v}= \mathbf{P}\left( \prod _{f\perp v} \mathbf{x}_f^{\mathbf{h_1}}\right) \left( \prod _{e\perp v} \mathbf{z}_{e}^{\mathbf{h}} \right) \mathbf{P}\end{aligned}$$
(5.17)

Introduce now the variables \(\mu _{v;\ell }\) via

$$\begin{aligned} \mu _{v;n-\ell } = \sum _{f\perp v} \sum _{\begin{array}{c} a+b+c=n\\ a,b,c\ge 1 \end{array}} \xi _{f;abc}\, {\mathbb {G}}_{a\ell } + \sum _{e\perp v} \sum _{j=1}^{n-1} \zeta _{e;j}\, {\mathbb {G}}_{j\ell } \end{aligned}$$
(5.18)

where the matrix \({\mathbb {G}}\) equals \(n^2\) times the inverse Cartan matrix [see (5.2)].

The relationship between \(\mu \)’s and variables \(m_v\) is

$$\begin{aligned} m_{v;\ell }^n = \mathrm{e}^{n\mu _{v;\ell }} \end{aligned}$$

i.e. \(\mu _{v;\ell }\) defines \(m_{v;\ell }\) up to an nth root of unity. Therefore, the entries \(\lambda _{v;j}\) of the diagonal matrices \(L_v\) are related to \(\mu _{v;j}\) as follows:

$$\begin{aligned} \lambda _{v;j}\equiv \frac{1}{2\pi i} (\mu _{v;j}-\mu _{v;j-1}) \quad (\mathrm{mod}\;\; {\mathbb {Z}}) \;,\qquad j=1,\dots ,n\;. \end{aligned}$$
(5.19)

Vertex coordinates and jump matrices on cherries. To each vertex v of the graph \(\Sigma _0\) we associate a set of \(n-1\) non-vanishing complex numbers \(r_{v;i} \), \(i=1,\dots ,n-1\) in the following way.

Since the matrix \(M^0_v\) is lower-triangular it can be diagonalized by a lower-triangular matrix \(C_v^0\) such that all diagonal entries of \(C_v^0\) equal 1:

$$\begin{aligned} M^0_v=C_v^0\Lambda _v (C_v^0)^{-1}\;. \end{aligned}$$
(5.20)

Any other lower-triangular matrix \(C_v\) diagonalizing \(M^0_v\) can be written as

$$\begin{aligned} C_v= C_v^0 R_v \end{aligned}$$
(5.21)

where the matrix \(R_v\) (which equals the diagonal part of \(C_v\), \(R_v=(C_v)^D\)) is parametrized by \(n-1\) variables \(r_1, \dots , r_{n-1}\) and their logarithmic counterparts

$$\begin{aligned} \rho _i = \log r_i \;,\qquad i=1,\dots , n-1\;, \end{aligned}$$

as follows (we omit the index v below):

$$\begin{aligned}&R= \prod _{i=1}^{n-1} r_{i}^{ {\mathrm{h}_i}} = \mathbf{r} ^\mathbf{h} =\left( \prod _{i=1}^{n-1} r_i^i\right) ^{-1} \mathrm{diag}\nonumber \\&\quad \Big (\prod _{i=1}^{n-1} r_i^n, \,\prod _{i=2}^{n-1} r_i^n\;,\, \dots ,r_{n-2}^nr_{n-1}^n,\, r_{n-1}^n,\, 1\Big )\;. \end{aligned}$$
(5.22)
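
For instance, (5.22) gives

$$\begin{aligned} SL(2):\ \ R=\mathrm{diag}\,(r_1,\, r_1^{-1})\;;\qquad SL(3):\ \ R=\mathrm{diag}\,(r_1^2 r_2,\; r_1^{-1}r_2,\; r_1^{-1}r_2^{-2})\;. \end{aligned}$$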

The jump on the boundary of the cherry is defined to be

$$\begin{aligned} J_v=C_v (z-t_v)^{-L_v}\;. \end{aligned}$$
(5.23)

The point of discontinuity of the function \(J_v\) on the boundary of the cherry is assumed to coincide with the point where the stem is connected to the cherry (this point is denoted by \(\beta \) in Fig. 4).

Fig. 4
figure 4

On the contribution of one of the loops to the form \(\,\mathrm{d}\Theta \)

Fig. 5
figure 5

Local triangular monodromy \(M_{v_j}^0\) and global monodromy \(M_j = T_j M_{v_j}^0 T_j^{-1}\)

5.2 Parametrization of the space \({\mathcal {M}}\)

The set of jump matrices on the graph \(\Sigma \) constructed in the previous section can be used to parametrize the space \({\mathcal {M}}\). Recall that the vertices of the graph \(\Sigma _0\) are in one-to-one correspondence with the points \(t_j\); thus the vertex connected to the cherry around \(t_j\) will be denoted by \(v_j\). To construct the monodromy map as an SL(n) representation of \(\pi _1({{\mathbb {P}}^1}\setminus \{t_1,\dots , t_N\}, \infty )\) we topologically identify the punctured sphere with the complement of connected and simply connected neighbourhoods of the \(t_j\)’s which also contain the distal vertex of the stem. The fundamental groups of the punctured sphere and of the sphere with these neighbourhoods deleted are the same. Equivalently, for an element of the fundamental group we choose a representative that does not intersect the cherries and stems.

The map is then defined as follows: for \(\sigma \in \pi _1({{\mathbb {P}}^1}\setminus \{U_1,\dots , U_N\}, \infty )\) the corresponding monodromy is given by

$$\begin{aligned} M_\sigma :=\prod _{e \in \sigma \cap \mathbf{E} (\Sigma )} J_{e}^{\nu (e,\sigma )} \end{aligned}$$
(5.24)

where the product is taken in the order in which the edges are crossed by \(\sigma \), and \(\nu (e,\sigma )\in \{\pm 1\}\) is the orientation of the intersection of the (oriented) edge e and \(\sigma \) at the point of intersection. With this definition the analytic continuation of \(\Psi \) satisfies \(\Psi (z^{\sigma })=\Psi (z) M_\sigma ^{-1}\). This allows us to relate the normalization of the eigenvector matrices \(C_j\) with that of the matrices \(C_j^0\) (5.20). To this end, choose \(z_0^j\) in the connected region of \({\mathbb {P}}^1\setminus \Sigma \) that contains the jth cherry (see Fig. 5).

Then the monodromy matrix \(M_j\) equals the ordered product of the jump matrices on the edges of \(\Sigma \) crossed by \(\sigma _j\) and it has the form

$$\begin{aligned} M_j=T_j M_{v_j}^0 T_j^{-1} \end{aligned}$$
(5.25)

where the matrix \(T_j\) equals the product of the jump matrices on the edges of \(\Sigma \) crossed by \(\sigma _j\) as it is traversed from \(z_0=\infty \) to \(z_0^j\). Therefore, the diagonal form of the monodromy matrix \(M_j\) is:

$$\begin{aligned} M_j=C_j \Lambda _j C_j^{-1}, \qquad C_j=T_j C_{v_j}^0 R_{v_j} \;. \end{aligned}$$
(5.26)

This determines, unambiguously, the normalization of the matrix \(C_j\) in terms of the Fock–Goncharov coordinates, thus providing a complete parametrization of \(\widehat{{\mathcal {M}}}\).

5.3 Symplectic form

The computation of the symplectic form \(\,\mathrm{d}\Theta = \omega _{{\mathcal {M}}}\) is given in [8].

Theorem 5.1

(Thm. 4.1 of [8]). In the coordinate chart parametrized by coordinates

$$\begin{aligned} \bigg \{{\varvec{z}}_e, {\varvec{x}}_f, {\varvec{r}}_v:\ \ e\in E(\Sigma _0), \ f\in F(\Sigma _0), \ v\in V(\Sigma _0)\bigg \} \end{aligned}$$
(5.27)

the symplectic form \(\omega _{\mathcal {M}}\) (3.16) is given by

$$\begin{aligned} -2\pi i\;\omega _{\mathcal {M}}\!=\!\frac{1}{2} \sum _{v\in V(\Sigma _0)} \omega _v \!+\!\frac{1}{2}\!\sum _{f\in F(\Sigma _0)} \omega _f \!+\! n \!\sum _{v\in V(\Sigma _0)} \!\sum _{i=1}^{n-1} { \,\mathrm{d}\rho _{v;i} \wedge \,\mathrm{d}\mu _{v;i}}\;.\quad \end{aligned}$$
(5.28)

The variables \(\mu _{v;j}\) are defined by (5.18).

The form \(\omega _v\) in (5.28) is defined as follows: for each vertex \(v\in V(\Sigma _0)\) of valence \(n_v\) let \( \{e_1,\dots ,e_{n_v}\}\) be the incident edges ordered counterclockwise starting from the one on the left of the stem and oriented away from v. Let \(\{f_1,\dots , f_{n_v}\}\subset F(\Sigma _0)\) be the faces incident to v, counted in counterclockwise order from the one containing the cherry. We denote the order relation by \(\prec \). Then

$$\begin{aligned} \omega _v= & {} \sum _{e'\prec e \perp v} {\mathbb {G}}_{ij} \,\mathrm{d}{\zeta }_{e'; i} \wedge \,\mathrm{d}{\zeta }_{e;j} + \sum _{f\prec e\perp v} \sum _{a+b+c =n} \sum _{\ell =1}^{n-1} {\mathbb {G}}_{f(v),\ell } \,\mathrm{d}\xi _{f;abc} \wedge \,\mathrm{d}\zeta _{e;\ell }\nonumber \\&+ \sum _{e\prec f\perp v} \sum _{a+b+c =n}\sum _{\ell =1}^{n-1} {\mathbb {G}}_{f(v),\ell } \,\mathrm{d}\zeta _{e;\ell } \wedge \,\mathrm{d}\xi _{f;abc}\nonumber \\&+\sum _{ f' \prec f\perp v } \sum _{\begin{array}{c} a+b+c=n\\ a' +b'+c'=n \end{array}}{\mathbb {G}}_{f'(v),f(v)} \,\mathrm{d}{\xi }_{f'; a'b'c'} \wedge \,\mathrm{d}{\xi }_{f;abc} \end{aligned}$$
(5.29)

where the subscript f(v) indicates the index a, b or c depending on the value \(f(v) \in \{1,2,3\}\), respectively.

The form \(\omega _f\) for face f is given by

$$\begin{aligned} \omega _f =\sum _{\begin{array}{c} i + j + k = n\\ i' + j' + k' = n \end{array}} F_{ijk; i'j'k'} \,\, \,\mathrm{d}\xi _{f; ijk} \wedge \,\mathrm{d}\xi _{f; i'j'k'} \end{aligned}$$
(5.30)

where \(F_{ijk; i'j'k'}\) are the following constants

$$\begin{aligned} \frac{1}{n} F_{ijk; i'j'k'} =\Big ( k \Delta i -i \Delta k \Big ) H( \Delta i \Delta k ) + \Big ( j \Delta k - k \Delta j \Big ) H( \Delta j \Delta k )+\Big ( i \Delta j -j \Delta i \Big ) H( \Delta i \Delta j ) \end{aligned}$$
(5.31)

and

$$\begin{aligned} \Delta i = i'-i; \qquad \Delta j = j'-j; \qquad \Delta k = k'-k\;; \end{aligned}$$

H(x) is the Heaviside function:

$$\begin{aligned} H(x) = \left\{ \begin{array}{cc} 1 &{} \quad x>0\\ \frac{1}{2} &{}\quad x=0\\ 0 &{} \quad x<0 \end{array}\right. \;. \end{aligned}$$
(5.32)

We point out that while the coordinates \(\varvec{\xi }, \varvec{\zeta }, \varvec{\rho }\) are defined on a covering space of the character variety (with the deck transformations being shifts by integer multiples of \(2i\pi \)), the symplectic form (5.29) is defined on the character variety itself. Notice also that for SL(2) and SL(3) the form \(\omega _f\) vanishes.
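
The first non-trivial case of (5.30) is thus SL(4), where each face carries the three variables \(\xi _{f;211}\), \(\xi _{f;121}\), \(\xi _{f;112}\); evaluating (5.31) for these triples gives

$$\begin{aligned} F_{112;121}=F_{121;211}=4\;,\qquad F_{112;211}=-4\;, \end{aligned}$$

the constants with the two triples interchanged having the opposite sign.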

5.4 Symplectic potential

We are going to choose a symplectic potential \(\theta _{\mathcal {M}}\) satisfying the equation \(\,\mathrm{d}\theta _{\mathcal {M}}=\omega _{\mathcal {M}}\) for the symplectic form \(\omega _{\mathcal {M}}\) using the representation (5.28). For convenience we introduce a uniform notation for the coordinates \(\zeta _e\) and \(\xi _{f;ijk}\); the number of these coordinates equals \(\mathrm{dim}\,{\mathcal {M}}-(n-1)N\) (we subtract the number of coordinates \(\rho _j\) from the total dimension of \({\mathcal {M}}\)). We denote these coordinates collectively by

$$\begin{aligned} \{\kappa _j\}_{j=1}^{\mathrm{dim}{\mathcal {M}}-(n-1)N} \end{aligned}$$

Then the formula (5.28) can be written as

$$\begin{aligned} -2\pi i \, \omega _{\mathcal {M}}= \frac{1}{2}\sum _{j<\ell } n_{j\ell }\,\mathrm{d}\kappa _j \wedge \,\mathrm{d}\kappa _\ell + n\sum _{v\in V(\Sigma _0)} \sum _{j=1}^{n-1} \,\mathrm{d}\rho _{v;j} \wedge \,\mathrm{d}\mu _{v;j} \end{aligned}$$
(5.33)

where all \(n_{j\ell }\) are integers and the \(\mu _{v;j}\) are linear functions of the \(\kappa _j\)’s.

Definition 5.1

The symplectic potential \(\theta _{\mathcal {M}}\) is defined by the following relation:

$$\begin{aligned} 2 \pi i\, \theta _{\mathcal {M}}=\frac{1}{4}\sum _{j<\ell } n_{j\ell } (\kappa _\ell \,\mathrm{d}\kappa _j- \kappa _j \,\mathrm{d}\kappa _\ell ) + \frac{n}{2}\sum _{v\in V(\Sigma _0)} \sum _{j=1}^{n-1} (\mu _{v;j} \, \,\mathrm{d}\rho _{v;j}- \rho _{v;j}\,\mathrm{d}\mu _{v;j}) \;.\nonumber \\ \end{aligned}$$
(5.34)

Obviously, there exist infinitely many choices of the potential for the form \(\omega _{\mathcal {M}}\). Our choice (5.34) is due to Theorem 7.1 of [8] which states that the potential \(\theta _{\mathcal {M}}\) (5.34) remains invariant if any of the cherries is moved to a neighbouring face.
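
That (5.34) is indeed a potential for (5.33) follows from the elementary identities

$$\begin{aligned} \,\mathrm{d}\big (\kappa _\ell \,\mathrm{d}\kappa _j- \kappa _j \,\mathrm{d}\kappa _\ell \big )=2\,\mathrm{d}\kappa _\ell \wedge \,\mathrm{d}\kappa _j\;,\qquad \,\mathrm{d}\big (\mu _{v;j}\, \,\mathrm{d}\rho _{v;j}- \rho _{v;j}\,\mathrm{d}\mu _{v;j}\big )=2\,\mathrm{d}\mu _{v;j} \wedge \,\mathrm{d}\rho _{v;j}\;, \end{aligned}$$

so that \(\,\mathrm{d}(2\pi i\, \theta _{\mathcal {M}})\) equals minus the right-hand side of (5.33), i.e. \(\,\mathrm{d}\theta _{\mathcal {M}}=\omega _{\mathcal {M}}\).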

5.5 SL(2) case

For \(n=2\) the general formula in Theorem 5.1 simplifies considerably to the following (for details see [8])

$$\begin{aligned} -2\pi i \, \omega _{\mathcal {M}}=\sum _{k=1}^N \left( \sum _{\begin{array}{c} e,e' \perp v_k\\ e'\prec e \end{array}} \,\mathrm{d}\zeta _{e'}\wedge \,\mathrm{d}\zeta _{e} + 2\sum _{e\perp v_k} \,\mathrm{d}\rho _k \wedge \,\mathrm{d}\zeta _e\right) \;; \end{aligned}$$
(5.35)

the symplectic potential (5.34) \(\theta _{\mathcal {M}}\) is given by:

$$\begin{aligned} 2\pi i \theta _{\mathcal {M}}=\sum _{k=1}^N\left( \frac{1}{2}\sum _{\begin{array}{c} e,e'\perp v_k\\ e' \prec e \end{array}}( \zeta _{e} \,\mathrm{d}\zeta _{e'}-\zeta _{e'} \, \,\mathrm{d}\zeta _{e}) +\sum _{e\perp v_k} (\zeta _e\, \,\mathrm{d}\rho _k-\rho _k \,\mathrm{d}\zeta _e)\right) \;. \end{aligned}$$
(5.36)

Notice that the expression (5.36) “forgets” about the orientation of the edges since the coordinate \(\zeta _e\) remains invariant if the orientation of the edge e is changed, i.e. when \(z_e\) transforms to \(-z_e\). Unlike the form \(\omega _{\mathcal {M}}\), the potential \(\theta _{\mathcal {M}}\) depends on the choice of the triangulation \(\Sigma _0\); the change of triangulation implies a non-trivial change of \(\theta _{\mathcal {M}}\).

5.5.1 Change of triangulation

Fig. 6
figure 6

Transformation of edges and jump matrices under an elementary flip of an edge of \(\Sigma _0\)

One triangulation \(\Sigma _0\) can be transformed into any other by a sequence of “flips” of the diagonal in the quadrilateral formed by two triangles with a common edge, see Fig. 6. Let us assume that the four cherries attached to the vertices are placed as shown in Fig. 6. Then the assumption that all the monodromies around the four vertices of these triangles are preserved implies the following equations [8]:

$$\begin{aligned} \tilde{\kappa }_1=\frac{\kappa }{\kappa +1} \kappa _1\;, \quad \tilde{\kappa }_2=(\kappa +1) \kappa _2\;,\quad \tilde{\kappa }_3=\frac{\kappa }{\kappa +1} \kappa _3\;, \quad \tilde{\kappa }_4=(\kappa +1) \kappa _4\;,\quad \tilde{\kappa }=\frac{1}{\kappa } \end{aligned}$$
(5.37)

where \( \kappa _j=z_j^2\), \(\tilde{\kappa }_j=\tilde{z}_j^2 \), \(j=1,\dots ,4\); \(\tilde{\kappa }=\tilde{z}^2\) is the variable on the “flipped” edge. The variables \(r_j\) remain invariant under the change of triangulation due to the choice of the cherry positions in Fig. 6.

Denote the symplectic potential corresponding to the new triangulation by \(\tilde{\theta }_{\mathcal {M}}\) and introduce the Rogers dilogarithm L which for \(x\ge 0\) is defined by (see (1.9) of [36]):

$$\begin{aligned} L\left( \frac{x}{x+1}\right) := \frac{1}{2}\int _0^x \left\{ \frac{\log (1+y)}{y}-\frac{\log y}{1+y}\right\} \,\mathrm{d}y\;. \end{aligned}$$
(5.38)

As it was shown in Prop. 7.1 of [8], the symplectic potentials \(\tilde{\theta }_{\mathcal {M}}\) and \(\theta _{\mathcal {M}}\) are related as follows:

$$\begin{aligned} 2\pi i (\tilde{\theta }_{\mathcal {M}}-\theta _{\mathcal {M}})= \,\mathrm{d}\left[ L\left( \frac{\kappa }{1+\kappa } \right) \right] \;. \end{aligned}$$
(5.39)

Therefore, the function \(L\left( \frac{\kappa }{1+\kappa } \right) \) is the generating function of the symplectomorphism corresponding to the elementary flip of the edge of \(\Sigma _0\).

6 Tau-Function as Generating Function of Monodromy Symplectomorphism

Here we extend the definition of the Jimbo–Miwa–Ueno tau-function by including an explicit description of its dependence on the monodromy data.

Definition 6.1

The tau function is defined by the following set of compatible equations. The equations with respect to \(t_j\) are given by the Jimbo–Miwa–Ueno formulæ

$$\begin{aligned} \frac{\partial \log \tau }{\partial t_j}=\frac{1}{2}\mathop {\mathrm{res}}\limits _{z=t_j}\mathrm{tr} A^2(z)\;; \end{aligned}$$
(6.1)

the equations with respect to the coordinates on the monodromy manifold \({\mathcal {M}}\) are given by

$$\begin{aligned} \,\mathrm{d}_{\mathcal {M}}\log \tau =\sum _{j=1}^N \mathrm{tr}(L_j G_j^{-1} \,\mathrm{d}_{\mathcal {M}}G_j) -\theta _{\mathcal {M}}[\Sigma _0] \end{aligned}$$
(6.2)

where \(\theta _{\mathcal {M}}[\Sigma _0]\) is the symplectic potential (5.34) for the form \(\omega _{\mathcal {M}}\); we consider the matrices \(G_j\) as (meromorphic) functions on \(\widetilde{{\mathcal {M}}}\) defined by the formula

$$\begin{aligned} G_j= \Phi (t_j) \end{aligned}$$
(6.3)

with \(\Phi \) the solution of the Riemann–Hilbert problem (3.9).
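
Note that the right-hand side of (6.1) reproduces the Hamiltonians (1.17): since \(A(z)=\sum _{i=1}^N \frac{A_i}{z-t_i}\),

$$\begin{aligned} \frac{1}{2}\mathop {\mathrm{res}}\limits _{z=t_j}\mathrm{tr} A^2(z) =\sum _{k\ne j}\frac{\mathrm{tr}A_j A_k}{t_j-t_k}=H_j\;, \end{aligned}$$

the double pole term \(\mathrm{tr}(A_j^2)/(z-t_j)^2\) contributing no residue; thus the \(t_j\)-equations (6.1) are consistent with (6.4) and (3.11).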

Using Theorem 3.1 and in particular (3.11), we can rewrite this definition in an alternative form, which encodes the complete system (6.1), (6.2):

Definition 6.1\('\). The tau-function on \(\widetilde{{\mathcal {M}}}\) is locally defined by equations

$$\begin{aligned} \,\mathrm{d}\log \tau = \Theta -\widetilde{\theta }_{\mathcal {M}}[\Sigma _0] \end{aligned}$$
(6.4)

where \(\Theta \) is the Malgrange form (3.3) corresponding to solution \(\Phi \) (3.9) and \(\widetilde{\theta }_{\mathcal {M}}[\Sigma _0]\) is the pullback to \(\widetilde{{\mathcal {M}}}\) of \(\theta _{\mathcal {M}}[\Sigma _0]\).

The formula (6.2) means that \(\log \tau \) is nothing but the generating function of the monodromy symplectomorphism: \(\,\mathrm{d}\log \tau \) equals the difference of the symplectic potentials defined in terms of the (extended) Kirillov–Kostant symplectic potential \(\theta _{\mathcal {A}}\) and the symplectic potential on the monodromy manifold. It was proven in [35] that the residue of \(\Theta \) along the points of the Malgrange divisor is a positive integer; thus \(\tau \) is actually locally analytic on \(\widetilde{{\mathcal {M}}}\); the multiplicity of the zero of \(\tau \) equals the residue of \(\Theta \).

We now analyze in more detail the dependence of \(\tau \) on the Fock–Goncharov coordinates. The tau-function \(\tau \) defined by (6.2) depends on the full set of variables \(({\varvec{z}},\,{\varvec{x}},\, {\varvec{r}})\) on \({\mathcal {M}}\). The right-hand sides of equations (6.2) depend on the choice of the triangulation \(\Sigma _0\) defining the symplectic potential \(\theta _{\mathcal {M}}\). However, according to Thm.6.1 of [8], the potential (5.34) is independent of the choice of ciliation of the graph \(\Sigma _0\).

The next proposition shows how the tau-function defined in Definition 6.1 depends on the variables \(\rho _{j;i}\): namely, define a second tau-function \(\tau _1\) by

$$\begin{aligned} \tau _1=\tau \; \exp \left\{ - \frac{n}{2\pi i} \sum _{j=1}^N \sum _{i=1}^{n-1} \rho _{j;i} \;\mu _{j;i}\right\} \;. \end{aligned}$$
(6.5)

Proposition 6.1

The tau-function \(\tau _1\) (6.5) is independent of the variables \(\{r_{j;i}\}\), \(j=1,\dots ,N\), \(i=1,\dots ,n-1\), i.e.

$$\begin{aligned} \frac{\partial \log \tau _1}{\partial r_{j;i}}=0\;. \end{aligned}$$
(6.6)

Proof

Denote by \(G_j^0\) the matrices \(G_j\) corresponding to setting all variables \(r_{j;i}=1\). Then the matrices \(G_j\) can be expressed in terms of \(G_j^0\) and \(r_{j;i}\) as follows:

$$\begin{aligned} G_j=G_j^0 R_j \end{aligned}$$
(6.7)

where the diagonal matrix \(R_j\) is given by (5.22). Then,

$$\begin{aligned} G_j^{-1} \,\mathrm{d}G_j - (G_j^0)^{-1} \,\mathrm{d}G^0_j = R_j^{-1} \,\mathrm{d}R_j\;. \end{aligned}$$

Therefore, the first sum in (6.2) gets an additive term equal to

$$\begin{aligned} \sum _{j=1}^N \mathrm{tr}\, L_j R_j^{-1} \,\mathrm{d}R_j\;. \end{aligned}$$
(6.8)

On the other hand, matrices \(C_j\) transform under (6.7) in the same way:

$$\begin{aligned} C_j=C_j^0 R_j \end{aligned}$$
(6.9)

where the matrices \(C_j^0\) are assumed to be triangular with all 1’s on the diagonal.

To get the variation of \(\theta _{\mathcal {M}}\) under the transformation (6.9) we observe that the form \(\omega _{\mathcal {M}}\) (3.16) transforms under (6.9) as follows:

$$\begin{aligned} \omega _{\mathcal {M}}\rightarrow \omega _{\mathcal {M}}+\sum _{j=1}^N \mathrm{tr}\,\mathrm{d}L_j \wedge R_j^{-1}\,\mathrm{d}R_j\;. \end{aligned}$$

Therefore, according to our definition of \(\theta _{\mathcal {M}}\), the last sum in this expression should be integrated to give

$$\begin{aligned} \theta _{\mathcal {M}}\rightarrow \theta _{\mathcal {M}}+\sum _{j=1}^N \mathrm{tr}L_j R_j^{-1}\,\mathrm{d}R_j \end{aligned}$$
(6.10)

which cancels against (6.8) (alternatively, one can derive (6.10) using the definition of \(\mu _{v;i}\) and \(\rho _{v;i}\) and (5.34)). \(\square \)

The equations for the tau-function with respect to variables \({\varvec{z}}\) and \({\varvec{x}}\) (or, equivalently, \({\varvec{\zeta }}\) and \({\varvec{\xi }}\)) implied by Definition 6.1 can be obtained from expression (5.34) for the potential \(\theta _{\mathcal {M}}\). Below we write these equations explicitly in the SL(2) case.

6.1 SL(2) tau-function

In the SL(2) case the coordinates on \({\mathcal {M}}_N^{SL(2)}\) are given by edge coordinates \(\{\zeta _e\}\) and vertex coordinates \(\{\rho _k\}_{k=1}^N\); the potential \(\theta _{\mathcal {M}}\) is given by (5.36). Then

$$\begin{aligned} L_j=\left( \begin{array}{cc} \lambda _j &{} \quad 0 \\ 0 &{} \quad -\lambda _j \end{array}\right) =\frac{1}{2\pi i} \left( \begin{array}{cc} \mu _j &{} 0 \\ 0 &{} -\mu _j \end{array}\right) \;. \end{aligned}$$

and the relationship (6.5) becomes:

$$\begin{aligned} \tau (\{\zeta _e, \rho _j\}, \{t_j\}) =\tau _1(\{\zeta _e\}, \{ t_j\}) \exp \left\{ \frac{1}{\pi i} \sum _{j=1}^N \rho _{j} \;\mu _{j}\right\} . \end{aligned}$$
(6.11)

where \(\mu _j\) is the sum of the \(\zeta _e\) for all edges incident to the j-th vertex. The equations for \(\tau _1\) with respect to the edge coordinates take the following form:

Definition 6.2

For a given triangulation \(\Sigma _0\) the tau-function \(\tau _1\) of an SL(2) Fuchsian system is defined by the system (6.1) with respect to poles \(\{t_j\}_{j=1}^N\) and the following equations with respect to coordinates \(\{\zeta _{e_j}\}_{j=1}^{3N-6}\):

$$\begin{aligned} \frac{\partial }{\partial \zeta _e}\log \tau _1\!=\!\sum _{j=1}^N \mathrm{tr}\left( L_j G_j^{-1} \frac{\partial G_j}{\partial \zeta _e}\right) \!-\!\frac{1}{4\pi i}\left( \sum _{e'\perp v_1\atop e \prec e'} \zeta _{e'}\!-\! \!\sum _{e'\perp v_1\atop e'\prec e} \zeta _{e'}\!+\!\sum _{e'\perp v_2\atop e \prec e'} \zeta _{e'}- \sum _{e'\perp v_2\atop e'\prec e} \zeta _{e'}\right) \nonumber \\ \end{aligned}$$
(6.12)

where \(v_1\) and \(v_2\) are vertices of \(\Sigma _0\) connected by the edge e.

This definition depends on the choice of the triangulation \(\Sigma _0\). The change of the tau-function \(\tau \) under an elementary flip of an edge of the underlying triangulation \(\Sigma _0\) follows from (5.39):

Proposition 6.2

Let \(\tau \) and \(\tilde{\tau }\) be the tau-functions corresponding to triangulations related by the flip of the edge e shown in Fig. 6. Then

$$\begin{aligned} \frac{\tilde{\tau }}{\tau }= \exp \left[ -\frac{1}{2\pi i} L \left( \frac{\mathrm{e}^{2\zeta _e}}{\mathrm{e}^{2\zeta _e}+1}\right) \right] \end{aligned}$$
(6.13)

under an appropriate choice of the branch of the Rogers dilogarithm L (5.38).
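If one wishes to evaluate the flip factor (6.13) numerically, the following minimal Python sketch can be used; it assumes the standard normalization of the Rogers dilogarithm, \(L(x)=\mathrm {Li}_2(x)+\frac{1}{2}\log x\log (1-x)\) on \((0,1)\), which may differ from the normalization and branch fixed in (5.38) and should be adjusted accordingly.

```python
import numpy as np
from scipy.special import spence  # spence(x) = Li_2(1 - x) for real x

def rogers_L(x):
    """Rogers dilogarithm L(x) = Li2(x) + (1/2) log(x) log(1-x), 0 < x < 1."""
    return spence(1.0 - x) + 0.5 * np.log(x) * np.log(1.0 - x)

def flip_factor(zeta_e):
    """Ratio tilde-tau / tau of (6.13) for a real edge coordinate zeta_e."""
    x = np.exp(2.0 * zeta_e) / (np.exp(2.0 * zeta_e) + 1.0)  # lies in (0, 1)
    return np.exp(-rogers_L(x) / (2.0j * np.pi))

print(flip_factor(0.7))
```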

6.2 Equations with respect to Fock–Goncharov coordinates

Here we derive equations for \(\Psi \), \(G_j\) and \(\tau \) with respect to an edge coordinate \(\zeta \).

First we notice that for any Riemann–Hilbert problem on an oriented contour C with jump matrix J, the variation of its solution takes the form:

$$\begin{aligned} \delta \Psi \,\Psi ^{-1}(z) = \frac{1}{2\pi i} \int _C\frac{\Psi _-(w)\, \delta J J^{-1}\, \Psi _-^{-1}(w)}{z-w}\,\mathrm{d}w\;. \end{aligned}$$
(6.14)

The formula (6.14) can be easily derived by applying the variation \(\delta \) to the jump relation \(\Psi _+=\Psi _- J\) on C, which gives \(\delta \Psi _+=\delta \Psi _- J+\Psi _- \delta J\), and then solving the resulting non-homogeneous (additive) Riemann–Hilbert problem for \(\delta \Psi \,\Psi ^{-1}\) via the Cauchy kernel, as spelled out below.
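In more detail, dividing the varied jump relation by \(\Psi _+=\Psi _- J\) gives

$$\begin{aligned} \delta \Psi _+\,\Psi _+^{-1}=\delta \Psi _-\,\Psi _-^{-1}+\Psi _-\,\delta J\, J^{-1}\,\Psi _-^{-1}\;, \end{aligned}$$

so that \(\delta \Psi \,\Psi ^{-1}\) is holomorphic off C, has the additive jump \(\Psi _-\,\delta J\, J^{-1}\,\Psi _-^{-1}\) across C and vanishes at \(z=\infty \) (assuming the normalization point of \(\Psi \) is kept fixed under the variation); the Sokhotski–Plemelj formula then yields (6.14).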

We apply (6.14) to variations of the solution \(\Psi \) of the Riemann–Hilbert problem on the Fock–Goncharov graph with respect to the variable \(\zeta \) corresponding to the edge e. Without loss of generality we assume that the positions of cherries are chosen as in Fig. 7.

Fig. 7
Positioning of the cherries for the computation of the derivative in \(\zeta _e\)

For simplicity we assume that both vertices \(v_1\) and \(v_2\) connected by e are three-valent; this is not a significant restriction.

The jump matrices depending on \(\zeta \) are the following. According to the general rules (5.7), the jump matrix on the edge e (and the one on the reverse edge) is

$$\begin{aligned} S (\zeta )=\left( \begin{array}{cc} 0 &{} \quad - e^{-\zeta } \\ e^{\zeta } &{} \quad 0 \end{array}\right) ;\qquad \zeta _{-e} = \zeta +i\pi \;. \end{aligned}$$
(6.15)

Furthermore, using the expression (5.13) for the matrix A and denoting \(S_j= S(\zeta _j)\), the jump matrices \(Q_1\) and \(Q_2\) on the stems \(s_1\) and \(s_2\) depend on \(\zeta \) as follows:

$$\begin{aligned}&Q_1=(S A S_1 A S_2 A)^{-1}=\left( \begin{array}{cc} e^{\zeta +\zeta _1+\zeta _2} &{} \quad -e^{-(\zeta +\zeta _1+\zeta _2)} -e^{-\zeta -\zeta _1+\zeta _2} -e^{-\zeta +\zeta _1+\zeta _2}\\ 0 &{} \quad e^{-(\zeta +\zeta _1+\zeta _2)} \end{array}\right) ,\\&Q_2=(AS_4 AS_3 A S^{-1})^{-1}= \left( \begin{array}{cc} e^{-(\zeta +\zeta _3+\zeta _4+i\pi )} &{} \quad 0 \\ e^{\zeta +\zeta _3+\zeta _4+i\pi } +e^{\zeta +\zeta _3-\zeta _4+i\pi } +e^{\zeta -\zeta _3-\zeta _4+i\pi } &{} \quad e^{\zeta +\zeta _3+\zeta _4+i\pi } \end{array}\right) . \end{aligned}$$

Notice that the logarithmic derivatives of the matrices S, \(Q_1^{-1}\) and \(Q_2\) with respect to \(\zeta \) coincide and are given by

$$\begin{aligned} J_\zeta J^{-1}= -\sigma _3=\left( \begin{array}{cc} -1 &{} \quad 0\\ 0 &{} \quad 1 \end{array}\right) \;. \end{aligned}$$
(6.16)
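The identity (6.16) can also be checked symbolically. The SymPy sketch below uses the jump matrix (6.15) and treats the matrix A of (5.13), which is not reproduced in this section, as a generic constant \(2\times 2\) matrix; it tests only the \(\zeta \)-dependence, which enters through S alone.

```python
import sympy as sp

zeta, z1, z2 = sp.symbols('zeta zeta1 zeta2')
a11, a12, a21, a22 = sp.symbols('a11 a12 a21 a22')   # generic constant matrix A

def S(x):
    # edge jump matrix (6.15)
    return sp.Matrix([[0, -sp.exp(-x)], [sp.exp(x), 0]])

A = sp.Matrix([[a11, a12], [a21, a22]])               # stands for the matrix of (5.13)
sigma3 = sp.Matrix([[1, 0], [0, -1]])

# S_zeta S^{-1} = -sigma3 is equivalent to S_zeta + sigma3 * S = 0:
print(sp.simplify(sp.diff(S(zeta), zeta) + sigma3 * S(zeta)))      # zero matrix

# Q1^{-1} = S A S1 A S2 A depends on zeta only through S,
# hence it satisfies the same equation d/dzeta Q1^{-1} = -sigma3 Q1^{-1}:
Q1_inv = S(zeta) * A * S(z1) * A * S(z2) * A
print(sp.simplify(sp.diff(Q1_inv, zeta) + sigma3 * Q1_inv))        # zero matrix
```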

Then the exponents of monodromy are

$$\begin{aligned} \lambda _1=\frac{1}{2i\pi }(\zeta +\zeta _1+\zeta _2) \;,\qquad \lambda _2=\frac{-1}{2i\pi } (\zeta +\zeta _3+\zeta _4+i\pi )\;. \end{aligned}$$

Thus

$$\begin{aligned} L_1=\frac{1}{2\pi i} \left( \begin{array}{cc} \zeta +\zeta _1+\zeta _2 &{} \quad 0 \\ 0 &{} \quad -( \zeta +\zeta _1+\zeta _2) \end{array}\right) \;,\qquad C_1= \left( \begin{array}{cc} 1 &{} \quad f_1(\zeta ) \\ 0 &{} \quad 1 \end{array}\right) \end{aligned}$$
(6.17)

where

$$\begin{aligned} f_1(\zeta )= \frac{ e^{-\zeta }(e^{-\zeta _1-\zeta _2} +e^{-\zeta _1+\zeta _2} +e^{\zeta _1+\zeta _2})}{e^{\zeta +\zeta _1+\zeta _2}- e^{-(\zeta +\zeta _1+\zeta _2)}} \end{aligned}$$

and

$$\begin{aligned} L_2=\frac{1}{2\pi i} \left( \begin{array}{cc} -(\zeta +\zeta _3+\zeta _4+i\pi ) &{} \quad 0 \\ 0 &{} \quad \zeta +\zeta _3+\zeta _4+i\pi \end{array}\right) \;, \qquad C_2= \left( \begin{array}{cc} 1 &{} \quad 0 \\ f_2(\zeta ) &{} \quad 1 \end{array}\right) \end{aligned}$$
(6.18)

where

$$\begin{aligned} f_2(\zeta )= \frac{ e^{\zeta }(e^{\zeta _3+\zeta _4} +e^{\zeta _3-\zeta _4} +e^{-\zeta _3-\zeta _4} )}{e^{-(\zeta +\zeta _3+\zeta _4)}- e^{\zeta +\zeta _3+\zeta _4} }\;. \end{aligned}$$

Introduce the graph \(\Sigma _0'\) obtained by identifying each vertex \(v_j\) with the corresponding pole \(t_j\). The variational formula takes its simplest form if we make an explicit assumption on the growth (1.2) of the solution \(\Psi \) near the poles, that is, on the real parts of the eigenvalues of \(L_j\); indeed, it is known [31] that for a given monodromy representation there is a lattice of solutions of the inverse monodromy problem. The reason is simply that the matrices of eigenvalues \(L_j\) are defined only up to addition of diagonal matrices in \(sl_n({\mathbb {Z}})\). For \(n=2\) there is therefore a \({\mathbb {Z}}^N\) lattice (N being the number of poles) of solutions of the inverse problem. The transformations between different solutions in this lattice are referred to as “discrete Schlesinger transformations”. We use this observation to shift the real part of each \(\lambda _j\) to a value within the interval \(\mathfrak {R}\lambda _j \in [-\frac{1}{2}, \frac{1}{2})\). Excluding the non-generic cases \(\mathfrak {R}\lambda _j = -\frac{1}{2}\) we have
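Concretely, the integer shift applied to each exponent is \(n_j=\lfloor \mathfrak {R}\lambda _j+\tfrac{1}{2}\rfloor \), so that \(\mathfrak {R}(\lambda _j-n_j)\in [-\tfrac{1}{2},\tfrac{1}{2})\). A minimal sketch of this bookkeeping (the discrete Schlesinger transformation acting on \(\Psi \) itself is not implemented here):

```python
import math

def normalize_exponent(lam):
    """Shift lam by an integer so that its real part lands in [-1/2, 1/2).

    The shift corresponds to a discrete Schlesinger transformation of the
    Fuchsian system; only the bookkeeping on the exponent is done here.
    """
    n = math.floor(lam.real + 0.5)
    return lam - n, n

print(normalize_exponent(1.3 - 0.2j))   # real part shifted into [-1/2, 1/2), shift n = 1
```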

Theorem 6.1

Denote by \(\zeta _{jk}\) the FG coordinate corresponding to the edge \(e_{jk}\). Suppose that all eigenvalues of the matrices \(L_j\) satisfy the conditions

$$\begin{aligned} |\mathfrak {R}\lambda _j|<\frac{1}{2} \; . \end{aligned}$$
(6.19)

Then the function \(\Psi \) satisfies the following system of equations with respect to coordinates \(\zeta _{jk}\):

$$\begin{aligned} \frac{\,\mathrm{d}\Psi (z)}{\,\mathrm{d}\zeta _{jk}}= \frac{1}{2\pi i} \left[ \int _{t_j}^{t_k}\frac{\Psi _-(w)\sigma _3 \Psi _-^{-1}(w)}{z-w}\,\mathrm{d}w\right] \Psi (z)\;,\qquad z\ne t_j, t_k \end{aligned}$$
(6.20)

where the integral along the oriented edge \(e_{jk}\) of \(\Sigma _0'\) in the right-hand side is convergent at the endpoints due to condition (6.19). The integral in (6.20) is discontinuous across the edge \(e_{jk}\).

Proof

For simplicity we denote \(\zeta = \zeta _{jk}\) and set \(j=1\), \(k=2\). The expression \(\partial _{\zeta } J J^{-1}\) is nonzero only on the edge, the two stems and the boundaries of the two cherries. A direct computation shows (with the edges oriented as in Fig. 7) that \(\partial _{\zeta } J J^{-1}\) on the edge and on the two stems equals \(\sigma _3\). The jump matrix on the cherry \(c_1\) equals \(J_1=C_1 (z-t_1)^{-L_1}\) and on \(c_2\) it equals \(J_2=C_2 (z-t_2)^{-L_2}\). Then a direct computation gives (since \({L_1}_\zeta =\frac{\sigma _3}{2i\pi }\) and \({L_2}_\zeta =\frac{-\sigma _3}{2i\pi } \)):

$$\begin{aligned} {J_1}_{\zeta } J_1^{-1}= & {} {C_1}_{\zeta } C_1^{-1} -\frac{\log (z-t_1)}{2i\pi }C_1 {L_1}_{\zeta } C_1^{-1}\\= & {} \left( \begin{array}{cc} 0 &{} \quad -\frac{2 f_1}{1-e^{-4\pi i \lambda _1}} \\ 0 &{} \quad 0 \end{array}\right) -\frac{\log (z-t_1)}{2i\pi }\left( \begin{array}{cc} 1 &{} \quad -2f_1 \\ 0 &{} \quad -1 \end{array}\right) , \\ {J_2}_{\zeta } J_2^{-1}= & {} {C_2}_{\zeta } C_2^{-1} -\frac{\log (z-t_2)}{2i\pi }C_2 {L_2}_{\zeta } C_2^{-1}\\= & {} \left( \begin{array}{cc} 0 &{} \quad 0 \\ \frac{2 f_2}{1-e^{-4\pi i \lambda _2}} &{}0 \end{array}\right) + \frac{\log (z-t_2)}{2i\pi }\left( \begin{array}{cc} -1 &{} \quad 0 \\ -2f_2(\zeta ) &{} \quad 1 \end{array}\right) . \end{aligned}$$

Consider the first cherry (the second cherry can be treated in parallel); we shall call \(\beta \) the point of intersection of the stem and the cherry. Within a neighbourhood containing the cherry, we have \(\Psi (z) = \Phi _1(z) (z-t_1)^{L_1} C_1^{-1}\), with \(\Phi _1(t_1) = G_1\). In the integral (6.14) the contribution coming from the first cherry is then the integral

$$\begin{aligned}&\int _{\beta _+} ^{\beta _-} \frac{\Psi _- \partial _\zeta J_1 J_1^{-1} \Psi _-^{-1}}{w-z} \frac{\,\mathrm{d}w}{2i\pi }\nonumber \\&\quad =\int _{\beta _+} ^{\beta _-} \Phi _1 \left( f_1 \sigma _+\left( \frac{i\mathrm{e}^{2i\pi \lambda _1}}{\sin (2\pi \lambda _1)} (w-t_1)^{2\lambda _1}{-}\frac{\log (w-t_1)}{i\pi } \right) {-}\frac{\log (w-t_1)}{2i\pi } C_1^{-1}\sigma _3 C_1\right) \nonumber \\&\qquad \Phi _1^{-1} \frac{\,\mathrm{d}w}{2i\pi (w-z)} \end{aligned}$$
(6.21)

where the contour of integration is the circle \(|w-t_1|= const\) starting at \(\beta _+\) and ending at \(\beta _-\), and the branch-cut of the power is the segment from \(t_1\) to \(\beta \). We have also used that \([C_1,\sigma _+]=0\). Under the condition \(-1< 2\mathfrak {R}\lambda _1\) the contribution of the integration over the cherry tends to zero as the radius of the cherry tends to zero. \(\square \)

Remark 6.1

While the general formula (6.21) is valid without any restriction on the real parts of the \(\lambda _j\)'s, the integrals over the boundaries of the cherries provide a regularization of the integral along the edge. However, if conditions (6.19) are fulfilled, this regularization is not needed and we arrive at the simplified formula (6.20).
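For a numerical illustration of the right-hand side of (6.20): under condition (6.19) the endpoint singularities of the integrand are integrable, so plain quadrature along the edge suffices. In the Python sketch below, Psi_minus is a hypothetical callable returning the boundary value \(\Psi _-(w)\) on the minus side of the edge, and the edge is parametrized as a straight segment; both are simplifying assumptions not made in the text.

```python
import numpy as np

sigma3 = np.diag([1.0, -1.0])

def rhs_620(Psi_minus, t_j, t_k, z, n_nodes=400):
    """Approximate (1/(2 pi i)) * int_{t_j}^{t_k} Psi_-(w) sigma3 Psi_-(w)^{-1} / (z - w) dw.

    Psi_minus : callable, w -> 2x2 complex array (boundary value of Psi; assumed,
                not constructed here)
    """
    # midpoint rule; the endpoint singularities are integrable by (6.19)
    s = (np.arange(n_nodes) + 0.5) / n_nodes
    nodes = t_j + s * (t_k - t_j)
    dw = (t_k - t_j) / n_nodes
    total = np.zeros((2, 2), dtype=complex)
    for w in nodes:
        P = Psi_minus(w)
        total += P @ sigma3 @ np.linalg.inv(P) / (z - w) * dw
    return total / (2j * np.pi)
```

The derivative of \(\Psi \) at z is then the product of this matrix with \(\Psi (z)\), with z kept away from the edge itself because of the discontinuity noted in Theorem 6.1.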

Introducing the notation

$$\begin{aligned} F(z)= -\Psi _-(z)\sigma _3 \Psi _-^{-1}(z) \end{aligned}$$
(6.22)

we can formulate the following

Corollary 6.1

The derivatives of \(G_\ell \) with respect to the coordinates \(\zeta _{jk}\) take the form:

$$\begin{aligned}&\frac{\,\mathrm{d}G_\ell }{\,\mathrm{d}\zeta _{jk}}= \frac{1}{2\pi i} \left[ \int _{t_j}^{t_k}\frac{F(w)}{w-t_\ell }\,\mathrm{d}w\right] G_\ell \;,\qquad \ell \ne j,k\;, \end{aligned}$$
(6.23)
$$\begin{aligned}&\frac{\,\mathrm{d}G_j}{\,\mathrm{d}\zeta _{jk}} G_{j}^{-1} = \lim _{z\rightarrow t_j} \left[ \int _{t_j}^{t_k}\frac{F(w)}{w-z}\frac{\,\mathrm{d}w}{2i\pi } + G_j\left( \frac{i f_j (z-t_j)^{2\lambda _j} \sigma _+}{\sin (2\pi \lambda _j) } -\frac{\log (z-t_j)}{2i\pi } \sigma _3\right) G_j^{-1} \right] , \end{aligned}$$
(6.24)
$$\begin{aligned}&\frac{\,\mathrm{d}G_k}{\,\mathrm{d}\zeta _{jk}} G_{k}^{-1}=\lim _{z\rightarrow t_k} \left[ \int _{t_j}^{t_k}\frac{F(w)}{w-z}\frac{\,\mathrm{d}w}{2i\pi } +G_k\left( \frac{-i f_k (z-t_k)^{-2\lambda _k} \sigma _-}{\sin (2\pi \lambda _k) } +\frac{\log (z-t_k)}{2i\pi } \sigma _3\right) G_k^{-1} \right] \end{aligned}$$
(6.25)

where \(f_j = (C_j)_{12}\) and \(f_k = (C_k)_{21}\) as defined in (6.17) and (6.18).

Proof

The first formula follows from the fact that the connection matrices and exponents at the vertices not connected to the edge remain constant under the variation. For definiteness assume \(j=1\), \(k=2\). To find the derivative of \(G_1\) (\(G_2\) can be treated in the same way) we need to evaluate \(\Phi _1(z) =\Psi (z) C_1 (z-t_1)^{-L_1}\) at \(z=t_1\). Differentiating this identity with respect to \(\zeta \) and taking the limit \(z\rightarrow t_1\) gives the formula, recalling that \(G_1 = \Phi _1(t_1)\). Indeed

$$\begin{aligned}&\partial \Phi _1(z) \Phi _1(z)^{-1} = \partial \Psi (z) \Psi (z)^{-1} +\Phi _1 (z) \nonumber \\&\quad \left( \frac{i f_1}{\sin (2\pi \lambda _1)} \sigma _{+} (z-t_1)^{2\lambda _1} - \frac{\log (z-t_1)}{2i\pi } \sigma _3 \right) \Phi _1(z)^{-1}\;. \end{aligned}$$
(6.26)

If \(2\mathfrak {R}\lambda _1>-1\) (which is our standing assumption), in the limit \(z\rightarrow t_1\) we can replace \(\Phi _1(z)\) by \(G_1\) in the above formula, and we obtain the statement.

On a related note, the same result can be obtained by examining the singular behaviour of the integral (6.20) (using results on Cauchy-type integrals, see e.g. [22]) and simply removing the singular part. \(\square \)

Now we come to the following

Proposition 6.3

Equation (6.12) for the tau-function \(\tau _1\) with respect to the FG coordinates can equivalently be written as follows:

$$\begin{aligned}&2\pi i \frac{\partial }{\partial \zeta _{jk}}\log \tau _1 =\sum _{\ell =1}^N \mathrm{reg}\int _{t_j}^{t_k} \frac{\mathrm{tr}A_{\ell } F(w)}{w-t_\ell }\,\mathrm{d}w\\&\quad -\frac{1}{2}\left( \sum _{e'\perp v_j\atop e \prec e'} \zeta _{e'}- \sum _{e'\perp v_j\atop e'\prec e} \zeta _{e'}+\sum _{e'\perp v_k\atop e \prec e'} \zeta _{e'}- \sum _{e'\perp v_k\atop e'\prec e} \zeta _{e'}\right) \end{aligned}$$

where F is given by (6.22); the terms of the sum which require regularization correspond to \(\ell =j\) and \(\ell =k\); they are given by

$$\begin{aligned} \mathrm{reg}\int _{t_j}^{t_k}\frac{\mathrm{tr}A_{j} F(w)}{w-t_j}\,\mathrm{d}w =\lim _{z\rightarrow t_j} \left[ \int _{t_j}^{t_k} \frac{\mathrm{tr}A_j F(w)}{w-z}\,\mathrm{d}w - 2\lambda _j \log (z-t_j) \right] \end{aligned}$$
(6.27)

and

$$\begin{aligned} \mathrm{reg}\int _{t_j}^{t_k}\frac{\mathrm{tr}A_{k} F(w)}{w-t_k}\,\mathrm{d}w =\lim _{z\rightarrow t_k} \left[ \int _{t_j}^{t_k} \frac{\mathrm{tr}A_k F(w)}{w-z}\,\mathrm{d}w +2\lambda _k \log (z-t_k) \right] \;. \end{aligned}$$
(6.28)

The proof follows from Eqs. (6.23), (6.24) and (6.25). \(\square \)
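The regularized integral (6.27) can be evaluated numerically by implementing the limit literally: take \(z=t_j+\epsilon u\) for a small \(\epsilon \) and a unit direction u pointing off the edge, and subtract \(2\lambda _j\log (z-t_j)\). The approach direction and the branch of the logarithm are not specified beyond \(z\rightarrow t_j\), so both are assumptions of the sketch below; g stands for the scalar function \(\mathrm{tr}\,A_jF(w)\) restricted to the edge, supplied as a hypothetical callable.

```python
import numpy as np

def reg_integral_627(g, t_j, t_k, lam_j, u, eps=1e-4, n_nodes=2000):
    """Regularized integral (6.27), with g(w) standing for tr(A_j F(w)).

    g   : callable, w -> complex value of tr(A_j F(w)) on the edge (hypothetical)
    u   : complex unit vector giving the approach direction of z -> t_j (assumed)
    The limit is mimicked by a single small offset eps; in practice one would
    repeat with decreasing eps and extrapolate.
    """
    z = t_j + eps * u
    dw = (t_k - t_j) / n_nodes
    nodes = [t_j + (i + 0.5) * dw for i in range(n_nodes)]   # midpoint rule
    integral = sum(g(w) / (w - z) for w in nodes) * dw
    return integral - 2.0 * lam_j * np.log(z - t_j)
```

The companion integral (6.28) is obtained in the same way, with the roles of \(t_j\) and \(t_k\) exchanged and the sign of the logarithmic subtraction reversed, as written there.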