Abstract
We study the q-difference sixth Painlevé equation (\(q\text {P}_{\text {VI}}\)) through its associated Riemann–Hilbert problem (RHP) and show that the RHP is always solvable for irreducible monodromy data. This enables us to identify the solution space of \(q\text {P}_{\text {VI}}\) with a monodromy manifold for generic parameter values. We deduce this manifold explicitly and show it is a smooth and affine algebraic surface when it does not contain reducible monodromy. Furthermore, we describe the RHP for reducible monodromy data and show that, when solvable, its solution is given explicitly in terms of certain orthogonal polynomials yielding special function solutions of \(q\text {P}_{\text {VI}}\).
1 Introduction
Despite widespread knowledge of how a Riemann–Hilbert formulation allows us to describe the solutions of the Painlevé equations, the corresponding description remains incomplete for discrete Painlevé equations. In this paper, we provide such a formulation for an important equation known as the q-difference sixth Painlevé equation and show that (under certain conditions) the corresponding Riemann–Hilbert problem is solvable, the resulting monodromy mapping is bijective, and the monodromy manifold is an algebraic surface given by an explicit equation.
Assuming \(q\in {\mathbb {C}}\), \(0<|q|<1\), and given nonzero parameters \(\kappa =(\kappa _0,\kappa _t,\kappa _1,\kappa _\infty )\in {\mathbb {C}}^4\), the system known as the q-difference sixth Painlevé equation is
where \(f,g:T\rightarrow \mathbb{C}\mathbb{P}^1\) are complex functions defined on a domain T invariant under multiplication by q and we have used the abbreviated notation \(f=f(t)\), \(g=g(t)\), \({\overline{f}}=f(qt)\), \({\overline{g}}=g(qt)\), for \(t\in T\). We will refer to Eq. (1.1) by the abbreviation \(q\text {P}_{\text {VI}}\).
\(q\text {P}_{\text {VI}}\) was first derived by Jimbo and Sakai [20] as the compatibility condition of a pair of linear q-difference systems. They showed that this formulation could be interpreted as a q-difference version of isomonodromic deformation, in close parallel to the role played by the classical sixth Painlevé equation as the isomonodromic condition for a rank-two Fuchsian system with four regular singular points at 0, 1, \(\infty \), t, where t is allowed to move in \({\mathbb {C}}\setminus \{0,1\}\) [12, 21].
The sixth Painlevé equation (\(\text {P}_{\text {VI}}\)) plays an important role in many settings in mathematics and physics. We mention the construction of self-dual Einstein metrics in general relativity [33], classification of 2D-topological field theories [7], mirror symmetry, and quantum cohomology [24] as noteworthy examples.
Letting \(q\rightarrow 1\) in \(q\text {P}_{\text {VI}}\), with \(\kappa _j=q^{k_j}\) for \(j=0,t,1,\infty \), under the assumption that \(f\rightarrow u\) and \(g\rightarrow (u-t)/(u-1)\), the system reduces to \(\text {P}_{\text {VI}}\):
where
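For the reader's convenience, we recall the standard form of \(\text {P}_{\text {VI}}\) (stated here in its usual normalisation; the parameters \((\alpha ,\beta ,\gamma ,\delta )\) are fixed combinations of the exponents \(k_j\), whose precise identification we do not repeat here):
$$\begin{aligned} u''=\frac{1}{2}\left( \frac{1}{u}+\frac{1}{u-1}+\frac{1}{u-t}\right) (u')^2-\left( \frac{1}{t}+\frac{1}{t-1}+\frac{1}{u-t}\right) u'+\frac{u(u-1)(u-t)}{t^2(t-1)^2}\left( \alpha +\beta \,\frac{t}{u^2}+\gamma \,\frac{t-1}{(u-1)^2}+\delta \,\frac{t(t-1)}{(u-t)^2}\right) . \end{aligned}$$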
Due to its relation to \(\text {P}_{\text {VI}}\), the q-difference equation \(q\text {P}_{\text {VI}}\) has drawn increasing interest in recent times. Mano [25] derived the generic leading order asymptotics of solutions near \(t=0\) and \(t=\infty \) and gave an implicit solution to the corresponding nonlinear connection problem. Jimbo et al. [22] extended Mano’s asymptotic result near \(t=0\) to an explicit asymptotic expansion beyond all orders for the generic solution. They obtained this asymptotic representation through an interesting connection of \(q\text {P}_{\text {VI}}\) with conformal field theory, analogous to the one for \(\text {P}_{\text {VI}}\) established by Gamayun et al. [13].
In this paper, we study \(q\text {P}_{\text {VI}}\) via the Jimbo–Sakai linear problem [20]. Using Birkhoff’s theory [1], we define an associated Riemann–Hilbert problem (RHP), which captures the general solution of \(q\text {P}_{\text {VI}}\). The jump matrices of this RHP across a single closed contour form a corresponding monodromy manifold that is a focal point of this paper.
Recently, this monodromy manifold was the object of an extensive study by Ohyama et al. [27], who showed that such a manifold forms an algebraic surface. Furthermore, they conjectured, see [27, Conjecture 7.10], that the algebraic surface is smooth, under additional conditions. In this paper, we prove a stronger version of this conjecture, see Theorem 2.17 and Remark 2.18.
Consider the general class of solutions (f, g) of \(q\text {P}_{\text {VI}}\) defined on a domain T given by a discrete q-spiral, i.e., \(T=q^{\mathbb {Z}}t_0\), for some \(t_0\in {\mathbb {C}}^*\). The deformation of the Jimbo–Sakai linear problem (see §3.2) yields an auxiliary equation associated with \(q\text {P}_{\text {VI}}\)
We refer to (f, g) as a solution of \(q\text {P}_{\text {VI}}(\kappa ,t_0)\) and call the triplet (f, g, w) a solution of \(q\text {P}_{\text {VI}}^{\text {aux}}(\kappa ,t_0)\).
Starting with an initial value of (f, g) in \({\mathbb {C}}^*\times {\mathbb {C}}^*\), and iterating in t, \(q\text {P}_{\text {VI}}\) can become apparently singular when (f, g) takes the value of one of the following eight base-points,
Each of these can be resolved through a blow-up, so that the iteration is once again well-defined [31]. There are, however, formal solutions of Eq. (1.1) that never take a value in \({\mathbb {C}}^*\times {\mathbb {C}}^*\). We exclude such solutions from our consideration.
1.1 Main results
The main results of this paper are given by Theorems 2.12, 2.15, 2.17 and 2.20 in Sect. 2. Throughout the paper, it is assumed that the parameters \(\kappa \) and \(t_0\) satisfy the non-resonance conditions,
As in Ohyama et al. [27], the non-splitting conditions
where \(\epsilon _j\in \{\pm 1\}\), \(j=0,t,1,\infty \), also play an important role. The monodromy manifold contains reducible monodromy when one or more of these conditions are violated – see Lemma 2.10.
The RHP corresponding to \(q\text {P}_{\text {VI}}\) is given by Definition 2.7. Our first main result, Theorem 2.12, shows that the RHP with irreducible monodromy is always solvable. This has important ramifications for the mapping that sends solutions of \(q\text {P}_{\text {VI}}\) to points on the monodromy manifold, which we will refer to as the monodromy mapping. In particular, Corollary 2.13 shows that the monodromy mapping is bijective when the non-splitting conditions are satisfied.
The RHP may be solvable in some cases of reducible monodromy. In Sect. 4.2, we show that in such cases, the RHP is solved explicitly in terms of certain orthogonal polynomials yielding special function solutions of \(q\text {P}_{\text {VI}}\).
Our second main result, Theorem 2.15, constructs an embedding of the monodromy manifold into \((\mathbb{C}\mathbb{P}^1)^4/{\mathbb {C}}^*\), where the quotient is taken with respect to scalar multiplication. The image of this embedding is described as the zero set of a polynomial, given explicitly in Definition 2.14, minus a curve.
This embedding allows us to study algebro-geometric properties of the monodromy manifold. Our third main result, Theorem 2.17, focuses on the singularities of the monodromy manifold and proves that it is smooth if and only if it excludes reducible monodromy, i.e., if and only if the non-splitting conditions hold true.
Finally, our fourth main result, Theorem 2.20, identifies the monodromy manifold with an explicit affine algebraic surface when the non-splitting conditions are satisfied.
1.2 Notation
Here, we briefly describe the notation used in this paper. The symbol \(\sigma _3\) is the well-known Pauli matrix \(\sigma _3={\text {diag}}(1,-1)\). The q-Pochhammer symbol is the (convergent) product
Note that the entire function \((z;q)_\infty \) satisfies
with \((0;q)_\infty =1\) and, moreover, possesses simple zeros at \(q^{-{\mathbb {N}}}\). The \( q \)-theta function
is analytic on \({\mathbb {C}}^*\), with essential singularities at \(z=0, \infty \), and has simple zeros on the q-spiral \(q^{\mathbb {Z}}\). It satisfies
For \(n\in {\mathbb {N}}^*\), we use the common abbreviation for repeated products of these functions
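The functional equations above can be verified numerically. The following is a minimal sketch, assuming the standard conventions \((z;q)_\infty =\prod _{k\ge 0}(1-zq^k)\) and \(\theta _q(z)=(z;q)_\infty (q/z;q)_\infty \):

```python
# Minimal numerical sketch of the notation above, assuming the standard
# conventions (z;q)_inf = prod_{k>=0} (1 - z q^k) and
# theta_q(z) = (z;q)_inf (q/z;q)_inf.
def qpoch(z, q, terms=200):
    """Truncated q-Pochhammer symbol (z;q)_infinity, valid for |q| < 1."""
    p = 1.0
    for k in range(terms):
        p *= 1.0 - z * q**k
    return p

def theta(z, q, terms=200):
    """Truncated q-theta function theta_q(z) = (z;q)_inf (q/z;q)_inf."""
    return qpoch(z, q, terms) * qpoch(q / z, q, terms)

q, z = 0.3, 0.5
# functional equation of the q-Pochhammer symbol: (z;q)_inf = (1-z)(qz;q)_inf
assert abs(qpoch(z, q) - (1.0 - z) * qpoch(q * z, q)) < 1e-12
# functional equation of the q-theta function: theta_q(qz) = -theta_q(z)/z,
# which forces simple zeros on the q-spiral q^Z
assert abs(theta(q * z, q) + theta(z, q) / z) < 1e-12
```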
We will refer to the complex projective space \({\mathbb {C}}{\mathbb {P}}^1\) as \({\mathbb {P}}^1\) and, for positive integer k, denote the k-fold direct product \({\mathbb {P}}^1\times \ldots \times {\mathbb {P}}^1\) by \(({\mathbb {P}}^1)^k\). (We remind the reader that \({\mathbb {P}}^1\times {\mathbb {P}}^1\) is not the same space as \({\mathbb {P}}^2\).)
1.3 Outline of the paper
In Sect. 2, we give the precise statements of the main results of the paper. Section 3 is devoted to the Jimbo–Sakai linear system. Here, we renormalize the linear system of [20] and describe the outcomes of Birkhoff’s classical theory [1] for this system. In Sect. 4, we study the solvability of RHP I, defined in Definition 2.7, and prove Theorem 2.12. Section 5 concerns the monodromy manifold and proofs of Theorems 2.15, 2.17 and 2.20 are given there. We close the paper with concluding remarks in Sect. 6.
2 Detailed Statement of Results
In order to state our main results, we recall the Jimbo–Sakai linear problem for \(q\text {P}_{\text {VI}}\) and define the corresponding monodromy manifold and mapping in Sect. 2.1. In Sect. 2.2, we formulate the associated RHP via Birkhoff’s theory. In Sect. 2.3 we state our first main result, Theorem 2.12. Then, in Sect. 2.4, we state our main results on the monodromy manifold, that is, Theorems 2.15, 2.17 and 2.20.
2.1 The Jimbo–Sakai linear system
Suppose \(\kappa =(\kappa _0,\kappa _t,\kappa _1,\kappa _\infty )\in \mathbb C^4\), all nonzero, are given and \(t\in T\) lies on a discrete q-spiral \(T=q^{\mathbb {Z}}t_0\). Consider the linear system
where A(z, t) is a \(2\times 2\) matrix polynomial with determinant given by
and assume that
for some \(H=H(t)\in GL_2({\mathbb {C}})\). This is the Jimbo–Sakai linear problem [20], which we have scaled to remove redundant parameters (see Sect. 3.1 for details). Throughout this paper we assume that the parameters \(\kappa \) and \(t_0\) satisfy the non-resonance conditions (1.4), which ensure that the linear problem is fully non-resonant (see [23, Definition 1.1]).
By Carmichael [2], the linear system (2.1) has solutions \(Y_0(z,t)\) and \(Y_\infty (z,t)\) respectively given by convergent series expansions around \(z=0\) and \(z=\infty \) of the following form,
where \(q^{k_j}=\kappa _j\) for \(j=0,\infty \). The matrix functions \(\Psi _\infty (z,t)\) and \(\Psi _0(z,t)^{-1}\) extend to single-valued analytic functions in z on \({\mathbb {P}}^1\setminus \{0\}\) and \({\mathbb {C}}\) respectively. Furthermore, their determinants are explicitly given by
A central object of study in this paper is the connection matrix
This matrix is single-valued in z on \({\mathbb {C}}^*\) and is related to Birkhoff’s connection matrix
by
For our purposes, it is more convenient to work with C(z, t), rather than P(z, t), due to its single-valuedness. We will also refer to the connection matrix C(z, t) as the monodromy of the linear system (2.1).
For any fixed t, C(z, t) has the following analytic characterisation in z.
(1) It is a single-valued analytic function in \(z\in {\mathbb {C}}^*\).

(2) It satisfies the q-difference equation
$$\begin{aligned} C(qz,t)=\frac{t}{z^2}\kappa _0^{\sigma _3}C(z,t)\kappa _\infty ^{-\sigma _3}. \end{aligned}$$

(3) Its determinant is given by
$$\begin{aligned} |C(z,t)|=c\, \theta _q\left( \kappa _t^{+1}\frac{z}{t},\kappa _t^{-1}\frac{z}{t},\kappa _1^{+1}z,\kappa _1^{-1}z\right) , \end{aligned}$$
for some \(c\in {\mathbb {C}}^*\).
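Properties (2) and (3) are mutually consistent: taking determinants in the q-difference equation of property (2) gives \(|C(qz,t)|=(t/z^2)^2\,|C(z,t)|\), and each factor of the theta product satisfies \(\theta _q(qu)=-u^{-1}\theta _q(u)\), so the product picks up exactly the factor \(t^2/z^4\). The following minimal Python check illustrates this, under the standard convention \(\theta _q(z)=(z;q)_\infty (q/z;q)_\infty \) and the product abbreviation \(\theta _q(a,b,\ldots )=\theta _q(a)\,\theta _q(b)\cdots \):

```python
# Numerical consistency sketch for properties (2) and (3), assuming the
# standard convention theta_q(z) = (z;q)_inf (q/z;q)_inf.
def qpoch(z, q, terms=200):
    p = 1.0
    for k in range(terms):
        p *= 1.0 - z * q**k
    return p

def theta(z, q):
    return qpoch(z, q) * qpoch(q / z, q)

def detC(z, t, q, kt, k1):
    # the theta product of property (3), normalised with c = 1
    return (theta(kt * z / t, q) * theta(z / (kt * t), q)
            * theta(k1 * z, q) * theta(z / k1, q))

q, t, z, kt, k1 = 0.4, 0.9, 1.3, 1.8, 0.7
lhs = detC(q * z, t, q, kt, k1)              # |C(qz,t)| with c = 1
rhs = (t / z**2)**2 * detC(z, t, q, kt, k1)  # determinant of property (2)
assert abs(lhs - rhs) < 1e-9 * abs(rhs)
```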
We correspondingly make the following definition.
Definition 2.1
We denote by \({\mathfrak {C}}(\kappa ,t)\), for any fixed \(t\in {\mathbb {C}}^*\), the set of all \(2\times 2\) matrix functions satisfying properties (1)–(3) above.
Next, we consider deformations of the linear system (2.1), as \(t\rightarrow qt\), which leave the matrix function P(z, t) invariant, i.e. such that \(P(z,qt)=P(z,t)\), which is equivalent to
We call such a deformation isomonodromic.
Jimbo and Sakai [20] showed that, upon introducing the following coordinates (f, g, w) on A,
isomonodromic deformation of the linear system (2.1) is locally equivalent to (f, g, w) satisfying \(q\text {P}_{\text {VI}}^{aux}(\kappa ,t_0)\). Building on this, we prove the following lemma in Sect. 3.2.
Lemma 2.2
Let (f, g, w) be any solution of \(q\text {P}_{\text {VI}}^{aux}(\kappa ,t_0)\) and denote
Then, the linear system A(z, t) is regular in t on \(q^{\mathfrak {M}} t_0\) and the corresponding connection matrix is given by
for a matrix \(C_0(z)\in {\mathfrak {C}}(\kappa ,t_0)\), unique up to left-multiplication by diagonal matrices. Here D(t) is a diagonal matrix which may be eliminated from Eq. (2.9) by rescaling \(H(t)\mapsto H(t)D(t)\) in Eq. (2.4).
In Lemma 2.2, we have the freedom of rescaling the auxiliary variable w by \(w\mapsto {\widetilde{w}}=dw\), \(d\in {\mathbb {C}}^*\), which is equivalent to gauging the linear system by a constant diagonal matrix,
and thus rescaling the matrix \(C_0(z)\in {\mathfrak {C}}(\kappa ,t_0)\) as
Hence, Lemma 2.2 provides us with a mapping
which associates to any solution (f, g) of \(q\text {P}_{\text {VI}}(\kappa ,t_0)\) the equivalence class of \(C_0(z)\) in \({\mathfrak {C}}(\kappa ,t_0)\), quotiented by arbitrary left- and right-multiplication by invertible diagonal matrices. This warrants the following definition.
Definition 2.3
We define \({\mathcal {M}}(\kappa ,t_0)\) to be the space of connection matrices \({\mathfrak {C}}(\kappa ,t_0)\) quotiented by arbitrary left and right-multiplication by invertible diagonal matrices. We refer to \({\mathcal {M}}(\kappa ,t_0)\) as the monodromy manifold of \(q\text {P}_{\text {VI}}(\kappa ,t_0)\).
Correspondingly, we call the mapping (2.10), which associates with any solution (f, g) of \(q\text {P}_{\text {VI}}(\kappa ,t_0)\) a point on the monodromy manifold, the monodromy mapping.
Remark 2.4
The space \({\mathcal {M}}(\kappa ,t_0)\) was first introduced and studied in Ohyama et al. [27, §4.1.1], where it is denoted as \({\mathcal {F}}\). Ohyama et al. [27] showed how this space can naturally be endowed with the structure of a complex algebraic variety, under certain assumptions of genericity including the non-resonance (1.4) and non-splitting conditions (1.5). Compatible with this structure, we endow \({\mathcal {M}}(\kappa ,t_0)\) with the structures of a complex manifold and algebraic variety, in Theorems 2.17 and 2.20 respectively. The proof that these structures are compatible with those in [27] is postponed to the end of the paper, see Remark 5.6.
In Sect. 3.3, we prove the following lemma concerning injectivity of the monodromy mapping.
Lemma 2.5
The monodromy mapping, defined in Definition 2.3, is injective.
2.2 The main Riemann–Hilbert problem
In this paper, we analyse the monodromy mapping through the corresponding Riemann–Hilbert problem (RHP), obtained via Birkhoff’s theory [1].
To introduce this RHP, we return to the single-valued matrix functions \(\Psi _0(z,t)\) and \(\Psi _\infty (z,t)\), defined in Eq. (2.5). Let us denote \(t_m=q^mt_0\) for \(m\in {\mathbb {Z}}\). By Lemma 2.2, we may choose H such that
for \(m\in {\mathfrak {M}}\).
Next, we need to choose Jordan curves \(\gamma ^{{(}{m}{)}}\), \(m\in {\mathbb {Z}}\), which separate the points in the complex plane where \(\Psi _\infty (z,t_m)\) and \(\Psi _0(z,t_m)\) are respectively non-invertible and singular. These points are precisely the zeros of the determinants (2.6a) and (2.6b) respectively. We thus make the following definition.
Definition 2.6
Consider a family \((\gamma ^{{(}{m}{)}})_{m\in {\mathbb {Z}}}\) of positively oriented Jordan curves in \({\mathbb {C}}^*\) and denote by \(D_+^{{(}{m}{)}}\) and \(D_-^{{(}{m}{)}}\) the inside and outside of \(\gamma ^{{(}{m}{)}}\) respectively, for \(m\in {\mathbb {Z}}\). Then we call this family of curves admissible if, for \(m\in {\mathbb {Z}}\),
where we use the notation \(U\cdot V=\{uv:u\in U,v\in V\}\) for compatible sets U and V, and
see Fig. 1.
We can always construct an admissible family of curves and it follows that
defines a solution of the following RHP, with \(C(z)=C_0(z)\), for \(m\in {\mathfrak {M}}\).
Definition 2.7
(RHP I). Given a connection matrix \(C\in {\mathfrak {C}}(\kappa ,t_0)\) and a family of admissible curves \((\gamma ^{{(}{m}{)}})_{m\in {\mathbb {Z}}}\), for \(m\in {\mathbb {Z}}\), find a matrix function \(\Psi ^{{(}{m}{)}}(z)\) which satisfies the following conditions.
(i) \(\Psi ^{{(}{m}{)}}(z)\) is analytic on \({\mathbb {C}}\setminus \gamma ^{{(}{m}{)}}\).

(ii) \(\Psi ^{{(}{m}{)}}(z')\) has continuous boundary values \(\Psi _+^{{(}{m}{)}}(z)\) and \(\Psi _-^{{(}{m}{)}}(z)\) as \(z'\) approaches \(z\in \gamma ^{{(}{m}{)}}\) from \(D_+^{{(}{m}{)}}\) and \(D_-^{{(}{m}{)}}\) respectively, related by
$$\begin{aligned} \Psi _+^{{(}{m}{)}}(z)=\Psi _-^{{(}{m}{)}}(z)z^mC(z),\quad z\in \gamma ^{{(}{m}{)}}. \end{aligned}$$

(iii) \(\Psi ^{{(}{m}{)}}(z)\) satisfies
$$\begin{aligned} \Psi ^{{(}{m}{)}}(z)=I+{\mathcal {O}}\left( z^{-1}\right) ,\quad z\rightarrow \infty . \end{aligned}$$
The matrix function \(\Psi ^{{(}{m}{)}}(z)\), defined in Eq. (2.12), is uniquely characterised as the solution of RHP I. Indeed, we have the following lemma, which we prove in Sect. 3.3.
Lemma 2.8
For any fixed \(m\in {\mathbb {Z}}\), if RHP I in Definition 2.7 has a solution \(\Psi ^{{(}{m}{)}}(z)\), then this solution is globally invertible on the complex plane and unique.
From here on we say that \(\Psi ^{{(}{m}{)}}(z)\) exists if and only if RHP I has a solution for that particular value of m, as justified by the uniqueness in the above lemma.
If RHP I is solvable, then we can construct a corresponding isomonodromic linear system, by setting
This defines a matrix polynomial of the form (2.2) and the values of (f, g, w) may be read directly from the solution of the RHP as follows (details are given in Sect. 3.3). Let
and denote \(H=(h_{ij})\) and \(U=(U_{ij})\), then
2.3 Solvability of the main RHP
The notion of reducible monodromy, given in the following definition, plays an important role in our main results.
Definition 2.9
We call a connection matrix \(C(z)\in {\mathfrak {C}}(\kappa ,t_0)\) irreducible when none of its entries are identically zero, otherwise we call it reducible. Similarly, we call monodromy \([C(z)]\in {\mathcal {M}}(\kappa ,t_0)\) irreducible when C(z) is irreducible and reducible otherwise.
Lemma 2.10
The monodromy manifold \({\mathcal {M}}(\kappa ,t_0)\) does not contain reducible monodromy if and only if the non-splitting conditions (1.5) hold true.
Remark 2.11
This lemma can be inferred from Ohyama et al. [27, Theorem 4.3]. We give a proof in Sect. 4.1.
We are now in a position to state our first main result, which we prove in Sect. 4.1.
Theorem 2.12
Consider RHP I defined in Definition 2.7. If the connection matrix \(C(z)\in {\mathfrak {C}}(\kappa ,t_0)\) is irreducible, see Definition 2.9, then this RHP is solvable. More precisely, for any \(m\in {\mathbb {Z}}\), at least one of the solutions \(\Psi ^{{(}{m}{)}}(z)\) and \(\Psi ^{{(}{m+1}{)}}(z)\) of RHP I exists.
Let (f, g, w) be the unique corresponding solution of \(q\text {P}_{\text {VI}}^{aux}(\kappa ,t_0)\) via Eq. (2.16). Then, for \(m\in {\mathbb {Z}}\), \(\Psi ^{{(}{m}{)}}(z)\) fails to exist if and only if \((f(t_m),g(t_m))=(\infty ,\kappa _\infty )\).
Corollary 2.13
If the non-splitting conditions (1.5) hold true, then the monodromy mapping is bijective.
Proof
Due to Lemma 2.5, the monodromy mapping is injective. Take any monodromy in the monodromy manifold. Then, by Lemma 2.10, it must be irreducible. Theorem 2.12 thus shows that there exists a solution of \(q\text {P}_{\text {VI}}\) with that monodromy. So the monodromy mapping is also surjective and the corollary follows. \(\square \)
For reducible monodromy, solvability of RHP I is more subtle than in the irreducible case handled in Theorem 2.12. We discuss this in Sect. 4.2, where we show that the RHP with reducible monodromy can be transformed into the standard Fokas–Its–Kitaev RHP [8, 9] for certain orthogonal polynomials. We further show that the corresponding solutions of \(q\text {P}_{\text {VI}}\) can be expressed in terms of determinants containing Heine’s basic hypergeometric functions. We thus see that special function solutions occur when the monodromy of the linear problem is reducible, a phenomenon well-known for the classical sixth Painlevé equation [26].
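For orientation, we recall the standard shape of the Fokas–Its–Kitaev RHP (a sketch only; the precise contour \(\Gamma \) and weight w arising from a reducible connection matrix are specified in Sect. 4.2): for each degree \(n\in {\mathbb {N}}\), find a \(2\times 2\) matrix function Y(z), analytic in \({\mathbb {C}}\setminus \Gamma \), such that
$$\begin{aligned} Y_+(z)=Y_-(z)\begin{pmatrix} 1 &{} w(z)\\ 0 &{} 1 \end{pmatrix},\quad z\in \Gamma ,\qquad Y(z)=\left( I+{\mathcal {O}}\left( z^{-1}\right) \right) z^{n\sigma _3},\quad z\rightarrow \infty . \end{aligned}$$
Its unique solution has the degree-n monic orthogonal polynomial with respect to the weight w on \(\Gamma \) as its (1, 1) entry.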
2.4 Results on the monodromy manifold
Our second main result is the identification of the monodromy manifold with an explicit surface. To state this result, we define a set of coordinates on the monodromy manifold \({\mathcal {M}}(\kappa ,t_0)\), using a construction introduced in our previous paper [23].
Firstly, we require the following notation: for any \(2\times 2\) matrix R of rank one, let \(R_1\) and \(R_2\) be respectively its first and second column, then we define \(\pi (R)\in {\mathbb {P}}^1\) by
with \(\pi (R)=0\) if and only if \(R_1=(0,0)^T\) and \(\pi (R)=\infty \) if and only if \(R_2=(0,0)^T\).
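As a concrete illustration, here is a minimal sketch of this coordinate, under the assumption (consistent with the stated boundary cases) that \(\pi (R)\) is the projective ratio of the two columns, \(R_1=\pi (R)\,R_2\); the helper name pi_coord is hypothetical.

```python
# Hypothetical helper pi_coord: a minimal sketch of the coordinate pi(R),
# assuming pi(R) is the projective ratio of the columns, R_1 = pi(R) * R_2
# (consistent with the boundary cases stated above).
def pi_coord(R, tol=1e-12):
    (a, b), (c, d) = R  # rows of R; the columns are (a, c) and (b, d)
    if abs(a) < tol and abs(c) < tol:
        return 0.0           # first column vanishes
    if abs(b) < tol and abs(d) < tol:
        return float('inf')  # second column vanishes
    # rank one: the two column ratios agree; use the larger denominator
    return a / b if abs(b) >= abs(d) else c / d

R = [[3.0, 4.0], [6.0, 8.0]]  # rank-one matrix with columns (3, 6), (4, 8)
assert pi_coord(R) == 0.75
```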
Take a connection matrix \(C(z)\in {\mathfrak {C}}(\kappa ,t_0)\) and denote
For \(1\le k\le 4\), |C(z)| has a simple zero at \(z=x_k\), and thus \(C(x_k)\), while nonzero, is not invertible. We define the coordinates
Note that \((\rho _1,\rho _2,\rho _3,\rho _4)\) are invariant under left multiplication of C(z) by diagonal matrices. However, multiplication by diagonal matrices from the right has the effect of scaling
for some \(c\in {\mathbb {C}}^*\).
Therefore, the coordinates \(\rho \) naturally lie in \(({\mathbb {P}}^1)^4/{\mathbb {C}}^*\) and we obtain a mapping
which is easily seen to be an embedding (see Lemma 5.1).
We proceed by giving an explicit description of the image of the monodromy manifold under \({\mathcal {P}}\). To this end, we make the following definition.
Definition 2.14
Define the quadratic polynomial
with coefficients given by
Note that T is homogeneous and multilinear in the variables \(\rho =(\rho _1,\rho _2,\rho _3,\rho _4)\). Therefore, if we denote its homogeneous form by
then, using homogeneous coordinates \(\rho _k=[\rho _k^x: \rho _k^y]\in {\mathbb {P}}^1\), \(1\le k\le 4\), the equation
defines a surface in \(({\mathbb {P}}^1)^4/{\mathbb {C}}^*\). We denote this surface by
Our second main result is given by the following theorem, which is proven in Sect. 5.1.
Theorem 2.15
Denote by \({\widehat{\kappa }}\) the tuple of complex parameters \(\kappa \) after replacing \(\kappa _0\mapsto 1\). Then the image of the monodromy manifold \({\mathcal {M}}(\kappa ,t_0)\) under the mapping \({\mathcal {P}}\), defined in Eq. (2.19), is given by the surface \({\mathcal {S}}(\kappa ,t_0)\), minus the curve
Let us denote
then, the mapping
is a bijection.
The curve \({\mathcal {X}}(\kappa ,t_0)\) in the above theorem has a geometric interpretation, which is described in the following remark.
Remark 2.16
The curve \({\mathcal {X}}={\mathcal {X}}(\kappa ,t_0)\) does not depend on \(\kappa _0\) and can be written as the intersection
Informally, one can think of points on the curve \({\mathcal {X}}\) in \({\mathcal {S}}(\kappa ,t_0)\) as corresponding to connection matrices \(C(z)\in {\mathfrak {C}}(\kappa ,t_0)\) whose determinant is identically zero, i.e. they satisfy properties (1) and (2) of Definition 2.1, but property (3) with \(c=0\). Therefore, these coordinate values do not lie in the image of \({\mathcal {P}}\). In the proof of Theorem 2.15, we obtain an explicit parametrisation of \({\mathcal {X}}\), see Eq. (5.20).
We note that any point \([\rho ]\in {\mathcal {S}}(\kappa ,t_0)\) with more than two coordinates zero, or more than two coordinates infinite, necessarily lies on the closed curve \({\mathcal {X}}\), defined in Eq. (2.22), and is thus not a point on the surface \({\mathcal {S}}^*(\kappa ,t_0)\).
However, when one of the non-splitting conditions (1.5) is violated, one of the coefficients of the polynomial \(T(\rho )\), in Definition 2.14, vanishes. In that case, there exist points \([\rho ]\in {\mathcal {S}}(\kappa ,t_0)\) with precisely two coordinates zero or two coordinates infinite. Such points cannot lie on the closed curve \({\mathcal {X}}\) (as this would imply that \(\kappa _0\in q^{\mathbb {Z}}\)), and are in one-to-one correspondence with reducible monodromy on the monodromy manifold.
For example,
and
If \(\kappa _0=\kappa _\infty t_0\), so that \(T_{34}=0\), then these two subspaces correspond respectively to the equivalence classes of the collection of upper-triangular connection matrices
and the equivalence classes of the collection of lower-triangular connection matrices
in the monodromy manifold.
Furthermore, these two subspaces intersect at the single point \([(0,0,\infty ,\infty )]\in {\mathcal {S}}^*(\kappa ,t_0)\), which corresponds to the equivalence class of the diagonal connection matrix in the monodromy manifold given by setting \(c=0\) in any of the above two formulas.
By Theorem 2.15, the monodromy manifold inherits any topological properties of the space \({\mathcal {S}}^*(\kappa ,t_0)\) via the mapping \({\mathcal {P}}\). Diagonal and anti-diagonal monodromy form singularities on the monodromy manifold; this is the content of our third main result, proven in Sect. 5.2.
Theorem 2.17
If the non-splitting conditions (1.5) hold true, then the monodromy manifold \({\mathcal {M}}(\kappa ,t_0)\) is a smooth complex surface.
On the other hand, if one or more of the non-splitting conditions are violated, then the set
is non-empty (but finite), its elements form singularities of the monodromy manifold and away from them the monodromy manifold is smooth.
Remark 2.18
We note that the above theorem implies the assertion in Conjecture 7.10 of Ohyama, Ramis and Sauloy [27]. This conjecture is made under the conditions (1.4), (1.5) and additional assumptions on the parameters, but our proof shows that the result holds without these additional assumptions.
In our fourth and final result we identify the monodromy manifold with an explicit affine algebraic surface via an embedding into \({\mathbb {C}}^6\). To construct this embedding, let us denote by
the quadratic polynomial \(T(p)=T(p;\kappa ,t_0)\) after replacing \(\kappa _0\mapsto 1\).
Take \(1\le i<j\le 4\) and consider the coordinate
So, for example, \(\eta _{12}\) is given by
in homogeneous coordinates.
Note that \(\eta _{ij}\) is invariant under scalar multiplication \(\rho \mapsto c \rho \), \(c\in {\mathbb {C}}^*\). Furthermore, the denominator of \(\eta _{ij}\) does not vanish on \({\mathcal {S}}^*(\kappa ,t_0)\), as any such point \([\rho ]\) would necessarily lie on the curve \({\mathcal {X}}\), see Eq. (2.22).
This means that the \(\eta _{ij}\), \(1\le i<j\le 4\), are six well-defined coordinates on \({\mathcal {S}}^*(\kappa ,t_0)\), and thus on the monodromy manifold \({\mathcal {M}}(\kappa ,t_0)\), which lie in \({\mathbb {C}}^6\). Furthermore, by construction, they satisfy the following four equations,
where the coefficients \(a_{ij}=T_{ij}'/T_{ij}\), \(1\le i<j\le 4\), read
and
Definition 2.19
We denote by \({\mathcal {F}}(\kappa ,t_0)\) the affine algebraic surface in
defined by Eq. (2.26). We correspondingly denote by
the mapping defined through the \(\eta \)-coordinates (2.25) and write
where \({\mathcal {P}}\) is the mapping defined in Eq. (2.19).
Our fourth and final main result is given by the following theorem, which is proved in Sect. 5.3.
Theorem 2.20
Let \(\kappa \) and \(t_0\) be parameters satisfying the non-resonance conditions (1.4) and the non-splitting conditions (1.5). Then the mapping \(\Phi _{{\mathcal {M}}}\), given in Definition 2.19, is an isomorphism between the monodromy manifold \({\mathcal {M}}(\kappa ,t_0)\) and the affine algebraic surface \({\mathcal {F}}(\kappa ,t_0)\).
Remark 2.21
We note that the algebraic surface \({\mathcal {F}}(\kappa ,t_0)\) is invariant under the translations
since the coefficients in Eq. (2.26) are invariant under them.
The surface \({\mathcal {F}}(\kappa ,t_0)\) can be identified with the intersection of two quadrics in \({\mathbb {C}}^4\). This can be seen by using Eq. (2.26a) and (2.26b) to eliminate any two of the six variables.
For example, consider eliminating \(\{\eta _{24},\eta _{34}\}\) from (2.26) using (2.26a) and (2.26b). The relevant determinant is given by
Let us assume that \(\kappa _t\kappa _1\kappa _\infty ^2t_0\notin q^{\mathbb {Z}}\). If not, then we can instead choose another pair of coordinates to eliminate. The non-resonance conditions (1.4) and non-splitting conditions (1.5) now guarantee that the above determinant is non-zero. Upon eliminating \(\{\eta _{24},\eta _{34}\}\), Eqs. (2.26c) and (2.26d) respectively become
with coefficients given by
and
Thus, for generic parameter values, the monodromy manifold of \(q\text {P}_{\text {VI}}\) is isomorphic to the intersection of the two quadrics defined by Eq. (2.27) in \({\mathbb {C}}^4\). Intersections of two quadrics in \({\mathbb {P}}^4\) are known as Segre surfaces and it is well-known that they are isomorphic to Del Pezzo surfaces of degree four, see e.g. [15].
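The elimination step can be sketched symbolically. The following uses hypothetical placeholder coefficients (the actual coefficients of Eq. (2.26) are not reproduced here) purely to illustrate that solving the two linear relations for \(\{\eta _{24},\eta _{34}\}\) and substituting into a quadratic relation leaves a quadric in the remaining variables:

```python
# Symbolic sketch of the elimination, with hypothetical placeholder
# coefficients standing in for those of Eq. (2.26).
import sympy as sp

e12, e13, e14, e23, e24, e34 = sp.symbols('eta12 eta13 eta14 eta23 eta24 eta34')

# two hypothetical linear relations, standing in for (2.26a) and (2.26b)
lin1 = e12 + 2*e24 + 3*e34 - 1
lin2 = e13 + 5*e24 + 7*e34 - 2

# eliminate {eta24, eta34}: solvable since det [[2, 3], [5, 7]] = -1 != 0
sol = sp.solve([lin1, lin2], [e24, e34], dict=True)[0]

# a hypothetical quadratic relation, standing in for (2.26c) or (2.26d)
quad = e24 * e34 + e12 * e23 - 1
reduced = sp.expand(quad.subs(sol))

# the reduced relation is a quadric in the four remaining variables
assert sp.Poly(reduced, e12, e13, e14, e23).total_degree() == 2
```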
It is interesting to contrast this with the monodromy manifolds of the classical Painlevé equations. They are isomorphic to affine cubic surfaces [35]. In particular, their corresponding projective completions are Del Pezzo surfaces of degree three [15].
We further note that Chekhov et al. [3] conjectured explicit affine Del Pezzo surfaces of degree three as the monodromy manifolds of the q-Painlevé equations higher up in Sakai’s classification scheme [31] than \(q\text {P}_{\text {VI}}\).
From Corollary 2.13, Theorems 2.17 and 2.20, we obtain the following corollary.
Corollary 2.22
Let \(\kappa \) and \(t_0\) be such that the non-resonance conditions (1.4) and non-splitting conditions (1.5) are fulfilled. Then, composition of the monodromy mapping with \(\Phi _{{\mathcal {M}}}\), defined in Definition 2.19, yields a bijective mapping from the solution space of \(q\text {P}_{\text {VI}}(\kappa ,t_0)\) to the smooth algebraic surface \({\mathcal {F}}(\kappa ,t_0)\),
In particular, we may write the general solution of \(q\text {P}_{\text {VI}}(\kappa ,t_0)\) as
with \(t\in q^{\mathbb {Z}} t_0\) and \(\eta \) varying in \({\mathcal {F}}(\kappa ,t_0)\).
Remark 2.23
By identifying the domain of the mapping (2.28) with the initial value space of \(q\text {P}_{\text {VI}}\) at \(t=t_0\), the mapping becomes a bijective correspondence between complex (algebraic) surfaces. One can show that this correspondence is a biholomorphism using standard arguments. Namely, one observes that the matrix functions \(\Psi _j(z,t_0)\), \(j=0,\infty \), defined in Eq. (2.5), can be chosen locally analytically in (f, g) as long as one stays away from the exceptional lines above the base points \(b_7\) and \(b_8\). The corresponding connection matrix is then locally analytic in (f, g) and, consequently, so are the \(\eta \)-coordinates. To prove the latter statement around points on the exceptional lines above \(b_7\) and \(b_8\), one simply applies the argument with \(t=q\,t_0\) rather than \(t=t_0\), recalling that the time-evolution is a biholomorphism between the initial value spaces at \(t=t_0\) and \(t=q\, t_0\). It follows that the mapping (2.28) is a bijective holomorphic mapping and thus a biholomorphism.
Remark 2.24
By specialising to the parameter setting
the \(q\text {P}_{\text {VI}}(\kappa )\) equation collapses to its symmetric form
where
and h is related to (f, g) as
As both the non-resonance and non-splitting conditions (1.4) and (1.5) are generically not violated by (2.30), all the aspects of our treatment of \(q\text {P}_{\text {VI}}\) can be carried over to \(q\text {SP}_{\text {VI}}\). We further note that \(q\text {SP}_{\text {VI}}\) is also known as \(q\text {P}_\text {III}\) in the literature [14].
Remark 2.25
Regarding Painlevé VI and its associated standard linear problem, the corresponding monodromy mapping was thoroughly studied by Inaba et al. [17]. The associated monodromy manifold can be identified with an explicit affine cubic surface, a fact which first appeared in Fricke and Klein [11] and was rediscovered by Jimbo [19] in the context of Painlevé VI. Our construction of the surface \({\mathcal {F}}(\kappa ,t_0)\) in Theorem 2.20 may be considered a q-analog of this. Iwasaki [18] studied the smoothness of the Painlevé VI monodromy manifold and the associated cubic. Theorem 2.17 can be considered a q-analog of [18, Theorem 1] in the non-resonant parameter regime.
3 The Linear Problem
Consider the linear system
where A(z) is a complex \(2\times 2\) matrix polynomial of degree two,
with both \(A_0\) and \(A_2\) invertible and semi-simple.
Jimbo and Sakai [20] showed that isomonodromic deformation of such a linear system, as the eigenvalues of \(A_0\) as well as two of the zeros of the determinant of A(z) evolve via multiplication by q, defines an evolution of the coefficient matrix A(z) which is birationally equivalent to \(q\text {P}_{\text {VI}}^{aux}\).
In Sect. 3.1, we show that the linear system (3.1) can always be normalised to the standard form (2.1) we use in this paper. Then, in Sect. 3.2, we formulate the main results of Jimbo and Sakai [20] regarding isomonodromic deformation of the linear system (2.1) and prove Lemma 2.2.
Finally, in Sect. 3.3, we show how the linear system (2.1) can be recovered from RHP I, defined in Definition 2.7, yielding in particular Lemma 2.5.
3.1 Normalising the linear system
In this section we normalise the linear system (3.1) to the standard form (2.1).
Recall that \(A_0\) and \(A_2\) are semi-simple and we denote their eigenvalues by \(\{\sigma _1,\sigma _2\}\) and \(\{\mu _1,\mu _2\}\) respectively. By means of gauging the linear system with a constant matrix, \(Y(z)\mapsto G Y(z)\), so that \(A(z)\mapsto G A(z)G^{-1}\), we may ensure that \(A_2={\text {diag}}(\mu _1,\mu _2)\) is diagonal.
We further denote the zeros of the determinant of A(z) by \(x_k\), \(1\le k\le 4\), so that
Evaluating this determinant at \(z=0\) gives the identity
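For orientation, the identity can be written out as follows. This is a sketch under the stated conventions (leading coefficient \(A_2={\text {diag}}(\mu _1,\mu _2)\), eigenvalues \(\{\sigma _1,\sigma _2\}\) of \(A_0\)), not a verbatim copy of the displayed equation:

```latex
% det A(z) factorises through its leading coefficient and its zeros x_1,...,x_4:
\det A(z) \;=\; \mu_1 \mu_2\, (z - x_1)(z - x_2)(z - x_3)(z - x_4).
% Evaluating at z = 0, where det A(0) = det A_0 = sigma_1 sigma_2, gives
\sigma_1 \sigma_2 \;=\; \mu_1 \mu_2 \, x_1 x_2 x_3 x_4.
```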
By means of a scalar gauge as well as a scaling of the independent variable,
so that the linear system transforms as \(A(z)\mapsto s A(cz)\), we may ensure that
We introduce a time variable t, satisfying \(t^2=\sigma _1\sigma _2\), and four nonzero parameters \(\kappa =(\kappa _0,\kappa _t,\kappa _1,\kappa _\infty )\), through
and note that the linear system (3.1) has now been normalised to the form (2.1).
3.2 Isomonodromic deformation of the linear system
In this section we state important results by Jimbo and Sakai [20] on the isomonodromic deformation of the linear system (2.1). Here we recall that isomonodromic deformation stands for deformation as \(t\rightarrow q\,t\) such that \(P(z,qt)=P(z,t)\), or equivalently, such that the connection matrix satisfies
Theorem 3.1
(Jimbo and Sakai [20]). Considering the linear system (2.1), Eq. (3.3) holds if and only if both \(Y_0(z,t)\) and \(Y_\infty (z,t)\), defined in Eq. (2.5), satisfy
for an (a posteriori unique), rational in z, matrix function B(z, t), which takes the form
We proceed in making the time-evolution defined by (3.4) more explicit. Note that compatibility of the linear system (2.1) and time deformation (3.4) amounts to the following evolution of the coefficient matrix A,
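For the reader's convenience, we recall the shape of this compatibility in the standard Jimbo–Sakai setup, where the linear system reads \(Y(qz,t)=A(z,t)Y(z,t)\) and the deformation reads \(Y(z,qt)=B(z,t)Y(z,t)\). This is a schematic statement; the precise normalisations are those of Eqs. (2.1) and (3.4):

```latex
% Compatibility of the two q-difference equations
%   Y(qz, t) = A(z, t) Y(z, t),
%   Y(z, qt) = B(z, t) Y(z, t),
% requires the two ways of computing Y(qz, qt) to agree:
A(z, qt)\, B(z, t) \;=\; B(qz, t)\, A(z, t).
```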
as well as the following evolution of the diagonalising matrix H(t) in (2.4),
We use the standard coordinates \(f=f(t)\), \(g=g(t)\) and \(w=w(t)\) on the linear system, defined by Eq. (2.7), whose definition we repeat here for the convenience of the reader,
Then the linear system is given in terms of \(\{f,g,w\}\) by
where
and, temporarily using the notation \(\mathring{\kappa }=\kappa +\kappa ^{-1}\),
Equation (3.5) is equivalent to the following conditions on the matrix \(B_0(t)\),
The first two equations follow from the fact that both the left and right-hand side of (3.5) are necessarily analytic in \(z\in {\mathbb {C}}\) and the third follows from equating the degree one terms in z of both sides of Eq. (3.5).
These equations form an over-determined system for \(B_0=B_0(t)\). They allow one to express \(B_0\) explicitly in terms of \(\{f,g,w\}\), for example
and Jimbo and Sakai [20] showed that Eqs. (3.9) are then equivalent to the \(q\text {P}_{\text {VI}}^{aux}\) time evolution of (f, g, w).
Furthermore, by means of a direct computation, one can check that Eqs. (2.4) and (3.6) translate to the elements of the diagonalising matrix \(H=(h_{ij})_{1\le i,j\le 2}\) satisfying
We are now in a position to prove Lemma 2.2.
Proof of Lemma 2.2
We start by showing that the linear system \(A=A(z,t)\) is regular in t away from values where \((f,g)=(\infty ,\kappa _\infty )\). To this end, consider the parametrisation of \(A=A(z,t)\) with respect to (f, g, w). By direct inspection, one can see that this parametrisation is regular for all values of \((f,g)\in {\mathbb {C}}^*\times {\mathbb {C}}^*\) and \(w\in {\mathbb {C}}^*\). The same is true near each of the six basepoints \(b_k\), \(1\le k\le 6\), defined in Eq. (1.3).
For example, consider the basepoint \(b_3=(\kappa _t t,0)\). We apply a change of variables,
so that \(\{F\in {\mathbb {C}},G=0\}\) lies on the exceptional line above \(b_3\), after a local blow up. The parametrisation of the matrix polynomial A is regular at \(G=0\), and takes the form
with
Geometrically, the line \(\{F\in {\mathbb {C}},G=0\}\), above \(b_3\), parametrises coefficient matrices A whose second column vanishes at \(z=\kappa _t t\). The one remaining point on the exceptional line above \(b_3\), which does not lie on this line, is an inaccessible initial value. Namely, the corresponding formal solution of \(q\text {P}_{\text {VI}}\) never takes values in \({\mathbb {C}}^*\times {\mathbb {C}}^*\) and is thus not a genuine solution. We conclude that A is regular for (f, g) near \(b_3\). Similarly, one shows that A is regular near the other basepoints \(b_k\), \(1\le k\le 6\), \(k\ne 3\).
The situation is slightly more involved for the remaining basepoints \(b_7\) and \(b_8\), as the auxiliary equation (1.2) is singular at these points. Firstly, as (f, g) approaches \(b_8=(\infty ,\kappa _\infty ^{-1}q^{-1})\), \({\overline{g}}\) approaches \(\kappa _\infty \) and consequently w vanishes, due to the auxiliary equation. Consider thus the change of variables
In the local chart \(\{F,G,W\}\), the coefficient matrix A is regular at \(F=0\). Geometrically, the line \(\{F=0,G\in {\mathbb {C}}\}\), above \(b_8\), parametrises coefficient matrices A for which the entry \(A_{12}(z)\) is constant. In particular, A is regular near \(b_8\).
Finally, by the same reasoning, it follows that \(w\rightarrow \infty \), as (f, g) approaches \(b_7=(\infty ,\kappa _\infty )\), and that the coefficient matrix A is thus singular there.
We conclude that A(z, t) is singular at \(t=t_*\) if and only if \((f(t_*),g(t_*))=(\infty ,\kappa _\infty )\). Correspondingly, we write
For every \(t\in q^{{\mathfrak {M}}}t_0\), we choose any H(t) satisfying (2.4), but not necessarily (3.6), and let C(z, t) denote the corresponding connection matrix. We proceed with proving Eq. (2.9) in the lemma.
To prove (2.9), it is enough to show that, for any \(m\in {\mathfrak {M}}\),
for some diagonal matrix \(\Delta \), if \(m+1\in {\mathfrak {M}}\), and
for some diagonal matrix \(\Delta \), if \(m+1\notin {\mathfrak {M}}\) (so that necessarily \(m+2\in {\mathfrak {M}}\)).
The first case is a direct consequence of Theorem 3.1. We may further ensure that \(\Delta =I\) by imposing Eq. (3.6) at \(t=t_m\).
As to the second case, we note that, analogously to the proof of Theorem 3.1 by Jimbo and Sakai [20], one can show that \(P(z,q^2t)=P(z,t)\) if and only if \(Y_0(z,t)\) and \(Y_\infty (z,t)\) both satisfy
for an (a posteriori unique), rational in z, matrix function F(z, t) which takes the form
The corresponding time evolution of the coefficient matrix
is equivalent to two iterations of \(q\text {P}_{\text {VI}}^{aux}\), and
By specialising to \(t=t_m\), we obtain (3.11). We may further ensure that \(\Delta =I\), by imposing
This establishes Eq. (2.9).
The last statement of the lemma follows from the fact that rescaling \(H(t)\mapsto H(t)D(t)\) yields \(\Psi _0(z,t)\mapsto \Psi _0(z,t)D(t)\) and thus \(C(z,t)\mapsto D(t)^{-1} C(z,t)\). \(\square \)
3.3 On the \(q\text {P}_{\text {VI}}\) RHP
In Sect. 2.2, we formulated the main Riemann–Hilbert problem for the \(q\text {P}_{\text {VI}}\) equation, RHP I, in Definition 2.7. Let (f, g) be a solution of \(q\text {P}_{\text {VI}}(\kappa ,t_0)\) and [C(z)] be its corresponding monodromy in the monodromy manifold via the monodromy mapping, see Definition 2.3. Then Eq. (2.12) defines a solution of RHP I. In this section, we show how we may reconstruct the solution (f, g) from the solution of RHP I, giving in particular formulas (2.16). This furthermore yields a proof of Lemma 2.5.
Firstly, we prove Lemma 2.8.
Proof of Lemma 2.8
Note that the determinant of \(z^{m}C(z)\) may be written as
for some \(c_m\in {\mathbb {C}}^*\). Assume we have a solution \(\Psi ^{{(}{m}{)}}(z)\) of RHP I, defined in Definition 2.7. Then its determinant \(\Delta ^{{(}{m}{)}}(z)\) is analytic on \({\mathbb {C}}\setminus \gamma ^{{(}{m}{)}}\), it satisfies the jump condition
and \(\Delta ^{{(}{m}{)}}(z)=1+{\mathcal {O}}(z^{-1})\) as \(z\rightarrow \infty \).
This scalar RHP is uniquely solved by
Indeed, the right-hand side satisfies this scalar RHP and, denoting the quotient of the left- and right-hand side of (3.12) by g(z), it follows that g(z) is an entire function on the complex plane satisfying \(g(z)\rightarrow 1\) as \(z\rightarrow \infty \). By Liouville’s theorem, \(g(z)\equiv 1\), which yields Eq. (3.12). In particular, the solution \(\Psi ^{{(}{m}{)}}(z)\) is globally invertible on \({\mathbb {C}}\).
Suppose we have another solution \({\widetilde{\Psi }}^{{(}{m}{)}}(z)\) of RHP I, then the quotient
is analytic on \({\mathbb {C}}\setminus \gamma ^{{(}{m}{)}}\). Furthermore, R(z) has a trivial jump on \(\gamma ^{{(}{m}{)}}\), i.e. \(R_+(z)=R_-(z)\). Therefore, R(z) extends to an analytic function on the entire complex plane. Finally, we know that \(R(z)=I+{\mathcal {O}}(z^{-1})\) as \(z\rightarrow \infty \), thus \(R(z)\equiv I\), again by Liouville’s theorem, and the lemma follows. \(\square \)
Starting with a solution of \(q\text {P}_{\text {VI}}\), we showed how to obtain a connection matrix in Sect. 2.2, and therefore a solution of RHP I – see (2.12). We now describe how, conversely, any solution of RHP I leads to a solution of \(q\text {P}_{\text {VI}}\).
Take a connection matrix \(C(z)\in {\mathfrak {C}}(\kappa ,t_0)\) and suppose RHP I has a solution for at least one \(m\in {\mathbb {Z}}\). We write
For \(m\in {\mathfrak {M}}\), define \(A(z,q^mt_0)\) by Eq. (2.13). Due to the jump conditions of \(\Psi ^{{(}{m}{)}}(z)\) in RHP I, the matrix \(A(z,q^mt_0)\) has trivial jumps on \(\gamma ^{{(}{m}{)}}\) and \(q^{-1}\gamma ^{{(}{m}{)}}\) and thus extends to a single-valued function on the complex z-plane. Furthermore, it follows from the global analyticity and invertibility of \(\Psi ^{{(}{m}{)}}(z)\), see Lemma 2.8, that \(A(z,q^mt_0)\) is entire. Finally, as \(\Psi ^{{(}{m}{)}}(z)=I+{\mathcal {O}}(z^{-1})\) as \(z\rightarrow \infty \), it follows that \(A(z,q^mt_0)\) is a degree two matrix polynomial satisfying
and, due to Eqs. (3.12) and (2.13),
Thus, \(A(z,q^mt_0)\) is a coefficient matrix of the form (2.2), for \(m\in {\mathfrak {M}}\). By construction, the connection matrix associated with \(A(z,q^mt_0)\) is given by \(z^mC(z)\), \(m\in {\mathfrak {M}}\).
For all \(m\in {\mathfrak {M}}\), assume that
Then the corresponding coordinates (f, g, w) are well-defined on A, via Eq. (2.7), and they form a solution of \(q\text {P}_{\text {VI}}^{\text {aux}}(\kappa ,t_0)\). Furthermore, we can read the values of (f, g, w) directly from the solution \(\Psi ^{{(}{m}{)}}(z)\) of the RHP through formulas (2.16).
These formulas are derived as follows. By expanding Eq. (2.13) around \(z=\infty \), and considering the (1, 2) and (1, 1) entry, we respectively obtain
The first equation is precisely Eq. (2.16a) for w. The formula (2.16b) for f follows by subtracting (3.9c) from (3.9d) and solving for f. By substituting \(\alpha =(1-q^{-1})u_{11}-f\) in Eq. (3.9c) we obtain Eq. (2.16d) for \(g_1\). Finally formula (2.16c) for g now follows from Eq. (3.8).
We are now in a position to prove Lemma 2.5.
Proof of Lemma 2.5
We have shown that, for any solution (f, g) of \(q\text {P}_{\text {VI}}(\kappa ,t_0)\), there exists a connection matrix \(C(z)\in {\mathfrak {C}}(\kappa ,t_0)\), such that the values of (f, g) may be read directly from the solution \(\Psi ^{{(}{m}{)}}(z)\) of RHP I in Definition 2.7, via Eq. (2.13). Here \([C(z)]=\textsc {M}\in {\mathcal {M}}(\kappa ,t_0)\) is the monodromy attached to (f, g) via the monodromy mapping.
To prove the lemma, it remains to be shown that these formulas are invariant under choosing a different representative \([{\widetilde{C}}(z)]=\textsc {M}\) of the monodromy, so that (f, g) indeed only depends on the class \(\textsc {M}\). We proceed in proving this statement.
As \([{\widetilde{C}}(z)]=[C(z)]\), there exist invertible diagonal matrices \(D_{1,2}\) such that
Thus, the solution \({\widetilde{\Psi }}^{{(}{m}{)}}(z)\) of RHP I, with \(C(z)\rightarrow {\widetilde{C}}(z)\), is related to \(\Psi ^{{(}{m}{)}}(z)\) by
Consequently, the matrix function \({\widetilde{H}}\) and \({\widetilde{U}}\), defined by Eqs. (2.14) and (2.15) for \({\widetilde{\Psi }}^{{(}{m}{)}}(z)\), are related to H and U by
The formulas (2.16b) and (2.16c) for f and g are invariant under such rescaling and the lemma follows. \(\square \)
We finish this section with some remarks on assumption (3.13). Firstly, note that this is a necessary assumption for the coordinates (f, g, w) to be well-defined. Now, suppose that \(A_{12}(z,q^mt_0)\equiv 0\), for some \(m\in {\mathfrak {M}}\), and write \(t_m=q^m t_0\). Then, we have
where, by Eq. (2.3),
Furthermore, as the eigenvalues of \(A(0,t_m)\) are \(\kappa _0^{\pm 1}t_m\), necessarily
By comparing the different possible values of \(v_{1},\ldots , v_4\) in the above two equations, it follows that the parameters must satisfy
for some \(\epsilon _j\in \{\pm 1\}\), \(j=0,t,1,\infty \). So, at least one of the non-splitting conditions (1.5) is violated.
Furthermore, from the defining equations of \(\Psi _{0}\) and \(\Psi _{\infty }\), Eq. (2.5), it follows that \(\Psi _{\infty }(z,t_m)\) is lower-triangular and either \(\left( \Psi _{0}\right) _{11}(z,t_m)\) or \(\left( \Psi _{0}\right) _{12}(z,t_m)\) is identically zero. In particular, either \(C_{12}(z)\equiv 0\) or \(C_{22}(z)\equiv 0\), which means that C(z) is reducible, see Definition 2.9.
We discuss RHP I with reducible monodromy in further detail in Sect. 4.2.
4 Solvability, Reducible Monodromy and Orthogonal Polynomials
In this section we study the solvability of RHP I, defined in Definition 2.7, and consequently the invertibility of the monodromy mapping introduced in Definition 2.3. In Sect. 4.1, we prove Lemma 2.10 and Theorem 2.12. In Sect. 4.2, we discuss RHP I with reducible monodromy.
4.1 Solvability
We start this section by proving Lemma 2.10. To this end, we briefly recall some fundamental properties of q-theta functions, i.e. analytic functions \(\theta (z)\) on \({\mathbb {C}}^*\) such that \(\theta (z)/\theta (qz)\) is a monomial. For \(\alpha \in {\mathbb {C}}^*\) and \(n\in {\mathbb {N}}\), we denote by \(V_n(\alpha )\) the set of all analytic functions \(\theta (z)\) on \({\mathbb {C}}^*\), satisfying
We note that \(V_n(\alpha )\) is a vector space of dimension n if \(n\ge 1\), see e.g. [29].
For \(r\in {\mathbb {R}}_+\), we call
a fundamental annulus. As described in the following lemma, q-theta functions are, up to scaling, completely determined by the location of their zeros within any fixed fundamental annulus.
Lemma 4.1
Let \(\alpha \in {\mathbb {C}}^*\), \(n\in {\mathbb {N}}\) and \(\theta (z)\) be a nonzero element of \(V_n(\alpha )\). Then, within any fixed fundamental annulus, \(\theta (z)\) has precisely n zeros, counting multiplicity, say \(\{a_1,\ldots ,a_n\}\), and there exist unique \(c\in {\mathbb {C}}^*\) and \(s\in {\mathbb {Z}}\) such that
Conversely, for any choice of the parameters, Eq. (4.2) defines an element of \(V_n(\alpha )\).
Proof
See for instance [29]. \(\square \)
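As a concrete illustration of Lemma 4.1, consider the basic q-theta function. This is a standard example; here we adopt the common convention that \(\theta (z)\in V_n(\alpha )\) means \(\theta (z)=\alpha z^n\theta (qz)\), which may differ from the paper's condition (4.1) by a normalisation:

```latex
% The Jacobi-type q-theta function
\theta_q(z) \;=\; (z; q)_\infty \,(q/z; q)_\infty,
\qquad (z; q)_\infty = \prod_{k=0}^{\infty}\bigl(1 - z q^k\bigr),
% satisfies the quasi-periodicity
\theta_q(qz) \;=\; -z^{-1}\,\theta_q(z),
\quad\text{i.e.}\quad \theta_q(z) \;=\; -z\,\theta_q(qz),
% so theta_q lies in V_1(-1) under the convention above: its zeros form the
% q-spiral q^{Z}, giving exactly one simple zero per fundamental annulus.
```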
We proceed in proving Lemma 2.10.
Proof of Lemma 2.10
Take a connection matrix \(C(z)\in {\mathfrak {C}}(\kappa ,t_0)\) and suppose that C(z) is reducible. Then C(z) is triangular or anti-triangular.
Assume C(z) is triangular, then
for some \(c\in {\mathbb {C}}^*\), where the second equality follows from Definition 2.1. Writing
it follows from Lemma 4.1 that
for some labeling \(\{i,j,k,l\}=\{1,2,3,4\}\), \(c_{11},c_{22}\in {\mathbb {C}}^*\) and \(n\in {\mathbb {Z}}\).
Furthermore, by Definition 2.1,
which implies
violating the non-splitting conditions (1.5).
Similarly, if C(z) is anti-triangular, then
for some re-labeling \(\{i,j,k,l\}=\{1,2,3,4\}\) and \(n\in {\mathbb {Z}}\), again violating the non-splitting conditions (1.5).
Conversely, if the non-splitting conditions (1.5) do not hold true, then either equalities (4.4) or equalities (4.5) can be realised by a re-labeling \(\{i,j,k,l\}=\{1,2,3,4\}\), for some \(n\in {\mathbb {Z}}\). In the former case, Eq. (4.3) with \(C_{12}(z)\equiv C_{21}(z)\equiv 0\) defines a reducible connection matrix in \({\mathfrak {C}}(\kappa ,t_0)\).
It follows similarly that \({\mathfrak {C}}(\kappa ,t_0)\) contains reducible monodromy in the latter case and the lemma follows. \(\square \)
To study the solvability of RHP I, in Definition 2.7, it is helpful to consider the following slightly more general RHP.
Definition 4.2
(RHP II). Given a connection matrix \(C\in {\mathfrak {C}}(\kappa ,t_0)\) and a family of admissible curves \((\gamma ^{{(}{m}{)}})_{m\in {\mathbb {Z}}}\), for \(m,n\in {\mathbb {Z}}\), find a matrix function \(\Psi ^{{(}{m,n}{)}}(z)\) which satisfies the following conditions.
-
(i)
\(\Psi ^{{(}{m,n}{)}}(z)\) is analytic on \({\mathbb {C}}\setminus \gamma ^{{(}{m}{)}}\).
-
(ii)
\(\Psi ^{{(}{m,n}{)}}(z')\) has continuous boundary values \(\Psi _-^{{(}{m,n}{)}}(z)\) and \(\Psi _+^{{(}{m,n}{)}}(z)\) as \(z'\) approaches \(z\in \gamma ^{{(}{m}{)}}\) from \(D_-^{{(}{m}{)}}\) and \(D_+^{{(}{m}{)}}\) respectively, related by
$$\begin{aligned} \Psi _+^{{(}{m,n}{)}}(z)=\Psi _-^{{(}{m,n}{)}}(z)z^mC(z),\quad z\in \gamma ^{{(}{m}{)}}. \end{aligned}$$ -
(iii)
\(\Psi ^{{(}{m,n}{)}}(z)\) satisfies
$$\begin{aligned} \Psi ^{{(}{m,n}{)}}(z)=\left( I+{\mathcal {O}}\left( z^{-1}\right) \right) z^{n\sigma _3}\quad z\rightarrow \infty . \end{aligned}$$
By comparison with RHP I in Definition 2.7, we can identify \(\Psi ^{{(}{m,0}{)}}(z)=\Psi ^{{(}{m}{)}}(z)\). More generally, for any fixed \(n\in {\mathbb {Z}}\), RHP II is equivalent to RHP I, with C(z) replaced by \(C(z)z^{-n\sigma _3}\). In particular, we have the following analog of Lemma 2.8.
Lemma 4.3
For any fixed \(m,n\in {\mathbb {Z}}\), if RHP II in Definition 4.2 has a solution \(\Psi ^{{(}{m,n}{)}}(z)\), then this solution is globally invertible on the complex plane and unique.
Proof
The proof is analogous to that of Lemma 2.8. \(\square \)
Given the uniqueness in the above lemma, we say that \(\Psi ^{{(}{m,n}{)}}(z)\) exists if and only if RHP II has a solution for those values of \(m,n\in {\mathbb {Z}}\).
The main reason for considering the more general RHP above is that we have the following result due to Birkhoff [1].
Lemma 4.4
For any fixed \(m\in {\mathbb {Z}}\), the solution \(\Psi ^{{(}{m,n}{)}}(z)\) to RHP II, in Definition 4.2, exists for at least one \(n\in {\mathbb {Z}}\).
Proof
See Birkhoff [1, §21] or the proof of Lemma 4.4 in [23]. \(\square \)
Our next step is to study the dynamics of \(\Psi ^{{(}{m,n}{)}}(z)\) as n varies, with the ultimate goal of obtaining criteria for the existence of \(\Psi ^{{(}{m,n}{)}}(z)\) at \(n=0\), as these will allow us to prove solvability of RHP I and thus prove Theorem 2.12.
To this end, if \(\Psi ^{{(}{m,n}{)}}(z)\) exists, we denote its expansion around \(z=\infty \) by
as \(z\rightarrow \infty \), and associate a coefficient matrix \(A^{{(}{m,n}{)}}(z)\) as in Eq. (2.13),
Then \(A^{{(}{m,n}{)}}(z)\) is a degree two matrix polynomial of the form (2.2) except for a generally different normalisation at \(z=\infty \),
In particular, the corresponding coordinates \(f^{{(}{n}{)}}(q^m t_0)\), \(g^{{(}{n}{)}}(q^m t_0)\) and \(w^{{(}{n}{)}}(q^m t_0)\) define a solution of \(q\text {P}_{\text {VI}}(\kappa ^{{(}{n}{)}},t_0)\) with
if RHP II is solvable in m for that value of n.
We have the following lemma regarding solvability of RHP II as n varies.
Lemma 4.5
Fix \(m,n\in {\mathbb {Z}}\) and suppose that the solution \(\Psi ^{{(}{m,n}{)}}(z)\) of RHP II in Definition 4.2 exists. Then, recalling the definition of the matrices \(U=(u_{ij})\) and \(V=(v_{ij})\) in Eq. (4.6), either
-
(i)
\(u_{12}^{{(}{m,n}{)}}\ne 0\), in which case \(\Psi ^{{(}{m,n+1}{)}}(z)\) exists.
-
(ii)
\(u_{12}^{{(}{m,n}{)}}= 0\) but \(v_{12}^{{(}{m,n}{)}}\ne 0\), in which case \(\Psi ^{{(}{m,n+1}{)}}(z)\) does not exist but \(\Psi ^{{(}{m,n+2}{)}}(z)\) does exist.
-
(iii)
\(u_{12}^{{(}{m,n}{)}}= 0\) and \(v_{12}^{{(}{m,n}{)}}= 0\), in which case \(\Psi ^{{(}{m,n+k}{)}}(z)\) does not exist for any \(k>0\) and necessarily \(C_{12}(z)\equiv 0\) or \(C_{22}(z)\equiv 0\).
Similarly, either
-
(I)
\(u_{21}^{{(}{m,n}{)}}\ne 0\), in which case \(\Psi ^{{(}{m,n-1}{)}}(z)\) exists.
-
(II)
\(u_{21}^{{(}{m,n}{)}}= 0\) but \(v_{21}^{{(}{m,n}{)}}\ne 0\), in which case \(\Psi ^{{(}{m,n-1}{)}}(z)\) does not exist but \(\Psi ^{{(}{m,n-2}{)}}(z)\) does exist.
-
(III)
\(u_{21}^{{(}{m,n}{)}}= 0\) and \(v_{21}^{{(}{m,n}{)}}= 0\), in which case \(\Psi ^{{(}{m,n-k}{)}}(z)\) does not exist for any \(k>0\) and necessarily \(C_{11}(z)\equiv 0\) or \(C_{21}(z)\equiv 0\).
Proof
We start with the fundamental observation that, for any \(k\in {\mathbb {Z}}\), the solution \(\Psi ^{{(}{m,n+k}{)}}(z)\) exists if and only if there exists a matrix polynomial R(z) which satisfies
Indeed, if such a matrix R(z) exists, then \(\Psi ^{{(}{m,n+k}{)}}(z)=R(z)\Psi ^{{(}{m,n}{)}}(z)\) solves RHP II. Conversely, suppose \(\Psi ^{{(}{m,n+k}{)}}(z)\) exists, and define
then R(z) has a trivial jump on \(\gamma ^{{(}{m}{)}}\) and consequently extends to an analytic matrix function on the whole complex plane, satisfying
It follows that R(z) is a matrix polynomial and Eq. (4.8) follows directly from the normalisation of \(\Psi ^{{(}{m,n+k}{)}}(z)\) at \(z=\infty \).
By the above observation, the existence of \(\Psi ^{{(}{m,n+k}{)}}(z)\) can be studied through examining the solvability of Eq. (4.8), which is how we proceed in establishing the lemma.
Firstly, we consider \(k=1\). The matrix R(z) must take the form
and Eq. (4.8) reduces to the following linear system of equations,
This system is solvable if and only if \(u_{12}^{{(}{m,n}{)}}\ne 0\). Consequently, \(\Psi ^{{(}{m,n+1}{)}}(z)\) exists if and only if \(u_{12}^{{(}{m,n}{)}}\ne 0\). This establishes part (i) of the lemma.
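The computation behind this solvability criterion can be sketched as follows, using the expansion \(\Psi ^{{(}{m,n}{)}}(z)=(I+Uz^{-1}+Vz^{-2}+\cdots )z^{n\sigma _3}\) of Eq. (4.6); the entry labels a, b, c, d below are illustrative:

```latex
% For k = 1 write R(z) = ( z + a  &  b \\ c  &  d ).  The requirement
%   R(z)\,(I + U z^{-1} + V z^{-2} + \cdots)\, z^{n\sigma_3}
%     = (I + \mathcal{O}(z^{-1}))\, z^{(n+1)\sigma_3}
% gives, after multiplying by z^{-n\sigma_3} on the right, in the (2,2) entry:
d + \bigl(c\,u_{12} + d\,u_{22}\bigr)z^{-1} + \cdots
  \;=\; z^{-1}\bigl(1 + \mathcal{O}(z^{-1})\bigr),
% whence d = 0 and c\,u_{12} = 1.  A solution c exists iff u_{12} \neq 0.
```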
Next, assume \(u_{12}^{{(}{m,n}{)}}=0\) and we proceed in studying the solvability of Eq. (4.8) with \(k=2\). The matrix R(z) must take the form
and (4.8) reduces to
It follows from direct computation that the above linear system has a solution if and only if \(v_{12}^{{(}{m,n}{)}}\ne 0\). We therefore conclude that, if \(u_{12}^{{(}{m,n}{)}}=0\), then \(\Psi ^{{(}{m,n+2}{)}}(z)\) exists if and only if \(v_{12}^{{(}{m,n}{)}}\ne 0\). This establishes part (ii) of the lemma.
Finally, consider the case when both \(u_{12}^{{(}{m,n}{)}}=0\) and \(v_{12}^{{(}{m,n}{)}}=0\). Then it follows directly from Eq. (4.7) that the entry \(A_{12}^{{(}{m,n}{)}}(z)\) of the matrix polynomial \(A^{{(}{m,n}{)}}(z)\) is identically zero, by considering its expansion around \(z=\infty \). Furthermore, as
for \(z\in q^{-1}D_+^{{(}{m}{)}}\), it follows that \(\Psi _{12}^{{(}{m,n}{)}}(z)\equiv 0\) on \(D_+^{{(}{m}{)}}\).
Now, consider Eq. (4.8) for any \(k>0\). Its (2, 2)-entry reads
as \(z\rightarrow \infty \). However, recall that
and thus Eq. (4.9) has no polynomial solution \(R_{22}(z)\). It follows that \(\Psi ^{{(}{m,n+k}{)}}(z)\) does not exist, for any \(k>0\).
Finally, we prove that one of the entries of C(z) must be identically zero. To this end, note that
for \(z\in D_-^{{(}{m}{)}}\). There are two options, either
in which case it follows from (4.10) that \(\Psi _{12}^{{(}{m,n}{)}}(z)\equiv 0\) on \(D_-^{{(}{m}{)}}\) and consequently that \(C_{12}(z)\equiv 0\); or
in which case it follows from (4.10) that \(\Psi _{11}^{{(}{m,n}{)}}(z)\equiv 0\) on \(D_-^{{(}{m}{)}}\) and consequently that \(C_{22}(z)\equiv 0\). This proves part (iii) of the lemma.
Parts (I)–(III) of the lemma are proven analogously. \(\square \)
Note that we have the following immediate corollary from Lemmas 4.4 and 4.5.
Corollary 4.6
Consider RHP II in Definition 4.2 and assume C(z) is irreducible. Then, for any \(m,n\in {\mathbb {Z}}\), the solution \(\Psi ^{{(}{m,n}{)}}(z)\) or \(\Psi ^{{(}{m,n+1}{)}}(z)\) exists.
We now have all the ingredients to prove Theorem 2.12.
Proof of Theorem 2.12
Take an irreducible connection matrix \(C(z)\in {\mathfrak {C}}(\kappa ,t_0)\). In RHP II, see Definition 4.2, we have an additional integer parameter n and we denote its solution by \(\Psi ^{{(}{m,n}{)}}(z)\), when it exists. For \(n=0\), this RHP is precisely RHP I. Proving the first part of Theorem 2.12 is thus equivalent to showing that, for any fixed \(m\in {\mathbb {Z}}\), the solution \(\Psi ^{{(}{m,0}{)}}(z)\) or \(\Psi ^{{(}{m+1,0}{)}}(z)\) of RHP II exists. We do this via a proof by contradiction.
Take \(m\in {\mathbb {Z}}\) and suppose that neither \(\Psi ^{{(}{m,0}{)}}(z)\) nor \(\Psi ^{{(}{m+1,0}{)}}(z)\) exists. As C(z) is irreducible, Corollary 4.6 implies that \(\Psi ^{{(}{m,-1}{)}}(z)\) and \(\Psi ^{{(}{m+1,-1}{)}}(z)\) necessarily exist.
To deduce a contradiction, we define the following matrix function
The jump conditions of RHP II that \(\Psi ^{{(}{m+1,-1}{)}}(z)\) and \(\Psi ^{{(}{m,-1}{)}}(z)\) satisfy imply that B(z) has only trivial jumps on \(\gamma ^{{(}{m}{)}}\) and \(\gamma ^{{(}{m+1}{)}}\). Consequently, B(z) extends to a meromorphic function on the complex plane.
The only possible source of singularities (i.e., poles) on the right side of Eq. (4.11), is the term \(C(z)^{-1}\). (Note that \(\Psi ^{(m,n)}\) are analytic functions of z, which moreover are invertible for all z, see Lemma 4.3.) In \(D_-^{{(}{m}{)}}\cap D_+^{{(}{m+1}{)}}\), we know that the determinant of C(z) only vanishes at \(z=\kappa _t^{\pm 1}q^{m+1} t_0\), so that \(C(z)^{-1}\) has (simple) poles there. Therefore, B(z) has simple poles at \(z=\kappa _t^{\pm 1}q^{m+1} t_0\). This, combined with the fact that \(B(0)=0\) and \(B(\infty )=I\), yields
for a constant matrix \(B_0\).
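In particular, a rational \(2\times 2\) matrix function with simple poles only at \(z=\kappa _t^{\pm 1}q^{m+1}t_0\), vanishing at \(z=0\) and tending to \(I\) at \(z=\infty \), is necessarily of the following shape. This is a hedged reconstruction, consistent with the dichotomy for \(B_0\) in Cases (I) and (II) below:

```latex
% B(z) with B(0) = 0, B(infinity) = I and simple poles only at
% z = kappa_t^{+-1} q^{m+1} t_0 has numerator z^2 I + z B_0 + B_1;
% the condition B(0) = 0 forces B_1 = 0, leaving
B(z) \;=\; \frac{z\,\bigl(z I + B_0\bigr)}
  {\bigl(z - \kappa_t\, q^{m+1} t_0\bigr)\bigl(z - \kappa_t^{-1} q^{m+1} t_0\bigr)},
% with B_0 a constant matrix.
```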
We now turn our attention to the coefficient matrices \(A^{{(}{m,-1}{)}}(z)\) and \(A^{{(}{m+1,-1}{)}}(z)\) related to \(\Psi ^{{(}{m,-1}{)}}(z)\) and \(\Psi ^{{(}{m+1,-1}{)}}(z)\) via Eq. (4.7). It follows from the defining equation of B(z), Eq. (4.11), that these coefficient matrices are related by
To deduce this, it suffices to note that, for \(z\in q^{-1}D_+^{{(}{m}{)}}\), compatibility of the first rows of the right-hand sides of Eqs. (4.7) and (4.11) yields Eq. (4.12). By analytic continuation, Eq. (4.12) holds globally.
Now, recall that \(\Psi ^{{(}{m,n}{)}}(z)\) has an asymptotic expansion at infinity, see Eq. (4.6), of the form
Due to part (i) of Lemma 4.5, we know that \(u_{12}^{{(}{m,-1}{)}}=0\) and \(u_{12}^{{(}{m+1,-1}{)}}=0\). We will proceed in showing that also \(v_{12}^{{(}{m,-1}{)}}=0\), which, due to part (iii) of Lemma 4.5, means that C(z) is not irreducible, giving us the desired contradiction.
To get there, we first note that, by considering the expansion of B(z) as \(z\rightarrow \infty \) in the first row of Eq. (4.11), we obtain \((B_0)_{12}=0\).
Similarly, as \(u_{12}^{{(}{m,-1}{)}}=0\), it follows from the first row of the right-hand side of Eq. (4.7) that the (1, 2) entry of A satisfies \(A_{12}^{{(}{m,-1}{)}}(z)={\mathcal {O}}(1)\) as \(z\rightarrow \infty \). Namely
where c is a constant.
We now show that Eqs. (4.12), (4.13) and the fact that \((B_0)_{12}=0\) imply that \(v_{12}^{{(}{m,-1}{)}}=0\).
Firstly, by comparing the determinants of the left and right-hand sides of Eq. (4.12), we obtain
As \((B_0)_{12}=0\), this implies the following dichotomy: either
-
(I)
\(B_0=\begin{pmatrix} -\kappa _tq^{m+1} t_0 &{}\quad 0\\ b_{21} &{}\quad -\kappa _t^{-1}q^{m+1} t_0 \end{pmatrix}\), or
-
(II)
\(B_0=\begin{pmatrix} -\kappa _t^{-1}q^{m+1} t_0 &{}\quad 0\\ b_{21} &{}\quad -\kappa _t q^{m+1} t_0 \end{pmatrix}\),
for some \(b_{21}\in {\mathbb {C}}\).
Secondly, the left-hand side of Eq. (4.12) is analytic at \(z=\kappa _t^{\pm 1}q^{m} t_0\), but B(qz), on the right-hand side, has a pole at those two points. This means that
We now consider the (1, 2)-entry of Eq. (4.14) for the two choices of the sign ±. The positive choice leads to a tautology in Case (I), while the negative choice gives
On the other hand, the positive choice in Case (II) gives
while the negative choice is a tautology. Due to the non-resonance conditions (1.4), \(\kappa _t^2\ne 1\), and so it follows from the above results that \(c=0\). Therefore, \(A_{12}^{{(}{m,-1}{)}}(z)\) is identically zero, by Eq. (4.13).
Since A is lower triangular, it follows from Eq. (4.7) that \(\Psi ^{{(}{m,-1}{)}}(z)\) must be lower triangular for \(z\in D_+^{{(}{m}{)}}\). In particular, \(u_{12}^{{(}{m,-1}{)}}=v_{12}^{{(}{m,-1}{)}}=0\), which, due to part (iii) of Lemma 4.5, means that C(z) is not irreducible, giving us the desired contradiction.
We conclude that the solution \(\Psi ^{{(}{m}{)}}(z)\) or \(\Psi ^{{(}{m+1}{)}}(z)\) of RHP I exists for any \(m\in {\mathbb {Z}}\), establishing the first part of the theorem.
Let (f, g, w) be the corresponding solution of \(q\text {P}_{\text {VI}}^{aux}(\kappa ,t_0)\) via (2.16). The second part of the theorem asserts that for \(m\in {\mathbb {Z}}\), \(\Psi ^{{(}{m}{)}}(z)\) fails to exist if and only if \((f(t_m),g(t_m))=(\infty ,\kappa _\infty )\).
So, suppose \(m\in {\mathbb {Z}}\) is such that \(\Psi ^{{(}{m}{)}}(z)\) fails to exist. If \((f(t_m),g(t_m))\ne (\infty ,\kappa _\infty )\), then it follows from Lemma 2.2 that the coefficient matrix \(A(z,t_m)\) is well-defined. But then Eq. (2.12) would yield a solution of RHP I, that is, \(\Psi ^{{(}{m}{)}}(z)\) exists, which contradicts our assumption. Thus \((f(t_m),g(t_m))= (\infty ,\kappa _\infty )\).
On the other hand, if \(\Psi ^{{(}{m}{)}}(z)\) exists, then \(A(z,t_m)\) is well-defined, via Eq. (2.13), and consequently \((f(t_m),g(t_m))\ne (\infty ,\kappa _\infty )\), by Lemma 2.2. So, indeed, \(\Psi ^{{(}{m}{)}}(z)\) fails to exist if and only if \((f(t_m),g(t_m))=(\infty ,\kappa _\infty )\). This completes the proof of the theorem. \(\square \)
4.2 Reducible monodromy, orthogonal polynomials and special function solutions
In the case of \(\text {P}_{\text {VI}}\), it is well-known that reducible monodromy yields special function solutions – see Mazzocco [26]. Furthermore, in such cases, the standard RHP for \(\text {P}_{\text {VI}}\), when solvable, can be solved explicitly in terms of certain orthogonal polynomials [6, 10].
In this subsection, we show that the same phenomenon occurs for \(q\text {P}_{\text {VI}}\). Recall that the monodromy manifold contains reducible monodromy if and only if conditions (1.5a) or conditions (1.5b) are violated. We discuss one example from each of these two sets of non-splitting conditions.
Firstly, we consider the case where
violating one of the conditions in (1.5a), and consider RHP II, defined in Definition 4.2, with the following upper-triangular connection matrix \(C(z)\in {\mathfrak {C}}(\kappa ,t_0)\),
Here \(c\in {\mathbb {C}}\) and \(\nu \in {\mathbb {C}}^*\) are two pieces of monodromy data that can be chosen freely.
Writing \(t_m=q^m t_0\), the jump matrix of \(\Psi ^{{(}{m,n}{)}}(z)\) in RHP II can be written as
We bring RHP II into the standard Fokas–Its–Kitaev RHP form [8, 9] for orthogonal polynomials by applying a transformation
where \(D_1\) and \(D_2\) are diagonal matrices and \(F_0^{{(}{m}{)}}(z)\) and \(F_\infty ^{{(}{m}{)}}(z)\) are analytic and invertible matrix functions on \(D_-^{{(}{m}{)}}\) and \(D_+^{{(}{m}{)}}\) respectively.
After such a transformation, the jump matrix of \(Y^{{(}{m,n}{)}}(z)\) reads
and we wish to choose \(D_{1,2}\) and \(F_{0,\infty }\) such that this jump matrix is upper-triangular with diagonal entries constant and equal to 1. To this end, we choose \(F_{0,\infty }\) so that they cancel the q-theta functions on the diagonal,
and we choose \(D_1\) and \(D_2\) to normalise the now constant diagonal entries so that they equal 1,
Then the jump matrix reads
where
and \(Y^{{(}{m,n}{)}}(z)\) solves the following RHP, if it exists.
Definition 4.7
(RHP III). For \(m,n\in {\mathbb {Z}}\), find a matrix function \(Y^{{(}{m,n}{)}}(z)\) which satisfies the following conditions.
(i) \(Y^{{(}{m,n}{)}}(z)\) is analytic on \({\mathbb {C}}\setminus \gamma ^{{(}{m}{)}}\).
(ii) \(Y^{{(}{m,n}{)}}(z')\) has continuous boundary values \(Y_-^{{(}{m,n}{)}}(z)\) and \(Y_+^{{(}{m,n}{)}}(z)\) as \(z'\) approaches \(z\in \gamma ^{{(}{m}{)}}\) from \(D_-^{{(}{m}{)}}\) and \(D_+^{{(}{m}{)}}\) respectively, related by
$$\begin{aligned} Y_+^{{(}{m,n}{)}}(z)=Y_-^{{(}{m,n}{)}}(z)\begin{pmatrix} 1 &{}\quad w(z,t_m)\\ 0 &{}\quad 1 \end{pmatrix}\qquad (z\in \gamma ^{{(}{m}{)}}), \end{aligned}$$where w(z, t) is the weight function defined in Eq. (4.16).
(iii) \(Y^{{(}{m,n}{)}}(z)\) satisfies
$$\begin{aligned} Y^{{(}{m,n}{)}}(z)=\left( I+{\mathcal {O}}\left( z^{-1}\right) \right) z^{n\sigma _3}\quad z\rightarrow \infty . \end{aligned}$$
RHP III is the standard Fokas–Its–Kitaev RHP for orthogonal polynomials on the contour \(\gamma ^{{(}{m}{)}}\) with respect to the weight function \(w(z,t_m)\). We refer to Deift [4] for more background information on the theory of orthogonal polynomials and corresponding RHPs.
We proceed to draw some immediate conclusions from the equivalence between RHPs II and III, given in Definitions 4.2 and 4.7 respectively, combined with the theory of orthogonal polynomials. If \(n<0\), then RHP III is unsolvable for every \(m\in {\mathbb {Z}}\), and thus the same holds true for RHP II.
When \(n=0\), RHP III is solvable for every \(m\in {\mathbb {Z}}\) and the solution is explicitly given by
where \({\mathcal {C}}^{{(}{m}{)}}\) denotes the Cauchy operator on \(\gamma ^{{(}{m}{)}}\),
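For completeness, we recall the standard form of these objects in the Fokas–Its–Kitaev framework; the following is a sketch in the usual normalisation, which may differ from the normalisation in the text by constants and orientation conventions.

```latex
% Cauchy operator on the contour \gamma^{(m)} (standard normalisation):
\[
  \bigl(\mathcal{C}^{(m)}h\bigr)(z)
    = \frac{1}{2\pi i}\int_{\gamma^{(m)}}\frac{h(s)}{s-z}\,\mathrm{d}s,
  \qquad z\in\mathbb{C}\setminus\gamma^{(m)}.
\]
% For n = 0 the jump matrix is unipotent upper-triangular, so the RHP is
% solved additively via the Sokhotski--Plemelj formula:
\[
  Y^{(m,0)}(z)
    = \begin{pmatrix}
        1 & \bigl(\mathcal{C}^{(m)} w(\,\cdot\,,t_m)\bigr)(z)\\[2pt]
        0 & 1
      \end{pmatrix},
  \qquad Y^{(m,0)}(z) = I + \mathcal{O}\bigl(z^{-1}\bigr),\quad z\to\infty.
\]
```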
When \(n>0\), RHP III is solvable if and only if the Hankel determinant of moments
is nonzero, in which case the solution of the RHP is explicitly given by
where \(p_n(z;t_m)\), for \(n\ge 0\), denotes the (generically) degree n polynomial
The latter polynomials satisfy the orthogonality condition
and thus form a sequence of orthogonal polynomials with respect to the complex functional
when none of the Hankel determinants vanish.
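In the standard conventions of the theory of orthogonal polynomials (which may differ from the normalisation used above by constant factors), the Hankel determinants and the polynomials admit the following determinantal expressions, with \(\mu _k\) the moments of the functional.

```latex
% Hankel determinant of moments (with the convention \Delta_0 = 1):
\[
  \Delta_n(t_m) \;=\; \det\bigl(\mu_{i+j}(t_m)\bigr)_{i,j=0}^{\,n-1},
\]
% Heine's determinantal formula for the monic orthogonal polynomials:
\[
  p_n(z;t_m) \;=\; \frac{1}{\Delta_n(t_m)}
  \det\begin{pmatrix}
    \mu_0     & \mu_1 & \cdots & \mu_n      \\
    \vdots    &       &        & \vdots     \\
    \mu_{n-1} & \mu_n & \cdots & \mu_{2n-1} \\
    1         & z     & \cdots & z^n
  \end{pmatrix},
\]
% well defined whenever \Delta_n(t_m) is nonzero.
```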
We denote
and assume \(c\ne 0\). By an argument similar to the one in the proof of Theorem 2.12, one can show that, for any \(m\in {\mathbb {Z}}\), \(m\in {\mathfrak {M}}_n\) or \(m+1\in {\mathfrak {M}}_n\). We thus obtain a corresponding solution \((w^{{(}{n}{)}},f^{{(}{n}{)}},g^{{(}{n}{)}})\) of \(q\text {P}_{\text {VI}}^\text {aux}(\kappa ^{{(}{n}{)}},t_0)\), where
for \(n\ge 0\).
We proceed to derive explicit formulas for \(f^{{(}{n}{)}}\) and \(g^{{(}{n}{)}}\). To this end, we note that the next-to-highest order coefficient in the asymptotic expansion
can be written explicitly as
where \(\Delta _n(t_m)\) denotes the n-th Hankel determinant of moments and
with \(\Gamma _1(t_m)=\mu _1\) and \(\Gamma _0(t_m)=0\).
By direct substitution of the corresponding asymptotic expansion of \(\Psi ^{{(}{m,n}{)}}(z)\) around \(z=\infty \) into Eq. (2.13), we find
and
where \(t=t_m\) and the linear term L reads
Upon substituting the explicit formula for \(w^{{(}{n}{)}}\) into the auxiliary Eq. (1.2), and solving for g, we obtain
Note that, by the above formulas, \(\Delta _n(t)=0\) if and only if \(f^{{(}{n}{)}}(t)=\infty \) and \(g^{{(}{n}{)}}(t)=\kappa _{\infty ,n}\), consistent with Theorem 2.12.
Furthermore, the moments \(\mu _k=\mu _k(t)\) can be expressed explicitly in terms of Heine’s basic hypergeometric functions. Indeed, a residue computation yields that the k-th moment equals
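For the reader's convenience, Heine's basic hypergeometric series is defined, in standard notation, as follows (with the usual q-Pochhammer symbol).

```latex
% q-Pochhammer symbol and Heine's series {}_2\phi_1, for 0 < |q| < 1:
\[
  (a;q)_n = \prod_{j=0}^{n-1}\bigl(1 - a q^{j}\bigr),
  \qquad
  {}_2\phi_1(a,b;c;q,z)
    = \sum_{n=0}^{\infty}
      \frac{(a;q)_n\,(b;q)_n}{(c;q)_n\,(q;q)_n}\,z^{n},
  \quad |z|<1.
\]
```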
Sakai [30] first derived special function solutions of \(q\text {P}_{\text {VI}}\), written in terms of Casorati determinants of Heine’s basic hypergeometric functions, which correspond to setting \(\nu =\kappa _t^{-1}\) or \(\nu =\kappa _0^2/\kappa _t\) in the above, so that \(S_1=0\) or \(S_2=0\) respectively.
Ormerod et al. [28] related a family of semi-classical orthogonal polynomials to \(q\text {P}_{\text {VI}}\), via the Jimbo–Sakai linear system, and derived formulas similar to (4.18) and (4.19) above. To relate the orthogonal polynomials in this section to those in [28], we write the complex functional (4.17) in terms of q-Jackson integrals. Assuming that \(|\kappa _0|<|q|^{-\frac{1}{2}}\), a residue computation gives
for any entire function p(z), where the right-hand side integrals are standard Jackson integrals, W(z, t) is the weight function
and the dependence of the integral operator on the monodromy data \(\{c,\nu \}\) is hidden in the coefficients in front of the Jackson integrals,
Note that both coefficients satisfy \(\alpha _j(qt)=\frac{1}{\kappa _t \nu }\alpha _j(t)\), \(j=1,2\), and the orthogonal polynomials in Ormerod et al. [28] then coincide with the polynomials \(p_n\) above, up to scalar multiplication, in the case when \(\nu \) is chosen such that \(\alpha _1(t)=-\alpha _2(t)\). In other words, \(\nu =\nu (t_0)\) is chosen such that
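The Jackson integrals referred to above are defined in the standard way; for \(0<|q|<1\),

```latex
% Jackson q-integral on [0,a]: a q-analogue of the Riemann integral,
% recovering the ordinary integral of f as q -> 1 for continuous f.
\[
  \int_0^{a} f(z)\,\mathrm{d}_q z
    \;=\; (1-q)\,a\sum_{k=0}^{\infty} q^{k} f\bigl(a q^{k}\bigr),
\]
% whenever the series on the right-hand side converges.
```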
Next, we briefly consider an example coming from one of the conditions in (1.5b) being violated. Namely, we set
and consider RHP I, defined in Definition 2.7, with a corresponding upper-triangular connection matrix of the form,
where the monodromy data \(c\in {\mathbb {C}}\) and \(\nu \in {\mathbb {C}}^*\) can again be chosen freely.
We note that the jump matrix \(z^mC(z)\) can be rewritten as
where we denoted \(t_m:=q^m t_0\).
We apply the transformation
where
and
Then the jump matrix for \(Y^{{(}{m}{)}}(z)\) reads
where
and \(Y^{{(}{m}{)}}(z)\) solves the following RHP, if it exists.
Definition 4.8
(RHP IV). For \(m\in {\mathbb {Z}}\), find a matrix function \(Y^{{(}{m}{)}}(z)\) which satisfies the following conditions.
(i) \(Y^{{(}{m}{)}}(z)\) is analytic on \({\mathbb {C}}\setminus \gamma ^{{(}{m}{)}}\).
(ii) \(Y^{{(}{m}{)}}(z')\) has continuous boundary values \(Y_-^{{(}{m}{)}}(z)\) and \(Y_+^{{(}{m}{)}}(z)\) as \(z'\) approaches \(z\in \gamma ^{{(}{m}{)}}\) from \(D_-^{{(}{m}{)}}\) and \(D_+^{{(}{m}{)}}\) respectively, related by
$$\begin{aligned} Y_+^{{(}{m}{)}}(z)=Y_-^{{(}{m}{)}}(z)\begin{pmatrix} 1 &{}\quad {\widehat{w}}(z,t_m)\\ 0 &{}\quad 1 \end{pmatrix}\qquad (z\in \gamma ^{{(}{m}{)}}), \end{aligned}$$where \({\widehat{w}}(z,t)\) is the weight function defined in Eq. (4.20).
(iii) \(Y^{{(}{m}{)}}(z)\) satisfies
$$\begin{aligned} Y^{{(}{m}{)}}(z)=\left( I+{\mathcal {O}}\left( z^{-1}\right) \right) z^{m\sigma _3}\quad z\rightarrow \infty . \end{aligned}$$
This RHP takes the form of the Fokas–Its–Kitaev RHP for orthogonal polynomials, but with the contour \(\gamma ^{{(}{m}{)}}\) and weight function \({\widehat{w}}(z,t_m)\) scaling with the ‘degree’ m of the corresponding orthogonal polynomials. In particular, RHP IV is unsolvable for \(m<0\) and thus so is RHP I in Definition 2.7.
For \(m=0\), RHP IV is solvable and its solution is given by
From Eq. (2.13) it follows that the corresponding linear system A(z, t) at \(t=t_0\) takes the upper-triangular form
For \(m\ge 0\), RHP IV is solvable if and only if the mth Hankel determinant of moments for the weight function \({\widehat{w}}(z,t_m)\), with respect to the contour \(\gamma ^{{(}{m}{)}}\), is nonzero. We denote
then \({\mathfrak {M}}\subseteq {\mathbb {N}}\) and, if \(c\ne 0\), the same argument as in the proof of Theorem 2.12 shows that, for any \(m\ge 0\), \(m\in {\mathfrak {M}}\) or \(m+1\in {\mathfrak {M}}\). Thus the domain of the corresponding solution (f, g) is given by the semi q-spiral \(q^{{\mathbb {N}}}t_0\).
Note that, by Eq. (4.21), the value of g at \(t=t_0\) is given by
and thus Eq. (1.1) has a singularity at \(t=q^{-1} t_0\) which cannot be resolved. In particular, there exists no isomonodromic continuation of the solution past \(t=t_0\); see also [5, Prop. 4.1].
This phenomenon has also been observed for solutions of other discrete Painlevé equations associated with orthogonal polynomials; see e.g. Van Assche [34].
We emphasise that, also in this case, one can derive explicit expressions for \(f(q^mt_0)\) and \(g(q^mt_0)\), \(m\ge 0\), in terms of determinants of moments, but with the sizes of the determinants growing with m.
Finally, note that, if we set \(c=0\), so that C(z) is diagonal, we have \({\widehat{w}}(z,t)=0\) and \(A_{12}(z,t_0)\equiv 0\). In this singular case, \({\mathfrak {M}}=\{0\}\) and there is no solution (f, g) of \(q\text {P}_{\text {VI}}(\kappa ,t_0)\) corresponding to this monodromy.
We finish this section by noting that, in general, the domain where RHP I, defined in Definition 2.7, is solvable,
can take one of five particular forms when C(z) is reducible, characterised by
(1) \(\forall m\in {\mathbb {Z}}\), \(m\in {\mathfrak {M}}\) or \(m+1\in {\mathfrak {M}}\);
(2) \(\exists {m_0\in {\mathfrak {M}}}\) such that \({\mathfrak {M}}\subseteq {\mathbb {Z}}_{\ge m_0}\) and \(\forall m\ge m_0\): \(m\in {\mathfrak {M}}\) or \(m+1\in {\mathfrak {M}}\);
(3) \(\exists {m_0\in {\mathfrak {M}}}\) such that \({\mathfrak {M}}\subseteq {\mathbb {Z}}_{\le m_0}\) and \(\forall m\le m_0\): \(m\in {\mathfrak {M}}\) or \(m-1\in {\mathfrak {M}}\);
(4) \(\exists {m_0\in {\mathfrak {M}}}\) such that \({\mathfrak {M}}=\{m_0\}\);
(5) \({\mathfrak {M}}=\emptyset \).
In the first example of this section, we saw cases (1) and (5). In the second example, we saw cases (2) and (4) with \(m_0=0\).
5 The Monodromy Manifold
This section is devoted to the monodromy manifold defined in Definition 2.3. In Sects. 5.1, 5.2 and 5.3 we prove Theorems 2.15, 2.17 and 2.20 respectively.
5.1 On the embedding of the monodromy manifold
In Sect. 2.4, see Eq. (2.19), we defined a mapping \({\mathcal {P}}\) of the monodromy manifold to \(({\mathbb {P}}^1)^4/{\mathbb {C}}^*\). In this section we show that this mapping is an embedding and determine its image, proving Theorem 2.15.
Firstly, we have the following lemma.
Lemma 5.1
The mapping \({\mathcal {P}}\), defined in Eq. (2.19), is injective.
Proof
Take any two connection matrices \(C(z),{\widetilde{C}}(z)\in {\mathfrak {C}}(\kappa ,t_0)\) and suppose that their respective coordinate values \(\rho \) and \({\widetilde{\rho }}\) are identical up to scaling, i.e. \({\widetilde{\rho }}=c\rho \), for some \(c\in {\mathbb {C}}^*\). Then the matrix function
is analytic on \({\mathbb {C}}^*\). But D(z) satisfies
and, as \(\kappa _0^2\notin q^{\mathbb {Z}}\), it follows from the general theory of q-theta functions, see e.g. Lemma 4.1, that \(D(z)\equiv D\) must be a constant diagonal matrix. Therefore, \([{\widetilde{C}}(z)]\) and [C(z)] represent the same point on the monodromy manifold. The claim follows. \(\square \)
To determine the image of the monodromy manifold under \({\mathcal {P}}\), it is convenient to consider a related embedding into \(({\mathbb {P}}^1)^4\) of a finer quotient of the space \({\mathfrak {C}}(\kappa ,t_0)\), given in the following definition.
Definition 5.2
We define \(M(\kappa ,t_0)\) to be the space of connection matrices \({\mathfrak {C}}(\kappa ,t_0)\) quotiented by arbitrary left-multiplication by invertible diagonal matrices. We denote the equivalence class of \(C(z)\in {\mathfrak {C}}(\kappa ,t_0)\) in \(M(\kappa ,t_0)\) by \(\llbracket C(z)\rrbracket \) and denote by
the quotient mapping of \(M(\kappa ,t_0)\) onto the monodromy manifold.
Note that the coordinates \(\rho =(\rho _1,\rho _2,\rho _3,\rho _4)\) introduced in Sect. 2.4, i.e.
are invariant under left-multiplication by diagonal matrices and are thus well defined on equivalence classes in \(M(\kappa ,t_0)\). We thus obtain a mapping
This mapping is an embedding, by the same argument as given in the proof of Lemma 5.1, with c set equal to 1.
Let \(\iota _{\mathbb {P}}\) denote the quotient mapping
The proof of Theorem 2.15 revolves around the diagram
which is commutative, because right multiplication by a diagonal matrix translates to scalar multiplication of \(\rho \) as shown in Eq. (2.18). We first determine the image of \(M(\kappa ,t_0)\) under P, following the technique developed in our previous paper [23], and then obtain Theorem 2.15 by projecting this image into \(({\mathbb {P}}^1)^4/{\mathbb {C}}^*\) via \(\iota _{\mathbb {P}}\).
To describe the image of \(M(\kappa ,t_0)\) under P, we make the following definition.
Definition 5.3
Recall the definition of the quadratic polynomial \(T(\rho ;\kappa ,t_0)\) as well as its homogeneous form \(T_{hom}\) in Definition 2.14. Using homogeneous coordinates \(\rho _k=[\rho _k^x: \rho _k^y]\in {\mathbb {P}}^1\), \(1\le k\le 4\), the equation
defines a threefold in \(({\mathbb {P}}^1)^4\), which we denote by
Regarding the image of \(M(\kappa ,t_0)\) under P, we have the following result.
Proposition 5.4
Denote by \({\widehat{\kappa }}\) the tuple of complex parameters \(\kappa \) after replacing \(\kappa _0\mapsto 1\). The image of \(M(\kappa ,t_0)\) under the mapping P, defined in Eq. (5.1), is given by the threefold \(S(\kappa ,t_0)\) minus the codimension one subspace
Denoting by \(S^*(\kappa ,t_0)\) the space obtained by removing this subspace from \(S(\kappa ,t_0)\), the mapping
is a bijection.
Proof
Let us take a connection matrix \(C(z)\in {\mathfrak {C}}(\kappa ,t_0)\). It will be convenient to work with the following uniform notation,
For any \(1\le i,j\le 2\), the matrix-entry \(C_{ij}(z)\) is an element of the two-dimensional vector space
see Eq. (4.1), and we know that
for some \(c\in {\mathbb {C}}^*\).
For each \(1\le k\le 4\), the equation \(\pi (C(x_k))=\rho _k\) translates to
where we used homogeneous coordinates \(\rho _k=[\rho _k^x:\rho _k^y]\).
We proceed in studying Eq. (5.7) by choosing explicit bases of the vector spaces \(V_{ij}\), \(1\le i,j\le 2\). To this end, we introduce the following eight q-theta functions,
For any \(1\le i,j\le 2\), the collection \(\{u_1^{ij}(z),u_2^{ij}(z)\}\) forms a basis of \(V_{ij}\). We may thus write
for some coefficients \(\alpha _{1}^{ij},\alpha _{2}^{ij}\in {\mathbb {C}}\).
Equation (5.7) now translates to eight equations among the coefficients in (5.8), which we group into the following two homogeneous systems,
and
As the determinant of C(z) cannot be identically zero, we know that both vectors on the left-hand side of Eqs. (5.9) and (5.10) are nonzero. This in turn implies that the determinants of the \(4\times 4\) matrices on the left-hand side are zero. By means of a lengthy calculation, one can check that both determinants coincide, up to some nonzero scalar multipliers, with the equation
where \(T_{hom}\) is defined in Definition 2.14. We refer the interested reader to our previous work [23, Appendix B], where an analogous computation is given.
It follows that P embeds \(M(\kappa ,t_0)\) into the threefold \(S(\kappa ,t_0)\).
We proceed to determine those coordinate-values in \(S(\kappa ,t_0)\) which cannot be realised by any connection matrix \(C(z)\in {\mathfrak {C}}(\kappa ,t_0)\).
Take any \(\rho \in S(\kappa ,t_0)\); then we know that both homogeneous equations (5.9) and (5.10) have non-trivial solutions. Let us take a solution of each respectively,
and let C(z) denote the corresponding matrix function via Eq. (5.8).
Then we know that C(z) is analytic on \({\mathbb {C}}^*\), it satisfies
and \(|C(x_k)|=0\) for \(1\le k \le 4\). Furthermore, by construction,
There are two options: either Eq. (5.6) holds for some \(c\in {\mathbb {C}}^*\), which means that \(C(z)\in {\mathfrak {C}}(\kappa ,t_0)\) and thus \(\rho \) lies inside the range of P; or the determinant of C(z) is identically zero,
In the latter case, \(\rho \) does not lie inside the range of P. To show this, suppose on the contrary that there is a \({\widetilde{C}}(z)\in {\mathfrak {C}}(\kappa ,t_0)\) with \(\pi ({\widetilde{C}}(x_k))=\rho _k\) for \(1\le k\le 4\). Then, by the same argument as in the proof of Lemma 5.1, we have
for some diagonal matrix D. However, as the determinant of C(z) is identically zero, we must have \(|D|=0\). Consequently, Eq. (5.15) contradicts equations (5.13). It follows that, in the case when the determinant of C(z) is identically zero, \(\rho \) indeed does not lie in the range of P.
Therefore, to prove the proposition, it remains to be shown that the determinant of the matrix C(z), constructed above, is identically equal to zero if and only if the coordinate-values \(\rho \) lie in \(X=X(\kappa ,t_0)\), and that this space X is a codimension one subspace of \(S(\kappa ,t_0)\).
To this end, let us note that Eqs. (5.13) and (5.14) imply that either
(i) \(C_{11}(z)\equiv 0\) and \(C_{21}(z)\equiv 0\),
(ii) \(C_{12}(z)\equiv 0\) and \(C_{22}(z)\equiv 0\), or
(iii) \(C_{11}(z)C_{22}(z)=C_{12}(z)C_{21}(z)\not \equiv 0\).
Case (i) corresponds, via Eqs. (5.9) and (5.10), to the four lines
Indeed, \(C_{11}(z)\equiv 0\) implies that the coefficients \(\alpha _{1}^{11},\alpha _2^{11}\) in Eq. (5.9) are zero. A non-trivial solution of (5.9) with these constraints exists if and only if the coordinate-values \(\rho \) lie on one of the above four lines.
Similarly, case (ii) corresponds to the four lines
Note that the eight lines, defined in (5.16) and (5.17), indeed lie inside X.
Finally, in case (iii), C(z) must take the form
with
for some \(\tau \in {\mathbb {C}}^*\) and nonzero constant multipliers satisfying \(c_{11}c_{22}=c_{12}c_{21}\). The corresponding \(\rho \)-coordinates of this matrix are given by
Consequently, for any choice of \(c,\tau \in {\mathbb {C}}^*\), Eq. (5.18) defines a point on the threefold \(S(\kappa ,t_0)\). We now make the important observation that formulae (5.18) are \(\kappa _0\)-independent. That is, Eq. (5.18) defines a point on \(S(\lambda _0,\kappa _t,\kappa _1,\kappa _\infty ,t_0)\), for any value of \(\lambda _0\). Thus these points lie in the subspace X.
To prove the proposition, it suffices to show that, conversely, any point in X lies either on one of the eight lines (5.16) and (5.17), or is given by (5.18) for a choice of \(c,\tau \in {\mathbb {C}}^*\). To this end, let us take a point \(\rho \in X\) which is not on one of the eight lines, and construct a corresponding matrix C(z) via Eqs. (5.9) and (5.10), see Eq. (5.11). Then C(z) is analytic on \({\mathbb {C}}^*\) and satisfies (5.12) and (5.13).
As \(\rho \in X\subseteq S(1,\kappa _t,\kappa _1,\kappa _\infty ,t_0)\), we can similarly construct a matrix \({\widetilde{C}}(z)\) via Eqs. (5.9) and (5.10), which satisfies
This matrix function is also analytic on \({\mathbb {C}}^*\), and satisfies (5.13).
Now suppose, for the sake of obtaining a contradiction, that \(\rho \) is not given by (5.18) for some \(c,\tau \in {\mathbb {C}}^*\), so that \(|C(z)|\not \equiv 0\). Consider the quotient \(D(z)={\widetilde{C}}(z)C(z)^{-1}\). As, by construction, C(z) and \({\widetilde{C}}(z)\) have the same \(\rho \)-coordinate values, it follows that D(z) is an analytic function on \({\mathbb {C}}^*\). However, D(z) satisfies the q-difference equation
and therefore, by Lemma 4.1, \(D(z)\equiv 0\) and consequently \({\widetilde{C}}(z)\equiv 0\), which contradicts the fact that \({\widetilde{C}}(z)\) satisfies (5.13).
We conclude that the subspace X is explicitly parametrised by
where \(\phi \) is the function defined in (5.18) and the closure is taken in \(({\mathbb {P}}^1)^4\). Thus X is a codimension one closed subspace of \(S(\kappa ,t_0)\). Furthermore, we have shown that X consists precisely of the points in the threefold \(S(\kappa ,t_0)\) that cannot be realised as coordinate-values \(\rho \) of any connection matrix \(C(z)\in {\mathfrak {C}}(\kappa ,t_0)\). Thus the image of \(M(\kappa ,t_0)\) under the embedding P is given by \(S(\kappa ,t_0)\setminus X\) and the proposition follows. \(\square \)
Proof of Theorem 2.15
Recall from Definition 5.2 that elements of \(M(\kappa ,t_0)\) are equivalence classes of connection matrices under left multiplication by diagonal matrices, while elements of \({\mathcal {M}}(\kappa ,t_0)\) are equivalence classes under both left and right multiplication by diagonal matrices. The desired bijection has already been proved for \(M(\kappa ,t_0)\) in Proposition 5.4, so the theorem will follow by passing to the appropriate quotients: from \(M(\kappa ,t_0)\) to \({\mathcal {M}}(\kappa ,t_0)\), and correspondingly from \(S^*(\kappa ,t_0)\) to \({\mathcal {S}}^*(\kappa ,t_0)\). Recall that the former quotient mapping is denoted by \(\iota _M\) in Definition 5.2; the latter is denoted by \(\iota _{{\mathbb {P}}}\), defined in Eq. (5.2).
Now, consider the commutative diagram (5.3). By Proposition 5.4, the image of P is given by \(S^*(\kappa ,t_0)\). Therefore, the image of the composition \(\iota _{{\mathbb {P}}}\circ P\) is given by \({\mathcal {S}}^*(\kappa ,t_0)\). As \(\iota _M\) is surjective, it follows from the commutativity of diagram (5.3) that the image of \({\mathcal {P}}\) is given by \({\mathcal {S}}^*(\kappa ,t_0)\).
In Lemma 5.1, it was shown that \({\mathcal {P}}\) is injective and it thus follows that \({\mathcal {P}}\) is a bijection, which proves the theorem. \(\square \)
Proof of Remark 2.16
We note that, by Eq. (5.19), we have the following explicit parametrisation of the curve \({\mathcal {X}}={\mathcal {X}}(\kappa ,t_0)\),
where the closure is taken in \(({\mathbb {P}}^1)^4/{\mathbb {C}}^*\) and \(x_k,1\le k\le 4\), are as defined in Eq. (2.17). Note that this parametrisation is \(\kappa _0\)-independent, which implies
By the definition of \({\mathcal {X}}\), Eq. (2.22), the right-hand side is also a subset of \({\mathcal {X}}\) and they are therefore equal, yielding the desired result, Eq. (2.24). \(\square \)
5.2 Smoothness of the monodromy manifold
In this subsection, we study the smoothness of the monodromy manifold \({\mathcal {M}}(\kappa ,t_0)\) and prove Theorem 2.17.
The monodromy manifold does not naturally come with a topology. However, due to Theorem 2.15 and Proposition 5.4, we have the following refined version of the commutative diagram (5.3),
where both P and \({\mathcal {P}}\) are bijective, and \(\iota _{S^*}\) denotes the quotient mapping \(\iota _{{\mathbb {P}}}\) restricted to \(S^*(\kappa ,t_0)\). The monodromy manifold inherits a topology from \({\mathcal {S}}^*(\kappa ,t_0)\) via \({\mathcal {P}}\). Similarly, \(M(\kappa ,t_0)\) inherits a topology from the threefold \(S^*(\kappa ,t_0)\).
To prove Theorem 2.17, we first study the smoothness of the space \(S^*(\kappa ,t_0)\). We then deduce corresponding results for the surface \({\mathcal {S}}^*(\kappa ,t_0)\), by taking the quotient with respect to scalar multiplication. Finally, we translate the results for \({\mathcal {S}}^*(\kappa ,t_0)\) to the monodromy manifold.
The following proposition describes the singular set of the space \(S^*(\kappa ,t_0)\) and shows that it is empty if and only if the non-splitting conditions hold.
Proposition 5.5
The space \(S^*(\kappa ,t_0)\) is a complex 3-manifold with singularities precisely at the points of the finite set
where
Furthermore, all these singularities are ordinary double-point singularities.
In particular, the following statements are equivalent.
(i) The space \(S^*(\kappa ,t_0)\) is smooth.
(ii) The set \(S^*_{sing}\) is empty.
(iii) The non-splitting conditions (1.5) hold true.
Proof
Recall that the space \(S^*(\kappa ,t_0)\) is defined as \(S(\kappa ,t_0)\setminus X(\kappa ,t_0)\), where \(S(\kappa ,t_0)\) is the zero locus of the polynomial \(T(\rho ;\kappa ,t_0)\) in \(({\mathbb {P}}^1)^4\) and \(X(\kappa ,t_0)\) denotes a subspace of \(S(\kappa ,t_0)\), defined in Eq. (5.4). From here on, we will often suppress the explicit parameter dependence on \((\kappa ,t_0)\) of \(T(\rho ),S,X\) and \(S^*=S{\setminus } X\).
Firstly, as X is, by definition, the zero locus of two polynomials, it is closed in S. Hence, \(S^*\) is open in S. To prove the first part of the proposition, we study whether the gradient of \(T(\rho )\) vanishes anywhere on the open subset \(S^*\) of S.
We start by considering whether \(S^*\) has any singularities in its affine part \(S^*\cap {\mathbb {C}}^4\). The zero locus of the gradient of \(T(\rho _1,\rho _2,\rho _3,\rho _4)\) is characterised by the linear equation
where H is the Hessian matrix of T, i.e.
We proceed to show that the determinant of H is nonzero. This implies that Eq. (5.25) has only one solution \({\underline{0}}:=(0,0,0,0)\in X\), which does not lie in \(S^*\). In particular, \(S^*\) has no singularities in its affine part.
In fact, we will prove that the determinant of H is given explicitly by
so that \(|H|\ne 0\), due to the non-resonance conditions (1.4).
To this end, we first note that |H| depends analytically on each of the parameters \(\kappa _j\in {\mathbb {C}}^*\), \(j=0,t,1,\infty \), and \(t_0\in {\mathbb {C}}^*\). We begin by studying the dependence of the determinant on \(\kappa _0\) and denote
Since each of the entries of H satisfies the q-difference equation
\(1\le i<j\le 4\), we have
It follows from Lemma 4.1 that h has precisely eight zeros, counting multiplicity, in \(\{\kappa _0\in {\mathbb {C}}^*\}\), modulo \(q^{\mathbb {Z}}\). We further note the following helpful symmetries,
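The zero count invoked here is an instance of the following standard fact (a sketch; cf. Lemma 4.1): if \(h\) is holomorphic on \({\mathbb {C}}^*\) and satisfies \(h(qx)=C x^{-n}h(x)\) for some \(C\in {\mathbb {C}}^*\) and \(n\in {\mathbb {Z}}_{>0}\), then \(h\) has exactly \(n\) zeros, counting multiplicity, modulo \(q^{{\mathbb {Z}}}\).

```latex
% Sketch: apply the argument principle on the fundamental annulus
% A = { |q| < |x| <= 1 } (assuming no zeros of h on the boundary):
\[
  \#\{\text{zeros of } h \text{ in } A\}
  = \frac{1}{2\pi i}\oint_{|x|=1}\frac{h'(x)}{h(x)}\,\mathrm{d}x
  - \frac{1}{2\pi i}\oint_{|x|=|q|}\frac{h'(x)}{h(x)}\,\mathrm{d}x
  = n,
\]
% since logarithmic differentiation of h(qx) = C x^{-n} h(x) gives
%   q h'(qx)/h(qx) = h'(x)/h(x) - n/x,
% so the two boundary integrals differ by exactly n.
```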
A direct calculation yields that h, evaluated at \(\kappa _0=1\), formally factorises as
The factor with \(\epsilon _1=\epsilon _2=-1\) vanishes identically by the addition law for theta functions, hence \(h(1)=0\). It furthermore follows from symmetries (5.29) that \(h'(1)=0\), so that \(\kappa _0=1\) is at least a double zero of h.
An analogous computation shows that \(\kappa _0=-1\) is at least a double zero of h.
Similarly, it follows that \(\kappa _0=q^{\frac{1}{2}}\) is a zero of h. To show that it is at least a double zero, we take the derivative of Eq. (5.28),
By evaluating this identity, and the second equation in (5.29), at \(\kappa _0=q^{-\frac{1}{2}}\), it follows that \(h'(q^{\frac{1}{2}})=0\) so that \(\kappa _0=q^{\frac{1}{2}}\) is at least a double zero of h. The same statement follows analogously for \(\kappa _0=-q^{\frac{1}{2}}\).
In conclusion, we have found four zeros of h, \(\kappa _0=\pm 1,\pm q^{\frac{1}{2}}\), each of multiplicity at least two. But h is a degree 8 theta function. It follows from this, and Eq. (5.28), that
where \({\widetilde{h}}\) is a function independent of \(\kappa _0\).
By following the same procedure with respect to the variables \(\kappa _t,\kappa _1,\kappa _\infty \), we obtain
for some constant c which may only depend on \(t_0\) and q.
At this point, one simply evaluates both sides at \(\kappa _0=\kappa _t=\kappa _1=\kappa _\infty =i\), to obtain \(c=1\), which yields Eq. (5.27).
We now return to the proof of the proposition. We have already established that \(S^*\) has no singularities in its affine part. It remains to study whether \(S^*\) has singularities with one or more of their coordinates equal to \(\infty \). Note that we only have to check the cases where one or two of the coordinates are equal to \(\infty \), as points with more than two coordinates equal to \(\infty \) lie in X and thus not in \(S^*\).
Let us start by considering whether there are any singularities in
To this end, we evaluate the gradient of
at \(\rho _4^y=0\), yielding
For this gradient to vanish, it is required that \(T_{14}=T_{24}=T_{34}=0\), which cannot be realised without violating one of the non-resonance conditions (1.4). Therefore, \(S^*\) has no singularities with \(\rho _4=\infty \) and the remaining coordinates finite. Applying the same argument in the three other cases, it follows that the manifold \(S^*\) has no singularities with precisely one of their coordinates equal to \(\infty \).
Next, we consider the existence of singularities on \(S^*\) with two of their coordinates infinite. Let us for example consider \(\rho _3=\rho _4=\infty \) with \(\rho _1\) and \(\rho _2\) finite. Setting \(\rho _3^y=\rho _4^y=0\) in
reduces it to
Therefore,
In turn, \(T_{34}=0\) if and only if \(\kappa _0^{+1}\kappa _\infty t_0\in q^{\mathbb {Z}}\) or \(\kappa _0^{-1}\kappa _\infty t_0\in q^{\mathbb {Z}}\). Thus \(T_{34}\ne 0\) when the non-splitting conditions (1.5) hold true.
More generally, if the non-splitting conditions (1.5) hold true, then all of the coefficients \(T_{ij}\), \(1\le i<j\le 4\), are nonzero and consequently there are no points on \(S^*\) with two coordinates equal to \(\infty \). Thus we can conclude that \(S^*\) is smooth when the non-splitting conditions hold true.
Returning to the example above, i.e. \(\rho _3=\rho _4=\infty \), under the assumption that \(T_{34}=0\), evaluation of the gradient of
at \(\rho _3^y=\rho _4^y=0\), yields
which vanishes at \(\rho _1^x=\rho _2^x=0\), and only at this point, as
where H is the Hessian of T, defined in Eq. (5.26).
The determinant of the Hessian of F at the point \((\rho _1^x,\rho _2^x,\rho _3^y,\rho _4^y)={\underline{0}}\) equals |H|, which is nonzero, and thus this point is a non-degenerate saddle point of F. In particular, \(\{F=0\}\) has an ordinary double point singularity at \({\underline{0}}\), by the complex Morse lemma. Therefore, the manifold \(S^*\) has an ordinary double point singularity at \(\rho =(0,0,\infty ,\infty )\), when \(\kappa _0^{+1}\kappa _\infty t_0\in q^{\mathbb {Z}}\) or \(\kappa _0^{-1}\kappa _\infty t_0\in q^{\mathbb {Z}}\).
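The complex (holomorphic) Morse lemma invoked here states, in its standard form, that near a non-degenerate critical point \(p\) of a holomorphic function \(F\) with \(F(p)=0\), there exist local holomorphic coordinates \(\zeta _1,\dots ,\zeta _n\) centred at \(p\) such that

```latex
\[
  F \;=\; \zeta_1^{\,2} + \zeta_2^{\,2} + \cdots + \zeta_n^{\,2},
\]
% hence the zero locus {F = 0} has an ordinary double point (node) at p.
```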
More generally, if some of the non-splitting conditions (1.5) are violated, then the intersection \(S_{sing}^*\) of \(\Theta \) and \(S^*\) is non-empty; at each point of \(S_{sing}^*\), \(S^*\) has an ordinary double point singularity, and it is smooth elsewhere. Otherwise, \(S_{sing}^*\) is empty and in that case we have already shown that \(S^*\) has no singularities. This completes the proof of the proposition. \(\square \)
We now proceed to prove Theorem 2.17 by using Proposition 5.5.
Proof of Theorem 2.17
The first part of the proof is to show that the smoothness properties of the 3-manifold \(S^*(\kappa ,t_0)\), established in Proposition 5.5, are preserved by the quotient map to \({\mathcal {S}}^*(\kappa ,t_0)\). The second step will be to translate these results to the monodromy manifold \({\mathcal {M}}(\kappa ,t_0)\).
Recall that \(S^*(\kappa ,t_0)\) is an open subset of the zero locus of the polynomial \(T(\rho ;\kappa ,t_0)\), given in Definition 2.14. Due to Proposition 5.5, it can be singular only at points in the finite set \(\Theta \), given in Eq. (5.23). Recall also that \(S^*_{sing}\) refers to the subset of singular points lying on the 3-manifold \(S^*(\kappa ,t_0)\). Consider the smooth complex 3-manifold
We denote the image of \(\Theta \) under the quotient map \(\iota _{S^*}\) by \({\widehat{\Theta }}\), so that the image of \({\widetilde{S}}^*(\kappa ,t_0)\) under \(\iota _{S^*}\) is given by
As (non-zero) scalar multiplication acts smoothly on \({\widetilde{S}}^*(\kappa ,t_0)\), and no element of \({\widetilde{S}}^*(\kappa ,t_0)\) is invariant under this operation, it follows that \(\widetilde{{\mathcal {S}}}^*(\kappa ,t_0)\) is a smooth complex surface.
Now, consider a point \(\rho _0\in S^*_{sing}\). Since this point is invariant under the smooth action \(\rho \mapsto c\,\rho \), \(c\in {\mathbb {C}}^*\), the quotient space \({\mathcal {S}}^*\) fails to be Hausdorff near its image \([\rho _0]\). In fact, near the images of points in \(S^*_{sing}\), the space \({\mathcal {S}}^*\) fails to be even locally \(T_1\). In particular, the smooth structure on \(\widetilde{{\mathcal {S}}}^*(\kappa ,t_0)\) cannot be extended to include the images of points in \(S^*_{sing}\).
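A toy example (ours, not the paper's) shows why a fixed point of the scaling action destroys the \(T_1\) property: in the quotient \({\mathbb {C}}/{\mathbb {C}}^*\) there are only two orbits, and the closed orbit lies in the closure of the open one.

```latex
% Orbits of the scaling action c . z = cz on C:
%   the fixed point {0}  and  the open orbit C^* = C \ {0}.
% In the quotient topology on C / C^* = { [0], [1] }:
%   any open set containing [0] pulls back to a C^*-invariant open
%   neighbourhood of 0 in C, which must contain some nonzero point and
%   hence all of C^*; so the only open set containing [0] is the whole space.
% Thus [0] cannot be separated from [1] by an open set, and the quotient
% fails to be T_1: the singleton {[1]} is not closed.
\[
  {\mathbb {C}}/{\mathbb {C}}^* = \{[0],[1]\},
  \qquad
  \overline{\{[1]\}} = \{[0],[1]\}.
\]
```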
To complete the proof of the theorem, we translate the results on \({\mathcal {S}}^*(\kappa ,t_0)\) to \({\mathcal {M}}(\kappa ,t_0)\) via the mapping \({\mathcal {P}}\). To this end, recall that \({\mathcal {P}}\) maps the finite set \({\mathcal {M}}_{sing}\) onto \({\mathcal {S}}_{sing}\).
We have shown that \({\mathcal {S}}^*(\kappa ,t_0){\setminus } {\mathcal {S}}_{sing}\) is a smooth complex surface. Hence \({\mathcal {M}}(\kappa ,t_0)\setminus {\mathcal {M}}_{sing}\) is a smooth complex surface. Furthermore, elements of \({\mathcal {M}}_{sing}\) are singularities of the monodromy manifold, since points in \({\mathcal {S}}_{sing}\) are singularities of \({\mathcal {S}}^*(\kappa ,t_0)\).
Finally, we note that \({\mathcal {M}}_{sing}\) is non-empty if and only if \({\mathcal {S}}_{sing}\) is non-empty, and the latter holds true if and only if some of the non-splitting conditions are violated, by the equivalence in Proposition 5.5. The theorem follows. \(\square \)
5.3 The monodromy manifold as an algebraic surface
In this section, we prove Theorem 2.20, which allows us to identify the monodromy manifold with an affine algebraic surface embedded in \({\mathbb {C}}^6\). Furthermore, we describe how the monodromy manifold can also be embedded in \(({\mathbb {P}}^1)^3\).
Proof of Theorem 2.20
The mapping \(\Phi _{{\mathcal {M}}}\) is the composition of two maps: \(\mathcal P: {\mathcal {M}}\rightarrow {\mathcal {S}}^*\) and \(\Phi :{\mathcal {S}}^*\rightarrow \mathcal F\). The mapping \({\mathcal {P}}\) is a (topological) isomorphism by Theorem 2.15. Hence it remains only to show that the mapping \(\Phi \) is an isomorphism. To prove this, we construct a continuous inverse \(\Psi \) of \(\Phi \).
We start by recalling that \({\mathcal {S}}^*\), defined in Eq. (2.23), is locally described by coordinates \([\rho ]\) in the ambient space \(({\mathbb {P}}^1)^4/{\mathbb {C}}^*\). Similarly, \({\mathcal {F}}\) is described by the coordinates \(\eta _{ij}\), \(1\le i<j \le 4\), in \({\mathbb {C}}^6\).
The mapping \(\Phi \) is a continuous mapping from \({\mathcal {S}}^*\) to \({\mathcal {F}}\), described by Eq. (2.25) with respect to the above coordinates. In particular, note that, due to Eq. (2.25), for any labeling \(\{i,j,k,l\}=\{1,2,3,4\}\), we have
This means that \(\Phi \) maps the open subdomain \({\mathcal {S}}_0\subseteq {\mathcal {S}}^*\), given by
into the subspace
of the co-domain.
We proceed by defining an inverse of \(\Phi \) on this subdomain and co-domain, and subsequently extending this inverse to one on the full domain.
The relevant mapping on \({\mathcal {F}}_0\) is the following,
which we now show to be an inverse of \(\Phi |_{{\mathcal {S}}_0}\). By Eqs. (2.26a), (2.26c) and (2.26d), the image of \(\Psi |_{{\mathcal {F}}_0}\) is contained in \({\mathcal {S}}\). Furthermore, due to (2.26b), no point in the image lies in \({\mathcal {X}}\). It thus follows that the image of \(\Psi |_{{\mathcal {F}}_0}\) is contained in \({\mathcal {S}}^*\). Moreover, since none of the \(\eta \)-coordinates vanishes on \({\mathcal {F}}_0\) by definition, \(\Psi |_{{\mathcal {F}}_0}\) maps \({\mathcal {F}}_0\) into \({\mathcal {S}}_0\). Finally, note that, for any point \(\rho \in {\mathcal {S}}_0\),
where, in the second equality, we used Eq. (2.25).
Similarly, it can be seen that \(\Phi |_{{\mathcal {S}}_0}\circ \Psi |_{{\mathcal {F}}_0}\) is the identity map on \({\mathcal {F}}_0\). It follows that \(\Psi |_{{\mathcal {F}}_0}\) is a (continuous) inverse of \(\Phi |_{{\mathcal {S}}_0}\).
The set \({\mathcal {S}}_0\) is an open dense subset of the domain \({\mathcal {S}}^*\) and, similarly, \({\mathcal {F}}_0\) is an open dense subset of the co-domain \({\mathcal {F}}\). It remains to deal with the special cases where one or more of the \(\rho _k\), \(1\le k\le 4\), are zero or infinite and, equivalently, one or more of the \(\eta _{ij}\), \(1\le i<j\le 4\), are zero.
We handle each of these cases separately. The cases are described by
for \(1\le i,j\le 4\) with \(i\ne j\). Note that \({\mathcal {S}}_{i,j}^{0,\infty }\) provides the boundaries of \({\mathcal {S}}_i^0\) and \({\mathcal {S}}_j^\infty \). Since no point on \({\mathcal {S}}_0\) can have two or more components all zero or all infinite, the sets defined in Eq. (5.33) glue together to provide all the boundaries or limit sets of \({\mathcal {S}}_0\) within \({\mathcal {S}}^*\).
We now express the surface \({\mathcal {S}}^*\) as a disjoint union of all of these cases with \({\mathcal {S}}_0\), that is,
where the last line indicates disjoint union of all \({\mathcal {S}}_{i,j}^{0,\infty }\), \(1\le i,j\le 4\), with \(i\ne j\).
We correspondingly decompose the codomain \({\mathcal {F}}\) into disjoint components. Motivated by Eq. (5.32), we define these components by
Equations (2.26) imply that any element \(\eta \) of \({\mathcal {F}}\) has either zero, three or four components equal to zero, so the components in (5.34) indeed cover all of \({\mathcal {F}}\setminus {\mathcal {F}}_0\).
Then, inspired by (5.32), we correspondingly decompose \({\mathcal {F}}\) as a disjoint union,
Due to (5.32), \(\Phi \) maps each component in the decomposition of \({\mathcal {S}}^*\) into the corresponding component in the decomposition of \({\mathcal {F}}\). We extend \(\Psi \) to a global inverse of \(\Phi \) on \({\mathcal {F}}\) by defining it locally on each of the components in the decomposition of \({\mathcal {F}}\). The arguments for the three types of components are similar, so we illustrate the construction with one example of each type below.
For example, for
we set
which defines an inverse of \(\Phi |_{{\mathcal {S}}_1^0}\). Similarly, for
we define
which is an inverse of \(\Phi |_{{\mathcal {S}}_1^\infty }\). For the third and final example
we take
which is an inverse of \(\Phi |_{{\mathcal {S}}_{1,2}^{0,\infty }}\).
This extends \(\Psi \) to a global inverse of \(\Phi \) on \({\mathcal {F}}\). The map \(\Psi \) is continuous on each of the separate components, and it is straightforward to check that its continuations to common boundary points of different components agree with one another. \(\square \)
We finish this section by describing an embedding of the monodromy manifold into \(({\mathbb {P}}^1)^3\). We assume that the non-splitting conditions (1.5) hold true. In particular, all the coefficients of the polynomial \(T(\rho ;\kappa ,t_0)\) are nonzero and, therefore, there are no points \(\rho \in S^*(\kappa ,t_0)\) with two or more components all zero or all infinite. Thus,
form six well-defined coordinates on the surface \({\mathcal {S}}^*(\kappa ,t_0)\) and thus also on the monodromy manifold \({\mathcal {M}}(\kappa ,t_0)\).
Ohyama et al. [27] study the \(q\text {P}_{\text {VI}}\) monodromy manifold using these coordinates (see Note 3 below). Theorem 2.15 yields explicit algebraic relations among them. For example, \(\rho _{12},\rho _{23}\) and \(\rho _{34}\) are related by
Analogously to the proof of Theorem 2.20, we can show that these three coordinates yield an embedding of the monodromy manifold into \(({\mathbb {P}}^1)^3\),
with range given by the surface (5.35) minus a curve. This curve is defined by the intersection of (5.35) as \(\kappa _0\) varies over \({\mathbb {C}}^*\).
Remark 5.6
Assuming the non-splitting conditions (1.5), the six coordinates \(\rho _{ij}\), \(1\le i<j\le 4\), are analytic rational functions from \({\mathcal {F}}(\kappa ,t_0)\) to \(\mathbb{C}\mathbb{P}^1\), which together embed the surface into \((\mathbb{C}\mathbb{P}^1)^6\). The same statements hold true for these coordinates, as functions on the monodromy manifold \({\mathcal {M}}(\kappa ,t_0)\), with respect to the structure of a complex algebraic variety defined in Ohyama et al. [27]. It follows that this structure is compatible with the one induced by Theorem 2.20.
6 Conclusion
In this paper, we studied the \(q\text {P}_{\text {VI}}\) equation through its associated linear problem. Assuming non-resonant parameter conditions, we defined the corresponding Riemann–Hilbert problem, which captures the general solution of \(q\text {P}_{\text {VI}}\). This problem was shown to be solvable for irreducible monodromy, leading to a one-to-one correspondence between solutions of \(q\text {P}_{\text {VI}}\) and points on the corresponding monodromy manifold, when the non-splitting conditions are satisfied.
In turn, we constructed an explicit embedding of the monodromy manifold into \((\mathbb{C}\mathbb{P}^1)^4/{\mathbb {C}}^*\), whose image is described by the zero locus of a single quadratic polynomial, minus a curve. This allowed us to show that the monodromy manifold is a smooth complex surface, when the non-splitting conditions hold true. We further proved that it can be identified with an affine algebraic surface, under the same assumptions. This surface can be described as the intersection of two quadrics in \({\mathbb {C}}^4\) and its projective completion is thus a Segre surface.
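For orientation, a hedged illustration (the specific quadrics below are a standard normal form, not the ones arising from \(q\text {P}_{\text {VI}}\)): a smooth intersection of two quadrics in \({\mathbb {P}}^4\) is a quartic del Pezzo surface, the classical Segre surface. A generic pencil of quadrics can be simultaneously diagonalised, giving

```latex
% A generic Segre surface in P^4 with homogeneous coordinates [x_0 : ... : x_4]:
% the base locus of a generic pencil of quadrics, written in diagonal form.
% For pairwise distinct constants lambda_0, ..., lambda_4, the intersection
% below is a smooth surface of degree 4 in P^4; this is the normal form of a
% generic Segre surface.
\[
  S \;=\; \Bigl\{\, [x_0:\dots :x_4]\in {\mathbb {P}}^4 \;:\;
  \sum_{k=0}^{4} x_k^2 = 0,\ \ \sum_{k=0}^{4} \lambda _k x_k^2 = 0 \,\Bigr\},
  \qquad \lambda _i \ne \lambda _j \ \ (i\ne j).
\]
```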
The results of this paper suggest a possible framework for tackling several open questions. These include, for example, the classification of algebraic or symmetric solutions of \(q\text {P}_{\text {VI}}\), the construction of (classes of) special transcendental solutions via the geometry of the monodromy manifold, and the derivation of solutions with distinctive (e.g. bounded) global asymptotic behaviours.
Notes
1. See Eqs. (3.7) for a full parametrisation of A with respect to \(\{f,g,w\}\).
2. Note that this is essentially the derivation of the forward implication of Theorem 3.1.
3. To be precise, in [27, §5.1.1] the ‘dual’ coordinates \(\Pi _{ij}:=\frac{\rho _i'}{\rho _j'}\in {\mathbb {P}}^1\) are used, where \(\rho _k'=\pi (C(x_k)^T)\) for \(1\le k\le 4\). These coordinates are bi-rationally equivalent to the \(\rho _{ij}\) coordinates and we note that \((\rho _1',\rho _2',\rho _3',\rho _4')\) lies on the threefold \(S^*(\kappa _\infty ^{-1},\kappa _t,\kappa _1,\kappa _0^{-1},t_0)\).
References
Birkhoff, G.D.: The generalized Riemann problem for linear differential equations and the allied problems for linear difference and \(q\)-difference equations. Proc. Am. Acad. Arts Sci. 49, 521–568 (1913)
Carmichael, R.D.: The general theory of linear \(q\)-difference equations. Am. J. Math. 34, 147–168 (1912)
Chekhov, L., Mazzocco, M., Rubtsov, V.: Quantised Painlevé monodromy manifolds, Sklyanin and Calabi-Yau algebras. Adv. Math. 376, 52 (2021)
Deift, P.A.: Orthogonal polynomials and random matrices: a Riemann–Hilbert approach, vol. 3. Courant Lecture Notes in Mathematics, New York University, Courant Institute of Mathematical Sciences, New York and American Mathematical Society, Providence (1999)
Dreyfus, T., Heu, V.: Degeneration from difference to differential Okamoto spaces for the sixth Painlevé equation (2020). Preprint arXiv:2005.12805v1 [math.CA]
Dubrovin, B., Kapaev, A.: A Riemann–Hilbert approach to the Heun equation. SIGMA 14, Paper No. 093, 24 pp. (2018)
Dubrovin, B.: Geometry of 2D Topological Field Theories. Springer Lecture Notes in Mathematics, vol. 1620, pp 120–348 (1995)
Fokas, A.S., Its, A.R., Kitaev, A.V.: Discrete Painlevé equations and their appearance in quantum gravity. Commun. Math. Phys. 142(2), 313–344 (1991)
Fokas, A.S., Its, A.R., Kitaev, A.V.: The isomonodromy approach to matrix models in \(2\)D quantum gravity. Commun. Math. Phys. 147(2), 395–430 (1992)
Fokas, A.S., Its, A.R., Kapaev, A.A., Novokshenov, V.Y.: Painlevé Transcendents, Mathematical Surveys and Monographs. The Riemann–Hilbert Approach, vol. 128, p. xii+553. American Mathematical Society, Providence (2006)
Fricke, R., Klein, F.: Vorlesungen über die Theorie der automorphen Funktionen I. Druck und Verlag von B.G. Teubner, Leipzig (1897)
Fuchs, R.: Sur Quelques Équations Différentielles Linéaires Du Second Ordre. Comptes Rendus de l’Académie des Sciences Paris 141, 555–558 (1905)
Gamayun, O., Iorgov, N., Lisovyy, O.: Conformal field theory of Painlevé VI. J. High Energy Phys. 10, 1–25 (2012)
Grammaticos, B., Ramani, A.: Discrete Painlevé equations: a review. In: Discrete Integrable Systems. Lecture Notes in Physics, vol. 644, pp. 245–321. Springer, Berlin (2004)
Griffiths, P., Harris, J.: Principles of Algebraic Geometry. Pure and Applied Mathematics, p. xii+813. Wiley, New York (1978)
Guzzetti, D.: A review of the sixth Painlevé equation. Constr. Approx. 41(3), 495–527 (2015)
Inaba, M.A., Iwasaki, K., Saito, M.H.: Dynamics of the sixth Painlevé equation. In: Théories asymptotiques et équations de Painlevé, Séminaires et Congrès, vol. 14, pp. 103–167. Soc. Math. France, Paris (2006)
Iwasaki, K.: A modular group action on cubic surfaces and the monodromy of the Painlevé VI equation. Proc. Japan Acad. Ser. A Math. Sci. 78(7), 131–135 (2002)
Jimbo, M.: Monodromy problem and the boundary condition for some Painlevé equations. Publ. Res. Inst. Math. Sci. 18(3), 1137–1161 (1982)
Jimbo, M., Sakai, H.: A \(q\)-analogue of the sixth Painlevé equation. Lett. Math. Phys. 38, 145–154 (1996)
Jimbo, M., Miwa, T., Ueno, K.: Monodromy preserving deformation of linear ordinary differential equations with rational coefficients. I. General theory and \(\tau \)-function. Physica D 2(2), 306–352 (1981)
Jimbo, M., Nagoya, H., Sakai, H.: CFT approach to the \(q\)-Painlevé VI equation. J. Integr. Syst. 2(1), 27 (2017)
Joshi, N., Roffelsen, P.: On the Riemann–Hilbert problem for a \(q\)-difference Painlevé equation. Commun. Math. Phys. 384(1), 549–585 (2021)
Manin, Y.I.: Sixth Painlevé equation, universal elliptic curve and mirror of P2. Am. Math. Soc. Transl. 186, 131–151 (1998)
Mano, T.: Asymptotic behaviour around a boundary point of the \(q\)-Painlevé VI equation and its connection problem. Nonlinearity 23(7), 1585–1608 (2010)
Mazzocco, M.: Rational solutions of the Painlevé VI equation, Kowalevski Workshop on mathematical methods of regular dynamics (Leeds, 2000). J. Phys. A 34(11), 2281–2294 (2001)
Ohyama, Y., Ramis, J.P., Sauloy, J.: The space of monodromy data for the Jimbo–Sakai family of \(q\)-difference equations. Ann. Fac. Sci. Toulouse Math. (6) 29(5), 1119–1250 (2020)
Ormerod, C.M., Witte, N.S., Forrester, P.J.: Connection preserving deformations and \(q\)-semi-classical orthogonal polynomials. Nonlinearity 24(9), 2405–2434 (2011)
Roffelsen, P.: On the global asymptotic analysis of a \(q\)-discrete Painlevé equation. PhD thesis, The University of Sydney. Available at https://ses.library.usyd.edu.au/handle/2123/16601 (2017)
Sakai, H.: Casorati determinant solutions for the \(q\)-difference sixth Painlevé equation. Nonlinearity 11(4), 823–833 (1998)
Sakai, H.: Rational surfaces associated with affine root systems and geometry of the Painlevé equations. Commun. Math. Phys. 220, 165–229 (2001)
Sauloy, J.: Galois theory of \(q\)-difference equations: the “analytical” approach. In: Differential Equations and the Stokes Phenomenon, pp. 277–292. World Sci. Publ., River Edge, NJ (2002)
Tod, K.P.: Self-dual Einstein metrics from the Painlevé VI equation. Phys. Lett. 190, 221–224 (1994)
Van Assche, W.: Orthogonal Polynomials and Painlevé Equations. Australian Mathematical Society Lecture Series, vol. 27. Cambridge University Press, Cambridge (2018)
van der Put, M., Saito, M.: Moduli spaces for linear differential equations and the Painlevé equations. Annales de l’Institut Fourier 59, 2611–2667 (2009)
Acknowledgements
This research was supported by Australian Research Council Discovery Project #DP210100129 and the Leverhulme Trust Visiting Professorship VP2-2018-013. The authors are grateful to Peter Forrester, Marta Mazzocco, Yousuke Ohyama and Andrea Ricolfi for stimulating discussions about topics related to the work presented in this paper.
Funding
Open Access funding enabled and organized by CAUL and its Member Institutions.
Communicated by K. Johansson.
Joshi, N., Roffelsen, P. On the Monodromy Manifold of q-Painlevé VI and Its Riemann–Hilbert Problem. Commun. Math. Phys. 404, 97–149 (2023). https://doi.org/10.1007/s00220-023-04834-2