On the Monodromy Manifold of q-Painlevé VI and Its Riemann–Hilbert Problem

We study the q-difference sixth Painlevé equation (qP VI) through its associated Riemann–Hilbert problem (RHP) and show that the RHP is always solvable for irreducible monodromy data. This enables us to identify the solution space of qP VI with a monodromy manifold for generic parameter values. We deduce this manifold explicitly and show it is a smooth affine algebraic surface when it does not contain reducible monodromy. Furthermore, we describe the RHP for reducible monodromy data and show that, when solvable, its solution is given explicitly in terms of certain orthogonal polynomials, yielding special function solutions of qP VI.


Introduction
Despite widespread knowledge of how a Riemann–Hilbert formulation allows us to describe the solutions of the Painlevé equations, the corresponding description remains incomplete for discrete Painlevé equations. In this paper, we provide such a formulation for an important equation known as the q-difference sixth Painlevé equation and show that (under certain conditions) the corresponding Riemann–Hilbert problem is solvable, the resulting monodromy mapping is bijective, and the monodromy manifold is an algebraic surface given by an explicit equation.
Assuming q ∈ C, 0 < |q| < 1, and given nonzero parameters κ = (κ_0, κ_t, κ_1, κ_∞) ∈ C^4, the system known as the q-difference sixth Painlevé equation is given by Eq. (1.1), where f, g : T → CP^1 are complex functions defined on a domain T invariant under multiplication by q, and we use the abbreviated notation f = f(t), g = g(t), f̄ = f(qt), ḡ = g(qt), for t ∈ T. We will refer to Eq. (1.1) by the abbreviation qP VI. qP VI was first derived by Jimbo and Sakai [20] as the compatibility condition of a pair of linear q-difference systems. They showed that this formulation could be interpreted as a q-difference version of isomonodromic deformation, in close parallel to the role played by the classical sixth Painlevé equation as the isomonodromic condition for a rank-two Fuchsian system with four regular singular points at 0, 1, ∞, t, where t is allowed to move in C \ {0, 1} [12,21].
The sixth Painlevé equation (P VI ) plays an important role in many settings in mathematics and physics. We mention the construction of self-dual Einstein metrics in general relativity [33], classification of 2D-topological field theories [7], mirror symmetry, and quantum cohomology [24] as noteworthy examples.
Letting q → 1 in qP VI, with κ_j = q^{k_j} for j = 0, t, 1, ∞, under the assumption that f → u and g → (u − t)/(u − 1), the system reduces to P VI, where α = (2k_∞ + 1)^2/2, β = −2k_0^2, γ = 2k_1^2, and δ is given similarly in terms of k_t. Due to its relation to P VI, the q-difference equation qP VI has drawn increasing interest in recent times. Mano [25] derived the generic leading-order asymptotics of solutions near t = 0 and t = ∞ and gave an implicit solution to the corresponding nonlinear connection problem. Jimbo et al. [22] extended Mano's asymptotic result near t = 0 to an explicit asymptotic expansion beyond all orders for the generic solution. They obtained this asymptotic representation through an interesting connection of qP VI with conformal field theory, analogous to the one for P VI established by Gamayun et al. [13].
In this paper, we study qP VI via the Jimbo-Sakai linear problem [20]. Using Birkhoff's theory [1], we define an associated Riemann-Hilbert problem (RHP), which captures the general solution of qP VI . The jump matrices of this RHP across a single closed contour form a corresponding monodromy manifold that is a focal point of this paper.
Recently, this monodromy manifold was the object of an extensive study by Ohyama et al. [27], who showed that such a manifold forms an algebraic surface. Furthermore, they conjectured, see [27,Conjecture 7.10], that the algebraic surface is smooth, under additional conditions. In this paper, we prove a stronger version of this conjecture, see Theorem 2.17 and Remark 2.18.
Consider the general class of solutions (f, g) of qP VI defined on a domain T given by a discrete q-spiral, i.e., T = q^Z t_0, for some t_0 ∈ C*. The deformation of the Jimbo–Sakai linear problem (see §3.2) yields an auxiliary equation (1.2) associated with qP VI. We refer to (f, g) as a solution of qP VI(κ, t_0) and call the triplet (f, g, w) a solution of qP aux VI(κ, t_0). Starting with an initial value of (f, g) in C* × C* and iterating in t, qP VI can become apparently singular when (f, g) takes the value of one of eight base points. Each of these can be resolved through a blow-up, so that the iteration is once again well-defined [31]. There are, however, formal solutions of Eq. (1.1) which never take a value in C* × C*. We exclude such solutions from our consideration.

Main results.
The main results of this paper are given by Theorems 2.12, 2.15, 2.17 and 2.20 in Sect. 2. Throughout the paper, it is assumed that the parameters κ and t_0 satisfy the non-resonance conditions (1.4). As in Ohyama et al. [27], the non-splitting conditions (1.5), where ε_j ∈ {±1} for j = 0, t, 1, ∞, also play an important role. The monodromy manifold contains reducible monodromy when one or more of these conditions are violated; see Lemma 2.10.
The RHP corresponding to qP VI is given by Definition 2.7. Our first main result, Theorem 2.12, shows that the RHP with irreducible monodromy is always solvable. This has important ramifications for the mapping that sends solutions of qP VI to points on the monodromy manifold, which we will refer to as the monodromy mapping. In particular, Corollary 2.13 shows that the monodromy mapping is bijective when the non-splitting conditions are satisfied.
The RHP may be solvable in some cases of reducible monodromy. In Sect. 4.2, we show that in such cases, the RHP is solved explicitly in terms of certain orthogonal polynomials yielding special function solutions of qP VI .
Our second main result, Theorem 2.15, constructs an embedding of the monodromy manifold into (CP 1 ) 4 /C * , where the quotient is taken with respect to scalar multiplication. The image of this embedding is described as the zero set of a polynomial, given explicitly in Definition 2.14, minus a curve.
This embedding allows us to study algebro-geometric properties of the monodromy manifold. Our third main result, Theorem 2.17, focuses on the singularities of the monodromy manifold and proves that it is smooth if and only if it excludes reducible monodromy, i.e., if and only if the non-splitting conditions hold true.
Finally, our fourth main result, Theorem 2.20, identifies the monodromy manifold with an explicit affine algebraic surface when the non-splitting conditions are satisfied.
We will refer to the complex projective space CP^1 as P^1 and, for positive integer k, denote the k-fold direct product P^1 × ... × P^1 by (P^1)^k. (We remind the reader that P^1 × P^1 is not the same space as P^2.)

1.3. Outline of the paper. In Sect. 2, we give the precise statements of the main results of the paper. Section 3 is devoted to the Jimbo–Sakai linear system. Here, we renormalise the linear system of [20] and describe the outcomes of Birkhoff's classical theory [1] for this system. In Sect. 4, we study the solvability of RHP I, defined in Definition 2.7, and prove Theorem 2.12. Section 5 concerns the monodromy manifold; the proofs of Theorems 2.15, 2.17 and 2.20 are given there. We conclude in Sect. 6.

Detailed Statement of Results
In order to state our main results, we recall the Jimbo–Sakai linear problem for qP VI.

2.1. The Jimbo–Sakai linear system. Suppose κ = (κ_0, κ_t, κ_1, κ_∞) ∈ C^4, all nonzero, are given and t ∈ T lies on a discrete q-spiral T = q^Z t_0. Consider the linear system (2.1), where A(z, t) is a 2 × 2 matrix polynomial with determinant given by Eq. (2.3), and assume that Eq. (2.4) holds for an H = H(t) ∈ GL_2(C). This is the Jimbo–Sakai linear problem [20], which we have scaled to remove redundant parameters (see Sect. 3.1 for details). Throughout this paper we assume that the parameters κ and t_0 satisfy the non-resonance conditions (1.4), which ensure that the linear problem is fully non-resonant (see [23, Definition 1.1]). By Carmichael [2], the linear system (2.1) has solutions Y_0(z, t) and Y_∞(z, t), respectively given by convergent series expansions around z = 0 and z = ∞, with determinants given by Eqs. (2.6a) and (2.6b). A central object of study in this paper is the connection matrix C(z, t). This matrix is single-valued in z on C* and is related to Birkhoff's connection matrix P(z, t). For our purposes, it is more convenient to work with C(z, t), rather than P(z, t), due to its single-valuedness. We will also refer to the connection matrix C(z, t) as the monodromy of the linear system (2.1). For any fixed t, C(z, t) has the following analytic characterisation in z.
(1) It is a single-valued analytic function in z ∈ C * .
(2) It satisfies a q-difference equation in z.
(3) Its determinant is given explicitly.

We correspondingly make the following definition.
Next, we consider deformations of the linear system (2.1), as t → qt, which leave the matrix function P(z, t) invariant, i.e. such that P(z, qt) = P(z, t). We call such a deformation isomonodromic. Jimbo and Sakai [20] showed that, upon introducing the coordinates (f, g, w) on A, defined in Eq. (2.7), the linear system A(z, t) is regular in t on q^M t_0 and the corresponding connection matrix is given by Eq. (2.9), for a matrix C_0(z) ∈ C(κ, t_0), unique up to left-multiplication by diagonal matrices.
Here D(t) is a diagonal matrix, which may be eliminated from Eq. (2.9) by rescaling. In Lemma 2.2, we have the freedom of rescaling the auxiliary variable w by w → w̃ = dw, d ∈ C*, which is equivalent to gauging the linear system by a constant diagonal matrix, and thus rescaling the matrix C_0(z) ∈ C(κ, t_0) accordingly. Hence, Lemma 2.2 provides us with a mapping, Eq. (2.10), which associates to any solution (f, g) of qP VI(κ, t_0) the equivalence class of C_0(z) in C(κ, t_0), quotiented by arbitrary left- and right-multiplication by invertible diagonal matrices. This warrants the following definition.

Definition 2.3.
We define M(κ, t_0) to be the space of connection matrices C(κ, t_0) quotiented by arbitrary left- and right-multiplication by invertible diagonal matrices. We refer to M(κ, t_0) as the monodromy manifold of qP VI(κ, t_0). Correspondingly, we call the mapping (2.10), which associates a point on the monodromy manifold with any solution (f, g) of qP VI(κ, t_0), the monodromy mapping.
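To illustrate the quotient in Definition 2.3: for a generic 2 × 2 matrix with nonzero entries, the ratio of the product of its diagonal entries to the product of its off-diagonal entries is unchanged under left- and right-multiplication by invertible diagonal matrices. The following is a minimal numerical sketch; the sample matrix and the particular invariant are illustrative only and are not the coordinates constructed later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# A generic 2x2 "connection matrix" with (almost surely) nonzero entries.
C = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

# Arbitrary invertible diagonal matrices acting on the left and right.
D1 = np.diag(rng.normal(size=2) + 1j * rng.normal(size=2))
D2 = np.diag(rng.normal(size=2) + 1j * rng.normal(size=2))

def invariant(M):
    # (M11 * M22) / (M12 * M21) is unchanged under M -> D1 @ M @ D2
    # for diagonal D1, D2, since each factor scales numerator and
    # denominator by the same product of diagonal entries.
    return (M[0, 0] * M[1, 1]) / (M[0, 1] * M[1, 0])

C2 = D1 @ C @ D2
assert np.isclose(invariant(C), invariant(C2))
```

Such quantities descend to well-defined functions on the double quotient, which is the mechanism behind the coordinates on M(κ, t_0) introduced in Sect. 2.3.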
Remark 2.4. The space M(κ, t_0) was first introduced and studied in Ohyama et al. [27, §4.1.1], where it is denoted F. Ohyama et al. [27] showed how this space can naturally be endowed with the structure of a complex algebraic variety, under certain genericity assumptions including the non-resonance conditions (1.4) and non-splitting conditions (1.5). Compatibly with this structure, we endow M(κ, t_0) with the structures of a complex manifold and algebraic variety in Theorems 2.17 and 2.20 respectively. The proof that these structures are compatible with those in [27] is postponed to the end of the paper, see Remark 5.6.
In Sect. 3.3, we prove the following lemma concerning injectivity of the monodromy mapping.

The main Riemann-Hilbert problem.
In this paper, we analyse the monodromy mapping through the corresponding Riemann–Hilbert problem (RHP), obtained via Birkhoff's theory [1].
To introduce this RHP, we return to the single-valued matrix functions Ψ_0(z, t) and Ψ_∞(z, t), defined in Eq. (2.5). Let us denote t_m = q^m t_0 for m ∈ Z. By Lemma 2.2, we may choose H such that a suitable normalisation holds for m ∈ M. Next, we need to choose Jordan curves γ^(m), m ∈ Z, which separate the points in the complex plane where Ψ_∞(z, t_m) and Ψ_0(z, t_m) are respectively non-invertible and singular. These points are precisely the zeros of the determinants (2.6a) and (2.6b) respectively. We thus make the following definition.

Definition 2.6. Consider a family (γ^(m))_{m ∈ Z} of positively oriented Jordan curves in C* and denote by D^(m)_+ and D^(m)_− the inside and outside of γ^(m) respectively, for m ∈ Z. We call this family of curves admissible if the corresponding separation conditions hold for m ∈ Z, where we use the notation U · V = {uv : u ∈ U, v ∈ V} for compatible sets U and V. We can always construct an admissible family of curves, and it follows that Eq. (2.12) defines a solution of the following RHP, with C(z) = C_0(z), for m ∈ M.
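The separation property required of admissible curves can be sketched numerically. Below, two q-power point sets serve as hypothetical stand-ins for the zeros of the determinants (2.6a) and (2.6b): one accumulating at the origin, the other escaping to infinity. A circle whose radius is the geometric mean of the two extreme moduli then separates them.

```python
import numpy as np

q = 0.5 * np.exp(0.3j)   # sample value with |q| < 1
t0 = 1.2 + 0.4j

# Hypothetical stand-ins for the two determinant zero sets:
inner = np.array([q**m * t0 for m in range(1, 8)])    # |inner| -> 0
outer = np.array([q**-m * t0 for m in range(1, 8)])   # |outer| -> infinity

# A circle of radius r separates the sets iff max|inner| < r < min|outer|;
# the geometric mean of the two extremes is a convenient choice.
r = np.sqrt(np.max(np.abs(inner)) * np.min(np.abs(outer)))
assert np.max(np.abs(inner)) < r < np.min(np.abs(outer))
```

In the paper, admissibility additionally requires compatibility between consecutive curves γ^(m) and γ^(m+1) under multiplication by q, which this toy separation check does not capture.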
Definition 2.7 (RHP I). Given a connection matrix C ∈ C(κ, t_0) and a family of admissible curves (γ^(m))_{m ∈ Z}, find, for m ∈ Z, a matrix function Ψ^(m)(z) which satisfies the following conditions.
The matrix function Ψ^(m)(z), defined in Eq. (2.12), is uniquely characterised as the solution of RHP I. Indeed, we have the following lemma, which we prove in Sect. 3.3.
This defines a matrix polynomial of the form (2.2), and the values of (f, g, w) may be read directly from the solution of the RHP as follows (details are given in Sect. 3.3). Let H = (h_ij) and U = (U_ij) be as in Eqs. (2.14) and (2.15); then (f, g, w) are given by formulas (2.16).

Proof. Due to Lemma 2.5, the monodromy mapping is injective. Take any monodromy in the monodromy manifold. Then, by Lemma 2.10, it must be irreducible. Theorem 2.12 thus shows that there exists a solution of qP VI with that monodromy. So the monodromy mapping is also surjective, and the corollary follows.
For reducible monodromy, solvability of RHP I is more subtle than in the irreducible case handled in Theorem 2.12. We discuss this in Sect. 4.2, where we show that the RHP with reducible monodromy can be transformed into the standard Fokas-Its-Kitaev RHP [8,9] for certain orthogonal polynomials. We further show that the corresponding solutions of qP VI can be expressed in terms of determinants containing Heine's basic hypergeometric functions. We thus see that special function solutions occur when the monodromy of the linear problem is reducible, a phenomenon well-known for the classical sixth Painlevé equation [26].
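As a generic illustration of the orthogonal-polynomial structure mentioned above (using an arbitrary discrete weight, not the paper's q-weights), monic orthogonal polynomials can be computed from Hankel systems of moments, in the spirit of Heine's determinant formula:

```python
import numpy as np

# Discrete weight on sample points (hypothetical, for illustration only).
x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
w = np.array([0.2, 0.3, 0.4, 0.3, 0.2])

moments = np.array([np.sum(w * x**k) for k in range(8)])

def monic_op(n):
    """Monic degree-n orthogonal polynomial from the moment Hankel system."""
    if n == 0:
        return np.poly1d([1.0])
    # Orthogonality to 1, x, ..., x^{n-1} gives the linear system
    # sum_j c_j m_{i+j} = -m_{i+n}, i = 0..n-1, for the lower coefficients.
    H = np.array([[moments[i + j] for j in range(n)] for i in range(n)])
    c = np.linalg.solve(H, -moments[n : 2 * n])
    return np.poly1d(np.concatenate(([1.0], c[::-1])))

p2, p3 = monic_op(2), monic_op(3)
# Orthogonality of p2 and p3 with respect to the weight.
assert abs(np.sum(w * p2(x) * p3(x))) < 1e-10
```

In the reducible-monodromy setting of Sect. 4.2, the analogous role is played by polynomials orthogonal with respect to q-weights, whose moment determinants involve Heine's basic hypergeometric functions.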

Results on the monodromy manifold.
Our second main result is the identification of the monodromy manifold with an explicit surface. To state this result, we define a set of coordinates on the monodromy manifold M(κ, t 0 ), using a construction introduced in our previous paper [23].
Firstly, we require the following notation: for any 2 × 2 matrix R of rank one, let R_1 and R_2 be respectively its first and second column; then we define π(R) ∈ P^1 as the ratio of these columns, with π(R) = 0 if and only if R_1 = (0, 0)^T and π(R) = ∞ if and only if R_2 = (0, 0)^T. Take a connection matrix C(z) ∈ C(κ, t_0) and denote the zeros of its determinant by x_1, ..., x_4. For 1 ≤ k ≤ 4, |C(z)| has a simple zero at z = x_k and thus C(x_k), while nonzero, is not invertible. We define the coordinates ρ_k = π(C(x_k)), 1 ≤ k ≤ 4. Note that (ρ_1, ρ_2, ρ_3, ρ_4) are invariant under left multiplication of C(z) by diagonal matrices. However, multiplication by diagonal matrices from the right has the effect of scaling ρ → cρ, for some c ∈ C*. Therefore, the coordinates ρ naturally lie in (P^1)^4/C* and we obtain a mapping P, which is easily seen to be an embedding (see Lemma 5.1). We proceed by giving an explicit description of the image of the monodromy manifold under P. To this end, we make the following definition.

Definition 2.14. Define the quadratic polynomial T(ρ), with coefficients T_ij given explicitly in terms of κ and t_0. Note that T is homogeneous and multilinear in the variables ρ = (ρ_1, ρ_2, ρ_3, ρ_4). Therefore, if we denote its homogeneous form as in Eq. (2.20), then, using homogeneous coordinates ρ_k = [ρ_k^x : ρ_k^y] ∈ P^1, 1 ≤ k ≤ 4, the equation T = 0 defines a surface in (P^1)^4/C*. We denote this surface by S(κ, t_0).

Our second main result is given by the following theorem, which is proven in Sect. 5. Let us denote S*(κ, t_0) := S(κ, t_0) \ X(κ, t_0); then the mapping P : M(κ, t_0) → S*(κ, t_0) is a bijection.
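The coordinate π(R) can be sketched numerically. Here we read π(R) as the proportionality factor between the two columns of a rank-one matrix, an assumption consistent with the stated conventions that π(R) = 0 iff the first column vanishes and π(R) = ∞ iff the second column vanishes; the precise displayed formula is in the paper.

```python
import numpy as np

def pi(R, tol=1e-12):
    """Column ratio of a rank-one 2x2 matrix, as a point of P^1.

    Reads pi(R) as the factor lam with R[:, 0] = lam * R[:, 1]
    (illustrative convention: pi = 0 iff column 1 vanishes,
    pi = inf iff column 2 vanishes).
    """
    c1, c2 = R[:, 0], R[:, 1]
    if np.allclose(c2, 0, atol=tol):
        return np.inf
    # Use the largest entry of c2 for a numerically stable ratio.
    i = int(np.argmax(np.abs(c2)))
    return c1[i] / c2[i]

# Rank-one example: first column is 3 times the second.
R = np.array([[3.0, 1.0], [6.0, 2.0]])
assert np.isclose(pi(R), 3.0)
assert pi(np.array([[0.0, 0.0], [1.0, 0.0]])) == np.inf
```

Left multiplication of R by a diagonal matrix rescales both columns identically, leaving π(R) fixed, which is why the ρ-coordinates descend to the left quotient.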
The curve X (κ, t 0 ) in the above theorem has a geometric interpretation, which is described in the following remark.
Remark 2.16. The curve X = X(κ, t_0) does not depend on κ_0 and can be written as an explicit intersection. Informally, one can think of points on the curve X in S(κ, t_0) as corresponding to connection matrices C(z) ∈ C(κ, t_0) whose determinant is identically zero, i.e. they satisfy properties (1) and (2) of Definition 2.1, but property (3) with c = 0. Therefore, these coordinate values do not lie in the image of P. In the proof of Theorem 2.15, we obtain an explicit parametrisation of X, see Eq. (5.20).
We note that any point [ρ] ∈ S(κ, t_0) with more than two coordinates zero, or more than two coordinates infinite, necessarily lies on the closed curve X, defined in Eq. (2.22), and is thus not a point on the surface S*(κ, t_0). However, when one of the non-splitting conditions (1.5) is violated, one of the coefficients of the polynomial T(ρ) in Definition 2.14 vanishes. In that case, there exist points [ρ] ∈ S(κ, t_0) with precisely two coordinates zero or two coordinates infinite. Such points cannot lie on the closed curve X (as this would imply that κ_0 ∈ q^Z) and are in one-to-one correspondence with reducible monodromy on the monodromy manifold.
For example, consider the two subspaces of S* obtained by setting two of the coordinates equal to zero and, respectively, two of them equal to infinity. If κ_0 = κ_∞ t_0, so that T_34 = 0, then these two subspaces correspond respectively to the equivalence classes of the collection of upper-triangular connection matrices and the equivalence classes of the collection of lower-triangular connection matrices in the monodromy manifold. Furthermore, these two subspaces intersect at the single point [(0, 0, ∞, ∞)] ∈ S*(κ, t_0), which corresponds to the equivalence class of the diagonal connection matrix in the monodromy manifold, given by setting c = 0 in either of the two formulas above.
By Theorem 2.15, the monodromy manifold inherits the topological properties of the space S*(κ, t_0) via the mapping P. Diagonal and anti-diagonal monodromy form singularities on the monodromy manifold; this is the content of our third main result, proven in Sect. 5.2.

Remark 2.18. We note that the above theorem implies the assertion in Conjecture 7.10 of Ohyama, Ramis and Sauloy [27]. This conjecture is made under the conditions (1.4), (1.5) and additional assumptions on the parameters, but our proof shows that the result holds without these additional assumptions.
In our fourth and final result, we identify the monodromy manifold with an explicit affine algebraic surface via an embedding into C^6. To construct this embedding, take 1 ≤ i < j ≤ 4 and consider the coordinate η_ij, defined in Eq. (2.25); for example, η_12 is given explicitly in homogeneous coordinates. Note that η_ij is invariant under scalar multiplication ρ → cρ, c ∈ C*. Furthermore, the denominator of η_ij does not vanish on S*(κ, t_0), as any such point [ρ] would necessarily lie on the curve X, see Eq. (2.22).
Our fourth and final main result is given by the following theorem, which is proved in Sect. 5.3.
Theorem 2.20. Let κ and t 0 be parameters satisfying the non-resonance conditions (1.4) and the non-splitting conditions (1.5). Then the mapping M , given in Definition 2.19, is an isomorphism between the monodromy manifold M(κ, t 0 ) and the affine algebraic surface F(κ, t 0 ).

Remark 2.21.
We note that the algebraic surface F(κ, t_0) is invariant under certain translations, since the coefficients in Eq. (2.26) are invariant under them.
The surface F(κ, t 0 ) can be identified with the intersection of two quadrics in C 4 . This can be seen by using Eq. (2.26a) and (2.26b) to eliminate any two of the six variables.
If the relevant determinant vanishes, then we can instead choose another pair of coordinates to eliminate. The non-resonance conditions (1.4) and non-splitting conditions (1.5) guarantee that this determinant is non-zero. Upon eliminating {η_24, η_34}, Eqs. (2.26c) and (2.26d) respectively become the two quadrics (2.27), with explicitly given coefficients. Thus, for generic parameter values, the monodromy manifold of qP VI is isomorphic to the intersection of the two quadrics defined by Eq. (2.27) in C^4. Intersections of two quadrics in P^4 are known as Segre surfaces, and it is well-known that they are isomorphic to Del Pezzo surfaces of degree four, see e.g. [15].
It is interesting to contrast this with the monodromy manifolds of the classical Painlevé equations. They are isomorphic to affine cubic surfaces [35]. In particular, their corresponding projective completions are Del Pezzo surfaces of degree three [15].
We further note that Chekhov et al. [3] conjectured explicit affine Del Pezzo surfaces of degree three as the monodromy manifolds of the q-Painlevé equations higher up in Sakai's classification scheme [31] than qP VI .
From Corollary 2.13 and Theorems 2.17 and 2.20, we obtain the following corollary; in particular, we may write the general solution of qP VI(κ, t_0) in terms of the monodromy coordinates.

Remark 2.23. By identifying the domain of the mapping (2.28) with the initial value space of qP VI at t = t_0, the mapping becomes a bijective correspondence between complex (algebraic) surfaces. One can show that this correspondence is a biholomorphism using standard arguments. Namely, one observes that the matrix functions Ψ_j(z, t_0), j = 0, ∞, defined in Eq. (2.5), can be chosen locally analytically in (f, g) as long as one stays away from the exceptional lines above the base points b_7 and b_8. The corresponding connection matrix is then locally analytic in (f, g) and, consequently, so are the η-coordinates. To prove the latter statement around points on the exceptional lines above b_7 and b_8, one simply applies the argument with t = q t_0 rather than t = t_0, recalling that the time-evolution is a biholomorphism between the initial value spaces at t = t_0 and t = q t_0. It follows that the mapping (2.28) is a bijective holomorphism and thus a biholomorphism.
Remark 2.24. By specialising the parameters, the qP VI(κ) equation collapses to its symmetric form. As both the non-resonance and non-splitting conditions (1.4) and (1.5) are generically not violated by (2.30), all aspects of our treatment of qP VI carry over to qSP VI. We further note that qSP VI is also known as qP III in the literature [14].
Remark 2.25. Regarding Painlevé VI, and its associated standard linear problem, the corresponding monodromy mapping was thoroughly studied by Inaba et al. [17]. The associated monodromy manifold can be identified with an explicit affine cubic surface, a fact which first appeared in Fricke and Klein [11] and was rediscovered by Jimbo [19] in the context of Painlevé VI. Our construction of the surface F(κ, t 0 ), in Theorem 2.20, may be considered as a q-analog of this. Iwasaki [18] studied the smoothness of the Painlevé VI monodromy manifold and associated cubic. Theorem 2.17 can be considered a q-analog of [18, Theorem 1] in the non-resonant parameter regime.

The Linear Problem
Consider the linear system with both A 0 and A 2 invertible and semi-simple. Jimbo and Sakai [20] showed that isomonodromic deformation of such a linear system, as the eigenvalues of A 0 as well as two of the zeros of the determinant of A(z) evolve via multiplication by q, defines an evolution of the coefficient matrix A(z) which is birationally equivalent to qP aux VI . In Sect. 3.1, we show that the linear system (3.1) can always be normalised to the standard form (2.1) we use in this paper. Then, in Sect. 3.2, we formulate the main results of Jimbo and Sakai [20] regarding isomonodromic deformation of the linear system (2.1) and prove Lemma 2.2.
Finally, in Sect. 3.3, we show how the linear system (2.1) can be recovered from RHP I, defined in Definition 2.7, yielding in particular Lemma 2.5.

Normalising the linear system.
In this section we normalise the linear system (3.1) to the standard form (2.1).
Recall that A_0 and A_2 are semi-simple; we denote their eigenvalues by {σ_1, σ_2} and {μ_1, μ_2} respectively, and gauge the linear system with a constant matrix so that one of them is diagonalised. We further denote the zeros of the determinant of A(z) by x_k, 1 ≤ k ≤ 4. Evaluating this determinant at z = 0 gives an identity among these quantities. By means of a scalar gauge, as well as a scaling of the independent variable, so that the linear system transforms as A(z) → s A(cz), we may normalise them further. We introduce a time variable t, satisfying t^2 = σ_1 σ_2, and four nonzero parameters κ_0, κ_t, κ_1, κ_∞, and note that the linear system (3.1) has now been normalised to the form (2.1).
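The effect of the transformation A(z) → s A(cz) on the determinant can be checked numerically: the coefficient of z^k in det A picks up a factor s^2 c^k, so the zeros x_k of det A(z) move to x_k / c. A sketch with a random quadratic matrix polynomial (illustrative, not the normalised system of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# A random 2x2 quadratic matrix polynomial A(z) = A0 + A1 z + A2 z^2.
A0, A1, A2 = (rng.normal(size=(2, 2)) for _ in range(3))

def det_coeffs(B0, B1, B2):
    """Coefficients (z^4 first) of det(B0 + B1 z + B2 z^2), via exact fit."""
    zs = np.linspace(-2.0, 2.0, 5)
    vals = [np.linalg.det(B0 + B1 * z + B2 * z**2) for z in zs]
    return np.polyfit(zs, vals, 4)

s, c = 2.5, 0.7
d1 = det_coeffs(A0, A1, A2)
# A(z) -> s A(cz) corresponds to (A0, A1, A2) -> (s A0, s c A1, s c^2 A2);
# the z^k determinant coefficient then scales by s^2 c^k, so the zeros
# of the determinant move from x_k to x_k / c.
d2 = det_coeffs(s * A0, s * c * A1, s * c**2 * A2)
assert np.allclose(d2, s**2 * d1 * c ** np.arange(4, -1, -1))
```

This is the freedom used above to place the determinant zeros and the eigenvalue products in the normalised positions required by (2.1).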

Isomonodromic deformation of the linear system.
In this section we state important results by Jimbo and Sakai [20] on the isomonodromic deformation of the linear system (2.1). Here we recall that isomonodromic deformation stands for deformation as t → qt such that P(z, qt) = P(z, t) or, equivalently, such that the connection matrix satisfies the corresponding relation.

Theorem 3.1 (Jimbo and Sakai [20]). Consider the linear system (2.1); then isomonodromic deformation is equivalent to the time deformation (3.4), for an (a posteriori unique) matrix function B(z, t), rational in z, of the indicated form.
We proceed by making the time-evolution defined by (3.4) more explicit. Note that compatibility of the linear system (2.1) and time deformation (3.4) amounts to the evolution (3.5) of the coefficient matrix A, as well as the evolution (3.6) of the diagonalising matrix H(t) in (2.4). We use the standard coordinates f = f(t), g = g(t) and w = w(t), defined by Eq. (2.7), on the linear system, whose definition we repeat here for the convenience of the reader. Then the linear system is given in terms of {f, g, w} by an explicit parametrisation, where, temporarily using the notation κ̂ = κ + κ^{-1}, the entries are as displayed. The first two equations follow from the fact that both the left- and right-hand sides of (3.5) are necessarily analytic in z ∈ C, and the third follows from equating the degree-one terms in z on both sides of Eq. (3.5).
These equations form an over-determined system for B_0 = B_0(t). They allow one to express B_0 explicitly in terms of {f, g, w}, and Jimbo and Sakai [20] showed that Eqs. (3.9) are then equivalent to the qP aux VI time evolution of (f, g, w).
Furthermore, by means of a direct computation, one can check that Eqs. (2.4) and (3.6) translate to explicit recursions for the elements of the diagonalising matrix H = (h_ij)_{1 ≤ i, j ≤ 2}. We are now in a position to prove Lemma 2.2.

Proof of Lemma 2.2. We start by showing that the linear system is regular in the relevant variables. By direct inspection, one can see that the parametrisation is regular for all values of (f, g) ∈ C* × C* and w ∈ C*. The same is true near each of the six base points b_1, ..., b_6. For example, consider the base point b_3 = (κ_t t, 0). We apply a change of variables, so that {F ∈ C, G = 0} lies on the exceptional line above b_3, after a local blow-up. The parametrisation of the matrix polynomial A is regular at G = 0. Geometrically, the line {F ∈ C, G = 0}, above b_3, parametrises coefficient matrices A whose second column vanishes at z = κ_t t. The one remaining point on the exceptional line above b_3, which does not lie on this line, is an inaccessible initial value: the corresponding formal solution of qP VI never takes values in C* × C* and is thus not a genuine solution. We conclude that A is regular at all accessible points. The situation is slightly more involved for the remaining base points b_7 and b_8, as the auxiliary equation (1.2) is singular at these points. Firstly, as (f, g) approaches b_8 = (∞, κ_∞^{-1} q^{-1}), the iterate ḡ approaches κ_∞ and consequently w vanishes, due to the auxiliary equation. Consider thus a change of variables to a local chart {F, G, W}, in which the coefficient matrix A is regular at F = 0. Geometrically, the line {F = 0, G ∈ C}, above b_8, parametrises coefficient matrices A for which the entry A_12(z) is constant. In particular, A is regular near b_8.

Finally, by the same reasoning, it follows that w → ∞ as (f, g) approaches b_7 = (∞, κ_∞), and that the coefficient matrix A is thus singular there; we obtain Eq. (3.10). For every t ∈ q^M t_0, we choose any H(t) satisfying (2.4), but not necessarily (3.6), and let C(z, t) denote the corresponding connection matrix. We proceed with proving Eq. (2.9) in the lemma.
To prove (2.9), it is enough to show that, for any m ∈ M, Eq. (3.11) holds for some diagonal matrix, if m + 1 ∈ M, and a corresponding relation holds for some diagonal matrix, if m + 1 ∉ M (so that necessarily m + 2 ∈ M). The first case is a direct consequence of Theorem 3.1. We may further ensure that the diagonal matrix equals I by imposing Eq. (3.6) at t = t_m. As to the second case, we note that, analogously to the proof of Theorem 3.1 by Jimbo and Sakai [20], one can show that P(z, t) is invariant over two time steps, for an (a posteriori unique) matrix function F(z, t), rational in z, of the indicated form.
The corresponding time evolution of the coefficient matrix is equivalent to two iterations of qP aux VI. By specialising to t = t_m, we obtain (3.11). We may further ensure that the diagonal matrix equals I by imposing the analogous normalisation. This establishes Eq. (2.9).
The last statement of the lemma follows from the fact that rescaling w corresponds to rescaling the connection matrix.

3.3. On the qP VI RHP. In Sect. 2.2, we formulated the main Riemann–Hilbert problem for the qP VI equation, RHP I, in Definition 2.7. Let (f, g) be a solution of qP VI(κ, t_0) and [C(z)] its corresponding monodromy in the monodromy manifold via the monodromy mapping, see Definition 2.3. Then Eq. (2.12) defines a solution of RHP I. In this section, we show how we may reconstruct the solution (f, g) from the solution of RHP I, giving in particular formulas (2.16). This furthermore yields a proof of Lemma 2.5.
Proof of Lemma 2.8. Note that the determinant of z^m C(z) may be written explicitly, for some c_m ∈ C*. Assume we have a solution Ψ^(m)(z) of RHP I, defined in Definition 2.7. Then its determinant |Ψ^(m)(z)| is analytic on C \ γ^(m) and satisfies a scalar jump condition on γ^(m). This scalar RHP is uniquely solved by the explicit formula (3.12). Indeed, the right-hand side satisfies this scalar RHP and, denoting the quotient of the left- and right-hand sides of (3.12) by g(z), it follows that g(z) is an entire function on the complex plane satisfying g(z) → 1 as z → ∞. By Liouville's theorem, g(z) ≡ 1, which yields Eq. (3.12). In particular, the solution Ψ^(m)(z) is globally invertible on C.
Suppose we have another solution Ψ̃^(m)(z) of RHP I; then the quotient R(z) = Ψ̃^(m)(z) Ψ^(m)(z)^{-1} is analytic on C \ γ^(m). Furthermore, R(z) has a trivial jump on γ^(m), i.e. R_+(z) = R_−(z). Therefore, R(z) extends to an analytic function on the entire complex plane. Finally, we know that R(z) = I + O(z^{-1}) as z → ∞; thus R(z) ≡ I, again by Liouville's theorem, and the lemma follows.
Starting with a solution of qP VI, we showed in Sect. 2.2 how to obtain a connection matrix, and therefore a solution of RHP I; see (2.12). We now describe how, conversely, any solution of RHP I leads to a solution of qP VI.
Take a connection matrix C(z) ∈ C(κ, t_0) and suppose RHP I has a solution for at least one m ∈ Z. For m ∈ M, define A(z, q^m t_0) by Eq. (2.13). Due to the jump conditions of Ψ^(m)(z) in RHP I, the matrix A(z, q^m t_0) has trivial jumps on γ^(m) and q^{-1} γ^(m) and thus extends to a single-valued function on the complex z-plane. Furthermore, it follows from the global analyticity and invertibility of Ψ^(m)(z), see Lemma 2.8, that A(z, q^m t_0) is entire.
Finally, as Ψ^(m)(z) = I + O(z^{-1}) as z → ∞, it follows that A(z, q^m t_0) is a degree-two matrix polynomial and, due to Eqs. (3.12) and (2.13), A(z, q^m t_0) is a coefficient matrix of the form (2.2), for m ∈ M. By construction, the connection matrix associated with A(z, q^m t_0) is given by z^m C(z), m ∈ M.
For all m ∈ M, assume that Then the corresponding coordinates (f, g, w) are well defined on A, via Eq. (2.7), and they form a solution of qP^aux_VI(κ, t_0). Furthermore, the values of (f, g, w) can be read directly from the solution Ψ^(m)(z) of the RHP through formulas (2.16).
Consequently, the matrix functions H̃ and Ũ, defined by Eqs. (2.14) and (2.15) for Ψ̃^(m)(z), are related to H and U by The formulas (2.16b) and (2.16c) for f and g are invariant under such rescaling and the lemma follows.
We finish this section with some remarks on assumption (3.13). Firstly, note that this assumption is necessary for the coordinates (f, g, w) to be well defined. Now, suppose that A_{12}(z, q^m t_0) ≡ 0 for some m ∈ M, and write t_m = q^m t_0. Then we have where, by Eq. (2.3), Furthermore, as the eigenvalues of A(0, t_m) are κ_0^{±1} t_m, necessarily By comparing the different possible values of v_1, ..., v_4 in the above two equations, it follows that the parameters must satisfy for some ε_j ∈ {±1}, j = 0, t, 1, ∞. So at least one of the non-splitting conditions (1.5) is violated. Furthermore, from the defining equations of Ψ_0 and Ψ_∞, Eq. (2.5), it follows that Ψ_∞(z, t_m) is lower-triangular and either (Ψ_0)_{11}(z, t_m) or (Ψ_0)_{12}(z, t_m) is identically zero. In particular, either C_{12}(z) ≡ 0 or C_{22}(z) ≡ 0, which means that C(z) is reducible; see Definition 2.9.
We discuss RHP I with reducible monodromy in further detail in Sect. 4.2.

Solvability, Reducible Monodromy and Orthogonal Polynomials
In this section we study the solvability of RHP I, defined in Definition 2.7, and consequently the invertibility of the monodromy mapping introduced in Definition 2.3. In Sect. 4.1, we prove Lemma 2.10 and Theorem 2.12. In Sect. 4.2, we discuss RHP I with reducible monodromy.

Solvability.
We start this section by proving Lemma 2.10. To this end, we briefly recall some fundamental properties of q-theta functions, i.e. analytic functions θ(z) on C^* such that θ(z)/θ(qz) is a monomial. For α ∈ C^* and n ∈ N, we denote by V_n(α) the set of all analytic functions θ(z) on C^* satisfying We note that V_n(α) is a vector space of dimension n if n ≥ 1; see e.g. [29]. For r ∈ R^+, we call D_q(r) := {|q|r ≤ |z| < r} a fundamental annulus. As described in the following lemma, q-theta functions are, up to scaling, completely determined by the location of their zeros within any fixed fundamental annulus.
We now proceed to prove Lemma 2.10.
Proof of Lemma 2.10. Take a connection matrix C(z) ∈ C(κ, t_0) and suppose that C(z) is reducible. Then C(z) is triangular or anti-triangular. Assume C(z) is triangular; then for some c ∈ C^*, where the second equality follows from Definition 2.1. Writing it follows from Lemma 4.1 that for some labeling {i, j, k, l} = {1, 2, 3, 4}, c_{11}, c_{22} ∈ C^* and n ∈ Z. Furthermore, by Definition 2.1, violating the non-splitting conditions (1.5).
It follows similarly that C(κ, t_0) contains reducible monodromy in the anti-triangular case, and the lemma follows.
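As a quick aside, the quasi-periodicity that characterises the q-theta functions used above can be checked numerically. The sketch below assumes one standard product convention, θ_q(z) = ∏_{k≥0}(1 − q^{k+1})(1 + q^k z)(1 + q^{k+1}/z), which satisfies θ_q(qz) = z^{−1} θ_q(z); the paper's normalisation may differ.

```python
# Numerical sanity check of q-theta quasi-periodicity: theta(q z) = theta(z) / z.
# Convention assumed here (it may differ from the paper's normalisation):
#   theta_q(z) = prod_{k >= 0} (1 - q^(k+1)) (1 + q^k z) (1 + q^(k+1) / z)

def theta_q(z, q, terms=200):
    """Truncated infinite product; accurate for |q| < 1 and moderate z."""
    val = 1.0
    for k in range(terms):
        val *= (1 - q ** (k + 1)) * (1 + q ** k * z) * (1 + q ** (k + 1) / z)
    return val

q = 0.3
for z in [0.7, 1.5 + 0.5j, -2.2]:
    # Quasi-periodicity up to truncation and round-off error.
    assert abs(theta_q(q * z, q) - theta_q(z, q) / z) < 1e-9 * abs(theta_q(z, q))
```

Multiplying θ by a monomial shifts the exponent in the quasi-periodicity relation, which is why the spaces V_n(α) are indexed by both a dimension n and a multiplier α.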
To study the solvability of RHP I, in Definition 2.7, it is helpful to consider the following slightly more general RHP.

Definition 4.2 (RHP II).
Given a connection matrix C ∈ C(κ, t_0), a family of admissible curves (γ^(m))_{m∈Z}, and m, n ∈ Z, find a matrix function Ψ^(m,n)(z) which satisfies the following conditions.

(i) Ψ^(m,n)(z) is analytic on C \ γ^(m). (ii) Ψ^(m,n)(z) has continuous boundary values
By comparison with RHP I in Definition 2.7, we can identify Ψ^(m,0)(z) = Ψ^(m)(z). More generally, for any fixed n ∈ Z, RHP II is equivalent to RHP I with C(z) replaced by C(z) z^{-nσ_3}. In particular, we have the following analogue of Lemma 2.8.

Lemma 4.3. For any fixed m, n ∈ Z, if RHP II in Definition 4.2 has a solution Ψ^(m,n)(z), then this solution is unique and globally invertible on the complex plane.
Proof. The proof is analogous to that of Lemma 2.8.
Given the uniqueness in the above lemma, we say that Ψ^(m,n)(z) exists if and only if RHP II has a solution for those values of m, n ∈ Z.
The main reason for considering the more general RHP above is the following result due to Birkhoff [1].

Lemma 4.4. For any fixed m ∈ Z, the solution Ψ^(m,n)(z) of RHP II, in Definition 4.2, exists for at least one n ∈ Z.
Proof. See Birkhoff [1, §21] or the proof of Lemma 4.4 in [23].
Our next step is to study the dynamics of Ψ^(m,n)(z) as n varies, with the ultimate goal of obtaining criteria for the existence of Ψ^(m,n)(z) at n = 0, as these will allow us to prove the solvability of RHP I and thus prove Theorem 2.12.
To this end, if Ψ^(m,n)(z) exists, we denote its expansion around z = ∞ by as z → ∞, and associate a coefficient matrix A^(m,n)(z) as in Eq. (2.13), (4.7) Then A^(m,n)(z) is a degree-two matrix polynomial of the form (2.2), except for a generally different normalisation at z = ∞, In particular, the corresponding coordinates f^(n)(q^m t_0), g^(n)(q^m t_0) and w^(n)(q^m t_0) define a solution of qP_VI(κ^(n), t_0), with if RHP II is solvable in m for that value of n.
We have the following lemma regarding solvability of RHP II as n varies.
then R(z) has a trivial jump on γ^(m) and consequently extends to an analytic matrix function on the whole complex plane, satisfying It follows that R(z) is a matrix polynomial, and Eq. (4.8) follows directly from the normalisation of Ψ^(m,n+k)(z) at z = ∞. By the above observation, the existence of Ψ^(m,n+k)(z) can be studied by examining the solvability of Eq. (4.8), which is how we proceed in establishing the lemma.
Firstly, we consider k = 1. The matrix R(z) must take the form It follows from direct computation that, for z ∈ q^{-1}D^(m)_+, Ψ^(m,n)_{12}(z) ≡ 0 on D^(m)_+. Now, consider Eq. (4.8) for any k > 0. Its (2,2)-entry reads as z → ∞. However, recall that Ψ^(m,n) and thus Eq. (4.9) has no polynomial solution R_{22}(z). It follows that Ψ^(m,n+k)(z) does not exist for any k > 0. Finally, we prove that one of the entries of C(z) must be identically zero. To this end, note that Note that we have the following immediate corollary of Lemmas 4.4 and 4.5.

Corollary 4.6. Consider RHP II in Definition 4.2 and assume C(z) is irreducible. Then, for any m, n ∈ Z, at least one of the solutions Ψ^(m,n)(z) and Ψ^(m,n+1)(z) exists.
We now have all the ingredients to prove Theorem 2.12.
Proof of Theorem 2.12. Take an irreducible connection matrix C(z) ∈ C(κ, t_0). In RHP II, see Definition 4.2, we have an additional integer parameter n, and we denote its solution by Ψ^(m,n)(z) when it exists. For n = 0, this RHP is precisely RHP I. Proving the first part of Theorem 2.12 is thus equivalent to showing that, for any fixed m ∈ Z, the solution Ψ^(m,0)(z) or Ψ^(m+1,0)(z) of RHP II exists. We do this by contradiction. Take m ∈ Z and suppose that neither Ψ^(m,0)(z) nor Ψ^(m+1,0)(z) exists. As C(z) is irreducible, Corollary 4.6 implies that Ψ^(m,-1)(z) and Ψ^(m+1,-1)(z) necessarily exist.
To deduce a contradiction, we define the following matrix function We know that the determinant of C(z) vanishes only at z = κ_t^{±1} q^{m+1} t_0, so that C(z)^{-1} has (simple) poles there. Therefore, B(z) has simple poles at z = κ_t^{±1} q^{m+1} t_0. This, combined with the fact that B(0) = 0 and B(∞) = I, yields for a constant matrix B_0. We now turn our attention to the coefficient matrices A^(m,-1)(z) and A^(m+1,-1)(z), related to Ψ^(m,-1)(z) and Ψ^(m+1,-1)(z) via Eq. (4.7). It follows from the defining equation of B(z), Eq. (4.11), that these coefficient matrices are related by (4.12) To deduce this, it suffices to note that, for z ∈ q^{-1}D^(m)_+, compatibility of the first rows of the right-hand sides of Eqs. (4.7) and (4.11) yields Eq. (4.12). By analytic continuation, Eq. (4.12) holds globally. Now, recall that Ψ^(m,n)(z) has an asymptotic expansion at infinity, see Eq. (4.6), of the form Due to part (i) of Lemma 4.5, we know that u^(m,-1)_{12} = 0 and u^(m+1,-1)_{12} = 0. We will proceed to show that also v^(m,-1)_{12} = 0, which, due to part (iii) of Lemma 4.5, means that C(z) is not irreducible, giving us the desired contradiction.
To get there, we first note that, by considering the expansion of B(z) as z → ∞ in the first row of Eq. (4.11), we obtain (B_0)_{12} = 0. Since u^(m,-1)_{12} = 0, it follows from the first row of the right-hand side of Eq. (4.7) that the (1,2) entry of A^(m,-1) satisfies (4.13) where c is a constant. We now show that c = 0. As (B_0)_{12} = 0, this implies the following dichotomy: either Secondly, the left-hand side of Eq. (4.12) is analytic at z = κ_t^{±1} q^m t_0, but B(qz), on the right-hand side, has a pole at those two points. This means that (4.14) We now consider the (1,2)-entry of Eq. (4.14) for the two choices of the sign ±. The positive choice leads to a tautology in Case (I), while the negative choice gives On the other hand, the positive choice in Case (II) gives while the negative choice is a tautology. Due to the non-resonance conditions (1.4), κ_t^2 ≠ 1, and so it follows from the above results that c = 0. Therefore, A^(m,-1)_{12} = 0, which, due to part (iii) of Lemma 4.5, means that C(z) is not irreducible, giving us the desired contradiction.
We conclude that the solution Ψ^(m)(z) or Ψ^(m+1)(z) of RHP I exists for any m ∈ Z, establishing the first part of the theorem.
Let (f, g, w) be the corresponding solution of qP^aux_VI(κ, t_0) via (2.16). The second part of the theorem asserts that, for m ∈ Z, Ψ^(m)(z) fails to exist if and only if (f(t_m), g(t_m)) = (∞, κ_∞).

Reducible monodromy, orthogonal polynomials and special function solutions.
In the case of P_VI, it is well known that reducible monodromy yields special function solutions; see Mazzocco [26]. Furthermore, in that case, the standard RHP for P_VI, when solvable, can be solved explicitly in terms of certain orthogonal polynomials [6,10].
In this subsection, we show that the same phenomenon occurs for qP_VI. Recall that the monodromy manifold contains reducible monodromy if and only if conditions (1.5a) or conditions (1.5b) are violated. We discuss one example from each of these two sets of non-splitting conditions. Firstly, we consider the case where violating one of the conditions in (1.5a), and consider RHP II, defined in Definition 4.2, with the following upper-triangular connection matrix C(z) ∈ C(κ, t_0), Here c ∈ C and ν ∈ C^* are two pieces of monodromy data that can be chosen freely. Writing t_m = q^m t_0, the jump matrix of Ψ^(m,n)(z) in RHP II can be written as We bring RHP II into the standard Fokas–Its–Kitaev RHP form [8,9] for orthogonal polynomials by applying a transformation where D_1 and D_2 are diagonal matrices and F^(m)_0(z) and F^(m)_∞(z) are analytic and invertible matrix functions on D^(m)_- and D^(m)_+ respectively. After such a transformation, the jump matrix of Y^(m,n)(z) reads and we wish to choose D_{1,2} and F_{0,∞} such that this jump matrix is upper-triangular with constant diagonal entries equal to 1. To this end, we choose F_{0,∞} so that they cancel the q-theta functions on the diagonal, and we choose D_1 and D_2 to normalise the now constant diagonal entries to 1. Then the jump matrix reads and Y^(m,n)(z) solves the following RHP, if it exists.
where w(z, t) is the weight function defined in Eq. (4.16).
RHP III is the standard Fokas-Its-Kitaev RHP for orthogonal polynomials on the contour γ (m) with respect to the weight function w(z, t m ). We refer to Deift [4] for more background information on the theory of orthogonal polynomials and corresponding RHPs.
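For the reader's convenience, we record the general shape of the Fokas–Its–Kitaev RHP and its classical determinantal solution; the notation below (contour γ, weight w, monic orthogonal polynomials p_n with squared norms h_n) is ours and serves only as a sketch of the general mechanism, not as a statement of the specific RHP III.

```latex
Find $Y(z)$ analytic on $\mathbb{C} \setminus \gamma$ such that
\[
  Y_+(z) = Y_-(z)\begin{pmatrix} 1 & w(z) \\ 0 & 1 \end{pmatrix}, \quad z \in \gamma,
  \qquad
  Y(z) = \bigl(I + O(z^{-1})\bigr)\, z^{n\sigma_3}, \quad z \to \infty.
\]
If the monic orthogonal polynomials $p_n$ for $w$ exist, with squared norms $h_n$, then
\[
  Y(z) = \begin{pmatrix}
    p_n(z) & \dfrac{1}{2\pi i}\displaystyle\int_{\gamma}\dfrac{p_n(s)\,w(s)}{s-z}\,ds \\[2ex]
    -\dfrac{2\pi i}{h_{n-1}}\,p_{n-1}(z) & -\dfrac{1}{h_{n-1}}\displaystyle\int_{\gamma}\dfrac{p_{n-1}(s)\,w(s)}{s-z}\,ds
  \end{pmatrix}.
\]
```

Orthogonality of p_n is exactly what upgrades the decay of the weighted Cauchy transform in the (1,2)-entry from O(z^{-1}) to O(z^{-n-1}), matching the normalisation at infinity.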
We proceed to draw some immediate conclusions from the equivalence between RHPs II and III, given in Definitions 4.2 and 4.7 respectively, and the theory of orthogonal polynomials. If n < 0, then RHP III is unsolvable for every m ∈ Z, and thus the same holds for RHP II.
When n = 0, RHP III is solvable for every m ∈ Z and the solution is explicitly given by where C^(m) denotes the Cauchy operator on γ^(m), When n > 0, RHP III is solvable if and only if the Hankel determinant of moments is nonzero, in which case the solution of the RHP is explicitly given by where p_n(z; t_m), for n ≥ 0, denotes the (generically) degree n polynomial The latter polynomials satisfy the orthogonality condition and thus form a sequence of orthogonal polynomials with respect to the complex functional and assume c ≠ 0. We may employ a similar argument as in the proof of Theorem 2.12 to show that, for any m ∈ Z, m ∈ M_n or m + 1 ∈ M_n. We thus obtain a corresponding solution (w^(n), f^(n), g^(n)) of qP^aux_VI(κ^(n), t_0), where κ^(n) = (κ_0, κ_t, κ_1, κ_{∞,n}), κ_{∞,n} := q^n κ_∞, κ_0 = κ_t κ_1 κ_∞, for n ≥ 0.
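To illustrate the Hankel-determinant formulas above in a self-contained way, the sketch below constructs monic orthogonal polynomials directly from moments via the classical determinant formula. As a stand-in for the paper's weight w(z, t_m), we use the uniform weight on [−1, 1], whose moments are exact rational numbers.

```python
from fractions import Fraction

def moment(k):
    """Moments of the uniform weight on [-1, 1]: mu_k = int_{-1}^{1} x^k dx."""
    return Fraction(2, k + 1) if k % 2 == 0 else Fraction(0)

def det(M):
    """Determinant by Laplace expansion (fine for the small sizes used here)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def hankel_det(n):
    """Hankel determinant of moments Delta_n = det(mu_{i+j}), with Delta_0 = 1."""
    if n == 0:
        return Fraction(1)
    return det([[moment(i + j) for j in range(n)] for i in range(n)])

def monic_op(n):
    """Coefficients [c_0, ..., c_n] of the monic orthogonal polynomial p_n,
    read off from the classical Hankel-determinant formula: p_n(z) is
    (1/Delta_n) times the determinant of the moment matrix bordered by the
    row (1, z, ..., z^n)."""
    D = hankel_det(n)
    rows = [[moment(i + j) for j in range(n + 1)] for i in range(n)]
    coeffs = []
    for j in range(n + 1):
        minor = [row[:j] + row[j + 1:] for row in rows]
        coeffs.append((-1) ** (n + j) * det(minor) / D)
    return coeffs

# p_2 is the monic Legendre polynomial x^2 - 1/3, and <p_n, x^k> = 0 for k < n.
assert monic_op(2) == [Fraction(-1, 3), Fraction(0), Fraction(1)]
for n in range(1, 5):
    c = monic_op(n)
    for k in range(n):
        assert sum(c[j] * moment(j + k) for j in range(n + 1)) == 0
```

The vanishing of Δ_n is precisely the obstruction to solvability noted in the text: when Δ_n = 0 the degree of p_n drops and the determinantal solution formula breaks down.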
We proceed to derive explicit formulas for f^(n) and g^(n). To this end, we note that the next-to-highest order coefficient in the asymptotic expansion can be written explicitly as where Δ_n(t_m) denotes the n-th Hankel determinant of moments and with Δ̃_1(t_m) = μ_1 and Δ̃_0(t_m) = 0. By direct substitution of the corresponding asymptotic expansion of Ψ^(m,n)(z) around z = ∞ into Eq. (2.13), we find where t = t_m and the linear term L reads.
Upon substituting the explicit formula for w^(n) into the auxiliary Eq. (1.2), and solving for g, we obtain Note that, by the above formulas, Δ_n(t) = 0 if and only if f^(n)(t) = ∞ and g^(n)(t) = κ_{∞,n}, consistent with Theorem 2.12. Furthermore, the moments μ_k = μ_k(t) can be expressed explicitly in terms of Heine's basic hypergeometric functions. Indeed, a residue computation yields that the k-th moment equals Sakai [30] first derived special function solutions of qP_VI, written in terms of Casorati determinants of Heine's basic hypergeometric functions; these correspond to setting ν = κ_t^{-1} or ν = κ_0^2/κ_t in the above, so that S_1 = 0 or S_2 = 0 respectively. Ormerod et al. [28] related a family of semi-classical orthogonal polynomials to qP_VI, via the Jimbo–Sakai linear system, and derived formulas similar to (4.18) and (4.19) above. To relate the orthogonal polynomials in this section to those in [28], we write the complex functional (4.17) in terms of q-Jackson integrals. Assuming that |κ_0| < |q|^{-1/2}, a residue computation gives for any entire function p(z), where the right-hand side integrals are standard Jackson integrals, W(z, t) is the weight function, and the dependence of the integral operator on the monodromy data {c, ν} is hidden in the coefficients in front of the Jackson integrals, Note that both coefficients satisfy α(qt) = (κ_t ν)^{-1} α(t), and the orthogonal polynomials in Ormerod et al. [28] then coincide with the polynomials p_n above, up to scalar multiplication, when ν is chosen such that α_1(t) = −α_2(t). In other words, ν = ν(t_0) is chosen such that Next, we briefly consider an example coming from one of the conditions in (1.5b) being violated. Namely, we set and consider RHP I, defined in Definition 2.7, with a corresponding upper-triangular connection matrix of the form, where the monodromy data c ∈ C and ν ∈ C^* can again be chosen freely.
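The q-Jackson integrals used in the comparison with [28] above admit a quick numerical illustration. The convention ∫_0^a f(x) d_q x = a(1−q) Σ_{k≥0} q^k f(a q^k) is standard; the test values below are for monomials, where the q-integral has the closed form (1−q)/(1−q^{s+1}) on [0, 1].

```python
def jackson_integral(f, a, q, terms=2000):
    """q-Jackson integral on [0, a]:  a * (1 - q) * sum_{k>=0} q^k f(a q^k)."""
    return a * (1 - q) * sum(q ** k * f(a * q ** k) for k in range(terms))

# For f(x) = x^s on [0, 1] the Jackson integral equals (1 - q) / (1 - q^(s+1)).
q = 0.9
for s in range(4):
    exact = (1 - q) / (1 - q ** (s + 1))
    assert abs(jackson_integral(lambda x: x ** s, 1.0, q) - exact) < 1e-10

# As q -> 1 it approaches the Riemann integral, here int_0^1 x^2 dx = 1/3.
val = jackson_integral(lambda x: x ** 2, 1.0, 0.999, terms=20000)
assert abs(val - 1 / 3) < 1e-3
```

The geometric sampling points a q^k accumulating at 0 are the discrete analogue of the q-spirals on which the solutions of qP_VI live.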
We note that the jump matrix z^m C(z) can be rewritten as where we write t_m := q^m t_0. We apply the transformation Then the jump matrix for Y^(m)(z) reads (4.20) and Y^(m)(z) solves the following RHP, if it exists.

Definition 4.8 (RHP IV).
For m ∈ Z, find a matrix function Y (m) (z) which satisfies the following conditions.
D^(m)_- and D^(m)_+ respectively, related by This RHP takes the form of the Fokas–Its–Kitaev RHP for orthogonal polynomials, but with the contour γ^(m) and the weight function w(z, t_m) scaling with the 'degree' m of the corresponding orthogonal polynomials. In particular, RHP IV is unsolvable for m < 0, and thus so is RHP I in Definition 2.7.
For m = 0, RHP IV is solvable and its solution is given by From Eq. (2.13) it follows that the corresponding linear system A(z, t) at t = t_0 takes the upper-triangular form The domain of (f, g) is given by the semi q-spiral q^N t_0.
Note that, by Eq. (4.21), the value of g at t = t_0 is given by and thus Eq. (1.1) has a singularity at t = q^{-1} t_0 which cannot be resolved. In particular, there exists no isomonodromic continuation of the solution past t = t_0; see also [5, Prop. 4.1].
This phenomenon has also been observed for solutions of other discrete Painlevé equations associated with orthogonal polynomials; see e.g. Van Assche [34].
We emphasise that, also in this case, one can derive explicit expressions for f (q m t 0 ) and g(q m t 0 ), m ≥ 0, in terms of determinants of moments, but with the sizes of the determinants growing with m.
Finally, note that if we set c = 0, so that C(z) is diagonal, then w(z, t) = 0 and A_{12}(z, t_0) ≡ 0. In this singular case, M = {0} and there is no solution (f, g) of qP_VI(κ, t_0) corresponding to this monodromy.
We finish this section by noting that, in general, the domain where RHP I, defined in Definition 2.7, is solvable can take one of five particular forms when C(z) is reducible, characterised by In the first example of this section, we saw cases (1) and (5). In the second example, we saw cases (2) and (4), with m_0 = 0.

The Monodromy Manifold
This section is devoted to the monodromy manifold defined in Definition 2.3. In Sects. 5.1, 5.2 and 5.3 we prove Theorems 2.15, 2.17 and 2.20 respectively.

5.1.
On the embedding of the monodromy manifold. In Sect. 2.4, see Eq. (2.19), we defined a mapping P from the monodromy manifold to (P^1)^4/C^*. In this section, we show that this mapping is an embedding and determine its image, proving Theorem 2.15.
Firstly, we have the following lemma. Proof. Take any two connection matrices C(z), C̃(z) ∈ C(κ, t_0) and suppose that their respective coordinate values ρ and ρ̃ are identical up to scaling, i.e. ρ̃ = cρ for some c ∈ C^*. Then the matrix function is analytic on C^*. But D(z) satisfies and, as κ_0^2 ∉ q^Z, it follows from the general theory of q-theta functions, see e.g. Lemma 4.1, that D(z) ≡ D must be a constant diagonal matrix. Therefore, [C̃(z)] and [C(z)] represent the same point on the monodromy manifold. The lemma follows.
To determine the image of the monodromy manifold under P, it is convenient to consider a related embedding into (P 1 ) 4 of a finer quotient of the space C(κ, t 0 ), given in the following definition.

Definition 5.2.
We define M(κ, t_0) to be the space of connection matrices C(κ, t_0) quotiented by arbitrary left-multiplication by invertible diagonal matrices. We denote the equivalence class of C(z) ∈ C(κ, t_0) in M(κ, t_0) by C(z), and denote by the quotient mapping of M(κ, t_0) onto the monodromy manifold.
are invariant under left-multiplication by diagonal matrices and are thus well defined on equivalence classes in M(κ, t_0). We thus obtain a mapping This mapping is an embedding, by the same argument as given in the proof of Lemma 5.1, with c set equal to 1. Let ι_P denote the quotient mapping The proof of Theorem 2.15 revolves around the diagram which is commutative because right multiplication by a diagonal matrix translates to scalar multiplication of ρ, as shown in Eq. (2.18). We first determine the image of M(κ, t_0) under P, following the technique developed in our previous paper [23], and then obtain Theorem 2.15 by projecting this image into (P^1)^4/C^* via ι_P.
To describe the image of M(κ, t 0 ) under P, we make the following definition.

Definition 5.3.
Recall the definition of the quadratic polynomial T(ρ; κ, t_0) as well as its homogeneous form T_hom in Definition 2.14. Using homogeneous coordinates ρ_k = [ρ^x_k : ρ^y_k] ∈ P^1, 1 ≤ k ≤ 4, the equation We denote by S^*(κ, t_0) the space obtained by cutting this subspace from S(κ, t_0); then the mapping is a bijection.
Proof. Let us take a connection matrix C(z) ∈ C(κ, t_0). It will be convenient to work with the following uniform notation, For any 1 ≤ i, j ≤ 2, the matrix entry C_{ij}(z) is an element of the two-dimensional vector space see Eq. (4.1), and we know that for some c ∈ C^*. For each 1 ≤ k ≤ 4, the equation π(C(x_k)) = ρ_k translates to where we used homogeneous coordinates ρ_k = [ρ^x_k : ρ^y_k]. We proceed to study Eq. (5.7) by choosing explicit bases of the vector spaces V_{ij}, 1 ≤ i, j ≤ 2. To this end, we introduce the following eight q-theta functions, For any 1 ≤ i, j ≤ 2, the collection {u^{ij}_1(z), u^{ij}_2(z)} forms a basis of V_{ij}. We may thus write As the determinant of C(z) cannot be identically zero, we know that both vectors on the left-hand side of Eqs. (5.9) and (5.10) are nonzero. This in turn implies that the determinants of the 4 × 4 matrices on the left-hand side are zero. By means of a lengthy calculation, one can check that both determinants coincide, up to some nonzero scalar multipliers, with the equation where T_hom is defined in Definition 2.14. We refer the interested reader to our previous work [23, Appendix B], where an analogous computation is given. It follows that P embeds M(κ, t_0) into the threefold S(κ, t_0). We proceed to determine those coordinate-values in S(κ, t_0) which cannot be realised by any connection matrix C(z) ∈ C(κ, t_0).
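The linear-algebra step just used, that a square homogeneous system has a nontrivial solution exactly when its determinant vanishes, can be illustrated with a toy computation; the 4 × 4 matrix below is made up for the purpose and is not one of the matrices in Eqs. (5.9)-(5.10).

```python
from fractions import Fraction as F

def det(M):
    """Determinant by Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

# Prescribed nontrivial kernel vector v.
v = [F(1), F(-2), F(3), F(1)]

# Rows orthogonal to v lie in a 3-dimensional space, so any four such rows
# are linearly dependent and the determinant of the system must vanish.
rows = [
    [F(2), F(1), F(0), F(0)],   # 2*1 + 1*(-2)           = 0
    [F(0), F(3), F(2), F(0)],   # 3*(-2) + 2*3           = 0
    [F(0), F(0), F(1), F(-3)],  # 1*3 + (-3)*1           = 0
    [F(1), F(2), F(1), F(0)],   # 1*1 + 2*(-2) + 1*3     = 0
]
for r in rows:
    assert sum(a * b for a, b in zip(r, v)) == 0   # the system annihilates v
assert det(rows) == 0                              # hence its determinant is 0
```

In the proof above, the roles are played by the coefficient vectors of C_{ij}(z) in the bases u^{ij}_1, u^{ij}_2: their nontriviality forces the vanishing of the two 4 × 4 determinants, which is what produces the equation of the threefold.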
Take any ρ ∈ S(κ, t_0); then we know that both homogeneous equations (5.9) and (5.10) have non-trivial solutions. Let us take one solution of each, respectively, Then we know that C(z) is analytic on C^*, it satisfies and |C(x_k)| = 0 for 1 ≤ k ≤ 4. Furthermore, by construction, There are two options: either Eq. (5.6) holds for some c ∈ C^*, which means that C(z) ∈ C(κ, t_0) and thus ρ lies in the range of P; or the determinant of C(z) is identically zero, (5.14) In the latter case, ρ does not lie in the range of P. To show this, suppose on the contrary that there is a C̃(z) ∈ C(κ, t_0) with π(C̃(x_k)) = ρ_k for 1 ≤ k ≤ 4. Then, by the same argument as in the proof of Lemma 5.1, we have C(z) = D C̃(z), (5.15) for some diagonal matrix D. However, as the determinant of C(z) is identically zero while that of C̃(z) is not, we must have |D| = 0. Consequently, Eq. (5.15) contradicts equations (5.13). It follows that, when the determinant of C(z) is identically zero, ρ indeed does not lie in the range of P. Therefore, to prove the proposition, it remains to show that the determinant of the matrix C(z) constructed above is identically zero if and only if the coordinate-values ρ lie in X = X(κ, t_0), and that this space X is a codimension one subspace of S(κ, t_0).
To this end, let us note that Eqs. (5.13) and (5.14) imply that either Case (i) corresponds, via Eqs. (5.9) and (5.10), to the four lines Indeed, C_{11}(z) ≡ 0 implies that the coefficients α^{11}_1, α^{11}_2 in Eq. (5.9) are zero. A nontrivial solution of (5.9) with these constraints exists if and only if the coordinate-values ρ lie inside one of the above four lines.
Similarly, case (ii) corresponds to the four lines Note that the eight lines defined in (5.16) and (5.17) indeed lie inside X. Finally, in case (iii), C(z) must take the form Consequently, for any choice of c, τ ∈ C^*, Eq. (5.18) defines a point on the threefold S(κ, t_0). We now make the important observation that the formulae (5.18) are κ_0-independent. That is, Eq. (5.18) defines a point on S(λ_0, κ_t, κ_1, κ_∞, t_0) for any value of λ_0. Thus these points lie in the subspace X.
To prove the proposition, it suffices to show that, conversely, any point in X lies either on one of the eight lines (5.16) and (5.17), or is given by (5.18) for some choice of c, τ ∈ C^*. To this end, let us take a point ρ ∈ X which does not lie on one of the eight lines. Construct a corresponding matrix C(z) via Eqs. (5.9) and (5.10); see Eq. (5.11). Then C(z) is analytic on C^*, it satisfies (5.12), and Eq. (5.13) holds true.
As ρ ∈ X ⊆ S(1, κ_t, κ_1, κ_∞, t_0), we can similarly construct a matrix C̃(z) via Eqs. (5.9) and (5.10), which satisfies This matrix function is also analytic on C^* and satisfies (5.13). Now suppose, for the sake of obtaining a contradiction, that ρ is not given by (5.18) for any c, τ ∈ C^*, so that |C(z)| ≡ 0. Consider the quotient D(z) = C(z) C̃(z)^{-1}. As, by construction, C(z) and C̃(z) have the same ρ-coordinate values, it follows that D(z) is an analytic function on C^*. However, D(z) satisfies the q-difference equation and therefore, by Lemma 4.1, D(z) ≡ 0 and consequently C(z) ≡ 0, which contradicts the fact that C(z) satisfies (5.13).
We conclude that the subspace X is explicitly parametrised by where φ is the function defined in (5.18) and the closure is taken in (P^1)^4. Thus X is a codimension-one closed subspace of S(κ, t_0). Furthermore, we have shown that X consists precisely of the points in the threefold S(κ, t_0) that cannot be realised as coordinate-values ρ of any connection matrix C(z) ∈ C(κ, t_0). Thus the image of M(κ, t_0) under the embedding P is given by S(κ, t_0) \ X, and the proposition follows.
Proof of Theorem 2.15. Recall from Definition 5.2 that elements of the finer space M(κ, t_0) are connection matrices C considered equivalent under left multiplication by diagonal matrices, while elements of the monodromy manifold M(κ, t_0) are those considered equivalent under both left and right multiplication by diagonal matrices. Note that the desired bijection has already been proved for the finer space in Proposition 5.4. The proof of the present theorem will therefore follow by applying an appropriate quotient mapping from the finer space to M(κ, t_0), together with the corresponding quotient from the threefold S^*(κ, t_0) to the surface S^*(κ, t_0). Recall that Definition 5.2 denotes the former quotient by ι_M; the latter quotient is denoted by ι_P, defined in Eq. (5.2). Now, consider the commutative diagram (5.3). By Proposition 5.4, the image of the finer embedding is given by the threefold S^*(κ, t_0). Therefore, the image of the composition ι_P ∘ P is given by the surface S^*(κ, t_0). As ι_M is surjective, it follows from the commutativity of diagram (5.3) that the image of P is given by S^*(κ, t_0).
In Lemma 5.1, it was shown that P is injective, and it thus follows that P is a bijection, which proves the theorem.
Proof of Remark 2.16. We note that, by Eq. (5.19), we have the following explicit parametrisation of the curve X = X(κ, t_0), where the closure is taken in (P^1)^4/C^* and x_k, 1 ≤ k ≤ 4, are as defined in Eq. (2.17). Note that this parametrisation is κ_0-independent, which implies By the definition of X, Eq. (2.22), the right-hand side is also a subset of X; they are therefore equal, yielding the desired result, Eq. (2.24).

Smoothness of the monodromy manifold.
In this subsection, we study the smoothness of the monodromy manifold M(κ, t_0) and prove Theorem 2.17. The monodromy manifold does not naturally come with a topology. However, due to Theorem 2.15 and Proposition 5.4, we have the following refined version of the commutative diagram (5.3), where both versions of the mapping P are bijective and ι_{S^*} denotes the quotient mapping ι_P restricted to S^*(κ, t_0). The monodromy manifold inherits a topology from S^*(κ, t_0) via P. Similarly, the finer space M(κ, t_0) inherits a topology from the threefold S^*(κ, t_0). To prove Theorem 2.17, we first study the smoothness of the space S^*(κ, t_0). We then deduce corresponding results for the surface S^*(κ, t_0) by taking the quotient with respect to scalar multiplication. Finally, we translate the results for S^*(κ, t_0) to the monodromy manifold.
The following proposition describes the singular set of the space S * (κ, t 0 ) and shows that it is empty if and only if the non-splitting conditions hold. Furthermore, all these singularities are ordinary double-point singularities.
In particular, the following statements are equivalent.
The set S^*_sing is empty. (iii) The non-splitting conditions (1.5) hold true.
is the zero locus of the polynomial T(ρ; κ, t_0) in (P^1)^4, and X(κ, t_0) denotes a subspace of S(κ, t_0), defined in Eq. (5.4). From here on, we will often suppress the explicit dependence on (κ, t_0) of T(ρ), S, X and S^* = S \ X.
Firstly, as X is, by definition, the zero locus of two polynomials, it is closed in S. Hence, S^* is open in S. To prove the first part of the proposition, we study whether the gradient of T(ρ) vanishes anywhere on the open subset S^* of S.
By evaluating this identity, and the second equation in (5.29), at κ_0 = q^{-1/2}, it follows that h'(q^{1/2}) = 0, so that κ_0 = q^{1/2} is at least a double zero of h. The same statement follows analogously for κ_0 = −q^{1/2}.
In conclusion, we have found four zeros of h, κ_0 = ±1, ±q^{1/2}, each of multiplicity at least two. But h is a degree 8 theta function. It follows from this, and Eq. (5.28), that where h̃ is a function independent of κ_0. By following the same procedure with respect to the variables κ_t, κ_1, κ_∞, we obtain for some constant c which may only depend on t_0 and q. At this point, one simply evaluates both sides at κ_0 = κ_t = κ_1 = κ_∞ = i to obtain c = 1, which yields Eq. (5.27).
We now return to the proof of the proposition. We have already established that S^* has no singularities in its affine part. It remains to study whether S^* has singularities with one or more coordinates equal to ∞. Note that we only have to check the cases where one or two of the coordinates are equal to ∞, as points with more than two coordinates equal to ∞ lie in X and thus not in S^*.
Let us start by considering whether there are any singularities in To this end, we evaluate the gradient of For this gradient to vanish, it is required that T_{14} = T_{24} = T_{34} = 0, which cannot be realised without violating one of the non-resonance conditions (1.4). Therefore, S^* has no singularities with ρ_4 = ∞ and the remaining coordinates finite. Applying the same argument in the three other cases, it follows that S^* has no singularities with precisely one coordinate equal to ∞. Next, we consider the existence of singularities on S^* with two coordinates infinite. Let us, for example, consider ρ_3 = ρ_4 = ∞ with ρ_1 and ρ_2 finite. Setting Therefore, In turn, T_{34} = 0 if and only if κ_0 κ_∞ t_0 ∈ q^Z or κ_0^{-1} κ_∞ t_0 ∈ q^Z. Thus T_{34} ≠ 0 when the non-splitting conditions (1.5) hold true.
More generally, if the non-splitting conditions (1.5) hold true, then all of the coefficients T_{ij}, 1 ≤ i < j ≤ 4, are nonzero, and consequently there are no points on S^* with two coordinates equal to ∞. We conclude that S^* is smooth when the non-splitting conditions hold true.
Returning to the example above, i.e. ρ_3 = ρ_4 = ∞, under the assumption that T_{34} = 0, evaluation of the gradient of which vanishes at ρ^x_1 = ρ^x_2 = 0, and only at this point, as where H is the Hessian of T defined in Eq. (5.26). The determinant of the Hessian of F at the point (ρ^x_1, ρ^x_2, ρ^y_3, ρ^y_4) = 0 equals |H|, which is nonzero, and thus this point is a non-degenerate saddle point of F. In particular, {F = 0} has an ordinary double point singularity at 0, by the complex Morse lemma. Therefore, the manifold S^* has an ordinary double point singularity at ρ = (0, 0, ∞, ∞) when κ_0 κ_∞ t_0 ∈ q^Z or κ_0^{-1} κ_∞ t_0 ∈ q^Z. More generally, if some of the non-splitting conditions (1.5) are violated, then the intersection S^*_sing of and S^* is non-empty; at each point of S^*_sing, S^* has an ordinary double point singularity, and S^* is smooth elsewhere. Otherwise, S^*_sing is empty, and in that case we have already shown that S^* has no singularities. This completes the proof of the proposition.
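The appeal to the complex Morse lemma in the proof above can be spelled out as follows; this is the standard statement, recorded here as a sketch for the reader.

```latex
\textbf{Complex Morse lemma} (standard statement). If $F$ is holomorphic near
$0$ in $n$ variables with
\[
  F(0) = 0, \qquad \nabla F(0) = 0, \qquad
  \det\bigl(\operatorname{Hess} F(0)\bigr) \neq 0,
\]
then there exist local holomorphic coordinates $u_1, \dots, u_n$ in which
\[
  F = u_1^2 + u_2^2 + \cdots + u_n^2,
\]
so the zero locus $\{F = 0\}$ is locally a quadric cone with an isolated
singular point at the origin: an ordinary double point.
```

This is exactly how the non-degenerate saddle point of F above translates into the ordinary double point on the manifold.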
We now proceed to prove Theorem 2.17 by using Proposition 5.5.
Proof of Theorem 2.17. The first part of the proof is to show that the smoothness properties of the 3-manifold S^*(κ, t_0), established in Proposition 5.5, are preserved under the quotient map to the surface S^*(κ, t_0). The second step is to translate these results to the monodromy manifold M(κ, t_0).
Recall that S^*(κ, t_0) is the zero set of the polynomial T(ρ; κ, t_0), given in Definition 2.14. Due to Proposition 5.5, it can be singular only at points in the finite set given in Eq. (5.23). Recall also that S^*_sing refers to the subset of singular points lying on the 3-manifold S^*(κ, t_0). Consider the smooth complex 3-manifold S^*(κ, t_0) = S^*(κ, t_0) \ S_sing.
As (non-zero) scalar multiplication acts smoothly on S^*(κ, t_0) \ S_sing, and no element of this set is invariant under the action, it follows that the quotient S^*(κ, t_0) is a smooth complex surface. Now consider a point ρ_0 ∈ S_sing. Since this point is invariant under the smooth action ρ ↦ c ρ, c ∈ C^*, it is easy to see that the quotient space S^* is not Hausdorff near its image [ρ_0]. In fact, near points in S_sing, the space S^* even fails to be locally T_1. In particular, the smooth structure on S^*(κ, t_0) cannot be extended to include points in S_sing.
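The failure of separation near a fixed point can be seen in a minimal model (our illustration, not taken from the paper):

```latex
Consider the scaling action of $\mathbb{C}^*$ on $\mathbb{C}^n$,
$c \cdot \rho = c\,\rho$, whose unique fixed point is the origin.
For any $\rho \neq 0$ we have $c\,\rho \to 0$ as $c \to 0$, so the orbit
$\mathbb{C}^* \rho$ accumulates at the fixed point. In the quotient
topology the closure of the singleton $\{[\rho]\}$ therefore contains
$[0]$, so $\mathbb{C}^n / \mathbb{C}^*$ is not $T_1$ (and a fortiori not
Hausdorff) near the image of the fixed point.
```

The same mechanism applies to S^* near [ρ_0] for ρ_0 ∈ S_sing, since ρ_0 is fixed by the scaling action.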
To complete the proof of the theorem, we translate the results on S^*(κ, t_0) to M(κ, t_0) via the mapping P. To this end, recall that P maps the finite set M_sing onto S_sing.
We have shown that S^*(κ, t_0) \ S_sing is a smooth complex surface. Hence M(κ, t_0) \ M_sing is a smooth complex surface. Furthermore, elements of M_sing form singularities of the monodromy manifold, just as points in S_sing are singularities of S^*(κ, t_0).
Finally, we note that M_sing is non-empty if and only if S_sing is non-empty, and the latter holds true if and only if some of the non-splitting conditions are violated, by the equivalence in Proposition 5.5. The theorem follows.

The monodromy manifold as an algebraic surface.
In this section, we prove Theorem 2.20, which allows us to identify the monodromy manifold with an affine algebraic surface embedded in C^6. Furthermore, we describe how the monodromy manifold can also be embedded in (P^1)^3.
Proof of Theorem 2.20. The mapping M is composed of two parts: P : M → S^* and a mapping S^* → F. The mapping P is a (topological) isomorphism due to Theorem 2.15. Hence, it only remains to show that the second mapping is an isomorphism. To prove this, we construct a continuous inverse of it.
We start by recalling that S^*, defined in Eq. (2.23), is locally described by coordinates [ρ] in the ambient space (P^1)^4 / C^*. Similarly, F is described by the coordinates η_{ij}, 1 ≤ i < j ≤ 4, in C^6.
The mapping is a continuous mapping from S^* to F, described by Eq. (2.25) with respect to the above coordinates. In particular, note that, due to Eq. (2.25), for any labelling {i, j, k, l} = {1, 2, 3, 4}, we have η_{ij} = 0 ⟺ ρ_i = 0 or ρ_j = 0 or ρ_k = ∞ or ρ_l = ∞. We first restrict to a suitable subdomain of the domain and a corresponding subset of the co-domain. We proceed by defining an inverse of the mapping on this subdomain and co-domain, and subsequently extend this inverse to the full domain.
Equation (2.26) implies that any element η of F has either zero, three, or four components equal to zero, and the components in (5.34) indeed cover all of F \ F_0.
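As a worked instance of this trichotomy (our example, using the vanishing equivalence stated above), suppose ρ_1 = 0 while ρ_2, ρ_3, ρ_4 ∈ C^*:

```latex
Let $\rho_1 = 0$ and $\rho_2, \rho_3, \rho_4 \in \mathbb{C}^*$. By the
equivalence
\[
  \eta_{ij} = 0 \iff \rho_i = 0 \ \text{or}\ \rho_j = 0
  \ \text{or}\ \rho_k = \infty \ \text{or}\ \rho_l = \infty ,
\]
the three components $\eta_{12}, \eta_{13}, \eta_{14}$ vanish (each
involves $\rho_1$), while $\eta_{23}, \eta_{24}, \eta_{34}$ do not,
since their vanishing would require some $\rho_m = 0$ or
$\rho_m = \infty$ with $m \neq 1$. Thus exactly three components of
$\eta$ are zero, in accordance with the zero/three/four trichotomy.
```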
Then, inspired by (5.32), we correspondingly decompose F as a disjoint union. This yields an extension to a global inverse of the mapping on F: the inverse is continuous on each of the separate components, and it is straightforward to check that its continuations to common boundary points of different components agree with each other.
We finish this section by describing an embedding of the monodromy manifold into (P^1)^3. We assume that the non-splitting conditions (1.5) hold true. In particular, all the coefficients of the polynomial T(ρ; κ, t_0) are nonzero and, therefore, there are no points ρ ∈ S^*(κ, t_0) with two or more components all zero or all infinite. Thus the ρ_{ij} form six well-defined coordinates on the surface S^*(κ, t_0), and thus also on the monodromy manifold M(κ, t_0). Ohyama et al. [27] study the qP_VI monodromy manifold using these coordinates.^3 Theorem 2.15 yields explicit algebraic relations among them. For example, ρ_{12}, ρ_{23} give such coordinates, with range given by the surface (5.35) minus a curve. This curve is defined by the intersection of (5.35) as κ_0 varies over C^*.
Remark 5.6. Assuming the non-splitting conditions (1.5), the six coordinates ρ_{ij}, 1 ≤ i < j ≤ 4, are rational functions from F(κ, t_0) to CP^1, which together embed the surface into (CP^1)^6. The same statements hold true for these coordinates, as functions on the monodromy manifold M(κ, t_0), with respect to the structure of a complex algebraic variety defined in Ohyama et al. [27]. It follows that this structure is compatible with the one induced by Theorem 2.20.

Conclusion
In this paper, we studied the qP_VI equation through its associated linear problem. Assuming non-resonant parameter conditions, we defined the corresponding Riemann-Hilbert problem, which captures the general solution of qP_VI. This problem was shown to be solvable for irreducible monodromy, leading to a one-to-one correspondence between solutions of qP_VI and points on the corresponding monodromy manifold, when the non-splitting conditions are satisfied. We then constructed an explicit embedding of the monodromy manifold into (CP^1)^4 / C^*, whose image is described by the zero locus of a single quadratic polynomial, minus a curve. This allowed us to show that the monodromy manifold is a smooth complex surface when the non-splitting conditions hold true. We further proved that, under the same assumptions, it can be identified with an affine algebraic surface. This surface can be described as the intersection of two quadrics in C^4, and its projective completion is thus a Segre surface.
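For context, we recall a standard fact about such surfaces (not specific to this paper):

```latex
A Segre surface is a smooth complete intersection of two quadrics in
$\mathbb{P}^4$,
\[
  Q_1(z_0, \dots, z_4) = 0, \qquad Q_2(z_0, \dots, z_4) = 0,
\]
which is a del Pezzo surface of degree $4$. Removing a hyperplane
section yields an affine surface cut out by two quadrics in
$\mathbb{C}^4$, matching the description of the monodromy manifold
given above.
```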
The results of this paper suggest a possible framework for tackling several open questions. These include, for example, the classification of algebraic or symmetric solutions of qP_VI, the construction of (classes of) special transcendental solutions via the geometry of the monodromy manifold, and the derivation of solutions with distinctive (e.g. bounded) global asymptotic behaviours.

^3 To be precise, in [27, §5.1.1] the 'dual' coordinates defined by ρ_i ρ_j ∈ P^1 are used, where ρ_k = π(C(x_k)^T) for 1 ≤ k ≤ 4. These coordinates are birationally equivalent to the ρ_{ij} coordinates, and we note that (ρ_1, ρ_2, ρ_3, ρ_4) lies on the threefold S^*(κ_∞^{-1}, κ_t, κ_1, κ_0^{-1}, t_0).