1 Introduction

The Newton Polygon construction for solving equations in terms of power series and its generalization by Puiseux has been successfully used countless times both in the algebraic [18, 24, 25] and in the differential contexts [13, 14, Ch. V], [7, 9, 11, 16, 19, 31] (this is just a biased and brief sample, see also [10] and [12, Sec. 29] for an interesting detailed historical narrative). We extend its use to q-difference equations.

Although this construction is primarily intended to give a method for computing formal power series solutions, we will use it to prove the q-analog of some results concerning the nature of power series solutions of non-linear differential equations. Namely, we establish properties of the growth of the coefficients of a power series solution (Maillet’s theorem) and of the set of exponents of a generalized power series solution.

The method allows us, first of all, to show that the set of exponents of power series solutions with well-ordered exponents in \(\mathbb {R}\) of a formal q-difference equation is included in the translation by a constant of a finitely generated semigroup over \(\mathbb {Z}_{\ge 0}\) (in particular, it has finite rational rank and if the exponents are all rational, then their denominators are bounded). This mirrors the results of Grigoriev and Singer [16] for differential equations. When the q-difference equation is of first order and first degree, we give a bound for this rational rank (see Theorem 3 for a precise statement). We also study properties related to what we call “finite determination” (Definition 4) of the coefficients of the solutions. This is one of the places in which the case \(|q|=1\) is essentially different from the general case. For \(|q|\ne 1\), we prove the finite determination of the coefficients.

Maillet’s theorem [22] is a classical result about the growth of the coefficients \(a_i\) of a formal power series solution of a (non-linear) differential equation: it states that \(|a_i|\le i!^{s}\,R^{i}\) for some constants R and s. Among the different proofs (for instance [15, 21, 22]), Malgrange’s [23] includes a precise bound for s. This bound is optimal except in one case: when the linearized operator along the solution has a regular singularity and the solution is a “non-regular solution”, for which any \(s>0\) works (see the last remark in Malgrange’s paper); we shall refer to it as the (RS-N) case. In [7], the Newton Polygon method allows the author to prove Maillet’s result and to show convergence (i.e. \(s=0\)) in the (RS-N) case.

The first studies on convergence of solutions of non-linear q-difference equations are due to Bézivin [4,5,6]. The q-analog of Maillet’s theorem states that when \(|q|>1\), a formal power series solution of a q-difference equation with analytic coefficients is q-Gevrey of some order s (see Definition 5). Zhang [32] proves this by adapting Malgrange’s proof to the case of convergent q-difference–differential equations. In this paper, the adaptation of the Newton Polygon to q-difference equations allows us to give a new proof of the q-analog of Maillet’s theorem and to extend it to the q-Gevrey non-convergent case. The bounds obtained for convergent equations match Zhang’s in general and are more accurate in the (RS-N) case. However, unlike for differential equations, we cannot prove convergence in this case.

The first version of this paper was uploaded to the arXiv as [8] in 2012. Parts of the second section became a chapter of [3], a joint work with Ph. Barbe and W. McCormick dealing with solutions of algebraic q-difference equations. In that joint book, some results concerning the asymptotic behavior of solutions are provided, but the ones here came first, are more general (power series) and stronger (due to the specific technique). However, we remark that the topics of [3] are broader: analytic, entire and formal solutions, the radius of convergence, conditions describing the possible poles of analytic solutions, associated objects which provide information on the solution (Borel-type transforms), and many exhaustive examples, among them the colored Jones equation for the figure-eight knot, the q-Painlevé I equation, and other combinatorial equations. Thus, the present paper is transverse to the book, and the Newton Polygon method applied to q-difference equations (which appears in both) was first used in this work.

We note, also, that the “Newton Polygon” construction used in the case of linear operators by Adams [1], Ramis [26], Sauloy [28] and others is different from the one presented here. In the linear case, the Newton Polygon is used to find local invariants of the operator while our Newton Polygon is constructed with the aim of looking for formal power series solutions. In Sect. 4 we describe the relation between Adams’ Newton Polygon and Zhang’s bounds. Adams’ construction is also used in [20] to give conditions for the convergence of the solution(s) of analytic nonlinear q-difference equations.

For the reader’s convenience, we include a final section with a detailed working example describing most of the constructions and the evolution of the Newton Polygon as one computes the successive terms of a solution.

2 The Newton–Puiseux Polygon Process for q-Difference Equations

Let q be a nonzero complex number. For \(j\in \mathbb {Z}\), let us denote by \(\sigma ^j\) the automorphism of the ring \({\mathbb C}[[x]]\) of formal power series in one variable given by \(\sigma ^j(y)(x)=y(q^jx)\), that is,

$$\begin{aligned} \sigma ^j\left( \sum _{i=0}^\infty a_i\,x^i\right) = \sum _{i=0}^\infty q^{i\,j}\,a_i\,x^i. \end{aligned}$$
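As an illustration (not part of the paper’s formalism), the action of \(\sigma ^j\) on a truncated power series can be sketched in a few lines of Python; the dictionary representation and the function name are ours:

```python
# Sketch: sigma^j acts on a truncated power series, stored as a dictionary
# {exponent: coefficient}, by multiplying the coefficient of x^i by q^(i*j).
def sigma(series, q, j=1):
    """Return sigma^j(y), i.e. y(q^j x), for a truncated series y."""
    return {i: (q ** (i * j)) * a for i, a in series.items()}

y = {0: 1, 1: 1, 2: 1}        # y = 1 + x + x^2
print(sigma(y, q=2))          # {0: 1, 1: 2, 2: 4}, i.e. 1 + 2x + 4x^2
```

Note that applying the map twice with \(j=1\) agrees with applying it once with \(j=2\), reflecting \(\sigma \circ \sigma =\sigma ^2\).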

Let \( P(x,Y_{0},Y_{1},\dots ,Y_{n})\in \mathbb {C}[[x,Y_{0},\dots ,Y_{n}]]\) be a formal power series. For \(y\in \mathbb {C}[[x]]\) with \({{\,\mathrm{ord}\,}}_x(y)>0\), the expression \(P(x,y,\sigma ^1(y),\dots ,\sigma ^n(y))\) is a well-defined element of \(\mathbb {C}[[x]]\) that will be denoted by P[y]. We associate to \(P(x,Y_{0},Y_{1},\dots ,Y_{n})\) the q-difference equation

$$\begin{aligned} P(x,y,\sigma ^1(y),\dots ,\sigma ^n(y))= 0. \end{aligned}$$
(1)

We will look for solutions of Eq. (1) as formal power series with real exponents. We restrict ourselves to the Hahn field \(\mathbb {C}((x^{\mathbb {R}}))\) of generalized power series, that is, formal power series of the form \(\sum _{\gamma \in \mathbb {R}} c_{\gamma }x^{\gamma }\) whose support \(\{\gamma \mid c_{\gamma }\ne 0\}\) is a well-ordered subset of \(\mathbb {R}\) and \(c_\gamma \in \mathbb {C}\). Hahn fields were essentially introduced in [17]; see [27] for a detailed proof of the ring structure and [30] for a modern study in the context of functional equations. We fix a determination of the logarithm and extend the automorphism \(\sigma \) to \(\mathbb {C}((x^{\mathbb {R}}))\) by setting

$$\begin{aligned} \sigma \left( \sum _{\gamma \in \mathbb {R}} c_{\gamma }\,x^{\gamma }\right) = \sum _{\gamma \in \mathbb {R}} q^{\gamma }\,c_{\gamma }\,x^{\gamma }. \end{aligned}$$
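The extension of \(\sigma \) depends on the chosen determination of the logarithm, since \(q^{\gamma }\) must be read as \(e^{\gamma \log q}\). A small sketch, with the principal branch as the (assumed) determination:

```python
import cmath

# Sketch: with a fixed determination of the logarithm (here, the principal
# branch given by cmath.log), q^gamma := exp(gamma * log q) is unambiguous
# for every real exponent gamma, even when q is not a positive real number.
def q_power(q, gamma):
    return cmath.exp(gamma * cmath.log(q))

# With q = -1 the principal logarithm is i*pi, so q^(1/2) = e^(i*pi/2) = i.
print(q_power(-1, 0.5))
```

A different branch of the logarithm would multiply \(q^{\gamma }\) by \(e^{2\pi i k\gamma }\), which is why the determination must be fixed once and for all.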

For \(y\in \mathbb {C}((x^{\mathbb {R}}))\), its order \({{\,\mathrm{ord}\,}}(y)\) is the minimum of its support if \(y\ne 0\), and \({{\,\mathrm{ord}\,}}(0)=\infty \). In Sect. 2.3, we shall see that if \({{\,\mathrm{ord}\,}}(y)>0\) then the expression \(P(x,y,\sigma ^1(y),\dots ,\sigma ^n(y))\) is a well-defined element of \(\mathbb {C}((x^{\mathbb {R}}))\), hence Eq. (1) makes sense in our setting.

Although we look for solutions in the Hahn field, their support has some finiteness properties, as is the case for differential equations. We say that \(y\in \mathbb {C}((x^{\mathbb {R}}))\) is a grid-based series if there exist \(\gamma _0\in \mathbb {R}\) and a finitely generated semigroup \(\Gamma \subseteq \mathbb {R}_{\ge 0}\) such that the support of y is contained in \(\gamma _0+\Gamma \). Puiseux series are the particular case of grid-based series in which \(\gamma _0\in \mathbb {Q}\) and \(\Gamma \subseteq \mathbb {Q}\). Puiseux series and grid-based series form subfields of the Hahn field, denoted respectively by \(\mathbb {C}((x^\mathbb {Q}))^g\) and \(\mathbb {C}((x^{\mathbb {R}}))^g\). We have

$$\begin{aligned} \mathbb {C}[[x]]\subseteq \mathbb {C}((x^\mathbb {Q}))^g \subseteq \mathbb {C}((x^{\mathbb {R}}))^g \subseteq \mathbb {C}((x^{\mathbb {R}})). \end{aligned}$$

If Eq. (1) is algebraic, i.e. of the form \(P(x,y)=0\), then by Puiseux’s Theorem all its formal power series solutions are of Puiseux type. This is no longer true if instead of \({\mathbb C}\), the base field is of positive characteristic, as the following example (due essentially to Ostrowski) shows: the equation \(-y^p+x\,y+x=0\) over the field \(\mathbb {Z}/p\mathbb {Z}\) has as solution the generalized power series \(y=\sum _{i=1}^{\infty }x^{\mu _i}\) with \(\mu _i={(p^i-1)/(p^{i+1}-p^{i})}\). Notice that the exponents are rational but they do not have a common denominator and moreover \(\mu _1<\mu _2<\cdots <1/(p-1)\) so that they do not even go to infinity. Hence y is neither a Puiseux series nor a grid-based series.
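The behavior of these exponents can be checked numerically. The following sketch (ours, with an arbitrary choice of p) verifies that the \(\mu _i\) increase strictly but stay below \(1/(p-1)\):

```python
from fractions import Fraction

# Sketch: the exponents mu_i = (p^i - 1)/(p^(i+1) - p^i) of Ostrowski's
# example increase strictly but stay below 1/(p-1), so they do not go to
# infinity and the solution is neither Puiseux nor grid-based.
p = 5                                       # any prime works; 5 is arbitrary
mus = [Fraction(p**i - 1, p**(i + 1) - p**i) for i in range(1, 20)]
bound = Fraction(1, p - 1)
assert all(a < b for a, b in zip(mus, mus[1:]))   # strictly increasing
assert all(mu < bound for mu in mus)              # bounded above by 1/(p-1)
print(mus[0], bound)                              # first exponent and bound
```

Indeed \(\mu _i=(1-p^{-i})/(p-1)\), which makes both properties transparent.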

As in the case of differential equations, the number of generalized power series solutions of a given Eq. (1) is not necessarily finite, nor are all of its solutions of Puiseux type. For instance, the q-difference equation \(Y_0\,Y_2-Y_1^2=0\) has \(c\,x^{\mu }\) as a solution for any \(c\in \mathbb {C}\) and \(\mu \in \mathbb {R}\).

2.1 The Newton Polygon

Let \(\mathcal {R}=\mathbb {C}[[x^{\mathbb {R}_{\ge 0}}]]\) be the ring of generalized power series with non-negative order. For a finitely generated semigroup \(\Gamma \subseteq \mathbb {R}_{\ge 0}\), the ring \(\mathbb {C}[[x^{\Gamma }]]\) formed by those generalized power series with support contained in \(\Gamma \) is denoted by \(\mathcal {R}_\Gamma \). Let \(P\in \mathcal {R}[[Y_0,Y_1,\ldots ,Y_n]]\) be a nonzero formal power series in \(n+1\) variables over \(\mathcal {R}\). For \(\rho =(\rho _0,\rho _1,\ldots ,\rho _n)\in \mathbb {N}^{n+1}\), we shall write \(Y^{\rho }=Y_{0}^{\rho _{0}}\cdot Y_1^{\rho _1}\cdots Y_{n}^{\rho _{n}}\); we shall also write \(\mathcal {R}[[Y]]\) instead of \(\mathcal {R}[[Y_{0},Y_{1},\ldots ,Y_{n}]]\). The coefficient of \(Y^{\rho }\) in P will be denoted \(P_\rho (x)\in \mathcal {R}\) and, for \(\alpha \in \mathbb {R}\), the coefficient of \(x^{\alpha }\) in \(P_\rho (x)\) will be denoted \(P_{\alpha ,\rho }\in \mathbb {C}\). Notice that, as \(P\in \mathcal {R}[[Y_0,Y_{1},\ldots , Y_{n}]]\), each coefficient \(P_{\rho }(x)\) belongs to \(\mathcal {R}\), that is, \(P_{\rho }(x)\) is a power series with well-ordered support contained in \(\mathbb {R}_{\ge 0}\). Thus, we can write:

$$\begin{aligned} P= \sum _{\rho \in \mathbb {N}^{n+1}} P_\rho (x)\,Y^{\rho },\quad \text {and}\quad P_\rho (x)= \sum _{\alpha \in \Gamma _\rho } P_{\alpha ,\rho }\, x^{\alpha }, \end{aligned}$$

where for each \(\rho \), \(\Gamma _\rho \) is a well-ordered subset of \(\mathbb {R}_{\ge 0}\) (in general, the \(\Gamma _{\rho }\) will all be different). We associate to P its cloud of points \(\mathcal {C}(P)\): the set of points \((\alpha ,|\rho |)\in {\mathbb R}^2\) with \(|\rho |=\rho _{0}+\rho _{1}+\dots +\rho _{n}\), for all \((\alpha ,\rho )\) such that \(P_{\alpha ,\rho }\ne 0\).

The Newton Polygon \(\mathcal {N}(P)\) of P is the convex hull of

$$\begin{aligned} \bar{\mathcal {C}}(P)= \{(\alpha +r,|\rho |)\mid (\alpha ,|\rho |)\in \mathcal {C}(P),\,\, r\in \mathbb {R}_{\ge 0}\}. \end{aligned}$$

A supporting line L of \(\mathcal {N}(P)\) is a line such that \(\mathcal {N}(P)\) is contained in the closed right half-plane defined by L and \(L\cap \mathcal {N}(P)\) is not empty; that is, a line meeting \(\mathcal {N}(P)\) on its boundary.

Fig. 1 Cloud, Newton polygon and some supporting lines of P in (2)

Figure 1 shows the points in the cloud and the Newton polygon (bold lines) of the following polynomial (which will be extensively studied in Sect. 5):

$$\begin{aligned} P= & {} -x^3\,{ Y_0}^4\,{ Y_5}^2 +4\,{Y_1}^4 -9\,{Y_0}^2\,{ Y_1}\,{ Y_2} +2\,{ Y_0}^3\,{ Y_2} \nonumber \\&\quad +q^{-4}{x{ Y_0}\,{ Y_2}} -q^{-4}{x^3\,{ Y_2}} -x^3\,{ Y_0}+x^5. \end{aligned}$$
(2)

Notice that the ordinate axis corresponds to \(|\rho |\).
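As a computational aside (not from the paper), the finite sides of \(\mathcal {N}(P)\) can be recovered from the cloud of points by a standard convex-chain scan. The sketch below (our representation) runs on the cloud of the example (2), with points \((\alpha ,|\rho |)\) and the co-slope convention defined above:

```python
# Sketch: vertices of the Newton polygon N(P), top to bottom, computed as
# the left/lower convex chain of the cloud of points (alpha, |rho|).
def newton_polygon(cloud):
    """Vertices of N(P) from top to bottom."""
    ys = sorted({b for _, b in cloud}, reverse=True)
    # keep only the leftmost point of the cloud at each ordinate
    pts = [(min(a for a, b in cloud if b == y), y) for y in ys]
    chain = []
    for p in pts:
        while len(chain) >= 2:
            (ax, ay), (bx, by) = chain[-2], chain[-1]
            # keep the middle point only if A -> B -> p turns strictly left
            if (bx - ax) * (p[1] - by) - (by - ay) * (p[0] - bx) > 0:
                break
            chain.pop()
        chain.append(p)
    return chain

def co_slope(a, b):
    """Minus the inverse of the slope of the segment ab (0 if vertical)."""
    return (b[0] - a[0]) / (a[1] - b[1]) if a[1] != b[1] else 0.0

# Cloud of the example (2), with multiplicities as they arise from its terms:
cloud = [(3, 6), (0, 4), (0, 4), (0, 4), (1, 2), (3, 1), (3, 1), (5, 0)]
verts = newton_polygon(cloud)
print(verts)                                               # [(3, 6), (0, 4), (1, 2), (5, 0)]
print([co_slope(a, b) for a, b in zip(verts, verts[1:])])  # [-1.5, 0.5, 2.0]
```

The computed co-slopes \(-3/2\), \(1/2\) and 2 are the ones appearing in the discussion of Fig. 1 below; the point (3, 1) is discarded because it lies on the side joining (1, 2) and (5, 0).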

It will be convenient to speak of the co-slope of a line, defined as the opposite of the inverse of its slope, the co-slope of a vertical line being 0. In order to deal with the particular case in which P is a polynomial in the variables \(Y_0,Y_1,\ldots ,Y_n\) we define:

$$\begin{aligned} \mu _{-1}(P)= {\left\{ \begin{array}{ll} -\infty &{} \text {if }P\text { is a polynomial in }Y_0,\ldots ,Y_n,\\ 0 &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$

Finally, from now on we assume \(P\ne 0\) everywhere.

Lemma 1

Let \(P\in \mathcal {R}[[Y]]\). For any \(\mu >\mu _{-1}(P)\) there exists a unique supporting line of \(\bar{\mathcal {C}}(P)\) with co-slope \(\mu \), and the Newton polygon \(\mathcal {N}(P)\) has a finite number of sides with co-slope greater than or equal to \(\mu \). If P is a polynomial then \(\mathcal {N}(P)\) has a finite number of sides and vertices. If \(P\in \mathcal {R}_\Gamma [[Y]]\) for some finitely generated semigroup \(\Gamma \subseteq \mathbb {R}_{\ge 0}\), then the Newton Polygon \(\mathcal {N}(P)\) has a finite number of sides with positive co-slope.

The unique supporting line with co-slope \(\mu \) will be denoted henceforward \(L(P;\mu )\).

Proof

If P is a polynomial, let h be its total degree in the variables \(Y_0,\ldots ,Y_n\). Otherwise we define h as follows: since \(P\ne 0\), the set \(\mathcal {C}(P)\) is nonempty; take a point \(v\in \mathcal {C}(P)\) and let L be the line passing through v with co-slope \(\mu \). Let (0, h) be the intersection of L with the OY-axis. For each \(\rho \in \mathbb {N}^{n+1}\), write \(\alpha _\rho ={{\,\mathrm{ord}\,}}\,P_\rho (x)\). Only the finitely many points \((\alpha _\rho ,|\rho |)\) with \(|\rho |\le h\) and \(P_{\rho }(x)\ne 0\) are relevant for the definition of the line \(L(P;\mu )\) and for the construction of the sides of \(\mathcal {N}(P)\) with co-slope greater than or equal to \(\mu \). This proves the first two statements; the last one is a consequence of the fact that, for a given \(\alpha >0\), the set \(\Gamma \cap \{r<\alpha \}\) is finite. \(\square \)

For \(\mu >\mu _{-1}(P)\), define the following polynomial in the variable C:

$$\begin{aligned} \Phi _{(P;\mu )}(C)=\sum _{(\alpha ,|\rho |)\in L(P;\mu )} P_{\alpha ,\rho }\,q^{\mu \,w(\rho )} \,C^{|\rho |}, \end{aligned}$$

where \(w(\rho )=\rho _1+2\rho _2+\cdots +n\rho _n\). For a vertex v of \(\mathcal {N}(P)\), the indicial polynomial is

$$\begin{aligned} \Psi _{(P;v)}(T)=\sum _{(\alpha ,|\rho |)= v} P_{\alpha ,\rho }\,T^{w(\rho )}. \end{aligned}$$

For P given in Eq. (2), some examples of initial and indicial polynomials are: for \(v_{0}=(3,6)\), \(\Psi _{(P;v_0)}(T)=-T^{10}\), and for \(v_1=(0,4)\), \(\Psi _{(P;v_1)}(T)=T^2(T-2)(4T-1)\). As regards the sides, the one joining (3, 6) with (0, 4) has co-slope \(\gamma _1=-3/2\) and we have \(\Phi _{(P;\gamma _1)}(C)=2q^{-3}C^{4}-9q^{-9/2}C^{4}+4q^{-6}C^{4}-q^{-15}C^6\), whereas the one joining (0, 4) and (1, 2) has co-slope \(\gamma _2=1/2\) and \(\Phi _{(P;\gamma _2)}(C)=C^4(4q^2-9q^{3/2}+2q) + q^{-3}C^2\).
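The computation of \(\Phi _{(P;\mu )}\) can also be sketched numerically. In the code below (ours, with q fixed to the arbitrary value 9 so that \(q^{1/2}=3\) and \(q^{3/2}=27\) are exact), each term of (2) is stored as a triple \((\alpha ,\rho ,P_{\alpha ,\rho })\):

```python
# Sketch: computing Phi_{(P;mu)} numerically for the example (2).
q = 9.0
terms = [
    (3, (4, 0, 0, 0, 0, 2), -1.0),     # -x^3 Y0^4 Y5^2
    (0, (0, 4, 0, 0, 0, 0), 4.0),      #  4 Y1^4
    (0, (2, 1, 1, 0, 0, 0), -9.0),     # -9 Y0^2 Y1 Y2
    (0, (3, 0, 1, 0, 0, 0), 2.0),      #  2 Y0^3 Y2
    (1, (1, 0, 1, 0, 0, 0), q ** -4),  #  q^-4 x Y0 Y2
    (3, (0, 0, 1, 0, 0, 0), -q ** -4), # -q^-4 x^3 Y2
    (3, (1, 0, 0, 0, 0, 0), -1.0),     # -x^3 Y0
    (5, (0, 0, 0, 0, 0, 0), 1.0),      #  x^5
]

def phi(terms, mu, q):
    """Coefficients {|rho|: coefficient} of Phi_{(P;mu)}(C)."""
    nu = min(a + mu * sum(r) for a, r, _ in terms)
    out = {}
    for a, r, c in terms:
        if abs(a + mu * sum(r) - nu) < 1e-9:    # (alpha, |rho|) lies on L(P;mu)
            w = sum(k * rk for k, rk in enumerate(r))
            out[sum(r)] = out.get(sum(r), 0.0) + c * q ** (mu * w)
    return out

# Co-slope gamma_2 = 1/2: C^4 (4q^2 - 9q^(3/2) + 2q) + q^-3 C^2
print(phi(terms, 0.5, q))   # degree-4 coefficient: 324 - 243 + 18 = 99
```

For \(\mu =2\) the same function returns the coefficients of \(C^2-2C+1\), which is the polynomial \(\Phi _{(P;2)}\) used in the example of Sect. 2.4 below.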

2.2 A Rough Idea of the Method

Newton’s algorithm is recursive in the following sense: assume \(s(x)=cx^{\mu }+{\overline{s}}(x)\) is a solution of \(P=P_{0}\) with \({{\,\mathrm{ord}\,}}_x{\overline{s}}(x)>\mu \). Then, on the one hand (see Lemma 2):

$$\begin{aligned} \Phi _{(P;\mu )}(c) = 0, \end{aligned}$$
(3)

and on the other hand, \({\overline{s}}(x)\) is a solution of a new equation \(P_1\) derived from \(P_0\) and \(cx^{\mu }\) (see Corollary 1). The Newton Polygon is a graphical tool to describe the necessary condition (3) on c and \(\mu \): if \(\mu \) is the co-slope of a side of \(\mathcal {N}(P)\), then (3) is a polynomial equation in c; if \(\mu \) is the co-slope of a supporting line meeting \(\mathcal {N}(P)\) only at a vertex v, then (3) becomes \(\Psi _{(P;v)}(q^{\mu })=0\) (in this special case any coefficient c is valid, because then \(\Phi _{(P;\mu )}\equiv 0\)).

Iterating the above procedure will allow us (see Proposition 1) to prove that \(S(x)\in \mathcal {R}\) is a solution of P if and only if its support is countable (so that we can write \(S(x)=\sum _{i=0}^{\infty } a_ix^{\mu _i}\)) and these conditions hold: \(\mu _i\rightarrow \infty \), and if we denote \(S_j(x)=\sum _{i<j}a_{i}x^{\mu _i}\), \(P_j=P(S_j(x)+Y_0, \ldots , \sigma ^n(S_j(x))+Y_n)\), then for all \(j\in \mathbb {Z}_{\ge 0}\):

$$\begin{aligned} \Phi _{(P_j;\mu _{j})}(a_{j})=0. \end{aligned}$$
(4)

The geometric meaning of (4) is precisely (see Fig. 2) that the point \(L(P_j;\mu _j)\cap \{|\rho |=0\}\) is to the left of \(\mathcal {N}(P_{j+1})\cap \{|\rho |=0\}\), whereas \(\mu _i\rightarrow \infty \) implies that these points go to infinity. Newton’s idea is, instead of trying to compute a complete solution straightaway, to reduce the problem to computing each \(\mu _{j}\), \(a_{j}\) iteratively, using the structure of \(\mathcal {N}(P_j)\) and Eq. (4) each time (this is Procedure 1). The fact that all solutions of P can be found with this method is essentially Proposition 1.

2.3 Composition

For \(s_0,\ldots ,s_n\in \mathcal {R}\), the expression \(P(s_0,\ldots ,s_n)\) can be given a precise meaning under certain conditions. We consider on \(\mathcal {R}\) the topology induced by the distance \(d(f,g)=\exp (-{{\,\mathrm{ord}\,}}(f-g))\); this topology is complete: if \((f_n)\) is a Cauchy sequence, then given \(M>0\), there is \(N_{M}\) with \({{\,\mathrm{ord}\,}}(f_n-f_m)>M\) for any \(n,m\ge N_{M}\); hence, for any \(M>0\), the truncations of \(f_n\) and \(f_m\) up to order M coincide for \(n,m\ge N_M\). Thus, there exists a unique \(f\in \mathcal {R}\) (defined inductively) such that \({{\,\mathrm{ord}\,}}(f_n-f)>M\) for \(n\ge N_M\). This f is the (unique) limit of the Cauchy sequence.

If P is a polynomial, \(P(s_0,\ldots ,s_n)\) is well-defined because \(\mathbb {C}((x^{\mathbb {R}}))\) is a ring. Otherwise, we impose \({{\,\mathrm{ord}\,}}(s_i)>0\), for all i. Let \(\mu =\min _{0\le i\le n}\{{{\,\mathrm{ord}\,}}(s_i)\}\). For \(M\in \mathbb {N}\), consider the polynomial \(P_{\le M}=\sum _{|\rho |\le M} P_\rho (x)\,Y^{\rho }\). The sequence \(P_{\le M}(s_0,\ldots ,s_n)\), \(M\in \mathbb {N}\), is a Cauchy sequence because the order of \(P_\rho (x)s_0^{\rho _0}\cdots s_n^{\rho _n}\) is greater than or equal to \(\mu \,|\rho |\). Its limit is precisely \(P(s_0,\ldots ,s_n)\). Notice that if \(P\in \mathcal {R}_\Gamma [[Y]]\) and all \(s_i\in \mathcal {R}_\Gamma \), then \(P(s_0,\ldots ,s_n)\in \mathcal {R}_\Gamma \).
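The truncation argument can be illustrated in the single-variable case. The sketch below (our toy example, with \(P=\sum _k Y_0^k\) and \(s=x\)) shows that the coefficients of \(P_{\le M}(s)\) below a fixed order stop changing once M is large enough, which is exactly the Cauchy property used above:

```python
# Toy sketch of the truncation argument, in one variable. Since
# ord(s^k) >= k * ord(s), monomials with |rho| > M do not affect the
# coefficients below order M * ord(s).
def poly_mul(f, g, order):
    """Multiply two truncated series {exponent: coeff}, discarding x^order on."""
    h = {}
    for i, a in f.items():
        for j, b in g.items():
            if i + j < order:
                h[i + j] = h.get(i + j, 0) + a * b
    return h

def eval_truncated(s, M, order):
    """P_{<=M}(s) modulo x^order, for P = 1 + Y0 + Y0^2 + ..."""
    total, power = {}, {0: 1}          # power holds s^k, starting at s^0 = 1
    for _ in range(M + 1):
        for i, a in power.items():
            total[i] = total.get(i, 0) + a
        power = poly_mul(power, s, order)
    return total

s = {1: 1}                             # s = x, so ord(s) = 1
print(eval_truncated(s, 10, 6))        # 1/(1-x) mod x^6: all coefficients 1
assert eval_truncated(s, 10, 6) == eval_truncated(s, 50, 6)
```

The limit is the geometric series \(1/(1-x)\), as expected.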

Given \(s_0,\ldots ,s_n\) as above, we define the series

$$\begin{aligned} P(s_0+Y_0,\ldots ,s_n+Y_n):= \sum _{\rho \in \mathbb {N}^{n+1}} \frac{1}{\rho !} \frac{\partial ^{|\rho |}P}{\partial Y^{\rho }} (s_0,\ldots ,s_n)\, \, Y^{\rho }, \end{aligned}$$
(5)

where \(\rho !=\rho _0!\cdots \rho _n!\) and \(\frac{\partial ^{|\rho |}P}{\partial Y^{\rho }}= \frac{\partial ^{|\rho |}P}{\partial Y_0^{\rho _0}\,\partial Y_1^{\rho _1}\cdots \partial Y_{n}^{\rho _n} } \). For generalized power series \({\bar{s}}_0,\ldots ,{\bar{s}}_n\) with positive order it is straightforward to prove that the evaluation of the right-hand side of (5) at \({\bar{s}}_0,\ldots ,{\bar{s}}_n\) is \(P(s_0+{\bar{s}}_0,\ldots ,s_n+{\bar{s}}_n)\).

If \(y\in \mathbb {C}((x^{\mathbb {R}}))\) has \({{\,\mathrm{ord}\,}}(y)>\mu _{-1}(P)\), then \(P(y,\sigma (y),\ldots ,\sigma ^{n}(y))\) is well defined because \({{\,\mathrm{ord}\,}}(\sigma ^{k}(y))={{\,\mathrm{ord}\,}}(y)\). We also remark that if \(y\in \mathcal {R}_\Gamma \), then \(\sigma ^{k}(y)\in \mathcal {R}_\Gamma \). The following notations will be used in the rest of the paper:

$$\begin{aligned} P[y]&=P(y,\sigma (y),\ldots ,\sigma ^{n}(y)), \nonumber \\ P[y+Y]&=P(y+Y_0,\sigma (y)+Y_1,\ldots ,\sigma ^{n}(y)+Y_n). \end{aligned}$$
(6)

We are also going to make use of the little-o notation: \(o(x^{\mu })\) will mean a generalized formal power series with order greater than \(\mu \) or the zero series if \(\mu =\infty \). The following is essentially what motivates the Newton polygon construction:

Lemma 2

Let \(y=c\,x^{\mu }+o(x^{\mu })\in \mathbb {C}((x^{\mathbb {R}}))\), and \(\mu >\mu _{-1}(P)\). Let \((\nu ,0)\) be the intersection point of \(L(P;\mu )\) with the OX-axis. Then

$$\begin{aligned} P[y]=\Phi _{(P;\mu )}(c)\,x^{\nu }+o(x^{\nu }). \end{aligned}$$

In particular, if y is a solution of the q-difference Eq. (1) then

$$\begin{aligned} \Phi _{(P;\mu )}(c)=0. \end{aligned}$$

Proof

If P is a polynomial, let M be its total degree; otherwise, \(\mu >\mu _{-1}(P)=0\) and we let M be any integer such that \(M\mu > \nu \), say \(M=\lfloor \nu /\mu \rfloor + 1\), where \(\lfloor \cdot \rfloor \) denotes the integer part. The truncation of P[y] up to order \(\nu \) is equal to that of \(P_{\le M}[y]\), and also \(\Phi _{(P;\mu )}(C)=\Phi _{(P_{\le M};\mu )}(C)\).

Write \(\alpha _\rho ={{\,\mathrm{ord}\,}}\,P_\rho \) for any multiindex \(\rho \). Recall that \(L(P;\mu )=\{(\alpha ,b)\mid \alpha +\mu \,b=\nu \}\) is a supporting line of \(\mathcal {C}(P)\): this implies that for any \(P_{\rho }\ne 0\), the point \((\alpha _\rho ,|\rho |)\) belongs to the closed right half-plane defined by \(L(P;\mu )\), from which it follows that \(\nu \) is the minimum of \(\alpha _\rho + \mu \,|\rho |\), for \(\rho \in \mathbb {N}^{n+1}\). The following chain of equalities proves the result:

$$\begin{aligned}&P_{\le M}[c\,x^{\mu }+o(x^{\mu })]\\&\quad =\sum _{|\rho |\le M} P_{\rho }(x)\,\, (c\,x^{\mu }+o(x^{\mu }))^{\rho _0} (q^\mu c\,x^{\mu }+o(x^{\mu }))^{\rho _1}\cdots (q^{n\mu } c\,x^{\mu }+o(x^{\mu }))^{\rho _n}\\&\quad = \sum _{|\rho |\le M} \left\{ P_{\alpha _\rho ,\rho } \, x^{\alpha _\rho }+ o(x^{\alpha _\rho })\right\} \left\{ c^{|\rho |} \, q^{\mu w(\rho )} \, x^{\mu |\rho |}+ o( x^{\mu |\rho |})\right\} \\&\quad =\sum _{|\rho |\le M} \left\{ P_{\alpha _\rho ,\rho } \, c^{|\rho |} \, q^{\mu w(\rho )} \, x^{\alpha _\rho +\mu |\rho |}+ o(x^{\alpha _\rho +\mu |\rho |})\right\} \\&\quad =\left\{ \sum _{\alpha _\rho +\mu \,|\rho |=\nu } P_{\alpha _\rho ,\rho }\,\, c^{|\rho |}\,q^{\mu \, w(\rho )} \right\} x^{\nu }+o(x^{\nu })=\Phi _{(P;\mu )}(c)\,x^{\nu }+o(x^{\nu }). \end{aligned}$$

where the last equality holds because, again, \(L(P;\mu )=\left\{ \alpha + \mu \,b = \nu \right\} \). \(\square \)

Let \(y\in \mathbb {C}((x^{\mathbb {R}}))\) be a generalized power series and S be its support. If S is finite, denote by \(\omega (y)\) the cardinality of S; otherwise \(\omega (y)=\infty \). Consider the sequence \(\mu _i\in S\) defined inductively as follows: \(\mu _0\) is the minimum of S and, for \(0\le i<\omega (y)\), \(\mu _{i+1}\) is the minimum of \(S\setminus \{\mu _0,\mu _1,\ldots ,\mu _{i}\}\). Let \(c_i\in \mathbb {C}\) be the coefficient of \(x^{\mu _i}\) in y.

Definition 1

We shall call the generalized power series \(\sum _{0\le i<\omega (y)}c_i\,x^{\mu _i}\) the first \(\omega \) terms of y.

Notice that if the support of y is finite or has no accumulation points then y coincides with its first \(\omega \) terms.

Corollary 1

Let y be a solution of the q-difference Eq. (1) and let \(\sum _{i}c_i\,x^{\mu _i}\) be the first \(\omega \) terms of y. Let \(P_i\) be the series defined as:

$$\begin{aligned} P_0:=P,\quad \text {and}\quad P_{i+1}:=P_i[c_i\,x^{\mu _i}+Y], \quad 0\le i<\omega (y). \end{aligned}$$

Then, for all \(0\le i<\omega (y)\), one has

$$\begin{aligned} \Phi _{(P_{i};\mu _i)}(c_i)=0,\quad \text {and}\quad \mu _{i-1}<\mu _{i}, \end{aligned}$$

where we denote \(\mu _{-1}=\mu _{-1}(P)\).

Proof

Let \({\bar{y}}_k=y-\sum _{i=0}^{k-1}c_i\,x^{\mu _i}\); then \(P_k[ {\bar{y}}_k ]=0\) and the first term of \({\bar{y}}_k\) is \(c_k\,x^{\mu _k}\). \(\square \)

By way of example, consider, for P given by (2), the transformation with \(\mu =2\) and \(c=1\), which gives \(P_1=P[x^2+Y]\) having 33 terms. The Newton polygon of \(P_1\) (and its comparison to that of P) is given in Fig. 2. Observe how (this will be proved later as Lemma 3) the Newton Polygons \(\mathcal {N}(P)\) and \(\mathcal {N}(P_1)\) coincide at and above the vertex \(v=(1,2)\), which is the topmost vertex of \(L(P;2)\cap \mathcal {N}(P)\). Underneath that vertex v, the point \(L(P;2)\cap \{|\rho |=0\} = (5,0)\) is to the left of \(\mathcal {N}(P_1)\cap \{|\rho |=0\}=(8,0)\).

At the same time, under v, the polygon \(\mathcal {N}(P_1)\) has only sides with co-slope greater than or equal to 2 (in the example, just one with co-slope 7/2). As \(\mu _1=2\), only co-slopes \(\mu _j>2\) are chosen afterwards (see Sect. 5 and Fig. 3 for the complete example).
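These statements about \(\mathcal {N}(P_1)\) can be checked by direct computation: since \(\sigma ^k(x^2)=q^{2k}x^2\), each term of (2) contributes \(P_{\alpha ,\rho }\,q^{2w(\rho )}x^{\alpha +2|\rho |}\) to \(P[x^2]\), the constant term of \(P_1\). A numerical sketch (ours, with the arbitrary choice \(q=2\)) confirms that the coefficient of \(x^{5}\) cancels, because \(\Phi _{(P;2)}(1)=0\), and that the order of the constant term jumps to 8:

```python
# Sketch: the constant term of P_1 = P[x^2 + Y] for P in (2), i.e. P[x^2].
# Each term (alpha, rho, coeff) contributes coeff * q^(2 w(rho)) to the
# coefficient of x^(alpha + 2|rho|), since sigma^k(x^2) = q^(2k) x^2.
def substitute_x2(q):
    terms = [
        (3, (4, 0, 0, 0, 0, 2), -1.0),     # -x^3 Y0^4 Y5^2
        (0, (0, 4, 0, 0, 0, 0), 4.0),      #  4 Y1^4
        (0, (2, 1, 1, 0, 0, 0), -9.0),     # -9 Y0^2 Y1 Y2
        (0, (3, 0, 1, 0, 0, 0), 2.0),      #  2 Y0^3 Y2
        (1, (1, 0, 1, 0, 0, 0), q ** -4),  #  q^-4 x Y0 Y2
        (3, (0, 0, 1, 0, 0, 0), -q ** -4), # -q^-4 x^3 Y2
        (3, (1, 0, 0, 0, 0, 0), -1.0),     # -x^3 Y0
        (5, (0, 0, 0, 0, 0, 0), 1.0),      #  x^5
    ]
    out = {}
    for a, r, c in terms:
        w = sum(k * rk for k, rk in enumerate(r))
        e = a + 2 * sum(r)                 # total exponent of x
        out[e] = out.get(e, 0.0) + c * q ** (2 * w)
    return out

res = substitute_x2(2.0)
print(abs(res[5]) < 1e-9)                               # True: x^5 cancels
print(min(e for e, c in res.items() if abs(c) > 1e-9))  # 8
```

This matches the claim that \(\mathcal {N}(P_1)\cap \{|\rho |=0\}=(8,0)\) lies strictly to the right of \(L(P;2)\cap \{|\rho |=0\}=(5,0)\).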

Fig. 2 Cloud and Newton polygon \(\mathcal {N}(P_1)\) of \(P_1=P[x^2+Y]\), where P is defined in (2). In dashed lines, \(\mathcal {N}(P)\). Observe how both polygons coincide at and above (1, 2), the topmost vertex of \(L(P;2)\cap \mathcal {N}(P)\)

Let \(P\in \mathcal {R}_\Gamma [[Y]]\) and let \(\sum _{i=0}^{\infty }c_i\,x^{\mu _i}\) be a series with \(\mu _{-1}(P)<\mu _i< \mu _{i+1}\) for all \(0\le i<\infty \) (we do not impose \(c_i\ne 0\), but the sequence \((\mu _i)_{i\in \mathbb {N}}\) must be strictly increasing). Consider the series \(P_0:=P\) and \(P_{i+1}:=P_i[c_ix^{\mu _i}+Y]\).

Definition 2

We say that \(\sum _{i=0}^{\infty }c_i\,x^{\mu _i}\) satisfies the necessary initial conditions for P, in short \({{\,\mathrm{NIC}\,}}(P)\), if \(\Phi _{(P_i;\mu _i)}(c_i)=0\), for all \(i\ge 0\).

The above Corollary states that the first \(\omega \) terms of a solution of \(P[y]=0\) satisfy \({{\,\mathrm{NIC}\,}}(P)\). In this section and the next one we shall prove (in Proposition 3) the converse statement for \(P\in \mathcal {R}_{\Gamma }[[Y]]\): if \(\sum _{i=0}^{\infty }c_ix^{\mu _i}\) satisfies \({{\,\mathrm{NIC}\,}}(P)\), then \(\lim _{i\rightarrow \infty }\mu _i=\infty \) and \(\sum _{i=0}^{\infty }c_ix^{\mu _i}\) is an actual solution of the q-difference equation \(P[y]=0\). This implies in particular that solutions of \(P[y]=0\) coincide with their first \(\omega \) terms.

A method for computing all the series satisfying \({{\,\mathrm{NIC}\,}}(P)\) with \(c_i\ne 0\) for all i is the following:

Procedure 1

(Computation of a power series satisfying \({{\,\mathrm{NIC}\,}}(P)\)) Set \(P_0:=P\) and \(\mu _{-1}:=\mu _{-1}(P)\).

For \(i=0,1,2,\ldots \) do either (a.1), or (a.2) followed by (b), where:

(a.1) If \(y=0\) is a solution of \(P_i[y]=0\), then return \(\sum _{k=0}^{i-1}c_kx^{\mu _k}\).

(a.2) Choose \(\mu _i>\mu _{i-1}\) and \(0\ne c_i\in \mathbb {C}\) satisfying \(\Phi _{(P_i;\mu _i)}(c_i)=0\).

If neither (a.1) nor (a.2) can be performed, then return fail.

(b) Set \(P_{i+1}(Y):=P_i[c_i\,x^{\mu _i}+Y]\).

If fail is returned at step k of the above Procedure, this means that there is no solution of \(P[y]=0\) having \(\sum _{i=0}^{k-1}c_ix^{\mu _i}\) as its first k terms. To prove this, assume that z is a solution having \(\sum _{i=0}^{k-1}c_ix^{\mu _i}\) as its first k terms. Either \(z=\sum _{i=0}^{k-1}c_ix^{\mu _i}\), in which case \(y=0\) would be a solution of \(P_k[y]=0\) and (a.1) would have been performed, or \(z-\sum _{i=0}^{k-1}c_ix^{\mu _i}\) would have a first term of the form \(c_kx^{\mu _k}\), so that (a.2) could have been performed.

In order to carry out (a.2) in the above Procedure, one has to deal with the following formula with quantifiers

$$\begin{aligned} \exists \mu >\mu ',\, \exists c\in \mathbb {C}, c\ne 0, \quad \Phi _{(P;\mu )}(c)=0. \end{aligned}$$
(7)

The Newton Polygon provides a way to eliminate the quantifiers. Fix \(\mu '>\mu _{-1}(P)\); by Lemma 1, \(\mathcal {N}(P)\) has only a finite number of sides \(L_1,L_2,\ldots ,L_t\) with co-slopes greater than \(\mu '\). Let \(\gamma _1<\gamma _2<\cdots <\gamma _t\) be their respective co-slopes and denote by \(v_{i-1}\) and \(v_i\) the endpoints of \(L_i\). Take \(\mu >\mu '\). Either \(\mu =\gamma _j\) for some \(1\le j\le t\), or \(\gamma _j<\mu <\gamma _{j+1}\) for some \(0\le j\le t\) (writing \(\gamma _0=\mu '\) and \(\gamma _{t+1}=\infty \)). If \(\mu =\gamma _j\), then \(L(P;\mu )\cap \mathcal {N}(P)=L_j\) and \(\Phi _{(P;\mu )}(C)\) depends only on the coefficients \(P_{\alpha ,\rho }\) of P with \((\alpha ,|\rho |)\in L_j\). Otherwise, \(\gamma _j<\mu <\gamma _{j+1}\) for some j and \(L(P;\mu )\cap \mathcal {N}(P)\) is just the vertex \(v_j=(a,b)\), which implies that

$$\begin{aligned} \Phi _{(P;\mu )}(C)= C^{b}\cdot \Psi _{(P;v_j)}(q^{\mu }). \end{aligned}$$

From this equality it follows that, in order for \(\Phi _{(P;\mu )}(c)\) to be 0 for some \(c\ne 0\), the co-slope \(\mu \) must satisfy \(\Psi _{(P;v_j)}(q^{\mu })=0\). In other words: there exist \(c\ne 0\) and \(\mu \) with \(\gamma _j<\mu <\gamma _{j+1}\) such that \(\Phi _{(P;\mu )}(c)=0\) if and only if there exists \(\mu \) satisfying both \(\gamma _j<\mu <\gamma _{j+1}\) and \(\Psi _{(P;v_j)}(q^{\mu })=0\). This proves that Eq. (7) is equivalent to the quantifier-free formula obtained by the disjunction of the following formulæ:

$$\begin{aligned} \Phi _{(P;\gamma _j)}(c)&=0, \quad&1\le j\le t, \end{aligned}$$
(8)
$$\begin{aligned} \Psi _{(P;v_j)}(T)&=0,\,\mu =\log T/\log q,\,\gamma _j<\mu <\gamma _{j+1}, \quad&0\le j\le t. \end{aligned}$$
(9)
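The resulting case analysis is easy to mechanize. The sketch below (our naming; it takes as input the vertices of \(\mathcal {N}(P)\) bounding the sides of co-slope greater than \(\mu '\), listed from top to bottom) enumerates the branches (8) and (9) for the example (2) with \(\mu '=0\):

```python
# Sketch of the quantifier elimination behind (7): one Phi-root condition (8)
# per side of co-slope > mu', and one Psi-window condition (9) per vertex
# lying between two consecutive co-slopes.
def branches(vertices, mu_prime):
    def co_slope(a, b):
        return (b[0] - a[0]) / (a[1] - b[1])
    gammas = [co_slope(a, b) for a, b in zip(vertices, vertices[1:])]
    out = [("side", g) for g in gammas]                  # conditions (8)
    windows = [mu_prime] + gammas + [float("inf")]
    out += [("vertex", v, (windows[j], windows[j + 1]))  # conditions (9)
            for j, v in enumerate(vertices)]
    return out

# Vertices of N(P) for the example (2) bounding the sides of co-slope > 0:
for branch in branches([(0, 4), (1, 2), (5, 0)], 0):
    print(branch)
```

Each "side" entry stands for the polynomial condition \(\Phi _{(P;\gamma _j)}(c)=0\), and each "vertex" entry for the condition \(\Psi _{(P;v_j)}(q^{\mu })=0\) restricted to the printed window of co-slopes.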

2.4 The Pivot Point

We prove in this subsection that if \(Q_{0}\) is the topmost vertex of \(L(P;\mu _0)\cap \mathcal {N}(P)\) and \(P_1=P[c_0\,x^{\mu _0}+Y]\) is the first substitution, then \(Q_{0}\) is also the topmost vertex of \(L(P_1;\mu _0)\cap \mathcal {N}(P_1)\), as exemplified in Fig. 2. This allows one to give a descent argument guaranteeing that there is an index \(j_0\) such that the point \(Q_j\) (the topmost in \(L(P_{j};\mu _{j})\cap \mathcal {N}(P_{j})\)) is equal to \(Q_{j_0}\) for all \(j\ge j_0\). This fixed vertex will be called the pivot point, as for \(j>j_0\), each supporting line \(L(P_j;\mu _{j})\) “hinges” around it when the substitution \(P_{j}\rightarrow P_{j+1}\) is carried out. The existence of this pivot point (and of what we call relative pivot points in Sect. 2.5) guarantees the finiteness properties of Theorems 1 and 2.

In fact, we prove later that if s(x) is a solution of P, then either the pivot point has ordinate equal to 1 or we can derive a new equation from P which also has s(x) as a solution and whose pivot point with respect to s(x) has ordinate equal to 1. This simplifies our arguments considerably because when this happens, (4) is linear in \(a_j\).

For \(P\in \mathcal {R}_\Gamma [[Y]]\) and \(\mu >\mu _{-1}(P)\), we shall denote by \(Q(P;\mu )\) the point with highest ordinate in \(L(P;\mu )\cap \mathcal {N}(P)\). For \({\bar{P}}=P[c\,x^{\mu }+Y]\) [as in Eq. (6)], the following Lemma describes the Newton Polygon of \({\bar{P}}\):
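Computationally, \(Q(P;\mu )\) can be read off the cloud directly, as the following sketch (ours) does for the example (2); note that \(Q(P;2)=(1,2)\), in agreement with Fig. 2:

```python
# Sketch: Q(P;mu) is the topmost point of L(P;mu) ∩ N(P), i.e. the point of
# highest ordinate among those minimizing alpha + mu * |rho| over the cloud.
def Q(cloud, mu):
    nu = min(a + mu * b for a, b in cloud)
    on_line = [p for p in cloud if abs(p[0] + mu * p[1] - nu) < 1e-9]
    return max(on_line, key=lambda p: p[1])

cloud = [(3, 6), (0, 4), (1, 2), (3, 1), (5, 0)]   # cloud of the example (2)
print(Q(cloud, 2))     # (1, 2), the topmost point of L(P;2) on N(P)
print(Q(cloud, 0.5))   # (0, 4)
```

For \(\mu =2\) the supporting line also contains (3, 1) and (5, 0), but the topmost point is (1, 2), as required by the definition.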

Lemma 3

Let h be the ordinate of \(Q(P;\mu )\) and consider the half-planes \(h^{+}=\{(a,b)\in \mathbb {R}^2\mid b\ge h\}\), \(h^{-}=\{(a,b)\in \mathbb {R}^2\mid b\le h\}\). If \(L(P;\mu )^{+}\) is the closed right half plane defined by \(L(P;\mu )\) and \((\nu ,0)\) is the intersection of \(L(P;\mu )\) with the OX-axis, then

(1) \(\mathcal {N}({\bar{P}})\cap h^{+}=\mathcal {N}(P)\cap h^{+}\); in particular \(Q(P;\mu )\in \mathcal {N}({\bar{P}})\). Moreover, for any \(\alpha \) and \(\rho \) with \((\alpha ,|\rho |)=Q(P;\mu )\), the coefficients \(P_{\alpha ,\rho }\) and \({\bar{P}}_{\alpha ,\rho }\) are equal.

(2) \(\mathcal {N}({\bar{P}})\cap h^{-}\subseteq L(P;\mu )^{+}\cap h^{-}\).

(3) The point \((\nu ,0)\) belongs to \(\mathcal {N}({\bar{P}})\) if and only if \(\Phi _{(P;\mu )}(c)\ne 0\).

Proof

Write \(M_\rho (Y)=P_\rho (x)Y^{\rho }\) and \(\alpha _\rho ={{\,\mathrm{ord}\,}}P_\rho (x)\). It is straightforward to show that \(M_\rho [cx^{\mu }+Y]= M_\rho (Y)+V(Y)\) for some V(Y) whose cloud of points is contained in the set \(A_\rho =\{(a,b)\mid b<|\rho |\} \cap L(M_\rho ;\mu )^{+}\). This proves part (2). If \(Q=(\alpha ,|\rho |)\) belongs to \(\mathcal {N}(P)\cap h^{+}\), then there are no points \(Q'=(\alpha ',|\rho '|)\in \mathcal {N}(P)\), except Q itself, such that \(Q\in A_{\rho '}\). This proves part (1). Part (3) is a consequence of Lemma 2. \(\square \)

Corollary 2

Let \({\bar{\mu }}>\mu \). Then either \(Q(P;\mu )=Q({\bar{P}};{\bar{\mu }})\) or the ordinate of \(Q({\bar{P}};{\bar{\mu }})\) is less than the ordinate of \(Q(P;\mu )\). If \(\Phi _{(P;\mu )}(c)\ne 0\), then the ordinate of \(Q({\bar{P}};{\bar{\mu }})\) is zero.

Proof

The previous Lemma implies that \(Q(P;\mu )\) is a vertex of \(\mathcal {N}({\bar{P}})\) and \(L(P;\mu )=L({\bar{P}};\mu )\). Hence \(Q(P;\mu )=Q({\bar{P}};\mu )\). Since \({\bar{\mu }}>\mu \), \(Q({\bar{P}};{\bar{\mu }})\) is a vertex with ordinate less than or equal to the ordinate of \(Q({\bar{P}};\mu )=Q(P;\mu )\). For the second part, assume that \(\Phi _{(P;\mu )}(c)\ne 0\). By the same Lemma, the point \((\nu ,0)\in \mathcal {N}({\bar{P}})\), so that the segment whose endpoints are \((\nu ,0)\) and \(Q({\bar{P}};\mu )\) is the only side of \(\mathcal {N}({\bar{P}})\) with co-slope greater than or equal to \(\mu \), from which it follows that \(Q({\bar{P}};{\bar{\mu }})=(\nu ,0)\). \(\square \)

Let \(P\in \mathcal {R}_\Gamma [[Y]]\) and take a series \(\psi (x)=\sum _{i=0}^{\infty }c_i\,x^{\mu _i}\) with \(\mu _{-1}(P)<\mu _i< \mu _{i+1}\) for all \(0\le i<\infty \). (Notice that we do not impose that \(c_i\ne 0\), but the sequence \((\mu _i)_{i\in \mathbb {N}}\) must be strictly increasing). Writing \(P_0:=P\) and \(P_{i+1}:=P_i[c_ix^{\mu _i}+Y]\), let \(Q_i=Q(P_i;\mu _i)\). By the previous Corollary, the ordinate of \(Q_{i+1}\) is less than or equal to the ordinate of \(Q_{i}\). Since these are natural numbers, there exists N such that for \(i\ge N\), the ordinate of \(Q_i\) is equal to the ordinate of \(Q_N\) (it stabilizes). By the same Corollary, we know that actually \(Q_N=Q_i\), for all \(i\ge N\). This leads to the following

Definition 3

The pivot point of P with respect to \(\psi (x)\) is the point Q at which the sequence \(Q_i\) stabilizes and is denoted by \(Q(P;\psi (x))\). We say that it is reached at step N if \(Q_N=Q(P;\psi (x))\).

Let \(Q_N=(\alpha ,h)\) be the pivot point just defined. From part (1) of Lemma 3 it follows that \((P_N)_{\alpha ,\rho }=(P_i)_{\alpha ,\rho }\) for all \(i\ge N\) and for all \(\rho \) with \(|\rho |=h\). In particular, the indicial polynomials \(\Psi _{(P_i;Q_N)}(T)\) are the same for all \(i\ge N\). We shall say that the monomial \(Y^{\rho }\) (resp. the variable \(Y_j\)) appears effectively in the pivot point if \((P_N)_{\alpha ,\rho }\ne 0\) (resp. if \((P_N)_{\alpha ,\rho }\ne 0\) for some \(\rho \) with \(\rho _j>0\)).

Proposition 1

Let P and \(\psi (x)=\sum _{i=0}^{\infty }c_i\,x^{\mu _i}\) be as above. The following statements are equivalent:

(1) The ordinate of the pivot point of P with respect to \(\sum _{i=0}^{\infty }c_i\,x^{\mu _i}\) is greater than or equal to 1.

(2) The series \(\sum _{i=0}^{\infty }c_i\,x^{\mu _i}\) satisfies \({{\,\mathrm{NIC}\,}}(P)\).

In case \(\lim \mu _i=\infty \), these statements are equivalent to

(3) The series \(\psi (x)\) is a solution of \(P[y]=0\).

Proof

Assume statement (1). The ordinate of \(Q_{i+1}\) is non-zero and by the above Corollary, \(\Phi _{(P_i;\mu _i)}(c_i)=0\), which proves (2). Assume now that statement (1) is false, so that the ordinate of the pivot point is zero. This means that there exists some N such that \(Q_N\) has ordinate zero. By definition of \(Q_N\) we have that \(L(P_N;\mu _N)\cap \mathcal {N}(P_N)\) is just the point \(Q_N=(\alpha ,0)\). Then \(\Phi _{(P_N;\mu _N)}(C)\) is a non-zero constant (namely the coefficient of \(x^{\alpha }\) in \(P_N\)), therefore it has no roots, in contradiction with \(\Phi _{(P_N;\mu _N)}(c_N)=0\). This proves the equivalence between (1) and (2). By Corollary 1, (3) implies (2).

Assume (1) holds and that \(\lim \mu _i=\infty \). Write \(\psi _k(x)=\sum _{i=0}^{k-1}c_ix^{\mu _i}\) and notice that \(P_i=P[\psi _i(x)+Y]\), in particular, \(P[\psi _i(x)]=P_i[0]=(P_i)_{\underline{0}}\). Let \(Q=(\alpha ,h)\) be the pivot point of P with respect to \(\psi (x)\). Since \(L(P_i;\mu _i)\) contains the point Q, \({{\,\mathrm{ord}\,}}(P_i)_{\underline{0}} >\alpha +h\mu _i\) and since \(h\ge 1\), the sequence \( {{\,\mathrm{ord}\,}}P[\psi _i(x)]\) tends to infinity and we are done. \(\square \)

Corollary 3

Let \(\sum _{i=0}^{\infty }c_i\,x^{\mu _i}\) be the first \(\omega \) terms of a solution of \(P[y]=0\). Then the pivot point of P with respect to \(\sum _{i=0}^{\infty }c_i\,x^{\mu _i}\) has ordinate greater than or equal to 1.

2.5 Relative Pivot Points

The above construction of the pivot point can be made relative to any of the variables \(Y_j\), \(0\le j\le n\), and more generally, relative to any monomial \(Y^{r}\), with \(r=(r_0,r_1,\cdots ,r_n)\in \mathbb {N}^{n+1}\), as follows:

Fix \(r\in \mathbb {N}^{n+1}\). The cloud of points of P relative to \(Y^{r}\) is defined as the set \(\mathcal {C}_r(P)= \{(\alpha ,|\rho |)\mid P_{\alpha ,\rho }\ne 0 \text { and } r\preceq \rho \}\), where \(r\preceq \rho \) means that \(r_i\le \rho _i\) for all \(0\le i\le n\). It is obvious that \(\mathcal {C}_r(P)\subseteq \mathcal {C}(P)\).

Assume that \(\mathcal {C}_r(P)\) is nonempty; then we may define the line \(L_r(P;\mu )\) as the leftmost line with co-slope \(\mu \) having nonempty intersection with \(\mathcal {C}_r(P)\). The point \(Q_r(P;\mu )\) will be the one with greatest ordinate in \(L_r(P;\mu )\cap \mathcal {C}_r(P)\).

Let \(H=\frac{\partial ^{|r|} P}{\partial Y^{r}}\); the cloud \(\mathcal {C}_r(P)\) is nonempty if and only if H is not the zero series. In this case, consider the translation map \(\tau (a,b)=(a,b-|r|)\). It is straightforward to prove that \(\mathcal {C}_r(P)=\tau ^{-1}(\mathcal {C}(H))\). Hence \(L_r(P;\mu )=\tau ^{-1}(L(H;\mu ))\) and \(Q_r(P;\mu )=\tau ^{-1}(Q(H;\mu ))\).

Let \(\psi (x)=\sum _{i=0}^{\infty }c_i\,x^{\mu _i}\), with \(\mu _0>\mu _{-1}(P)\). Denote \(H_0=H\) and \(H_{i+1}=H_i[c_ix^{\mu _i}+Y]\). By the chain rule,

$$\begin{aligned} \frac{\partial ^{|r|} P_{i}}{\partial Y^{r}}=H_{i},\quad i\ge 0. \end{aligned}$$
(10)

The sequence of points \(Q_r(P_i;\mu _i)=\tau ^{-1}(Q(H_i;\mu _i))\), \(i\ge 0\), stabilizes at some point, denoted \(Q_r(P;\psi (x))\), which we call the pivot point of P with respect to \(\psi (x)\) relative to \(Y^{r}\). Therefore

$$\begin{aligned} Q(H;\psi (x))=\tau (Q_r(P;\psi (x))) \end{aligned}$$
(11)

Remark 1

Since \(H\ne 0\), we have \(H_i\ne 0\) for \(i\ge 0\), so that \(\mathcal {C}_r(P_i)\) is nonempty for \(i\ge 0\). This proves that \(Q_r(P_i;\mu _i)\) and \(Q_r(P;\psi (x))\) are well-defined provided the monomial \(Y^{r}\) appears effectively in P.

From now on, we shall denote by \(e_j\) the vector \((0,\dots , 0, 1, 0, \dots , 0)\) where the 1 appears at position \(j+1\), for \(j=0,\dots , n\). Thus, \(e_j=(\delta _{ij})_{0\le i\le n}\in \mathbb {N}^{n+1}\), where \(\delta _{ij}\) is the Kronecker symbol.

Proposition 2

Let \(Q=(a,h)\) be the pivot point of P with respect to \(\psi (x)\). Assume that the monomial \(Y^{r'}\) appears effectively in Q. Let \(r\in \mathbb {N}^{n+1}\), with \(r\preceq r'\), and \(H=\frac{\partial ^{|r|} P}{\partial Y^{r}}\). Then the pivot point of H with respect to \(\psi (x)\) is \((a,h-|r|)\). In particular, if \(r=r'-e_i\), for some i such that \(r'_i\ge 1\), then the ordinate of the pivot point \(Q(H;\psi (x))\) is 1. However, for \(r=r'\), one has \(Q(H;\psi (x))=(a,0)\) and therefore \(\psi (x)\) is not a solution of \(H[y]=0\).

Proof

Assume the pivot point Q is reached at step N, thus \(Q\in \mathcal {C}_{r'}(P_i)\subseteq \mathcal {C}_{r}(P_i)\) for all \(i\ge N\). From \(\mathcal {C}_r(P_i)\subseteq \mathcal {C}(P_i)\) and the fact that \(Q=Q(P_i;\mu _i)\) for all \(i>N\), one infers \(Q=Q_r(P_i;\mu _i)=Q_{r'}(P_i;\mu _i)\) for all \(i>N\). This means that Q is the pivot point of P with respect to \(\psi (x)\) relative to \(Y^{r}\) and also relative to \(Y^{r'}\). As we have seen before, \(\tau _r(Q)=(a,h-|r|)\) is the pivot point of H with respect to \(\psi (x)\). The third statement is a consequence of Proposition 1. \(\square \)

Corollary 4

Let \(\psi (x)=\sum _{i=0}^\infty c_i\,x^{\mu _i}\) be a solution of \(P[y]=0\) with \(\lim \mu _i=\infty \). If the pivot point \(Q(P;\psi (x))\) has ordinate greater than 1, then there exists a non-trivial derivative \(H=\frac{\partial ^{|r|} P}{\partial Y^{r}}\) of P such that \(\psi (x)\) is a solution of \(H[y]=0\).

Proof

Let \(Y^{r'}\) be a monomial that appears effectively in the pivot point \(Q=Q(P;\psi (x))\). Since Q has ordinate greater than 1, \(r'\) can be chosen with \(|r'|\ge 2\). Let r be such that \(r\preceq r'\) and \(1\le |r|<|r'|\). By the Proposition, the pivot point of H with respect to \(\psi (x)\) has ordinate greater than or equal to 1. By Proposition 1, \(\psi (x)\) is a solution of \(H[y]=0\). \(\square \)

Lemma 4

Let \(Q(P;\psi (x))=(a,b)\) and \(Q_r(P;\psi (x))=(a',b')\) be, respectively, the pivot point of P with respect to \(\psi (x)\) and the pivot point relative to \(Y^{r}\). If the sequence \(\mu _i\) of exponents of \(\psi (x)\) tends to infinity, then the following two statements hold:

  • The ordinate of \(Q_r(P;\psi (x))\) is at least b: \(b'\ge b\), and

  • If \(b'=b\) (both points are at the same height), then \(a' \ge a\).

Proof

Assume that both pivot points have been reached at step N. For any \(i\ge N\), the point \((a',b')\) belongs to the closed right half plane \(L(P_{i};\mu _i)^{+}\), because \(\mathcal {C}_r(P_i)\subseteq \mathcal {C}(P_i)\). Since \((a,b)\in {}L(P_i;\mu _i){}\) for all \(i\ge N\) and \(\lim \mu _i=\infty \), the intersection of all the half planes \(L(P_i;\mu _i)^{+}\), for \(i\ge N\), is the region R formed by the points in \(L(P_N;\mu _N)^{+}\) with ordinate greater than or equal to b. The result follows because \((a',b')\in R\) and \((a,b)\) is the leftmost point of R with ordinate equal to b. \(\square \)

3 Finiteness Properties

Throughout this section, we assume that \(\Gamma \) is a finitely generated semigroup of \(\mathbb {R}_{\ge 0}\) and that P is a nonzero element of \(\mathcal {R}_\Gamma [[Y]]\). We also assume that \(q\ne 1\): the case \(q=1\) reduces to the case \(n=0\) by considering \(P(Y_0,Y_0,\ldots ,Y_0)\). This section is devoted to proving the following results:

Theorem 1

If \(y\in \mathbb {C}((x^{\mathbb {R}}))\) is a solution of Eq. (1), then it is a grid-based formal power series.

Proposition 3

If \(\psi (x)=\sum _{i=0}^{\infty } c_ix^{\mu _i}\) satisfies \({{\,\mathrm{NIC}\,}}(P)\), then \(\psi (x)\) is a solution of \(P[y]=0\).

Definition 4

Let \(y\in \mathbb {C}((x^{\mathbb {R}}))\) and \(P\in \mathcal {R}_{\Gamma }[[Y]]\). We say that y is finitely determined by P if there exist positive integers k and h such that, if \(y_k\) denotes the first k terms of y, then y is the only element \(z\in \mathbb {C}((x^{\mathbb {R}}))\) satisfying the following property: “\(z_k=y_k\) and, for any \(Q=\frac{\partial ^{|r|} P}{\partial Y^{r}}\) with \(|r|\le h\), \(Q[y]=0\) if and only if \(Q[z]=0\).”

Theorem 2

If \(\left| q \right| \ne 1\), then any solution y of Eq. (1) is finitely determined by P.

The hypothesis \(|q|\ne 1\) is necessary: let \(P=Y_0-Y_1\) and \(q=\sqrt{-1}\). Any series \(\sum _{i=0}^\infty c_{4i}\,x^{4i}\) (for arbitrary constants \(c_{4i}\)) is a solution of \(P[y]=0\). Since the equations \(\partial P/\partial Y_0 [y]=0\) and \(\partial P/\partial Y_1[y]=0\) have no solutions, and the higher order derivatives of P are zero, none of these solutions is finitely determined by P. If \(|q|=1\), \(q^{\alpha }=1\) for some irrational \(\alpha >0\), and \(q\ne 1\), then \(\sum _{i=0}^{\infty } a_ix^{i\alpha }\) is also a solution of \(P[y]=0\) for any sequence \(a_i\), and it is not finitely determined either.
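The cancellation in this counterexample can be checked termwise. The sketch below (our own illustration, with a hypothetical helper `apply_P`) applies \(P[y]=y(x)-y(qx)\) coefficient by coefficient, using that \(\sigma \) sends a monomial \(c\,x^{m}\) to \(c\,q^{m}x^{m}\):

```python
def apply_P(coeffs, q):
    """Apply P[y] = y(x) - y(q x) termwise to y = sum_m coeffs[m] * x^m.
    sigma maps c*x^m to c*q^m*x^m, so the coefficient of x^m in P[y]
    is c*(1 - q**m); we keep only the nonzero ones."""
    result = {}
    for m, c in coeffs.items():
        val = c * (1 - q**m)
        if val != 0:
            result[m] = val
    return result

q = 1j                                         # q = sqrt(-1)
y = {4 * k: k + 1.0 for k in range(1, 20)}     # arbitrary c_{4k}, exponents <= 76
print(apply_P(y, q))                           # {} : q**(4k) == 1 kills every term
```

The exponents are kept small so that the integer powers of `1j` are computed exactly; every coefficient of \(P[y]\) vanishes, while a single monomial \(x^{3}\) would survive.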

Remark 2

Let \(\Gamma \) be a finitely generated semigroup of \(\mathbb {R}_{\ge 0}\). For any real number k, the set \(\Gamma \cap \{r\mid r\le k\}\) is finite. Hence \(\Gamma \) is a well-ordered set with no accumulation points and its elements can be enumerated in increasing order: \(\Gamma =\{\gamma _i\}_{i\ge 0}\), with \(\gamma _{i}<\gamma _{i+1}\) and \(\lim \gamma _i=\infty \). Let \(\psi (x)=\sum _{i=0}^{\infty } c_ix^{\mu _i}\) be the first \(\omega \) terms of an element \(y\in \mathcal {R}\). If \({{\,\mathrm{supp}\,}}\psi (x)\) is contained in \(\Gamma \) then either it is finite or \(\lim \mu _i=\infty \). In both cases, \(y=\psi (x)\). In particular, any element of \(\mathcal {R}\) whose support is contained in \(\Gamma \) coincides with its first \(\omega \) terms.

3.1 Quasi Solved Form

Once we know that the pivot point Q corresponding to the solution s(x) can be assumed to have ordinate 1, we perform a transformation on P sending Q to (0, 1). Any equation whose pivot point with respect to a solution is at (0, 1) is very easy to study, as the successive Newton polygons only change below that point. This, together with the ease of computing their solutions, is what makes this property relevant and deserving of its own name: quasi-solved form.

A special case of quasi-solved form, called solved form, also guarantees that P has a unique solution s(x) with \(s(0)=0\). If P has integer exponents and is in solved form, then it has a single solution s(x) with \(s(0)=0\) and its exponents are integers (i.e. s(x) is a formal power series). As a side note, solutions to equations in solved form are studied in depth in our book [3] (their asymptotic properties, radius of convergence, etc.). In fact, many power series arising from combinatorial problems are in (or are easily turned into) solved form. We refer to [3] for the details.

We say that the equation

$$\begin{aligned} P[y]=0,\quad {{\,\mathrm{ord}\,}}(y)>0, \end{aligned}$$
(12)

is in quasi-solved form if the point (0, 1) is a vertex of \(\mathcal {N}(P)\) and \((0,0)\not \in \mathcal {C}(P)\). If this is the case, let \(\Psi (T)\) be the indicial polynomial of P at (0, 1), \(\Sigma =\{\mu \in \mathbb {R}\mid \Psi (q^{\mu })=0\}\) and \(\Sigma ^{+}=\Sigma \cap \mathbb {R}_{>0}\). We say that Eq. (12) is in solved form if \(\Sigma ^{+}\) is the empty set. One can verify (but it is irrelevant to our purposes) that an equation in solved form has a unique grid-based power series solution.

For the sake of comparison, a linear equation \(Q=\sum a_j(x)\sigma ^{j}\) is in quasi-solved form if \(a_j(0)\ne 0\) for some \(j\ge 1\).

The proof of Theorems 1 and 2 is structured as follows. A technical lemma on finitely generated semigroups allows us to introduce a change of variable \(z=x^{\gamma }\,y\) which will allow us to reduce the problem to quasi-solved form. Then we show (Lemma 7) that the solution is grid-based in this case. We also obtain in this case (Corollary 5) a recursive formula for the coefficients of the solution. Finally, the proofs of Theorems 1 and 2 follow.

Remark 3

The polynomial \(\Psi (T)\) can be written \(\Psi (T)=P_{0,e_0}+P_{0,e_1}\,T+\cdots + P_{0,e_n}\,T^{n} \in \mathbb {C}[T]\). Its degree m is the largest index such that the variable \(Y_m\) appears effectively in the point (0, 1). If the equation is in quasi-solved form, \(\Psi (T)\) is a nonzero polynomial because \((0,1)\in \mathcal {C}(P)\). If \(|q|\ne 1\), then \(\Sigma \) is finite. In case \(|q|=1\) (and \(q\ne 1\)), \(\Sigma \) is the finite union of the sets \(\Sigma _r=\frac{\arg (r)}{\arg (q)}+\frac{2\pi }{\arg (q)}\mathbb {Z}\), for those complex roots r of \(\Psi (T)\) with modulus 1. Recall that we have fixed a determination of the logarithm to compute \(q^{\mu }\), hence \(\arg (q)\) is also fixed. The following Lemma implies that \(\Sigma _r\cap \mathbb {R}_{\ge 0}\) is contained in a finitely generated semigroup. Therefore \(\Sigma ^{+}\) generates a finitely generated semigroup of \(\mathbb {R}_{\ge 0}\).

Lemma 5

Let \(\gamma \in \mathbb {R}\) and let \(\gamma _1,\gamma _2,\ldots ,\gamma _s\) be positive real numbers. Then the semigroup \(\Gamma \) of \(\mathbb {R}_{\ge 0}\) generated by the set \( A=(\gamma +\gamma _1\mathbb {N}+\cdots +\gamma _s\mathbb {N})\cap \mathbb {R}_{\ge 0} \) is finitely generated.

Proof

Let \(\Lambda \) be the set of \((n_1,\ldots ,n_s)\in \mathbb {N}^{s}\) such that \(\gamma +\sum n_i\gamma _i>0\). By Dickson’s lemma, the set of minimal elements of \(\Lambda \) with respect to the product order is finite. Hence \(\Gamma \) is generated by \(\gamma _1,\ldots ,\gamma _s\) together with the numbers \(\gamma +\sum n_i\gamma _i\), for \((n_1,\ldots ,n_s)\) a minimal element of \(\Lambda \). \(\square \)
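The proof is effective. As an illustration (a sketch of our own, with hypothetical names, not an optimized algorithm), the minimal elements of \(\Lambda \) can be found by brute force inside a sufficient box: if n is minimal and \(n_j>0\), then \(n-e_j\not \in \Lambda \), which bounds every coordinate of n.

```python
import itertools

def minimal_elements(gamma, gens):
    """Minimal elements, for the product order, of
    Lambda = { n in N^s : gamma + sum_i n_i * gens[i] > 0 }.
    Dickson's lemma guarantees this set is finite."""
    s = len(gens)

    def in_lambda(n):
        return gamma + sum(ni * gi for ni, gi in zip(n, gens)) > 0

    if gamma > 0:
        return [(0,) * s]          # 0 is in Lambda, hence the unique minimum
    # if n is minimal and n_j > 0, then n - e_j is not in Lambda, so
    # n_j * gens[j] <= max(gens) - gamma : every coordinate is bounded
    box = [range(int((max(gens) - gamma) // g) + 2) for g in gens]
    minimals = []
    for n in itertools.product(*box):
        if in_lambda(n) and not any(
            n[j] > 0 and in_lambda(tuple(ni - (i == j) for i, ni in enumerate(n)))
            for j in range(s)
        ):
            minimals.append(n)
    return minimals

# Gamma is then generated by gens together with gamma + <n, gens>, n minimal:
print(sorted(minimal_elements(-5, [2, 3])))   # [(0, 2), (2, 1), (3, 0)]
```

For \(\gamma =-5\) and \(\gamma _1=2,\gamma _2=3\), the minimal elements give the extra generators \(-5+6=1\), \(-5+7=2\) and \(-5+6=1\) of \(\Gamma \).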

We now introduce a change of variables which will allow us to simplify the exponents of the x variable in an equation. Let \(P\in \mathcal {R}_{\Gamma }[[Y]]\) and \(\gamma >\mu _{-1}(P)\). Define \(P[x^{\gamma }Y]\) as the series

$$\begin{aligned} \sum _{\rho }q^{\gamma \,\omega (\rho )}\,x^{\gamma |\rho |}\,P_\rho (x)\, Y_0^{\rho _0}Y_1^{\rho _1}\cdots Y_n^{\rho _n} \in \mathbb {C}((x^{\mathbb {R}}))^{g}\,\,[[Y]]. \end{aligned}$$
(13)

If \((\nu ,0)\) is the intersection point of \(L(P;\gamma )\) with the OX-axis, then all the coefficients of the series \(P[x^{\gamma }Y]\) have order greater than or equal to \(\nu \). Define \({}^{\gamma }\!{P}=x^{-\nu }P[x^{\gamma }Y]\). The coefficients of \({}^{\gamma }\!{P}\) are in \(\mathcal {R}_{\Gamma ^{*}}\), where \(\Gamma ^{*}\) is the semigroup of \(\mathbb {R}_{\ge 0}\) generated by \((-\nu +\Gamma +\gamma \mathbb {N})\cap \mathbb {R}_{\ge 0}\). By Lemma 5, \(\Gamma ^{*}\) is a finitely generated semigroup of \(\mathbb {R}_{\ge 0}\).

The transformation \(P\mapsto {}^{\gamma }\!{P}\) corresponds to the change of variable \(z=x^{\gamma }y\) in the following sense: for a series y, with \({{\,\mathrm{ord}\,}}y>\gamma +\mu _{-1}(P)\), one has \({}^{\gamma }\!{P}[x^{-\gamma }y]= x^{-\nu }P[y]\), in particular, \(P[y]=0\) if and only if \({}^{\gamma }\!{P}[x^{-\gamma }y]=0\).

Let \({\bar{\tau }}(a,b)\) be the plane affine map \({\bar{\tau }}(a,b)=(a-\nu +\gamma b,b)\), which satisfies \({\bar{\tau }}(\mathcal {C}_j(P))=\mathcal {C}_j({}^{\gamma }\!{P})\) for \(0\le j\le n\). In particular, \({\bar{\tau }}(\mathcal {N}(P))=\mathcal {N}({}^{\gamma }\!{P})\), and \({\bar{\tau }}\) maps vertices to vertices and sides of co-slope \(\mu \ge \gamma \) to sides of co-slope \(\mu -\gamma \). Moreover, \({\bar{\tau }}(L(P;\mu ))=L({}^{\gamma }\!{P};\mu -\gamma )\), in particular \({\bar{\tau }}(L(P;\gamma ))=L({}^{\gamma }\!{P};0)\) is the vertical axis. Therefore, \(Q(P;\mu )\) and \(Q({}^{\gamma }\!{P};\mu -\gamma )\) have the same ordinate. Let \(\sum _{i=0}^{\infty }c_ix^{\mu _i}\) and \(P_i\) be as in the definition of pivot point (Definition 3). Assume \(\gamma <\mu _0\) and set \(H={}^{\gamma }\!{P}\), \(H_0=H\) and \(H_{i+1}=H_i[c_ix^{\mu _i-\gamma }+Y]\). It is straightforward to prove that \({}^{\gamma }\!{P_i}=H_i\), so that \({\bar{\tau }}(Q(P_i;\mu _i))=Q(H_i;\mu _i-\gamma )\) and, in particular, they have the same ordinate. Then the image by \({\bar{\tau }}\) of the pivot point of P with respect to \(\sum _{i=0}^{\infty }c_ix^{\mu _i}\) is the pivot point of \({}^{\gamma }\!{P}\) with respect to \(\sum _{i=0}^{\infty }c_ix^{\mu _i-\gamma }\) and the same holds for relative pivot points. By Proposition 1, this implies that \(\sum _{i=0}^{\infty }c_ix^{\mu _i}\) satisfies \({{\,\mathrm{NIC}\,}}(P)\) if and only if \(\sum _{i=0}^{\infty }c_ix^{\mu _i-\gamma }\) satisfies \({{\,\mathrm{NIC}\,}}({}^{\gamma }\!{P})\).

Finally, if \(v\in \mathcal {C}(P)\) then \({\bar{\tau }}(v)\in \mathcal {C}({}^{\gamma }\!{P})\) and \(\Psi _{({}^{\gamma }\!{P};{\bar{\tau }}(v))}(T)=\Psi _{(P;v)}(q^{\gamma }T)\).
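As a sanity check of these transformation rules (a toy example of our own, with \(n=1\), assuming, as Eq. (13) indicates, that \(\omega (e_1)=1\)), take \(P=Y_1-Y_0^{2}\) and a co-slope \(\gamma >0\):

```latex
% \mathcal{C}(P) = \{(0,1),(0,2)\}; L(P;\gamma) meets the OX-axis at \nu = \gamma.
{}^{\gamma}\!P \;=\; x^{-\gamma}\,P[x^{\gamma}Y]
               \;=\; q^{\gamma}\,Y_1 \;-\; x^{\gamma}\,Y_0^{2}.
% Its cloud \{(0,1),(\gamma,2)\} is the image of \mathcal{C}(P) under
% \bar{\tau}(a,b) = (a-\nu+\gamma b,\,b), and the indicial polynomial at (0,1)
% changes from \Psi_{(P;(0,1))}(T) = T to
% \Psi_{({}^{\gamma}\!P;(0,1))}(T) = q^{\gamma}T = \Psi_{(P;(0,1))}(q^{\gamma}T).
```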

Lemma 6

Assume that \(\psi (x)=\sum _{i=0}^{\infty }c_i\,x^{\mu _i}\) satisfies \({{\,\mathrm{NIC}\,}}(P)\). Then there exist a finitely generated semigroup \(\Gamma ^{*}\), a series \(P^{*}\in \mathcal {R}_{\Gamma ^{*}}[[Y_{0},Y_{1},\ldots ,Y_{n}]]\), an index N and a rational number \(\gamma \) with \(\mu _{N-1}\le \gamma <\mu _{N}\), such that the equation

$$\begin{aligned} P^{*}[z]=0,\quad {{\,\mathrm{ord}\,}}z>0 \end{aligned}$$
(14)

is in quasi-solved form and \(\psi ^*(x)=\sum _{i=N}^{\infty }c_i\,x^{\mu _i-\gamma }\) satisfies \({{\,\mathrm{NIC}\,}}(P^{*})\).

Proof

We may assume that the ordinate of the pivot point of P with respect to \(\psi (x)\) is 1. Otherwise, by Proposition 2, we may replace P by any of its derivatives \(\frac{\partial ^{|r|}P}{\partial Y^{r}}\), where the monomial \(Y_j\,Y^{r}\) appears effectively in the pivot point, for some j. We remark that the coefficients of any derivative of P also belong to \(\mathcal {R}_{\Gamma }\). Let \(Q=(\alpha ,1)\) be the pivot point of P with respect to \(\psi (x)\) and use the notation of Definition 3: \(P_0=P\), \(P_{i+1}=P_i[c_ix^{\mu _i}+Y]\) and so on. In particular, let the pivot point be reached at step \(N'-1\) for some \(N'\). Consider any integer \(N\ge N'\). Denote \(\Gamma _0=\Gamma \) and \(\Gamma _{i+1}=\Gamma _{i}+\mu _i\,\mathbb {N}\). Notice that the coefficients of \(P_i\) belong to \(\mathcal {R}_{\Gamma _i}\).

Let \(\gamma \) be a rational number such that \(\mu _{N-1}\le \gamma <\mu _{N}\) and set \(P^{*}={}^{\gamma }\!{P_N}\in \mathcal {R}_{\Gamma _N^{*}}[[Y]]\). Since the pivot point Q has been reached at step \(N-1\), \(Q\in L(P_{N-1};\mu _{N-1})\cap L(P_N;\mu _N)\). By Lemma 3, \(Q\in L(P_N;\mu _{N-1})\). Hence \(Q\in L(P_N;\mu _{N-1})\cap L(P_N;\mu _N)\); since \(\mu _{N-1}<\gamma <\mu _{N}\), we conclude that \(Q(P_N;\gamma )=Q=(\alpha ,1)\). So, as the change of variables (13) sends a point \((a,b)\) to \(\tau (a,b)=(a-\nu +\gamma b,b)\) for the corresponding \(\nu \), we get \(\tau (\alpha ,1)=(0,1)\); hence the point (0, 1) is in \(\mathcal {C}(P^{*})\), the equation \(P^{*}[y]=0\) is in quasi-solved form, and the pivot point of \(P^{*}\) with respect to \(\psi ^*(x)\) is (0, 1). By Proposition 1, \(\psi ^*(x)\) satisfies \({{\,\mathrm{NIC}\,}}(P^{*})\). \(\square \)

Lemma 7

Assume Eq. (14) is in quasi-solved form and let \(\xi (x)=\sum _{i=0}^{\infty }c_ix^{\mu _i}\), with \(\mu _0>0\), be a series satisfying \({{\,\mathrm{NIC}\,}}(P^{*})\). Then the support of \(\xi (x)\) is contained in the finitely generated semigroup \(\Gamma '=\Gamma ^{*}+\Sigma ^{+}\,\mathbb {N}\). In particular, either the support of \(\xi (x)\) is finite or \(\lim \mu _i=\infty \) and in both cases \(\xi (x)\) is a solution of Eq. (14).

Proof

Let \(P_0=P^{*}\) and \(P_{i+1}=P_i[c_ix^{\mu _i}+Y]\) for \(i\ge 0\). We first prove that \(Q(P_i;\mu _i)=(0,1)\) for all \(i\ge 0\). We do this by showing, by induction on i, that \(\mathcal {N}(P_i)\) is contained in the first quadrant of the plane and that the point \((0,1)\in \mathcal {C}(P_i)\). This holds for \(P_0\) because of the hypotheses on \(P^{*}\). Assume that the statement holds for \(P_i\). Since \(\mu _i>0\), the line \(L(P_i;\mu _i)\) either contains the point (0, 1), and then \(Q(P_i;\mu _i)=(0,1)\), or \(L(P_i;\mu _i)\) meets \(\mathcal {N}(P_i)\) at a single point with zero ordinate, which is \(Q(P_i;\mu _i)\). If the latter happens, from Corollary 2 we infer that the pivot point of \(P^{*}\) with respect to \(\xi (x)\) has zero ordinate, in contradiction with the fact that \(\xi (x)\) satisfies \({{\,\mathrm{NIC}\,}}(P^{*})\). Hence \(Q(P_i;\mu _i)=(0,1)\). By Lemma 3, (0, 1) is a vertex of \(\mathcal {N}(P_{i+1})\) and, since \(P_{i+1}\in \mathcal {R}[[Y]]\), its Newton polygon is contained in the first quadrant. This proves the induction step and that \(Q(P_i;\mu _i)=(0,1)\) for all \(i\ge 0\).

The fact that \(Q(P_i;\mu _i)=(0,1)\) implies that the polynomial \(\Phi _{(P_i;\mu _i)}(C)\) is equal to \(\Psi (q^{\mu _i})C+{{\,\mathrm{Coeff}\,}}(P_i;x^{\mu _i}\,Y^{\underline{0}})\), where \(\Psi (T)\) is the indicial polynomial of P at (0, 1) and \({{\,\mathrm{Coeff}\,}}(P_i;x^{\mu _i}\,Y^{\underline{0}})\) is the coefficient of \(x^{\mu _i}\,Y_0^{0}Y_1^{0}\cdots Y_n^{0}\) in \(P_i\).

Since \(\xi (x)\) satisfies \({{\,\mathrm{NIC}\,}}(P^{*})\), we have \(\Phi _{(P_i;\mu _i)}(c_i)=0\), so the following equations hold:

$$\begin{aligned} \Psi (q^{\mu _i})\,c_i+{{\,\mathrm{Coeff}\,}}(P_i;x^{\mu _i}\,Y^{\underline{0}})=0,\quad i\ge 0. \end{aligned}$$
(15)

Let us prove, by induction, that \(P_i\in \mathcal {R}_{\Gamma '}[[Y]]\), for all \(i\ge 0\), and that the support of \(\xi (x)\) is contained in \(\Gamma '\). By hypothesis, \(P_0\in \mathcal {R}_{\Gamma '}[[Y]]\). Assume that \(P_i\in \mathcal {R}_{\Gamma '}[[Y]]\). If \(c_i=0\), then \(P_{i+1}=P_i\in \mathcal {R}_{\Gamma '}[[Y]]\) and \(\mu _i\not \in {{\,\mathrm{supp}\,}}(\xi (x))\). If, on the contrary, \(c_i\ne 0\), we can prove by contradiction that \(\mu _i\in \Gamma '\): assume that \(\mu _i\not \in \Gamma '\), in particular \(\mu _i\not \in \Sigma ^{+}\), hence \(\Psi (q^{\mu _i})\ne 0\). From Eq. (15), \({{\,\mathrm{Coeff}\,}}(P_i;x^{\mu _i}\,Y^{\underline{0}})\ne 0\), and therefore \(\mu _i\in {{\,\mathrm{supp}\,}}((P_i)_{\underline{0}})\subseteq \Gamma '\). So \(P_{i+1}=P_i[c_ix^{\mu _i}+Y]\) belongs to \(\mathcal {R}_{\Gamma '}[[Y]]\) which proves the induction step.

The set \({{\,\mathrm{supp}\,}}\xi (x)\) has no accumulation points in \(\mathbb {R}\), because \(\Gamma '\) is a finitely generated semigroup of \(\mathbb {R}_{\ge 0}\) and \({{\,\mathrm{supp}\,}}\xi (x)\subseteq \Gamma '\); we are done. \(\square \)

Corollary 5

Let y be a solution of Eq. (14), which is in quasi-solved form. Let \(\Gamma '=\{\gamma _i\}_{i=0}^{\infty }\), with \(\gamma _i<\gamma _{i+1}\) for all i. Then \(y=\sum _{i=1}^{\infty } d_i\,x^{\gamma _i}\), with the \(d_i\) satisfying the following recurrence:

$$\begin{aligned} \Psi (q^{\gamma _i})\,d_i=-{{\,\mathrm{Coeff}\,}}(P^{*}[d_1x^{\gamma _1}+\cdots +d_{i-1}x^{\gamma _{i-1}}]; x^{\gamma _i}), \quad i\ge 1. \end{aligned}$$
(16)

If \(\Sigma ^{+}\) is finite and z is another solution of Eq. (14) with \({{\,\mathrm{ord}\,}}(y-z)\) greater than any element of \(\Sigma ^{+}\), then \(y=z\).

Proof

Let \(\xi (x)\) be the first \(\omega \) terms of y. Then \({{\,\mathrm{supp}\,}}\xi (x)\subseteq \Gamma '\), and by Remark 2, \(y=\xi (x)\in \mathcal {R}_{\Gamma '}\). Hence we may write \(y=\sum _{i=1}^{\infty } d_i x^{\gamma _i}\) because \(\gamma _0=0\) and \({{\,\mathrm{ord}\,}}y>0\). Since \(\xi (x)\) satisfies \({{\,\mathrm{NIC}\,}}(P^{*})\), the same reasoning as in Lemma 7 up to Eq. (15) holds. The coefficient \({{\,\mathrm{Coeff}\,}}(P_i;x^{\gamma _i}Y^{\underline{0}})\) is equal to the coefficient of \(x^{\gamma _i}\) in \(P^{*}[d_1x^{\gamma _1}+\cdots +d_{i-1}x^{\gamma _{i-1}}]\), which gives Eq. (16). To prove the last statement, write \(z=\sum _{i=1}^{\infty } d'_i x^{\gamma _i}\). If \(\gamma _{i}\) is greater than any element of \(\Sigma ^{+}\), then \(\Psi (q^{\gamma _i})\ne 0\), and \(d_i\) is completely determined by \(d_1,\ldots ,d_{i-1}\), so that \(y=z\). \(\square \)
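To make the recursion (16) concrete, here is a sketch for an equation of our own choosing, \(y(qx)-2\,y(x)+y(x)^{2}=x\) with \(q=1/2\): its indicial polynomial at (0, 1) is \(\Psi (T)=T-2\), whose only root \(q^{\mu }=2\) corresponds to \(\mu =-1<0\), so \(\Sigma ^{+}=\emptyset \), the equation is in solved form, and the exponents are integers (\(\Gamma '=\mathbb {N}\)):

```python
from fractions import Fraction

# Recursion (16) for the hypothetical solved-form equation
#   y(qx) - 2*y(x) + y(x)^2 = x,   q = 1/2.
# The coefficient of x^i contributed by y(qx) - 2 y(x) is (q^i - 2) d_i,
# and y^2 contributes sum_{a+b=i} d_a d_b, so Psi(q^i) d_i is determined
# by the earlier coefficients, as in Corollary 5.

def solve_coeffs(q, nterms):
    d = {0: Fraction(0)}            # ord y > 0 forces d_0 = 0
    for i in range(1, nterms + 1):
        rhs = (1 if i == 1 else 0) - sum(d[a] * d[i - a] for a in range(1, i))
        d[i] = rhs / (q**i - 2)     # Psi(q^i) = q^i - 2 never vanishes here
    return d

d = solve_coeffs(Fraction(1, 2), 6)
print(d[1], d[2])                   # -2/3 16/63
```

That \(\Psi (q^{i})=q^{i}-2\ne 0\) for every \(i\ge 1\) is exactly what makes each \(d_i\) uniquely determined, matching the uniqueness statement of Corollary 5.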

Proof of Proposition 3

Applying Lemma 6 to \(\psi (x)\) we obtain Eq. (14), and applying Lemma 7 to \(\xi (x)= \sum _{i=N}^{\infty }c_ix^{\mu _i-\gamma }\) we conclude that \(\mu _i-\gamma \in \Gamma '\), for \(i\ge N\). Since \(\gamma \ge \mu _0\), the set \((\gamma -\mu _0)+\Gamma '\) is included in \(\mathbb {R}_{\ge 0}\). Let \(\Gamma ''\) be the semigroup generated by \((\gamma -\mu _0)+\Gamma '\). By Lemma 5, \(\Gamma ''\) is finitely generated. Let \(\Gamma '''\) be the finitely generated semigroup \(\Gamma ''+\sum _{i=0}^{N-1} (\mu _i-\mu _0)\,\mathbb {N}\). The set \({{\,\mathrm{supp}\,}}\psi (x)\) is contained in \(\mu _0+\Gamma '''\), so that \(\lim \mu _i=\infty \). By Proposition 1, \(\psi (x)\) is a solution of \(P[y]=0\). \(\square \)

Proof of Theorem 1

Let \(\psi (x)=\sum _{i=0}^{\infty }c_ix^{\mu _i}\) be the first \(\omega \) terms of y. By Corollary 1, \(\psi (x)\) satisfies \({{\,\mathrm{NIC}\,}}(P)\). As in the proof of Proposition 3, there exists a finitely generated semigroup \(\Gamma \) such that \({{\,\mathrm{supp}\,}}\psi (x)\) is contained in \(\mu _0+\Gamma \). By Remark 2, \(y=\psi (x)\), so that y is grid-based. \(\square \)

Proof of Theorem 2

Let y be a solution of Eq. (1). By Theorem 1, y coincides with its first \(\omega \) terms. Write \(y=\sum _{i=0}^{\infty } c_i x^{\mu _i}\) and let \(Q=(\alpha ,h)\) be the pivot point of P with respect to y. Apply Lemmas 6 and 7 to y: let N and \(\gamma \) be as in Lemma 6; we may assume that the pivot point Q is reached at step \(N-1\). Since \(|q|\ne 1\), \(\Sigma ^{+}\) is finite by Remark 3. Since \(\lim \mu _i=\infty \), there is \(k>N\) such that \(\mu _k-\gamma \) is greater than any element of \(\Sigma ^{+}\).

Consider \(z\in \mathbb {C}((x^{\mathbb {R}}))\) with the same first k terms as y and satisfying that for any \(H=\frac{\partial ^{|r|} P}{\partial Y^{r}}\), with \(|r|\le h\), \(H[y]=0\) if and only if \(H[z]=0\). We have to show that \(y=z\).

Since \(P[y]=0\), we have \(P[z]=0\), and z coincides with its first \(\omega \) terms. Write \(z=\sum _{i=0}^{\infty } d_i x^{\delta _i}\). By hypothesis, \(c_i=d_i\) and \(\mu _i=\delta _i\) for \(0\le i<k\). Denote \(P'_0=P\), \(P'_{i+1}=P'_i[d_ix^{\delta _i}+Y]\), and \(P_0=P\), \(P_{i+1}=P_i[c_ix^{\mu _i}+Y]\), for \(i\ge 0\). Obviously, \(P_i=P'_i\) for \(0\le i \le k\). In particular \(Q=Q(P_N;\mu _N)=Q(P'_N;\delta _N)\).

If the pivot point of P with respect to z is also Q, then apply Lemmas 6 and 7 to z in the same way as to y: choose the same derivative \(\frac{\partial ^{h-1} P}{\partial Y^{r}}\), the same N and the same \(\gamma \) to obtain the same \(P^{*}\). This can be done because \(P_i=P'_i\) for \(0\le i \le k\). This implies that \(\xi (x)=\sum _{i=N}^{\infty } {c_i}x^{\mu _i-\gamma }\) and \({\bar{\xi }}(x)=\sum _{i=N}^{\infty } {d_i}x^{\delta _i-\gamma }\) both satisfy \({{\,\mathrm{NIC}\,}}(P^{*})\). By Corollary 5, \(\xi (x)={\bar{\xi }}(x)\), which implies \(y=z\).

Let us show by contradiction that the pivot point \(Q'=(\alpha ',h')\) of P with respect to z must be Q. Assume \(Q'\ne Q\). The point \(Q'\) is the stabilization point of the sequence \(Q(P'_i;\delta _i)\). On the other hand, Q belongs to this sequence because \(Q=Q(P_N;\mu _N)=Q(P'_N;\delta _N)\). Since \(Q\ne Q'\), Corollary 2 implies that the ordinate of \(Q'\) is less than that of Q, that is, \(h>h'\).

Let \(Y^{r}\) be a monomial that appears effectively in the pivot point of P with respect to z, so that \(|r|=h'\). Let \(H=\frac{\partial ^{h'} P}{\partial Y^{r}}\). By Proposition 2, \(H[z]\ne 0\); in particular \(H\ne 0\). We claim that \(H[y]=0\). By Remark 1, the pivot point \(Q_r\) of P with respect to y relative to \(Y^{r}\) is well-defined. Since \(\lim \mu _i=\infty \), by Lemma 4, the ordinate of \(Q_r\) is \(h''\ge h\). The pivot point of H with respect to y has ordinate \(h''-h'\ge h-h'\ge 1\). By Proposition 1, y satisfies \({{\,\mathrm{NIC}\,}}(H)\) and so \(H[y]=0\), which proves our claim. Since \(|r|=h'\le h\), the hypothesis on z now gives \(H[z]=0\), a contradiction, which finishes the proof of the Theorem. \(\square \)

3.2 Bounding the Rational Rank in Order and Degree 1

In general, it is nice to know a priori how complex a solution of an equation can be. Following Seidenberg [29], one can deduce that if s(x) is a solution of a differential equation \(P=A(x,y)+B(x,y)y^{\prime }\) of order and degree 1 with \(A(x,y),B(x,y)\in \mathbb {C}[[x,y]]\), then its support is included in the \(\mathbb {Q}\)-vector space \(\mathbb {Q} + \alpha \mathbb {Q}\), for some \(\alpha \in \mathbb {R}\). Morally speaking, one can only have a single irrational exponent (and its \(\mathbb {Q}\)-span) in s(x). We pose here the same question (in all its generality, allowing P to have exponents in a finitely generated semigroup) and reach the equivalent conclusion: the dimension of the vector space generated by the support of a solution s(x) is at most 1 plus the dimension of the vector space generated by the support of P.

Recall that the rational rank of a semigroup \(S\subseteq \mathbb {R}\) is the dimension of \({\langle }S{\rangle }\), the \(\mathbb {Q}\)-vector subspace of \(\mathbb {R}\) generated by S. It is denoted \({{\,\mathrm{rat.rk}\,}}(S)\).

In what follows \(\Gamma \) denotes a finitely generated semigroup of \(\mathbb {R}_{\ge 0}\), as above.

Theorem 3

Assume \(|q| \ne 1\). Let \(P=A(Y_0)+B(Y_0)Y_1\) be a nonzero series, where \(A,B\in \mathcal {R}_{\Gamma }[[Y_0]]\). Let y be a solution of \(P[y]=0\), with \({{\,\mathrm{ord}\,}}y > \mu _{-1}(P)\). Then \({{\,\mathrm{rat.rk}\,}}({{\,\mathrm{supp}\,}}y)\le {{\,\mathrm{rat.rk}\,}}(\Gamma )+1\).

The bound is sharp, as witnessed by the equation \(P=Y_1-q^{\pi }Y_0\), which has \(y(x)=x^{\pi }\) as a solution.
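This example can be checked numerically; the following sketch (Python, with the arbitrary choice \(q=4\), not a value from the paper) verifies that \(y(x)=x^{\pi }\) satisfies \(y(qx)=q^{\pi }\,y(x)\) at sample points:

```python
import math

q = 4.0                      # arbitrary choice with |q| != 1
y = lambda t: t**math.pi     # candidate solution y(x) = x^pi
# P[y] = y(qx) - q^pi * y(x) should vanish identically
for x0 in (0.3, 1.0, 2.7):
    assert abs(y(q*x0) - q**math.pi * y(x0)) < 1e-8
```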

Proof

By the previous results, y coincides with the series of its first \(\omega \) terms, \(\psi (x)=\sum _{i=0}^\infty c_i \,x^{\mu _i}\). Taking a rational \(\gamma <\mu _0\) and replacing P by \({}^{\gamma }\!{P}\), we may assume that \(\mu _0>0\), that \({}^{\gamma }\!{P}\in \mathcal {R}_{\Gamma ^{*}}\), and that \({{\,\mathrm{rat.rk}\,}}(\Gamma ^{*})={{\,\mathrm{rat.rk}\,}}(\Gamma )\), for another finitely generated semigroup \(\Gamma ^{*}\).

Define \(P_0=P\), \(P_{i+1}=P_i[c_ix^{\mu _i}+Y]\), \(\Gamma _0=\Gamma ^{*}\) and \(\Gamma _{i+1}=\Gamma _i+\mu _i\mathbb {N}\). The coefficients of \(P_i\) belong to \(\mathcal {R}_{\Gamma _i}\). Notice that one has \(\dim {\langle }\Gamma _{i+1}{\rangle }\le \dim {\langle }\Gamma _i{\rangle }+1\), with equality only if \(\mu _i\not \in {\langle }\Gamma _{i}{\rangle }\).

For each index i, the line \(L(P_i;\mu _i)\) corresponds either to a vertex or to a side of \(\mathcal {N}(P_i)\). If it corresponds to a side, then there are two different points \((\alpha ,a)\) and \((\beta ,b)\) in \(\mathcal {C}(P_i)\) lying on \(L(P_i;\mu _i)\). This implies that \(\alpha ,\beta \in \Gamma _i\), therefore \(\mu _i=(\beta -\alpha )/(a-b)\in {\langle }\Gamma _i{\rangle }\), and \({\langle }\Gamma _i{\rangle }={\langle }\Gamma _{i+1}{\rangle }\). Hence it is enough to prove that if, for an index i, \(\mu _i\) corresponds to a vertex of \(\mathcal {N}(P_i)\), then for all \(j>i\), \(\mu _j\) corresponds to a side of \(\mathcal {N}(P_j)\). If this holds, iterating the above argument we get \(\dim {\langle }\Gamma _0{\rangle }=\dim {\langle }\Gamma _i{\rangle }\) and \( \dim {\langle }\Gamma _{i+1}{\rangle }=\dim {\langle }\Gamma _j{\rangle }\), for \(j>i\), which completes the proof because we have:

$$\begin{aligned} \dim {\langle }\cup _{j=0}^{\infty } \Gamma _j{\rangle }=\dim {\langle }\Gamma _{i+1}{\rangle }\le 1+\dim {\langle }\Gamma _i{\rangle }= 1+\dim {\langle }\Gamma _0{\rangle }. \end{aligned}$$

We now prove the statement above. Assume that for the index i, \(\mu _{i}\) corresponds to a vertex \(v=(a,h)\) of \(\mathcal {N}(P_i)\). By Corollary 1, we have that \(\Phi _{(P_i;\mu _i)}(c_i)=0\). Applying Lemma 8 below to \(P_i\), we obtain that \(v'=(\nu -h,1)\), with \(\nu =a+\mu _i\,h\), is a vertex of \(\mathcal {N}(P_{i+1})\) and that \(\Psi _{(P_{i+1};v')}(q^{\mu })\ne 0\), for \(\mu > \mu _i\). By Lemma 3, \(\mathcal {N}(P_{i+1})\) is contained in the closed right half-plane defined by \(L(P_i;\mu _i)\). Since \(v'\in L(P_i;\mu _i)\) and \(\mu _{i+1}>\mu _i\), the point \(Q(P_{i+1};\mu _{i+1})\) is either \(v'\) or a point with zero ordinate. The latter possibility would contradict the fact that \(\psi (x)\) is a solution of \(P=0\) together with Proposition 1, hence \(Q(P_{i+1};\mu _{i+1})=v'\) and its ordinate is 1. But this implies that \(Q(P_{i+1};\mu _{i+1})\) is the pivot point of P with respect to \(\psi (x)\), from which it follows that for \(j>i\), \(Q(P_j;\mu _j)=v'\) and \(\Psi _{(P_j;v')}=\Psi _{(P_{i+1};v')}\) (see Definition 3 and the subsequent paragraph).

Let us prove, finally, that for any \(j>i\), \(\mu _j\) corresponds to a side of \(\mathcal {N}(P_j)\). Were this not the case, for some \(j>i\), \(\mu _j\) would correspond either to the vertex \(v'\) or to a vertex with zero ordinate, and both possibilities are absurd:

  • If \(\mu _j\) corresponds to the vertex \(v'\) then

    $$\begin{aligned} \Phi _{(P_j;\mu _j)}(c_j)=\Psi _{(P_j;v')}(q^{\mu _j})\,c_j =\Psi _{(P_{i+1};v')}(q^{\mu _j})\,c_j\ne 0 \end{aligned}$$

    which contradicts Proposition 1.

  • If \(\mu _j\) corresponds to a vertex with zero ordinate, then \(\Phi _{(P_j;\mu _j)}\) is a non-zero constant polynomial and has no roots, but by hypothesis \(c_j\) is indeed a root.

Hence \(\mu _j\) corresponds to a side of \(\mathcal {N}(P_j)\), and we are done. \(\square \)

Lemma 8

Let \(P=A(Y_0)+B(Y_0)\,Y_1\) where \(A(Y_0),B(Y_0)\in \mathcal {R}_\Gamma [[Y_0]]\). Let \(\mu >\mu _{-1}(P)\) be such that \(L(P;\mu )\cap \mathcal {N}(P)\) is a vertex \(v=(a,h)\) of \(\mathcal {N}(P)\) and let c be a nonzero constant such that \(\Phi _{(P;\mu )}(c)=0\). Let \({\bar{P}}=P[c\,x^{\mu }+Y]\). Then the point \(v'=(\nu -h,1)\), where \(\nu =a+\mu \,h\), is a vertex of \(\mathcal {N}({\bar{P}})\). Moreover, for \(\mu '>\mu \) we have that \(\Psi _{({\bar{P}};v')}(q^{\mu '})\ne 0\).

Proof

Since \(L(P;\mu )\cap \mathcal {N}(P)=\{v\}\), we have that \(\Phi _{(P;\mu )}(C)=\Psi _{(P;v)}(q^{\mu })\,C^{h}\). By hypothesis, \(\Phi _{(P;\mu )}(c)=0\) and \(c\ne 0\), hence \(\Psi _{(P;v)}(q^{\mu })=0\). Let \(M=A\,x^{a}\,Y_0^{h}+B\,x^{a}\,Y_0^{h-1}\,Y_1\), with \(A,B\in \mathbb {C}\), be the sum of the terms of P corresponding to the vertex v; in particular, either A or B is different from zero. Then \(\Psi _{(P;v)}(T)=A+B\,T\); since \(q^{\mu }\ne 0\) is a root, necessarily \(B\ne 0\), and \(q^{\mu }\) is the unique root of \(\Psi _{(P;v)}(T)\).

Lemma 3 describes \(\mathcal {N}({\bar{P}})\): the point \((\nu ,0)\not \in \mathcal {C}({\bar{P}})\) because \(\Phi _{(P;\mu )}(c)=0\), and \(\mathcal {N}({\bar{P}})\) is contained in the closed right half-plane defined by \(L(P;\mu )\). Let us prove that the point \(v'=(\nu -h,1)\) belongs to \(\mathcal {C}({\bar{P}})\); this will prove that \(v'\) is a vertex of \(\mathcal {N}({\bar{P}})\) because it belongs to \(L(P;\mu )\). To that end, we compute the monomials of \({\bar{P}}\) corresponding to the point \(v'\). Again by Lemma 3, these are the monomials of \({\bar{M}}=M[c\,x^{\mu }+Y]\) corresponding to \(v'\). By direct computation these monomials are \(c^{h-1}\,x^{\nu -h}\,(A\,Y_0+B\,Y_1)\). Since \(c\ne 0\) and either \(A\ne 0\) or \(B\ne 0\), we have \(v'\in \mathcal {C}({\bar{P}})\). Moreover \(\Psi _{({\bar{P}};v')}(T)=c^{h-1} (A+B\,T)\), so \(q^{\mu }\) is the only root of \(\Psi _{({\bar{P}};v')}(T)\). Since \(|q|\ne 1\), if \(\mu '>\mu \), then \(q^{\mu '}\ne q^{\mu }\) and therefore \(\Psi _{({\bar{P}};v')}(q^{\mu '})\ne 0\). \(\square \)

4 q-Gevrey Order

Throughout this section we assume that \(|q|>1\). In this case, we prove some properties about the growth of the coefficients of a formal power series solution of a q-difference equation. Note that the case \(|q|<1\) reduces to this one by considering the equation \(P(q^{-n}x,\sigma ^{-n}(y),\ldots ,y)=0\), which is equivalent to Eq. (1) because \(\sigma ^{-1}(y(x))=y(q^{-1}\,x)\) and \(|q^{-1}|>1\).

Definition 5

A formal power series \(\sum _{i=0}^{\infty }c_i\,x^{i}\) is said to be of q-Gevrey order \(s\ge 0\) if the series \(\sum _{i=0}^{\infty }c_i\,|q|^{-\frac{1}{2}s\,i^2}x^{i}\) has a positive radius of convergence.

We will say that a series \(P=\sum _{\alpha ,\rho }P_{\alpha ,\rho }\,x^{\alpha }\,Y^{\rho }\in \mathbb {C}[[x,Y_{0},Y_{1},\ldots ,Y_{n}]]\) is of q-Gevrey order \(s\ge 0\) if the series

$$\begin{aligned} \sum _{(\alpha ,\rho )\in \mathbb {N}\times \mathbb {N}^{n+1}} P_{\alpha ,\rho }\, |q|^{-\frac{1}{2} s(\alpha +|\rho |)^{2}} \, x^{\alpha }\,Y^{\rho } \end{aligned}$$

has a positive radius of convergence at the origin of \(\mathbb {C}^{n+2}\).
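For a computational picture of Definition 5, the following sketch (Python; the coefficient model \(c_i=|q|^{\frac{1}{2}s\,i^{2}}\) is a hypothetical example, not taken from the paper) illustrates the rescaling and how s can be read off from the coefficient growth:

```python
import math

q, s = 4.0, 0.5
# model coefficients growing exactly like |q|^{(s/2) i^2}
c = [q**(0.5*s*i*i) for i in range(2, 30)]
# the rescaled series sum c_i |q|^{-(s/2) i^2} x^i has all coefficients 1,
# hence radius of convergence 1 > 0: the series has q-Gevrey order s
resc = [ci * q**(-0.5*s*i*i) for i, ci in zip(range(2, 30), c)]
assert all(abs(r - 1.0) < 1e-9 for r in resc)
# conversely, s is recovered from the growth: log c_i ~ (s/2) i^2 log q
est = [2*math.log(ci)/(i*i*math.log(q)) for i, ci in zip(range(2, 30), c)]
assert all(abs(e - s) < 1e-9 for e in est)
```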

We remark that q-Gevrey order 0 means convergence. This section is devoted to proving the following result (the number \(s(P;y(x))\) in the statement is introduced in Definition 6 and can be computed from the relative pivot points of P with respect to y(x)).

Theorem 4

Let \(P\in \mathbb {C}[[x,Y_{0},Y_{1},\ldots ,Y_{n}]]\) be a non-zero formal power series of q-Gevrey order \(t\ge 0\) and \(y(x)\in \mathbb {C}[[x]]\) a solution of \(P[y]=0\). Then y(x) is of q-Gevrey order \(t+s(P;y)\) (see the following definition).

Definition 6

Let \(Q=(a,h)\) be the pivot point of P with respect to y(x). The number \(s(P;y(x))\) is defined as follows:

Case \(h=1\). Let \(Q_j=(a_j,h_j)\) be the pivot point of P with respect to y(x) relative to the variable \(Y_j\) (for \(0\le j\le n\)). Since Q has ordinate 1, \(Q=Q_j\) for some j. Let \(r=\max \{j\mid Q_j=Q\}\). There are three cases:

  1. (RS-R)

    If \(r=n\), then \(s(P;y(x))=0\).

  2. (RS-N)

    If \(r<n\) and \(h_j>1\) for all \(r<j\le n\), then \(s(P;y(x))\) can be taken as any positive number. In this case, Theorem 4 says that y(x) is of q-Gevrey order \(t+\varepsilon \) for any \(\varepsilon >0\).

  3. (IS)

    If \(r<n\) and \(h_j=1\) for some \(r<j\le n\), then \(s(P;y(x))=\max \{\frac{j-r}{a_j-a_r}\mid r<j\le n,\, h_j=1\}\).

Case \(h>1\). By Proposition 2 there exist derivatives \(H=\frac{\partial ^{|\rho |}P}{\partial Y^{\rho }}\), with \(|\rho |=h-1\), such that the ordinate of the pivot point of H with respect to y(x) is equal to 1. Define \(s(P;y(x))\) as the minimum of all those \(s(H;y(x))\). If for some derivative H, the equation \(H=0\) and its solution y(x) fall in the (RS-N) case, then \(s(P;y(x))\) can be taken as any positive number.

Remark 4

When Q has ordinate \(h>1\), the number \(s(P;y(x))\) can be described directly in terms of the relative pivot points: let \(Q=(a,h)\) be the (general) pivot point of P with respect to y(x), and \(Q_{\rho }(P;y(x))=(a_\rho ,h_{\rho })\). Let A be the set formed by those 3-tuples \((\rho ,i,j)\) satisfying the following properties: \(|\rho |=h\), \(Q_\rho =Q\), \(0\le i <j\le n\), and \(h_{\rho '}=h\), where \(\rho '=\rho -e_i+e_j\), using the previous notation \(e_j = (0,\ldots ,0, 1,0,\ldots ,0)\) where the 1 is at position \(j+1\) (as we need to account for the case \(j=0\)). If the set A is empty, we define \(s(P;y(x))\) as any positive real number. Otherwise, \(s(P;y(x))\) is the minimum of \(\frac{j-i}{a_{\rho '}-a}\), for those \((\rho ,i,j)\in A\).

Remark 5

Zhang’s paper [32] deals with the case in which P is a convergent series. The bound given there for the q-Gevrey order of the solution coincides with the one described here in cases (RS-R) and (IS), provided \(h_n=1\). In the other cases, Zhang proves that some bound exists but without a detailed control. In particular, our bound in case (RS-N) is more accurate because we prove that the solution is of q-Gevrey order s, for any \(s>0\). If \(h_n=1\), the bound found in [32] is described with the aid of the Newton-Adams Polygon (see [1, 2]) of the linearized operator along y(x):

$$\begin{aligned} L_y=\sum _{j=0}^{n}\frac{\partial P}{\partial Y_j}[y(x)]\, \sigma ^{j}\in \mathbb {C}[[x]][\sigma ]. \end{aligned}$$

By Proposition 2, we know that \(L_y\) is not identically zero if and only if the pivot point of P with respect to y(x) has ordinate 1. The Newton-Adams Polygon \(\mathcal {N}_q(L_y)\) of \(L_y\) is defined as follows: for each \(0\le j \le n\), let \(l_j={{\,\mathrm{ord}\,}}\frac{\partial P}{\partial Y_j}[y(x)]\in \mathbb {N}\cup \{\infty \}\). Notice that \(l_j=a_j\) if \(h_j=1\). Then \(\mathcal {N}_q(L_y)\) is the convex hull of the set \(\{(j,l_j+r)\mid l_j\ne \infty ,\,r\ge 0\}\). It is easy to check that \(s(P;y(x))\) is the reciprocal of the minimum of the positive slopes of \(\mathcal {N}_q(L_y)\).
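The agreement between the reciprocal-slope description and case (IS) of Definition 6 can be illustrated computationally. The sketch below (Python) uses hypothetical orders \(l_j\); the hull routine is a generic monotone-chain lower hull, not code from the paper:

```python
from fractions import Fraction

# hypothetical data: l_j = ord of dP/dY_j [y(x)] (missing j means l_j = infinity),
# with r = 1, i.e. the pivot point is attained at Y_1 and a_1 = l_1 = 2
l = {0: Fraction(4), 1: Fraction(2), 3: Fraction(7)}
r = 1

# Newton-Adams polygon: lower convex hull of the points (j, l_j)
pts = sorted(l.items())
hull = []
for p in pts:
    while len(hull) >= 2:
        (x1, y1), (x2, y2) = hull[-2], hull[-1]
        if (x2 - x1)*(p[1] - y1) - (y2 - y1)*(p[0] - x1) <= 0:
            hull.pop()          # drop right turns and collinear points
        else:
            break
    hull.append(p)
slopes = [(b2 - b1)/(a2 - a1) for (a1, b1), (a2, b2) in zip(hull, hull[1:])]
s_polygon = 1/min(sl for sl in slopes if sl > 0)

# case (IS) of Definition 6: s = max (j - r)/(a_j - a_r) over j > r with h_j = 1
s_def = max(Fraction(j - r)/(l[j] - l[r]) for j in l if j > r)
assert s_polygon == s_def == Fraction(2, 5)
```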

Remark 6

The labels in Definition 6 encode two pieces of information: the prefixes RS and IS refer to the singularity type of the linearized operator \(L_y\) (regular or irregular singularity), while the suffixes -R and -N denote whether the solution y(x) is a regular solution of P (i.e. \(h_n=1\)) or not.

4.1 Reduction to Solved Form

In order to prove Theorem 4, we first show (in the paragraphs below) that we may assume that the equation \(P[y]=0\) is in solved form and that the general and all the relative pivot points with respect to the variables \(Y_j\) are reached at step 0.

Let \(y(x)=\sum _{i=0}^{\infty }c_i\,x^{i}\in \mathbb {C}[[x]]\) be a solution of \(P[y]=0\). We apply the process described in the proof of Lemma 6 to P and y(x) in three steps:

  1. (a)

    Replace P by some of its derivatives H such that the ordinate of the pivot point of H with respect to y(x) is equal to 1 and \(s(P;y(x))=s(H;y(x))\).

  2. (b)

    Let N be large enough so that all the relative points \(Q_{j}(H;y(x))\), for \(0\le j \le n\), have been reached at step \(N-1\).

  3. (c)

    Let \(\gamma =N-1\) and consider \(P^*={}^{\gamma }\!{H_{N}}\) and \(y^*(x)=\sum _{i=N}^{\infty }c_i\,x^{i-N+1}\). Then, \(P^*[y]=0\) is in quasi-solved form and \(P^*[y^*(x)]=0\).

If \({\bar{y}}(x)=\sum _{i=N}^{\infty }c_i\,x^{i}\), then the relative pivot points of H with respect to y(x) are the same as the relative pivot points of \(H_N\) with respect to \({\bar{y}}(x)\). Hence, \(s(H;y(x))=s(H_N;{\bar{y}}(x))\). Finally, the change of variables (13) produces an affine transformation \(\tau \) on the \((i,j)\) plane on which the Newton polygon is defined; recall that this transformation satisfies \(\tau (Q_j(H_N;{\bar{y}}(x)))=Q_j(P^*;y^*(x))\), and moreover, \(\tau \) restricted to the line of points with ordinate 1 is a translation, so that \(s(H_N;{\bar{y}}(x))=s(P^*;y^*(x))\). This proves that \(s(P;y(x))=s(P^*;y^*(x))\). Moreover, the general and relative pivot points \(Q_j(P^*;y^*(x))\) are reached at step 0. It is straightforward to prove that if P is of q-Gevrey order t, then H, \(H_N\) and \(P^*\) are all of q-Gevrey order t. Also \(y^*(x)\) and y(x) have the same q-Gevrey order. This shows that it is enough to prove Theorem 4 when the q-difference equation \(P[y]=0\) is in quasi-solved form and the relative pivot points \(Q_j(P;y(x))\) are reached at step 0.

Finally, assuming that \(P[y]=0\) is in quasi-solved form, since \(|q|>1\), the set \(\Sigma ^{+}\) is finite. Let N be an integer greater than the maximum of \(\Sigma ^{+}\), \(P^{*}={}^{N}\!{(P_{N+1})}\) and \(y^{*}(x)=\sum _{i=N+1}c_i\,x^{i-N}\). It is clear that \(s(P;y(x))=s(P^*;y^*(x))\), and also that \(P^{*}\) and \(y^{*}(x)\) are of the same q-Gevrey order as P and y(x) respectively. From this we conclude that we may assume the q-difference equation \(P[y]=0\) is in solved form.

4.2 Recursive Formula for the Coefficients

Let \(y(x)=\sum _{i=1}^{\infty } c_i\,x^{i}\) be a power series solution of the q-difference equation \(P[y]=0\), where

$$\begin{aligned} P=\sum _{(\alpha ,\rho )\in \mathbb {N}\times \mathbb {N}^{n+1}} P_{\alpha ,\rho }\, x^{\alpha }\,Y^{\rho } \in \mathbb {C}[[x,Y_{0},Y_{1},\ldots ,Y_{n}]]. \end{aligned}$$

Assume that it is in solved form and that the general pivot point Q with respect to y(x) and the relative ones \(Q_j=(a_j,h_j)\) are all reached at step 0. Since the equation is in solved form, \(Q=(0,1)\). Let r be the maximum index j, \(0\le j\le n\), such that \(Q_j=Q\) and let \(\Psi (T)\) be the indicial polynomial of P at point Q. From Eq. (15) one has

$$\begin{aligned} \Psi (q^{i})\,c_i=-{{\,\mathrm{Coeff}\,}}(P_i;\,x^{i}\,Y^{\underline{0}}),\quad i\ge 1. \end{aligned}$$
(17)

As usual \(P_i=P[c_1x+\cdots +c_{i-1}x^{i-1}+Y]\). We are interested in computing \({{\,\mathrm{Coeff}\,}}(P_i;\,x^{i}\,Y^{\underline{0}})\) in terms of \(c_1,c_2,\ldots ,c_{i-1}\). To this end, we shall consider formal series \(H^{i}\) in the variables \(T_{\alpha ,\rho }\), \(C_{j,l}\), x and \(Y_{0},Y_{1},\ldots ,Y_{n}\), where \(\alpha \in \mathbb {N}\), \(\rho =(\rho _0,\ldots ,\rho _n)\in \mathbb {N}^{n+1}\), \(0\le j\le n\) and \(1\le l\le i-1\), defined as follows

$$\begin{aligned} H^{i}=\sum _{(\alpha ,\rho )\in \mathbb {N}^{n+2}}T_{\alpha ,\rho }\,\, x^{\alpha } \prod _{0\le j\le n}\left( \sum _{l=1}^{i-1}C_{j,l}\,x^{l}+Y_j \right) ^{\rho _j}. \end{aligned}$$

For \((\beta ,\gamma )\in \mathbb {N}\times \mathbb {N}^{n+1}\), let \(H^{i}_{\beta ,\gamma }\) be the coefficient of \(x^{\beta }Y^{\gamma }\) in \(H^{i}\). It is a polynomial with coefficients in \(\mathbb {N}\) and in the variables \(T_{\alpha ,\rho }\) and \(C_{j,l}\). Denote \(L_{i}=H^{i}_{i,\underline{0}}\), i.e. the coefficient of \(x^{i}Y^{\underline{0}}\) in \(H^{i}\). A simple computation shows that

$$\begin{aligned} L_{i}=\sum _{(\alpha ,\rho ,\underline{d})\in \mathcal {F}_i} B^{i}_{\alpha ,\rho ,\underline{d}}\,\,T_{\alpha ,\rho }\,\, \prod _{0\le j\le n} \prod _{1\le l\le i-1}C_{j,l}^{d_{j,l}}, \end{aligned}$$

where \(B^{i}_{\alpha ,\rho ,\underline{d}}\) are non-negative integers and the summation set \(\mathcal {F}_i\) comprises those \((\alpha ,\rho ,\underline{d})\), such that \(\alpha \in \mathbb {N}\), \(\rho \in \mathbb {N}^{n+1}\), \(\underline{d}=(d_{j,l})\in \mathbb {N}^{(n+1)(i-1)}\), for \(0\le j\le n\), \(1\le l \le i-1\), for which the following formulæ hold:

$$\begin{aligned} \alpha +\sum _{j,l}l\,d_{j,l}= & {} i, \end{aligned}$$
(18)
$$\begin{aligned} \sum _{l}d_{j,l}= & {} \rho _j,\quad \text {and so, }\,\,\, \sum _{j,l}d_{j,l}=|\rho |. \end{aligned}$$
(19)
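The bookkeeping behind \(L_i\) can be verified by brute force in a toy case. The sketch below (Python, with \(n=0\), \(i=3\), and arbitrary numeric stand-ins for \(T_{\alpha ,\rho }\) and \(C_{0,l}\), none of which come from the paper) expands \(H^{i}\) directly and compares the coefficient of \(x^{i}Y^{\underline{0}}\) with the enumeration over \(\mathcal {F}_i\) given by (18) and (19):

```python
from math import factorial
from itertools import product

# toy case: n = 0 (one variable Y), i = 3, so the only C-variables are C_1, C_2
i = 3
C = {1: 2.0, 2: 3.0}                         # arbitrary numeric values for C_{0,l}
T = {(a, r): 1.0 + a + 10*r for a in range(i+1) for r in range(i+1)}

def pmul(p, q):                              # product of polynomials {(ex, eY): coeff}
    out = {}
    for (e1, f1), c1 in p.items():
        for (e2, f2), c2 in q.items():
            k = (e1 + e2, f1 + f2)
            out[k] = out.get(k, 0.0) + c1*c2
    return out

base = {(1, 0): C[1], (2, 0): C[2], (0, 1): 1.0}   # C_1 x + C_2 x^2 + Y
L_direct = 0.0                               # coefficient of x^i Y^0 in H^i
for (a, r), t in T.items():
    pw = {(0, 0): 1.0}
    for _ in range(r):
        pw = pmul(pw, base)
    L_direct += t * pw.get((i - a, 0), 0.0)

# enumeration over F_i using (18) and (19), with multinomial coefficients B
L_enum = 0.0
for a, d1, d2 in product(range(i + 1), repeat=3):
    if a + d1 + 2*d2 == i:                   # Eq. (18)
        rho = d1 + d2                        # Eq. (19)
        B = factorial(rho)//(factorial(d1)*factorial(d2))
        L_enum += B * T[(a, rho)] * C[1]**d1 * C[2]**d2
assert abs(L_direct - L_enum) < 1e-9
```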

Remark 7

Notice that, substituting in \(H^{i}\) the value \(P_{\alpha ,\rho }\) for the variable \(T_{\alpha ,\rho }\) and \(c_{j,l}:=q^{j\,l}\,c_l\) for \(C_{j,l}\), one obtains \(P_i\). Hence,

$$\begin{aligned} {{\,\mathrm{Coeff}\,}}(P_i;x^{i}Y^{\underline{0}})=L_{i}(P_{\alpha ,\rho },c_{j,l}). \end{aligned}$$

However, in order to have an optimal control on the q-Gevrey growth, we need to be more precise and use the position of the relative pivot points of P with respect to y(x), and refine the summation set: let \(\mathcal {F}'_i\) be the subset of \(\mathcal {F}_i\) composed by those \((\alpha ,\rho ,\underline{d})\) satisfying the following properties:

$$\begin{aligned}&\text {If }&j>r,\,h_j\ge 2,\,\text { and }l>i/2\,\text { then }d_{j,l}=0. \end{aligned}$$
(20)
$$\begin{aligned}&\text {If }&j>r,\,h_j=1,\,\text { and }l>i-a_j\,\text { then }d_{j,l}=0. \end{aligned}$$
(21)

Let \(\mathcal {F}''_i\) be the complement of \(\mathcal {F}'_i\) in \(\mathcal {F}_i\) and let \(L'_i\) (resp. \(L''_i\)) be the sum of those terms in \(L_i\) corresponding to those \((\alpha ,\rho ,\underline{d})\) in \(\mathcal {F}'_i\) (resp. in \(\mathcal {F}''_i\)); obviously \(L_i=L'_i+L''_i\).

Lemma 9

The following equality holds: \({{\,\mathrm{Coeff}\,}}(P_i;x^{i}Y^{\underline{0}})=L'_{i}(P_{\alpha ,\rho },c_{j,l})\).

Proof

Take \(l_0\) with \(1\le l_0\le i-1\), and consider \(P_{l_0}=P[\sum _{l=1}^{l_0-1}c_l\,x^{l}+Y]\) (see (6)). Let \({\bar{P}}_{l_0}\) be the series obtained substituting in \(P_{l_0}\) the expression \(\sum _{l=l_0}^{i-1}C_{j,l}\,x^{l}+Y_j\) for the variable \(Y_j\), \(0\le j\le n\). By construction,

$$\begin{aligned}&P_i=P_{l_0}[c_{l_0}\,x^{l_0}+\cdots +c_{i-1}\,x^{i-1}+Y],\\&L_i(T_{\alpha ,\rho }=P_{\alpha ,\rho },C_{j,l}=c_{j,l};1\le l <l_0)={{\,\mathrm{Coeff}\,}}({\bar{P}}_{l_0};x^{i}Y^{\underline{0}}). \end{aligned}$$

Write \(P_{l_0}=\sum _{(\alpha ,\rho )\in \mathbb {N}\times \mathbb {N}^{n+1}} (P_{l_0})_{\alpha ,\rho }\,x^{\alpha }\,Y^{\rho }\). Expanding \({\bar{P}}_{l_0}\) as a series in the variables \(C_{j,l}\), \(l_0\le l\le i-1\), x and \(Y_j\), \(0\le j\le n\), let us denote, for \(r<j_0\le n\), by \(A_{j_0,l_0}\) the sum of terms of \({\bar{P}}_{l_0}\) in which the variable \(C_{j_0,l_0}\) appears effectively. In order to compute \(A_{j_0,l_0}\), it is only necessary to take into account the terms of \(P_{l_0}\) in which the variable \(Y_{j_0}\) appears effectively, that is, only consider the sum over the indices \((\alpha ,\rho )\in \mathcal {C}_{j_0}(P_{l_0})\). Since we are assuming that the pivot point \(Q_{j_0}=(a_{j_0},h_{j_0})\) of P with respect to y(x) relative to the variable \(Y_{j_0}\) is reached at step 0, we may assume that the order in x of \(A_{j_0,l_0}\) is greater than or equal to \(a_{j_0}+h_{j_0}\,l_0\). If \(j_0,l_0\) satisfy the premise of either (20) or (21), then \(a_{j_0}+h_{j_0}\,l_0>i\) and the variable \(C_{j_0,l_0}\) does not appear effectively in the coefficient of \(x^{i}Y^{\underline{0}}\) in \({\bar{P}}_{l_0}\). From this one infers that \(L''_i(P_{\alpha ,\rho },c_{j,l})=0\). \(\square \)

From the definition of r, one has \(\Psi (T)=P_{0,e_0}+P_{0,e_1}\,T+\cdots + P_{0,e_r}\,T^{r}\), with \(P_{0,e_r}\ne 0\). In particular, \(\Psi (T)\) has degree r. Moreover, since the equation \(P[y]=0\) is in solved form, \(\Psi (q^{i})\ne 0\), for \(i\ge 1\). From Eq. (17) and Lemma 9, the following recursive formula holds for all \(i\ge 1\):

$$\begin{aligned} c_i=\frac{-1}{\Psi (q^{i})}L'_i(P_{\alpha ,\rho };c_{j,l}), \end{aligned}$$
(22)

where \(c_{j,l}=q^{j\,l}c_{l}\), \(1\le l\le i-1\) and \(0\le j\le n\).
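As an illustration of the recursion (22), consider the toy equation \(y(qx)-y(x)-y(x)^{2}-x=0\) with the arbitrary choice \(q=4\) (this equation is not taken from the paper): it is in solved form with \(\Psi (T)=T-1\), and comparing coefficients of \(x^{i}\) determines each \(c_i\) from the previous ones:

```python
from fractions import Fraction

# toy equation in solved form: y(qx) - y(x) - y(x)^2 - x = 0, with q = 4;
# here Psi(T) = T - 1, and comparing coefficients of x^i gives
#   (q^i - 1) c_i = delta_{i,1} + sum_{l=1}^{i-1} c_l c_{i-l}
q, N = Fraction(4), 8
c = [Fraction(0)] * (N + 1)                  # c[i] = coefficient of x^i
for i in range(1, N + 1):
    rhs = (Fraction(1) if i == 1 else Fraction(0)) \
          + sum(c[l]*c[i-l] for l in range(1, i))
    c[i] = rhs / (q**i - 1)                  # Psi(q^i) = q^i - 1 never vanishes
assert c[1] == Fraction(1, 3) and c[2] == Fraction(1, 135)

# sanity check: the residual of the equation vanishes up to order N
res = [c[i]*q**i - c[i] - (Fraction(1) if i == 1 else Fraction(0))
       - sum(c[l]*c[i-l] for l in range(1, i)) for i in range(1, N + 1)]
assert all(t == 0 for t in res)
```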

4.3 A Majorant Series

Assume the hypotheses and notations of the previous sub-section and that P has q-Gevrey order \(t\ge 0\). Let \(s=s(P;y(x))\). Consider the equation in two variables x and w:

$$\begin{aligned} w= \vert q \vert ^{-\frac{s+t}{2}} \,|c_1|\,x+ \sum _{(\alpha ,\rho )\in \mathcal {C}'} G_{\alpha ,\rho } \, x^{\alpha }\,w^{|\rho |}, \end{aligned}$$
(23)

where \(G_{\alpha ,\rho }= |P_{\alpha ,\rho }|\, \vert q \vert ^{-\frac{t}{2}(\alpha +|\rho |)^{2}+k_1(\alpha +|\rho |)+k_2}\), \(k_1\) and \(k_2\) are positive constants to be specified later, and \(\mathcal {C}'\) is the set \(\mathbb {N}\times \mathbb {N}^{n+1}\) without the points \((0,\underline{0})\), \((1,\underline{0})\) and \((0,e_j)\) for \(0\le j\le n\). It is straightforward to prove that the right hand side of (23) is a convergent series and that the equation has a unique power series solution \(w(x)=\sum _{i=1}^{\infty } c'_i\, x^{i}\), whose coefficients \(c'_i\) satisfy the recursive formulae:

$$\begin{aligned} c'_1=\vert q \vert ^{-\frac{s+t}{2}} |c_1|,\quad c'_i=L_i(G_{\alpha ,\rho };\{c'_{j,l}\}),\quad i\ge 2, \end{aligned}$$

where \(c'_{j,l}=c'_{l}\), for \(1\le l\le i-1\), and \(0\le j\le n\). In particular, \(c'_i\ge 0\), for all \(i\ge 1\), since the coefficients of \(L_i\) are non-negative. By Puiseux’s theorem, the series w(x) is convergent. The following lemma finishes the proof of Theorem 4 because, by the majorant criterion, the series solution \(y(x)=\sum _{i=1}^{\infty }c_i\,x^{i}\) is then of q-Gevrey order \(s+t\).

Lemma 10

With the above notations, there exist positive constants \(k_1\) and \(k_2\) such that the coefficients \(c'_l\) of the solution of Eq. (23) satisfy

$$\begin{aligned} |c_l|\le {\vert q \vert ^{\frac{s+t}{2}l^{2} }}|c'_{l}|,\quad l\ge 1. \end{aligned}$$
(24)

Proof

The above inequality holds trivially for \(l=1\). Assume that it holds for \(l=1,2,\ldots ,i-1\). Using Eq. (22) and the fact that the coefficients of \(L_i\) are non-negative, one gets

$$\begin{aligned} |c_i|\le&\, \frac{1}{|\Psi (q^{i})|} \sum _{(\alpha ,\rho ,\underline{d})\in \mathcal {F}'_i} B^{i}_{\alpha ,\rho ,\underline{d}} \,|P_{\alpha ,\rho }| \prod _{j,l}(\vert q \vert ^{jl}|c_l|)^{d_{j,l}} \nonumber \\ \le&\, \frac{1}{|\Psi (q^{i})|} \sum _{(\alpha ,\rho ,\underline{d})\in \mathcal {F}'_i}B^{i}_{\alpha ,\rho ,\underline{d}}\, G_{\alpha ,\rho } \frac{\vert q \vert ^{\frac{t}{2}(\alpha +|\rho |)^{2}}}{\vert q \vert ^{k_1(\alpha +|\rho |)+k_2}} \prod _{j,l}\left( \vert q \vert ^{jl}\,{\vert q \vert ^{\frac{s+t}{2}l^{2}}}|c'_{l}| \right) ^{d_{j,l}}\nonumber \\ =&\, \sum _{(\alpha ,\rho ,\underline{d})\in \mathcal {F}'_i} R_i(\alpha ,\rho ,\underline{d})\, B^{i}_{\alpha ,\rho ,\underline{d}} \,G_{\alpha ,\rho }\,\prod _{j,l}|c'_l|^{d_{j,l}}, \end{aligned}$$
(25)

where the indices j and l are \(0\le j\le n\) and \(1\le l\le i-1\), and

$$\begin{aligned} R_i(\alpha ,\rho ,\underline{d})&= \frac{1}{|\Psi (q^{i})|}\,\vert q \vert ^{r_i(\alpha ,\rho ,\underline{d})}, \\ r_i(\alpha ,\rho ,\underline{d})&= \sum _{j,l}\left( j\,l+\frac{s+t}{2}l^{2}\right) d_{j,l}+ \frac{t}{2}(\alpha +|\rho |)^{2} -k_1(\alpha +|\rho |)-k_2. \end{aligned}$$

Claim (proved below): there exist positive constants \(k_1\) and \(k_2\), such that

$$\begin{aligned} R_i(\alpha ,\rho ,\underline{d})\le \vert q \vert ^{\frac{s+t}{2}i^{2}}, \quad (\alpha ,\rho ,\underline{d})\in \mathcal {F}'_{i}. \end{aligned}$$
(26)

Assuming the claim and using Eqs. (25) and (26), one gets

$$\begin{aligned} |c_i|\le \vert q \vert ^{\frac{s+t}{2}i^{2}} \sum _{(\alpha ,\rho ,\underline{d})\in \mathcal {F}'_i} B^{i}_{\alpha ,\rho ,\underline{d}}\, G_{\alpha ,\rho } \prod _{j,l}|c'_l|^{d_{j,l}} =\vert q \vert ^{\frac{s+t}{2}i^{2}} L'_i(G_{\alpha ,\rho };\{|c'_l|\}). \end{aligned}$$

Since the coefficients of \(L_i\), the elements \(G_{\alpha ,\rho }\) and \(c'_l\) are all non-negative real numbers, then \(L''_i(G_{\alpha ,\rho };\{c'_l \})\ge 0\). Hence,

$$\begin{aligned} L'_i(G_{\alpha ,\rho };\{|c'_l|\})\le L'_i(G_{\alpha ,\rho };\{c'_l\})+L''_i(G_{\alpha ,\rho };\{c'_l\}) = L_i(G_{\alpha ,\rho };\{c'_l\})=|c'_{i}|, \end{aligned}$$

which proves the Lemma. \(\square \)

Proof of Claim

Since the degree of \(\Psi (T)\) is r, \(\vert q \vert >1\) and \(\Psi (q^{i})\ne 0\) for \(i\ge 1\), there exists a constant \(K_2>1\), such that \(|q|^{i\,r}\le K_2\,|\Psi (q^{i})|\), for all \(i\ge 1\). Thus, it is enough to prove that there exist \(k_1>0\) and \(k_2>{}\ln K_2{}/\ln |q|\) such that \(r_i(\alpha ,\rho ,\underline{d})\le \frac{s+t}{2}i^{2}+ri\), for all \(i\ge 1\) and all \((\alpha ,\rho ,\underline{d})\in \mathcal {F}'_{i}\). Grouping the terms of \(r_i\) and rearranging, we divide the inequality above into two parts so that it is enough to prove the existence of positive constants \(k_1\) and \(k_2\), such that for all \((\alpha ,\rho ,\underline{d})\in \mathcal {F}'_{i}\) and \(i\ge 1\), the following inequalities hold:

$$\begin{aligned} \frac{s}{2}\sum _{j,l}l^{2}d_{j,l} + \sum _{j,l}j\,l\,d_{j,l}&\le \frac{s}{2}i^2+r\,i+k_2, \end{aligned}$$
(27)
$$\begin{aligned} \frac{t}{2} \sum _{j,l} l^{2}\,d_{j,l} + \frac{t}{2}(\alpha +|\rho |)^{2}&\le \frac{t}{2}i^2+k_1(\alpha +|\rho |). \end{aligned}$$
(28)

We first prove the existence of \(k_2\) such that inequality (27) holds and then we do the same for \(k_1\) and Eq. (28).

Proof of inequality (27)

Call \(r'_i(\alpha ,\rho ,\underline{d})\) the left hand side of (27). Let \(\mathcal {F}'_i=F_1\cup F_2\), where \(F_1\) is the subset formed by those \((\alpha ,\rho ,\underline{d})\) such that \(l>i/2\) implies \(d_{j,l}=0\), and \(F_2\) is its complement in \(\mathcal {F}'_i\). We shall bound \(r'_i\) in each of \(F_1,F_2\) by a polynomial \({\bar{r}}'(i)={\bar{r}}'_2i^{2}+{\bar{r}}'_1i+{\bar{r}}'_0\), such that, either \({\bar{r}}'_2<\frac{s}{2}\) or \({\bar{r}}'_2=\frac{s}{2}\) and \({\bar{r}}'_1\le r\). Adjusting \(k_2\) conveniently, one gets (27).

Let \((\alpha ,\rho ,\underline{d})\in F_1\). This implies that if \(d_{j,l}\ne 0\), then \(l\le i/2\). As \(j\le n\), and \(\sum _{j,l}l\,d_{j,l}\le i\) (which follows from (18)), we conclude that

$$\begin{aligned} r'_i= \frac{s}{2}\sum _{j,l}l^{2}d_{j,l}+\sum _{j,l}j\,l\,d_{j,l} \le \frac{s\,i}{4}\sum _{j,l}ld_{j,l} + n\sum _{j,l}l\,d_{j,l} \le \frac{s}{4}i^{2}+n\,i={\bar{r}}'(i). \end{aligned}$$

If \(s\ne 0\), then \({\bar{r}}'_2<s/2\). Otherwise, \(s=0\), and by Definition 6, \(r=n\), hence \({\bar{r}}'_1\le r\). This proves that the polynomial \({\bar{r}}'(i)\) satisfies our requirements.

Let \((\alpha ,\rho ,\underline{d})\in F_2\). There exists a pair \((j_0,l_0)\) such that \(l_0>i/2\) and \(d_{j_0,l_0}\ne 0\). By Eq. (18), this pair is unique and \(d_{j_0,l_0}=1\). In this case, Eq. (18) reads as

$$\begin{aligned} \alpha +\sum _{j,l\ne l_0}l\,d_{j,l}+l_0=i,\quad \text {and in particular }\, \sum _{j,l\ne l_0}l\,d_{j,l}\le a, \end{aligned}$$
(29)

where \(a=i-l_0 < i/2\). This implies also that for \(l\ne l_0\) and \(d_{j,l}\ne 0\) one has \(l\le a\). Therefore,

$$\begin{aligned} r'_i&= \frac{s}{2}\left( \sum _{j,l\ne l_0} l^{2}d_{j,l}+l_0^{2}\right) + \sum _{j,l\ne l_0} j\,l\,d_{j,l}+j_0\,l_0\\&\le \frac{s}{2}\left( a\sum _{j,l\ne l_0} l\,d_{j,l}+(i-a)^{2}\right) + n\sum _{j,l\ne l_0}l\,d_{j,l}+j_0(i-a)\\&\le \frac{s}{2}(2a^{2}-2a\,i+i^{2})+n\,a+j_0(i-a)\\&=(s/2)i^{2}+(j_0-s\,a)i+(s\,a^{2}+n\,a-a\,j_0):=f_i(a). \end{aligned}$$

For a fixed i, the graph of \(f_i(a)\) is either an upwards parabola (case \(s>0\)) or a straight line (case \(s=0\)), so its maximum on an interval is reached at one of its endpoints. The available range for a depends on \(j_0\). If \(j_0\le r\), then there are no additional constraints on \(l_0\), so \(a\in [1,i/2[\), and we take \({\bar{r}}'(i)=(s/2)\,i^{2}+r\,i+{\bar{r}}'_0\). We can choose \({\bar{r}}'_0\) in such a way that \(\max \{f_i(1),f_i(i/2)\}\le {\bar{r}}'(i)\), for all \(i\ge 1\) and \(0\le j_0\le r\), because \(f_i(i/2)\le (s/4)\,{i}^{2}+n\,i\) and \(f_i(1)\le (s/2)\,{i}^{2}+ r\, i+s+n\). If, on the other hand, \(j_0>r\), since \(l_0>i/2\), case (20) does not hold, hence case (21) holds; so that \(h_{j_0}=1\) and \(l_0\le i-a_{j_0}\), and the range for a is \([a_{j_0},i/2[\). By definition of s, one has \(j_0-s\,a_{j_0}\le r\) and \(s>0\). Consider \({\bar{r}}'(i)=(s/2)\,i^{2}+r\,i+{\bar{r}}'_0\), where \({\bar{r}}'_0\) is chosen in such a way that \(\max \{f_i(i/2),f_i(a_j)\mid j>r,\ h_j=1\}\le {\bar{r}}'(i)\), for all \(i\ge 1\). Such an \({\bar{r}}'_0\) exists because, as above, \(f_i(i/2)\le (s/4)\,{i}^{2}+n\,i\), \(f_i(a_j)\le (s/2)i^{2}+(j-s\,a_j)i+s\,a_{j}^{2} + na_j\), and \(j-sa_j\le r\) for those j such that \(j>r\) and \(h_j=1\).

Proof of inequality (28)

For \(t=0\), the inequality holds trivially, so we may assume that \(t>0\). Let \((\alpha ,\rho ,\underline{d})\in \mathcal {F}'_i\). Denote \(d_l=\sum _{j=0}^{n}d_{j,l}\), for \(1\le l\le i-1\) and let \(l_0\) be the maximum of the indices l such that \(d_l\ne 0\). From Eqs. (18) and (19), the fact that \(l_0\ge 1\) and \(d_{l_0}\ge 1\), one gets:

$$\begin{aligned} i-|\rho |=\alpha +\sum _{l}l\,d_l-\sum _{l}d_l =\alpha +\sum _{l\ne l_0}(l-1)d_{l}+(l_0-1)d_{l_0}\ge \alpha + l_0-1. \end{aligned}$$

From which \(i-l_0\ge \alpha +|\rho | -1\). Taking into account that \(\alpha \ge 0\), Eq. (18), and the fact that \(l_0\ge l\) for any l with \(d_l\ne 0\), we conclude that

$$\begin{aligned} i^{2}&=(i-l_0+l_0)^{2}=(i-l_0)^{2}+l_0^{2}+2\,l_0\,(i-l_0)\\&\ge (\alpha +|\rho |-1)^{2}+l_0^{2}+2\,l_0 \left( \sum _{l\ne l_0} l\,d_{l} + l_0(d_{l_0}-1)\right) \\&\ge (\alpha +|\rho |-1)^{2}+l_0^{2} + \sum _{l\ne l_0}l^{2}d_l+l_0^{2}(d_{l_0}-1)\\&\ge (\alpha +|\rho |)^{2}-2(\alpha +|\rho |)+\sum _{l}l^{2}d_{l}. \end{aligned}$$

This gives inequality (28) for \(k_1\ge t\) and finishes the proof of Theorem 4. \(\square \)

5 Working Example

Let us consider the q-difference equation \(P[y]=0\) of order 5 and degree 6, where

$$\begin{aligned} P=4\,{Y_1}^4 -9\,{Y_0}^2\,{ Y_1}\,{ Y_2} +2\,{ Y_0}^3\,{ Y_2} -x^3\,{ Y_0}^4\,{ Y_5}^2 +{{x{ Y_0}\,{ Y_2}}\over {q^4}} -{{x^3\,{ Y_2}}\over {q^4}} -x^3\,{ Y_0}+x^5, \end{aligned}$$

and \(q=4\). Its Newton Polygon is \(\mathcal {N}(P)\) in Fig. 3. It has four vertices \(v_0=(3,6), v_1=(0,4), v_2= (1,2)\), \(v_3=(5,0)\) and three sides \(L_1,L_2\) and \(L_3\) with respective co-slopes \(\gamma _1=-3/2\), \(\gamma _2=1/2\) and \(\gamma _3=2\). We apply some steps of Procedure 1 to P. As P is a polynomial, \(\mu _{-1}(P)=-\infty \).
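The polygon can be recomputed from the cloud \(\mathcal {C}(P)\); the sketch below (Python) assumes that the relevant boundary of \(\mathcal {N}(P)\) is the lower-left chain of the convex hull of the points \((\alpha ,|\rho |)\) of the monomials of P, and recovers the vertices and co-slopes quoted above:

```python
from fractions import Fraction

# cloud C(P): one point (alpha, h = |rho|) per monomial of P
cloud = [(0, 4), (0, 4), (0, 4), (3, 6), (1, 2), (3, 1), (3, 1), (5, 0)]

pts = sorted(set(cloud), key=lambda p: (p[1], p[0]))    # increasing ordinate h
# lower-left boundary of N(P) = lower convex hull in (h, alpha)-coordinates
hull = []
for a, h in pts:
    while len(hull) >= 2:
        (a1, h1), (a2, h2) = hull[-2], hull[-1]
        if (h2 - h1)*(a - a1) - (a2 - a1)*(h - h1) <= 0:
            hull.pop()          # drop right turns and collinear points
        else:
            break
    hull.append((a, h))
vertices = hull[::-1]                                    # decreasing ordinate
coslopes = [Fraction(a2 - a1, h1 - h2)
            for (a1, h1), (a2, h2) in zip(vertices, vertices[1:])]
assert vertices == [(3, 6), (0, 4), (1, 2), (5, 0)]      # v_0, v_1, v_2, v_3
assert coslopes == [Fraction(-3, 2), Fraction(1, 2), Fraction(2)]
```

Note that the cloud point (3, 1) lies on the side \(L_3\) and is correctly discarded as a vertex.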

Fig. 3 Newton polygons \(\mathcal {N}(P)\), \(\mathcal {N}(P_1)\) and \(\mathcal {N}(P_2)\)

In order to find all the possible starting terms \(c_0\,x^{\mu _0}\) of a solution, we need to consider all the vertices and sides of \(\mathcal {N}(P)\) according to formulæ (8) and (9). For the vertices, we get: \(\Psi _{(P;v_0)}(T)=-3\,T^{10}\), \(\Psi _{(P;v_1)}(T)=T^{2}(T-2)(4T-1)\), \(\Psi _{(P;v_2)}(T)=T^{2}/q^{4}\), \(\Psi _{(P;v_3)}(T)=1\). Hence, for \(j=0,1,2,3\) the only satisfiable formula in (9) is the one corresponding to vertex \(v_1\), that is \(\Psi _{(P;v_1)}(q^{\mu })=0\) and \(-3/2<\mu <1/2\). This gives \(\mu =-1\) for any nonzero c. For the sides, we get: \(\Phi _{(P;\gamma _1)}(c)= q^{-15}c^{4}(2\,q^{12}-9\,q^{21/2}+4\,q^{9}-c^{2})\), \(\Phi _{(P;\gamma _2)}(c)=c^2/64\), and \(\Phi _{(P;\gamma _3)}(c)=(c-1)^2\). According to (8), the only possible starting terms related to the sides are \(\pm 1024 \sqrt{15}\, x^{-3/2}\) and \(x^{2}\). Notice that \(L_2\) gives rise to no starting term.
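The values quoted above are easy to verify by hand or by machine; a short check (Python, exploiting that \(q=4\) gives \(q^{1/2}=2\) exactly):

```python
from fractions import Fraction

q, sqrt_q = 4, 2                        # q = 4, so q^{1/2} = 2 exactly
# constant factor inside Phi_{(P;gamma_1)}: 2 q^12 - 9 q^{21/2} + 4 q^9
const = 2*q**12 - 9*sqrt_q**21 + 4*q**9
assert const == 15728640 == 15 * 1024**2         # hence c = +/- 1024 sqrt(15)

# vertex v_1: Psi_{(P;v_1)}(T) = T^2 (T - 2)(4T - 1); T = q^mu = 4^mu = 1/4
# gives mu = -1 (the other root T = 2 gives mu = 1/2, excluded by mu < 1/2)
T = Fraction(1, 4)
assert T**2 * (T - 2) * (4*T - 1) == 0
assert Fraction(-3, 2) < -1 < Fraction(1, 2)     # mu = -1 lies in ]-3/2, 1/2[
```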

Following Procedure 1 we choose \(x^{2}\), that is, \(c_0=1\) and \(\mu _0=2\). The polynomial \(P_1=P[x^{2}+Y]\) has 33 terms that we do not exhibit; its Newton Polygon is \(\mathcal {N}(P_1)\) in Fig. 3. Since \(y=0\) is not a solution of \(P_1[y]=0\) (because \(\mathcal {C}(P_1)\) has points on the OX-axis), we need to perform step (a.2) of Procedure 1, that is, to find \(\mu >\mu _0=2\) and \(c\ne 0\) such that \(\Psi _{(P_1;\mu )}(c)=0\). Thus, we can only use the vertices \(v_2\) and \(v'_3\) and the side \(L'_3\).

For formula (9) we get \(\Psi _{(P_1;v_2)}(T)=\Psi _{(P;v_2)}(T)\) and that \(\Psi _{(P_1;v'_3)}(T)\) is a constant, hence these vertices do not give rise to subsequent terms. For the side \(L'_3\), we get \(\mu _1=7/2\) and \(\Psi _{(P_1;\mu _1)}(c)=64\,c^2+225792\), so that there are two possibilities for \(c_1\). We choose \(c_1=21\sqrt{8}\sqrt{-1}\) and go on with Procedure 1.
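The two candidates for \(c_1\) can be checked directly (again a SymPy sketch, with the side polynomial transcribed from the text):

```python
import sympy as sp

c = sp.symbols('c')
# Side L'_3 of N(P_1) gives the polynomial 64*c**2 + 225792 for c_1;
# its roots are purely imaginary: +-42*sqrt(2)*I.
candidates = sp.solve(64*c**2 + 225792, c)
print(candidates)
# The chosen value 21*sqrt(8)*sqrt(-1) equals 42*sqrt(2)*I:
print(21*sp.sqrt(8)*sp.sqrt(-1))
```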

Let us consider \(P_2=P_1[c_1x^{\mu _1}+Y]\), whose Newton Polygon \(\mathcal {N}(P_2)\) has a side \(L''_3\) of the same co-slope as \(L'_3\) and another side \(L''_4\) of co-slope 5. As vertex \(v''_3\) gives \(\Psi _{(P_2;v''_3)}(q^{\mu })=q^{2\mu }+ 16384\), which has no real solutions \(\mu \), it cannot be used to find \(\mu _2\). Hence we must use \(L''_4\), which gives \(\mu _2=5\) and (after a trivial computation) \(c_2=-88984/65\).

Notice that, after performing the first two steps detailed above and getting \(x^2+21\sqrt{8}\sqrt{-1}\,x^{7/2}\), the fact that \(v''_3\) gives rise to a formula which no \(\mu >7/2\) can satisfy, together with the fact that it has ordinate 1, implies that, taking \(P^{*}={}^{{\mu _1}}\!{P_2}\), the equation \(P^{*}[y]=0\) is in solved form. Therefore, by Lemma 7 there exists a unique solution of \(P[y]=0\) of the form:

$$\begin{aligned} y(x)=x^2+21\sqrt{8}\sqrt{-1}\,x^{7/2}+o(x^{7/2}). \end{aligned}$$

Notice also that as \(P^*\in \mathbb {C}[[x^{1/2}]][Y]\), Lemma 7 guarantees as well that \(y(x)\in \mathbb {C}[[x^{1/2}]]\).

The pivot point of P with respect to y(x) is \(Q(y(x);P)=v''_3=(4.5,1)\). This means that, from now on, for each transformation \(P_i[c_ix^{\mu _i}+Y]\), the supporting line \(L_{(P_i;\mu _i)}\) will intersect \(\mathcal {N}(P_i)\) on its lowest side, and the topmost vertex of this side will always be the point (4.5, 1). Moreover, \(Y_2\) is the highest order appearing effectively in it, hence \(r=2\) in Definition 5. Since there are no monomials with \(Y_3\) or \(Y_4\) in P, we need only consider the pivot point relative to \(Y_5\), which is the point \(Q_{e_5}(y(x);P)=(13,1)\) (notice that \(\mathcal {C}_{e_5}(P_2)\) is in the region above and to the right of the dashed line). Applying Definition 5 formally we would get \(s(y(x);P)=\frac{5-2}{13-4.5}=6/17\).

As regards the growth of the coefficients of y(x), we transform it into a formal power series in order to apply Theorem 4. We do this by means of the ramification \(x=t^{2}\). The series y(t) is a solution of a \({\bar{q}}\)-difference equation \({\bar{P}}[y]=0\) derived from P with \({\bar{q}}=q^{1/2}\). The ramification induces a horizontal homothecy of ratio 2 on the cloud of points of P, \(P_1\) and \(P_2\). Hence \(s(y(t);{\bar{P}})=\frac{5-2}{2(13-4.5)}= 3/17\) is a bound for the \({\bar{q}}\)-Gevrey order of y(t).
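Both Gevrey-order bounds come from the same quotient in Definition 5; the arithmetic can be checked with exact rationals (the pivot abscissas and orders are taken from the text):

```python
from fractions import Fraction

order, r = 5, 2           # order of P and highest effective order r (Definition 5)
a_pivot = Fraction(9, 2)  # abscissa of the pivot point Q(y(x);P) = (4.5, 1)
a_e5 = Fraction(13)       # abscissa of Q_{e5}(y(x);P) = (13, 1)

# Formal application of Definition 5:
s = Fraction(order - r) / (a_e5 - a_pivot)
print(s)  # 6/17

# The ramification x = t**2 doubles all abscissas, halving the bound:
s_bar = Fraction(order - r) / (2 * (a_e5 - a_pivot))
print(s_bar)  # 3/17
```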