1 Introduction

The growth of the coefficients of formal solutions to functional equations has been widely studied in the literature. Results in this direction are known as Maillet type theorems. They take their name from the pioneering work of Maillet [18], where it was shown that any formal power series solution of a nonlinear algebraic ordinary differential equation is s-Gevrey, for some \(s\ge 0\), see Section 2 for definitions. Further early results in this context can be found in [19, 21, 25], where optimal bounds are interpreted as slopes of adequate Newton polygons associated to the given analytic equation. Recognizing optimal values for the Gevrey class of formal solutions is of utmost importance in the study of (Borel-, multi-)summability phenomena, a great tool to construct analytic solutions of the given problem which are asymptotic to the formal ones.

The increasing interest in these results has led to advances in other frameworks, for instance, on generalized power series solutions of ordinary differential equations [11], in singularly perturbed problems [3], integro-differential equations [22], moment PDEs [2, 16, 26], difference and q-difference equations [9, 14, 29], among others. We can also mention results in dynamical systems, such as the Gevrey character of formal curves invariant under analytic local diffeomorphisms [1, 17].

Convergence and divergence (Maillet type) theorems have also been developed for singular holomorphic partial differential equations (of non-Kowalevski type, Fuchsian, of totally non-characteristic type, among others). A good account of these results can be found in Gérard and Tahara's book [10] and the references therein. Moreover, optimal Gevrey bounds have been found for many families of PDEs in terms of slopes of adequate Newton polygons, see, e.g., [12, 13, 23, 24, 27] and the recent work [15]. The topic is an active subject of research where many problems on summability of solutions remain open.

On the other hand, results on singular PDEs are not directly applicable to other types of equations, for instance, those mixing irregular singularities and singular perturbations. An interesting example is the family of doubly singular equations

$$\begin{aligned} \epsilon ^\sigma z^{r+1}\frac{\partial y}{\partial z}=f(z,\epsilon ,y), \end{aligned}$$
(1.1)

where \(\sigma \) and r are positive integers and f is analytic at the origin. The equation exhibits an irregular singularity at \(z=0\) and a singular behavior as \(\epsilon \rightarrow 0\). In this case, the optimal Gevrey type is only revealed when the equation is considered in the variable \(t=z^r\epsilon ^\sigma \). In fact, the question of the existence of true solutions asymptotic to the formal ones was answered in [4] with the development of monomial summability. Later on, the extension of this notion to more variables led naturally to the study of equations of type

$$\begin{aligned} \epsilon ^\sigma x_1^{\alpha _1}\cdots x_n^{\alpha _n}\left( \lambda _1 x_1\frac{\partial y}{\partial x_1}+\cdots +\lambda _n x_n\frac{\partial y}{\partial x_n}\right) =f(x,\epsilon ,y), \end{aligned}$$
(1.2)

where \(\lambda _1,\dots ,\lambda _n>0\). This system is the higher-dimensional analogue of equation (1.1). In this case, the optimal Gevrey type is obtained by working with the variable \(t=\epsilon ^\sigma x_1^{\alpha _1}\cdots x_n^{\alpha _n}\). Moreover, novel results on the monomial summability of formal solutions are available in this framework [7], see also [28] for the case \(\alpha _j=0\).

Recently, the foundations of asymptotic expansions and summability with respect to an arbitrary analytic germ \(P:(\mathbb {C}^d,0)\rightarrow \mathbb {C}\) such that \(P(0)=0\) were established in [20]. In particular, P-k-Gevrey series were defined and systematized. Roughly speaking, a formal power series \(\widehat{y}\in \mathbb {C}[[x]]\), \(x=(x_1,\dots ,x_d)\) is P-k-Gevrey if it can be written as

$$\begin{aligned} \widehat{y}=\sum _{n=0}^\infty y_n P^n,\quad \text { where } \sup _{{x}\in D}|y_n({x})|\le CA^n n!^{k}, \end{aligned}$$
(1.3)

for some constants \(C,A>0\), and where the coefficients \(y_n\) are holomorphic in a common polydisc \(D\subseteq \mathbb {C}^d\) centered at the origin. This concept captures the idea of measuring the divergence of a series using the leading variable \(t=P(x)\). Moreover, it gives more precise information on the divergence rate of \(\widehat{y}\), inaccessible when only working with \(x_1,\dots ,x_d\) separately.

In this setting, we can pose in greater generality the family of problems

$$\begin{aligned} P(x)L_1(y)=F(x,y),\qquad L_1:=a_1(x)\partial _{x_1}+\cdots +a_d(x)\partial _{x_d}, \end{aligned}$$
(1.4)

with analytic coefficients, which include equations (1.1) and (1.2) as particular cases. The key point to obtain existence and uniqueness of formal solutions of (1.4) is that

$$\begin{aligned} P \text { divides } L_1(P). \end{aligned}$$

Geometrically, this condition means that the local hypersurface \(Z_P:=\{x\in (\mathbb {C}^d,0) : P(x)=0\}\) is invariant under the vector field \(L_1\). In this case, the solution turns out to be P-1-Gevrey, as was proved in [5, Theorem 1]. Surprisingly, this recovered many results on the Gevrey class of formal power series solutions of ODEs and PDEs that had been treated in the literature. Finally, results of this sort are a first step toward Borel P-summability, a difficult phenomenon far from being understood, see [6, 20].

The aim of this paper is to study a higher order analogue of (1.4), where once again, known results in the theory of singular PDEs fail to provide optimal bounds for the Gevrey type of formal solutions. For positive integers d, N, k, and complex coordinates \({x}=(x_1,\dots ,x_d)\in (\mathbb {C}^d,{0})\) and \({y}=(y_1,\dots ,y_N)\in \mathbb {C}^N\), we pose the system of PDEs

$$\begin{aligned} P({x})^{k}L_k({y})({x})+\cdots +P({x})L_1({y})({x})=F({x},{y}). \end{aligned}$$
(1.5)

Here F is a \(\mathbb {C}^N\)-valued holomorphic map defined near \(({0},{0})\in \mathbb {C}^d\times \mathbb {C}^{N}\), and

$$\begin{aligned} L_j:=\sum _{|\alpha |=j} a_{\alpha }^{(j)} ({x}) \partial _{\alpha },\qquad j=1,\dots ,k, \end{aligned}$$
(1.6)

are differential operators of order j with holomorphic coefficients \(a_{\alpha }^{(j)}\) near \(0\in \mathbb {C}^d\), see below for notations. Note that if x approaches \(Z_P\), the nature of (1.5) changes from a differential equation to an implicit one. Moreover, if the linear part of F at the origin \(D_yF(0,0)\) is an invertible matrix, P cannot be canceled from (1.5), so its zero set is a non-removable singular part of the equation. We mention that this equation is also inspired by its simple one-dimensional analogue

$$\begin{aligned} \tau ^k b_k(\tau )\partial _\tau ^k(u)+\cdots +\tau b_1(\tau )\partial _\tau (u)=f(\tau ,u), \end{aligned}$$

familiar from the point of view of Borel summability.

The previous work [5] studied Eq. (1.4) by direct recurrences, based on generalized Weierstrass division algorithms, and used modified Nagumo norms [3] to establish the Gevrey type in P of \(\widehat{y}\). However, this approach left several questions open. First, do formal solutions of these equations admit a canonical expansion in power series of P? Second, is it possible to treat the families (1.4) with the standard methods for nonlinear singular PDEs and Newton polygons? Here we answer both questions affirmatively for the more general equation (1.5). The method we explore here consists of adding a time variable \(t\in (\mathbb {C},0)\) to lift (1.5) to a system of PDEs in t and x. The new system will have a unique solution of the form \(\widehat{W}(t,x)=\sum _{n=0}^\infty y_n t^n\), where the \(y_n\) are as in (1.3). This trick produces an equation where known results on singular PDEs can be effectively used to find the Gevrey order in t of \(\widehat{W}\), and thus the P-Gevrey order of \(\widehat{y}(x)=\widehat{W}(P(x),x)\). Since the lifted equation determines the coefficients \(y_n\) naturally, this procedure guarantees a canonical decomposition of \(\widehat{y}\) as a power series in P. The idea was suggested in [5] by anonymous referees, whom we thank for their contribution.

To state our results, we associate to \(L_j\) and P the holomorphic function

$$\begin{aligned} L_j^{\star }(P):=\sum _{|\alpha |=j} a_{\alpha }^{(j)}({x}) (\partial _{x_1}P)^{\alpha _1}\dots (\partial _{x_d}P)^{\alpha _d},\qquad j=1,\dots ,k. \end{aligned}$$
(1.7)

In particular, \(L^{\star }_1(P)\) is simply \(L_1(P)\), but for \(j\ge 2\) these expressions generally differ. It turns out that these functions contain the key that leads to the existence, uniqueness, and Gevrey order for formal solutions of (1.5).

Theorem 1.1

Consider the system of partial differential equations (1.5) where \(F({0},{0})={0}\), and \(D_yF(0,0)\in GL _N(\mathbb {C})\) is an invertible matrix. If \(L_k\not \equiv 0\) and

$$\begin{aligned} P\hbox { divides }L^{\star }_j(P),\hbox { for every } j=1,\dots ,k, \end{aligned}$$
(1.8)

then equation (1.5) admits a unique formal power series solution \(\widehat{{y}}\in \mathbb {C}[[{x}]]^{N}\) with \(\widehat{y}(0)=0\). Moreover, \(\widehat{{y}}\) is a P-k-Gevrey series.

It is worth mentioning that condition (1.8) appears naturally in the problem. Indeed, if it is not satisfied, then the result is no longer valid, as can be seen from Example 6.1.

On the other hand, if 0 is not a singular point for P, i.e., \(\partial _{x_l}P(0)\ne 0\) for some l, the problem changes and Poincaré type conditions appear to guarantee existence and uniqueness of solutions. In fact, we obtain an analytic solution.

Theorem 1.2

Consider (1.5) where \(F({0},{0})={0}\) and \(D_yF(0,0)\in GL _N(\mathbb {C})\). If \(L_k^\star (P)(0)\ne 0\) and

$$\begin{aligned} \left[ \sum _{j=1}^k \frac{n!}{(n-j)!}L_j^\star (P)(0)\right] I_N - D_yF(0,0)\in GL _N(\mathbb {C}),\, \text { for all } n\ge 0, \end{aligned}$$
(1.9)

then (1.5) has a unique analytic solution \(\widehat{y}\in \mathbb {C}\{x\}^N\) with \(\widehat{y}(0)=0\). Here \(I_N\in \mathbb {C}^{N\times N}\) is the identity matrix.

We stress that the current technique can be applied to concrete equations and it is an idea worth exploring in future works, for instance, for problems involving nonlinear terms in the derivatives of u. In fact, obtaining P-Gevrey estimates for solutions of such problems is likely to be inaccessible by a direct approach.

The plan for the paper is as follows. Section 2 recalls the basics on Gevrey series in several variables and P-Gevrey series, including a natural relation between them (Proposition 2.6). The necessary tools to prove Theorems 1.1 and 1.2 are developed in Sects. 3 and 4. First, we give a Maillet type theorem for singular PDEs adapted for our purposes (Theorem 3.1), and then several lemmas of elementary nature. The main results are proved in Sect. 5. The case \(k=1\) is particularly simple and we include it in Corollaries 5.1 and 5.2 hoping that its proof helps to elucidate the ideas. The work concludes in Sect. 6 with several examples. In particular, we provide examples where condition (1.8) is necessary and others in which the Gevrey type given by Theorem 1.1 is attained.

1.1 Notation

\(\mathbb {N}\) denotes the set of non-negative integers and \(\mathbb {N}^*:=\mathbb {N}{\setminus }\{0\}\). For \(d\in \mathbb {N}^*\), \(\alpha =(\alpha _1,\ldots ,\alpha _d), \beta =(\beta _1,\ldots ,\beta _d)\in \mathbb {N}^d\), and \(s=(s_1,\dots ,s_d)\in \mathbb {R}_{\ge 0}^d\) we set

$$\begin{aligned} \alpha +\beta =(\alpha _1+\beta _1,\ldots ,\alpha _d+\beta _d),\quad |\alpha |=\alpha _1+\cdots +\alpha _d,\quad \alpha !^s=\alpha _1!^{s_1}\dots \alpha _d!^{s_d}. \end{aligned}$$

We write \(\alpha \le \beta \) if \(\alpha _j\le \beta _j\), for all \(1\le j\le d\), and \(\alpha <\beta \) if \(\alpha \le \beta \) and there is \(1\le j_0\le d\) such that \(\alpha _{j_0}<\beta _{j_0}\). If \(\beta \le \alpha \), we put \(\left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) =\left( {\begin{array}{c}\alpha _1\\ \beta _1\end{array}}\right) \cdots \left( {\begin{array}{c}\alpha _d\\ \beta _d\end{array}}\right) \), where \(\left( {\begin{array}{c}\alpha _j\\ \beta _j\end{array}}\right) \) is the binomial coefficient of \(\alpha _j\) and \(\beta _j\). The symbol \({0}\) stands for a vector with zero components. For \(1\le j\le d\), \({e}_j\in \mathbb {N}^d\) is the tuple with all its components being zero, except the position j which is 1.

We work in \((\mathbb {C}^d,{0})\) with local coordinates \({x}=(x_1,\dots ,x_d)\). If \(\alpha \in \mathbb {N}^{d}\), let

$$\begin{aligned} {x}^{\alpha }=x_1^{\alpha _1}\cdots x_d^{\alpha _d},\qquad \partial _{x_j}:=\partial _{{e}_j,{x}},\text { and }\quad \partial _{\alpha ,x}=\partial _{\alpha }=\frac{\partial ^{|\alpha |}}{\partial x_1^{\alpha _1}\cdots \partial x_d^{\alpha _d}}. \end{aligned}$$

We omit the subscript \({x}\) when the variables are clear from the context. Given a complex Banach space \(({E},\Vert \cdot \Vert )\), we write \({E}[[{x}]]\) and \({E}\{{x}\}\) for the spaces of formal and convergent power series in \({x}\) with coefficients in E, respectively. In our context, E will be \(\mathbb {C}^N\) or an adequate space of functions. If \({E}=\mathbb {C}\) we simply write \(\widehat{\mathcal {O}}=\mathbb {C}[[{x}]]\) and \(\mathcal {O}=\mathbb {C}\{{x}\}\). \(\mathcal {O}^*=\{U\in \mathcal {O} \,:\, U({0})\ne 0\}\) is the corresponding group of units.

Given \(\hat{f}=\sum a_\beta {x}^\beta \in \widehat{\mathcal {O}}\), \(o(\hat{f})\) denotes its order: if \(\hat{f}=\sum _{n=0}^\infty {f}_n\), \({f}_n=\sum _{|\beta |=n} a_\beta {x}^\beta \), is written as the sum of its homogeneous components, \(o(\hat{f})\) is the least integer k for which \({f}_k\ne 0\). Given a polyradius \(R=(R_1,\ldots ,R_d)\in \mathbb {R}_{>0}^{d}\), we write

$$\begin{aligned} D_{R}:=\{x\in \mathbb {C}^{d}: |x_j|<R_j,\ j=1,\ldots ,d\}, \end{aligned}$$

for the corresponding polydisc. If \(R=(r,\dots ,r)\), \(r>0\), we also write \(D_R=D_r^d\), the Cartesian product of one-dimensional discs. For \(N\in \mathbb {N}^*\) we write \(\mathcal {O}(\Omega ,\mathbb {C}^{N})\) (resp. \(\mathcal {O}_b(\Omega ,\mathbb {C}^{N})\)) for the set of \(\mathbb {C}^{N}\)-valued holomorphic (resp. holomorphic and bounded) functions on an open domain \(\Omega \subseteq \mathbb {C}^{d}\). We write \(\mathcal {O}(\Omega ):=\mathcal {O}(\Omega ,\mathbb {C})\) and \(\mathcal {O}_b(\Omega ):=\mathcal {O}_b(\Omega ,\mathbb {C})\) for short. Note that \(\mathcal {O}_b(\Omega ,\mathbb {C}^{N})\) endowed with the supremum norm is a Banach space.

2 Gevrey series

We start by recalling the main facts on Gevrey series in several variables and those with respect to germs of analytic functions. In particular, we include a relation between these notions which was first obtained in the previous article [5].

Definition 2.1

Let E be a complex Banach space and \(s=(s_1,\dots ,s_d)\in \mathbb {R}_{\ge 0}^d\). A series \(\hat{f}=\sum _{\beta \in \mathbb {N}^d} a_\beta {x}^\beta \in E[[{x}]]\) is s-Gevrey if we can find constants \(C,A>0\) such that

$$\begin{aligned} \Vert a_\beta \Vert \le CA^{|\beta |} \beta !^{s},\quad \text { for all } \beta \in \mathbb {N}^d. \end{aligned}$$

Equivalently, \(\sum _{\beta \in \mathbb {N}^d} a_\beta {x}^\beta /\beta !^s\in E\{x\}\). Note that \(s={0}\) means convergence. In the case \(p=s_1=\dots =s_d\ge 0\), since \(\beta !\le |\beta |!\le d^{|\beta |}\beta !\), \(\hat{f}\) is \((p,\dots ,p)\)-Gevrey if and only if there are constants \(C,A>0\) such that

$$\begin{aligned} \Vert a_\beta \Vert \le CA^{|\beta |} |\beta |!^p,\quad \beta \in \mathbb {N}^d. \end{aligned}$$
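For a quick sanity check of the inequality \(\beta !\le |\beta |!\le d^{|\beta |}\beta !\) used in this equivalence, one may run the following short Python sketch; the range (\(d=3\), components \(\beta _j\le 6\)) is an assumed finite sample, chosen only for illustration.

```python
from itertools import product
from math import factorial, prod

# Brute-force check of beta! <= |beta|! <= d**|beta| * beta!, used to compare the two
# formulations of (p,...,p)-Gevrey bounds, here for d = 3 and components up to 6.
d, top = 3, 6
for beta in product(range(top + 1), repeat=d):
    b_fact = prod(factorial(b) for b in beta)     # beta! = beta_1! * ... * beta_d!
    n = sum(beta)                                 # |beta|
    assert b_fact <= factorial(n) <= d**n * b_fact
```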

We denote by \(E[[x]]_{s}\) the set of s-Gevrey series with coefficients in E. It is straightforward to check that this space is closed under sums and partial derivatives, and it contains \(E\{x\}\). It is also closed under products when E is a Banach algebra. Moreover, it is stable under linear changes of variables, in view of the following result.

Lemma 2.2

(Lemma 2.1, [12]) Given \(p\ge 0\), \(\hat{f}({x})\in E[[{x}]]_{(p,\dots ,p)}\) if and only if there exists \(M\in GL _d(\mathbb {C})\) such that \(\hat{f}(M{x})\in E[[{x}]]_{(p,\dots ,p)}\).

Consider now a germ P at \({0}\in \mathbb {C}^d\) of a \(\mathbb {C}\)-valued holomorphic function, i.e., an element \(P\in \mathcal {O}{\setminus }\{0\}\), and assume that \(P({0})=0\). There are equivalent definitions for Gevrey series with respect to P, with coefficients in E, see [6, 20]. We focus on the case \(E=\mathbb {C}\) and follow the simple characterization given in [6, Lemma 4.1].

Definition 2.3

Given \(s\ge 0\), \(\hat{f}\in \widehat{\mathcal {O}}\) is said to be a P-s-Gevrey series if there are a polyradius r, constants \(C,A>0\), and a sequence \(\{f_n\}_{n\in \mathbb {N}}\subseteq \mathcal {O}_b(D_{r})\) such that

$$\begin{aligned} \hat{f}=\sum _{n=0}^\infty f_n P^n,\quad \text { where } \sup _{{x}\in D_r}|f_n({x})|\le CA^n n!^s. \end{aligned}$$
(2.1)

We will use the notation \(\widehat{\mathcal {O}}^{P,s}\) for the set of P-s-Gevrey series. A series \((\hat{f}_1,\dots ,\hat{f}_N)\in \widehat{\mathcal {O}}^N\) is P-s-Gevrey if every component is so.

Remark 2.4

The expansion (2.1) is not unique. In fact, for each injective linear form \(\ell :\mathbb {N}^d\rightarrow \mathbb {R}\) there is one such decomposition via a generalized Weierstrass division theorem, see [5, 20]. In general, the \(f_n\) obtained from \(\hat{f}\) under this process are merely formal power series. Therefore, in our definition we are implicitly assuming that these coefficients are convergent in a common polydisc at \({0}\in \mathbb {C}^d\). Moreover, the growth of \(f_n\) does not depend on the decomposition used, thus the notion of P-s-Gevrey series is well-defined, see [6, Lemma 4.1] for details.

We recall some basic properties on P-s-Gevrey series below.

Proposition 2.5

(Corollary 4.2, Lemma 4.3, [6]) Let \(s\ge 0\) and \(P,Q\in \mathcal {O}{\setminus }\{0\}\) be such that \(P({0})=Q({0})=0\). The following statements hold:

  1. 1.

    \(\widehat{\mathcal {O}}^{P,s}\) is stable under sums, products and partial derivatives, and \(\mathcal {O}\subset \widehat{\mathcal {O}}^{P,s}\).

  2. 2.

    For any \(k\in \mathbb {N}^*\), \(\widehat{\mathcal {O}}^{P^k,ks}=\widehat{\mathcal {O}}^{P,s}\).

  3. 3.

    If Q divides P, then \(\widehat{\mathcal {O}}^{P,s}\subseteq \widehat{\mathcal {O}}^{Q,s}\). In particular, if \(Q=U\cdot P\), \(U\in \mathcal {O}^*\), then \(\widehat{\mathcal {O}}^{P,s}=\widehat{\mathcal {O}}^{Q,s}\).

  4. 4.

    Let \(\phi :(\mathbb {C}^d,{0})\rightarrow (\mathbb {C}^d,{0})\) be analytic, \(\phi ({0})={0}\), and assume \(P\circ \phi \) is not identically zero. If \(\hat{f}\in \widehat{\mathcal {O}}^{P,s}\), then \(\hat{f}\circ \phi \in \widehat{\mathcal {O}}^{P\circ \phi ,s}\).

  5. 5.

    If \(P({x})={x}^\alpha \), \(\alpha \in \mathbb {N}^d{\setminus }\{{0}\}\), then \(\hat{f}=\sum a_{{\beta }}{x}^{{\beta }}\in \widehat{\mathcal {O}}^{{x}^\alpha ,s}\) if and only if there are constants \(C,A>0\) satisfying

    $$\begin{aligned} |a_{{\beta }}|\le CA^{|{\beta }|}\min \{ \beta _j!^{s/\alpha _j} : j=1,\dots ,d, \alpha _j\ne 0\} ,\quad {\beta }\in \mathbb {N}^d. \end{aligned}$$
    (2.2)

    Note that the variables \(x_j\) for which \(\alpha _j=0\) can be regarded as regular parameters.

The previous proposition characterizes P-s-Gevrey series when P is a monomial, directly from the growth of the coefficients of the series. Although it is not yet known whether a similar property is true for an arbitrary P, we have the following result from [5, Proposition 3] that we include for the sake of completeness.

Proposition 2.6

Consider \(P\in \mathcal {O}\) with \(o(P)=k\ge 1\). Then, a P-s-Gevrey series is a \((s/k,\dots ,s/k)\)-Gevrey series.

Proof

Writing \(P=\sum _{j=k}^\infty P_j\) as the sum of homogeneous polynomials, where \(P_k\ne 0\), take \({a}\in \mathbb {C}^d\) such that \(P_k({a})\ne 0\), and choose \(A\in \text {GL}_d(\mathbb {C})\) having a as its first column. If we set \(Q({x})=P(A{x})\) and we write it as sum of its homogeneous components \(Q=\sum Q_j\), then \(Q_j({x})=P_j(A{x})\), and \(Q_k({x})=P_k({a})x_1^k+\cdots \), i.e., \(o(Q)=k\) and \(Q_k(1,0,\dots ,0)\ne 0\).

Given a P-s-Gevrey series \(\hat{f}\), the series \(\hat{f}_0({x})=\hat{f}(A{x})=\sum b_\beta {x}^\beta \) is a Q-s-Gevrey series, thanks to Proposition 2.5, 4. above. We consider the change of variables

$$\begin{aligned} x_1=z_1,\quad x_2=z_1z_2,\quad \dots ,\quad x_d=z_1z_d. \end{aligned}$$
(2.3)

If \(R({z})=Q({x})\) and \(\hat{f}_1({z})=\hat{f}_0({x})\), we see that \(\hat{f}_1\) is an R-s-Gevrey series. Now,

$$\begin{aligned} R({z})=Q(z_1,z_1z_2,\dots ,z_1z_d)=\sum _{j=k}^\infty z_1^j Q_j(1,z_2,\dots ,z_d)=z_1^k U({z}), \end{aligned}$$

where U is a unit, because \(U({0})=Q_k(1,0,\dots ,0)\ne 0\). Using this equation and Proposition 2.5, 2. above, we find that \(\hat{f}_1\) is \(z_1^k\)-s-Gevrey, or equivalently, a \(z_1\)-s/k-Gevrey series. Let us write \(z'=(z_2,\dots ,z_d)\). Since

$$\begin{aligned} \hat{f}_1({z})=\sum _{\beta \in \mathbb {N}^d} b_\beta z_1^{|\beta |} z_2^{\beta _2}\cdots z_d^{\beta _d}\\ ={\mathop {\mathop {\sum }_{(n,{\gamma })\in \mathbb {N}\times \mathbb {N}^{d-1}}}\limits _{n\ge |{\gamma }|}} b_{n-|{\gamma }|,{\gamma }} z_1^n {z}'^{{\gamma }}, \end{aligned}$$

we can find constants \(C,A>0\) such that \(|b_{n-|{\gamma }|,{\gamma }}|\le CA^{n+|{\gamma }|}n!^{s/k}\). Therefore,

$$\begin{aligned} |b_\beta |\le CA^{\beta _1+2\beta _2+\cdots +2\beta _d} |\beta |!^{s/k},\quad \text { } \beta \in \mathbb {N}^d, \end{aligned}$$

i.e., \(\hat{f}_0\) is \((s/k,\dots ,s/k)\)-Gevrey. The same is true for \(\hat{f}\) due to Lemma 2.2. \(\square \)
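As an illustration of the reduction used in this proof, the following SymPy sketch carries out the steps for an assumed sample germ \(P=x_1x_2+x_1^3\) (so \(k=2\)) and checks that \(R({z})=z_1^kU({z})\) with U a unit; the choices of P, a and A are ours and serve only as an example.

```python
import sympy as sp

x1, x2, z1, z2 = sp.symbols('x1 x2 z1 z2')

# Assumed sample germ: P = x1*x2 + x1**3, of order k = 2, with P_2 = x1*x2.
def P(u, v):
    return u * v + u**3
k = 2

# a = (1, 1) satisfies P_2(a) = 1 != 0; take A in GL_2(C) with first column a,
# e.g. A = [[1, 0], [1, 1]], so Q(x) = P(Ax) = P(x1, x1 + x2).
Q = sp.expand(P(x1, x1 + x2))

# Change of variables (2.3): x1 = z1, x2 = z1*z2.
R = sp.expand(Q.subs({x1: z1, x2: z1 * z2}))

# R factors as z1**k * U(z) with U a unit, since U(0) = Q_k(1, 0, ..., 0) != 0.
U = sp.cancel(R / z1**k)
assert sp.expand(R - z1**k * U) == 0
assert U.subs({z1: 0, z2: 0}) != 0      # here U = 1 + z1 + z2, so U(0) = 1
```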

3 A preliminary Maillet-type theorem for singular PDEs

The aim of this section is to establish the existence, uniqueness, and Gevrey class (in the time variable t) of formal solutions of a family of singular PDEs. These include the equations that will be obtained by lifting (1.5). The results presented here will be the key to prove Theorems 1.1 and 1.2.

More precisely, fixing \(m,d,N\in \mathbb {N}^*\), \(p,k\in \mathbb {N}\) and local coordinates \((t,x)\in (\mathbb {C}\times \mathbb {C}^d,0)\), we consider the system of equations

$$\begin{aligned} {[}c_p({x})(t\partial _t)^p+\cdots +c_1({x})(t\partial _t)+c_0({x})]u=B({x})t^k+G(x)(t,D^mu), \end{aligned}$$
(3.1)

for an unknown \(u=u(t,x)\in \mathbb {C}^N\). The coefficients in (3.1) are assumed to be holomorphic and bounded near the origin, say \(c_0,\dots ,c_p\in \mathcal {O}_b(D_r^d,\mathbb {C}^{N\times N})\) and \(B\in \mathcal {O}_b(D_r^d,\mathbb {C}^N)\) for a fixed \(r>0\). Moreover, \(G({x})(t,D^mu)\) is the operator

$$\begin{aligned} u(t,{x})\mapsto G({x})(t,D^mu):=G_0(t,{x},u)+\sum _{(b,\alpha )\in I_m} G_{b,\alpha }(t,{x}) t^b\partial _t^b\partial _{\alpha ,{x}} u, \end{aligned}$$

acting on \(\mathbb {C}[[t,{x}]]^N\), where:

  • \(I_m:=\{(b,\alpha )\in \mathbb {N}\times \mathbb {N}^d \,:\, b+|\alpha |\le m\}\) is a finite set of indices.

  • \(G_0\in \mathcal {O}_b(D_r\times D_r^d\times D_r^N,\mathbb {C}^N)\) and \(G_{b,\alpha }\in \mathcal {O}_b(D_r\times D_r^d)\), for all \((b,\alpha )\in I_m\).

  • The previous maps have the convergent Taylor expansions

    $$\begin{aligned} G_0(t,x,u)=\sum _{j=0}^\infty F_{0,j}(x,u) t^j,\quad \text { and }\quad G_{b,\alpha }(t,{x})=\sum _{j=1}^\infty g_{b,\alpha ,j}({x})t^j, \end{aligned}$$

    respectively. We assume that

    $$\begin{aligned} F_{0,j}(x,u)=\sum _{\gamma \in \mathbb {N}^N, |{\gamma }|\ge 2} F_{0,j,{\gamma }}({x}) u^{{\gamma }}, \end{aligned}$$

    has only non-linear terms in u, where \(F_{0,j,\gamma }\in \mathcal {O}_b(D_r^d,\mathbb {C}^N)\) and \(g_{b,\alpha ,j}\in \mathcal {O}_b(D_r^d)\). Thus, the non-linear terms in u of G are collected in \(G_0\) whereas the remaining terms are linear in u and its derivatives.

Equation (3.1) belongs to a family of scalar equations (\(N=1\)) treated in [10, Chapter 6] for \(k=1\). In that case, the Gevrey class is given by the maximum of the quantities

$$\begin{aligned} s_p(t^{j+b}\partial _t^b\partial _{\alpha }):=\max \left\{ 0,\frac{b+|\alpha |-p}{j}\right\} , \end{aligned}$$
(3.2)

taken over the terms appearing on the right-hand side of (3.1). Our adaptation below will be obtained from this statement, which is Theorem 6.3.1 and Corollary 6.3.3 (1) for \(p=0\) in [10] (and \(d=1\) in their notation).
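As a toy illustration of (3.2) (our own assumed example), consider the scalar equation \(u=t/(1-x)+t\,\partial _x^2u\), which fits (3.1) with \(p=0\), \(k=1\), and a single linear term with \(b=0\), \(|\alpha |=2\), \(j=1\); formula (3.2) then predicts Gevrey order \(s=2\) in t, and the following SymPy sketch confirms that this order is attained.

```python
import sympy as sp

x = sp.symbols('x')
M = 8

# Toy instance of (3.1) with p = 0, k = 1, N = 1: u = t/(1 - x) + t * d^2u/dx^2.
# Matching powers of t gives u_1 = 1/(1 - x) and u_n = d^2 u_{n-1}/dx^2 for n >= 2.
u = {1: 1 / (1 - x)}
for n in range(2, M + 1):
    u[n] = sp.simplify(sp.diff(u[n - 1], x, 2))

# u_n(0) = (2n - 2)!, so the series is exactly 2-Gevrey in t, as (3.2) predicts.
assert all(u[n].subs(x, 0) == sp.factorial(2 * n - 2) for n in range(1, M + 1))
```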

Theorem 3.1

(Gerard–Tahara) A sufficient condition to guarantee the existence and uniqueness of a solution of Eq. (3.1) of the form

$$\begin{aligned} \widehat{u}(t,x)=\sum _{n=k}^\infty u_n(x) t^n\in \mathcal {O}_b(D_\rho ^d,\mathbb {C}^N)[[t]],\quad \text { for some } \rho >0, \end{aligned}$$
(3.3)

is that

$$\begin{aligned} c_p(0) \text { and } c_p(0)n^p+\cdots +c_1(0)n+c_0(0)\text { are invertible for all } n\ge 0. \end{aligned}$$
(3.4)

In this case, \(\widehat{u}\) is s-Gevrey in t, where

$$\begin{aligned} s:=\sup _{(j,b,\alpha )\in J} s_p(t^{j+b}\partial _t^b\partial _{\alpha }), \end{aligned}$$
(3.5)

and \(J=\{ (j,b,\alpha )\in \mathbb {N}^*\times \mathbb {N}\times \mathbb {N}^d : g_{b,\alpha ,j}({x})\not \equiv 0\}\).

Proof

If we substitute \(\widehat{u}(t,x)=\sum _{n=0}^\infty u_n(x) t^n\) into (3.1) and equate coefficients in corresponding powers of t, we find that

$$\begin{aligned} c_0(x)u_0(x)=F_{0,0}(x,u_0(x)). \end{aligned}$$
(3.6)

Moreover, for \(n\ge 1\) we have the recurrence

$$\begin{aligned}&{[}c_p(x)n^p+\cdots +c_1(x)n+c_0(x)]u_{n}(x)\nonumber \\&\quad =\delta _{n,k}B(x)+\sum _{l=0}^{n-1} \sum _{(b,\alpha )\in I_m} \left( {\begin{array}{c}l\\ b\end{array}}\right) b! g_{b,\alpha ,n-l}(x) \partial _\alpha (u_l) +\text { l.o.t}, \end{aligned}$$
(3.7)

where l.o.t. are the non-linear terms in \(u_0,u_1,\dots ,u_{n-1}\) coming from \(G_0(t,x,\widehat{u}(t,x))\), and \(\delta _{n,k}\) is the Kronecker delta.

Condition (3.4) allows us to determine the coefficients \(u_{n}(x)\), \(n\ge 1\), uniquely from (3.7), thanks to the following lemma. We postpone its proof to the end of the section.

Lemma 3.2

Consider \(c_0,\dots ,c_p\in \mathcal {O}_b(D_r^d,\mathbb {C}^{N\times N})\) such that (3.4) holds. Then there is \(0<\rho \le r\) such that \(c_p(x)n^p+\cdots +c_1(x)n+c_0(x)\) is invertible, for all \(x\in D_\rho ^d\) and \(n\ge 0\). Moreover, there is a constant \(M>0\) such that

$$\begin{aligned} \sup _{x\in D_\rho ^d} \left\| (c_p(x)n^p+\cdots +c_1(x)n+c_0(x))^{-1}\right\| \le \frac{M}{n^p},\qquad \text { for all } n\ge 1. \end{aligned}$$
(3.8)

Here \(\Vert B\Vert =\max _{1\le i\le N} \sum _{j=1}^N |B_{i,j}|\), for \(B=(B_{ij})\in \mathbb {C}^{N\times N}\).

We have seen that \(u_n\in \mathcal {O}_b(D_\rho ^d,\mathbb {C}^N)\) can be found recursively from \(u_0\). Now, to determine \(u_0(x)\) we apply the implicit function theorem to (3.6): since \(F_{0,0}\) has only nonlinear terms in u, the linear part of this equation in u at \(x=0\) is \(c_0(0)\), which is invertible due to (3.4) for \(n=0\). Therefore, (3.6) has a unique analytic solution \(u_0(x)\in \mathbb {C}\{x\}^N\) such that \(u_0(0)=0\). Since \(u_0=0\) also solves this equation, the initial term of \(\widehat{u}\) is \(u_0(x)\equiv 0\). Moreover, (3.7) shows that \(u_0=u_1=\cdots =u_{k-1}=0\) while \(u_k(x)=(c_p(x)k^p+\cdots +c_1(x)k+c_0(x))^{-1} B(x)\). In this way, we see that the system (3.1) has a unique formal power series solution of the form (3.3).

We proceed with the Gevrey type. The result holds for \(k=1\) since the majorant argument in [10] can be modified for vector equations in a straightforward way, see also Remark 3.3 below. It is worth remarking that the term \(-p\) appears in (3.2) because of the inequality (3.8) (in [10, p. 180] it is used in the equivalent form \(\Vert L_nu_n\Vert _r\ge (\sigma _0/2)^p n^p \Vert u_n\Vert _r\)).

The case \(k>1\) is done using the change of variables

$$\begin{aligned} u(t,x)=t^{k-1}v(t,x). \end{aligned}$$

We can check that \(\widehat{u}\) in (3.3) solves (3.1) if and only if \(\widehat{v}=t^{-(k-1)}\widehat{u}\) solves an equation of the same type but with \(k=1\). In fact, a direct calculation using Leibniz rule to compute \((t\partial _t)^b(t^{k-1}v)\) and \(t^b\partial _t^b(t^{k-1}v)\) shows that u satisfies (3.1) if and only if v satisfies

$$\begin{aligned} {[}\widetilde{c}_p({x})(t\partial _t)^p+\cdots +\widetilde{c}_1({x})(t\partial _t) +\widetilde{c}_0({x})]v=B({x})t+\widetilde{G}(x)(t,D^mv). \end{aligned}$$

The new coefficients are \(\widetilde{c}_l=\sum _{j=l}^p \left( {\begin{array}{c}j\\ l\end{array}}\right) (k-1)^{j-l}c_j\), \(l=0,1,\dots ,p\),

$$\begin{aligned} \widetilde{G}(x)(t,D^mv)=\widetilde{G}_0(t,x,v)+\!\sum _{(b,\alpha )\in I_m} \sum _{l=0}^{b} \left( {\begin{array}{c}b\\ l\end{array}}\right) \left( {\begin{array}{c}k-1\\ b-l\end{array}}\right) (b-l)! G_{b,\alpha }(t,{x}) t^{l} \partial _t^{l}\partial _\alpha (v), \end{aligned}$$

where \(G_0(t,x,t^{k-1}v)=t^{k-1}\widetilde{G}_0(t,x,v)\) and

$$\begin{aligned} \widetilde{G}_0(t,x,v)=\sum _{j=0}^\infty \widetilde{F}_{0,j}(t,x,v) t^j,\quad \widetilde{F}_{0,j}(t,x,v):={\mathop {\mathop {\sum }_{\gamma \in \mathbb {N}^N}}\limits _{|{\gamma }|\ge 2}} F_{0,j,{\gamma }}({x}) t^{(k-1)(|\gamma |-1)} v^{{\gamma }}. \end{aligned}$$

They remain holomorphic and bounded near the origin. Moreover, the condition (3.4) holds in this case since \(\widetilde{c}_p(x)=c_p(x)\) and

$$\begin{aligned} \sum _{l=0}^p n^l\widetilde{c}_l(x) =\sum _{j=0}^p (k-1+n)^{j}c_j(x). \end{aligned}$$

Thus these matrices are invertible at \(x=0\), for all \(n\ge 0\). By the case \(k=1\), \(\widehat{v}\) is s-Gevrey, where s is the maximum of \(s_p(t^{j+l}\partial _t^l \partial _\alpha )\) over the indices \((j,l,\alpha )\) such that \(0\le l\le b\), \(b+|\alpha |\le m\), and \(g_{b,\alpha ,j}({x})\not \equiv 0\). But (3.2) shows that

$$\begin{aligned} \max _{0\le l\le b} s_p(t^{j+l}\partial _t^l \partial _\alpha )=s_p(t^{j+b}\partial _t^b \partial _\alpha ). \end{aligned}$$

Therefore, s is given by (3.5). Since multiplication by \(t^{k-1}\) does not change the Gevrey order of a series, \(\widehat{u}\) is also s-Gevrey as we wanted to prove. \(\square \)
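For completeness, the conjugation identity \((t\partial _t)^j(t^{k-1}v)=t^{k-1}\sum _{l=0}^{j}\binom{j}{l}(k-1)^{j-l}(t\partial _t)^l v\) behind the coefficients \(\widetilde{c}_l\) above can be verified symbolically; the following SymPy sketch does so for assumed sample values of k and p.

```python
import sympy as sp

t = sp.symbols('t')
v = sp.Function('v')
k, p = 3, 2   # assumed sample values, for illustration only

def theta(expr, times=1):
    """Apply the operator t*d/dt the given number of times."""
    for _ in range(times):
        expr = t * sp.diff(expr, t)
    return expr

# (t d/dt)^j (t^{k-1} v) = t^{k-1} * sum_l binom(j, l) (k-1)^{j-l} (t d/dt)^l v.
for j in range(0, p + 1):
    lhs = theta(t**(k - 1) * v(t), j)
    rhs = t**(k - 1) * sum(sp.binomial(j, l) * (k - 1)**(j - l) * theta(v(t), l)
                           for l in range(0, j + 1))
    assert sp.simplify(sp.expand(lhs - rhs)) == 0
```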

Remark 3.3

The invertibility of \(c_p(0)n^p+\cdots +c_1(0)n+c_0(0)\) means that

$$\begin{aligned} C(\lambda ):=\det (c_p(0)\lambda ^p+\cdots +c_1(0)\lambda +c_0(0))\ne 0,\qquad \text { for } \lambda =n\in \mathbb {N}. \end{aligned}$$

Since \(c_p(0)\) is also invertible, the function \(C(\lambda )\) is a polynomial in \(\lambda \) of degree exactly Np. If we denote its roots by \(\lambda _1,\dots ,\lambda _{Np}\in \mathbb {C}\), we are requiring that \(\lambda _j\ne n\), for all possible j and n. This is equivalent to the existence of a constant \(\sigma >0\) such that

$$\begin{aligned} \left| n-\lambda _j\right| > \sigma n,\qquad \text { for all } j=1,\dots , Np,\, n\in \mathbb {N}, \end{aligned}$$

which is the classical Poincaré condition, c.f., [10, Theorem 6.3.1].

Remark 3.4

An equivalent form of equation (3.1) is

$$\begin{aligned} {[}c'_p({x})t^p\partial _t^p+\cdots +c'_1({x})t\partial _t+c'_0({x})]u=B({x})t^k+G(x)(t,D^mu). \end{aligned}$$
(3.9)

In this case, the hypothesis on the matrices is that

$$\begin{aligned} c_p'(0)\text { and } \sum _{j=0}^p n(n-1)\cdots (n-j+1)\,c_j'(0)\text { are invertible for all } n\ge 0. \end{aligned}$$

This can be checked recalling the Stirling numbers of the first kind \(s(j,l)\in \mathbb {Z}\), \(1\le l\le j\), which are defined by the expansion

$$\begin{aligned} \lambda (\lambda -1)\cdots (\lambda -j+1)=\sum _{l=1}^j s(j,l) \lambda ^l,\,\, \text { and satisfying }\,\, t^j\partial _t^j=\sum _{l=1}^j s(j,l) (t\partial _t)^l. \end{aligned}$$

Writing the left-hand side of (3.9) in terms of the operators \((t\partial _t)^j\), it takes the form of (3.1) with

$$\begin{aligned} c_p(x)=c_p'(x),\quad c_l(x)=\sum _{j=l}^p s(j,l) c_j'(x),\quad l=0,1,\dots ,p-1. \end{aligned}$$

Thus \(\sum _{l=0}^p c_l(x)n^l=\sum _{j=0}^p n(n-1)\cdots (n-j+1)\,c_j'(x)\) as required.
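The displayed Stirling number identities can be checked symbolically on monomials \(t^n\); the following SymPy sketch does so for assumed small values of j and n, using `stirling(j, l, kind=1, signed=True)` for the signed Stirling numbers of the first kind s(j, l).

```python
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

t, lam = sp.symbols('t lambda')
n = 7   # assumed test exponent for the monomial t**n

for j in range(1, 5):
    # Falling factorial: lam*(lam-1)*...*(lam-j+1) = sum_l s(j,l) * lam**l.
    falling = sp.expand(sp.Mul(*[lam - i for i in range(j)]))
    assert sp.expand(falling - sum(stirling(j, l, kind=1, signed=True) * lam**l
                                   for l in range(1, j + 1))) == 0

    # Operator identity t^j d^j/dt^j = sum_l s(j,l) (t d/dt)^l, checked on t**n.
    lhs = t**j * sp.diff(t**n, t, j)
    rhs = sum(stirling(j, l, kind=1, signed=True) * n**l * t**n
              for l in range(1, j + 1))
    assert sp.simplify(lhs - rhs) == 0
```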

We conclude this section with the proof of the lemma.

Proof of Lemma 3.2

Since \(c_0(0), c_p(0)\) are invertible we can choose \(\rho >0\) such that \(c_0(x), c_p(x)\) are invertible for \(x\in D_\rho ^d\) and \(c_0(x)^{-1}, c_p(x)^{-1}\in \mathcal {O}_b(D_\rho ^d,\mathbb {C}^{N\times N})\).

We recall that if \(B=(B_{ij})\in \mathbb {C}^{N\times N}\) is such that \(\Vert B\Vert <1\) for a matrix norm \(\Vert \cdot \Vert \), then \(I_N-B\) is invertible with inverse given by the Neumann series \((I_N-B)^{-1}=\sum _{n=0}^\infty B^n\). Moreover \(\Vert (I_N-B)^{-1}\Vert \le {1}/{(1-\Vert B\Vert )}.\) In particular, this holds for \(B\in \mathcal {O}_b(D_\rho ^d,\mathbb {C}^{N\times N})\) and the supremum norm \(\Vert B\Vert _{\rho }:=\sup _{x\in D_\rho ^d} \Vert B(x)\Vert \), where \(\Vert \cdot \Vert \) is as in the statement of the lemma.
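As a small numerical sanity check of this Neumann series bound, with the maximum absolute row sum norm of the lemma, one may run the following NumPy sketch on an assumed random matrix rescaled so that \(\Vert B\Vert =1/2\).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
B = rng.standard_normal((N, N))
B *= 0.5 / np.linalg.norm(B, ord=np.inf)          # rescale so that ||B|| = 0.5 < 1

inv = np.linalg.inv(np.eye(N) - B)
partial_sums = sum(np.linalg.matrix_power(B, j) for j in range(80))
assert np.allclose(inv, partial_sums)             # (I - B)^{-1} = sum_j B^j
assert np.linalg.norm(inv, ord=np.inf) <= 1.0 / (1.0 - np.linalg.norm(B, ord=np.inf)) + 1e-10
```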

Consider an integer \(n>L=\Vert c_p^{-1}\Vert _{\rho } \sum _{j=0}^{p-1} \Vert c_j\Vert _{\rho }\). If \(x\in D_\rho ^d\), then

$$\begin{aligned} \left\| \left( \frac{c_0(x)}{n^p}+\frac{c_1(x)}{n^{p-1}}+\cdots +\frac{c_{p-1}(x)}{n}\right) c_p^{-1}(x) \right\| \le \sum _{j=0}^{p-1} \frac{\Vert c_j\Vert _{\rho }}{n} \Vert c_p^{-1}\Vert _{\rho }<1. \end{aligned}$$

By the previous paragraph, we find that

$$\begin{aligned} c_p(x)n^p+\cdots +c_1(x)n+c_0(x)=\left( I_N+ \left( \frac{c_0(x)}{n^p}+\cdots +\frac{c_{p-1}(x)}{n}\right) c_p^{-1}(x)\right) n^p c_p(x), \end{aligned}$$

is invertible, for all \(x\in D^d_\rho \). Moreover, we have the bound

$$\begin{aligned}&\Vert (c_p(x)n^p+\cdots +c_1(x)n+c_0(x))^{-1}\Vert \le \frac{\Vert c_p^{-1}\Vert _\rho /n^p}{1-\left\| \left( \frac{c_0(x)}{n^p} +\cdots +\frac{c_{p-1}(x)}{n}\right) c_p^{-1}(x)\right\| }\\&\le \frac{\Vert c_p^{-1}\Vert _\rho /n^p}{1-\Vert c_p^{-1}\Vert _\rho \left( \frac{\Vert c_0\Vert _{\rho }}{n^p}+\cdots +\frac{\Vert c_{p-1}\Vert _{\rho }}{n}\right) }\le \frac{1}{an^p-\left( \Vert c_0\Vert _{\rho }+\cdots +\Vert c_{p-1}\Vert _{\rho }n^{p-1}\right) }, \end{aligned}$$

where \(a=1/\Vert c_p^{-1}\Vert _\rho \). This shows that (3.8) holds for a large M. Note also that the denominator is indeed positive since, by hypothesis, \(\Vert c_p^{-1}\Vert _{\rho } \sum _{j=0}^{p-1} \Vert c_j\Vert _{\rho } n^j\le n^{p-1} \sum _{j=0}^{p-1} \Vert c_j\Vert _{\rho }/a<n^p\). For the remaining integers \(1\le n\le L\), by (3.4) we can shrink \(\rho \) and enlarge M if necessary to ensure that \(c_p(x)n^p+\cdots +c_1(x)n+c_0(x)\) is invertible, for all \(x\in D_\rho ^d\), and that (3.8) still holds, as required. \(\square \)

4 Some technical results

The proof of Theorems 1.1 and 1.2 requires some technical lemmas that we collect here. They contain elementary properties on the derivatives of powers of a function and on suitable changes of variables.

Although we are mainly interested in holomorphic coefficients, we state the following two results for arbitrary formal power series. We recall that according to the notation in (1.7) we have that

$$\begin{aligned} \partial _{\alpha }^\star (P):=(\partial _{x_1}P)^{\alpha _1}\cdots (\partial _{x_d}P)^{\alpha _d},\qquad \alpha =(\alpha _1,\ldots ,\alpha _d)\in \mathbb {N}^d{\setminus }\{0\}. \end{aligned}$$
(4.1)

Lemma 4.1

Consider \(P\in \mathbb {C}[[{x}]]\), \(\alpha \in \mathbb {N}^{d}{\setminus }\{{0}\}\) and an integer \(n\ge 1\). Then,

$$\begin{aligned} \partial _{\alpha }(P^n)=\sum _{j=1}^{n}\frac{n!}{(n-j)!}P^{n-j}\cdot A_{\alpha ,j}, \end{aligned}$$
(4.2)

where each \(A_{\alpha ,j}\) is a polynomial in derivatives of P, and it does not depend on n. In particular, \(A_{\alpha ,1}=\partial _\alpha (P)\), \(A_{\alpha ,j}=0\) if \(j>|\alpha |\) and

$$\begin{aligned} A_{\alpha ,|\alpha |}=\partial _{\alpha }^\star (P). \end{aligned}$$
(4.3)

Proof

We apply induction on \(|\alpha |\). The result is valid for \(|\alpha |=1\) and

$$\begin{aligned} A_{{e}_l,1}:=\partial _{{e}_l}(P),\qquad l=1,\ldots ,d, \end{aligned}$$
(4.4)

since \(\partial _{{e}_l}(P^n)=nP^{n-1}\partial _{{e}_l}(P)\). If we assume the result is valid up to some \(|\alpha |\), the induction argument shows that

$$\begin{aligned} \partial _{\alpha +{e}_l}(P^n)&=\sum _{j=1}^{n} \frac{n!}{(n-j)!} \partial _{{e}_l}(P^{n-j} A_{\alpha ,j})\\&=\sum _{j=1}^{n-1} \frac{n!}{(n-j-1)!}P^{n-j-1} \partial _{{e}_l}(P) A_{\alpha ,j}+\sum _{j=1}^n \frac{n!}{(n-j)!}P^{n-j} \partial _{{e}_l}( A_{\alpha ,j}), \end{aligned}$$

for \(l=1,\dots ,d\). A rearrangement of the terms in the previous expression leads to (4.2) for \(\alpha +e_l\) where

$$\begin{aligned} A_{\alpha +{e}_l,1}&=\partial _{{e}_l}(A_{\alpha ,1}), \end{aligned}$$
(4.5)
$$\begin{aligned} A_{\alpha +{e}_l,j}&=\partial _{{e}_l}(A_{\alpha ,j})+\partial _{{e}_l}(P)\cdot A_{\alpha ,j-1},\qquad j=2,\dots , n. \end{aligned}$$
(4.6)

Then (4.2) holds for \(|\alpha |+1\). The formula follows from the principle of induction.

On the other hand, it is clear from (4.4) and (4.5) that \(A_{\alpha ,1}=\partial _{\alpha }(P)\) is valid. In addition to this, if \(j>|\alpha |\), the recurrence (4.6) implies that \(A_{\alpha ,j}=0\). Finally, if \(j=|\alpha |\), (4.6) takes the form

$$\begin{aligned} A_{{e}_l,1}=\partial _{{e}_l}(P),\qquad A_{\alpha +{e}_l,|\alpha |+1}=\partial _{{e}_l}(P)\cdot A_{\alpha ,|\alpha |}, \end{aligned}$$

from which (4.3) follows. \(\square \)
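A minimal SymPy check of (4.2) and (4.3) in the case \(|\alpha |=2\), for an assumed sample germ \(P=x_1^2x_2+x_2^3\) and \(n=6\) (here \(A_{\alpha ,1}=\partial _{\alpha }(P)\) and \(A_{\alpha ,2}=\partial _{x_1}P\,\partial _{x_2}P\)):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
P = x1**2 * x2 + x2**3          # assumed concrete germ, for illustration only
n = 6

# Formula (4.2) for alpha = (1,1): A_{alpha,1} = d^2P/dx1dx2 and
# A_{alpha,2} = (dP/dx1)*(dP/dx2), in agreement with (4.3).
lhs = sp.diff(P**n, x1, x2)
rhs = (n * P**(n - 1) * sp.diff(P, x1, x2)
       + n * (n - 1) * P**(n - 2) * sp.diff(P, x1) * sp.diff(P, x2))
assert sp.expand(lhs - rhs) == 0
```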

Remark 4.2

Equation (4.6) describes a recursion leading to each \(A_{\alpha ,j}\). We can give closed formulas for them using the multivariate Faà di Bruno formula [8, p. 505]. Indeed, consider \(h({x})=f(g^{(1)}({x}),\ldots , g^{(n)}({x}))\), where

$$\begin{aligned} f(y_1,\ldots ,y_n)=y_1\cdots y_n,\qquad g^{(1)}({x})\equiv \ldots \equiv g^{(n)}({x})\equiv P({x}). \end{aligned}$$

We observe that, given any multiindex \({\lambda }\in \mathbb {N}^n\) with all its components being 0 or 1, i.e., \(\lambda \in \{0,1\}^n\), and such that \(|{\lambda }|=j\), then one has that \(\partial _{{\lambda }}(f)(P,\dots ,P)=P^{n-j}\). Otherwise, \(\partial _{{\lambda }}(f)=0\). Then, for every \(\alpha \in \mathbb {N}^d{\setminus }\{0\}\) one has

$$\begin{aligned} \partial _{\alpha }(P^n)=\sum _{j=1}^{|\alpha |} P^{n-j} {\mathop {\mathop {\sum }_{{\lambda }\in \{0,1\}^{n}}}\limits _{|{\lambda }|=j}} \left[ \sum _{s=1}^{|\alpha |}\sum _{p_s(\alpha ,{\lambda })} \alpha ! \prod _{r=1}^{s} \frac{(\partial _{{\ell }_r}P)^{|{k}_{r}|}}{{k}_r!({\ell }_r!)^{|{k}_r|}}\right] , \end{aligned}$$
(4.7)

where

$$\begin{aligned} p_s(\alpha ,{\lambda })=\{ ({k}_1,\ldots ,{k}_s;{\ell }_1,\ldots ,{\ell }_s) \in (\mathbb {N}^{n})^{s}\times (\mathbb {N}^{d})^{s}:\, |{k}_i|>0,\\ {0}<{\ell }_1<\cdots <{\ell }_{s},\quad \sum _{i=1}^{s} {k}_i={\lambda },\quad \sum _{i=1}^{s}|{k}_i|{\ell }_i=\alpha \}. \end{aligned}$$

Note there are \(\left( {\begin{array}{c}n\\ j\end{array}}\right) \) n-tuples \({\lambda }\in \{0,1\}^{n}\) such that \(|{\lambda }|=j\) and each of them is obtained from \({e}_1+{e}_2+\cdots +{e}_j\) by permuting the corresponding variables. Fixing one such \({\lambda }\), if \(({k}_1,\ldots ,{k}_s;{\ell }_1,\ldots ,{\ell }_s)\in p_s(\alpha ,{\lambda })\), we see that \(s\le \sum _{i=1}^s |{k}_i|=|{\lambda }|=j\). Moreover, the term in brackets in (4.7) is independent of \({\lambda }\). Indeed, the previous permutation gives a bijective correspondence between \(p_{s,j}(\alpha ):=p_s(\alpha ,{e}_1+{e}_2+\cdots +{e}_j)\) and \(p_s(\alpha ,{\lambda })\). Therefore, (4.7) simplifies to

$$\begin{aligned} \partial _{\alpha }(P^n)= \sum _{j=1}^{|\alpha |} \left( {\begin{array}{c}n\\ j\end{array}}\right) P^{n-j} \left[ \sum _{s=1}^{j}\sum _{p_{s,j}(\alpha )} \alpha ! \prod _{r=1}^{s} \frac{(\partial _{{\ell }_r}P)^{|{k}_{r}|}}{{k}_r!({\ell }_r!)^{|{k}_r|}}\right] , \end{aligned}$$
(4.8)

giving explicit formulas for \(A_{\alpha ,j}\).

Lemma 4.3

Fix \(m\ge 1\) and \(h, P\in \mathbb {C}[[{x}]]\). Consider the differential operators \(L_j\) in (1.6) and the associated functions \(L^{\star }_j(P)\) in (1.7). If P divides \(L_j^\star (P)\), for all \(j=1,\dots ,m\), then \(P^m\) divides \(\sum _{j=1}^{m} P^{j-1} L_j(hP^m)\).

Proof

Using the multivariate Leibniz rule,

$$\begin{aligned} \partial _{\alpha }(hP^m)=\sum _{0\le \beta \le \alpha } \left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) \partial _{\alpha -\beta }(h) \partial _\beta (P^m), \end{aligned}$$
(4.9)

we see that

$$\begin{aligned} P^{j-1}L_j(hP^m)= P^{j-1}\sum _{|\alpha |=j} a_{\alpha }^{(j)} \sum _{0\le \beta \le \alpha } \left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) \partial _{\alpha -\beta }(h) \partial _\beta (P^m). \end{aligned}$$

By Lemma 4.1 we can write

$$\begin{aligned} \partial _{\beta }(P^m)=\sum _{l=1}^{|\beta |}\frac{m!}{(m-l)!}P^{m-l}\cdot A_{\beta ,l}, \end{aligned}$$

since \(|\beta |\le |\alpha |=j\le m\) and \(A_{\beta ,l}=0\) for \(l>|\beta |\). To prove the statement, we analyze each one of the terms

$$\begin{aligned} a_{\alpha }^{(j)} \left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) \partial _{\alpha -\beta }(h) \frac{m!}{(m-l)!} A_{\beta ,l} P^{j-1+m-l}, \end{aligned}$$
(4.10)

whose sum gives \(P^{j-1} L_j(hP^m)\). We distinguish two cases:

  • If \(|\beta |\le j-1\), then \(P^m\) divides (4.10) since \(j-1+m-l\ge j-1+m-|\beta |\ge m\).

  • If \(|\beta |=j\), it holds that \(\beta =\alpha \). On the one hand, if \(l<j\), then \(j-1+m-l>m-1\) and we are done. Otherwise, \(l=j\) and we are left with the term

    $$\begin{aligned} P^{j-1} \sum _{|\alpha |=j} a_{\alpha }^{(j)} h A_{\alpha ,|\alpha |} P^{m-j}=h L_j^\star (P)P^{m-1}, \end{aligned}$$

    which, by hypothesis, is also divisible by \(P^m\).

\(\square \)
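The conclusion of Lemma 4.3 can also be tested symbolically. The following SymPy sketch does so on assumed sample data: \(P=x_1^2+x_2^2\), \(m=2\), \(L_1\) the radial vector field and \(L_2\) the Laplacian, for which P divides both \(L_1^\star (P)=2P\) and \(L_2^\star (P)=4P\).

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

P = x1**2 + x2**2
m = 2
h = 1 + x1 + x2**2                      # an arbitrary polynomial factor, for illustration

def L1(u):
    return x1 * sp.diff(u, x1) + x2 * sp.diff(u, x2)

def L2(u):
    return sp.diff(u, x1, 2) + sp.diff(u, x2, 2)

expr = sp.expand(L1(h * P**m) + P * L2(h * P**m))   # sum_{j=1}^{m} P^{j-1} L_j(h P^m)
q = sp.cancel(expr / P**m)
assert sp.fraction(q)[1] == 1           # the quotient is a polynomial: P^m divides the sum
assert sp.expand(q * P**m - expr) == 0
```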

The final lemma computes the derivatives of functions after a change of variables having P as a holomorphic local coordinate.

Lemma 4.4

Let \(P\in \mathbb {C}\{{x}\}\) such that \(P(0)=0\) and \(\partial _{x_1}P(0)\ne 0\), and consider the change of variables \(\xi :(\mathbb {C}^d,0)\rightarrow (\mathbb {C}^d,0)\) given by

$$\begin{aligned} \xi _1=P({x}),\quad \xi _j=x_j,\quad j=2,\dots ,d. \end{aligned}$$
(4.11)

If \(f(\xi ({x}))=f(P(x),x_2,\dots ,x_d)\) is holomorphic and \(\alpha \in \mathbb {N}^d{\setminus }\{0\}\), then

$$\begin{aligned} \partial _{\alpha ,{x}}(f)=\partial _{\alpha }^\star (P)\partial _{\xi _1}^{|\alpha |}(f)+\sum _{j=1}^{|\alpha |-1} \left[ \sum _{*} B_{j,\beta }^{\alpha }\cdot \partial _{\xi _1}^j\partial _{\beta ,\xi }(f)\right] + \overline{\delta }_{\alpha }\partial _{\alpha ,\xi }(f), \end{aligned}$$

where the inner sum is taken over all \(\beta \in \mathbb {N}^{d-1}\) such that \((0,\beta )\le \alpha \) and \(|\beta |\le |\alpha |-j\). The \(B_{j,\beta }^{\alpha }\) are polynomials in the derivatives of P and \(\overline{\delta }_{\alpha }:=(\overline{\delta }_{1,1})^{\alpha _1}\cdots (\overline{\delta }_{1,d})^{\alpha _d}\), where \(\overline{\delta }_{i,j}:=1-\delta _{i,j}\) and \(0^0=1\).

Proof

Note that \(\xi \) is indeed a holomorphic change of variables since \(\xi (0)=0\) and its Jacobian determinant is precisely \(\partial _{x_1}(P)(0)\ne 0\). To prove the lemma we proceed by induction on \(|\alpha |\). In the case \(|\alpha |=1\), the chain rule shows that

$$\begin{aligned} \partial _{x_l}(f)=\partial _{x_l}(P)\partial _{\xi _1}(f)+\overline{\delta }_{1,l} \partial _{\xi _l}(f),\quad l=1,\dots ,d, \end{aligned}$$
(4.12)

proving this case. If we assume the result is valid up to some \(|\alpha |\), taking \(l=1,\dots ,d\), using the induction hypothesis and formula (4.12) we find that

$$\begin{aligned}&\partial _{\alpha +e_l,x}(f) =\partial _{x_l}(\partial _{\alpha ,x}(f))\\&\quad =\partial _{x_l}(\partial _{\alpha }^\star (P)\partial _{\xi _1}^{|\alpha |}(f))+\sum _{j=1}^{|\alpha |-1} \left[ {\mathop {\mathop {\sum }_{(0,\beta )\le \alpha }}\limits _{|\beta |\le |\alpha |-j}} \partial _{x_l}( B_{j,\beta }^{\alpha }\cdot \partial _{\xi _1}^j\partial _{\beta ,\xi }(f) )\right] + \overline{\delta }_{\alpha } \partial _{x_l}(\partial _{\alpha ,\xi }(f)), \end{aligned}$$

which is equal to

$$\begin{aligned}&\partial _{x_l}(P)\partial _{\alpha ,x}^\star (P) \partial _{\xi _1}^{|\alpha |+1}(f)+\overline{\delta }_{1,l}\partial _{\alpha }^\star (P)\partial _{\xi _l}\partial _{\xi _1}^{|\alpha |}(f) +\partial _{x_l}(\partial _{\alpha }^\star (P)) \partial _{\xi _1}^{|\alpha |}(f)+\nonumber \\&\sum _{j=1}^{|\alpha |-1} \!\! {\mathop {\mathop {\sum }_{(0,\beta )\le \alpha }} \limits _{|\beta |\le |\alpha |-j}} \!\! \partial _{x_l}( B_{j,\beta }^{\alpha }) \partial _{\xi _1}^j\partial _{\beta ,\xi }(f) + B_{j,\beta }^{\alpha }\left( \partial _{x_l}(P)\partial _{\xi _1}^{j+1}\partial _{\beta ,\xi }(f)+ \overline{\delta }_{1,l}\partial _{\xi _1}^{j}\partial _{\beta +e_l,\xi }(f)\right) \nonumber \\&\qquad +\overline{\delta }_{\alpha } \partial _{x_l}(P)\partial _{\xi _1}\partial _{\alpha ,\xi }(f)+\overline{\delta }_{\alpha }\overline{\delta }_{1,l}\partial _{\xi _l}\partial _{\alpha ,\xi }(f). \end{aligned}$$
(4.13)

Note that the first and last terms are \(\partial _{\alpha +e_l,x}^\star (P) \partial _{\xi _1}^{|\alpha |+1}(f)\) and \(\overline{\delta }_{\alpha +e_l}\partial _{\alpha +e_l,\xi }(f)\), as required. On the other hand, the remaining terms have the form \(B_{k,\gamma }^{\alpha +e_l}\partial _{\xi _1}^k \partial _{\gamma ,\xi }(f)\) with \(1\le k\le |\alpha |\) and \((0,\gamma )\le \alpha +e_l\), where the \(B_{k,\gamma }^{\alpha +e_l}\) can be found recursively. By the nature of the terms in the sum (4.13) it is clear that each \(B_{k,\gamma }^{\alpha +e_l}\) is a polynomial in the derivatives of P. The principle of induction allows us to conclude the proof. \(\square \)

Remark 4.5

Another way to prove Lemma 4.4 is to apply the Faà di Bruno formula [8, p. 505] to \(f(P({x}),x_2,\dots ,x_d)\). On the other hand, for our purposes it is not necessary to specify the recurrences that determine the coefficients \(B^{\alpha }_{j,\beta }\). However, for \(\alpha =n e_1\) and \(l=1\), we have that \(\beta =0\) for all j and (4.13) takes the form

$$\begin{aligned} \partial _{x_1}^{n+1}(f)=&(\partial _{x_1}P)^{n+1}\partial _{\xi _1}^{n+1}(f)+ \partial _{x_1}((\partial _{x_1}P)^{n})\partial _{\xi _1}^n(f)\\&+\sum _{j=1}^{n-1} \left[ \partial _{x_1}(B_{j,0}^{ne_1}) \partial _{\xi _1}^j(f)+B_{j,0}^{ne_1} \partial _{x_1}(P) \partial _{\xi _1}^{j+1}(f)\right] . \end{aligned}$$

Setting \(B_{n,0}^{ne_1}=(\partial _{x_1}P)^n\), we find that

$$\begin{aligned} B_{1,0}^{(n+1)e_1}=\partial _{x_1}(B_{1,0}^{ne_1}),\quad B_{j,0}^{(n+1)e_1}=\partial _{x_1}(B_{j,0}^{ne_1})+\partial _{x_1}(P)B_{j-1,0}^{ne_1},\qquad j=2,\dots ,n. \end{aligned}$$

This is recurrence (4.6) in Lemma 4.1. Thus \(B_{1,0}^{ne_1}=A_{ne_1,1}=\partial _{x_1}^n(P)\), \(B_{j,0}^{ne_1}=A_{ne_1,j}\), \(j=2,\dots ,n-1\), and

$$\begin{aligned} \partial _{x_1}^n(f)=(\partial _{x_1}P)^n \partial _{\xi _1}^n (f)+\sum _{j=1}^{n-1} A_{ne_1,j} \partial _{\xi _1}^j(f),\qquad \text { for all } n\ge 1. \end{aligned}$$
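As with Lemma 4.1, the simplest nontrivial case of this formula can be checked symbolically. The following SymPy sketch verifies \(\partial _{x_1}^2 f=(\partial _{x_1}P)^2\partial _{\xi _1}^2f+(\partial _{x_1}^2P)\partial _{\xi _1}f\) after the substitution \(\xi _1=P({x})\), \(\xi _2=x_2\), for assumed sample choices of P and f.

```python
import sympy as sp

x1, x2, xi1, xi2 = sp.symbols('x1 x2 xi1 xi2')

# Assumed data: P with dP/dx1(0) != 0 and a concrete f, for illustration only.
P = x1 + x1**2 * x2 + x2**2
f = sp.exp(xi1) * (1 + xi2**2)

# Case alpha = 2*e_1 of the formula above, evaluated at xi1 = P(x), xi2 = x2.
lhs = sp.diff(f.subs({xi1: P, xi2: x2}), x1, 2)
rhs = (sp.diff(P, x1)**2 * sp.diff(f, xi1, 2)
       + sp.diff(P, x1, 2) * sp.diff(f, xi1)).subs({xi1: P, xi2: x2})
assert sp.simplify(lhs - rhs) == 0
```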

5 The proof of Theorems 1.1 and 1.2

The idea behind the proofs is simple. For Theorem 1.1 we add a variable t and, working in \((t,x)\in (\mathbb {C}\times \mathbb {C}^{d},0)\), we search for a PDE satisfied by the series \(\widehat{W}=\sum y_n t^n\), where \(\widehat{y}=\sum y_n P^n\) is the solution to the initial equation (1.5). For Theorem 1.2, we can take \(P=\xi _1\) as one of the variables, and write (1.5) as an equation in the new coordinates. In both cases the existence, uniqueness and Gevrey type will be obtained from Theorem 3.1.

Proof of Theorem 1.1

We point out that if \(F(x,0)\equiv 0\), then the unique formal power series solution is zero. Thus we assume \(f(x):=F(x,0)\not \equiv 0\). We will write

$$\begin{aligned} F({x},{y})=f({x})+A({x}){y}+H({x},{y}),\quad H({x},{y})=\sum _{I\in \mathbb {N}^N, |I|\ge 2} A_I({x}) {y}^I, \end{aligned}$$
(5.1)

where \(f\in \mathcal {O}_b(D_r^d,\mathbb {C}^N)\) with \(f({0})={0}\), \(A\in \mathcal {O}_b(D_r^d,\mathbb {C}^{N\times N})\), and \(H\in \mathcal {O}_b(D_r^d\times D_r^N,\mathbb {C}^{N})\) has no constant nor linear terms in its Taylor expansion with respect to \({y}\) at the origin, and where \(r>0\) is small. Since \(A({0})=D_yF(0,0)\) is invertible, by continuity we can assume \(A({x})\) is also invertible for all \(x\in D_r^d\).

We search for a formal P-series solution of (1.5) in the form

$$\begin{aligned} \widehat{{y}}({x})=\sum _{n=0}^\infty {y}_n({x})P({x})^n, \end{aligned}$$
(5.2)

with the \({y}_n({x})\in \mathcal {O}_b(D_\rho ^d,\mathbb {C}^N)\), for all \(n\ge 0\), for a common \(\rho >0\). The rest of the proof is divided into several steps.

Step 1: We determine the terms \({y}_0({x}),\dots ,{y}_{k-1}({x})\) inductively solving adequate implicit equations. For the coefficient \({y}_0\), setting \(x=0\) in (1.5) and recalling that \(F(0,0)=0\), we require that \({y}_0({0})={0}\). Now we search for a holomorphic solution of

$$\begin{aligned} f({x})+A({x}){y}_0({x})+H({x},{y}_0({x}))=0. \end{aligned}$$
(5.3)

Since A(0) is invertible, shrinking \(r>0\) if necessary, the implicit function theorem leads to the existence of such a solution \({y}_0({x})\in \mathcal {O}_b(D_r^d,\mathbb {C}^N)\) with \({y}_0({0})={0}\). Then, considering the change of variables \({y}={y}_0+{w}_0\) in (1.5), we find that \(w_0\) satisfies the system

$$\begin{aligned} \sum _{j=1}^k P^{j}L_j({w}_0) =F_0({x},{w}_0)=g_0({x})+B_0({x}){w}_0+H_0({x},{w}_0), \end{aligned}$$
(5.4)

with \(F_0({0},{0})={0}\), the matrix \(B_0({0})\) is invertible, and the Taylor expansion of \(H_0\in \mathcal {O}_b(D_\rho ^d\times D_\rho ^N,\mathbb {C}^N)\) with respect to \({w}_0\) has no constant nor linear terms. Here, we have written

$$\begin{aligned} g_0:=-\sum _{j=1}^k P^{j}L_j({y}_0),\qquad B_0:=A+A_0, \end{aligned}$$

where \(A_0\in \mathcal {O}_b(D_\rho ^d,\mathbb {C}^{N\times N})\) satisfies \(A_0({0})={0}\). These maps are obtained from

$$\begin{aligned}&H({x},{y}_0({x})+{w}_0)-H({x},{y}_0({x}))=A_0({x}){w}_0+H_0({x},{w}_0),\\&A_0({x})=\left. D_{w_0}( H({x},{y}_0({x})+{w}_0))\right| _{w_0=0}=D_{y}H({x},{y}_0({x})). \end{aligned}$$

We now proceed recursively, by means of the change of variables

$$\begin{aligned} {w}_{m-1}={w}_{m}+{y}_{m} P^{m},\qquad m=1,\dots ,k-1, \end{aligned}$$

and determining functions \(g_m\), \(A_m, B_m\) and \(H_m\) defined by

$$\begin{aligned}&A_{m}{w}_{m}+H_{m}({x},{w}_{m})=H_{m-1}({x}, {w}_{m}+{y}_{m} P^{m})-H_{m-1}({x},{y}_{m} P^{m})\nonumber \\&g_m=-\sum _{j=1}^k P^{j}L_j({y}_m P^m),\qquad B_m=B_{m-1}+A_m=A+(A_0+\cdots +A_m). \end{aligned}$$
(5.5)

Note that \(H_m\) has no constant or linear terms in its Taylor expansion in \({w}_m\) near the origin and that \(B_m({0})=A({0})\) is an invertible matrix. Indeed, we see from (5.5) that \(A_{m}=\left. D_{ {w}_{m}}(H_{m-1}({x},{w}_{m}+{y}_{m}P^{m}))\right| _{w_{m}=0}=D_{{w}_{m-1}} H_{m-1}({x},{y}_{m} P^{m}),\) so \(A_m({0})={0}\) as required.

To proceed we need to define \({y}_m\) in a consistent way. If \({y}_{m-1}\), \(g_{m-1}\), \(B_{m-1}\) and \(H_{m-1}\) have been found, we set \(y_m\) as the unique holomorphic solution near the origin of the system

$$\begin{aligned} P^{-m}g_{m-1}+B_{m-1}{y}_{m}+P^{-m}H_{m-1}({x},{y}_{m} P^{m})={0}. \end{aligned}$$

This equation has holomorphic coefficients on some neighborhood of the origin. Indeed, the function \(g_{m-1}\) is divisible by \(P^{m}\) thanks to Lemma 4.3 since \(g_{m-1}=-P\cdot \sum _{j=1}^{m-1} P^{j-1}L_j(y_{m-1}P^{m-1})-\sum _{j=m}^k P^{j}L_j(y_{m-1}P^{m-1})\). Also, if we write

$$\begin{aligned} H_m({x},{w}_m)=\sum _{|I|\ge 2} A_{I,m}({x}) {w}_m^I, \end{aligned}$$

then

$$\begin{aligned} P^{-m}H_{m-1}({x},{y}_{m} P^{m})=\sum _{|I|\ge 2} A_{I,m-1}({x}) P^{m(|I|-1)} {y}_m^I, \end{aligned}$$

which also has holomorphic coefficients in \({x}\) that vanish at \({x}={0}\). Therefore, \({y}_m\) is determined by means of the implicit function theorem.

At this point it follows from a direct recursive argument that \(w_m\) satisfies

$$\begin{aligned} \sum _{j=1}^k P^{j}L_j({w}_{m}) =F_m({x},{w}_m):=g_{m}({x})+B_{m}({x}){w}_{m}+H_{m}({x},{w}_{m}), \end{aligned}$$
(5.6)

where \(F_m({0},{0})=0\), \(B_m({0})\) is invertible, and \(H_m({x},{w}_m)\) has no constant nor linear terms in its Taylor expansion in \({w}_m\) in a neighborhood of the origin. In conclusion, after collecting all the previous changes of variables we find that \({w}:={w}_{k-1}\), given by \({w}={y}-(y_0+y_1P+\cdots +y_{k-1} P^{k-1})\), satisfies

$$\begin{aligned} \sum _{j=1}^k P^{j}L_j({w}) =g({x})+B({x}){w}+H'({x},{w}), \end{aligned}$$
(5.7)

where \(g:=g_{k-1}\in \mathcal {O}_b(D_r^d,\mathbb {C}^N)\) is divisible by \(P^k\), \(B:=B_{k-1}\in \mathcal {O}_b(D_r^d,\mathbb {C}^{N\times N})\) with B(0) invertible, and \(H'=H_{k-1}\in \mathcal {O}_b(D_r^d\times D_r^N,\mathbb {C}^{N})\) has no constant nor linear terms in its Taylor expansion with respect to \({w}\), and where \(r>0\) has been reduced when required. Therefore, we can reduce the problem to finding a solution of (5.7) having the form

$$\begin{aligned} \widehat{{w}}({x})=\sum _{n=k}^\infty y_n({x}) P({x})^n. \end{aligned}$$

Step 2: We study the action of the operator \(P^jL_{j}\) on \(\widehat{{w}}({x})\) for each \(j=1,\dots ,k\). By the multivariate Leibniz rule (4.9) and Lemma 4.1 we see that

$$\begin{aligned} L_j(\widehat{w})=&\sum _{|\alpha |=j} a_{\alpha }^{(j)} \sum _{n=k}^\infty \partial _{\alpha }(y_nP^n)\\ =&\sum _{n=k}^\infty L_j({y}_n) P^n + \sum _{n=k}^\infty \sum _{|\alpha |=j} a_{\alpha }^{(j)} \sum _{0<\beta \le \alpha } \left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) \partial _{\alpha -\beta } (y_n) \sum _{l=1}^{|\beta |} \frac{n!}{(n-l)!} P^{n-l} A_{\beta ,l},\\ =&\sum _{n=k}^\infty L_j({y}_n) P^n +S_j, \end{aligned}$$

where the first sum corresponds to \(\beta ={0}\). Note that the last inner sum is taken over \(1\le l \le |\beta |\) since \(A_{\beta ,l}=0\) if \(l>|\beta |\), and \(|\beta |\le |\alpha |=j\le k\le n\). Let us write \(S_j=S_{j,1}+S_{j,2}\), where \(S_{j,2}\) retains the terms corresponding to \(\beta =\alpha \). Then

$$\begin{aligned} S_{j,2}=S_{j,3}+\sum _{n=k}^\infty {y}_n \frac{n!}{(n-j)!} P^{n-j} L_j^\star (P), \end{aligned}$$

where \(S_{j,3}\) contains the terms in which \(l<j\) and \(S_{j,2}-S_{j,3}\) in the previous expression corresponds to \(l=j\), according to the definition of \(L_j^\star \). Therefore,

$$\begin{aligned} S_{j,3} = \sum _{n=k}^\infty \sum _{l=1}^{j-1} \left[ \sum _{|\alpha |=j} a_{\alpha }^{(j)} A_{\alpha ,l}\right] {y}_n \frac{n!}{(n-l)!} P^{n-l}. \end{aligned}$$

On the other hand, we can organize the terms in \(S_{j,1}\) to write

$$\begin{aligned} S_{j,1}&= \sum _{n=k}^\infty \sum _{m=1}^{j-1} \sum _{l=1}^{m} \left[ {\mathop {{\sum }_{|\alpha |=j,}}\limits _{|\beta |=m, \beta<\alpha }} \left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) a_{\alpha }^{(j)} \partial _{\alpha -\beta } (y_n) A_{\beta ,l} \right] \frac{n!}{(n-l)!} P^{n-l} \\&= \sum _{n=k}^\infty \sum _{l=1}^{j-1} \left[ \sum _{m=l}^{j-1} {\mathop {\mathop {\sum }_{|\alpha |=j,}}\limits _{|\beta |=m, \beta <\alpha }} \left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) a_{\alpha }^{(j)} \partial _{\alpha -\beta } (y_n) A_{\beta ,l}\right] \frac{n!}{(n-l)!} P^{n-l}, \end{aligned}$$

by grouping those indices \(\beta \) with the same norm.

Step 3: We search for a partial differential equation satisfied by

$$\begin{aligned} \widehat{W}(t,{x}):=\sum _{n=k}^\infty {y}_n({x}) t^n, \end{aligned}$$
(5.8)

from the system (5.7) satisfied by \(\widehat{{w}}({x})=\widehat{W}(P({x}),{x})\). Indeed, recalling (1.8) we can write \(L^{\star }_{j}(P)=\phi _j\cdot P\), for some holomorphic function \(\phi _j\) near the origin. Therefore, noticing that \(\frac{n!}{(n-l)!} P^{n-l}=\partial _t^l(t^n)|_{t=P}\), for \(l\le n\), we find

$$\begin{aligned} P^j\sum _{n=k}^\infty {y}_n \frac{n!}{(n-j)!} P^{n-j} L_j^\star (P)=\left. \phi _j t^{j+1}\partial _t^j(\widehat{W})\right| _{t=P}. \end{aligned}$$

Let us consider the differential operator

$$\begin{aligned} K({x})&(t,D^kW):=-H'({x},W)+\sum _{j=1}^k \Bigg [ t^j L_j(W)+\phi _j t^{j+1}\partial _t^j(W)\\&+\sum _{l=1}^{j-1} \Bigg (\sum _{|\alpha |=j} a_{\alpha }^{(j)} A_{\alpha ,l} t^j\partial _t^l(W)+ \sum _{m=l}^{j-1} {\mathop {\mathop {\sum }_{|\alpha |=j,}}\limits _{|\beta |=m, \beta <\alpha } } \left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) a_{\alpha }^{(j)} A_{\beta ,l} \partial _{\alpha -\beta }t^j\partial _t^l(W)\Bigg )\Bigg ]. \end{aligned}$$

Then \(\widehat{{w}}(x)=\sum _{n=k}^\infty y_n P^n\) satisfies (5.7) if and only if \(\widehat{W}\) in (5.8) satisfies

$$\begin{aligned} B(x)W=-h({x})t^k+K(x)(t,D^kW), \end{aligned}$$

where \(g=h \cdot P^k\). Theorem 3.1 for \(p=0\) proves that this equation has a unique formal power series solution \(\widehat{W}(t,{x})\) of the form (5.8). Therefore, we obtain the existence and uniqueness of the solution \(\widehat{w}\) of equation (5.7), and hence of the main equation (1.5).

Finally, Theorem 3.1 also asserts that \(\widehat{W}(t,{x})\) is s-Gevrey in t, where s in (3.2) is computed from the derivatives appearing in K. In this case, \(s_0(t^j \partial _{\alpha })=|\alpha |/j=1\) for the terms in \(t^j L_j\),

$$\begin{aligned} s_0(t^{j+1}\partial _t^j)&=j,\quad s_0(t^{j}\partial _t^l)=\frac{l}{j-l},\text { and }\quad s_0(t^{j}\partial _t^l \partial _{\alpha -\beta })=\frac{l+|\alpha |-|\beta |}{j-l}, \end{aligned}$$

where \(1\le j\le k\), \(l\le j-1\), \(|\alpha |=j\), \(0<\beta \le \alpha \), and \(l\le |\beta |\le j-1\). Thus

$$\begin{aligned} s_0(t^{j}\partial _t^l)\le \frac{j-1}{j-l}\le j-1,\quad \text { and }\quad s_0(t^{j}\partial _t^l \partial _{\alpha -\beta })\le \frac{|\alpha |}{j-l}\le j. \end{aligned}$$

Therefore, the maximum s of these values is k, and it is attained at the term \(\phi _k t^{k+1}\partial _t^k\) when \(\phi _k\ne 0\). If \(\phi _k=0\), we still have \(s=k\), as it is also attained at the terms \(a^{(k)}_{\alpha } A_{\beta ,|\beta |}\partial _{\alpha -\beta } t^k\partial _t^{k-1}\), where \(|\alpha |=k\) and \(\beta <\alpha \) with \(|\beta |=k-1\). But \(L_k\ne 0\), so there is \(\alpha _0\in \mathbb {N}^d\) with \(|\alpha _0|=k\) and \(a_{\alpha _0}^{(k)}\ne 0\). Recalling formula (4.2), we see that at least one of these terms appears in K, thus \(s=k\). In conclusion, \(\widehat{W}\) is a k-Gevrey series in t, i.e., \(\widehat{w}\) is a P-k-Gevrey series, as we wanted to show. \(\square \)
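
For instance, when \(k=2\) the indices computed in the proof reduce to

$$\begin{aligned} s_0(t^{j}\partial _{\alpha })=1\ (j=1,2,\ |\alpha |=j),\quad s_0(t^{2}\partial _t)=1,\quad s_0(t^{3}\partial _t^{2})=2,\quad s_0(t^{2}\partial _t\partial _{\alpha -\beta })=2, \end{aligned}$$

with \(|\alpha |=2\) and \(|\beta |=1\) in the last term, so the maximum is \(s=2=k\), attained at \(\phi _2 t^{3}\partial _t^{2}\) or, when \(\phi _2=0\), at the mixed terms \(t^{2}\partial _t\partial _{\alpha -\beta }\).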

It is worth remarking that the proof of Theorem 1.1 simplifies considerably when \(k=1\). To highlight the main ideas, we reproduce the argument in this simpler setting.

Corollary 5.1

Consider the partial differential equation

$$\begin{aligned} P({x})L_1({y}) =F({x},{y}), \end{aligned}$$
(5.9)

where \(L_1=a_1\partial _{x_1}+\cdots +a_d \partial _{x_d}\) has holomorphic coefficients at the origin, F is holomorphic near the origin, \(F({0},{0})={0}\), and \(D_yF(0,0)\) is an invertible matrix. If P divides \(L_1(P)\), equation (5.9) has a unique formal power series solution \(\widehat{{y}}\in \mathbb {C}[[x]]^{N}\) with \(\widehat{y}(0)=0\), which is a P-1-Gevrey series.

Proof

Writing F as in equation (5.1) and setting \(y=y_0+w\) in the equation (5.9), where \(y_0\) solves the implicit equation (5.3), we find that w satisfies

$$\begin{aligned} P\cdot L_1({w}) =g_0({x})+B_0({x}){w}+H_0({x},{w}), \end{aligned}$$
(5.10)

where \(g_0=-P\cdot L_1(y_0)\), \(B_0(x)\) is invertible at \(x=0\), and the Taylor expansion of \(H_0(x,w)\) in w has no constant or linear terms. This reduces the problem to finding a formal solution \(\widehat{{w}}({x})=\widehat{W}(x,P(x)),\) where \(\widehat{W}(x,t)=\sum _{n=1}^\infty y_n(x)t^n\). Note that

$$\begin{aligned} P\cdot L_1(\widehat{{w}}) =\sum _{n=1}^\infty \left( L_1({y}_n)+ n\,\phi \,{y}_n\right) P^{n+1} = \left. \left( tL_1+\phi t^2\partial _t\right) (\widehat{W})\right| _{t=P}, \end{aligned}$$

where \(L_1(P)=\phi \cdot P\). Therefore, \(\widehat{w}\) solves (5.10) if and only if \(\widehat{W}\) solves

$$\begin{aligned} B_0({x})W=L_1(y_0)\,t+\left( tL_1+\phi ({x}) t^2\partial _t\right) W-H_0({x},W). \end{aligned}$$
(5.11)
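
Concretely, equating coefficients of \(t^n\) on both sides of (5.11) shows how the \(y_n\) are determined recursively by inverting \(B_0({x})\): for \(n\ge 2\),

$$\begin{aligned} B_0({x})y_n=L_1(y_{n-1})+(n-1)\phi ({x})y_{n-1}-h_n({x}), \end{aligned}$$

where \(h_n\) denotes the coefficient of \(t^n\) in \(H_0\big ({x},\sum _{m=1}^{n-1}y_m t^m\big )\), which involves only \(y_1,\dots ,y_{n-1}\) because \(H_0\) has no constant or linear terms in its second argument.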

Theorem 3.1 for \(p=0\) shows that (5.11) has a unique formal power series solution \(\widehat{W}(x,t)\) where the \(y_n\) are holomorphic functions in a common neighborhood of \({0}\in \mathbb {C}^d\). Moreover, \(\widehat{W}\) is s-Gevrey, where s is the maximum of \(s_0(\phi t^2\partial _t)=1\) and \(s_0(t\partial _{x_j})=1\), for \(j=1,\dots ,d\) such that \(a_j\ne 0\). Since \(L_1\not \equiv 0\), it follows that \(s=1\) as required. \(\square \)

We move now to Theorem 1.2. Although we can apply the same technique as in Theorem 1.1, it is easier to take P directly as one of the coordinates.

Proof of Theorem 1.2

By hypothesis \(L_k^{\star }(P)(0)=\sum _{|\alpha |=k} a_{\alpha }^{(k)}(0) \partial _{\alpha }^\star (P)(0)\ne 0\). Thus at least one of these terms is non-zero, so necessarily \(\partial _{x_l}(P)(0)\ne 0\) for some \(l=1,\dots ,d\) (recall (4.1)). Up to permuting the coordinates we can assume that \(l=1\). We make the change of variables (4.11) and write equation (1.5) in the coordinates \(\xi =(\xi _1,\xi ')\), \(\xi ':=(\xi _2,\dots ,\xi _d)\). In fact, setting \(u(\xi )=y({x})\), Lemma 4.4 shows that

$$\begin{aligned}&\sum _{j=1}^k P^j L_j(y)=\sum _{j=1}^k \left[ L_j^\star (P) \xi _1^j \partial _{\xi _1}^{j}(u)+\xi _1^j C_j(\xi ,u,D^ju)\right] ,\\&\quad C_j(\xi ,u,D^ju):=\sum _{|\alpha |=j} \overline{a}_{\alpha }^{j}(\xi )\left[ \overline{\delta }_{\alpha }\partial _{\alpha ,\xi }(u)+\sum _{l=1}^{j-1} \sum _{*} B_{l,\beta }^{\alpha }\cdot \partial _{\xi _1}^l\partial _{\beta ,\xi }(u)\right] , \end{aligned}$$

where \(\overline{a}_{\alpha }^{j}(\xi )=a_{\alpha }^{(j)}({x})\) and the inner sum is taken over all \(\beta \in \mathbb {N}^{d-1}\) such that \((0,\beta )\le \alpha \) and \(|\beta |\le |\alpha |-l\). Therefore, \(y(x)=\sum _{\beta \in \mathbb {N}^d} y_{\beta } {x}^{\beta }\in \mathbb {C}[[x]]^N\) is a solution of (1.5) if and only if \(u(\xi )=y(x(\xi ))=\sum _{n=0}^\infty u_n(\xi ')\xi _1^n\in \mathbb {C}[[\xi ]]^N\) satisfies

$$\begin{aligned} \sum _{j=1}^k L_j^\star (P) \xi _1^j \partial _{\xi _1}^{j}(u)=\overline{F}(\xi ,u)-\sum _{j=1}^k \xi _1^j C_j(\xi ,u,D^ju), \end{aligned}$$
(5.12)

where \(\overline{F}(\xi ,u)=F({x},{y})\). Write F as in equation (5.1), and expand \(A(x)=\overline{A}(\xi )=A_0(\xi ')+\sum _{m= 1}^\infty A_m(\xi ')\xi _1^m\) in powers of \(\xi _1\), where \(A_0(0)=A_0\). Then we conclude that (5.12) has the form of equation (3.1) with \(p=k\), and

$$\begin{aligned} c_0(\xi ')=A_0(\xi '),\qquad c_j=L_j^\star (P)I_N,\qquad j=1,\dots ,k. \end{aligned}$$

Now, Remark 3.4 and the hypothesis (1.9) guarantee that we can apply Theorem 3.1 to (5.12) to conclude the existence and uniqueness of the solution \({u}(\xi )\in \mathbb {C}[[\xi ]]^N\) which is s-Gevrey, with s as in (3.5). The terms that appear in (5.12) satisfy

$$\begin{aligned} s_k(\xi _1^j\partial _{\alpha ,\xi })= \max \left\{ 0,\frac{|\alpha |-k}{j}\right\} =0,\quad s_k(\xi _1^j\partial _{\xi _1}^l\partial _{\beta ,\xi })=\max \left\{ 0,\frac{|\beta |+l-k}{j-l}\right\} =0, \end{aligned}$$

because \(|\beta |+l-k\le |\alpha |-k=j-k\le 0\). Therefore, \(s=0\) and \({u}(\xi )={y}(x)\in \mathbb {C}\{x\}^N\) is convergent as we wanted to show. \(\square \)

For the case \(k=1\) Corollary 5.1 takes the following form, cf. [5, Theorem 2].

Corollary 5.2

Assume the conditions of Corollary 5.1, but now suppose that \(L_1(P)({0})\ne 0\). If \(nL_1(P)({0})I_N-D_yF(0,0)\in GL _N(\mathbb {C})\), for all \(n\in \mathbb {N}\), equation (5.9) has a unique analytic solution at the origin \(\widehat{{y}}\in \mathbb {C}\{{x}\}^N\) with \(\widehat{y}(0)=0\).

6 Examples

We conclude the paper with some examples. First, Example 6.1 shows that condition (1.8) is needed in order to apply Theorem 1.1. Moreover, Example 6.3 illustrates that the Gevrey type provided by Theorem 1.1 is attained and thus, in general, cannot be improved. For more examples in the case \(k=1\) we refer to [5], including the use of ramifications and punctual blow-ups (2.3) to bring other differential equations into a form where Theorem 1.1 can be applied.

Example 6.1

Consider the scalar equation

$$\begin{aligned} x_1^k x_2^k \partial _{x_1}^k(y)=\mu y-\frac{x_1^k}{1-x_1}, \end{aligned}$$

where \(\mu \ne 0\) is a constant and \(k\ge 1\) is an integer. The problem has a unique formal power series solution that can be found by setting \(\widehat{y}=\sum _{n=0}^\infty y_n(x_2) x_1^n\) and substituting into the equation. In this way, we find that

$$\begin{aligned} \widehat{y}=\sum _{n=k}^\infty \frac{x_1^n}{\mu -\frac{n!}{(n-k)!}x_2^k}=\sum _{n\ge k, m\ge 0} a_{n,km} x_1^n x_2^{km},\quad a_{n,km}=\frac{n!^m}{\mu ^{m+1}(n-k)!^m}. \end{aligned}$$
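
Indeed, comparing the coefficients of \(x_1^n\) on both sides of the equation gives \(\mu y_n=0\) for \(n<k\) and

$$\begin{aligned} \frac{n!}{(n-k)!}\,x_2^k\, y_n(x_2)=\mu y_n(x_2)-1,\qquad n\ge k, \end{aligned}$$

so \(y_n=0\) for \(n<k\) and \(y_n(x_2)=\big (\mu -\tfrac{n!}{(n-k)!}x_2^k\big )^{-1}\) for \(n\ge k\); the double series follows by expanding each \(y_n\) as a geometric series in \(x_2^k\).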

Using that \(n!/(n-k)!\le n^k\) and \(\tau ^m/m!<e^\tau \) for \(\tau >0\), we see that

$$\begin{aligned} |a_{n,km}|\le \frac{n^{mk}}{|\mu |^{m+1}}\le \frac{e^{nk}}{|\mu |^{m+1}}m!^k\le \frac{e^{nk}}{|\mu |^{m+1}}(km)!, \end{aligned}$$

i.e., \(\widehat{y}\) is exactly \(x_2\)-1-Gevrey. We can apply Theorem 1.1 choosing \(P=x_2\), \(L_1=\cdots =L_{k-1}=0\), and \(L_k=x_1^k\partial _{x_1}^k\) (\(L_j^\star (P)=0\) for \(j=1,\dots ,k\)) to find that \(\widehat{y}\) is \(x_2\)-k-Gevrey, which is not optimal unless \(k=1\). On the other hand, neither \(P=x_1\), \(L_k=x_2^k\partial _{x_1}^k\) nor \(P=x_1x_2\), \(L_k=\partial _{x_1}^k\) are valid choices since \(L_k^\star (P)\) is not divisible by P. In fact, \(\widehat{y}\) is neither \(x_1\)-s-Gevrey nor \(x_1x_2\)-s-Gevrey for any s, since there is no common neighborhood of the origin where all the \(y_n(x_2)\) are defined.

Example 6.2

Fix integers \(m,k\ge 1\) and consider the scalar equation

$$\begin{aligned} x^{(m+1)k}\partial _{x}^{k}y=y-1-\frac{x^k}{k!}. \end{aligned}$$
(6.1)

It has a unique formal solution \(\widehat{y}(x)=\sum _{n=0}^{\infty }{y_n}x^n\) given by

$$\begin{aligned} y_0=1,\,\, y_k=\frac{1}{k!},\,\, y_{mk+k}=1,\qquad y_{jmk+k}=\prod _{l=1}^{j-1} \frac{(lmk+k)!}{(lmk)!},\quad j\ge 2, \end{aligned}$$

and \(y_n=0\) in other cases. But \((lmk+k)!/(lmk)!=(lmk+1)\cdots (lmk+k)\le (lmk+k)^k\le ((j-1)mk+k)^k=k^k((j-1)m+1)^k\le (2km)^k(j-1)^k\). Thus

$$\begin{aligned} a_j:={y_{jmk+k}}\le (2km)^{k(j-1)}(j-1)^{k(j-1)}, \end{aligned}$$

and

$$\begin{aligned} \widehat{y}(x)=1+x^k\cdot \sum _{j=0}^\infty {a_{j}}\, (x^{mk})^j \quad \text { is } x^{mk}\text {-}k\text {-Gevrey, i.e., it is } x\text {-}1/m\text {-Gevrey}. \end{aligned}$$
(6.2)
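
These coefficients are obtained by comparing powers of x in (6.1): the coefficient of \(x^0\) gives \(y_0=1\), the coefficient of \(x^k\) gives \(y_k=1/k!\), and for \(n\ge k\) the coefficient of \(x^{n+mk}\) yields the recursion

$$\begin{aligned} y_{n+mk}=\frac{n!}{(n-k)!}\,y_n,\qquad n\ge k, \end{aligned}$$

while \(y_n=0\) for all remaining indices. Iterating the recursion from \(y_k=1/k!\) produces the product formula above.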

On the other hand, for \(k\ge 2\) we can apply Theorem 1.1 to \(P(x)=x^{m+1}\) and \(L_k=\partial _x^k\) to conclude that \(\widehat{y}\) is \(x^{m+1}\)-k-Gevrey, i.e., x-\(\frac{k}{m+1}\)-Gevrey. In fact,

$$\begin{aligned} L_k^\star (P)=((m+1)x^m)^k \text { is divisible by } P, \end{aligned}$$

since \(m+1\le mk\). If \(m+1<mk\), (6.2) gives a better bound. However, if \(m+1=mk\), then \(m=1\), \(k=2\), and Theorem 1.1 gives an optimal bound. Indeed,

$$\begin{aligned} \widehat{y}(x)=1+\frac{x^2}{2}\sum _{j=0}^\infty (2j)!x^{2j} \end{aligned}$$

which is exactly \(x^2\)-2-Gevrey, i.e., x-1-Gevrey.

Example 6.3

Consider the scalar equation

$$\begin{aligned} x_1^2x_2^2(x_1^2\partial _{x_1}^2u+x_2^2\partial _{x_2}^2u+2\partial _{x_1}\partial _{x_2}u)-2u=-2x_1x_2, \end{aligned}$$

having as unique formal power series solution \(\widehat{u}(x_1,x_2)=\sum _{n=0}^\infty a_n x_1^n x_2^n\). In fact, this equation corresponds to the ODE

$$\begin{aligned} t^4\partial _t^2w+t(t\partial _t)^2w-w=-t, \end{aligned}$$

where \(t=x_1x_2\) and \(u(x_1,x_2)=w(t)\). Theorem 3.1 proves that \(\widehat{w}(t)=\widehat{u}(x_1,x_2)\) is t-2-Gevrey. Theorem 1.1 applied to \(k=2\), \(P=x_1x_2\), \(L_1=0\), and

$$\begin{aligned} L_2=x_1^2\partial _{x_1}^2+x_2^2\partial _{x_2}^2+2\partial _{x_1}\partial _{x_2}, \end{aligned}$$

shows \(\widehat{u}\) is \(x_1x_2\)-2-Gevrey since \(L_1^\star (P)=0\) and \(L_2^\star (P)=2P^2+2P=2P(1+P)\).

We can also find the Gevrey order by direct means. First, the \(a_n\) are given by \(a_0=0\), \(a_1=a_2=1\), and

$$\begin{aligned} a_n=(n-1)^2 a_{n-1}+(n-2)(n-3)a_{n-2},\qquad \text { for } n\ge 3. \end{aligned}$$
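
For instance, the recurrence yields

$$\begin{aligned} a_3=4,\quad a_4=38,\quad a_5=632,\quad a_6=16256, \end{aligned}$$

values already consistent with the lower bound \((n-1)!^2\le a_n\) obtained below.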

If we set \(\alpha _n=a_n/(n-1)!^2\), \(n\ge 1\), this sequence satisfies

$$\begin{aligned} \alpha _n=\alpha _{n-1}+\frac{n-3}{(n-1)^2(n-2)}\alpha _{n-2}. \end{aligned}$$

It follows by induction that \(1\le \alpha _n\le \varphi ^n\), where \(\varphi =(1+\sqrt{5})/2\) solves \(\varphi ^2=\varphi +1\): indeed, \(0\le (n-3)/((n-1)^2(n-2))\le 1\) for \(n\ge 3\), so \(\alpha _n\ge \alpha _{n-1}\ge 1\) and \(\alpha _n\le \varphi ^{n-1}+\varphi ^{n-2}=\varphi ^n\). In conclusion, \((n-1)!^2\le a_n\le \varphi ^n (n-1)!^2,\) so \(\widehat{u}\) is exactly \(x_1x_2\)-2-Gevrey. Thus, the Gevrey type provided by Theorem 1.1 cannot be improved in general.

Example 6.4

Returning to the framework of singular perturbations, we consider systems

$$\begin{aligned} \epsilon ^k x^{k+1}\partial _x^ky+\sum _{j=1}^{k-1} \epsilon ^j x^{j+1} a_j(x,\epsilon )\partial _x^jy=F(x,\epsilon ,y), \end{aligned}$$

where \(\mathbb {C}\ni \epsilon \rightarrow 0\), \(x\in \mathbb {C}\), the \(a_j\) are holomorphic near \((0,0)\in \mathbb {C}^2\), and y and F are as in Theorem 1.1. The main result can be applied to \(P(x,\epsilon )=x\epsilon \), \(L_k=x\partial _x^k\) and \(L_j=a_j x\partial _x^j\) since

$$\begin{aligned} L_k^\star (P)=x(\partial _xP)^k=x\epsilon ^k,\quad \text { and }\quad L_j^\star (P)=a_j x(\partial _xP)^j=a_jx\epsilon ^j, \end{aligned}$$

are divisible by P. Therefore, this system has a unique formal power series solution in x and \(\epsilon \) which is \(x\epsilon \)-k-Gevrey. The case \(k=1\) was first established in [3].

Example 6.5

Fix \(\alpha \in \mathbb {N}^d{\setminus }\{0\}\) and consider the equation

$$\begin{aligned} ({x}^\alpha )^k L_k(y)(x)+\cdots +{x}^\alpha L_1(y)(x)=F({x},{y}), \end{aligned}$$

with differential operators of the form \(L_j=\sum _{|\beta |=j} b_{\beta }({x}) {x}^{\beta } \partial _{\beta },\) where \(b_{\beta }\in \mathcal {O}_b(D_r^d)\), for a common \(r>0\). Assuming that \(D_yF(0,0)\) is invertible, since

$$\begin{aligned} L_j^\star ({x}^\alpha )= {x}^{j\alpha }\cdot \sum _{|\beta |=j} \alpha ^\beta b_{\beta }(x) \end{aligned}$$

is divisible by \({x}^\alpha \) for all \(j=1,\dots ,k\), Theorem 1.1 proves that this equation has a unique \({x}^{\alpha }\)-k-Gevrey series solution. Note that equation (1.2) is a particular case for \(k=1\), where this Gevrey bound is optimal due to the \(x^\alpha \)-1-summability of the solution and Tauberian theorems for these methods [6, 7].
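
For instance, with \(d=2\) and \(L_2=x_1x_2\partial _{x_1}\partial _{x_2}\), that is, \(b_{(1,1)}=1\), the formula above gives \(L_2^\star ({x}^{\alpha })=\alpha _1\alpha _2\,{x}^{2\alpha }\), which is indeed divisible by \({x}^\alpha \).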