## 1 Local Classification of Linear Ordinary Differential Equations

### 1.1 Systems and Higher Order Equations

The local analytic theory of linear ordinary differential equations exists in two parallel flavours: that of systems of several first order equations, and that of scalar (higher order) equations. One can relatively easily transform one type of object into the other, yet this transformation loses some additional structure.

Let $$\Bbbk$$ be a differential field, called the field of coefficients. We will be interested almost exclusively in the field $${\fancyscript{M}}={\fancyscript{M}}({\mathbb C}^1,0)$$ of meromorphic germs at the origin $$t=0$$ on the complex line $${\mathbb C}={\mathbb C}^1$$, the quotient field of the ring $${\fancyscript{O}}={\fancyscript{O}}({\mathbb C}^1,0)$$ of holomorphic germs at the origin. The standard $${\mathbb C}$$-linear derivation $$\partial =\frac{d}{dt}$$ acts on both $${\fancyscript{O}}$$ and $${\fancyscript{M}}$$ according to the Leibniz rule and extends to vector and matrix functions with entries in $$\Bbbk$$ in the natural way.

Let $$A\in {\text {Mat}}(n,\Bbbk )$$ be an $$(n\times n)$$-matrix function, called the coefficients matrix, $$A=\Vert a_{ij}(t)\Vert _{i,j=1}^n$$, $$a_{ij}\in \Bbbk$$. This matrix defines the homogeneous system of linear ordinary differential equations

\begin{aligned} \partial x=Ax,\quad x=(x_1,\dots ,x_n)\in {\mathbb C}^n, \quad t\in ({\mathbb C},0). \end{aligned}
(1)

The system (1) only exceptionally rarely has a solution $$x\in \Bbbk ^n$$. However, it always has $$n$$ linearly independent solutions in the class of functions analytic in a small punctured neighborhood of the origin, which are multivalued (ramified) over the point $$t=0$$. Assembling these solutions (as column vectors) into a multivalued matrix function $$X=X(t)$$ whose determinant never vanishes for $$t\ne 0$$, we can without loss of generality reduce the system (1) to one matrix differential equation $$\partial X=AX$$. For instance, the trivial system is defined by the equation $$\partial X=0$$, and any invertible constant matrix $$C\in {{\text {GL}}}(n,{\mathbb C})$$ is its solution.

Alternatively, one may consider homogeneous linear ordinary differential equations of the form

\begin{aligned} a_0\partial ^n u+a_1\partial ^{n-1}u+\cdots +a_{n-1}\partial u+a_n u=0,\quad a_0,\dots ,a_n\in \Bbbk ,\;a_0\ne 0. \end{aligned}
(2)

Each Eq. (2) is a linear (over $$\Bbbk$$) relation between the unknown function $$u$$ and its derivatives $$\partial ^k u$$ up to order $$k=n$$. Traditionally, such equations are written using linear differential operators: if $$L=\sum _0^n a_i\partial ^{n-i}\in \Bbbk [\partial ]$$ is the formal expression, then the above equation is written in the form $$Lu=0$$. Elements of the field $$\Bbbk$$ are identified with “operators of zeroth order” $$u\mapsto au$$, $$a\in \Bbbk$$. The key feature of differential operators is the possibility of their composition, which equips the $$\Bbbk$$-space of linear operators with the structure of a (noncommutative infinite-dimensional) $${\mathbb C}$$-algebra, denoted by $${\fancyscript{W}}$$.

As before, generically solutions exist only as multivalued functions defined for $$t\ne 0$$ and ramified over the origin.

### 1.2 Mutual Reduction

One can easily transform the Eq. (2) to a system (1) by introducing the variables $$x_k=\partial ^{k-1}u$$, $$k=1,\dots ,n$$. The corresponding first order identities take the form

\begin{aligned} \partial x_k=x_{k+1},\quad k=1,\dots ,n-1,\quad \partial x_{n}=-a_0^{-1}(a_1x_{n}+a_2x_{n-1}+\cdots +a_nx_1). \end{aligned}
(3)

Conversely, each of the variables $$u=x_k$$ of a solution $$x(t)$$ to the system (1) satisfies an equation of the form (2). To obtain this equation, note that all derivatives $$\partial ^i u$$ are $$\Bbbk$$-linear combinations of the formal variables $$x_1,\ldots ,x_n$$. Indeed, by induction, if $$\partial ^i x=A_i x$$, $$A_i\in {\text {Mat}}(n,\Bbbk )$$, $$A_0=E$$, $$A_1=A$$, then

\begin{aligned} \partial ^{i+1} x=(\partial A_i)x+A_iAx=(\partial A_i+A_iA)x=A_{i+1}x,\quad i=0,1,2,\ldots . \end{aligned}
(4)

Taking the $$k$$th line of these identities yields the required linear combination. Since the space of combinations is $$n$$-dimensional (over $$\Bbbk$$), we conclude that the $$n+1$$ derivatives $$u,\partial u,\partial ^2 u, \dots ,\partial ^n u$$ are necessarily linearly dependent over $$\Bbbk$$ (the order can be less than $$n$$). This dependence is of the form (2), but the corresponding equation will in general depend on the choice of $$k$$ between $$1$$ and $$n$$. Slightly modifying this construction, one can produce a differential equation of order $$\leqslant n^2$$, satisfied by all components $$x_{ij}$$ of any fundamental matrix solution $$X=\Vert x_{ij}\Vert$$ of the equation $$\partial X=AX$$.
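The recurrence (4) is easy to check on a concrete example. The following sketch (using Python's sympy; the diagonal system and all helper names are my own illustrative choices, not from the text) computes the matrices $$A_i$$ for the system $$\partial x=t^{-1}Bx$$ with diagonal $$B$$ and verifies $$\partial ^i x=A_ix$$ on its solution $$x=(t^{\lambda _1},t^{\lambda _2})$$:

```python
import sympy as sp

t, l1, l2 = sp.symbols('t lambda1 lambda2')
B = sp.diag(l1, l2)
A = B / t                            # diagonal (Euler) system  dx/dt = A x

# Recurrence (4): A_0 = E,  A_{i+1} = dA_i/dt + A_i*A,  so that d^i x = A_i x
Ai = [sp.eye(2)]
for _ in range(3):
    Ai.append(sp.simplify(Ai[-1].diff(t) + Ai[-1]*A))

# A fundamental solution: x(t) = (t^lambda1, t^lambda2)
x = sp.Matrix([t**l1, t**l2])
for i, Mi in enumerate(Ai):
    # compare d^i x with A_i x entrywise
    assert sp.simplify(x.diff(t, i) - Mi*x) == sp.zeros(2, 1)
```

For this example one finds $$A_2=(B^2-B)/t^2$$ and $$A_3=(B^3-3B^2+2B)/t^3$$, the familiar falling factorials of $$B$$.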

### 1.3 Gauge Equivalence of Linear Systems. Equations of the Same Type

The group $${\fancyscript{G}}={{\text {GL}}}(n,\Bbbk )$$ of invertible matrix functions with entries in the field $$\Bbbk$$ acts naturally on the space of all linear systems of the form (1). Namely, if $$H=\Vert h_{ij}(t)\Vert _{i,j=1}^n$$, $$h_{ij}\in \Bbbk$$, is such a function with the inverse $$H^{-1}\in {{\text {GL}}}(n,\Bbbk )$$, then one can “change variables” in (1) by substituting $$y=Hx$$, $$y=(y_1,\ldots ,y_n)\in {\mathbb C}^n$$. This substitution transforms (1) to the identity $$\partial y=(\partial H)x+H\partial x=(\partial H)H^{-1}y+HAH^{-1}y$$, so that

\begin{aligned} \partial y=By,\quad B\in {\text {Mat}}(n,\Bbbk ),\quad B= (\partial H)\cdot H^{-1}+HAH^{-1}. \end{aligned}
(5)

This differs from the conjugacy of linear operators by the logarithmic derivative $$(\partial H)\cdot H^{-1}$$; this term vanishes if $$H$$ is constant.

Two systems $$\partial x=Ax$$ and $$\partial y=By$$ are called gauge equivalent, if there exists an element $$H\in {\fancyscript{G}}$$ such that (5) holds. Since $${\fancyscript{G}}$$ is a group, this equivalence is naturally reflexive, symmetric and transitive. Thus one can formulate the problem of classification: what is the simplest form to which a given linear system can be transformed by a suitable gauge transformation? The corresponding theory is fairly well established, see below for the initial results.
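The transformation law (5) can be verified mechanically. Here is a minimal sympy sketch (the particular matrices $$A$$ and $$H$$ are my own illustrative choices): the transformed solution $$y=Hx$$ indeed satisfies $$\partial y=By$$ with $$B=(\partial H)H^{-1}+HAH^{-1}$$.

```python
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2')

A = sp.Matrix([[0, 1], [0, 0]])          # the system x' = A x
x = sp.Matrix([c1 + c2*t, c2])           # its general solution
assert sp.simplify(x.diff(t) - A*x) == sp.zeros(2, 1)

H = sp.Matrix([[1, t], [0, 1]])          # a gauge transformation from GL(2, k)
B = H.diff(t)*H.inv() + H*A*H.inv()      # formula (5)

y = H*x                                  # the transformed solution
assert sp.simplify(y.diff(t) - B*y) == sp.zeros(2, 1)
```
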

### Remark 1

Systems of linear Eq. (1) can be considered geometrically as flat meromorphic connections on a vector bundle over the (complex) 1-dimensional base. The gauge transform corresponds to the change of a tuple of horizontal sections locally trivializing this bundle. Such interpretation allows for global and multidimensional generalizations, see Ilyashenko and Yakovenko (2008, Chapter III) and Novikov and Yakovenko (2004).

Unfortunately, the notion of gauge equivalence is too restrictive to deal with higher order equations: indeed, since the unknown function is scalar, only the transformations of the form $$u=hv$$, $$h\in \Bbbk$$, can be considered, and one cannot expect this small group to produce a meaningful classification.

Instead it is natural to consider $$\Bbbk$$-linear changes of variables of a more general form which involve the unknown function and its derivatives. More specifically, one can choose a tuple of functions $$h=(h_1,\dots ,h_{n})\in \Bbbk ^n$$ and use it to change the dependent variable from $$u$$ to $$v$$ as follows,

\begin{aligned} v=h_1\partial ^{n-1}u+h_2\partial ^{n-2}u+\cdots +h_{n-1}\partial u+h_{n}u. \end{aligned}
(6)

The reason why derivatives of order $$n$$ and higher may be omitted is rather clear: if the transformation (6) is applied to an Eq. (2) of order $$n$$, then all such higher order derivatives can be replaced by $$\Bbbk$$-linear combinations of the lower order derivatives by virtue of the equation.

The new variable $$v$$ also satisfies a linear differential equation which can be derived as follows (cf. with Sect. 1.2). Differentiating the formula (6) for $$v$$ by virtue of the Eq. (2), one can see that all higher order derivatives $$\partial ^i v$$ can be expressed as linear combinations (over $$\Bbbk$$) of the formal derivatives $$\partial ^j u$$, $$j=0,\dots ,n-1$$. The space of such combinations is $$n$$-dimensional, so no later than on the $$n$$th step there will necessarily appear an identity of the form $$b_0\partial ^m v+b_1\partial ^{m-1}v+\cdots +b_{m-1}\partial v+b_mv=0$$, $$b_0\ne 0$$, $$b_j\in \Bbbk$$, $$m\leqslant n$$, which is the transform of the Eq. (2) by the action of (6). Classically, the initial equation and the transformed equation are called equations of the same type, see Singer (1996), Tsarëv (2009) and Ore (1933), but we prefer the term “Weyl equivalence” (to be justified later), with an intention to refine it by imposing additional restrictions on the transformation (6).

In order for this change of variables to be “faithful”, one has to impose the additional condition of nondegeneracy: no solution of (2) is mapped into identical zero by the transformation (6). Indeed, if this extra assumption is violated, one can easily transform the initial equation to the trivial (meaningless) form $$0=0$$. On the other hand, accepting this condition guarantees (as can be easily shown) that the transformed equation has the same order $$m=n$$.

Still a few questions remain unanswered by this naïve approach. The transformation (6), unlike the gauge transformation of linear systems, is rather problematic to invert: transition from $$u$$ to $$v$$ always has a nontrivial kernel (solutions of the corresponding homogeneous equations). In addition, “restoring” $$u$$ from $$v$$ is in general a transcendental operation requiring integration of linear equations, and it is by no means clear how one should proceed.

The algebraic nature of these questions has been studied since the 1880s by F. Frobenius, E. Landau, A. Loewy and W. Krull, and culminated in the brilliant paper by Ore (1933). The idea is to consider the noncommutative algebra of differential operators $$\Bbbk [\partial ]$$ with coefficients in $$\Bbbk$$. The next Sect. 2.1 summarizes the necessary fundamentals of the “algebraic theory of noncommutative polynomials” following Ore (1933).

### 1.4 Singularities, Monodromy

From this moment we focus on the special case where $$\Bbbk ={\fancyscript{M}}$$ is the differential field of meromorphic germs at the origin and denote for brevity $${\fancyscript{W}}={\fancyscript{M}}[\partial ]$$ the algebra of operators with meromorphic coefficients.

For each linear system (1) or higher order Eq. (2) with meromorphic coefficients one can choose representatives of the germs of all coefficients $$a_{ij}(t)$$, resp., $$a_i(t)$$, in a punctured neighborhood of the origin $$({\mathbb C}^1,0){\backslash }\{0\}$$ so small that all representatives are holomorphic in this punctured neighborhood. The classical theorems of analysis guarantee that solutions of the system (resp., equation) are holomorphic on the universal cover of this punctured neighborhood, i.e., in the more traditional terminology, are multivalued analytic functions on $$({\mathbb C}^1,0)$$ ramified at the origin.

If the coefficients of the system (1) are holomorphic at the origin, i.e., $$A\in {\text {Mat}}(n,{\fancyscript{O}})\subsetneq {\text {Mat}}(n,{\fancyscript{M}})$$, then for the same reasons solutions of the system are holomorphic (hence single-valued) at the origin. This case is called nonsingular, and the corresponding matrix equation admits a unique solution $$X\in {{\text {GL}}}(n,{\fancyscript{O}})$$ with the initial condition $$X(0)=E$$ (the identity matrix).

A solution $$X$$ of a general matrix equation $$\partial X=AX$$ with $$A\in {\text {Mat}}(n,{\fancyscript{M}})$$ after continuation along a small closed loop around the origin gets transformed into another solution $$X'=XM$$ of the same equation. The monodromy matrix $$M\in {{\text {GL}}}(n,{\mathbb C})$$ depends on $$X$$.

A homogeneous Eq. (2) defined by a linear operator $$L=\sum _{i=0}^n a_i\partial ^{n-i}$$ can always be multiplied by a meromorphic multiplier so that all its coefficients become holomorphic and at least one of them is nonvanishing at the origin. The reduction (3) shows that if it is the leading coefficient $$a_0$$ that is nonvanishing, then all solutions of the equation $$Lu=0$$ are holomorphic at the origin (we call such operators nonsingular), otherwise they may be ramified at the origin.

Choose a neighborhood $$U=({\mathbb C}^1,0)$$ and meromorphic representatives of the germs $$a_j(\cdot )$$ which have no poles in $$U$$ except for $$t=0$$. If $$0\ne t_0\in U$$ is any other point in the domain of the system (equation), then it is well known that germs of solutions of the system (equation) $$Lu=0$$ form a $${\mathbb C}$$-linear subspace $$Z_L\subset {\fancyscript{O}}({\mathbb C},t_0)$$ of dimension $$\dim _{\mathbb C}Z_L$$ exactly equal to $$n$$. After the analytic continuation along a small loop around the origin, this space is mapped into itself by an invertible linear map called the monodromy transformation (monodromy, for short): for any basis $$u_1,\dots ,u_n$$ in the space of solutions (considered as a row vector function), we have

\begin{aligned} \Delta \begin{pmatrix}u_1&\cdots&u_n\end{pmatrix}=\begin{pmatrix}u_1&\cdots&u_n\end{pmatrix}M \end{aligned}
(7)

for a suitable nondegenerate matrix $$M$$ (depending on the basis $$\{u_i\}_{i=1}^n$$).

### 1.5 Different Flavors of the Gauge Classification

The gauge transformation group $${\fancyscript{G}}={{\text {GL}}}(n,{\fancyscript{M}})$$ introduced above, may be too large for certain problems of analysis, see Sect. 1.4. For several reasons it is interesting to consider a smaller group $${\fancyscript{G}}_h={{\text {GL}}}(n,{\fancyscript{O}})$$ of holomorphic matrix functions which are holomorphically invertible. It is the semidirect product of $${{\text {GL}}}(n,{\mathbb C})$$ and the group $${\fancyscript{G}}_0$$ of holomorphic matrix germs $$H$$ which are equal to the identity at the origin, $${\fancyscript{G}}_0=\{H\in {\fancyscript{G}}_h:H(0)=E\}$$.

Besides, one can identify two types of singularities of linear systems, characterized by strikingly different behavior of solutions, called respectively regular (in full, regular singular, to avoid confusion with nonsingular systems) and irregular singularities. Recall (Ilyashenko and Yakovenko 2008, Definition 16.1) that the system (1) is called regular if the norm $$|X(t)|$$ of any of its fundamental matrix solutions grows no faster than polynomially when approaching the singular point in any sector on the $$t$$-plane (more precisely, on the universal cover of $$({\mathbb C}^1,0){\backslash } 0$$):

\begin{aligned} |X(t)|\leqslant C|t|^{-N} \quad \forall t\in ({\mathbb C}^1,0),\ \alpha <{\text {Arg}}t<\beta ,\quad C>0,\ N<+\infty , \end{aligned}
(8)

for some constants $$C,N$$ depending on the sector (its opening and the radius). This condition is difficult to verify as it refers to the properties of solutions, but it is automatically satisfied for Fuchsian systems, when the meromorphic matrix function $$A$$ has a pole of at most first order [Ilyashenko and Yakovenko 2008, Theorem 16.10 (Sauvage, 1886)].

### Example 1

An Euler system is any system of the form $$\partial X=t^{-1}BX$$ with a constant matrix $$B\in {\text {Mat}}(n,{\mathbb C})$$. Its fundamental matrix solution is given by the (multivalued) matrix function $$X(t)=t^B=\exp (B\ln t)$$, $$t\in ({\mathbb C},0)$$. If $$B={\text {diag}} (\lambda _1,\dots ,\lambda _n)$$ is a diagonal matrix with $$\lambda _i\in {\mathbb C}$$, then the solution is also diagonal, $$X(t)={\text {diag}}(t^{\lambda _1},\dots ,t^{\lambda _n})$$. The monodromy matrix of this solution is $$\exp 2\pi i B\in {{\text {GL}}}(n,{\mathbb C})$$.
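The monodromy claim can also be checked for a nondiagonal matrix $$B$$. In the following sympy sketch (the choice of a nilpotent Jordan block $$B$$ is mine) the exponential is computed exactly from the truncated series, since $$B^2=0$$:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
B = sp.Matrix([[0, 1], [0, 0]])              # nilpotent Jordan block, B**2 = 0
E = sp.eye(2)

# Since B**2 = 0, exp(B*s) = E + B*s exactly; so t^B = exp(B*ln t) = E + B*ln t
X = E + B*sp.log(t)                          # fundamental solution t^B

# X solves the Euler system  t*dX/dt = B*X
assert sp.simplify(t*X.diff(t) - B*X) == sp.zeros(2, 2)

# continuation around the origin replaces ln t by ln t + 2*pi*i
X_cont = X.subs(sp.log(t), sp.log(t) + 2*sp.pi*sp.I)
M = E + 2*sp.pi*sp.I*B                       # = exp(2*pi*i*B), the monodromy
assert sp.simplify(X_cont - X*M) == sp.zeros(2, 2)
```
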

In general, if $$\lambda _1,\dots ,\lambda _n$$ are the eigenvalues of the matrix $$B$$, then the corresponding Euler system is called resonant if some of the differences $$\lambda _i-\lambda _k$$ are nonzero natural numbers; otherwise the system is called nonresonant.

The principal results on classification of linear systems are summarized in Table 1, based on Ilyashenko and Yakovenko (2008, §16, §20).

The notion of (ir)regularity can also be defined for linear equations of higher order. Somewhat mysteriously, unlike the case of general linear systems, regularity is equivalent to a condition on the orders of the poles of the ratios $$a_i/a_0\in {\fancyscript{M}}$$ of the coefficients of the equation (this condition is also called the Fuchsian condition).

### 1.6 Goals of the Paper and Main Results

We study the classification of nonsingular or Fuchsian (singular) equations with respect to the Weyl equivalence (formally introduced below).

It can be easily shown (see below) that nonsingular equations are Weyl equivalent to the trivial equation $$\partial ^n u=0$$, whose solutions are polynomials of degrees $$\leqslant n-1$$. An equally simple fact is the Weyl equivalence of any Fuchsian equation to an Euler equation. Furthermore, we show that the property of a Fuchsian equation to possess only holomorphic (or meromorphic) solutions can be expressed in terms of Weyl equivalence.

In our paper we introduce a finer Fuchsian equivalence, or $${\fancyscript{F}}$$-equivalence for short, using expansion of operators in noncommutative Taylor series. It turns out that the corresponding classification of Fuchsian operators is very similar to the holomorphic classification of Fuchsian systems. In particular, in the nonresonant case any Fuchsian equation is $${\fancyscript{F}}$$-equivalent to an Euler equation, while resonant operators are $${\fancyscript{F}}$$-equivalent to operators with polynomial coefficients, i.e., from $${\mathbb C}[t][\partial ]$$. Finally, we show that any (resonant) Fuchsian operator is $${\fancyscript{F}}$$-equivalent to an operator which is Liouville integrable, that is, whose solutions can be obtained from rational functions by iterated integration and exponentiation.

## 2 Algebras of Differential Operators

In this section we recall the basic facts about the algebra of differential operators with coefficients from a differential field.

### 2.1 Noncommutative Polynomials in One Variable Over a Differential Field

Consider the $${\mathbb C}$$-algebra $$\Bbbk [\partial ]$$ generated by the differential field $$\Bbbk$$ and the symbol $$\partial$$ with the noncommutative multiplication satisfying the Leibniz rule,

\begin{aligned} \partial \cdot a=a\cdot \partial +a',\quad a,a'\in \Bbbk ,\quad a'=\partial a=\text { the derivative of }a. \end{aligned}
(9)

This algebra can be considered as the algebra of differential operators acting on “test functions”, where elements from $$\Bbbk$$ act by multiplication $$u\mapsto au$$ and $$\partial$$ is the derivation. The multiplication in $$\Bbbk [\partial ]$$ corresponds to the composition of operators (and the dot will be omitted from the notation).

Any operator from $$\Bbbk [\partial ]$$ can be uniquely represented in the “standard form”

\begin{aligned} L=a_0\partial ^n+a_1\partial ^{n-1}+\cdots +a_{n-1}\partial +a_n,\quad a_0,\dots ,a_n\in \Bbbk ,\ a_0\ne 0 \end{aligned}
(10)

with the coefficients $$a_i$$ to the left of the powers of $$\partial$$. The number $$n\geqslant 0$$ is called the order of the operator $$L$$. The composition $$LM$$ of two operators $$L$$ and $$M=b_0\partial ^m+\cdots \in \Bbbk [\partial ]$$ of orders $$n$$ and $$m$$ is an operator of order $$n+m$$ with the (nonzero) leading coefficient $$a_0b_0\in \Bbbk$$.

The key property of the algebra $$\Bbbk [\partial ]$$ is the possibility of division with remainder. Indeed, if $$n={\text {ord}}L\geqslant m={\text {ord}}M$$, then the difference $$L-a_0b_0^{-1}\partial ^{n-m}M$$ is an operator with zero (absent) “leading coefficient” in front of $$\partial ^{n}$$, i.e., of order strictly less than $$n$$. Iterating this order reduction, one can find two operators $$Q,R\in \Bbbk [\partial ]$$ such that

\begin{aligned} L=QM+R,\quad {\text {ord}}Q={\text {ord}}L-{\text {ord}}M,\quad {\text {ord}}R<{\text {ord}}M. \end{aligned}
(11)

When $$R=0$$ we say that $$L$$ is divisible by $$M$$.

This construction allows one to define for any two operators $$L,M\in \Bbbk [\partial ]$$ their greatest common divisor $$D=\gcd (L,M)$$ as the operator of maximal order which divides both $$L$$ and $$M$$ (this operator is defined modulo multiplication by an element from $$\Bbbk$$). The Euclid algorithm (Ore 1933, Theorem 4) guarantees that for any $$L,M$$ there exist $$U,V\in \Bbbk [\partial ]$$ such that

\begin{aligned} UL+VM=\gcd (L,M),\quad {\text {ord}}U<{\text {ord}}M,\ {\text {ord}}V<{\text {ord}}L. \end{aligned}
(12)

A less direct computation allows one to construct the least common multiple $${\text {lcm}}(L,M)$$, which is by definition the smallest order operator divisible by both $$L$$ and $$M$$ (and also defined modulo a nonzero coefficient from $$\Bbbk$$). Indeed, consider the operators $$M,\partial M,\partial ^2M,\ldots ,\partial ^{n}M$$ modulo $$L$$, i.e., their remainders after division by $$L$$, $$n={\text {ord}}L$$. Since all these $$n+1$$ remainders are of order $$\leqslant n-1$$, they must be linearly dependent over $$\Bbbk$$, that is, a certain linear combination $$(c_0\partial ^n +\cdots +c_{n-1}\partial + c_{n})M=PM$$ must be divisible by $$L$$: $$PM=QL$$, $${\text {ord}}P\leqslant {\text {ord}}L$$, $${\text {ord}}Q\leqslant {\text {ord}}M$$. There is an explicit formula expressing $${\text {lcm}}(L,M)$$ through the operators appearing in the Euclid algorithm, see Ore (1933, Theorem 8).
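The division with remainder (11) and the Euclid algorithm are entirely constructive. The sketch below (a minimal illustrative implementation; the list representation and the names `opmul`, `opdivmod`, `opgcd` are mine, not a standard API) works over $$\Bbbk ={\mathbb C}(t)$$ with sympy rational functions, an operator $$\sum a_i\partial ^i$$ being stored as the list $$[a_0,\dots ,a_n]$$:

```python
import sympy as sp

t = sp.symbols('t')

def opmul(L, M):
    """Composition L∘M in k[d], using the Leibniz rule d·a = a·d + a'."""
    res = [sp.S(0)]*(len(L) + len(M) - 1)
    for i, a in enumerate(L):
        for j, b in enumerate(M):
            # d^i (b·d^j) = sum_k binomial(i,k) * b^{(k)} * d^{i+j-k}
            for k in range(i + 1):
                res[i + j - k] += a*sp.binomial(i, k)*b.diff(t, k)
    return [sp.cancel(c) for c in res]

def opdivmod(L, M):
    """Division with remainder (11): L = Q∘M + R with ord R < ord M."""
    Q = [sp.S(0)]*max(len(L) - len(M) + 1, 1)
    R = [sp.cancel(c) for c in L]
    while True:
        while R and R[-1] == 0:
            R.pop()
        if len(R) < len(M):
            break
        d = len(R) - len(M)
        q = sp.cancel(R[-1]/M[-1])
        Q[d] = q
        T = opmul([sp.S(0)]*d + [q], M)   # subtract (q·d^d)∘M
        R = [sp.cancel(r - c) for r, c in zip(R, T)]
    return Q, (R if R else [sp.S(0)])

def opgcd(L, M):
    """Greatest common (right) divisor via the Euclid algorithm."""
    while any(c != 0 for c in M):
        _, R = opdivmod(L, M)
        L, M = M, R
    return L

# Example: L = d^2, M = d - 1/t (solutions of Mv = 0 are the multiples of t).
L = [sp.S(0), sp.S(0), sp.S(1)]
M = [-1/t, sp.S(1)]
Q, R = opdivmod(L, M)   # Q = d + 1/t, R = 0: M divides L,
                        # consistent with Z_M = {ct} inside Z_L = {a + bt}
```

The zero remainder reflects the inclusion of solution spaces discussed in Sect. 2.2 below: $$\partial ^2=(\partial +t^{-1})(\partial -t^{-1})$$.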

### 2.2 Algebra vs. Analysis

Denote by $${\fancyscript{W}}$$ the local Weyl algebra $$\Bbbk [\partial ]$$ in the case where $$\Bbbk ={\fancyscript{M}}$$ is the differential field of meromorphic germs.

If an operator $$L$$ is divisible by $$M$$ in $${\fancyscript{W}}$$, then their spaces of solutions $$Z_L$$, resp., $$Z_M$$, satisfy the inclusion $$Z_M\subseteq Z_L$$. Conversely, if for two operators $$L,M\in {\fancyscript{W}}$$ we have $$Z_M\subseteq Z_L$$, then $$L$$ is divisible by $$M$$. Indeed, otherwise the remainder of division of $$L$$ by $$M$$ would be a nonzero operator of order strictly less than $${\text {ord}}M$$ annihilating the space $$Z_M$$ of dimension $$\dim Z_M={\text {ord}}M$$, which is impossible. In terms of solutions,

\begin{aligned} \begin{aligned} D&=\gcd (L,M)\iff Z_D=Z_L\cap Z_M, \\ P&={\text {lcm}}(L,M) \iff Z_P=Z_L+Z_M \end{aligned} \end{aligned}
(13)

(the sum of linear subspaces in $${\fancyscript{O}}({\mathbb C},t_0)$$ is assumed).

Thus two equations $$Lu=0$$ and $$Mv=0$$ are of the same type in the sense of Sect. 1.3, if their order is the same and there exists an operator $${H}\in \fancyscript{W}$$ which maps $$Z_L$$ to $$Z_M$$ isomorphically: for any $$u$$ such that $$Lu=0$$, the function $$v=Hu$$ is annihilated by $$M$$.

### Definition 1

Two operators $$L,M\in {\fancyscript{W}}$$ of the same order $$n$$ are called Weyl equivalent (or Weyl conjugate), if there exist two operators $$H,K\in {\fancyscript{W}}$$ of order $$\leqslant n-1$$, such that

\begin{aligned} MH=KL,\quad \gcd (L,H)=1,\quad {\text {ord}}H,K<{\text {ord}}L,M. \end{aligned}
(14)

The operator $$H$$ is said to be the conjugacy between $$L$$ and $$M$$.
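A minimal worked instance of Definition 1 (my own example): take $$L=\partial$$, $$H=K=t$$ and $$M=\partial -t^{-1}$$. Then $$MH=KL$$, $$\gcd (L,H)=1$$, $${\text {ord}}H={\text {ord}}K=0<1$$, and $$H$$ maps the solutions $$u=c$$ of $$Lu=0$$ to the solutions $$v=ct$$ of $$Mv=0$$. A sympy check (the helper `D` is my notation):

```python
import sympy as sp

t, c = sp.symbols('t c')
f = sp.Function('f')(t)

def D(g):
    """The derivation d/dt."""
    return sp.diff(g, t)

# The identity M∘H = K∘L, applied to a generic test function f
MH_f = D(t*f) - (t*f)/t          # M = d - 1/t applied to H f = t*f
KL_f = t*D(f)                    # K = t applied to L f = f'
assert sp.simplify(MH_f - KL_f) == 0

# H maps each solution u = c of Lu = 0 to v = c*t, which solves Mv = 0
v = c*t
assert sp.simplify(D(v) - v/t) == 0
```
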

### Remark 2

Ø. Ore uses the notation $$M={\text {lcm}}(L,H)H^{-1}=HLH^{-1}$$ to denote the fact of conjugacy to stress its resemblance with the “similarity” in the noncommutative algebra $${\fancyscript{W}}$$. It has its mnemonic advantages, although the formal construction of (noncommutative) field of ratios for $${\fancyscript{W}}$$ requires additional efforts (Ore 1933, p. 487).

We will abbreviate the words “Weyl equivalence” (resp., conjugacy) to $${\fancyscript{W}}$$-equivalence (conjugacy) for simplicity.

### Theorem 1

$${\fancyscript{W}}$$-conjugacy is indeed an equivalence relation: it is reflexive, transitive and symmetric.

### Proof

It is obvious that this relationship is reflexive (it suffices to choose $$H=K=1$$). To prove its transitivity, assume that $$L_1$$ is $${\fancyscript{W}}$$-conjugate with $$L_2$$, and $$L_2$$ with $$L_3$$. This means that there exist operators $$H_i,K_i\in {\fancyscript{W}}$$, $$i=1,2$$, of order $$\leqslant n-1$$ such that $$L_2H_1=K_1L_1$$ and $$L_3H_2=K_2L_2$$. Then $$L_3H_2H_1=K_2K_1L_1$$. To produce a pair $$(H',K')$$ conjugating $$L_1$$ with $$L_3$$, it suffices to define $$H'=H_2H_1\mod L_1$$: the order of this remainder will not exceed $$n-1$$ by construction, and if $$H_2H_1=QL_1+H'$$, then $$L_3H'=(K_2K_1-L_3Q)L_1$$, so one can take $$K'=K_2K_1-L_3Q$$. One has to check that $$\gcd (H',L_1)=1$$, but this is obvious: if $$u$$ is a nontrivial solution of $$L_1u=0$$ and $$\gcd (H_1,L_1)=1$$, then $$v=H_1u$$ is a nontrivial solution of $$L_2v=0$$, hence $$H_2v\ne 0$$. Replacing $$H_2H_1$$ by its remainder modulo $$L_1$$ cannot change the fact that $$H_2H_1u\ne 0$$ for any solution of $$L_1u=0$$.

The symmetry is less trivial, see Ore (1933, Theorem 13). For the reader’s convenience we provide here a short direct proof due to Yu. Berest. It is convenient to formulate it as a separate lemma. $$\square$$

### Lemma 1

For any two operators $$L,M$$ satisfying (14), there exists a pair of operators $$V,W\in {\fancyscript{W}}$$ such that $$LV=WM$$ and $$\gcd (V,M)=1$$.

### Proof

By (12), the condition $$\gcd (L,H)=1$$ implies that there exist $$U,V\in {\fancyscript{W}}$$ such that $$UL+VH=1$$. Multiplying this identity by $$L$$ from the left, we see that $$LVH=(1-LU)L$$, that is, the operator expressed by each side of this identity is divisible (from the right) by both $$H$$ and $$L$$. This means that the operator $$LVH$$ is divisible by $$P={\text {lcm}}(H,L)$$, which in turn has two representations, $$P=MH=KL$$ as in (14). The last divisibility means that $$LVH=WP=WMH$$ in $${\fancyscript{W}}$$. Yet since $${\fancyscript{W}}$$ has no zero divisors (the leading coefficient of any composition is nonzero), we can cancel $$H$$ and arrive at the identity $$LV=WM$$. It is a simple exercise to see that $$\gcd (V,M)=1$$. $$\square$$
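For a concrete instance of the lemma (my own example): take $$L=\partial$$, $$M=\partial -t^{-1}$$, $$H=K=t$$, so that $$MH=KL$$. The Bezout identity $$UL+VH=1$$ holds with $$U=0$$, $$V=t^{-1}$$, and the construction yields the reverse conjugacy $$LV=WM$$ with $$V=W=t^{-1}$$. A sympy check (the helper `D` is my notation):

```python
import sympy as sp

t = sp.symbols('t')
f = sp.Function('f')(t)

def D(g):
    """The derivation d/dt."""
    return sp.diff(g, t)

# Bezout identity U*L + V*H = 1 with U = 0, V = 1/t:  V(H f) = (1/t)*(t*f) = f
assert sp.simplify((sp.S(1)/t)*(t*f) - f) == 0

# The lemma yields the reverse conjugacy  L∘V = W∘M  with V = W = 1/t
LV_f = D(f/t)                    # L = d applied to V f = f/t
WM_f = (D(f) - f/t)/t            # W = 1/t applied to M f = f' - f/t
assert sp.simplify(LV_f - WM_f) == 0
```
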

### 2.3 Nonsingular Operators

An operator $$L\in {\fancyscript{W}}$$ of the form (10) is referred to as nonsingular, if all its coefficients are holomorphic, $$a_i\in {\fancyscript{O}}({\mathbb C},0)$$, and the leading coefficient is invertible, $$a_0(0)\ne 0$$. Nonsingular operators can be reduced by the transformation (3) to a holomorphic (nonsingular) system of first order equations. An immediate conclusion is that the corresponding equation $$Lu=0$$ has only holomorphic solutions, and a fundamental system of solutions $$\{u_k\}_{k=1}^n$$ can always be chosen so that $$u_k(t)=t^{k-1}+\cdots$$ where the dots stand for terms of order greater than $$k-1$$.

### 2.4 Fuchsian Operators

There exists another special subclass of linear operators $$L\in {\fancyscript{W}}$$ with the property that the respective linear equations $$Lu=0$$ enjoy a certain regularity, namely, all their solutions grow moderately when approaching the singular point at the origin. Unlike the general linear systems (1), such operators admit precise algebraic description. It can be given in several equivalent forms.

Note that together with the “basic” derivation $$\partial$$, any element of the form $$a\partial$$, $$0\ne a\in {\fancyscript{M}}$$, is also a derivation of the field $${\fancyscript{M}}$$ (a $${\mathbb C}$$-linear self-map satisfying the Leibniz rule). It can be used as a generator of the algebra $${\fancyscript{W}}$$. We will be mostly interested in the Euler derivation $${\epsilon }=t\partial \in {\fancyscript{W}}$$ with the commutation rule

\begin{aligned} {\epsilon }=t\cdot \partial , \quad {\epsilon }\cdot t^m=t^m\cdot ({\epsilon }+m),\quad \forall m\in {\mathbb Z}, \end{aligned}
(15)

cf. with (9). Though the derivations $$\partial$$ and $${\epsilon }$$ are very simply related, their algebraic nature is radically different. Restricted to the finite-dimensional subspace of polynomials of any finite degree, the standard derivation $$\partial$$ is nilpotent, while the Euler derivation is semisimple ($${\epsilon }$$ is diagonal in the monomial basis, $${\epsilon }(t^m)=mt^m$$).

For any polynomial $$w\in {\mathbb C}[{\epsilon }]$$ in the variable $${\epsilon }$$ denote by $$w^{\scriptscriptstyle [j]}$$, $$j\in {\mathbb Z}$$, the shift of the argument:

\begin{aligned} w\mapsto w^{\scriptscriptstyle [j]},\quad w^{\scriptscriptstyle [j]}({\epsilon })=w({\epsilon }+j),\quad j\in {\mathbb Z}. \end{aligned}
(16)

This operator preserves the degree of the polynomial, and using it one can rewrite the commutation rule (15) as follows,

\begin{aligned} \forall w\in {\mathbb C}[{\epsilon }],\quad \forall j\in {\mathbb Z},\quad wt^j=t^jw^{\scriptscriptstyle [j]}. \end{aligned}
(17)

Substituting $$\partial =t^{-1}{\epsilon }$$ and re-expanding terms, any operator $$L\in {\fancyscript{W}}$$ can be represented in the form

\begin{aligned} L=r_0{\epsilon }^n+r_1{\epsilon }^{n-1}+\cdots +r_{n-1}{\epsilon }+r_n,\quad r_i\in {\fancyscript{M}},\; r_0\ne 0. \end{aligned}
(18)
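The passage between the representations (10) and (18) rests on the identity $$t^n\partial ^n={\epsilon }({\epsilon }-1)\cdots ({\epsilon }-n+1)$$, an immediate consequence of (15). A sympy sketch checking it on the monomials $$t^m$$ (the helper `eps` is my own notation):

```python
import sympy as sp

t, m = sp.symbols('t m')

def eps(expr):
    """The Euler derivation eps = t*d/dt."""
    return t*sp.diff(expr, t)

# Check t^n * d^n = eps*(eps-1)*...*(eps-n+1) on the monomials t^m
f = t**m
for n in range(1, 5):
    lhs = t**n * sp.diff(f, t, n)
    rhs = f
    for k in range(n):
        rhs = eps(rhs) - k*rhs        # apply the factor (eps - k)
    assert sp.simplify(lhs - rhs) == 0
```
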

### Definition 2

An operator $$L$$ is called Fuchsian, if in the representation (18) all coefficients $$r_i$$ are holomorphic and the leading coefficient $$r_0$$ is invertible (nonvanishing):

\begin{aligned} r_0,r_1,\dots , r_n\in {\fancyscript{O}}({\mathbb C}^1,0),\quad r_0(0)\ne 0. \end{aligned}
(19)

An operator is pre-Fuchsian, if it has the form $$hL$$ with a Fuchsian $$L$$ and some nonzero $$h\in {\fancyscript{M}}$$; without loss of generality one may assume that $$h=t^k$$, $$k\in {\mathbb Z}$$.

An operator is called Eulerian, if all coefficients $$r_0,\dots ,r_n\in {\mathbb C}$$ are constant.

### Remark 3

In the classical literature the notion of Fuchsian operators is not defined, only the notion of a (homogeneous) Fuchsian equation of the form $$Lu=0$$ is discussed, see Ince (1944). Clearly, two operators $$L\in {\fancyscript{W}}$$ and $$hL$$, $$h\in {\fancyscript{M}}$$, define the same homogeneous equation. For operators written in the form (18), the corresponding homogeneous equation will be Fuchsian if and only if all ratios $$r_i/r_0\in {\fancyscript{M}}$$ are actually holomorphic at $$t=0$$ for all $$i=1,\dots ,n$$. Our choice may seem to be artificial, yet it is justified by subsequent computations.

We will denote by $${\fancyscript{F}}\subset {\fancyscript{W}}$$ the set of all Fuchsian operators. It is convenient to assume that holomorphically invertible germs, resp., meromorphic germs belong to $${\fancyscript{F}}$$, resp., pre-$${\fancyscript{F}}$$, as “differential operators of zero order”.

A Fuchsian differential equation $$Lu=0$$ with $$L$$ as in (18) can be reduced to a Fuchsian system in the sense of Sect. 1.5 by slightly modifying the computation (3): one has to introduce the new variables as follows, $$x_1=u$$, and then

\begin{aligned} {\epsilon }x_k=x_{k+1},\ k=1,\dots , n-1,\quad {\epsilon }x_{n}=-r_0^{-1}(r_1 x_{n}+r_2x_{n-1}+\cdots +r_n x_1) \end{aligned}
(20)

(recall that $$r_0$$ is invertible hence $$r_0^{-1}\in {\fancyscript{O}}$$), or in the matrix form, $${\epsilon }x=Rx$$, with the holomorphic matrix $$R\in {\text {Mat}}(n,{\fancyscript{O}})$$ of coefficients. This computation explains the relation of two Fuchsian objects of different nature. However, unlike the case of systems, in the case of scalar equations the Fuchsian condition is not only sufficient, but also necessary for the regularity (moderate growth of solutions).

### Theorem 2

[L. Fuchs (1868), see Ilyashenko and Yakovenko (2008, Theorem 19.20)] The operator $$L\in {\fancyscript{W}}$$ is pre-Fuchsian if and only if all solutions of the equation $$Lu=0$$ and all their derivatives grow at most polynomially in any sector with the vertex at the origin in the sense (8).

### 2.5 First Results on $${\fancyscript{W}}$$-Classification

The initial results on $${\fancyscript{W}}$$-equivalence are completely parallel to $${\fancyscript{G}}_h$$-classification of nonsingular systems and $${\fancyscript{G}}$$-classification of regular systems: even the ideas of the proofs remain the same.

### Theorem 3

A nonsingular operator is $${\fancyscript{W}}$$-conjugate to the operator $$M=\partial ^n$$ by a nonsingular operator $$H$$ of order $$n-1$$.

### Proof

Any nonsingular equation $$Lu=0$$ of order $$n$$ always admits $$n$$ linearly independent solutions of the form $$u_k(t)=t^{k-1}(1+\cdots )$$, $$k=1,\dots , n$$. Indeed, one should look for solutions of the companion system (3) with a suitable initial condition $$x_k(0)=1$$, $$x_j(0)=0$$ for all $$j\ne k$$.

A linear operator $$H$$ transforming solutions $$v_k=t^{k-1}$$ of the equation $$\partial ^n=0$$ to solutions of the equation $$Lu=0$$ by the formulas (6) can be obtained by the method of indeterminate coefficients: $$H=h_1\partial ^{n-1}+\cdots +h_{n-1}\partial +h_n$$. The equations $$Hv_k=u_k$$, $$k=1,\dots , n$$ correspond to a system of linear algebraic equations over $${\fancyscript{O}}$$ for the unknown coefficients $$h_i$$:

\begin{aligned} \begin{pmatrix} h_{n}&h_{n-1}&\cdots&h_1 \end{pmatrix} \begin{pmatrix} 1&t&t^2&\cdots&t^{n-1} \\ &1&2t&\cdots&(n-1)t^{n-2} \\ &&2&&\vdots \\ &&&\ddots&\vdots \\ &&&&(n-1)! \end{pmatrix} =\begin{pmatrix} u_1&u_2&\cdots&u_n \end{pmatrix} \end{aligned}

The matrix $$J$$ of coefficients, the companion matrix of the tuple of solutions $$v_1=1$$, $$v_2=t,\dots ,v_n=t^{n-1}$$, is holomorphic and invertible (it is upper triangular with nonzero diagonal entries). A simple inspection shows that the leading coefficient $$h_1$$ cannot vanish at $$t=0$$, hence the operator $$H$$ is nonsingular. $$\square$$

A minor modification of this argument proves the following general result.

### Theorem 4

Any (pre-)Fuchsian operator is $${\fancyscript{W}}$$-equivalent to an Euler operator from $${\mathbb C}[{\epsilon }]$$.

### Proof

Let, as before, $$J$$ denote the Euler companion matrix of $$n$$ linearly independent solutions $$u_1,\dots ,u_n$$ of the equation $$Lu=0$$: unlike the usual companion matrix, it is obtained by applying the iterated Euler derivation $${\epsilon }$$ instead of $$\partial$$ to the functions $$u_i$$:

\begin{aligned} J=J(t)=\begin{pmatrix}1 \\ {\epsilon }\\ \vdots \\ {\epsilon }^{n-1} \end{pmatrix}\cdot \begin{pmatrix} u_1&u_2&\ldots&u_n\end{pmatrix} = \begin{pmatrix} u_1&u_2&\ldots&u_n \\ {\epsilon }u_1&{\epsilon }u_2&\ldots&{\epsilon }u_n \\ \vdots&\vdots&\ddots&\vdots \\ {\epsilon }^{n-1}u_1&{\epsilon }^{n-1}u_2&\ldots&{\epsilon }^{n-1}u_n \end{pmatrix} \end{aligned}

Unlike in the nonsingular case, we cannot guarantee anymore that $$J(t)$$ is holomorphic and invertible: its entries are in general multivalued and grow moderately at the origin (Ince 1944). The companion matrix $$J(t)$$ has a monodromy factor $$C\in {{\text {GL}}}(n,{\mathbb C})$$: $$\Delta J(t)=J(t)C$$, exactly as in (7), which applies to each row of the matrix $$J$$. Yet one can always find an Euler equation whose tuple of solutions $$v=(v_1,\dots ,v_n)$$ exhibits exactly the same monodromy factor $$C$$, see Ilyashenko and Yakovenko (2008, Proposition 19.29): $$\Delta v=vC$$. The corresponding linear system of algebraic equations takes the form

\begin{aligned} \begin{pmatrix} h_{n}&h_{n-1}&\cdots&h_1 \end{pmatrix} J(t)=\begin{pmatrix} v_1&v_2&\cdots&v_n \end{pmatrix} \end{aligned}

The solution is given by the product $$\begin{pmatrix} h_{n}&h_{n-1}&\cdots&h_1 \end{pmatrix}=\begin{pmatrix} v_1&v_2&\cdots&v_n \end{pmatrix}\cdot J^{-1}(t)$$. This product is single-valued: after analytic continuation around the origin we have

\begin{aligned} \Delta \begin{pmatrix} h_{n}&\cdots&h_1 \end{pmatrix} =\begin{pmatrix} v_1&\cdots&v_n \end{pmatrix} C\cdot C^{-1}J^{-1}(t)=\begin{pmatrix} h_{n}&\cdots&h_1 \end{pmatrix}. \end{aligned}

Because of the moderate growth assumption, the coefficients $$h_j$$ of the conjugating operator $$H$$ must be meromorphic germs at the origin, thus $$H\in {\fancyscript{W}}$$. $$\square$$

### Remark 4

There is no reason to expect that the operator $$H$$ conjugating two Fuchsian operators is necessarily (pre-)Fuchsian. Indeed, let $$L$$ be any Fuchsian operator and $$H$$ an irregular conjugacy. Applying $$H$$ to a basis tuple of solutions $$u_1,\dots ,u_n$$ of $$Lu=0$$, we obtain another tuple of functions $$v_i=Hu_i$$ which also grow moderately and have the same monodromy as the $$u_i$$. By the Fuchs theorem, they satisfy a Fuchsian equation $$Mv=0$$. Thus the two Fuchsian operators $$L,M$$ are conjugated by an irregular operator $$H$$ (unique for reasons of order/dimension).

In other words, the (general) Weyl classification of (pre)-Fuchsian operators coincides with the classification of their monodromy matrices, very much like the meromorphic gauge classification of linear systems (1).

## 3 Fuchsian Equivalence

It appears that a comprehensive analog of the holomorphic gauge equivalence between Fuchsian linear systems is the Fuchsian equivalence of Fuchsian operators: modulo technical details, this equivalence means the Weyl conjugacy (14) by a Fuchsian operator $$H$$ subject to certain nondegeneracy constraints. We start by developing the formal theory of such equivalence via noncommutative formal power series.

### 3.1 Noncommutative Taylor Expansions for Fuchsian Operators

Together with the representation of differential operators from the ring $${\fancyscript{W}}={\fancyscript{M}}[{\epsilon }]$$ as polynomials in $${\epsilon }$$ with coefficients in $${\fancyscript{M}}$$, we can expand them in convergent noncommutative Laurent series in the variable $$t\in ({\mathbb C}^1,0)$$ with (right) coefficients from the (commutative) ring $${\mathbb C}[{\epsilon }]$$. Any operator $$L\in {\fancyscript{W}}$$ of order $$n={\text {ord}}L$$ can be expanded in the form

\begin{aligned} L=\sum _{k=-N}^{+\infty }t^kp_k({\epsilon }),\quad \max _k\deg _{{\epsilon }} p_k=n, \quad N<+\infty . \end{aligned}

The operator is Fuchsian if and only if all powers are nonnegative and the leading coefficient $$p_0$$ is of the maximal degree: $$L\in {\fancyscript{F}}$$ if and only if

\begin{aligned} L=\sum _{k=0}^\infty t^k\,p_k({\epsilon }),\quad p_k\in {\mathbb C}[{\epsilon }],\ \deg p_k\leqslant n,\quad \deg p_0=n. \end{aligned}
(21)

The differential operator $$p_0\in {\mathbb C}[{\epsilon }]\subset {\fancyscript{F}}$$ is called the Euler part of $$L$$, or its Eulerization (in analogy with linearization), and is denoted by $${{\fancyscript{E}}}(L)$$.

Very informally, an operator with holomorphic coefficients can be considered as a small perturbation of its Eulerization. The Fuchsian condition means that this perturbation is nonsingular, i.e., it does not increase the order of the Euler part, in the same way as the nonsingularity condition means that the operator can be considered as a small nonsingular perturbation of the operator $$p(\partial )$$.

The key tool of this paper is a systematic use of the Taylor expansion (21), in exactly the same way the theory of formal series with matrix coefficients of the form $$H(t)=\sum _{k=0}^\infty t^kH_k$$, $$H_k\in {\text {Mat}}(n,{\mathbb C})$$, is used in the theory of formal normal forms of vector fields (Ilyashenko and Yakovenko 2008, §4 and §16). Note the difference in the algebraic nature of the noncommutativity: in the matrix case the coefficients $$H_k$$ commute with the variable $$t$$ but in general do not commute between themselves. In the operator case the polynomial coefficients $$p_k\in {\mathbb C}[{\epsilon }]$$ commute between themselves but do not commute with $$t$$.
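The commutation rules behind these expansions can be checked directly in a computer algebra system. The sketch below (Python with sympy; the shift $$k=3$$ and the polynomial $$p({\epsilon })={\epsilon }^2+1$$ are arbitrary sample choices) verifies the identity $$p({\epsilon })\,t^k=t^k\,p({\epsilon }+k)$$ on an undetermined germ $$f$$.

```python
import sympy as sp

t = sp.symbols('t')
f = sp.Function('f')(t)
k = 3                              # an arbitrary sample shift

def eps(g):                        # Euler derivation  eps = t*d/dt
    return t * sp.diff(g, t)

# Basic rule: eps(t^k f) = t^k (eps + k) f.
assert sp.simplify(eps(t**k * f) - t**k * (eps(f) + k*f)) == 0

# Consequence for the sample polynomial p(eps) = eps^2 + 1:
#   p(eps) t^k f = t^k p(eps + k) f,  p(eps+k) = eps^2 + 2k eps + k^2 + 1.
lhs = eps(eps(t**k * f)) + t**k * f
rhs = t**k * (eps(eps(f)) + 2*k*eps(f) + (k**2 + 1)*f)
assert sp.simplify(lhs - rhs) == 0
```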

Together with the convergent noncommutative Taylor series, it is convenient to introduce the class of formal Fuchsian operators.

### Definition 3

A formal Fuchsian operator is a formal series of the form (21) without any convergence assumption. The set of formal Fuchsian operators is denoted by $$\hat{\fancyscript{F}}$$.

### Remark 5

For any two Fuchsian operators $$L,M\in {\fancyscript{F}}$$ their composition is again a Fuchsian operator of order $${\text {ord}}L+{\text {ord}}M$$. If $${\text {ord}}M\leqslant {\text {ord}}L$$, then the incomplete ratio $$Q$$ as in (11) is a Fuchsian operator of order $${\text {ord}}L-{\text {ord}}M$$. The same applies to $$\hat{\fancyscript{F}}$$. This follows from direct inspection of the division algorithm.

However, the set $${\fancyscript{F}}$$ is not a subalgebra of $${\fancyscript{W}}$$: the sum of two Fuchsian operators may well be non-Fuchsian. Hence the remainder $$R$$ as in (11) after the incomplete division may well turn out to be non-Fuchsian (its leading coefficient may vanish). Yet for any two given Fuchsian operators $$L,M$$ of orders $$n>m$$ one can construct a relaxed division with remainder $$L=Q'M+R'$$ with $${\text {ord}}Q'=n-m$$, $${\text {ord}}R'=m$$ and $$Q',R'$$ Fuchsian. Indeed, it suffices to modify the standard division with remainder $$L=QM+R$$ with $${\text {ord}}R\leqslant m-1$$ (assuming $$Q,R$$ have holomorphic coefficients) and set $$Q'=Q-1$$, $$R'=M+R$$: the latter operators are automatically Fuchsian.

### Definition 4

Two operators $$L,M\in {\fancyscript{W}}$$ of the same order $$n$$ are called Fuchsian equivalent (or $${\fancyscript{F}}$$-equivalent), if there exist two Fuchsian operators $$H,K\in {\fancyscript{F}}$$ such that $$MH=KL$$ (exactly as in Definition 1), but with the additional property that the Euler parts of $$H$$ and $$L$$ are mutually prime (have no common roots), i.e., $$\gcd ({{\fancyscript{E}}}(H),{{\fancyscript{E}}}(L))=1\in {\mathbb C}[{\epsilon }]$$.

Two formal Fuchsian operators $$L,M\in \hat{\fancyscript{F}}$$ are called formally $${\fancyscript{F}}$$ -equivalent ($$\hat{\fancyscript{F}}$$-equivalent in short) if there exist $$H,K\in \hat{\fancyscript{F}}$$ such that $$MH=KL$$ and the Euler parts of $$H,L$$ are mutually prime.

We expect that the Fuchsian classification (and its formal counterpart) for arbitrary operators from $${\fancyscript{W}}$$ will be a very challenging problem with the Stokes phenomenon (Ilyashenko and Yakovenko 2008, §20) manifesting itself in a new way. However, everywhere below we will deal only with the $${\fancyscript{F}}$$ -equivalence between Fuchsian operators.

Note that we dropped the condition on the order of $$H,K$$ which can now be higher than $$n$$. Besides, in this definition we replaced the condition $$\gcd (H,L)=1\in {\fancyscript{W}}$$ from (14) by the stronger condition on the mutual primality of the respective Eulerizations.

### Theorem 5

$$\hat{\fancyscript{F}}$$-conjugacy is indeed an equivalence relation: it is reflexive, symmetric and transitive.

Reflexivity is obvious: each operator $$L$$ is $${\fancyscript{F}}$$-equivalent to itself by the admissible conjugacy $$H=1$$ (a zero order Fuchsian operator).

The transitivity is even simpler compared to the proof of Theorem 1: we do not replace the composition $$H_2H_1$$ of $${\fancyscript{F}}$$-conjugacies, which is always Fuchsian, by its remainder $$\mod L_1$$, which may be non-Fuchsian.

However, the proof of the symmetry, given in Lemma 1, relies on the possibility of representing the identical operator $$1$$ by a combination $$1=UL+VH$$ with Fuchsian operators $$U,V\in \hat{\fancyscript{F}}$$. A simple example shows that even under the stronger assumption $$\gcd ({{\fancyscript{E}}}(L),{{\fancyscript{E}}}(H))=1$$, this representation is not always possible with operators of the minimal order $$n-1$$.

To correct the situation, one has to allow operators of above-the-minimal order.

### 3.3 Fuchsian Invertibility

It will be convenient to introduce the following notation:

\begin{aligned} \forall L,H\in {\fancyscript{F}}\quad \gcd \nolimits _0(L,H)=\gcd ({{\fancyscript{E}}}(L),{{\fancyscript{E}}}(H))\in {\mathbb C}[{\epsilon }]. \end{aligned}
(22)

Using this notation, the second condition of $${\fancyscript{F}}$$-equivalence can be shortened to $$\gcd _0(L,H)=1$$.

As follows from the proof of Lemma 1, the key step is to show that if $$H$$ is a Fuchsian operator such that $$\gcd _0(L,H)=1$$, then there exist two Fuchsian operators $$U,V\in {\fancyscript{F}}$$ such that $$UL+VH=1\in {\fancyscript{F}}$$ and $$\gcd _0(V,L)=1$$. Recall that if $$p,q\in {\mathbb C}[{\epsilon }]$$ are two relatively prime polynomials of respective degrees $$n,m$$, then the linear Sylvester map from $${\mathbb C}^m\times {\mathbb C}^n$$ to $${\mathbb C}^{m+n}$$

\begin{aligned} \varvec{S}=\varvec{S}_{p,q}:(u,v)\mapsto pu+qv,\quad \deg u\leqslant m-1,\;\deg v\leqslant n-1, \end{aligned}
(23)

is injective and surjective (here we identify $${\mathbb C}^m$$ and $${\mathbb C}^{n}$$ with the linear spaces of polynomials of degree $$\leqslant m-1$$, resp., $$\leqslant n-1$$). In particular, any equation in $${\mathbb C}[{\epsilon }]$$ of the form

\begin{aligned} up+vq=r,\quad \deg r\leqslant \deg p+\deg q-1, \end{aligned}

is solvable with respect to $$u,v$$ constrained as above.
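The solvability of $$up+vq=r$$ under these degree constraints can be sketched with sympy's extended Euclidean algorithm; the three polynomials below are arbitrary sample data, not taken from the text.

```python
import sympy as sp

x = sp.symbols('x')                # plays the role of the symbol eps

# Sample data (illustration only): relatively prime p, q and a target r.
p = x**2 - 1                       # n = deg p = 2
q = x - 2                          # m = deg q = 1
r = x**2 + 3*x + 1                 # deg r <= n + m - 1

# Extended Euclidean algorithm: s*p + w*q = g with g = gcd(p, q) = 1.
s, w, g = sp.gcdex(p, q, x)
assert sp.simplify(g - 1) == 0

# Scale the Bezout identity by r, then reduce u modulo q to enforce
# deg u <= m - 1; the compensating multiple of p is absorbed into v.
quo, u = sp.div(sp.expand(s*r), q, x)
v = sp.expand(w*r + quo*p)

assert sp.expand(u*p + v*q - r) == 0
assert sp.degree(u, x) <= sp.degree(q, x) - 1
assert sp.degree(v, x) <= sp.degree(p, x) - 1
```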

The following result is the analog of the implicit function theorem for differential operators.

### Lemma 2

If $$L,M\in {\fancyscript{F}}$$ are two Fuchsian operators with $$\gcd _0(L,M)=1$$, then for any operator $$R=\sum t^k r_k({\epsilon })$$ of order $$\leqslant {\text {ord}}L+{\text {ord}}M-1$$ with holomorphic coefficients the equation

\begin{aligned} UL+VM=R \end{aligned}

is solvable with respect to the operators $$U,V$$ of orders $${\text {ord}}M-1$$ and $${\text {ord}}L-1$$ respectively, also with holomorphic coefficients.

Note that we do not assume $$R$$ Fuchsian, nor claim the Fuchsianity of $$U$$ and $$V$$.

### Proof

The proof is achieved by inductive determination of the coefficients of the unknown operators $$U,V$$.

Substitute the expansions for $$L=\sum _0^\infty t^k p_k$$ and $$M=\sum _0^\infty t^kq_k$$ and the unknown operators $$U=\sum _{0}^\infty t^k u_k$$, $$V=\sum _0^\infty t^k v_k$$, $$p_k,q_k,u_k,v_k\in {\mathbb C}[{\epsilon }]$$ into the equation $$UL+VM=R$$:

\begin{aligned}&(u_0+tu_1+t^2u_2+\cdots )(p_0+tp_1+t^2p_2+\cdots ) \\&\quad +(v_0+tv_1+\cdots )(q_0+tq_1+\cdots )=r_0+tr_1+t^2r_2+\cdots \end{aligned}

Using the commutation rules (16), we reduce this operator identity to an infinite series of identities in $${\mathbb C}[{\epsilon }]$$,

\begin{aligned} u_0p_0+v_0q_0&=r_0, \\ u_0^{[1]} p_1+u_1p_0+v_0^{[1]} q_1+v_1q_0&=r_1, \\ u_0^{[2]}p_2+u_1^{[1]}p_1+u_2p_0+v_0^{[2]}q_2+v_1^{[1]}q_1+v_2q_0&=r_2, \\ &\ \ \vdots \\ \cdots +u_k p_0+v_kq_0&=r_k,\quad \forall k\geqslant 0. \end{aligned}

This system has a “triangular” form: each left hand side is the sum of the term $$u_kp_0+v_kq_0={\varvec{S}}(u_k,v_k)$$ and terms involving shifted polynomials $$u_i^{[j]}$$, $$v_i^{[j]}$$ with $$i<k$$, $$j\leqslant k$$. By the relative primality of $$p_0,q_0$$, for any combination of the previously determined coefficients the equation number $$k$$ is uniquely solvable with respect to polynomials $$u_k,v_k$$ with $$\deg u_k\leqslant \deg q_0-1$$, $$\deg v_k\leqslant \deg p_0-1$$. $$\square$$

### Remark 6

The proof of the convergence of the series for $$U$$ and $$V$$ can be obtained directly by controlling the growth of the polynomial coefficients.

However, a simpler argument works. Expanding $$U,V$$ as polynomials of $${\epsilon }$$ with analytic coefficients from $${\fancyscript{O}}({\mathbb C},0)$$,

\begin{aligned} U=\sum _k a_k(t){\epsilon }^k,\quad V=\sum _j b_j(t){\epsilon }^j, \end{aligned}

we see that the operator equation $$UL+VM=R$$ reduces to a system of linear nonhomogeneous algebraic equations with respect to the unknown coefficients $$a_k(t),b_j(t)$$: symbolically, this system can be written as $$C(t)z=f(t)$$, where $$C(t)$$ is an $$(n+m)\times (n+m)$$-matrix with holomorphic entries (produced from the coefficients of the operators $$L$$ and $$M$$ and their $${\epsilon }$$-derivatives), and $$f(t)$$ is an $$(n+m)$$-dimensional holomorphic vector function.

One can easily see that the condition $$\gcd _0(L,M)=1$$ implies that the matrix $$C(0)$$ is nondegenerate and the system has a holomorphic solution. The formal computation amounts to the formal inversion of the corresponding matrix $$C(t)$$ without even explicitly writing it down.

Unfortunately, the goal of solving the equation $$UL+VH=1$$ in the class of Fuchsian operators cannot be achieved using only this Lemma: indeed, there is no way to ensure that the polynomial $$v_0={{\fancyscript{E}}}(V)$$ has the maximal degree equal to $${\text {ord}}V$$. The way out is to look for a solution of higher order.

We look for a Fuchsian solution of the equation $$UL+VH=1$$ in the class of operators $${\text {ord}}U\leqslant {\text {ord}}H=m$$, $${\text {ord}}V\leqslant {\text {ord}}L=n$$ as follows,

\begin{aligned} U=H+U_{m-1},\ V=-L+V_{n-1},\quad {\text {ord}}U_{m-1}\leqslant m-1,\;{\text {ord}}V_{n-1}\leqslant n-1. \end{aligned}

Substituting these formulas into the original equation, we transform it to the equation

\begin{aligned} U_{m-1}L+V_{n-1}H=1-[H,L], \quad [H,L]=HL-LH. \end{aligned}
(24)

The commutator $$[H,L]$$ of the two Fuchsian operators possesses two obvious properties. It is an operator of order no greater than $${\text {ord}}L+{\text {ord}}H-1$$ (the highest order terms in the expansion (18), the symbols of the operators, cancel each other when computing the commutator). On the other hand, its Euler part vanishes, since the Euler parts, being polynomials in the commutative ring $${\mathbb C}[{\epsilon }]$$, commute with each other.

Thus the equation is solvable by virtue of Lemma 2, and

\begin{aligned} {{\fancyscript{E}}}(U_{m-1}){{\fancyscript{E}}}(L)+{{\fancyscript{E}}}(V_{n-1}){{\fancyscript{E}}}(H)=1\in {\mathbb C}[{\epsilon }]. \end{aligned}

In other words, $$\gcd _0(V_{n-1},L)=1$$. The operator $$V=-L+V_{n-1}$$ is Fuchsian (since $$L$$ is Fuchsian of order $$n$$), and $$\gcd _0(V,L)=\gcd _0(V_{n-1},L)=1$$.

This completes the proof of the symmetry of the $${\fancyscript{F}}$$-equivalence.

## 4 Formal $${\fancyscript{F}}$$-Classification of Fuchsian Operators

This and the next section contain the main results of the paper. They are established on the formal level, yet at the end we will show that any $$\hat{\fancyscript{F}}$$-conjugacy between convergent Fuchsian operators in fact converges.

### 4.1 Nonresonant Case: Eulerization

We start by establishing an analog of the linearization theorem for nonresonant systems, cf. with the second line in Table 1.

### Definition 5

A Fuchsian operator $$L\in {\fancyscript{F}}$$ is nonresonant, if no two roots of $${{\fancyscript{E}}}(L)\in {\mathbb C}[{\epsilon }]$$ differ by a positive integer (multiple roots are allowed).

### Proposition 1

A nonresonant Fuchsian operator is $${\fancyscript{F}}$$-equivalent to its Euler part.

### Proof

Consider the expansion of the operator: $$L=\sum _{j=0}^\infty t^jp_j({\epsilon })$$, $$p_0={{\fancyscript{E}}}(L)$$. We look for an operator $$H=\sum t^j h_j({\epsilon })$$ which would solve (together with some other Fuchsian operator $$K=\sum t^j k_j({\epsilon })\in {\fancyscript{F}}$$) the operator equation $$p_0({\epsilon })H=KL$$. After substituting the expansions and using the commutation rule (15), we obtain in the left hand side the operator

\begin{aligned} p_0({\epsilon })H=p_0 h_0+tp_0^{[1]}h_1+\cdots +t^jp_0^{[j]} h_j+\cdots , \end{aligned}

cf. with the notation (16)–(17). In the right hand side the expansion for

\begin{aligned} KL=(k_0+tk_1+t^2k_2+\cdots )(p_0+tp_1+t^2p_2+\cdots ) \end{aligned}

will have a more complicated form: the term proportional to $$t^j$$ has the form

\begin{aligned} t^j(k_jp_0+k_{j-1}^{[1]}p_1+k_{j-2}^{[2]}p_2+\cdots +k_0^{[j]} p_j). \end{aligned}

The operator equation thus splits into an infinite number of polynomial equations involving the known polynomials $$p_j$$ and unknown $$h_j,k_j$$ as follows,

\begin{aligned} p_0h_0&=k_0p_0, \nonumber \\ p_0^{[1]} h_1&=k_1p_0+k_0^{[1]}p_1, \nonumber \\ p_0^{[2]}h_2&=k_2p_0+k_1^{[1]}p_1+k_0^{[2]}p_2, \nonumber \\ &\ \ \vdots \nonumber \\ p_0^{[j]}h_j&=k_jp_0+k_{j-1}^{[1]}p_1+\cdots +k_0^{[j]}p_j, \nonumber \\ &\ \ \vdots \end{aligned}
(25)

This system can be solved inductively: on the first step we choose $$h_0=k_0$$ to be any polynomial of degree $$n-1$$ relatively prime to $$p_0$$. The remaining equations all have the common structure:

\begin{aligned} p_0^{[j]}h_j-p_0 k_j=u_j, \end{aligned}
(26)

where $$u_j\in {\mathbb C}[{\epsilon }]$$ is a polynomial of degree $$\leqslant 2n-1$$ built from the already obtained polynomials $$k_0,\dots ,k_{j-1}$$ and known $$p_1,\dots ,p_j$$.

If $$L$$ is nonresonant, no two roots of $$p_0$$ differ by a positive integer $$j$$, hence $$\gcd (p_0,p_0^{[j]})=1$$ for all $$j=1,2,\dots$$ and any such equation is (uniquely) solvable by a suitable pair $$(h_j,k_j)$$ of polynomials of degree $$\leqslant n-1$$. Thus the entire infinite system admits a formal solution $$(H,K)$$.

It remains to show that if the series for $$L=\sum t^jp_j$$ was convergent, so will be the series for $$H$$ and $$K$$. This can be done by direct estimates, yet we give a general proof avoiding all computations later, in Sect. 6. $$\square$$
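The role of the nonresonance condition can be illustrated with sympy: for a sample polynomial with roots $$0$$ and $$1/2$$ every integer shift stays relatively prime to $$p_0$$, while for roots $$0$$ and $$1$$ the shift by $$j=1$$ produces a common factor (both polynomials are arbitrary illustrations, not taken from the text).

```python
import sympy as sp

x = sp.symbols('x')                # plays the role of eps

def shift(p, j):                   # the shifted polynomial p^{[j]}(eps) = p(eps + j)
    return sp.expand(p.subs(x, x + j))

# Nonresonant sample: the roots 0 and 1/2 never differ by a positive
# integer, so gcd(p0, p0^{[j]}) = 1 for every j >= 1.
p0 = x * (x - sp.Rational(1, 2))
assert all(sp.gcd(p0, shift(p0, j), x) == 1 for j in range(1, 6))

# Resonant sample: the roots 0 and 1 differ by 1, and the shift j = 1
# produces a common factor, so the homological equation may be unsolvable.
q0 = x * (x - 1)
assert sp.gcd(q0, shift(q0, 1), x) == x   # common factor x
```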

### 4.2 $${\fancyscript{F}}$$-Normal Form and Apparent Singularities

Some properties of solutions can be easily described in terms of $${\fancyscript{F}}$$-equivalence. Recall that a singular point of a differential equation is called apparent, if all solutions of this equation are holomorphic at this point.

### Proposition 2

A Fuchsian operator has only meromorphic solutions if and only if it is $${\fancyscript{F}}$$-equivalent to an Euler operator $$L={{\fancyscript{E}}}(L)=p_0({\epsilon })$$ with integer pairwise different roots, $$p_0({\epsilon })=\prod _{i=1}^n ({\epsilon }-\lambda _i)$$, $$\lambda _i\in {\mathbb Z}$$, $$\lambda _i\ne \lambda _k$$ for $$i\ne k$$.

A Fuchsian operator has only holomorphic solutions if and only if it is $${\fancyscript{F}}$$-equivalent to an Euler operator as above, with nonnegative pairwise distinct roots, $$\lambda _i\in {\mathbb Z}_+$$.

### Proof

In one direction both statements are obvious. We show that Fuchsian operators with only meromorphic (resp., holomorphic) solutions are $${\fancyscript{F}}$$-equivalent to an Euler equation as above.

One can easily show that any $$n$$-dimensional $${\mathbb C}$$-linear subspace $$\ell$$ in $${\fancyscript{M}}({\mathbb C},0)$$ (resp., in $${\fancyscript{O}}({\mathbb C},0)$$) admits a basis of germs of the form $$f_i=t^{\lambda _i}u_i(t)$$ with pairwise different integer (resp., nonnegative integer) powers $$\lambda _i$$, $$u_i\in {\fancyscript{O}}({\mathbb C},0)$$ and $$u_i(0)=1$$. Indeed, we can start with any $${\mathbb C}$$-basis $$f_1,\dots ,f_n$$ of $$\ell$$ and normalize the basis functions so that each has a monic leading term $$t^{\lambda _i}(1+\cdots )$$. If two powers in the initial collection coincide, $$\lambda _i=\lambda _k$$, then the difference $$f_i-f_k$$ (which cannot be identically zero by linear independence) has the leading term proportional to $$t^{\mu }$$ with $$\lambda _i=\lambda _k<\mu \in {\mathbb Z}$$, and we replace $$f_i$$ by this difference. Repeating this procedure finitely many times, one can always achieve the situation where all $$\lambda _i$$ are pairwise different.

Now we construct explicitly the Fuchsian operator $$H=\sum t^jh_j({\epsilon })$$ which would transform the monomials $$t^{\lambda _i}$$, $$i=1,\dots ,n$$, to the functions $$c_if_i$$ for suitable coefficients $$c_i\in {\mathbb C}$$. Note that each monomial $$t^{\lambda _i}$$ is an eigenfunction for any Euler operator, in particular, $$h_j({\epsilon })t^{\lambda _i}=h_j(\lambda _i)t^{\lambda _i}$$, and therefore

\begin{aligned} Ht^{\lambda _i}=\varphi _i(t)t^{\lambda _i},\quad \varphi _i(t)=\sum _{j\geqslant 0} t^jh_j(\lambda _i). \end{aligned}

The equations $$Ht^{\lambda _i}=t^{\lambda _i}(c_i+c_{i1}t+c_{i2}t^2+\cdots )$$ are thus transformed to the infinite number of interpolation problems,

\begin{aligned} h_0(\lambda _i)=c_i, \quad h_j(\lambda _i)=c_{ij},\quad i=1,\dots ,n,\quad j=1,2,\dots \end{aligned}

Such problems are always solvable by polynomials $$h_j\in {\mathbb C}[{\epsilon }]$$ of degree $$\leqslant n-1$$, and since $$c_i=h_0(\lambda _i)\ne 0$$, we have $$\gcd (h_0,p_0)=1$$. By a suitable (generic) choice of the constants $$c_i\ne 0$$, one may guarantee that $$\deg h_0=n-1$$, that is, $$H$$ is indeed a Fuchsian operator, as required for the $${\fancyscript{F}}$$-equivalence. $$\square$$
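The interpolation step is ordinary Lagrange interpolation; below is a sympy sketch with arbitrary sample exponents $$\lambda _i$$ and nonzero values $$c_i$$ (illustration only).

```python
import sympy as sp

x = sp.symbols('x')

# Sample data (illustration only): n = 3 pairwise distinct exponents
# lambda_i and nonzero target values c_i.
lams = [0, 2, 5]
cs = [1, 3, -2]

# Lagrange interpolation yields the unique h0 of degree <= n - 1.
h0 = sp.interpolate(list(zip(lams, cs)), x)
assert sp.degree(h0, x) <= len(lams) - 1
assert all(h0.subs(x, l) == c for l, c in zip(lams, cs))

# Since all c_i are nonzero, h0 shares no root with p0 = prod(x - lambda_i).
p0 = sp.expand(sp.prod([x - l for l in lams]))
assert sp.gcd(p0, h0, x) == 1
```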

Note that in both cases the normal form is maximally resonant: all differences between the roots of the Euler part are integer.

### Remark 7

This result shows to what extent the $${\fancyscript{F}}$$-equivalence is finer than the $${\fancyscript{W}}$$-equivalence. Indeed, having trivial monodromy, all operators with only meromorphic solutions are $${\fancyscript{W}}$$-equivalent to the same Euler operator $$t^{n}\partial ^n={\epsilon }({\epsilon }-1)\cdots ({\epsilon }-n+1)$$. On the other hand, two different Euler operators are never $${\fancyscript{F}}$$-equivalent: if $$\gcd (p_0,h_0)=1$$, then the identity $$q_0h_0=k_0p_0$$ in $${\mathbb C}[{\epsilon }]$$, the first line of (25), implies that $$p_0=q_0$$ and $$h_0=k_0$$.

### 4.3 Resonant Case: Homological Equation and its Solvability

If some of the roots of the Euler part $$p_0$$ differ by a natural number, then the corresponding equation (26) may become unsolvable, and in general transforming a resonant Fuchsian operator $$L\in {\fancyscript{F}}$$ to its Euler part $${{\fancyscript{E}}}(L)\in {\mathbb C}[{\epsilon }]$$ is impossible, see Example 2 below. However, one can still use the $${\fancyscript{F}}$$-equivalence to simplify Fuchsian operators.

If a Fuchsian operator $$H=\sum t^jh_j({\epsilon })$$ conjugates $$L$$ with another operator $$M=\sum t^j q_j({\epsilon })\in {\fancyscript{F}}$$, then the left hand side of the identity $$p_0({\epsilon })H=KL$$ should be replaced by

\begin{aligned} MH&=(p_0+tq_1+t^2q_2+\cdots )(h_0+th_1+t^2h_2+\cdots ) \nonumber \\&=p_0h_0+t(q_1h_0+p_0^{[1]}h_1)+\cdots \nonumber \\&\quad + t^j(q_j h_0+q_{j-1}^{[1]}h_1+\cdots +p_0^{[j]}h_j)+\cdots , \end{aligned}
(27)

and, accordingly, the Eq. (26) should be replaced by the equations

\begin{aligned} p_0^{[j]}h_j-p_0 k_j+q_j h_0= v_j,\quad j=1,2,\dots , \end{aligned}
(28)

where, as before, $$v_j\in {\mathbb C}[{\epsilon }]$$ is a polynomial of degree $$\leqslant 2n-1$$ formed by (possibly shifted) combinations of $$q_i,h_i,k_i$$ with smaller indices $$i<j$$ and of $$p_1,\dots ,p_j$$.

First, we use the fact that although some of the equations (28) may be unsolvable, they are always solvable for all sufficiently large orders $$j$$.

### Proposition 3

Let $$L=p_0+tp_1+\cdots \in {\fancyscript{F}}$$ be a Fuchsian operator and $$N$$ the maximal natural difference between the roots of $$p_0\in {\mathbb C}[{\epsilon }]$$.

Then $$L$$ is $${\fancyscript{F}}$$-equivalent to the polynomial operator $$M$$ obtained by truncation of the Taylor series at the order $$N$$,

\begin{aligned} M=\sum _{j=0}^N t^jp_j({\epsilon })=\sum _{k=0}^n b_k(t){\epsilon }^{n-k}\in {\mathbb C}[t,{\epsilon }] \end{aligned}

with polynomial coefficients $$b_k\in {\mathbb C}[t]$$ of degree $$\deg _t b_k\leqslant N$$, obtained by truncation of the analytic coefficients $$a_k\in {\fancyscript{O}}({\mathbb C},0)$$ of the initial operator $$L$$ at the order $$N$$.

### Proof

First we find a pair of operators $$H_0,K_0\in {\fancyscript{W}}$$ of order $$n-1$$ with holomorphic coefficients, which almost conjugate $$L$$ with $$M$$, of the form $$H_0=1+\sum _{j>N}t^j h_j$$, $$K_0=1+\sum _{j>N} t^j k_j$$, so that $$MH_0=K_0L$$. Substituting these expansions into the equations (28), we see that all equations of order $$j=0,1,\dots ,N$$ are satisfied automatically if we set $$q_j=p_j$$ and $$h_j=k_j=0$$ for all $$j=1,\dots ,N$$.

The operators $$H_0,K_0$$ are (usually) non-Fuchsian, since $$0={\text {ord}}h_0<{\text {ord}}H_0=n-1$$. However, the operators $$H=H_0+L$$ and $$K=K_0+M$$ are Fuchsian, satisfy the identity $$MH=M(H_0+L)=K_0L+ML=(K_0+M)L=KL$$ and the nondegeneracy condition $$\gcd _0(L,H)=\gcd (p_0,p_0+1)=1$$ is satisfied. $$\square$$

### 4.4 Integrable Normal Form

The polynomial normal form established in Proposition 3 lacks any integrability properties. Yet using the same method, one can construct a Liouville integrable $${\fancyscript{F}}$$-normal form for any Fuchsian operator.

### Proposition 4

A Fuchsian operator $$L=p_0+tp_1+\cdots \in {\fancyscript{F}}$$ with the Eulerization $$p_0({\epsilon })=\prod _{i=1}^n({\epsilon }-\lambda _i)$$ is $${\fancyscript{F}}$$-equivalent to an operator $$M\in {\fancyscript{F}}$$ of the form

\begin{aligned} M=({\epsilon }-\lambda _1+r_{1})\cdots ({\epsilon }-\lambda _n+r_n),\quad r_{i}=r_i(t)\in {\mathbb C}[t],\; r_{i}(0)=0. \end{aligned}
(29)

In other words, $$M$$ is a (noncommutative) product of polynomial operators of order $$1$$. The degrees of the polynomials $$r_i(t)$$ are explicitly bounded, $$\deg _t r_i(t)\leqslant N$$, where $$N$$, as before, is the maximal order of resonance between roots of $$p_0$$.

### Remark 8

If $$\lambda _{i-1}=\lambda _i$$ is a multiple root of $$p_0$$, the polynomials $$r_{i-1}$$ and $$r_i$$ will still in general be different.

### Lemma 3

Any analytic Fuchsian operator $$L\in {\fancyscript{F}}$$ can be factorized as

\begin{aligned} L=({\epsilon }{-}\lambda _1+R_1)\cdots ({\epsilon }{-}\lambda _n+R_n), \quad R_i=R_i(t)\in {\fancyscript{O}}({\mathbb C}^1,0),\;R_i(0)=0, \end{aligned}
(30)

with analytic (rather than polynomial) functions $$R_1,\dots ,R_n$$.

### Proof of the Lemma

Consider an eigenfunction $$u(t)$$ of the monodromy operator associated with the equation $$Lu=0$$: the corresponding eigenvalue is nonzero, hence $$\Delta u=\mathrm e^{2\pi i\lambda }u$$ for some $$\lambda \in {\mathbb C}$$. Then $$u=t^{\lambda }v(t)$$, where $$v$$ is a meromorphic germ, and modulo replacing $$\lambda$$ by $$\lambda +j$$ for some $$j\in {\mathbb Z}$$, we may assume that $$v$$ is holomorphic and invertible, $$v\in {\fancyscript{O}}({\mathbb C},0)$$, $$v(0)\ne 0$$. Applying $${\epsilon }$$ to this function, we see that $${\epsilon }u=\lambda t^{\lambda }v+t^{\lambda }({\epsilon }v)=\lambda u-Ru$$, $$R=-\frac{{\epsilon }v}{v}\in {\fancyscript{O}}({\mathbb C},0)$$; in other words, $$u$$ satisfies a first order Fuchsian equation, and $$L$$ is divisible from the right by $${\epsilon }-\lambda +R(t)$$. The quotient is again a Fuchsian operator of order $$n-1$$, and the process can be continued by induction. $$\square$$

### Proof of the Proposition 4

Consider the factorization (30) of the operator $$L$$ as provided by Lemma 3, and replace each analytic function $$R_i$$ by its polynomial truncation $$r_i$$ to order $$N$$, so that $${\text {ord}}_{t=0} (r_i-R_i)>N$$. The (polynomial) operator $$M$$ thus obtained has the same $$N$$-jet with respect to $$t$$ as the initial operator $$L$$. By Proposition 3, $$M$$ is $${\fancyscript{F}}$$-equivalent to $$L$$. $$\square$$

The normal form established by Proposition 4 has an advantage of being Liouville integrable. Each linear equation of the first order is explicitly solvable “in quadratures”. In particular, the homogeneous equation

\begin{aligned} Lu=0,\quad L={\epsilon }-\lambda +r(t),\qquad r\in {\mathbb C}[t],\ r(0)=0, \end{aligned}

has a 1-dimensional space of solutions $$u(t)=Ct^\lambda \exp \rho (t)$$, where $$\rho (t)=-\int \frac{r(t)}{t} \,\mathrm dt$$ is a polynomial in $$t$$.
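This first order solution formula is easy to verify symbolically; below is a sympy sketch with arbitrary sample values $$\lambda =3/2$$ and $$r(t)=2t+t^2$$ (illustration only).

```python
import sympy as sp

t = sp.symbols('t', positive=True)

def eps(f):                        # Euler derivation  eps = t*d/dt
    return t * sp.diff(f, t)

# Sample first order Fuchsian factor (illustration only):
lam = sp.Rational(3, 2)
r = 2*t + t**2                     # polynomial with r(0) = 0

# rho = -int r/t dt is again a polynomial, since r(0) = 0 ...
rho = -sp.integrate(r / t, t)
assert rho.is_polynomial(t)

# ... and u = t^lam * exp(rho) solves (eps - lam + r) u = 0.
u = t**lam * sp.exp(rho)
assert sp.simplify(eps(u) - lam*u + r*u) == 0
```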

To solve the nonhomogeneous equations, the method of variation of constants can be used to produce a particular solution using the operations of integration (computation of the primitive), exponentiation and the field operations in the field $${\mathbb C}(t)$$ of rational functions (the details are left to the reader). Iterating this computation, one can find the general solution of the equation $$Mu=0$$ with a completely reduced operator $$M$$ as in (29): if $$M=L_1L_2\ldots L_n$$, $${\text {ord}}L_i=1$$, then solving the equation $$Mu=0$$ amounts to solving a chain of equations of order $$1$$,

\begin{aligned} L_1 u_1=0,\,L_2 u_2=u_1,\ldots , L_nu_n=u_{n-1},\quad u=u_n. \end{aligned}
(31)
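For a concrete illustration of the chain (31), one may take $$M=({\epsilon }-2)({\epsilon }-1)$$ (a hypothetical example chosen here, not from the text) and solve the two first order equations in turn:

```python
import sympy as sp

t = sp.symbols('t')
eps = lambda f: sp.expand(t*sp.diff(f, t))   # Euler derivative ε = t d/dt
L1 = lambda f: eps(f) - 2*f                  # L1 = ε - 2
L2 = lambda f: eps(f) - f                    # L2 = ε - 1

# chain (31) for M = L1 L2: first L1 u1 = 0, then the nonhomogeneous L2 u2 = u1
u1 = t**2                                    # spans the kernel of ε - 2
u2 = t**2                                    # indeed (ε - 1) t² = 2t² - t² = t² = u1
assert L1(u1) == 0 and L2(u2) == u1
print(sp.expand(L1(L2(u2))))                 # 0: u = u2 solves M u = 0
```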

### Corollary 1

Any Fuchsian operator is $$\hat{\fancyscript{F}}$$-equivalent to a Liouville integrable operator. $$\square$$

### 4.5 Non-Eulerizability of Resonant Fuchsian Equations

The explicit integrability of the factorized equations allows one to show that resonant Fuchsian equations, “as a rule”, are not even $${\fancyscript{W}}$$-equivalent to their Euler part.

### Example 2

Consider the Fuchsian operator

\begin{aligned} L=({\epsilon }+t)({\epsilon }-1)={{\fancyscript{E}}}(L)+t({\epsilon }-1)\in {\fancyscript{F}},\quad {{\fancyscript{E}}}(L)={\epsilon }({\epsilon }-1). \end{aligned}

The Euler part of $$L$$ has simple integer roots, hence trivial monodromy. On the other hand, the equation $$Lu=0$$ can be explicitly solved. One solution, $$u_1(t)=t$$, satisfying the equation $$({\epsilon }-1)u=0$$, is obvious. The equation $$({\epsilon }+t)v=0$$ has the solution $$v(t)={\mathrm e}^{-t}$$, and another solution $$u_2(t)$$ of the linear non-homogeneous equation $$({\epsilon }-1)u(t)={\mathrm e}^{-t}$$ can be found by the method of variation of constants, $$u_2(t)=t\int {\mathrm e}^{-t}t^{-2}\,{\mathrm d}t$$. The monodromy transformation of the pair of solutions $$(u_1,u_2)$$ is given by the non-identical matrix $$\begin{pmatrix} 1 & 2\pi \mathrm i \\ 0 & 1 \end{pmatrix}$$. This means that the full operator is not even $${\fancyscript{W}}$$-equivalent to its Euler part.
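The computations of this example can be reproduced symbolically. The sketch below (sympy) checks that $$u_1,u_2$$ are annihilated by $$L$$ and computes the residue of $${\mathrm e}^{-t}t^{-2}$$ at the origin, which produces the logarithmic term and hence the off-diagonal entry $$\pm 2\pi \mathrm i$$ of the monodromy matrix (the sign depends on the chosen conventions):

```python
import sympy as sp

t = sp.symbols('t')
eps = lambda f: t*sp.diff(f, t)                 # Euler derivative ε = t d/dt
L = lambda f: eps(eps(f) - f) + t*(eps(f) - f)  # L = (ε + t)(ε - 1)

u1 = t                                          # solves (ε - 1)u = 0
u2 = t*sp.Integral(sp.exp(-t)/t**2, t)          # variation of constants
print(sp.simplify(L(u1)), sp.simplify(L(u2)))   # 0 0

# residue of e^{-t} t^{-2} at t = 0: its nonzero value is responsible
# for the logarithmic term in u2, i.e. for the nontrivial monodromy
print(sp.residue(sp.exp(-t)/t**2, t, 0))        # -1
```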

## 5 Minimal Normal Form

The polynomial normal forms established in the preceding section are of rather limited interest: indeed, no attempt was made to modify the lower order terms of the Taylor expansion of the resonant Fuchsian operators.

The system of Eq. (28) can be solved recursively with respect to $$h_j,k_j$$ even in the resonant case $$\gcd (p_0,p_0^{\scriptscriptstyle [j]})\ne 1$$, provided that $$q_j$$ are chosen in a suitable way: the difference $$v_j-q_jh_0$$ should belong to the image of the Sylvester map $$\varvec{S}_j=\varvec{S}_{p_0,p_0^{\scriptscriptstyle [j]}}$$, cf. with (23). This image consists of all polynomials of degree $$\leqslant 2n-1$$ divisible by $$w_j=\gcd (p_0,p_0^{\scriptscriptstyle [j]})\in {\mathbb C}[{\epsilon }]$$. In this section we describe possible choices for the terms $$q_j$$.

### 5.1 Abstract Normal Forms

Denote by $$\mathfrak P={\mathbb C}[{\epsilon }]/\left<p_0\right>\simeq {\mathbb C}_{n-1}[{\epsilon }]$$ the quotient algebra: as a $${\mathbb C}$$-space it is $$n$$-dimensional and can be identified with the residues modulo $$p_0$$, polynomials of degree $$\leqslant n-1$$.

In the simplest case, where all $$n$$ roots $$\lambda _1,\dots ,\lambda _n\in {\mathbb C}$$ of $$p_0$$ are simple, this quotient algebra can be identified with the $${\mathbb C}$$-algebra of functions on the $$n$$ points $$\varLambda =\{\lambda _1,\dots ,\lambda _n\}\subseteq {\mathbb C}$$, that is, $$\mathfrak P=\{\varphi :\varLambda \rightarrow {\mathbb C}\}\simeq {\mathbb C}\times \cdots \times {\mathbb C}$$: any such function can be represented as the restriction of a polynomial $$h\in {\mathbb C}_{n-1}[{\epsilon }]$$ of degree $$\leqslant n-1$$, $$h|_\varLambda =\varphi$$. The functions $$\varphi _i$$ equal to $$1$$ at one point $$\lambda _i\in \varLambda$$ and vanishing at all other points $$\lambda _k\ne \lambda _i$$ form a natural basis of $$\mathfrak P$$.
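The basis $$\{\varphi _i\}$$ is the classical Lagrange interpolation basis. A short sympy sketch with sample roots (an arbitrary choice) verifies that the $$\varphi _i$$ are orthogonal idempotents modulo $$p_0$$:

```python
import sympy as sp

e = sp.symbols('epsilon')
lams = [0, 1, 3]                                   # sample simple roots (arbitrary)
p0 = sp.expand(sp.prod([e - l for l in lams]))

# Lagrange basis of 𝔓: φ_i equals 1 at λ_i and 0 at the other roots
phi = [sp.prod([(e - m)/(l - m) for m in lams if m != l]) for l in lams]
for i in range(3):
    for k in range(3):
        assert phi[i].subs(e, lams[k]) == (1 if i == k else 0)

# orthogonal idempotents modulo p0: φ_i φ_k ≡ δ_{ik} φ_i (mod p0)
print(sp.rem(sp.expand(phi[0]*phi[0] - phi[0]), p0, e))  # 0
print(sp.rem(sp.expand(phi[0]*phi[1]), p0, e))           # 0
```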

### Remark 9

In the general case where the roots $$\lambda _i$$ may have nontrivial multiplicities $$\mu _i\in {\mathbb N}$$,

\begin{aligned} p_0({\epsilon })=\prod _i ({\epsilon }-\lambda _i)^{\mu _i},\quad \sum _i \mu _i=\deg p_0=n, \end{aligned}

the quotient algebra $$\mathfrak P$$ is naturally isomorphic to the direct sum of the local algebras $$J_i\simeq {\mathbb C}[{\epsilon }]/({\epsilon }-\lambda _i)^{\mu _i}$$ of dimension $$\mu _i$$: each element of $$\mathfrak P=\bigoplus _i J_i$$ can be identified with a multijet, a collection of ($$\mu _i-1$$)-jets (Taylor polynomials of order $$\mu _i-1$$) at the points $$\lambda _i\in \varLambda \subset {\mathbb C}$$.

For any polynomial $$s\in {\mathbb C}[{\epsilon }]$$ the multiplication by $$s$$ is an endomorphism of the algebra $$\mathfrak P$$. It is invertible (automorphism of $$\mathfrak P$$) if and only if $$\gcd (p_0,s)=1$$.
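The invertibility criterion can be illustrated by writing out the matrix of the multiplication endomorphism in the monomial basis of $$\mathfrak P$$; the sample polynomial below is an arbitrary choice:

```python
import sympy as sp

e = sp.symbols('epsilon')
p0 = sp.expand(e*(e - 1)*(e - 3))   # sample p0 with simple roots 0, 1, 3 (arbitrary)
n = 3

def mult_matrix(s):
    """Matrix of the endomorphism h ↦ s·h of C[ε]/<p0> in the basis 1, ε, ε²."""
    cols = []
    for k in range(n):
        r = sp.expand(sp.rem(sp.expand(s*e**k), p0, e))
        cols.append([r.coeff(e, i) for i in range(n)])
    return sp.Matrix(cols).T

print(mult_matrix(e - 2).det())   # 2 = (0-2)(1-2)(3-2): gcd(p0, ε-2) = 1, invertible
print(mult_matrix(e - 1).det())   # 0: ε-1 divides p0, the endomorphism is singular
```

The determinant equals the product of the values of $$s$$ at the roots of $$p_0$$, which makes the criterion $$\gcd (p_0,s)=1$$ visible directly.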

Equations (28) induce equations in the algebra $$\mathfrak P$$:

\begin{aligned} p_0^{\scriptscriptstyle [j]} h_j+q_jh_0=v_j,\quad j=1,2,\dots \end{aligned}
(32)

They can be re-written in the operator form as

\begin{aligned} \varvec{P}_j h_j+\varvec{H} q_j=v_j \end{aligned}
(33)

where $$\varvec{P}_j,\varvec{H}$$ are endomorphisms (self-maps) of $$\mathfrak P$$, induced by multiplication,

\begin{aligned} \varvec{P}_j:h\longmapsto p_0^{\scriptscriptstyle [j]}h, \quad \varvec{H}:q_j\longmapsto h_0 q_j. \end{aligned}
(34)

These endomorphisms commute with each other, and $$\varvec{H}$$ is invertible.

### Definition 6

An affine normal form for the polynomial $$p_0$$ is a family of subspaces $$V_j\subseteq \mathfrak P$$ (not necessarily subalgebras) such that $$V_j$$ is complementary to the image of $$\varvec{P}_j$$,

\begin{aligned} \varvec{P}_j\mathfrak P+V_j=\mathfrak P,\quad j=1,2,\dots . \end{aligned}
(35)

The affine normal form is minimal, if $$\dim V_j=\dim {\text {Ker}}\varvec{P}_j$$.

Without loss of generality we may assume that $$V_j=0$$ for all sufficiently large values of $$j$$ (for minimal normal forms this condition is automatically satisfied).

Note that the choice of an affine normal form is by no means unique; moreover, being a normal form is an open property (a small perturbation of the subspaces $$V_j$$ does not violate the property (35)).

These definitions are tailored to make the following statement trivial.

### Theorem 6

Let $$\{V_j\}$$ be an abstract affine normal form for the polynomial $$p_0\in {\mathbb C}_n[{\epsilon }]$$.

Then any Fuchsian operator $$L=p_0({\epsilon })+tp_1({\epsilon })+\cdots$$ is $${\fancyscript{F}}$$-equivalent to an operator $$M=p_0+\sum _{j=1}^Nt^jq_j({\epsilon })$$ with $$q_j\in V_j$$.

### Proof

By invertibility of $$\varvec{H}$$, we have $$\varvec{H}^{-1}\mathfrak P=\mathfrak P=\varvec{H}\mathfrak P$$. By (35), $$\varvec{P}_j \varvec{H}^{-1}\mathfrak P+V_j=\mathfrak P$$. Applying to both parts of the latter equality the operator $$\varvec{H}$$ and using the commutativity, we see that

\begin{aligned} \varvec{P}_j\mathfrak P+\varvec{H}V_j=\mathfrak P, \end{aligned}

that is, each homological Eq. (33), regardless of the right hand side $$v_j$$, admits a solution $$h_j\in \mathfrak P$$, $$q_j\in V_j$$. This solution generates (by definition of $$\mathfrak P$$) a solution $$(h_j,k_j)$$ of (28). $$\square$$

### 5.2 Minimal Affine Normal Form

One possibility for choosing an affine normal form is to stick to the polynomials of minimal degree modulo $$p_0$$. Denote by $$w_j\in {\mathbb C}[{\epsilon }]$$ the greatest common divisor $$w_j=\gcd (p_0,p_0^{\scriptscriptstyle [j]})$$; this is a polynomial of degree $$\nu _j\leqslant n-1$$.

### Proposition 5

Any Fuchsian operator $$L=p_0({\epsilon })+tp_1({\epsilon })+\cdots$$ is $${\fancyscript{F}}$$-equivalent to a polynomial operator of the form $$M=p_0+\sum _j t^j q_j({\epsilon })$$ with $$\deg q_j\leqslant \nu _j-1$$. In particular, $$q_j=0$$ for all nonresonant orders.

### Proof

It suffices to note that the subspace $$V_j\simeq {\mathbb C}_{\nu _j-1}[{\epsilon }]\subseteq {\mathbb C}_{n-1}[{\epsilon }]\simeq \mathfrak P$$ is naturally complementary to the image $$\varvec{P}_j\mathfrak P$$ which consists of all polynomials of degree $$\leqslant n-1$$, divisible by $$w_j$$. This follows from the division with remainder by $$w_j$$ in $$\mathfrak P\simeq {\mathbb C}_{n-1}[{\epsilon }]$$. $$\square$$

Note that the family of the subspaces $$V_j\simeq {\mathbb C}_{\nu _j-1}[{\epsilon }]$$ is a minimal normal form.

### Example 3

Assume that the operator $$L$$ has a single resonance, i.e., only one pair of roots of $$p_0$$ differs by an integer $$k$$. Then the operator $$L$$ is $${\fancyscript{F}}$$-equivalent to $$p_0({\epsilon })+ct^k$$, $$c\in {\mathbb C}$$.

### 5.3 Separation of Resonances

A different strategy for choosing the subspaces $$\{V_j\}$$ constituting a normal form is to reproduce the strategy which results in the Poincaré–Dulac normal form for Fuchsian systems with diagonal residue matrix $$A\in {\text {Mat}}(n,{\mathbb C})$$. Recall that in this case, instead of solving the homological Eq. (28), one has to solve matrix equations of the form $$[A, H]+jH=B_j$$, where $$B_j$$ are given matrices from $${\text {Mat}}(n,{\mathbb C})$$, cf. with Ilyashenko and Yakovenko (2008, Theorem 16.15). The operator taking a matrix $$H$$ into the twisted commutator as above is diagonal in the natural basis of matrices having only one nonzero entry, and the kernel of this operator is naturally complementary to its image.

An analogous construction can be applied in the case of the operators $$\varvec{P}_j$$ if the polynomial $$p_0={{\fancyscript{E}}}(L)$$ has simple roots. Then multiplication by any polynomial, including $$w_j$$, is diagonal, hence one can choose $$V_j={\text {Ker}}\varvec{P}_j$$. The polynomials $$q_j$$ which appear in the corresponding normal form vanish at all roots of $$p_0/w_j$$, hence are divisible by the latter polynomial (recall that we consider polynomials of degree $$\leqslant \deg p_0-1$$). In particular, if a certain root $$\lambda _i$$ of $$p_0$$ does not appear in any resonance, then all polynomials $$q_j$$ in the normal form will be divisible by $${\epsilon }-\lambda _i$$, and therefore the operator $$M$$ in the normal form established in Theorem 6 will be divisible (from the right) by the first order Euler operator $${\epsilon }-\lambda _i\in {\fancyscript{F}}$$.

This claim gives a partial effective factorization of the normal form (29), which allows one to identify the factors with $$r_i=0$$. In the next section we explain how one can give an accurate description of the factors in (29) in general.

### 5.4 Completely Reducible Minimal Normal Form

Occurrence of resonances between the roots $$\varLambda =\{\lambda _1,\dots ,\lambda _n\}\subset {\mathbb C}$$ of the polynomial $$p_0\in {\mathbb C}_n[{\epsilon }]$$ allows one to introduce certain combinatorial structures. First, the (natural linear) order on $${\mathbb Z}$$ induces a partial order on the roots: $$\lambda _i\geqslant \lambda _k \iff \lambda _i-\lambda _k\in {\mathbb Z}_+$$.

### Remark 10

If the Euler part has multiple roots, then the set $$\varLambda$$ contains repetitions. To simplify the subsequent arguments, it is convenient to extend the partial order to a full order as follows. The roots of $$p_0$$ are subdivided into resonant groups in such a way that inside each group all roots have integer differences (and hence are comparable in the sense of the partial order). Different resonant groups can be arranged between themselves in any way. The corresponding order is conveniently represented by the enumeration of the set of all roots $$\varLambda =\{\lambda _i\}$$ in non-decreasing order. This makes $$\varLambda$$ into an ordered set naturally isomorphic to $$\{1,2,\dots ,n\}$$: multiple roots of $$p_0$$ occupy consecutive positions in this list. We call this order a natural order on $$\varLambda$$ (it is not unique, since different resonant groups can be transposed, but it is convenient for the formulations).

Second, for each order we can list all roots which produce resonances of this order. Given a natural index $$j\in {\mathbb N}$$, we define

\begin{aligned} \varLambda _j=\{\lambda \in \varLambda :\lambda +j\in \varLambda \}\subset \varLambda ,\quad j=1,2,\dots . \end{aligned}
(36)

This definition, unambiguous in the case where $$p_0$$ has only simple roots, should be modified as follows: if $$\mu +j=\varkappa$$ are two roots in resonance and the multiplicities of $$\mu ,\varkappa$$ in $$\varLambda$$ (the list which now may have repetitions) are $$m,k$$ respectively, then $$\mu$$ enters $$\varLambda _j$$ with the multiplicity equal to $$\min (m,k)$$, that is, with its multiplicity as the root of the polynomial $$w_j=\gcd (p_0,p_0^{\scriptscriptstyle [j]})$$.
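A small sympy sketch illustrates this recipe, assuming the convention $$p_0^{\scriptscriptstyle [j]}({\epsilon })=p_0({\epsilon }+j)$$ and an arbitrary sample polynomial with a multiple root:

```python
import sympy as sp

e, j = sp.symbols('epsilon'), 2
p0 = sp.expand(e**2*(e - 2)*(e - 5))   # roots Λ = {0, 0, 2, 5} (arbitrary sample)
p0j = sp.expand(p0.subs(e, e + j))     # shifted polynomial p0(ε + j) (assumed convention)

w = sp.gcd(p0, p0j)                    # w_j = gcd(p0, p0^{[j]})
print(sp.factor(w))   # ε: Λ_2 = {0}, since 0 + 2 = 2 ∈ Λ, with multiplicity min(2, 1) = 1
```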

Together with the sets $$\varLambda _j\subset \varLambda$$ it is convenient to consider also their fully ordered counterparts (cf. with Remark 10), the sets of the corresponding indices $$I_j\subset \{1,2,\dots ,n\}$$. In the case where $$\lambda$$ is a root of $$p_0$$ with multiplicity $$m>1$$ and of $$p_0^{\scriptscriptstyle [j]}$$ with multiplicity $$k<m$$, we include in $$I_j$$ the last $$k$$ instances where $$\lambda$$ enters $$\varLambda$$ (out of the total $$m$$).

The dual description can be given by the sets $$J(\lambda )$$, which for any root $$\lambda \in \varLambda$$ consists of the natural numbers $$j\in {\mathbb N}$$ such that $$\lambda +j\in \varLambda$$. The case of multiple roots needs no special treatment.

Recall that the support (or Newton diagram) of a polynomial $$r=\sum c_k t^k\in {\mathbb C}[t]$$ is the set of indices $$k\in {\mathbb N}$$ such that the corresponding coefficient $$c_k$$ is nonzero: $${\text {supp}}r=\{k:c_k\ne 0\}\subset {\mathbb N}$$.

### Theorem 7

Any Fuchsian operator is $${\fancyscript{F}}$$-equivalent to a completely reducible operator of the form

\begin{aligned} L&=({\epsilon }-\lambda _1+r_1(t))\cdots ({\epsilon }-\lambda _n+r_n(t)),\nonumber \\ r_i&\in {\mathbb C}[t],\quad {\text {supp}}r_i\subseteq J(\lambda _i),\quad i=1,\dots ,n. \end{aligned}
(37)

In particular, $$\deg r_i\leqslant \max \{j\in {\mathbb N}:\ \lambda _i+j\in \varLambda \}$$.

The rest of this section contains the proof of this theorem.

### 5.5 Expansion of Noncommutative Products

From this moment on we assume that the roots $$\lambda _i$$ are labeled in a natural order, see Remark 10.

Consider the operators $$E_{ij}\in {\fancyscript{F}}$$ of the form (37) in the case where only one polynomial $$r_i$$ is different from zero and is itself a monomial of degree $$j$$:

\begin{aligned} E_{ij}=({\epsilon }-\lambda _1)\cdots ({\epsilon }-\lambda _{i-1})({\epsilon }-\lambda _i+t^j)({\epsilon }-\lambda _{i+1})\cdots ({\epsilon }-\lambda _n),\quad i=1,\dots ,n. \end{aligned}

After complete expansion of $$E_{ij}$$ we obtain

\begin{aligned} E_{ij}&=p_0({\epsilon })+t^j p_{ij}({\epsilon }),\quad p_{ij}\in {\mathbb C}[{\epsilon }],\quad i=1,\dots ,n, \nonumber \\ p_{ij}&=({\epsilon }-\lambda _1+j)\cdots ({\epsilon }-\lambda _{i-1}+j)({\epsilon }-\lambda _{i+1})\cdots ({\epsilon }-\lambda _n). \end{aligned}
(38)

In other words, $$p_{ij}$$ is obtained by shifting the argument by $$j$$ in the first $$i-1$$ terms of the ordered factorization of $$p_0$$, while keeping the last terms the same as in $$p_0$$. Accordingly, the roots of $$p_{ij}$$ are obtained by shifting the first $$i-1$$ roots of $$\varLambda$$ to the left by $$j$$ units, removing the $$i$$th root from the list and keeping the remaining (larger) roots in place. Speaking informally, $$p_{ij}$$ has a gap at the $$i$$th place in the (partially) ordered set $$\varLambda$$.
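The expansion (38) can be verified by applying the noncommutative product to monomials $$t^m$$, on which $${\epsilon }$$ acts as multiplication by $$m$$. The sketch below (sympy; the roots and the choice of $$i,j$$ are arbitrary samples) checks that $$E_{ij}t^m=p_0(m)t^m+p_{ij}(m)t^{m+j}$$:

```python
import sympy as sp

t = sp.symbols('t')
lams, i, j = [0, 1, 3], 2, 2        # sample roots; perturb the i-th factor by t^j (1-based i)
eps = lambda g: sp.expand(t*sp.diff(g, t))   # Euler derivative

ops = [lambda g, l=l: eps(g) - l*g for l in lams]
ops[i - 1] = lambda g, l=lams[i - 1]: eps(g) - l*g + t**j*g

def E(g):
    # apply (ε-λ1)(ε-λ2+t^j)(ε-λ3) to g, rightmost factor first
    for op in reversed(ops):
        g = op(g)
    return sp.expand(g)

p0  = lambda x: (x - 0)*(x - 1)*(x - 3)
pij = lambda x: (x + j - lams[0])*(x - lams[2])   # formula (38) for i = 2

for m in range(5):                  # E_ij t^m = p0(m) t^m + p_ij(m) t^{m+j}
    assert sp.expand(E(t**m) - p0(m)*t**m - pij(m)*t**(m + j)) == 0
print("expansion (38) verified on monomials")
```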

### Lemma 4

For each $$j\geqslant 1$$ the polynomials $$p_{1j},\dots ,p_{nj}\in {\mathbb C}_{n-1}[{\epsilon }]$$ are linearly independent over $${\mathbb C}$$.

### Proof

A vanishing $${\mathbb C}$$-linear combination of the polynomials $$p_{1j},\dots ,p_{nj}$$ would, after division by $$p_0$$, result in a vanishing $${\mathbb C}$$-linear combination of the corresponding rational functions. However, this is impossible, since the first fraction has a pole at $$\lambda _1$$, and in a similar way $$\frac{p_{ij}}{p_0}$$ has either a pole at $$\lambda _i$$, or (if $$\lambda _i=\lambda _{i-1}$$ was a multiple root of $$p_0$$) a pole whose order increases compared with the previous fraction. Since the roots were ordered, these new poles appear at the points where all previous ratios were holomorphic, which means that no vanishing linear combination can arise in the process. $$\square$$

### Corollary 2

For any $$j$$ the polynomials $$p_{1j},\dots ,p_{nj}$$ span $${\mathbb C}_{n-1}[{\epsilon }]$$.

### Proof

Since these polynomials are linearly independent, they span an $$n$$-dimensional $${\mathbb C}$$-subspace in $${\mathbb C}_{n-1}[{\epsilon }]$$ which for reasons of dimension must coincide with $${\mathbb C}_{n-1}[{\epsilon }]$$. $$\square$$
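Both the independence (Lemma 4) and the spanning property can be confirmed numerically in a sample case by computing the rank of the coefficient matrix of the $$p_{ij}$$ (the roots below are an arbitrary choice):

```python
import sympy as sp

e = sp.symbols('epsilon')
lams, j = [0, 1, 3], 1        # sample roots of p0 and a sample order j (arbitrary)
n = len(lams)

def p_ij(i):
    # formula (38): shift the first i-1 factors by j, drop the i-th, keep the rest
    fac = [e - l + j for l in lams[:i - 1]] + [e - l for l in lams[i:]]
    return sp.expand(sp.prod(fac))

M = sp.Matrix([[p_ij(i).coeff(e, k) for k in range(n)] for i in range(1, n + 1)])
print(M.rank())   # 3 = n: p_{1j}, p_{2j}, p_{3j} are linearly independent and span C_2[ε]
```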

A minor modification of this argument proves a similar statement.

### Lemma 5

Let $$j\in {\mathbb N}$$ be a natural number and $$w_j=\gcd (p_0,p_0^{\scriptscriptstyle [j]})$$. Then the polynomials $$p_{ij}$$ for $$i\in I_j$$ are linearly independent modulo $$w_j$$.

Note that the polynomials $$p_{ij}$$ and $$p_{i+1,j}$$ in general are different even if $$\lambda _i=\lambda _{i+1}$$.

### Proof

Arguing as before, consider the rational fractions $$\frac{p_{ij}}{w_j}$$. Since the roots of $$w_j$$ constitute only a proper subset of $$\varLambda$$, not all of these fractions acquire either a new pole at $$\lambda _i$$ or a pole of larger order. On the other hand, if $$\lambda _i\in \varLambda _j$$, then one or more (depending on multiplicity) of the larger roots, when shifted by $$j$$, coincide with $$\lambda _i$$ and hence create a pole of $$\frac{1}{w_j}$$ of the corresponding order. In the case where $$\lambda _i$$ is a multiple root, we have to consider the fractions $$\frac{p_{ij}}{w_j}$$ for $$i\in I_j$$.

This means that in the ordered subsequence $$\frac{p_{ij}}{w_j}$$, $$i\in I_j$$, the behavior of the poles will be as before (either a new pole appears or the order of the previous pole is increased). In both cases the linear dependence is impossible. $$\square$$

### Corollary 3

The linear span $$V_j$$ of the polynomials $$\{p_{ij}:i\in I_j\}\subset {\mathbb C}_{n-1}[{\epsilon }]\simeq {\mathbb C}[{\epsilon }]\mod p_0$$ is a linear subspace transversal to the image of the operator $${\varvec{P}_j}$$ from (34), and hence these subspaces form a minimal abstract normal form in the sense of Definition 6.

### Proof

This follows from the linear independence above and the fact that the number of these polynomials is equal to the codimension of the image (which consists of polynomials of degree $$\leqslant n-1$$ divisible by $$w_j$$). $$\square$$

### 5.6 Proof of Theorem 7

Assume (by way of induction) that a Fuchsian operator $$L\in {\fancyscript{F}}$$ is already shown to be $${\fancyscript{F}}$$-equivalent to an operator $$L_{j-1}\in {\fancyscript{F}}$$ whose $$(j-1)$$-jet is as in (37), i.e.,

\begin{aligned} L_{j-1}&=({\epsilon }-\lambda _1+r_{1,j-1})\cdots ({\epsilon }-\lambda _n+r_{n,j-1})+t^jv_j({\epsilon })+\cdots , \\ r_{i,j-1}&\in {\mathbb C}[t],\quad {\text {supp}}r_{i,j-1}\subseteq J(\lambda _i)\cap [1,j-1],\quad i=1,\dots ,n. \end{aligned}

We will show that there exists an operator $$L_j$$ of the same form but with $${\text {supp}}r_{i,j}\subseteq J(\lambda _i)\cap [1,j]$$, which is $${\fancyscript{F}}$$-equivalent to $$L_{j-1}$$. Indeed, adding monomials of order $$j$$ to the polynomials $$r_{i,j-1}$$,

\begin{aligned} r_{i,j}=r_{i,j-1}+c_it^j, \quad c_i\ne 0\iff j\in J(\lambda _i),\quad i=1,\dots ,n \end{aligned}

will affect only terms of order $$j$$ and higher after the expansion: the (polynomial) coefficient $$v_j$$ will be replaced by $$v_j+\sum c_ip_{ij}$$ by definition (38) of the polynomials $$p_{ij}$$. By a suitable choice of the coefficients $$c_i$$ for $$i\in I_j$$, one can bring this sum into the range of the homological operator $$\varvec{P}_{j}$$, as follows from Corollary 3.

For this choice the homological Eq. (28) will be solvable with respect to $$h_j,k_j$$ by setting $$h_0=1$$, $$q_j=-\sum c_i p_{ij}$$. Continuing this way, we eventually reach the values of $$j$$ which exceed the maximal order $$N$$ of possible resonances. The corresponding operator $$L_N$$, by construction $${\fancyscript{F}}$$-equivalent to the initial operator $$L$$, is $${\fancyscript{F}}$$-equivalent to its product part $$\prod _{i=1}^n({\epsilon }-\lambda _i+r_{iN}(t))$$ with $${\text {supp}}r_{iN}\subseteq J(\lambda _i)$$ by Proposition 3. $$\square$$

### Remark 11

The same argument allows one to construct an effective factorization of any Fuchsian operator. Indeed, by Corollary 2, one can always construct a linear combination $$\sum _{i=1}^n c_ip_{ij}\in {\mathbb C}[{\epsilon }]$$ which cancels the term $$v_j$$. Proceeding this way, one can construct the formal factorization $$L=\prod _{i=1}^n ({\epsilon }-\lambda _i+\hat{r}_i(t))$$, $$\hat{r}_i\in {\mathbb C}[[t]]$$. One can show that in the Fuchsian case this factorization always converges.

### 5.7 Concluding Remarks

The minimality of the normal form (37) does not imply that coefficients of the first order factors $$r_1,\dots ,r_n\in {\mathbb C}[t]$$ are $${\fancyscript{F}}$$-invariant. Nevertheless, one can expect that for operators of sufficiently high order there will appear moduli (numeric invariants) of $${\fancyscript{F}}$$-classification: for holomorphic gauge classification of Fuchsian systems this was discovered by Kleptsyn and Rabinovich (2004).

## 6 Convergence of the Formal Series

Here we prove that the formal and analytic Fuchsian classifications for Fuchsian operators coincide.

More precisely, assume that two formal operators $$H,K\in \hat{\fancyscript{F}}$$, $$H=\sum _{k=0}^{n-1}u_k(t){\epsilon }^k$$, $$K=\sum _{k=0}^{n-1}v_k(t){\epsilon }^k$$ with formal coefficients $$u_k,v_k\in {\mathbb C}[[t]]$$ conjugate two Fuchsian operators $$L=\sum _{k=0}^n a_k(t){\epsilon }^k$$, $$M=\sum _{k=0}^n b_k(t){\epsilon }^k$$ with analytic coefficients $$a_k, b_k\in {\fancyscript{O}}({\mathbb C},0)$$, $$a_n(0)b_n(0)\ne 0$$.

### Theorem 8

The formal series for the coefficients $$u_k,v_k\in {\mathbb C}[[t]]$$ necessarily converge, hence $$H,K\in {\fancyscript{F}}$$.

### Proof

One possibility of proving this result is to control explicitly the growth rate of the Taylor coefficients. However, a simpler strategy is to use the fact that a (vector) formal Taylor series which solves a holomorphic Fuchsian system of equations is necessarily convergent.

The conjugacy equation $$MH=KL$$ takes the form of a noncommutative identity

\begin{aligned} \biggl (\sum _{k=0}^n b_k(t){\epsilon }^k\biggr )\biggl (\sum _{k=0}^{n-1}u_k(t){\epsilon }^k\biggr )= \biggl (\sum _{k=0}^{n-1}v_k(t){\epsilon }^k\biggr )\biggl (\sum _{k=0}^n a_k(t){\epsilon }^k\biggr ). \end{aligned}
(39)

We claim that this identity implies that the coefficients $$u_k(t)$$ of the operator $$H$$, after passing to a companion form (20), together satisfy a Fuchsian system of linear ordinary differential equations. This follows from direct inspection of the way the highest order derivatives of $$u_k$$ enter the expressions in (39).

The identity (39), using the commutation relationship in the Weyl algebra

\begin{aligned} {\epsilon }f=f{\epsilon }+ g,\quad g={\epsilon }(f)\in {\fancyscript{O}}({\mathbb C},0)\text { the Euler derivative of }f, \end{aligned}
(40)

can be rewritten as equality of two differential operators

\begin{aligned} \sum _{j=0}^{2n-1} l_j{\epsilon }^j=\sum _{j=0}^{2n-1}r_j{\epsilon }^j \end{aligned}

of order $$2n-1$$, implying the identical coincidence of their coefficients, $$l_j=r_j$$. Thus we have a system of $$2n$$ linear ordinary differential equations of order $$n$$ involving $$2n$$ unknown functions $$u_k,v_k$$ and their derivatives. We will show that this system can be reduced to a Fuchsian system of $$n^2$$ differential equations of order $$1$$.
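The commutation relation (40) underlying this rewriting is just the Leibniz rule for the Euler derivation; it can be checked symbolically on a sample germ $$f$$ (an arbitrary choice) and a generic test function:

```python
import sympy as sp

t = sp.symbols('t')
f = t**2 + t                       # sample holomorphic germ (arbitrary choice)
g = sp.Function('g')(t)            # generic test function
eps = lambda h: t*sp.diff(h, t)    # Euler derivative ε = t d/dt

lhs = eps(f*g)                     # (ε ∘ f) acting on g
rhs = f*eps(g) + eps(f)*g          # (f ∘ ε + ε(f)) acting on g, as in (40)
print(sp.simplify(lhs - rhs))      # 0
```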

One can instantly verify that these equations have the following structure.

1. All expressions for $$l_j$$ are linear with respect to the functions $$u_k=u_{k,0}$$ and their iterated Euler derivatives $$u_{ki}={\epsilon }u_{k,i-1}$$ of orders $$1\leqslant i\leqslant n$$ with holomorphic coefficients.

2. All expressions for $$r_j$$ are linear with respect to the functions $$v_k$$ with holomorphic coefficients.

It is rather easy to control the coefficients with which the highest order derivatives $$u_{kn}$$ and $$v_k$$ enter these equations.

The coefficients with which the variables $$v_k$$ enter the linear forms $$r_j$$ form an “upper triangular” $$n\times 2n$$-matrix with the same invertible diagonal entry $$a_n$$: the highest-numbered forms $$r_{2n-1},\dots ,r_{2n-k}$$ depend only on the variables $$v_{n-1},\dots ,v_{n-k}$$, and the variable $$v_{n-k}$$ enters with the coefficient $$a_n$$ for all $$k=1,\dots ,n$$.

The coefficients with which the highest order derivatives $$u_{kn}$$ enter the linear forms $$l_j$$, are zero for $$l_{2n-1},\dots , l_{n}$$ and form a “diagonal” $$n\times 2n$$-matrix with the same invertible diagonal entry $$b_n$$ in the forms $$l_{n-1},\dots ,l_0$$. Indeed, the formulas (40) imply that a highest order derivative $$u_{kn}$$ can appear only after iterated transposition with the term $$b_n{\epsilon }^n$$ and only before the powers of the type $${\epsilon }^{j-n}$$.

Together these two observations imply that the system of the linear equations $$l_j=r_j$$, $$j=2n-1,\dots ,1,0$$ can be resolved with respect to the variables $$u_{kn},v_k$$, in particular,

\begin{aligned} u_{kn}(t)=\sum _{i=0}^{n-1}\sum _{j=0}^{n-1}c_{knij}(t)u_{ij}(t), \quad c_{knij}\in {\fancyscript{O}}({\mathbb C},0),\;k=0,\dots ,n-1 \end{aligned}
(41)

(and of course similar expressions for the $$v_k$$).

This system of $$n$$ linear ordinary differential equations of order $$n$$ with respect to the functions $$u_k=u_k(t)$$ is explicitly resolved with respect to the highest order derivatives, hence is a Fuchsian system of $$n^2$$ first order equations in exactly the same way as in (20).

It remains only to refer to the well-known fact: any formal solution of a Fuchsian system converges, see Ilyashenko and Yakovenko (2008, Lemma 16.17 and Theorem 16.16). Thus any noncommutative series for an operator $$H$$ conjugating two Fuchsian operators $$L,M$$, converges. Convergence of the series for $$K$$ follows by the uniqueness of the right division of $$MH$$ by $$L$$. $$\square$$