1 Introduction

Consider a polynomial ordinary differential equation in \(\mathbb {C}^n\)

$$\begin{aligned} \dot{x}=f(x)=f^{(0)}(x)+f^{(1)}(x)+\cdots +f^{(m)}(x), \end{aligned}$$
(1)

with each \(f^{(i)}\) a homogeneous polynomial of degree i, \(0\le i\le m\), and \(f^{(m)}\ne 0\). By \(X_f\) we denote the vector field associated to f (also called the Lie derivative with respect to f). A polynomial \(\psi : \mathbb {C}^n\rightarrow \mathbb {C}\) is called a semi–invariant of f if \(\psi \) is nonconstant and

$$\begin{aligned} X_f(\psi )=\lambda \cdot \psi , \end{aligned}$$
(2)

for some polynomial \(\lambda \), called the cofactor of \(\psi \). As is well known, a polynomial is a semi-invariant of the vector field f if and only if its vanishing set is invariant for the flow of system (1). Given a degree bound, the problem of finding semi-invariants essentially reduces to solving a linear system of equations with parameters, thus a problem of linear algebra.
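For illustration, here is a minimal sympy sketch (the example system and the helper name cofactor are ad hoc choices, not taken from the paper): one tests a candidate \(\psi \) by computing \(X_f(\psi )\) and dividing by \(\psi \); the division is exact precisely when \(\psi \) is a semi-invariant, and the quotient is then the cofactor.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

def cofactor(f, psi, xs):
    """Return lambda with X_f(psi) = lambda * psi, or None if psi is no semi-invariant."""
    lie = sum(fi * sp.diff(psi, xi) for fi, xi in zip(f, xs))   # X_f(psi)
    q, r = sp.div(sp.expand(lie), sp.expand(psi), *xs)          # polynomial division
    return sp.simplify(q) if r == 0 else None

# illustrative planar system: x1' = x1*(1 + x2), x2' = -x2 + x1**2
f = [x1*(1 + x2), -x2 + x1**2]
print(cofactor(f, x1, [x1, x2]))        # x2 + 1: x1 is a semi-invariant
print(cofactor(f, x1 + x2, [x1, x2]))   # None: the line x1 + x2 = 0 is not invariant
```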

The existence problem for semi-invariants is relevant for several questions, notably Darboux integrability and existence of Jacobi multipliers. From the work of Żoła̧dek [33] it is known that generically no algebraic invariant sets exist for polynomial vector fields of a fixed degree; see also Coutinho and Pereira [14]. Even for dimension \(n=2\) the existence problem is hard when the vector field has dicritical stationary points. When there are no dicritical stationary points then work of Cerveau and Lins Neto [7] and Carnicer [4] provides degree bounds for irreducible semi-invariants; see Pereira [23] for a refinement and also note the work by Ferragut et al. [17]. For certain classes of planar polynomial vector fields it was shown in [32] by elementary arguments that an effective degree bound for irreducible semi-invariants exists, and strong restrictions were found for possible integrating factors. For higher dimensions Jouanolou [18] showed the existence of the general degree bound \(m+1\) for those semi-invariants of system (1) that define smooth hypersurfaces in projective space. Following Jouanolou’s work, the Poincaré problem is now generally being viewed in the framework of algebraic solutions for Pfaffian systems on projective varieties. It is known that no general degree bounds can exist for algebraic solutions, and that certain properties of singular points are relevant for establishing degree bounds in some classes. Considering dimension \(n\ge 3\), much work has been done in the last two decades to classify and characterize invariant surfaces; see for instance the survey [6] by Cerveau, and the work [8] by Cerveau et al. on local properties. Further notable contributions concerning invariant algebraic varieties are due to Brunella and Gustavo Mendes [3], Cavalier and Lehmann [5], Corrêa and da Silva Machado [11, 12], Corrêa and Jardim [10], Corrêa and Soares [13], Esteves [16], and Soares [29,30,31]. The recent survey by Corrêa [9] collects these and more results, and provides an overview.

The purpose of the present paper, which is based in part on the doctoral thesis [19] by one of the authors, is to generalize the results of [32] to higher dimensions, with a focus on dimension three. The emphasis is on establishing verifiable conditions that provide uniform degree bounds for irreducible semi-invariants. In contrast to the deep theoretical results used in most of the references mentioned above, our approach employs rather elementary methods. Moreover, we consider the affine rather than the projective case. Thus we begin by asking about the existence of irreducible, pairwise relatively prime semi-invariants

$$\begin{aligned} \phi _i=\phi _i^{(1)}+\cdots +\phi _i^{(r_i)}, \qquad 1\le i\le s, \end{aligned}$$
(3)

with \(\phi _i^{(k)}\) homogeneous polynomials of degree k, and \(\phi _i^{(r_i)}\ne 0.\) Our principal approach will be to consider stationary points at infinity of system (1).

The plan of the paper is as follows. In a preparatory Sect. 2 we collect mostly known facts about semi-invariants of polynomial and formal vector fields, and about Poincaré transforms, in order to discuss the behavior at infinity. Some of the statements are proven for the reader’s convenience. In Sect. 3 we consider a class of polynomial vector fields in \(\mathbb C^n\) which is characterized by certain properties of its stationary points at infinity. We derive degree bounds for collections of irreducible semi-invariants, given that either at least \(n-1\) pairwise relatively prime semi-invariants exist or that a degree bound for (and a bound for the number of) the irreducible homogeneous semi-invariants of the highest degree term \(f^{(m)}\) is known. We then proceed to discuss possible exponents and degree bounds for Jacobi multipliers that are algebraic over the rational function field \(\mathbb {C}(x_1,\ldots ,x_n)\). We close the section by stating some facts about reduction of dimension for homogeneous vector fields. In Sect. 4 we apply and specialize the results from the previous section to vector fields in \(\mathbb C^3\), obtaining rather strong results on degree bounds by combining our approach with earlier results by Jouanolou [18] and Carnicer [4]. Moreover we explicitly construct a class of quadratic vector fields for which the conditions on the stationary points at infinity are directly verifiable. This class, seen as a subset of the coefficient space (\(\cong \mathbb {C}^{18}\)), contains a Zariski open subset, and the vector fields not satisfying the conditions on stationary points at infinity form a measure zero subset. The Appendix contains some additional material on the construction and some proofs. It also contains the statement and proof of a result by Röhrl [26] (the original source contains an erroneous statement and proof), which we need as a basis for the construction of the quadratic vector fields in Sect. 4. Röhrl’s result, which generalizes to homogeneous polynomial maps of arbitrary positive degree the well-known fact that a generic linear map admits a basis of eigenvectors, seems to be of independent interest.

2 Preparations

In this section we review some known facts and introduce notions and tools to be used later on. For the reader’s convenience, some proofs of known facts will be included in the Appendix.

2.1 Semi-invariants and Some of Their Properties

In addition to polynomial semi-invariants of polynomial vector fields we will consider local analytic (or formal) semi-invariants of local analytic (or formal) vector fields. Thus, a formal vector field is given by a power series

$$\begin{aligned} g(x)=Bx+\sum _{i\ge 1} g^{(i)}(x) \qquad \text{ in } \ \mathbb {C}^n, \end{aligned}$$
(4)

with B linear and each \(g^{(i)}\) a homogeneous polynomial of degree i. A semi-invariant of g is defined as a non-invertible power series

$$\begin{aligned} \rho =\rho ^{(1)}+\rho ^{(2)}+\cdots \quad (\text{ thus }\ \rho (0)=0) \end{aligned}$$
(5)

satisfying \(X_g(\rho )=\mu \rho \) for some power series \(\mu .\) Similar to the polynomial case, an analytic function at 0 is a semi-invariant of the analytic vector field (4) if and only if its vanishing set is invariant for \(\dot{x}=g(x).\) We collect some general properties of semi-invariants.

Lemma 1

  1. (a)

    Let the polynomial vector field f as in (1) be given. Then the following hold.

    • From \(X_f(\psi _j)=\lambda _j\cdot \psi _j\), \(j=1,2\), the relations

      $$\begin{aligned} X_f(\psi _1\cdot \psi _2)=(\lambda _1+\lambda _2)\cdot \psi _1\cdot \psi _2\text { and }X_f(\psi _1/\psi _2)=(\lambda _1-\lambda _2)\cdot \psi _1/\psi _2 \end{aligned}$$

      follow.

    • If \(\psi _1,\ldots ,\psi _\ell \) are irreducible and pairwise relatively prime polynomials, \(m_1,\ldots ,m_\ell \) are nonzero integers and there is a polynomial \(\mu \) such that

      $$\begin{aligned} X_f(\psi _1^{m_1}\cdots \psi _\ell ^{m_\ell })=\mu \cdot \psi _1^{m_1}\cdots \psi _\ell ^{m_\ell } \end{aligned}$$

      then every \(\psi _j\) is a semi-invariant of f.

    • If \(\sigma \) is nonconstant and algebraic over the field \(\mathbb C(x_1,\ldots ,x_n)\) and satisfies \(X_f(\sigma )=\mu \cdot \sigma \) for some polynomial \(\mu \) then every nonzero coefficient \(\beta \) of its minimal polynomial satisfies \(X_f(\beta )=k\cdot \mu \cdot \beta \) with some positive integer k, hence also \(X_f(\beta ^{1/k})=\mu \cdot \beta ^{1/k}\).

  2. (b)

    Mutatis mutandis, the same statements hold for formal semi-invariants of formal vector fields.

For a proof of Lemma 1 see the Appendix, Sect. 1.

Given a semi-invariant \(\psi \) of the polynomial vector field f, and a stationary point z of f, one has either \(\psi (z)\ne 0\) or \(\psi \) is a local analytic (hence also a formal) semi-invariant of f at z. Thus one can use local information in the search for polynomial semi-invariants, based on the following result.

Lemma 2

The following statements hold.

  1. (a)

    Let \(\psi \) be an irreducible polynomial semi-invariant of (1), and \(\psi (0)=0\). Then every irreducible series in the factorization of \(\psi \) in the formal power series ring \(\mathbb C[[x_1,\ldots ,x_n]]\) has multiplicity one.

  2. (b)

    Let \(\psi _1\) and \(\psi _2\) be relatively prime polynomial semi-invariants of (1), and \(\psi _1(0)=\psi _2(0)=0\). Then the prime factorizations of \(\psi _1\) and \(\psi _2\) in the formal power series ring \(\mathbb C[[x_1,\ldots ,x_n]]\) have no common irreducible factor.

These properties hold more generally for polynomials, with no semi-invariance required. An algebraic proof of this fact (which is known) is given in [19], Lemma 8.4, and we give an elementary ad-hoc proof in Sect. 1 of the Appendix.

We next discuss cases in which the local information is rather precise. The proof of the following Lemma 3 is again given in Sect. 1 of the Appendix.

Lemma 3

Let g be a formal vector field as in (4), with \(\lambda _1,\ldots ,\lambda _n\) the eigenvalues of B. Consider the following conditions:

  1. 1.

    \(\lambda _1,\ldots ,\lambda _n\) are linearly independent over the rational number field \(\mathbb {Q}\).

  2. 2.

    \(\dim _{\mathbb {Q}}\,(\mathbb {Q}\lambda _1+\cdots +\mathbb {Q}\lambda _n)=n-1\) and there exist positive integers \(m_1,\ldots ,m_n\) (w.l.o.g. relatively prime) with

    $$\begin{aligned} \sum _{i=1}^nm_i\lambda _i=0. \end{aligned}$$

If one of the two conditions above is satisfied then the following hold.

  1. (a)

    If \(B=\mathrm{diag}(\lambda _1,\ldots ,\lambda _n)\) and g is in Poincaré–Dulac normal form (PDNF) then \(x_1,\ldots ,x_n\) are (up to multiplication by invertible series) the only irreducible formal semi-invariants of g.

  2. (b)

    For general g there exist (up to multiplication by invertible series) precisely n irreducible and pairwise relatively prime formal semi-invariants of system (4).

Thus, one only has to consider finitely many local semi-invariants if condition 1 or 2 from Lemma 3 holds. Moreover, their vanishing sets (in the analytic case) meet transversally in the stationary point 0. This property is characteristic for conditions 1 and 2 above, as the next observation shows.

Remark 1

When the dimension is \(n\ge 3\) and \(\dim _{\mathbb {Q}}(\mathbb {Q}\lambda _1+\cdots +\mathbb {Q}\lambda _n)\le n-2\), then there always exist infinitely many irreducible, pairwise relatively prime formal semi-invariants for the semisimple part \(B_s\) of B: By the dimension assumption, there is a relation \(\sum \ell _i\lambda _i=0\) with integers \(\ell _i\) that do not all have the same sign; thus we may assume that there are nonnegative, relatively prime integers \(k_i\) and some \(1<p<n\) such that

$$\begin{aligned} \sum _{i=1}^p k_i\lambda _i=\sum _{i=p+1}^n k_i\lambda _i, \end{aligned}$$

whence \(x_1^{k_1}\cdots x_p^{k_p}\) and \(x_{p+1}^{k_{p+1}}\cdots x_n^{k_n}\) are semi-invariants with the same cofactor, and

$$\begin{aligned} x_1^{k_1}\cdots x_p^{k_p} +\alpha \cdot x_{p+1}^{k_{p+1}}\cdots x_n^{k_n} \end{aligned}$$

is a semi-invariant for every constant \(\alpha \). Since the \(k_i\) are relatively prime, irreducibility follows whenever \(\alpha \not =0\).
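As a concrete instance of this construction (a toy example chosen here for illustration, not taken from the paper): for \(B_s=\mathrm{diag}(1,2,3)\) in \(\mathbb {C}^3\) one has \(\dim _{\mathbb {Q}}(\mathbb {Q}\cdot 1+\mathbb {Q}\cdot 2+\mathbb {Q}\cdot 3)=1\le n-2\) and \(\lambda _1+\lambda _2=\lambda _3\), so \(x_1x_2\) and \(x_3\) share the cofactor 3. The following sympy sketch verifies that \(x_1x_2+\alpha x_3\) is a semi-invariant for every \(\alpha \).

```python
import sympy as sp

x1, x2, x3, alpha = sp.symbols('x1 x2 x3 alpha')
xs = [x1, x2, x3]
B = [1*x1, 2*x2, 3*x3]                      # B_s = diag(1, 2, 3)

psi = x1*x2 + alpha*x3                      # candidate semi-invariant
lie = sum(bi*sp.diff(psi, xi) for bi, xi in zip(B, xs))
print(sp.simplify(lie - 3*psi))             # 0: the cofactor is the constant 3
```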

2.2 Poincaré Transforms and Stationary Points at Infinity

The following is geometrically motivated by the well known procedure of passing to the Poincaré hypersphere (or to projective space) in the analysis of polynomial vector fields. We choose a rather straightforward adaptation (see [32] in the case of dimension two), to facilitate our computations.

Definition 1

Let \(\psi \in \mathbb {C}[x]\) be a polynomial of degree r with decomposition

$$\begin{aligned} \psi (x)=\sum \limits _{j=0}^{r}\psi ^{(j)}(x), \end{aligned}$$

where each \(\psi ^{(j)}\) is a homogeneous polynomial of degree j and \(\psi ^{(r)}\ne 0\).

  1. (a)

    The homogenization of \(\psi \) with respect to \(x_{n+1}\) is defined as

    $$\begin{aligned} \widetilde{\psi }(x_{1},\ldots ,x_{n},x_{n+1}):=\sum \limits _{j=0}^{r}\psi ^{(j)}(x_{1}, \dots ,x_{n})x_{n+1}^{r-j}\in \mathbb {C}[x_{1},\dots ,x_{n},x_{n+1}]. \end{aligned}$$
  2. (b)

    The special Poincaré transform of \(\psi \) with respect to

    $$\begin{aligned} e_1=(1,0,\ldots ,0)^T\in \mathbb {C}^n \end{aligned}$$

    is the polynomial

    $$\begin{aligned} \psi ^{*}:=\psi ^{*}_{e_{1}}(x_{2},\ldots ,x_{n+1}):=&\,{\widetilde{\psi }}(1,x_{2},\ldots ,x_{n+1})\\ =&\,\sum \limits _{j=0}^{r}\psi ^{(j)}(1,x_{2},\ldots ,x_{n})x_{n+1}^{r-j}\in \mathbb {C}[x_{2},\ldots ,x_{n},x_{n+1}]. \end{aligned}$$
  3. (c)

    A Poincaré transform of \(\psi \) w.r.t. \(v\in \mathbb {C}^n{\setminus } \{0\}\) is defined as

    $$\begin{aligned} \psi ^{*}_{v}(x_2,\dots ,x_{n+1}):=\left( \psi \circ T^{-1}\right) ^{*}_{e_1}(x_2,\dots ,x_{n+1}) \end{aligned}$$

    with a regular matrix \(T\in \mathbb {C}^{n\times n}\) such that \(Tv=e_1\).
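To make Definition 1 concrete, here is a minimal sympy sketch (the helper names are ad hoc, not the paper's): it homogenizes \(\psi \) with respect to the extra variable and then sets \(x_1=1\) to obtain the special Poincaré transform \(\psi ^{*}_{e_1}\).

```python
import sympy as sp

def homogenize(psi, xs, xnew):
    """Homogenization of psi with respect to the extra variable xnew."""
    P = sp.Poly(psi, *xs)
    r = P.total_degree()
    return sp.expand(sum(c * sp.prod(x**e for x, e in zip(xs, exps)) * xnew**(r - sum(exps))
                         for exps, c in P.terms()))

def poincare_transform(psi, xs, xnew):
    """Special Poincare transform psi*_{e_1}: homogenize, then set x_1 = 1."""
    return homogenize(psi, xs, xnew).subs(xs[0], 1)

x1, x2, x3 = sp.symbols('x1 x2 x3')
psi = x1**2 + x2 - 1                          # degree r = 2
print(poincare_transform(psi, [x1, x2], x3))  # x2*x3 - x3**2 + 1
```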

The following properties are easy to prove, see [32] or [19] for more details.

Lemma 4

Let \(\psi \) and v be as in Definition 1. Then the following hold.

  1. (a)

    One has \(\psi ^{(r)}(v)=0\) if and only if \(\psi _v^{*}(0)=0.\)

  2. (b)

    The map

    $$\begin{aligned} \mathbb {C}[x_1,\cdots ,x_n]{\setminus }\left<x_1\right>\rightarrow \mathbb {C}[x_2,\cdots ,x_{n+1}], \quad \psi \mapsto \psi _{e_1}^* \end{aligned}$$

    is injective.

  3. (c)

    If \(\psi \) is irreducible with \(\psi ^{(r)}(v)=0\) then \(\psi _v^*\) is irreducible.

  4. (d)

    If \(\psi _1\) and \(\psi _2\) are relatively prime and their highest degree terms both vanish at v, then \(\psi _{1,v}^*\) and \(\psi _{2,v}^*\) are relatively prime.

Moreover, we define Poincaré transforms of vector fields.

Definition 2

Let f be given as in (1).

  1. (a)

    The homogenization of f with respect to \(x_{n+1}\) is defined as

    $$\begin{aligned} \widetilde{f}(x_{1},\dots ,x_{n+1})&:=\begin{pmatrix} \sum \limits _{j=0}^{m}f^{(j)}(x_{1},\dots ,x_{n})x_{n+1}^{m-j}\\ 0\\ \end{pmatrix}\\&=: \begin{pmatrix} g_{1}\\ \vdots \\ g_{n}\\ 0 \end{pmatrix}\in \mathbb {C}[\underline{ x}]^{n+1},\text { where }\underline{x}=\begin{pmatrix}x_{1}\\ \vdots \\ x_{n+1}\end{pmatrix}. \end{aligned}$$
  2. (b)

    The projection of \(\widetilde{f}\) with respect to \(x_{1}\) is

$$\begin{aligned} P\widetilde{f}(x_{1},\dots ,x_{n+1}):=&\,-g_{1}(x_{1},\dots ,x_{n+1})\cdot \underline{x}+x_{1}\cdot \widetilde{f}(x_{1},\dots ,x_{n+1})\\ =&\,\begin{pmatrix} 0\\ -g_{1}x_{2}+x_{1}g_{2}\\ \vdots \\ -g_{1}x_{n}+x_{1}g_{n}\\ -g_{1}x_{n+1}\\ \end{pmatrix}. \end{aligned}$$
  3. (c)

    The special Poincaré transform with respect to the vector \(e_1\) is defined as

    $$\begin{aligned} f^{*}:=f^{*}_{e_{1}}(x_{2},\dots ,x_{n},x_{n+1}):=\begin{pmatrix}-g_{1}(1,x_{2},\dots ,x_{n+1})x_{2}+g_{2}(1,x_{2},\dots ,x_{n+1})\\ \vdots \\ -g_{1}(1,x_{2},\dots ,x_{n+1}) x_{n}+g_{n}(1,x_{2},\dots ,x_{n+1}) \\ -g_{1}(1,x_{2},\dots ,x_{n+1})x_{n+1}\end{pmatrix}, \end{aligned}$$

    where \(f^{*}\in \mathbb {C}[x_{2},\dots ,x_{n},x_{n+1}]^{n}\).

  4. (d)

    A Poincaré transform of f w.r.t. \(v\in \mathbb {C}^n{\setminus } \{0\}\) is defined as

    $$\begin{aligned} f^{*}_{v}(x_2,\dots ,x_{n+1}):=\left( T\circ f\circ T^{-1}\right) ^{*}_{e_1}(x_2,\dots ,x_{n+1}) \end{aligned}$$

    with a regular matrix \(T\in \mathbb {C}^{n\times n}\) such that \(Tv=e_1\).
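Definition 2 can likewise be carried out mechanically; the following sympy sketch (again with ad hoc helper names, and an illustrative planar example) computes the special Poincaré transform \(f^{*}_{e_1}\) of a polynomial vector field.

```python
import sympy as sp

def homogenize_component(poly, xs, xnew, m):
    """sum_j poly^{(j)}(x) * xnew^(m - j): homogenize one component to total degree m."""
    P = sp.Poly(poly, *xs)
    return sp.expand(sum(c * sp.prod(x**e for x, e in zip(xs, exps)) * xnew**(m - sum(exps))
                         for exps, c in P.terms()))

def poincare_field(f, xs, xnew):
    """Special Poincare transform f*_{e_1} of the polynomial vector field f (Definition 2)."""
    m = max(sp.Poly(fi, *xs).total_degree() for fi in f)
    g = [homogenize_component(fi, xs, xnew, m).subs(xs[0], 1) for fi in f]  # g_i(1, x_2, ..., x_{n+1})
    return [sp.expand(-g[0]*y + gi) for y, gi in zip(xs[1:], g[1:])] + [sp.expand(-g[0]*xnew)]

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = [x1**2 + x2, x1*x2 - 1]                    # an illustrative planar quadratic field
print(poincare_field(f, [x1, x2], x3))         # [-x2**2*x3 - x3**2, -x2*x3**2 - x3]
```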

Remark 2

The definitions of Poincaré transforms depend on the choice of the matrix T, but a different choice of T will just amount to a linear automorphism of the polynomial algebra resp. a conjugation of vector fields by a linear transformation; see [32], Lemma 3.1. This will be irrelevant for our purpose.

We now turn to stationary points at infinity, i.e. to stationary points of Poincaré transforms which lie in the hyperplane \(\{x:\,x_{n+1}=0\}\).

Lemma 5

Let f be as in (1) and \(v\in \mathbb {C}^n{\setminus }\{0\}.\) Then:

  1. (a)

    The point 0 is stationary for \(f_v^*\) if and only if \(f^{(m)}(v)=\gamma v\) for some \(\gamma \in \mathbb {C}.\)

  2. (b)

    The Jacobian \(Df^{(m)}(v)\) admits the eigenvector v, with eigenvalue \(m\gamma \). Let \(\beta _2,\ldots ,\beta _n\) be the further eigenvalues of \(Df^{(m)}(v)\) (counted according to multiplicity). Then the eigenvalues of \(Df_v^*(0)\) are given by

    $$\begin{aligned} -\gamma ,\beta _2-\gamma ,\ldots ,\beta _n-\gamma . \end{aligned}$$
  3. (c)

    If the number of lines \(\mathbb {C}v\) with \(f^{(m)}(v)\in \mathbb {C}v\) is finite then it is equal to

    $$\begin{aligned} \frac{m^n-1}{m-1}=\sum _{i=0}^{n-1}m^i, \end{aligned}$$

    counting multiplicities.

Proof

Statement (a) can be verified directly. For statement (b) see [27], Proposition 1.8 and Corollary 1.9. For statement (c), a proof is given by Röhrl [25], using Bezout’s theorem in projective space (see e.g. Shafarevich [28], Chapter 4 for this). \(\square \)
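For a concrete check of Lemma 5 (a toy computation with a field chosen here for illustration), take \(f(x)=(x_1^2,\,x_2^2,\,x_3^2)^T\): the vector \(v=e_1\) satisfies \(f^{(m)}(v)=v\), so \(\gamma =1\), and \(Df^{(m)}(e_1)=\mathrm{diag}(2,0,0)\) has the eigenvalue \(2=m\gamma \) on v and further eigenvalues \(\beta _2=\beta _3=0\); statement (b) then predicts the eigenvalues \(-1,-1,-1\) for \(Df_{e_1}^*(0)\), which the following sympy sketch confirms.

```python
import sympy as sp

x2, x3, x4 = sp.symbols('x2 x3 x4')

# f(x) = (x1^2, x2^2, x3^2)^T has the idempotent v = e_1 (gamma = 1, m = 2).
# Its special Poincare transform (Definition 2(c)) can be written down directly:
g1, g2, g3 = 1, x2**2, x3**2               # g_i(1, x2, x3, x4)
fstar = sp.Matrix([-g1*x2 + g2, -g1*x3 + g3, -g1*x4])

J = fstar.jacobian([x2, x3, x4]).subs({x2: 0, x3: 0, x4: 0})
print(J.eigenvals())                       # {-1: 3}: equal to -gamma, beta_2-gamma, beta_3-gamma
```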

Remark 3

Note that the stationary point 0 of \(f_v^*\) has multiplicity one if and only if all the eigenvalues of its Jacobian are nonzero. Moreover, the stationary point 0 is then isolated. In particular, given condition 1 or 2 of Lemma 3 for the eigenvalues at the stationary point 0 of \(f_v^*\), this stationary point has multiplicity one. In the particular (non-generic) case when \(f^{(m)}(x)=\rho (x)\cdot x\) one sees that every point at infinity is stationary. If one considers differential forms and foliations of projective space rather than vector fields in affine space, one usually divides out a common factor in the coefficients, obtaining a form which generally does not admit the hyperplane at infinity as an invariant set. The vector fields we will discuss are not in this particular class.

3 Invariant Hypersurfaces of a Class of Vector Fields

In the present section we will discuss a particular class of vector fields and degree bounds for irreducible semi–invariants of this class. We first give some definitions.

Definition 3

Let f be given as in (1).

  1. (a)

    We say that f has property E if condition 1 or condition 2 in Lemma 3 holds for the linearization of \(f_v^*\) at every stationary point at infinity.

  2. (b)

    We say that f has property S if every proper homogeneous invariant variety Y of \(f^{(m)}\) with \(\dim Y\ge 1\) contains an invariant subspace \(\mathbb {C}v\) with \(v\ne 0.\)

Note that both conditions apply only to the highest degree term \(f^{(m)}.\) In dimension two, property S is trivially satisfied, and it was shown in [32], Proposition 3.7 that property E is generic. In higher dimensions it is not a priori obvious that vector fields admitting properties E and S exist. In Sect. 4 we will construct a class of examples in dimension three.

One consequence of property E is that the number of stationary points at infinity is finite (otherwise some Jacobian at a stationary point would have to be non-invertible), and every stationary point at infinity has multiplicity one. A further consequence is not directly relevant for the main topic of this paper, but it is worth recording.

Proposition 1

Let f satisfy property E. Then the number of irreducible invariant algebraic curves for system (1) is bounded by \((m^n-1)/(m-1)\).

Proof

Let C be an invariant algebraic curve of the system. Then its intersection with the hyperplane at infinity is invariant, hence every intersection point is stationary. Let \(\mathbb Cv\) correspond to one of these points. Then, by Theorems 3.1 and 3.2 of [20], the local ideal defining the image of the curve under the Poincaré transform is generated by certain semi-invariants of \(Df_v^*(0)\), and there must be \(n-1\) of these, since the dimension of the curve equals one. Using property E and Lemma 3, and the fact that the transformed curve is not contained in the hyperplane \(\{x:\,x_{n+1}=0\}\), one sees that the only possible ideal is the one not containing the semi–invariant \(x_{n+1}\). This argument shows that every stationary point at infinity can be an intersection point with at most one irreducible invariant curve. Now use Lemma 5(c). \(\square \)

3.1 Degree Bounds for Semi-invariants

We first state a preliminary result.

Lemma 6

Let \(\phi _1,\ldots ,\phi _d\) be irreducible and pairwise relatively prime semi–invariants of the polynomial vector field f, and let v be such that \(\phi _{i,v}^*(0)=0\), \(1\le i\le d\), and that the Jacobian \(Df_v^*(0)\) satisfies condition 1 or 2 from Lemma 3. Denote the irreducible local semi–invariants of \(Df_v^*(0)\) by \(\sigma _1,\ldots ,\sigma _{n-1},\,\sigma _n:=x_{n+1}\). Then \(\sigma _n\) is not a factor of any \(\phi _{i,v}^*\), and for the prime decompositions

$$\begin{aligned} \phi _{i,v}^*=\sigma _1^{\ell _{i,1}}\cdots \sigma _{n-1}^{\ell _{i,n-1}}\cdot \nu _i,\quad 1\le i\le d \end{aligned}$$

with nonnegative integers \(\ell _{i,j}\) and invertible series \(\nu _i\) one has

$$\begin{aligned} \sum _i \ell _{i,j}\le 1. \end{aligned}$$

Proof

This is a direct consequence of Lemma 2. \(\square \)

Our first result yields degree bounds when \(n-1\) irreducible semi–invariants exist.

Theorem 1

Let the vector field f in (1) satisfy properties E and S, and let \(\phi _1,\ldots , \phi _{n-1}\) be irreducible and pairwise relatively prime semi–invariants of f as in (2), of degrees \(r_1,\ldots ,r_{n-1}\) respectively. Then

$$\begin{aligned} \prod _{i=1}^{n-1}r_i\le \frac{m^n-1}{m-1}. \end{aligned}$$

Proof

We first recall that invariance of the common zero set of the \(\phi _j\) for (1) implies invariance of \(\mathcal {V}(\phi _1^{(r_1)},\ldots , \phi _{n-1}^{(r_{n-1})})\) for \(\dot{x}=f^{(m)}(x)\). Now let \(\mathbb {C}v\) be a common zero of \(\phi _1^{(r_1)},\ldots ,\phi _{n-1}^{(r_{n-1})}.\) By property S we may assume that \(f^{(m)}(v)\in \mathbb {C}v\). Consider the prime factorization of the \(\phi _{i,v}^*\) in the power series ring. By property E and Lemma 6, after possibly renumbering the factors, we have

$$\begin{aligned} \phi _{i,v}^*=\sigma _i\cdot \nu _i,\quad 1\le i\le n-1, \end{aligned}$$

whence the intersection multiplicity of the common zero of the \(\phi _{i,v}^*\) and \(x_{n+1}\) is equal to one. In particular the irreducible component Y of \(\mathcal {V}(\phi _1^{(r_1)},\ldots , \phi _{n-1}^{(r_{n-1})})\) that contains \(\mathbb {C}v\) is equal to \(\mathbb {C}v\). Bezout’s Theorem in projective space now shows that \(\mathcal {V}(\phi _1^{(r_1)},\ldots , \phi _{n-1}^{(r_{n-1})})\) is the union of precisely \(\prod _{i=1}^{n-1}r_i\) distinct lines in \(\mathbb {C}^n\). Each of these lines is an invariant set for \(\dot{x}=f^{(m)}(x),\) hence by Lemma 5(c) one has the assertion. \(\square \)

Corollary 1

Let f be as in (1), satisfying properties E and S. Moreover, let \(k\ge n\) and \(\phi _1,\ldots ,\phi _k\) be irreducible and pairwise relatively prime semi–invariants of f, with \(deg\ \phi _i=r_i\). Then,

$$\begin{aligned} \prod _{1\le i\le k}r_i\cdot \sum _{\begin{array}{c}M\subseteq \{1,\ldots ,k\}\\ |M|=k+1-n\end{array}}\frac{1}{\prod _{j\in M}r_j} \le \frac{m^n-1}{m-1}. \end{aligned}$$

Proof

Let \(1\le i_1<\cdots <i_{n-1}\le k\), thus \(\{i_1,\ldots ,i_{n-1}\}\) is the complement of a set \(M\subseteq \{1,\ldots ,k\}\) with \(k+1-n\) elements. Then, by the argument in the proof of Theorem 1, the vanishing set

$$\begin{aligned} \mathcal {V}\left( \phi _{i_1}^{(r_{i_1})},\ldots ,\phi _{i_{n-1}}^{(r_{i_{n-1}})}\right) \end{aligned}$$

is a union of precisely \(r_{i_1}\cdots r_{i_{n-1}}\) invariant lines, each with multiplicity one. By property E and Lemma 3(b), each invariant line \(\mathbb {C}v\) of \(f^{(m)}\) appears in at most one of the common vanishing sets

$$\begin{aligned} \mathcal {V}\left( \phi _{i_1}^{(r_{i_1})},\ldots ,\phi _{i_{n-1}}^{(r_{i_{n-1}})}\right) , \end{aligned}$$

since otherwise more than n local invariant hypersurfaces would meet at the stationary point 0 of \(f_v^*\). Now Bezout’s theorem, and adding up the contributions of all stationary points at infinity, shows the assertion. \(\square \)

The second result may be used to obtain degree bounds for semi–invariants of f, if degree bounds for semi–invariants of the highest degree term \(f^{(m)}\) are known.

Theorem 2

Let the vector field f in (1) satisfy properties E and S, and let \(\phi _1,\ldots ,\phi _s\) be irreducible and pairwise relatively prime semi–invariants of f.

  1. (a)

    Let \(\{\psi _1,\ldots ,\psi _\ell \}\) be the set of pairwise relatively prime irreducible factors of the \(\phi _i^{(r_i)}\) with \(1\le i\le s\). (Note that these are homogeneous semi–invariants of \(f^{(m)}\).) Then, given the representations

    $$\begin{aligned} \phi _i^{(r_i)}=\prod _{j=1}^\ell \psi _j^{k_{ij}}, \quad k_{ij}\ge 0, \end{aligned}$$

    one has \(\sum _{i=1}^sk_{ij}\le 1\) for all j. Thus every \(\psi _j\) appears in at most one of the \(\phi _i^{(r_i)}\), and if it appears then with exponent 1.

  2. (b)

    If \(f^{(m)}\) admits only finitely many irreducible semi–invariants \(\psi _1,\ldots ,\psi _\ell \) (up to multiplication by constants), then

    $$\begin{aligned} deg \ \phi _i=\sum _{j=1}^\ell k_{ij}\ deg\ \psi _j, \quad 1\le i\le s \end{aligned}$$

    and

    $$\begin{aligned} \sum _{i=1}^s deg\ \phi _i\le \sum _{j=1}^\ell deg\ \psi _j. \end{aligned}$$

    In particular the number of irreducible and pairwise relatively prime semi–invariants of f is finite.

Proof

Statement (b) is a simple consequence of (a). As for the proof of statement (a), by property S we may again consider the \(\phi _i^{(r_i)}\) at a common zero \(\mathbb {C}v\) which is also invariant for \(\dot{x}=f^{(m)}(x).\) We consider a Poincaré transform at v, and there is no loss of generality in assuming that \(v=e_1\). Moreover let the \(\sigma _i\) be as in Lemma 6, and by property E and Lemma 3 we may assume that

$$\begin{aligned} \sigma _i(x_2,\ldots ,x_n,\,x_{n+1})=x_{i+1}+\text { h.o.t},\quad 1\le i\le n-1, \end{aligned}$$

with “h.o.t” denoting higher order terms. From Definition 1 we have

$$\begin{aligned} \phi _i^{(r_i)}(1,x_2,\ldots ,x_n)+x_{n+1}\cdot (\cdots )=\phi _{i,e_1}^*=\sigma _1^{\ell _{i,1}}\cdots \sigma _{n-1}^{\ell _{i,n-1}}\cdot \nu _i \end{aligned}$$

with all \(\ell _{i,j}\in \{0,1\}\) and therefore also

$$\begin{aligned} \phi _i^{(r_i)}(1,x_2,\ldots ,x_n)=\sigma _1(x_2,\ldots ,x_n,0)^{\ell _{i,1}}\cdots \sigma _{n-1}(x_2,\ldots ,x_n,0)^{\ell _{i,n-1}}\cdot \nu _i(x_2,\ldots ,x_n,0). \end{aligned}$$

From \(\sigma _i=x_{i+1}+\cdots \) one sees that the \(\sigma _i(x_2,\ldots ,x_n,0)\) are still irreducible and pairwise relatively prime in the formal power series ring. Comparing this decomposition with

$$\begin{aligned} \phi _i^{(r_i)}=\prod _{j=1}^\ell \psi _j^{k_{ij}}, \end{aligned}$$

the corresponding local decompositions of the \(\psi _j^{k_{ij}}(1,\,x_2,\ldots ,x_n)\) can have only simple prime factors, hence \(k_{ij}\le 1\) whenever \(\psi _j(v)=0\). \(\square \)

3.2 Degree Bounds for Jacobi Multipliers

We recall that a Jacobi multiplier (or Jacobi last multiplier) \(\sigma \ne 0\) of a vector field h is characterized by the condition

$$\begin{aligned} X_h(\sigma )+\mathrm{div}\, h\cdot \sigma =0, \text { equivalently } X_h(\sigma ^{-1})=\mathrm{div}\, h\cdot \sigma ^{-1}. \end{aligned}$$

In particular, a Jacobi multiplier satisfies the defining identity for semi–invariants but might be invertible. For a comprehensive account of Jacobi multipliers and their properties see Berrone and Giacomini [1].

We focus on Jacobi multipliers that are algebraic over \(\mathbb {C}(x_1,\ldots ,x_n)\) resp. \(\mathbb {C}((x_1,\ldots ,x_n))\). This is a natural requirement in dimension \(n=2\) in view of Prelle and Singer’s paper [24] on elementary integrability, and seems to be a sensible (initial) restriction also for local integrating factors of Darboux type, as the following results will imply. We note that by Lemma 1 one may restrict interest to products of powers of semi–invariants, with rational exponents. We first discuss the local analytic, respectively the formal case, given the setting of Lemma 3.

Lemma 7

Let \(g(x)=Bx+\cdots \) as in (4) in Poincaré-Dulac normal form, \(B=\mathrm{diag}\,(\lambda _1,\ldots ,\lambda _n)\).

  1. (a)

    If condition 1 from Lemma 3 holds, then \((x_1\cdots x_n)^{-1}\) is, up to multiplication by constants, the only Jacobi multiplier of g which is algebraic over \(\mathbb {C}((x_1,\ldots ,x_n)).\)

  2. (b)

    If condition 2 from Lemma 3 holds, and \(\gamma =x_1^{m_1}\cdots x_n^{m_n}\), then there exist linearly independent diagonal matrices \(C_1,\ldots ,C_{n-1}\) such that the Lie bracket \([g, C_ix]=0\) and \(X_{C_ix}(\gamma )=0\), \(1\le i\le n-1\); moreover B is a linear combination of the \(C_i\).

    • If \(\tau (x):=det(g(x), C_1x,\ldots ,C_{n-1}x)\ne 0\), then there exists \(\ell >0\) such that \(\tau (x)=\gamma (x)^\ell \cdot (x_1\cdots x_n)\cdot \widehat{\tau }(x),\) with an invertible series \(\widehat{\tau }\), and \(\tau ^{-1}\) is (up to multiplication by constants) the unique Jacobi multiplier for g which is algebraic over \(\mathbb {C}((x_1,\ldots ,x_n)).\)

    • If \(\tau (x)=0\) then \((x_1\cdots x_n)^{-1}\) is a Jacobi multiplier, and \(\gamma \) a first integral of g. In that case every Jacobi multiplier that is algebraic over \(\mathbb {C}((x_1,\ldots ,x_n))\) has the form

      $$\begin{aligned} \gamma ^d\cdot (x_1\cdots x_n)^{-1}\cdot \nu \end{aligned}$$

      with \(d\in \mathbb {Q}\) and \(\nu \) an algebraic first integral of g.

Proof

(a) In this case one has \(g(x)=Bx\), and one directly verifies that \(\sigma := (x_1\cdots x_n)^{-1}\) is a Jacobi multiplier. Assume that there exists a Jacobi multiplier \(\widetilde{\sigma }\) that is algebraic over \(\mathbb {C}((x_1,\ldots ,x_n))\). Then by Lemma 1 one also has a Jacobi multiplier \(x_1^{-d_1}\cdots x_n^{-d_n}\) with rational exponents, and

$$\begin{aligned} \sum d_i\lambda _i=\sum \lambda _i. \end{aligned}$$

Condition 1 implies that all \(d_i=1\), hence \(x_1^{-1}\cdots x_n^{-1}\) is the unique algebraic integrating factor. Using the (notation and) argument in the proof of Lemma 1 the coefficient of \(T^{m-i}\) in the minimal polynomial of \(\widetilde{\sigma }^{-1}\) is a constant multiple of \(x_1\cdots x_n\). But then the same holds for \(\widetilde{\sigma }^{-1}\) itself.

(b) It is known that

$$\begin{aligned} g(x)=Bx+\sum _{j\ge 1}\gamma (x)^jD_jx \end{aligned}$$

with diagonal matrices \(D_j\); see e.g. Bibikov [2], Definition 2.3 and Theorem 2.2. The equation \(\sum m_i\mu _i=0\) has \(n-1\) linearly independent solutions in \(\mathbb {Q}^n\). Take these as the diagonals of \(C_1,\ldots ,C_{n-1}\), respectively. Then \(X_{C_ix}(\gamma )=0\) and \([C_i, B]=[C_i, D_j]=0\) for all i, j, whence \([g, C_ix]=0\) for all i. Now write \(D_jx=\sum \nu _{jh} C_hx+\beta _j\cdot x\) with constants \(\nu _{jh}\) and \(\beta _j\). Thus,

$$\begin{aligned} \tau (x)=\sum _{j\ge 1} \gamma (x)^j\beta _j \det (x, C_1x,\ldots , C_{n-1}x)=\kappa x_1\cdots x_n\sum _{j\ge 1}\beta _j\gamma (x)^j, \end{aligned}$$

with a nonzero constant \(\kappa \). If \(\tau \ne 0\) then \(\tau ^{-1}\) is a Jacobi multiplier according to Berrone and Giacomini [1], and if \(\ell \) is the smallest index with \(\beta _\ell \ne 0\) then we get

$$\begin{aligned} \tau (x)=\gamma ^\ell (x)\cdot (x_1\cdots x_n)\cdot (\beta _\ell \kappa +h.o.t). \end{aligned}$$

As for uniqueness, we first recall: Every first integral in \(\mathbb {C}((x_1,\ldots ,x_n))\) of \(\dot{x}=Bx\) is a quotient of power series in \(\gamma \). Indeed, numerator and denominator are semi-invariants of B, with the same cofactor (which is a first integral of B and lies in \(\mathbb {C}[[x_1,\ldots ,x_n]]\)), and the argument in the proof of Lemma 3 shows the assertion. From this we find that g admits no nonconstant first integral that is algebraic over \(\mathbb {C}((x_1,\ldots ,x_n))\) whenever some \(\beta _\ell \ne 0\), by showing that there exists no such first integral in \(\mathbb {C}((x_1,\ldots ,x_n))\). But such a first integral would also be a first integral of \(\dot{x}=Bx\) (see e.g. [22], Theorem 1), hence a quotient of power series in \(\gamma \). But then \(\gamma \) is a first integral. Finally, the identity

$$\begin{aligned} X_g(\gamma )=\kappa \cdot \sum m_i\cdot \sum _{j\ge 1} \beta _j\gamma (x)^{j+1} \end{aligned}$$

yields a contradiction unless all \(\beta _j=0\). In the case \(\tau =0\) one verifies by direct computation that \((x_1\cdots x_n)^{-1}\) is a Jacobi multiplier, and \(\gamma \) is obviously a first integral of g. The last statement is again clear from known properties of Jacobi multipliers; see [1]. \(\square \)
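The first step of the proof of part (a) is easy to check symbolically; the following small sympy sketch (with \(n=3\) and generic symbolic eigenvalues, an illustrative computation) verifies that \(\sigma =(x_1x_2x_3)^{-1}\) satisfies \(X_g(\sigma )+\mathrm{div}\,g\cdot \sigma =0\) for \(g(x)=Bx\), \(B=\mathrm{diag}(\lambda _1,\lambda _2,\lambda _3)\).

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
l1, l2, l3 = sp.symbols('lambda1 lambda2 lambda3')

g = [l1*x1, l2*x2, l3*x3]                  # g(x) = Bx with B = diag(lambda_1, lambda_2, lambda_3)
sigma = 1/(x1*x2*x3)

lie = sum(gi*sp.diff(sigma, xi) for gi, xi in zip(g, [x1, x2, x3]))   # X_g(sigma)
div = sum(sp.diff(gi, xi) for gi, xi in zip(g, [x1, x2, x3]))         # div g
print(sp.simplify(lie + div*sigma))        # 0: sigma is a Jacobi multiplier of g
```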

Obviously, Lemma 7 is also applicable to local analytic or formal systems which are not in PDNF, given that condition 1 or condition 2 of Lemma 3 holds. We shall now apply this to Jacobi multipliers of system (1) that are algebraic over the rational function field \(\mathbb {C}(x_1,\ldots ,x_n)\).

Theorem 3

Let the polynomial vector field f given by (1) satisfy properties E and S, and let

$$\begin{aligned} \left( \phi _1^{d_1}\cdots \phi _s^{d_s}\right) ^{-1} \end{aligned}$$

be a Jacobi multiplier of f, with the \(\phi _i\) as in (3), and nonzero rational exponents \(d_1,\cdots ,d_s\).

Moreover, assume that there exists a line \(\mathbb {C}w\) such that the linearization of \(f_w^*\) at 0 has eigenvalues that are linearly independent over \(\mathbb {Q}\). Then

$$\begin{aligned} d_1=\cdots =d_s=1 \quad \text{ and }\quad \sum _{i=1}^sr_i=m+n-1. \end{aligned}$$

Proof

(i) We need some technical preparations, the proofs of which are straightforward generalizations of the ones for Proposition 3.3 in [32].

  • If \(\psi \) is a semi–invariant of f, with degree r and \(X_f(\psi )=\lambda \psi \), then the Poincaré transform \(\psi _{e_1}^*\) is a semi–invariant of \(f^*_{e_1}\) with cofactor \(-rg_1(1,x_2,\ldots ,x_{n+1})+\lambda _{e_1}^*.\)

  • \(f_{e_1}^*\) admits the Jacobi multiplier

    $$\begin{aligned} \left( x_{n+1}^{(m+n-\sum d_ir_i)}\cdot (\phi _{1,e_1}^*)^{d_1}\cdots (\phi _{s,e_1}^*)^{d_s} \right) ^{-1}. \end{aligned}$$
  • More generally, for any \(v\in \mathbb {C}^n{\setminus }\{0\}\), the Poincaré transform \(f_v^*\) admits the Jacobi multiplier

    $$\begin{aligned} \left( x_{n+1}^{(m+n-\sum d_ir_i)}\cdot (\phi _{1,v}^*)^{d_1}\cdots (\phi _{s,v}^*)^{d_s} \right) ^{-1}. \end{aligned}$$

(ii) Let \(\mathbb {C}w\) correspond to a stationary point at infinity such that the eigenvalues of \(Df_w^*(0)\) are linearly independent over \(\mathbb {Q}\). Then Lemma 7 (a) shows that \(d_i=1\) for all i such that \(\phi _{i,w}^*(0)=0\), and moreover \(m+n-\sum d_ir_i=1\).

(iii) If \(\mathbb {C}v\) corresponds to a stationary point at infinity such that the eigenvalues of \(Df_v^*(0)\) satisfy the second condition of Lemma 3, then one exponent in the local factorization of the multiplier, viz. the one belonging to the factor \(x_{n+1}\), is equal to \(-1\). By Lemma 7(b), all the other exponents equal \(-1\), hence \(d_i=1\) whenever \(\phi _{i,v}^*(0)=0\). Due to property S, we thus find that all exponents are equal to \(-1\), and the assertion follows. \(\square \)

3.3 Reduction of Dimension

The verification of property S in dimension three, as well as the search for degree bounds via Theorem 2 leads to semi–invariants of the homogeneous vector field \(f^{(m)}\), and in turn to semi–invariants of a reduced vector field in \(\mathbb {C}^{n-1}\). This reduction is due to scaling symmetry, and we will recall it now.

Proposition 2

Let \(p:\mathbb {C}^n\rightarrow \mathbb {C}^n\), \(x\mapsto (p_1(x), \ldots , p_n(x))^T\) be homogeneous of degree m, and let \(H=\left\{ x:\,x_n=0\right\} .\)

  1. (a)

    Then

    $$\begin{aligned} \Phi : \mathbb {C}^n{\setminus } H\rightarrow \mathbb {C}^{n-1}, \quad x\mapsto \left( \begin{array}{c}x_1/x_n\\ \vdots \\ x_{n-1}/x_n\end{array}\right) \end{aligned}$$

    maps solution orbits of \(\dot{x}=p(x)\) to solution orbits of \(\dot{y}=q(y)\), with

    $$\begin{aligned} q=\left( \begin{array}{c} q_1\\ \vdots \\ q_{n-1} \end{array} \right) ; \quad q_i(y)=p_i(y_1,\cdots ,y_{n-1}, 1)-y_ip_n(y_1,\cdots ,y_{n-1},1). \end{aligned}$$
  2. (b)

    Every homogeneous invariant set Y of \(\dot{x}=p(x)\) which is not contained in H is mapped to an invariant set of \(\dot{y}=q(y)\) with dimension decreasing by one. Conversely, the inverse image of every invariant set Z of \(\dot{y}=q(y)\) is a homogeneous invariant set of \(\dot{x}=p(x),\) with dimension increasing by one.

  3. (c)

    Let \(v\in \mathbb {C}^n\) such that \(v_n\not =0\) and \(p(v)=\gamma v\). Then v is an eigenvector of Dp(v) with eigenvalue \(m\gamma \). Let \(\beta _2,\ldots ,\beta _{n}\) be the other eigenvalues of Dp(v), each counted according to its multiplicity. Then the linearization of q at the stationary point \((v_1/v_n,\ldots ,v_{n-1}/v_n)\) has eigenvalues

    $$\begin{aligned} v_n(\beta _2-\gamma ),\ldots ,v_n(\beta _{n}-\gamma ). \end{aligned}$$

Sketch of proof

From \(\dot{x}_i=p_i(x)\) one gets

$$\begin{aligned} \frac{d}{dt}\left( \frac{x_i}{x_n}\right) =\frac{1}{x_n^2}(x_{n}p_i(x)-x_ip_n(x)), \quad 1\le i\le n-1 \end{aligned}$$

and rescaling time yields

$$\begin{aligned} \left( \frac{x_i}{x_n}\right) '=x_np_i(x)-x_ip_n(x), \quad 1\le i\le n-1. \end{aligned}$$

Dehomogenize to obtain statement (a).

Statement (b) is a direct consequence of (a) since any invariant set is a union of solution orbits.

The proof of statement (c) is a variant of the proof of [27], Proposition 1.8: Define

$$\begin{aligned} Q(x):=&\ x_n\cdot p(x)-p_n(x)\cdot x,\quad \text{ with }\\ DQ(x)y=&\ y_n\cdot p(x)+x_n\cdot Dp(x)y-(Dp(x)y)_n\cdot x -p_n(x)y \end{aligned}$$

and note that the first \(n-1\) entries of Q(x), upon setting \(x_n=1\), are just the entries of q(x). Now let y be an eigenvector of Dp(v) that is linearly independent from v, with eigenvalue \(\beta \). Then, using \(p(v)=\gamma v\), one gets

$$\begin{aligned} DQ(v)y= \left( \cdots \right) \cdot v+ v_n\cdot Dp(v)y-\gamma v_n\cdot y=\left( \cdots \right) \cdot v+v_n\cdot (\beta -\gamma )\cdot y. \end{aligned}$$

This proves the part of the assertion for eigenvectors. The remaining part (if nontrivial Jordan blocks exist) is proven similarly. \(\square \)
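The reduction of Proposition 2 is easy to carry out symbolically; the following sympy sketch (the helper name reduce_field and the example field are ad hoc choices) computes q for \(p(x)=(x_1^2,x_2^2,x_3^2)^T\) and checks statement (c) at the idempotent \(v=(1,1,1)^T\), where \(\gamma =1\), \(v_3=1\) and \(\beta _2=\beta _3=2\).

```python
import sympy as sp

x1, x2, x3, y1, y2 = sp.symbols('x1 x2 x3 y1 y2')

def reduce_field(p, xs, ys):
    """Reduction of a homogeneous field p with respect to x_n (Proposition 2(a))."""
    subs = dict(zip(xs, list(ys) + [1]))
    pn = p[-1].subs(subs)
    return [sp.expand(pi.subs(subs) - yi*pn) for pi, yi in zip(p[:-1], ys)]

p = [x1**2, x2**2, x3**2]                  # p(v) = v for v = (1, 1, 1)^T
q = reduce_field(p, [x1, x2, x3], [y1, y2])
print(q)                                    # [y1**2 - y1, y2**2 - y2]

J = sp.Matrix(q).jacobian([y1, y2]).subs({y1: 1, y2: 1})
print(J.eigenvals())                        # {1: 2} = v_3*(beta_i - gamma) with beta_i = 2, gamma = 1
```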

Remark 4

  1. (a)

    The entries of q in part (a) of Proposition 2 are just the first \(n-1\) entries of the Poincaré transform of p with respect to the vector \(e_n\). Note that the \(q_i\) may have common factors. Dividing out such common factors, if necessary, one obtains what we call the full reduction of p.

  2. (b)

    A coordinate-free version of the reduction starts from a nonzero linear form

    $$\begin{aligned} \alpha (x)=\sum \alpha _ix_i, \quad H_\alpha :=\{x:\,\alpha (x)=0\}\text { and } \Psi _{\alpha }: \mathbb {C}^n{\setminus } H_\alpha \rightarrow \mathbb {C}^n,\,\,x\mapsto \frac{1}{\alpha (x)}\,x. \end{aligned}$$

    Then \(\Psi _\alpha \) maps solution orbits of \(\dot{x}=p(x)\) to solution orbits of the equation

    $$\begin{aligned} \dot{x}=Q_\alpha (x):=\alpha (x)p(x)-\alpha (p(x))x \end{aligned}$$

    which admits the linear first integral \(\alpha \). In this case, whenever \(p(v)=\gamma v\), \(\alpha (v)\not =0\) and the eigenvalues of Dp(v) are as in Proposition 2, the eigenvalues for the reduced system on the hyperplane given by \(\alpha (x)=1\) are

    $$\begin{aligned} \alpha (v)\cdot (\beta _2-\gamma ),\ldots ,\alpha (v)\cdot (\beta _{n}-\gamma ). \end{aligned}$$

4 Dimension Three

In this section we will specialize our general results to dimension \(n=3\). In particular we will verify that property S from Definition 3 is always satisfied, and show that property E holds for almost all quadratic vector fields (in a sense to be specified).

4.1 Property S and Reduction

The first pertinent property is always satisfied in dimension three, as follows directly from the work of Jouanolou [18].

Proposition 3

Let f be a polynomial vector field in \(\mathbb C^3\). Then f satisfies property S.

Proof

In dimension three one has to prove that the zero set of a homogeneous semi–invariant of a homogeneous polynomial vector field p contains an invariant line for p. There is no loss of generality in assuming that the entries of p are relatively prime. One may rephrase this for the projective plane \(\mathbb P^2(\mathbb {C})\), by introducing the one-form

$$\begin{aligned} \omega =(p_2x_3-p_3x_2)\,\mathrm{d}x_1+(p_3x_1-p_1x_3)\,\mathrm{d}x_2+(p_1x_2-p_2x_1)\,\mathrm{d}x_3 \end{aligned}$$

and considering a homogeneous polynomial solution \(\phi \) of the Pfaffian equation \(\omega =0\). First consider the case when the projective curve defined by the zeros of \(\phi \) is smooth (hence normal). Then by Jouanolou, Chapter 2, Proposition 4.1(ii), the curve defined by \(\phi \) in \(\mathbb P^2\) contains a singular point of \(\omega \), which corresponds to an invariant line for the homogeneous vector field p. If the normality requirement for the solution is not satisfied then the projective curve defined by \(\phi \) contains a singular point, which corresponds to a line through the origin consisting of singular points of the cone defined by \(\phi \). But singular sets of invariant varieties of polynomial vector fields are also invariant. \(\square \)

We note that from Jouanolou [18], Chapter 2, Proposition 4.1(iii) one also obtains the degree bound \(m+1\) for irreducible homogeneous semi-invariants whose associated projective curve is smooth. Next we discuss the reduction of the highest degree term of f, with a view to applying Theorem 2.

Lemma 8

Let the polynomial vector field f be given as in (1), and consider the reduction of its homogeneous highest degree term \(p=f^{(m)}\), according to Proposition 2 and Remark 4. Then the following hold.

  1. (a)

    Upon identifying homogeneous polynomial vector fields with their coefficients in some \(\mathbb {C}^N\), there is a nonempty Zariski-open subset of \(\mathbb {C}^N\) such that for every p with coefficients in this subset the full reduction q of p admits no stationary points at infinity.

  2. (b)

    Assume \(p(v)=v\) and let the linear form \(\alpha \) be such that \(\alpha (v)\not =0\). Moreover let the eigenvalues of Dp(v) be m, \(\beta _2\) and \(\beta _3\). Then the eigenvalues of the linearization of q at the corresponding stationary point are \(\alpha (v)(\beta _2-1)\) and \(\alpha (v)(\beta _3-1)\).

  3. (c)

    Assume that \(p(v)=v\) and the linearization of \(f_v^*\) at the stationary point at infinity of system (1) satisfies condition 1 or condition 2 from Lemma 3. Then the \(\beta _i-1\) are nonzero and their ratio is not a positive rational number.

Proof

We may assume that \(\alpha (x)=x_3\). For statement (a), abbreviate \(h_i(y_1,y_2):=p_i(y_1,y_2,1)\), noting that the \(h_i\) generically have degree m. We now compute the Poincaré transform of q with respect to \(e_1\), following the procedure in Definition 2. The first two entries of the homogenization have the form

$$\begin{aligned} -h_3^{(m)}(y_1,y_2)\cdot \begin{pmatrix}y_1\\ y_2\end{pmatrix}+y_3\cdot \left( \begin{pmatrix}h_1^{(m)}(y_1,y_2)\\ h_2^{(m)}(y_1,y_2)\end{pmatrix}-h_3^{(m-1)}(y_1,y_2)\cdot \begin{pmatrix}y_1\\ y_2\end{pmatrix}\right) +y_3^2\cdots , \end{aligned}$$

from which the Poincaré transform is computed as

$$\begin{aligned} \begin{pmatrix} y_3\left( -h_1^{(m)}(1,y_2)\cdot y_2+h_2^{(m)}(1,y_2)\right) +y_3^2\cdot (\cdots )\\ y_3\cdot h_3^{(m)}(1,y_2)+y_3^2\cdot (\cdots ) \end{pmatrix}. \end{aligned}$$

Passing to the full reduction by dividing out the factor \(y_3\) one obtains a vector field that generically (i.e. corresponding to a Zariski open set in the space of coefficients of the \(h_i\), hence also of the coefficients of p) has no stationary points on \(y_3=0\), since \(-h_1^{(m)}(1,y_2)\cdot y_2+h_2^{(m)}(1,y_2)\) and \(h_3^{(m)}(1,y_2)\) generically have no common zeros. Statement (b) is a direct consequence of Proposition 2. To prove statement (c), note first that conditions 1 and 2 both imply that all eigenvalues for the Poincaré transform are nonzero and observe Lemma 5. Now assume that \((\beta _3-1)/(\beta _2-1)=r/s\) with positive integers r and s. Then one obtains

$$\begin{aligned} (r-s)\cdot (-1)+r\cdot \beta _2+(-s)\cdot \beta _3=0. \end{aligned}$$

Therefore the eigenvalues of \(Df_v^*(0)\) are linearly dependent over \(\mathbb Q\), hence condition 1 cannot hold. Moreover, condition 2 also cannot hold because the integer coefficients in the linear combination have different signs. \(\square \)

Now we are ready to determine degree bounds, applying a result of Carnicer.

Theorem 4

Let the polynomial vector field f of degree m be given on \(\mathbb {C}^3\), and assume that

  1. (i)

    the reduction of the homogeneous highest degree term \(f^{(m)}\) admits no stationary points at infinity;

  2. (ii)

    the vector field has property E.

Then the following hold.

  1. (a)

    Every irreducible homogeneous semi–invariant of \(f^{(m)}\) has degree \(\le m+1\).

  2. (b)

    There exist (up to scalar multiples) only finitely many irreducible homogeneous semi–invariants \(\psi _1\ldots ,\psi _\ell \) of \(f^{(m)}\), and in case \(\ell \ge 2\) one has

    $$\begin{aligned} \sum _{1\le i<j\le \ell }\mathrm{deg}\,\psi _i\cdot \mathrm{deg}\,\psi _j\le (m^3-1)/(m-1); \end{aligned}$$

    in particular \(\ell (\ell -1)/2\le (m^3-1)/(m-1)\).

  3. (c)

    The vector field f admits only finitely many irreducible and pairwise relatively prime semi–invariants.

Proof

We first prove statement (a). The condition in Carnicer’s theorem [4] is that no singular point of the corresponding one-form in the projective plane is dicritical. Given a non-nilpotent Jacobian, dicritical singular points are characterized by positive rational eigenvalue ratio. But all the singular points of the one-form correspond to stationary points of the reduction of \(f^{(m)}\) in the affine plane, thanks to condition (i), and at every stationary point the linearization is invertible and the eigenvalue ratio is not a positive rational number, by property E and Lemma 8. Thus Carnicer’s theorem yields the degree bound. (Note that the degree of the foliation is less than or equal to one plus the degree of q.)

For statement (b), let \(\psi _1\) and \(\psi _2\) be irreducible and relatively prime semi-invariants. Then by Bezout’s theorem they intersect in \(\mathrm{deg}\psi _1\cdot \mathrm{deg}\psi _2\) points. Property E ensures that every intersection point is of multiplicity one, and none of these intersection points is contained in the vanishing set of another irreducible semi-invariant (observe Lemma 2 and note that locally there are just two invariant curves passing through each singular point; see e.g. [32], Theorem 2.3). Now add up the contributions of pairs of semi–invariants and use Lemma 5.

The assertion of statement (c) follows readily with Proposition 3 and Theorem 2. \(\square \)

4.2 Quadratic Vector Fields in Dimension Three

We still have to show that vector fields with property E actually exist in dimension 3. Since only the homogeneous highest degree terms are involved in these conditions one may restrict attention to homogeneous polynomial vector fields (and add arbitrary terms of smaller degree). A direct verification for a given homogeneous vector field is problematic, because determining the invariant lines explicitly (which would seem a natural first step in a straightforward approach) is generally not possible. Therefore we take a roundabout approach, explicitly constructing vector fields by prescribing invariant lines.

In this subsection we will show that property E is generically satisfied for degree two vector fields in \(\mathbb C^3\). It suffices to consider homogeneous quadratic maps

$$\begin{aligned} p:\,\mathbb C^3\rightarrow \mathbb C^3; \quad p(x)=\left( \sum _{1\le i\le j\le 3}\beta _{i,j,k}x_ix_j\right) _{1\le k\le 3}, \end{aligned}$$
(6)

and we will identify such a map with the collection of its structure coefficients \((\beta _{i,j,k})\in \mathbb C^{18}\). Following Röhrl [25] we introduce some terminology here which is adapted from the theory of nonassociative algebras; see also Sect. 2 below. An idempotent of p is a \(v\in \mathbb C^n\) such that \(p(v)=v\not =0\); and \(w\not =0\) with \(p(w)=0\) is called a nilpotent. It is known that generically (corresponding to a Zariski–open and dense subset of coefficient space) a homogeneous quadratic vector field possesses no nilpotent (see e.g. Röhrl [25], Theorem 1), and that vector fields without a nilpotent have only finitely many idempotents (otherwise the variety in \(\mathbb P^3(\mathbb {C})\) defined by \(p(x)-\xi \cdot x=0\) would have positive dimension and would intersect the hyperplane given by \(\xi =0\)). By Lemma 5, at most seven idempotents exist. According to Theorem 5 below, generically there exists a basis of idempotents, and one may infer from its proof that generically there are exactly seven idempotents. We use this observation to discuss a special class of homogeneous quadratic vector fields.
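A small sympy sketch (an illustration that does not reproduce the Maple computations referred to in the paper) determines the idempotents of a homogeneous quadratic map by solving \(p(v)=v\); for the toy diagonal map below, all seven idempotents are found, in agreement with Lemma 5(c).

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

def idempotents(p, xs):
    """All v != 0 with p(v) = v, for a polynomial map p: C^3 -> C^3."""
    vs = sp.symbols('v1 v2 v3')
    eqs = [pi.subs(dict(zip(xs, vs))) - vi for pi, vi in zip(p, vs)]
    sols = sp.solve(eqs, list(vs), dict=True)
    return [s for s in sols if any(s.get(v, v) != 0 for v in vs)]   # discard v = 0

p = [x1**2, x2**2, x3**2]                   # toy "diagonal" quadratic map
print(len(idempotents(p, [x1, x2, x3])))    # 7 = (2**3 - 1)/(2 - 1), cf. Lemma 5(c)
```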

Definition 4

We call the homogeneous quadratic vector field p in \(\mathbb C^3\) distinguished if

  1. (i)

    p admits the standard basis elements \(e_1,\,e_2,\,e_3\) as idempotents;

  2. (ii)

    there are three further idempotents \(v_1,\,v_2,\,v_3\) determined by

    $$\begin{aligned} v_i=\gamma _{i,1}e_1+\gamma _{i,2}e_2+\gamma _{i,3}e_3; \quad 1\le i\le 3; \end{aligned}$$
    (7)

    with complex coefficients \(\gamma _{i,j}\);

  3. (iii)

    the matrix

    $$\begin{aligned} A:=\begin{pmatrix}\gamma _{11}\gamma _{12}&\quad \gamma _{12}\gamma _{13}&\quad \gamma _{13}\gamma _{11}\\ \gamma _{21}\gamma _{22}&\quad \gamma _{22}\gamma _{23}&\quad \gamma _{23}\gamma _{21}\\ \gamma _{31}\gamma _{32}&\quad \gamma _{32}\gamma _{33}&\quad \gamma _{33}\gamma _{31}\end{pmatrix} \end{aligned}$$

    is invertible.

From these data p can be reconstructed, since p corresponds to a symmetric bilinear map

$$\begin{aligned} \widehat{p}:\mathbb {C}^3\times \mathbb {C}^3\rightarrow \mathbb {C}^3, \quad (u,v)\mapsto \widehat{p}(u,v):=\frac{1}{2}\left( p(u+v)-p(u)-p(v)\right) \end{aligned}$$

with \(\widehat{p}(u,u)=p(u)\) for all u. Thus p is uniquely determined by the \(\widehat{p}(e_i,e_j)\), and these may be obtained from the relations

$$\begin{aligned} \widehat{p}(\gamma _{i,1}e_1+\gamma _{i,2}e_2+\gamma _{i,3}e_3,\gamma _{i,1}e_1+\gamma _{i,2}e_2+\gamma _{i,3}e_3)=\gamma _{i,1}e_1+\gamma _{i,2}e_2+\gamma _{i,3}e_3 \end{aligned}$$

and bilinearity. The necessary calculations for this and for further steps require a computer algebra system (we use Maple in the present paper). As it turns out, stipulating the idempotents in (7) defines a unique homogeneous quadratic map whenever \(\det A\) does not vanish; see Sect. 3 below. In coordinates one finds an expression

$$\begin{aligned} p(x)=\begin{pmatrix}x_1^2+\theta _1x_1x_2+\theta _2x_2x_3+\theta _3x_3x_1\\ x_2^2+\theta _4x_1x_2+\theta _5x_2x_3+\theta _6x_3x_1\\ x_3^2+\theta _7x_1x_2+\theta _8x_2x_3+\theta _9x_3x_1\end{pmatrix}, \end{aligned}$$
(8)

which corresponds to a nine-dimensional affine subspace Y of coefficient space \(\mathbb C^{18}\), and the \(\theta _k\) are rational functions in the \(\gamma _{ij}\); see Sect. 3 below for the explicit form.
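The reconstruction just described can be illustrated by a short sympy sketch (the function name distinguished_field and the sample values of the \(\gamma _{ij}\) are ad hoc choices, picked only so that \(\det A\ne 0\); the paper's own computations use Maple): it solves the linear system for the mixed values \(\widehat{p}(e_1,e_2)\), \(\widehat{p}(e_2,e_3)\), \(\widehat{p}(e_3,e_1)\) and then verifies the prescribed idempotents.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

def distinguished_field(G):
    """Homogeneous quadratic map with idempotents e_1, e_2, e_3 and the rows of G.

    Since phat(e_j, e_j) = e_j, the idempotent conditions for the rows of G give the
    linear system 2*A*P = G - (G entrywise squared), with A as in Definition 4(iii)
    and the rows of P equal to phat(e1,e2), phat(e2,e3), phat(e3,e1)."""
    G = sp.Matrix(G)
    A = sp.Matrix([[G[i, 0]*G[i, 1], G[i, 1]*G[i, 2], G[i, 2]*G[i, 0]] for i in range(3)])
    B = G - G.multiply_elementwise(G)
    P = A.inv() * B / 2
    diag = sp.Matrix([x1**2, x2**2, x3**2])            # contribution of phat(e_j, e_j) = e_j
    mixed = 2*(x1*x2*P[0, :] + x2*x3*P[1, :] + x3*x1*P[2, :]).T
    return (diag + mixed).applyfunc(sp.expand)

G = [[1, 2, 0], [0, 1, 2], [2, 0, 1]]        # hypothetical sample parameters gamma_ij, det A != 0
p = distinguished_field(G)
print(p.T)                                   # Matrix([[x1**2 - x1*x3, x2**2 - x1*x2, x3**2 - x2*x3]])
for v in G:                                  # each prescribed row is indeed an idempotent
    print(p.subs(dict(zip([x1, x2, x3], v))).T)
```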

Lemma 9

The image of the map

$$\begin{aligned} \Gamma : \mathbb {C}^9\rightarrow Y, \quad \left( \gamma _{ij}\right) \mapsto \left( \theta _1\left( (\gamma _{ij})\right) ,\ldots ,\theta _9\left( (\gamma _{ij})\right) \right) \end{aligned}$$

contains a Zariski–open subset of Y.

Proof

It is sufficient to show that the Jacobian of this map is invertible at some \(\left( \widehat{\gamma }_{ij}\right) \), and this can be verified by direct calculation using Maple; see Sect. 3. \(\square \)

From vector fields with the standard basis elements as idempotents, thus with coefficients in Y, one obviously obtains all vector fields admitting a basis of idempotents by linear coordinate transformations \(T\in GL_3(\mathbb {C})\), sending p to \(T^{-1}\circ p\circ T\). To summarize, we have the first statement of the following proposition; the proof of the second statement (which is computationally involved) will be outlined in Sect. 3.

Proposition 4

  1. (a)

    The set of vector fields obtained from distinguished homogeneous quadratic vector fields by linear coordinate transformations (seen as a subset of coefficient space) contains a Zariski–open set.

  2. (b)

    All distinguished vector fields have precisely seven idempotents, with the coordinates of the seventh idempotent being rational in the \(\gamma _{ij}\).

If v is any idempotent of p then 2 is an eigenvalue of the Jacobian Dp(v), with eigenvector v. Denote the remaining ones by \(\lambda _1\) and \(\lambda _2\), noting that these lie in a degree two extension of the rational function field \(\mathbb {C}\left( (\gamma _{ij})_{i,j}\right) \), and explicit expressions for them can be determined using computer algebra. The eigenvalues for the linearization of the Poincaré transform are then \(-1, \lambda _1-1\) and \(\lambda _2-1\), according to Lemma 5. With this in hand, one can show that quadratic vector fields with property E are indeed generic.

Proposition 5

  1. (a)

    Whenever the \(\gamma _{ij}\) are algebraically independent over the rational numbers \(\mathbb Q\) then the distinguished homogeneous quadratic vector field constructed with these parameters satisfies property E.

  2. (b)

    The homogeneous quadratic vector fields (6) which satisfy property E correspond to the complement of a Lebesgue measure zero subset of parameter space.

Proof

For statement (a) one uses computer algebra, by inspecting the eigenvalues and verifying that they are linearly independent over the rationals. It is sufficient to do so for a specialization, assigning rational values to some of the parameters and leaving only three algebraically independent ones. (Moreover one may work directly with the eigenvalues of the Jacobians at idempotents due to Lemma 5(b); computing the Poincaré transform is not necessary.) See Sect. 3 below.

To prove statement (b), recall that the set of parameters which are algebraically dependent over \(\mathbb Q\) is of Lebesgue measure zero in \(\mathbb C^9\), and this property transfers to the corresponding subset of Y given by the image of the map \(\Gamma \), which is generically locally invertible. Using coordinate transformations as the last step, the claim is proven. \(\square \)

This statement is not yet quite satisfactory, since one knows that there is an open and dense subset of coefficient space such that no vector field (6) with coefficients in this subset admits a semi-invariant at all; see Żoła̧dek [33]. Therefore we next ascertain the existence of vector fields which have property E and admit nontrivial semi–invariants. This is taken care of by the next result.

Proposition 6

In Definition 4, let \(\gamma _{13}=\gamma _{21}=\gamma _{32}=0 \), with the remaining \(\gamma _{ij}\) algebraically independent over the rational numbers \(\mathbb Q\). Then \(x_i,\, 1\le i\le 3\), is a semi–invariant of the distinguished vector field, and this vector field satisfies property E.

Proof

All claims are again proven by inspection of computer algebra calculations; see Sect. 3. \(\square \)

Finally, we exhibit an example which shows the existence of distinguished quadratic vector fields with algebraic coefficients \(\gamma _{ij}\).

Example 1

With the algebraic coefficients

$$\begin{aligned} \gamma _{11}=\sqrt{2},\ \gamma _{12}=\sqrt{3},\ \gamma _{13}=0,\ \gamma _{21}=0,\ \gamma _{22}=\sqrt{3},\ \gamma _{23}=\sqrt{5},\ \gamma _{31}=\sqrt{2},\ \gamma _{32}=0,\ \gamma _{33}=\sqrt{5}, \end{aligned}$$

the distinguished system has components

$$\begin{aligned} \begin{array}{cl} p_1(x)&=x_1^{2}-\dfrac{10\,\sqrt{2}\sqrt{3}-10\,\sqrt{3}}{30}\,x_1x_2-\dfrac{6\,\sqrt{2}\sqrt{5}-6\,\sqrt{5}}{30}\,x_1x_3,\\[2mm] p_2(x)&=x_2^{2}-\dfrac{15\,\sqrt{2}\sqrt{3}-15\,\sqrt{2}}{30}\,x_1x_2-\dfrac{6\,\sqrt{3}\sqrt{5}-6\,\sqrt{5}}{30}\,x_2x_3,\\[2mm] p_3(x)&=x_3^{2}-\dfrac{\left( \sqrt{5}-1\right) \sqrt{2}}{2}\,x_3x_1-\dfrac{\left( \sqrt{5}-1\right) \sqrt{3}}{3}\,x_3x_2. \end{array} \end{aligned}$$
(9)

Note that this system admits the invariant surfaces given by \(x_1=0\), \(x_2=0\), resp. \(x_3=0\). We will show in Sect. 3 that property E is satisfied by exhibiting the eigenvalues of the Jacobians for all idempotents. To verify linear independence of these eigenvalues over \(\mathbb Q\) by inspection, recall that \(\sqrt{2},\,\sqrt{3}\) and \(\sqrt{5}\) generate a field extension of degree 8 over \(\mathbb Q\).
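As a sanity check of Example 1 (an independent verification in sympy, rather than the Maple computations referred to in the text), one can confirm that \(e_1,e_2,e_3\) and the prescribed idempotents \(v_1=(\sqrt{2},\sqrt{3},0)^T\), \(v_2=(0,\sqrt{3},\sqrt{5})^T\), \(v_3=(\sqrt{2},0,\sqrt{5})^T\) indeed satisfy \(p(v)=v\) for the system (9).

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
s2, s3, s5 = sp.sqrt(2), sp.sqrt(3), sp.sqrt(5)

# the components of system (9)
p = sp.Matrix([
    x1**2 - (10*s2*s3 - 10*s3)/30*x1*x2 - (6*s2*s5 - 6*s5)/30*x1*x3,
    x2**2 - (15*s2*s3 - 15*s2)/30*x1*x2 - (6*s3*s5 - 6*s5)/30*x2*x3,
    x3**2 - (s5 - 1)*s2/2*x3*x1 - (s5 - 1)*s3/3*x3*x2,
])

candidates = [[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [s2, s3, 0], [0, s3, s5], [s2, 0, s5]]
for v in candidates:
    diff = p.subs(dict(zip([x1, x2, x3], v))) - sp.Matrix(v)
    print(diff.applyfunc(sp.simplify).T)    # zero row for each v: p(v) = v
```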