1 Introduction and Preliminaries

Equations satisfied by additive functions play an important role not only in the theory of commutative algebra, but also in the theory of functional equations. It is an important and challenging question how special morphisms (such as homomorphisms and derivations) can be characterized among additive mappings in general. In this paper classes of multivariable algebraic equations are introduced whose appropriate solutions are field homomorphisms and derivations.

Concerning all the cases we consider here, the involved additive functions are defined on a field \(\mathbb {F}\subset \mathbb {C}\) and take values in the complex field; therefore we introduce the preliminaries in this setting.

We adopt the standard notations, that is, \(\mathbb {N}\) and \(\mathbb {C}\) denote the set of positive integers and the set of complex numbers, respectively.

Henceforth we assume \(\mathbb {F}\subset \mathbb {C}\) to be a field.

Definition 1

We say that a function \(f:\mathbb {F}\rightarrow \mathbb {C}\) is additive if it fulfills the so-called Cauchy functional equation, that is,

$$\begin{aligned} f(x+y)= f(x)+f(y) \qquad \left( x, y\in \mathbb {F}\right) . \end{aligned}$$

An additive function \(d:\mathbb {F} \rightarrow \mathbb {C}\) is termed to be a derivation (of order 1) if it also fulfills the Leibniz equation, i.e.,

$$\begin{aligned} d(xy)= d(x)y+xd(y) \qquad \left( x, y\in \mathbb {F}\right) . \end{aligned}$$

An additive function \(\varphi :\mathbb {F}\rightarrow \mathbb {C}\) is said to be a homomorphism if it is multiplicative as well, in other words, besides additivity we also have

$$\begin{aligned} \varphi (xy)= \varphi (x)\varphi (y) \qquad \left( x, y\in \mathbb {F}\right) . \end{aligned}$$

If \(\mathbb {F}= \mathbb {C}\) and \(\varphi \) is an isomorphism, then \(\varphi \) is called a complex automorphism.
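These notions can be checked symbolically. The following sketch (assuming Python with sympy; the elements \(x, y\) below are hypothetical members of \(\mathbb {Q}(t)\)) verifies that formal differentiation is additive and fulfills the Leibniz equation:

```python
import sympy as sp

t = sp.symbols('t')
d = lambda u: sp.diff(u, t)          # d/dt acts as a derivation on Q(t)

# hypothetical elements of the field Q(t)
x = (t**2 + 1) / (t - 3)
y = t**3 - sp.Rational(2, 5)

# Cauchy functional equation (additivity) and the Leibniz equation
assert sp.simplify(d(x + y) - (d(x) + d(y))) == 0
assert sp.simplify(d(x*y) - (d(x)*y + x*d(y))) == 0
```

The same pattern works for any rational function field \(\mathbb{Q}(t_1,\dots,t_k)\), which is exactly the setting of Remark 4 below.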

Certain well-known equations are especially important. For instance, the additive solutions of the following equation on a ring R with char\((R)\ne 2\)

$$\begin{aligned} f(x^{2})=2xf(x) \qquad \left( x\in R\right) \end{aligned}$$

are derivations, under some assumptions on the ring R on which the additive mapping f acts. As an extension of results of this type, in [1, 2, 5] the additive solutions of the equations

$$\begin{aligned} \sum _{i=0}^{n}x^{i}f_{n+1-i}(x^{n+1-i})=0 \qquad \left( x\in R\right) \end{aligned}$$

and

$$\begin{aligned} \sum _{i=0}^{n} f(x^{p_{i}})x^{q_{i}}=0 \qquad \left( x\in \mathbb {F}\right) \end{aligned}$$

were described; here R denotes a ring with char\((R)\ge n\), while \(\mathbb {F}\subset \mathbb {C}\) is a field. By a polynomial equation of additive functions we mean an equation of the form

$$\begin{aligned}P(f_1^{r_1}(x^{s_1}), \dots , f_n^{r_n}(x^{s_n}))=0,\end{aligned}$$

where \(P:\mathbb {C}^n \rightarrow \mathbb {C}\) is an n-variable polynomial, \(r_i\), \(s_i\) are positive integers and \(f_i\) denote the unknown additive functions. Without further restrictions (e.g., on the polynomial P or on the parameters \(r_i, s_i\)), the above equation is unfortunately too general for its solutions to be fully determined. Indeed, it is not too hard to specify a polynomial P and parameters \(r_i, s_i\) so that the above equation is satisfied by all additive functions. Therefore, in this series, we will focus on the following classes of equations:

$$\begin{aligned} \begin{array}{rcl} \displaystyle \sum _{i=1}^{n}f_{i}(x^{p_{i}})g_{i}(x^{q_{i}})&=& 0, \\ \displaystyle \sum _{i=1}^{n}f_{i}(x^{p_{i}})g_{i}(x)^{q_{i}}&=& 0, \\ \displaystyle \sum _{i=1}^{n}f_{i}(x)^{p_{i}}g_{i}(x)^{q_{i}}&=& 0, \\ \end{array} \qquad \left( x\in \mathbb {F}\right) \end{aligned}$$

In this paper the most interesting equation of this list, namely

$$\begin{aligned} \sum _{i=1}^{n}f_{i}(x^{p_{i}})g_{i}(x^{q_{i}})= 0, \qquad \left( x\in \mathbb {F}\right) \end{aligned}$$
(1)

will be studied, providing a fruitful theoretical description. (We call this the 'inner parameter case', as the parameters \(p_i, q_i\) are exponents of the variable x, so they act on the domain of the functions \(f_i, g_i\), respectively.) Under some natural conditions, equation (1) is satisfied by compositions of (higher order) derivations and homomorphisms. The purpose of this paper is to establish the converse by providing proper characterizations of the solutions of (1) in the class of additive functions.

1.1 Structure of the Paper

In Sect. 2 the most important notations, terminology and theoretical background are summarized. Concerning the notions of polynomials, generalized polynomials, exponentials and exponential polynomials, we follow the monograph of Székelyhidi [10]. Besides these notions, decomposable functions, introduced by Shulman in [9], will play a key role in the second section. We show that all solutions of equation (1) are decomposable functions. After that a result of Laczkovich will be used, who proved in [7] that on unital commutative topological semigroups, decomposable mappings are generalized exponential polynomials. On fields these are closely related to the higher order derivations introduced in [8, 11]. In some cases we will restrict ourselves to finitely generated subfields of \(\mathbb {F}\), since on such fields higher order derivations are differential operators, a concept that is significantly easier to compute with. This makes it possible to determine the exact upper bound for their orders.

The main results of the paper can be found in the third section. At first some elementary yet important lemmata serve to settle reasonable conditions for the parameters \(p_i, q_i\) such as the Homogenization Principle (see Lemma 9), which ensures that the parameters satisfy

$$\begin{aligned} p_{i}+q_{i}=N \qquad \left( i=1, \ldots , n\right) . \end{aligned}$$

Based on the remarks and the examples of Sect. 3.1, we will provide characterization theorems for Eq. (1) under the following conditions

  1. C(i)

    the positive integers \(p_{1}, \ldots , p_{n}\) satisfy \(p_1<\dots <p_{n}\);

  2. C(ii)

    for all \(i=1, \ldots , n\) we have \(p_{i}+q_{i}=N\);

  3. C(iii)

    for all \(i, j \in \left\{ 1, \ldots , n\right\} \), \(i\ne j\) we have \(p_{i}\ne q_{j}\).

Further, according to Lemma 10, it suffices to determine the solutions of the above functional equations 'up to equivalence'. This is because the functions \(f_{i}\) and \(g_{i}\) fulfill equation (1) if and only if for any automorphism \(\varphi :\mathbb {C}\rightarrow \mathbb {C}\), the functions \(\varphi \circ f_{i}\) and \(\varphi \circ g_{i}\) also fulfill (1), \(i=1,\dots , n\).

Looking at Eq. (1), a technical difficulty is that there is only one independent variable in the equation. At the same time, the involved functions are assumed to be additive. Thus the polarization formula for multi-additive functions can be used in the symmetrization method, which allows us to enlarge the number of independent variables from one to N.

In Lemma 13 it is shown that the functions \(f_{i}\) and \(g_{i}\) satisfying the system of equations given by this method are decomposable functions, and thus generalized exponential polynomials on the group \(\mathbb {F}^{\times }\). Theorem 14 says that for any \(i\in \{1, \ldots , n\}\), in the variety of the functions \(f_{i}\) and \(g_{i}\) there is exactly one exponential \(m_{i}\). Focusing on the irreducible solutions, this means that all \(f_i\) (resp. \(g_i\)) are of the form \(P_i\cdot m\) (resp. \(Q_i\cdot m\)), where \(P_i\) (and \(Q_i\)) are (generalized) polynomials and m is a unique exponential function.

Translating the problem to higher order derivations, a natural basis consisting of compositions of derivations of order 1 can be found by using moment generating functions. Applying the arithmetic of derivations we get a sharp upper bound, namely \(n-1\), for the order of the derivation solutions under some conditions, see Theorem 20. We close this section with the study of some important special cases. In Conjecture 21 and Open Problem 1 we pose problems on the exact order of the (higher order) derivation solutions in different settings.

2 Notation, Terminology and Theoretical Background

2.1 Polynomials and Generalized Polynomials

Definition 2

Let G and S be commutative semigroups (written additively), \(n\in \mathbb {N}\) and let \(A:G^{n}\rightarrow S\) be a function. We say that A is n-additive if it is a homomorphism of G into S in each variable. If \(n=1\) or \(n=2\) then the function A is simply termed to be additive or biadditive, respectively.

The diagonalization or trace of an n-additive function \(A:G^{n}\rightarrow S\) is defined as

$$\begin{aligned} A^{*}(x)=A\left( x, \ldots , x\right) \qquad \left( x\in G\right) . \end{aligned}$$

As a direct consequence of the definition each n-additive function \(A:G^{n}\rightarrow S\) satisfies

$$\begin{aligned} A(x_{1}, \ldots , x_{i-1}, kx_{i}, x_{i+1}, \ldots , x_n)= kA(x_{1}, \ldots , x_{i-1}, x_{i}, x_{i+1}, \ldots , x_{n})\\ \left( x_{1}, \ldots , x_{n}\in G\right) \end{aligned}$$

for all \(i=1, \ldots , n\), where \(k\in \mathbb {N}\) is arbitrary. The same identity holds for any \(k\in \mathbb {Z}\) provided that G and S are groups, and for \(k\in \mathbb {Q}\), provided that G and S are linear spaces over the rationals. For the diagonalization of A we have

$$\begin{aligned} A^{*}(kx)=k^{n}A^{*}(x) \qquad \left( x\in G\right) . \end{aligned}$$

The above notion can also be extended for the case \(n=0\) by letting \(G^{0}=G\) and by calling 0-additive any constant function from G to S.

One of the most important theoretical results concerning multiadditive functions is the so-called Polarization formula, which briefly expresses that every symmetric n-additive function is uniquely determined by its diagonalization, under some conditions on the domain as well as on the range. Suppose that G is a commutative semigroup and S is a commutative group. The action of the difference operator \(\Delta \) on a function \(f:G\rightarrow S\) is defined by the formula

$$\begin{aligned}\Delta _y f(x)=f(x+y)-f(x) \qquad \left( x, y\in G\right) . \end{aligned}$$

Note that the addition in the argument of the function is the operation of the semigroup G and the subtraction means the inverse of the operation of the group S.

Theorem 1

(Polarization formula). Suppose that G is a commutative semigroup, S is a commutative group, \(n\in \mathbb {N}\). If \(A:G^{n}\rightarrow S\) is a symmetric, n-additive function, then for all \(x, y_{1}, \ldots , y_{m}\in G\) we have

$$\begin{aligned} \Delta _{y_{1}, \ldots , y_{m}}A^{*}(x)= \left\{ \begin{array}{lll} 0 &{} \text { if} &{} m>n \\ n!A(y_{1}, \ldots , y_{m}) &{} \text { if}&{} m=n. \end{array} \right. \end{aligned}$$

Corollary 2

Suppose that G is a commutative semigroup, S is a commutative group, \(n\in \mathbb {N}\). If \(A:G^{n}\rightarrow S\) is a symmetric, n-additive function, then for all \(x, y\in G\)

$$\begin{aligned} \Delta ^{n}_{y}A^{*}(x)=n!A^{*}(y). \end{aligned}$$
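Corollary 2 can be illustrated numerically. In the following sketch (assuming Python; the choice \(A(x_1,x_2,x_3)=x_1x_2x_3\) is an illustrative symmetric 3-additive function on \(\mathbb{Q}\), so \(A^{*}(x)=x^{3}\) and \(n=3\)) the n-fold difference operator is applied to the trace:

```python
from fractions import Fraction
from math import factorial

def delta(f, y):
    """The difference operator: (delta_y f)(x) = f(x + y) - f(x)."""
    return lambda x: f(x + y) - f(x)

A_star = lambda x: x**3      # trace of the symmetric 3-additive A(x1,x2,x3) = x1*x2*x3
n = 3
y = Fraction(5, 7)

g = A_star
for _ in range(n):           # apply delta_y three times
    g = delta(g, y)

x = Fraction(2, 3)           # by Corollary 2 the value no longer depends on x
assert g(x) == factorial(n) * A_star(y)
```

Exact rational arithmetic is used so the identity holds on the nose, with no floating-point tolerance.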

Lemma 3

Let \(n\in \mathbb {N}\) and suppose that the multiplication by n! is surjective in the commutative semigroup G or injective in the commutative group S. Then for any symmetric, n-additive function \(A:G^{n}\rightarrow S\), \(A^{*}\equiv 0\) implies that A is identically zero, as well.

Definition 3

Let G and S be commutative semigroups, a function \(p:G\rightarrow S\) is called a generalized polynomial from G to S, if it has a representation as the sum of diagonalizations of symmetric multi-additive functions from G to S. In other words, a function \(p:G\rightarrow S\) is a generalized polynomial if and only if, it has a representation

$$\begin{aligned} p= \sum _{k=0}^{n}A^{*}_{k}, \end{aligned}$$

where n is a nonnegative integer and \(A_{k}:G^{k}\rightarrow S\) is a symmetric, k-additive function for each \(k=0, 1, \ldots , n\). In this case we also say that p is a generalized polynomial of degree at most n.

Let n be a nonnegative integer. Functions \(p_{n}:G\rightarrow S\) of the form

$$\begin{aligned} p_{n}= A_{n}^{*}, \end{aligned}$$

where \(A_{n}:G^{n}\rightarrow S\) is a symmetric and n-additive mapping, are the so-called generalized monomials of degree n.

In this subsection \((G, \cdot )\) is assumed to be a commutative group (written multiplicatively).

Definition 4

Polynomials are elements of the algebra generated by additive functions over G. More exactly, a mapping \(f:G\rightarrow \mathbb {C}\) is called a polynomial if there is a positive integer n, there exists a (classical) complex polynomial \(P:\mathbb {C}^{n}\rightarrow \mathbb {C}\) in n variables and there are additive functions \(a_{k}:G\rightarrow \mathbb {C}\; (k=1, \ldots , n)\) such that

$$\begin{aligned} f(x)= P(a_{1}(x), \ldots , a_{n}(x)) \qquad \left( x\in G\right) . \end{aligned}$$

Remark 1

We recall that the elements of \(\mathbb {N}^{n}\) for any positive integer n are called (n-dimensional) multi-indices. Addition, multiplication and inequalities between multi-indices of the same dimension are defined component-wise. Further, we define \(x^{\alpha }\) for any n-dimensional multi-index \(\alpha \) and for any \(x=(x_{1}, \ldots , x_{n})\) in \(\mathbb {C}^{n}\) by

$$\begin{aligned} x^{\alpha }=\prod _{i=1}^{n}x_{i}^{\alpha _{i}} \end{aligned}$$

where we always adopt the convention \(0^0=0\). We also use the notation \(|\alpha |= \alpha _1+\cdots +\alpha _n\). With these notations any polynomial of degree at most N on the commutative group G has the form

$$\begin{aligned} p(x)= \sum _{|\alpha |\le N}c_{\alpha }a(x)^{\alpha } \qquad \left( x\in G\right) , \end{aligned}$$

where \(c_{\alpha }\in \mathbb {C}\) and \(a=(a_1, \dots , a_n) :G\rightarrow \mathbb {C}^{n}\) is an additive function. Furthermore, the homogeneous term of degree k of p is

$$\begin{aligned} \sum _{|\alpha |=k}c_{\alpha }a(x)^{\alpha } . \end{aligned}$$
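The multi-index power with the convention \(0^0=0\) can be sketched as a small helper (assuming Python; the name `mpow` is hypothetical, introduced only for illustration):

```python
from math import prod

def mpow(x, alpha):
    """x^alpha = prod_i x_i^{alpha_i}, with the paper's convention 0^0 = 0."""
    return prod(0 if (xi == 0 and ai == 0) else xi**ai
                for xi, ai in zip(x, alpha))

assert mpow((2, 3), (1, 2)) == 2 * 3**2     # = 18
assert mpow((0, 5), (0, 1)) == 0            # 0^0 = 0 makes the whole product vanish
```

The convention matters precisely when some coordinate of the additive function \(a\) vanishes at x while the corresponding exponent is zero.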

Lemma 4

(Lemma 2.7 of [10]). Let G be a commutative group, n be a positive integer and let

$$\begin{aligned} a=\left( a_{1}, \ldots , a_{n}\right) , \end{aligned}$$

where \(a_{1}, \ldots , a_{n}\) are linearly independent complex valued additive functions defined on G. Then the monomials \(\left\{ a^{\alpha }\right\} \) for different multi-indices are linearly independent.

Definition 5

A function \(m:G\rightarrow \mathbb {C}\) is called an exponential function if it satisfies

$$\begin{aligned} m(xy)=m(x)m(y) \qquad \left( x,y\in G\right) . \end{aligned}$$

Furthermore, by a (generalized) exponential polynomial we mean a linear combination of functions of the form \(p \cdot m\), where p is a (generalized) polynomial and m is an exponential function.

The following lemma shows in which sense generalized exponential polynomials built on distinct exponentials are linearly independent. Although it can be stated in a more general way (see [10]), we adapt it to our situation, where the functions are complex valued.

Lemma 5

(Lemma 4.3 of [10]). Let G be a commutative group, n a positive integer, \(m_{1}, \ldots , m_{n} :G\rightarrow \mathbb {C}\) be distinct nonzero exponentials and \(p_{1}, \ldots , p_{n} :G\rightarrow \mathbb {C}\) be generalized polynomials. If \(\displaystyle \sum \nolimits _{i=1}^n p_i\cdot m_i\) is identically zero, then for all \(i=1, \ldots , n\) the generalized polynomial \(p_i\) is identically zero.

Additionally, we will need the analogous statement for polynomial expressions of generalized exponential polynomials which was proved in [5].

Theorem 6

Let \(\mathbb {K}\) be a field of characteristic 0 and \(k, l, N\) be positive integers such that \(k,l\le N\). Let \(m_1, \dots , m_k:\mathbb {K}^{\times }\rightarrow \mathbb {C}\) be distinct exponential functions that are additive on \(\mathbb {K}\), let \(a_1, \dots , a_l:\mathbb {K}^{\times }\rightarrow \mathbb {C}\) be additive functions that are linearly independent over \(\mathbb {C}\) and for all \(|s|\le N\) let \( P_s:\mathbb {C}^l\rightarrow \mathbb {C}\) be classical complex polynomials of l variables. If

$$\begin{aligned} \sum _{|s|\le N } P_{s}(a_1, \dots , a_l) m_1^{s_1}\cdots m_k^{s_k}=0 \end{aligned}$$

then for all \(|s|\le N\), the polynomials \(P_s\) vanish identically.

Definition 6

Let G be a commutative group and \(V\subseteq \mathbb {C}^G\) a set of functions. We say that V is translation invariant if for every \(f\in V\) and \(g\in G\) we also have \(\tau _{g}f\in V\), where

$$\begin{aligned} \tau _{g}f(h)= f(hg) \qquad \left( h\in G\right) . \end{aligned}$$

In view of Theorem 10.1 of Székelyhidi [10], any finite dimensional translation invariant linear space of complex valued functions on a commutative group consists of exponential polynomials. This implies that if G is a commutative group, then any function \(f:G\rightarrow \mathbb {C}\), satisfying the functional equation

$$\begin{aligned} f(xy)= \sum _{i=1}^{n}g_{i}(x)h_{i}(y) \qquad \left( x, y\in G\right) \end{aligned}$$

for some positive integer n and functions \(g_{i}, h_{i}:G\rightarrow \mathbb {C}\) (\(i=1, \ldots , n\)), is an exponential polynomial of degree at most n.

This sheds light on the connection between generalized polynomials and polynomials. It is easy to see that each polynomial, that is, any function of the form

$$\begin{aligned} x\longmapsto P(a_{1}(x), \ldots , a_{n}(x)), \end{aligned}$$

where n is a positive integer, \(P:\mathbb {C}^{n}\rightarrow \mathbb {C}\) is a (classical) complex polynomial in n variables and \(a_{k}:G\rightarrow \mathbb {C}\; (k=1, \ldots , n)\) are additive functions, is a generalized polynomial. The converse however is in general not true. A complex-valued generalized polynomial p defined on a commutative group G is a polynomial if and only if its variety (the linear space spanned by its translates) is of finite dimension.

Henceforth, not only the notion of (exponential) polynomials, but also that of decomposable functions will be used. The basics of this concept are due to Shulman [9], besides this we heavily rely on the work of Laczkovich [7].

Definition 7

Let G be a group and \(n\in \mathbb {N}, n\ge 2\). A function \(F:G^{n}\rightarrow \mathbb {C}\) is said to be decomposable if it can be written as a finite sum of products \(F_{1}\cdots F_{k}\), where all \(F_{i}\) depend on disjoint sets of variables.

Remark 2

Without loss of generality we can suppose that \(k=2\) in the above definition, that is, decomposable functions are those mappings that can be written in the form

$$\begin{aligned} F(x_{1}, \ldots , x_{n})= \sum _{E}\sum _{j}A_{j}^{E}B_{j}^{E} \end{aligned}$$

where E runs through all non-void proper subsets of \(\left\{ 1, \ldots , n\right\} \) and for each E and j the function \(A_{j}^{E}\) depends only on variables \(x_{i}\) with \(i\in E\), while \(B_{j}^{E}\) depends only on the variables \(x_{i}\) with \(i\notin E\).

Theorem 7

Let G be a commutative topological semigroup with unit. A continuous function \(f:G\rightarrow \mathbb {C}\) is a generalized exponential polynomial if and only if there is an integer \(n\ge 2\) such that the mapping

$$\begin{aligned} G^{n} \ni (x_{1}, \ldots , x_{n}) \longmapsto f(x_1+\cdots + x_n) \end{aligned}$$

is decomposable.

The notion of derivations can be extended in several ways. We will employ the concept of higher order derivations according to Reich [8] and Unger and Reich [11]. For further results on characterization theorems for higher order derivations consult e.g. [1, 2, 3, 5].

Definition 8

Let \(\mathbb {F}\subset \mathbb {C}\) be a field. The identically zero map is the only derivation of order zero. For each \(n\in \mathbb {N}\), an additive mapping \(f:\mathbb {F}\rightarrow \mathbb {C}\) is termed to be a derivation of order n, if there exists \(B:\mathbb {F}\times \mathbb {F}\rightarrow \mathbb {C}\) such that B is a bi-derivation of order \(n-1\) (that is, B is a derivation of order \(n-1\) in each variable) and

$$\begin{aligned} f(xy)-xf(y)-f(x)y=B(x, y) \qquad \left( x, y\in \mathbb {F}\right) . \end{aligned}$$

The set of derivations of order n of the field \(\mathbb {F}\) will be denoted by \({\mathscr {D}}_{n}(\mathbb {F})\).

Remark 3

Since \({\mathscr {D}}_{0}(\mathbb {F})=\left\{ 0\right\} \), the only bi-derivation of order zero is the identically zero function, thus \(f\in {\mathscr {D}}_{1}(\mathbb {F})\) if and only if

$$\begin{aligned} f(xy)=xf(y)+f(x)y \qquad \left( x, y\in \mathbb {F}\right) , \end{aligned}$$

that is, the notions of first order derivations and derivations coincide. On the other hand, for any \(n\in \mathbb {N}\) the set \({\mathscr {D}}_{n}(\mathbb {F})\setminus {\mathscr {D}}_{n-1}(\mathbb {F})\) is nonempty, because \(d_{1}\circ \cdots \circ d_{n}\in {\mathscr {D}}_{n}(\mathbb {F})\), but \(d_{1}\circ \cdots \circ d_{n}\notin {\mathscr {D}}_{n-1}(\mathbb {F})\), where \(d_{1}, \ldots , d_{n}\in {\mathscr {D}}_{1}(\mathbb {F})\) are non-identically zero derivations.
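This can be illustrated symbolically (a sketch, assuming Python with sympy; \(x, y\) below are hypothetical elements of \(\mathbb{Q}(t_1,t_2)\)). For \(f = d_1\circ d_2\) with \(d_i = \partial/\partial t_i\), the defect \(f(xy)-xf(y)-f(x)y\) equals \(d_1(x)d_2(y)+d_2(x)d_1(y)\), which is a derivation of order 1 in each variable, so f is a derivation of order 2:

```python
import sympy as sp

t1, t2 = sp.symbols('t1 t2')
d1 = lambda u: sp.diff(u, t1)
d2 = lambda u: sp.diff(u, t2)
f  = lambda u: d1(d2(u))                 # composition of two order-1 derivations

# hypothetical elements of Q(t1, t2)
x = (t1**2 + 1) / (t2 + 3)
y = t1 * t2 - sp.Rational(1, 2)

# the defect B(x, y) from Definition 8 for f = d1 o d2
B = f(x*y) - x*f(y) - f(x)*y
assert sp.simplify(B - (d1(x)*d2(y) + d2(x)*d1(y))) == 0
```

The identity behind the check is obtained by applying the Leibniz rule twice to \(d_1(d_2(xy))\) and cancelling the terms \(x\,d_1(d_2(y))\) and \(d_1(d_2(x))\,y\).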

For our future purposes the notion of differential operators will also be important, see [6].

Definition 9

Let \(\mathbb {F}\subset \mathbb {C}\) be a field. We say that the map \(D :\mathbb {F}\rightarrow \mathbb {C}\) is a differential operator of order at most n if D is the linear combination, with coefficients from \(\mathbb {F}\), of finitely many maps of the form \(d_1 \circ \cdots \circ d_k\), where \(d_1, \ldots , d_k\) are derivations on \(\mathbb {F}\) and \(k\le n\). If \(k = 0\) then we interpret \(d_1\circ \cdots \circ d_k\) as the identity function. We denote by \({\mathscr {O}}_n(\mathbb {F})\) the set of differential operators of order at most n defined on \(\mathbb {F}\). We say that the order of a differential operator D is n if \(D \in {\mathscr {O}}_{n}(\mathbb {F})\setminus {\mathscr {O}}_{n-1}(\mathbb {F})\) (where \({\mathscr {O}}_{-1}(\mathbb {F})= \emptyset \), by definition).

Remark 4

The term differential operator is justified by the following fact. Let \(\mathbb {K} =\mathbb {Q}(t_1, \ldots , t_k)\), where \(t_1, \ldots , t_k\) are algebraically independent over \(\mathbb {Q}\). Then \(\mathbb {K}\) is the field of all rational functions of \(t_1, \ldots , t_k\) with rational coefficients. It is clear that

$$\begin{aligned} d_i = \frac{\partial }{ \partial t_{i}} \end{aligned}$$

is a derivation on \(\mathbb {K}\) for every \(i = 1, \ldots , k\). Therefore, every operator of the form

$$\begin{aligned} D= \sum _{i_{1}+\cdots +i_{k}\le n}c_{i_{1}, \ldots , i_{k}}\cdot \frac{\partial ^{i_{1}+\cdots +i_{k}}}{\partial t_{1}^{i_{1}}\cdots \partial t_{k}^{i_{k}}}, \end{aligned}$$

where the coefficients \(c_{i_{1}, \ldots , i_{k}}\) belong to \(\mathbb {K}\), is a differential operator of order at most n; conversely, if D is a differential operator of order at most n on the field \(\mathbb {K} = \mathbb {Q}(t_1, \ldots , t_k)\), then D is of the above form.

The main result of [6] is Theorem 1.1, which reads in our setting as follows.

Theorem 8

Let \(\mathbb {F}\subset \mathbb {C}\) be a field and let n be a positive integer. Then, for every function \(D :\mathbb {F}\rightarrow \mathbb {C}\), the following are equivalent.

  1. (i)

    \(D\in {\mathscr {D}}_{n}(\mathbb {F})\)

  2. (ii)

    \(D\in \textrm{cl}\left( {\mathscr {O}}_{n}(\mathbb {F})\right) \)

  3. (iii)

    D is additive on \(\mathbb {F}\), \(D(1) = 0\), and D/j, as a map from the group \(\mathbb {F}^{\times }\) to \(\mathbb {C}\), is a generalized polynomial of degree at most n. Here j stands for the identity map defined on \(\mathbb {F}\).

3 Results

3.1 Elementary Observations: Reduction of the Problem

This part begins with some elementary, yet fundamental observations. As the following lemmata show, the original problem can be reduced to a simpler equation.

Lemma 9

(Homogenization). Let n be a positive integer, \(\mathbb {F}\subset \mathbb {C}\) be a field and \(p_{1}, \ldots , p_{n}, q_{1}, \ldots , q_{n}\) be fixed positive integers. Assume that the additive functions \(f_{1}, \ldots , f_{n}, g_{1}, \ldots , g_{n}:\mathbb {F}\rightarrow \mathbb {C}\) satisfy functional equation (1), that is,

$$\begin{aligned} \sum _{i=1}^{n}f_{i}(x^{p_{i}})g_{i}(x^{q_{i}})= 0 \end{aligned}$$

for each \(x\in \mathbb {F}\). If the set \(\left\{ p_{1}, \ldots , p_{n}\right\} \) has a partition \({\mathcal {P}}_{1}, \ldots , {\mathcal {P}}_{k}\) with the property

$$\begin{aligned} \text {if } p_{\alpha }, p_{\beta } \in {\mathcal {P}}_{j} \text { for a certain index} j, \text { then } p_{\alpha }+q_{\alpha }= p_{\beta }+q_{\beta }, \end{aligned}$$

then the system of equations

$$\begin{aligned} \sum _{p_{\alpha }\in {\mathcal {P}}_{j}} f_{\alpha }(x^{p_{\alpha }})g_{\alpha }(x^{q_{\alpha }})=0 \qquad \left( x\in \mathbb {F}, j=1, \ldots , k\right) \end{aligned}$$

is satisfied.

Proof

Let n be a positive integer, \(\mathbb {F}\subset \mathbb {C}\) be a field and \(p_{1}, \ldots , p_{n}, q_{1}, \ldots , q_{n}\) be fixed positive integers. Assume that the additive functions \(f_{1}, \ldots , f_{n}, g_{1}, \ldots , g_{n}:\mathbb {F}\rightarrow \mathbb {C}\) satisfy functional equation (1) for each \(x\in \mathbb {F}\). Assume further that the set \(\left\{ p_{1}, \ldots , p_{n}\right\} \) has a partition \({\mathcal {P}}_{1}, \ldots , {\mathcal {P}}_{k}\) with the property

$$\begin{aligned} \text {if } p_{\alpha }, p_{\beta } \in {\mathcal {P}}_{j} \text { for a certain index} j, \text { then } p_{\alpha }+q_{\alpha }= p_{\beta }+q_{\beta }. \end{aligned}$$

Observe that for all \(i=1, \ldots , n\), the mapping

$$\begin{aligned} \mathbb {F}\ni x\longmapsto f_{i}(x^{p_{i}})g_{i}(x^{q_{i}}) \end{aligned}$$

is a generalized monomial of degree \(p_{i}+q_{i}\). Indeed, it is the diagonalization of the symmetric \((p_{i}+q_{i})\)-additive mapping

$$\begin{aligned} \mathbb {F}^{p_{i}+q_{i}} \ni (x_{1}, \ldots , x_{p_{i}+q_{i}}) \longmapsto \frac{1}{(p_{i}+q_{i})!}\sum _{\sigma \in {\mathscr {S}}_{p_{i}+q_{i}}} f_{i}(x_{\sigma (1)}\cdots x_{\sigma (p_{i})})g_{i}(x_{\sigma (p_{i}+1)}\cdots x_{\sigma (p_{i}+q_{i})}). \end{aligned}$$

Since \(\mathbb {F}\subset \mathbb {C}\), we necessarily have \(\mathbb {Q}\subset \mathbb {F}\). Let now \(r\in \mathbb {Q}\) be arbitrary and substitute rx in place of x in Eq. (1) to get

$$\begin{aligned} \sum _{i=1}^{n}f_{i}((rx)^{p_{i}})g_{i}((rx)^{q_{i}})= 0 \qquad \left( r\in \mathbb {Q}, x\in \mathbb {F}\right) . \end{aligned}$$

Using the \(\mathbb {Q}\)-homogeneity of the additive functions \(f_{1}, \ldots , f_{n}\) and \(g_{1}, \ldots , g_{n}\), we deduce

$$\begin{aligned} 0&= \sum _{i=1}^{n}f_{i}((rx)^{p_{i}})g_{i}((rx)^{q_{i}})= \sum _{i=1}^{n}f_{i}(r^{p_{i}}x^{p_{i}})g_{i}(r^{q_{i}}x^{q_{i}}) \\&= \sum _{i=1}^{n}r^{p_{i}+q_{i}}f_{i}(x^{p_{i}})g_{i}(x^{q_{i}}) = \sum _{j=1}^{k} \sum _{p_{\alpha }\in {\mathcal {P}}_{j}}r^{p_{\alpha }+q_{\alpha }} f_{\alpha }(x^{p_{\alpha }})g_{\alpha }(x^{q_{\alpha }}) \\&\quad \left( r\in \mathbb Q, x\in \mathbb {F}\right) . \end{aligned}$$

Note that the right hand side of this equation is a (classical) polynomial in r which is identically zero. Thus all of its coefficients should be (identically) zero, yielding that the system of equations

$$\begin{aligned} \sum _{p_{\alpha }\in {\mathcal {P}}_{j}} f_{\alpha }(x^{p_{\alpha }})g_{\alpha }(x^{q_{\alpha }})=0 \qquad \left( x\in \mathbb {F}, j=1, \ldots , k\right) \end{aligned}$$

is fulfilled. \(\square \)

Remark 5

The above lemma guarantees that ab initio

$$\begin{aligned} p_{i}+q_{i}=N \qquad \left( i=1, \ldots , n\right) \end{aligned}$$

can be assumed. Otherwise, after applying the above homogenization, we obtain a system of functional equations in which this condition is already fulfilled. For instance, due to the above lemma, if the additive functions \(f_{1}, \ldots , f_{5}:\mathbb {F}\rightarrow \mathbb {C}\) and \(g_{1}, \ldots , g_{5}:\mathbb {F}\rightarrow \mathbb {C}\) satisfy the equation

$$\begin{aligned}&f_{1}(x^{24})g_{1}(x^{5})+f_{2}(x^{20})g_{2}(x^{9})+f_{3}(x^{19})g_{3}(x^{10}) \\&\quad + f_{4}(x^{13})g_{4}(x^{7})+ f_{5}(x^{12})g_{5}(x^{8}) = 0 \qquad \left( x\in \mathbb {F}\right) \end{aligned}$$

then the equations

$$\begin{aligned} f_{1}(x^{24})g_{1}(x^{5})+f_{2}(x^{20})g_{2}(x^{9})+f_{3}(x^{19})g_{3}(x^{10})=0 \qquad \left( x\in \mathbb {F}\right) \end{aligned}$$

and

$$\begin{aligned} f_{4}(x^{13})g_{4}(x^{7})+ f_{5}(x^{12})g_{5}(x^{8})=0 \qquad \left( x\in \mathbb {F}\right) \end{aligned}$$

are also fulfilled (separately).
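The degree-splitting argument behind this can be mimicked symbolically (a sketch, assuming Python with sympy; the symbols \(c_1,\dots,c_5\) stand for the hypothetical values \(f_i(x^{p_i})g_i(x^{q_i})\), which scale as \(r^{p_i+q_i}\) under the substitution \(x\mapsto rx\)):

```python
import sympy as sp

r = sp.symbols('r')
c = sp.symbols('c1:6')               # c_i plays the role of f_i(x^{p_i}) g_i(x^{q_i})
degrees = [29, 29, 29, 20, 20]       # the sums p_i + q_i in the example above

P = sum(ci * r**d for ci, d in zip(c, degrees))

# P vanishes for every rational r iff each coefficient vanishes, and the
# coefficients are exactly the partial sums over the partition classes:
coeffs = sp.Poly(P, r).as_dict()
assert coeffs[(29,)] == c[0] + c[1] + c[2]
assert coeffs[(20,)] == c[3] + c[4]
```

This is precisely why the single equation splits into one equation per class of the partition \({\mathcal {P}}_{1}, \ldots , {\mathcal {P}}_{k}\).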

Remark 6

At first glance, the assumption that \(p_{1}, \ldots , p_{n}\) are different seems a reasonable and sufficient supposition. Clearly, if the parameters are not necessarily different, then we cannot expect anything special about the form of the involved additive functions. Indeed, let \(L\subset \mathbb {C}^{n}\) be a linear subspace and let \(f_{1}, \ldots , f_{n}:\mathbb {F}\rightarrow \mathbb {C}\) and \(g_{1}, \ldots , g_{n}:\mathbb {F}\rightarrow \mathbb {C}\) be additive functions such that \(\textrm{rng}(f)\subset L\) and \(\textrm{rng}(g)\subset L^{\perp }\), where

$$\begin{aligned} f(x)= \left( f_{1}(x), \ldots , f_{n}(x)\right) \quad \text {and} \quad g(x)= \left( g_{1}(x), \ldots , g_{n}(x)\right) \quad \left( x\in \mathbb {F}\right) . \end{aligned}$$

In this case

$$\begin{aligned} \sum _{i=1}^{n}f_{i}(x)g_{i}(x)= \langle f(x), g(x) \rangle = 0 \qquad \left( x\in \mathbb {F}\right) . \end{aligned}$$

This shows the necessity of the above assumption. Unfortunately, its sufficiency fails to hold. To see this, let p and q be positive integers, let \(f:\mathbb {F}\rightarrow \mathbb {C}\) be an arbitrary additive function, and define the complex-valued functions \(f_{1}, g_{1}, f_{2}, g_{2}\) on \(\mathbb {F}\) by

$$\begin{aligned} f_{1}(x)= f(x), \quad g_{1}(x)= f(x), \quad f_{2}(x)= if(x), \quad g_{2}(x)= if(x) \qquad \left( x\in \mathbb {F}\right) . \end{aligned}$$

An immediate computation shows that we have

$$\begin{aligned} f_{1}(x^{p})g_{1}(x^{q})+f_{2}(x^{p})g_{2}(x^{q})= 0 \qquad \left( x\in \mathbb {F}\right) . \end{aligned}$$
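A quick numerical check of this counterexample (a sketch, assuming Python; here \(f\) is taken to be the identity, an additive function on \(\mathbb{Q}\), and \(p=2\), \(q=3\) are arbitrary hypothetical choices):

```python
p, q = 2, 3
f = lambda x: complex(x)              # the identity map is additive

f1 = g1 = f
f2 = g2 = lambda x: 1j * f(x)

# f1(x^p) g1(x^q) + f2(x^p) g2(x^q) = x^{p+q} + i^2 * x^{p+q} = 0
for x in [2.0, -1.5, 0.25, 7.0]:
    assert abs(f1(x**p) * g1(x**q) + f2(x**p) * g2(x**q)) < 1e-9
```

The cancellation comes solely from \(i^2=-1\), independently of the parameters p and q, which is why distinctness of the \(p_i\) alone cannot force a rigid form of the solutions.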

In view of the above remarks, from now on, the following assumptions are adopted.

  1. C(i)

    the positive integers \(p_{1}, \ldots , p_{n}\) are arranged in a strictly increasing order, i.e., \(p_1<\dots <p_{n}\);

  2. C(ii)

    for all \(i=1, \ldots , n\) we have \(p_{i}+q_{i}=N\);

  3. C(iii)

    for all \(i, j \in \left\{ 1, \ldots , n\right\} \), \(i\ne j\) we have \(p_{i}\ne q_{j}\).

Remark 7

Define the relation \(\sim \) on \(\mathbb {C}^{\mathbb {F}}\) by \( f\sim g \) if and only if there exists an automorphism \(\varphi :\mathbb {C}\rightarrow \mathbb {C}\) such that \(\varphi \circ f=g\). Obviously \(\sim \) is an equivalence relation on \(\mathbb {C}^{\mathbb {F}}\) that induces a partition on \(\mathbb {C}^{\mathbb {F}}\).

Lemma 10

(Equivalence). Let n be a positive integer, \(\mathbb {F}\subset \mathbb {C}\) be a field and \(p_{1}, \ldots , p_{n}, q_{1}, \ldots , q_{n}\) be fixed positive integers fulfilling the conditions C(i)–C(iii) of Remark 6. Assume that the additive functions \(f_{1}, \ldots , f_{n},\) \(g_{1}, \ldots , g_{n}:\mathbb {F}\rightarrow \mathbb {C}\) satisfy functional equation (1). Then for an arbitrary automorphism \(\varphi :\mathbb {C}\rightarrow \mathbb {C}\) the functions \(\varphi \circ f_{1}, \ldots , \varphi \circ f_{n}, \varphi \circ g_{1}, \ldots , \varphi \circ g_{n}\) also fulfill equation (1).

3.2 Structure of Solutions

We can always restrict ourselves to the case when all the involved functions are not identically zero. Otherwise, the number of terms appearing in Eq. (1) can be reduced.

Lemma 11

(Symmetrization). Let k and n be positive integers, \(\mathbb {F}\subset \mathbb {C}\) be a field and \(m_{1}, \ldots , m_{n}:\mathbb {F}\rightarrow \mathbb {C}\) be monomials of degree k. If

$$\begin{aligned} \sum _{i=1}^{n}m_{i}(x)=0 \end{aligned}$$

holds for all \(x\in \mathbb {F}\), then

$$\begin{aligned} \sum _{i=1}^{n}M_{i}(x_{1}, \ldots , x_{k})=0 \end{aligned}$$

is fulfilled for all \(x_{1}, \ldots , x_{k}\in \mathbb {F}\), where for all \(i=1, \ldots , n\), the mapping \(M_{i}:\mathbb {F}^{k}\rightarrow \mathbb {C}\) is the uniquely determined symmetric, k-additive function such that

$$\begin{aligned} M_{i}(x, \ldots , x)= m_{i}(x) \qquad \left( x\in \mathbb {F}\right) . \end{aligned}$$

Proof

Let k and n be positive integers, \(\mathbb {F}\subset \mathbb {C}\) be a field and \(m_{1}, \ldots , m_{n}:\mathbb {F}\rightarrow \mathbb {C}\) be monomials of degree k and assume that

$$\begin{aligned} \sum _{i=1}^{n}m_{i}(x)=0 \end{aligned}$$

holds for all \(x\in \mathbb {F}\). Since for all \(i=1, \ldots , n\), the function \(m_{i}\) is a monomial of degree k, there exists a symmetric, k-additive function \(M_{i}:\mathbb {F}^{k}\rightarrow \mathbb {C}\) such that we have

$$\begin{aligned} M_{i}(x, \ldots , x)= m_{i}(x) \qquad \left( x\in \mathbb {F}\right) . \end{aligned}$$

Obviously the mapping \(\displaystyle \sum \nolimits _{i=1}^{n}m_{i}\) is a monomial of degree k which is, by the assumptions, identically zero. To this monomial there also corresponds a symmetric and k-additive mapping, namely

$$\begin{aligned} \mathbb {F}^{k}\ni (x_{1}, \ldots , x_{k}) \longmapsto \sum _{i=1}^{n}M_{i}(x_{1}, \ldots , x_{k}). \end{aligned}$$

Observe that the trace of this symmetric and k-additive mapping is identically zero. At the same time, due to the Polarization formula (Theorem 1), every symmetric and k-additive function is uniquely determined by its trace. Thus

$$\begin{aligned} \sum _{i=1}^{n}M_{i}(x_{1}, \ldots , x_{k})=0 \end{aligned}$$

for all \(x_{1}, \ldots , x_{k}\in \mathbb {F}\). \(\square \)
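The polarization step used above can be checked mechanically for \(k=2\). The following is a minimal sketch, assuming sample additive functions \(a(x)=2x\) and \(b(x)=3x\) on \(\mathbb {Q}\) (these concrete choices, and the function names, are ours, not from the text):

```python
from fractions import Fraction as Frac

# Minimal sketch of the polarization formula for k = 2: the symmetric
# biadditive M is recovered from its trace m via
#   M(x1, x2) = (1/2!) * (Delta_{x1} Delta_{x2} m)(0),
# where (Delta_y f)(x) = f(x + y) - f(x).
# a and b are illustrative additive functions on Q.
a = lambda x: 2 * x
b = lambda x: 3 * x
m = lambda x: a(x) * b(x)                      # monomial of degree 2

def polarize2(f, x1, x2):
    # (Delta_{x1} Delta_{x2} f)(0) / 2!
    return Frac(f(x1 + x2) - f(x1) - f(x2) + f(0), 2)

def M(x1, x2):
    # explicit symmetrization of m
    return Frac(a(x1) * b(x2) + a(x2) * b(x1), 2)

pts = [Frac(1), Frac(-2), Frac(3, 4)]
assert all(polarize2(m, u, v) == M(u, v) for u in pts for v in pts)
assert all(M(u, u) == m(u) for u in pts)       # trace condition
print("polarization determines M from its trace")
```

The check illustrates why the trace determines the symmetric biadditive function uniquely: polarization reconstructs \(M\) from \(m\) alone.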

Lemma 12

(Symmetrization). Let n be a positive integer, \(\mathbb {F}\subset \mathbb {C}\) be a field and \(p_{1}, \ldots , p_{n}, q_{1}, \ldots , q_{n}\) be fixed positive integers fulfilling condition C(ii), i.e., there is an \(N\in \mathbb {N}\) such that \(p_i+q_i=N\) for all \(i=1,\dots ,n\). Assume that the additive functions \(f_{1}, \ldots , f_{n}, g_{1}, \ldots , g_{n}:\mathbb {F}\rightarrow \mathbb {C}\) satisfy functional equation (1) for each \(x\in \mathbb {F}\). Then

$$\begin{aligned} \frac{1}{N!} \sum _{\sigma \in {\mathscr {S}}_{N}}\sum _{i=1}^{n} f_{i}(x_{\sigma (1)} \cdots x_{\sigma (p_{i})}) \cdot g_{i}(x_{\sigma (p_{i}+1)} \cdots x_{\sigma (N)})=0 \end{aligned}$$

holds for all \(x_{1}, \ldots , x_{N}\in \mathbb {F}\).

Proof

Let n be a positive integer, \(\mathbb {F}\subset \mathbb {C}\) be a field and \(p_{1}, \ldots , p_{n}, q_{1}, \ldots , q_{n}\) be fixed positive integers fulfilling condition C(ii). Assume that the additive functions \(f_{1}, \ldots , f_{n}, g_{1}, \ldots , g_{n}:\mathbb {F}\rightarrow \mathbb {C}\) satisfy the functional equation

$$\begin{aligned} \sum _{i=1}^{n}f_{i}(x^{p_{i}})g_{i}(x^{q_{i}})= 0 \end{aligned}$$

for each \(x\in \mathbb {F}\). Due to the additivity of the functions \(f_{1}, \ldots , f_{n}\) and \(g_{1}, \ldots , g_{n}\), for all \(i=1, \ldots , n\) the mapping

$$\begin{aligned} x\longmapsto f_{i}(x^{p_{i}})g_{i}(x^{q_{i}}) \end{aligned}$$

is a monomial of degree \(p_{i}+q_{i}=N\). Further, it is the trace of the symmetric and N-additive mapping

$$\begin{aligned}&F_{i}(x_{1}, \ldots , x_{N}) \\&\quad = \frac{1}{N!} \sum _{\sigma \in {\mathscr {S}}_{N}} f_{i}(x_{\sigma (1)} \cdots x_{\sigma (p_{i})}) \cdot g_{i}(x_{\sigma (p_{i}+1)} \cdots x_{\sigma (N)}) \\&\qquad \left( x_{1}, \ldots , x_{N}\in \mathbb {F}\right) . \end{aligned}$$

Therefore, the statement follows from Lemma 11. \(\square \)
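The symmetrization in Lemma 12 can be verified numerically for a single term of Eq. (1). Below is a sketch for \(f=g=\mathrm {id}\) on \(\mathbb {Q}\) with \(p=1\), \(q=2\) (so \(N=3\)); these concrete choices are illustrative assumptions:

```python
from fractions import Fraction as Frac
from itertools import permutations
from math import factorial

# F(x_1, ..., x_N) = (1/N!) * sum over sigma in S_N of
#   f(x_{sigma(1)} ... x_{sigma(p)}) * g(x_{sigma(p+1)} ... x_{sigma(N)})
# Illustrative instance: f = g = identity (additive on Q), p = 1, q = 2.
f = g = lambda x: x
p, q = 1, 2
N = p + q

def prod(vals):
    out = Frac(1)
    for v in vals:
        out *= v
    return out

def F_sym(xs):
    total = Frac(0)
    for sigma in permutations(range(N)):
        total += f(prod(xs[i] for i in sigma[:p])) * g(prod(xs[i] for i in sigma[p:]))
    return total / factorial(N)

x = Frac(5, 3)
# trace condition: F(x, ..., x) = f(x^p) * g(x^q)
assert F_sym([x, x, x]) == f(x**p) * g(x**q)
# symmetry and additivity in the first variable
u, v, y, z = Frac(2), Frac(-1), Frac(4), Frac(7)
assert F_sym([y, u, z]) == F_sym([z, u, y])
assert F_sym([u + v, y, z]) == F_sym([u, y, z]) + F_sym([v, y, z])
print("symmetrized form has the prescribed trace")
```

The averaged sum over \({\mathscr {S}}_{N}\) is symmetric by construction; the check confirms it is N-additive and has trace \(f(x^{p})g(x^{q})\), exactly as used in the proof.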

3.3 Solutions of Eq. (1)

The main purpose of this subsection is to describe, under conditions C(i)–C(iii), the solution space of Eq. (1). We first prove that the solutions of Eq. (1) are decomposable functions on the multiplicative group \(\mathbb {F}^{\times }\). In view of Laczkovich [7], this immediately yields that the solutions of Eq. (1) are generalized exponential polynomials of this group.

Lemma 13

Let n be a positive integer, \(\mathbb {F}\subset \mathbb {C}\) be a field and \(p_{1}, \ldots , p_{n}, q_{1}, \ldots , q_{n}\) be fixed positive integers fulfilling conditions C(i) and C(ii). Assume that the additive functions \(f_{1}, \ldots , f_{n}, g_{1}, \ldots , g_{n}:\mathbb {F}\rightarrow \mathbb {C}\) satisfy functional equation (1) for each \(x\in \mathbb {F}\). Then all the functions \(f_{1}, \ldots , f_{n}\) as well as \(g_{1}, \ldots , g_{n}\) are decomposable functions on the group \(\mathbb {F}^{\times }\).

Proof

Let n be a positive integer, \(\mathbb {F}\subset \mathbb {C}\) be a field and \(p_{1}, \ldots , p_{n}, q_{1}, \ldots , q_{n}\) be fixed positive integers fulfilling conditions C(i) and C(ii).

Let us assume first that condition C(iii) is also satisfied. Assume that the additive functions \(f_{1}, \ldots , f_{n},\) \(g_{1}, \ldots , g_{n}:\mathbb {F}\rightarrow \mathbb {C}\) satisfy functional equation (1) for each \(x\in \mathbb {F}\). Let

$$\begin{aligned} S= \left\{ p_{1}, \ldots , p_{n} \right\} \cup \left\{ q_{1}, \ldots , q_{n}\right\} . \end{aligned}$$

Then \(\max S= \max \left\{ p_{n}, q_{1} \right\} \). By condition C(iii), we have \(p_{n}\ne q_{1}\). Without loss of generality we may assume \(p_{n}>q_{1}\); otherwise we follow a similar argument. In view of Lemma 12, we have

$$\begin{aligned} \frac{1}{N!} \sum _{\sigma \in {\mathscr {S}}_{N}}\sum _{i=1}^{n} f_{i}(x_{\sigma (1)} \cdots x_{\sigma (p_{i})}) \cdot g_{i}(x_{\sigma (p_{i}+1)} \cdots x_{\sigma (N)})=0 \end{aligned}$$

for all \(x_{1}, \ldots , x_{N}\in \mathbb {F}\), or after some rearrangement,

$$\begin{aligned}&\frac{1}{N!} \sum _{\sigma \in {\mathscr {S}}_{N}} f_{n}(x_{\sigma (1)} \cdots x_{\sigma (p_{n})}) \cdot g_{n}(x_{\sigma (p_{n}+1)} \cdots x_{\sigma (N)}) \\&\quad = -\frac{1}{N!} \sum _{\sigma \in {\mathscr {S}}_{N}}\sum _{i=1}^{n-1} f_{i}(x_{\sigma (1)} \cdots x_{\sigma (p_{i})}) \cdot g_{i}(x_{\sigma (p_{i}+1)} \cdots x_{\sigma (N)}) \\&\qquad \left( x_{1}, \ldots , x_{N}\in \mathbb {F}^{\times }\right) . \end{aligned}$$

Let now

$$\begin{aligned} x_{p_{n}+1}= \cdots = x_{N}= 1, \end{aligned}$$

then the above identity says that \(g_{n}(1) \cdot f_{n}\) is decomposable. If \(g_{n}(1)=0\) but \(g_{n}\) is not identically zero, then there exists \(a\in \mathbb {F}^{\times }\) such that \(g_{n}(a)\ne 0\). In this case the above substitutions should be modified to

$$\begin{aligned} x_{p_{n}+1}= a , \; x_{p_{n}+2}= \cdots = x_{N}= 1, \end{aligned}$$

to get the same conclusion.

If \(p_{n}= q_{j}\) for some \(j\in \{1, \dots , n\}\) (i.e., C(iii) does not hold), then without loss of generality we may assume that \(j=1\); otherwise we interchange the roles of \(f_i\) and \(g_i\), and of \(p_i\) and \(q_i\), respectively, and proceed as above. If \(p_{n}= q_{1}\), then we have

$$\begin{aligned}&\frac{1}{N!} \sum _{\sigma \in {\mathscr {S}}_{N}} f_{n}(x_{\sigma (1)} \cdots x_{\sigma (p_{n})}) \cdot g_{n}(x_{\sigma (p_{n}+1)} \cdots x_{\sigma (N)}) \\&\qquad + \frac{1}{N!} \sum _{\sigma \in {\mathscr {S}}_{N}} g_{1}(x_{\sigma (1)} \cdots x_{\sigma (p_{n})}) \cdot f_{1}(x_{\sigma (p_{n}+1)} \cdots x_{\sigma (N)}) \\&\quad = -\frac{1}{N!} \sum _{\sigma \in {\mathscr {S}}_{N}}\sum _{i=2}^{n-1} f_{i}(x_{\sigma (1)} \cdots x_{\sigma (p_{i})}) \cdot g_{i}(x_{\sigma (p_{i}+1)} \cdots x_{\sigma (N)}) \\&\qquad \left( x_{1}, \ldots , x_{N}\in \mathbb {F}^{\times }\right) . \end{aligned}$$

This equation with the substitutions

$$\begin{aligned} x_{p_{n}+1}= \cdots = x_{N}= 1, \end{aligned}$$

yields that a linear combination of \(f_{n}\) and \(g_{1}\) is decomposable. If the system \(\left\{ f_{n}, g_{1} \right\} \) is linearly dependent, then this obviously means that both \(f_{n}\) and \(g_{1}\) are decomposable functions. If this system is linearly independent, then there exist \(a, b\in \mathbb {F}^{\times }\), \(a\ne b\), and different complex constants \(c_{1}\) and \(c_{2}\) such that

$$\begin{aligned} f_{n}(a)= c_{1}g_{1}(a) \qquad \text {and} \qquad f_{n}(b)= c_{2}g_{1}(b). \end{aligned}$$

With the substitutions

$$\begin{aligned} x_{p_{n}+1}= a , \; x_{p_{n}+2}= \cdots = x_{N}= 1, \end{aligned}$$

and

$$\begin{aligned} x_{p_{n}+1}= b , \; x_{p_{n}+2}= \cdots = x_{N}= 1, \end{aligned}$$

we get that the functions

$$\begin{aligned} (x_{1}, \ldots , x_{p_{1}}) \longmapsto g_{1}(x_{1} \cdots x_{p_{1}}) f_{n}(a) + f_{n}(x_{1} \cdots x_{p_{1}}) g_{1}(a) \end{aligned}$$

and

$$\begin{aligned} (x_{1}, \ldots , x_{p_{1}}) \longmapsto g_{1}(x_{1} \cdots x_{p_{1}}) f_{n}(b) + f_{n}(x_{1} \cdots x_{p_{1}}) g_{1}(b) \end{aligned}$$

are decomposable. Since finite linear combinations of decomposable functions are also decomposable, it follows that \(f_{n}\) and \(g_{1}\) are decomposable, separately.

After that, let us consider the set \(S\setminus \left\{ p_{n}\right\} \) and apply the above argument to this set. With this step-by-step descending argument the statement of the lemma follows. \(\square \)

Remark 8

We emphasize that condition C(iii) has not been used in Lemma 13. Thus, the fact that the additive solutions of (1) are decomposable functions can be deduced under conditions C(i) and C(ii) alone. On the other hand, in order to describe the solutions more concretely, we also have to assume condition C(iii) to avoid further difficulties. We believe, however, that most of our methods work similarly under conditions C(i) and C(ii) alone.

Remark 9

Note also that in general we cannot state more than that the involved functions \(f_{1}, \ldots , f_{n}\) and \(g_{1}, \ldots , g_{n}\) are decomposable functions on the commutative group \(\mathbb {F}^{\times }\). In other words, we can only state that the solutions of the functional equation in question are higher order derivations (see Corollary 15 below). In general it is not true that the solutions of this functional equation are differential operators (i.e., exponential polynomials of the multiplicative group \(\mathbb {F}^{\times }\)). To see this, let us consider the functional equation

$$\begin{aligned} xf_{1}(x^{6})+x^{2}f_{2}(x^{5})+x^{3}f_{3}(x^{4})=0 \qquad \left( x\in \mathbb {F}\right) . \end{aligned}$$

Indeed, using the results of [5], we deduce that \(f_{1}, f_{2}, f_{3}\in {\mathscr {D}}_{2}(\mathbb {F})\).

Due to a result of Laczkovich [7], under the assumptions of Lemma 13 there exist a positive integer l, generalized polynomials \(P_{k, i}, Q_{k, i}:\mathbb {F}^{\times }\rightarrow \mathbb {C}\) (\(k=1, \ldots , l\) and \(i=1, \ldots , n\)), and linearly independent exponentials \(m_{1}, \ldots , m_{l}:\mathbb {F}^{\times }\rightarrow \mathbb {C}\) such that

$$\begin{aligned} f_{i}(x)= \sum _{k=1}^{l}P_{k, i}(x)m_{k}(x) \; \text {and} \; g_{i}(x)= \sum _{k=1}^{l}Q_{k, i}(x)m_{k}(x) \qquad \left( x\in \mathbb {F}^{\times }\right) . \end{aligned}$$

Since generalized polynomials on any finitely generated field are polynomials and the exponentials \(m_{1}, \ldots , m_{l}\) are linearly independent, we can apply Theorem 6. Substituting the above forms of the functions \(f_{1}, \ldots , f_{n}\) and \(g_{1}, \ldots , g_{n}\) into Eq. (1) and using that for each \(\kappa \in \mathbb {N}\) and \(k=1, \ldots , l\), we have

$$\begin{aligned} m_{k}(x^{\kappa })= m_{k}(x)^{\kappa } \qquad \left( x\in \mathbb {F}^{\times }\right) , \end{aligned}$$

it follows that

$$\begin{aligned} \sum _{i=1}^{n}P_{k, i}(x^{p_{i}})Q_{k, i}(x^{q_{i}})=0 \qquad \left( x\in \mathbb {F}^{\times }\right) \end{aligned}$$

holds for all \(k=1, \ldots , l\). This tells us that it is enough to solve Eq. (1) for generalized polynomials of the group \(\mathbb {F}^{\times }\). In fact, we can prove the following.

Theorem 14

Let n be a positive integer, \(\mathbb {F}\subset \mathbb {C}\) be a field and \(p_{1}, \ldots , p_{n}, q_{1}, \ldots , q_{n}\) be fixed positive integers fulfilling conditions C(i)–C(iii). Assume that the additive functions \(f_{1}, \ldots , f_{n}, g_{1}, \ldots , g_{n}:\mathbb {F}\rightarrow \mathbb {C}\) satisfy functional equation (1) for each \(x\in \mathbb {F}\). Then there exist a positive integer l, exponentials \(m_i:\mathbb {F}^{\times }\rightarrow \mathbb {C}\), and generalized polynomials \(P_{i}, Q_{i}:\mathbb {F}^{\times }\rightarrow \mathbb {C}\) of degree at most l such that

$$\begin{aligned} f_{i}(x)= P_{i}(x)m_i(x) \qquad \text {and} \qquad g_{i}(x)= Q_{i}(x)m_i(x) \qquad \left( x\in \mathbb {F}^{\times }\right) \end{aligned}$$

for each \(i=1, \ldots , n\).

Proof

As we saw above, under the hypotheses of the theorem, if the functions \(f_{1}, \ldots , f_{n}\) and \(g_{1}, \ldots , g_{n}\) solve Eq. (1), then there exist a positive integer l, generalized polynomials \(P_{k, i}, Q_{k, i}:\mathbb {F}^{\times }\rightarrow \mathbb {C}\) (\(k=1, \ldots , l\) and \(i=1, \ldots , n\)), and linearly independent exponentials \(m_{1}, \ldots , m_{l}:\mathbb {F}^{\times }\rightarrow \mathbb {C}\) such that

$$\begin{aligned} f_{i}(x)= \sum _{k=1}^{l}P_{k, i}(x)m_{k}(x) \; \text {and} \; g_{i}(x)= \sum _{k=1}^{l}Q_{k, i}(x)m_{k}(x) \qquad \left( x\in \mathbb {F}^{\times }\right) . \end{aligned}$$

Assume to the contrary that \(l\ge 2\).

Let

$$\begin{aligned} S= \left\{ p_{1}, \ldots , p_{n} \right\} \cup \left\{ q_{1}, \ldots , q_{n}\right\} . \end{aligned}$$

Then due to conditions C(i)–C(iii) we have \(\max S= \max \left\{ p_{n}, q_{1} \right\} \). Similarly as in Lemma 13, without loss of generality \(q_{1}>p_{n}\) can be assumed; otherwise we follow a similar argument. By our assumption \(l\ge 2\), there exist different exponentials appearing in \(f_{1}\) and \(g_{1}\) with nonzero polynomial coefficients. For the sake of simplicity, suppose that these different exponentials are \(m_{1}\) and \(m_{2}\), appearing in \(f_{1}\) and \(g_{1}\), respectively. Since \(\max S= q_{1}\), the term \(m_{1}^{p_{1}}m_{2}^{q_{1}}\) appears only in \(f_{1}(x^{p_{1}})g_{1}(x^{q_{1}})\) while expanding Eq. (1). Since the generalized polynomials \(P_{k,i}\) and the exponentials \(m_k\) satisfy the conditions of Theorem 6 on every finitely generated subfield of \(\mathbb {F}\), the coefficient of the above-mentioned term, which is \(P_{1, 1}(x^{p_{1}})Q_{2, 1}(x^{q_{1}})\), has to vanish on \(\mathbb {F}\). From this we can deduce that \(P_{1, 1}\) or \(Q_{2, 1}\) is identically zero, contrary to our assumption. Thus

$$\begin{aligned} f_{1}(x)= P_{1}(x)m_{1}(x) \qquad \text {and} \qquad g_{1}(x)= Q_{1}(x)m_{1}(x) \qquad \left( x\in \mathbb {F}\right) , \end{aligned}$$

with appropriate generalized polynomials \(P_{1}, Q_{1}:\mathbb {F}^{\times }\rightarrow \mathbb {C}\) and exponential \(m_{1}:\mathbb {F}^{\times }\rightarrow \mathbb {C}\).

Suppose now that there is a positive integer \(k<n\) such that for all \(i=1, \ldots , k\) we have

$$\begin{aligned} f_{i}(x)= P_{i}(x)m_{i}(x) \; \text {and} \; g_{i}(x)= Q_{i}(x)m_{i}(x) \qquad \left( x\in \mathbb {F}^{\times }\right) . \end{aligned}$$

Assume that in the representations of \(f_{k+1}\) and \(g_{k+1}\) there are different exponentials with nonzero polynomial coefficients, say \(m_{j_{1}}\) and \(m_{j_{2}}\). Observe that while expanding Eq. (1), the term \(m_{j_{1}}^{p_{k+1}}m_{j_{2}}^{q_{k+1}}\) appears only once, namely in the product \(f_{k+1}(x^{p_{k+1}})g_{k+1}(x^{q_{k+1}})\). Again, due to Theorem 6, we deduce that the corresponding polynomial coefficient, that is,

$$\begin{aligned} P_{j_{1}, k+1}(x^{p_{k+1}})Q_{j_{2}, k+1}(x^{q_{k+1}}) \end{aligned}$$

has to vanish. This proves that necessarily

$$\begin{aligned} f_{k+1}(x)= P_{k+1}(x)m_{k+1}(x) \; \text {and} \; g_{k+1}(x)= Q_{k+1}(x)m_{k+1}(x) \qquad \left( x\in \mathbb {F}^{\times }\right) \end{aligned}$$

hold. This shows that there exist exponentials \(m_{1}, \ldots , m_{n}:\mathbb {F}^{\times }\rightarrow \mathbb {C}\) and generalized polynomials \(P_{1}, \ldots , P_{n}\) and \(Q_{1}, \ldots , Q_{n}\) on the group \(\mathbb {F}^{\times }\) such that for all \(i=1, \ldots , n\)

$$\begin{aligned} f_{i}(x)= P_{i}(x)m_{i}(x) \qquad \text {and} \qquad g_{i}(x)= Q_{i}(x)m_{i}(x) \qquad \left( x\in \mathbb {F}^{\times }\right) . \end{aligned}$$

\(\square \)

Remark 10

Due to conditions C(i)–C(iii), Eq. (1) has the form

$$\begin{aligned} 0&= \sum _{i=1}^{n}f_{i}(x^{p_{i}})g_{i}(x^{q_{i}}) \\&= \sum _{i=1}^{n}P_{i}(x^{p_{i}})m_{i}(x)^{p_{i}}Q_{i}(x^{q_{i}})m_{i}(x)^{q_{i}} = \sum _{i=1}^{n}P_{i}(x^{p_{i}})Q_{i}(x^{q_{i}})m_{i}(x)^{N} \\&\quad \left( x\in \mathbb {F}^{\times }\right) . \end{aligned}$$

If the exponentials appearing on the right-hand side were different, then by Theorem 6 their coefficients would vanish. This implies, however, that there exists a proper subset \(J\subset \left\{ 1, \ldots , n\right\} \) such that

$$\begin{aligned} \sum _{j\in J}f_{j}(x^{p_{j}})g_{j}(x^{q_{j}})=0 \; \text {as well as} \; \sum _{j\notin J}f_{j}(x^{p_{j}})g_{j}(x^{q_{j}})=0 \qquad \left( x\in \mathbb {F}^{\times }\right) . \end{aligned}$$
(2)

This leads to the following definition of irreducible solutions.

Definition 10

A system of solutions \(\left\{ f_{1}, \ldots , f_{n}, g_{1}, \ldots , g_{n}\right\} \) of Eq. (1) is called irreducible if it does not satisfy any proper sub-term of (1). Otherwise, we say that the system of solutions is reducible.

Clearly, a system of solutions \(\left\{ f_{1}, \ldots , f_{n}, g_{1}, \ldots , g_{n}\right\} \) of (1) which fulfills (2) is reducible. On the other hand, the argument in Remark 10 shows that every solution of (1) can be given as a sum of irreducible solutions of disjoint sub-terms of (1). Therefore, we restrict ourselves to the irreducible case, since every reducible solution can be obtained as a sum of irreducible solutions.

Corollary 15

Under the conditions of Theorem 14, suppose that the system of functions \(\left\{ f_{1}, \ldots , f_{n}, g_{1}, \ldots , g_{n}\right\} \) is an irreducible solution of Eq. (1). Then there exist an exponential \(m:\mathbb {F}^{\times }\rightarrow \mathbb {C}\) and generalized polynomials \(P_{i}, Q_{i}:\mathbb {F}^{\times }\rightarrow \mathbb {C}\) such that for each \(i=1, \ldots , n\)

$$\begin{aligned} f_{i}(x)= P_{i}(x)m(x) \qquad \text {and} \qquad g_{i}(x)= Q_{i}(x)m(x) \qquad \left( x\in \mathbb {F}^{\times }\right) . \end{aligned}$$
(3)

In other words, for each \(i=1, \ldots , n\) there exist higher order derivations \(D_{i}, \widetilde{D_{i}}:\mathbb {F}\rightarrow \mathbb {C}\) such that

$$\begin{aligned} f_{i}(x)\sim D_{i}(x) \qquad \text {and} \qquad g_{i}(x)\sim \widetilde{D_{i}}(x) \qquad \left( x\in \mathbb {F}^{\times }\right) , \end{aligned}$$

where \(\sim \) in the latter two equations is the equivalence relation defined in Remark 7.

Proof

By Remark 10, all of the exponentials \(m_i\) appearing in the description of the solutions in Theorem 14 have to coincide. Therefore, Eq. (3) describes the irreducible solutions of (1).

By Lemma 10, it is enough to determine the solutions of Eq. (1) up to the equivalence relation \(\sim \) defined in Remark 7. Accordingly, we can suppose that the exponential m in the above representation is the identity mapping. Hence, in view of Theorem 8, we get that the functions \(f_{1}, \ldots , f_{n}\) as well as \(g_{1}, \ldots , g_{n}\) are (or more precisely, are equivalent to) higher order derivations, as we stated. \(\square \)

3.4 The Order of Higher Order Derivation Solutions

Every higher order derivation on \(\mathbb {F}\) is a differential operator on any finitely generated subfield of \(\mathbb {F}\) (see Theorem 8 and [6]). Hence on these fields the solutions are differential operators. Moreover, if the solutions on every finitely generated subfield of \(\mathbb {F}\) are differential operators of order at most n, then every solution on \(\mathbb {F}\) is a derivation of order at most n. From now on, instead of finding solutions as higher order derivations, we are looking for differential operators as solutions.

For this purpose, our next aim is to understand the arithmetic of the composition of derivations of the form \(d_1\circ \dots \circ d_r\) that are building blocks of differential operators. First we show that there is a natural form of composition of derivations that can be taken as a standard basis.

To this end, the notion of moment function sequences turns out to be useful. Here we follow [4]. A composition of a nonnegative integer n is a sequence of nonnegative integers \(\alpha = \left( \alpha _{k}\right) _{k\in \mathbb {N}}\) such that

$$\begin{aligned} n= \sum _{k=1}^{\infty }\alpha _{k}. \end{aligned}$$

For a positive integer r, an r-composition of a nonnegative integer n is a composition \(\alpha = \left( \alpha _{k}\right) _{k\in \mathbb {N}}\) with \(\alpha _{k}=0\) for \(k>r\).

Given a sequence of variables \(x=(x_{k})_{k\in \mathbb {N}}\) and compositions \(\alpha = \left( \alpha _{k}\right) _{k\in \mathbb {N}}\) and \(\beta = \left( \beta _{k}\right) _{k\in \mathbb {N}}\) we define

$$\begin{aligned} \alpha !=\prod _{k=1}^{\infty }\alpha _{k}!,\quad |\alpha |= \sum _{k=1}^{\infty }\alpha _{k}, \quad x^{\alpha }=\prod _{k=1}^{\infty }x_{k}^{\alpha _{k}},\quad \left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) = \prod _{k=1}^{\infty }\left( {\begin{array}{c}\alpha _{k}\\ \beta _{k}\end{array}}\right) . \end{aligned}$$

Furthermore, \(\beta \le \alpha \) means that \(\beta _{k}\le \alpha _{k}\) for all \(k\in \mathbb {N}\) and \(\beta < \alpha \) stands for \(\beta \le \alpha \) and \(\beta \ne \alpha \).
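These conventions translate directly into code. A small sketch (the function names are ours), treating compositions as finite tuples with entries beyond the tuple understood to be zero:

```python
from math import comb, factorial, prod

# Multi-index conveniences for finitely supported compositions,
# represented as tuples (alpha_1, ..., alpha_r).
def mfact(alpha):              # alpha! = product of alpha_k!
    return prod(factorial(k) for k in alpha)

def norm(alpha):               # |alpha| = sum of alpha_k
    return sum(alpha)

def mpow(x, alpha):            # x^alpha = product of x_k^{alpha_k}
    return prod(xk ** ak for xk, ak in zip(x, alpha))

def mbinom(alpha, beta):       # (alpha choose beta) = product of (alpha_k choose beta_k)
    return prod(comb(a, b) for a, b in zip(alpha, beta))

def leq(beta, alpha):          # beta <= alpha componentwise
    return all(b <= a for b, a in zip(beta, alpha))

alpha, beta = (2, 1, 0), (1, 1, 0)
print(mfact(alpha), norm(alpha), mbinom(alpha, beta), leq(beta, alpha))
# mfact = 2!*1!*0! = 2, |alpha| = 3, (alpha choose beta) = C(2,1)*C(1,1)*C(0,0) = 2
```

These helpers implement exactly the operations \(\alpha !\), \(|\alpha |\), \(x^{\alpha }\), \(\binom {\alpha }{\beta }\) and \(\beta \le \alpha \) defined above.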

Definition 11

Let G be a commutative group, r a positive integer, and for each multi-index \(\alpha \) in \(\mathbb {N}^r\) let \(f_{\alpha }:G\rightarrow \mathbb {C}\) be a function. We say that \((f_{\alpha })_{\alpha \in \mathbb {N}^{r}}\) is a generalized moment sequence of rank r, if

$$\begin{aligned} f_{\alpha }(x+y)=\sum _{\beta \le \alpha } \left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) f_{\beta }(x)f_{\alpha -\beta }(y) \end{aligned}$$
(4)

holds whenever x and y are in G. The function \(f_0\), where 0 is the zero element of \(\mathbb {N}^r\), is called the generating function of the sequence.

Theorem 16

Let G be a commutative group, r a positive integer, and for each \(\alpha \) in \(\mathbb {N}^{r}\) let \(f_{\alpha }:G\rightarrow \mathbb {C}\) be a function. If the sequence of functions \((f_{\alpha })_{\alpha \in \mathbb {N}^{r}}\) forms a generalized moment sequence of rank r, then there exists an exponential \(m:G\rightarrow \mathbb {C}\) and a sequence of complex-valued additive functions \(a= (a_{\alpha })_{\alpha \in \mathbb {N}^{r}}\) such that for every multi-index \(\alpha \) in \(\mathbb {N}^{r}\) and x in G we have

$$\begin{aligned} f_{\alpha }(x)=B_{\alpha }(a(x))m(x), \end{aligned}$$

where \(B_{\alpha }\) denotes the multivariate Bell polynomial corresponding to the multi-index \(\alpha \).

Remark 11

It is well-known that every polynomial \(P:\mathbb {C}^r\rightarrow \mathbb {C}\) (\(r\in \mathbb {N}\)) can be given as a linear combination of the Bell polynomials \(B_{\alpha }\), where \(\alpha \in \mathbb {N}^{r}\). Hence, moment function sequences generate the exponential polynomial functions on G. By Lemma 4, the polynomials of the form \(B_{\alpha }\circ a\) are linearly independent over \(\mathbb {C}\).

Lemma 17

Let \(\mathbb {F}\subset \mathbb {C}\) be a field, r be a positive integer and \(d_{1}, \ldots , d_{r}:\mathbb {F}\rightarrow \mathbb {F}\) be linearly independent derivations. For every multi-index \(\alpha \in \mathbb {N}^{r}\), \(\alpha = \left( \alpha _{1}, \ldots , \alpha _{r}\right) \), define the function \(\varphi _{\alpha }:\mathbb {F}\rightarrow \mathbb {C}\) by

$$\begin{aligned} \varphi _{\alpha }(x)&= d^{\alpha }(x)= d_{1}^{\alpha _{1}} \circ \cdots \circ d_{r}^{\alpha _{r}}(x) \\&= \underbrace{d_{1}\circ \cdots \circ d_{1}}_{\alpha _{1}\text { times}} \circ \cdots \circ \underbrace{d_{r}\circ \cdots \circ d_{r}}_{\alpha _{r}\text { times}} (x) \\&\quad \left( x\in \mathbb {F}^{\times }\right) . \end{aligned}$$

Then \(({\varphi _{\alpha }})_{\alpha \in \mathbb {N}^{r}}\) is a generalized moment sequence of rank r on the commutative group \(\mathbb {F}^{\times }\) and the generating function of the sequence is the identity function.

Proof

Let \(\mathbb {F}\subset \mathbb {C}\) be a field, r be a positive integer and \(d_{1}, \ldots , d_{r}:\mathbb {F}\rightarrow \mathbb {F}\) be linearly independent derivations. For every multi-index \(\alpha \in \mathbb {N}^{r}\), \(\alpha = \left( \alpha _{1}, \ldots , \alpha _{r}\right) \), define the function \(\varphi _{\alpha }:\mathbb {F}\rightarrow \mathbb {C}\) as in the lemma. We prove the statement by induction on the length \(|\alpha |\) of the multi-index \(\alpha \). Assume first that \(\alpha \in \mathbb {N}^{r}\) and \(|\alpha |=1\). Then there exists \(i\in \left\{ 1, \ldots , r \right\} \) such that \(\alpha _{i}=1\) and \(\alpha _{j}=0\) for \(j\ne i\), and

$$\begin{aligned} \varphi _{\alpha }(xy)&= d_{i}^{\alpha _{i}}(xy)= d_{i}(xy)= xd_{i}(y)+yd_{i}(x) \\&= \varphi _{0}(x)\varphi _{\alpha }(y)+\varphi _{0}(y)\varphi _{\alpha }(x)= \sum _{\beta \le \alpha }\left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) \varphi _{\beta }(x)\varphi _{\alpha -\beta }(y) \\&\quad \left( x, y\in \mathbb {F}^{\times }\right) . \end{aligned}$$

Let now \(\alpha \in \mathbb {N}^{r}\) and assume that the statement is true for all multi-indices \(\beta \in \mathbb {N}^{r}\) with \(\beta < \alpha \). Then

$$\begin{aligned} \varphi _{\alpha }(xy)&= d^{\alpha }(xy)= d_{1}^{\alpha _{1}} \circ \cdots \circ d_{r}^{\alpha _{r}}(xy)= \underbrace{d_{1}\circ \cdots \circ d_{1}}_{\alpha _{1}\text { times}} \circ \cdots \circ \underbrace{d_{r}\circ \cdots \circ d_{r}}_{\alpha _{r} \text { times}}(xy) \\&= \underbrace{d_{1}\circ \cdots \circ d_{1}}_{\alpha _{1}\text { times}} \circ \cdots \circ \underbrace{d_{r-1}\circ \cdots \circ d_{r-1}}_{\alpha _{r-1}\text { times}} \left( d_{r}^{\alpha _{r}}(xy)\right) \\&= \underbrace{d_{1}\circ \cdots \circ d_{1}}_{\alpha _{1}\text { times}} \circ \cdots \circ \underbrace{d_{r-1}\circ \cdots \circ d_{r-1}}_{\alpha _{r-1}\text { times}} \left( \sum _{\beta _r=0}^{\alpha _{r}}\left( {\begin{array}{c}\alpha _{r}\\ \beta _r\end{array}}\right) d_{r}^{\beta _r}(x)\cdot d^{\alpha _{r}-\beta _r}_{r}(y)\right) \\&= \sum _{\beta _r=0}^{\alpha _{r}}\left( {\begin{array}{c}\alpha _{r}\\ \beta _r\end{array}}\right) \underbrace{d_{1}\circ \cdots \circ d_{1}}_{\alpha _{1}\text { times}} \circ \cdots \circ \underbrace{d_{r-1}\circ \cdots \circ d_{r-1}}_{\alpha _{r-1}\text { times}} \left( d_{r}^{\beta _r}(x)\cdot d^{\alpha _{r}-\beta _r}_{r}(y)\right) \\&= \sum _{\beta _1=0}^{\alpha _{1}}\dots \sum _{\beta _{r-1}=0}^{\alpha _{r-1}}\sum _{\beta _r=0}^{\alpha _{r}}\left( {\begin{array}{c}\alpha _{1}\\ \beta _1\end{array}}\right) \dots \left( {\begin{array}{c}\alpha _{r-1}\\ \beta _{r-1}\end{array}}\right) \left( {\begin{array}{c}\alpha _{r}\\ \beta _r\end{array}}\right) \\&\quad \times d_{1}^{\beta _{1}} \circ \cdots \circ d_{r}^{\beta _{r}}(x)~\cdot ~ d_{1}^{{\alpha _{1}}-{\beta _{1}}} \circ \cdots \circ d_{r}^{{\alpha _{r}}-{\beta _{r}}}(y) \\&= \sum _{\beta \le \alpha }\left( {\begin{array}{c}\alpha \\ \beta \end{array}}\right) \varphi _{\beta }(x)\varphi _{\alpha -\beta }(y) \qquad \left( x, y\in \mathbb {F}^{\times }\right) . \end{aligned}$$

\(\square \)
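Lemma 17 can be sanity-checked on a concrete ring. Below is a sketch over \(\mathbb {Q}[s,t]\) with \(d_1=\partial /\partial s\) and \(d_2=\partial /\partial t\) in the role of two linearly independent derivations (rank \(r=2\)); the encoding of polynomials as coefficient dictionaries is an illustrative choice of ours:

```python
from math import comb

# Verify the moment identity (4), written multiplicatively, for rank r = 2:
#   phi_alpha(x * y) = sum over beta <= alpha of (alpha choose beta)
#                      * phi_beta(x) * phi_{alpha - beta}(y),
# where phi_alpha = d1^{alpha_1} o d2^{alpha_2} on Q[s, t].
def p_mul(a, b):  # bivariate polynomials as {(i, j): coeff}
    out = {}
    for (i, j), c in a.items():
        for (k, l), e in b.items():
            out[(i + k, j + l)] = out.get((i + k, j + l), 0) + c * e
    return {k: v for k, v in out.items() if v}

def d1(a):  # partial derivative with respect to s
    return {(i - 1, j): c * i for (i, j), c in a.items() if i}

def d2(a):  # partial derivative with respect to t
    return {(i, j - 1): c * j for (i, j), c in a.items() if j}

def phi(alpha, x):  # phi_alpha = d1^{alpha_1} o d2^{alpha_2}
    for _ in range(alpha[1]):
        x = d2(x)
    for _ in range(alpha[0]):
        x = d1(x)
    return x

def moment_rhs(alpha, x, y):
    out = {}
    for b1 in range(alpha[0] + 1):
        for b2 in range(alpha[1] + 1):
            coeff = comb(alpha[0], b1) * comb(alpha[1], b2)
            term = p_mul(phi((b1, b2), x), phi((alpha[0] - b1, alpha[1] - b2), y))
            for k, v in term.items():
                out[k] = out.get(k, 0) + coeff * v
    return {k: v for k, v in out.items() if v}

x = {(1, 0): 1, (0, 1): 2}         # x = s + 2t
y = {(2, 0): 1, (0, 0): -1}        # y = s^2 - 1
alpha = (2, 1)
assert phi(alpha, p_mul(x, y)) == moment_rhs(alpha, x, y)
print("moment identity (4) verified for alpha =", alpha)
```

Since partial derivatives commute, the order of composition inside \(\varphi _{\alpha }\) is immaterial, mirroring the symmetry used in the proof.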

Corollary 18

Lemma 17 together with Remark 11 implies that the functions \(d^{\alpha }= d_{1}^{\alpha _{1}} \circ \cdots \circ d_{r}^{\alpha _{r}}\) constitute a basis of the differential operators on \(\mathbb {F}^{\times }\). Since every \(d^{\alpha }\) is additive, by Lemma 4, the elements of the system \(\{d^{\alpha }(x)\}\), where \(\alpha \in \cup _{r\in \mathbb {N}}\mathbb {N}^r\), are algebraically independent.

A consequence of the algebraic independence of the functions \(d^{\alpha }\), where \(\alpha \in \cup _{r\in \mathbb {N}}\mathbb {N}^r\), is the following. Let \(P\in \mathbb {C}[x_1, \dots , x_n]\) be a polynomial, \(d_1, \dots , d_r\) be derivations as in Lemma 17 and \(\alpha _1, \dots , \alpha _n \in \cup _{r\in \mathbb {N}}\mathbb {N}^r\). Then the polynomial identity

$$\begin{aligned} P(d^{\alpha _1}(x), \ldots , d^{\alpha _n}(x))=0 \qquad \left( x\in G\right) \end{aligned}$$

holds if and only if

$$\begin{aligned} P({\hat{d}}^{~|\alpha _1|}(x), \ldots , {\hat{d}}^{~|\alpha _n|}(x))=0 \qquad \left( x\in G\right) , \end{aligned}$$

where \({\hat{d}}\) is an arbitrary derivation (of order 1). In other words, we can substitute \(d_1, \dots , d_r\) by \({\hat{d}}\) in \(d^{\alpha _1}, \ldots , d^{\alpha _n}\).

By Corollary 18, it would be desirable to calculate \(d^{k}= \underbrace{d\circ \cdots \circ d}_{k \text { times}}\), where d is a derivation (of order 1) and \(k\in \mathbb {N}\). Lemma 17, together with [4, Proposition 1], implies the following statement.

Proposition 19

Let \(\mathbb {F}\subset \mathbb {C}\) be a field and \(d:\mathbb {F}\rightarrow \mathbb {C}\) be a derivation. For every positive integer k we define the function \(d^{k}\) on \(\mathbb {F}\) by

$$\begin{aligned} d^{k}(x)= \underbrace{d\circ \cdots \circ d}_{k \text { times}}(x) \qquad \left( x\in \mathbb {F}\right) . \end{aligned}$$

Then for every positive integer p we have

$$\begin{aligned} d^{k}(x_{1}\cdots x_{p})= \sum _{\begin{array}{c} l_{1}, \ldots , l_{p}\ge 0\\ l_{1}+\cdots +l_{p}=k \end{array}} \left( {\begin{array}{c}k\\ l_{1}, \ldots , l_{p}\end{array}}\right) \prod _{t=1}^{p} d^{l_{t}}(x_{t}) \qquad \left( x_{1}, \ldots , x_{p}\in \mathbb {F}\right) , \end{aligned}$$

where the conventions \(d^{0}= \textrm{id}\) and \(\displaystyle \left( {\begin{array}{c}k\\ l_{1}, \ldots , l_{p}\end{array}}\right) = \dfrac{k!}{l_{1}! \cdots l_{p}!}\) are adopted. In particular, for every positive integer p, we have

$$\begin{aligned} d^{k}(x^{p})= \sum _{\begin{array}{c} l_{1}, \ldots , l_{p}\ge 0\\ l_{1}+\cdots +l_{p}=k \end{array}} \left( {\begin{array}{c}k\\ l_{1}, \ldots , l_{p}\end{array}}\right) \cdot d^{l_{1}}(x)\cdots d^{l_{p}}(x) \qquad \left( x\in \mathbb {F}\right) . \end{aligned}$$

Reordering the previous expression, we obtain

$$\begin{aligned} d^{k}(x^{p}) = \sum _{\begin{array}{c} j_{1}+\cdots +j_{s}=p'\le p\\ j_1+2j_2+\dots +sj_s=k \end{array}} \frac{k!}{\prod _{t=1}^{s}(t!)^{j_{t}}\, j_{t}!}\cdot \frac{p!}{(p-p')!}\cdot (d(x))^{j_1}\cdots (d^{s}(x))^{j_s}\cdot x^{p-p'} \qquad \left( x\in \mathbb {F}\right) , \end{aligned}$$

where \(j_{t}\) denotes the number of factors \(d^{t}(x)\) appearing in a given term of the expansion of \(d^{k}(x^{p})\).
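As a quick check of the multinomial formula in Proposition 19, one can work over \(\mathbb {Q}[t]\) with \(d=d/dt\), which is a derivation of order 1; the polynomial encoding below is an illustrative choice of ours:

```python
from itertools import product
from math import factorial

# Polynomials over Q in t as integer coefficient lists; d = d/dt is a
# derivation (of order 1) on this ring.
def p_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def p_pow(a, n):
    out = [1]
    for _ in range(n):
        out = p_mul(out, a)
    return out

def d(a):
    return [i * a[i] for i in range(1, len(a))] or [0]

def d_iter(a, k):                      # d^k, i.e. d composed k times
    for _ in range(k):
        a = d(a)
    return a

x = [1, 0, 1]                          # x = 1 + t^2
k, p = 2, 3

lhs = d_iter(p_pow(x, p), k)           # d^k(x^p) computed directly

# sum over l_1 + ... + l_p = k of multinomial(k; l_1,...,l_p) * prod d^{l_t}(x)
rhs = [0] * len(lhs)
for ls in product(range(k + 1), repeat=p):
    if sum(ls) != k:
        continue
    coeff = factorial(k)
    for l in ls:
        coeff //= factorial(l)
    term = [coeff]
    for l in ls:
        term = p_mul(term, d_iter(x, l))
    for i, c in enumerate(term):
        rhs[i] += c

assert lhs == rhs
print("multinomial expansion of d^k(x^p) verified for k =", k, "p =", p)
```

The same loop, with the summands grouped by the multiset \(\{j_1,\dots ,j_s\}\), reproduces the reordered formula above.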

With the above results, we are now ready to prove an upper bound for the order of derivations appearing in Theorem 14.

Theorem 20

Under the hypotheses of Theorem 14, the solutions \(f_i\) and \(g_i\) are higher order derivations \(D_{i}\) and \({\widetilde{D}}_{i}\) for all \(i=1, \ldots , n\). Let k and l denote the maximal orders of the derivations \(D_{i}\) and \({\widetilde{D}}_{i}\), respectively, and suppose that there exists some \(i'\) such that the orders of \(D_{i'}\) and \({\widetilde{D}}_{i'}\) are exactly k and l, respectively. Then for all \(i=1, \ldots , n\) the order of \(D_{i}\) and \({\widetilde{D}}_{i}\) is at most \(n-1\).

Proof

Assume to the contrary that the maximal order k of the above derivations \(D_{1}, \ldots , D_{n}\) is greater than \(n-1\). The argument for the case when the maximal order l of \({\widetilde{D}}_{1}, \ldots , {\widetilde{D}}_{n}\) is greater than \(n-1\) is analogous.

By our assumption there exists an index \(i'\) such that the orders of the derivations \(D_{i'}\) and \({\widetilde{D}}_{i'}\) are exactly k and l, respectively. It is important to note that then the sum of the orders of \(D_{i}\) and \({\widetilde{D}}_{i}\) is at most \(k+l\) for any i, and it is exactly \(k+l\) for some index if and only if the corresponding \(D_{i}\) and \({\widetilde{D}}_{i}\) are of order k and of order l, respectively. Furthermore, by the algebraic independence of higher order derivations, the number of such indices is at least two.

From now on we assume that \(\mathbb {F}\) is finitely generated. Indeed, if we verify the statement for every finitely generated subfield of \(\mathbb {F}\), then it holds for \(\mathbb {F}\) itself as well. On finitely generated fields all of these higher order derivations can be represented as differential operators, that is, on finitely generated fields we have

$$\begin{aligned} D_{i}(x)= \sum _{|\alpha |\le k}\lambda _{i,\alpha }d_i^{\alpha }(x) \qquad \text {and} \qquad {\widetilde{D}}_{i}(x)= \sum _{|\beta |\le l}{\widetilde{\lambda }}_{i,\beta }{\widetilde{d}}_i^{\beta }(x) \qquad \left( x\in \mathbb {F}\right) \end{aligned}$$

with appropriate complex constants \(\lambda _{i,\alpha }\), \({\widetilde{\lambda }}_{i, \beta }\), (\(|\alpha |\le k\), \(|\beta |\le l\), \(i=1, \ldots , n\)) and higher order derivations \(d_{i}^{\alpha }, {\widetilde{d}}_{i}^{\beta } :\mathbb {F}\rightarrow \mathbb {C}\) defined in Lemma 17. Further, we have

$$\begin{aligned} 0= & {} \sum _{i=1}^{n}D_{i}(x^{p_{i}}){\widetilde{D}}_{i}(x^{q_{i}}) \\= & {} \sum _{i=1}^{n}\left( \sum _{|\alpha |\le k}\lambda _{i, \alpha } d_i^{\alpha }(x^{p_{i}}) \right) \cdot \left( \sum _{|\beta |\le l}{\widetilde{\lambda }}_{i, \beta }{\widetilde{d}}_i^{\beta }(x^{q_{i}})\right) \qquad \left( x\in \mathbb {F}\right) . \end{aligned}$$

If we expand the right hand side of the above identity with the aid of Proposition 19, we get an expression of the following polynomial form

$$\begin{aligned}&P(x, d_{1}(x), \ldots , d_{k}(x), {\widetilde{d}}_{1}(x), \ldots , {\widetilde{d}}_{l}(x), \ldots , d_{1}^{k}(x), \ldots , \\&\quad d_{k}^{k}(x), {\widetilde{d}}^{l}_{1}(x), \ldots , {\widetilde{d}}^{l}_{l}(x))=0 \qquad \left( x\in \mathbb {F}\right) . \end{aligned}$$

If this identity can be satisfied by different functions, it can also be satisfied by a single one. By Corollary 18, this enables us to substitute the functions \(d_{i}^{\alpha }, {\widetilde{d}}_{i}^{\beta } \, (i=1, \ldots , n, |\alpha |\le k, |\beta |\le l)\) with suitable compositions of a given derivation d of order 1. In other words, instead of the above identity, we can restrict ourselves to

$$\begin{aligned} \sum _{i=1}^{n}\left( \sum _{j=0}^{k}\lambda _{i, j}d^{j}(x^{p_{i}}) \right) \cdot \left( \sum _{j=0}^{l}{\widetilde{\lambda }}_{i, j}d^{j}(x^{q_{i}})\right) =0 \qquad \left( x\in \mathbb {F}\right) \end{aligned}$$

with appropriate complex constants \(\lambda _{i, j}\) (\(i=1, \ldots , n\), \(j=0, \ldots , k\)) and \({\widetilde{\lambda }}_{i, j}\) (\(i=1, \ldots , n\), \(j=0, \ldots , l\)) and a derivation \(d :\mathbb {F}\rightarrow \mathbb {C}\). By our assumptions there is some \(i'\) such that \(\lambda _{i',k}\ne 0\) and \({\widetilde{\lambda }}_{i', l}\ne 0\).

Splitting the above sum into smaller ones, we get

$$\begin{aligned}&\sum _{i=1}^{n}\left( \sum _{j=0}^{k}\lambda _{i, j}d^{j}(x^{p_{i}}) \right) \cdot \left( \sum _{j=0}^{l}{\widetilde{\lambda }}_{i, j}d^{j}(x^{q_{i}})\right) \\&\quad = \sum _{i=1}^{n}\left( \sum _{j=0}^{k-1}\lambda _{i, j}d^{j}(x^{p_{i}})+\lambda _{i, k}d^{k}(x^{p_{i}}) \right) \cdot \left( \sum _{j=0}^{l-1}{\widetilde{\lambda }}_{i, j}d^{j}(x^{q_{i}})+{\widetilde{\lambda }}_{i, l}d^{l}(x^{q_{i}})\right) \\&\quad = \sum _{i=1}^{n} \left[ S({p_i}, k-1)S(q_{i}, l-1)+{\widetilde{\lambda }}_{i, l}d^{l}(x^{q_{i}})S(p_{i}, k-1) \right. \\&\qquad \left. + \lambda _{i, k}d^{k}(x^{p_{i}})S(q_{i}, l-1)+\lambda _{i, k}{\widetilde{\lambda }}_{i, l}d^{k}(x^{p_{i}})d^{l}(x^{q_{i}})\right] =0 \qquad \left( x\in \mathbb {F}\right) , \end{aligned}$$

where

$$\begin{aligned} S(p_{i}, k-1)= \sum _{j=0}^{k-1}\lambda _{i, j}d^{j}(x^{p_{i}}) \; \text {and} \; S(q_{i}, l-1)= \sum _{j=0}^{l-1}{\widetilde{\lambda }}_{i, j}d^{j}(x^{q_{i}}) \qquad \left( x\in \mathbb {F}\right) . \end{aligned}$$

Note that, by the algebraic independence used in Corollary 18, this sum splits into separate terms of the form \(d^{s}(x^{p_i})d^{t}(x^{q_i})\), where \(s+t\) is a fixed number. By our assumption, when \(s+t=k+l\), the only possibility is \(s= k\), \(t= l\). Using Proposition 19, this implies that

$$\begin{aligned} 0&=\sum _{i=1}^{n} \lambda _{i, k}{\widetilde{\lambda }}_{i, l}d^{k}(x^{p_{i}})d^{l}(x^{q_{i}}) \\&= \sum _{i=1}^{n}\lambda _{i, k}{\widetilde{\lambda }}_{i, l} \\&\quad \times \left( \sum _{\begin{array}{c} j_{1}+\cdots +j_{s}=p'< p_i\\ j_1+2j_2+\dots +sj_s=k \end{array}} \left( {\begin{array}{c}k\\ \underbrace{1,\dots , 1}_{j_{1}}, \ldots , \underbrace{s,\dots , s}_{j_{s}}\end{array}}\right) \left( {\begin{array}{c}p_i\\ \underbrace{1,\dots ,1}_{p'}\end{array}}\right) \prod _{t=1}^s \frac{(d^t(x))^{j_t}}{(j_t!)} x^{p_i-p'}\right) \\&\quad \times \left( \sum _{\begin{array}{c} j_{1}+\cdots +j_{s}=q'< q_i\\ j_1+2j_2+\dots +sj_s=l \end{array}} \left( {\begin{array}{c}l\\ \underbrace{1,\dots , 1}_{j_{1}}, \ldots , \underbrace{s,\dots , s}_{j_{s}}\end{array}}\right) \left( {\begin{array}{c}q_i\\ \underbrace{1,\dots ,1}_{q'}\end{array}}\right) \prod _{t=1}^s \frac{(d^t(x))^{j_t}}{(j_t!)} x^{q_i-q'}\right) \end{aligned}$$

while the rest in the above sum can be computed similarly.

Case 1. If \(k<l\), then we compute the coefficients of the terms

$$\begin{aligned} (d(x))^{j} d^{k-j}(x)d^{l}(x) ~ ~ ~ (j=0,\dots , k-1). \end{aligned}$$
(5)

For each \(j=0, \dots , k-1\) this term can be obtained from the expansion of

$$\begin{aligned}\lambda _{i, k}{\widetilde{\lambda }}_{i, l}d^{k}(x^{p_{i}})d^{l}(x^{q_{i}})\end{aligned}$$

in only one way. Namely, by splitting \(d^{k}(x^{p_{i}})\) into \(j+1\) parts and \(d^{l}(x^{q_{i}})\) into one. Then the corresponding coefficients are

$$\begin{aligned} \sum _{i=1}^n\lambda _{i, k}{\widetilde{\lambda }}_{i, l}\left( {\begin{array}{c}k\\ \underbrace{1,\dots , 1}_{j}\end{array}}\right) \frac{1}{j!}\left( {\begin{array}{c}p_i\\ \underbrace{1,\dots ,1}_{j+1}\end{array}}\right) \left( {\begin{array}{c}q_i\\ 1\end{array}}\right) =0. \end{aligned}$$

Since each of the terms contains \(\displaystyle \left( {\begin{array}{c}k\\ \underbrace{1,\dots , 1}_{j}\end{array}}\right) \dfrac{1}{j!}=\left( {\begin{array}{c}k\\ j\end{array}}\right) \), this factor can be eliminated from the above equation. The corresponding equations (\(j=0,\dots , k-1\)) can be written as the following matrix equation

$$\begin{aligned} \begin{pmatrix} \left( {\begin{array}{c}p_{1}\\ 1\end{array}}\right) \left( {\begin{array}{c}q_{1}\\ 1\end{array}}\right) &{} \ldots &{} \left( {\begin{array}{c}p_{n}\\ 1\end{array}}\right) \left( {\begin{array}{c}q_{n}\\ 1\end{array}}\right) \\ \left( {\begin{array}{c}p_{1}\\ 1, 1\end{array}}\right) \left( {\begin{array}{c}q_{1}\\ 1\end{array}}\right) &{} \ldots &{} \left( {\begin{array}{c}p_{n}\\ 1, 1\end{array}}\right) \left( {\begin{array}{c}q_{n}\\ 1\end{array}}\right) \\ \vdots &{} \ddots &{} \vdots \\ \left( {\begin{array}{c}p_{1}\\ \underbrace{1, \ldots , 1}_k\end{array}}\right) \left( {\begin{array}{c}q_{1}\\ 1\end{array}}\right) &{} \ldots &{} \left( {\begin{array}{c}p_{n}\\ \underbrace{1, \ldots , 1}_k\end{array}}\right) \left( {\begin{array}{c}q_{n}\\ 1\end{array}}\right) \end{pmatrix} \cdot \begin{pmatrix} \lambda _{1, k}{\widetilde{\lambda }}_{1, l}\\ \lambda _{2, k}{\widetilde{\lambda }}_{2, l}\\ \vdots \\ \lambda _{n, k}{\widetilde{\lambda }}_{n, l} \end{pmatrix} = \begin{pmatrix} 0\\ 0\\ \vdots \\ 0 \end{pmatrix}. \end{aligned}$$

Here we note that, as the \(p_i\)’s are all different positive integers, it follows that \(p_i\ge n\) for some i, and hence the first n rows of the matrix are not identically zero, as \(k>n-1\). It is straightforward to verify that the first n rows of the above matrix equation are equivalent to

$$\begin{aligned} \begin{pmatrix} p_{1} &{} \ldots &{} p_{n}\\ p_{1}^2 &{} \ldots &{} p_{n}^{2} \\ \vdots &{} \ddots &{} \vdots \\ p_{1}^{n} &{} \ldots &{} p_{n}^{n} \end{pmatrix} \cdot \begin{pmatrix} q_1\lambda _{1, k}{\widetilde{\lambda }}_{1, l}\\ q_2\lambda _{2, k}{\widetilde{\lambda }}_{2, l}\\ \vdots \\ q_n\lambda _{n, k}{\widetilde{\lambda }}_{n, l} \end{pmatrix} = \begin{pmatrix} 0\\ 0\\ \vdots \\ 0 \end{pmatrix}. \end{aligned}$$

Since this is a Vandermonde type matrix with different \(p_i\)’s, the only solution of this homogeneous linear system is the zero vector, i.e., \( q_i\lambda _{i, k}{\widetilde{\lambda }}_{i, l}= 0\) for all \(i=1, \ldots , n\). This contradicts our assumption that there is some \(i'\) for which \(\lambda _{i', k}\ne 0\) and \({\widetilde{\lambda }}_{i', l}\ne 0\) (here \(q_{i'}\ne 0\), as \(q_i\ne 0\) for all \(i\in \{1, \dots , n\}\)).
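The nonsingularity fact behind this step, that a Vandermonde-type matrix built from distinct positive integers has nonzero determinant, admits a quick numeric sanity check. The sketch below (our illustration, with arbitrary sample exponents) uses exact rational elimination from the standard library.

```python
# Sanity check (illustrative, not part of the proof): for distinct positive
# integers p_1, ..., p_n the matrix (p_i^j)_{j=1..n} is nonsingular, so the
# homogeneous system above forces the zero vector.
from fractions import Fraction

def det(m):
    # exact Gaussian elimination over the rationals
    m = [[Fraction(a) for a in row] for row in m]
    n, d = len(m), Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            d = -d
        d *= m[c][c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            m[r] = [a - f * b for a, b in zip(m[r], m[c])]
    return d

p = [1, 2, 5, 7]                       # any distinct positive integers
V = [[pi ** j for pi in p] for j in range(1, len(p) + 1)]
assert det(V) != 0                     # only solution of V v = 0 is v = 0
```

The same check applies verbatim to the later cases, where the nodes of the Vandermonde matrix are the products \(p_iq_i\) instead of the \(p_i\) themselves, provided those products are pairwise distinct.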

Case 2. If \(k=l\), then we compute the coefficients of the terms

$$\begin{aligned} (d(x))^{2j} (d^{k-j}(x))^2 \qquad (j=0,\dots , k-1). \end{aligned}$$
(6)

If \(j<\dfrac{k}{2}\), then this term can be obtained from the expansion of

$$\begin{aligned}\lambda _{i, k}{\widetilde{\lambda }}_{i, l}d^{k}(x^{p_{i}})d^{k}(x^{q_{i}})\end{aligned}$$

in only one way. Namely, by splitting \(d^{k}(x^{p_{i}})\) and \(d^{k}(x^{q_{i}})\) into \(j+1\) parts. Then the corresponding coefficients are

$$\begin{aligned} \sum _{i=1}^n\lambda _{i, k}{\widetilde{\lambda }}_{i, l}\left( \left( {\begin{array}{c}k\\ \underbrace{1,\dots , 1}_{j}\end{array}}\right) \frac{1}{j!}\right) ^2\left( {\begin{array}{c}p_i\\ \underbrace{1,\dots ,1}_{j+1}\end{array}}\right) \left( {\begin{array}{c}q_i\\ \underbrace{1,\dots ,1}_{j+1}\end{array}}\right) =0. \end{aligned}$$

Since each of the terms contains \(\Big (\left( {\begin{array}{c}k\\ \underbrace{1,\dots , 1}_{j}\end{array}}\right) \dfrac{1}{j!}\Big )^2=\left( {\begin{array}{c}k\\ j\end{array}}\right) ^2\), this can be eliminated from the above equations. These equations for \(j=0,\dots , \lceil k/2\rceil -1\) can be written in the following matrix equation

$$\begin{aligned} \begin{pmatrix} \left( {\begin{array}{c}p_{1}\\ 1\end{array}}\right) \left( {\begin{array}{c}q_{1}\\ 1\end{array}}\right) &{} \ldots &{} \left( {\begin{array}{c}p_{n}\\ 1\end{array}}\right) \left( {\begin{array}{c}q_{n}\\ 1\end{array}}\right) \\ \left( {\begin{array}{c}p_{1}\\ 1,1\end{array}}\right) \left( {\begin{array}{c}q_{1}\\ 1,1\end{array}}\right) &{} \ldots &{} \left( {\begin{array}{c}p_{n}\\ 1,1\end{array}}\right) \left( {\begin{array}{c}q_{n}\\ 1, 1\end{array}}\right) \\ \vdots &{} \ddots &{} \vdots \\ \left( {\begin{array}{c}p_{1}\\ \underbrace{1, \ldots , 1}_{\lceil k/2\rceil -1}\end{array}}\right) \left( {\begin{array}{c}q_{1}\\ \underbrace{1, \ldots , 1}_{\lceil k/2\rceil -1}\end{array}}\right) &{} \ldots &{} \left( {\begin{array}{c}p_{n}\\ \underbrace{1, \ldots , 1}_{\lceil k/2\rceil -1}\end{array}}\right) \left( {\begin{array}{c}q_{n}\\ \underbrace{1, \ldots , 1}_{\lceil k/2\rceil -1}\end{array}}\right) \end{pmatrix} \cdot \begin{pmatrix} \lambda _{1, k}{\widetilde{\lambda }}_{1, k}\\ \lambda _{2, k}{\widetilde{\lambda }}_{2, k}\\ \vdots \\ \lambda _{n, k}{\widetilde{\lambda }}_{n, k} \end{pmatrix} = \begin{pmatrix} 0\\ 0\\ \vdots \\ 0 \end{pmatrix}. \end{aligned}$$

Here we note that, as \(p_i, q_i\) are all different positive integers, it follows that \(\max _i \left\{ p_i,q_i\right\} \ge 2n\), and hence the first \(n'=\min (n, \lceil k/2\rceil -1)\) rows of the matrix are not identically zero, as \(k>n-1\). It is straightforward to verify that the first \(n'\) rows of the above matrix equation are equivalent to

$$\begin{aligned} \begin{pmatrix} p_1q_1&{} \ldots &{}p_nq_n\\ p_{1}^2q_1^2 &{} \ldots &{} p_{n}^{2}q_n^2 \\ \vdots &{} \ddots &{} \vdots \\ p_{1}^{n'}q_1^{n'} &{} \ldots &{} p_{n}^{n'}q_n^{n'} \end{pmatrix} \cdot \begin{pmatrix} \lambda _{1, k}{\widetilde{\lambda }}_{1, k}\\ \lambda _{2, k}{\widetilde{\lambda }}_{2, k}\\ \vdots \\ \lambda _{n, k}{\widetilde{\lambda }}_{n, k} \end{pmatrix} = \begin{pmatrix} 0\\ 0\\ \vdots \\ 0 \end{pmatrix}. \end{aligned}$$

Note that if \(n'=n\), then we are done with a Vandermonde matrix argument similar to the one used in Case 1. So from now on we assume that \(n'=\lceil k/2\rceil -1\). This also means that \(k<2n\), thus the maximal order of the corresponding derivations is at most \(2n-1\).

If \(j\ge \dfrac{k}{2}\), then the term \((d(x))^{2j} (d^{k-j}(x))^2\) can be obtained from the expansion of

$$\begin{aligned}\lambda _{i, k}{\widetilde{\lambda }}_{i, l}d^{k}(x^{p_{i}})d^{k}(x^{q_{i}})\end{aligned}$$

in three ways. One is as above, when we split both \(d^{k}(x^{p_{i}})\) and \(d^{k}(x^{q_{i}})\) into \(j+1\) parts. Another one is when \(d^{k}(x^{p_{i}})\) is split into k parts, giving \((d(x))^k\), and \(d^{k}(x^{q_{i}})\) provides \((d^{k-j}(x))^2(d(x))^{2j-k}\). The third one is obtained by interchanging the roles of \(p_i\) and \(q_i\).

Then the corresponding coefficients are

$$\begin{aligned}&\sum _{i=1}^n\lambda _{i, k}{\widetilde{\lambda }}_{i, k}\Big (\left( {\begin{array}{c}k\\ \underbrace{1,\dots , 1}_{j}\end{array}}\right) \frac{1}{(j!)}\Big )^2\left( {\begin{array}{c}p_i\\ \underbrace{1,\dots ,1}_{j+1}\end{array}}\right) \left( {\begin{array}{c}q_i\\ \underbrace{1,\dots ,1}_{j+1}\end{array}}\right) \\&\quad +\sum _{i=1}^n\lambda _{i, k}{\widetilde{\lambda }}_{i, k} \left( {\begin{array}{c}k\\ \underbrace{1,\dots , 1}_{k}\end{array}}\right) \frac{1}{(k!)}\left( {\begin{array}{c}k\\ \underbrace{1,\dots , 1}_{2j-k}, k-j, k-j\end{array}}\right) \frac{1}{2}\frac{1}{(2j-k)!}\\&\quad \times \Big (\left( {\begin{array}{c}p_i\\ \underbrace{1,\dots ,1}_{k}\end{array}}\right) \left( {\begin{array}{c}q_i\\ \underbrace{1,\dots ,1}_{2j-k+2}\end{array}}\right) + \left( {\begin{array}{c}p_i\\ \underbrace{1,\dots ,1}_{2j-k+2}\end{array}}\right) \left( {\begin{array}{c}q_i\\ \underbrace{1,\dots ,1}_{k}\end{array}}\right) \Big )=0. \end{aligned}$$

First we show that the second sum has to vanish for all \(j=\lceil k/2\rceil , \dots , k-1\). In such cases, the coefficients

$$\begin{aligned}\left( {\begin{array}{c}k\\ \underbrace{1,\dots , 1}_{k}\end{array}}\right) \frac{1}{k!}\left( {\begin{array}{c}k\\ \underbrace{1,\dots , 1}_{2j-k}, j\end{array}}\right) \frac{1}{2}\frac{1}{(2j-k)!}\end{aligned}$$

are the same in each summand, so it is enough to show that

$$\begin{aligned} \sum _{i=1}^n\lambda _{i, k}{\widetilde{\lambda }}_{i, k} \Big (\left( {\begin{array}{c}p_i\\ \underbrace{1,\dots ,1}_{k}\end{array}}\right) \left( {\begin{array}{c}q_i\\ \underbrace{1,\dots ,1}_{2j-k+2}\end{array}}\right) + \left( {\begin{array}{c}p_i\\ \underbrace{1,\dots ,1}_{2j-k+2}\end{array}}\right) \left( {\begin{array}{c}q_i\\ \underbrace{1,\dots ,1}_{k}\end{array}}\right) \Big )=0.\end{aligned}$$
(7)

This clearly holds, since the term \((d(x))^{2j+1}d^{2k-2j-1}(x)\) in the expansion of \(d^k(x^{p_i}) d^k(x^{q_i})\) for \(j=\lceil k/2\rceil , \dots , k-1\) can arise in exactly two ways. Either \((d(x))^k\) stems from \(d^k(x^{p_i})\) and \((d(x))^{2j-k+1}\cdot d^{2k-2j-1}(x)\) stems from \(d^k(x^{q_i})\), or conversely, with the roles of \(p_i\) and \(q_i\) interchanged. Hence we get

$$\begin{aligned}&\sum _{i=1}^n\lambda _{i, k}{\widetilde{\lambda }}_{i, k}\left( {\begin{array}{c}k\\ \underbrace{1,\dots ,1}_{2j-k+1}\end{array}}\right) \cdot \frac{1}{(2j-k+1)!}\\&\quad \times \Big (\left( {\begin{array}{c}p_i\\ \underbrace{1,\dots ,1}_{k}\end{array}}\right) \left( {\begin{array}{c}q_i\\ \underbrace{1,\dots ,1}_{2j-k+2}\end{array}}\right) + \left( {\begin{array}{c}p_i\\ \underbrace{1,\dots ,1}_{2j-k+2}\end{array}}\right) \left( {\begin{array}{c}q_i\\ \underbrace{1,\dots ,1}_{k}\end{array}}\right) \Big )=0, \end{aligned}$$

which is equivalent to Eq. (7). Thus, for all \(j=\lceil k/2\rceil , \dots , k-1\) it follows that

$$\begin{aligned} \sum _{i=1}^n\lambda _{i, k}{\widetilde{\lambda }}_{i, k}\Big (\left( {\begin{array}{c}k\\ \underbrace{1,\dots , 1}_{j}\end{array}}\right) \frac{1}{(j!)}\Big )^2\left( {\begin{array}{c}p_i\\ \underbrace{1,\dots ,1}_{j+1}\end{array}}\right) \left( {\begin{array}{c}q_i\\ \underbrace{1,\dots ,1}_{j+1}\end{array}}\right) =0. \end{aligned}$$

This implies that

$$\begin{aligned} \sum _{i=1}^n\lambda _{i, k}{\widetilde{\lambda }}_{i, k}\left( {\begin{array}{c}p_i\\ \underbrace{1,\dots ,1}_{j+1}\end{array}}\right) \left( {\begin{array}{c}q_i\\ \underbrace{1,\dots ,1}_{j+1}\end{array}}\right) =0 \end{aligned}$$

holds for all \(j=0,\dots , k-1\). Thus we get

$$\begin{aligned} \begin{pmatrix} \left( {\begin{array}{c}p_{1}\\ 1\end{array}}\right) \left( {\begin{array}{c}q_{1}\\ 1\end{array}}\right) &{} \ldots &{} \left( {\begin{array}{c}p_{n}\\ 1\end{array}}\right) \left( {\begin{array}{c}q_{n}\\ 1\end{array}}\right) \\ \left( {\begin{array}{c}p_{1}\\ 1,1\end{array}}\right) \left( {\begin{array}{c}q_{1}\\ 1,1\end{array}}\right) &{} \ldots &{} \left( {\begin{array}{c}p_{n}\\ 1,1\end{array}}\right) \left( {\begin{array}{c}q_{n}\\ 1, 1\end{array}}\right) \\ \vdots &{} \ddots &{} \vdots \\ \left( {\begin{array}{c}p_{1}\\ \underbrace{1, \ldots , 1}_{k}\end{array}}\right) \left( {\begin{array}{c}q_{1}\\ \underbrace{1, \ldots , 1}_{k}\end{array}}\right) &{} \ldots &{} \left( {\begin{array}{c}p_{n}\\ \underbrace{1, \ldots , 1}_{k}\end{array}}\right) \left( {\begin{array}{c}q_{n}\\ \underbrace{1, \ldots , 1}_{k}\end{array}}\right) \end{pmatrix} \cdot \begin{pmatrix} \lambda _{1, k}{\widetilde{\lambda }}_{1, k}\\ \lambda _{2, k}{\widetilde{\lambda }}_{2, k}\\ \vdots \\ \lambda _{n, k}{\widetilde{\lambda }}_{n, k} \end{pmatrix} = \begin{pmatrix} 0\\ 0\\ \vdots \\ 0 \end{pmatrix}, \end{aligned}$$

where the first n rows of the matrix are not identically zero, as \(p_i\) and \(q_i\) are different, hence there is an index \(i'\) such that \(p_{i'}\ge n\) and \(q_{i'}\ge n\). Thus the system consisting of the first n rows is equivalent to

$$\begin{aligned} \begin{pmatrix} p_1q_1&{} \ldots &{}p_nq_n\\ p_{1}^2q_1^2 &{} \ldots &{} p_{n}^{2}q_n^2 \\ \vdots &{} \ddots &{} \vdots \\ p_{1}^{n}q_1^{n} &{} \ldots &{} p_{n}^{n}q_n^{n} \end{pmatrix} \cdot \begin{pmatrix} \lambda _{1, k}{\widetilde{\lambda }}_{1, k}\\ \lambda _{2, k}{\widetilde{\lambda }}_{2, k}\\ \vdots \\ \lambda _{n, k}{\widetilde{\lambda }}_{n, k} \end{pmatrix} = \begin{pmatrix} 0\\ 0\\ \vdots \\ 0 \end{pmatrix}. \end{aligned}$$

By the usual Vandermonde argument as in Case 1, the only solution of this homogeneous linear system is the zero vector, i.e., \( \lambda _{i, k}{\widetilde{\lambda }}_{i, k}= 0\) for all \(i=1, \ldots , n\). This contradicts our assumption that there is some \(i'\) for which \(\lambda _{i', k}\ne 0\) and \({\widetilde{\lambda }}_{i', k}\ne 0\).

Case 3. If \(k>l\), then we prove by induction on j that

$$\begin{aligned} \sum _{i=1}^n \lambda _{i,k}{\widetilde{\lambda }}_{i,l}p_i^{j+1-s}q_i^{s}=0 \end{aligned}$$

holds for every \(j=0,\dots , n-1\) and every \(s=0,\dots , j\).

In each step we consider how \((d(x))^{j}d^{k-j}(x)d^{l}(x)\) can be obtained from the expansion of \(d^{k}(x^{p_i})d^{l}(x^{q_i})\). There are three possible ways in which this term can arise.

  1. (a)

    \((d(x))^{j}d^{k-j}(x)\) stems from \(d^{k}(x^{p_i})\) and \(d^{l}(x)\) stems from \(d^{l}(x^{q_i})\). This can happen for every \(j\in \{0, \dots , k\}\). In this case the coefficient of \((d(x))^{j}d^{k-j}(x)d^{l}(x)\) is

    $$\begin{aligned} \sum _{i=1}^n \lambda _{i,k}{\widetilde{\lambda }}_{i,l}\left( {\begin{array}{c}k\\ \underbrace{1,\dots ,1}_{j}\end{array}}\right) \frac{1}{j!}\left( {\begin{array}{c}p_i\\ \underbrace{1,\dots ,1}_{j+1}\end{array}}\right) \left( {\begin{array}{c}q_i\\ 1\end{array}}\right) \\ =\sum _{i=1}^n \lambda _{i,k}{\widetilde{\lambda }}_{i,l}\left( {\begin{array}{c}k\\ j\end{array}}\right) \left( {\begin{array}{c}p_i\\ \underbrace{1,\dots ,1}_{j+1}\end{array}}\right) q_i. \end{aligned}$$
  2. (b)

    \(d^{k-j}(x)d^{l}(x)(d(x))^{j-l}\) stems from \(d^{k}(x^{p_i})\) and \((d(x))^{l}\) stems from \(d^{l}(x^{q_i})\). This can happen if \(j\ge l\). In this case the coefficient of \((d(x))^{j}d^{k-j}(x)d^{l}(x)\) is

    $$\begin{aligned}&\sum _{i=1}^n \lambda _{i,k}{\widetilde{\lambda }}_{i,l}\left( {\begin{array}{c}k\\ k-j,l,\underbrace{1,\dots ,1}_{j-l}\end{array}}\right) \frac{1}{(j-l)!}\left( {\begin{array}{c}l\\ \underbrace{1,\dots ,1}_{l}\end{array}}\right) \frac{1}{l!} \left( {\begin{array}{c}p_i\\ \underbrace{1,\dots ,1}_{j-l+2}\end{array}}\right) \left( {\begin{array}{c}q_i\\ \underbrace{1,\dots ,1}_{l}\end{array}}\right) \\&=\sum _{i=1}^n \lambda _{i,k}{\widetilde{\lambda }}_{i,l}\left( {\begin{array}{c}k\\ k-j,j-l,l\end{array}}\right) \left( {\begin{array}{c}p_i\\ \underbrace{1,\dots ,1}_{j-l+2}\end{array}}\right) \left( {\begin{array}{c}q_i\\ \underbrace{1,\dots ,1}_{l}\end{array}}\right) .\end{aligned}$$
  3. (c)

    \(d^{l}(x)(d(x))^{k-l}\) stems from \(d^{k}(x^{p_i})\) and \((d(x))^{l-(k-j)}d^{k-j}(x)\) stems from \(d^{l}(x^{q_i})\). This can happen if \(l\ge k-j\). In this case the coefficient of \((d(x))^{j}d^{k-j}(x)d^{l}(x)\) is

    $$\begin{aligned}&\sum _{i=1}^n \lambda _{i,k}{\widetilde{\lambda }}_{i,l}\left( {\begin{array}{c}k\\ k-l,\underbrace{1,\dots ,1}_{l}\end{array}}\right) \frac{1}{l!}\left( {\begin{array}{c}l\\ k-j,\underbrace{1,\dots ,1}_{l-(k-j)}\end{array}}\right) \frac{1}{(l-(k-j))!}\\&\qquad \times \left( {\begin{array}{c}p_i\\ \underbrace{1,\dots ,1}_{l+1}\end{array}}\right) \left( {\begin{array}{c}q_i\\ \underbrace{1,\dots ,1}_{l-(k-j)+1}\end{array}}\right) \\&\quad =\sum _{i=1}^n \lambda _{i,k}{\widetilde{\lambda }}_{i,l}\left( {\begin{array}{c}k\\ k-l,k-j,l-(k-j)\end{array}}\right) \left( {\begin{array}{c}p_i\\ \underbrace{1,\dots ,1}_{l+1}\end{array}}\right) \left( {\begin{array}{c}q_i\\ \underbrace{1,\dots ,1}_{l-(k-j)+1}\end{array}}\right) .\end{aligned}$$

    For \(j=0\) (and hence \(s=0\)) only the first composition contributes. This means that

    $$\begin{aligned} \sum _{i=1}^n \lambda _{i,k}{\widetilde{\lambda }}_{i,l}p_iq_i=0. \end{aligned}$$

    So the statement holds for \(j=0\). Now we assume that

    $$\begin{aligned} \sum _{i=1}^n \lambda _{i,k}{\widetilde{\lambda }}_{i,l}p_i^{j'+1-s}q_i^{s}=0 \end{aligned}$$

    holds for every \(j'=0,\dots , j-1\) and every \(s=0,\dots , j'\). We prove that

    $$\begin{aligned} \sum _{i=1}^n \lambda _{i,k}{\widetilde{\lambda }}_{i,l}p_i^{j+1-s}q_i^{s}=0 \end{aligned}$$

    holds for every \(s=0,\dots , j\), as well. In general, only some of the previous compositions are possible for a given j, but the following argument works in all cases; we present it in the case when all compositions discussed above appear in the expansion. Thus, equating the coefficient of \((d(x))^{j}d^{k-j}(x)d^{l}(x)\) to zero, we get

    $$\begin{aligned} 0&=\sum _{i=1}^n \lambda _{i,k}{\widetilde{\lambda }}_{i,l}\left[ \left( {\begin{array}{c}k\\ j\end{array}}\right) \left( {\begin{array}{c}p_i\\ \underbrace{1,\dots ,1}_{j+1}\end{array}}\right) q_i \right. \\&\quad \left. + \left( {\begin{array}{c}k\\ k-j,j-l,l\end{array}}\right) \left( {\begin{array}{c}p_i\\ \underbrace{1,\dots ,1}_{j-l+2}\end{array}}\right) \left( {\begin{array}{c}q_i\\ \underbrace{1,\dots ,1}_{l}\end{array}}\right) \right. \\&\quad \left. +\left( {\begin{array}{c}k\\ k-l,k-j,l-(k-j)\end{array}}\right) \left( {\begin{array}{c}p_i\\ \underbrace{1,\dots ,1}_{l+1}\end{array}}\right) \left( {\begin{array}{c}q_i\\ \underbrace{1,\dots ,1}_{l-(k-j)+1}\end{array}}\right) \right] . \end{aligned}$$

    By the inductive hypothesis and the fact that \(j<n\le \min \left\{ p_{i'},q_{i'}\right\} \) for some \(i'\), this is equivalent to

    $$\begin{aligned} 0&= \sum _{i=1}^n \lambda _{i,k}{\widetilde{\lambda }}_{i,l}\left( \left( {\begin{array}{c}k\\ j\end{array}}\right) p_i^{j+1}q_i+ \left( {\begin{array}{c}k\\ k-j,j-l,l\end{array}}\right) p_i^{j-l+2}q_i^{l} \right. \\&\quad \left. +\left( {\begin{array}{c}k\\ k-l,k-j,l-(k-j)\end{array}}\right) p_i^{l+1}q_i^{l-(k-j)+1}\right) . \end{aligned}$$

    Note that the expressions \(p_i^{j+1-s}q_i^{s}\) for \(s=0, \dots , j\) can be interchanged in the following sense. By C(ii), \(p_i+q_i=N\), thus we have that \(p_i^{j+1-s}q_i^{s}=Np_i^{j-s}q_i^{s}-p_i^{j-s}q_i^{s+1}\). As

    $$\begin{aligned} \sum _{i=1}^n\lambda _{i,k}{\widetilde{\lambda }}_{i,l}p_i^{j-s}q_i^{s}=0 \end{aligned}$$

    for every \(s=0,\dots , j-1\) by the inductive hypothesis, the term \(Np_i^{j-s}q_i^{s}\) can be eliminated. After several repetitions of this step we get that

    $$\begin{aligned} 0&=\sum _{i=1}^n \lambda _{i,k}{\widetilde{\lambda }}_{i,l}\left( \left( {\begin{array}{c}k\\ j\end{array}}\right) p_i^{j+1}q_i\pm \left( {\begin{array}{c}k\\ k-j,j-l,l\end{array}}\right) p_i^{j+1}q_i \right. \\&\quad \left. \pm \left( {\begin{array}{c}k\\ k-l,k-j,l-(k-j)\end{array}}\right) p_i^{j+1}q_i\right) . \end{aligned}$$

    We also note that using this interchange rule and the inductive hypothesis it is clear that

    $$\begin{aligned} \sum _{i=1}^n \lambda _{i,k}{\widetilde{\lambda }}_{i,l}p_i^{j+1-s}q_i^{s}=0 \end{aligned}$$

    for any \(s=0,\dots , j\) is equivalent to

    $$\begin{aligned} \sum _{i=1}^n \lambda _{i,k}{\widetilde{\lambda }}_{i,l}p_i^{j+1}q_i=0. \end{aligned}$$

    Therefore, to finish the proof it is enough to show that

    $$\begin{aligned} \left( {\begin{array}{c}k\\ j\end{array}}\right) \pm \left( {\begin{array}{c}k\\ k-j,j-l,l\end{array}}\right) \pm \left( {\begin{array}{c}k\\ k-l,k-j,l-(k-j)\end{array}}\right) \ne 0. \end{aligned}$$

    This is equivalent to verifying that

    $$\begin{aligned} \frac{1}{j!}\pm \frac{1}{(j-l)!(l!)} \pm \frac{1}{(k-l)!((l-(k-j))!)}\ne 0. \end{aligned}$$

    Multiplying by \(j!\), this leads to

    $$\begin{aligned} 1\pm \left( {\begin{array}{c}j\\ l\end{array}}\right) \pm \left( {\begin{array}{c}j\\ k-l\end{array}}\right) \ne 0. \end{aligned}$$

    It is straightforward to show, using the growth of \(\left( {\begin{array}{c}n\\ k\end{array}}\right) \) in k (for \(k\le n/2\)), that one term dominates the others if \(l\ne k-l\) and \(l\ne j-(k-l)\). Thus in these cases this (weighted) sum is nonzero. If \(l= k-l\) or \(l= j-(k-l)\), then either the two binomial terms have the same sign, and hence the sum is nonzero, or their signs differ and they cancel each other, so the sum is 1, which is nonzero. Summarizing, we get that

    $$\begin{aligned} \left( {\begin{array}{c}k\\ j\end{array}}\right) \pm \left( {\begin{array}{c}k\\ k-j,j-l,l\end{array}}\right) \pm \left( {\begin{array}{c}k\\ k-l,k-j,l-(k-j)\end{array}}\right) \ne 0 \end{aligned}$$

    and hence

    $$\begin{aligned} \sum _{i=1}^n \lambda _{i,k}{\widetilde{\lambda }}_{i,l}p_i^{j+1}q_i=0,\end{aligned}$$
    (8)

    which is equivalent to the inductive statement for j, as we noted above. Thus Eq. (8) holds for every \(j=0, \dots , n-1\). In matrix form this means that

    $$\begin{aligned} \begin{pmatrix} 1&{} \ldots &{}1\\ p_{1} &{} \ldots &{} p_{n} \\ \vdots &{} \ddots &{} \vdots \\ p_{1}^{n-1} &{} \ldots &{} p_{n}^{n-1} \end{pmatrix} \cdot \begin{pmatrix} p_1q_1\lambda _{1, k}{\widetilde{\lambda }}_{1, l}\\ p_2q_2\lambda _{2, k}{\widetilde{\lambda }}_{2, l}\\ \vdots \\ p_nq_n\lambda _{n, k}{\widetilde{\lambda }}_{n, l} \end{pmatrix} = \begin{pmatrix} 0\\ 0\\ \vdots \\ 0 \end{pmatrix}. \end{aligned}$$

    Since this matrix is a Vandermonde type matrix with different \(p_i\)’s, as in Cases 1 and 2, the only solution of this homogeneous linear system is the zero vector, i.e., \( p_iq_i\lambda _{i, k}{\widetilde{\lambda }}_{i, l}= 0\) for all \(i=1, \ldots , n\). This contradicts our assumption that there is some \(i'\) for which \(\lambda _{i', k}\ne 0\) and \({\widetilde{\lambda }}_{i', l}\ne 0\) (and \(p_{i}\ne 0\), \(q_i\ne 0\) by C(i)). This finishes the proof of the theorem: the order of every derivation involved in (1) is at most \(n-1\).

\(\square \)
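The interchange rule used in Case 3, namely that \(p+q=N\) implies \(p^{a+1}q^{b}=Np^{a}q^{b}-p^{a}q^{b+1}\), is elementary arithmetic; the following brute-force check over illustrative values (our sketch, with an arbitrary choice of N) confirms it.

```python
# Check the interchange rule: if p + q = N, then
#   p^(a+1) q^b = N p^a q^b - p^a q^(b+1),
# which trades a factor of p for a factor of q modulo lower-degree terms.
N = 9                                  # illustrative value
for p in range(1, N):
    q = N - p
    for a in range(5):
        for b in range(5):
            assert p ** (a + 1) * q ** b == N * p ** a * q ** b - p ** a * q ** (b + 1)
```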

Remark 12

The upper bound appearing in the above theorem is sharp. To see this, let \(n, N\) be positive integers, \(\lambda _{1}, \ldots , \lambda _{n}\in \mathbb {C}\) and let \(f:\mathbb {F}\rightarrow \mathbb {C}\) be an additive function for which

$$\begin{aligned} \sum _{i=1}^{n}\lambda _{i}f(x^{i})x^{N-i}=0 \end{aligned}$$

is fulfilled for all \(x\in \mathbb {F}\). Then \(f\in {\mathscr {D}}_{n-1}(\mathbb {R})\) if and only if \(\lambda _{i}=(-1)^i \displaystyle \left( {\begin{array}{c}n\\ i\end{array}}\right) \) for all \(i=1,\dots ,n\), see [1, 3, 5].
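To make this remark concrete for \(n=2\) (our illustration, not taken from the cited sources): the coefficients \(\lambda _1=-2\), \(\lambda _2=1\) reduce the equation to \(f(x^2)-2xf(x)=0\), i.e. the Leibniz rule, which every derivation of order 1 satisfies. A quick check with the model derivation \(d=d/dt\) on polynomials:

```python
# For n = 2, lambda_1 = -2, lambda_2 = 1 the equation in Remark 12 becomes
# f(x^2) = 2x f(x), i.e. the Leibniz rule.  Verify it for f = d/dt on
# polynomials, represented as coefficient lists.

def deriv(p):
    return [i * c for i, c in enumerate(p)][1:] or [0]

def mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def trim(p):
    # drop trailing zero coefficients
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

x = [2, 0, 1, 3]                              # x(t) = 2 + t^2 + 3t^3
lhs = trim(deriv(mul(x, x)))                  # f(x^2)
rhs = trim([2 * c for c in mul(x, deriv(x))]) # 2 x f(x)
assert lhs == rhs
```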

Remark 13

The proof of Theorem 20 in all the cases is based on the fact that the matrix

$$\begin{aligned} \begin{pmatrix} \left( {\begin{array}{c}p_{1}\\ 1\end{array}}\right) \left( {\begin{array}{c}q_{1}\\ 1\end{array}}\right) &{} \ldots &{} \left( {\begin{array}{c}p_{n}\\ 1\end{array}}\right) \left( {\begin{array}{c}q_{n}\\ 1\end{array}}\right) \\ \left( {\begin{array}{c}p_{1}\\ 1,1\end{array}}\right) \left( {\begin{array}{c}q_{1}\\ 1\end{array}}\right) +\left( {\begin{array}{c}p_{1}\\ 1\end{array}}\right) \left( {\begin{array}{c}q_{1}\\ 1, 1\end{array}}\right) &{} \ldots &{} \left( {\begin{array}{c}p_{n}\\ 1\end{array}}\right) \left( {\begin{array}{c}q_{n}\\ 1, 1\end{array}}\right) +\left( {\begin{array}{c}p_{n}\\ 1, 1\end{array}}\right) \left( {\begin{array}{c}q_{n}\\ 1\end{array}}\right) \\ \vdots &{} \ddots &{} \vdots \\ \left( {\begin{array}{c}p_{1}\\ \underbrace{1, \ldots , 1}_{n-1}\end{array}}\right) \left( {\begin{array}{c}q_{1}\\ \underbrace{1, \ldots , 1}_{n -1}\end{array}}\right) &{} \ldots &{} \left( {\begin{array}{c}p_{n}\\ \underbrace{1, \ldots , 1}_{n-1}\end{array}}\right) \left( {\begin{array}{c}q_{n}\\ \underbrace{1, \ldots , 1}_{n-1}\end{array}}\right) \end{pmatrix} \end{aligned}$$

has rank n, though it is far from trivial to find the proper sub-matrix which verifies this.

On the other hand, the situation is much more complicated if the maximal order k of the \(D_i\) and the maximal order l of the \(\widetilde{D_i}\) are not uniquely determined. Namely, there may be several different pairs \((k_i, l_i)\) with \(k_i+l_i=K\) for a constant K, where \(k_i\) is the maximal order of \(D_i\) and \(l_i\) is the maximal order of \(\widetilde{D_i}\). Then the corresponding equations can at first only be derived for subsets of the index set satisfying some nontrivial relations. In this case the first task is to show that the problem can be formulated separately for these index sets, which seems a very hard problem in full generality; once this separation is done, our method can be applied.

Theorem 20 and Remark 13 motivate the following conjecture, which we have verified for \(n\le 4\).

Conjecture 21

Let n be a positive integer, \(\mathbb {F}\subset \mathbb {C}\) be a field and \(p_{1}, \ldots , p_{n}, q_{1}, \ldots , q_{n}\) be fixed positive integers fulfilling conditions C(i)–C(iii). Assume that the additive functions \(f_{1}, \ldots , f_{n}, g_{1}, \ldots , g_{n}:\mathbb {F}\rightarrow \mathbb {C}\) satisfy Eq. (1). Then every solution is a generalized exponential polynomial function of degree at most \(n-1\) on \(\mathbb {F}^{\times }\). In particular, if

$$\begin{aligned} f_{i}(x)= D_i(x) \qquad \text {and} \qquad g_{i}(x)= {\widetilde{D}}_i(x) \qquad \left( x\in \mathbb {F}^{\times }\right) \end{aligned}$$
(9)

for each \(i=1, \ldots , n\), then the order of \(D_i, {\widetilde{D}}_i\) is at most \(n-1\).

This conjecture leads to the following more general open question.

Open Question 1

Is it true that all nonzero, additive, irreducible solutions \(f_1,\dots , f_n\) of

$$\begin{aligned} P(f_1(x^{p_1}),\dots ,f_n(x^{p_n}))=0, ~~~~\text {with }~~~ P(0,\dots , 0)=0 \end{aligned}$$

are derivations of order at most \(n-1\) or the identity function, up to a homomorphism, if \(P:\mathbb {C}^{n}\rightarrow \mathbb {C}\) is a polynomial and \(p_1,\dots , p_n\) are distinct positive integers?

Finally, we highlight some important special cases in which Theorem 20 gives the proper bound on the order of the derivations.

Corollary 22

Let n be a positive integer, \(\mathbb {F}\subset \mathbb {C}\) be a field and \(p_{1}, \ldots , p_{n}, q_{1}, \ldots , q_{n}\) be fixed positive integers fulfilling conditions C(i)–C(iii). Assume that the additive functions \(f_{1}, \ldots , f_{n}, g_{1}, \ldots , g_{n}:\mathbb {F}\rightarrow \mathbb {C}\) form an irreducible solution of Eq. (1). Then \(f_i\sim D_i\) and \(g_i\sim \widetilde{D_i}\), where \(D_i\) and \(\widetilde{D_i}\) are higher order derivations. Assume further that one of the following holds.

  1. (A)

    All \(D_i\) have the same order. This is the case when \(f_i(x)=c_if(x)\, (x\in \mathbb {F})\) for some nonzero constants \(c_i\in \mathbb {C}\), \(i\in \{1, \dots , n\}\).

  2. (B)

    All \(\widetilde{D_i}\) have the same order. This is the case when \(g_i(x)=c_ig(x)\, (x\in \mathbb {F})\) for some nonzero constants \(c_i\in \mathbb {C}\), \(i\in \{1, \dots , n\}\).

  3. (C)

    \(f_i=c_i\cdot g_i\) for all \(i\in \{1, \dots , n\}\) with some nonzero constants \(c_i\in \mathbb {C}\).

Then the order of \(D_i\) and \(\widetilde{D_i}\) is at most \(n-1\).

Proof. Theorem 14 implies that every solution \(f_i\) (resp. \(g_i\)) is an exponential polynomial of the form \(P_i\cdot m\) (resp. \(Q_i\cdot m\)), which means that \(f_i\sim D_i\) and \(g_i\sim \widetilde{D_i}\) for some derivations \(D_i\) and \(\widetilde{D_i}\).

  1. (A)

    Let k denote the common order of the \(D_i\), and let l be the maximal order of the \(\widetilde{D_i}\). Now we are in a position to apply Theorem 20.

  2. (B)

    Similarly to (A), since in Eq. (1) the roles of the parameters \(p_{1}, \ldots , p_{n}\) and \(q_{1}, \ldots , q_{n}\) are symmetric.

  3. (C)

    As the maximal degree of \(f_i\) is the same as the maximal degree of \(g_i\), and it is attained at the same index, we can apply Theorem 20.

\(\square \)

3.4.1 Special Cases of Eq. (1)

In this subsection we consider special cases of Eq. (1). All the equations we consider here are of the form

$$\begin{aligned} f_{1}(x^{p_{1}})g_{1}(x^{q_{1}})+f_{2}(x^{p_{2}})g_{2}(x^{q_{2}})=0 \qquad \left( x\in \mathbb {F}\right) . \end{aligned}$$

Here \(f_{1}, f_{2}, g_{1}, g_{2}:\mathbb {F}\rightarrow \mathbb {C}\) denote the unknown additive functions and the parameters \(p_{1}, p_{2}, q_{1}, q_{2}\) fulfill conditions C(i)–C(iii). Due to the results of the previous section, we get that

$$\begin{aligned} \begin{array}{rcl} f_{i}(x) &\sim & \lambda _{i, 0}x+\lambda _{i, 1}d_{i}(x) \\ g_{i}(x) &\sim & \mu _{i, 0}x+\mu _{i, 1}\widetilde{d_{i}}(x) \end{array} \qquad \left( x\in \mathbb {F}, \; i=1, 2\right) \end{aligned}$$

with appropriate complex constants \(\lambda _{i, j}, \mu _{i, j}\) (\(i=1, 2\), \(j=0, 1\)) and derivations \(d_{i}, \widetilde{d_{i}}:\mathbb {F}\rightarrow \mathbb {C}\) (\(i=1, 2\)).
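For readers who prefer a computational sanity check (an illustrative sketch, not part of the paper): on the polynomial ring \(\mathbb {Q}[t]\), formal differentiation \(d = d/dt\) is the prototypical derivation of order 1, so maps of the form \(x \mapsto \lambda _0 x + \lambda _1 d(x)\) are concrete instances of the solution form above. The helper names (`p_add`, `p_mul`, `d`) are ours; polynomials are represented as coefficient lists.

```python
from fractions import Fraction

def p_add(a, b):
    # coefficient-wise sum of two polynomials in Q[t]
    n = max(len(a), len(b))
    a = a + [Fraction(0)] * (n - len(a))
    b = b + [Fraction(0)] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def p_mul(a, b):
    # Cauchy product of two polynomials in Q[t]
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def d(a):
    # formal derivative d/dt: an additive map satisfying the Leibniz rule
    # (trailing zeros are not normalized; samples below avoid that edge case)
    return [k * c for k, c in enumerate(a)][1:] or [Fraction(0)]

x = [Fraction(1), Fraction(2)]                # 1 + 2t
y = [Fraction(0), Fraction(0), Fraction(3)]   # 3t^2

# additivity: d(x + y) = d(x) + d(y)
assert d(p_add(x, y)) == p_add(d(x), d(y))
# Leibniz rule: d(xy) = d(x)y + x d(y)
assert d(p_mul(x, y)) == p_add(p_mul(d(x), y), p_mul(x, d(y)))
```

Strictly speaking the corollaries below live on a field \(\mathbb {F}\subset \mathbb {C}\); the polynomial ring only serves to make the two defining identities of a derivation tangible.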

Corollary 23

Let N be a positive integer, \(\mathbb {F}\subset \mathbb {C}\) be a field and \(p, q\) be different positive integers (strictly) less than N, and assume that \(q\ne N-p\). If the additive functions \(f, g:\mathbb {F}\rightarrow \mathbb {C}\) satisfy

$$\begin{aligned} f(x^{p})f(x^{N-p})=g(x^{q})g(x^{N-q}) \qquad \left( x\in \mathbb {F}\right) , \end{aligned}$$

then

  1. (A)

    either there exist a homomorphism \(\varphi :\mathbb {C}\rightarrow \mathbb {C}\) and a derivation \(d:\mathbb {F}\rightarrow \mathbb {C}\) such that

    $$\begin{aligned} f(x)= \varphi (d(x)) \qquad \text {and} \qquad g(x)= \alpha \varphi (d(x)) \qquad \left( x\in \mathbb {F}\right) , \end{aligned}$$

    where \(\alpha =\dfrac{p(N-p)}{q(N-q)}\),

  2. (B)

    or there exists a homomorphism \(\varphi :\mathbb {F}\rightarrow \mathbb {C}\) such that

    $$\begin{aligned} f(x)= f(1)\cdot \varphi (x) \qquad \text {and} \qquad g(x)= \pm f(1)\cdot \varphi (x) \qquad \left( x\in \mathbb {F}\right) . \end{aligned}$$
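As a quick numerical sanity check of case (B) (our own sketch, not part of the corollary's proof): taking the identity homomorphism \(\varphi\) on \(\mathbb {Q}\), the functions \(f(x)=f(1)\,x\) and \(g(x)=\pm f(1)\,x\) turn both sides of the equation into \(f(1)^{2} x^{N}\). The helper `check_case_B` and the parameter choices below are illustrative.

```python
from fractions import Fraction

def check_case_B(N, p, q, c, sign, xs):
    # f(x) = c*phi(x), g(x) = sign*c*phi(x) with phi the identity
    # homomorphism on Q; both sides then reduce to c^2 * x^N.
    f = lambda x: c * x
    g = lambda x: sign * c * x
    return all(f(x**p) * f(x**(N - p)) == g(x**q) * g(x**(N - q)) for x in xs)

samples = [Fraction(3, 2), Fraction(-5, 7), Fraction(4)]
# N=5, p=1, q=2: p != q, both < N, and q != N-p = 4
assert check_case_B(5, 1, 2, Fraction(2, 3), -1, samples)
# N=7, p=2, q=4: q != N-p = 5
assert check_case_B(7, 2, 4, Fraction(1), 1, samples)
```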

Corollary 24

Let N be a positive integer, \(\mathbb {F}\subset \mathbb {C}\) be a field and \(p, q\) be different positive integers (strictly) less than N, and assume that \(q\ne N-p\). If the additive functions \(f, g:\mathbb {F}\rightarrow \mathbb {C}\) satisfy

$$\begin{aligned} f(x^{p})g(x^{N-p})= \kappa f(x^{q})g(x^{N-q}) \qquad \left( x\in \mathbb {F}\right) , \end{aligned}$$

then

  1. (A)

    if \(\kappa \notin \left\{ 1, \dfrac{p(N-p)}{q(N-q)} \right\} \), then f is identically zero,

  2. (B)

    if \(\kappa = 1\), then the only possibility is that

    $$\begin{aligned} f(x)= f(1)\cdot \psi (x) \qquad \text {and} \qquad g(x)= f(1)\cdot \psi (x) \qquad \left( x\in \mathbb {F}\right) , \end{aligned}$$

    where \(\psi :\mathbb {F}\rightarrow \mathbb {C}\) is a homomorphism,

  3. (C)

    if \(\kappa =\dfrac{p(N-p)}{q(N-q)}\), then there exists a homomorphism \(\varphi :\mathbb {C}\rightarrow \mathbb {C}\) and derivations \(d_1, d_2 :\mathbb {F}\rightarrow \mathbb {C}\) such that

    $$\begin{aligned} f(x)= \varphi (d_1(x)) \qquad \text {and} \qquad g(x)= \varphi (d_2(x)) \qquad \left( x\in \mathbb {F}\right) . \end{aligned}$$

Both results imply that the nonzero additive solutions of the equation

$$\begin{aligned} f(x^{p})f(x^{N-p})=\kappa f(x^{q})f(x^{N-q}) \qquad \left( x\in \mathbb {F}\right) \end{aligned}$$

are derivations of order 1 (if \(\kappa = \dfrac{p(N-p)}{q(N-q)}\)) or the identity function (if \(\kappa =1\)), up to a homomorphism.
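The \(\kappa\)-case can be verified directly for the simplest derivation of order 1: for \(d = d/dt\) one has \(d(t^{k}) = k\,t^{k-1}\), hence \(d(x^{p})\,d(x^{N-p}) = p(N-p)\,x^{N-2}d(x)^{2}\) at \(x = t\), so the two sides differ exactly by the factor \(\kappa = p(N-p)/(q(N-q))\). Below is a minimal sketch in exact rational arithmetic; the function names are ours and the sample parameters are illustrative.

```python
from fractions import Fraction

def d_power(t, k):
    # the derivation d = d/dt applied to t^k: d(t^k) = k * t^(k-1)
    return k * t ** (k - 1)

def check_kappa(N, p, q, ts):
    # verify f(x^p) f(x^{N-p}) = kappa * f(x^q) f(x^{N-q}) for f = d
    kappa = Fraction(p * (N - p), q * (N - q))
    return all(
        d_power(t, p) * d_power(t, N - p)
        == kappa * d_power(t, q) * d_power(t, N - q)
        for t in ts
    )

samples = [Fraction(3, 2), Fraction(-7, 4), Fraction(5)]
# N=6, p=1, q=2 (q != N-p = 5): kappa = 5/8
assert check_kappa(6, 1, 2, samples)
# N=9, p=2, q=4 (q != N-p = 7): kappa = 14/20 = 7/10
assert check_kappa(9, 2, 4, samples)
```

The pointwise check only exercises the power rule \(d(t^{k}) = k t^{k-1}\), which is all the identity above uses; it is of course not a substitute for the corollaries' proofs.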