1 Introduction and preliminaries

Perhaps G. Ancochea was the first to study additive mappings from one ring into another that also fulfill a ‘polynomial equation’. More concretely, in [2] he described those additive functions that preserve squares. Later, these results were strengthened by (among others) Kaplansky [14] and Jacobson–Rickart [13].

Recall that if \(R, R'\) are rings, then the mapping \(\varphi :R\rightarrow R'\) is called a homomorphism if

$$\begin{aligned} \varphi (a+b)=\varphi (a)+\varphi (b) \qquad \left( a, b\in R\right) \end{aligned}$$

and

$$\begin{aligned} \varphi (ab)=\varphi (a)\varphi (b) \qquad \left( a, b\in R\right) . \end{aligned}$$

Furthermore, the function \(\varphi :R\rightarrow R'\) is an anti-homomorphism if

$$\begin{aligned} \varphi (a+b)=\varphi (a)+\varphi (b) \qquad \left( a, b\in R\right) \end{aligned}$$

and

$$\begin{aligned} \varphi (ab)=\varphi (b)\varphi (a) \qquad \left( a, b\in R\right) . \end{aligned}$$

Henceforth, \({\mathbb {N}}\) will denote the set of positive integers. Let \(n\in {\mathbb {N}}, n\ge 2\) be fixed. The function \(\varphi :R\rightarrow R'\) is called an n-Jordan homomorphism if

$$\begin{aligned} \varphi (a+b)=\varphi (a)+\varphi (b) \qquad \left( a, b\in R\right) \end{aligned}$$

and

$$\begin{aligned} \varphi (a^{n})=\varphi (a)^{n} \qquad \left( a\in R\right) . \end{aligned}$$

In case \(n=2\) we simply speak of Jordan homomorphisms. It was G. Ancochea who first dealt with the connection between Jordan homomorphisms and homomorphisms, see [2]. These results were generalized and extended in several ways, see for instance [13, 14, 23]. In [12] I.N. Herstein showed that if \(\varphi \) is a Jordan homomorphism of a ring R onto a prime ring \(R'\) of characteristic different from 2 and 3, then \(\varphi \) is either a homomorphism or an anti-homomorphism.

Besides homomorphisms, derivations also play a key role in the theory of rings and fields. Concerning this notion, we will follow [15, Chapter 14].

Let Q be a ring and let P be a subring of Q. A function \(d:P\rightarrow Q\) is called a derivation if it is additive, i.e.

$$\begin{aligned} d(x+y)=d(x)+d(y) \quad \left( x, y\in P\right) \end{aligned}$$

and also satisfies the so-called Leibniz rule, i.e. equation

$$\begin{aligned} d(xy)=d(x)y+xd(y) \quad \left( x, y\in P\right) . \end{aligned}$$
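For illustration, here is a minimal computational sketch (ours, not part of the original text) of the two defining properties. It assumes sympy and uses the classical example \(P=Q={\mathbb {Q}}(t)\) with d the usual derivative; the sample elements below are our own choice.

```python
# A minimal sketch, assuming sympy is available: the derivative on the
# rational function field Q(t) is the prototypical derivation; we verify
# additivity and the Leibniz rule on two sample field elements.
import sympy as sp

t = sp.symbols('t')
d = lambda u: sp.diff(u, t)                      # d : Q(t) -> Q(t), u |-> u'

x = (t**2 + 1) / (t - 2)                         # sample field elements (our choice)
y = t**3 - sp.Rational(1, 2) * t

assert sp.simplify(d(x + y) - (d(x) + d(y))) == 0          # additivity
assert sp.simplify(d(x * y) - (d(x) * y + x * d(y))) == 0  # Leibniz rule
```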

It is well known that Hamel bases play an important role in the study of additive functions. As [15, Theorem 14.2.1] shows, in the case of derivations algebraic bases play a similarly fundamental role.

Theorem 1.1

Let \(({\mathbb {K}}, +,\cdot )\) be a field of characteristic zero, let \(({\mathbb {F}}, +,\cdot )\) be a subfield of \(({\mathbb {K}}, +,\cdot )\), let S be an algebraic base of \({\mathbb {K}}\) over \({\mathbb {F}}\), if it exists, and let \(S=\emptyset \) otherwise. Let \(f:{\mathbb {F}}\rightarrow {\mathbb {K}}\) be a derivation. Then, for every function \(u:S\rightarrow {\mathbb {K}}\), there exists a unique derivation \(g:{\mathbb {K}}\rightarrow {\mathbb {K}}\) such that \(g \vert _{{\mathbb {F}}}=f\) and \(g \vert _{S}=u\).

1.1 Generalized polynomial functions

While proving our results, the so-called Polarization formula for multi-additive functions will play a key role. In this Sect. 1.1 the most important notations and statements are summarized. Here we follow the monograph [21].

Definition 1.1

Let G, S be commutative semigroups (written additively), \(n\in {\mathbb {N}}\) and let \(A:G^{n}\rightarrow S\) be a function. We say that A is n-additive if it is a homomorphism of G into S in each variable. If \(n=1\) or \(n=2\), then the function A is simply termed additive or bi-additive, respectively.

The diagonalization or trace of an n-additive function \(A:G^{n}\rightarrow S\) is defined as

$$\begin{aligned} A^{*}(x)=A\left( x, \ldots , x\right) \qquad \left( x\in G\right) . \end{aligned}$$

As a direct consequence of the definition, each n-additive function \(A:G^{n}\rightarrow S\) satisfies

$$\begin{aligned} A(x_{1}, \ldots , x_{i-1}, kx_{i}, x_{i+1}, \ldots , x_{n}) = kA(x_{1}, \ldots , x_{i-1}, x_{i}, x_{i+1}, \ldots , x_{n}) \qquad \left( x_{1}, \ldots , x_{n}\in G\right) \end{aligned}$$

for all \(i=1, \ldots , n\), where \(k\in {\mathbb {N}}\) is arbitrary. The same identity holds for any \(k\in {\mathbb {Z}}\) provided that G and S are groups, and for \(k\in {\mathbb {Q}}\), provided that G and S are linear spaces over the rationals. For the diagonalization of A we have

$$\begin{aligned} A^{*}(kx)=k^{n}A^{*}(x) \qquad \left( x\in G\right) . \end{aligned}$$

The above notion can also be extended to the case \(n=0\) by letting \(G^{0}=G\) and by calling any constant function from G to S, 0-additive.

One of the most important theoretical results concerning multi-additive functions is the so-called Polarization formula, that briefly expresses that every n-additive symmetric function is uniquely determined by its diagonalization under some conditions on the domain as well as on the range. Suppose that G is a commutative semigroup and S is a commutative group. The action of the difference operator \(\Delta \) on a function \(f:G\rightarrow S\) is defined by the formula

$$\begin{aligned} \Delta _y f(x)=f(x+y)-f(x) \qquad \left( x, y\in G\right) . \end{aligned}$$

Note that the addition in the argument of the function is the operation of the semigroup G and the subtraction means the inverse of the operation of the group S. For \(y_{1}, \ldots , y_{m}\in G\) we write \(\Delta _{y_{1}, \ldots , y_{m}}\) for the composition \(\Delta _{y_{1}}\circ \cdots \circ \Delta _{y_{m}}\), and \(\Delta ^{n}_{y}\) for the n-fold iterate \(\Delta _{y}\circ \cdots \circ \Delta _{y}\).

Theorem 1.2

(Polarization formula) Suppose that G is a commutative semigroup, S is a commutative group, \(n\in {\mathbb {N}}\). If \(A:G^{n}\rightarrow S\) is a symmetric, n-additive function, then for all \(x, y_{1}, \ldots , y_{m}\in G\) we have

$$\begin{aligned} \Delta _{y_{1}, \ldots , y_{m}}A^{*}(x)= \left\{ \begin{array}{ll} 0 & \quad \text {if } m>n, \\ n!\,A(y_{1}, \ldots , y_{m}) & \quad \text {if } m=n. \end{array} \right. \end{aligned}$$
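To make the statement more tangible, the following short sketch (ours, not part of the original text; it assumes sympy) verifies the polarization formula for the symmetric bi-additive map \(A(x, y)= xy\) on \(({\mathbb {R}}, +)\), whose diagonalization is \(A^{*}(x)= x^{2}\).

```python
# Illustrative sketch: check the polarization formula for n = 2 with the
# symmetric bi-additive map A(u, v) = u*v on (R, +), so A*(x) = x**2.
import sympy as sp

x, y1, y2, y3 = sp.symbols('x y1 y2 y3')

def delta(y, f):
    """Difference operator: (Delta_y f)(x) = f(x + y) - f(x)."""
    return sp.expand(f.subs(x, x + y) - f)

trace = x**2                                     # diagonalization of A, n = 2

# m = n = 2:  Delta_{y1, y2} A*(x) = 2! * A(y1, y2) = 2*y1*y2
assert delta(y1, delta(y2, trace)) == 2 * y1 * y2
# m = 3 > n:  the iterated difference vanishes identically
assert delta(y3, delta(y1, delta(y2, trace))) == 0
```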

Corollary 1.1

Suppose that G is a commutative semigroup, S is a commutative group, \(n\in {\mathbb {N}}\). If \(A:G^{n}\rightarrow S\) is a symmetric, n-additive function, then for all \(x, y\in G\)

$$\begin{aligned} \Delta ^{n}_{y}A^{*}(x)=n!A^{*}(y). \end{aligned}$$

Lemma 1.1

Let \(n\in {\mathbb {N}}\) and suppose that the multiplication by n! is surjective in the commutative semigroup G or injective in the commutative group S. Then for any symmetric, n-additive function \(A:G^{n}\rightarrow S\), \(A^{*}\equiv 0\) implies that A is identically zero, as well.

Definition 1.2

Let G and S be commutative semigroups. A function \(p:G\rightarrow S\) is called a generalized polynomial from G to S if it has a representation as the sum of diagonalizations of symmetric multi-additive functions from G to S. In other words, a function \(p:G\rightarrow S\) is a generalized polynomial if and only if it has a representation

$$\begin{aligned} p= \sum _{k=0}^{n}A^{*}_{k}, \end{aligned}$$

where n is a nonnegative integer and \(A_{k}:G^{k}\rightarrow S\) is a symmetric, k-additive function for each \(k=0, 1, \ldots , n\). In this case we also say that p is a generalized polynomial of degree at most n.
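As a simple illustration (our own, with \(G=S=({\mathbb {R}}, +)\) and sympy assumed), the classical polynomial \(x^{2}+x+1\) is a generalized polynomial of degree at most 2: it is the sum of the diagonalizations of the bi-additive map \(A_{2}(x, y)=xy\), the additive map \(A_{1}(x)=x\) and the constant \(A_{0}=1\).

```python
# A small sketch: x**2 + x + 1 as a sum of diagonalizations of symmetric
# k-additive maps (k = 2, 1, 0) on (R, +).
import sympy as sp

x, y, u1, u2 = sp.symbols('x y u1 u2')
A2 = lambda u, v: u * v        # symmetric, bi-additive
A1 = lambda u: u               # additive
A0 = 1                         # 0-additive (constant)

p = lambda u: A2(u, u) + A1(u) + A0
assert sp.expand(p(x) - (x**2 + x + 1)) == 0
# bi-additivity of A2 in its first variable, for instance:
assert sp.expand(A2(u1 + u2, y) - A2(u1, y) - A2(u2, y)) == 0
```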

Let n be a nonnegative integer. Functions \(p_{n}:G\rightarrow S\) of the form

$$\begin{aligned} p_{n}= A_{n}^{*}, \end{aligned}$$

where \(A_{n}:G^{n}\rightarrow S\) is a symmetric, n-additive function, are the so-called generalized monomials of degree n.

Remark 1.1

Obviously, generalized monomials of degree 0 are constant functions and generalized monomials of degree 1 are additive functions.

Furthermore, generalized monomials of degree 2 will be termed quadratic functions.

Definition 1.3

Let G be a commutative semigroup. We say that the nonzero function \(m:G\rightarrow {\mathbb {C}}\) is an exponential if

$$\begin{aligned} m(x+y)=m(x)m(y) \end{aligned}$$

holds for all \(x, y\in G\).

Remark 1.2

Recall that on any commutative semigroup, the identically 1 function is always an exponential.

Definition 1.4

Let G be a commutative group, n be a positive integer and \(m:G\rightarrow {\mathbb {C}}\) be an exponential. The function \(f:G\rightarrow {\mathbb {C}}\) is called a generalized exponential monomial of degree at most n corresponding to the exponential m, if there exists a generalized polynomial \(p:G\rightarrow {\mathbb {C}}\) of degree at most n such that

$$\begin{aligned} f(x)= p(x)m(x) \qquad \left( x\in G\right) . \end{aligned}$$

Finite sums of generalized exponential monomials are called generalized exponential polynomials.

1.2 Polynomial functions

As Laczkovich [16] highlights, on groups there are several polynomial notions. One of them is what we introduced in Sect. 1.1, that is the notion of generalized polynomials. As we will see in the forthcoming sections, not only this notion, but also that of (normal) polynomials will be important. The definitions and results recalled here can be found in [21].

Throughout this subsection G is assumed to be a commutative group (written additively).

Definition 1.5

Polynomials are elements of the algebra generated by additive functions over G. Namely, if n is a positive integer, \(P:{\mathbb {C}}^{n}\rightarrow {\mathbb {C}}\) is a (classical) complex polynomial in n variables and \(a_{k}:G\rightarrow {\mathbb {C}}\; (k=1, \ldots , n)\) are additive functions, then the function

$$\begin{aligned} x\longmapsto P(a_{1}(x), \ldots , a_{n}(x)) \end{aligned}$$

is a polynomial and, also conversely, every polynomial can be represented in such a form.

Remark 1.3

For the sake of easier distinction, at some places polynomials will be called normal polynomials.

Remark 1.4

We recall that the elements of \({\mathbb {N}}^{n}\) for any positive integer n are called (n-dimensional) multi-indices. Addition, multiplication and inequalities between multi-indices of the same dimension are defined component-wise. Further, we define \(x^{\alpha }\) for any n-dimensional multi-index \(\alpha \) and for any \(x=(x_{1}, \ldots , x_{n})\) in \({\mathbb {C}}^{n}\) by

$$\begin{aligned} x^{\alpha }=\prod _{i=1}^{n}x_{i}^{\alpha _{i}} \end{aligned}$$

where we always adopt the convention \(0^{0}=1\). We also use the notation \(\left| \alpha \right| = \alpha _{1}+\cdots +\alpha _{n}\). With these notations any polynomial of degree at most N on the commutative semigroup G has the form

$$\begin{aligned} p(x)= \sum _{\left| \alpha \right| \le N}c_{\alpha }a(x)^{\alpha } \qquad \left( x\in G\right) , \end{aligned}$$

where \(c_{\alpha }\in {\mathbb {C}}\) and \(a=(a_1, \dots , a_n) :G\rightarrow {\mathbb {C}}^{n}\) is an additive function. Furthermore, the homogeneous term of degree k of p is

$$\begin{aligned} \sum _{\left| \alpha \right| =k}c_{\alpha }a(x)^{\alpha }. \end{aligned}$$

It is easy to see that each polynomial, that is, any function of the form

$$\begin{aligned} x\longmapsto P(a_{1}(x), \ldots , a_{n}(x)), \end{aligned}$$

where n is a positive integer, \(P:{\mathbb {C}}^{n}\rightarrow {\mathbb {C}}\) is a (classical) complex polynomial in n variables and \(a_{k}:G\rightarrow {\mathbb {C}}\; (k=1, \ldots , n)\) are additive functions, is a generalized polynomial. The converse however is in general not true. A complex-valued generalized polynomial p defined on a commutative group G is a polynomial if and only if its variety (the linear space spanned by its translates) is of finite dimension. To make the situation more clear, here we also recall Theorem 13.4 from Székelyhidi [22].

Theorem 1.3

The torsion free rank of a commutative group is finite if and only if every generalized polynomial on the group is a polynomial.

Definition 1.6

Let G be a commutative group, n be a positive integer and \(m:G\rightarrow {\mathbb {C}}\) be an exponential. The function \(f:G\rightarrow {\mathbb {C}}\) is called an exponential monomial of degree at most n corresponding to the exponential m, if there exists a polynomial \(p:G\rightarrow {\mathbb {C}}\) of degree at most n such that

$$\begin{aligned} f(x)= p(x)m(x) \qquad \left( x\in G\right) . \end{aligned}$$

Finite sums of exponential monomials are called exponential polynomials.

In the next section the lemma below will be used, see [9, Lemma 6], too.

Lemma 1.2

Let k and n be positive integers and \(f :{\mathbb {F}} \rightarrow {\mathbb {C}}\) be a generalized monomial of degree n, where \({\mathbb {F}}\) is assumed to be a field with \(\textrm{char}({\mathbb {F}}) = 0\). Then the mapping

$$\begin{aligned} {\mathbb {F}}\ni x \longmapsto f(x^{k}) \end{aligned}$$

is a generalized monomial of degree \(n \cdot k\).

Henceforth, not only the notion of (exponential) polynomials, but also that of decomposable functions will be used. The basics of this concept are due to Shulman [20].

Definition 1.7

Let G be a group and \(n\in {\mathbb {N}}, n\ge 2\). A function \(F:G^{n}\rightarrow {\mathbb {C}}\) is said to be decomposable if it can be written as a finite sum of products \(F_{1}\cdots F_{k}\), where all \(F_{i}\) depend on disjoint sets of variables.

Remark 1.5

Without loss of generality we can suppose that \(k=2\) in the above definition, that is, decomposable functions are those mappings that can be written in the form

$$\begin{aligned} F(x_{1}, \ldots , x_{n})= \sum _{E}\sum _{j}A_{j}^{E}B_{j}^{E} \end{aligned}$$

where E runs through all non-void proper subsets of \(\left\{ 1, \ldots , n\right\} \) and for each E and j the function \(A_{j}^{E}\) depends only on the variables \(x_{i}\) with \(i\in E\), while \(B_{j}^{E}\) depends only on the variables \(x_{i}\) with \(i\notin E\).

The connection between decomposable functions and generalized exponential polynomials was described in Laczkovich [18].

Theorem 1.4

Let G be a commutative topological semigroup with unit. A continuous function \(f:G\rightarrow {\mathbb {C}}\) is a generalized exponential polynomial if and only if there is a positive integer \(n\ge 2\) such that the mapping

$$\begin{aligned} G^{n} \ni (x_{1}, \ldots , x_{n}) \longmapsto f(x_1+\cdots + x_n) \end{aligned}$$

is decomposable.
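The following sketch (ours; it assumes sympy and the simple choice \(f(x)= x^{2}\) on \(({\mathbb {R}}, +)\), a generalized exponential polynomial with the exponential \(m\equiv 1\)) illustrates the "only if" direction of Theorem 1.4: the mapping \((x_{1}, x_{2}, x_{3})\longmapsto f(x_{1}+x_{2}+x_{3})\) splits into a finite sum of products of factors depending on disjoint sets of variables.

```python
# Hedged illustration of decomposability for f(x) = x**2 and n = 3.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = lambda u: u**2

lhs = sp.expand(f(x1 + x2 + x3))
# every summand below is a product A*B with A and B depending on
# disjoint sets of variables (constants count as depending on none)
rhs = (x1**2 + x2**2 + x3**2
       + 2*x1*x2 + 2*x1*x3 + 2*x2*x3)
assert sp.simplify(lhs - rhs) == 0
```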

The notion of derivations can be extended in several ways. We will employ the concept of higher order derivations according to Reich [19] and Unger–Reich [24]. For further results on characterization theorems on higher order derivations consult e.g. [6,7,8, 10].

Definition 1.8

Let \({\mathbb {F}}\subset {\mathbb {C}}\) be a field. The identically zero map is the only derivation of order zero. For each \(n\in {\mathbb {N}}\), an additive mapping \(f:{\mathbb {F}}\rightarrow {\mathbb {C}}\) is termed to be a derivation of order n, if there exists \(B:{\mathbb {F}}\times {\mathbb {F}}\rightarrow {\mathbb {C}}\) such that B is a bi-derivation of order \(n-1\) (that is, B is a derivation of order \(n-1\) in each variable) and

$$\begin{aligned} f(xy)-xf(y)-f(x)y=B(x, y) \qquad \left( x, y\in {\mathbb {F}}\right) . \end{aligned}$$

The set of derivations of order n of the field \({\mathbb {F}}\) will be denoted by \({\mathscr {D}}_{n}({\mathbb {F}})\).

Remark 1.6

Since \({\mathscr {D}}_{0}({\mathbb {F}})=\{0\}\), the only bi-derivation of order zero is the identically zero function, thus \(f\in {\mathscr {D}}_{1}({\mathbb {F}})\) if and only if

$$\begin{aligned} f(xy)=xf(y)+f(x)y \qquad \left( x, y\in {\mathbb {F}}\right) , \end{aligned}$$

that is, the notions of first order derivations and derivations coincide. On the other hand for any \(n\in {\mathbb {N}}\) the set \({\mathscr {D}}_{n}({\mathbb {F}}){\setminus } {\mathscr {D}}_{n-1}({\mathbb {F}})\) is nonempty because \(d_{1}\circ \cdots \circ d_{n}\in {\mathscr {D}}_{n}({\mathbb {F}})\), but \(d_{1}\circ \cdots \circ d_{n}\notin {\mathscr {D}}_{n-1}({\mathbb {F}})\), where \(d_{1}, \ldots , d_{n}\in {\mathscr {D}}_{1}({\mathbb {F}})\) are non-identically zero derivations.
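For a concrete check of the defining identity (ours, not part of the original text; it assumes sympy and takes d to be the usual derivative on the rational function field), note that \(f=d\circ d\) satisfies \(f(xy)-xf(y)-f(x)y= 2d(x)d(y)=:B(x, y)\), and B is an ordinary derivation in each of its variables, i.e. a bi-derivation of order 1.

```python
# A sketch under the stated assumptions: f = d o d is a second order
# derivation with associated bi-derivation B(x, y) = 2*d(x)*d(y).
import sympy as sp

t = sp.symbols('t')
d = lambda u: sp.diff(u, t)
f = lambda u: d(d(u))                    # candidate derivation of order 2
B = lambda u, v: 2 * d(u) * d(v)         # candidate bi-derivation of order 1

x, y, z = t**3 + 1, (t + 2) / t, t**2 - t    # sample field elements

assert sp.simplify(f(x*y) - x*f(y) - f(x)*y - B(x, y)) == 0
# B is an ordinary derivation in its first variable:
assert sp.simplify(B(x*z, y) - (x*B(z, y) + B(x, y)*z)) == 0
```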

The main result of [17] is its Theorem 1.1, which reads in our setting as follows.

Theorem 1.5

Let \({\mathbb {F}}\subset {\mathbb {C}}\) be a field and let n be a positive integer. Then, for every function \(D :{\mathbb {F}}\rightarrow {\mathbb {C}}\), \(D\in {\mathscr {D}}_{n}({\mathbb {F}})\) if and only if D is additive on \({\mathbb {F}}\), \(D(1) = 0\), and \(\dfrac{D}{\textrm{id}}\), as a map from the group \({\mathbb {F}}^{\times }\) to \({\mathbb {C}}\), is a generalized polynomial of degree at most n. Here \(\textrm{id}\) stands for the identity map defined on \({\mathbb {F}}\).
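For \(n=1\) the theorem can be checked directly on the standard example (the sketch below is ours, assumes sympy and takes D to be the derivative on the rational function field): we have \(D(1)=0\), and \(D/\textrm{id}\) is the logarithmic derivative, which is additive on the multiplicative group and hence a generalized polynomial of degree at most 1.

```python
# Hedged sketch of Theorem 1.5 for n = 1 with D = d/dt on Q(t).
import sympy as sp

t = sp.symbols('t')
D = lambda u: sp.diff(u, t)
L = lambda u: D(u) / u                   # D/id on the multiplicative group

x = t**2 + 3
y = (t - 1) / (t**2 + 1)

assert D(sp.Integer(1)) == 0
assert sp.simplify(L(x*y) - (L(x) + L(y))) == 0   # additivity on (F^x, *)
```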

Remark 1.7

Recall that the notions of generalized polynomials, polynomials, generalized exponential polynomials and exponential polynomials, respectively, were introduced on commutative groups (i.e., on algebraic structures carrying a single binary operation, the addition). Note however that in Definition 1.8 and in Theorem 1.5 we considered functions that are defined on a field \({\mathbb {F}}\subset {\mathbb {C}}\), where we have two binary operations. On a field two commutative groups arise naturally: the additive group \(({\mathbb {F}}, +)\) and the multiplicative group \(({\mathbb {F}}^{\times }, \cdot )\), where \({\mathbb {F}}^{\times }= \left\{ x\in {\mathbb {F}}\, \vert \, x \ne 0 \right\} \). The previous theorem shows in an illustrative way that these functions are closely connected to both the additive and the multiplicative structure. The same holds for field homomorphisms, since the assumption that the function \(\varphi :{\mathbb {F}}\rightarrow {\mathbb {C}}\) is a field homomorphism can be expressed (using the previous notions) by saying that \(\varphi \) is an additive function on the additive group \(({\mathbb {F}}, +)\), while it is an exponential on the multiplicative group \(({\mathbb {F}}^{\times }, \cdot )\).

This observation will play a very important role in the next section. Our goal will be to show that if a monomial function satisfies the conditions presented there, then this function is necessarily a generalized polynomial on the multiplicative structure. This is why homomorphisms and higher order derivations appear in these theorems.

2 Results

2.1 Preliminary results

Henceforth let \({\mathbb {F}}\subset {\mathbb {C}}\) be a field, n be a positive integer and \(P\in {\mathbb {F}}[x]\) be a polynomial. In what follows we will study generalized monomials \(f:{\mathbb {F}}\rightarrow {\mathbb {C}}\) of degree n under the condition that the mapping

$$\begin{aligned} {\mathbb {F}}\ni x\longmapsto f(P(x)) \end{aligned}$$

is a (normal) polynomial.

At first we show that instead of polynomials P, we may always restrict ourselves to (classical) monomials. For this we need the following statement which is in some sense an extension of Lemma 1.2.

Lemma 2.1

Let \({\mathbb {F}}\subset {\mathbb {C}}\) be a field, \(n\in {\mathbb {N}}\), \(\alpha _{1}, \ldots , \alpha _{n}\) be non-negative integers and \(F:{\mathbb {F}}^{n}\rightarrow {\mathbb {C}}\) be a symmetric and n-additive function. Then the function \(g:{\mathbb {F}}\rightarrow {\mathbb {C}}\) defined by

$$\begin{aligned} g(x)= F(x^{\alpha _{1}}, \ldots , x^{\alpha _{n}}) \qquad \left( x\in {\mathbb {F}}\right) \end{aligned}$$

is a generalized monomial of degree \((\alpha _{1}+\cdots + \alpha _{n})\).

Proof

Let \({\mathbb {F}}\subset {\mathbb {C}}\) be a field, \(n\in {\mathbb {N}}\), \(\alpha _{1}, \ldots , \alpha _{n}\) be non-negative integers, \(F:{\mathbb {F}}^{n}\rightarrow {\mathbb {C}}\) be a symmetric and n-additive function and consider the function \(g:{\mathbb {F}}\rightarrow {\mathbb {C}}\) defined by

$$\begin{aligned} g(x)= F(x^{\alpha _{1}}, \ldots , x^{\alpha _{n}}) \qquad \left( x\in {\mathbb {F}}\right) . \end{aligned}$$

Let \(N=\alpha _{1}+\cdots + \alpha _{n}\) and define the mapping \({\mathscr {F}}:{\mathbb {F}}^{N}\rightarrow {\mathbb {C}}\) through

$$\begin{aligned} {\mathscr {F}}(x_{1}, \ldots , x_{N}) = \dfrac{1}{N!}\sum _{\sigma \in {\mathscr {S}}_{N}} F\left( x_{\sigma (1)}\cdots x_{\sigma (\alpha _{1})}, \ldots , x_{\sigma (N-\alpha _{n}+1)}\cdots x_{\sigma (N)}\right) \qquad \left( x_{1}, \ldots , x_{N}\in {\mathbb {F}}\right) , \end{aligned}$$

where \({\mathscr {S}}_{N}\) denotes the symmetric group of degree N.

Since F is a symmetric and n-additive function, \({\mathscr {F}}\) is also symmetric and N-additive, further we have

$$\begin{aligned} {\mathscr {F}}(x, \ldots , x)= F(x^{\alpha _{1}}, \ldots , x^{\alpha _{n}})= g(x) \qquad \left( x\in {\mathbb {F}}\right) . \end{aligned}$$

Thus g can be represented as the trace of a symmetric and \((\alpha _{1}+\cdots + \alpha _{n})\)-additive mapping, showing that the function g is indeed a generalized monomial of degree \((\alpha _{1}+\cdots + \alpha _{n})\). \(\square \)
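The construction in the proof can be tested on a concrete instance (the data below is ours, not the paper's, and sympy is assumed): take the symmetric bi-additive map \(F(u, v)= d(u)d(v)+uv\) on the rational function field, with d the derivative, and exponents \(\alpha _{1}=2\), \(\alpha _{2}=1\). The symmetrization \({\mathscr {F}}\) is then additive in each variable, symmetric, and its trace recovers \(g(x)=F(x^{2}, x)\).

```python
# Hedged sketch of the symmetrization used in the proof of Lemma 2.1.
import itertools
import sympy as sp

t = sp.symbols('t')
d = lambda u: sp.diff(u, t)
F = lambda u, v: d(u)*d(v) + u*v          # symmetric and bi-additive
alphas = (2, 1)
N = sum(alphas)                           # N = alpha1 + alpha2 = 3

def script_F(xs):
    """(1/N!) * sum over sigma of F(x_{s(1)} x_{s(2)}, x_{s(3)})."""
    total = 0
    for s in itertools.permutations(range(N)):
        blocks, pos = [], 0
        for a in alphas:                  # split the permuted tuple into blocks
            blocks.append(sp.prod([xs[s[pos + i]] for i in range(a)]))
            pos += a
        total += F(*blocks)
    return total / sp.factorial(N)

p1, p2, p3, q = t + 1, t**2, 1/(t - 3), t**3 - 2   # sample field elements

# the trace recovers g(x) = F(x**2, x)
assert sp.simplify(script_F([p1, p1, p1]) - F(p1**2, p1)) == 0
# additivity in the first variable
assert sp.simplify(script_F([p1 + q, p2, p3])
                   - script_F([p1, p2, p3]) - script_F([q, p2, p3])) == 0
# symmetry in the first two variables
assert sp.simplify(script_F([p1, p2, p3]) - script_F([p2, p1, p3])) == 0
```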

Lemma 2.2

Let \(k, n\in {\mathbb {N}}\), \(k\ge 2\), \({\mathbb {F}}\subset {\mathbb {C}}\) be a field, \(P\in {\mathbb {F}}[x]\) be a (classical) polynomial of degree k with leading coefficient 1 and \(f:{\mathbb {F}}\rightarrow {\mathbb {C}}\) be a generalized monomial of degree n. If the mapping

$$\begin{aligned} {\mathbb {F}}\ni x \longmapsto f(P(x)) \end{aligned}$$

is a normal polynomial, then the mapping

$$\begin{aligned} {\mathbb {F}} \ni x \longmapsto f(x^{k}) \end{aligned}$$

is a normal polynomial as well.

Proof

Let \(k, n\in {\mathbb {N}}\), \({\mathbb {F}}\subset {\mathbb {C}}\) be a field, \(P\in {\mathbb {F}}[x]\) be a (classical) polynomial of degree k and \(f:{\mathbb {F}}\rightarrow {\mathbb {C}}\) be a generalized monomial of degree n. Suppose further that the mapping

$$\begin{aligned} {\mathbb {F}}\ni x \longmapsto f(P(x)) \end{aligned}$$

is a normal polynomial.

Since \(P\in {\mathbb {F}}[x]\) is a (classical) polynomial of degree k, we have

$$\begin{aligned} P(x)= \sum _{l=0}^{k}\alpha _{l}x^{l} \qquad \left( x\in {\mathbb {F}}\right) , \end{aligned}$$

with some constants \(\alpha _{l}\in {\mathbb {F}}\), \(l=0, 1, \ldots , k\) such that \(\alpha _{k}=1\). Further, as \(f:{\mathbb {F}}\rightarrow {\mathbb {C}}\) is a generalized monomial of degree n, there exists a uniquely determined symmetric and n-additive function \(F:{\mathbb {F}}^{n}\rightarrow {\mathbb {C}}\) such that

$$\begin{aligned} f(x)= F(x, \ldots , x) \qquad \left( x\in {\mathbb {F}}\right) . \end{aligned}$$

These together yield that there exist a positive integer m, linearly independent additive functions \(a_{1}, \ldots , a_{m}\) and a complex polynomial \(Q\in {\mathbb {C}}[x_{1}, \ldots , x_{m}]\) such that

$$\begin{aligned} F\left( \sum _{l=0}^{k}\alpha _{l}x^{l}, \ldots , \sum _{l=0}^{k}\alpha _{l}x^{l}\right) = Q(a_{1}(x), \ldots , a_{m}(x))= \sum _{\begin{array}{c} \beta \in {\mathbb {N}}^{m}\\ |\beta |\le kn \end{array}}c_{\beta }a(x)^{\beta } \qquad \left( x\in {\mathbb {F}}\right) . \end{aligned}$$

Using that F is symmetric and additive in each of its variables, we get that

$$\begin{aligned}&F(x^{k}, \ldots , x^{k})+ \lambda _{k-1, k, \ldots , k} F(\alpha _{k-1}x^{k-1}, \alpha _{k}x^{k}, \ldots , \alpha _{k}x^{k}) + \cdots \\&\quad + \lambda _{0, \ldots , 0}F(\alpha _{0}, \ldots , \alpha _{0})= \sum _{\begin{array}{c} \beta \in {\mathbb {N}}^{m}\\ |\beta |\le kn \end{array}}c_{\beta }a(x)^{\beta } \end{aligned}$$

for all \(x\in {\mathbb {F}}\) with some complex constants \(\lambda _{\gamma }\), \(\gamma \in {\mathbb {N}}^{n}\), \(|\gamma |\le k n\). Due to Lemma 2.1, both sides of this identity are generalized polynomials (being linear combinations of generalized monomials), and they agree for all \(x\in {\mathbb {F}}\). This can however happen only if the generalized monomial terms of the same degree coincide on the two sides. In particular, we get that

$$\begin{aligned} F(x^{k}, \ldots , x^{k}) = \sum _{\begin{array}{c} \beta \in {\mathbb {N}}^{m}\\ |\beta |=kn \end{array}}c_{\beta }a(x)^{\beta }, \end{aligned}$$

showing that the mapping

$$\begin{aligned} {\mathbb {F}} \ni x \longmapsto f(x^{k}) \end{aligned}$$

is a normal polynomial as well. \(\square \)

Remark 2.1

According to the previous lemma, if \(f:{\mathbb {F}}\rightarrow {\mathbb {C}}\) is a generalized monomial of degree n such that the mapping

$$\begin{aligned} {\mathbb {F}}\ni x \longmapsto f(P(x)) \end{aligned}$$

is a normal polynomial, the mapping

$$\begin{aligned} {\mathbb {F}} \ni x \longmapsto f(x^{k}) \end{aligned}$$

is a normal polynomial as well. This enables us to restrict ourselves to considering generalized monomials for which the mapping

$$\begin{aligned} {\mathbb {F}} \ni x \longmapsto f(x^{k}) \end{aligned}$$

is a normal polynomial for a fixed \(k\ge 2\). At the same time, we have to emphasize that the assumption that the mapping

$$\begin{aligned} {\mathbb {F}}\ni x \longmapsto f(P(x)) \end{aligned}$$

is a normal polynomial for a fixed polynomial \(P\in {\mathbb {F}}[x]\) is more restrictive than the previous one. To illustrate this let us consider the following example. Let \(f:{\mathbb {F}}\rightarrow {\mathbb {C}}\) be a quadratic function and let

$$\begin{aligned} P(x)= x^{2}+\alpha _{1}x+\alpha _{0} \qquad \left( x\in {\mathbb {F}}\right) , \end{aligned}$$

where \(\alpha _{1}, \alpha _{0}\in {\mathbb {F}}\) are fixed. The assumption that the mapping

$$\begin{aligned} {\mathbb {F}}\ni x \longmapsto f(P(x)) \end{aligned}$$

is a normal polynomial means that we have

$$\begin{aligned} F(x^{2}+\alpha _{1}x+\alpha _{0}, x^{2}+\alpha _{1}x+\alpha _{0})= Q(a_{1}(x), \ldots , a_{m}(x)) \qquad \left( x\in {\mathbb {F}}\right) \end{aligned}$$

with appropriate linearly independent additive functions \(a_{1}, \ldots , a_{m}:{\mathbb {F}}\rightarrow {\mathbb {C}}\) and a complex polynomial \(Q\in {\mathbb {C}}[x_{1}, \ldots , x_{m}]\), where \(F:{\mathbb {F}}^{2}\rightarrow {\mathbb {C}}\) denotes the uniquely determined bi-additive mapping whose trace is the quadratic function f. Using the symmetry and also the bi-additivity of F, we get that

$$\begin{aligned}&F(x^{2}, x^{2})+ 2F(x^{2}, \alpha _{1}x)+ 2F(x^{2}, \alpha _{0})+ F(\alpha _{1}x, \alpha _{1}x) \\&\quad +2F(\alpha _{1}x, \alpha _{0})+ F(\alpha _{0}, \alpha _{0}) = Q(a_{1}(x), \ldots , a_{m}(x)) \qquad \left( x\in {\mathbb {F}}\right) . \end{aligned}$$

Due to Lemma 2.1, the mappings

$$\begin{aligned} {\mathbb {F}}\ni x \longmapsto F(x^{i}, x^{j}) \end{aligned}$$

are generalized monomials of degree \((i+j)\). Further, generalized monomials of different degrees are linearly independent. Therefore, not only the mapping

$$\begin{aligned} {\mathbb {F}}\ni x \longmapsto F(x^{2}, x^{2}) \end{aligned}$$

is a normal polynomial, but also the mappings

$$\begin{aligned} \begin{array}{rcl} {\mathbb {F}}\ni & x & \longmapsto F(x^{2}, \alpha _{1}x)\\ {\mathbb {F}}\ni & x & \longmapsto 2F(x^{2}, \alpha _{0})+F(\alpha _{1}x, \alpha _{1}x)\\ {\mathbb {F}}\ni & x & \longmapsto F(\alpha _{1}x, \alpha _{0})\\ {\mathbb {F}}\ni & x & \longmapsto F(\alpha _{0}, \alpha _{0}), \end{array} \end{aligned}$$

too. Obviously, this is always true for the last two mappings. Nevertheless, the fact that the first three mappings above, that is, \(x \longmapsto F(x^{2}, x^{2})\), \(x \longmapsto F(x^{2}, \alpha _{1}x)\) and \(x \longmapsto 2F(x^{2}, \alpha _{0})+F(\alpha _{1}x, \alpha _{1}x)\), all have this property carries more information than the assumption that only the first of them is a normal polynomial.

Our second lemma says that while considering this problem, we may restrict ourselves to homogeneous (normal) polynomials.

Lemma 2.3

Let \(k, n\in {\mathbb {N}}\), \(k\ge 2\), \({\mathbb {F}}\subset {\mathbb {C}}\) be a field and \(f:{\mathbb {F}}\rightarrow {\mathbb {C}}\) be a generalized monomial of degree n. If the mapping

$$\begin{aligned} {\mathbb {F}}\ni x \longmapsto f(x^{k}) \end{aligned}$$

is a normal polynomial, then there exists a homogeneous complex polynomial \({\tilde{P}}\) and there are linearly independent, complex valued additive functions \(a_{1}, \ldots , a_{m}\) on \({\mathbb {F}}\) such that

$$\begin{aligned} f(x^{k})= {\tilde{P}}(a_{1}(x), \ldots , a_{m}(x)) \qquad \left( x\in {\mathbb {F}}\right) , \end{aligned}$$

in other words, we have

$$\begin{aligned} f(x^{k})= \sum _{\begin{array}{c} \alpha \in {\mathbb {N}}^{m}\\ |\alpha |=kn \end{array}}\lambda _{\alpha }a^{\alpha }(x) = \sum _{\begin{array}{c} \alpha _{1}, \ldots , \alpha _{m}\ge 0\\ \alpha _{1}+\cdots + \alpha _{m}=kn \end{array}}\lambda _{\alpha _{1}, \ldots , \alpha _{m}}a_{1}^{\alpha _{1}}(x)\cdots a_{m}^{\alpha _{m}}(x) \end{aligned}$$

for each \(x\in {\mathbb {F}}\).

Proof

Let \(k, n\in {\mathbb {N}}\), \(k\ge 2\), \({\mathbb {F}}\) be a field and \(f:{\mathbb {F}}\rightarrow {\mathbb {C}}\) be a generalized monomial of degree n. Assume further that the mapping

$$\begin{aligned} {\mathbb {F}}\ni x \longmapsto f(x^{k}) \end{aligned}$$

is a normal polynomial. Then due to Lemma 1.2 this mapping is a generalized monomial of degree kn and hence it is \({\mathbb {Q}}\)-homogeneous of degree kn.

Since this mapping is additionally a normal polynomial, there exist a complex polynomial P and linearly independent, complex valued additive functions \(a_{1}, \ldots , a_{m}\) on \({\mathbb {F}}\) such that

$$\begin{aligned} f(x^{k})= P(a_{1}(x), \ldots , a_{m}(x))= P(a(x))= \sum _{l=0}^{N}\sum _{\begin{array}{c} \alpha \in {\mathbb {N}}^{m}\\ |\alpha |=l \end{array}} \lambda _{\alpha } a^{\alpha }(x) \qquad \left( x\in {\mathbb {F}}\right) . \end{aligned}$$

Let \(r\in {\mathbb {Q}}\) be arbitrary and let us substitute rx in place of x in the above identity. Using the \({\mathbb {Q}}\)-homogeneity of the involved functions, we deduce

$$\begin{aligned} r^{kn}f(x^{k})- \sum _{l=0}^{N} r^{l}\sum _{\begin{array}{c} \alpha \in {\mathbb {N}}^{m}\\ |\alpha |=l \end{array}} \lambda _{\alpha } a^{\alpha }(x)=0 \qquad \left( r\in {\mathbb {Q}}, x\in {\mathbb {F}}\right) . \end{aligned}$$

Observe that the left hand side of this identity is a polynomial in r that is identically zero. So all of its coefficients should vanish, yielding that

$$\begin{aligned} f(x^{k})= \sum _{\begin{array}{c} \alpha \in {\mathbb {N}}^{m}\\ |\alpha |=kn \end{array}}\lambda _{\alpha }a^{\alpha }(x) \end{aligned}$$

holds for all \(x\in {\mathbb {F}}\). \(\square \)

2.2 Illustrative examples

At first glance the assumption of the lemma above, i.e., the mapping

$$\begin{aligned} {\mathbb {F}} \ni x\longmapsto f(x^{k}) \end{aligned}$$

is a normal polynomial, seems a bit artificial. Nevertheless, the following examples show that this is not the case.

Example 2.1

Let k be a positive integer, \(\varphi _{1}, \ldots , \varphi _{k}:{\mathbb {F}}\rightarrow {\mathbb {C}}\) be linearly independent homomorphisms and \(\lambda _{i, j}\in {\mathbb {C}}\) for all \(i, j=1, \ldots , k\). Then the mapping \(f:{\mathbb {F}}\rightarrow {\mathbb {C}}\) defined by

$$\begin{aligned} f(x)= \sum _{i, j=1}^{k}\lambda _{i, j}\varphi _{i}(x)\varphi _{j}(x) \qquad \left( x\in {\mathbb {F}}\right) \end{aligned}$$

is a quadratic function. Further if \(n\in {\mathbb {N}}\), then we also have

$$\begin{aligned} f(x^{n})= \sum _{i, j=1}^{k}\lambda _{i, j}\varphi _{i}(x)^{n}\varphi _{j}(x)^{n} \qquad \left( x\in {\mathbb {F}}\right) . \end{aligned}$$

In other words, we have

$$\begin{aligned} f(x^{n})= P(\varphi _{1}(x), \ldots , \varphi _{k}(x)) \qquad \left( x\in {\mathbb {F}}\right) , \end{aligned}$$

where the k-variable complex, homogeneous polynomial P is defined by

$$\begin{aligned} P(x_{1}, \ldots , x_{k})= \sum _{i, j=1}^{k}\lambda _{i, j}x_{i}^{n}x_{j}^{n} \qquad \left( x_{1}, \ldots , x_{k}\in {\mathbb {C}}\right) . \end{aligned}$$

Example 2.2

Suppose now that \({\mathbb {F}}\) is a subfield of \({\mathbb {C}}\). Let k be a positive integer, \(d_{1}, \ldots , d_{k}:{\mathbb {F}}\rightarrow {\mathbb {C}}\) be linearly independent derivations and \(\lambda _{i, j}\in {\mathbb {C}}\) for all \(i, j=1, \ldots , k\). Then the mapping \(f:{\mathbb {F}}\rightarrow {\mathbb {C}}\) defined by

$$\begin{aligned} f(x)= \sum _{i, j=1}^{k}\lambda _{i, j}d_{i}(x)d_{j}(x) \qquad \left( x\in {\mathbb {F}}\right) \end{aligned}$$

is a quadratic function. Further if \(n\in {\mathbb {N}}\), then we also have

$$\begin{aligned} f(x^{n})= \sum _{i, j=1}^{k}\lambda _{i, j}n^{2}x^{2n-2}d_{i}(x)d_{j}(x) \qquad \left( x\in {\mathbb {F}}\right) . \end{aligned}$$

In other words,

$$\begin{aligned} f(x^{n})= P(x, d_{1}(x), \ldots , d_{k}(x)) \qquad \left( x\in {\mathbb {F}}\right) , \end{aligned}$$

where the \((k+1)\)-variable complex polynomial P is defined by

$$\begin{aligned} P(z, x_{1}, \ldots , x_{k})= \sum _{i, j=1}^{k}\lambda _{i, j}n^{2}z^{2n-2}x_{i}x_{j} \qquad \left( z, x_{1}, \ldots , x_{k}\in {\mathbb {C}}\right) . \end{aligned}$$

Example 2.3

Assume in this example that \({\mathbb {F}}\) is a subfield of \({\mathbb {C}}\). Let k be a positive integer, let \(d:{\mathbb {F}}\rightarrow {\mathbb {C}}\) be a derivation and define the quadratic function \(f:{\mathbb {F}}\rightarrow {\mathbb {C}}\) by

$$\begin{aligned} f(x)= d^{k}(x^{2})= \underbrace{d\circ \cdots \circ d}_\text {k times}(x^{2}) \qquad \left( x\in {\mathbb {F}}\right) . \end{aligned}$$

Further, if n is a positive integer, then we have

$$\begin{aligned} f(x^{n})= d^{k}(x^{2n})= \sum _{\begin{array}{c} l_{1}, \ldots , l_{2n}\ge 0\\ l_{1}+\cdots +l_{2n}=k \end{array}} \left( {\begin{array}{c}k\\ l_{1}, \ldots , l_{2n}\end{array}}\right) d^{l_{1}}(x)\cdots d^{l_{2n}}(x) \end{aligned}$$

for all \(x\in {\mathbb {F}}\). In other words, we have

$$\begin{aligned} f(x^{n})= P(x, d(x), d\circ d(x), \ldots , d^{k}(x)) \qquad \left( x\in {\mathbb {F}}\right) . \end{aligned}$$
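The multinomial expansion above is the general Leibniz rule applied to the product of 2n equal factors. A quick symbolic check (ours; sympy is assumed, with d the derivative on the rational function field and the small values \(k=3\), \(n=2\)):

```python
# Hedged check of the expansion of f(x**n) = d^k(x**(2n)) in Example 2.3.
import itertools
import sympy as sp
from sympy import factorial

t = sp.symbols('t')
d = lambda u, m=1: u if m == 0 else sp.diff(u, t, m)   # d^m, d^0 the identity

x = t**2 + 1/t                            # a sample field element
k, n = 3, 2                               # order of iteration and exponent

lhs = d(x**(2*n), k)                      # d^k(x^{2n}) computed directly

rhs = 0
for ls in itertools.product(range(k + 1), repeat=2*n):
    if sum(ls) != k:
        continue
    coeff = factorial(k) / sp.prod([factorial(l) for l in ls])
    rhs += coeff * sp.prod([d(x, l) for l in ls])

assert sp.simplify(lhs - rhs) == 0
```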

Remark 2.2

All of the above examples can easily be generalized from quadratic functions to monomial functions. To get similar examples for generalized monomials of degree n (with n fixed) instead of quadratic functions, it is enough to consider the mappings

$$\begin{aligned} f(x)&= \sum _{i=1}^{k}\sum _{\begin{array}{c} \alpha \in {\mathbb {N}}^{n}\\ |\alpha |=n \end{array}}\lambda _{i}\Phi _{i}(x)^{\alpha } \qquad \left( x\in {\mathbb {F}}\right) , \\ f(x)&= \sum _{i=1}^{k}\sum _{\begin{array}{c} \alpha \in {\mathbb {N}}^{n}\\ |\alpha |=n \end{array}}\lambda _{i}D_{i}(x)^{\alpha } \qquad \left( x\in {\mathbb {F}}\right) \end{aligned}$$

and

$$\begin{aligned} f(x)= d^{k}(x^{n})= \underbrace{d\circ \cdots \circ d}_\text {k times}(x^{n}) \qquad \left( x\in {\mathbb {F}}\right) , \end{aligned}$$

here

$$\begin{aligned} \Phi _{i}(x)= \left( \varphi _{1, i}(x), \ldots , \varphi _{n, i}(x)\right) \; \text {and} \; D_{i}(x)=\left( d_{1, i}(x), \ldots , d_{n, i}(x)\right) \quad \left( x\in {\mathbb {F}}\right) , \end{aligned}$$

where the functions \(\varphi _{l, i}\) are homomorphisms, while d and \(d_{l, i}\) are derivations for all possible indices l and i.

2.3 Main results

Regarding ‘polynomial equations’ for generalized monomials, we note that in the literature, there are several results for additive functions \(a:{\mathbb {F}}\rightarrow {\mathbb {C}}\) that also satisfy a polynomial equation. Based on the results presented above, the following statement can be deduced.

Proposition 2.1

Let \({\mathbb {F}}\subset {\mathbb {C}}\) be a field, \(k\in {\mathbb {N}}\), \(k\ge 2\) and \(P\in {\mathbb {Q}}[x]\) be a (classical) polynomial of degree k.

  (i)

    If the additive function \(a:{\mathbb {F}}\rightarrow {\mathbb {C}}\) fulfills

    $$\begin{aligned} a(P(x))= P(a(x)) \qquad \left( x\in {\mathbb {F}}\right) \end{aligned}$$

    then there exists a homomorphism \(\varphi :{\mathbb {F}}\rightarrow {\mathbb {C}}\) such that \(a(x)= a(1)\varphi (x)\) for all \(x\in {\mathbb {F}}\). Further, we also have \(a(1)\in \left\{ 0, 1\right\} \).

  (ii)

    If the additive function \(a:{\mathbb {F}}\rightarrow {\mathbb {C}}\) fulfills

    $$\begin{aligned} a(P(x))= P'(x)a(x) \qquad \left( x\in {\mathbb {F}}\right) , \end{aligned}$$

    then a is a derivation.

Proof

  (i)

    Let \({\mathbb {F}}\subset {\mathbb {C}}\) be a field, \(k\in {\mathbb {N}}\), \(k\ge 2\) and \(P\in {\mathbb {Q}}[x]\) be a (classical) polynomial of degree k. Suppose further that the additive function \(a:{\mathbb {F}}\rightarrow {\mathbb {C}}\) fulfills

    $$\begin{aligned} a(P(x))= P(a(x)) \qquad \left( x\in {\mathbb {F}}\right) . \end{aligned}$$

    In other words, we have

    $$\begin{aligned} a\left( \sum _{i=0}^{k}\alpha _{i}x^{i}\right) = \sum _{i=0}^{k}\alpha _{i}a(x)^{i} \end{aligned}$$

    for all \(x\in {\mathbb {F}}\) with some rational numbers \(\alpha _{k}, \ldots , \alpha _{0}\). Observe that this especially yields that the mapping

    $$\begin{aligned} {\mathbb {F}} \ni x \longmapsto a\left( \sum _{i=0}^{k}\alpha _{i}x^{i}\right) \end{aligned}$$

    is a normal polynomial. Due to Lemma 2.2 we infer that then the mapping

    $$\begin{aligned} {\mathbb {F}} \ni x \longmapsto a(x^{k}) \end{aligned}$$

    is also a normal polynomial, where the \({\mathbb {Q}}\)-homogeneity of a was used, too. Comparing the generalized monomial terms of degree k on the two sides of our equation (as in the proof of Lemma 2.2), we see that then necessarily

    $$\begin{aligned} a(x^{k})= a(x)^{k} \qquad \left( x\in {\mathbb {F}}\right) \end{aligned}$$

    holds. From this, we deduce (e.g. using the results of [12]) that there exists a homomorphism \(\varphi :{\mathbb {F}}\rightarrow {\mathbb {C}}\) such that \(a(x)= a(1)\varphi (x)\) for all \(x\in {\mathbb {F}}\). Substituting this back into our equation, we finally get that the only possibility is that \(a(1)\in \left\{ 0, 1\right\} \).

  (ii)

    Using a similar reasoning as in case (i), here we deduce that the assumptions imply that necessarily

    $$\begin{aligned} a(x^{k})= kx^{k-1}a(x) \qquad \left( x\in {\mathbb {F}}\right) \end{aligned}$$

    holds for the additive function \(a:{\mathbb {F}}\rightarrow {\mathbb {C}}\). Using some classical characterization theorems concerning derivations (for instance the results of [15, Chapter 14]), we finally get that a is indeed a derivation.

\(\square \)
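The converse direction of part (ii), namely that every derivation does satisfy the hypothesis \(a(P(x))= P'(x)a(x)\), is the familiar chain rule; a brief sketch (ours, assuming sympy, with a the derivative on the rational function field and a concrete \(P\in {\mathbb {Q}}[x]\)):

```python
# Hedged sketch of the converse of Proposition 2.1 (ii).
import sympy as sp

t, X = sp.symbols('t X')
a = lambda u: sp.diff(u, t)                      # a derivation on Q(t)

P = X**3 - sp.Rational(1, 2)*X**2 + 4            # P in Q[X] of degree 3
Pprime = sp.diff(P, X)

x = (t**2 + 1) / (t - 5)                         # a sample field element
lhs = a(P.subs(X, x))
rhs = Pprime.subs(X, x) * a(x)
assert sp.simplify(lhs - rhs) == 0               # a(P(x)) = P'(x) a(x)
```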

Remark 2.3

We emphasize that the results of the previous statement are classical. Nevertheless, we would like to indicate that, on the one hand, the problem considered in this paper has precedents both in algebra and in the theory of functional equations. On the other hand, with the help of the statements presented here, the proofs can be significantly simplified (at least for mappings \(a:{\mathbb {F}}\rightarrow {\mathbb {C}}\)).

In the papers [6,7,8, 10, 11] further results can be found concerning additive functions that also fulfill certain polynomial equations.

Remark 2.4

We also note that related problems have already been considered by Boros and Garda-Mátyás in [3, 4], by Boros and Menzer in [5] and also by Amou in [1]. In these papers the authors consider real monomial functions, which satisfy certain conditional equations on a specified planar curve. Further, in [9], the polynomial equation \(f(P(x))=Q(f(x))\) for monomial functions \(f:{\mathbb {F}}\rightarrow {\mathbb {C}}\) was considered.

The simplest special case of the problem we are interested in is when the generalized monomial \(f:{\mathbb {F}}\rightarrow {\mathbb {C}}\) is of degree 1, i.e., when f is an additive function. In this regard, we have the following statement.

Theorem 2.1

Let k be a positive integer, \({\mathbb {F}}\subset {\mathbb {C}}\) be a field and \(a:{\mathbb {F}}\rightarrow {\mathbb {C}}\) be an additive function. If the mapping

$$\begin{aligned} {\mathbb {F}} \ni x \longmapsto a(x^{k}) \end{aligned}$$

is a (normal) polynomial, then a is a higher order derivation.

Proof

Let k be a positive integer, \({\mathbb {F}}\subset {\mathbb {C}}\) be a field and \(a:{\mathbb {F}}\rightarrow {\mathbb {C}}\) be an additive function. Suppose further that the mapping

$$\begin{aligned} {\mathbb {F}} \ni x \longmapsto a(x^{k}) \end{aligned}$$

is a (normal) polynomial. Due to Lemma 2.3, we can assume that this mapping is a homogeneous (normal) polynomial, that is, we have

$$\begin{aligned} a(x^{k}) = \sum _{\begin{array}{c} \alpha _{1}, \ldots , \alpha _{m}\ge 0\\ \alpha _{1}+\cdots + \alpha _{m}=k \end{array}}\lambda _{\alpha _{1}, \ldots , \alpha _{m}}a_{1}^{\alpha _{1}}(x)\cdots a_{m}^{\alpha _{m}}(x) \end{aligned}$$

for all \(x\in {\mathbb {F}}\). Observe that both sides of this identity are traces of symmetric and k-additive mappings. Indeed, the left hand side is the trace of the symmetric k-additive function

$$\begin{aligned} A(x_{1}, \ldots , x_{k})= a(x_{1}\cdots x_{k}) \qquad \left( x_{1}, \ldots , x_{k}\in {\mathbb {F}}\right) \end{aligned}$$

while the right hand side is the trace of the symmetric k-additive mapping

$$\begin{aligned} {\widetilde{A}}(x_{1}, \ldots , x_{k})&= \dfrac{1}{k!}\sum _{\sigma \in {\mathscr {S}}_{k}} \sum _{\begin{array}{c} \alpha _{1}, \ldots , \alpha _{m}\ge 0\\ \alpha _{1}+\cdots + \alpha _{m}=k \end{array}}\lambda _{\alpha _{1}, \ldots , \alpha _{m}} a_{1}(x_{\sigma (1)}) \cdots a_{1}(x_{\sigma (\alpha _{1})}) \times \cdots \\&\quad \times a_{m}(x_{\sigma (k-\alpha _{m}+1)}) \cdots a_{m}(x_{\sigma (k)}) \qquad \left( x_{1}, \ldots , x_{k}\in {\mathbb {F}}\right) . \end{aligned}$$

Therefore we have

$$\begin{aligned} a(x_{1}\cdots x_{k})&= \dfrac{1}{k!}\sum _{\sigma \in {\mathscr {S}}_{k}} \sum _{\begin{array}{c} \alpha _{1}, \ldots , \alpha _{m}\ge 0\\ \alpha _{1}+\cdots + \alpha _{m}=k \end{array}}\lambda _{\alpha _{1}, \ldots , \alpha _{m}} a_{1}(x_{\sigma (1)}) \cdots a_{1}(x_{\sigma (\alpha _{1})}) \times \cdots \\&\quad \times a_{m}(x_{\sigma (k-\alpha _{m}+1)}) \cdots a_{m}(x_{\sigma (k)}) \end{aligned}$$

for all \(x_{1}, \ldots , x_{k}\in {\mathbb {F}}\).

In other words, the mapping

$$\begin{aligned} \left( {\mathbb {F}}^{\times }\right) ^{k} \ni (x_{1}, \ldots , x_{k}) \longmapsto a(x_{1}\cdots x_{k}) \end{aligned}$$

is decomposable. Thus from Theorem 1.4 we deduce that \(a:{\mathbb {F}}^{\times }\rightarrow {\mathbb {C}}\) is a generalized exponential polynomial on the multiplicative group \({\mathbb {F}}^{\times }\) corresponding to the identity function, as exponential. Using Theorem 1.5, we finally obtain that a is a higher order derivation. \(\square \)
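A first order derivation already provides a concrete instance of the hypothesis of Theorem 2.1 (the sketch below is ours and assumes sympy, with a the derivative on the rational function field): for such an a we have \(a(x^{k})= kx^{k-1}a(x)\), which is a normal polynomial in the two additive functions \(\textrm{id}\) and a.

```python
# Hedged illustration of the hypothesis of Theorem 2.1 with k = 3.
import sympy as sp

t, y1, y2 = sp.symbols('t y1 y2')
a = lambda u: sp.diff(u, t)

k = 3
P = k * y1**(k - 1) * y2                 # classical polynomial in two variables

x = (t**2 - 1) / (t + 4)                 # a sample field element
assert sp.simplify(a(x**k) - P.subs({y1: x, y2: a(x)})) == 0
```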

Now we turn to quadratic functions. During the proof of Theorem 2.2 we will use [11, Theorem 4.5] in a rather special case that is the following lemma.

Lemma 2.4

Let \(n\in {\mathbb {N}}\) and \({\mathbb {F}}\) be a field and \(\alpha _{1}, \alpha _{2}, \alpha _{3}\in {\mathbb {C}}\). If the additive function \(f:{\mathbb {F}}\rightarrow {\mathbb {C}}\) satisfies

$$\begin{aligned} \alpha _{1}f(x^{2n})+\alpha _{2}f(x^{n})^{2}+\alpha _{3}f(x)^{2n}=0 \end{aligned}$$

for all \(x\in {\mathbb {F}}\), then there exists a complex constant \(\alpha \) and a homomorphism \(\varphi :{\mathbb {F}}\rightarrow {\mathbb {C}}\) such that

$$\begin{aligned} f(x)= \alpha \varphi (x) \qquad \left( x\in {\mathbb {F}}\right) . \end{aligned}$$

Theorem 2.2

Let \(n\in {\mathbb {N}}\), \(n\ge 2\) and \({\mathbb {F}}\subset \mathbb C\) be a field. Assume that \(f:{\mathbb {F}}\rightarrow {\mathbb {C}}\) is a quadratic function, while \(a:{\mathbb {F}}\rightarrow {\mathbb {C}}\) is additive and we have

$$\begin{aligned} f(x^{n})= a(x)^{2n} \qquad \left( x\in {\mathbb {F}}\right) . \end{aligned}$$
(1)

Then there exists a complex constant \(\alpha \in {\mathbb {C}}\) and a homomorphism \(\varphi :{\mathbb {F}}\rightarrow {\mathbb {C}}\) such that

$$\begin{aligned} a(x)= \alpha \varphi (x) \qquad \text {and} \qquad f(x)= \alpha ^{2n}\varphi (x)^{2} \qquad \left( x\in {\mathbb {F}}\right) . \end{aligned}$$

Conversely, if we define the functions a and f by the above formulas, then they satisfy equation (1) for all \(x\in {\mathbb {F}}\).

Proof

Since f is quadratic, there exists a uniquely determined symmetric and bi-additive mapping \(F:{\mathbb {F}}^{2}\rightarrow {\mathbb {C}}\) such that

$$\begin{aligned} F(x, x)= f(x) \qquad \left( x\in {\mathbb {F}}\right) . \end{aligned}$$

Define the mapping \(E:{\mathbb {F}}^{2n}\rightarrow {\mathbb {C}}\) by

$$\begin{aligned} E(x_{1}, \ldots , x_{2n})&= \frac{1}{(2n)!} \sum _{\sigma \in {\mathscr {S}}_{2n}} F(x_{\sigma (1)}\cdots x_{\sigma (n)}, x_{\sigma (n+1)} \cdots x_{\sigma (2n)}) \\&\quad - a(x_{1})\cdots a(x_{2n}) \qquad \left( x_{1}, \ldots , x_{2n}\in {\mathbb {F}}\right) , \end{aligned}$$

where \({\mathscr {S}}_{2n}\) denotes the symmetric group of degree 2n. Since F is bi-additive and a is additive, the mapping E is a symmetric and 2n-additive mapping. Further, its trace is

$$\begin{aligned} E(x, \ldots , x)= F(x^{n}, x^{n})-a(x)^{2n}= f(x^{n})-a(x)^{2n}=0 \qquad \left( x\in {\mathbb {F}}\right) , \end{aligned}$$

due to the above equation. At the same time, by Lemma 1.1, E is uniquely determined by its trace. Thus the function E vanishes identically, that is,

$$\begin{aligned} \frac{1}{(2n)!} \sum _{\sigma \in {\mathscr {S}}_{2n}} F(x_{\sigma (1)}\cdots x_{\sigma (n)}, x_{\sigma (n+1)} \cdots x_{\sigma (2n)})- a(x_{1})\cdots a(x_{2n})=0 \end{aligned}$$
(2)

holds for all \(x_{1}, \ldots , x_{2n}\in {\mathbb {F}}\). With the substitution \(x_{i}=1\) for \(i=1, \ldots , 2n\) we get that

$$\begin{aligned} F(1, 1)-a(1)^{2n}= f(1)-a(1)^{2n}=0. \end{aligned}$$

Let now \(x\in {\mathbb {F}}\) be arbitrary and let

$$\begin{aligned} x_{1}= x \qquad \text {and} \qquad x_{i}=1 \qquad \text {for} \qquad i=2, \ldots , 2n \end{aligned}$$

to deduce that

$$\begin{aligned} \mu _{1}F(x, 1)-\mu _{2}a(x)=0 \qquad \left( x\in {\mathbb {F}}\right) \end{aligned}$$

holds with some complex numbers \(\mu _{1}, \mu _{2}\) such that \(\mu _{1}\ne 0\) (in fact we have \(\mu _{1}=1\) and \(\mu _{2}= a(1)^{2n-1}\)). Thus we have

$$\begin{aligned} F(x, 1)= \lambda a(x) \qquad \left( x\in {\mathbb {F}}\right) \end{aligned}$$
(3)

with an appropriate complex constant \(\lambda \).

Let again \(x\in {\mathbb {F}}\) be arbitrary and let now

$$\begin{aligned} x_{1}= x, \; x_{2}= x \qquad \text {and} \qquad x_{i}=1 \qquad \text {for} \qquad i=3, \ldots , 2n \end{aligned}$$

in (2) to obtain

$$\begin{aligned} \nu _{1}F(x, x)+\nu _{2}F(x^{2}, 1)+\nu _{3}a(x)^{2}=0 \qquad \left( x\in {\mathbb {F}}\right) . \end{aligned}$$

The latter identity together with (3) yields that there exist complex constants \(\alpha _{1}, \alpha _{2}\) such that

$$\begin{aligned} F(x, x)= \alpha _{1}a(x^{2})+\alpha _{2}a(x)^{2} \qquad \left( x\in {\mathbb {F}}\right) , \end{aligned}$$

in other words, we have

$$\begin{aligned} f(x)= \alpha _{1}a(x^{2})+\alpha _{2}a(x)^{2} \qquad \left( x\in {\mathbb {F}}\right) . \end{aligned}$$

If we substitute this representation back into (1), we obtain that

$$\begin{aligned} \alpha _{1}a(x^{2n})+\alpha _{2}a(x^{n})^{2}= a(x)^{2n} \end{aligned}$$

for all \(x\in {\mathbb {F}}\). According to Lemma 2.4 there exists a complex constant \(\alpha \) and a homomorphism \(\varphi :{\mathbb {F}}\rightarrow {\mathbb {C}}\) such that

$$\begin{aligned} a(x)= \alpha \cdot \varphi (x) \qquad \left( x\in {\mathbb {F}}\right) \end{aligned}$$

and hence

$$\begin{aligned} f(x)= \alpha ^{2n}\varphi (x)^{2} \end{aligned}$$

is fulfilled for all \(x\in {\mathbb {F}}\).

The converse is an easy computation: if \(a= \alpha \varphi \) and \(f= \alpha ^{2n}\varphi ^{2}\) with a homomorphism \(\varphi \), then \(f(x^{n})= \alpha ^{2n}\varphi (x^{n})^{2}= \alpha ^{2n}\varphi (x)^{2n}= a(x)^{2n}\) for all \(x\in {\mathbb {F}}\). \(\square \)

Theorem 2.3

Let \({\mathbb {F}}\subset {\mathbb {C}}\) be a field, \(f:{\mathbb {F}}\rightarrow {\mathbb {C}}\) be a quadratic function, while \(a_{1}, a_{2}:{\mathbb {F}}\rightarrow {\mathbb {C}}\) are additive functions. Then equation

$$\begin{aligned} f(x^{2})= a_{1}(x)^{2}a_{2}(x)^{2} \end{aligned}$$

holds for all \(x\in {\mathbb {F}}\) if and only if there exist homomorphisms \(\varphi _{1}, \varphi _{2}:{\mathbb {F}}\rightarrow {\mathbb {C}}\) such that

$$\begin{aligned} f(x)= f(1)\cdot \varphi _{1}(x)\varphi _{2}(x) \qquad \text {and} \qquad a_{i}(x)= a_{i}(1)\varphi _{i}(x) \qquad \left( x\in {\mathbb {F}}, i=1, 2\right) . \end{aligned}$$

Proof

Let \({\mathbb {F}}\subset {\mathbb {C}}\) be a field, \(f:{\mathbb {F}}\rightarrow {\mathbb {C}}\) be a quadratic function and \(a_{1}, a_{2}:{\mathbb {F}}\rightarrow {\mathbb {C}}\) be additive functions. Suppose further that for all \(x\in {\mathbb {F}}\), the equation

$$\begin{aligned} f(x^{2})= a_{1}(x)^{2}a_{2}(x)^{2} \end{aligned}$$

is fulfilled. Since f is quadratic, there exists a uniquely determined symmetric and bi-additive mapping F such that

$$\begin{aligned} F(x, x)= f(x) \qquad \left( x\in {\mathbb {F}}\right) . \end{aligned}$$

Define the mapping \(\Phi \) on \({\mathbb {F}}^{4}\) by

$$\begin{aligned} \Phi (x_{1}, x_{2}, x_{3}, x_{4})&= \dfrac{1}{3} \left[ F\left( x_{1}\,x_{4}, x_{2}\,x_{3}\right) +F\left( x_{1}\,x_{3}, x_{2}\,x_{4}\right) +F\left( x_{1}\,x_{2}, x_{3}\,x_{4}\right) \right] \\&\quad -\frac{1}{6}\left[ a_{1}(x_{1})\,a_{1}(x_{2})\,a_{2}(x_{3})\,a_{2}(x_{4})+a_{1}(x_{1})\,a_{2}(x_{2})\,a_{1}(x_{3})\,a_{2}(x_{4})+a_{2}(x_{1})\,a_{1}(x_{2})\,a_{1}(x_{3})\,a_{2}(x_{4}) \right] \\&\quad -\frac{1}{6} \left[ a_{1}(x_{1})\,a_{2}(x_{2})\,a_{2}(x_{3})\,a_{1}(x_{4})+a_{2}(x_{1})\,a_{1}(x_{2})\,a_{2}(x_{3})\,a_{1}(x_{4})+a_{2}(x_{1})\,a_{2}(x_{2})\,a_{1}(x_{3})\,a_{1}(x_{4})\right] \\&\qquad \left( x_{1}, x_{2}, x_{3}, x_{4}\in {\mathbb {F}}\right) . \end{aligned}$$

Since F is symmetric and bi-additive and \(a_{1}\) and \(a_{2}\) are additive, the function \(\Phi \) is a symmetric 4-additive mapping. Moreover its trace is

$$\begin{aligned} F\left( x^2, x^2\right) -a_{1}(x)^2\,a_{2}(x)^2=0 \qquad \left( x\in {\mathbb {F}}\right) . \end{aligned}$$

Thus \(\Phi \) is identically zero on \({\mathbb {F}}^{4}\). From this we especially get that

$$\begin{aligned} 2F\left( x^2, 1\right) -\,a_{1}(1)^2\,a_{2}(1)\,a_{2}(x^2)-\,a_{ 1}(1)\,a_{2}(1)^2\,a_{1}(x^2)= 0 \qquad \left( x\in {\mathbb {F}}\right) \end{aligned}$$

and also

$$\begin{aligned}&2\,F\left( x^2, 1\right) +4\,F\left( x, x\right) -a_{1}(1)^2\,a_{2}(x)^2 \\&\quad -4\,a_{1}(1)\,a_{2}(1)\,a_{1}(x)\,a_{2}(x)-a_{2}(1)^2\,a_{1}(x)^2=0 \qquad \left( x\in {\mathbb {F}}\right) . \end{aligned}$$

These identities together imply that

$$\begin{aligned}&4\,F\left( x, x\right) +a_{1}(1)^2\,a_{2}(1)\,a_{2}(x^2)+a_{1}(1)\,a_{2}(1)^2\,a_{1}(x^2)-a_{1}(1)^2\,a_{2}(x)^2 \\&\quad -4\,a_{1}(1)\,a_{2}(1)\,a_{1}(x)\,a_{2}(x)-a_{2}(1)^2\,a_{1}(x)^2=0 \qquad \left( x\in {\mathbb {F}}\right) . \end{aligned}$$

In other words, we have

$$\begin{aligned} 4F(x, x)&= -a_{1}(1)^2\,a_{2}(1)\,a_{2}(x^2)-a_{1}(1)\,a_{2}(1)^2\,a_{1}(x^2)+ a_{1}(1)^2\,a_{2}(x)^2 \\&\quad +4\,a_{1}(1)\,a_{2}(1)\,a_{1}(x)\,a_{2}(x)+a_{2}(1)^2\,a_{1}(x)^2 \end{aligned}$$

for all \(x\in {\mathbb {F}}\). Since F is a symmetric and bi-additive mapping, we have

$$\begin{aligned} 4F(x, y)&= -a_{1}(1)^2\,a_{2}(1)\,a_{2}(x\,y)-a_{1}(1)\,a_{2}(1)^2\,a_{1}(x\,y) \\&\quad +2\,\left( a_{1}(1)\,a_{2}(1)\,a_{1}(x)\,a_{2}(y)+a_{1}(1)\,a_{2}(1)\,a_{2}(x)\,a_{1}(y)\right) \\&\quad +a_{1}(1)^2\,a_{2}(x)\,a_{2}(y)+a_{2}(1)^2\,a_{1}(x)\,a_{1}(y) \end{aligned}$$

for all \(x, y\in {\mathbb {F}}\). Combining this with our functional equation, we deduce

$$\begin{aligned}&-a_{1}(1)^2\,a_{2}(1)\,a_{2}(x^4)-a_{1}(1)\,a_{2}(1)^2\,a_{1}(x^4)+ a_{1}(1)^2\,a_{2}(x^2)^2 \\&\quad +4\,a_{1}(1)\,a_{2}(1)\,a_{1}(x^2)\,a_{2}(x^2)+a_{2}(1)^2\,a_{1}(x^2)^2-4\,a_{1}(x)^2\,a_{2}(x)^2=0 \qquad (x\in {\mathbb {F}}). \end{aligned}$$

Observe that if \(a_{1}(1)\cdot a_{2}(1)=0\), then F and thus f are identically zero. So we may assume \(a_{1}(1)\cdot a_{2}(1)\ne 0\). Without loss of generality we can then suppose that \(a_{1}(1)=a_{2}(1)=1\), otherwise we consider the mappings

$$\begin{aligned} {\tilde{f}}(x)= \frac{f(x)}{a_{1}(1)^{2}\cdot a_{2}(1)^{2}} \qquad \text {and} \qquad \tilde{a_{i}}(x)= \frac{a_{i}(x)}{a_{i}(1)} \qquad \left( x\in {\mathbb {F}}, i=1, 2\right) . \end{aligned}$$

Then this equation reduces to

$$\begin{aligned} a_{2}(x^4)+a_{1}(x^4) =a_{2}(x^2)^2+a_{1}(x^2)^2+4\,a_{1}(x^2)\,a_{2}(x^2)-4\,a_{1}(x)^2\,a_{2}(x)^2 \qquad \left( x\in {\mathbb {F}}\right) . \end{aligned}$$

Observe that the functions

$$\begin{aligned} {\mathbb {F}} \ni x \longmapsto a_{2}(x^4)+a_{1}(x^4) \end{aligned}$$

and

$$\begin{aligned} {\mathbb {F}} \ni x \longmapsto a_{2}(x^2)^2+a_{1} (x^2)^2+4\,a_{1}(x^2)\,a_{2}(x^2)-4\,a_{1}(x)^2\,a_{2}(x)^2 \end{aligned}$$

are generalized monomials of degree 4, and the latter equation says that these monomials coincide; thus the symmetric and 4-additive functions uniquely determined by these monomials also coincide. In other words, we have

$$\begin{aligned}&-a_{2}(x_{1}\,x_{2}\,x_{3}\,x_{4})-a_{1}(x_{1}\,x_{2}\,x_{3}\,x_{4}) \\&\quad +\dfrac{1}{3}\left[ a_{2}(x_{1}\,x_{2})\,a_{2}(x_{3}\,x_{4})+a_{2}(x_{1}\,x_{3})\,a_{2}(x_{2}\,x_{4})+a_{2}(x_{2}\,x_{3})\,a_{2}(x_{1}\,x_{4})\right] \\&\quad +\frac{2}{3} \left[ a_{1}(x_{1}\,x_{2})\,a_{2}(x_{3}\,x_{4})+a_{2}(x_{1}\,x_{2})\,a_{1}(x_{3}\,x_{4})+a_{1}(x_{1}\,x_{3})\,a_{2}(x_{2}\,x_{4}) \right. \\&\qquad \left. +a_{2}(x_{1}\,x_{3})\,a_{1}(x_{2}\,x_{4})+a_{1}(x_{2}\,x_{3})\,a_{2}(x_{1}\,x_{4})+a_{2}(x_{2}\,x_{3})\,a_{1}(x_{1}\,x_{4}) \right] \\&\quad +\dfrac{1}{3}\left[ a_{1}(x_{1}\,x_{2})\,a_{1}(x_{3}\,x_{4})+a_{1}(x_{1}\,x_{3})\,a_{1}(x_{2}\,x_{4})+a_{1}(x_{2}\,x_{3})\,a_{1}(x_{1}\,x_{4})\right] \\&\quad -\dfrac{2}{3}\,\left[ a_{1}(x_{1})\,a_{1}(x_{2})\,a_{2}(x_{3})\,a_{2}(x_{4})+a_{1}(x_{1})\,a_{2}(x_{2})\,a_{1}(x_{3})\,a_{2}(x_{4})+a_{2}(x_{1})\,a_{1}(x_{2})\,a_{1}(x_{3})\,a_{2}(x_{4}) \right. \\&\qquad \left. +a_{1}(x_{1})\,a_{2}(x_{2})\,a_{2}(x_{3})\,a_{1}(x_{4})+a_{2}(x_{1})\,a_{1}(x_{2})\,a_{2}(x_{3})\,a_{1}(x_{4})+a_{2}(x_{1})\,a_{2}(x_{2})\,a_{1}(x_{3})\,a_{1}(x_{4})\right] =0 \\&\qquad \left( x_{1}, x_{2}, x_{3}, x_{4}\in {\mathbb {F}}\right) . \end{aligned}$$

Let now \(x, y, z\in {\mathbb {F}}\) be arbitrary and substitute

$$\begin{aligned} x_{1}= x \qquad x_{2}= y \qquad x_{3}= z \qquad x_{4}= 1 \end{aligned}$$

to deduce that

$$\begin{aligned}&3\,a_{2}(x\,y\,z)+3\,a_{1}(x\,y\,z) \\&\quad = a_{2}(x)\,a_{2}(y\,z)+2\,a_{1}(x)\,a_{2}(y\,z)+2\,a_{2}(x)\,a_{1}(y\,z)+a_{1}(x)\,a_{1}(y\,z) \\&\qquad +a_{2}(y)\,a_{2}(x\,z)+2\,a_{1}(y)\,a_{2}(x\,z)+2\,a_{2}(y)\,a_{1}(x\,z)+a_{1}(y)\,a_{1}(x\,z) \\&\qquad +a_{2}(x\,y)\,a_{2}(z)+2\,a_{1}(x\,y)\,a_{2}(z)+2\,a_{2}(x\,y)\,a_{1}(z)+a_{1}(x\,y)\,a_{1}(z) \\&\qquad -2\,a_{1}(x)\,a_{2}(y)\,a_{2}(z)-2\,a_{2}(x)\,a_{1}(y)\,a_{2}(z)-2\,a_{1}(x)\,a_{1}(y)\,a_{2}(z) \\&\qquad -2\,a_{2}(x)\,a_{2}(y)\,a_{1}(z)-2\,a_{1}(x)\,a_{2}(y)\,a_{1}(z)-2\,a_{2}(x)\,a_{1}(y)\,a_{1}(z) \end{aligned}$$

holds. For all \(z\in {\mathbb {F}}\), define the function \(A_{z}\) by

$$\begin{aligned} A_{z}(x)&= 3a_{1}(xz)-a_{1}(x)\left[ a_{1}(z)+2a_{2}(z)\right] \\&\quad +3a_{2}(xz)-a_{2}(x)\left[ a_{2}(z)+2a_{1}(z)\right] \qquad \left( x\in {\mathbb {F}}\right) . \end{aligned}$$

With this notation the above identity turns into

$$\begin{aligned} A_{z}(xy)= a_{1}(x)g_{z}(y)+a_{2}(x)h_{z}(y)+k_{z}(x)a_{1}(y)+l_{z}(x)a_{2}(y) \qquad \left( x, y, z\in {\mathbb {F}}\right) , \end{aligned}$$

with appropriately defined functions \(g_{z}, h_{z}, k_{z}, l_{z}\), yielding that the mapping \(A_{z}\) is a (normal) exponential polynomial of degree at most 4 on the multiplicative group \({\mathbb {F}}^{\times }\). Further,

  (A) either the system \(a_{1}, a_{2}, k_{z}, l_{z}\) is linearly dependent,

  (B) or the system \(a_{1}, a_{2}, k_{z}, l_{z}\) is linearly independent.

Alternative (A) holds only if the functions \(a_{1}\) and \(a_{2}\) are the same (note that we assumed that \(a_{1}(1)=a_{2}(1)=1\)), but then Theorem 2.2 applies and we obtain that there exists a homomorphism \(\varphi :{\mathbb {F}}\rightarrow {\mathbb {C}}\) such that

$$\begin{aligned} f(x)= f(1)\varphi (x)^{2} \qquad a_{i}(x)= a_{i}(1)\varphi (x) \qquad \left( x\in {\mathbb {F}}\right) . \end{aligned}$$

If alternative (B) holds, then we get that not only the mapping \(A_{z}\), but also the functions \(a_{1}, a_{2}, k_{z}, l_{z}\) are (normal) exponential polynomials on the multiplicative group \({\mathbb {F}}^{\times }\) of degree at most four. Especially, \(a_{1}\) and \(a_{2}\) are linearly independent (normal) exponential polynomials of degree at most four. This means that both \(a_{1}\) and \(a_{2}\) can be represented as one of the following functions

  (i)
    $$\begin{aligned} a_{i}(x)&= \sum _{p, q, r=1}^{3}\beta _{p, q, r}^{(i)}\alpha _{p}(x)\alpha _{q}(x)\alpha _{r}(x)m(x) \\&\quad +\sum _{p, q=1}^{3}\beta _{p, q}^{(i)}\alpha _{p}(x)\alpha _{q}(x)m(x)+\sum _{p=1}^{3}\beta _{p}^{(i)}\alpha _{p}(x)m(x) \end{aligned}$$
  (ii)
    $$\begin{aligned} a_{i}(x)&= \sum _{p, q=1}^{2}\beta _{p, q}^{(i)}\alpha _{p}(x)\alpha _{q}(x)m_{1}(x) \\&\quad +\sum _{p=1}^{2}\beta _{p}^{(i)}\alpha _{p}(x)m_{1}(x)+\beta ^{(i)}m_{1}(x)+\gamma ^{(i)}m_{2}(x) \end{aligned}$$
  (iii)
    $$\begin{aligned} a_{i}(x)=(\beta _{1}^{(i)}\alpha _{1}(x)+\beta _{2}^{(i)})m_{1}(x)+(\beta _{3}^{(i)}\alpha _{2}(x)+\beta _{4}^{(i)})m_{2}(x) \end{aligned}$$
  (iv)
    $$\begin{aligned} a_{i}(x)= (\beta _{1}^{(i)}\alpha (x)+\beta _{2}^{(i)})m_{1}(x)+\beta _{3}^{(i)}m_{2}(x)+\beta _{4}^{(i)}m_{3}(x) \end{aligned}$$
  (v)
    $$\begin{aligned} a_{i}(x)= \beta _{1}^{(i)}m_{1}(x)+\beta _{2}^{(i)}m_{2}(x)+\beta _{3}^{(i)}m_{3}(x)+\beta _{4}^{(i)}m_{4}(x), \end{aligned}$$

where \(m, m_{p}:{\mathbb {F}}^{\times }\rightarrow {\mathbb {C}}\) are exponentials, \(\alpha , \alpha _{p}:{\mathbb {F}}^{\times }\rightarrow {\mathbb {C}}\) are additive functions and the coefficients \(\beta ^{(i)}, \gamma ^{(i)}, \beta _{p}^{(i)}, \beta _{p, q}^{(i)}, \beta _{p, q, r}^{(i)}\) are complex constants for all possible indices p, q, r. Checking all the possibilities, we finally get that this can happen only if \(a_{1}\) and \(a_{2}\) are exponentials on the multiplicative group \({\mathbb {F}}^{\times }\). Since these functions were also assumed to be additive on \({\mathbb {F}}\), we get that they are homomorphisms on \({\mathbb {F}}\).

Summing up, there exist homomorphisms \(\varphi _{1}, \varphi _{2}:{\mathbb {F}}\rightarrow {\mathbb {C}}\) such that

$$\begin{aligned} f(x)= f(1)\varphi _{1}(x)\varphi _{2}(x) \qquad a_{i}(x)= a_{i}(1)\varphi _{i}(x) \qquad \left( x\in {\mathbb {F}}\right) \end{aligned}$$

where the above constants \(f(1), a_{1}(1)\) and \(a_{2}(1)\) also fulfill \(f(1)= a_{1}(1)^{2}a_{2}(1)^{2}\). \(\square \)