On a new class of functional equations satisfied by polynomial functions

The classical result of L. Székelyhidi states that (under some assumptions) every solution of a general linear equation must be a polynomial function. It is known that Székelyhidi's result may be generalized to equations in which some occurrences of the unknown functions are multiplied by a linear combination of the variables. In this paper we study equations in which two such combinations appear. The simplest nontrivial example of such a case is given by the equation

F(x + y) - F(x) - F(y) = yf(x) + xf(y)

considered by Fechner and Gselmann (Publ Math Debrecen 80(1–2):143–154, 2012). In the present paper we prove several results concerning a systematic approach to generalizations of this equation.


Introduction
First we briefly recall the known results connected with the notion of polynomial functions. The history of polynomial functions goes back to the year 1909, when the paper by Fréchet [9] appeared. Let G, H be abelian groups (for some results concerning the noncommutative case see the papers of Almira and Shulman [3] and Shulman [31]) and let f : G → H be a given function. The difference operator Δ_h with span h ∈ G is defined by Δ_h f(x) = f(x + h) - f(x) for x ∈ G, and Δ_{h_1,...,h_n} denotes the composition Δ_{h_1} ∘ ... ∘ Δ_{h_n}; a function f is called a polynomial function of degree at most n if Δ_{h_1,...,h_{n+1}} f = 0 for all h_1, ..., h_{n+1} ∈ G. Székelyhidi proved (cf. Theorem 1.3) that, under suitable assumptions, every solution of a general linear functional equation must be a polynomial function. Having a result of this kind, it is much easier to solve linear equations because it is no longer necessary to deal with each equation separately. Instead, we may formulate results which are valid for large classes of equations. It is even possible to write computer programs which solve linear functional equations; see the papers of Gilányi [13] and Borus and Gilányi [5].
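Fréchet's characterization behind these notions can be illustrated with a short numerical sketch (the helper names `delta` and `iterated_delta` are ours, not from the literature): applying n + 1 difference operators annihilates every ordinary polynomial of degree n.

```python
from functools import reduce

def delta(h, f):
    """Difference operator with span h: (delta_h f)(x) = f(x + h) - f(x)."""
    return lambda x: f(x + h) - f(x)

def iterated_delta(spans, f):
    """Composition of difference operators with the given spans."""
    return reduce(lambda g, h: delta(h, g), spans, f)

# Fréchet: an ordinary polynomial of degree n is annihilated by n + 1 differences.
p = lambda x: 2 * x**3 - x + 5          # degree 3
q = iterated_delta([1, 2, 3, 4], p)     # four differences, arbitrary spans
print([q(x) for x in range(5)])         # -> [0, 0, 0, 0, 0]
```

Applying only three differences to the cubic above leaves a nonzero constant, which is exactly why the degree bound in the definition is sharp.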
Székelyhidi's result, though very nice and general, does not close the research on polynomial functions. In [26] a lemma more general than Theorem 1.3 was used by the third author to obtain the solutions of the equation (1.4) connected with the Taylor formula. As we can see, Eq. (1.4) is not linear and, thereby, the family of equations having only polynomial solutions is enriched. Later on, in papers by Koclȩga-Kulpa, Wasowicz and the fourth author [16–20,33], the mentioned lemma was used to deal with functional equations connected with numerical analysis. For a systematic approach to this topic see the monograph of the fourth author [33]. Let us also cite another monograph, by Sahoo and Riedel [29], where other functional equations stemming from mean value theorems are discussed. Actually, there are several examples of results dealing with solving functional equations without regularity assumptions or under weak ones; let us mention e.g. [1,2,4,6,10,11,14,15,24,25,27,28] or [30]. The present paper is inspired by the equation (1.5) (solved by Fechner and Gselmann in [8]), where f is multiplied by two different expressions. In the second section of the paper we present a lemma which generalizes results from the third author's and Lisak's papers [21,26] and which shows that the solutions of a very general equation must be polynomial. The solutions of (1.5) must be polynomial, but it is interesting that some of their monomial summands must be continuous, whereas others may be arbitrary monomial functions. In the third section we deal with generalizations of (1.5) and we explain this behaviour.

A lemma
Let us begin with the following general Lemma (cf. Wilson [36], Székelyhidi [32], the third author [26], Pawlikowska [23] and Lisak and the third author [21]). Before we state the Lemma, let us adopt the following notation. Let G and H be commutative groups. Then SA_i(G; H) denotes the group of all i-additive, symmetric mappings from G^i into H for i ≥ 2, while SA_0(G; H) denotes the family of constant functions from G to H and SA_1(G; H) = Hom(G; H). We also denote by I the subset of Hom(G; G) × Hom(G; G) containing all pairs (α, β) for which Ran(α) ⊂ Ran(β). Furthermore, we adopt the convention that a sum over an empty set of indices equals 0. Finally, for (u, v) ∈ G × G we denote Γ_(u,v) f(x, y) := f(x + u, y + v) - f(x, y) for each (x, y) ∈ G × G. In fact, Γ is nothing else but the operator Δ defined above, applied to functions of two variables. However, we wish to stress the difference between one and two variables, which is why we denote the new operator by a different symbol.
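Reading Γ as the two-variable analogue of Δ, i.e. Γ_(u,v) f(x, y) = f(x + u, y + v) - f(x, y), a minimal sketch (with the illustrative helper name `gamma`) shows that on a function depending on x only it reduces to Δ_u in that variable:

```python
def gamma(u, v, f):
    """Two-variable difference operator:
    (Gamma_{(u,v)} f)(x, y) = f(x + u, y + v) - f(x, y)."""
    return lambda x, y: f(x + u, y + v) - f(x, y)

# For a function depending on x only, Gamma_{(u,v)} acts as Delta_u in x:
f = lambda x, y: x**2
g = gamma(3, 7, f)
print(g(2, 5))                 # f(5, 12) - f(2, 5) = 25 - 4 = 21
print((2 + 3)**2 - 2**2)       # Delta_3 applied to x -> x^2 at x = 2, also 21
```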
for every x, y ∈ G. Then ϕ_N is a polynomial function of degree not greater than

Σ_{n=0}^{M} (n + 1) card K_n - 1. (2.2)

Proof. For M = -1 the right-hand side of (2.1) vanishes, so the left-hand side is a polynomial in y whose coefficients ϕ_i(x), i ∈ {0, ..., N}, must vanish identically for each x, y ∈ G. It is not difficult to see that this is equivalent to the system of identities ϕ_i = 0, i ∈ {0, ..., N}. In particular ϕ_N is a polynomial function, identically equal to 0; its degree is hence estimated by 0. Now suppose that our Lemma holds for some M ≥ -1 and consider the equation (2.4) for every x, y ∈ G. Assume that K_{M+1} ≠ ∅, otherwise (2.4) reduces to (2.1) and we are done. Further, assume that I_{p,M+1-p} ≠ ∅ for some p ∈ {0, ..., M + 1}. Fix such a p and write I_{p,M+1-p} = {(α_j, β_j) : j ∈ {1, ..., m}} for some m ∈ N. Choose a pair (α, β) ∈ I_{p,M+1-p} and fix u_1 ∈ G arbitrarily. For this u_1 take v_1 ∈ β^{-1}({α(-u_1)}), so that α(u_1) + β(v_1) = 0. Now let us apply the operator Γ_(u_1,v_1) to both sides of (2.4). On the left-hand side, denoting φ̃_N := Δ_{u_1} ϕ_N, we obtain again the left-hand side of Eq. (2.1) but with φ̃_N instead of ϕ_N (note that the remaining summands may be written as polynomial functions in y of degrees lower than N, and they can be rearranged in such a way that the left-hand side is again a finite sum of polynomial functions in y with coefficients dependent on x). Let us now look at the right-hand side. If we apply Γ_(u_1,v_1) to the first summands, it will transform them into summands of a similar character, with α(x) + β(y) replaced by α(x) + β(y) + α(u_1) + β(v_1). But in the last summand, more exactly in the summand determined by the pair (α, β) for which u_1 and v_1 were selected, we have the following situation for every x, y ∈ G. We see that the action of Γ_(u_1,v_1) increases the number of summands but decreases the degree of the polynomial functions by 1. Applying the operator p - 1 more times, we eventually annihilate this summand on the right-hand side.
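The key step above, choosing v so that α(u) + β(v) = 0, makes Γ_(u,v) leave the argument of the attached mapping unchanged while lowering the degree of the polynomial factor. This can be sketched numerically with α = β = id (the names `gamma`, `psi`, `term` are illustrative, not from the paper):

```python
def gamma(u, v, f):
    # (Gamma_{(u,v)} f)(x, y) = f(x + u, y + v) - f(x, y)
    return lambda x, y: f(x + u, y + v) - f(x, y)

# Term of the shape psi(alpha(x) + beta(y)) * x**p with alpha = beta = id, p = 2.
psi = lambda t: t**3
term = lambda x, y: psi(x + y) * x**2

# Choose v = -u so that alpha(u) + beta(v) = 0: the argument of psi is unchanged.
u = 5
reduced = gamma(u, -u, term)

# Result: psi(x + y) * ((x + u)**2 - x**2), so the degree in x dropped from 2 to 1.
expected = lambda x, y: psi(x + y) * ((x + u)**2 - x**2)
print(reduced(2, 3) == expected(2, 3))   # True
```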
Repeating the above procedure for arbitrary u_j ∈ G, j ∈ {1, ..., q}, we obtain the equations (2.7) for every x, y ∈ G. Here ψ̃_{r,n-r,(α,β)} and φ̃_i are new functions obtained by applying the operator Γ to the previous ones. In any case, the method shows that by repeating it we may arrive at the complete annihilation of the summands corresponding to M + 1 and finally replace (2.7) by the following.
Σ_{p=0}^{n} Σ_{(α,β)∈I_{p,n-p}} ψ̃_{p,n-p,(α,β)}(α(x) + β(y)) (x^p, y^{n-p}), (2.8)

for all x, y ∈ G and u_1, ..., u_q ∈ G. Now we may use the induction hypothesis and infer that Δ_{u_1,...,u_q} ϕ_N is a polynomial function. The estimation of the degree consists in realizing what is indeed happening. Applying the operator Γ_(u,v) (with properly selected u and v) to both sides, we "annihilate" one summand on the right-hand side of (2.1) at level 0. Thus, applying the operator Γ card K_0 times with arbitrary u's, we get rid of the summands constituting level 0. Then we apply Γ again to annihilate the level 1 summands, but we have to do it in two steps: first we decrease the degree of the summand by 1, and only then, in the second step, can we annihilate the summand. It thus takes 2 card K_1 applications to annihilate the terms of degree 1. Similarly, it takes 3 card K_2 to annihilate terms of the second degree and, in general, (n + 1) card K_n to annihilate terms of the n-th degree. On the left-hand side appears the operator Δ_{u_1,...,u_q} applied to ϕ_N; hence the degree of ϕ_N is not greater than Σ_{n=0}^{M} (n + 1) card K_n - 1.

Theorem 3.1.
Let the pair (f, F) of functions mapping R to R satisfy the equation

F(x + y) - F(x) - F(y) = yf(x) + xf(y) (3.1)

for all x, y ∈ R. Then f is a polynomial function of degree not greater than 2 and F is a polynomial function of degree not greater than 3.
Proof. Let us rewrite Eq. (3.1) in a form which is a particular case of (2.1). We then have K_0 = I_{0,0} and K_1 = I_{1,0} with card(K_0 ∪ K_1) = 2 and card K_1 = 1. Therefore (cf. (2.2)) f is a polynomial function of degree at most 2. Hence there exist monomial functions A_0, A_1, A_2 of orders 0, 1, 2, respectively, such that f = A_0 + A_1 + A_2 on R. On the other hand, taking (3.1) into consideration again and putting y = h in (3.1), we obtain after rearranging the equation (3.4). Since f is a polynomial function, the right-hand side of (3.4) is a polynomial function as well. Now, applying the Fréchet operator three times to both sides of (3.4), we see that the right-hand side vanishes, and so does the left-hand side. This means, however, that F is a polynomial function of degree at most one greater than the degree of f, i.e. of degree at most 3.
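For the Fechner–Gselmann equation (1.5) from the Introduction, F(x + y) - F(x) - F(y) = yf(x) + xf(y), the degree bounds above can be checked against concrete continuous pairs. The two pairs below are illustrative candidates (not a solution list taken from the paper), verified with exact rational arithmetic:

```python
from fractions import Fraction as Q

def lhs(F, x, y):
    # Cauchy difference of F
    return F(x + y) - F(x) - F(y)

def rhs(f, x, y):
    # right-hand side of the Fechner-Gselmann equation
    return y * f(x) + x * f(y)

# Two candidate pairs (f, F): (a*x, a*x^2) and (c*x^2, c*x^3/3).
a, c = Q(7), Q(5)
pairs = [
    (lambda x: a * x,    lambda x: a * x**2),
    (lambda x: c * x**2, lambda x: c * x**3 / 3),
]

pts = [(Q(1), Q(2)), (Q(-3), Q(4)), (Q(1, 2), Q(5, 3))]
print(all(lhs(F, x, y) == rhs(f, x, y) for f, F in pairs for x, y in pts))  # True
```

Note that both pairs respect the bounds of Theorem 3.1: f of degree at most 2, F of degree at most 3, with F always one degree above f.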
Remark 3.2. Taking qx, qy in place of x and y, respectively, in (3.1), using the rational homogeneity of the monomial summands of f and F, and collecting the terms with equal powers of q, we see that this equation can hold only if it holds separately for the monomial summands of corresponding orders.
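For the model equation (1.5), the substitution described in Remark 3.2 can be written out as follows (a sketch; f_k and F_k denote the monomial summands of order k, and the constant summand of F is handled separately):

```latex
% Substituting qx, qy (q rational) into F(x+y) - F(x) - F(y) = y f(x) + x f(y)
% and using rational homogeneity f_k(qx) = q^k f_k(x), F_k(qx) = q^k F_k(x):
\sum_{k \ge 0} q^{k+1}\bigl[F_{k+1}(x+y) - F_{k+1}(x) - F_{k+1}(y)\bigr]
  = \sum_{k \ge 0} q^{k+1}\bigl[y\, f_k(x) + x\, f_k(y)\bigr]
% for all rational q. Comparing the coefficients of each power q^{k+1} shows
% that the pair (f_k, F_{k+1}) of monomial summands must satisfy the equation
% on its own.
```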
Taking the above remark into account, we start with F = B*_0 = B_0. Then from (3.1) we infer that f = 0 and B_0 = 0. Passing to an additive F = B_1 paired with a constant f = A_0, it again follows that f = 0; thus B_1 is an arbitrary additive function and, in particular, A_0 = 0. The next step concerns the pair of summands of the subsequent orders, for which (3.1) yields the corresponding identities for every x, y ∈ R and hence for every x ∈ R. Now, let us pass to the case of the summand of f of order 2. Inserting its representation into (3.1) and performing some elementary calculations, and then putting y = 1, we obtain (3.8) for every x ∈ R. Taking (3.7) into account, we have by (3.9) the asserted form for every x ∈ R. Thus we have proved the following.
Here A_1 and B_1 are arbitrary additive functions, and a_2 ∈ R is an arbitrary constant. Now we are going to investigate a more general equation. We are interested in solving the equation (3.11) for every x, y ∈ R. First, we assume that both functions f and F are polynomial functions. Then, similarly as in the case of Theorem 3.1, the monomial summands of f and F of orders k and k + 1, respectively, satisfy (3.11). Later on we will discuss how Lemma 2.1 may be used to show that (in some situations) f and F are indeed polynomial functions.
A characteristic feature of (3.11) is that the existence of solutions depends on the behaviour of the sequence (S_k)_{k∈N∪{0}} given by (3.12) for all k ∈ N ∪ {0}. Let us observe that in the case of (3.1) we have n = 3, and the corresponding values of S_k are easily computed. Using our Lemma 2.1, we infer rather easily that f is a polynomial function; we assume that F is a polynomial function as well. The aim of the next theorem is to prove that, under the assumptions made, the solutions of (3.11) are continuous, except for an additive summand. Similarly as in the case of Theorem 3.1, it is enough to assume that f and F are monomials.
Further, let f : R → R be either 0 or a monomial function of order k, let F : R → R be a monomial function of order k + 1, and suppose that the pair (f, F) satisfies equation (3.11).

Proof. Let us start with the case k = 0. Then f = const = A_0 and F is additive. Putting x = y in (3.11), we obtain (taking into account the rational homogeneity of F) an identity for each x ∈ R. Using the assumption that S_0 ≠ 0, we derive an explicit formula for F valid for each x ∈ R, and hence F is a continuous function. In the case k = 1 we obtain that f = A_1 is additive and F is a quadratic function, i.e. the diagonalization of a biadditive symmetric function. Putting x = y in (3.11), we get an identity for every x ∈ R, whence (keeping in mind that S_1 ≠ 0, and denoting 2/S_1 by C_1) we obtain a formula for F valid for every x ∈ R. Substituting the above into (3.11), we obtain an equation valid for all x, y ∈ R. Comparing terms of the same degree on both sides of this equation, we obtain one identity for all x ∈ R and, symmetrically, another for all y ∈ R. Both of these equations hold if either A_1 = 0 or (3.19) is satisfied. Now, if A_1 = 0, then also F = 0, and we get the continuity of a solution (f, F) of (3.11) in this case. Further, let us look for non-zero solutions of (3.11). The existence of a nontrivial A_1 implies that (3.19) holds. So, in this case we have an identity for all x, y ∈ R, which actually means that taking an arbitrary additive function A_1 as f, we get that the pair (f, F) is a solution of (3.11) for k = 1. Of course, the solutions are mostly discontinuous. Now, let us proceed to the case k = 2. Observe that f is now the diagonalization of a biadditive, symmetric function A_2. Similarly as in the previous cases, putting x = y, we obtain from (3.11) an identity for every x ∈ R, whence, in view of S_2 ≠ 0, a formula for F valid for all x ∈ R. Denote 2/S_2 by C_2. Let us substitute the formula (3.21) into (3.11). We obtain (3.22) for all x, y ∈ R. Using the biadditivity of f (and hence its rational homogeneity), we rewrite it for all x, y ∈ R. Now, comparing the terms of the same degree on both sides of (3.22), we get first that either (3.23) holds or A_2 = 0. In the sequel we assume that A_2 ≠ 0; hence (3.23) holds. Let us compare the remaining terms. We get the corresponding identity for all x, y ∈ R.
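The phrase "a quadratic function, i.e. the diagonalization of a biadditive symmetric function" can be made concrete: if F(x) = B(x, x) with B biadditive and symmetric, then B is recovered from F by polarization. A minimal sketch with a continuous model and illustrative names:

```python
from fractions import Fraction as Q

# A quadratic function as the diagonalization of a biadditive symmetric map:
B = lambda x, y: Q(3) * x * y            # biadditive and symmetric
F = lambda x: B(x, x)                    # F(x) = B(x, x)

# Polarization recovers B from its diagonalization:
B_rec = lambda x, y: (F(x + y) - F(x) - F(y)) / 2

pts = [(Q(1), Q(2)), (Q(-3), Q(5, 2))]
print(all(B_rec(x, y) == B(x, y) for x, y in pts))   # True
```

The same polarization identity is what makes "putting x = y in (3.11)" effective: on the diagonal, the biadditive part of F collapses to a single rationally homogeneous term.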
Putting x = y above, and taking into account that A_2 ≠ 0, we infer the corresponding identity. Hence we may write (3.24) and

yA_2(x, y) = xA*_2(y) (3.25)

for all x, y ∈ R. Putting y = 1 into (3.24) and (3.25), we obtain explicit formulas for every x ∈ R; hence f and F are continuous. Now, let us pass to the situation where k ≥ 3. In general, if k ≥ 3 and f and F satisfy (3.11), then an identity holds for every x ∈ R, and hence another for every x ∈ R. Put C_k := 2/S_k. We can write (3.27) for all x, y ∈ R. Comparing the terms of equal degrees, we infer that either A_k = 0 or the condition in (3.14) holds. Assume from now on that we are interested in nontrivial solutions of (3.11). Continuing the comparison of the terms on both sides of (3.27), we get the respective identities for every j ∈ {2, ..., k - 1} (cf. (3.13)), for otherwise (putting x = y) we would get a contradiction. Note that (3.15) and (3.16) follow from the above. Taking this into account, as well as the definition of C_k, and comparing the remaining terms in (3.27), we get an identity for all x, y ∈ R. Using (3.15), we get (3.28), and analogously we infer (3.29), for all x, y ∈ R. Let us put x + y in place of x in (3.29). We obtain, after some easy though tedious calculations, expressions for the left-hand and the right-hand sides; comparing the terms of equal degrees on both sides, we obtain in particular the following sequence of equalities.
for every x, y ∈ R; in other words, putting y = 1, we obtain, for every x ∈ R, a formula which shows that A_k is continuous for k ≥ 3, and thus the proof is finished.
Remark 3.3. Using Lemma 2.1 exactly in the same way as we did in the proof of Theorem 3.1, we infer rather easily that if the functions F and f satisfy (3.11) then f must be a polynomial function. In the following simple example we observe that the function F is not necessarily polynomial.

Example 1.
Observe that the equation is satisfied by any even function F and f = 0.

AEM
The reason why the above example works is that the underlying equation, considered for all x ∈ R, has solutions which are not polynomial. If we consider a general linear equation (3.33) for all x, y ∈ R, and we assume that at least one of the pairs (α_i, β_i) is linearly independent of all the others, then, using Theorem 1.3, it may be shown that every solution of (3.33) is a polynomial function. Therefore it is natural to formulate the following problem.
Problem. Let (α_i, β_i), i = 1, ..., n, be such that there exists an i_0 ∈ {1, ..., n} satisfying condition (3.34). Is it possible that the functional equation (3.11) is satisfied by some functions f, F where F is not a polynomial function?
As we have seen (cf. Example 1), it is possible that Eq. (3.11) is satisfied by a pair (f, F) where F is not a polynomial function. However, we will give some examples of particular forms of this equation which have only polynomial solutions, and therefore we can apply Theorem 3.3 to solve these equations.

Proof. Similarly as before, from Lemma 2.1 we know that f is a polynomial function. Now it is enough to take x = y in (3.11) to show that F must also be polynomial.

Now we show some examples of equations (with nontrivial solutions) which may be solved with the use of the above proposition.

Example 2.
Assume that the functions f, F : R → R satisfy the functional equation (3.35) for all x, y ∈ R. Rearranging (3.35) appropriately, for all x, y ∈ R, we can see that f is a polynomial function of order at most 2. From Proposition 3.4 we know that F is also a polynomial function. Now we check the conditions of Theorem 3.3. If k = 0, then f(x) = b for some constant b ∈ R and all x ∈ R; further, S_0 = -2 ≠ 0 and, consequently, F is determined explicitly. Now let k = 1; then, again from Theorem 3.3, we infer that f is any additive function and F(x) = -xf(x) for all x ∈ R. If k = 2, 3, then it is easy to see that the solutions of (3.35) must vanish. Thus the general solution of this equation is obtained from the cases above, where a : R → R is additive and b is a constant.

Example 3.
Assume that the functions f, F : R → R satisfy the functional equation (3.36) for all x, y ∈ R. Rearranging (3.36) appropriately, for all x, y ∈ R, we can see that f is a polynomial function of order at most 2. From Proposition 3.4 we know that F is also a polynomial function. Now we check the conditions of Theorem 3.3. If k = 0, then f(x) = b for some constant b ∈ R and all x ∈ R; further, S_0 = -6 ≠ 0 and, consequently, F(x) = -(b/3)x for all x ∈ R. Now let k = 1; then S_1 = -6, but this time, again from Theorem 3.3, we infer that f = F = 0. If k = 2, then the solutions must be continuous since S_2 = -6 ≠ 0; moreover, f(x) = cx^2 and F(x) = -(c/3)x^3, x ∈ R, satisfy (3.36). Thus the general solution of this equation is given by the above formulas.

Observe that in Eq. (1.5) the left-hand side is the difference connected with the Cauchy equation. Since additive functions are monomial functions of order one, it is natural to ask whether this difference may be replaced by the difference connected with monomial functions of higher order or with polynomial functions. In the next part of the paper we consider functional equations constructed in such a way.

Lemma 3.1. If the functions f, F : R → R satisfy the equation

Δ^n_y F(x) = yf(x) + xf(y) (3.37)

for all x, y ∈ R, then f is a polynomial function of order at most n + 1 and F is a polynomial function of order not greater than n + 2.
Proof. We write (3.37) in a form which is a particular case of (2.1), valid for all x, y ∈ R. Similarly as before, using Lemma 2.1, we can see that f is a polynomial function of order at most (n + 1) + 1 - 1 = n + 1. Indeed, observe that in the present situation card(K_0 ∪ K_1) = n + 1 and card K_1 = 1, whence the estimation follows (cf. (2.2)). Further, applying the difference operator with span y (n + 2) times to both sides of (3.37), we get Δ^{2n+2}_y F(x) = 0 for all x ∈ R, i.e. F is a polynomial function of order at most 2n + 1. Now consider any k > n + 1. Since f is a polynomial function of order smaller than k, the monomial summand of F of order k + 1 satisfies (3.37) with f = 0. However, the n-th difference does not annihilate monomial functions of order k + 1 > n. This means that the summands of F of orders greater than n + 2 must be zero, i.e. F is a polynomial function of order at most n + 2.

Lemma 3.2. If the functions f, F : R → R satisfy the equation

Δ^n_y F(x) - n!F(y) = yf(x) + xf(y) (3.38)

for all x, y ∈ R, then f is a polynomial function of order at most n + 1 and F is a polynomial function of order not greater than n + 2.
Proof. We write (3.38) in a form which is a particular case of (2.1), valid for all x, y ∈ R. We see that in the present situation card(K_0 ∪ K_1) = n + 1 and card K_1 = 1. Now, applying Lemma 2.1 again, we can see (cf. (2.2)) that f is a polynomial function of order at most (n + 1) + 1 - 1 = n + 1. Further, applying the difference operator with span y (n + 2) times to both sides of (3.38), we get Δ^{2n+2}_y F(x) = 0 for all x ∈ R, i.e. F is a polynomial function of order at most 2n + 1. Now, similarly as in the respective part of the proof of Lemma 3.1, we can see that the order of F cannot be greater than n + 2. Indeed, the summands of F of orders k > n + 2 must satisfy (3.38) with the right-hand side equal to zero (since f has no terms of order k - 1), which is impossible.

Now we determine the solutions of Eq. (3.37).

Proof. From Lemma 3.1 we know that both f and F are polynomial functions. Take first k ∈ {0, 1, ..., n - 2} and assume that f is a monomial function of order k and that F is a monomial function of order k + 1. We can see that S_k ≠ 0, and from Theorem 3.3 we obtain f = 0. Now take k ∈ {n - 1, n, n + 1}; then S_k = 0 and, as previously, assume that f is a monomial function of order k and that F is a monomial function of order k + 1. We want to show that f = 0. Thus, for an indirect proof, assume that f ≠ 0; then F is also nonzero. Observe that this leads to a contradiction. Indeed, Eq. (3.37) cannot be satisfied, since in the expression Δ^n_y F(x) we have a term of order k + 1 with respect to y which is missing on the right-hand side.
We have proved that f = 0; thus F obviously satisfies Δ^n_y F(x) = 0 for all x, y ∈ R, i.e. F is a polynomial function of order at most n - 1.
In the next theorem we obtain the solution of Eq. (3.38).

Proof. If n = 1, then (3.38) reduces to (1.5), which is already solved. Thus we may assume that n ≥ 2. Using Lemma 3.2, we can see that the functions F and f are polynomial and, as usual, we will work with monomial functions. Thus let f and F be monomial functions of orders k and k + 1, respectively. We want to show that f = 0. Indeed, if f ≠ 0, then the right-hand side, which is of the form xf(y) + yf(x), contains the term yf(x) of order k with respect to the variable x. Such a term is missing in the expression Δ^n_y F(x) - n!F(y), since n ≥ 2. Therefore also in this case we have f = 0.
Using the equality f = 0 in (3.38), we get Δ^n_y F(x) - n!F(y) = 0 for all x, y ∈ R, for each monomial summand of F. This means that F is a monomial function of order n.

Remark 3.4. It is interesting that we have a nice set of solutions only for the difference stemming from Cauchy's equation. Thus the case n = 1 in (3.38) is exceptional. It seems that the right-hand side of (1.5) must be suitably modified in order to get a similar effect for n > 1.
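The step from Δ^n_y F(x) - n!F(y) = 0 to "F is a monomial function of order n" rests on Fréchet's identity Δ^n_y F(x) = n!F(y) for monomial functions of order n. A quick numerical sketch with the continuous model F(x) = c x^n and an illustrative helper `delta_pow`:

```python
import math
from fractions import Fraction as Q

def delta_pow(h, n, f):
    """Apply the difference operator with span h to f, n times."""
    g = f
    for _ in range(n):
        g = (lambda gg: lambda x: gg(x + h) - gg(x))(g)
    return g

# A monomial function of order n (here the continuous model c * x**n).
c, n = Q(4), 3
F = lambda x: c * x**n

# Frechet's identity for monomials of order n: the n-th difference with span y
# equals n! * F(y), independently of the base point x.
y = Q(5, 2)
print(delta_pow(y, n, F)(Q(7)) == math.factorial(n) * F(y))    # True
print(delta_pow(y, n, F)(Q(-1)) == math.factorial(n) * F(y))   # True
```

For monomials of order below n the n-th difference vanishes, and for orders above n it still depends on x, which is exactly why only the order-n summand survives.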
We can add one more class of functional equations which may be solved with the use of Theorem 3.3. If the functions f, F : R → R satisfy Eq. (3.39) for all x, y ∈ R, then the functions f and F are polynomial.
Proof. Similarly as before, from Lemma 2.1 we know that f is a polynomial function. Now it is enough to take y = 0 in (3.39) to show that F must also be polynomial. Note that the methods used in Lemmas 3.1 and 3.2 were needed to show that F is polynomial because, in the case of those equations, condition (3.34) is not satisfied.
We end the paper with a remark connecting the results obtained here with the topic called alienation of functional equations (for some details concerning the problem of alienation of functional equations see the survey paper of R. Ger and the third author [12]).