1 Introduction

First, we briefly recall the known results connected with the notion of polynomial functions. The history of polynomial functions goes back to the year 1909, when the paper by Fréchet [9] appeared. Let \(G\) and \(H\) be abelian groups (for some results concerning the noncommutative case see the papers of Almira and Shulman [3] and Shulman [31]) and let \(f:G\rightarrow H\) be a given function. The difference operator \(\Delta _h\) with span \(h\in G\) is defined by

$$\begin{aligned} \Delta _hf(x):=f(x+h)-f(x) \end{aligned}$$

and \(\Delta _h^n\) is defined recursively

$$\begin{aligned} \Delta _h^0f:=f,\;\Delta _h^{n+1}f:=\Delta _h(\Delta _h^nf)= \Delta _h\circ \Delta _h^nf,\;n\in {\mathbb {N}}. \end{aligned}$$

Using this operator, polynomial functions are defined in the following way.

Definition 1.1

A function \(f:{\mathbb {R}}\rightarrow {\mathbb {R}}\) is called a polynomial function of order at most n if it satisfies the equality

$$\begin{aligned} \Delta _h^{n+1}f(x)=0 \end{aligned}$$

for all \(x\in {\mathbb {R}}.\)
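As a quick illustration (a small computational sketch of ours, not part of the original material; the cubic and the use of sympy are our own choices), one can check symbolically that \(\Delta _h^{n+1}\) annihilates an ordinary polynomial of degree n:

```python
import sympy as sp

x, h = sp.symbols('x h')

def delta(expr, var, span):
    # one application of the difference operator with the given span
    return expr.subs(var, var + span) - expr

f = 5*x**3 - 2*x + 7          # an ordinary polynomial of degree n = 3
g = f
for _ in range(4):            # apply Delta_h exactly n + 1 = 4 times
    g = delta(g, x, h)

print(sp.expand(g))           # prints 0, in accordance with Definition 1.1
```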

Remark 1.1

It is known (see e.g. [32] or [7]) that a function \(f:{\mathbb {R}}\rightarrow {\mathbb {R}}\) is polynomial of order at most n (in the sense of Definition 1.1) if, and only if, it satisfies the equation

$$\begin{aligned} \Delta _{h_1,\ldots ,h_{n+1}}f(x)=0, \end{aligned}$$
(1.1)

for every \(h_1,\ldots , h_{n+1}, x\in {\mathbb {R}},\) where \(\Delta _{h_1,\ldots , h_{n+1}}= \Delta _{h_{n+1}}\circ \cdots \circ \Delta _{h_1}.\)

Polynomial functions are sometimes called generalized polynomials. The form of the solutions of this equation was obtained in various settings by, among others, Mazur and Orlicz [22], Van der Lijn [35] and Ɖoković [7]. To describe the form of polynomial functions we need the notion of multiadditive functions. A function \(A_n:{\mathbb {R}}^n\rightarrow {\mathbb {R}}\) is \(n-\)additive if, and only if, for every \(i\in \{1,2,\ldots ,n\}\) and for all \(x_1,\ldots ,x_n,y_i\in {\mathbb {R}}\) we have

$$\begin{aligned} \begin{aligned}&A_n(x_1,\ldots , x_{i-1},x_i+y_i,x_{i+1},\dots ,x_n)\\&\quad =A_n(x_1,\dots ,x_{i-1},x_i,x_{i+1},\dots ,x_n) +A_n(x_1,\dots ,x_{i-1},y_i,x_{i+1},\dots ,x_n). \end{aligned} \end{aligned}$$

Further, for a function \(A_n:{\mathbb {R}}^n\rightarrow {\mathbb {R}},\) the diagonalization \(A_n^*\) is defined by

$$\begin{aligned} A_n^*(x):=A_n(x,\dots ,x). \end{aligned}$$

Now we can present the mentioned characterization of polynomial functions.

Theorem 1.2

Let \(f:{\mathbb {R}}\rightarrow {\mathbb {R}}\) be a polynomial function of order at most n. Then there exist unique \(k-\)additive functions \(A_k:{\mathbb {R}}^k\rightarrow {\mathbb {R}},\) \(k\in \{1,\dots ,n\},\) and a constant \(A_0\) such that

$$\begin{aligned} f(x)=A_0+A_1^*(x)+\cdots +A_n^*(x), \end{aligned}$$
(1.2)

where \(A_k^*\) is a diagonalization of \(A_k.\) Conversely, every function of the shape (1.2) is a polynomial function of order at most n.

A very important result is due to L. Székelyhidi, who proved that every solution of a very general linear equation is a polynomial function (see [32, Theorem 9.5]; cf. also Wilson [36]).

Theorem 1.3

Let G be an Abelian semigroup, S an Abelian group, n a nonnegative integer, \(\varphi _i,\psi _i\) additive functions from G to G and let \(\varphi _i(G)\subset \psi _i(G),\;i\in \{1,\dots ,n\}.\) If functions \(f,f_i:G\rightarrow S\) satisfy the equation

$$\begin{aligned} f(x)+\sum _{i=1}^nf_i(\varphi _i(x)+\psi _i(y))=0 \end{aligned}$$
(1.3)

then f satisfies (1.1).

With a result of this kind at hand, it is much easier to solve linear equations, because it is no longer necessary to deal with each equation separately. Instead, we may formulate results which are valid for large classes of equations. It is even possible to write computer programs which solve linear functional equations; see the papers of Gilányi [13] and Borus and Gilányi [5].

Székelyhidi’s result, though very nice and general, does not close the research on polynomial functions. In [26] a lemma more general than Theorem 1.3 was used by the third author to obtain the solutions of the equation

$$\begin{aligned} f(y) = \sum \limits _{k=0}^{n-2} \gamma _k(x)(y-x)^k + \phi \left( \left( 1-\frac{1}{n}\right) x + \frac{1}{n}y\right) (y-x)^{n-1} \end{aligned}$$
(1.4)

connected with the Taylor formula. As we can see, Eq. (1.4) is not linear and, thereby, the family of equations having only polynomial solutions is enriched. Later on, in papers by Koclȩga-Kulpa, Wa̧sowicz and the fourth author [16,17,18,19,20,33], the mentioned lemma was used to deal with functional equations connected with numerical analysis. For a systematic approach to this topic see the monograph of the fourth author [33]. Let us also cite another monograph, by Sahoo and Riedel [29], where other functional equations stemming from mean value theorems are discussed. Actually, there are several examples of results dealing with solving functional equations without any, or under weak, regularity assumptions; let us mention e.g. [1, 2, 4, 6, 10, 11, 14, 15, 24, 25, 27, 28] or [30].

The present paper is inspired by the equation

$$\begin{aligned} F(x + y) - F(x) - F(y) = yf(x) + xf(y) \end{aligned}$$
(1.5)

(solved by Fechner and Gselmann in [8]), where f is multiplied by two different expressions. In the second section of the paper we present a lemma which generalizes results from the third author’s and Lisak’s papers [21, 26] and which shows that the solutions of a very general equation must be polynomial. The solutions of (1.5) must be polynomial, but it is interesting that some of their monomial summands must be continuous, whereas others may be arbitrary monomial functions. In the third section we deal with generalizations of (1.5) and we explain this behaviour.

2 A lemma

Let us begin with the following general Lemma (cf. Wilson [36], Székelyhidi [32], the third author [26], Pawlikowska [23] and Lisak and the third author [21]). Before we state the Lemma, let us adopt the following notation. Let G and H be commutative groups. Then \(SA^{i}(G;H)\) denotes the group of all i-additive, symmetric mappings from \(G^{i}\) into H for \(i\ge 2\), while \(SA^{0}(G;H)\) denotes the family of constant functions from G to H and \(SA^{1}(G;H)=\text{ Hom }(G;H)\). We also denote by \({\mathcal {I}}\) the subset of \(\text{ Hom }(G;G)\times \text{ Hom }(G;G)\) containing all pairs \((\alpha ,\beta )\) for which \(\text{ Ran }(\alpha )\subset \text{ Ran }(\beta )\). Furthermore, we adopt the convention that a sum over an empty set of indices equals 0. For an \(A_i\in SA^{i}(G;H)\) we denote by \(A_i^*\) the diagonalization of \(A_i, \, i\in {\mathbb {N}}\cup \{0\}.\) Let us also introduce the operator \(\Gamma : G\times G\times H^{G\times G} \rightarrow H^{G\times G}\) defined as follows. For each \(\phi :G\times G\rightarrow H\) and each \((u,v)\in G\times G\) we set

$$\begin{aligned} \Gamma _{(u,v)}\phi (x,y):= \phi (x+u,y+v) - \phi (x,y), \end{aligned}$$

for each \((x,y)\in G\times G.\) In fact, \(\Gamma \) is nothing else but the operator \(\Delta \) defined above, applied to functions of two variables. However, we wish to stress the difference between the one- and two-variable settings, which is why we denote the new operator by a different symbol.

Lemma 2.1

Fix \(N, \, M\in {\mathbb {N}}\cup \{0\}\) and let \(I_{p,n-p}, \, 0\le p\le n, \, n\in \{0,\dots ,M\},\) be finite subsets of \({\mathcal {I}}\). Suppose further that H is uniquely divisible by N! and let functions \(\varphi _i:G\rightarrow SA^i(G;H),\, i\in \{0,\dots ,N\},\) and \(\psi _{p,n-p,(\alpha ,\beta )}:G\rightarrow SA^n(G;H),\, (\alpha ,\beta )\in I_{p,n-p},\, 0\le p \le n, \, n\in \{0,\dots ,M\},\) satisfy

$$\begin{aligned}&\varphi _N(x)(y^N) + \sum _{i=0}^{N-1}\varphi _i(x)(y^i) \nonumber \\&\quad =\sum _{n=0}^{M}\sum _{p=0}^n \sum _{(\alpha ,\beta )\in I_{p,n-p}}\psi _{p,n-p,(\alpha ,\beta )}\left( \alpha (x) + \beta (y)\right) (x^p,y^{n-p}) \end{aligned}$$
(2.1)

for every \(x,y\in G.\) Then \(\varphi _N\) is a polynomial function of degree not greater than

$$\begin{aligned} \sum _{n=0}^M \mathrm{card}\left( \bigcup _{s=n}^M K_s\right) - 1, \end{aligned}$$
(2.2)

where \(K_s= \bigcup _{p=0}^s I_{p,s-p}\) for each \(s\in \{0,\dots ,M\}.\)

Proof

Let us fix an \(N\in {\mathbb {N}}\cup \{0\}.\) We prove the Lemma using induction with respect to M. Let us start with \(M=-1\): then the right-hand side of Eq. (2.1) is a sum over an empty set of indices and hence vanishes. Thus (2.1) reduces to

$$\begin{aligned} \varphi _N(x)(y^N) + \sum _{i=0}^{N-1}\varphi _i(x)(y^i) = 0, \end{aligned}$$
(2.3)

for each \(x, \, y\in G.\) Thus the polynomial in y with coefficients \(\varphi _i(x), \, i\in \{0,\dots ,N\},\) vanishes identically. It is not difficult to see that this is equivalent to the system of identities \(\varphi _i =0, \, i\in \{0, \dots ,N\}.\) In particular \(\varphi _N\) is a polynomial function, identically equal to 0, and its degree is hence estimated by 0.

Now suppose that our Lemma holds for some \(M\ge -1\) and consider the equation

$$\begin{aligned}&\varphi _N(x)(y^N) + \sum _{i=0}^{N-1}\varphi _i(x)(y^i) \nonumber \\&\quad =\sum _{n=0}^{M+1}\sum _{p=0}^n \sum _{(\alpha ,\beta )\in I_{p,n-p}}\psi _{p,n-p,(\alpha ,\beta )}\left( \alpha (x) + \beta (y)\right) (x^p,y^{n-p}) \end{aligned}$$
(2.4)

for every \(x,y\in G.\) Assume that \(K_{M+1} \ne \emptyset \); otherwise (2.4) reduces to (2.1) and we are done. Then \(I_{p,M+1-p}\ne \emptyset \) for some \(p\in \{0,\dots ,M+1\}.\) Fix such a p and write \(I_{p,M+1-p} = \{(\alpha _j, \beta _j): j\in \{1,\dots , m\}\}\) for some \(m\in {\mathbb {N}}.\) Choose a pair \((\alpha , \beta )\in I_{p, M+1-p}\) and fix a \(u_1\in G \) arbitrarily. For this \(u_1\) take a \(v_1\in \beta ^{-1}(\{\alpha (-u_1)\})\) (such a \(v_1\) exists because \(\text{ Ran }(\alpha )\subset \text{ Ran }(\beta )\)), so that \(\alpha (u_1) +\beta (v_1) =0.\) Now let us apply the operator \(\Gamma _{(u_1,v_1)}\) to both sides of (2.4). On the left-hand side we obtain

$$\begin{aligned}&\varphi _N(x+u_1)((y+v_1)^N) -\varphi _N(x)(y^N) +\sum _{i=0}^{N-1} \Gamma _{(u_1,v_1)}\varphi _i(x)(y^i) \nonumber \\&\quad = \varphi _N(x+u_1)(y^N) -\varphi _N(x)(y^N) + \sum _{k=1}^N{N \atopwithdelims ()k}\varphi _N(x+u_1)(y^{N-k},v_1^k ) \nonumber \\&\qquad + \sum _{i=0}^{N-1}\Gamma _{(u_1,v_1)}\varphi _i(x)(y^i) = \Delta _{u_1}\varphi _N(x)(y^N) + \sum _{k=1}^N{N \atopwithdelims ()k}\Delta _{u_1} \varphi _N(x)(y^{N-k}, v_1^k) \nonumber \\&\qquad + \sum _{k=1}^N{N \atopwithdelims ()k}\varphi _N(x)(y^{N-k},v_1^k) + \sum _{i=0}^{N-1}\Gamma _{(u_1,v_1)}\varphi _i(x)(y^i). \end{aligned}$$
(2.5)

Denoting \({\hat{\varphi }}_N: = \Delta _{u_1}\varphi _N\) we again obtain the left-hand side of Eq. (2.1), but with \({\hat{\varphi }}_N\) instead of \(\varphi _N\) (note that the remaining summands may be written as polynomial functions in y of degrees lower than N, and they can be rearranged in such a way that the left-hand side is again a finite sum of polynomial functions in y with coefficients dependent on x).

Let us now look at the right-hand side. If we apply \(\Gamma _{(u_1,v_1)}\) to the remaining summands, it transforms them into summands of a similar character, with \(\alpha (x) + \beta (y)\) replaced by \(\alpha (x) + \beta (y)+ \alpha (u_1) + \beta (v_1).\) But in the summand determined by the pair \((\alpha , \beta )\) for which \(u_1\) and \(v_1\) were selected, we have the following situation:

$$\begin{aligned}&\psi _{p,M+1-p,(\alpha ,\beta )}\left( \alpha (x) + \beta (y)+ \alpha (u_1) +\beta (v_1)\right) ((x+u_1)^p,(y+v_1)^{M+1-p}) \nonumber \\&\qquad -\psi _{p,M+1-p,(\alpha ,\beta )}\left( \alpha (x) + \beta (y)\right) (x^p,y^{M+1-p}) \nonumber \\&\quad =\psi _{p,M+1-p,(\alpha ,\beta )}\left( \alpha (x) + \beta (y)\right) ((x+u_1)^p,(y+v_1)^{M+1-p}) \nonumber \\&\qquad - \psi _{p,M+1-p,(\alpha ,\beta )}\left( \alpha (x) + \beta (y)\right) (x^p,y^{M+1-p}) \nonumber \\&\quad =\psi _{p,M+1-p,(\alpha ,\beta )}\left( \alpha (x) + \beta (y)\right) (x^p,y^{M+1-p}) \nonumber \\&\qquad - \psi _{p,M+1-p,(\alpha ,\beta )}\left( \alpha (x) + \beta (y)\right) (x^p,y^{M+1-p}) \nonumber \\&\qquad + \sum _{(s,t)\in \{0,\dots ,p\}\times \{0,\dots ,M+1-p\}\setminus \{(0,0)\}}{p\atopwithdelims ()s}{M+1-p \atopwithdelims ()t} \nonumber \\&\qquad \times \psi _{p,M+1-p,(\alpha ,\beta )}\left( \alpha (x) + \beta (y)\right) (x^{p-s},u_1^s,y^{M+1-p-t},v_1^t) \nonumber \\&\quad = \sum _{(s,t)\in \{0,\dots ,p\}\times \{0,\dots ,M+1-p\}\setminus \{(0,0)\}}{p\atopwithdelims ()s}{M+1-p \atopwithdelims ()t} \nonumber \\&\qquad \times \psi _{p,M+1-p,(\alpha ,\beta )}\left( \alpha (x) + \beta (y)\right) (x^{p-s},u_1^s,y^{M+1-p-t},v_1^t) \end{aligned}$$
(2.6)

for every \(x, \, y \in G.\) We see that the action of \(\Gamma _{(u_1,v_1)}\) increases the number of summands but decreases the degree of the polynomial functions by 1. Applying the operator \(M+1\) more times, with properly selected pairs, we will eventually annihilate this summand on the right-hand side. Repeating the above procedure for arbitrary \(u_j\in G, \, j\in \{1,\dots , q\},\) we obtain the equations (cf. (2.5) and (2.6))

$$\begin{aligned}&\Delta _{u_1,\dots ,u_q}\varphi _N(x)(y^N) + \sum _{i=0}^{N-1} {\hat{\varphi }}_i(x)(y^i) \nonumber \\&\quad =\sum _{n=0}^M\sum _{p=0}^n \sum _{(\alpha ,\beta )\in I_{p,n-p}}{\hat{\psi }}_{p,n-p,(\alpha ,\beta )}\left( \alpha (x) + \beta (y)\right) (x^p,y^{n-p}) \nonumber \\&\qquad + \sum _{j=0, j\ne p}^{M+1} \sum _{(\alpha ,\beta )\in I_{j,M+1-j}}{\hat{\psi }}_{j,M+1-j,(\alpha ,\beta )}\left( \alpha (x) + \beta (y)\right) (x^j,y^{M+1-j}), \end{aligned}$$
(2.7)

for every \(x,\, y\in G.\) Here \({\hat{\psi }}_{r, n-r, (\alpha , \beta )}\) and \({\hat{\varphi }}_i\) are new functions obtained after applying the operator \(\Gamma \) to the previous ones. In any case, the method shows that, repeating it, we may arrive at the complete annihilation of the summands corresponding to the level \(M+1\) and finally replace (2.7) by the following:

$$\begin{aligned}&\Delta _{u_1,\dots ,u_q}\varphi _N(x)(y^N) + \sum _{i=0}^{N-1} {\hat{\varphi }}_i(x)(y^i)\nonumber \\&\quad = \sum _{n=0}^M\sum _{p=0}^n \sum _{(\alpha ,\beta )\in I_{p,n-p}}{\hat{\psi }}_{p,n-p,(\alpha ,\beta )}\left( \alpha (x) + \beta (y)\right) (x^p,y^{n-p}), \end{aligned}$$
(2.8)

for all \(x, \, y\in G \) and \(u_1,\dots , u_q \in G.\) Now we may use the induction hypothesis and infer that

$$\begin{aligned} \Delta _{u_1,\dots ,u_q}\varphi _N \end{aligned}$$

is a polynomial function.

To estimate the degree, let us examine what actually happens. Applying the operator \(\Gamma _{(u,v)}\) (with properly selected u and v) to both sides, we “annihilate” one summand on the right-hand side of (2.1) at level 0. Thus, applying the operator \(\Gamma \) \(\mathrm{card}K_0\) times with arbitrary u’s, we get rid of the summands constituting level 0. Then we apply \(\Gamma \) again to annihilate the level 1 summands, but we have to do it in two steps. First we decrease the degree of a summand by 1 and only then, in step two, can we annihilate the summand. It thus takes \(2\mathrm{card}K_1\) applications to annihilate the terms of degree 1. Similarly, it takes \(3\mathrm{card}K_2\) applications to annihilate terms of the second degree and, in general, \((n+1)\mathrm{card}K_n\) applications to annihilate terms of the n-th degree. On the left-hand side the expression \(\Delta _{u_1,\dots ,u_q}\varphi _N(x)(y^N)\) appears, where

$$\begin{aligned} q=\sum _{n=0}^M \mathrm{card}\left( \bigcup _{s=n}^M K_s\right) . \end{aligned}$$

\(\square \)
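The bound (2.2) is easy to evaluate mechanically. The following sketch (a hypothetical helper of ours, not taken from the paper) encodes the pairs \((\alpha ,\beta )\) as labels, with I[n][p] playing the role of \(I_{p,n-p}\), and computes the number q of applications of \(\Gamma \) together with the resulting degree estimate; the data shown are those used for Eq. (3.1) in Theorem 3.1 below.

```python
def degree_bound(I):
    """Degree bound (2.2); I[n][p] is the set of labels of the pairs in I_{p,n-p}."""
    M = len(I) - 1
    K = [set().union(*I[s]) for s in range(M + 1)]           # K_s = union over p of I_{p,s-p}
    q = sum(len(set().union(*K[n:])) for n in range(M + 1))  # number of Gamma applications
    return q - 1

# I_{0,0} = {(0,id), (id,id)},  I_{0,1} = {},  I_{1,0} = {(0,id)}
I = [[{('0', 'id'), ('id', 'id')}],      # level n = 0
     [set(), {('0', 'id')}]]             # level n = 1
print(degree_bound(I))                   # 2, the bound obtained in Theorem 3.1
```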

3 Results

Let us solve Eq. (1.5), applying our Lemma 2.1.

Theorem 3.1

Let the pair \((f,F)\) of functions mapping \({\mathbb {R}}\) to \({\mathbb {R}}\) satisfy the equation

$$\begin{aligned} F(x + y) - F(x) - F(y) = yf(x) + xf(y), \end{aligned}$$
(3.1)

for all \(x, \, y\in {\mathbb {R}}.\) Then f is a polynomial function of degree not greater than 2 and F is a polynomial function of degree not greater than 3.

Proof

Let us rewrite Eq. (3.1) in the form

$$\begin{aligned} f(x)y + F(x) = -f(y)x + F(x + y) - F(y) \end{aligned}$$
(3.2)

for all \(x, y\in {\mathbb {R}}.\) If we take now \(G=H={\mathbb {R}}, \, N=1, \, M=1,\) \( I_{0,0}= \{(0,\mathrm{id}), (\mathrm{id}, \mathrm{id})\},\) \(\psi _{0,0,(0,\mathrm{id})} = -F, \, \psi _{0,0,(\mathrm{id},\mathrm{id})} = F,\) \(I_{0,1}=\emptyset , \, I_{1,0}= \{(0,\mathrm{id})\},\) \( \psi _{1,0,(0,\mathrm{id})} = -f,\) \(\varphi _1 = f, \, \varphi _0 = F\) then we see that (3.2) is a particular case of (2.1). We also have \(K_0= I_{0,0}\) and \(K_1= I_{1,0}\) with \(\mathrm{card }(K_0\cup K_1) = 2\) and \(\mathrm{card} K_1 =1.\) Therefore (cf. (2.2)) f is a polynomial function of degree at most 2. Hence there exist \(A_0\in SA^0({\mathbb {R}},{\mathbb {R}}), A_1\in SA^1({\mathbb {R}}, {\mathbb {R}})\) and \( A_2\in SA^2({\mathbb {R}}, {\mathbb {R}})\) such that f is given by

$$\begin{aligned} f(x) = A_0^* + A_1^*(x) + A_2^*(x) \end{aligned}$$
(3.3)

for every \(x\in {\mathbb {R}}.\) On the other hand, taking (3.1) into consideration again and putting \(y=h\) in (3.1) we obtain after rearranging the equation

$$\begin{aligned} F(x+h)-F(x) = hf(x) + xf(h) + F(h), \end{aligned}$$

or

$$\begin{aligned} \Delta _hF(x) = hf(x) + xf(h) + F(h). \end{aligned}$$
(3.4)

Since f is a polynomial function, we see that the right-hand side of the above is a polynomial function. Now, applying the Fréchet operator three times to both sides of (3.4), we see that the right-hand side vanishes and so does the left-hand side. This means, however, that F is a polynomial function of order greater by 1 than the order of f, i.e. of order at most 3. \(\square \)

Remark 3.1

In fact we have shown above that the class of polynomial functions has the so-called double difference property; more exactly, if DF defined by \(DF(x,y) = F(x+y) - F(x) - F(y)\) is a polynomial function of two variables, then \(F=a+p,\) where \(a:{\mathbb {R}}\longrightarrow {\mathbb {R}}\) is an additive function and \(p:{\mathbb {R}}\longrightarrow {\mathbb {R}}\) is a polynomial function.

Let \(B_i\in SA^i({\mathbb {R}},{\mathbb {R}}), \, i\in \{0,\dots ,3\}\) be such that

$$\begin{aligned} F(x) = B_0^* + B_1^*(x) + B_2^*(x) + B_3^*(x) \end{aligned}$$
(3.5)

for every \(x\in {\mathbb {R}}.\)

Remark 3.2

Taking \(qx\), \(qy\) with rational q in place of x and y in (3.1), respectively, using the rational homogeneity of the monomial summands of f and F and collecting the terms with equal powers of q, we can see that this equation is possible only if it holds separately for the monomial summands of corresponding orders.

Taking the above remark into account, we start with \(F=B_0^*=B_0.\) Then from (3.1) we infer that \(f=0\) and so

$$\begin{aligned} -B_0 = 0. \end{aligned}$$

In particular, \(F(0)=0.\) Let us now assume that \(F(x)=B_1^*(x)=B_1(x).\) Then necessarily (cf. (3.1))

$$\begin{aligned} 0= F(2x)-2F(x) = 2xf(x) \end{aligned}$$

whence it follows that \(f=0.\) Thus \(B_1\) is an arbitrary additive function and, in particular, \(A_0=0.\)

The next step is

$$\begin{aligned} F(x) = B_2^*(x) = B_2(x,x) \end{aligned}$$

for every \(x\in {\mathbb {R}}.\) From (3.1) we derive

$$\begin{aligned} 2B_2(x,y) = xA_1(y) +yA_1(x) \end{aligned}$$

for every \(x, y\in {\mathbb {R}}.\) Hence

$$\begin{aligned} B_2^*(x) = xA_1(x) \end{aligned}$$
(3.6)

for every \(x\in {\mathbb {R}}.\) Now, let us pass to the case where \(F(x)= B_3^*(x)\) for every \(x\in {\mathbb {R}}.\) Then we have \(f(x)=A_2^*(x), \, x\in {\mathbb {R}}\) and from (3.1) we get, taking \(x=y\)

$$\begin{aligned} 6B_3^*(x) = 2xA_2^*(x) \end{aligned}$$

whence

$$\begin{aligned} B_3^*(x) = \frac{1}{3}xA_2^*(x) \end{aligned}$$
(3.7)

for every \(x\in {\mathbb {R}}.\) Inserting the above equality into (3.1), we obtain

$$\begin{aligned} (x+y)A_2^*(x+y) - xA_2^*(x) - yA_2^*(y) = 3\left( xA_2^*(y) + yA_2^*(x)\right) \end{aligned}$$

for every \(x, \, y\in {\mathbb {R}}.\) After some elementary calculations we hence obtain

$$\begin{aligned} (x+y)A_2(x,y) = yA_2^*(x) + xA_2^*(y) \end{aligned}$$

for every \(x,\, y\in {\mathbb {R}}.\) Putting here \(y=1,\) we obtain

$$\begin{aligned} xA_2(x,1) + A_2(x,1) = A_2(x,x) +xA_2^*(1) \end{aligned}$$
(3.8)

for every \(x\in {\mathbb {R}}.\) Comparing in (3.8) the monomial summands of equal orders (cf. Remark 3.2), we obtain

$$\begin{aligned} A_2(x,1) = xA_2^*(1) \end{aligned}$$

and

$$\begin{aligned} A_2^*(x)= xA_2(x,1) = x^2A_2^*(1) \end{aligned}$$
(3.9)

for every \(x\in {\mathbb {R}}.\) Taking (3.7) into account, we have by (3.9)

$$\begin{aligned} B_3^*(x) = \frac{1}{3}x^3A_2^*(1) \end{aligned}$$
(3.10)

for every \(x\in {\mathbb {R}}.\) Thus we have proved the following.

Proposition 3.2

The pair \((f,F)\) is a solution of (3.1) if, and only if

  • \(f(x) = A_1(x) + a_2x^2,\)

  • \(F(x) = B_1(x) + xA_1(x) + \frac{1}{3}a_2x^3,\)

for all \(x\in {\mathbb {R}}.\) Here \(A_1\) and \(B_1\) are arbitrary additive functions, and \(a_2\in {\mathbb {R}}\) is an arbitrary constant.
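The solution can be double-checked by direct substitution. Here is a small sympy sketch using the continuous representatives \(A_1(x)=a_1x\) and \(B_1(x)=b_1x\) of the additive summands (an assumption made only for this check; a general additive function need not be of this form):

```python
import sympy as sp

x, y, a1, b1, a2 = sp.symbols('x y a1 b1 a2')

f = lambda t: a1*t + a2*t**2                                # f = A_1 + a_2 x^2
F = lambda t: b1*t + t*(a1*t) + sp.Rational(1, 3)*a2*t**3   # F = B_1 + xA_1(x) + a_2 x^3/3

residual = F(x + y) - F(x) - F(y) - (y*f(x) + x*f(y))
print(sp.simplify(residual))   # prints 0, so (3.1) holds
```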

Now we are going to investigate a more general equation. We are interested in solving the equation

$$\begin{aligned} \sum _{i=1}^n \gamma _i F(\alpha _ix + \beta _iy) = xf(y) + yf(x) \end{aligned}$$
(3.11)

for every \(x,\, y\in {\mathbb {R}}.\) First, we assume that both functions f and F are polynomial functions. Then, similarly as in the case of Theorem 3.1, the monomial summands of f and F of orders k and \(k+1,\) respectively, satisfy (3.11). Later on we will discuss how Lemma 2.1 may be used to show that (in some situations) f and F are indeed polynomial functions.

A characteristic feature of (3.11) is the dependence of the existence of solutions on the behaviour of the sequence \((S_k)_{k\in {\mathbb {N}}\cup \{0\}}\) given by

$$\begin{aligned} S_k=\sum _{i=1}^n \gamma _i(\alpha _i + \beta _i)^{k+1}, \end{aligned}$$
(3.12)

for all \(k\in {\mathbb {N}}\cup \{0\}.\) Let us observe that in the case of (3.1) we have \(n=3\) and \(\gamma _1=\alpha _1=\beta _1=\alpha _2=\beta _3=1,\) and \(\beta _2 = \alpha _3=0\) while \(\gamma _2=\gamma _3=-1.\) We have \(S_k= 2^{k+1}-2=2(2^k -1), \, k\in {\mathbb {N}},\) in particular \(S_0= 1\cdot 2 -1\cdot 1 - 1\cdot 1 = 0.\)
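This computation is easily reproduced; the following short sketch (our own illustration) evaluates (3.12) for the coefficients of Eq. (3.1):

```python
def S(k, gammas, alphas, betas):
    # the sequence (3.12)
    return sum(g * (a + b)**(k + 1) for g, a, b in zip(gammas, alphas, betas))

gammas, alphas, betas = [1, -1, -1], [1, 1, 0], [1, 0, 1]   # coefficients of Eq. (3.1)
print([S(k, gammas, alphas, betas) for k in range(5)])
# [0, 2, 6, 14, 30], i.e. S_k = 2*(2**k - 1); in particular S_0 = 0
```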

Using our Lemma 2.1 we infer rather easily that f is a polynomial function. We assume that also F is a polynomial function. The aim of the next theorem is to prove that, under the assumptions made, solutions of (3.11) are continuous, except for an additive summand. Similarly as in the case of Theorem 3.1, it is enough to assume that f and F are monomials.

Theorem 3.3

Let \(k\in {\mathbb {N}}\cup \{0\}\) and let \(\gamma _i \in {\mathbb {R}},\, \alpha _i,\beta _i \in {\mathbb {Q}}\) be such that \(S_k\ne 0\) (cf. (3.12)). Further, let \(f:{\mathbb {R}}\longrightarrow {\mathbb {R}}\) be either 0 or a monomial function of order k, let \(F:{\mathbb {R}}\longrightarrow {\mathbb {R}}\) be a monomial function of order \(k+1\) and suppose that the pair \((f,F)\) satisfies equation (3.11).

  (i) If \(k=1\) then either \(\sum _{i=1}^n \gamma _i\alpha _i^2\ne 0\ne \sum _{i=1}^n \gamma _i\beta _i^2\) and \(f=F =0\) is the only solution of (3.11), or \(\sum _{i=1}^n \gamma _i\alpha _i ^2=\sum _{i=1}^n \gamma _i\beta _i^2= 0\) and f is an arbitrary additive function while F is given by \(F(x) =\frac{2}{S_1}xf(x).\)

  (ii) If \(k=0\) or \(k\ge 2\) then both f and F are continuous.

Moreover, for every \(k>2\) and for every \(j\in \{2,\dots , k-1\},\) if \(f\ne 0\) then

$$\begin{aligned} \sum _{i=1}^n \gamma _i \alpha _i^j \beta _i^{k+1-j} = 0, \end{aligned}$$
(3.13)
$$\begin{aligned} \sum _{i=1}^n \gamma _i \alpha _i^{k+1} = \sum _{i=1}^n \gamma _i \beta _i^{k+1} =0, \end{aligned}$$
(3.14)

which implies

$$\begin{aligned} S_k=2(k+1)\sum _{i=1}^n \gamma _i\alpha _i^k \beta _i =2(k+1)\sum _{i=1}^n\gamma _i \alpha _i\beta _i^k, \end{aligned}$$
(3.15)

and obviously

$$\begin{aligned} \sum _{i=1}^n \gamma _i\alpha _i^k \beta _i =\sum _{i=1}^n\gamma _i \alpha _i\beta _i^k. \end{aligned}$$
(3.16)

Proof

Let us start with the case \(k=0.\) Then \(f=\mathrm{const}=A_0\) and F is additive. Putting \(x=y\) in (3.11) we obtain (taking into account the rational homogeneity of F)

$$\begin{aligned} S_0F(x) = 2xA_0, \end{aligned}$$

for each \(x\in {\mathbb {R}}. \) Using the assumption that \(S_0\ne 0\) we get

$$\begin{aligned} F(x) = \frac{2}{S_0}A_0 x, \end{aligned}$$

for each \(x\in {\mathbb {R}},\) and hence F is a continuous function.

In the case \(k=1\) we obtain that \(f=A_1\) is additive and F is a quadratic function, i.e. the diagonalization of a biadditive symmetric function; moreover, \(S_1= \sum _{i=1}^n \gamma _i(\alpha _i+ \beta _i)^2.\) Putting \(x=y\) in (3.11), we obtain

$$\begin{aligned} S_1 F(x) = 2xA_1(x), \end{aligned}$$

for every \(x\in {\mathbb {R}}, \) whence (keeping in mind that \(S_1\ne 0\)) we get (denoting \(\frac{2}{S_1}\) by \(C_1\))

$$\begin{aligned} F(x) = C_1 xA_1(x) \end{aligned}$$
(3.17)

for every \(x\in {\mathbb {R}}.\) Substituting the above into (3.11), we obtain

$$\begin{aligned}&C_1\sum _{i=1}^n \gamma _i \left[ \left( \alpha _i x+ \beta _i y\right) \left( \alpha _i A_1(x) + \beta _i A_1(y)\right) \right] \nonumber \\&\quad = C_1\left[ \left( \sum _{i=1}^n \gamma _i \alpha _i^2\right) xA_1(x) + \left( \sum _{i=1}^n \gamma _i \beta _i^2\right) yA_1(y)\right] \nonumber \\&\qquad + C_1\sum _{i=1}^n \gamma _i \alpha _i\beta _i \left( xA_1(y) + yA_1(x)\right) \nonumber \\&\quad =xA_1(y) + yA_1(x), \end{aligned}$$
(3.18)

for all \(x,\, y\in {\mathbb {R}}.\) Comparing terms of the same degree on both sides of the above equation, we obtain

$$\begin{aligned} \sum _{i=1}^n \gamma _i \alpha _i^2 xA_1(x)=0, \end{aligned}$$

for all \(x\in {\mathbb {R}},\) and, symmetrically,

$$\begin{aligned} \sum _{i=1}^n \gamma _i \beta _i^2 yA_1(y)=0, \end{aligned}$$

for all \(y\in {\mathbb {R}}.\) Both of these equations hold if, and only if, either \(A_1=0\) or

$$\begin{aligned} \sum _{i=1}^n \gamma _i \alpha _i^2 =0 = \sum _{i=1}^n \gamma _i \beta _i^2 . \end{aligned}$$
(3.19)

Now if \(A_1=0\) then also \(F=0,\) and we get the continuity of the solution \((f,F)\) of (3.11) in this case. Further, let us look for non-zero solutions of (3.11). The existence of a nontrivial \(A_1\) implies that (3.19) holds. So, in this case we have

$$\begin{aligned} S_1=2\sum _{i=1}^n \gamma _i\alpha _i\beta _i. \end{aligned}$$
(3.20)

Taking (3.18) and (3.19) (hence (3.20)) into account we obtain (keeping in mind that \(S_1\ne 0\))

$$\begin{aligned} \frac{2}{S_1}\frac{S_1}{2} xA_1(y) = xA_1(y) \end{aligned}$$

for all \(x,\, y\in {\mathbb {R}},\) which actually means that, taking an arbitrary additive function \(A_1\) as f, we get that the pair \((f,F)\) with F given by (3.17) is a solution of (3.11) for \(k=1.\) Of course, the solutions are mostly discontinuous.

Now, let us proceed to the case \(k=2.\) Observe that f is now a diagonalization of a biadditive, symmetric function \(A_2.\) Similarly as in the previous cases, putting \(x=y\) we obtain from (3.11)

$$\begin{aligned} S_2 F(x) =2xf(x), \end{aligned}$$

for every \(x\in {\mathbb {R}},\) whence in view of \(S_2\ne 0,\)

$$\begin{aligned} F(x)=\frac{2}{S_2} xf(x) \end{aligned}$$
(3.21)

for all \(x\in {\mathbb {R}}.\) Denote \(\frac{2}{S_2}\) by \(C_2.\)

Let us substitute the formula (3.21) into (3.11). We obtain

$$\begin{aligned} C_2\left[ \sum _{i=1}^n \gamma _i\left( \alpha _ix +\beta _iy\right) f\left( \alpha _ix +\beta _iy\right) \right] =xf(y) + yf(x), \end{aligned}$$

for all \(x, \, y \in {\mathbb {R}}.\) Using the biadditivity of \(A_2\) (and hence the rational homogeneity of f) we obtain

$$\begin{aligned}&C_2 \sum _{i=1}^n \gamma _i \left( \alpha _i x +\beta _i y\right) \left( \alpha _i^2A_2^*(x) + 2\alpha _i \beta _i A_2(x,y) + \beta _i^2 A_2^*(y)\right) \nonumber \\&\quad = C_2 \sum _{i=1}^n \gamma _i\left( \alpha _i^3 xA_2^*(x) + 2\alpha _i ^2 \beta _i xA_2(x,y) +\alpha _i \beta _i^2 xA_2^*(y) \right. \nonumber \\&\qquad + \left. \alpha _i^2 \beta _i y A_2^*(x) + 2\alpha _i \beta _i^2 yA_2(x,y) +\beta _i^3 yA_2^*(y)\right) \nonumber \\&\quad = C_2 \sum _{i=1}^n \gamma _i \left( \alpha _i^3 xA_2^*(x) + \beta _i^3 yA_2^*(y)\right) \nonumber \\&\qquad + C_2 \sum _{i=1}^n \gamma _i \alpha _i ^2 \beta _i\left( 2 xA_2(x,y) + yA_2^*(x)\right) \nonumber \\&\qquad +C_2 \sum _{i=1}^n \gamma _i \alpha _i \beta _i^2\left( xA_2^*(y) + 2yA_2(x,y)\right) \nonumber \\&\quad = xA_2^*(y) + yA_2^*(x), \end{aligned}$$
(3.22)

for all \(x, \, y\in {\mathbb {R}}.\) Now, comparing the terms of the same degree on both sides of (3.22), we get first that either

$$\begin{aligned} \sum _{i=1}^n \gamma _i\alpha _i^3 = \sum _{i=1}^n \gamma _i\beta _i^3 =0 \end{aligned}$$
(3.23)

or \(A_2 = 0.\) In the sequel we assume that \(A_2\ne 0,\) hence (3.23) holds. In other words \(S_2= 3\sum _{i=1}^n\gamma _i \left( \alpha _i^2\beta _i + \alpha _i \beta _i^2\right) .\) Let us compare the remaining terms. We get

$$\begin{aligned} C_2 \sum _{i=1}^n \gamma _i \alpha _i ^2 \beta _i\left( 2 xA_2(x,y) + yA_2^*(x)\right) = yA_2^*(x), \end{aligned}$$

and

$$\begin{aligned} C_2 \sum _{i=1}^n \gamma _i \alpha _i \beta _i^2\left( 2 yA_2(x,y) + xA_2^*(y)\right) = xA_2^*(y), \end{aligned}$$

for all \(x, \, y\in {\mathbb {R}}.\) Putting \(x=y\) above and taking into account that \(A_2\ne 0,\) we infer that \( \sum _{i=1}^n \gamma _i \alpha _i ^2 \beta _i = \sum _{i=1}^n \gamma _i \alpha _i \beta _i^2 = \frac{1}{3C_2}=\frac{S_2}{6}.\) Hence we may write

$$\begin{aligned} xA_2(x,y) = yA_2^*(x) \end{aligned}$$
(3.24)

and

$$\begin{aligned} yA_2(x,y) = xA_2^*(y) \end{aligned}$$
(3.25)

for all \(x, \, y\in {\mathbb {R}}.\) Putting \(y=1\) into (3.24) and (3.25) we obtain

$$\begin{aligned} A_2^*(x) = x^2A_2^*(1) \end{aligned}$$
(3.26)

for every \(x\in {\mathbb {R}},\) hence f and F are continuous.

Now, let us pass to the situation where \(k\ge 3.\) In general, if \(k\ge 3\) and f and F satisfy (3.11) then

$$\begin{aligned} f(x) = A_k^*(x), \end{aligned}$$

for every \(x\in {\mathbb {R}}\) and hence

$$\begin{aligned} F(x) = \frac{2}{S_k}xA_k^*(x), \end{aligned}$$

for every \(x\in {\mathbb {R}}.\) Put \(C_k:= \frac{2}{S_k}.\) We can write

$$\begin{aligned}&C_k\sum _{i=1}^n \gamma _i \left[ \alpha _i x\left( \sum _{j=0}^k{k \atopwithdelims ()j}\alpha _i^j \beta _i^{k-j}A_k(x^j,y^{k-j})\right) \right. \nonumber \\&\qquad +\left. \beta _iy\left( \sum _{j=0}^k{k \atopwithdelims ()j}\alpha _i^j \beta _i^{k-j}A_k(x^j,y^{k-j})\right) \right] \nonumber \\&\quad =C_k\sum _{i=1}^n \gamma _i \left[ \left( \sum _{j=0}^k{k \atopwithdelims ()j}\alpha _i^{j+1} \beta _i^{k-j}xA_k(x^j,y^{k-j})\right) \right. \nonumber \\&\qquad + \left. \left( \sum _{j=0}^k{k \atopwithdelims ()j}\alpha _i^j \beta _i^{k+1-j}yA_k(x^j,y^{k-j})\right) \right] \nonumber \\&\quad = C_k\left[ \left( \sum _{i=1}^n \gamma _i \alpha _i^{k+1}\right) xA_k^*(x) + \left( \sum _{i=1}^n \gamma _i \beta _i^{k+1}\right) yA_k^*(y)\right] \nonumber \\&\qquad + C_k\sum _{i=1}^n \gamma _i\left[ \left( \sum _{j=0}^{k-1}{k \atopwithdelims ()j}\alpha _i^{j+1} \beta _i^{k-j}xA_k(x^j,y^{k-j})\right) \right. \nonumber \\&\qquad + \left. \left( \sum _{j=1}^{k}{k \atopwithdelims ()j}\alpha _i^j \beta _i^{k+1-j}yA_k(x^j,y^{k-j})\right) \right] \nonumber \\&\quad =C_k\left[ \left( \sum _{i=1}^n \gamma _i \alpha _i^{k+1}\right) xA_k^*(x) + \left( \sum _{i=1}^n \gamma _i \beta _i^{k+1}\right) yA_k^*(y)\right] \nonumber \\&\qquad + C_k\sum _{i=1}^n \gamma _i \left[ \alpha _i \beta _i^k \left( xA_k^*(y) +ky A_k(x,y^{k-1})\right) + \alpha _i^k \beta _i\left( kxA_k(x^{k-1},y) + yA_k^*(x)\right) \right] \nonumber \\&\qquad + C_k\sum _{i=1}^n \gamma _i \left[ \sum _{j=2}^{k-1} \alpha _i^j \beta _i^{k+1-j} \left( {k \atopwithdelims (){j-1}}xA_k(x^{j-1},y^{k+1-j})+{k \atopwithdelims ()j}yA_k(x^j, y^{k-j})\right) \right] \nonumber \\&\quad = xA^*_k(y) +yA^*_k(x), \end{aligned}$$
(3.27)

for all \(x, \, y\in {\mathbb {R}}.\) Comparing the terms of equal degrees, we infer that either \(A_k=0\) or \(\sum _{i=1}^n \gamma _i \alpha _i^{k+1} = \sum _{i=1}^n \gamma _i \beta _i^{k+1}=0\) (cf. (3.14)). Assume from now on that we are interested in nontrivial solutions of (3.11). Continuing the comparison of the terms on both sides of (3.27), we get for every \(j\in \{2,\dots , k-1\}\)

$$\begin{aligned} C_k\sum _{i=1}^n \gamma _i \alpha _i^j \beta _i^{k+1-j} = 0, \end{aligned}$$

(cf. (3.13)) for otherwise (putting \(x=y\)) we would get

$$\begin{aligned} {k+1 \atopwithdelims ()j}xA_k^*(x) = 0, \end{aligned}$$

which is impossible. Note that from the above (3.15) and (3.16) follow. Taking this into account, as well as the definition of \(C_k\) and comparing the remaining terms in (3.27), we get

$$\begin{aligned} C_k\sum _{i=1}^n \gamma _i \alpha _i \beta _i^k \left( xA_k^*(y) +ky A_k(x,y^{k-1})\right) = xA^*_k(y), \end{aligned}$$

for all \(x, \, y\in {\mathbb {R}}.\) Using (3.15), we hence get

$$\begin{aligned} yA_k(x,y^{k-1})= xA^*_k(y), \end{aligned}$$
(3.28)

and analogously we infer

$$\begin{aligned} xA_k(x^{k-1},y)= yA^*_k(x), \end{aligned}$$
(3.29)

for all \(x, \, y\in {\mathbb {R}}.\) Let us put \(x+y\) instead of x in (3.29). We obtain, after some easy though tedious calculations, that the left-hand side is equal to

$$\begin{aligned} L:= x\left[ \sum _{j=0}^{k-1} {k-1 \atopwithdelims ()j}A_k(x^{k-1-j},y^{j+1})\right] + y\left[ \sum _{j=0}^{k-1} {k-1 \atopwithdelims ()j}A_k(x^{k-1-j},y^{j+1})\right] , \end{aligned}$$

while the right-hand side is equal to

$$\begin{aligned} R:= y\sum _{j=0}^{k} {k \atopwithdelims ()j}A_k(x^{k-j},y^j). \end{aligned}$$

Comparing on both sides the terms of equal degrees we obtain in particular the following sequence of equalities.

$$\begin{aligned} xA_k(x^{k-j-1},y^{j+1}) = yA_k(x^{k-j},y^j), \end{aligned}$$
(3.30)

for \(j\in \{0,\dots , k-1\}\) and all \(x, \, y\in {\mathbb {R}}.\) Now, using (3.30) for \(j\in \{0,\dots , k-1\}\) we arrive at

$$\begin{aligned} y^kA_k^*(x) = y^{k-1}\left[ yA_k^*(x)\right] = y^{k-1}\left[ xA_k(x^{k-1},y)\right] = \dots = x^kA_k^*(y) \end{aligned}$$

for every \(x,\, y\in {\mathbb {R}},\) in other words, putting \(y=1\) we obtain

$$\begin{aligned} A_k^*(x)= A_k^*(1) x^k, \end{aligned}$$
(3.31)

for every \(x\in {\mathbb {R}},\) which means that f, and hence also F, is continuous for \(k\ge 3;\) thus the proof is finished. \(\square \)
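For a concrete numerical check of the constraints derived above (a sketch of ours, not part of the proof), one may take the coefficients of Eq. (3.1) and \(k=2\): the pure-power sums vanish as in (3.23), while (3.15) and (3.16) reproduce \(S_2=6\).

```python
gammas, alphas, betas = [1, -1, -1], [1, 1, 0], [1, 0, 1]   # coefficients of Eq. (3.1)

def power_sum(p, q):
    # sum_i gamma_i * alpha_i**p * beta_i**q
    return sum(g * a**p * b**q for g, a, b in zip(gammas, alphas, betas))

k = 2
S_k = sum(g * (a + b)**(k + 1) for g, a, b in zip(gammas, alphas, betas))
print(power_sum(k + 1, 0), power_sum(0, k + 1))   # 0 0   (cf. (3.23))
print(power_sum(k, 1), power_sum(1, k))           # 1 1   (cf. (3.16))
print(S_k, 2 * (k + 1) * power_sum(k, 1))         # 6 6   (cf. (3.15))
```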

Remark 3.3

Using Lemma 2.1 in exactly the same way as in the proof of Theorem 3.1, we infer rather easily that if the functions F and f satisfy (3.11), then f must be a polynomial function. The following simple example shows that the function F is not necessarily polynomial.

Example 1

Observe that the equation

$$\begin{aligned} F(x)-F(-x)=xf(y)+yf(x) \end{aligned}$$
(3.32)

is satisfied by any even function F and \(f=0.\)

The reason why the above example works is that the equation

$$\begin{aligned} F(x)-F(-x)=0, \end{aligned}$$

for all \(x\in {\mathbb {R}},\) has solutions which are not polynomial. If we consider a general linear equation

$$\begin{aligned} \sum _{i=1}^n\gamma _iF(\alpha _ix+\beta _iy)=0, \end{aligned}$$
(3.33)

for all \(x,\, y\in {\mathbb {R}},\) and we assume that at least one of the pairs \((\alpha _i,\beta _i)\) is linearly independent of all the others, then, using Theorem 1.3, it may be shown that every solution of (3.33) is a polynomial function. Therefore it is natural to formulate the following problem.

Problem 1

Let \(\alpha _i,\beta _i,\gamma _i\in {\mathbb {R}},\gamma _i\ne 0,i=1,\dots ,n\) be such that there exists an \(i_0\in \{1,\dots ,n\}\) satisfying

$$\begin{aligned} \left| \begin{array}{cc} \alpha _{i_0}&{}\beta _{i_0}\\ \alpha _i&{}\beta _i \end{array}\right| \ne 0,\;i\ne i_0. \end{aligned}$$

Is it possible that the functional equation (3.11) is satisfied by some functions \(f,\, F\) where F is not a polynomial function?

As we have seen (cf. Example 1), it is possible that Eq. (3.11) is satisfied by a pair \((f,F)\) where F is not a polynomial function. However, we will give some examples of particular forms of this equation which have only polynomial solutions, and therefore we can apply Theorem 3.3 to solve these equations.

Proposition 3.4

Let \(\alpha _i,\beta _i,\gamma _i, \, i\in \{1,\dots ,n\}\) be real numbers such that

$$\begin{aligned} \sum _{i=1}^n\gamma _i\ne 0 \end{aligned}$$
(3.34)

holds and \(\alpha _i+\beta _i=1, \, i\in \{1,\dots ,n\}.\) If the pair \((f,F)\) of functions mapping \({\mathbb {R}}\) to \({\mathbb {R}}\) satisfies Eq. (3.11) then the functions f and F are polynomial.

Proof

Similarly as before, from Lemma 2.1 we know that f is a polynomial function. Now it is enough to take \(x=y\) in (3.11): since \(\alpha _i+\beta _i=1,\) this yields \(\left( \sum _{i=1}^n\gamma _i\right) F(x)=2xf(x),\) whence, by (3.34), also F must be polynomial. \(\square \)

Now we show some examples of equations (with nontrivial solutions) which may be solved with the use of the above proposition.

Example 2

Assume that functions \(f,F:{\mathbb {R}}\rightarrow {\mathbb {R}}\) satisfy the functional equation

$$\begin{aligned} F(x)-4F\left( \frac{x+y}{2}\right) +F(y)=xf(y)+yf(x), \end{aligned}$$
(3.35)

for all \(x,\, y\in {\mathbb {R}}.\) Rearranging (3.35) in the form

$$\begin{aligned} yf(x)-F(x)=-f(y)x-4F\left( \frac{x+y}{2}\right) +F(y), \end{aligned}$$

for all \(x, \, y\in {\mathbb {R}},\) we can see that f is a polynomial function of order at most 2. From Proposition 3.4 we know that also F is a polynomial function. Now we check the conditions of Theorem 3.3. If \(k=0\) then \(f(x)=b\) for some constant \(b\in {\mathbb {R}}\) and all \(x\in {\mathbb {R}};\) further, \(S_0=-2\ne 0\) and, consequently, \(F(x)=-bx\) for all \(x\in {\mathbb {R}}.\) Now let \(k=1;\) then \(S_1=-2,\)

$$\begin{aligned} \sum _{i=1}^3\gamma _i\alpha _i^2=\sum _{i=1}^3\gamma _i\beta _i^2=0, \end{aligned}$$

and again from Theorem 3.3 we infer that f is any additive function and \(F(x)=-xf(x)\) for all \(x\in {\mathbb {R}}.\) If \(k=2,3,\) then it is easy to see that the solutions of (3.35) must vanish. Thus the general solution of this equation is given by \(f(x)=a(x)+b\) and \(F(x)=-xa(x)-bx, \, x\in {\mathbb {R}},\) where \(a:{\mathbb {R}}\rightarrow {\mathbb {R}}\) is additive and b is a constant.
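The solution of this example can be verified by substitution; below is a sympy sketch with the continuous representative \(a(x)=ax\) of the additive summand (again an assumption made only for the check):

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b')
f = lambda t: a*t + b
F = lambda t: -t*(a*t) - b*t

residual = F(x) - 4*F((x + y)/2) + F(y) - (x*f(y) + y*f(x))
print(sp.simplify(residual))   # prints 0, so (3.35) holds
```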

Example 3

Assume that functions \(f,F:{\mathbb {R}}\rightarrow {\mathbb {R}}\) satisfy the functional equation

$$\begin{aligned} F(x)-8F\left( \frac{x+y}{2}\right) +F(y)=xf(y)+yf(x), \end{aligned}$$
(3.36)

for all \(x,\, y \in {\mathbb {R}}.\) Rearranging (3.36) in the form

$$\begin{aligned} yf(x)-F(x)=-f(y)x-8F\left( \frac{x+y}{2}\right) +F(y), \end{aligned}$$

for all \(x,\, y\in {\mathbb {R}},\) we can see that f is a polynomial function of order at most 2. From Proposition 3.4 we know that also F is a polynomial function. Now we check the conditions of Theorem 3.3. If \(k=0\) then \(f(x)=b\) for some constant \(b\in {\mathbb {R}}\) and all \(x\in {\mathbb {R}};\) further, \(S_0=-6\ne 0\) and, consequently, \(F(x)=-\frac{b}{3}x\) for all \(x\in {\mathbb {R}}.\) Now let \(k=1;\) then \(S_1=-6,\) but this time

$$\begin{aligned} \sum _{i=1}^3\gamma _i\alpha _i^2=\sum _{i=1}^3\gamma _i\beta _i^2=-1\ne 0 \end{aligned}$$

and again from Theorem 3.3 we infer that \(f=F=0.\) If \(k=2\) then the solutions must be continuous, since \(S_2=-6\ne 0;\) moreover,

$$\begin{aligned} \sum _{i=1}^3\gamma _i\alpha _i^3=\sum _{i=1}^3\gamma _i\beta _i^3=0, \end{aligned}$$

which means that \(f(x)=cx^2\) and \(F(x)=-\frac{c}{3}x^3, \, x\in {\mathbb {R}},\) satisfy (3.36). Thus the general solution of this equation is given by \(f(x)=cx^2+b\) and \(F(x)=-\frac{c}{3}x^3-\frac{b}{3}x, \, x\in {\mathbb {R}},\) where b, c are real constants.
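Again, a short sympy sketch (our own check) confirms that this family satisfies (3.36):

```python
import sympy as sp

x, y, b, c = sp.symbols('x y b c')
f = lambda t: c*t**2 + b
F = lambda t: -sp.Rational(1, 3)*(c*t**3 + b*t)

residual = F(x) - 8*F((x + y)/2) + F(y) - (x*f(y) + y*f(x))
print(sp.simplify(residual))   # prints 0, so (3.36) holds
```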

Observe that in Eq. (1.5) the left-hand side is the difference connected with the Cauchy equation. Since additive functions are monomial functions of order one, it is natural to ask whether this difference may be replaced by the difference connected with monomial functions of higher orders or with polynomial functions. In the next part of the paper we consider functional equations constructed in such a way.

Lemma 3.1

Let n be a given positive integer. If the pair \((f,\,F) \) of functions mapping \({\mathbb {R}}\) to \({\mathbb {R}}\) satisfies the equation

$$\begin{aligned} \Delta ^n_yF(x)=xf(y)+yf(x), \end{aligned}$$
(3.37)

for all \(x,\, y\in {\mathbb {R}},\) then f is a polynomial function of order at most \(n+1\) and F is a polynomial function of order not greater than \(n+2.\)

Proof

We write (3.37) in the form

$$\begin{aligned} f(x)y-(-1)^{n}F(x)=-f(y)x+\sum _{i=1}^{n}(-1)^{n-i}{n\atopwithdelims ()i}F(x+iy), \end{aligned}$$

for all \(x,\, y\in {\mathbb {R}}.\) Similarly as before, using Lemma 2.1, we can see that f is a polynomial function of order at most \((n+1)+1-1=n+1.\) Indeed, observe that in the present situation we have \(K_0 = \{(\mathrm{id}, i\,\mathrm{id}): i\in \{1,\dots ,n \}\}\) and \(K_1=\{(0,\mathrm{id})\}.\) Hence \(\mathrm{card}(K_0\cup K_1) = n+1\) and \(\mathrm{card}K_1 = 1,\) whence the estimate follows (cf. (2.2)).

Further, applying the difference operator with span y \((n+2)\) times to both sides of (3.37) we get

$$\begin{aligned} \Delta ^{2n+2}_yF(x)=0, \end{aligned}$$

for all \(x,\, y\in {\mathbb {R}},\) i.e. F is a polynomial function of order at most \(2n+1.\)

Now consider any \(k>n+1.\) The function f is a polynomial function of order smaller than k, thus the monomial summand of F of order \(k+1\) satisfies (3.37) with \(f=0.\) However, the \(n-\)th difference \(\Delta ^n_y\) does not annihilate nonzero monomial functions of order \(k+1>n.\) This means that the summands of F of orders greater than \(n+2\) must be zero, i.e. F is a polynomial function of order at most \(n+2.\)

\(\square \)
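The combinatorial data of this proof can be fed to the hypothetical degree_bound helper sketched after Lemma 2.1 (again, only an illustration of ours): for (3.37) with, say, \(n=5\), the estimate \(n+1\) is reproduced.

```python
# assumes degree_bound from the sketch in Section 2
n = 5
I = [[{('id', f'{i}*id') for i in range(1, n + 1)}],   # I_{0,0}: the pairs (id, i*id)
     [set(), {('0', 'id')}]]                           # I_{0,1} empty, I_{1,0} = {(0,id)}
print(degree_bound(I))                                 # 6 = n + 1
```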

Now we turn our attention to the equation whose left-hand side is the difference connected with the equation of monomial functions.

Lemma 3.2

Let n be a given positive integer. If the pair \((f,F)\) of functions mapping \({\mathbb {R}}\) to \({\mathbb {R}}\) satisfies the equation

$$\begin{aligned} \Delta ^n_yF(x)-n!F(y)=xf(y)+yf(x), \end{aligned}$$
(3.38)

for all \(x,\, y\in {\mathbb {R}},\) then f is a polynomial function of order at most \(n+1\) and F is a polynomial function of order not greater than \(n+2.\)

Proof

We write (3.38) in the form

$$\begin{aligned} f(x)y-(-1)^{n}F(x)=-f(y)x+\sum _{i=1}^{n}(-1)^{n-i}{n\atopwithdelims ()i}F(x+iy)-n!F(y), \end{aligned}$$

for all \(x,\, y\in {\mathbb {R}}.\) We see that \(K_0 = \{(\mathrm{id}, i\,\mathrm{id}): i\in \{1,\dots ,n \}\}\cup \{(0, \mathrm{id})\}\) and \(K_1=\{(0,\mathrm{id})\}.\) Hence \(\mathrm{card}(K_0\cup K_1) = n+1\) and \(\mathrm{card}K_1 = 1.\) Now, applying again Lemma 2.1, we can see (cf. (2.2)) that f is a polynomial function of order at most \((n+1)+1-1=n+1.\) Further, applying the difference operator with span y \((n+2)\) times to both sides of (3.38) we get

$$\begin{aligned} \Delta ^{2n+2}_yF(x)=0, \end{aligned}$$

for all \(x,\, y\in {\mathbb {R}},\) i.e. F is a polynomial function of order at most \(2n+1.\)

Now, similarly as in the respective part of the proof of Lemma 3.1, we can see that the order of F cannot be greater than \(n+2.\) Indeed, the summands of F of orders \(k>n+2\) must satisfy (3.38) with the right-hand side equal to zero (since f has no terms of order \(k-1\)), which is impossible, since the equation

$$\begin{aligned} \Delta ^n_yF(x)-n!F(y)=0, \end{aligned}$$

for all \(x,\, y\in {\mathbb {R}},\) characterizes monomial functions of order n, while \(n<k.\) \(\square \)

Now we can present the general solutions of Eqs. (3.37) and (3.38).

Theorem 3.5

A pair \((f,\, F)\) of functions mapping \({\mathbb {R}}\) to \({\mathbb {R}}\) satisfies Eq. (3.37) if, and only if, F is a polynomial function of order at most \(n-1\) and \(f=0.\)

Proof

From Lemma 3.1 we know that both f and F are polynomial functions. Take first \(k\in \{0,1,\dots ,n-2\}\) and assume that f is a monomial function of order k and that F is a monomial function of order \(k+1.\) We can see that \(S_k=0;\) hence, putting \(x=y\) in (3.37) and using the rational homogeneity of F, we obtain \(2xf(x)=S_kF(x)=0,\) i.e. \(f=0.\)

Now, take \(k\in \{n-1,n,n+1\};\) then \(S_k\ne 0\) and, as previously, assume that f is a monomial function of order k and that F is a monomial function of order \(k+1.\) We want to show that \(f=0.\) Thus, striving for a contradiction, assume that \(f\ne 0;\) then F is also nonzero. Observe that this leads to a contradiction. Indeed, Eq. (3.37) cannot be satisfied, since the expression \(\Delta ^n_yF(x)\) contains a term of order \(k+1\) with respect to y, which is missing on the right-hand side.

We proved that \(f=0,\) thus F obviously satisfies

$$\begin{aligned} \Delta ^n_yF(x)=0 \end{aligned}$$

for all \(x,\, y\in {\mathbb {R}},\) i.e. F is a polynomial function of order at most \(n-1.\) \(\square \)

In the next theorem we obtain the solution of Eq.  (3.38).

Theorem 3.6

Let \((f,\, F)\) be a pair of functions mapping \({\mathbb {R}}\) to \({\mathbb {R}}.\) If \(n=1\) then the solutions of (3.38) are of the form obtained in Proposition 3.2. If \(n\ge 2\) then F is a monomial function of order n and \(f=0.\)

Proof

If \(n=1\) then (3.38) reduces to (1.5), which is already solved. Thus we may assume that \(n\ge 2.\) Using Lemma 3.2, we can see that the functions F and f are polynomial and, as usual, we will work with monomial functions. Thus let f and F be monomial functions of orders k and \(k+1,\) respectively. We want to show that \(f=0.\) However, if \(f\ne 0\) then the right-hand side, which is of the form \(xf(y)+yf(x),\) contains the term yf(x) of order k with respect to the variable x. Such a term is missing in the expression \(\Delta ^n_yF(x)-n!F(y),\) since \(n\ge 2.\) Therefore also in this case we have \(f=0.\)

Using the equality \(f=0\) in (3.38), we get

$$\begin{aligned} \Delta ^n_yF(x)-n!F(y)=0, \end{aligned}$$

for all \(x,\, y\in {\mathbb {R}},\) for each monomial summand of F. This means that F is a monomial function of order n. \(\square \)
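As a final sanity check (a sketch under our own choice \(n=3\)), sympy confirms that the monomial \(F(x)=x^3\) together with \(f=0\) satisfies (3.38):

```python
import sympy as sp

x, y = sp.symbols('x y')

def delta(expr, var, span):
    return expr.subs(var, var + span) - expr

F = x**3
lhs = F
for _ in range(3):                 # Delta_y^3, acting on the variable x
    lhs = delta(lhs, x, y)

print(sp.expand(lhs - sp.factorial(3) * y**3))   # prints 0: Delta_y^3 F(x) = 3! F(y)
```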

Remark 3.4

It is interesting that we have a nice set of solutions only for the difference stemming from Cauchy’s equation. Thus the case \(n=1\) in (3.38) is exceptional. It seems that the right-hand side of (1.5) must be suitably modified to get a similar effect for \(n>1.\)

We can add one more class of functional equations which may be solved with the use of Theorem 3.3.

Proposition 3.7

Let \(\beta _i, \, i\in \{1,\dots ,n\}, \, \gamma _i, \, i\in \{1,\dots ,n+1\},\) be real numbers such that (3.34) holds. If the pair \((f,F)\) of functions mapping \({\mathbb {R}}\) to \({\mathbb {R}}\) satisfies the equation

$$\begin{aligned} \sum _{i=1}^n\gamma _iF(x+\beta _iy)+\gamma _{n+1}F(y)=xf(y)+yf(x), \end{aligned}$$
(3.39)

for all \(x,\, y\in {\mathbb {R}},\) then the functions f and F are polynomial.

Proof

Similarly as before, from Lemma 2.1 we know that f is a polynomial function. Now it is enough to take \(y=0\) in (3.39): this yields \(\left( \sum _{i=1}^n\gamma _i\right) F(x)=xf(0)-\gamma _{n+1}F(0),\) whence, by (3.34), also F must be polynomial. \(\square \)

Remark 3.5

Note that Eq. (3.39) is a generalization of Eqs. (3.38) and (3.37). However, the methods used in Lemmas 3.1 and 3.2 were needed to show that F is polynomial because, in the case of these equations, condition (3.34) is not satisfied.

We end the paper with a remark connecting the results obtained here with the topic called alienation of functional equations (for some details concerning the problem of alienation of functional equations see the survey paper of R. Ger and the third author [12]).

Remark 3.6

Consider two equations:

$$\begin{aligned} xf(y)+yf(x)=0 \end{aligned}$$
(3.40)

which is satisfied only by \(f=0\) and

$$\begin{aligned} \sum _{i=1}^n \gamma _i F(\alpha _ix + \beta _iy)=0 \end{aligned}$$
(3.41)

which usually has some solutions (depending on n and the constants involved). Results concerning Eq. (3.11) may be viewed from the perspective of the so-called alienation of functional equations. Any pair of the form (F, 0), where F satisfies (3.41), is clearly a solution of (3.11). An interesting question is whether (3.11) may have solutions of a different nature. As we proved, in the case of some equations there are only solutions of this kind, whereas in some other cases new solutions appear. Thus, in fact, we have examples of alienation and nonalienation of equations of this kind. It may even happen that for monomial functions of some orders the equations are alien, while for other orders the same equations are not. This effect is similar to the approach presented in [34] by the fourth author.