1 Introduction

In this paper, we consider the following functional equation

$$\begin{aligned} \sum _{i=1}^n \gamma _i F(a_i x + b_i y)=\sum _{j=1}^m(\alpha _j x + \beta _j y) f(c_j x + d_j y), \end{aligned}$$
(1.1)

for every \(x,y\in \mathbb R\), where \(\gamma _i,\alpha _j,\beta _j \in \mathbb R\) and \(a_i,b_i,c_j,d_j \in \mathbb Q,\) and its special forms. The idea of studying this generalized equation was motivated by the growing number of its particular forms studied by several mathematicians; let us quote here a few of them: Aczél [1], Aczél and Kuczma [2], Alsina et al. [3], Fechner and Gselmann [6], Koclega-Kulpa and Szostok [8], Koclega-Kulpa et al. [9, 10], Nadhomi et al. [14] and Okeke and Sablik [15]. Their studies show that these particular forms have concrete applications.

The primary goal of this paper is to continue the investigation proposed in [15] (see Remark 2.3). In particular, we obtain the polynomial solutions of Eq. (1.1) and compare them with the solutions of the equations

$$\begin{aligned} \sum _{i=1}^n \gamma _i F(a_i x + b_i y) = yf(x) + xf(y), \end{aligned}$$
(1.2)

and

$$\begin{aligned} F(x + y) - F(x) - F(y) = \sum _{j=1}^m(\alpha _j x + \beta _j y) f(c_j x + d_j y). \end{aligned}$$
(1.3)

The first of the special forms of (1.1) that we solve is the functional equation considered by Koclega-Kulpa et al. [9], namely

$$\begin{aligned} F(x) - F(y) = (x -y)[\alpha _1f(c_1x + d_1y) + \cdots + \alpha _m f(c_mx + d_my)]. \end{aligned}$$
(1.4)

It is worth noting that (1.4) stems from a well-known quadrature rule used in numerical analysis. Further, we also consider other special forms of (1.1), namely

$$\begin{aligned} F(y) - F(x)= & {} \int _{x}^{y} f(t) \,dt = (y-x) \sum _{j=1}^m \beta _jf(c_j x + (1-c_j) y), \end{aligned}$$
(1.5)
$$\begin{aligned} F(y) - F(x)= & {} (y - x) f(x + y), \end{aligned}$$
(1.6)
$$\begin{aligned} F(x) - F(y)= & {} (x - y) f\left( \tfrac{x +y}{2}\right) , \end{aligned}$$
(1.7)

and

$$\begin{aligned} 2F(y) - 2F(x) = (y -x)\left( f\left( \tfrac{x+y}{2}\right) + \tfrac{f(x) + f(y)}{2}\right) . \end{aligned}$$
(1.8)

Equation (1.5) is the functional equation connected with the Hermite–Hadamard inequality in the class of continuous functions, and it is related to approximate integration. Note that the quadrature rules of approximate integration can be obtained by an appropriate specification of the coefficients of (1.5). Moreover, Eqs. (1.6) and (1.7) are variations of the Lagrange mean value theorem, with many applications in mathematical analysis, computational mathematics and other fields. Finally, Eq. (1.8) stems from descriptive geometry, where it is used for graphical constructions.
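Equations (1.7) and (1.8) can be checked directly on their classical continuous solutions: an affine f together with an antiderivative F satisfies both identities for all x, y. A minimal numerical sketch (the particular f, F and the sample points are our own illustrative choices):

```python
# Sanity check: for an affine f and its antiderivative F, the
# Lagrange-type equation (1.7) and equation (1.8) hold identically.

def f(t):
    return 3.0 * t + 2.0          # an affine function (our own sample)

def F(t):
    return 1.5 * t**2 + 2.0 * t   # an antiderivative of f

def eq_1_7(x, y):
    # residual of F(x) - F(y) = (x - y) f((x + y)/2)
    return F(x) - F(y) - (x - y) * f((x + y) / 2)

def eq_1_8(x, y):
    # residual of 2F(y) - 2F(x) = (y - x)( f((x+y)/2) + (f(x)+f(y))/2 )
    return 2*F(y) - 2*F(x) - (y - x) * (f((x + y)/2) + (f(x) + f(y))/2)

residuals = [abs(eq_1_7(x, y)) + abs(eq_1_8(x, y))
             for x in (-2.0, 0.5, 3.0) for y in (-1.0, 1.25, 4.0)]
print(max(residuals))
```

Both residuals stay at the level of rounding error on every sample pair.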

In addition, we will show that the main results obtained by Koclega-Kulpa et al. [9] (see Theorems 1 and 2 in [9]) are special forms of our results. In line with their papers [8, 10], we use our method to obtain the polynomial functions connected with the Hermite–Hadamard inequality in the class of continuous functions. Furthermore, we show that the functional equations considered by Aczél [1] and by Aczél and Kuczma [2] (cf. Theorem 5 in [2]) are special forms of Eq. (1.1). Moreover, we show that our method can be used to solve the functional equation arising from the geometric problems considered by Alsina et al. [3]. Now observe that Eqs. (1.1), (1.2) and (1.3) are obvious generalizations of the equation considered by Fechner and Gselmann [6], namely

$$\begin{aligned} F(x + y) - F(x) - F(y) = xf(y)+yf(x). \end{aligned}$$
(1.9)
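For orientation, one continuous solution pair of (1.9) is F(x) = x², f(x) = x, since both sides then equal 2xy; a quick check on a handful of sample points (our own choices):

```python
# Check that the pair F(x) = x^2, f(x) = x satisfies equation (1.9):
# F(x + y) - F(x) - F(y) = x f(y) + y f(x).
F = lambda t: t * t
f = lambda t: t

def residual(x, y):
    return F(x + y) - F(x) - F(y) - (x * f(y) + y * f(x))

checks = [residual(x, y) for x in (-3.0, 0.0, 2.5) for y in (-1.5, 1.0, 4.0)]
print(all(abs(r) < 1e-12 for r in checks))
```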

Nadhomi et al. [14] investigated Eq. (1.2), and Okeke and Sablik [15] investigated Eq. (1.3). From our work it turns out that, under some mild assumptions, every pair (F, f) of functions satisfying Eq. (1.2) or (1.3) consists of polynomial functions, and in some important cases of ordinary polynomials (even though we assume no regularity of solutions a priori).

The fundamental tool in achieving the results in [14, 15] is a very special Lemma (cf. Lemma 2.1 in [14], Lemma 1 in [12], Lemma 2.3 in [16] and Lemma 1.1 in [15]). Let us observe that this result is a generalization of a theorem from Székelyhidi’s book [17] (Theorem 9.5), which in turn is a generalization of a result of Wilson from [19]. We quote here a slight modification of the Lemma. Before we state it, let us adopt the following notation. Let G and H be commutative groups. Then \(SA^i(G;H)\) denotes the group of all \(i\)-additive, symmetric mappings from \(G^i\) into H for \(i\geqslant 2,\) while \(SA^0(G;H)\) denotes the family of constant functions from G to H and \(SA^1(G;H)= Hom(G;H).\) We also denote by \(\mathcal {I}\) the subset of \(Hom(G;G) \times Hom(G;G)\) containing all pairs \((\alpha , \beta )\) for which \(Ran(\alpha ) \subset Ran(\beta ).\) Furthermore, we adopt the convention that a sum over an empty set of indices equals zero. Finally, for \(A_i \in SA^i(G;H)\) we denote by \(A_i^*\) the diagonalization of \(A_i,\) \( i\in \mathbb N\cup \{0\}.\)

Lemma 1.1

Fix \(N\in \mathbb N\cup \{0\}, \, M\in \mathbb N\cup \{-1, 0\}\) and, if \(M\ge 0,\) let \(I_{p,n-p}, \, 0\le p\le n, \, n\in \{0,\ldots ,M\}\) be finite subsets of \(\mathcal {I}\). Suppose further that H is an Abelian group uniquely divisible by N! and G is an Abelian group. Moreover, let the functions \(\varphi _i:G\rightarrow SA^i(G;H),\, i\in \{0,\ldots ,N\}\) and, if \(M\ge 0,\) \(\psi _{p,n-p,(\alpha ,\beta )}:G\rightarrow SA^n(G;H),\, (\alpha ,\beta )\in I_{p,n-p},\, 0\le p \le n, \, n\in \{0,\ldots ,M\},\) satisfy

$$\begin{aligned} \varphi _N(x)(y^N) + \sum _{i=0}^{N-1}\varphi _i(x)(y^i) = R_M(x,y), \end{aligned}$$
(1.10)

where \(R_M(x,y)\) is defined in the following way

$$\begin{aligned} R_M(x,y)=\left\{ \begin{array}{ll} 0, &{} M=-1,\\ \sum _{n=0}^{M}\sum _{p=0}^n \sum _{(\alpha ,\beta )\in I_{p,n-p}}\psi _{p,n-p,(\alpha ,\beta )}\left( \alpha (x) + \beta (y)\right) (x^p,y^{n-p}), &{} M\ge 0, \end{array} \right. \end{aligned}$$

for every \(x,y\in G.\) Then \(\varphi _N\) is a polynomial function of degree not greater than m, where

$$\begin{aligned} m= \sum _{n=0}^M \textrm{card}\left( \bigcup _{s=n}^M K_s\right) - 1, \end{aligned}$$
(1.11)

and \(K_s= \bigcup _{p=0}^s I_{p,s-p}\) for each \(s\in \{0,\ldots ,M\},\) if \(M\ge 0.\) Moreover, if \(M=-1,\) i.e. if

$$\begin{aligned} \varphi _N(x)(y^N) + \sum _{i=0}^{N-1}\varphi _i(x)(y^i) =0 \end{aligned}$$

then \(m=-1\) and \(\varphi _N\) is the zero function.

While proving our main results in [14, 15], we observed that the behaviour of solutions depends on the sequences \((L_k)_{k\in \mathbb N\cup \{0\}}\) and \((R_k)_{k\in \mathbb N\cup \{0\}} \) given by

$$\begin{aligned} L_k=\sum _{i=1}^n \gamma _i(a_i + b_i)^{k+1}, \end{aligned}$$
(1.12)

and

$$\begin{aligned} R_k=\sum _{j=1}^m (\alpha _j + \beta _j)(c_j + d_j)^{k}, \end{aligned}$$
(1.13)

respectively, for all \(k \in \mathbb {N} \cup \{0\}.\)
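As an illustration of (1.12)–(1.13), the sequences can be computed for Eq. (1.9) viewed as an instance of (1.1); the encoding of the coefficients below is our own. One gets \(L_k = 2^{k+1}-2\) and \(R_k = 2:\)

```python
# The sequences (1.12)-(1.13) for the special case (1.9), written as an
# instance of (1.1) with
#   gamma = (1, -1, -1), (a_i, b_i) = (1,1), (1,0), (0,1)   (left side),
#   (alpha_j, beta_j, c_j, d_j) = (1,0,0,1), (0,1,1,0)      (right side).
gamma = [1, -1, -1]
ab = [(1, 1), (1, 0), (0, 1)]
abcd = [(1, 0, 0, 1), (0, 1, 1, 0)]

def L(k):
    # L_k = sum_i gamma_i (a_i + b_i)^{k+1}, cf. (1.12)
    return sum(g * (a + b) ** (k + 1) for g, (a, b) in zip(gamma, ab))

def R(k):
    # R_k = sum_j (alpha_j + beta_j)(c_j + d_j)^k, cf. (1.13)
    return sum((al + be) * (c + d) ** k for al, be, c, d in abcd)

print([(L(k), R(k)) for k in range(4)])  # [(0, 2), (2, 2), (6, 2), (14, 2)]
```

Note that \(L_0 = 0\) here while \(L_k \ne 0\) for \(k\ge 1,\) so for (1.9) the case \(k=0\) requires separate treatment.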

Let us recall that a polynomial function of order at most n, defined on a semigroup S and taking values in a group H, is a mapping \(f:S\longrightarrow H\) satisfying the so-called Fréchet functional equation, that is,

$$\begin{aligned} \Delta ^{n+1}_{h_{n+1}h_n\ldots h_1}f(x)=0, \end{aligned}$$
(1.14)

for all \(x, h_1, \ldots , h_{n+1}\in S\) (here \(\Delta ^{n+1}_{h_{n+1}h_n\dots h_1}f:= \Delta _{h_{n+1}}\circ \Delta ^n_{h_n\ldots h_1}f, \) and \(\Delta \) is the Fréchet operator, defined by \(\Delta _hf(x)=f(x+h)-f(x)\) for every \(h, x\in S).\) In the case of \(S=H=\mathbb {R}\) we have the following characterization of the polynomial functions (cf. Van der Lijn [18] and Mazur and Orlicz [13], cf. also Fréchet [7]).
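The Fréchet operator and its iterates are easy to realize directly; the sketch below (sample polynomial and step sizes are our own) illustrates that a degree-3 polynomial is annihilated by four differences, in line with (1.14) for n = 3:

```python
# A direct implementation of the iterated Frechet difference from (1.14):
# Delta_h f(x) = f(x + h) - f(x), iterated over steps h_1, ..., h_m.

def frechet(f, steps):
    """Apply Delta_{h_m} ... Delta_{h_1} to f, for steps = [h_1, ..., h_m]."""
    for h in steps:
        # bind the current f and h in a fresh closure at each iteration
        f = (lambda g, h: lambda x: g(x + h) - g(x))(f, h)
    return f

p = lambda x: 2 * x**3 - x + 5                     # degree 3 (our sample)
annihilated = frechet(p, [0.7, -1.3, 2.0, 0.25])   # n + 1 = 4 differences

print(all(abs(annihilated(x)) < 1e-8 for x in (-2.0, 0.0, 1.5, 3.0)))
```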

Theorem 1.1

Let \(f:\mathbb {R}\rightarrow \mathbb {R}\) be a polynomial function of order at most m. Then there exist unique \(k\)-additive functions \(A_k: \mathbb {R}^k\rightarrow \mathbb {R},\) \(k \in \{1,\ldots ,m\},\) and a constant \(A_0\) such that

$$\begin{aligned} f(x) = A_0 + A_1^*(x) + \cdots + A_m^*(x), \end{aligned}$$
(1.15)

where \(A_k^*\) is the diagonalization of \(A_k.\) Conversely, every function of the form (1.15) is a polynomial function of order at most m.

Let us note that usually \(A_k, \, k\in \{1,\ldots , m\},\) are not continuous (or regular in any sense). If they are even slightly regular (e.g., bounded on an interval, measurable, or monotonic), then they take the form \(A_k^*(x)=c_kx^k\) for every \(x \in \mathbb R,\) where \(c_k\) is a real constant, \(k\in \{1,\ldots , m\}\), and thus f is an ordinary polynomial. But there exist highly irregular solutions of (1.14), even in the case \(n=1.\) In fact, discontinuous additive functions dominate the family of additive ones; more information can be found in the book of Kuczma [11].

Let us mention a very important result used in the present paper, due to Székelyhidi, who proved that every solution of a very general linear equation is a polynomial function (see [17], Theorem 9.5; cf. also Wilson [19]).

Theorem 1.2

Let G be an Abelian semigroup, S an Abelian group, n a positive integer, \(\varphi _i,\psi _i\) additive functions from G to G, and let \(\varphi _i(G)\subset \psi _i(G),\;i\in \{1,\ldots ,n\}.\) If the functions \(f,f_i:G\rightarrow S\) satisfy the equation

$$\begin{aligned} f(x)+\sum _{i=1}^nf_i(\varphi _i(x)+\psi _i(y))=0, \end{aligned}$$
(1.16)

then f satisfies (1.14).

Székelyhidi’s result makes it easier to solve linear equations because it is no longer necessary to deal with each equation separately. Instead, we may formulate results which are valid for large classes of equations. It is even possible to write computer programs which solve functional equations; see the papers of Gilányi [4], Borus and Gilányi [5] and Okeke and Sablik [15].

2 Results

We begin by showing that, in general, Eq. (1.1) has polynomial functions as solutions. To this aim, rewrite (1.1) in the following form:

$$\begin{aligned} \sum _{(a_i,b_i)} \gamma _i F(a_i x + b_i y)=\sum _{j=1}^m(\alpha _j x + \beta _j y) f(c_j x + d_j y), \end{aligned}$$

which allows us to write the left-hand side in the form

$$\begin{aligned}{} & {} \sum _{\{(a_i,b_i): a_i\ne 0\ne b_i\}} \gamma _i F(a_i x + b_i y)+ \sum _{\{(a_i,0): a_i\ne 0\}} \gamma _i F(a_i x) \nonumber \\{} & {} \qquad +\sum _{\{(0,b_i): 0\ne b_i\}} \gamma _i F(b_i y)=\sum _{j=1}^m(\alpha _j x + \beta _j y) f(c_j x + d_j y). \end{aligned}$$
(2.1)

We excluded above the summands where \(a_i=0=b_i.\) Indeed, such a summand can be omitted. Namely, suppose that \(a_n=b_n=0.\) Let F be a solution of (1.1), and assume that \(n\ge 2\) (otherwise the whole problem becomes trivial). If we put \(x=y=0\) in (1.1) then we get

$$\begin{aligned} \left( \sum _{i=1}^{n-1} \gamma _i + \gamma _n\right) F(0)= 0. \end{aligned}$$
(2.2)

From (2.2) we infer that either \(F(0)=0\) or \(\sum _{i=1}^{n-1} \gamma _i + \gamma _n=0.\) In the former case the constant F(0) disappears, and the left-hand side of (1.1) satisfies our assumptions. In the latter case, if moreover \(\gamma _n=0,\) the situation is analogous. Let us therefore consider the case

$$\begin{aligned} \left( \sum _{i=1}^{n-1} \gamma _i + \gamma _n=0\right) \wedge \left( \gamma _n\ne 0\right) . \end{aligned}$$

Observe that \(\sum _{i=1}^{n-1} \gamma _i \ne 0,\) and

$$\begin{aligned} \gamma _nF(0)= \sum _{i=1}^{n-1} \gamma _i \frac{\gamma _n}{\sum _{i=1}^{n-1} \gamma _i }F(0). \end{aligned}$$

Hence (1.1) may be written in the form

$$\begin{aligned} \sum _{i=1}^{n-1} \gamma _i\left( F(a_ix+b_iy) + \frac{\gamma _n}{\sum _{i=1}^{n-1} \gamma _i }F(0)\right) = \sum _{j=1}^m(\alpha _j x + \beta _j y) f(c_j x + d_j y). \end{aligned}$$

Substituting \(\tilde{F}(z):=F(z)+\frac{\gamma _n}{\sum _{i=1}^{n-1} \gamma _i }F(0)\) we obtain

$$\begin{aligned} \sum _{i=1}^{n-1} \gamma _i\tilde{F}(a_ix+b_iy) = \sum _{j=1}^m(\alpha _j x + \beta _j y) f(c_j x + d_j y), \end{aligned}$$

where \((a_i,b_i)\ne (0,0), i\in \{1,\ldots ,n-1\}.\) From (2.1) we see that there are three essential groups of terms on the left-hand side: the first group contains the summands involving values of F at the points \(a_ix+b_iy,\) where \(a_i \ne 0 \ne b_i;\) the second group contains those summands in which \(b_i=0;\) and the third one those in which \(a_i=0.\) We saw in the paper by Nadhomi et al. [14] that if the first and the third groups are empty, and the second one consists of the two pairs (1, 0) and \((-1,0)\) with the corresponding \(\gamma \)’s equal to 1 and \(-1,\) then an arbitrary even function F, together with \(f=0,\) yields a solution to (2.1). Thus, in general, there is no hope of obtaining the polynomiality of F. However, a closer look shows that we can state some positive claims.
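The degenerate case just described can be seen concretely: with \(\gamma = (1,-1),\) \((a_1,b_1)=(1,0),\) \((a_2,b_2)=(-1,0)\) and \(f=0,\) equation (2.1) reads \(F(x)-F(-x)=0,\) which every even F satisfies. A sketch with an even, non-polynomial F (cos is our own sample choice):

```python
# With gamma = (1, -1) and the pairs (1, 0), (-1, 0), the left-hand side
# of (2.1) is F(x) - F(-x); for f = 0 any even F is therefore a solution.
import math

def lhs(F, x, y):
    return 1 * F(1 * x + 0 * y) + (-1) * F(-1 * x + 0 * y)

F = math.cos   # even, and certainly not a polynomial
checks = [lhs(F, x, y) for x in (-2.0, 0.3, 1.7) for y in (-1.0, 0.0, 2.5)]
print(all(abs(v) < 1e-12 for v in checks))
```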

Namely, rewrite again (2.1) in the form

$$\begin{aligned} \sum _{i\in I_1} \varphi _i(a_ix+b_iy) +\varphi _{I}(x) +\varphi _{II}(y)= \sum _{j=1}^m(\alpha _j x + \beta _j y) f(c_j x + d_j y), \end{aligned}$$
(2.3)

and assume that there is a \(j\in \{1,\ldots ,m\}\) such that

$$\begin{aligned} \textrm{det}\left( \begin{array}{cc} \alpha _j &{} \beta _j \\ c_j &{} d_j \end{array} \right) \ne 0, \quad \text{ and } \quad d_j\ne 0. \end{aligned}$$
(2.4)

Then it is possible to perform the change of variables \(c_jx+d_jy=z, \, x=w.\) It remains to express \((x,y)\) in terms of \((z,w)\) and to rewrite (2.3) in the form

$$\begin{aligned} Cf(z)w = P_{f, \varphi _I,\varphi _{II}, \{\varphi _i: i\in I_1\}}(z,w), \end{aligned}$$

where \( P_{f, \varphi _I,\varphi _{II}, \{\varphi _i: i\in I_1\}}\) is a polynomial function in z and w,  with coefficients depending on \(f, \varphi _I, \varphi _{II}\) and \(\varphi _i, i\in I_1.\) We can apply Lemma 1.1 to get the polynomiality of f.

Thus the right-hand side of (2.1) is a polynomial in x and y, say \(Q(x,y).\) In other words we have

$$\begin{aligned} \sum _{i\in I_1} \varphi _i(a_ix+b_iy) +\varphi _{I}(x) +\varphi _{II}(y)= Q(x,y). \end{aligned}$$
(2.5)

If \(\varphi _{I} \ne 0\) then we may rewrite the above in the form

$$\begin{aligned} \varphi _{I}(x) =-\sum _{i\in I_1} \varphi _i(a_ix+b_iy) -\varphi _{II}(y)+ Q(x,y), \end{aligned}$$
(2.6)

whence by induction we obtain that \(\varphi _{I}\) is a polynomial function in x. Indeed, assuming that Q is a polynomial of order, say, k in x, we apply the operator \(\Delta \) \(k+1\) times to both sides of Eq. (2.6). We get

$$\begin{aligned} \Delta _{h_1,\ldots ,h_{k+1}}\varphi _{I}(x) =-\sum _{i\in I_1}\Delta _{h_1,\ldots ,h_{k+1}} \varphi _i(a_ix+b_iy) + \tilde{\varphi }_{II}(y)+ R(y), \end{aligned}$$
(2.7)

where R is a polynomial function in y (which remains after annihilating the x part of \(Q(x,y)\)). Now, denoting \(\Delta _{h_1,\ldots ,h_{k+1}}\varphi _{I}\) by \(\overline{\varphi }\) and \(\Delta _{h_1,\ldots ,h_{k+1}} \varphi _i\) by \(f_i,\) as well as \( \tilde{\varphi }_{II}(y)+ R(y)\) by \(\overline{f},\) we get the equation from Székelyhidi’s result (see (1.16)). We use it to infer that \(\overline{\varphi }\) is a polynomial function, whence the polynomiality of \(\varphi _{I}\) follows.

If we knew that \(\varphi _{I}(x) = DF(x)\) for some constant D, then we would be done. Analogously, if \(\varphi _{II}(y)\ne 0,\) a similar argument as above gives the polynomiality of \(\varphi _{II}.\) But even if \(\varphi _{I}(x)\ne DF(x)\) and \( \varphi _{II}(y)\ne EF(y)\) for any constants D and E, we can still look at the first summand on the left-hand side of (2.5). Then it is enough to check whether the first sum is nonzero and admits the change of variables \(z=a_{i_0}x+b_{i_0}y, w=x\) for some \(i_0,\) and to rewrite (2.5) in the form

$$\begin{aligned} \varphi _{i_0}(z) = -\sum _{i\ne i_0} \varphi _i(e_iz+f_iw) -\varphi _{I}(w) -\varphi _{II}(g_iz+h_iw)+ \tilde{Q}(z,w), \end{aligned}$$
(2.8)

and hence we see, similarly as before, that F has to be a polynomial function.

Therefore, it is enough to assume that

  1. there exists a \(j \in \{1,\ldots , m\}\) such that (2.4) holds, and

  2. \(\varphi _{I}= DF,\) or

  3. \(\varphi _{II}= EF,\) or

  4. for some \(i_0 \in \{1,\ldots , n\}\) we have \(a_{i_0}\ne 0 \ne b_{i_0},\)

to get the polynomiality of both f and F.

Having a result of this kind, it is now enough to assume that the pair \((F,f)\) of functions satisfying Eq. (1.1) consists of monomial functions. Moreover, Eqs. (1.2) and (1.3) suggest that a characteristic feature of Eq. (1.1) is the dependence of the existence of solutions on the sequences given by (1.12) and (1.13), respectively. Hence, we proceed to the next theorem.

Theorem 2.1

Suppose \(\gamma _i, \alpha _j, \beta _j \in \mathbb {R}, \, a_i,b_i, c_j,d_j \in \mathbb {Q}, \, i \in \{1,\ldots , n \}, \, j \in \{1,\ldots , m \}.\) Let \((L_k)_{k\in \mathbb {N}\cup \{0\}}\) and \((R_k)_{k\in \mathbb {N}\cup \{0\}}\) be defined by (1.12) and (1.13), respectively. Assume that \(L_k, R_k \ne 0\) for some \(k\in \mathbb {N}\cup \{0\},\) and that Eq. (1.1) is satisfied by a pair \((F,f)\) of monomial functions \(F,f: \mathbb R\longrightarrow \mathbb R\) of order \(k+1\) and k, respectively. Then:

  1. (i)

    if \(k=0\) then \(f=0=F\) or \(f=A_0\ne 0\) and \(F(x)=\tfrac{R_0}{L_0}A_0x;\) in the latter case necessarily

    $$\begin{aligned} \tfrac{R_0}{L_0}\sum _{i=1}^n \gamma _i a_i= \sum _{j=1}^m \alpha _j, \end{aligned}$$
    (2.9)

    and

    $$\begin{aligned} \tfrac{R_0}{L_0}\sum _{i=1}^n \gamma _i b_i= \sum _{j=1}^m \beta _j. \end{aligned}$$
    (2.10)
  2. (ii)

    If \(k \ne 0\) then either \(f = F = 0\) is the only solution of (1.1), or f is a monomial function of order k while F is given by \(F(x)=\tfrac{R_k}{L_k}xf(x),\) \(x\in \mathbb R,\) provided the equations below hold:

    $$\begin{aligned}{} & {} \tfrac{R_k}{L_k}\sum _{i=1}^n \gamma _i a_i^{k+1}= \sum _{j=1}^m \alpha _jc_j^k, \end{aligned}$$
    (2.11)
    $$\begin{aligned}{} & {} \tfrac{R_k}{L_k}\sum _{i=1}^n \gamma _i b_i^{k+1}= \sum _{j=1}^m \beta _jd_j^k, \end{aligned}$$
    (2.12)

    and

    $$\begin{aligned} \tfrac{R_k}{L_k}\sum _{i=1}^n \genfrac(){0.0pt}1{k+1}{p} \gamma _i a_i^pb_i^{k+1-p}= & {} \sum _{j=1}^m \genfrac(){0.0pt}1{k}{p}\beta _j c_j^pd_j^{k-p} \nonumber \\{} & {} + \sum _{j=1}^m \genfrac(){0.0pt}1{k}{p-1} \alpha _jc_j^{p-1}d_j^{k+1-p}, \end{aligned}$$
    (2.13)

    for each \(p\in \{1,\ldots , k\}.\) Furthermore, for non-trivial f we see that either

    1. (a)

      \(\sum _{j=1}^m \beta _j c_j^pd_j^{k-p} = \sum _{j=1}^m \alpha _jc_j^{p-1}d_j^{k+1-p}\) for each \(p\in \{1,\ldots , k\},\) and f is an arbitrary k-monomial function, or

    2. (b)

      \(\sum _{j=1}^m \beta _j c_j^pd_j^{k-p} \ne \sum _{j=1}^m \alpha _jc_j^{p-1}d_j^{k+1-p}\) for each \(p\in \{1,\ldots , k\},\) and f is necessarily a continuous monomial function of order k and so is F of order \(k+1.\)

Proof

Suppose that \(k = 0.\) Then \(f =\) const \(= A_0\) and F is additive. Putting \(x=y\) in (1.1) we get

$$\begin{aligned} L_0F(x) = \sum _{j=1}^m (\alpha _j + \beta _j) xA_0, \end{aligned}$$

i.e.

$$\begin{aligned} F(x) = \tfrac{R_0}{L_0}xA_0 =Cx, \end{aligned}$$
(2.14)

for every \(x \in \mathbb {R},\) since \(L_0,R_0 \ne 0;\) thus F is a continuous function. Substituting (2.14) into (1.1) we obtain

$$\begin{aligned} C\left( \sum _{i=1}^n \gamma _i a_i\right) x+C\left( \sum _{i=1}^n \gamma _i b_i\right) y = A_0\left( \sum _{j=1}^m \alpha _j\right) x +A_0\left( \sum _{j=1}^m \beta _j\right) y, \end{aligned}$$

for all \(x,y\in \mathbb R,\) whence formulae (2.9) and (2.10) easily follow. Observe that it is impossible to have

$$\begin{aligned} \sum _{i=1}^n \gamma _i a_i = \sum _{i=1}^n \gamma _i b_i =0. \end{aligned}$$
(2.15)

Indeed, in such a case \(L_0=0,\) which contradicts our assumption.

Suppose now that \(k=1.\) Then \(f = A_1\) is additive and \(F = B_2 ^{*}\) is a quadratic function, in other words the diagonalization of a biadditive function. Putting \(x=y\) in (1.1) we obtain (taking into account the rational homogeneity of f and F)

$$\begin{aligned} L_1B_2 ^{*}(x)= \left( \sum _{j=1}^m (\alpha _j + \beta _j)(c_j + d_j)\right) xA_1(x), \end{aligned}$$

whence,

$$\begin{aligned} L_1F(x) = R_1xA_1(x), \end{aligned}$$

for every \(x \in \mathbb {R}.\) Keeping in mind that \(L_1\ne 0\) and denoting \(E_1 = \tfrac{R_1}{L_1}\) we get

$$\begin{aligned} F(x) = E_1xA_1(x), \end{aligned}$$

for every \(x\in \mathbb R.\) Substituting the above into (1.1) we obtain

$$\begin{aligned}{} & {} E_1\left( \sum _{i=1}^n \gamma _i a_i^2\right) xA_1(x) + E_1\left( \sum _{i=1}^n \gamma _i b_i^2\right) yA_1(y) \nonumber \\{} & {} \qquad +E_1\left( \sum _{i=1}^n \gamma _i a_ib_i \right) xA_1(y) + E_1\left( \sum _{i=1}^n \gamma _i a_ib_i \right) yA_1(x) \nonumber \\{} & {} \quad =\left( \sum _{j=1}^m \alpha _jc_j \right) xA_1(x) + \left( \sum _{j=1}^m \beta _jd_j \right) yA_1(y) \nonumber \\{} & {} \qquad +\left( \sum _{j=1}^m \alpha _jd_j \right) xA_1(y) + \left( \sum _{j=1}^m\beta _jc_j \right) yA_1(x). \end{aligned}$$
(2.16)

Comparing the terms with the same degrees we obtain

$$\begin{aligned}{} & {} \left( E_1\left( \sum _{i=1}^n \gamma _i a_i^2\right) - \left( \sum _{j=1}^m \alpha _jc_j \right) \right) xA_1(x) = 0, \end{aligned}$$
(2.17)
$$\begin{aligned}{} & {} \left( E_1\left( \sum _{i=1}^n \gamma _i b_i^2\right) - \left( \sum _{j=1}^m \beta _jd_j \right) \right) yA_1(y) = 0, \end{aligned}$$
(2.18)
$$\begin{aligned}{} & {} \left( E_1\left( \sum _{i=1}^n \gamma _i a_ib_i \right) - \left( \sum _{j=1}^m \alpha _jd_j \right) \right) xA_1(y) \nonumber \\{} & {} \quad =\left( \left( \sum _{j=1}^m\beta _jc_j \right) - E_1\left( \sum _{i=1}^n \gamma _i a_ib_i \right) \right) yA_1(x). \end{aligned}$$
(2.19)

Observe that (2.17) holds if either \(A_1 =0\) or

$$\begin{aligned} E_1\left( \sum _{i=1}^n \gamma _i a_i^2\right) = \sum _{j=1}^m \alpha _jc_j . \end{aligned}$$
(2.20)

Similarly, (2.18) holds if either \(A_1 =0\) or

$$\begin{aligned} E_1\left( \sum _{i=1}^n \gamma _i b_i^2\right) = \sum _{j=1}^m \beta _jd_j . \end{aligned}$$
(2.21)

Finally, (2.19) holds if either \(A_1 =0\) or,

$$\begin{aligned} 2E_1\left( \sum _{i=1}^n \gamma _i a_ib_i \right) = \sum _{j=1}^m \beta _j c_j + \sum _{j=1}^m \alpha _jd_j . \end{aligned}$$
(2.22)

Now if \(A_1 =0\) then \(F=0\). Let us consider the non-zero solutions of (1.1). Then all Eqs. (2.20), (2.21) and (2.22) hold. Note that it is impossible that

$$\begin{aligned} \sum _{i=1}^n \gamma _i a_i^2 = \sum _{i=1}^n \gamma _i b_i^2 = \sum _{i=1}^n \gamma _i a_ib_i = 0. \end{aligned}$$

In fact, in such a situation we would have \(L_1=0,\) which contradicts our assumption. Moreover, substituting (2.22) into (2.19) we get

$$\begin{aligned} \left( \sum _{j=1}^m \beta _j c_j - \sum _{j=1}^m \alpha _jd_j \right) (xA_1(y) - yA_1(x))=0, \end{aligned}$$
(2.23)

for all \(x,y \in \mathbb {R}.\) From (2.23) we see that either

$$\begin{aligned} \sum _{j=1}^m \beta _j c_j = \sum _{j=1}^m \alpha _jd_j, \end{aligned}$$

which leads to a situation where \(A_1\) can be an arbitrary (in particular discontinuous) additive function and the pair \((F,f)\) is a solution of (1.1), or

$$\begin{aligned} yA_1(x) = xA_1(y), \end{aligned}$$

for all \(x,y \in \mathbb {R}.\) Putting \(y=1\) in the above equation we have

$$\begin{aligned} A_1(x) = xA_1(1), \end{aligned}$$

for every \(x \in \mathbb {R},\) hence f and F are continuous.

Now, let us pass to the situation where \(k\ge 2.\) In general, if \(k\ge 2\) and the pair (Ff) satisfies (1.1) then

$$\begin{aligned} f(x) = A_k^*(x), \end{aligned}$$

for every \(x \in \mathbb {R},\) and hence

$$\begin{aligned} F(x)= \tfrac{R_k}{L_k}x A_k^*(x), \end{aligned}$$

for every \(x \in \mathbb {R}.\) Denoting \(E_k =\tfrac{R_k}{L_k},\) we can write (1.1) as

$$\begin{aligned} E_k\sum _{i=1}^n \gamma _i(a_i x + b_i y) A_k^*(a_i x + b_i y) = \sum _{j=1}^m(\alpha _j x + \beta _j y) A_k^*(c_j x + d_j y), \end{aligned}$$

or

$$\begin{aligned}{} & {} E_k\sum _{i=1}^n \gamma _i \left[ a_i x\left( \sum _{p=0}^k\genfrac(){0.0pt}1{k}{p}a_i^p b_i^{k-p}A_k(x^p,y^{k-p})\right) \right. \\{} & {} \qquad + \left. b_iy\left( \sum _{p=0}^k\genfrac(){0.0pt}1{k}{p}a_i^p b_i^{k-p}A_k(x^p,y^{k-p})\right) \right] \\{} & {} \quad = \sum _{j=1}^m(\alpha _j x + \beta _j y)\left( \sum _{p=0}^k\genfrac(){0.0pt}1{k}{p}c_j^p d_j^{k-p}A_k(x^p, y^{k-p})\right) , \end{aligned}$$

whence

$$\begin{aligned}{} & {} E_k\sum _{i=1}^n \gamma _i \left[ \left( \sum _{p=0}^k\genfrac(){0.0pt}1{k}{p}a_i^{p+1} b_i^{k-p}xA_k(x^p,y^{k-p})\right) \right] \\{} & {} \qquad +E_k\sum _{i=1}^n \gamma _i \left[ \left( \sum _{p=0}^k\genfrac(){0.0pt}1{k}{p}a_i^{p} b_i^{k+1-p}yA_k(x^p,y^{k-p})\right) \right] \\{} & {} \quad =\sum _{j=1}^m\alpha _j \left( \sum _{p=0}^k\genfrac(){0.0pt}1{k}{p}c_j^p d_j^{k-p}xA_k(x^p, y^{k-p})\right) \\{} & {} \qquad +\sum _{j=1}^m\beta _j \left( \sum _{p=0}^k\genfrac(){0.0pt}1{k}{p}c_j^p d_j^{k-p}yA_k(x^p, y^{k-p})\right) , \end{aligned}$$

or

$$\begin{aligned}{} & {} E_k\sum _{i=1}^n \gamma _i a_i^{k+1}xA_k^*(x)+ E_k\sum _{i=1}^n \gamma _i \left[ \left( \sum _{p=0}^{k-1}\genfrac(){0.0pt}1{k}{p}a_i^{p+1} b_i^{k-p}xA_k(x^p,y^{k-p})\right) \right] \\{} & {} \qquad +E_k\sum _{i=1}^n \gamma _i b_i^{k+1}yA_k^*(y)+ E_k\sum _{i=1}^n \gamma _i \left[ \left( \sum _{p=1}^k\genfrac(){0.0pt}1{k}{p}a_i^{p} b_i^{k+1-p}yA_k(x^p,y^{k-p})\right) \right] \\{} & {} \quad =\sum _{j=1}^m\alpha _j c_j^k xA_k^*(x) + \sum _{j=1}^m\alpha _j \left( \sum _{p=0}^{k-1}\genfrac(){0.0pt}1{k}{p}c_j^p d_j^{k-p}xA_k(x^p, y^{k-p})\right) \\{} & {} \qquad +\sum _{j=1}^m\beta _j d_j^k yA_k^*(y) + \sum _{j=1}^m\beta _j \left( \sum _{p=1}^k\genfrac(){0.0pt}1{k}{p}c_j^p d_j^{k-p}yA_k(x^p, y^{k-p})\right) , \end{aligned}$$

or

$$\begin{aligned}{} & {} E_k\sum _{i=1}^n \gamma _i a_i^{k+1}xA_k^*(x)+ E_k\sum _{i=1}^n \gamma _i \left[ \left( \sum _{p=1}^{k}\genfrac(){0.0pt}1{k}{p-1}a_i^{p} b_i^{k+1-p}xA_k(x^{p-1},y^{k+1-p})\right) \right] \\{} & {} \qquad +E_k\sum _{i=1}^n \gamma _i b_i^{k+1}yA_k^*(y)+ E_k\sum _{i=1}^n \gamma _i \left[ \left( \sum _{p=1}^k\genfrac(){0.0pt}1{k}{p}a_i^{p} b_i^{k+1-p}yA_k(x^p,y^{k-p})\right) \right] \\{} & {} \quad =\sum _{j=1}^m\alpha _j c_j^k xA_k^*(x) + \sum _{j=1}^m\alpha _j \left( \sum _{p=1}^{k}\genfrac(){0.0pt}1{k}{p-1}c_j^{p-1} d_j^{k+1-p}xA_k(x^{p-1}, y^{k+1-p})\right) \\{} & {} \qquad +\sum _{j=1}^m\beta _j d_j^k yA_k^*(y) + \sum _{j=1}^m\beta _j \left( \sum _{p=1}^k\genfrac(){0.0pt}1{k}{p}c_j^p d_j^{k-p}yA_k(x^p, y^{k-p})\right) , \end{aligned}$$

for all \(x, \, y\in \mathbb R.\) Comparing terms of equal degrees we have the following equations

$$\begin{aligned}{} & {} \left( E_k\sum _{i=1}^n \gamma _i a_i^{k+1} - \sum _{j=1}^m\alpha _j c_j^k\right) xA_k^*(x) = 0, \end{aligned}$$
(2.24)
$$\begin{aligned}{} & {} \left( E_k\sum _{i=1}^n \gamma _i b_i^{k+1} - \sum _{j=1}^m\beta _j d_j^k\right) yA_k^*(y) = 0, \end{aligned}$$
(2.25)
$$\begin{aligned}{} & {} \left( E_k\sum _{i=1}^n \genfrac(){0.0pt}1{k}{p-1}\gamma _i a_i^{p} b_i^{k+1-p}- \sum _{j=1}^m \genfrac(){0.0pt}1{k}{p-1}\alpha _jc_j^{p-1} d_j^{k+1-p}\right) xA_k(x^{p-1}, y^{k+1-p}) \nonumber \\{} & {} \quad =\left( \sum _{j=1}^m \genfrac(){0.0pt}1{k}{p}\beta _j c_j^{p} d_j^{k-p}- E_k\sum _{i=1}^n \genfrac(){0.0pt}1{k}{p}\gamma _i a_i^{p} b_i^{k+1-p}\right) yA_k(x^{p}, y^{k-p}), \end{aligned}$$
(2.26)

for \(p\in \{1,\ldots , k\}\) and all \(x, \, y\in \mathbb R.\) Now, we observe that (2.24) holds if either \(A_k=0\) or

$$\begin{aligned} E_k\sum _{i=1}^n \gamma _i a_i^{k+1} = \sum _{j=1}^m\alpha _j c_j^k. \end{aligned}$$
(2.27)

Similarly, (2.25) holds if either \(A_k =0\) or

$$\begin{aligned} E_k\sum _{i=1}^n \gamma _i b_i^{k+1}= \sum _{j=1}^m \beta _jd_j^k. \end{aligned}$$
(2.28)

Finally, (2.26) holds if either \(A_k =0\) or

$$\begin{aligned} E_k\sum _{i=1}^n \genfrac(){0.0pt}1{k+1}{p} \gamma _i a_i^pb_i^{k+1-p}= \sum _{j=1}^m \genfrac(){0.0pt}1{k}{p}\beta _j c_j^pd_j^{k-p} + \sum _{j=1}^m \genfrac(){0.0pt}1{k}{p-1} \alpha _jc_j^{p-1}d_j^{k+1-p}, \end{aligned}$$
(2.29)

for \(p\in \{1,\ldots , k\}.\) Assume from now on that we are interested in nontrivial solutions of (1.1), that is, \(A_k \ne 0\) and Eqs. (2.27), (2.28) and (2.29) hold. Observe that it is impossible to have

$$\begin{aligned} \sum _{i=1}^n \gamma _i a_i^{k+1}=\sum _{i=1}^n \gamma _i b_i^{k+1}= \sum _{i=1}^n \genfrac(){0.0pt}1{k+1}{p} \gamma _i a_i^pb_i^{k+1-p} =0, \end{aligned}$$

for each \(p\in \{1,\ldots , k\}.\) Indeed, in such a case we would have \(L_k=0,\) which contradicts our assumption. Now, substituting (2.29) into (2.26) we get

$$\begin{aligned} \left( \sum _{j=1}^m \beta _j c_j^pd_j^{k-p} - \sum _{j=1}^m \alpha _jc_j^{p-1}d_j^{k+1-p} \right) (xA_k(x^{p-1},y^{k+1-p}) - yA_k(x^p, y^{k-p}))=0, \end{aligned}$$
(2.30)

for \(p\in \{1,\ldots , k\}\) and all \(x, \, y\in \mathbb R.\) Now from (2.30) we see that either

$$\begin{aligned} \sum _{j=1}^m \beta _j c_j^pd_j^{k-p} = \sum _{j=1}^m \alpha _jc_j^{p-1}d_j^{k+1-p}, \end{aligned}$$

for each \(p\in \{1,\ldots , k\},\) which leads to a situation where \(A_k\) can be an arbitrary symmetric \(k\)-additive function and the pair \((F,f)\) is a solution of (1.1), or

$$\begin{aligned} xA_k(x^{p-1},y^{k+1-p}) = yA_k(x^p, y^{k-p}), \end{aligned}$$
(2.31)

for \(p\in \{1,\ldots , k\}\) and all \(x, \, y\in \mathbb R.\) Now, using (2.31) for \(p\in \{1,\ldots , k\}\) we arrive at

$$\begin{aligned} y^kA_k^*(x) = y^{k-1}\left[ yA_k^*(x)\right] = y^{k-1}\left[ xA_k(x^{k-1},y)\right] = \cdots = x^kA_k^*(y), \end{aligned}$$

for every \(x,\, y\in \mathbb R,\) in other words, putting \(y=1\) we obtain

$$\begin{aligned} A_k^*(x)= A_k^*(1) x^k, \end{aligned}$$
(2.32)

for every \(x\in \mathbb R,\) which means that \(A_k\) is continuous for \(k\ge 2;\) this ends the proof. \(\square \)
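As a concrete illustration of Theorem 2.1, conditions (2.11)–(2.13) can be checked numerically for Eq. (1.9), viewed as an instance of (1.1) with k = 1; the encoding of the coefficients is our own:

```python
# Check conditions (2.11)-(2.13) of Theorem 2.1 for equation (1.9):
#   gamma = (1,-1,-1), (a_i,b_i) = (1,1),(1,0),(0,1),
#   (alpha_j,beta_j,c_j,d_j) = (1,0,0,1),(0,1,1,0),  k = 1.
from math import comb

gamma = [1, -1, -1]
ab = [(1, 1), (1, 0), (0, 1)]
abcd = [(1, 0, 0, 1), (0, 1, 1, 0)]
k = 1

L1 = sum(g * (a + b) ** (k + 1) for g, (a, b) in zip(gamma, ab))   # = 2
R1 = sum((al + be) * (c + d) ** k for al, be, c, d in abcd)        # = 2
E = R1 / L1                                                        # E_1 = 1

eq_2_11 = E * sum(g * a ** (k + 1) for g, (a, b) in zip(gamma, ab)) \
          == sum(al * c ** k for al, be, c, d in abcd)
eq_2_12 = E * sum(g * b ** (k + 1) for g, (a, b) in zip(gamma, ab)) \
          == sum(be * d ** k for al, be, c, d in abcd)

p = 1   # the only value in {1, ..., k} for k = 1
lhs = E * sum(comb(k + 1, p) * g * a ** p * b ** (k + 1 - p)
              for g, (a, b) in zip(gamma, ab))
rhs = sum(comb(k, p) * be * c ** p * d ** (k - p)
          for al, be, c, d in abcd) \
      + sum(comb(k, p - 1) * al * c ** (p - 1) * d ** (k + 1 - p)
            for al, be, c, d in abcd)
print(eq_2_11, eq_2_12, lhs == rhs)
```

All three conditions hold with \(E_1 = R_1/L_1 = 1,\) and since \(\sum _j \beta _j c_j = \sum _j \alpha _j d_j,\) case (a) of the theorem applies: f may be an arbitrary additive function and \(F(x)=xf(x).\)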

Remark 2.1

We note here that in Eqs. (1.1) and (1.12), if \(f=0\) and \(k\in \mathbb {N}\cup \{0\}\) with

$$\begin{aligned} L_k =\sum _{i=1}^n \gamma _i (a_i+b_i)^{k+1} = 0, \end{aligned}$$

then F is not necessarily equal to zero. Of course this does not contradict Theorem 2.1, because there we assumed \(L_k \ne 0.\) Therefore, we state the propositions below.

Proposition 2.2

Let \(\gamma _i \in \mathbb R,\) \(a_i, b_i \in \mathbb Q,\) \(i \in \{1,\ldots , n\}.\) Let \((L_k)_{k\in \mathbb {N}\cup \{0\}}\) be defined by (1.12). Assume that \(k=0\) and that

$$\begin{aligned} L_0 =\sum _{i=1}^n \gamma _i (a_i+b_i) = 0, \end{aligned}$$
(2.33)

holds. Then either \(f = F = 0\) is the only solution of (1.1), or:

  1. (a)

    If \(f=0\) and \(F = \mathrm{const} = A_0,\) where \(A_0\) is any real number, is a solution to (1.1), then

    $$\begin{aligned} \sum _{i=1}^n \gamma _i =0. \end{aligned}$$
  2. (b)

    If \(f=0\) and \(F = A_1,\) where \(A_1\) is an additive function, is a solution to (1.1), then

    $$\begin{aligned} \sum _{i=1}^n \gamma _ia_i = \sum _{i=1}^n \gamma _i b_i =0. \end{aligned}$$

Proof

Suppose that (2.33) holds. Let \(f=0\) and \(F = const = A_0\) where \(A_0\) is any real number. Substituting this in Eq. (1.1) we have

$$\begin{aligned} \sum _{i=1}^n \gamma _i A_0 =0, \end{aligned}$$

and this holds if either \(A_0=0\) or \(\sum _{i=1}^n \gamma _i =0.\) Now for non-trivial solutions of (1.1) we have \(A_0 \ne 0\) and \(\sum _{i=1}^n \gamma _i =0\).

Finally, suppose that (2.33) holds. Let \(f=0\) and \(F = A_1\) be additive. Substituting this in Eq. (1.1) we get (taking into account the rational homogeneity of \(A_1\))

$$\begin{aligned} \sum _{i=1}^n \gamma _i A_1(a_ix+b_iy)=0, \end{aligned}$$

i.e.

$$\begin{aligned} \sum _{i=1}^n \gamma _i a_i A_1(x) + \sum _{i=1}^n \gamma _i b_i A_1(y) = 0. \end{aligned}$$

Comparing terms of the same degree on both sides of the above equation, we obtain

$$\begin{aligned} \sum _{i=1}^n \gamma _i a_i A_1(x)=0, \end{aligned}$$

for all \(x \in \mathbb R,\) and symmetrically

$$\begin{aligned} \sum _{i=1}^n \gamma _i b_i A_1(y)=0, \end{aligned}$$

for all \(y \in \mathbb R.\) Both of these equations hold if either \(A_1 = 0\) or \(\sum _{i=1}^n \gamma _i a_i = \sum _{i=1}^n \gamma _i b_i = 0.\) Now for non-trivial solutions of (1.1) we have \(A_1 \ne 0\) and \(\sum _{i=1}^n \gamma _i a_i = \sum _{i=1}^n \gamma _i b_i = 0.\) \(\square \)
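Part (b) of Proposition 2.2 can be illustrated on Cauchy's equation: taking \(\gamma = (1,-1,-1)\) and \((a_i,b_i) = (1,1),(1,0),(0,1),\) both sums \(\sum \gamma _i a_i\) and \(\sum \gamma _i b_i\) vanish, and \(f=0\) together with any additive F solves (1.1). A sketch with a continuous additive F (our own sample):

```python
# With gamma = (1,-1,-1) and pairs (1,1),(1,0),(0,1), equation (1.1) with
# f = 0 reduces to Cauchy's equation F(x+y) - F(x) - F(y) = 0, solved by
# every additive F; here F(x) = 3x is a continuous additive example.
gamma = [1, -1, -1]
ab = [(1, 1), (1, 0), (0, 1)]

assert sum(g * a for g, (a, b) in zip(gamma, ab)) == 0  # sum gamma_i a_i
assert sum(g * b for g, (a, b) in zip(gamma, ab)) == 0  # sum gamma_i b_i

F = lambda t: 3.0 * t
checks = [sum(g * F(a * x + b * y) for g, (a, b) in zip(gamma, ab))
          for x in (-2.0, 0.5, 1.0) for y in (-1.0, 0.0, 2.5)]
print(all(abs(v) < 1e-12 for v in checks))
```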

Proposition 2.3

Let \(\gamma _i \in \mathbb R,\) \(a_i, b_i \in \mathbb Q,\) \(i \in \{1,\ldots , n\}.\) Let \((L_k)_{k\in \mathbb {N}\cup \{0\}}\) be defined by (1.12). Assume that \(k\in \mathbb {N}\) and that

$$\begin{aligned} L_k =\sum _{i=1}^n \gamma _i (a_i+b_i)^{k+1} = 0, \end{aligned}$$
(2.34)

holds. Then either \(f = F = 0\) is the only solution of (1.1), or \(f=0\) and \(F= A_{k+1}^*\), an arbitrary \((k+1)\)-additive function, is a solution when

$$\begin{aligned} \sum _{i=1}^n \genfrac(){0.0pt}1{k+1}{p} \gamma _i a_i^pb_i^{k+1-p} =0, \end{aligned}$$

for each \(p\in \{0,\ldots , k+1\}.\)

Remark 2.2

We note that if \(f=0,\) \(k=0,\) and \(\sum _{i=1}^n \gamma _i =0,\) then \(F = A_0,\) where \(A_0\) is an arbitrary real constant, is also a solution to (1.1).

Remark 2.3

Since we are interested in pairs \((F, f)\) of polynomial functions satisfying (1.1), we mention here that assumptions (2.33), (2.34) and Remark 2.2 are essential when \(f = 0.\) Therefore, if \(f = 0\) and \(k\in \mathbb {N}\cup \{0\}\) is such that \(L_k \ne 0,\) then \(f = F = 0\) is the only solution to (1.1).

Now we show that the main results obtained by Koclega-Kulpa et al. [9] (see Theorems 1 and 2 in [9]) are indeed special forms of our results.

Theorem 2.4

(cf. Theorem 1 in [9]). The functions \(F,f:\mathbb R\rightarrow \mathbb R\) satisfy

$$\begin{aligned} 8F(y) - 8F(x)= & {} yf(x) + 3yf\left( \tfrac{x+2y}{3}\right) + 3yf\left( \tfrac{2x+y}{3}\right) \nonumber \\{} & {} +yf(y) - xf(x) -3xf\left( \tfrac{x+2y}{3}\right) -3xf\left( \tfrac{2x+y}{3}\right) \nonumber \\{} & {} -xf(y) \end{aligned}$$
(2.35)

for \(x,y \in \mathbb R\), if and only if

$$\begin{aligned} f(x) = ax^3 + bx^2 + cx + d \end{aligned}$$

and

$$\begin{aligned} F(x) =\tfrac{1}{4}ax^4 + \tfrac{1}{3}bx^3 + \tfrac{1}{2}cx^2 + dx + e \end{aligned}$$

for all \(x \in \mathbb R\) and \(a,b,c,d,e \in \mathbb R.\)

Proof

Suppose that the pair \((F, f)\) satisfies Eq. (2.35). Putting \(y = x+y\) in the equation, we get

$$\begin{aligned} 8F(x + y) - 8F(x)= & {} yf(x) + 3yf\left( \tfrac{3x+2y}{3}\right) + 3yf\left( \tfrac{3x+ y}{3}\right) \nonumber \\{} & {} + yf(x+y). \end{aligned}$$
(2.36)

Now rearranging (2.36) in the form

$$\begin{aligned} yf(x) + 8F(x)= & {} 8F(x + y) - 3yf\left( \tfrac{3x+2y}{3}\right) - 3yf\left( \tfrac{3x+ y}{3}\right) \nonumber \\{} & {} - yf(x+y), \end{aligned}$$
(2.37)

and applying Lemma 1.1, we get \(I_{0,0} = \{(id, id)\},\) \(I_{0,1} = \{(id, \tfrac{2}{3}id),(id, \tfrac{1}{3}id),(id, id)\},\) \(\psi _{0,0,(id,id)}= F,\) \(\psi _{0,1,( id,\frac{2}{3}id)}= -f,\) \(\psi _{0,1,(id,\frac{1}{3}id)}= -f,\) \(\psi _{0,1,(id,id)}= -f,\) \(\varphi _0 = F,\) \(\varphi _1 = f.\) We also have \(K_0 = I_{0,0},\) \(K_1 = I_{0,1},\) and \(K_0 \cup K_1 = \{(id, \tfrac{2}{3}id),(id, \tfrac{1}{3}id),(id, id)\}.\) Therefore, \(\varphi _1 = f\) is a polynomial function of degree at most \(m=5\) i.e.

$$\begin{aligned} m = \hbox {card}(K_0 \cup K_1) + \hbox {card}(K_1) -1 = 3+3-1 =5. \end{aligned}$$

Observe that (2.35) is a special form of (1.1), thus we have that F is a polynomial function. Now we check conditions of Theorem 2.1. If \(k=0,\) then \(f(x) = d,\) for some constant \(d \in \mathbb R\) and all \(x \in \mathbb R,\) further from (2.9) and (2.10) we have

$$\begin{aligned} \tfrac{R_0}{L_0}\sum _{i=1}^2 \gamma _i a_i= \sum _{j=1}^8 \alpha _j \end{aligned}$$

and

$$\begin{aligned} \tfrac{R_0}{L_0}\sum _{i=1}^2 \gamma _i b_i= \sum _{j=1}^8 \beta _j. \end{aligned}$$

Hence, \(\tfrac{R_0}{L_0} =1,\) and consequently \(F(x) = dx\) for some constant \(d \in \mathbb R\) and all \(x \in \mathbb R.\) If \(k=1,\) we get

$$\begin{aligned}{} & {} \tfrac{R_1}{L_1}\sum _{i=1}^2 \gamma _i a_i^2= \sum _{j=1}^8 \alpha _jc_j,\\{} & {} \tfrac{R_1}{L_1}\sum _{i=1}^2 \gamma _i b_i^2= \sum _{j=1}^8 \beta _j d_j, \end{aligned}$$

and

$$\begin{aligned} \tfrac{R_1}{L_1}\sum _{i=1}^2 2\gamma _i a_i b_i = 0 = \sum _{j=1}^8 \beta _j c_j + \sum _{j=1}^8 \alpha _jd_j, \end{aligned}$$

thus, \(\tfrac{R_1}{L_1} = \tfrac{1}{2}\) and also,

$$\begin{aligned} \sum _{j=1}^8 \beta _j c_j = 4 \ne -4 = \sum _{j=1}^8 \alpha _jd_j \end{aligned}$$

then by Theorem 2.1, we infer that the monomial functions \(F, f\) are continuous, therefore \(f(x) = cx\) and \(F(x) = \tfrac{1}{2}cx^2\) for some constant \(c \in \mathbb R\) and all \(x \in \mathbb R.\) Now, for \(k=2\) we have

$$\begin{aligned}{} & {} \tfrac{R_2}{L_2}\sum _{i=1}^2 \gamma _i a_i^3= \sum _{j=1}^8 \alpha _jc_j^2,\\{} & {} \tfrac{R_2}{L_2}\sum _{i=1}^2 \gamma _i b_i^3= \sum _{j=1}^8 \beta _j d_j^2, \end{aligned}$$

and

$$\begin{aligned} \tfrac{R_2}{L_2}\sum _{i=1}^2 \genfrac(){0.0pt}1{3}{p} \gamma _i a_i^p b_i^{3-p} = 0 = \sum _{j=1}^8 \genfrac(){0.0pt}1{2}{p}\beta _j c_j^p d_j^{2-p} + \sum _{j=1}^8 \genfrac(){0.0pt}1{2}{p-1}\alpha _j c_j^{p-1}d_j^{3-p}, \end{aligned}$$

for each \(p \in \{1,2\}.\) Hence, \(\tfrac{R_2}{L_2} = \tfrac{1}{3}\) and also,

$$\begin{aligned} \sum _{j=1}^8 \beta _j c_j^p d_j^{2-p} = \tfrac{8}{3} \ne - \tfrac{8}{3} = \sum _{j=1}^8 \alpha _j c_j^{p-1}d_j^{3-p}, \end{aligned}$$

for each \(p \in \{1,2\}.\) By Theorem 2.1 we infer that the monomial functions \(F, f\) are continuous, therefore \(f(x) = bx^2\) and \(F(x) = \tfrac{1}{3}bx^3\) for some constant \(b \in \mathbb R\) and all \(x \in \mathbb R.\) If \(k=3,\) then we obtain

$$\begin{aligned}{} & {} \tfrac{R_3}{L_3}\sum _{i=1}^2 \gamma _i a_i^4= \sum _{j=1}^8 \alpha _jc_j^3,\\{} & {} \tfrac{R_3}{L_3}\sum _{i=1}^2 \gamma _i b_i^4= \sum _{j=1}^8 \beta _j d_j^3, \end{aligned}$$

and

$$\begin{aligned} \tfrac{R_3}{L_3}\sum _{i=1}^2 \genfrac(){0.0pt}1{4}{p} \gamma _i a_i^p b_i^{4-p} = 0 = \sum _{j=1}^8 \genfrac(){0.0pt}1{3}{p}\beta _j c_j^p d_j^{3-p} + \sum _{j=1}^8 \genfrac(){0.0pt}1{3}{p-1} \alpha _j c_j^{p-1}d_j^{4-p}, \end{aligned}$$

for each \(p \in \{1,2,3\}.\) Hence, \(\tfrac{R_3}{L_3} = \tfrac{1}{4}\) and also,

$$\begin{aligned} \sum _{j=1}^8 \beta _j c_j^p d_j^{3-p} = 2 \ne - 2 = \sum _{j=1}^8 \alpha _j c_j^{p-1}d_j^{4-p}, \end{aligned}$$

for each \(p \in \{1,2,3\}.\) Again by Theorem 2.1 we infer that the monomial functions \(F, f\) are continuous, therefore \(f(x) = ax^3\) and \(F(x) = \tfrac{1}{4}ax^4\) for some constant \(a \in \mathbb R\) and all \(x \in \mathbb R.\) Now, for \(k=4\) we get

$$\begin{aligned}{} & {} \tfrac{R_4}{L_4}\sum _{i=1}^2 \gamma _i a_i^5= \sum _{j=1}^8 \alpha _jc_j^4,\\{} & {} \tfrac{R_4}{L_4}\sum _{i=1}^2 \gamma _i b_i^5= \sum _{j=1}^8 \beta _j d_j^4, \end{aligned}$$

and

$$\begin{aligned} \tfrac{R_4}{L_4}\sum _{i=1}^2 \genfrac(){0.0pt}1{5}{p} \gamma _i a_i^p b_i^{5-p} = 0 \ne \sum _{j=1}^8 \genfrac(){0.0pt}1{4}{p}\beta _j c_j^p d_j^{4-p} + \sum _{j=1}^8 \genfrac(){0.0pt}1{4}{p-1} \alpha _j c_j^{p-1}d_j^{5-p}, \end{aligned}$$

for some \(p \in \{1,2,3,4\}.\) In particular, taking \(p=1\) we see that

$$\begin{aligned} \tfrac{R_4}{L_4}\sum _{i=1}^2 5\gamma _i a_i b_i^4 = 0 \ne -\tfrac{4}{27} =\sum _{j=1}^8 4 \beta _j c_j d_j^3 + \sum _{j=1}^8 \alpha _j d_j^4. \end{aligned}$$

Hence, this leads to \(f = F = 0.\) Finally, if \(k=5,\) then we obtain

$$\begin{aligned}{} & {} \tfrac{R_5}{L_5}\sum _{i=1}^2 \gamma _i a_i^6= \sum _{j=1}^8 \alpha _jc_j^5,\\{} & {} \tfrac{R_5}{L_5}\sum _{i=1}^2 \gamma _i b_i^6= \sum _{j=1}^8 \beta _j d_j^5, \end{aligned}$$

and

$$\begin{aligned} \tfrac{R_5}{L_5}\sum _{i=1}^2 \genfrac(){0.0pt}1{6}{p} \gamma _i a_i^p b_i^{6-p} = 0 \ne \sum _{j=1}^8 \genfrac(){0.0pt}1{5}{p}\beta _j c_j^p d_j^{5-p} + \sum _{j=1}^8 \genfrac(){0.0pt}1{5}{p-1} \alpha _j c_j^{p-1}d_j^{6-p}, \end{aligned}$$

for some \(p \in \{1,2,3,4,5\}.\) In particular, taking \(p=1\) we see that

$$\begin{aligned} \tfrac{R_5}{L_5}\sum _{i=1}^2 6\gamma _i a_i b_i^5 = 0 \ne -\tfrac{8}{27} =\sum _{j=1}^8 5 \beta _j c_j d_j^4 + \sum _{j=1}^8 \alpha _j d_j^5. \end{aligned}$$

Hence, this leads to \(f = F = 0.\) Now, taking into account Proposition 2.2, we see that if \(k=0\) and \(L_0 =\sum _{i=1}^2 \gamma _i (a_i+b_i) = 0,\) then \(f=0\) and \(F = e,\) where \(e\) is a real number, is also a solution to (2.35), because \(\sum _{i=1}^2 \gamma _i = 0.\) Thus the general solution of Eq. (2.35) is given by \(f(x) = ax^3 + bx^2 + cx + d\) and \(F(x) =\tfrac{1}{4}ax^4 + \tfrac{1}{3}bx^3 + \tfrac{1}{2}cx^2 + dx + e,\) where \(x \in \mathbb R\) and \(a,b,c,d,e \in \mathbb R.\) To finish the proof it suffices to check that these functions indeed satisfy Eq. (2.35). \(\square \)
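Since both sides of (2.35) are polynomials in \(x\) and \(y\), the "if" direction can also be confirmed mechanically. A minimal sketch in Python with exact rational arithmetic (the coefficient values below are arbitrary illustrative choices, not from the paper); note that (2.35) is exactly Simpson's 3/8 rule applied to \(f = F'\):

```python
# Exact check that f(x) = ax^3 + bx^2 + cx + d and
# F(x) = (a/4)x^4 + (b/3)x^3 + (c/2)x^2 + dx + e satisfy Eq. (2.35), i.e.
# 8(F(y) - F(x)) = (y - x)[f(x) + 3f((x+2y)/3) + 3f((2x+y)/3) + f(y)].
from fractions import Fraction as Fr

a, b, c, d, e = Fr(2), Fr(-3), Fr(5), Fr(7), Fr(11)  # arbitrary sample coefficients

def f(t):
    return a*t**3 + b*t**2 + c*t + d

def F(t):
    return a*t**4/4 + b*t**3/3 + c*t**2/2 + d*t + e

def residual(x, y):
    lhs = 8*F(y) - 8*F(x)
    rhs = (y - x)*(f(x) + 3*f((x + 2*y)/3) + 3*f((2*x + y)/3) + f(y))
    return lhs - rhs

# Both sides have degree at most 4 in each variable, so agreement on a
# 7 x 7 grid of rational points already forces the identity.
assert all(residual(Fr(p), Fr(q)) == 0 for p in range(-3, 4) for q in range(-3, 4))
```

Because the check uses `fractions.Fraction` throughout, there is no floating-point tolerance involved; the residual is identically zero.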

Theorem 2.5

(cf. Theorem 2 in [9]). The functions \(F,f:\mathbb R\rightarrow \mathbb R\) satisfy

$$\begin{aligned} F(y) - F(x)= & {} \tfrac{1}{6}yf(x) + \tfrac{2}{3}yf\left( \tfrac{x+y}{2}\right) + \tfrac{1}{6}yf(y) -\tfrac{1}{6}xf(x) - \tfrac{2}{3}xf\left( \tfrac{x+y}{2}\right) \nonumber \\{} & {} -\tfrac{1}{6}xf(y) \end{aligned}$$
(2.38)

for \(x,y \in \mathbb R\), if and only if

$$\begin{aligned} f(x) = ax^3 + bx^2 + cx + d \end{aligned}$$

and

$$\begin{aligned} F(x) =\tfrac{1}{4}ax^4 + \tfrac{1}{3}bx^3 + \tfrac{1}{2}cx^2 + dx + e \end{aligned}$$

for all \(x \in \mathbb R\) and \(a,b,c,d,e \in \mathbb R.\)

Proof

Suppose that the pair \((F, f)\) satisfies Eq. (2.38). Putting \(y = x+y\) in the equation, we get

$$\begin{aligned} F(x + y) - F(x) = \tfrac{1}{6}yf(x) + \tfrac{2}{3}yf\left( \tfrac{2x+y}{2}\right) + \tfrac{1}{6}yf(x+y). \end{aligned}$$
(2.39)

Now rearranging (2.39) in the form

$$\begin{aligned} \tfrac{1}{6}yf(x) + F(x) = F(x + y) - \tfrac{2}{3}yf\left( \tfrac{2x+y}{2}\right) - \tfrac{1}{6}yf(x+y), \end{aligned}$$
(2.40)

and applying Lemma 1.1 we get, \(I_{0,0} = \{(id, id)\},\) \(I_{0,1} = \{(id, \tfrac{1}{2}id),(id, id)\},\) \(\psi _{0,0,(id,id)}= F,\) \(\psi _{0,1,(id,\frac{1}{2}id)}= -f,\) \(\psi _{0,1,(id,id)}= -f,\) \(\varphi _0 = F,\) \(\varphi _1 = f.\) We also have \(K_0 = I_{0,0},\) \(K_1 = I_{0,1},\) and \(K_0 \cup K_1 = \{(id, \tfrac{1}{2}id),(id, id)\}.\) Therefore, \(\varphi _1 = f\) is a polynomial function of degree at most \(m=3\) i.e.

$$\begin{aligned} m = \hbox {card}(K_0 \cup K_1) + \hbox {card}(K_1) -1 = 2+2-1 =3. \end{aligned}$$

Since (2.38) is a special case of (1.1) we know also that F is a polynomial function. Now we check conditions of Theorem 2.1. If \(k=0,\) then \(f(x) = d,\) for some constant \(d \in \mathbb R\) and all \(x \in \mathbb R,\) further from (2.9) and (2.10) we have

$$\begin{aligned} \tfrac{R_0}{L_0}\sum _{i=1}^2 \gamma _i a_i= \sum _{j=1}^6 \alpha _j, \end{aligned}$$

and

$$\begin{aligned} \tfrac{R_0}{L_0}\sum _{i=1}^2 \gamma _i b_i= \sum _{j=1}^6 \beta _j. \end{aligned}$$

Hence, \(\tfrac{R_0}{L_0} =1,\) and consequently \(F(x) = dx\) for some constant \(d \in \mathbb R\) and all \(x \in \mathbb R.\) Now, for \(k=1\) we get

$$\begin{aligned}{} & {} \tfrac{R_1}{L_1}\sum _{i=1}^2 \gamma _i a_i^2= \sum _{j=1}^6 \alpha _jc_j,\\{} & {} \tfrac{R_1}{L_1}\sum _{i=1}^2 \gamma _i b_i^2= \sum _{j=1}^6 \beta _j d_j, \end{aligned}$$

and

$$\begin{aligned} \tfrac{R_1}{L_1}\sum _{i=1}^2 2\gamma _i a_i b_i = 0 = \sum _{j=1}^6 \beta _j c_j + \sum _{j=1}^6 \alpha _jd_j, \end{aligned}$$

thus, \(\tfrac{R_1}{L_1} = \tfrac{1}{2}\) and also,

$$\begin{aligned} \sum _{j=1}^6 \beta _j c_j = \tfrac{1}{2} \ne - \tfrac{1}{2} = \sum _{j=1}^6 \alpha _jd_j \end{aligned}$$

then by Theorem 2.1 we infer that the monomial functions \(F, f\) are continuous, therefore \(f(x) = cx\) and \(F(x) = \tfrac{1}{2}cx^2\) for some constant \(c \in \mathbb R\) and all \(x \in \mathbb R.\) If \(k=2,\) then we have

$$\begin{aligned}{} & {} \tfrac{R_2}{L_2}\sum _{i=1}^2 \gamma _i a_i^3= \sum _{j=1}^6 \alpha _jc_j^2,\\{} & {} \tfrac{R_2}{L_2}\sum _{i=1}^2 \gamma _i b_i^3= \sum _{j=1}^6 \beta _j d_j^2, \end{aligned}$$

and

$$\begin{aligned} \tfrac{R_2}{L_2}\sum _{i=1}^2 \genfrac(){0.0pt}1{3}{p} \gamma _i a_i^p b_i^{3-p} = 0 = \sum _{j=1}^6 \genfrac(){0.0pt}1{2}{p}\beta _j c_j^p d_j^{2-p} + \sum _{j=1}^6 \genfrac(){0.0pt}1{2}{p-1}\alpha _j c_j^{p-1}d_j^{3-p}, \end{aligned}$$

for each \(p \in \{1,2\}.\) Hence, \(\tfrac{R_2}{L_2} = \tfrac{1}{3}\) and also,

$$\begin{aligned} \sum _{j=1}^6 \beta _j c_j^p d_j^{2-p} = \tfrac{1}{3} \ne - \tfrac{1}{3} = \sum _{j=1}^6 \alpha _j c_j^{p-1}d_j^{3-p}, \end{aligned}$$

for each \(p \in \{1,2\}.\) By Theorem 2.1 we infer that the monomial functions \(F, f\) are continuous, therefore \(f(x) = bx^2\) and \(F(x) = \tfrac{1}{3}bx^3\) for some constant \(b \in \mathbb R\) and all \(x \in \mathbb R.\) Finally, if \(k=3,\) then we obtain

$$\begin{aligned}{} & {} \tfrac{R_3}{L_3}\sum _{i=1}^2 \gamma _i a_i^4= \sum _{j=1}^6 \alpha _jc_j^3,\\{} & {} \tfrac{R_3}{L_3}\sum _{i=1}^2 \gamma _i b_i^4= \sum _{j=1}^6 \beta _j d_j^3, \end{aligned}$$

and

$$\begin{aligned} \tfrac{R_3}{L_3}\sum _{i=1}^2 \genfrac(){0.0pt}1{4}{p} \gamma _i a_i^p b_i^{4-p} = 0 = \sum _{j=1}^6 \genfrac(){0.0pt}1{3}{p}\beta _j c_j^p d_j^{3-p} + \sum _{j=1}^6 \genfrac(){0.0pt}1{3}{p-1} \alpha _j c_j^{p-1}d_j^{4-p}, \end{aligned}$$

for each \(p \in \{1,2,3\}.\) Hence, \(\tfrac{R_3}{L_3} = \tfrac{1}{4}\) and also,

$$\begin{aligned} \sum _{j=1}^6 \beta _j c_j^p d_j^{3-p} = \tfrac{1}{4} \ne - \tfrac{1}{4} = \sum _{j=1}^6 \alpha _j c_j^{p-1}d_j^{4-p}, \end{aligned}$$

for each \(p \in \{1,2,3\}.\) Again by Theorem 2.1 we infer that the monomial functions \(F, f\) are continuous, therefore \(f(x) = ax^3\) and \(F(x) = \tfrac{1}{4}ax^4\) for some constant \(a \in \mathbb R\) and all \(x \in \mathbb R.\) Now, taking into account Proposition 2.2, we see that if \(k=0\) and \(L_0 =\sum _{i=1}^2 \gamma _i (a_i+b_i) = 0,\) then \(f=0\) and \(F = e,\) where \(e\) is a real number, is also a solution to (2.38), since \(\sum _{i=1}^2 \gamma _i = 0.\) Thus the general solution of Eq. (2.38) is given by \(f(x) = ax^3 + bx^2 + cx + d\) and \(F(x) =\tfrac{1}{4}ax^4 + \tfrac{1}{3}bx^3 + \tfrac{1}{2}cx^2 + dx + e,\) where \(x \in \mathbb R\) and \(a,b,c,d,e \in \mathbb R.\) To finish the proof it suffices to check that these functions indeed satisfy Eq. (2.38). \(\square \)
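Equation (2.38) is Simpson's rule in disguise: multiplying both sides by 6 produces the familiar weights \(1, 4, 1\). A quick exact check of the stated solutions (illustrative only; the sample coefficients are arbitrary):

```python
# Check 6(F(y) - F(x)) = (y - x)[f(x) + 4f((x+y)/2) + f(y)], i.e. Eq. (2.38)
# multiplied by 6, for the claimed cubic/quartic solution pair.
from fractions import Fraction as Fr

a, b, c, d, e = Fr(1), Fr(2), Fr(-1), Fr(3), Fr(5)  # arbitrary sample coefficients
f = lambda t: a*t**3 + b*t**2 + c*t + d
F = lambda t: a*t**4/4 + b*t**3/3 + c*t**2/2 + d*t + e

simpson_residual = lambda x, y: 6*(F(y) - F(x)) - (y - x)*(f(x) + 4*f((x + y)/2) + f(y))

# Exact rational agreement on a grid of points; degree <= 4 in each
# variable, so this already proves the polynomial identity.
assert all(simpson_residual(Fr(p), Fr(q)) == 0
           for p in range(-3, 4) for q in range(-3, 4))
```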

Remark 2.4

If in (1.1) we take \(n=2, \gamma _1 = 1, \gamma _2 = -1, a_1 = b_2 =1, b_1 = a_2 =0,\) and \(\beta _j = - \alpha _j\) for each \(j \in \{1, \ldots , m\},\) then we get the equation considered by Koclega-Kulpa et al. [9], namely,

$$\begin{aligned} F(x) - F(y) = (x -y)[\alpha _1f(c_1x + d_1y) + \cdots + \alpha _m f(c_mx + d_my)]. \end{aligned}$$

It is worth noting that this equation stems from a well-known quadrature rule used in numerical analysis.

In line with the papers of Koclega-Kulpa and Szostok [8] and Koclega-Kulpa et al. [10], we now consider polynomial functions connected with the Hermite–Hadamard inequality in the class of continuous functions. The Hermite–Hadamard inequality is given as

$$\begin{aligned} f\left( \tfrac{x +y}{2}\right) \leqslant \tfrac{1}{y -x}\int _{x}^{y} f(t) \,dt \leqslant \tfrac{f(x) + f(y)}{2}, \end{aligned}$$
(2.41)

for all \(x,y \in \mathbb R.\) Rewrite now inequality (2.41) in the form

$$\begin{aligned} \tfrac{1}{y -x}\int _{x}^{y} f(t) \,dt \in \left[ f\left( \tfrac{x +y}{2}\right) , \tfrac{f(x) + f(y)}{2}\right] , \end{aligned}$$

for all \(x,y \in \mathbb R.\) However, if we consider the function \(f(x) = x^3 + x^2 + x\), then we have much more detailed information namely,

$$\begin{aligned} \tfrac{1}{y -x}\int _{x}^{y} f(t) \,dt = \tfrac{2}{3}f\left( \tfrac{x +y}{2}\right) + \tfrac{1}{3}\tfrac{f(x) + f(y)}{2}. \end{aligned}$$
(2.42)

Now we may rewrite (2.42) in the form

$$\begin{aligned} F(y) - F(x) = (y-x)\left( \tfrac{2}{3}f\left( \tfrac{x +y}{2}\right) + \tfrac{1}{3}\tfrac{f(x) + f(y)}{2}\right) , \end{aligned}$$
(2.43)

where \(F' = f\) (because f is continuous). Now combining Eqs. (2.42) and (2.43) we obtain a more general functional equation namely,

$$\begin{aligned} F(y) - F(x) = \int _{x}^{y} f(t) \,dt = (y-x)\sum _{j=1}^m \beta _jf(c_j x + (1-c_j) y), \end{aligned}$$
(2.44)

for every \(x,y \in \mathbb R,\) \(c_j \in \mathbb Q,\) and \(\beta _j \in \mathbb R\) with \(\sum _{j=1}^m \beta _j =1.\) This equation is related to approximate integration: the quadrature rules of approximate integration can be obtained by an appropriate specification of the coefficients in (2.44).
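As a quick illustration (not part of the paper's argument), the refinement (2.42) of the Hermite–Hadamard inequality can be verified exactly for the sample function \(f(t) = t^3 + t^2 + t\) used above:

```python
# Exact check of (2.42): for f(t) = t^3 + t^2 + t, the integral mean over
# [x, y] equals (2/3) f((x+y)/2) + (1/3) (f(x)+f(y))/2.
from fractions import Fraction as Fr

def f(t):
    return t**3 + t**2 + t

def F(t):  # an antiderivative of f
    return t**4/4 + t**3/3 + t**2/2

def integral_mean(x, y):
    return (F(y) - F(x)) / (y - x)

def combo(x, y):
    return Fr(2, 3)*f((x + y)/2) + Fr(1, 3)*(f(x) + f(y))/2

assert all(integral_mean(Fr(p), Fr(q)) == combo(Fr(p), Fr(q))
           for p in range(-3, 4) for q in range(-3, 4) if p != q)
```

This is again Simpson's rule (weights \(1/6, 2/3, 1/6\)), which is exact for every cubic, not only for this particular \(f\).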

Remark 2.5

Observe that in (1.1), if \(n=2, \gamma _1 = 1, \gamma _2 = -1, a_1 = b_2 =0, b_1 = a_2 =1,\) \(\alpha _j = - \beta _j\) for each \(j \in \{1, \ldots , m\}\) with \(\sum _{j=1}^m \beta _j =1,\) and \(d_j = 1-c_j\) for each \(j \in \{1, \ldots , m\},\) then we obtain Eq. (2.44), which is the functional equation considered by Koclega-Kulpa et al. [10]. We note here that in their paper \(c_j \in \mathbb R.\)

Since (2.44) is a special form of (1.1), we may now use our method to obtain the polynomial functions of the functional equations belonging to class (2.44).

Theorem 2.6

The functions \(F,f:\mathbb R\rightarrow \mathbb R\) satisfy

$$\begin{aligned} F(y) - F(x) = (y-x)\left( \tfrac{2}{3}f\left( \tfrac{x +y}{2}\right) + \tfrac{1}{3}\tfrac{f(x) + f(y)}{2}\right) \end{aligned}$$
(2.45)

for \(x,y \in \mathbb R\), if and only if

$$\begin{aligned} f(x) = ax^3 + bx^2 + cx + d \end{aligned}$$

and

$$\begin{aligned} F(x) =\tfrac{1}{4}ax^4 + \tfrac{1}{3}bx^3 + \tfrac{1}{2}cx^2 + dx + e \end{aligned}$$

for all \(x \in \mathbb R\) and \(a,b,c,d,e \in \mathbb R.\)

Proof

Suppose that the pair \((F, f)\) satisfies Eq. (2.45). Putting \(y = x+y\) in the equation and applying Lemma 1.1, we get that \(f\) is a polynomial function of degree at most 3. Since (2.45) is a special case of (1.1), we know also that \(F\) is a polynomial function. Now we rewrite Eq. (2.45) in the form

$$\begin{aligned} F(y) - F(x)= \tfrac{2}{3}yf\left( \tfrac{x +y}{2}\right) + \tfrac{1}{6}yf(x)+ \tfrac{1}{6}yf(y)- \tfrac{2}{3}xf\left( \tfrac{x +y}{2}\right) - \tfrac{1}{6}xf(x)- \tfrac{1}{6}xf(y), \end{aligned}$$

and check conditions of Theorem 2.1. If \(k=0,\) then \(f(x) = d,\) for some constant \(d \in \mathbb R\) and all \(x \in \mathbb R,\) further from (2.9) and (2.10) we have

$$\begin{aligned} \tfrac{R_0}{L_0}\sum _{i=1}^2 \gamma _i a_i= \sum _{j=1}^6 \alpha _j, \end{aligned}$$

and

$$\begin{aligned} \tfrac{R_0}{L_0}\sum _{i=1}^2 \gamma _i b_i= \sum _{j=1}^6 \beta _j. \end{aligned}$$

Hence \(\tfrac{R_0}{L_0} =1,\) and thus \(F(x) = dx\) for some constant \(d \in \mathbb R\) and all \(x \in \mathbb R.\) If \(k=1,\) then we get

$$\begin{aligned}{} & {} \tfrac{R_1}{L_1}\sum _{i=1}^2 \gamma _i a_i^2= \sum _{j=1}^6 \alpha _jc_j,\\{} & {} \tfrac{R_1}{L_1}\sum _{i=1}^2 \gamma _i b_i^2= \sum _{j=1}^6 \beta _j d_j, \end{aligned}$$

and

$$\begin{aligned} \tfrac{R_1}{L_1}\sum _{i=1}^2 2\gamma _i a_i b_i = 0 = \sum _{j=1}^6 \beta _j c_j + \sum _{j=1}^6 \alpha _jd_j \end{aligned}$$

thus, \(\tfrac{R_1}{L_1} = \tfrac{1}{2}\) and also,

$$\begin{aligned} \sum _{j=1}^6 \beta _j c_j = \tfrac{1}{2} \ne - \tfrac{1}{2} = \sum _{j=1}^6 \alpha _jd_j \end{aligned}$$

thus by Theorem 2.1 we have that the monomial functions \(F, f\) are continuous, therefore \(f(x) = cx\) and \(F(x) = \tfrac{1}{2}cx^2\) for some constant \(c \in \mathbb R\) and all \(x \in \mathbb R.\) Now, for \(k=2\) we have

$$\begin{aligned}{} & {} \tfrac{R_2}{L_2}\sum _{i=1}^2 \gamma _i a_i^3= \sum _{j=1}^6 \alpha _jc_j^2,\\{} & {} \tfrac{R_2}{L_2}\sum _{i=1}^2 \gamma _i b_i^3= \sum _{j=1}^6 \beta _j d_j^2, \end{aligned}$$

and

$$\begin{aligned} \tfrac{R_2}{L_2}\sum _{i=1}^2 \genfrac(){0.0pt}1{3}{p} \gamma _i a_i^p b_i^{3-p} = 0 = \sum _{j=1}^6 \genfrac(){0.0pt}1{2}{p}\beta _j c_j^p d_j^{2-p} + \sum _{j=1}^6 \genfrac(){0.0pt}1{2}{p-1}\alpha _j c_j^{p-1}d_j^{3-p}, \end{aligned}$$

for each \(p \in \{1,2\}.\) Hence, \(\tfrac{R_2}{L_2} = \tfrac{1}{3}\) and also,

$$\begin{aligned} \sum _{j=1}^6 \beta _j c_j^p d_j^{2-p} = \tfrac{1}{3} \ne - \tfrac{1}{3} = \sum _{j=1}^6 \alpha _j c_j^{p-1}d_j^{3-p}, \end{aligned}$$

for each \(p \in \{1,2\}.\) By Theorem 2.1 we have that the monomial functions \(F, f\) are continuous, therefore \(f(x) = bx^2\) and \(F(x) = \tfrac{1}{3}bx^3\) for some constant \(b \in \mathbb R\) and all \(x \in \mathbb R.\) Finally, if \(k=3,\) then we obtain

$$\begin{aligned}{} & {} \tfrac{R_3}{L_3}\sum _{i=1}^2 \gamma _i a_i^4= \sum _{j=1}^6 \alpha _jc_j^3,\\{} & {} \tfrac{R_3}{L_3}\sum _{i=1}^2 \gamma _i b_i^4= \sum _{j=1}^6 \beta _j d_j^3, \end{aligned}$$

and

$$\begin{aligned} \tfrac{R_3}{L_3}\sum _{i=1}^2 \genfrac(){0.0pt}1{4}{p} \gamma _i a_i^p b_i^{4-p} = 0 = \sum _{j=1}^6 \genfrac(){0.0pt}1{3}{p}\beta _j c_j^p d_j^{3-p} + \sum _{j=1}^6 \genfrac(){0.0pt}1{3}{p-1} \alpha _j c_j^{p-1}d_j^{4-p}, \end{aligned}$$

for each \(p \in \{1,2,3\}.\) Hence, \(\tfrac{R_3}{L_3} = \tfrac{1}{4}\) and also,

$$\begin{aligned} \sum _{j=1}^6 \beta _j c_j^p d_j^{3-p} = \tfrac{1}{4} \ne - \tfrac{1}{4} = \sum _{j=1}^6 \alpha _j c_j^{p-1}d_j^{4-p}, \end{aligned}$$

for each \(p \in \{1,2,3\}.\) Again by Theorem 2.1 we infer that the monomial functions \(F, f\) are continuous, therefore \(f(x) = ax^3\) and \(F(x) = \tfrac{1}{4}ax^4\) for some constant \(a \in \mathbb R\) and all \(x \in \mathbb R.\) Now by Proposition 2.2, we see that if \(k=0\) and \(L_0 =\sum _{i=1}^2 \gamma _i (a_i+b_i) = 0,\) then \(f=0\) and \(F = e,\) where \(e\in \mathbb R,\) is also a solution to (2.45), since \(\displaystyle \sum \nolimits _{i=1}^2 \gamma _i = 0.\) Thus the general solution of Eq. (2.45) is given by \(f(x) = ax^3 + bx^2 + cx + d\) and \(F(x) =\tfrac{1}{4}ax^4 + \tfrac{1}{3}bx^3 + \tfrac{1}{2}cx^2 + dx + e,\) where \(x \in \mathbb R\) and \(a,b,c,d,e \in \mathbb R.\) To finish the proof it suffices to check that these functions indeed satisfy Eq. (2.45). \(\square \)
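A short exact check (illustrative, not part of the proof) confirms both that (2.45) holds for cubic monomials and that it fails already at degree 4, which matches the argument stopping at \(k = 3\):

```python
# Residual of Eq. (2.45): F(y) - F(x) - (y-x)[(2/3) f((x+y)/2) + (1/3)(f(x)+f(y))/2],
# evaluated for monomial pairs (f, F) with F' = f.
from fractions import Fraction as Fr

def residual(f, F, x, y):
    return F(y) - F(x) - (y - x)*(Fr(2, 3)*f((x + y)/2)
                                  + Fr(1, 3)*(f(x) + f(y))/2)

# Exact for the cubic monomial t^3 with antiderivative t^4/4 ...
assert residual(lambda t: t**3, lambda t: t**4/4, Fr(0), Fr(1)) == 0
# ... but not for t^4 with antiderivative t^5/5: the residual on [0, 1]
# is 1/5 - 5/24 = -1/120.
assert residual(lambda t: t**4, lambda t: t**5/5, Fr(0), Fr(1)) == Fr(-1, 120)
```

This exactness-degree behavior is the standard one for Simpson-type rules and is consistent with the solution set of (2.45) consisting of polynomials of degree at most 3.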

Theorem 2.7

(cf. Theorem 4 in [10]). The functions \(F,f:\mathbb R\rightarrow \mathbb R\) satisfy

$$\begin{aligned} F(y) - F(x) = (y-x)\left( \tfrac{1}{4}f(x) + \tfrac{3}{4}f\left( \tfrac{1}{3}x + \tfrac{2}{3}y\right) \right) \end{aligned}$$
(2.46)

for \(x,y \in \mathbb R\), if and only if

$$\begin{aligned} f(x) = ax^2 + bx + c \end{aligned}$$

and

$$\begin{aligned} F(x) =\tfrac{1}{3}ax^3 + \tfrac{1}{2}bx^2 + cx + d \end{aligned}$$

for all \(x \in \mathbb R\) and \(a,b,c,d \in \mathbb R.\)

Proof

Suppose that the pair \((F, f)\) satisfies Eq. (2.46). Substituting \(y = x+y\) in the equation and applying Lemma 1.1, we get that \(f\) is a polynomial function of degree at most 2. Since (2.46) is a special case of (1.1), we have that \(F\) is a polynomial function. Now we rewrite Eq. (2.46) in the form

$$\begin{aligned} F(y) - F(x)= \tfrac{1}{4}yf(x) + \tfrac{3}{4}yf\left( \tfrac{1}{3}x + \tfrac{2}{3}y\right) - \tfrac{1}{4}xf(x) - \tfrac{3}{4}xf\left( \tfrac{1}{3}x + \tfrac{2}{3}y\right) , \end{aligned}$$

and check conditions of Theorem 2.1. If \(k=0,\) then \(f(x) = c,\) for some constant \(c \in \mathbb R\) and all \(x \in \mathbb R,\) further from (2.9) and (2.10) we have

$$\begin{aligned} \tfrac{R_0}{L_0}\sum _{i=1}^2 \gamma _i a_i= \sum _{j=1}^4 \alpha _j, \end{aligned}$$

and

$$\begin{aligned} \tfrac{R_0}{L_0}\sum _{i=1}^2 \gamma _i b_i= \sum _{j=1}^4 \beta _j. \end{aligned}$$

Hence \(\tfrac{R_0}{L_0} =1,\) and thus \(F(x) = cx\) for some constant \(c \in \mathbb R\) and all \(x \in \mathbb R.\) If \(k=1,\) then we get

$$\begin{aligned}{} & {} \tfrac{R_1}{L_1}\sum _{i=1}^2 \gamma _i a_i^2= \sum _{j=1}^4 \alpha _jc_j,\\{} & {} \tfrac{R_1}{L_1}\sum _{i=1}^2 \gamma _i b_i^2= \sum _{j=1}^4 \beta _j d_j, \end{aligned}$$

and

$$\begin{aligned} \tfrac{R_1}{L_1}\sum _{i=1}^2 2\gamma _i a_i b_i = 0 = \sum _{j=1}^4 \beta _j c_j + \sum _{j=1}^4 \alpha _jd_j, \end{aligned}$$

thus, \(\tfrac{R_1}{L_1} = \tfrac{1}{2}\) and also,

$$\begin{aligned} \sum _{j=1}^4 \beta _j c_j = \tfrac{1}{2} \ne - \tfrac{1}{2} = \sum _{j=1}^4 \alpha _jd_j, \end{aligned}$$

therefore by Theorem 2.1 we have that the monomial functions \(F, f\) are continuous, hence \(f(x) = bx\) and \(F(x) = \tfrac{1}{2}bx^2\) for some constant \(b \in \mathbb R\) and all \(x \in \mathbb R.\) Finally, if \(k=2,\) then we have

$$\begin{aligned}{} & {} \tfrac{R_2}{L_2}\sum _{i=1}^2 \gamma _i a_i^3= \sum _{j=1}^4 \alpha _jc_j^2,\\{} & {} \tfrac{R_2}{L_2}\sum _{i=1}^2 \gamma _i b_i^3= \sum _{j=1}^4 \beta _j d_j^2, \end{aligned}$$

and

$$\begin{aligned} \tfrac{R_2}{L_2}\sum _{i=1}^2 \genfrac(){0.0pt}1{3}{p} \gamma _i a_i^p b_i^{3-p} = 0 = \sum _{j=1}^4 \genfrac(){0.0pt}1{2}{p}\beta _j c_j^p d_j^{2-p} + \sum _{j=1}^4 \genfrac(){0.0pt}1{2}{p-1}\alpha _j c_j^{p-1}d_j^{3-p}, \end{aligned}$$

for each \(p \in \{1,2\}.\) Hence, \(\tfrac{R_2}{L_2} = \tfrac{1}{3}\) and also,

$$\begin{aligned} \sum _{j=1}^4 \beta _j c_j^p d_j^{2-p} = \tfrac{1}{3} \ne - \tfrac{1}{3} = \sum _{j=1}^4 \alpha _j c_j^{p-1}d_j^{3-p}, \end{aligned}$$

for each \(p \in \{1,2\}.\) By Theorem 2.1 we have that the monomial functions \(F, f\) are continuous, therefore \(f(x) = ax^2\) and \(F(x) = \tfrac{1}{3}ax^3\) for some constant \(a \in \mathbb R\) and all \(x \in \mathbb R.\) Now by Proposition 2.2, we see that if \(k=0\) and \(L_0 =\sum _{i=1}^2 \gamma _i (a_i+b_i) = 0,\) then \(f=0\) and \(F = d,\) where \(d\in \mathbb R,\) is also a solution to (2.46), because \(\sum _{i=1}^2 \gamma _i = 0.\) Therefore, the general solution of Eq. (2.46) is given by \(f(x) = ax^2 + bx + c\) and \(F(x) = \tfrac{1}{3}ax^3 + \tfrac{1}{2}bx^2 + cx + d,\) where \(x \in \mathbb R\) and \(a,b,c,d\in \mathbb R.\) To finish the proof it suffices to check that these functions indeed satisfy Eq. (2.46). \(\square \)
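Similarly, the two-node rule in (2.46) is exact for quadratics but not for cubics, matching the argument stopping at \(k = 2\). A quick exact check (illustrative only):

```python
# Residual of Eq. (2.46): F(y) - F(x) - (y-x)[f(x)/4 + (3/4) f(x/3 + 2y/3)],
# evaluated for monomial pairs (f, F) with F' = f.
from fractions import Fraction as Fr

def residual(f, F, x, y):
    return F(y) - F(x) - (y - x)*(f(x)/4 + Fr(3, 4)*f(x/3 + 2*y/3))

# Exact for the quadratic monomial t^2 with antiderivative t^3/3 ...
assert residual(lambda t: t**2, lambda t: t**3/3, Fr(0), Fr(1)) == 0
# ... but not for t^3 with antiderivative t^4/4: on [0, 1] the residual
# is 1/4 - 2/9 = 1/36.
assert residual(lambda t: t**3, lambda t: t**4/4, Fr(0), Fr(1)) == Fr(1, 36)
```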

Now we give some examples that include known results which may be solved by the use of our method.

Example 2.1

Assume that the functions \(F,f:\mathbb R\rightarrow \mathbb R\) satisfy the functional equation

$$\begin{aligned} F(x)= yf(x) \end{aligned}$$
(2.47)

for all \(x,y \in \mathbb R.\)

Now we rearrange (2.47) in the form

$$\begin{aligned} yf(x) - F(x) =0, \end{aligned}$$
(2.48)

for all \(x,y \in \mathbb R.\) Applying Lemma 1.1, we see that \(f\) is the zero function. Clearly, (2.47) is a special case of (1.1), thus we infer that \(F\) is also a polynomial function. Now, checking the conditions of Theorem 2.1 and taking into account Remark 2.3, we see that \(f = F =0\) is the only solution of (2.47).

Example 2.2

(Aczél result cf. [1]) The functions \(F,f:\mathbb R\rightarrow \mathbb R\) satisfy

$$\begin{aligned} \tfrac{F(y) - F(x)}{y - x} = f(x + y) \end{aligned}$$
(2.49)

for all \(x,y \in \mathbb R,\) if and only if

$$\begin{aligned} f(x) = ax + b \end{aligned}$$

and

$$\begin{aligned} F(x)= ax^2 + bx + c \end{aligned}$$

for all \(x \in \mathbb R\) and \(a,b,c \in \mathbb R.\)

Proof

Now we rewrite (2.49) in the form of Eq. (1.1)

$$\begin{aligned} F(y) - F(x) = yf(x + y) - xf(x + y). \end{aligned}$$
(2.50)

Suppose that the pair \((F, f)\) satisfies (2.50). Rearranging (2.50) in the form

$$\begin{aligned} F(x) = F(y) - yf(x + y) + xf(x + y), \end{aligned}$$
(2.51)

and applying Lemma 1.1, we get \(I_{0,0} = \{(0, id)\},\) \(I_{0,1} = I_{1,0} =\{(id, id)\},\) \(\psi _{0,0,(0,id)}= F,\) \(\psi _{0,1,(id,id)}= -f,\) \(\psi _{1,0,(id,id)}= f,\) \(\varphi _0 = F.\) We also have \(K_0 = I_{0,0},\) \(K_1 = I_{0,1} \cup I_{1,0},\) and \(K_0 \cup K_1 = \{(0, id),(id, id)\}.\) Therefore, \(\varphi _0 = F\) is a polynomial function of degree at most \(m=2\) i.e.

$$\begin{aligned} m = \hbox {card}(K_0 \cup K_1) + \hbox {card}(K_1) -1 = 2+1-1 =2. \end{aligned}$$

Since (2.50) is a special form of (1.1), we know also that \(f\) is a polynomial function, and by Theorem 2.1 we infer that \(f\) is of degree at most 1. Now we check the conditions of Theorem 2.1. If \(k=0,\) then \(f(x) = b,\) for some constant \(b \in \mathbb R\) and all \(x \in \mathbb R;\) further, from (2.9) and (2.10) we have

$$\begin{aligned} \tfrac{R_0}{L_0}\sum _{i=1}^2 \gamma _i a_i= \sum _{j=1}^2 \alpha _j, \end{aligned}$$

and

$$\begin{aligned} \tfrac{R_0}{L_0}\sum _{i=1}^2 \gamma _i b_i= \sum _{j=1}^2 \beta _j. \end{aligned}$$

Hence, \(\tfrac{R_0}{L_0} =1,\) and consequently \(F(x) = bx\) for some constant \(b \in \mathbb R\) and all \(x \in \mathbb R.\) Now, for \(k=1\) we get

$$\begin{aligned}{} & {} \tfrac{R_1}{L_1}\sum _{i=1}^2 \gamma _i a_i^2= \sum _{j=1}^2 \alpha _jc_j,\\{} & {} \tfrac{R_1}{L_1}\sum _{i=1}^2 \gamma _i b_i^2= \sum _{j=1}^2 \beta _j d_j, \end{aligned}$$

and

$$\begin{aligned} \tfrac{R_1}{L_1}\sum _{i=1}^2 2\gamma _i a_i b_i = 0 = \sum _{j=1}^2 \beta _j c_j + \sum _{j=1}^2 \alpha _jd_j, \end{aligned}$$

thus, \(\tfrac{R_1}{L_1} = 1\) and also,

$$\begin{aligned} \sum _{j=1}^2 \beta _j c_j = 1 \ne - 1 = \sum _{j=1}^2 \alpha _jd_j, \end{aligned}$$

hence by Theorem 2.1 we infer that the monomial functions \(F, f\) are continuous, therefore \(f(x) = ax\) and \(F(x) = ax^2\) for some constant \(a \in \mathbb R\) and all \(x \in \mathbb R.\) Now by Proposition 2.2, we see that if \(k=0\) and \(L_0 =\sum _{i=1}^2 \gamma _i (a_i+b_i) = 0,\) then \(f=0\) and \(F = c,\) where \(c\in \mathbb R,\) is also a solution to (2.49), because \(\sum _{i=1}^2 \gamma _i = 0.\) Thus the general solution of Eq. (2.49) is given by \(f(x) = ax + b\) and \(F(x) = ax^2 +bx + c,\) where \(x \in \mathbb R\) and \(a,b,c \in \mathbb R.\) To finish the proof it suffices to check that these functions indeed satisfy Eq. (2.49). \(\square \)
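The final verification step for (2.49) can be sketched exactly as follows (illustrative only; the sample coefficients are arbitrary):

```python
# Exact check of Eq. (2.49): (F(y) - F(x))/(y - x) = f(x + y)
# for f(x) = ax + b and F(x) = ax^2 + bx + c, written in the
# multiplied-out form F(y) - F(x) = (y - x) f(x + y).
from fractions import Fraction as Fr

a, b, c = Fr(3), Fr(-2), Fr(7)  # arbitrary sample coefficients
f = lambda t: a*t + b
F = lambda t: a*t**2 + b*t + c

# F(y) - F(x) = a(y^2 - x^2) + b(y - x) = (y - x)(a(x + y) + b) = (y - x) f(x + y).
assert all(F(Fr(q)) - F(Fr(p)) == (Fr(q) - Fr(p))*f(Fr(p) + Fr(q))
           for p in range(-3, 4) for q in range(-3, 4))
```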

Remark 2.6

We note here that the functional equation considered by Aczél [1] is a special case of Eq. (1.1). In particular, choosing \(n= 2, m=2, \gamma _1 =\beta _1 = 1, \gamma _2 = \alpha _2= -1, \alpha _1 = \beta _2 =0, a_1 = b_2 = 0\) and \(b_1 = a_2 = c_1 = d_1 = c_2 = d_2 =1,\) we get

$$\begin{aligned} F(y) - F(x) = (y-x)f(x + y). \end{aligned}$$

Example 2.3

(cf. Theorem 5 in [2]) The functions \(F,f:\mathbb R\rightarrow \mathbb R\) satisfy

$$\begin{aligned} \tfrac{F(x) - F(y)}{x -y} = f\left( \tfrac{x +y}{2}\right) \end{aligned}$$
(2.52)

for all \(x,y \in \mathbb R,\) if and only if

$$\begin{aligned} f(x) = ax + b \end{aligned}$$

and

$$\begin{aligned} F(x)= \tfrac{1}{2}ax^2 + bx +c \end{aligned}$$

for all \(x \in \mathbb R\) and \(a,b,c \in \mathbb R.\)

Proof

Now we rewrite (2.52) in the form of Eq. (1.1)

$$\begin{aligned} F(x) - F(y) = xf\left( \tfrac{x +y}{2}\right) - yf\left( \tfrac{x +y}{2}\right) . \end{aligned}$$
(2.53)

Suppose that the pair \((F, f)\) satisfies (2.53). Rearranging (2.53) in the form

$$\begin{aligned} F(x) = F(y) + xf\left( \tfrac{x +y}{2}\right) - yf\left( \tfrac{x +y}{2}\right) , \end{aligned}$$
(2.54)

and applying Lemma 1.1 we get \(I_{0,0} = \{(0, id)\},\) \(I_{1,0} = I_{0,1} =\{(\tfrac{1}{2}id, \tfrac{1}{2}id)\},\) \(\psi _{0,0,(0,id)}= F,\) \(\psi _{1,0,(\frac{1}{2}id, \frac{1}{2}id)}= f,\) \(\psi _{0,1,(\frac{1}{2}id, \frac{1}{2}id)}= -f,\) \(\varphi _0 = F.\) We also have \(K_0 = I_{0,0},\) \(K_1 = I_{1,0} \cup I_{0,1},\) and \(K_0 \cup K_1 = \{(0, id),(\tfrac{1}{2}id, \tfrac{1}{2}id)\}.\) Therefore, \(\varphi _0 = F\) is a polynomial function of degree at most \(m=2\) i.e.

$$\begin{aligned} m = \hbox {card}(K_0 \cup K_1) + \hbox {card}(K_1) -1 = 2+1-1 =2. \end{aligned}$$

Clearly, (2.53) is a special form of (1.1), thus we know also that \(f\) is a polynomial function, and by Theorem 2.1 we infer that \(f\) is of degree at most 1. Now we check the conditions of Theorem 2.1. If \(k=0,\) then \(f(x) = b,\) for some constant \(b \in \mathbb R\) and all \(x \in \mathbb R;\) further, from (2.9) and (2.10) we have

$$\begin{aligned} \tfrac{R_0}{L_0}\sum _{i=1}^2 \gamma _i a_i= \sum _{j=1}^2 \alpha _j, \end{aligned}$$

and

$$\begin{aligned} \tfrac{R_0}{L_0}\sum _{i=1}^2 \gamma _i b_i= \sum _{j=1}^2 \beta _j. \end{aligned}$$

Hence, \(\tfrac{R_0}{L_0} =1,\) and consequently \(F(x) = bx\) for some constant \(b \in \mathbb R\) and all \(x \in \mathbb R.\) Finally, for \(k=1\) we get

$$\begin{aligned}{} & {} \tfrac{R_1}{L_1}\sum _{i=1}^2 \gamma _i a_i^2= \sum _{j=1}^2 \alpha _jc_j,\\{} & {} \tfrac{R_1}{L_1}\sum _{i=1}^2 \gamma _i b_i^2= \sum _{j=1}^2 \beta _j d_j, \end{aligned}$$

and

$$\begin{aligned} \tfrac{R_1}{L_1}\sum _{i=1}^2 2\gamma _i a_i b_i = 0 = \sum _{j=1}^2 \beta _j c_j + \sum _{j=1}^2 \alpha _jd_j, \end{aligned}$$

thus, \(\tfrac{R_1}{L_1} = \tfrac{1}{2}\) and also,

$$\begin{aligned} \sum _{j=1}^2 \beta _j c_j = - \tfrac{1}{2} \ne \tfrac{1}{2} = \sum _{j=1}^2 \alpha _jd_j \end{aligned}$$

then by Theorem 2.1 we infer that the monomial functions \(F, f\) are continuous, therefore \(f(x) = ax\) and \(F(x) = \tfrac{1}{2}ax^2\) for some constant \(a \in \mathbb R\) and all \(x \in \mathbb R.\) Now by Proposition 2.2, we see that if \(k=0\) and \(L_0 =\sum _{i=1}^2 \gamma _i (a_i+b_i) = 0,\) then \(f=0\) and \(F = c,\) where \(c\in \mathbb R,\) is also a solution to (2.52), because \(\sum _{i=1}^2 \gamma _i = 0.\) Thus the general solution of Eq. (2.52) is given by \(f(x) = ax + b\) and \(F(x) = \tfrac{1}{2}ax^2 +bx +c,\) where \(x \in \mathbb R\) and \(a,b,c \in \mathbb R.\) To finish the proof it suffices to check that these functions indeed satisfy Eq. (2.52). \(\square \)
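Likewise, the midpoint-type equation (2.52) admits a one-line exact verification of its stated solutions (illustrative sketch with arbitrary sample coefficients):

```python
# Exact check of Eq. (2.52): (F(x) - F(y))/(x - y) = f((x + y)/2)
# for f(x) = ax + b and F(x) = (a/2)x^2 + bx + c, in the
# multiplied-out form F(x) - F(y) = (x - y) f((x + y)/2).
from fractions import Fraction as Fr

a, b, c = Fr(5), Fr(1), Fr(-4)  # arbitrary sample coefficients
f = lambda t: a*t + b
F = lambda t: a*t**2/2 + b*t + c

# F(x) - F(y) = (a/2)(x^2 - y^2) + b(x - y) = (x - y)(a(x + y)/2 + b).
assert all(F(x) - F(y) == (x - y)*f((x + y)/2)
           for x in map(Fr, range(-3, 4)) for y in map(Fr, range(-3, 4)))
```

The midpoint rule is exact for affine functions, which is precisely why the solution set here stops at degree 1 for \(f\).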

Remark 2.7

We note that, solving (2.52) by our method, we obtain the same result as Aczél and Kuczma [2] (cf. Theorem 5 in [2]).

Example 2.4

The functions \(F,f:\mathbb R\rightarrow \mathbb R\) satisfy

$$\begin{aligned} 2F(y) - 2F(x) = (y -x)\left( f\left( \tfrac{x+y}{2}\right) + \tfrac{f(x) + f(y)}{2}\right) \end{aligned}$$
(2.55)

for \(x,y \in \mathbb R\), if and only if

$$\begin{aligned} f(x) = ax + b \end{aligned}$$

and

$$\begin{aligned} F(x) = \tfrac{1}{2}ax^2 + bx + c \end{aligned}$$

for all \(x \in \mathbb R,\) where \(a,b,c \in \mathbb R\) are arbitrary constants.

Proof

Suppose that the pair \((F,f)\) satisfies Eq. (2.55). Replacing \(y\) with \(x+y\) in the equation and applying Lemma 1.1, we get that f is a polynomial function of degree at most 3. Since (2.55) is a special case of (1.1), F is a polynomial function as well. Now we rewrite Eq. (2.55) in the form

$$\begin{aligned} 2F(y) - 2F(x)= yf\left( \tfrac{x+y}{2}\right) +\tfrac{1}{2}yf(x)+\tfrac{1}{2}yf(y) - xf\left( \tfrac{x+y}{2}\right) -\tfrac{1}{2}xf(x)- \tfrac{1}{2}xf(y), \end{aligned}$$

and check the conditions of Theorem 2.1. If \(k=0,\) then \(f(x) = b\) for some constant \(b \in \mathbb R\) and all \(x \in \mathbb R;\) further, from (2.9) and (2.10) we have

$$\begin{aligned} \tfrac{R_0}{L_0}\sum _{i=1}^2 \gamma _i a_i= \sum _{j=1}^6 \alpha _j, \end{aligned}$$

and

$$\begin{aligned} \tfrac{R_0}{L_0}\sum _{i=1}^2 \gamma _i b_i= \sum _{j=1}^6 \beta _j. \end{aligned}$$

Hence, \(\tfrac{R_0}{L_0} =1,\) and thus \(F(x) = bx\) for some constant \(b \in \mathbb R\) and all \(x \in \mathbb R.\) If \(k=1,\) then we get

$$\begin{aligned}{} & {} \tfrac{R_1}{L_1}\sum _{i=1}^2 \gamma _i a_i^2= \sum _{j=1}^6 \alpha _jc_j,\\{} & {} \tfrac{R_1}{L_1}\sum _{i=1}^2 \gamma _i b_i^2= \sum _{j=1}^6 \beta _j d_j, \end{aligned}$$

and

$$\begin{aligned} \tfrac{R_1}{L_1}\sum _{i=1}^2 2\gamma _i a_i b_i = 0 = \sum _{j=1}^6 \beta _j c_j + \sum _{j=1}^6 \alpha _jd_j, \end{aligned}$$

thus, \(\tfrac{R_1}{L_1} = \tfrac{1}{2}\) and also,

$$\begin{aligned} \sum _{j=1}^6 \beta _j c_j = 1 \ne - 1 = \sum _{j=1}^6 \alpha _jd_j, \end{aligned}$$

hence by Theorem 2.1 we have that the monomial functions \(F,f\) are continuous; therefore \(f(x) = ax\) and \(F(x) = \tfrac{1}{2}ax^2\) for some constant \(a \in \mathbb R\) and all \(x \in \mathbb R.\) Now let \(k=2;\) then we get

$$\begin{aligned}{} & {} \tfrac{R_2}{L_2}\sum _{i=1}^2 \gamma _i a_i^3= \sum _{j=1}^6 \alpha _jc_j^2,\\{} & {} \tfrac{R_2}{L_2}\sum _{i=1}^2 \gamma _i b_i^3= \sum _{j=1}^6 \beta _j d_j^2, \end{aligned}$$

and

$$\begin{aligned} \tfrac{R_2}{L_2}\sum _{i=1}^2 \genfrac(){0.0pt}1{3}{p} \gamma _i a_i^p b_i^{3-p} = 0 \ne \sum _{j=1}^6 \genfrac(){0.0pt}1{2}{p}\beta _j c_j^p d_j^{2-p} + \sum _{j=1}^6 \genfrac(){0.0pt}1{2}{p-1} \alpha _j c_j^{p-1}d_j^{3-p}, \end{aligned}$$

for some \(p \in \{1,2\}.\) In particular, taking \(p=1\) we see that

$$\begin{aligned} \tfrac{R_2}{L_2}\sum _{i=1}^2 3\gamma _i a_i b_i^2 = 0 \ne -\tfrac{1}{4} =\sum _{j=1}^6 2 \beta _j c_j d_j + \sum _{j=1}^6 \alpha _j d_j^2. \end{aligned}$$

Hence, this leads to \(f = F = 0\) in the case \(k=2.\) Finally, if \(k=3,\) then we obtain

$$\begin{aligned}{} & {} \tfrac{R_3}{L_3}\sum _{i=1}^2 \gamma _i a_i^4= \sum _{j=1}^6 \alpha _jc_j^3,\\{} & {} \tfrac{R_3}{L_3}\sum _{i=1}^2 \gamma _i b_i^4= \sum _{j=1}^6 \beta _j d_j^3, \end{aligned}$$

and

$$\begin{aligned} \tfrac{R_3}{L_3}\sum _{i=1}^2 \genfrac(){0.0pt}1{4}{p} \gamma _i a_i^p b_i^{4-p}= 0 \ne \sum _{j=1}^6 \genfrac(){0.0pt}1{3}{p}\beta _j c_j^p d_j^{3-p} + \sum _{j=1}^6 \genfrac(){0.0pt}1{3}{p-1} \alpha _j c_j^{p-1}d_j^{4-p}, \end{aligned}$$

for some \(p \in \{1,2,3\}.\) In particular, taking \(p=1\) we see that

$$\begin{aligned} \tfrac{R_3}{L_3}\sum _{i=1}^2 4\gamma _i a_i b_i^3 = 0 \ne -\tfrac{1}{4} =\sum _{j=1}^6 3 \beta _j c_j d_j^2 + \sum _{j=1}^6 \alpha _j d_j^3. \end{aligned}$$
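The two \(-\tfrac{1}{4}\) values can be recomputed mechanically. In the sketch below (an illustrative check added by the editor, not part of the proof), the six coefficient tuples \((\alpha_j, \beta_j, c_j, d_j)\) are read off from the expanded form of (2.55) displayed earlier in the proof.

```python
# Recompute the right-hand sides of the k=2 and k=3 conditions at p=1,
# using the six terms of the expanded Eq. (2.55):
#   y f((x+y)/2) + (1/2) y f(x) + (1/2) y f(y)
#   - x f((x+y)/2) - (1/2) x f(x) - (1/2) x f(y).
from fractions import Fraction as Fr

# (alpha_j, beta_j, c_j, d_j) for j = 1, ..., 6
coeffs = [
    (Fr(0), Fr(1), Fr(1, 2), Fr(1, 2)),   # y f((x+y)/2)
    (Fr(0), Fr(1, 2), Fr(1), Fr(0)),      # (1/2) y f(x)
    (Fr(0), Fr(1, 2), Fr(0), Fr(1)),      # (1/2) y f(y)
    (Fr(-1), Fr(0), Fr(1, 2), Fr(1, 2)),  # -x f((x+y)/2)
    (Fr(-1, 2), Fr(0), Fr(1), Fr(0)),     # -(1/2) x f(x)
    (Fr(-1, 2), Fr(0), Fr(0), Fr(1)),     # -(1/2) x f(y)
]

# k = 2, p = 1:  sum 2*beta*c*d + sum alpha*d**2
k2 = sum(2*b*c*d + a*d**2 for a, b, c, d in coeffs)
# k = 3, p = 1:  sum 3*beta*c*d**2 + sum alpha*d**3
k3 = sum(3*b*c*d**2 + a*d**3 for a, b, c, d in coeffs)
print(k2, k3)  # -1/4 -1/4
```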

Again by Theorem 2.1, this leads to \(f = F = 0.\) Now by Proposition 2.2, if \(k=0\) and \(L_0 =\sum _{i=1}^2 \gamma _i (a_i+b_i) = 0,\) then the pair \(f=0,\) \(F = c,\) where \(c\in \mathbb R,\) is also a solution to (2.55), because \(\sum _{i=1}^2 \gamma _i = 0.\) Thus the general solution of Eq. (2.55) is given by \(f(x) = ax + b\) and \(F(x) = \tfrac{1}{2}ax^2 +bx +c\) for all \(x \in \mathbb R,\) where \(a,b,c \in \mathbb R.\) To finish the proof it suffices to check that these functions satisfy Eq. (2.55). \(\square \)
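The closing verification, which the proof leaves to the reader, can be made explicit with a short sympy sketch (an editorial illustration): it confirms that \(f(x) = ax + b\) and \(F(x) = \tfrac{1}{2}ax^2 + bx + c\) satisfy Eq. (2.55).

```python
# Verify that f(x) = a*x + b, F(x) = a*x**2/2 + b*x + c satisfy
# 2F(y) - 2F(x) = (y - x)(f((x+y)/2) + (f(x) + f(y))/2), i.e. Eq. (2.55).
import sympy as sp

x, y, a, b, c = sp.symbols('x y a b c')
f = lambda t: a*t + b
F = lambda t: sp.Rational(1, 2)*a*t**2 + b*t + c

lhs = 2*F(y) - 2*F(x)
rhs = (y - x)*(f((x + y)/2) + (f(x) + f(y))/2)
print(sp.simplify(lhs - rhs))  # 0
```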

Remark 2.8

We note here that (2.55) is the functional equation arising from the geometric problems considered by Alsina et al. [3].

Remark 2.9

Let us observe that when \(n = 3, \gamma _1 = a_1 = b_1 = a_2 = b_3 = 1, a_3 = b_2 = 0,\) \(\gamma _2 = \gamma _3 = -1,\) \(m=2, \, \alpha _1=\beta _2=d_1=c_2=1, \,\) and \( \alpha _2=\beta _1=c_1=d_2=0,\) Eqs. (1.1), (1.2) and (1.3) have the same polynomial solutions (see [14, 15]) as Eq. (1.9) considered by Fechner and Gselmann [6]. In addition, the polynomial solutions of Eqs. (1.2) or (1.3) are also polynomial solutions of Eq. (1.1) but the converse is not necessarily true.

Remark 2.10

We conclude that the main results of [14] (see Theorem 3.3 in [14]) and [15] (see Theorem 2.2 in [15]) are special forms of our results. Moreover, we mention here that the pairs of functions \((F,f)\) mapping \(\mathbb {R}\) to \(\mathbb {R}\) that satisfy Eqs. (1.1), (1.2) and (1.3), respectively, were obtained under the assumption \(x=y \in \mathbb R.\) From the paper of Okeke and Sablik [15], we see that it is possible to use a computer program to solve functional equations, in particular Eq. (1.3). This leads to the following questions:

  1. (a)

    Which polynomial functions \(F,f\) mapping \(\mathbb {R}\) to \(\mathbb {R}\) satisfy Eqs. (1.1), (1.2), (1.3) and (1.9) when \( x \ne y? \)

  2. (b)

    Is it possible to formulate a robust computer algorithm which determines the polynomial solutions of Eq. (1.1), as well as those asked for in question (a)?
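As a first step toward question (b), the following sympy sketch (an editorial illustration of the generic-coefficient idea, not the program of [15]) determines the polynomial solutions of Eq. (2.55): posit polynomial ansatzes for F and f up to the degree bounds obtained above, substitute into the equation, and solve the linear system obtained by equating all coefficients in x and y to zero. The same scheme applies, in principle, to Eq. (1.1).

```python
# Generic-coefficient sketch: recover the polynomial solutions of (2.55)
# by solving a linear system in the unknown coefficients of f and F.
import sympy as sp

x, y = sp.symbols('x y')
fc = sp.symbols('f0:4')  # coefficients of f, degree bound 3 (from Lemma 1.1)
Fc = sp.symbols('F0:5')  # coefficients of F, degree bound 4

def poly(coeffs, t):
    # generic polynomial: sum_k coeffs[k] * t**k
    return sum(ck * t**k for k, ck in enumerate(coeffs))

f = lambda t: poly(fc, t)
F = lambda t: poly(Fc, t)

# Move Eq. (2.55) to one side; every coefficient of the resulting
# bivariate polynomial in x, y must vanish.
eq = sp.expand(2*F(y) - 2*F(x) - (y - x)*(f((x + y)/2) + (f(x) + f(y))/2))
system = sp.Poly(eq, x, y).coeffs()
sol = sp.solve(system, fc + Fc, dict=True)[0]

# The constraints force f to be linear and F to be the matching quadratic.
print(sp.expand(f(x).subs(sol)))
print(sp.expand(F(x).subs(sol)))
```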