1 Introduction

Lagrangian means are defined in the following way. Let \(\varphi \) be a continuous, strictly monotonic function defined on an interval I. The Lagrangian mean associated with \(\varphi \) is defined by

$$\begin{aligned} L_\varphi (x,y)=\left\{ \begin{array}{ll}\varphi ^{-1}\left( \frac{\int \limits _x^y\varphi (t)dt}{y-x}\right) &{}x\ne y,\\ x&{}x=y\end{array}\right. \end{aligned}$$

(see for example [2, 4]). Inspired by this notion we define means generated by quadrature rules in a similar manner.
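For a quick numerical illustration (ours, not part of the original sources), the following Python sketch approximates the defining integral of \(L_\varphi \) with a composite midpoint rule and, for \(\varphi =\ln ,\) compares the result with the known closed form of this Lagrangian mean, the identric mean \(I(x,y)=e^{-1}(y^y/x^x)^{1/(y-x)}.\)

```python
import math

def lagrangian_mean(phi, phi_inv, x, y, n=10_000):
    """L_phi(x, y): phi^{-1} applied to the integral average of phi over [x, y].

    The integral is approximated by a composite midpoint rule with n panels.
    """
    if x == y:
        return x
    h = (y - x) / n
    avg = sum(phi(x + (i + 0.5) * h) for i in range(n)) * h / (y - x)
    return phi_inv(avg)

x, y = 2.0, 5.0
numeric = lagrangian_mean(math.log, math.exp, x, y)
# closed form of L_ln: the identric mean I(x, y)
identric = math.exp((y * math.log(y) - x * math.log(x)) / (y - x) - 1)
```

The two values agree up to the accuracy of the midpoint rule.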

Consider a quadrature of the form

$$\begin{aligned} Q(f)=\sum _{i=1}^na_if(\alpha _ix+\beta _i y). \end{aligned}$$
(1)

Then there exist a real number \(A_Q,\) a non-negative integer k and \(\xi (x,y)\in (x,y)\) such that

$$\begin{aligned} \frac{1}{y-x}\int \limits _x^yf(t)dt=\sum _{i=1}^na_if(\alpha _ix+\beta _i y)+A_Qf^{(k)}(\xi (x,y))(y-x)^k, \end{aligned}$$
(2)

where \(A_Q\) and k depend on Q exclusively and \(\xi (x,y)\) depends on Q and on f.

Now let \(\varphi \) be a continuous function and let \(\varphi ^{(-k)}\) be any function such that

$$\begin{aligned} \left( \varphi ^{(-k)}\right) ^{(k)}=\varphi . \end{aligned}$$

Then equality (2) may be rewritten in the form

$$\begin{aligned} \frac{1}{y-x}\int \limits _x^y\varphi ^{(-k)}(t)dt=\sum _{i=1}^na_i\varphi ^{(-k)}(\alpha _ix+\beta _i y)+A_Q\varphi (\xi (x,y))(y-x)^k. \end{aligned}$$

This allows us to formulate the following definition.

Definition 1

Let \(I\subset \mathbb R\) be an interval and let \(\varphi :I\rightarrow \mathbb R\) be a continuous and strictly monotone function. The mean generated by the quadrature rule Q is given by

$$\begin{aligned} Q_\varphi (x,y)=\left\{ \begin{array}{ll}\varphi ^{-1}\left( \frac{\frac{1}{y-x}\int _x^y\varphi ^{(-k)}(t)dt-\sum _{i=1}^na_i\varphi ^{(-k)}(\alpha _ix+\beta _i y)}{A_Q(y-x)^k}\right) &{}x\ne y,\\ x&{}x=y.\end{array}\right. \end{aligned}$$
(3)

Clearly, the choice of the function \(\varphi ^{(-k)}\) is not unique, but the definition is nevertheless correct: any two such functions differ by a polynomial of degree at most \(k-1,\) and for such polynomials the quadrature Q and the integral average coincide.
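This independence of the choice of antiderivative can be checked by direct computation. In the sketch below (our own illustration, using exact rational arithmetic via Python's fractions), the midpoint mean \(M_{id}\) is evaluated with two different second antiderivatives of \(\varphi =id,\) differing by the degree-one polynomial \(t+7\); the results coincide.

```python
from fractions import Fraction as F

def peval(p, t):
    """Evaluate the polynomial with coefficient list p (p[i] multiplies t^i)."""
    return sum(c * t**i for i, c in enumerate(p))

def avg_integral(p, x, y):
    """Exact value of (1/(y-x)) * int_x^y p(t) dt."""
    P = [F(0)] + [F(c) / (i + 1) for i, c in enumerate(p)]  # antiderivative of p
    return (peval(P, y) - peval(P, x)) / (y - x)

def midpoint_mean_id(anti, x, y):
    """Formula (4) for phi = id: anti is a chosen second antiderivative of id."""
    return 24 * (avg_integral(anti, x, y) - peval(anti, (x + y) / 2)) / (y - x) ** 2

anti_a = [F(0), F(0), F(0), F(1, 6)]  # t^3/6
anti_b = [F(7), F(1), F(0), F(1, 6)]  # t^3/6 + t + 7: differs by a degree-1 polynomial

x, y = F(1), F(4)
m_a = midpoint_mean_id(anti_a, x, y)
m_b = midpoint_mean_id(anti_b, x, y)
```

Both evaluations return the same exact rational number.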

We give examples of means of this type using the simplest possible quadrature rules.

Example 1

Consider the midpoint quadrature rule \(M(f)=f\left( \frac{x+y}{2}\right) .\) Then we have

$$\begin{aligned} \frac{1}{y-x}\int \limits _x^yf(t)dt=f\left( \frac{x+y}{2}\right) +\frac{1}{24}f''(\xi (x,y))(y-x)^2 \end{aligned}$$

thus, the mean generated by this quadrature (and the function \(\varphi \)) is of the form

$$\begin{aligned} M_\varphi (x,y)=\varphi ^{-1}\left( \frac{24\left( \frac{1}{y-x}\int _x^y\varphi ^{(-2)}(t)dt-\varphi ^{(-2)}\left( \frac{x+y}{2}\right) \right) }{(y-x)^2}\right) . \end{aligned}$$
(4)

Example 2

The trapezium quadrature rule is given by

$$\begin{aligned} T(f)=\frac{1}{2}f(x)+\frac{1}{2}f(y) \end{aligned}$$

and we have

$$\begin{aligned} \frac{1}{y-x}\int \limits _x^yf(t)dt=\frac{1}{2}f(x)+\frac{1}{2}f(y)-\frac{1}{12}f''(\xi (x,y))(y-x)^2 \end{aligned}$$

thus, the mean generated by this quadrature (and function \(\varphi \)) is of the form

$$\begin{aligned} T_\varphi (x,y)=\varphi ^{-1}\left( -\frac{12\left( \frac{1}{y-x}\int \limits _x^y\varphi ^{(-2)}(t)dt-\frac{1}{2}\varphi ^{(-2)}(x)-\frac{1}{2}\varphi ^{(-2)}(y)\right) }{(y-x)^2}\right) . \end{aligned}$$

Example 3

For the Simpson quadrature

$$\begin{aligned} S(f)=\frac{1}{6}f(x)+\frac{2}{3}f\left( \frac{x+y}{2}\right) +\frac{1}{6}f(y) \end{aligned}$$

we have

$$\begin{aligned} \frac{1}{y-x}\int \limits _x^yf(t)dt=\frac{1}{6}f(x)+\frac{2}{3}f\left( \frac{x+y}{2}\right) +\frac{1}{6}f(y)-\frac{1}{2880}f^{(4)}(\xi (x,y))(y-x)^4 \end{aligned}$$

thus, the mean generated by this quadrature (and function \(\varphi \)) is of the form

$$\begin{aligned} S_\varphi (x,y)=\varphi ^{-1}\left( -\frac{2880\left( \frac{1}{y-x}\int _x^y\varphi ^{(-4)}(t)dt-\frac{1}{6}\varphi ^{(-4)}(x)-\frac{2}{3}\varphi ^{(-4)}\left( \frac{x+y}{2}\right) -\frac{1}{6}\varphi ^{(-4)}(y)\right) }{(y-x)^4}\right) . \end{aligned}$$

Remark 1

Directly from the definitions we have \(M_\varphi (x,y)=M_\varphi (y,x),\) \(T_\varphi (x,y)=T_\varphi (y,x)\) and \(S_\varphi (x,y)=S_\varphi (y,x)\) for every continuous and monotone function \(\varphi .\)

The next example is connected with the Radau quadrature, which is not symmetric with respect to the interchange of x and y.

Example 4

For the \(2-\)point Radau quadrature

$$\begin{aligned} R(f)=\frac{1}{4}f(x)+\frac{3}{4}f\left( \frac{x+2y}{3}\right) \end{aligned}$$

we have

$$\begin{aligned} \frac{1}{y-x}\int \limits _x^yf(t)dt=\frac{1}{4}f(x)+\frac{3}{4}f\left( \frac{x+2y}{3}\right) +\frac{1}{216}f^{(3)}(\xi (x,y))(y-x)^3 \end{aligned}$$

thus, the mean generated by this quadrature (and function \(\varphi \)) is of the form

$$\begin{aligned} R_\varphi (x,y)=\varphi ^{-1}\left( \frac{216\left( \frac{1}{y-x}\int _x^y\varphi ^{(-3)}(t)dt-\frac{1}{4}\varphi ^{(-3)}(x)-\frac{3}{4}\varphi ^{(-3)} \left( \frac{x+2y}{3}\right) \right) }{(y-x)^3}\right) . \end{aligned}$$
(5)

2 Characterization of arithmetic means among quadrature means

Remark 2

Observe that if we consider the function \(\varphi =id,\) then we may take \(\varphi ^{(-2)}(t)=\frac{t^3}{6};\) thus from (4) we get

$$\begin{aligned} M_{id}(x,y)&=\frac{4\left( \frac{1}{y-x}\int _x^yt^3dt-\left( \frac{x+y}{2}\right) ^3\right) }{(y-x)^2}= \frac{\frac{y^4-x^4}{y-x}-\frac{1}{2}\left( x^3+3x^2y+3xy^2+y^3\right) }{(y-x)^2}\\&=\frac{2x^3+2x^2y+2xy^2+2y^3-\left( x^3+3x^2y+3xy^2+y^3\right) }{2(y-x)^2}\\&=\frac{x^3-x^2y-xy^2+y^3}{2(y-x)^2}=\frac{x^2(x-y)-y^2(x-y)}{2(y-x)^2}=\frac{x+y}{2}. \end{aligned}$$
(6)

As we can see from the above remark, for the identity function we obtain the arithmetic mean. The same holds for the means \(T_\varphi \) and \(S_\varphi \) (we omit the calculations). Now, we consider the Radau quadrature, which is not symmetric with respect to the interchange of x and y.
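The omitted calculations for the trapezium and Simpson rules are easy to confirm in exact rational arithmetic. The sketch below (our illustration of formula (3) for \(\varphi =id\)) evaluates \(T_{id}\) and \(S_{id}\) with Python fractions and obtains \(\frac{x+y}{2}\) in both cases.

```python
from fractions import Fraction as F

def peval(p, t):
    """Evaluate the polynomial with coefficient list p (p[i] multiplies t^i)."""
    return sum(c * t**i for i, c in enumerate(p))

def avg_integral(p, x, y):
    """Exact value of (1/(y-x)) * int_x^y p(t) dt."""
    P = [F(0)] + [F(c) / (i + 1) for i, c in enumerate(p)]
    return (peval(P, y) - peval(P, x)) / (y - x)

def quad_mean_id(anti, rule, A_Q, k, x, y):
    """Formula (3) for phi = id: rule is a list of (a_i, alpha_i, beta_i)."""
    q = sum(a * peval(anti, al * x + be * y) for a, al, be in rule)
    return (avg_integral(anti, x, y) - q) / (A_Q * (y - x) ** k)

x, y = F(1), F(4)
t3_6 = [F(0)] * 3 + [F(1, 6)]      # a second antiderivative of id
t5_120 = [F(0)] * 5 + [F(1, 120)]  # a fourth antiderivative of id

# trapezium: A_Q = -1/12, k = 2; Simpson: A_Q = -1/2880, k = 4
trap = quad_mean_id(t3_6, [(F(1, 2), 1, 0), (F(1, 2), 0, 1)], F(-1, 12), 2, x, y)
simp = quad_mean_id(t5_120,
                    [(F(1, 6), 1, 0), (F(2, 3), F(1, 2), F(1, 2)), (F(1, 6), 0, 1)],
                    F(-1, 2880), 4, x, y)
```

Both means evaluate exactly to the arithmetic mean of the endpoints.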

Remark 3

As previously, we consider the function \(\varphi =id.\) Then we have \(\varphi ^{(-3)}(t)=\frac{t^4}{24},\) thus from (5) we have

$$\begin{aligned} R_{id}(x,y)&=\frac{9\left( \frac{1}{y-x}\int _x^yt^4dt-\frac{1}{4}x^4-\frac{3}{4}\left( \frac{x+2y}{3}\right) ^4\right) }{(y-x)^3}\\&=\frac{\frac{9}{5}\frac{y^5-x^5}{y-x}-\frac{9}{4} x^4-\frac{1}{12}\left( x^4+8x^3y+24x^2y^2+32xy^3+16y^4\right) }{(y-x)^3}\\&=\frac{\frac{9}{5}(x^4+x^3y+x^2y^2+xy^3+y^4)-\frac{1}{3}(7x^4+2x^3y+6x^2y^2+8xy^3+4y^4)}{(y-x)^3}\\&=\frac{27(x^4+x^3y+x^2y^2+xy^3+y^4)-5(7x^4+2x^3y+6x^2y^2+8xy^3+4y^4)}{15(y-x)^3}\\&=\frac{-8x^4+17x^3y-3x^2y^2-13xy^3+7y^4}{15(y-x)^3}=\frac{8x+7y}{15}. \end{aligned}$$
(7)
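The identity \(R_{id}(x,y)=\frac{8x+7y}{15}\) can likewise be confirmed in exact arithmetic. The following sketch (our illustration) evaluates formula (5) with \(\varphi ^{(-3)}(t)=\frac{t^4}{24}\) at a few rational points.

```python
from fractions import Fraction as F

def avg_t4_24(x, y):
    """Exact (1/(y-x)) * int_x^y t^4/24 dt = (y^5 - x^5)/(120 (y - x))."""
    return (y**5 - x**5) / (120 * (y - x))

def radau_mean_id(x, y):
    """Formula (5) with phi = id and phi^{(-3)}(t) = t^4/24."""
    quad = F(1, 4) * x**4 / 24 + F(3, 4) * ((x + 2 * y) / 3) ** 4 / 24
    return 216 * (avg_t4_24(x, y) - quad) / (y - x) ** 3

checks = [(F(1), F(4)), (F(-2), F(3)), (F(0), F(1)), (F(5, 2), F(-7, 3))]
results = [radau_mean_id(x, y) == (8 * x + 7 * y) / 15 for x, y in checks]
```

At every sample point the weighted mean \(\frac{8x+7y}{15}\) is obtained exactly.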

As we can see, the mean generated by a classical quadrature rule may also be a weighted arithmetic mean. The main goal of this part of the paper is to characterize the functions which generate such means. To this end we consider the following functional equation

$$\begin{aligned} F(y)-F(x)=(y-x)\sum _{i=1}^na_if(\alpha _ix+\beta _i y)+g(\alpha x+\beta y)(y-x)^k \end{aligned}$$
(8)

Note that, in (8), the integral is replaced by the expression \(F(y)-F(x)\) and, consequently, no regularity of the functions F, f and g is needed. On the contrary, the regularity of these functions will be proved under some assumptions. Thus (8) yields an example of a situation where the regularity (continuity) of solutions is a consequence of the equation itself.

Further, the power k occurring here is not the same k as in (2). To keep the same k, equation (8) should be considered with \(k+1\) in place of k, but for equation (8) itself this would be artificial.

In the short monograph [16] equations of the form

$$\begin{aligned} \sum _{i=0}^l(y-x)^i[f_{1,i}(\alpha _{1,i}x+\beta _{1,i}y)+\cdots +f_{{k_i},i}(\alpha _{{k_i},i}x+\beta _{{k_i},i}y)]=0 \end{aligned}$$
(9)

were considered. Our equation (8) is a particular case of (9), and the solutions of (9) are expressed with the use of polynomial functions. Therefore, we now say a few words concerning polynomial functions.

Polynomial functions were introduced by M. Fréchet in [6]. To give the definition we need the notion of the difference operator \(\Delta _h^n.\) First we define \(\Delta _h\) by the formula

$$\begin{aligned} \Delta _hf(x):=f(x+h)-f(x) \end{aligned}$$

and \(\Delta _h^n\) is defined recursively

$$\begin{aligned} \Delta _h^0f:=f,\;\Delta _h^{n+1}f:=\Delta _h(\Delta _h^nf)= \Delta _h\circ \Delta _h^nf,\;n\in \mathbb N. \end{aligned}$$

Using this operator, polynomial functions are defined in the following way.

Definition 2

A function \(f:\mathbb R\rightarrow \mathbb R\) is called a polynomial function of order at most n if it satisfies the equality

$$\begin{aligned} \Delta _h^{n+1}f(x)=0 \end{aligned}$$

for all \(x,h\in \mathbb R.\)

The general form of a polynomial function of order n is given by the formula

$$\begin{aligned} f(x)=A_0+A_1(x)+A_2(x,x)+\cdots +A_n(x,\dots ,x) \end{aligned}$$

where \(A_i:\mathbb R^i\rightarrow \mathbb R\) is an \(i-\)additive and symmetric function.
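The defining property is easy to experiment with. In the sketch below (our illustration), \(\Delta _h^n\) is applied to an ordinary polynomial of degree 4 in exact rational arithmetic: the fifth difference vanishes, while the fourth is the constant \(4!\,a_4h^4.\)

```python
from fractions import Fraction as F

def delta(f, h):
    """The difference operator: (Delta_h f)(x) = f(x + h) - f(x)."""
    return lambda x: f(x + h) - f(x)

def delta_n(f, h, n):
    """Iterate the operator: Delta_h^n f."""
    for _ in range(n):
        f = delta(f, h)
    return f

p = lambda t: 3 * t**4 - t + 2   # ordinary polynomial of degree 4
h = F(1, 3)
d4 = delta_n(p, h, 4)(F(7, 2))   # constant value 4! * 3 * h^4 = 8/9
d5 = delta_n(p, h, 5)(F(7, 2))   # Delta_h^5 annihilates degree-4 polynomials
```

Ordinary polynomials of degree n are thus polynomial functions of order at most n; the converse requires continuity, as discussed below.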

For functions defined on \(\mathbb R\) we may use the approach from [16], which is based on a generalized Sablik's lemma. Let G, H be Abelian groups and let \(SA^0(G,H):=H\), \(SA^1(G,H):=\text{ Hom }(G,H)\) (i.e. the group of all homomorphisms from G into H), and for \(i\in \mathbb N\), \(i\ge 2\), let \(SA^i(G,H)\) be the group of all i–additive and symmetric mappings from \(G^i\) into H. Furthermore, let

$$\begin{aligned} \mathcal {P}:=\bigl \{(\alpha ,\beta )\in \text{ Hom }(G,G)^2:\alpha (G)\subset \beta (G)\bigr \}. \end{aligned}$$
(10)

Finally, for \(x\in G\) let \(x^i=\underbrace{(x,\dots ,x)}_i\), \(i\in \mathbb N\).

Lemma 1

(A. Lisak, M. Sablik [8]) Let \(N,M,K\in \mathbb N\cup \{0\}\) and let \(I_0,\dots , I_{M+K}\) be finite subsets of \(\mathcal {P}.\) Suppose further that H is uniquely divisible by N! and let functions \(\varphi _i:G\rightarrow SA^i(G;H),\) \(i=0,\dots ,N,\) and functions \(\psi _{i,(\alpha ,\beta )}:G\rightarrow SA^i(G;H),\) \((\alpha ,\beta )\in I_i,\) \(i=0,\dots ,M+K,\) satisfy

$$\begin{aligned} \varphi _N(x)(y^N)+\sum _{i=0}^{N-1}\varphi _i(x)(y^i)&=\sum _{i=0}^M\sum _{(\alpha ,\beta )\in I_i} \psi _{i,(\alpha ,\beta )}(\alpha (x)+\beta (y))(y^i)\\&\quad +\sum _{i=M+1}^{M+K}\sum _{(\alpha ,\beta )\in I_i} \psi _{i,(\alpha ,\beta )}(\alpha (x)+\beta (y))(x^i) \end{aligned}$$
(11)

for all \(x,y\in G.\) Then \(\varphi _N\) is a polynomial function of order not greater than

$$\begin{aligned} \sum _{i=0}^{M+K}\textrm{card}\left( \bigcup _{s=i}^{M+K} I_s\right) -1. \end{aligned}$$

Remark 4

In the forthcoming theorem we will show how to use Lemma 1 in order to prove that solutions of (8) must be polynomial functions.

Since it is known that a continuous polynomial function is an ordinary polynomial, solutions \(\varphi \) of the equation

$$\begin{aligned} Q_\varphi (x,y)=\alpha x+\beta y \end{aligned}$$

must be polynomials. Nevertheless, as it has already been pointed out, equation (8) may be considered without any regularity assumptions; thus we will prove a result concerning the general solution of this equation. It will be shown that, under some mild assumptions, the continuity of solutions is forced by the equation itself.

Note also that this theorem may look like a particular version of Theorem 3.5 from [16]. However, the assumptions here are slightly weaker; we will explain this later on.

Theorem 1

Let \(n\ge 1,k>1\) be integers, let

$$\begin{aligned} \alpha ,\beta ;\,a_i,\alpha _i,\beta _i,i=1,\dots ,n \end{aligned}$$

be some real numbers such that

$$\begin{aligned} \alpha +\beta \ne 0, \\ \alpha _i+\beta _i\ne 0,\quad i=1,\dots ,n \end{aligned}$$

and

$$\begin{aligned} \left| \begin{array}{cc} \alpha _i&{}\beta _i\\ \alpha _j&{}\beta _j\end{array}\right| \ne 0 \text { for }i\ne j. \end{aligned}$$
(12)

If \(F,f,g:\mathbb R\rightarrow \mathbb R\) satisfy

$$\begin{aligned} F(y)-F(x)=(y-x)\sum _{i=1}^na_if(\alpha _ix+\beta _i y)+g(\alpha x+\beta y)(y-x)^k \end{aligned}$$
(8)

then F, f, g are polynomial functions of orders at most \(2n+2,2n+1,2n+2-k,\) respectively. Moreover, F must be continuous and if \(\alpha _i,\beta _i\) satisfy

$$\begin{aligned} \alpha _i+\beta _i=\alpha +\beta =1,\quad i=1,\dots ,n, \end{aligned}$$

and

$$\begin{aligned} a_1+\cdots +a_n\ne 0 \end{aligned}$$
(13)

then the solutions f, g are also continuous.

Proof

In the first part of the proof we use Lemma 1 to show that the functions f, g, F are polynomial.

Clearly, the homomorphisms occurring in Lemma 1 are now replaced by multiplication by a constant and the condition from (10) simply means that the respective constant is not equal to zero.

As we can see from the formulation of Lemma 1, the polynomiality of a given function may be proved if the value of this function at x is multiplied by the highest power of y (or conversely).

Observe that putting

$$\begin{aligned} \frac{\tilde{x}-\beta \tilde{y}}{\alpha +\beta } \end{aligned}$$

in place of x and

$$\begin{aligned} \frac{\tilde{x}+\alpha \tilde{y}}{\alpha +\beta } \end{aligned}$$

in place of y,  we have

$$\begin{aligned} g(\alpha x+\beta y)(y-x)^k=g(\tilde{x})(\tilde{y})^k \end{aligned}$$
(14)

thus we may take g in the role of \(\varphi _N\) in (11) (with \(N=k,K=0\) and \(M=1\)) and we can see that g must be a polynomial function (clearly, the sum over an empty set of indices is equal to zero). Of course, it is not a problem if for some i we have

$$\begin{aligned} \left| \begin{array}{cc} \alpha &{}\beta \\ \alpha _i&{}\beta _i\end{array}\right| =0. \end{aligned}$$

In such a case we simply leave the respective term on the same side of the equation as the term given by (14).
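The substitution itself is a simple linear change of variables; the following sketch (our illustration) verifies the two identities behind (14) in exact arithmetic.

```python
from fractions import Fraction as F

# For alpha + beta != 0, putting x = (xt - beta*yt)/(alpha + beta) and
# y = (xt + alpha*yt)/(alpha + beta) turns alpha*x + beta*y into xt
# and y - x into yt, which is exactly what (14) uses.
alpha, beta = F(2), F(5)
xt, yt = F(3, 7), F(-11, 4)
x = (xt - beta * yt) / (alpha + beta)
y = (xt + alpha * yt) / (alpha + beta)
ok_node = (alpha * x + beta * y == xt)
ok_diff = (y - x == yt)
```

Both identities hold for arbitrary choices of the parameters, as the linear system is solved exactly.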

Now, to proceed with the proof we must consider two cases. If \(n=1\) then it may happen that the polynomiality of f cannot be obtained directly from Lemma 1; this is the case if

$$\begin{aligned} \left| \begin{array}{cc} \alpha _1&{}\beta _1\\ \alpha &{}\beta \end{array}\right| =0. \end{aligned}$$

However, in this case we get that F is a polynomial function, since there is either no x or no y in the arguments of the functions on the right-hand side. Once we know that g and F are polynomial functions, we can obtain the form of f by taking x or y equal to zero in (8).

The remaining case is \(n\ge 2.\) Then in view of (12), we know that either

$$\begin{aligned} \left| \begin{array}{cc} \alpha _1&{}\beta _1\\ \alpha &{}\beta \end{array}\right| \ne 0 \text { or } \left| \begin{array}{cc} \alpha _2&{}\beta _2\\ \alpha &{}\beta \end{array}\right| \ne 0. \end{aligned}$$

Assume, for example, that the first of the above possibilities takes place. Then we take

$$\begin{aligned} \frac{\tilde{x}-\beta _1 \tilde{y}}{\alpha _1+\beta _1} \end{aligned}$$

in place of x and

$$\begin{aligned} \frac{\tilde{x}+\alpha _1 \tilde{y}}{\alpha _1+\beta _1} \end{aligned}$$

in place of y. We have

$$\begin{aligned} f(\alpha _1 x+\beta _1 y)(y-x)=\tilde{y}f(\tilde{x}) \end{aligned}$$

and we may use Lemma 1 with \(N=1,\varphi _N=f\) and \(M=k.\) It is possible that the polynomiality of F cannot be proved with the use of Lemma 1 (if summands of the form \(a_{i_1}f(x),a_{i_2}f(y)\) occur in (8)), but it suffices to take \(x=0\) in (8) to obtain the form of F.

Once we know that all functions occurring in (8) are polynomial, we need to show the continuity results.

This will be done in three steps. The reasoning is the same as in [16], but we will describe it briefly for the sake of completeness.

First, we make the following observation. Let F, f, g be polynomial functions satisfying (8). It follows from Lemma 3.2 of [16] that our functional equation is satisfied by the monomial summands of F, f and g of orders \(l,l-1,l-k,\) \(l=0,\dots ,2n+2,\) respectively.

In the second step we assume that F, f, g are monomial functions of the respective orders. Now, we put \(x=1\) and we get from Lemma 3.4 of [16] that F must be an ordinary monomial.

In the last step we also proceed similarly as in [16]. We put \(F(x)=cx^l\) into (8), getting

$$\begin{aligned} cy^l-cx^l= (y-x)\sum _{i=1}^na_if_{l-1}(\alpha _ix+\beta _i y)+g_{l-k}(\alpha x+\beta y)(y-x)^k \end{aligned}$$

(here \(f_{l-1},g_{l-k}\) are the monomial parts of f, g of orders \(l-1,l-k,\) respectively). Now we cancel \(y-x\) on both sides, arriving at

$$\begin{aligned} c(y^{l-1}+y^{l-2}x+\cdots +x^{l-1}) = \sum _{i=1}^na_if_{l-1}(\alpha _ix+\beta _i y)+g_{l-k}(\alpha x+\beta y)(y-x)^{k-1}. \end{aligned}$$
(15)

Since we canceled \(y-x,\) we cannot substitute \(y=x.\) However, we can take a sequence of rationals \(1\ne q_n\rightarrow 1\) and substitute \(q_n x\) in place of y. Then we use the rational homogeneity of monomial functions and pass to the limit; in consequence, the equation is satisfied also for \(x=y\) (see Lemma 3.7 of [16]). Thus we substitute \(y=x\) in (15) and, from (13), we get that f is an ordinary monomial. Knowing that both F and f are continuous, it is enough to take \(y=0\) to see that g is continuous as well. \(\square \)

Note that it is quite important that we managed to relax the assumptions of Theorem 3.5 of [16]. Namely, in order to use that theorem it would be necessary that at least one of the vectors \((\alpha _i,\beta _i)\) be linearly independent of \((\alpha ,\beta ).\) Without this relaxation it would not be possible to prove the following proposition.

Proposition 1

The solutions \(g,f,F:\mathbb R\rightarrow \mathbb R\) of the equation

$$\begin{aligned} F(y)-F(x)=(y-x)f\left( \frac{x+y}{2}\right) +(y-x)^3g\left( \frac{x+y}{2}\right) \end{aligned}$$
(16)

are of the form

$$\begin{aligned} g(x)= & {} ax+b \\ f(x)= & {} 4ax^3+12bx^2+cx+d \\ F(x)= & {} ax^4+4bx^3+\frac{1}{2}cx^2+dx+e \end{aligned}$$

for some \(a,b,c,d,e\in \mathbb R.\)
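One can verify directly that these functions satisfy (16); the sketch below (our illustration) checks the identity in exact arithmetic for sample coefficients and sample rational points.

```python
from fractions import Fraction as F

a, b, c, d, e = F(2), F(-3), F(1, 2), F(5), F(-1)

g = lambda t: a * t + b
f = lambda t: 4 * a * t**3 + 12 * b * t**2 + c * t + d
bigF = lambda t: a * t**4 + 4 * b * t**3 + c * t**2 / 2 + d * t + e  # F from Proposition 1

ok = True
for x, y in [(F(0), F(1)), (F(-2), F(3, 4)), (F(5), F(5)), (F(1, 3), F(-6))]:
    m = (x + y) / 2
    # equation (16): F(y) - F(x) = (y-x) f((x+y)/2) + (y-x)^3 g((x+y)/2)
    ok = ok and (bigF(y) - bigF(x) == (y - x) * f(m) + (y - x) ** 3 * g(m))
```

Since both sides are polynomials in x and y, agreement at sufficiently many points reflects the identity itself.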

Proof

From Theorem 1 we know that g is a polynomial of degree at most 1. Thus \(g(x)=ax+b\) for some constants a, b. Note that, dividing the equation by \((y-x)\) and letting x tend to y, we get \(F'=f.\) We mentioned in the proof of Theorem 1 that the monomial parts of F, f, g of respective orders satisfy the same equation. Let us, for example, consider the monomials \(F(x)=\gamma _1x^4,f(x)=\gamma _2 x^3,g(x)=ax.\) Since \(F'=f,\) we have \(4\gamma _1=\gamma _2.\) Taking \(x=0\) in (16), we get

$$\begin{aligned} F(y)-F(0)=yf\left( \frac{y}{2}\right) +y^3g\left( \frac{y}{2}\right) . \end{aligned}$$

Using here the above-mentioned forms of Ffg we get

$$\begin{aligned} \gamma _1y^4=\frac{1}{2}\gamma _1y^4+\frac{a}{2}y^4 \end{aligned}$$

i.e. \(\gamma _1=a,\) as claimed. The rest of the proof is completely analogous. \(\square \)

From this proposition we immediately obtain the following corollary.

Corollary 1

Let \(\varphi :\mathbb R\rightarrow \mathbb R\) be a continuous and monotone function. The equality

$$\begin{aligned} M_\varphi (x,y)=\frac{x+y}{2} \end{aligned}$$
(17)

is satisfied if and only if \(\varphi (x)=ax+b\) for some constants a, b.

Remark 5

It is clear that it is possible to prove analogous statements for other quadrature means.

Remark 6

The assumption (13) of Theorem 1 is essential, since the simple equation

$$\begin{aligned} F(y)-F(x)=(y-x)\biggl (f(x)-2f\left( \frac{x+y}{2}\right) +f(y)\biggr )+(y-x)^kg\left( \frac{x+y}{2}\right) \end{aligned}$$

is satisfied by any (possibly discontinuous) additive function f, with \(F=const\) and \(g=0\) (for an additive f the expression in brackets vanishes, since additive functions are \(\mathbb Q-\)homogeneous).

Until now we have presented a method which may be used to deal with functions defined on \(\mathbb R.\) However, in general, a function \(\varphi \) may be defined on an interval. In such a case the methods from [16] cannot be used. Therefore we now cite a lemma proved by Pawlikowska, which may be called a version of Sablik's lemma on convex subsets of linear spaces. In this lemma X, Y are linear spaces and by \(SA^r(X;Y)\) we denote the group of all symmetric \(r-\)additive functions from \(X^r\) into Y. Further, \(SA^0(X;Y)\) is the family of all constant mappings from X into Y.

Lemma 2

([14] Corollary 2.2) Let X, Y be linear spaces over a field \(\mathbb K\subset \mathbb R\) and let \(N,M\in \mathbb N\cup \{0\}.\) Suppose further that \(J_0,\dots ,J_M\) are finite subsets of \(\mathbb K\cap [0,1).\) If \(K\ne \emptyset \) is a convex set such that \(x_0\in \text {alg int}\,K\) and if the functions \(\varphi _i:K\rightarrow SA^i(X,Y),\) \(i=0,\dots ,N,\) and \(\psi _{j,\alpha }:K\rightarrow SA^j(X,Y),\) \(\alpha \in J_j,\) \(j=0,\dots ,M,\) satisfy the equation

$$\begin{aligned} \sum _{i=1}^N\varphi _i(x)\bigl ((ax+by)^i\bigr )=\sum _{j=0}^M\sum _{\alpha \in J_j}\psi _{j,\alpha }\bigl (\alpha x+(1-\alpha )y\bigr )\bigl ((ax+by)^j\bigr ) \end{aligned}$$
(18)

then there exists a convex set \(K'\subset K\) such that \(x_0\in \text {alg int}\,K'\) and \(\varphi _N\) is a locally polynomial function of degree at most

$$\begin{aligned} \sum _{i=0}^M \textrm{card}\left( \bigcup _{k=i}^M J_k\right) \end{aligned}$$

on \(K'.\)

Using this lemma, we can prove the following result. Here, for simplicity, we assume the continuity of the unknown function.

Theorem 2

Let \(I\subset \mathbb R\) be an interval, let Q be a quadrature and let \(\varphi :I\rightarrow \mathbb R\) be a given continuous and increasing function. If

$$\begin{aligned} Q_\varphi (x,y)=ax+(1-a)y \end{aligned}$$

for some \(a\in (0,1),\) then \(\varphi \) is a polynomial.

Proof

We will use here Lemma 2. Fix an \(x_0\in I.\) Since the function \(\varphi \) is multiplied by the highest power of \((y-x),\) we do not need to assume anything concerning the coefficients or weights of the quadrature Q. Moreover, \(\varphi \) is assumed to be continuous, thus it must be a polynomial (on some interval \(I'\)). Let us write

$$\begin{aligned} \varphi (x)=p(x),x\in I', \end{aligned}$$

where p is a polynomial. We need to show that \(\varphi (x)=p(x),x\in I.\) Thus assume, for the indirect proof, that there exists a point where this equality does not hold and define

$$\begin{aligned} x_1:=\sup \{z\in I:\varphi (x)=p(x)\text { for all }x\in (x_0,z)\}. \end{aligned}$$

Now we may again use Lemma 2 with \(x_1\) in place of \(x_0,\) to obtain that there exist a polynomial q and an interval \(I''\) containing the point \(x_1\) such that

$$\begin{aligned} \varphi (x)=q(x),x\in I''. \end{aligned}$$

We can see that \(q=p\) and this contradicts the definition of \(x_1.\) \(\square \)

Remark 7

We proved that quadrature means may be arithmetic only if the functions involved are polynomials. In the papers [7, 12] the question when a Lagrangian mean may be quasi-arithmetic was considered. Such a problem is interesting for quadrature means as well, but it seems very difficult.

3 Inequalities between means generated by different quadratures

In this part of the paper we will deal with inequalities connected with the means introduced here. The comparison of means has a long history, starting from the elementary AM–GM inequality and containing many results for different kinds of means (see for example [1, 4, 9, 20]). Let P and Q be two different quadratures. We will study the inequality

$$\begin{aligned} P_\varphi \le Q_\varphi . \end{aligned}$$
(19)

Remark 8

Assume that \(\varphi \) is a continuous and increasing function. Writing inequality (19) explicitly and using the assumed monotonicity of \(\varphi \) (and thus of \(\varphi ^{-1}\)), we can see that we in fact need to compare expressions of the type

$$\begin{aligned} \frac{\frac{1}{y-x}\int \limits _x^y\varphi ^{(-k)}(t)dt-\sum _{i=1}^na_i\varphi ^{(-k)}(\alpha _ix+\beta _i y)}{A_Q(y-x)^k}. \end{aligned}$$

In order to obtain such results we will use the method originated by Rajba in [14], which is based on stochastic ordering results that may be found, among others, in [5, 10, 15]. This method was continued in [11, 17,18,19].

The most useful result for us will be the following theorem.

Theorem 3

([5] Theorem 4.3) Let X and Y be two random variables such that

$$\begin{aligned} \mathbb E(X^j-Y^j)=0,\;\;j=1,2,\dots ,s. \end{aligned}$$

If the distribution functions \(F_X,F_Y\) cross exactly s times and the last sign of \(F_X-F_Y\) is positive, then

$$\begin{aligned} \mathbb Ef(X)\le \mathbb Ef(Y) \end{aligned}$$

for all \(s-\)convex functions \(f:\mathbb R\rightarrow \mathbb R.\)

We will start with the inequality for means \(M_\varphi \) and \(T_\varphi .\)

Theorem 4

For every continuous, convex and increasing function \(\varphi \) we have

$$\begin{aligned} M_\varphi \le T_\varphi . \end{aligned}$$
(20)

Proof

Taking Remark 8 into account, we can write inequality (20) in the form

$$\begin{aligned} \frac{24\left( \frac{1}{y-x}\int _x^y\varphi ^{(-2)}(t)dt-\varphi ^{(-2)}\left( \frac{x+y}{2}\right) \right) }{(y-x)^2}\le -\frac{12\left( \frac{1}{y-x}\int \limits _x^y\varphi ^{(-2)}(t)dt-\frac{1}{2}\varphi ^{(-2)}(x)-\frac{1}{2}\varphi ^{(-2)}(y)\right) }{(y-x)^2} \end{aligned}$$
(21)

which, after some simplifications, yields

$$\begin{aligned} \frac{1}{y-x}\int \limits _x^y\varphi ^{(-2)}(t)dt\le \frac{1}{6}\varphi ^{(-2)}(x)+\frac{2}{3}\varphi ^{(-2)}\left( \frac{x+y}{2}\right) +\frac{1}{6}\varphi ^{(-2)}(y). \end{aligned}$$
(22)

Taking two probability measures, \(\mu _X\) uniformly distributed on the interval [x, y] and \(\mu _Y:=\frac{1}{6}\delta _x+\frac{2}{3}\delta _{\frac{x+y}{2}}+\frac{1}{6}\delta _y,\) and using Theorem 3, we can see that (22) is satisfied by every \(3-\)convex function. However, if \(\varphi ^{(-2)}\) is \(3-\)convex then \(\varphi \) is convex. \(\square \)
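The moment conditions of Theorem 3 for this pair of measures, and inequality (22) itself for a sample \(3-\)convex function, can be checked as follows (our illustration; we work on [0, 1] and take \(\varphi ^{(-2)}=\exp ,\) which is \(3-\)convex).

```python
from fractions import Fraction as F
import math

# mu_X: uniform on [0,1]; mu_Y = (1/6) delta_0 + (2/3) delta_{1/2} + (1/6) delta_1.
# Their first three moments agree, which gives s = 3 in Theorem 3.
moments_match = all(
    F(1, j + 1) == F(2, 3) * F(1, 2) ** j + F(1, 6) for j in (1, 2, 3)
)

# Inequality (22) on [0,1] for the 3-convex function exp:
lhs = math.e - 1                                         # int_0^1 e^t dt
rhs = math.exp(0) / 6 + 2 * math.exp(0.5) / 3 + math.e / 6
```

The numerical gap between the two sides is small here (exp is close to a cubic on [0, 1]) but has the predicted sign.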

Remark 9

Inequality (22) may be found in the paper of Bessenyei and Páles [3] (see also [19]).

To give another example of such an inequality, we now compare \(L_\varphi \) with \(M_\varphi .\)

Theorem 5

For every continuous, convex and increasing function \(\varphi \) we have

$$\begin{aligned} M_\varphi \le L_\varphi . \end{aligned}$$
(23)

Proof

Now we must show that

$$\begin{aligned} \frac{24\left( \frac{1}{y-x}\int \limits _x^y\varphi ^{(-2)}(t)dt-\varphi ^{(-2)}\left( \frac{x+y}{2}\right) \right) }{(y-x)^2}\le \frac{1}{y-x}\int \limits _x^y\varphi (t)dt \end{aligned}$$

for every convex \(\varphi :[x,y]\rightarrow \mathbb R.\) To this end we have to construct the cumulative distribution functions connected with both expressions occurring in this inequality. For simplicity we will work on the interval [0, 1] (a function constructed on [0, 1] may easily be shifted and re-scaled to fit any interval). Let \(F_L(t)=t,\) \(t\in [0,1];\) then clearly

$$\begin{aligned} \int \limits _0^1\varphi dF_L=\int \limits _0^1\varphi (t)dt. \end{aligned}$$

Further, let \(F_M\) be given by

$$\begin{aligned} F_M(t)=\left\{ \begin{array}{ll} 4t^3&{}t\in [0,\frac{1}{2}],\\ 4t^3-12t^2+12t-3&{}t\in (\frac{1}{2},1]. \end{array}\right. \end{aligned}$$

We have

$$\begin{aligned} \int \limits _0^1\varphi dF_M&=\int \limits _0^\frac{1}{2}\varphi dF_M+\int \limits _\frac{1}{2}^1\varphi dF_M=\int \limits _0^\frac{1}{2} 12t^2\varphi (t) dt+\int \limits _\frac{1}{2}^1(12t^2-24t+12)\varphi (t) dt\\&=12t^2\varphi ^{(-1)}(t)\biggr |_0^\frac{1}{2}-\int \limits _0^\frac{1}{2} 24t\varphi ^{(-1)}(t) dt+(12t^2-24t+12)\varphi ^{(-1)}(t)\biggr |_\frac{1}{2}^1-\int \limits _\frac{1}{2}^1 (24t-24)\varphi ^{(-1)}(t) dt\\&=3\varphi ^{(-1)}\left( \frac{1}{2}\right) -24t\varphi ^{(-2)}(t)\biggr |_0^\frac{1}{2}+\int \limits _0^\frac{1}{2} 24\varphi ^{(-2)}(t)dt -3\varphi ^{(-1)}\left( \frac{1}{2}\right) -(24t-24)\varphi ^{(-2)}(t)\biggr |_\frac{1}{2}^1+\int \limits _\frac{1}{2}^1 24\varphi ^{(-2)}(t)dt\\&=24\int \limits _0^1 \varphi ^{(-2)}(t)dt-12\varphi ^{(-2)}\left( \frac{1}{2}\right) -12\varphi ^{(-2)}\left( \frac{1}{2}\right) =24\left( \int \limits _0^1 \varphi ^{(-2)}(t)dt- \varphi ^{(-2)}\left( \frac{1}{2}\right) \right) . \end{aligned}$$
(24)

It is easy to see that

$$\begin{aligned} \int \limits _0^1tdF_M(t)=\int \limits _0^1tdF_L(t) \end{aligned}$$

and that \(F_M,F_L\) have exactly one crossing point. Therefore, inequality (23) follows from Theorem 3. \(\square \)
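These properties of \(F_M\) are straightforward to confirm; the sketch below (our illustration) checks, in exact arithmetic, that \(F_M\) is a distribution function with first moment \(\frac{1}{2},\) and the identity (24) for the sample choice \(\varphi ^{(-2)}(t)=\frac{t^4}{12}\) (i.e. \(\varphi (t)=t^2\)).

```python
from fractions import Fraction as F

h = F(1, 2)

def F_M(t):
    """CDF from the proof: density 12 t^2 on [0, 1/2] and 12 (t-1)^2 on (1/2, 1]."""
    return 4 * t**3 if t <= h else 4 * t**3 - 12 * t**2 + 12 * t - 3

endpoints_ok = F_M(F(0)) == 0 and F_M(h) == h and F_M(F(1)) == 1

# first moment: int_0^{1/2} 12 t^3 dt + int_{1/2}^1 12 t (t-1)^2 dt,
# using the antiderivative 3 t^4 - 8 t^3 + 6 t^2 of 12 t (t-1)^2
m1 = 3 * h**4 + ((3 - 8 + 6) - (3 * h**4 - 8 * h**3 + 6 * h**2))

# identity (24) for phi(t) = t^2, phi^{(-2)}(t) = t^4/12:
# LHS: int_0^{1/2} 12 t^2 * t^2 dt + int_{1/2}^1 12 (t-1)^2 t^2 dt
lhs = 12 * h**5 / 5 + 12 * (
    (F(1, 5) - F(1, 2) + F(1, 3)) - (h**5 / 5 - h**4 / 2 + h**3 / 3)
)
rhs = 24 * (F(1, 60) - h**4 / 12)  # 24 (int_0^1 t^4/12 dt - phi^{(-2)}(1/2))
```

Both sides of (24) evaluate to the same exact rational number, and the first moments of \(F_M\) and \(F_L\) coincide, as required for \(s=1.\)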

Remark 10

In fact, Theorem 3 with \(s=1\) (used in the proof of Theorem 5) is known as the Ohlin lemma (see [10]).