Abstract
In the present paper we prove a generalized version of the famous decomposition theorem of Ng. We also focus on the problem posed by Zsolt Páles concerning the Wright-convex functions.
1 Introduction and terminology
Let X be a real linear space. A set \(D\subset X\) is said to be convex if
$$\begin{aligned} tx+(1-t)y\in D,\quad x, y\in D,\ t\in [0,1]. \end{aligned}$$
A point \(x_0\in D\) is said to be algebraically internal for a set \(D\subset X\), and we will write \(x_0\in algint(D)\), if for every \(x\in X\) there exists a number \(\delta >0\) such that
$$\begin{aligned} x_0+tx\in D,\quad t\in [0,\delta ). \end{aligned}$$
A set D is algebraically open if \(algint(D)=D\).
Let us recall that a function \(f:D\rightarrow \mathbb R\), where \(D\subset X\) is a convex set, is said to be convex if
$$\begin{aligned} f(tx+(1-t)y)\le tf(x)+(1-t)f(y),\quad x, y\in D,\ t\in [0,1]. \end{aligned}$$
If the above inequality holds for all \(x, y\in D\) with \(t=\frac{1}{2}\) then f is said to be Jensen-convex.
A relation of majorization was introduced by Schur [20] in 1923 (see also [5]) in the following manner: for \(x, y\in \mathbb R^n\)
$$\begin{aligned} x\prec y \quad :\Longleftrightarrow \quad \sum _{i=1}^{k}x_{[i]}\le \sum _{i=1}^{k}y_{[i]},\ k=1,\ldots ,n-1,\quad \text {and}\quad \sum _{i=1}^{n}x_{[i]}=\sum _{i=1}^{n}y_{[i]}, \end{aligned}$$
where, for any \(x=(x_{1},\ldots ,x_{n})\in {\mathbb {R}}^n,\) \((x_{[1]},\ldots ,x_{[n]})\) denotes the components of x in decreasing order: \(x_{[1]}\ge \cdots \ge x_{[n]}\). When \(x\prec y\), x is said to be majorized by y. The relation of majorization defined above turns out to be a preordering relation, i.e. it is reflexive and transitive. The fact \(x\prec y\) is equivalent (see [2, 5]) to the existence of a doubly stochastic matrix (i.e. a square matrix containing nonnegative elements with all rows and columns summing up to 1) \(T\in \mathbb R^{n}_{n}\) such that
$$\begin{aligned} x=Ty. \end{aligned}$$
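The partial-sum definition of majorization is easy to check numerically. The following sketch (plain Python; the helper name `is_majorized` is ours, not from the literature) also illustrates the doubly stochastic characterization with the averaging matrix whose entries are all \(\frac{1}{3}\).

```python
def is_majorized(x, y, tol=1e-9):
    """Schur's definition: x ≺ y iff the decreasing partial sums of x
    never exceed those of y and the total sums are equal."""
    xs, ys = sorted(x, reverse=True), sorted(y, reverse=True)
    if len(xs) != len(ys):
        return False
    sx = sy = 0.0
    for a, b in zip(xs, ys):
        sx += a
        sy += b
        if sx > sy + tol:
            return False
    return abs(sx - sy) <= tol

# (2, 2, 2) is an average of (3, 2, 1), hence majorized by it ...
assert is_majorized((2, 2, 2), (3, 2, 1))
# ... but not conversely: majorization is only a preordering.
assert not is_majorized((3, 2, 1), (2, 2, 2))

# x = Ty for the doubly stochastic matrix T with all entries 1/3.
y = (3.0, 2.0, 1.0)
x = tuple(sum(y) / 3 for _ in y)
assert is_majorized(x, y)
```

Note that the averaging matrix with all entries \(1/n\) produces the constant vector of the mean, the "most majorized" point of the orbit of y.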
Particularly interesting examples of doubly stochastic matrices are provided by the permutation matrices. Recall that a matrix is said to be a permutation matrix if each row and each column has a single unit entry, and all other entries are zero.
The functions that preserve the order of majorization are said to be convex in the sense of Schur (in honor of Schur, who first considered them). Thus we say that a function \(f:W\rightarrow \mathbb R\), where \(W\subset \mathbb R^n\), is Schur-convex if for all \(x, y\in W\) the implication
$$\begin{aligned} x\prec y\ \Longrightarrow \ f(x)\le f(y) \end{aligned}$$
holds. In the case where \(W=I^n\) for some interval \(I\subseteq \mathbb R\) the above condition is equivalent to the inequality
$$\begin{aligned} f(Tx)\le f(x) \end{aligned}$$
for all \(x\in I^n\) and all doubly stochastic matrices \(T\in \mathbb R^{n}_{n}\). A survey of results concerning majorization and Schur-convex functions may be found in the extensive monograph by Arnold et al. [2].
It is known from the classical papers of Schur [20], Hardy–Littlewood–Pólya [5] and Karamata [7] that if a function \(f:I\rightarrow \mathbb R\) is convex then it generates so-called Schur-convex sums, that is, the function \(F:I^n\rightarrow \mathbb R\) defined by
$$\begin{aligned} F(x_1,\ldots ,x_n)=\sum _{j=1}^{n}f(x_j) \end{aligned}$$
is Schur-convex. It is also known that the convexity of f is a sufficient but not necessary condition under which F is Schur-convex.
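The Schur–Hardy–Littlewood–Pólya result above can be probed numerically: for a convex f and a doubly stochastic T one should always observe \(F(Ty)\le F(y)\). The sketch below (illustrative Python, names ours) builds T as a convex combination of permutation matrices, which by the Birkhoff–von Neumann theorem is the general form of a doubly stochastic matrix.

```python
import itertools
import random

def schur_sum_check(f, y, T):
    """Verify F(Ty) <= F(y) for F(v) = sum of f over the coordinates."""
    n = len(y)
    Ty = [sum(T[i][j] * y[j] for j in range(n)) for i in range(n)]
    F = lambda v: sum(f(t) for t in v)
    return F(Ty) <= F(y) + 1e-12

random.seed(1)
n = 4
# A doubly stochastic T as a convex combination of permutation matrices.
perms = random.sample(list(itertools.permutations(range(n))), 3)
weights = [0.5, 0.3, 0.2]
T = [[sum(w * (p[i] == j) for w, p in zip(weights, perms))
      for j in range(n)] for i in range(n)]

convex_f = lambda t: t * t   # a convex generator
assert schur_sum_check(convex_f, [1.5, -2.0, 0.25, 3.0], T)
```

The converse direction fails, in line with the remark above: convexity of f is sufficient but not necessary for the Schur-convexity of F.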
In 1954 E. M. Wright [21] introduced a new convexity property. A function \(f:D \rightarrow \mathbb R\) is called Wright-convex if
$$\begin{aligned} f(tx+(1-t)y)+f((1-t)x+ty)\le f(x)+f(y),\quad x, y\in D,\ t\in [0,1]. \end{aligned}$$
Clearly, every convex function and every additive function is Wright-convex, and every Wright-convex function is convex in the sense of Jensen.
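As a quick sanity check of the chain "convex \(\Rightarrow \) Wright-convex \(\Rightarrow \) Jensen-convex", the following illustrative snippet (names ours) verifies the Wright inequality for the convex function \(u\mapsto u^2\) on a grid of points.

```python
def wright_gap(f, x, y, t):
    """Left-over of the Wright inequality at (x, y, t); it is
    nonnegative exactly when the inequality holds there."""
    return f(x) + f(y) - f(t * x + (1 - t) * y) - f((1 - t) * x + t * y)

f = lambda u: u * u  # convex, hence Wright-convex

pts = [-2.0, -0.5, 0.0, 1.25, 3.0]
ts = [0.0, 0.25, 0.5, 0.75, 1.0]
assert all(wright_gap(f, x, y, t) >= -1e-12
           for x in pts for y in pts for t in ts)
# At t = 1/2 the inequality reduces to (twice) the Jensen inequality:
assert wright_gap(f, -1.0, 3.0, 0.5) >= 0.0
```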
In 1987 C. T. Ng [12] gave a full characterization of functions generating Schur-convex sums. The following theorem of Ng also reveals the connection between functions generating Schur-convex sums and Wright-convex functions.
Theorem 1
[12] Let \(D\subset \mathbb R^m\) be a nonempty open and convex set, \(f:D\rightarrow \mathbb R\) and \(F(x_1,\ldots ,x_n)=\sum _{j=1}^{n}f(x_j)\). The following conditions are equivalent to each other:
-
(a)
F is Schur-convex for some \(n\ge 2\),
-
(b)
F is Schur-convex for every \(n\ge 2\),
-
(c)
f is convex in the sense of Wright,
-
(d)
f admits the representation
$$\begin{aligned} f(x)=w(x)+a(x),\quad x\in D, \end{aligned}$$
where \(w:D\rightarrow \mathbb R\) is a convex function, and \(a:\mathbb R^m\rightarrow \mathbb R\) is an additive function.
Ng proved his decomposition theorem using de Bruijn's result [3] on functions with continuous differences. Several other proofs of this theorem are known in the literature: Kominek [9] used a support-type result based on Rodé's theorem [19], whereas Nikodem [13] applied a local stability theorem for the Jensen equation. All the above-mentioned proofs of Ng's decomposition theorem rely on results that depend on the axiom of choice, i.e. they use transfinite induction. In [18] Páles obtained a new, elementary proof that avoids transfinite induction.
In this paper we use the version of Ng’s theorem proved by Nikodem and Páles. In [14], among other results, they gave a decomposition type theorem for approximately Wright-convex functions (see [14, Theorem 2, p. 145]). Applying this result for \(\varepsilon =0\) we obtain Ng’s characterization of Wright-convex functions (as a sum of convex and additive functions) defined on a convex subset of a real linear space with a nonempty algebraic interior.
For an overview about the other properties and generalizations of Wright-convexity we refer to the papers [1, 4, 8,9,10,11,12,13,14, 16, 18, 21].
2 A generalized version of Ng’s theorem
Throughout this section D denotes a convex subset of a given real linear space X. We now extend Ng's decomposition theorem to functions \(F:D^n\rightarrow \mathbb R\) of the form
$$\begin{aligned} F(x_1,\ldots ,x_n)=\sum _{j=1}^{n}f_j(x_j), \end{aligned}$$
where \(f_1,\ldots ,f_n:D\rightarrow \mathbb R\) and \(n\in {\mathbb {N}},\ n\ge 2\). In the proof of the generalized version of Ng's theorem we use the following result from [15].
Lemma 1
[15, Lemma 1] Let a map \(H:D^n\rightarrow Y\) be of the form
$$\begin{aligned} H(x_1,\ldots ,x_n)=\sum _{j=1}^{n}F_j(x_j), \end{aligned}$$
where \(F_j:D\rightarrow Y\) for \(j=1,\ldots ,n\), and Y stands for an abelian group. Then H is symmetric if and only if there exist a map \(F:D\rightarrow Y\) and constants \(C_1,\ldots ,C_n\in Y\) such that
$$\begin{aligned} F_j=F+C_j,\quad j=1,\ldots ,n. \end{aligned}$$
The following theorem generalizes Ng’s decomposition theorem:
Theorem 2
Let D be a convex subset of a real linear space X such that \(algint(D)\ne \emptyset \) and let \(f_1,\ldots ,f_n:D\rightarrow \mathbb R\) be given functions. Then the following conditions are pairwise equivalent:
-
(i)
a function \(F:D^n\rightarrow \mathbb R\) given by the formula
$$\begin{aligned} F(x_1,\ldots ,x_n)=\sum _{j=1}^{n}f_j(x_j) \end{aligned}$$(1)is Schur-convex;
-
(ii)
each function \(f_j\) is Wright-convex i.e.
$$\begin{aligned} f_j(tx+(1-t)y)+f_j((1-t)x+ty)\le f_j(x)+f_j(y),\quad x, y\in D,\ t\in [0,1] \end{aligned}$$for \(j=1,\ldots ,n\); moreover, each difference \(f_i-f_j\) is constant;
-
(iii)
there exist a convex function \(w:D\rightarrow \mathbb R\), an additive map \(a:X\rightarrow \mathbb R\) and constants \(c_1,\ldots ,c_n \in \mathbb R\) such that
$$\begin{aligned} f_j(x)=w(x)+a(x)+c_j,\quad x\in D,\ j=1,\ldots ,n. \end{aligned}$$
Proof
\((i)\Rightarrow (ii)\) Assume that the function F given by formula (1) is Schur-convex. In particular, it is symmetric. Indeed, for any permutation matrix P we have \(Px\prec x\), and because its inverse \(P^{-1}\) is also a permutation matrix, \(x=P^{-1}(Px)\prec Px\); hence, by the Schur-convexity of F, we obtain
$$\begin{aligned} F(Px)\le F(x)\quad \text {and}\quad F(x)\le F(Px), \end{aligned}$$
and consequently \(F(Px)=F(x)\). Applying Lemma 1 we infer that
$$\begin{aligned} f_j=f+c_j,\quad j=1,\ldots ,n, \end{aligned}$$
for some function \(f:D\rightarrow \mathbb R\) and constants \(c_1,\ldots ,c_n\in \mathbb R\). Clearly,
$$\begin{aligned} F(x_1,\ldots ,x_n)=\sum _{j=1}^{n}f(x_j)+\sum _{j=1}^{n}c_j. \end{aligned}$$
Put \(c:=c_1+\cdots +c_n\) and fix \(j\in \{1,\ldots ,n\},\ x, y, z\in D,\ t\in [0,1]\) arbitrarily. Then on account of the Schur-convexity of F and the fact that
$$\begin{aligned} (tx+(1-t)y,(1-t)x+ty,z,\ldots ,z)\prec (x,y,z,\ldots ,z) \end{aligned}$$
we obtain
$$\begin{aligned} f(tx+(1-t)y)+f((1-t)x+ty)\le f(x)+f(y). \end{aligned}$$
Hence each \(f_j=f+c_j\) is Wright-convex, and \(f_i-f_j=c_i-c_j\) is constant.
\((ii)\Rightarrow (iii)\) By putting
$$\begin{aligned} \overline{c}_{i,j}:=f_j-f_i,\quad i,j\in \{1,\ldots ,n\}, \end{aligned}$$
(a constant by (ii)) and \(f:=f_1,\ c_j:=\overline{c}_{1,j},\ j=1,\ldots ,n\), we obtain the representation
$$\begin{aligned} f_j=f+c_j,\quad j=1,\ldots ,n. \end{aligned}$$
Obviously, since \(f=f_j-c_j\), f is convex in the sense of Wright and by Theorem 2 from [14] it has the form
$$\begin{aligned} f(x)=w(x)+a(x),\quad x\in D, \end{aligned}$$(2)
where \(w:D\rightarrow \mathbb R\) is a convex function and \(a:X\rightarrow \mathbb R\) is an additive map.
\((iii)\Rightarrow (i)\) Fix \(x, y\in D^n,\ x\prec y\). There exists a doubly stochastic matrix \(T=[t_{ij}]\) such that \(x=Ty\). Since
$$\begin{aligned} w(x_i)=w\Big (\sum _{j=1}^{n}t_{ij}y_j\Big )\le \sum _{j=1}^{n}t_{ij}w(y_j),\quad i=1,\ldots ,n, \end{aligned}$$
by the convexity of w, and \(\sum _{i=1}^{n}x_i=\sum _{i=1}^{n}y_i\) because the columns of T sum up to 1, then using the representation (2) we obtain
$$\begin{aligned} F(x)=\sum _{i=1}^{n}w(x_i)+a\Big (\sum _{i=1}^{n}x_i\Big )+\sum _{j=1}^{n}c_j\le \sum _{i=1}^{n}w(y_i)+a\Big (\sum _{i=1}^{n}y_i\Big )+\sum _{j=1}^{n}c_j=F(y). \end{aligned}$$
\(\square \)
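The implication (iii) \(\Rightarrow \) (i) can be observed numerically in the scalar, continuous case: taking w convex, a linear (hence additive) and shifted copies \(f_j=w+a+c_j\), the sum F from (1) cannot increase under averaging. The snippet below is only an illustration under these assumptions (all concrete choices are ours).

```python
# Hypothetical scalar instance of condition (iii): w convex, a linear
# (the continuous additive case), and shifted copies f_j = w + a + c_j.
w = lambda x: x * x          # convex
a = lambda x: -3.0 * x       # additive (here: linear)
cs = [0.0, 1.0, -2.0]
fs = [lambda x, c=c: w(x) + a(x) + c for c in cs]

F = lambda v: sum(fj(x) for fj, x in zip(fs, v))

y = [4.0, -1.0, 0.5]
m = sum(y) / len(y)
x = [m, m, m]   # x = Ty for T with all entries 1/3, hence x ≺ y
assert F(x) <= F(y) + 1e-12
```

The additive part contributes equally to F(x) and F(y) because the coordinate sums of x and y agree, so the inequality is carried entirely by the convex part w.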
As an immediate consequence of the implication \((i)\Rightarrow (ii)\) of the above theorem we obtain the following.
Corollary 1
Under the assumptions of the previous theorem, if a function \(F:D^n\rightarrow \mathbb R\) of the form
$$\begin{aligned} F(x_1,\ldots ,x_n)=\sum _{j=1}^{n}f_j(x_j) \end{aligned}$$
is Schur-convex, then its diagonalization, i.e. the function \(f:D\rightarrow \mathbb R\) of the form
$$\begin{aligned} f(x)=F(x,\ldots ,x)=\sum _{j=1}^{n}f_j(x), \end{aligned}$$
is Wright-convex.
3 The problem of Zsolt Páles
In this section we focus on a problem posed by Zsolt Páles concerning Wright-convexity. Throughout the section \(I\subset \mathbb R\) stands for an interval with nonempty interior and, for a number \(h>0\), we put
$$\begin{aligned} I_h:=\{x\in I:\ x+h\in I\}=I\cap (I-h). \end{aligned}$$
Clearly, \(I_h\) is an interval (possibly empty); moreover, for \(0<h_1<h_2\) we have
$$\begin{aligned} I_{h_2}\subset I_{h_1}. \end{aligned}$$
In the sequel the symbol LBV(I) will stand for the class of all functions \(f:I\rightarrow \mathbb R\) of locally bounded variation on I, i.e. such that
$$\begin{aligned} V_{a}^{b}(f)<\infty \quad \text {for all}\ [a,b]\subset I, \end{aligned}$$
where the total variation \(V_{a}^{b}(f)\) of a function \(f:[a,b]\rightarrow \mathbb R\) over [a, b] is defined by the formula
$$\begin{aligned} V_{a}^{b}(f):=\sup \Big \{\sum _{k=0}^{m-1}|f(x_{k+1})-f(x_k)|:\ (x_0,\ldots ,x_m)\in {\mathcal {P}}_{[a,b]}\Big \}, \end{aligned}$$
and \({\mathcal {P}}_{[a,b]}\) stands for the family of all partitions \(a=x_0<x_1<\cdots <x_m=b\) of the interval [a, b]. A function f is of bounded total variation on [a, b] if \(V_{a}^{b}(f)<\infty .\)
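The supremum in the definition of \(V_a^b(f)\) can be approximated from below by evaluating the sum over a fine uniform partition. The helper below (name ours) is such a lower estimate; for piecewise monotone functions it converges to the true variation.

```python
import math

def total_variation(f, a, b, n=10_000):
    """Lower estimate of V_a^b(f) from the uniform partition with n
    subintervals; the supremum over all partitions is the true value."""
    xs = [a + (b - a) * k / n for k in range(n + 1)]
    return sum(abs(f(xs[k + 1]) - f(xs[k])) for k in range(n))

# sin has total variation 4 over one full period.
assert abs(total_variation(math.sin, 0.0, 2 * math.pi) - 4.0) < 1e-4

# For a monotone function the sum telescopes to |f(b) - f(a)|.
assert abs(total_variation(math.exp, 0.0, 1.0) - (math.e - 1.0)) < 1e-9
```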
The concept of functions of bounded variation was introduced in 1881 by Camille Jordan [6], for real functions defined on a closed interval \([a,b]\subset \mathbb R\), in his study of Fourier series. It is well known that any function \(f\in LBV(I)\) can be decomposed as the difference \(f=f_1-f_2\), where \(f_1, f_2:I\rightarrow \mathbb R\) are increasing functions. The functions \(f_1, f_2:I\rightarrow \mathbb R\) can be given, for example, by the formulas
$$\begin{aligned} f_1(x):=V_{a}^{x}(f)+f(x),\qquad f_2(x):=V_{a}^{x}(f),\quad x\in I, \end{aligned}$$
where the point \(a\in I\) is fixed and, for \(x<a\), we put \(V_{a}^{x}(f):=-V_{x}^{a}(f)\). This representation is known as the Jordan decomposition.
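A concrete Jordan-type decomposition can be computed numerically: one valid choice of the two increasing functions (there are several, and the published formulas need not coincide with this one) is \(f_1=V_a^{\cdot }(f)+f\) and \(f_2=V_a^{\cdot }(f)\), which satisfy \(f=f_1-f_2\). The sketch below checks both the identity and the monotonicity on a sample grid.

```python
import math

def variation(f, a, x, n=4000):
    """Approximate V_a^x(f) on a uniform partition (assumes f is nice
    enough that uniform partitions nearly attain the supremum)."""
    xs = [a + (x - a) * k / n for k in range(n + 1)]
    return sum(abs(f(xs[k + 1]) - f(xs[k])) for k in range(n))

f = math.sin
a = 0.0
f1 = lambda x: variation(f, a, x) + f(x)  # increasing
f2 = lambda x: variation(f, a, x)         # increasing
# f = f1 - f2 by construction:
for x in (0.5, 2.0, 5.0):
    assert abs(f(x) - (f1(x) - f2(x))) < 1e-12
# monotonicity on a sample grid (up to the discretization tolerance):
grid = [0.3 * k for k in range(20)]
assert all(f1(u) <= f1(v) + 1e-5 for u, v in zip(grid, grid[1:]))
assert all(f2(u) <= f2(v) + 1e-5 for u, v in zip(grid, grid[1:]))
```

The increments of \(V_a^{\cdot }(f)\) dominate \(|f|\)-increments, which is exactly why both \(V_a^{\cdot }(f)+f\) and \(V_a^{\cdot }(f)\) are increasing.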
It is well known (see for instance [4]) that the Wright-convexity of a function \(f:I\rightarrow \mathbb R\) defined on a nonempty interval \(I\subset \mathbb R\) is equivalent to the condition that for an arbitrary \(h>0\) the function
$$\begin{aligned} I_h\ni x \longrightarrow f(x+h)-f(x) \end{aligned}$$
is increasing.
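This increment characterization is easy to test: for the convex (hence Wright-convex) function \(x\mapsto x^2\) the difference \(f(x+h)-f(x)=2xh+h^2\) is affine and increasing in x for every fixed \(h>0\). A minimal numerical check (names ours):

```python
f = lambda x: x * x  # convex, hence Wright-convex on R

def increments(f, h, xs):
    """Values of x -> f(x+h) - f(x) along the sample points xs."""
    return [f(x + h) - f(x) for x in xs]

xs = [-3.0 + 0.25 * k for k in range(25)]
for h in (0.1, 1.0, 2.5):
    d = increments(f, h, xs)
    assert all(u <= v + 1e-12 for u, v in zip(d, d[1:]))
```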
In 2004 during the 42nd International Symposium on Functional Equations Zs. Páles posed the following problem connected with the above property:
(Zs. Páles, [17], Problem 22) Let \(I\subset \mathbb R\) be an open interval. Given functions \(f, g:I\rightarrow \mathbb R\), characterize the situation (in terms of the properties, or decompositions, of f and g) when the function
$$\begin{aligned} I_h\ni x \longrightarrow g(x+h)-f(x) \end{aligned}$$(3)
is increasing for all \(h>0\).
The following proposition characterizes the pairs of functions (f, g) satisfying condition (3) as the solutions of a conditional functional inequality.
Proposition 1
Let \(I\subset \mathbb R\) be a nonempty interval and let \(f, g: I\rightarrow \mathbb R\). Then the function
$$\begin{aligned} I_h\ni x \longrightarrow g(x+h)-f(x) \end{aligned}$$
is increasing for all \(h>0\) if and only if the inequality
$$\begin{aligned} g(tx+(1-t)y)+f((1-t)x+ty)\le f(x)+g(y) \end{aligned}$$(4)
holds for all \(x, y\in I,\ x<y\), and all \(t\in (0,1)\).
Proof
First, assume that (3) holds. Pick \(x, y\in I,\ x<y,\ t\in (0,1)\) arbitrarily. Put \(h:=(1-t)(y-x)>0\). Obviously, \(x<y-h\) and by (3) we get
$$\begin{aligned} g(x+h)-f(x)\le g(y)-f(y-h), \end{aligned}$$
or equivalently,
$$\begin{aligned} g(tx+(1-t)y)+f((1-t)x+ty)\le f(x)+g(y). \end{aligned}$$
Conversely, suppose that the pair of functions (f, g) satisfies the inequality (4). Fix an \(h>0\) and \(x, y\in I_h,\ x<y\) arbitrarily. Since \(x+h, y \in (x,y+h)\) and \((x+h)+y=x+(y+h)\), there exists a number \(t\in (0,1)\) such that
$$\begin{aligned} x+h=tx+(1-t)(y+h)\quad \text {and}\quad y=(1-t)x+t(y+h). \end{aligned}$$
By (4), applied to the points x and \(y+h\), we get
$$\begin{aligned} g(x+h)+f(y)\le f(x)+g(y+h), \end{aligned}$$
or equivalently,
$$\begin{aligned} g(x+h)-f(x)\le g(y+h)-f(y), \end{aligned}$$
which means that the function \(I_h\ni x\longrightarrow g(x+h)-f(x)\) is increasing.
\(\square \)
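For a concrete pair satisfying the proposition, take \(f(x)=x\) and \(g(x)=2x\) (our illustrative choice): then \(g(x+h)-f(x)=x+2h\) is increasing for every \(h>0\). The snippet below also checks the conditional inequality in the form derived in the proof above, \(g(tx+(1-t)y)+f((1-t)x+ty)\le f(x)+g(y)\) for \(x<y\); this displayed form is our reading of the scraped text, not a verbatim quotation.

```python
# A sample pair for which x -> g(x+h) - f(x) is increasing for every
# h > 0: here g(x+h) - f(x) = x + 2h. (The pair is ours, chosen for
# illustration; any pair with inf g' >= sup f' works.)
f = lambda x: x
g = lambda x: 2.0 * x

# Monotonicity of x -> g(x+h) - f(x) on a sample grid:
for h in (0.2, 1.0, 3.0):
    vals = [g(x + h) - f(x) for x in (-2.0, -1.0, 0.0, 1.5, 4.0)]
    assert all(u <= v + 1e-12 for u, v in zip(vals, vals[1:]))

# The conditional inequality from the proof of Proposition 1:
#   g(t x + (1-t) y) + f((1-t) x + t y) <= f(x) + g(y)  for x < y.
for t in (0.0, 0.3, 0.5, 0.9, 1.0):
    for (x, y) in ((-1.0, 2.0), (0.0, 0.5), (1.0, 10.0)):
        lhs = g(t * x + (1 - t) * y) + f((1 - t) * x + t * y)
        assert lhs <= f(x) + g(y) + 1e-12
```

For this pair the gap between the two sides equals \(t(y-x)\ge 0\), so the inequality holds with equality only at \(t=0\).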
The idea of the proof of our main result is inspired by the method used in [10]. Definition 1 and Lemma 2 below are slight modifications of the Definition and Lemma 4 from [10].
Definition 1
Let \(f:I\rightarrow \mathbb R\) be fixed and let \(I_0\subset I\) be a subinterval of positive length. We say that f is decomposable on \(I_0\) if there exist a function \(f_0:I_0\rightarrow \mathbb R\) of locally bounded variation and an additive function \(a_0:\mathbb R\rightarrow \mathbb R\) such that \(a_0(\mathbb Q)=\{0\}\) and \(f=f_0+a_0\) on \(I_0\).
The proof of the following technical lemma is similar to the proof of Lemma 2 from [10], so it is omitted.
Lemma 2
Let \(f:I\rightarrow \mathbb R\) be a given function. Then
-
(i)
If f is decomposable on the subintervals \(I_1, I_2\) of I and the interval \(I_1\cap I_2\) has positive length then f is decomposable on \(I_1\cup I_2\).
-
(ii)
If \(\{I_n\}_{n\in \mathbb N}\) is an increasing sequence of subintervals of positive length of I on which f is decomposable then f is also decomposable on \(\bigcup _{k=1}^{\infty }I_k\).
-
(iii)
If, for all \(a, b\in I,\ a<b\), f is decomposable on \(\left[ a,\frac{a+b}{2}\right] \) then f is also decomposable on I.
In the proof of our main result we also use the following theorem, proved by de Bruijn in [3] (Theorem 6.1, p. 213).
Theorem 3
[3] Let f(x) have the period 1, and assume that, for any value of h, the difference \(\Delta _hf(x)\) has bounded variation over \(0\le x\le 1\). Then f(x) can be written in the form \(g(x)+H(x)\), where g(x) has bounded variation, and H(x) is additive.
The main result of the paper reads as follows.
Theorem 4
Let \(f, g:I\rightarrow \mathbb R\). If the mapping
$$\begin{aligned} I_h\ni x \longrightarrow g(x+h)-f(x) \end{aligned}$$
is increasing for all \(h>0\), then
$$\begin{aligned} f=f_0+a\quad \text {and}\quad g=g_0+a\ \ \text {on}\ I, \end{aligned}$$(5)
where \(f_0, g_0:I\rightarrow \mathbb R\) are functions of locally bounded variation, and \(a:\mathbb R\rightarrow \mathbb R\) is an additive function such that \(a(\mathbb Q)=\{0\}\). Moreover, under the assumption \(a(\mathbb Q)=\{0\}\) the decomposition (5) is unique.
Proof
By the assumption, for arbitrary \(h>0\) the functions
$$\begin{aligned} I_{2h}\ni x\longrightarrow g(x+2h)-f(x) \end{aligned}$$
as well as
$$\begin{aligned} I_{h}\ni x\longrightarrow g(x+h)-f(x) \end{aligned}$$
are increasing, which implies that the functions
$$\begin{aligned} \Delta _hf(x)=f(x+h)-f(x)=\big [g(x+2h)-f(x)\big ]-\big [g\big ((x+h)+h\big )-f(x+h)\big ] \end{aligned}$$(6)
are the differences of two increasing functions on the interval \(I_{2h}\) and consequently belong to the family \(LBV(I_{2h})\). We show that \(f=a_1+f_1\), where \(f_1\in LBV(I_0)\) for some interval \(I_0\subset I\) of positive length and \(a_1:\mathbb R\rightarrow \mathbb R\) is an additive function. The corresponding proof for the function g runs in a similar way. Changing x to \(x-h\) in the formula (6) for f we get that
$$\begin{aligned} \Delta _hf(x-h)=f(x)-f(x-h)=\big [g(x+h)-f(x-h)\big ]-\big [g(x+h)-f(x)\big ], \end{aligned}$$
so \(\Delta _hf\) is a difference of two increasing functions on the shifted interval as well.
Now, choose a number \(\delta >0\) in such a way that the interval \(I_0:=I_{-\delta }\cap I_\delta \) has a non-empty interior. We will show that f is decomposable on \(I_0\). On account of Lemma 2 it is enough to prove that for all \(a, b\in I_0,\ a<b\), f is decomposable on \(\left[ a,\frac{a+b}{2}\right] \). To see this, fix arbitrary \(a, b\in I_0,\ a<b\), and consider a map \(\phi :\mathbb R\rightarrow \mathbb R\) given by the formula
$$\begin{aligned} \phi (x)=a+\frac{b-a}{2}\,x,\quad x\in \mathbb R. \end{aligned}$$
Put \(J:=\phi ^{-1}(I_0)\). Clearly, \([0,2]\subset J\). Let us define a function \(f_1: J\rightarrow \mathbb R\) via the formula
$$\begin{aligned} f_1(x)=f(\phi (x))-x\big [f(\phi (1))-f(\phi (0))\big ],\quad x\in J. \end{aligned}$$
Obviously, \(f_1(0)=f_1(1)\). For any \(h>0\) and \(x\in J_h:=J\cap (J-h)\) we obtain
$$\begin{aligned} \Delta _hf_1(x)=\Delta _{\frac{(b-a)h}{2}}f\big (\phi (x)\big )-h\big [f(\phi (1))-f(\phi (0))\big ]. \end{aligned}$$
We have shown that \(\Delta _hf_1\in LBV(J_h)\). Let a function \(p:\mathbb R\rightarrow \mathbb R\) be the 1-periodic extension of the restriction \(f_1|_{[0,1]}\) (well defined because \(f_1(0)=f_1(1)\)), i.e.
$$\begin{aligned} p(x)=f_1(\{x\}),\quad x\in \mathbb R, \end{aligned}$$
and let us define a function \(f_2:J\rightarrow \mathbb R\) via the formula
$$\begin{aligned} f_2(x):=f_1(x)-p(x),\quad x\in J. \end{aligned}$$
Observe that for all \(x\in J\)
$$\begin{aligned} f_2(x)=f_1(x)-f_1(x-[x])=f_1(x)-f_1(\{x\}), \end{aligned}$$
where [x] and \(\{x\}\) stand for the integer and the fractional part of the number x, respectively. Since for all \(x, h\in [0,1]\) we have
$$\begin{aligned} \Delta _hp(x)={\left\{ \begin{array}{ll} \Delta _hf_1(x), &{} x+h\le 1,\\ f_1(x+h-1)-f_1(x), &{} x+h>1, \end{array}\right. } \end{aligned}$$
therefore, \(\Delta _h p\in LBV([0,1])\) for all \(h\in [0,1]\).
Now, we prove that \(\Delta _h p\in LBV(\mathbb R)\) for all \(h\in [0,1]\). To show it fix arbitrary \(c, d\in \mathbb R,\ c<d\) and \(h\in [0,1]\). Let us consider three cases:
-
1.
If \([c]+1<[d]\) then
$$\begin{aligned} V_c^d(\Delta _h p)= & {} V_c^{[c]+1}(\Delta _h p)+\sum _{j=[c]+1}^{[d]-1}V_j^{j+1}(\Delta _h p)+V_{[d]}^d(\Delta _h p) \\= & {} V_{\{c\}}^1(\Delta _h p)+\sum _{j=[c]+1}^{[d]-1} V_0^1(\Delta _h p)+V_0^{\{d\}}(\Delta _h p)<\infty . \end{aligned}$$ -
2.
If \([c]+1>[d]\) then \([c]=[d]\) and consequently, \(c, d\in ([c],[c]+1)\). Therefore,
$$\begin{aligned} V_c^d(\Delta _h p)=V_{\{c\}}^{\{d\}}(\Delta _h p)<\infty . \end{aligned}$$ -
3.
In the case where \([c]+1=[d]\) we obtain
$$\begin{aligned} V_c^d(\Delta _h p)= & {} V_c^{[c]+1}(\Delta _h p)+V_{[d]}^d(\Delta _h p) =V_{\{c\}}^1(\Delta _h p)+V_0^{\{d\}}(\Delta _h p)<\infty . \end{aligned}$$
Now, because for any \(h>0\) and natural number n such that \(n>h\) the formula
$$\begin{aligned} \Delta _hp(x)=p(x+h)-p(x)=p(x+h-n)-p(x)=\Delta _{h-n}p(x) \end{aligned}$$
holds by the 1-periodicity of p, \(\Delta _hp\in LBV(\mathbb R),\ h>0\). Finally, for \(h<0\) it is enough to apply the identity
$$\begin{aligned} \Delta _hp(x)=-\Delta _{-h}p(x+h),\quad x\in \mathbb R, \end{aligned}$$
to obtain that \(\Delta _hp\in LBV(\mathbb R),\ h\in \mathbb R\). Applying de Bruijn's theorem we infer that p is the sum of an additive function and a function of locally bounded variation which, due to the definition of p, implies that f also has this property on the interval \(\left[ a,\frac{a+b}{2}\right] \), i.e.
$$\begin{aligned} f(x)={\bar{f}}(x)+{\bar{a}}(x),\quad x\in \left[ a,\tfrac{a+b}{2}\right] , \end{aligned}$$
where \({\bar{f}}:\left[ a,\frac{a+b}{2}\right] \rightarrow \mathbb R\) is a function of bounded variation and \({\bar{a}}:\mathbb R\rightarrow \mathbb R\) is an additive function. Now, put
$$\begin{aligned} a_0(x):={\bar{a}}(x)-{\bar{a}}(1)x,\quad x\in \mathbb R, \end{aligned}$$
and
$$\begin{aligned} f_0(x):={\bar{f}}(x)+{\bar{a}}(1)x,\quad x\in \left[ a,\tfrac{a+b}{2}\right] . \end{aligned}$$
We obtain immediately from the definitions that \(a_0(\mathbb Q)=\{0\}\) and \(f=f_0+a_0\) on \(\left[ a,\frac{a+b}{2}\right] \), so f is decomposable on \(\left[ a,\frac{a+b}{2}\right] \) and, by Lemma 2, f is decomposable on I, i.e. \(f=f_0+a_1\) on I, where \(f_0\in LBV(I)\) and \(a_1:\mathbb R\rightarrow \mathbb R\) is additive with \(a_1(\mathbb Q)=\{0\}\).
Analogously, we can prove that g is decomposable on I i.e. there exist a function \(g_0\) with locally bounded variation and an additive function \(a_2:\mathbb R\rightarrow \mathbb R\) such that \(a_2(\mathbb Q)=\{0\}\) and \(g=g_0+a_2\) on I.
Having in mind that for all \(h>0\) the function
$$\begin{aligned} I_h\ni x\longrightarrow g(x+h)-f(x)=g_0(x+h)-f_0(x)+(a_2-a_1)(x)+a_2(h) \end{aligned}$$
is increasing, we infer that the additive function
$$\begin{aligned} a_2-a_1 \end{aligned}$$
is continuous, therefore it has the form
$$\begin{aligned} (a_2-a_1)(x)=cx,\quad x\in \mathbb R, \end{aligned}$$
for some constant \(c\in \mathbb R\). Since for all \(\alpha \in \mathbb Q\) we have
$$\begin{aligned} c\alpha =(a_2-a_1)(\alpha )=a_2(\alpha )-a_1(\alpha )=0, \end{aligned}$$
\(c=0\) and consequently \(a_1=a_2\).
For the proof of uniqueness of the representation (5) let us assume that
$$\begin{aligned} f=f_1+a_1=f_2+a_2\ \ \text {on}\ I, \end{aligned}$$
where \(f_1, f_2\in LBV(I)\) and \(a_1, a_2:\mathbb R\rightarrow \mathbb R\) are additive functions satisfying \(a_1(\mathbb Q)=a_2(\mathbb Q)=\{0\}\). Then
$$\begin{aligned} a_2-a_1=f_1-f_2\in LBV(I). \end{aligned}$$
Therefore, the additive function \(a_2-a_1\) has to be continuous, which together with \((a_2-a_1)(\mathbb Q)=\{0\}\) implies that \(a_1=a_2\), and consequently \(f_1=f_2\). The proof for the function g runs in a similar way. \(\square \)
The following example shows that the functions f and g in the statement of Theorem 4 need not be convex.
Example 1
Let \(f, g:\mathbb R\rightarrow \mathbb R\) be given by the formulas
An easy calculation shows that for arbitrary \(h>0\) the function
$$\begin{aligned} \mathbb R\ni x \longrightarrow g(x+h)-f(x) \end{aligned}$$
is increasing but neither f nor g is convex.
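The published formulas for f and g did not survive in this text, so, as a substitute, here is a pair of our own with the same features: both have sign-changing second derivative (hence are not convex), while \(g(x+h)-f(x)\) is increasing for every \(h>0\) because \(\inf g'=\tfrac{3}{2}\ge \tfrac{3}{4}=\sup f'\).

```python
import math

f = lambda x: 0.5 * x + 0.25 * math.cos(x)   # f'' = -cos(x)/4: not convex
g = lambda x: 2.0 * x + 0.5 * math.cos(x)    # g'' = -cos(x)/2: not convex

# Non-convexity: the midpoint value exceeds the chord value somewhere.
mid = lambda u, a, b: u(0.5 * (a + b)) - 0.5 * (u(a) + u(b))
assert mid(f, -1.0, 1.0) > 0.0   # midpoint above the chord
assert mid(g, -1.0, 1.0) > 0.0

# Yet x -> g(x+h) - f(x) is increasing for every h > 0.
xs = [-5.0 + 0.1 * k for k in range(100)]
for h in (0.05, 0.7, 2.0, 10.0):
    vals = [g(x + h) - f(x) for x in xs]
    assert all(u <= v + 1e-12 for u, v in zip(vals, vals[1:]))
```

This pair is only an illustration of the phenomenon stated in Example 1, not a reconstruction of the author's functions.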
Let us observe that if additionally we assume in Theorem 4 that for all \(h>0\) the function
$$\begin{aligned} I_h\ni x \longrightarrow f(x+h)-g(x) \end{aligned}$$
is also increasing, then we obtain the expected representation. Namely, the following theorem holds true.
Theorem 5
Let \(I\subset \mathbb R\) be an open interval and let \(f,g:I\rightarrow \mathbb R\). Then the following conditions are pairwise equivalent:
-
(i)
the mappings
$$\begin{aligned} I_h\ni x \longrightarrow f(x+h)-g(x) \ \ \text {and}\ \ I_h\ni x \longrightarrow g(x+h)-f(x) \end{aligned}$$are increasing for all \(h>0\);
-
(ii)
the inequality
$$\begin{aligned} g(tx+(1-t)y)+f((1-t)x+ty)\le g(x)+f(y) \end{aligned}$$holds for all \(x, y\in I\) and all \(t\in [0,1]\);
-
(iii)
a function \(F:I^2\rightarrow \mathbb R\) given by the formula
$$\begin{aligned} F(x,y)=f(x)+g(y) \end{aligned}$$is Schur-convex;
-
(iv)
both functions f and g are Wright-convex and
$$\begin{aligned} f(x)-g(x)=c,\quad x\in I, \end{aligned}$$for some constant \(c\in \mathbb R\);
-
(v)
there exist: a convex function \(w:I\rightarrow \mathbb R\), an additive function \(a:\mathbb R\rightarrow \mathbb R\) and a constant \(c\in \mathbb R\) such that
$$\begin{aligned} f(x)=a(x)+w(x)\quad \text {and}\quad g(x)=a(x)+w(x)+c,\ x\in I. \end{aligned}$$
Proof
\((i)\Rightarrow (ii)\) Fix \(x, y\in I\) and \(t\in (0,1)\) arbitrarily. Assume that \(x<y\) and put \(h:=(1-t)(y-x)\). From the assumption (i)
$$\begin{aligned} g(x+h)-f(x)\le g(y)-f(y-h), \end{aligned}$$
or equivalently,
$$\begin{aligned} g(tx+(1-t)y)+f((1-t)x+ty)\le f(x)+g(y). \end{aligned}$$
Analogously,
$$\begin{aligned} f(x+h)-g(x)\le f(y)-g(y-h), \end{aligned}$$
therefore,
$$\begin{aligned} f(tx+(1-t)y)+g((1-t)x+ty)\le g(x)+f(y). \end{aligned}$$
We have shown that for all \(x, y\in I\) and \(t\in [0,1]\) we have
$$\begin{aligned} g(tx+(1-t)y)+f((1-t)x+ty)\le g(x)+f(y), \end{aligned}$$
which implies (ii).
\((ii)\Rightarrow (iii)\) Assume that (ii) is fulfilled. We can rewrite the inequality in condition (ii) as
$$\begin{aligned} F(T(x,y))\le F(x,y),\quad x, y\in I, \end{aligned}$$
where
$$\begin{aligned} T=\begin{pmatrix} t &{} 1-t\\ 1-t &{} t \end{pmatrix} \end{aligned}$$
and \(t\in [0,1]\). Since any \(2\times 2\) doubly stochastic matrix is of the above form, (iii) holds.
The proofs of the implications: \((iii)\Rightarrow (iv)\) and \((iv)\Rightarrow (v)\) are direct consequences of Theorem 2.
\((v)\Rightarrow (i)\) Assume that
$$\begin{aligned} f(x)=a(x)+w(x)\quad \text {and}\quad g(x)=a(x)+w(x)+c,\quad x\in I, \end{aligned}$$
where \(a:\mathbb R\rightarrow \mathbb R\) is an additive function, \(w:I\rightarrow \mathbb R\) is a convex function and \(c\in \mathbb R\) is a constant. Because every convex function \(w:I\rightarrow \mathbb R\) has the property that for any \(h>0\) the function
$$\begin{aligned} I_h\ni x\longrightarrow w(x+h)-w(x) \end{aligned}$$
is increasing, and for any additive function \(a:\mathbb R\rightarrow \mathbb R\) we have
$$\begin{aligned} a(x+h)-a(x)=a(h),\quad x\in \mathbb R, \end{aligned}$$
we get condition (i). The proof of the theorem is finished. \(\square \)
References
Adamek, M.: Almost \(\lambda \)-convex and almost Wright convex functions. Math. Slov. 53(1), 67–73 (2003)
Arnold, B.C., Marshall, A.W., Olkin, I.: Inequalities: Theory of Majorization and Its Applications. Springer Series in Statistics, 2nd edn. Springer, New York (2011)
de Bruijn, N.G.: Functions whose differences belong to a given class. Nieuw Arch. Wisk. 2(23), 194–218 (1951)
Dragomir, S.S., Pečarić, J.E., Persson, L.E.: Some inequalities of Hadamard type. Sochow J. Math. 21, 335–341 (1995)
Hardy, G.H., Littlewood, J.E., Pólya, G.: Inequalities, 2nd edn. Cambridge University Press, Cambridge (1952); 1st edn. 1934
Jordan, C.: Sur la série de Fourier. Comptes Rendus Sci. Paris 92, 228–230 (1881)
Karamata, J.: Sur une inégalité rélative aux fonctions convexes. Publ. Math. Univ. Belgrade 1, 145–148 (1932)
Kominek, Z.: Convex Functions in Linear Spaces. With Polish and Russian summaries, Prace Naukowe Uniwersytetu Śla̧skiego w Katowicach [Scientific Publications of the University of Silesia], 1087. Uniwersytet Śla̧ski, Katowice (1989)
Kominek, Z.: On a problem of K. Nikodem. Arch. Math. (Basel) 50(3), 287–288 (1988)
Maksa, Gy., Páles, Zs.: Decomposition of higher-order Wright-convex functions. J. Math. Anal. Appl. 359(2), 439–443 (2009)
Mrowiec, J.: On the stability of Wright-convex functions. Aequ. Math. 65(1–2), 158–164 (2003)
Ng, C.T.: Functions generating Schur-convex sums. In: Walter, W. (ed.) General Inequalities 5 (Oberwolfach, 1986), International Series of Numerical Mathematics, vol. 80, pp. 433–438. Birkhäuser, Boston (1987)
Nikodem, K.: On some class of midconvex functions. Ann. Polon. Math. 50(2), 145–151 (1989)
Nikodem, K., Páles, Zs.: On approximately Jensen-convex and Wright-convex functions. Comptes Rendus Math. Rep. Sci. Can. 23(4), 141–147 (2001)
Olbryś, A.: On delta Schur-convex mappings. Publ. Math. Debr. 86(3–4), 313–323 (2015)
Olbryś, A.: On some inequalities equivalent to the Wright-convexity. J. Math. Inequal. 9(2), 449–461 (2015)
Páles, Zs.: Problem 22. In: Report of Meeting: The Forty-second International Symposium on Functional Equations, June 20–27, 2004, Opava, Czech Republic. Aequ. Math. 69, 164–200 (2005)
Páles, Zs.: An elementary proof for the decomposition theorem of Wright convex functions. Ann. Math. Sil. 34(1), 142–150 (2020)
Rodé, G.: Eine abstrakte Version des Satzes von Hahn-Banach. Arch. Math. (Basel) 31(5), 474–481 (1978/1979)
Schur, I.: Über eine Klasse von Mittelbildungen mit Anwendungen auf die Determinantentheorie. Sitzungsber. Berlin. Math. Ges. 22, 9–20 (1923)
Wright, E.M.: An inequality for convex functions. Am. Math. Monthly 61, 620–622 (1954)
Dedicated to Professor Maciej Sablik and Professor László Székelyhidi on the occasion of their 70th birthday.
Olbryś, A. Remarks on Wright-convex functions. Aequat. Math. 97, 1157–1171 (2023). https://doi.org/10.1007/s00010-023-01024-2