1 Introduction

Convexity is one of the most important, natural, and fundamental concepts in mathematics [1–18]. The notion of a convex function was introduced by Jensen more than a century ago. Over the past few years, many generalizations and extensions of convexity have been proposed, for example, quasi-convexity [19], strong convexity [20, 21], approximate convexity [22], logarithmic convexity [23], midconvexity [24], pseudo-convexity [25], h-convexity [26], delta-convexity [27], s-convexity [28], preinvexity [29], GA-convexity [30], GG-convexity [31], coordinate strong convexity [32], and Schur convexity [33–44]. In particular, many remarkable inequalities established via convexity theory can be found in the literature [45–79].

In this article, we deal with strongly convex functions [20, 21].

Definition 1.1

Let Ψ be a real-valued function defined on an interval \(I\subseteq\mathbb{R}\) and c be a positive real number. Then Ψ is said to be strongly convex with modulus c on I if the inequality

$$ \varPsi\bigl(\eta u+(1-\eta)v\bigr)\leq\eta\varPsi(u)+(1-\eta)\varPsi(v)-c \eta (1-\eta) (u-v)^{2} $$
(1.1)

holds for all \(u,v\in I\) and \(\eta\in[0,1]\).
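As a quick numerical sanity check of Definition 1.1, inequality (1.1) can be tested pointwise; the sketch below (the helper name `satisfies_1_1` is ours, written only for illustration) uses \(\varPsi(x)=x^{2}\), for which the two sides of (1.1) with \(c=1\) agree identically, so Ψ is strongly convex with modulus 1.

```python
import random

def satisfies_1_1(psi, c, u, v, eta, tol=1e-12):
    """Check inequality (1.1) at a single triple (u, v, eta)."""
    lhs = psi(eta * u + (1 - eta) * v)
    rhs = eta * psi(u) + (1 - eta) * psi(v) - c * eta * (1 - eta) * (u - v) ** 2
    return lhs <= rhs + tol

random.seed(0)
psi = lambda x: x * x  # strongly convex with modulus c = 1: equality holds in (1.1)
ok = all(
    satisfies_1_1(psi, 1.0, random.uniform(-5, 5), random.uniform(-5, 5), random.random())
    for _ in range(10_000)
)
print(ok)  # True
```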

If Ψ is strongly convex with modulus c, then we clearly see that

$$ \varPsi ( v )+\varPsi_{+}^{\prime} ( v ) ( u-v )+c ( u-v )^{2}\le\varPsi ( u ). $$
(1.2)

Next we recall some basic concepts in the theory of majorization.

There is a natural but imprecise notion that the entries of an n-tuple ζ are more nearly equal, or less spread out, than the entries of an n-tuple δ. The concept arises in a variety of contexts, and it can be made precise in a number of ways. In remarkably many cases, however, the applicable statement is that δ majorizes ζ, meaning that the sum of the k largest entries of ζ does not exceed the sum of the k largest entries of δ for all \(k = 1,2,\ldots, n-1\), with equality for \(k = n\). More precisely, let \({\boldsymbol{\delta}}=(\delta_{1},\delta_{2},\ldots ,\delta_{n})\) and \({\boldsymbol{\zeta}}=(\zeta_{1},\zeta_{2},\ldots ,\zeta_{n})\) be two n-tuples of real numbers, and let

$$ \delta_{1}^{\downarrow}\geq\delta_{2}^{\downarrow }\geq \cdots\geq\delta_{n}^{\downarrow} ,\qquad \zeta_{1}^{\downarrow} \geq\zeta_{2}^{\downarrow}\geq\cdots\geq \zeta_{n}^{\downarrow} $$

be their decreasing rearrangements. Then the n-tuple δ is said to majorize ζ (or ζ is said to be majorized by δ), written \(\boldsymbol{\delta}\succ\boldsymbol{\zeta}\), if

$$ \sum_{j=1}^{k}\delta_{j}^{\downarrow} \geq\sum_{j=1}^{k}\zeta_{j}^{\downarrow} $$

holds for \(k=1,2,\ldots,n-1\), and

$$ \sum_{j=1}^{n}\delta_{j}=\sum _{j=1}^{n}\zeta_{j}. $$
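The definition above translates directly into code; the following sketch (the helper `majorizes` is hypothetical, written only for illustration) sorts both tuples in decreasing order and compares partial sums.

```python
def majorizes(delta, zeta, tol=1e-12):
    """Return True if delta majorizes zeta (delta > zeta in the majorization order)."""
    d = sorted(delta, reverse=True)
    z = sorted(zeta, reverse=True)
    if len(d) != len(z) or abs(sum(d) - sum(z)) > tol:
        return False  # the totals must coincide (the case k = n)
    partial_d = partial_z = 0.0
    for dj, zj in zip(d[:-1], z[:-1]):  # k = 1, 2, ..., n - 1
        partial_d += dj
        partial_z += zj
        if partial_d < partial_z - tol:
            return False
    return True

print(majorizes((3, 1, 0), (2, 1, 1)))  # True
print(majorizes((2, 1, 1), (3, 1, 0)))  # False
```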

Recently, many articles published in a wide variety of fields have been devoted to the theory of majorization. Indeed, the concept of majorization has been reinvented and used in many different areas of research, for instance as the Lorenz or dominance ordering in economics, in graph theory, and in optimization. Here we mention some useful contexts of majorization in the physical sciences. In physics and chemistry, the statements “δ is more chaotic than ζ,” “δ is more mixed than ζ,” and “δ is more disordered than ζ” are related to inequality orderings and to the majorization \(\boldsymbol{\delta}\prec \boldsymbol{\zeta}\). To explain the term “mixing,” consider indistinguishable cylindrical beakers containing different amounts of liquid: pouring liquid from a fuller beaker into one with a lesser amount “mixes” the contents, and the resulting vector of amounts is majorized by the original one. The term “chaotic” occurs in physical laws: to say that one vector is more chaotic than another means that the second majorizes the first, and the origin of this usage is closely related to entropy. In an analogous manner, one vector is said to be more random than another, which again means that one majorizes the other.

The following distinguished majorization theorem can be found in the literature [80].

Theorem 1.2

Let \([\lambda_{1},\xi_{1}]\subseteq\mathbb{R}\) be an interval, and \({\boldsymbol{\delta}}=(\delta_{1},\delta_{2},\ldots,\delta_{n})\) and \({\boldsymbol{\zeta}}=(\zeta_{1},\zeta_{2},\ldots,\zeta_{n})\) be two n-tuples such that \(\delta_{j},\zeta_{j}\in[\lambda _{1},\xi_{1}]\) for \(j=1,2,\ldots,n\). Then the inequality

$$ \sum_{j=1}^{n}\varPsi (\delta_{j} ) \geq\sum_{j=1}^{n}\varPsi (\zeta_{j} ) $$
(1.3)

holds for every continuous convex function \(\varPsi:[\lambda_{1},\xi _{1}]\rightarrow\mathbb{R}\) if and only if \(\boldsymbol{\delta}\succ \boldsymbol{\zeta}\).

A weighted version of Theorem 1.2 was proved by Fuchs [81].

Theorem 1.3

Let \({\boldsymbol{\delta}}=(\delta_{1},\delta_{2},\ldots,\delta _{n})\), \({\boldsymbol{\zeta}}=(\zeta_{1},\zeta_{2},\ldots,\zeta _{n})\) be two decreasing n-tuples with \(\delta_{j},\zeta_{j}\in[\lambda_{1},\xi_{1}]\) for \(j=1,2,\ldots,n\), and let \(\mathbf {{p}}=(p_{1},p_{2},\ldots,p_{n})\) be a real n-tuple such that

$$\begin{aligned} &\sum_{j=1}^{k}p_{j} \delta_{j}\geq\sum_{j=1}^{k}p_{j} \zeta_{j} \quad\textit{for }k=1,2,\ldots,n-1\quad\textit{and} \end{aligned}$$
(1.4)
$$\begin{aligned} &\sum_{j=1}^{n}p_{j} \delta_{j}=\sum_{j=1}^{n}p_{j} \zeta_{j}. \end{aligned}$$
(1.5)

Then, for every continuous convex function \(\varPsi:[\lambda_{1},\xi _{1}]\rightarrow\mathbb{R}\), we have

$$ \sum_{j=1}^{n}p_{j} \varPsi (\delta_{j} )\geq\sum_{j=1}^{n}p_{j} \varPsi (\zeta_{j} ). $$
(1.6)

The following theorem [82] is a weighted majorization theorem for certain n-tuples and positive weights.

Theorem 1.4

Let \(\varPsi:[\lambda_{1},\xi_{1}]\rightarrow\mathbb{R}\) be a continuous convex function on an interval \([\lambda_{1},\xi_{1}]\), p be a positive n-tuple, and δ, \(\boldsymbol{\zeta}\in[\lambda_{1},\xi_{1}]^{n}\) satisfy (1.4) and (1.5).

  1. (a)

    If the n-tuple ζ is decreasing, then inequality (1.6) holds.

  2. (b)

    If the n-tuple δ is increasing, then the reverse inequality in (1.6) holds.

Dragomir [83] proved a majorization theorem without using condition (1.4).

Theorem 1.5

Let \(\varPsi:[\lambda_{1},\xi_{1}]\rightarrow\mathbb{R}\) be a continuous convex function on an interval \([\lambda_{1},\xi_{1}]\). Suppose that δ, \({\boldsymbol{\zeta}}\in [\lambda_{1},\xi_{1}]^{n}\) and \(p_{j}\geq0\) for \(j=1,2,\ldots,n\). If \((\delta_{j}-\zeta_{j} )_{ (j=\overline{1,n} )}\) and \((\zeta_{j} )_{ (j=\overline{1,n} )}\) are nondecreasing (nonincreasing) and satisfy (1.5), then inequality (1.6) holds.

The main purpose of this article is to establish the majorization theorem for majorized n-tuples by using strongly convex functions and to give applications in the theory of majorization.

2 Main results

We start by establishing some inequalities which will be used to prove the majorization theorem for strongly convex functions.

Proposition 2.1

Let \(\varPsi:I\rightarrow\mathbb{R}\) be a strongly convex function with modulus c, and let \(u_{1},v_{1},x_{1},y_{1}\in I\) be such that \(u_{1}< v_{1}\leq y_{1}< x_{1}\). Then the following inequalities hold:

$$\begin{aligned} &\mathrm{(a)}\quad \frac{\varPsi(v_{1})-\varPsi(u_{1})}{v_{1}-u_{1}}-c(v_{1}-u_{1}) \leq \frac{\varPsi(x_{1})-\varPsi(u_{1})}{x_{1}-u_{1}}-c(x_{1}-u_{1}), \\ &\mathrm{(b)}\quad \frac{\varPsi(v_{1})-\varPsi(u_{1})}{v_{1}-u_{1}}-c(v_{1}-u_{1}) \leq \frac{\varPsi(x_{1})-\varPsi(v_{1})}{x_{1}-v_{1}}-c(x_{1}-v_{1}), \\ &\mathrm{(c)}\quad \frac{\varPsi(x_{1})-\varPsi(u_{1})}{x_{1}-u_{1}}-c(x_{1}-u_{1}) \leq \frac{\varPsi(x_{1})-\varPsi(v_{1})}{x_{1}-v_{1}}-c(x_{1}-v_{1}), \\ &\mathrm{(d)}\quad \frac{\varPsi(v_{1})-\varPsi(u_{1})}{v_{1}-u_{1}}-c(v_{1}-u_{1}) \leq \frac{\varPsi(x_{1})-\varPsi(y_{1})}{x_{1}-y_{1}}-c(x_{1}-y_{1}). \end{aligned}$$

Proof

(a) Since \(u_{1}< v_{1}< x_{1}\), we obtain

$$\begin{aligned} &v_{1}= \biggl(\frac{x_{1}-v_{1}}{x_{1}-u_{1}} \biggr)u_{1} + \biggl( \frac{v_{1}-u_{1}}{x_{1}-u_{1}} \biggr)x_{1}, \\ &\varPsi(v_{1})=\varPsi \biggl[ \biggl(\frac{x_{1}-v_{1}}{x_{1}-u_{1}} \biggr)u_{1} + \biggl(\frac{v_{1}-u_{1}}{x_{1}-u_{1}} \biggr)x_{1} \biggr]. \end{aligned}$$

By using strong convexity, we have

$$\begin{aligned} &\varPsi(v_{1})\leq\frac{x_{1}-v_{1}}{x_{1}-u_{1}}\varPsi(u_{1}) + \frac{v_{1}-u_{1}}{x_{1}-u_{1}}\varPsi(x_{1}) -c\frac{x_{1}-v_{1}}{x_{1}-u_{1}}\frac{v_{1}-u_{1}}{x_{1}-u_{1}} (u_{1}-x_{1} )^{2} \\ &\phantom{\varPsi(v_{1})}= \biggl(1-\frac{v_{1}-u_{1}}{x_{1}-u_{1}} \biggr)\varPsi(u_{1}) +\frac{v_{1}-u_{1}}{x_{1}-u_{1}} \varPsi(x_{1}) -c ({x_{1}-v_{1}} ) ({v_{1}-u_{1}} ), \\ &\varPsi(v_{1})-\varPsi(u_{1})\leq\frac{v_{1}-u_{1}}{x_{1}-u_{1}} \varPsi(x_{1}) -\frac{v_{1}-u_{1}}{x_{1}-u_{1}}\varPsi(u_{1}) -c ({x_{1}-v_{1}} ) ({v_{1}-u_{1}} ), \\ &\frac{\varPsi(v_{1})-\varPsi(u_{1})}{v_{1}-u_{1}}\leq\frac{\varPsi (x_{1})-\varPsi(u_{1})}{x_{1}-u_{1}}-c ({x_{1}-v_{1}} ), \\ &\frac{\varPsi(v_{1})-\varPsi(u_{1})}{v_{1}-u_{1}}\leq\frac{\varPsi(x_{1}) -\varPsi(u_{1})}{x_{1}-u_{1}} -c \biggl(1-\frac{v_{1}-u_{1}}{x_{1}-u_{1}} \biggr) ({x_{1}-u_{1}} ). \end{aligned}$$

Rearranging the above inequalities, we deduce (a).

(b) From (a) we can write

$$\begin{aligned} &\varPsi(v_{1})-\varPsi(u_{1})-c(v_{1}-u_{1})^{2} \\ &\quad\leq \biggl(\frac{v_{1}-u_{1}}{x_{1}-u_{1}} \biggr) \bigl\{ \varPsi(x_{1}) - \varPsi(u_{1})-c(x_{1}-u_{1})^{2} \bigr\} \\ &\quad= \biggl(\frac{v_{1}-u_{1}}{x_{1}-u_{1}} \biggr) \bigl\{ \varPsi(x_{1}) - \varPsi(v_{1})+\varPsi(v_{1})-\varPsi(u_{1})-c(x_{1}-u_{1})^{2} \bigr\} , \\ &\bigl\{ \varPsi(v_{1})-\varPsi(u_{1}) \bigr\} \biggl(1- \frac {v_{1}-u_{1}}{x_{1}-u_{1}} \biggr)-c(v_{1}-u_{1})^{2} \\ &\quad\leq \biggl(\frac{v_{1}-u_{1}}{x_{1}-u_{1}} \biggr) \bigl\{ \varPsi(x_{1}) - \varPsi(v_{1})-c(x_{1}-u_{1})^{2} \bigr\} , \\ &\bigl\{ \varPsi(v_{1})-\varPsi(u_{1}) \bigr\} (x_{1}-v_{1}) -c(v_{1}-u_{1})^{2}(x_{1}-u_{1}) \\ &\quad\leq(v_{1}-u_{1}) \bigl\{ \varPsi(x_{1})-\varPsi (v_{1})-c(x_{1}-u_{1})^{2} \bigr\} , \\ &(x_{1}-v_{1}) \biggl[\varPsi(v_{1})- \varPsi(u_{1}) -c(v_{1}-u_{1})^{2} \biggl(1-\frac{u_{1}-v_{1}}{x_{1}-v_{1}} \biggr) \biggr] \\ &\quad\leq(v_{1}-u_{1}) \bigl\{ \varPsi(x_{1})-\varPsi (v_{1})-c(x_{1}-u_{1})^{2} \bigr\} , \\ &(x_{1}-v_{1}) \bigl[\varPsi(v_{1})- \varPsi(u_{1})-c(v_{1}-u_{1})^{2} \bigr] \\ &\quad\leq(v_{1}-u_{1}) \bigl\{ \varPsi(x_{1})- \varPsi(v_{1}) \bigr\} -c(v_{1}-u_{1}) \bigl[(x_{1}-u_{1})^{2}-(v_{1}-u_{1})^{2} \bigr] \\ &\quad=(v_{1}-u_{1}) \bigl[ \bigl(\varPsi(x_{1})- \varPsi(v_{1}) \bigr) -c(x_{1}-v_{1}) (x_{1}-2u_{1}+v_{1}) \bigr] \\ &\quad\leq(v_{1}-u_{1}) \bigl[ \bigl(\varPsi(x_{1})- \varPsi(v_{1}) \bigr) -c(x_{1}-v_{1})^{2} \bigr]. \end{aligned}$$

Rearranging the above inequalities, we deduce (b).

(c) Now, we can write

$$\begin{aligned} &\frac{\varPsi(x_{1})-\varPsi(u_{1})-c(x_{1}-u_{1})^{2}}{x_{1}-u_{1}} \\ &\quad=\frac{ \{\varPsi(x_{1})-\varPsi(v_{1}) \} (x_{1}-v_{1})}{(x_{1}-u_{1})(x_{1}-v_{1})}+\frac{ \{\varPsi (v_{1})-\varPsi(u_{1}) \}(v_{1}-u_{1})}{(x_{1}-u_{1}) (v_{1}-u_{1})}-c(x_{1}-u_{1}). \end{aligned}$$

Using inequality (a), we obtain

$$\begin{aligned} &\frac{\varPsi(x_{1})-\varPsi(u_{1})-c(x_{1}-u_{1})^{2}}{x_{1}-u_{1}} \\ &\quad\leq\frac{x_{1}-v_{1}}{x_{1}-u_{1}} \biggl(\frac{\varPsi(x_{1})-\varPsi(v_{1})}{x_{1}-v_{1}} \biggr) + \biggl( \frac{\varPsi(x_{1})-\varPsi(u_{1})}{x_{1}-u_{1}} -c(x_{1}-u_{1})+c(v_{1}-u_{1}) \biggr) \\ &\qquad{}\times \biggl(\frac{v_{1}-u_{1}}{x_{1}-u_{1}} \biggr)-c(x_{1}-u_{1}), \\ &\frac{\varPsi(x_{1})-\varPsi(u_{1})-c(x_{1}-u_{1})^{2}}{x_{1}-u_{1}} \biggl(\frac{x_{1}-v_{1}}{x_{1}-u_{1}} \biggr) \\ &\quad\leq\frac{x_{1}-v_{1}}{x_{1}-u_{1}} \biggl(\frac{\varPsi(x_{1})-\varPsi(v_{1})}{x_{1}-v_{1}} \biggr) -c(x_{1}-u_{1}) \biggl(1-\frac {(v_{1}-u_{1})^{2}}{(x_{1}-u_{1})^{2}} \biggr) \\ &\quad=\frac{x_{1}-v_{1}}{x_{1}-u_{1}} \biggl(\frac{\varPsi(x_{1})-\varPsi(v_{1})}{x_{1}-v_{1}} \biggr)-\frac {c}{(x_{1}-u_{1})} \bigl\{ (x_{1}-u_{1})^{2}-(v_{1}-u_{1})^{2} \bigr\} \\ &\quad=\frac{x_{1}-v_{1}}{x_{1}-u_{1}} \biggl(\frac{\varPsi(x_{1})-\varPsi(v_{1})}{x_{1}-v_{1}} \biggr) -\frac{c(x_{1}-v_{1})}{(x_{1}-u_{1})} \bigl\{ (x_{1}-2u_{1}+v_{1}) \bigr\} \\ &\quad\leq\frac{x_{1}-v_{1}}{x_{1}-u_{1}} \biggl(\frac{\varPsi(x_{1})-\varPsi(v_{1})}{x_{1}-v_{1}} \biggr) -\frac{c(x_{1}-v_{1})}{(x_{1}-u_{1})}(x_{1}-v_{1}) \\ &\quad\leq\frac{\varPsi(x_{1})-\varPsi(v_{1})-c(x_{1}-v_{1})^{2}}{x_{1}-v_{1}} \biggl(\frac{x_{1}-v_{1}}{x_{1}-u_{1}} \biggr). \end{aligned}$$

Rearranging the above inequality, we deduce (c).

(d) If \(v_{1}=y_{1}\), then inequality (d) reduces to (b), so we may assume \(v_{1}< y_{1}\). Applying inequality (b) to the points \(u_{1}< v_{1}< y_{1}\), we obtain

$$ \frac{\varPsi(v_{1})-\varPsi(u_{1})}{v_{1}-u_{1}}-c(v_{1}-u_{1}) \leq \frac{\varPsi(y_{1})-\varPsi(v_{1})}{y_{1}-v_{1}}-c(y_{1}-v_{1}), $$
(2.1)

also, if \(v_{1}< y_{1}< x_{1}\), then we obtain

$$ \frac{\varPsi(y_{1})-\varPsi(v_{1})}{y_{1}-v_{1}}-c(y_{1}-v_{1}) \leq \frac{\varPsi(x_{1})-\varPsi(y_{1})}{x_{1}-y_{1}}-c(x_{1}-y_{1}), $$
(2.2)

from (2.1) and (2.2), we deduce (d). □

Remark 2.2

If Ψ is a strongly convex function with modulus c, then, by using inequalities in Proposition 2.1, we clearly see that the function \(\hbar(x,y)=\frac{\varPsi(x)-\varPsi(y)-c(x-y)^{2}}{x-y}\) is increasing in both variables.
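Remark 2.2 can be illustrated numerically; the sketch below (our choice: \(\varPsi(x)=x^{4}\) on \([1,2]\), which is strongly convex there with modulus \(c=1\) since \(\varPsi''(x)=12x^{2}\geq2\)) evaluates ℏ on a grid and verifies monotonicity in each variable.

```python
def hbar(psi, c, x, y):
    """The function from Remark 2.2, defined for x != y."""
    return (psi(x) - psi(y) - c * (x - y) ** 2) / (x - y)

psi = lambda t: t ** 4  # strongly convex on [1, 2] with modulus c = 1
grid = [1 + k / 100 for k in range(101)]
eps = 1e-9
# nondecreasing in the first variable
mono_x = all(
    hbar(psi, 1.0, grid[i + 1], y) >= hbar(psi, 1.0, grid[i], y) - eps
    for y in grid for i in range(100)
    if y not in (grid[i], grid[i + 1])
)
# nondecreasing in the second variable
mono_y = all(
    hbar(psi, 1.0, x, grid[i + 1]) >= hbar(psi, 1.0, x, grid[i]) - eps
    for x in grid for i in range(100)
    if x not in (grid[i], grid[i + 1])
)
print(mono_x and mono_y)
```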

Now we are in a position to prove the majorization theorem for strongly convex functions.

Theorem 2.3

Let \(\varPsi:[\lambda_{1},\xi_{1}]\rightarrow\mathbb{R}\) be a strongly convex function with modulus c. Suppose that \({\boldsymbol{\delta}}=(\delta_{1},\delta _{2},\ldots,\delta_{n})\) and \({\boldsymbol{\zeta}}=(\zeta_{1},\zeta _{2},\ldots,\zeta_{n})\) are n-tuples, \(\delta_{j},\zeta_{j}\in [\lambda_{1},\xi_{1}]\), \(j=1,2,\ldots,n\), and the n-tuple δ majorizes ζ. Then the following inequality holds:

$$ \sum_{j=1}^{n}\varPsi ( \delta_{j} )\geq\sum_{j=1}^{n}\varPsi (\zeta_{j} ) +c\sum_{j=1}^{n} ( \delta_{j}-\zeta_{j} )^{2}. $$
(2.3)

Proof

Without loss of generality, we may assume that

$$ \delta_{1}\geq\delta_{2}\geq\cdots\geq\delta_{n},\qquad \zeta_{1}\geq\zeta_{2}\geq\cdots\geq\zeta_{n} \quad\text{and}\quad \delta _{j}\neq\zeta_{j}\quad \text{for } j=1,2, \ldots,n. $$

Assume that \(d_{j}=\hbar(\delta_{j},\zeta_{j})=\frac{\varPsi(\delta _{j})-\varPsi(\zeta_{j})-c(\delta_{j}-\zeta_{j})^{2}}{\delta _{j}-\zeta_{j}}\). Since Ψ is strongly convex, Remark 2.2 implies that the sequence \(\{d_{j} \}_{j=1}^{n}\) is decreasing. Suppose that

$$ E_{j}=\sum_{k=1}^{j} \delta_{k}\quad \text{and}\quad F_{j}=\sum_{k=1}^{j} \zeta_{k}, \quad j=1,2,\ldots,n, E_{0}=F_{0}=0. $$

Since \(\boldsymbol{\delta}\succ\boldsymbol{\zeta}\), we have

$$ E_{n}=F_{n} \quad\text{and}\quad E_{j}\geq F_{j},\quad \text{for } j=1,2,\ldots,n-1. $$

Now, we can write

$$\begin{aligned} &\sum_{j=1}^{n}\varPsi(\delta_{j})- \sum_{j=1}^{n}\varPsi(\zeta_{j}) =\sum _{j=1}^{n} \bigl\{ \varPsi(\delta_{j})- \varPsi(\zeta_{j})-c(\delta _{j}-\zeta_{j})^{2}+c( \delta_{j}-\zeta_{j})^{2} \bigr\} \\ &\quad=\sum_{j=1}^{n} \biggl[ \biggl\{ \frac{\varPsi(\delta_{j})-\varPsi(\zeta _{j})-c(\delta_{j}-\zeta_{j})^{2}}{\delta_{j}-\zeta_{j}} \biggr\} (\delta_{j}-\zeta_{j}) +c( \delta_{j}-\zeta_{j})^{2} \biggr], \end{aligned}$$

which can be written as

$$ \sum_{j=1}^{n}\varPsi( \delta_{j})-\sum_{j=1}^{n}\varPsi( \zeta _{j})-c\sum_{j=1}^{n}( \delta_{j}-\zeta_{j})^{2} =\sum _{j=1}^{n}d_{j}(\delta_{j}- \zeta_{j}). $$
(2.4)

Clearly,

$$ E_{j}-E_{j-1}=\delta_{j} \quad\text{and}\quad F_{j}-F_{j-1}=\zeta_{j}\quad \text{for } j=1,2,\ldots,n. $$
(2.5)

Using (2.5) in (2.4), we get

$$\begin{aligned} &\sum_{j=1}^{n}\varPsi(\delta_{j})- \sum_{j=1}^{n}\varPsi(\zeta _{j})-c \sum_{j=1}^{n}(\delta_{j}- \zeta_{j})^{2} \\ &\quad=\sum_{j=1}^{n}d_{j} \bigl\{ (E_{j}-E_{j-1})-(F_{j}-F_{j-1}) \bigr\} \\ &\quad=\sum_{j=1}^{n}d_{j} (E_{j}-F_{j} )-\sum_{j=1}^{n}d_{j} (E_{j-1}-F_{j-1} ) \\ &\quad=\sum_{j=1}^{n}d_{j} (E_{j}-F_{j} )-\sum_{j=0}^{n-1}d_{j+1} (E_{j}-F_{j} ) \\ &\quad=\sum_{j=1}^{n-1}d_{j} (E_{j}-F_{j} )-\sum_{j=1}^{n-1}d_{j+1} (E_{j}-F_{j} ) \\ &\quad =\sum_{j=1}^{n-1} (d_{j}-d_{j+1} ) (E_{j}-F_{j} ). \end{aligned}$$
(2.6)

Since \(d_{j}\geq d_{j+1}\) and \(E_{j}\geq F_{j}\), so \(\sum_{j=1}^{n-1} (d_{j}-d_{j+1} ) (E_{j}-F_{j} )\geq 0\), hence from (2.6) we obtain (2.3). □
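Theorem 2.3 can be spot-checked numerically. In the sketch below (our construction: \(\zeta=\lambda\delta+(1-\lambda)\cdot\mathrm{mean}(\delta)\) is a doubly stochastic averaging of δ, so δ majorizes ζ and both tuples are decreasing), we take \(\varPsi(x)=e^{x}\) on \([0,2]\), which is strongly convex there with modulus \(c=1/2\) because \(\varPsi''(x)=e^{x}\geq1\).

```python
import math
import random

def check_2_3(delta, zeta, c):
    """Inequality (2.3) for psi = exp, with both tuples sorted decreasingly."""
    lhs = sum(math.exp(d) for d in delta)
    rhs = sum(math.exp(z) for z in zeta) + c * sum((d - z) ** 2 for d, z in zip(delta, zeta))
    return lhs >= rhs - 1e-12

random.seed(1)
ok = True
for _ in range(2000):
    delta = sorted((random.uniform(0, 2) for _ in range(5)), reverse=True)
    lam = random.random()
    m = sum(delta) / len(delta)
    zeta = [lam * d + (1 - lam) * m for d in delta]  # delta majorizes zeta
    ok = ok and check_2_3(delta, zeta, 0.5)
print(ok)
```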

Example 2.4

Let \(\lambda_{1},\xi_{1}\in\mathbb{R^{+}}\) with \(\lambda_{1}<\xi _{1}\) and \(\alpha,\kappa,\gamma\in[\lambda_{1},\xi_{1}]\) such that \(2\alpha,2\kappa,2\gamma,\alpha+\kappa,\kappa+\gamma,\gamma +\alpha\in[\lambda_{1},\xi_{1}]\). Then, for any \(c\leq\frac{1}{\xi_{1}^{3}}\), we have the inequality

$$ \frac{1}{2\alpha}+\frac{1}{2\kappa}+\frac{1}{2\gamma}\geq \frac {1}{\alpha+\kappa} +\frac{1}{\kappa+\gamma}+\frac{1}{\gamma+\alpha} +c \bigl\{ (\alpha- \kappa)^{2}+(\kappa-\gamma)^{2}+(\gamma-\alpha )^{2} \bigr\} . $$
(2.7)

Solution 2.5

Let \(\boldsymbol{\delta}=(2\alpha,2\kappa,2\gamma)\) and \(\boldsymbol {\zeta}=(\alpha+\kappa,\kappa+\gamma,\gamma+\alpha)\); clearly \(\boldsymbol{\delta}\succ\boldsymbol{\zeta}\) (for details see [84]). The function \(\varPsi(x)=\frac{1}{x}\), \(x\in[\lambda _{1},\xi_{1}]\), is strongly convex with modulus c for every \(c\leq\frac{1}{\xi _{1}^{3}}\). Therefore, using inequality (2.3), we obtain (2.7).
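A numerical spot-check of (2.7) (a sketch under our own choices: \([\lambda_{1},\xi_{1}]=[0.5,4]\), with α, κ, γ drawn from \([0.5,1.5]\) so that 2α, α + κ, etc. stay inside the interval, and \(c=1/\xi_{1}^{3}=1/64\); the helper `check_2_7` is illustrative only):

```python
import random

C = 1 / 4.0 ** 3  # modulus bound c = 1 / xi_1^3 with xi_1 = 4

def check_2_7(a, k, g, c=C):
    """Inequality (2.7) for a single triple (alpha, kappa, gamma)."""
    lhs = 1 / (2 * a) + 1 / (2 * k) + 1 / (2 * g)
    rhs = (1 / (a + k) + 1 / (k + g) + 1 / (g + a)
           + c * ((a - k) ** 2 + (k - g) ** 2 + (g - a) ** 2))
    return lhs >= rhs - 1e-12

random.seed(2)
ok = all(
    check_2_7(*(random.uniform(0.5, 1.5) for _ in range(3)))
    for _ in range(10_000)
)
print(ok)
```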

Example 2.6

Let \(\lambda_{1},\xi_{1}\in\mathbb{R^{+}}\) with \(\lambda_{1}<\xi _{1}\) and \(\alpha,\kappa,\gamma\in[\lambda_{1},\xi_{1}]\) such that \(\alpha+\kappa-\gamma, \kappa+\gamma-\alpha, \gamma+\alpha -\kappa, \alpha\kappa\gamma\in[\lambda_{1},\xi_{1}]\). Then, for any \(c\leq\frac{1}{2\xi_{1}^{2}}\), we have the inequality

$$\begin{aligned} &(\alpha+\kappa-\gamma) (\kappa+\gamma-\alpha) (\gamma+\alpha - \kappa) \\ &\quad\leq\alpha\kappa\gamma \exp \bigl[-c \bigl\{ (\kappa-\gamma)^{2}+( \gamma-\alpha )^{2}+(\alpha-\kappa)^{2} \bigr\} \bigr]. \end{aligned}$$
(2.8)

Solution 2.7

Let \(\boldsymbol{\delta}=(\alpha+\kappa-\gamma, \kappa+\gamma -\alpha, \gamma+\alpha-\kappa)\) and \(\boldsymbol{\zeta}=(\alpha, \kappa,\gamma)\); clearly \(\boldsymbol{\delta}\succ\boldsymbol{\zeta }\). The function \(\varPsi(x)=\log x\), \(x\in[\lambda_{1},\xi_{1}]\), is strongly concave with modulus c for every \(c\leq\frac{1}{2\xi_{1}^{2}}\), that is, \(-\log x\) is strongly convex with this modulus. Therefore, applying inequality (2.3) to \(-\log x\) and exponentiating, we obtain (2.8).
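Numerically, exponentiating inequality (2.3) applied to the strongly convex function \(-\log x\) turns the conclusion of Example 2.6 into a product bound; the sketch below tests it under our own choices: \([\lambda_{1},\xi_{1}]=[0.5,2]\), with α, κ, γ drawn from \([1,1.5]\) so that α + κ − γ etc. stay inside the interval, and \(c=1/(2\xi_{1}^{2})=1/8\).

```python
import math
import random

C = 1 / (2 * 2.0 ** 2)  # c = 1 / (2 * xi_1^2) with xi_1 = 2

def check_log_bound(a, k, g, c=C):
    """Product bound obtained by exponentiating (2.3) for -log."""
    d = (a + k - g, k + g - a, g + a - k)  # this tuple majorizes (a, k, g)
    q = (k - g) ** 2 + (g - a) ** 2 + (a - k) ** 2
    return d[0] * d[1] * d[2] <= a * k * g * math.exp(-c * q) + 1e-12

random.seed(3)
ok = all(
    check_log_bound(*(random.uniform(1.0, 1.5) for _ in range(3)))
    for _ in range(10_000)
)
print(ok)
```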

Example 2.8

Let \([-\frac{\pi}{6},\frac{\pi}{6} ]\) be an interval and \(\alpha_{j}\in [-\frac{\pi}{6},\frac{\pi}{6} ]\) such that \(2\alpha_{j}-\alpha_{j+1}\in [-\frac{\pi}{6},\frac{\pi }{6} ]\) for \(j=1,2,\ldots,n\), with \(\alpha_{n+1}=\alpha_{1}\). Then, for any \(c\leq\frac{\sqrt{3}}{4}\), one has

$$\begin{aligned} &\cos(2\alpha_{1}- \alpha_{2})+\cos(2\alpha_{2}-\alpha _{3})+\cdots+ \cos(2\alpha_{n}-\alpha_{1}) \\ & \quad\leq \cos\alpha_{1}+\cos\alpha_{2}+\cdots+\cos \alpha_{n} -c \bigl\{ (\alpha_{1}-\alpha_{2})^{2}+ \cdots+(\alpha_{n}-\alpha _{1})^{2} \bigr\} . \end{aligned}$$
(2.9)

Solution 2.9

Let \(\boldsymbol{\delta}=(2\alpha_{1}-\alpha_{2}, 2\alpha _{2}-\alpha_{3},\ldots, 2\alpha_{n}-\alpha_{1})\) and \(\boldsymbol{\zeta }=(\alpha_{1}, \alpha_{2},\ldots,\alpha_{n})\); clearly \(\boldsymbol {\delta}\succ\boldsymbol{\zeta}\) (for details see [84]). The function \(\varPsi(x)=\cos{x}\), \(x\in [-\frac{\pi}{6},\frac {\pi}{6} ]\), is strongly concave with modulus c for every \(c\leq\frac{\sqrt {3}}{4}\). Therefore, applying inequality (2.3) to \(-\cos x\), we obtain (2.9).
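Inequality (2.9) can likewise be spot-checked; in the sketch below (our choice: the \(\alpha_{j}\) are drawn from \([-\pi/18,\pi/18]\), so every \(2\alpha_{j}-\alpha_{j+1}\) stays in \([-\pi/6,\pi/6]\), and \(c=\sqrt{3}/4\); the helper `check_2_9` is illustrative only):

```python
import math
import random

C = math.sqrt(3) / 4

def check_2_9(alpha, c=C):
    """Inequality (2.9) for a cyclic tuple alpha, with alpha_{n+1} = alpha_1."""
    n = len(alpha)
    delta = [2 * alpha[j] - alpha[(j + 1) % n] for j in range(n)]
    lhs = sum(math.cos(d) for d in delta)
    rhs = (sum(math.cos(a) for a in alpha)
           - c * sum((alpha[j] - alpha[(j + 1) % n]) ** 2 for j in range(n)))
    return lhs <= rhs + 1e-12

random.seed(4)
ok = all(
    check_2_9([random.uniform(-math.pi / 18, math.pi / 18) for _ in range(6)])
    for _ in range(5000)
)
print(ok)
```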

Remark 2.10

We clearly see that inequalities (2.7), (2.8), and (2.9) improve the corresponding inequalities given in [84].

The weighted version of Theorem 2.3 is given in the following theorem. It can be viewed as a generalization of the majorization theorem.

Theorem 2.11

Let \({\boldsymbol{\delta}}=(\delta_{1},\delta_{2},\ldots,\delta_{n})\), \({\boldsymbol{\zeta}}=(\zeta_{1},\zeta_{2},\ldots,\zeta_{n})\) be two decreasing n-tuples and \(\mathbf{{p}}=(p_{1},p_{2},\ldots,p_{n})\) be a real n-tuple such that

$$\begin{aligned} &\sum_{j=1}^{k}p_{j} \delta_{j}\geq\sum_{j=1}^{k}p_{j} \zeta_{j} \quad\textit{for }k=1,2,\ldots,n-1,\quad\textit{and} \end{aligned}$$
(2.10)
$$\begin{aligned} & \sum_{j=1}^{n}p_{j} \delta_{j}=\sum_{j=1}^{n}p_{j} \zeta_{j}. \end{aligned}$$
(2.11)

Then, for every strongly convex function \(\varPsi:[\lambda_{1},\xi _{1}]\rightarrow\mathbb{R}\) with modulus c, we have

$$ \sum_{j=1}^{n}p_{j} \varPsi (\delta_{j} )\geq\sum_{j=1}^{n}p_{j} \varPsi (\zeta_{j} ) +c\sum_{j=1}^{n}p_{j} (\delta_{j}-\zeta_{j} )^{2}. $$
(2.12)

Proof

The idea of the proof is similar to the idea of the proof of Theorem 2.3. □
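A numerical illustration of Theorem 2.11 (our construction: for positive weights p and a decreasing δ, the contraction \(\zeta=\lambda\delta+(1-\lambda)m\) toward the p-weighted mean m is decreasing and satisfies (2.10) and (2.11)); again \(\varPsi(x)=e^{x}\) on \([0,2]\) with modulus \(c=1/2\).

```python
import math
import random

def check_2_12(p, delta, zeta, c=0.5):
    """Weighted inequality (2.12) for psi = exp."""
    lhs = sum(pj * math.exp(d) for pj, d in zip(p, delta))
    rhs = (sum(pj * math.exp(z) for pj, z in zip(p, zeta))
           + c * sum(pj * (d - z) ** 2 for pj, d, z in zip(p, delta, zeta)))
    return lhs >= rhs - 1e-12

random.seed(5)
ok = True
for _ in range(2000):
    p = [random.uniform(0.1, 2) for _ in range(5)]
    delta = sorted((random.uniform(0, 2) for _ in range(5)), reverse=True)
    lam = random.random()
    m = sum(pj * dj for pj, dj in zip(p, delta)) / sum(p)
    zeta = [lam * d + (1 - lam) * m for d in delta]  # satisfies (2.10) and (2.11)
    ok = ok and check_2_12(p, delta, zeta)
print(ok)
```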

In the following theorem we shall establish a more general inequality for strongly convex functions.

Theorem 2.12

Let \(\varPsi:[\lambda_{1},\xi_{1}]\rightarrow\mathbb{R}\) be a strongly convex function with modulus c. Suppose that \({\boldsymbol{\delta}}=(\delta_{1},\delta _{2},\ldots,\delta_{n})\), \({\boldsymbol{\zeta}}=(\zeta_{1},\zeta _{2},\ldots,\zeta_{n})\), and \(\mathbf{{p}}=(p_{1},p_{2},\ldots,p_{n})\) are n-tuples such that \(\delta_{j},\zeta_{j}\in[\lambda_{1},\xi _{1}]\) and \(p_{j}\geq0\) for \(j=1,2,\ldots,n\). Then the following inequality holds:

$$ \sum_{j=1}^{n}p_{j} \varPsi(\delta_{j})\geq\sum_{j=1}^{n}p_{j} \varPsi ( {\zeta_{j}} )+\sum_{j=1}^{n}p_{j} \varPsi_{+}^{\prime} ( {\zeta_{j}} ) ( \delta_{j}-\zeta_{j} )+c\sum_{j=1}^{n}p_{j} ( \delta_{j}-\zeta_{j} )^{2}. $$
(2.13)

Proof

Since Ψ is a strongly convex function, by (1.2) we have

$$ \varPsi ( \delta )\geq\varPsi ( \zeta )+\varPsi _{+}^{\prime} ( \zeta ) ( \delta-\zeta )+c ( \delta-\zeta )^{2}. $$
(2.14)

Taking \(\zeta=\zeta_{j}\) and \(\delta=\delta_{j}\) for \(j=1,2,\ldots,n\) in (2.14), we get

$$ \varPsi ( \delta_{j} )\geq\varPsi ( \zeta_{j} )+ \varPsi_{+}^{\prime} ( \zeta_{j} ) ( \delta_{j}- \zeta _{j} )+c ( \delta_{j}-\zeta_{j} )^{2}. $$
(2.15)

Multiplying (2.15) by \(p_{j}\geq0\) and summing over \(j=1,2,\ldots,n\), we get

$$ \sum_{j=1}^{n}p_{j} \varPsi(\delta_{j})\geq\sum_{j=1}^{n}p_{j} \varPsi ( {\zeta_{j}} )+\sum_{j=1}^{n}p_{j} \varPsi_{+}^{\prime} ( {\zeta_{j}} ) ( \delta_{j}-\zeta_{j} )+c\sum_{j=1}^{n}p_{j} ( \delta_{j}-\zeta_{j} )^{2}. $$
(2.16)

 □
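Inequality (2.13) requires no majorization hypothesis at all, which makes it easy to test on arbitrary tuples; below is a sketch with \(\varPsi(x)=e^{x}\) on \([0,2]\), so \(\varPsi_{+}^{\prime}(x)=e^{x}\) and \(c=1/2\) works.

```python
import math
import random

def check_2_13(p, delta, zeta, c=0.5):
    """Inequality (2.13) for psi = exp, where psi' = exp as well."""
    lhs = sum(pj * math.exp(d) for pj, d in zip(p, delta))
    rhs = sum(pj * (math.exp(z) + math.exp(z) * (d - z) + c * (d - z) ** 2)
              for pj, d, z in zip(p, delta, zeta))
    return lhs >= rhs - 1e-12

random.seed(6)
ok = all(
    check_2_13([random.uniform(0, 1) for _ in range(4)],
               [random.uniform(0, 2) for _ in range(4)],
               [random.uniform(0, 2) for _ in range(4)])
    for _ in range(10_000)
)
print(ok)
```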

Remark 2.13

By setting \(\zeta_{j}=\overline{\delta}=\sum_{j=1}^{n}p_{j}\delta _{j}\) (with \(\sum_{j=1}^{n}p_{j}=1\)) for \(j=1,2,\ldots,n\) in (2.13), we obtain Jensen’s inequality for strongly convex functions, which was already proved by Merentes and Nikodem in [20]. Moreover, by rearranging (2.13), replacing \(\zeta_{j}\) by \(\delta_{j}\), and setting \(\delta_{j}=\overline {\overline{\Delta}}=\frac{\sum_{j=1}^{n}p_{j}\delta_{j} \varPsi '_{+}(\delta_{j})}{\sum_{j=1}^{n}p_{j}\varPsi'_{+}(\delta_{j})}\in [\lambda_{1},\xi_{1}]\) for \(j=1,2,\ldots,n\), where \(\sum_{j=1}^{n}p_{j}\varPsi'_{+}(\delta_{j})\neq0\), we obtain Slater’s inequality for strongly convex functions.
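The strongly convex Jensen inequality mentioned in Remark 2.13 reads \(\sum_{j}p_{j}\varPsi(\delta_{j})\geq\varPsi(\overline{\delta})+c\sum_{j}p_{j}(\delta_{j}-\overline{\delta})^{2}\) for weights summing to 1; a quick numeric sketch (our choice: \(\varPsi(x)=e^{x}\) on \([0,2]\) with \(c=1/2\)):

```python
import math
import random

def jensen_strong(p, delta, c=0.5):
    """Jensen's inequality for a strongly convex function (psi = exp)."""
    total = sum(p)
    p = [pj / total for pj in p]  # normalize the weights to sum to 1
    mean = sum(pj * dj for pj, dj in zip(p, delta))
    lhs = sum(pj * math.exp(dj) for pj, dj in zip(p, delta))
    rhs = math.exp(mean) + c * sum(pj * (dj - mean) ** 2 for pj, dj in zip(p, delta))
    return lhs >= rhs - 1e-12

random.seed(7)
ok = all(
    jensen_strong([random.uniform(0.1, 1) for _ in range(5)],
                  [random.uniform(0, 2) for _ in range(5)])
    for _ in range(5000)
)
print(ok)
```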

Next, we prove a majorization theorem for positive weights under a monotonicity condition on a single tuple.

Theorem 2.14

Let \(\varPsi:[\lambda_{1},\xi_{1}]\rightarrow\mathbb{R}\) be a strongly convex function with modulus c. Suppose that \({\boldsymbol{\delta}}=(\delta_{1},\delta _{2},\ldots,\delta_{n})\), \({\boldsymbol{\zeta}}=(\zeta_{1},\zeta _{2},\ldots,\zeta_{n})\) and \(\mathbf{{p}}=(p_{1},p_{2},\ldots,p_{n})\) are n-tuples such that \(\delta_{j},\zeta_{j}\in[\lambda_{1},\xi _{1}]\), \(p_{j}\geq0\) for \(j=1,2,\ldots,n\) and they satisfy

$$\begin{aligned} &\sum_{j=1}^{k}p_{j} \delta_{j}\geq\sum_{j=1}^{k}p_{j} \zeta_{j} \quad\textit{for }k=1,2,\ldots,n-1 \quad\textit{and} \end{aligned}$$
(2.17)
$$\begin{aligned} &\sum_{j=1}^{n}p_{j} \delta_{j}=\sum_{j=1}^{n}p_{j} \zeta_{j}. \end{aligned}$$
(2.18)
  1. (a)

    If the n-tuple ζ is decreasing, then the following inequality holds:

    $$ \sum_{j=1}^{n}p_{j} \varPsi (\delta_{j} )\geq\sum_{j=1}^{n}p_{j} \varPsi (\zeta_{j} ) +c\sum_{j=1}^{n}p_{j} (\delta_{j}-\zeta_{j} )^{2}. $$
    (2.19)
  2. (b)

    If the n-tuple δ is increasing, then the following inequality holds:

    $$ \sum_{j=1}^{n}p_{j} \varPsi (\zeta_{j} )\geq\sum_{j=1}^{n}p_{j} \varPsi (\delta_{j} ) +c\sum_{j=1}^{n}p_{j} (\zeta_{j}-\delta_{j} )^{2}. $$
    (2.20)

Proof

(a) Since Ψ is strongly convex, it is in particular convex. As ζ is a decreasing n-tuple satisfying (2.17) and (2.18), the idea of [82, p. 32] gives \(\sum_{j=1}^{n}p_{j}\varPsi_{+}^{\prime} ( {\zeta_{j}} ) ( \delta _{j}-\zeta_{j} )\geq0\). Hence (2.13) can be written as

$$ \begin{aligned} &\sum_{j=1}^{n}p_{j} \varPsi (\delta_{j} )-\sum_{j=1}^{n}p_{j} \varPsi (\zeta_{j} ) -c\sum_{j=1}^{n}p_{j} (\delta_{j}-\zeta_{j} )^{2} \geq\sum _{j=1}^{n}p_{j}\varPsi_{+}^{\prime} ( {\zeta_{j}} ) ( \delta_{j}-\zeta_{j} )\geq0. \end{aligned} $$
(2.21)

From (2.21), we deduce (2.19).

Similarly we can prove part (b). □

The following theorem is in fact a generalization of Theorem 1.5 for strongly convex functions.

Theorem 2.15

Let \(\varPsi:[\lambda_{1},\xi_{1}]\rightarrow\mathbb{R}\) be a strongly convex function with modulus c. Suppose that \({\boldsymbol{\delta}}=(\delta_{1},\delta _{2},\ldots,\delta_{n})\), \({\boldsymbol{\zeta}}=(\zeta_{1},\zeta _{2},\ldots,\zeta_{n})\) and \(\mathbf{{p}}=(p_{1},p_{2},\ldots,p_{n})\) are n-tuples such that \(\delta_{j},\zeta_{j}\in[\lambda_{1},\xi _{1}]\), \(p_{j}\geq0\) for \(j=1,2,\ldots,n\). If \((\delta_{j}-\zeta _{j} )_{ (j=\overline{1,n} )}\) and \((\zeta _{j} )_{ (j=\overline{1,n} )}\) are decreasing (increasing) n-tuples and satisfy (2.18), then the following inequality holds:

$$ \sum_{j=1}^{n}p_{j} \varPsi (\delta_{j} )\geq\sum_{j=1}^{n}p_{j} \varPsi (\zeta_{j} ) +c\sum_{j=1}^{n}p_{j} (\delta_{j}-\zeta_{j} )^{2}. $$
(2.22)

Proof

The idea of the proof is similar to the idea of the proof of Theorem 2.14. □

Remark 2.16

Let all the assumptions of Theorem 2.15 hold. Furthermore, if Ψ is increasing and (2.17) is satisfied, then (2.22) holds.

The following theorem is in fact a generalization of [85, Theorem 1] for strongly convex functions.

Theorem 2.17

Let θ be a strictly increasing function from \((\lambda_{1},\xi _{1})\) onto \((\lambda_{2},\xi_{2})\), let \(\varPsi\circ\theta^{-1}\) be a strongly convex function with modulus c on \([\lambda_{2},\xi_{2}]\), let \(\mathbf{{p}}=(p_{1},p_{2},\ldots,p_{n})\) be a nonnegative n-tuple, and let \({\boldsymbol{\delta}}=(\delta_{1},\delta_{2},\ldots,\delta_{n})\) and \({\boldsymbol{\zeta}}=(\zeta_{1},\zeta_{2},\ldots,\zeta_{n})\) be n-tuples such that \(\delta_{j},\zeta_{j}\in(\lambda_{1},\xi _{1})\) \((j=1,2,\ldots,n)\) and they satisfy

$$\begin{aligned} &\sum_{j=1}^{k}p_{j} \theta(\delta_{j})\geq\sum_{j=1}^{k}p_{j} \theta(\zeta_{j})\quad \textit{for }k=1,2,\ldots,n-1\quad\textit{and} \end{aligned}$$
(2.23)
$$\begin{aligned} & \sum_{j=1}^{n}p_{j} \theta(\delta_{j})=\sum_{j=1}^{n}p_{j} \theta(\zeta_{j}). \end{aligned}$$
(2.24)

Then the following statements are true:

  1. (a)

    If the n-tuple ζ is decreasing and Ψ is a decreasing function, then

    $$ \sum_{j=1}^{n}p_{j} \varPsi (\delta_{j} )\geq\sum_{j=1}^{n}p_{j} \varPsi (\zeta_{j} ) +c\sum_{j=1}^{n}p_{j} \bigl(\theta(\delta_{j})-\theta(\zeta _{j}) \bigr)^{2}. $$
    (2.25)
  2. (b)

    If the n-tuple δ is increasing and Ψ is an increasing function, then

    $$ \sum_{j=1}^{n}p_{j}\varPsi ( \zeta_{j} )\geq\sum_{j=1}^{n}p_{j} \varPsi (\delta_{j} ) +c\sum_{j=1}^{n}p_{j} \bigl(\theta(\delta_{j})-\theta(\zeta _{j}) \bigr)^{2}. $$
    (2.26)

Proof

(a) Let \(a_{j}=\theta(\delta_{j})\), \(b_{j}=\theta(\zeta_{j})\), and \(\overline{\varPsi}=\varPsi\circ\theta^{-1}\), so that \(\overline{\varPsi}(a_{j})=\varPsi(\delta_{j})\) and \(\overline{\varPsi}(b_{j})=\varPsi(\zeta_{j})\). Applying (1.2) to the strongly convex function \(\overline{\varPsi}\), multiplying by \(p_{j}\), and summing over \(j=1,2,\ldots,n\), we have

$$ \sum_{j=1}^{n}p_{j} \overline{\varPsi} (a_{j} )-\sum_{j=1}^{n}p_{j} \overline{\varPsi} (b_{j} ) -c\sum_{j=1}^{n}p_{j} (a_{j}-b_{j} )^{2} \geq\sum _{j=1}^{n}p_{j}\overline{ \varPsi}_{+}^{\prime} ( {b_{j}} ) ( a_{j}-b_{j} ). $$
(2.27)

Since θ is increasing and the n-tuple ζ is decreasing, the n-tuple \((b_{j})\) is decreasing. Hence, by using (2.23) and (2.24) and the idea of the proof of Theorem 2.14, we have \(\sum_{j=1}^{n}p_{j}\overline{\varPsi }_{+}^{\prime} ( {b_{j}} ) ( a_{j}-b_{j} )\geq0\). Therefore, from (2.27) we obtain (2.25).

Similarly we can prove part (b). □

3 Results and discussion

In this article, we establish a monotonicity property for a function involving strongly convex functions, prove the classical majorization theorem for majorized n-tuples by using strongly convex functions, and give some applications of the majorization theorem. We also provide a general inequality for strongly convex functions which yields well-known results such as Jensen’s inequality and Slater’s inequality for strongly convex functions. Furthermore, we give some weighted versions of the majorization theorem for certain n-tuples.

4 Conclusion

We have established several majorization results for strongly convex functions and provided their applications. The given results improve previously known results. Our approach may have further applications in the theory of majorization.