1 Introduction and main results

In 1950, Bonferroni [1] proposed a family of symmetric means of n variables \(x_{1},x_{2},\ldots ,x_{n}\) depending on two parameters \(p_{1}\), \(p_{2}\), defined as follows:

$$ B^{p_{1}, p_{2}}(\boldsymbol{x})= \Biggl(\frac{1}{n(n-1)}\sum ^{n}_{i,j=1,i \neq j} x^{p_{1}}_{i} x^{p_{2}}_{j} \Biggr)^{\frac{1}{p_{1}+p_{2}}}, $$
(1.1)

where \(\boldsymbol{x}=(x_{1},x_{2},\ldots ,x_{n})\), \(x_{i}\geq 0\), \(i=1,2, \dots ,n\), \(p_{1},p_{2} \geq 0\), and \(p_{1}+p_{2}\neq 0\).

More than half a century later, Beliakov et al. [2] gave a generalization of the Bonferroni mean by introducing three parameters \(p_{1}\), \(p_{2}\), \(p_{3}\):

$$ B^{p_{1},p_{2},p_{3}}(\boldsymbol{x})= \Biggl(\frac{1}{n(n-1)(n-2)}\sum ^{n} _{i,j,k=1,i\neq j\neq k} x^{p_{1}}_{i} x^{p_{2}}_{j}x^{p_{3}}_{k} \Biggr) ^{\frac{1}{p_{1}+p_{2}+p_{3}}}, $$
(1.2)

where \(\boldsymbol{x}=(x_{1},x_{2},\ldots ,x_{n})\), \(x_{i}\geq 0\), \(i=1,2,\dots ,n\), \(p_{1},p_{2},p_{3} \geq 0\), and \(p_{1}+p_{2}+p _{3} \neq 0\).
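
For readers who wish to experiment with these means numerically, the following Python sketch evaluates (1.1) and (1.2) directly from their definitions. The sum over \(i\neq j\neq k\) is interpreted, as is standard, as a sum over pairwise distinct indices; the function names and sample data are our own illustrative choices, not part of the original works.

```python
# Illustrative sketch (not from the cited papers): direct evaluation of the
# Bonferroni means (1.1) and (1.2). Sample data and names are arbitrary choices.
from itertools import permutations

def bonferroni2(x, p1, p2):
    """Two-parameter Bonferroni mean B^{p1,p2}(x) of (1.1)."""
    n = len(x)
    s = sum(x[i]**p1 * x[j]**p2 for i, j in permutations(range(n), 2))
    return (s / (n * (n - 1))) ** (1.0 / (p1 + p2))

def bonferroni3(x, p1, p2, p3):
    """Three-parameter Bonferroni mean B^{p1,p2,p3}(x) of (1.2)."""
    n = len(x)
    s = sum(x[i]**p1 * x[j]**p2 * x[k]**p3
            for i, j, k in permutations(range(n), 3))
    return (s / (n * (n - 1) * (n - 2))) ** (1.0 / (p1 + p2 + p3))

x = [1.0, 2.0, 3.0, 4.0]
print(bonferroni2(x, 1, 2), bonferroni3(x, 1, 2, 3))
```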

In 2012, Xia et al. [3] explored a dual form of the Bonferroni mean \(B^{p_{1},p_{2}}(\boldsymbol{x})\) obtained by replacing the summation with a product; the resulting symmetric mean, the so-called geometric Bonferroni mean, is defined as

$$ GB^{p_{1},p_{2}}(\boldsymbol{x})=\frac{1}{p_{1}+p_{2}}\prod ^{n}_{i,j=1,i \neq j}(p_{1} x_{i}+p_{2} x_{j})^{\frac{1}{n(n-1)}}, $$
(1.3)

where \(\boldsymbol{x}=(x_{1},x_{2},\ldots ,x_{n})\), \(x_{i} > 0\), \(i=1,2, \dots ,n\), \(p_{1},p_{2} \geq 0\), and \(p_{1}+p_{2}\neq 0\).

Following the idea of Beliakov et al. [2], Park and Kim [4] introduced a three-parameter version of the geometric Bonferroni mean \(GB^{p_{1},p_{2}}(\boldsymbol{x})\), called the generalized geometric Bonferroni mean:

$$ GB^{ p_{1},p_{2},p_{3}}(\boldsymbol{x})=\frac{1}{ p_{1}+p_{2}+p_{3}} \prod ^{n}_{i,j,k=1,i\neq j\neq k}(p_{1} x_{i}+p_{2} x_{j}+p_{3} x_{k})^{ \frac{1}{n(n-1)(n-2)}}, $$
(1.4)

where \(\boldsymbol{x}=(x_{1},x_{2},\ldots ,x_{n})\), \(x_{i}> 0, i=1,2, \dots ,n\), \(p_{1},p_{2},p_{3} \geq 0\), and \(p_{1}+p_{2}+p_{3} \neq 0\).
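
A companion numerical sketch for the geometric variants (1.3) and (1.4) is given below; logarithms are used to keep the products numerically stable. It also illustrates the reduction of (1.4) to the two-parameter mean (1.3) when \(p_{3}=0\), which is used later in Remark 1. Again, the product over \(i\neq j\neq k\) runs over pairwise distinct indices, and all names and data are our own illustrative choices.

```python
# Illustrative sketch (not from the cited papers): the geometric Bonferroni
# means (1.3) and (1.4), evaluated via logarithms; data values are arbitrary.
import math
from itertools import permutations

def geo_bonferroni2(x, p1, p2):
    """GB^{p1,p2}(x) of (1.3)."""
    n = len(x)
    log_prod = sum(math.log(p1 * x[i] + p2 * x[j])
                   for i, j in permutations(range(n), 2)) / (n * (n - 1))
    return math.exp(log_prod) / (p1 + p2)

def geo_bonferroni3(x, p1, p2, p3):
    """GB^{p1,p2,p3}(x) of (1.4)."""
    n = len(x)
    log_prod = sum(math.log(p1 * x[i] + p2 * x[j] + p3 * x[k])
                   for i, j, k in permutations(range(n), 3)) / (n * (n - 1) * (n - 2))
    return math.exp(log_prod) / (p1 + p2 + p3)

x = [1.0, 2.0, 3.0, 4.0]
# With p3 = 0, (1.4) collapses to (1.3): each pair (i, j) occurs n-2 times.
print(geo_bonferroni3(x, 2, 3, 0), geo_bonferroni2(x, 2, 3))
```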

It is well known that the Bonferroni mean and geometric Bonferroni mean have important applications in multicriteria decision-making problems and have led to many meaningful results. See, for example, Xu and Yager [5], Xia, Xu and Zhu [3, 6], Tian et al. [7], Dutta et al. [8], Liang et al. [9], and Liu [10].

We often associate the properties of means with their Schur convexity, since Schur convexity is a powerful tool for studying such properties, and numerous inequalities for means originate from it. Over the last twenty years, the Schur convexity of functions related to means has attracted the attention of many researchers. In particular, many remarkable inequalities can be found in the literature [11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38] via convexity and Schur convexity theory. In this paper, we are interested in a generalization of Schur convexity, called Schur m-power convexity, which contains as particular cases the Schur convexity, Schur geometric convexity, and Schur harmonic convexity. Several papers have been written on this topic; for instance, Yang [39,40,41] discussed the Schur m-power convexity of Stolarsky means, Gini means, and Daróczy means, respectively. Wang and Yang [42, 43] studied the Schur m-power convexity of the generalized Hamy symmetric function and some other symmetric functions. Wang, Fu, and Shi [44], Yin, Shi, and Qi [45], and Kumar and Nagaraja [46] investigated the Schur m-power convexity of some special means of two variables. Perla and Padmanabhan [47] explored the Schur m-power convexity of the Bonferroni harmonic mean.

Besides the above-mentioned works, it is worth noting that the following results given recently by Shi and Wu [48, 49] are closely related to the topic of the present paper.

In [48], Shi and Wu investigated the Schur m-power convexity of the geometric Bonferroni mean \(GB^{p_{1},p_{2}}(\boldsymbol{x})\) and obtained the following results.

Proposition 1

Let \(p_{1}\), \(p_{2}\) be positive real numbers, and let \(n \geq 3\).

(i) If \(m\leq 0 \), then \(GB^{p_{1},p_{2}}(\boldsymbol{x})\) is Schur m-power convex on \(\mathbb{R}^{n}_{++}\);

(ii) If \(m\geq 2 \) or \(m=1 \), then \(GB^{p_{1},p_{2}}(\boldsymbol{x})\) is Schur m-power concave on \(\mathbb{R}^{n}_{++}\).

In [49], Shi and Wu discussed the Schur convexity, Schur geometric convexity, and Schur harmonic convexity of the generalized geometric Bonferroni mean involving three parameters \(GB^{p_{1},p_{2},p_{3}}( \boldsymbol{x})\). They proved the following results.

Proposition 2

Let \(p_{1}\), \(p_{2}\), \(p_{3}\) be nonnegative real numbers with \(p_{1}+p_{2}+p _{3} \neq 0\), and let \(n \geq 3\). Then \(GB^{p_{1},p_{2},p_{3}}( \boldsymbol{x})\) is Schur concave, Schur geometric convex, and Schur harmonic convex on \(\mathbb{R}^{n}_{++}\).

Inspired by previous investigations, in this paper, we study the Schur m-power convexity of the generalized geometric Bonferroni mean \(GB^{p_{1},p_{2},p_{3}}(\boldsymbol{x})\). The obtained result provides a unified generalization of Propositions 1 and 2. Our main result is stated in the following theorem.

Theorem 1

Let \(p_{1}\), \(p_{2}\), \(p_{3}\) be nonnegative real numbers with \(p_{1}+p_{2}+p _{3} \neq 0\), and let \(n\geq 3\).

(i) If \(m\leq 0 \), then \(GB^{p_{1},p_{2},p_{3}}(\boldsymbol{x})\) is Schur m-power convex on \(\mathbb{R}^{n}_{++}\);

(ii) If \(m\geq 2 \) or \(m=1 \), then \(GB^{p_{1},p_{2},p_{3}}(\boldsymbol{x})\) is Schur m-power concave on \(\mathbb{R}^{n}_{++}\).

2 Preliminaries

We begin by introducing some definitions and lemmas, which will be used in the proofs of the main results.

Throughout the paper, \(\mathbb{R}\) denotes the set of real numbers, and \(\boldsymbol{x} = (x_{1}, x_{2},\ldots ,x_{n} )\) denotes an n-dimensional real vector; we also denote

$$\begin{aligned}& \mathbb{R}^{n} = \bigl\{ {\boldsymbol{x}: x_{1}, x_{2}, \ldots ,x _{n} \in (-\infty ,+\infty )} \bigr\} , \qquad \mathbb{R}^{n}_{+} = \bigl\{ {\boldsymbol{x}: x_{1}, x_{2}, \ldots ,x_{n} \in [0,+ \infty )} \bigr\} , \\& \mathbb{R}^{n}_{++} = \bigl\{ {\boldsymbol{x}: x_{1}, x_{2}, \ldots ,x_{n} \in (0,+\infty )} \bigr\} . \end{aligned}$$

Definition 1

(see [50])

Let \(\boldsymbol{x} = ( x_{1},x_{2},\ldots , x_{n })\) and \(\boldsymbol{y} = ( y_{1},y_{2},\ldots , y_{n }) \in \mathbb{R}^{n}\).

(i) x is said to be majorized by y (in symbols, \(\boldsymbol{x} \prec \boldsymbol{y}\)) if \(\sum_{i = 1}^{k} x_{[i]} \le \sum_{i = 1}^{k} y_{[i]}\) for \(k = 1,2,\ldots ,n - 1\) and \(\sum_{i = 1}^{n} x_{i} = \sum_{i = 1}^{n} y_{i}\), where \(x_{[1]}\ge x_{[2]}\ge \cdots \ge x_{[n]}\) and \(y_{[1]}\ge y_{[2]}\ge \cdots \ge y_{[n]}\) are rearrangements of x and y in descending order.

(ii) Let \(\varOmega \subset \mathbb{R}^{n}\). A function \(\psi : \varOmega \to \mathbb{R}\) is said to be Schur convex on Ω if \(\boldsymbol{x} \prec \boldsymbol{y}\) on Ω implies \(\psi ( \boldsymbol{x} ) \le \psi ( \boldsymbol{y} ) \); ψ is said to be a Schur concave function on Ω if −ψ is a Schur convex function on Ω.
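
The majorization test in Definition 1(i) is easy to implement; the following minimal Python sketch (our own helper, not taken from [50]) checks whether \(\boldsymbol{x} \prec \boldsymbol{y}\) for given vectors.

```python
# Minimal sketch (our own construction): numerical test of x ≺ y (Definition 1(i)).
def majorizes(y, x, tol=1e-12):
    """Return True if x is majorized by y, i.e. x ≺ y."""
    xs = sorted(x, reverse=True)
    ys = sorted(y, reverse=True)
    if abs(sum(xs) - sum(ys)) > tol:      # total sums must coincide
        return False
    px = py = 0.0
    for k in range(len(xs) - 1):          # partial sums of the k largest entries
        px += xs[k]
        py += ys[k]
        if px > py + tol:
            return False
    return True

# Example: (2, 2, 2) ≺ (1, 2, 3) ≺ (0, 2, 4); both checks print True.
print(majorizes([1, 2, 3], [2, 2, 2]), majorizes([0, 2, 4], [1, 2, 3]))
```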

The following result, known in the literature [50] as Schur's condition, provides a criterion for deciding whether a function of n variables is Schur convex.

Proposition 3

Let \(\varOmega \subset \mathbb{R} ^{n} \) be a symmetric convex set with nonempty interior \(\varOmega ^{0}\), and let \(\psi :\varOmega \to \mathbb{R} \) be continuous on Ω and differentiable in \(\varOmega ^{0}\). Then ψ is a Schur convex (Schur concave) function if and only if ψ is symmetric on Ω and

$$ \Delta _{1}:= ( x_{1} - x_{2} ) \biggl( \frac{\partial \psi (\boldsymbol{x})}{\partial x_{1}} - \frac{\partial \psi ( \boldsymbol{x})}{\partial x_{2} } \biggr) \ge 0\ (\leq 0) $$
(2.1)

for all \(\boldsymbol{x} \in \varOmega ^{0} \).

Schur m-power convex functions, introduced by Yang [39], are a generalization of Schur convex functions. In analogy with Schur's condition mentioned above, Yang [39] gave the following criterion for determining Schur m-power convexity.

Proposition 4

Let \(\varOmega \subset \mathbb{R}_{++}^{n}\) be a symmetric set with nonempty interior \(\varOmega ^{\circ }\), and let \(\psi :\varOmega \to \mathbb{R} \) be continuous on Ω and differentiable in \(\varOmega ^{\circ }\). Then ψ is Schur m-power convex (Schur m-power concave) on Ω if and only if ψ is symmetric on Ω and

$$ \Delta _{m}:=\frac{x_{1}^{m} - x_{2}^{m}}{m} \biggl(x_{1}^{1-m} \frac{ \partial \psi (\boldsymbol{x})}{\partial x_{1} } - x_{2}^{1-m} \frac{ \partial \psi (\boldsymbol{x})}{\partial x_{2} } \biggr) \ge 0\ ( \leq 0) \quad \textit{if }m\ne 0 $$
(2.2)

and

$$ \Delta _{0}:=(\log x_{1} - \log x_{2} ) \biggl(x_{1}\frac{\partial \psi (\boldsymbol{x})}{\partial x_{1} } - x_{2} \frac{\partial \psi ( \boldsymbol{x})}{\partial x_{2} } \biggr) \ge 0\ (\leq 0) \quad \textit{if }m=0 $$
(2.3)

for all \(\boldsymbol{x} \in \varOmega ^{\circ }\).
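
To make the criterion of Proposition 4 concrete, the sketch below approximates \(\Delta _{m}\) numerically using central-difference partial derivatives and evaluates it for the product function \(x_{1}x_{2}\cdots x_{n}\), which is Schur concave on \(\mathbb{R}^{n}_{++}\). This is only an illustrative sanity check of the sign conditions (2.2)–(2.3); the test function, step size, and sample point are our own choices.

```python
# Minimal sketch (our own construction): numerical approximation of Δ_m in
# (2.2)/(2.3), with partial derivatives replaced by central differences.
import math

def delta_m(psi, x, m, h=1e-6):
    """Approximate Δ_m for a symmetric function psi at a point x with x_i > 0."""
    def partial(i):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        return (psi(xp) - psi(xm)) / (2 * h)
    d1, d2 = partial(0), partial(1)
    if m != 0:
        return (x[0]**m - x[1]**m) / m * (x[0]**(1 - m) * d1 - x[1]**(1 - m) * d2)
    return (math.log(x[0]) - math.log(x[1])) * (x[0] * d1 - x[1] * d2)

prod = lambda v: math.prod(v)              # x1*x2*...*xn, Schur concave on R^n_{++}
print(delta_m(prod, [3.0, 1.0, 2.0], 1))   # expected ≤ 0 (Schur concave)
print(delta_m(prod, [3.0, 1.0, 2.0], -1))  # expected ≥ 0 (Schur harmonically convex)
```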

Finally, we introduce two majorization relations that will be used in the applications of the Schur convexity results in Sect. 4.

Lemma 1

(see [51])

If \(\boldsymbol{x}=( x_{1}, x_{2}, \ldots , x_{n} )\in \mathbb{R}^{n} _{++} \), \(\sum_{i=1}^{n}x_{i}=\varsigma >0\), and \(\epsilon \geq \varsigma \), then

$$ \biggl(\frac{\epsilon -x_{1}}{n\epsilon -\varsigma }, \frac{\epsilon -x_{2} }{n\epsilon -\varsigma }, \ldots , \frac{\epsilon -x_{n} }{n \epsilon -\varsigma } \biggr)\prec \biggl(\frac{x_{1}}{\varsigma }, \frac{x _{2}}{\varsigma }, \ldots , \frac{x_{n}}{\varsigma } \biggr). $$
(2.4)

Lemma 2

(see [51])

If \(\boldsymbol{x}=( x_{1}, x_{2}, \ldots , x_{n} )\in \mathbb{R}^{n} _{++} \), \(\sum_{i=1}^{n}x_{i}=\varsigma >0\), and \(\epsilon >0\), then

$$ \biggl(\frac{\epsilon +x_{1}}{n\epsilon +\varsigma }, \frac{\epsilon +x_{2} }{n\epsilon +\varsigma }, \ldots , \frac{\epsilon +x_{n} }{n \epsilon +\varsigma } \biggr)\prec \biggl(\frac{x_{1}}{\varsigma }, \frac{x _{2}}{\varsigma }, \ldots , \frac{x_{n}}{\varsigma } \biggr). $$
(2.5)
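
The majorization relations (2.4) and (2.5) can be verified numerically for any particular data; the short sketch below does so for one sample vector (the values of \(\boldsymbol{x}\) and ε are arbitrary illustrative choices).

```python
# Quick numerical sketch (our own check) of the majorizations (2.4) and (2.5).
def majorizes(y, x, tol=1e-12):
    xs, ys = sorted(x, reverse=True), sorted(y, reverse=True)
    partial_ok = all(sum(xs[:k]) <= sum(ys[:k]) + tol for k in range(1, len(xs)))
    return partial_ok and abs(sum(xs) - sum(ys)) <= tol

x = [1.0, 2.0, 3.0, 4.0]
sigma = sum(x)                       # ς = 10
eps = 12.0                           # ε ≥ ς for (2.4); any ε > 0 for (2.5)
n = len(x)
lhs_24 = [(eps - xi) / (n * eps - sigma) for xi in x]
lhs_25 = [(eps + xi) / (n * eps + sigma) for xi in x]
rhs = [xi / sigma for xi in x]
print(majorizes(rhs, lhs_24), majorizes(rhs, lhs_25))   # both expected True
```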

3 Proof of main results

Proof of Theorem 1

Recall the definition of the generalized geometric Bonferroni mean:

$$ GB^{ p_{1},p_{2},p_{3}}(\boldsymbol{x})=\frac{1}{ p_{1}+p_{2}+p_{3}} \prod ^{n}_{i,j,k=1,i\neq j\neq k}(p_{1} x_{i}+p_{2} x_{j}+p_{3} x_{k})^{ \frac{1}{n(n-1)(n-2)}}. $$

Taking the natural logarithm gives

$$ \log GB^{p_{1},p_{2},p_{3}}(\boldsymbol{x}) = \log \frac{1}{p_{1}+p _{2}+p_{3}}+ \frac{1}{n(n-1)(n-2)}Q, $$

where

$$\begin{aligned} Q =&\sum^{n}_{j,k=3,j \neq k} \bigl[\log (p_{1} x_{1}+p_{2} x_{j}+p_{3} x_{k})+\log (p_{1} x_{2}+p_{2} x_{j}+p_{3} x_{k}) \bigr] \\ &{}+\sum^{n}_{i,k=3,i \neq k} \bigl[\log (p_{1} x_{i}+p_{2} x_{1}+p_{3} x _{k})+\log (p_{1} x_{i}+p_{2} x_{2}+p_{3} x_{k}) \bigr] \\ &{}+\sum^{n}_{i,j=3,i \neq j} \bigl[\log (p_{1} x_{i}+p_{2} x_{j}+p_{3} x _{1})+\log (p_{1} x_{i}+p_{2}x_{j}+p_{3} x_{2}) \bigr] \\ &{}+\sum^{n}_{k=3}\bigl[\log (p_{1}x_{1}+p_{2}x_{2}+ p_{3} x_{k})+\log (p_{1}x _{2}+p_{2} x_{1}+ p_{3} x_{k})\bigr] \\ &{}+\sum^{n}_{j=3}\bigl[\log (p_{1}x_{1}+p_{2}x_{j}+ p_{3} x_{2})+\log (p_{1}x _{2}+p_{2} x_{j}+ p_{3} x_{1})\bigr] \\ &{}+\sum^{n}_{i=3}\bigl[\log (p_{1}x_{i}+p_{2}x_{1}+ p_{3} x_{2})+\log (p_{1}x _{i}+p_{2} x_{2}+ p_{3}x_{1})\bigr] \\ &{}+\sum^{n}_{i,j,k=3, i\neq j\neq k}\log (p_{1} x_{i}+p_{2}x_{j}+p _{3}x_{k}). \end{aligned}$$

Differentiating \(GB^{p_{1},p_{2},p_{3}}(\boldsymbol{x})\) with respect to \(x_{1}\) and \(x_{2}\), respectively, we have

$$\begin{aligned} &\frac{\partial GB^{p_{1},p_{2},p_{3}}(\boldsymbol{x})}{\partial x_{1}}\\ &\quad =\frac{GB ^{p_{1},p_{2},p_{3}}(\boldsymbol{x})}{n(n-1)(n-2)}\cdot \frac{\partial Q}{\partial x_{1}} \\ &\quad =\frac{GB^{p_{1},p_{2},p_{3}}(\boldsymbol{x})}{n(n-1)(n-2)} \Biggl[\sum^{n}_{j,k=3,j \neq k} \frac{p_{1}}{p_{1} x_{1}+p_{2} x_{j}+p_{3} x_{k}} +\sum^{n}_{i,k=3,i \neq k} \frac{p_{2}}{p_{1} x_{i}+p_{2} x_{1}+p_{3} x _{k}} \\ &\qquad {}+\sum^{n}_{i,j=3,i \neq j}\frac{p_{3}}{p_{1} x_{i}+p_{2} x_{j}+p_{3} x _{1}}+\sum ^{n}_{k=3} \biggl(\frac{p_{1}}{p_{1}x_{1}+p_{2}x_{2}+ p_{3} x _{k}}+ \frac{p_{2}}{p_{1}x_{2}+p_{2} x_{1}+ p_{3} x_{k}} \biggr) \\ &\qquad {}+\sum^{n}_{j=3} \biggl(\frac{p_{1}}{p_{1}x_{1}+p_{2}x_{j}+ p_{3} x_{2}}+ \frac{p _{3}}{p_{1}x_{2}+p_{2} x_{j}+ p_{3}x_{1}} \biggr) \\ &\qquad {}+\sum^{n}_{i=3} \biggl(\frac{p_{2}}{p_{1}x_{i}+p_{2}x_{1}+ p_{3} x_{2}}+ \frac{p _{3}}{p_{1}x_{i}+p_{2} x_{2}+ p_{3} x_{1}} \biggr) \Biggr], \\ &\frac{\partial GB^{p_{1},p_{2},p_{3}}(\boldsymbol{x})}{\partial x_{2}}\\ &\quad =\frac{GB ^{p_{1},p_{2},p_{3}}(\boldsymbol{x})}{n(n-1)(n-2)}\cdot \frac{\partial Q}{\partial x_{2}} \\ &\quad =\frac{GB^{p_{1},p_{2},p_{3}}(\boldsymbol{x})}{n(n-1)(n-2)} \Biggl[\sum^{n}_{j,k=3,j \neq k} \frac{p_{1}}{p_{1} x_{2}+p_{2} x_{j}+p_{3} x_{k}} +\sum^{n}_{i,k=3,i \neq k} \frac{p_{2}}{p_{1} x_{i}+p_{2} x_{2}+p_{3} x_{k}} \\ &\qquad {}+\sum^{n}_{i,j=3,i \neq j}\frac{p_{3}}{p_{1} x_{i}+p_{2} x_{j}+p_{3} x _{2}}+\sum ^{n}_{k=3} \biggl(\frac{p_{2}}{p_{1}x_{1}+p_{2}x_{2}+ p_{3} x _{k}}+ \frac{p_{1}}{p_{1}x_{2}+p_{2} x_{1}+ p_{3} x_{k}} \biggr) \\ &\qquad {}+\sum^{n}_{j=3} \biggl(\frac{p_{3}}{p_{1}x_{1}+p_{2}x_{j}+ p_{3} x_{2}}+ \frac{p _{1}}{p_{1}x_{2}+p_{2} x_{j}+ p_{3}x_{1}} \biggr) \\ &\qquad {}+\sum^{n}_{i=3} \biggl(\frac{p_{3}}{p_{1}x_{i}+p_{2}x_{1}+ p_{3} x_{2}}+ \frac{p _{2}}{p_{1}x_{i}+p_{2} x_{2}+ p_{3} x_{1}} \biggr) \Biggr]. \end{aligned}$$

It is easy to see that \(GB^{p_{1},p_{2},p_{3}}(\boldsymbol{x})\) is symmetric on \(\mathbb{R}^{n}_{++}\). Without loss of generality, we may assume that \(x_{1}\geq x_{2}\). Hence, for \(m\neq 0\) and \(n\geq 3 \), we have

$$\begin{aligned} \Delta _{m} =&\frac{x^{m}_{1} -x^{m}_{2}}{m} \biggl(x^{1-m}_{1} \frac{ \partial GB^{p_{1},p_{2},p_{3}}(\boldsymbol{x})}{\partial x_{1}}-x ^{1-m}_{2}\frac{\partial GB^{p_{1},p_{2},p_{3}}(\boldsymbol{x})}{ \partial x_{2}} \biggr) \\ =& \frac{(x^{m}_{1} -x^{m}_{2})GB^{p_{1},p_{2},p_{3}}(\boldsymbol{x})}{mn(n-1)(n-2)} \\ &{}\times \Biggl[p_{1}\sum^{n}_{j,k=3,j \neq k} \biggl(\frac{x_{1}^{1-m}}{p _{1} x_{1}+p_{2} x_{j}+p_{3} x_{k}}-\frac{x_{2}^{1-m}}{p_{1} x_{2}+p _{2} x_{j}+p_{3} x_{k}} \biggr) \\ &{}+p_{2}\sum^{n}_{i,k=3,i \neq k} \biggl( \frac{x_{1}^{1-m}}{p_{1} x_{i}+p _{2} x_{1}+p_{3} x_{k}}-\frac{x_{2}^{1-m}}{p_{1} x_{i}+p_{2} x_{2}+p _{3} x_{k}} \biggr) \\ &{}+p_{3}\sum^{n}_{i,j=3,i \neq j} \biggl( \frac{x_{1}^{1-m}}{p_{1} x_{i}+p _{2} x_{j}+p_{3} x_{1}}-\frac{x_{2}^{1-m}}{p_{1} x_{i}+p_{2} x_{j}+p _{3} x_{2}} \biggr) \\ &{}+\sum^{n}_{k=3} \biggl(\frac{p_{1}x_{1}^{1-m}-p_{2}x_{2}^{1-m}}{p_{1}x _{1}+p_{2}x_{2}+ p_{3} x_{k}} + \frac{p_{2}x_{1}^{1-m}-p_{1}x_{2}^{1-m}}{p _{1}x_{2}+p_{2} x_{1}+ p_{3} x_{k}} \biggr) \\ &{}+\sum^{n}_{j=3} \biggl(\frac{p_{1}x_{1}^{1-m}-p_{3}x_{2}^{1-m}}{p_{1}x _{1}+p_{2}x_{j}+ p_{3} x_{2}} + \frac{p_{3}x_{1}^{1-m}-p_{1}x_{2}^{1-m}}{p _{1}x_{2}+p_{2} x_{j}+ p_{3} x_{1}} \biggr) \\ &{}+\sum^{n}_{i=3} \biggl(\frac{p_{2}x_{1}^{1-m}-p_{3}x_{2}^{1-m}}{p_{1}x _{i}+p_{2}x_{1}+ p_{3} x_{2}} + \frac{p_{3}x_{1}^{1-m}-p_{2}x_{2}^{1-m}}{p _{1}x_{i}+p_{2} x_{2}+ p_{3} x_{1}} \biggr) \Biggr], \end{aligned}$$

and rearranging and collecting like terms, we get

$$\begin{aligned} \Delta _{m} =&\frac{(x^{m}_{1} -x^{m}_{2})GB^{p_{1},p_{2},p_{3}}( \boldsymbol{x})}{mn(n-1)(n-2)} \\ &{}\times \Biggl[p_{1}\sum^{n}_{j,k=3,j \neq k} \frac{p_{1}x_{1}x_{2}(x _{1}^{-m}-x_{2}^{-m}) +(p_{2}x_{j}+p_{3}x_{k})(x_{1}^{1-m}-x_{2}^{1-m})}{(p _{1} x_{1}+p_{2} x_{j}+p_{3} x_{k})(p_{1} x_{2}+p_{2} x_{j}+p_{3} x _{k})} \\ &{}+p_{2}\sum^{n}_{i,k=3,i \neq k} \frac{p_{2}x_{1}x_{2}(x_{1}^{-m}-x_{2} ^{-m})+(p_{1}x_{i}+p_{3}x_{k})(x_{1}^{1-m}-x_{2}^{1-m})}{(p_{1} x_{i}+p _{2} x_{1}+p_{3} x_{k})(p_{1} x_{i}+p_{2} x_{2}+p_{3} x_{k})} \\ &{}+p_{3}\sum^{n}_{i,j=3,i \neq j} \frac{p_{3}x_{1}x_{2}(x_{1}^{-m}-x_{2} ^{-m})+(p_{1}x_{i}+p_{2}x_{j})(x_{1}^{1-m}-x_{2}^{1-m})}{(p_{1} x_{i}+p _{2} x_{j}+p_{3} x_{1})(p_{1} x_{i}+p_{2} x_{j}+p_{3} x_{2})} \\ &{}+\sum^{n}_{k=3}\frac{(p_{1}^{2}+p_{2}^{2})x_{1}x_{2}(x_{1}^{-m}-x_{2} ^{-m}) +2p_{1}p_{2}(x_{1}^{2-m}-x_{2}^{2-m})+(p_{1}+p_{2})p_{3}x_{k}(x _{1}^{1-m}-x_{2}^{1-m})}{(p_{1}x_{1}+p_{2}x_{2}+ p_{3} x_{k})(p_{1}x _{2}+p_{2} x_{1}+ p_{3} x_{k})} \\ &{}+\sum^{n}_{j=3}\frac{(p_{1}^{2}+p_{3}^{2})x_{1}x_{2}(x_{1}^{-m}-x_{2} ^{-m}) +2p_{1}p_{3}(x_{1}^{2-m}-x_{2}^{2-m})+(p_{1}+p_{3})p_{2}x_{j}(x _{1}^{1-m}-x_{2}^{1-m})}{(p_{1}x_{1}+p_{2}x_{j}+ p_{3} x_{2})(p_{1}x _{2}+p_{2} x_{j}+ p_{3} x_{1})} \\ &{}+\sum^{n}_{i=3}\frac{(p_{2}^{2}+p_{3}^{2})x_{1}x_{2}(x_{1}^{-m}-x_{2} ^{-m})+2p_{2}p_{3}(x_{1}^{2-m}-x_{2}^{2-m})+(p_{2} +p_{3})p_{1}x_{i}(x _{1}^{1-m}-x_{2}^{1-m})}{(p_{1}x_{i}+p_{2}x_{1}+ p_{3} x_{2})(p_{1}x _{i}+p_{2} x_{2}+ p_{3} x_{1})} \Biggr]. \end{aligned}$$

For the case \(m=1\), the expression \(\Delta _{1}\) follows directly from the expression \(\Delta _{m}\) with \(m=1\), that is,

$$\begin{aligned} \Delta _{1} =&(x_{1} -x_{2}) \biggl( \frac{\partial GB^{p_{1},p_{2},p_{3}}( \boldsymbol{x})}{\partial x_{1}}-\frac{\partial GB^{p_{1},p_{2},p_{3}}( \boldsymbol{x})}{\partial x_{2}} \biggr) \\ =&- \frac{(x_{1} -x_{2})^{2}GB^{p_{1},p_{2},p_{3}}(\boldsymbol{x})}{n(n-1)(n-2)} \\ &{}\times \Biggl[\sum^{n}_{j,k=3,j \neq k} \frac{p_{1}^{2}}{(p_{1} x_{1}+p _{2}x_{j}+p_{3} x_{k})(p_{1} x_{2}+p_{2}x_{j}+p_{3} x_{k})} \\ &{}+\sum^{n}_{i,k=3,i \neq k}\frac{p_{2}^{2}}{(p_{1} x_{i}+p_{2} x_{1}+p _{3} x_{k})(p_{1} x_{i}+p_{2} x_{2}+p_{3} x_{k})} \\ &{}+\sum^{n}_{i,j=3,i \neq j}\frac{p_{3}^{2}}{(p_{1} x_{i}+p_{2} x_{j}+p _{3} x_{1})(p_{1} x_{i}+p_{2}x_{j}+p_{3} x_{2})} \\ &{}+\sum^{n}_{k=3}\frac{(p_{1}-p_{2})^{2}}{(p_{1}x_{1}+p_{2}x_{2}+ p_{3} x_{k})(p_{1}x_{2}+p_{2}x_{1}+ p_{3} x_{k})} \\ &{}+\sum^{n}_{j=3}\frac{(p_{1}-p_{3})^{2}}{(p_{1}x_{1}+p_{2}x_{j}+ p_{3} x_{2})(p_{1}x_{2}+p_{2} x_{j}+ p_{3}x_{1})} \\ &{}+\sum^{n}_{i=3}\frac{(p_{2}-p_{3})^{2}}{(p_{1}x_{i}+p_{2}x_{1}+ p_{3} x_{2})(p_{1}x_{i}+p_{2} x_{2}+ p_{3}x_{1})} \Biggr]. \end{aligned}$$

For the case \(m=0\), the expression \(\Delta _{0}\) can also be derived from \(\Delta _{m}\). Indeed, in view of the definitions of \(\Delta _{0}\) and \(\Delta _{m}\), we obtain \(\Delta _{0}\) by replacing \((x^{m}_{1} -x^{m}_{2})/m\) with \(\log x_{1} - \log x_{2}\) in the expression of \(\Delta _{m}\) and then setting \(m=0\). We deduce that

$$\begin{aligned} \Delta _{0} =&(\log x_{1} - \log x_{2}) \biggl(x_{1}\frac{\partial GB ^{p_{1},p_{2},p_{3}}(\boldsymbol{x})}{\partial x_{1}}-x_{2}\frac{ \partial GB^{p_{1},p_{2},p_{3}}(\boldsymbol{x})}{\partial x_{2}} \biggr) \\ =&\frac{(x_{1} -x_{2})(\log x_{1} - \log x_{2})GB^{p_{1},p_{2},p_{3}}( \boldsymbol{x})}{n(n-1)(n-2)} \\ &{}\times \Biggl[p_{1}\sum^{n}_{j,k=3,j \neq k} \frac{p_{2} x_{j}+p_{3} x _{k}}{(p_{1} x_{1}+p_{2} x_{j}+p_{3} x_{k})(p_{1} x_{2}+p_{2} x_{j}+p _{3} x_{k})} \\ &{}+p_{2}\sum^{n}_{i,k=3,i \neq k} \frac{p_{1}x_{i}+p_{3} x_{k}}{(p_{1} x _{i}+p_{2} x_{1}+p_{3} x_{k})(p_{1} x_{i}+p_{2} x_{2}+p_{3} x_{k})} \\ &{}+p_{3}\sum^{n}_{i,j=3,i \neq j} \frac{p_{1}x_{i}+p_{2} x_{j}}{(p_{1} x _{i}+p_{2} x_{j}+p_{3} x_{1})(p_{1} x_{i}+p_{2} x_{j}+p_{3} x_{2})} \\ &{}+\sum^{n}_{k=3}\frac{2p_{1}p_{2}(x_{1}+x_{2})+p_{3}(p_{1}+p_{2})x_{k}}{(p _{1}x_{1}+p_{2}x_{2}+ p_{3} x_{k})(p_{1}x_{2}+p_{2} x_{1}+ p_{3} x _{k})} \\ &{}+\sum^{n}_{j=3}\frac{2p_{3}p_{1}(x_{1}+x_{2})+p_{2}(p_{1}+p_{3})x_{j}}{(p _{1}x_{1}+p_{2}x_{j}+ p_{3} x_{2})(p_{1}x_{2}+p_{2} x_{j}+ p_{3} x _{1})} \\ &{}+\sum^{n}_{i=3}\frac{2p_{2}p_{3}(x_{1}+x_{2})+p_{1}(p_{2}+p_{3})x_{i}}{(p _{1}x_{i}+p_{2}x_{1}+ p_{3} x_{2})(p_{1}x_{i}+p_{2} x_{2}+ p_{3} x _{1})} \Biggr]. \end{aligned}$$

Now we are in a position to discuss the Schur m-power convexity of \(GB^{p_{1},p_{2},p_{3}}(\boldsymbol{x})\).

If \(m <0 \), then by the assumption \(x_{1}\geq x_{2}\) we have \(x^{m}_{1} -x^{m}_{2} \leq 0\), so that \((x^{m}_{1} -x^{m}_{2})/m \geq 0\), and, moreover, \(x^{-m}_{1} -x^{-m}_{2} \geq 0\), \(x^{1-m}_{1} -x^{1-m}_{2} \geq 0\), and \(x^{2-m}_{1} -x^{2-m}_{2}\geq 0\). Thus \(\Delta _{m} \geq 0\). From Proposition 4 it follows that \(GB^{p_{1},p_{2},p_{3}}(\boldsymbol{x})\) is Schur m-power convex for \(\boldsymbol{x}\in \mathbb{R}^{n}_{++}\).

If \(m\geq 2 \), then by the assumption \(x_{1}\geq x_{2}\) we have \(x^{m}_{1} -x^{m}_{2} \geq 0\), so that \((x^{m}_{1} -x^{m}_{2})/m \geq 0\), whereas \(x^{-m}_{1} -x^{-m}_{2} \leq 0\), \(x^{1-m}_{1} -x^{1-m}_{2} \leq 0\), and \(x^{2-m}_{1} -x^{2-m}_{2} \leq 0\). Thus \(\Delta _{m} \leq 0\). By Proposition 4 we conclude that \(GB^{p_{1},p_{2},p_{3}}(\boldsymbol{x})\) is Schur m-power concave for \(\boldsymbol{x}\in \mathbb{R}^{n}_{++}\).

If \(m=1\), then it is obvious that \(\Delta _{1}\leq 0\). Hence \(GB^{p_{1},p_{2},p_{3}}(\boldsymbol{x})\) is Schur m-power concave for \(\boldsymbol{x}\in \mathbb{R}^{n}_{++}\).

If \(m=0\), then it is easy to observe that \(\Delta _{0} \geq 0\). Thus \(GB^{p_{1},p_{2},p_{3}}(\boldsymbol{x})\) is Schur m-power convex for \(\boldsymbol{x}\in \mathbb{R}^{n}_{++}\).

The proof of Theorem 1 is completed. □
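
As an informal sanity check of Theorem 1 (not part of the proof), one can evaluate \(\Delta _{m}\) numerically for \(GB^{p_{1},p_{2},p_{3}}\) at a sample point and confirm the predicted signs: nonnegative for \(m\leq 0\) and nonpositive for \(m=1\) or \(m\geq 2\). The sketch below uses finite-difference derivatives; the point and parameter values are arbitrary choices of ours.

```python
# Numerical sanity check (our own, not a proof device) of the sign pattern in
# Theorem 1 for GB^{p1,p2,p3}, using finite-difference partial derivatives.
import math
from itertools import permutations

def gb3(x, p1, p2, p3):
    n = len(x)
    lp = sum(math.log(p1 * x[i] + p2 * x[j] + p3 * x[k])
             for i, j, k in permutations(range(n), 3)) / (n * (n - 1) * (n - 2))
    return math.exp(lp) / (p1 + p2 + p3)

def delta_m(psi, x, m, h=1e-6):
    def partial(i):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        return (psi(xp) - psi(xm)) / (2 * h)
    d1, d2 = partial(0), partial(1)
    if m != 0:
        return (x[0]**m - x[1]**m) / m * (x[0]**(1 - m) * d1 - x[1]**(1 - m) * d2)
    return (math.log(x[0]) - math.log(x[1])) * (x[0] * d1 - x[1] * d2)

x, (p1, p2, p3) = [3.0, 1.0, 2.0, 5.0], (1.0, 2.0, 0.5)
psi = lambda v: gb3(v, p1, p2, p3)
for m in (-2, -1, 0, 1, 2, 3):
    print(m, delta_m(psi, x, m))   # expected ≥ 0 for m ≤ 0, ≤ 0 for m = 1 and m ≥ 2
```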

Remark 1

The results of Proposition 1 follow directly from Theorem 1 by taking \(p_{3}=0\), since in this case \(GB^{p_{1},p_{2},p_{3}}(\boldsymbol{x})\) reduces to \(GB^{p_{1},p_{2}}(\boldsymbol{x})\). Recalling the definitions of Schur concave functions, Schur geometric convex functions, Schur harmonic convex functions, and Schur m-power convex (concave) functions [39,40,41], the assertions of Proposition 2 can be deduced by taking \(m=1\), \(m=0\), and \(m=-1\), respectively, in Theorem 1.

4 Applications

In this section, we utilize the results of Theorem 1 to establish two new inequalities involving the generalized geometric Bonferroni mean.

Theorem 2

Let \(p_{1},p_{2},p_{3}\) be nonnegative real numbers with \(p_{1}+p_{2}+p _{3} \neq 0\), and let \(\boldsymbol{x}=( x_{1}, x_{2}, \ldots , x_{n} )\), \(\sum_{i=1}^{n}x_{i}=\varsigma >0\), \(\epsilon \geq \varsigma \), and \(n\geq 3\). Then for \(\boldsymbol{x}\in \mathbb{R}^{n}_{++}\), we have the inequality

$$ GB^{p_{1},p_{2},p_{3}} ({\epsilon -x_{1}}, {\epsilon -x_{2} }, \ldots , {\epsilon -x_{n} } ) \geq \biggl(\frac{n\epsilon }{\varsigma }-1 \biggr) GB^{p_{1},p_{2},p_{3}}(\boldsymbol{x}). $$
(4.1)

Proof

Using Theorem 1 with \(m=1\), we observe that \(GB^{p_{1},p_{2},p_{3}}( \boldsymbol{x})\) is Schur concave on \(\mathbb{R}^{n}_{++}\). By Lemma 1 we have

$$\biggl(\frac{\epsilon -x_{1}}{n\epsilon -\varsigma }, \frac{\epsilon -x_{2} }{n\epsilon -\varsigma }, \ldots , \frac{\epsilon -x_{n} }{n \epsilon -\varsigma } \biggr)\prec \biggl(\frac{x_{1}}{\varsigma }, \frac{x _{2}}{\varsigma }, \ldots , \frac{x_{n}}{\varsigma } \biggr). $$

Thus we deduce from Definition 1 that

$$GB^{p_{1},p_{2},p_{3}} \biggl(\frac{\epsilon -x_{1}}{n\epsilon -\varsigma }, \frac{\epsilon -x_{2} }{n\epsilon -\varsigma }, \ldots , \frac{ \epsilon -x_{n} }{n\epsilon -\varsigma } \biggr)\geq GB^{p_{1},p_{2},p _{3}} \biggl(\frac{x_{1}}{\varsigma }, \frac{x_{2}}{\varsigma }, \ldots , \frac{x_{n}}{\varsigma } \biggr), $$

that is, by the positive homogeneity of degree 1 of \(GB^{p_{1},p_{2},p_{3}}\),

$$\frac{GB^{p_{1},p_{2},p_{3}}({\epsilon -x_{1}}, {\epsilon -x_{2} }, \ldots , {\epsilon -x_{n} })}{n\epsilon -\varsigma }\geq \frac{GB ^{p_{1},p_{2},p_{3}}(x_{1}, x_{2}, \ldots , x_{n})}{\varsigma }, $$

which implies

$$GB^{p_{1},p_{2},p_{3}} ({\epsilon -x_{1}}, {\epsilon -x_{2} }, \ldots, {\epsilon -x_{n} } ) \geq \biggl(\frac{n\epsilon }{\varsigma }-1 \biggr) GB^{p_{1},p_{2},p_{3}}(\boldsymbol{x}). $$

Theorem 2 is proved. □

Theorem 3

Let \(p_{1},p_{2},p_{3}\) be nonnegative real numbers with \(p_{1}+p_{2}+p _{3} \neq 0\), and let \(\boldsymbol{x}=( x_{1}, x_{2}, \ldots , x_{n} )\), \(\sum_{i=1}^{n}x_{i}=\varsigma >0\), \(\epsilon > 0\), and \(n\geq 3\). Then for \(\boldsymbol{x}\in \mathbb{R}^{n}_{++}\), we have the inequality

$$ GB^{p_{1},p_{2},p_{3}} (\epsilon +x_{1},\epsilon +x_{2},\ldots , \epsilon +x_{n} ) \geq \biggl(\frac{n\epsilon }{\varsigma }+1 \biggr) GB ^{p_{1},p_{2},p_{3}}(\boldsymbol{x}). $$
(4.2)

Proof

By the majorization relation given in Lemma 2,

$$\biggl(\frac{\epsilon +x_{1}}{n\epsilon +\varsigma }, \frac{\epsilon +x_{2} }{n\epsilon +\varsigma }, \ldots , \frac{\epsilon +x_{n} }{n \epsilon +\varsigma } \biggr)\prec \biggl(\frac{x_{1}}{\varsigma }, \frac{x _{2}}{\varsigma }, \ldots , \frac{x_{n}}{\varsigma } \biggr), $$

and the Schur concavity of \(GB^{p_{1},p_{2},p_{3}}(\boldsymbol{x})\) (Theorem 1 with \(m=1\)) we obtain

$$GB^{p_{1},p_{2},p_{3}} \biggl(\frac{\epsilon +x_{1}}{n\epsilon +\varsigma }, \frac{\epsilon +x_{2} }{n\epsilon +\varsigma }, \ldots , \frac{ \epsilon +x_{n} }{n\epsilon +\varsigma } \biggr)\geq GB^{p_{1},p_{2},p _{3}} \biggl(\frac{x_{1}}{\varsigma }, \frac{x_{2}}{\varsigma }, \ldots , \frac{x_{n}}{\varsigma } \biggr), $$

that is, again by the positive homogeneity of degree 1 of \(GB^{p_{1},p_{2},p_{3}}\),

$$\frac{GB^{p_{1},p_{2},p_{3}}(\epsilon +x_{1},\epsilon +x_{2},\ldots , \epsilon +x_{n})}{n\epsilon +\varsigma }\geq \frac{GB^{p_{1},p_{2},p _{3}}(x_{1},x_{2},\ldots ,x_{n})}{\varsigma }, $$

and thus

$$GB^{p_{1},p_{2},p_{3}} (\epsilon +x_{1},\epsilon +x_{2},\ldots , \epsilon +x_{n} ) \geq \biggl(\frac{n\epsilon }{\varsigma }+1 \biggr) GB ^{p_{1},p_{2},p_{3}}(\boldsymbol{x}). $$

This completes the proof of Theorem 3. □
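
The inequalities (4.1) and (4.2) can likewise be spot-checked numerically; the following sketch does so for one sample vector and one choice of parameters (all values are arbitrary illustrative choices of ours).

```python
# Numerical spot check (our own, with arbitrary data) of inequalities (4.1) and (4.2).
import math
from itertools import permutations

def gb3(x, p1, p2, p3):
    n = len(x)
    lp = sum(math.log(p1 * x[i] + p2 * x[j] + p3 * x[k])
             for i, j, k in permutations(range(n), 3)) / (n * (n - 1) * (n - 2))
    return math.exp(lp) / (p1 + p2 + p3)

x = [1.0, 2.0, 3.0, 4.0]
p1, p2, p3 = 1.0, 2.0, 3.0
sigma, n = sum(x), len(x)           # ς = 10
eps = 11.0                          # ε ≥ ς for (4.1); any ε > 0 for (4.2)

lhs_41 = gb3([eps - xi for xi in x], p1, p2, p3)
rhs_41 = (n * eps / sigma - 1) * gb3(x, p1, p2, p3)
lhs_42 = gb3([eps + xi for xi in x], p1, p2, p3)
rhs_42 = (n * eps / sigma + 1) * gb3(x, p1, p2, p3)
print(lhs_41 >= rhs_41, lhs_42 >= rhs_42)   # both expected True
```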

5 Results and discussion

In this paper, we have presented the Schur m-power convexity properties of the generalized geometric Bonferroni mean involving three parameters \(p_{1}\), \(p_{2}\), \(p_{3}\),

$$GB^{ p_{1},p_{2},p_{3}}(\boldsymbol{x})=\frac{1}{ p_{1}+p_{2}+p_{3}} \prod ^{n}_{i,j,k=1,i\neq j\neq k}(p_{1} x_{i}+p_{2} x_{j}+p_{3} x_{k})^{ \frac{1}{n(n-1)(n-2)}}. $$

As applications, we establish two inequalities related to the generalized geometric Bonferroni mean.

6 Conclusion

We have discussed the Schur m-power convexity of the generalized geometric Bonferroni mean with three parameters. The results obtained generalize the previous results of [48, 49]. Our approach may have further applications in the theory of majorization.