1 Introduction

For \(x\in\mathbb{R}\), the error function \(\operatorname{erf}(x)\) is defined as

$$\operatorname{erf}(x)=\frac{2}{\sqrt{\pi}} \int_{0}^{x}e^{-t^{2}}\,dt. $$

The most important properties of this function are collected, for example, in [1, 2]. In the recent past, the error function has been a topic of recurring interest, and a great number of results on this subject have been reported in the literature [3–16]. Besides its role in probability, the error function also finds application in the field of heat conduction [17, 18].
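
The definition above is easy to verify numerically. The following sketch (our own; `erf_by_quadrature` is an ad hoc helper name, not from the paper) compares Simpson's-rule integration of \((2/\sqrt{\pi})e^{-t^{2}}\) with the library value of erf.

```python
import math

def erf_by_quadrature(x, n=1000):
    """Approximate erf(x) = (2/sqrt(pi)) * int_0^x exp(-t^2) dt by Simpson's rule."""
    if n % 2:           # Simpson's rule needs an even number of subintervals
        n += 1
    h = x / n
    s = 1.0 + math.exp(-x * x)          # endpoint weights
    for k in range(1, n):
        t = k * h
        s += (4 if k % 2 else 2) * math.exp(-t * t)
    return (2.0 / math.sqrt(math.pi)) * (h / 3.0) * s

print(erf_by_quadrature(1.0), math.erf(1.0))  # both ≈ 0.8427007929
```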

In 1933, Aumann [19] introduced a generalized notion of convexity, the so-called MN-convexity, where M and N are mean values. A function \(f:[0,\infty)\to[0,\infty)\) is MN-convex if \(f(M(x,y))\leq N(f(x),f(y))\) for all \(x,y\in[0,\infty)\). The usual convexity is the special case in which M and N are both arithmetic means. Furthermore, the applications of MN-convexity reveal a wealth of beautiful inequalities involving a broad range of functions, from elementary ones, such as the sine and cosine functions, to special ones, such as the Γ function, the Gaussian hypergeometric function, and the Bessel functions. For details on MN-convexity and its applications the reader is referred to [20–25].

Let \(\lambda\in(0,1)\). We define \(A(x,y;\lambda)=\lambda x+(1-\lambda)y\), \(G(x,y;\lambda)=x^{\lambda}y^{1-\lambda}\), \(H(x,y;\lambda)=\frac{xy}{\lambda y+(1-\lambda)x}\), \(M_{r}(x,y;\lambda)=(\lambda x^{r}+(1-\lambda)y^{r})^{1/r}\) (\(r\neq0\)), and \(M_{0}(x,y;\lambda)=x^{\lambda}y^{1-\lambda}\). These are known as the weighted arithmetic mean, weighted geometric mean, weighted harmonic mean, and weighted power mean of two positive numbers x and y, respectively. It is well known that the inequalities

$$H(x,y;\lambda)=M_{-1}(x,y;\lambda)< G(x,y;\lambda)=M_{0}(x,y; \lambda )< A(x,y;\lambda)=M_{1}(x,y;\lambda) $$

hold for all \(\lambda\in(0,1)\) and \(x,y>0\) with \(x\neq y\).

By elementary computations, one has

$$ \lim_{r\to-\infty}M_{r}(x,y;\lambda)=\min(x,y) $$
(1.1)

and

$$\lim_{r\to+\infty}M_{r}(x,y;\lambda)=\max(x,y). $$
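
The chain \(M_{-1}<M_{0}<M_{1}\) and the limits in (1.1) can be illustrated numerically. In the sketch below (our own; `power_mean` is an ad hoc helper, the sample values are arbitrary, and large finite exponents stand in for the limits \(r\to\pm\infty\)):

```python
import math

def power_mean(x, y, lam, r):
    """Weighted power mean M_r(x, y; lam); r = 0 is the weighted geometric mean."""
    if r == 0:
        return x ** lam * y ** (1 - lam)
    return (lam * x ** r + (1 - lam) * y ** r) ** (1 / r)

x, y, lam = 1.2, 1.8, 0.3
H = power_mean(x, y, lam, -1)
G = power_mean(x, y, lam, 0)
A = power_mean(x, y, lam, 1)
assert H < G < A                                             # M_{-1} < M_0 < M_1
assert abs(power_mean(x, y, lam, -200) - min(x, y)) < 1e-2   # (1.1): r -> -infinity
assert abs(power_mean(x, y, lam, 200) - max(x, y)) < 1e-2    # r -> +infinity
```

Moderate arguments are used on purpose: very large \(|r|\) with large bases would overflow double precision.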

In [26], Alzer proved that \(c_{1}(\lambda)=\frac{\lambda+(1-\lambda )\operatorname{erf}(1)}{\operatorname{erf}(1/(1-\lambda))}\) and \(c_{2}(\lambda)=1\) are the best possible factors such that the double inequality

$$ c_{1}(\lambda)\operatorname{erf}\bigl(H(x,y;\lambda) \bigr)\leq A\bigl(\operatorname{erf}(x),\operatorname{erf}(y);\lambda\bigr)\leq c_{2}(\lambda)\operatorname{erf}\bigl(H(x,y;\lambda)\bigr) $$
(1.2)

holds for all \(x, y \in[1,+\infty)\) and \(\lambda\in(0,1/2)\).
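
Alzer's double inequality (1.2) is easy to probe at sample points. The following sketch (our own; `H` and `c1` are ad hoc helper names) checks a few triples with \(x,y\geq1\) and \(\lambda\in(0,1/2)\); it illustrates, but of course does not prove, the statement.

```python
import math

def H(x, y, lam):
    """Weighted harmonic mean H(x, y; lam)."""
    return x * y / (lam * y + (1 - lam) * x)

def c1(lam):
    """Alzer's best-possible lower factor c1(lam) from (1.2)."""
    return (lam + (1 - lam) * math.erf(1)) / math.erf(1 / (1 - lam))

for x, y, lam in [(1.0, 2.0, 0.3), (1.5, 4.0, 0.1), (2.0, 2.5, 0.45)]:
    lhs = c1(lam) * math.erf(H(x, y, lam))
    mid = lam * math.erf(x) + (1 - lam) * math.erf(y)   # A(erf(x), erf(y); lam)
    rhs = math.erf(H(x, y, lam))                        # c2 = 1
    assert lhs <= mid <= rhs
```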

Inspired by (1.2), it is natural to ask: does the inequality \(\operatorname{erf}(M(x,y))\leq N(\operatorname{erf}(x),\operatorname {erf}(y))\) hold for other means M, N, such as geometric, harmonic or power means?

In [27, 28], the authors found the greatest values \(\alpha_{1}\), \(\alpha_{2}\) and the least values \(\beta_{1}\), \(\beta_{2}\), such that the double inequalities

$$\operatorname{erf}\bigl(M_{\alpha_{1}}(x,y;\lambda)\bigr)\leq A\bigl( \operatorname {erf}(x),\operatorname{erf}(y);\lambda\bigr)\leq\operatorname {erf} \bigl(M_{\beta_{1}}(x,y;\lambda)\bigr) $$

and

$$\operatorname{erf}\bigl(M_{\alpha_{2}}(x,y;\lambda)\bigr)\leq H\bigl( \operatorname {erf}(x),\operatorname{erf}(y);\lambda\bigr)\leq\operatorname {erf} \bigl(M_{\beta_{2}}(x,y;\lambda)\bigr) $$

hold for all \(x,y\geq1\) (or \(0< x,y<1\)) and \(\lambda\in(0,1)\).

In the following we answer the question: what are the greatest value p and the least value q, such that the double inequality

$$\operatorname{erf}\bigl(M_{p}(x,y;\lambda)\bigr)\leq G\bigl( \operatorname {erf}(x),\operatorname{erf}(y);\lambda\bigr)\leq\operatorname {erf} \bigl(M_{q}(x,y;\lambda)\bigr) $$

holds for all \(x,y\geq1\) (or \(0< x,y<1\)) and \(\lambda\in(0,1)\)?

2 Lemmas

In this section we present two lemmas, which will be used in the proof of our main results.

Lemma 2.1

Let \(r\neq0\), \(r_{0}=-1-\frac{2}{e\sqrt{\pi}\operatorname{erf}(1)}=-1.4926\ldots \) , and \(u(x)=\log\operatorname{erf}(x^{1/r})\). Then the following statements are true:

  1. if \(r< r_{0}\), then \(u(x)\) is strictly convex on \([1,+\infty)\);

  2. if \(r_{0}\leq r<0\), then \(u(x)\) is strictly concave on \((0,1]\);

  3. if \(r>0\), then \(u(x)\) is strictly concave on \((0,+\infty)\).

Proof

Simple computations lead to

$$ u'(x)=\frac{2e^{-x^{2/r}}x^{1/r-1}}{r\sqrt{\pi}\operatorname{erf}(x^{1/r})} $$
(2.1)

and

$$ u''(x)=\frac{2e^{-x^{2/r}}x^{1/r-2}}{r^{2}\sqrt{\pi}\operatorname {erf}^{2}(x^{1/r})}g(x), $$
(2.2)

where

$$ g(x)=\bigl(-2x^{2/r}+1-r\bigr)\operatorname{erf} \bigl(x^{1/r}\bigr)-\frac{2}{\sqrt{\pi }}e^{-x^{2/r}}x^{1/r}. $$
(2.3)

Then

$$\begin{aligned}& g'(x)=4x^{2/r-1}g_{1}(x), \end{aligned}$$
(2.4)
$$\begin{aligned}& g_{1}(x)=-\frac{1}{r}\operatorname{erf}\bigl(x^{1/r} \bigr)-\frac{1}{2\sqrt {\pi}}e^{-x^{2/r}}x^{-1/r}, \end{aligned}$$
(2.5)

and

$$ g_{1}'(x)=\frac{1}{2r^{2}\sqrt{\pi}}e^{-x^{2/r}}x^{-1/r-1} \bigl[(2r-4)x^{2/r}+r\bigr]. $$
(2.6)
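
The closed forms (2.3)–(2.6) can be cross-checked against central finite differences; the sketch below (our own helper names, one arbitrary sample point) does exactly that.

```python
import math

def g(x, r):
    """g(x) from (2.3)."""
    v = x ** (1 / r)
    return (-2 * x ** (2 / r) + 1 - r) * math.erf(v) \
        - 2 / math.sqrt(math.pi) * math.exp(-x ** (2 / r)) * v

def g1(x, r):
    """g_1(x) from (2.5)."""
    v = x ** (1 / r)
    return -math.erf(v) / r - math.exp(-x ** (2 / r)) / (2 * math.sqrt(math.pi) * v)

def g1p(x, r):
    """g_1'(x) from (2.6); note x^{-1/r-1} = 1/(v*x) with v = x^{1/r}."""
    v = x ** (1 / r)
    return math.exp(-x ** (2 / r)) / (2 * r * r * math.sqrt(math.pi) * v * x) \
        * ((2 * r - 4) * x ** (2 / r) + r)

x, r, h = 1.3, -2.0, 1e-6
num_gp = (g(x + h, r) - g(x - h, r)) / (2 * h)       # numerical g'(x)
num_g1p = (g1(x + h, r) - g1(x - h, r)) / (2 * h)    # numerical g_1'(x)
assert abs(num_gp - 4 * x ** (2 / r - 1) * g1(x, r)) < 1e-6   # matches (2.4)
assert abs(num_g1p - g1p(x, r)) < 1e-6                        # matches (2.6)
```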

We divide the proof into two cases.

Case 1. If \(r<0\), then (2.6), (2.5), and (2.3) lead to

$$\begin{aligned}& g_{1}'(x)< 0, \end{aligned}$$
(2.7)
$$\begin{aligned}& \lim_{x\to0^{+}}g_{1}(x)>0, \qquad \lim _{x\to+\infty}g_{1}(x)=-\infty, \end{aligned}$$
(2.8)
$$\begin{aligned}& \lim_{x\to0^{+}}g(x)=-\infty,\qquad \lim_{x\to+\infty}g(x)=0, \end{aligned}$$
(2.9)

and

$$ g(1)=(-1-r)\operatorname{erf}(1)-\frac{2}{e\sqrt{\pi}}. $$
(2.10)

Inequality (2.7) implies that \(g_{1}(x)\) is strictly decreasing on \((0,+\infty)\).

It follows from the monotonicity of \(g_{1}(x)\) and (2.8) that there exists \(x_{1}\in(0,+\infty)\), such that \(g_{1}(x)>0\) on \((0,x_{1})\) and \(g_{1}(x)<0\) on \((x_{1},+\infty)\); since the factor \(4x^{2/r-1}\) in (2.4) is positive, \(g(x)\) is strictly increasing on \((0,x_{1}]\) and strictly decreasing on \([x_{1},+\infty)\).

From the piecewise monotonicity of \(g(x)\) and (2.9) we clearly see that there exists \(x_{2}\in(0,+\infty)\), such that \(g(x)<0\) for \(x\in(0,x_{2})\) and \(g(x)>0\) for \(x\in(x_{2},+\infty)\).

Case 1.1. If \(r< r_{0}\), then from (2.10) we know that \(g(1)>0\). This leads to \(g(x)>0\) for \(x\in[1,+\infty)\). Therefore (2.2) leads to the conclusion that \(u(x)\) is strictly convex on \([1,+\infty)\).

Case 1.2. If \(r_{0}\leq r<0\), then (2.10) implies that \(g(1)\leq0\). This leads to \(g(x)\leq0\) for \(x\in(0,1]\). Therefore (2.2) leads to the conclusion that \(u(x)\) is strictly concave on \((0,1]\).

Case 2. If \(r>0\), then (2.5) and (2.3) imply that

$$ g_{1}(x)< 0 $$
(2.11)

and

$$ \lim_{x\to0^{+}}g(x)=0 $$
(2.12)

for \(x\in(0,+\infty)\).

It follows from (2.11), (2.4), and (2.12) that \(g(x)<0\). Therefore (2.2) leads to the conclusion that \(u(x)\) is strictly concave on \((0,+\infty)\). □
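
As a sanity check (our own, not part of the original argument), the three cases of Lemma 2.1 can be probed with central second differences of \(u\); the helper names below are ad hoc.

```python
import math

R0 = -1 - 2 / (math.e * math.sqrt(math.pi) * math.erf(1))   # r0 = -1.4926...

def u(x, r):
    """u(x) = log erf(x^(1/r)) from Lemma 2.1."""
    return math.log(math.erf(x ** (1 / r)))

def second_diff(x, r, h=1e-4):
    """Central second difference; its sign tracks convexity/concavity."""
    return (u(x + h, r) - 2 * u(x, r) + u(x - h, r)) / h ** 2

assert abs(R0 - (-1.4926)) < 1e-3
assert second_diff(2.0, -2.0) > 0    # case (1): r < r0, convex on [1, +inf)
assert second_diff(0.5, -1.0) < 0    # case (2): r0 <= r < 0, concave on (0, 1]
assert second_diff(0.7, 2.0) < 0     # case (3): r > 0, concave on (0, +inf)
```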

Lemma 2.2

The function \(h(x)=2x^{2}+\frac{xe^{-x^{2}}}{\int_{0}^{x}e^{-t^{2}}\,dt}\) is strictly increasing on \((0,+\infty)\).

Proof

Simple computations lead to

$$ h'(x)=\frac{h_{1}(x)}{(\int_{0}^{x}e^{-t^{2}}\,dt)^{2}}, $$
(2.13)

where

$$\begin{aligned}& h_{1}(x)=4x \biggl( \int_{0}^{x}e^{-t^{2}}\,dt \biggr)^{2}+\bigl(1-2x^{2}\bigr)e^{-x^{2}} \int_{0}^{x}e^{-t^{2}}\,dt-xe^{-2x^{2}}, \\& \lim_{x\to0^{+}}h_{1}(x)=0, \end{aligned}$$
(2.14)

and

$$ h_{1}'(x)=4 \biggl( \int_{0}^{x}e^{-t^{2}}\,dt \biggr)^{2}+\bigl(4x^{3}+2x\bigr)e^{-x^{2}} \int_{0}^{x}e^{-t^{2}}\,dt+2x^{2}e^{-2x^{2}}>0 $$
(2.15)

for \(x\in(0,+\infty)\).

Hence, \(h(x)\) is strictly increasing on \((0,+\infty)\), as follows from (2.15), (2.14), and (2.13). □
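
Lemma 2.2 can likewise be spot-checked numerically (our own sketch), using \(\int_{0}^{x}e^{-t^{2}}\,dt=\frac{\sqrt{\pi}}{2}\operatorname{erf}(x)\).

```python
import math

def h(x):
    """h(x) = 2x^2 + x e^{-x^2} / int_0^x e^{-t^2} dt from Lemma 2.2."""
    return 2 * x * x + x * math.exp(-x * x) / (math.sqrt(math.pi) / 2 * math.erf(x))

xs = [0.1 * k for k in range(1, 50)]
assert all(h(a) < h(b) for a, b in zip(xs, xs[1:]))   # h increases on the grid
assert abs(h(1e-6) - 1.0) < 1e-6                       # h(0+) = 1
```

The value \(h(0^{+})=1\) checked here is exactly the limit used at the end of the proof of Theorem 3.2.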

3 Main results

Theorem 3.1

Let \(\lambda\in(0,1)\) and \(r_{0}=-1-\frac {2}{e\sqrt{\pi}\operatorname{erf}(1)}=-1.4926\ldots\) . Then the double inequality

$$ \operatorname{erf}\bigl(M_{p}(x,y;\lambda)\bigr)\leq G \bigl(\operatorname{erf}(x),\operatorname{erf}(y);\lambda\bigr)\leq \operatorname{erf}\bigl(M_{q}(x,y;\lambda)\bigr) $$
(3.1)

holds for all \(x,y\geq1\) if and only if \(p=-\infty\) and \(q\geq r_{0}\).

Proof

First of all, we prove that inequality (3.1) holds if \(p=-\infty\) and \(q\geq r_{0}\). It follows from (1.1) that the first inequality in (3.1) is true if \(p=-\infty\). Since the weighted power mean \(M_{t}(x,y;\lambda)\) is strictly increasing with respect to t on \(\mathbb{R}\), we only need to prove that the second inequality in (3.1) is true for \(r_{0}\leq q<0\).

Let \(r_{0}\leq q<0\) and \(u(z)=\log\operatorname{erf}(z^{1/q})\). Then Lemma 2.1(2) leads to

$$ \lambda u(s)+(1-\lambda)u(t)\leq u\bigl(\lambda s+(1-\lambda)t \bigr) $$
(3.2)

for \(\lambda\in(0,1)\) and \(s,t\in(0,1]\).

Let \(s=x^{q}\), \(t=y^{q}\), and \(x,y\geq1\). Then (3.2) leads to the second inequality in (3.1).

Second, we prove that the second inequality in (3.1) implies \(q\geq r_{0}\).

Let \(x\geq1\) and \(y\geq1\). Then the second inequality in (3.1) leads to

$$ D(x,y):=\operatorname{erf}\bigl(M_{q}(x,y;\lambda) \bigr)-G\bigl(\operatorname {erf}(x),\operatorname{erf}(y);\lambda\bigr)\geq0. $$
(3.3)

It follows from (3.3) that

$$D(y,y)=\frac{\partial}{\partial x}D(x,y)|_{x=y}=0 $$

and

$$ \frac{\partial^{2}}{\partial x^{2}}D(x,y)|_{x=y}=\frac{\lambda(1-\lambda)y}{\operatorname {erf}'(y)} \biggl[q-1+ \biggl(2y^{2}+\frac{ye^{-y^{2}}}{\int ^{y}_{0}e^{-t^{2}}\,dt} \biggr) \biggr]. $$
(3.4)

Therefore,

$$q\geq\lim_{y\to 1^{+}}\biggl(1-2y^{2}-\frac{ye^{-y^{2}}}{\int^{y}_{0}e^{-t^{2}}\,dt} \biggr)=r_{0} $$

follows from (3.3) and (3.4) together with Lemma 2.2.

Finally, we prove that the first inequality in (3.1) implies \(p=-\infty\). We distinguish two cases.

Case I. \(p\geq0\). Then for any fixed \(y\in[1,+\infty)\) we have

$$\lim_{x\to+\infty}\operatorname{erf}\bigl(M_{p}(x,y; \lambda)\bigr)=1 $$

and

$$\lim_{x\to+\infty}G\bigl(\operatorname{erf}(x),\operatorname {erf}(y); \lambda\bigr)=\operatorname{erf}^{1-\lambda}(y)< 1, $$

which contradicts the first inequality in (3.1).

Case II. \(-\infty< p<0\). Let \(x\geq1\), \(\alpha=\lambda ^{1/p}\) and \(y\to +\infty\). Then the first inequality in (3.1) leads to

$$ E(x):=\operatorname{erf}^{\lambda}(x)-\operatorname{erf}( \alpha x)\geq0. $$
(3.5)

It follows from (3.5) that

$$ \lim_{x\to+\infty}E(x)=0 $$
(3.6)

and

$$ E'(x)=\frac{2\lambda}{\sqrt{\pi}}e^{-x^{2}} \biggl[ \operatorname {erf}^{\lambda-1}(x)-\frac{\alpha}{\lambda}e^{(1-\alpha ^{2})x^{2}} \biggr]. $$
(3.7)

Since \(\alpha>1\), we have

$$ \lim_{x\to +\infty} \biggl[\operatorname{erf}^{\lambda-1}(x)- \frac{\alpha }{\lambda}e^{(1-\alpha^{2})x^{2}} \biggr]=1. $$
(3.8)

It follows from (3.7) and (3.8) that there exists a sufficiently large \(\eta_{1}\in[1,+\infty)\), such that \(E'(x)>0\) for \(x\in(\eta_{1},+\infty)\). Hence \(E(x)\) is strictly increasing on \([\eta_{1},+\infty)\).

From the monotonicity of \(E(x)\) on \([\eta_{1},+\infty)\) and (3.6) we conclude that there exists \(\eta_{2}\in[1,+\infty)\), such that \(E(x)<0\) for \(x\in(\eta_{2},+\infty)\), which contradicts (3.5). □
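
Theorem 3.1 can be illustrated numerically (our own sketch, not a proof; `M` and `G_erf` are ad hoc helper names): the upper bound holds with \(q=r_{0}\), the lower bound holds with \(p=-\infty\) (i.e. with \(\min(x,y)\)), while any finite \(p<0\) eventually fails for large y, exactly as in Case II.

```python
import math

R0 = -1 - 2 / (math.e * math.sqrt(math.pi) * math.erf(1))   # r0 from Theorem 3.1

def M(x, y, lam, r):
    """Weighted power mean M_r(x, y; lam)."""
    return (lam * x ** r + (1 - lam) * y ** r) ** (1 / r) if r else x ** lam * y ** (1 - lam)

def G_erf(x, y, lam):
    """Weighted geometric mean G(erf(x), erf(y); lam)."""
    return math.erf(x) ** lam * math.erf(y) ** (1 - lam)

for x, y, lam in [(1.0, 3.0, 0.3), (1.5, 2.0, 0.6), (1.0, 10.0, 0.5)]:
    assert G_erf(x, y, lam) <= math.erf(M(x, y, lam, R0))  # upper bound with q = r0
    assert math.erf(min(x, y)) <= G_erf(x, y, lam)         # lower bound with p = -infinity

# A finite p < 0 fails for large y: with p = -1, lam = 1/2, M_p(x, y) -> 2x,
# and erf(2x) exceeds erf(x)^(1/2), violating the first inequality in (3.1).
assert math.erf(M(3.0, 1e6, 0.5, -1)) > G_erf(3.0, 1e6, 0.5)
```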

Theorem 3.2

Let \(\lambda\in(0,1)\) and let \(r_{0}\) be as in Theorem 3.1. Then the double inequality

$$ \operatorname{erf}\bigl(M_{\mu}(x,y;\lambda)\bigr)\leq G \bigl(\operatorname{erf}(x),\operatorname{erf}(y);\lambda\bigr)\leq \operatorname{erf}\bigl(M_{\nu}(x,y;\lambda)\bigr) $$
(3.9)

holds for all \(0< x,y<1\) if and only if \(\mu\leq r_{0}\) and \(\nu\geq 0\).

Proof

First of all, we prove that (3.9) holds if \(\mu\leq r_{0}\) and \(\nu\geq0\).

Let \(\mu\leq r_{0}\) and \(u(z)=\log\operatorname{erf}(z^{1/\mu})\). Then Lemma 2.1(1) leads to

$$ u\bigl(\lambda s+(1-\lambda)t\bigr)\leq\lambda u(s)+(1- \lambda)u(t) $$
(3.10)

for \(\lambda\in(0,1)\), \(s,t>1\).

Let \(s=x^{\mu}\), \(t=y^{\mu}\), and \(0< x,y<1\). Then (3.10) leads to the first inequality in (3.9).

Let \(\nu\geq0\) and \(u(z)=\log\operatorname{erf}(z^{1/\nu})\). Then Lemma 2.1(3) leads to

$$ \lambda u(s)+(1-\lambda)u(t)\leq u\bigl(\lambda s+(1-\lambda)t \bigr) $$
(3.11)

for \(\lambda\in(0,1)\), \(0< s,t<1\).

Therefore, the second inequality in (3.9) follows from \(s=x^{\nu}\), \(t=y^{\nu}\), and \(0< x,y<1\) together with (3.11).

Second, we prove that the second inequality in (3.9) implies \(\nu\geq0\).

Let \(0< x,y<1\). Then the second inequality in (3.9) leads to

$$ J(x,y):=\operatorname{erf}\bigl(M_{\nu}(x,y;\lambda) \bigr)-G\bigl(\operatorname {erf}(x),\operatorname{erf}(y);\lambda\bigr)\geq 0. $$
(3.12)

It follows from (3.12) that

$$J(y,y)=\frac{\partial}{\partial x}J(x,y)|_{x=y}=0 $$

and

$$ \frac{\partial^{2}}{\partial x^{2}}J(x,y)|_{x=y}=\frac{\lambda(1-\lambda)y}{\operatorname {erf}'(y)} \biggl[\nu-1+ \biggl(2y^{2}+\frac{ye^{-y^{2}}}{\int _{0}^{y}e^{-t^{2}}\,dt} \biggr) \biggr]. $$
(3.13)

Hence, from (3.12) and (3.13) together with Lemma 2.2 we know that

$$\nu\geq\lim_{y\to0^{+}} \biggl[1- \biggl(2y^{2}+ \frac{ye^{-y^{2}}}{\int _{0}^{y}e^{-t^{2}}\,dt} \biggr) \biggr]=0. $$

Finally, we prove that the first inequality in (3.9) implies \(\mu\leq r_{0}\).

Let \(y\to1^{-}\). Then the first inequality in (3.9) leads to

$$ L(x):=G\bigl(\operatorname{erf}(x),\operatorname{erf}(1);\lambda \bigr)-\operatorname{erf}\bigl(M_{\mu}(x,1;\lambda)\bigr)\geq 0 $$
(3.14)

for \(0< x<1\).

It follows from (3.14) that

$$ L(1)=0 $$
(3.15)

and

$$ L'(x)=\frac{2\lambda e^{-x^{2}}}{\sqrt{\pi}} \bigl[\operatorname{erf}^{1-\lambda }(1) \operatorname{erf}^{\lambda-1}(x)-x^{\mu-1}\bigl(\lambda x^{\mu}+1-\lambda\bigr)^{1/\mu-1}e^{x^{2}-(\lambda x^{\mu}+1-\lambda)^{2/\mu}} \bigr]. $$
(3.16)

Let

$$ L_{1}(x)=\log \bigl[\operatorname{erf}^{1-\lambda}(1) \operatorname {erf}^{\lambda-1}(x) \bigr]-\log \bigl[x^{\mu-1}\bigl( \lambda x^{\mu}+1-\lambda\bigr)^{1/\mu-1}e^{x^{2}-(\lambda x^{\mu}+1-\lambda)^{2/\mu}} \bigr]. $$
(3.17)

Then

$$\begin{aligned}& \lim_{x\to1^{-}}L_{1}(x)=0, \\& L'_{1}(x)=(\lambda-1)\frac{\operatorname{erf}'(x)}{\operatorname {erf}(x)}- \frac{(\mu-1)(1-\lambda)}{x(\lambda x^{\mu}+1-\lambda)}-2x+2\lambda x^{\mu-1}\bigl(\lambda x^{\mu}+1- \lambda\bigr)^{2/\mu-1}, \end{aligned}$$
(3.18)

and

$$ \lim_{x\to1^{-}}L_{1}'(x)=(1- \lambda) \biggl[-\mu-1-\frac{2}{e \sqrt{\pi}\operatorname{erf}(1)} \biggr]. $$
(3.19)

If \(\mu>r_{0}\), then from (3.19) we clearly see that there exists a small \(\delta_{1}>0\), such that \(L_{1}'(x)<0\) for \(x\in(1-\delta_{1},1)\). Therefore, \(L_{1}(x)\) is strictly decreasing on \([1-\delta_{1},1]\).

The monotonicity of \(L_{1}(x)\) on \([1-\delta_{1},1]\) and (3.18) imply that there exists \(\delta_{2}>0\), such that \(L_{1}(x)>0\) for \(x\in(1-\delta_{2},1)\).

Hence, (3.16) and (3.17) imply that \(L(x)\) is strictly increasing on \((1-\delta_{2},1)\). It follows from the monotonicity of \(L(x)\) and (3.15) that there exists \(\delta_{3}>0\), such that \(L(x)<0\) for \(x\in(1-\delta_{3},1)\), which contradicts (3.14). □
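
Theorem 3.2 admits the same kind of numerical illustration on \(0<x,y<1\) (our own sketch, not a proof; helper names are ad hoc): \(\mu=r_{0}\) works on the left and \(\nu=0\), the weighted geometric mean, on the right.

```python
import math

R0 = -1 - 2 / (math.e * math.sqrt(math.pi) * math.erf(1))   # r0 from Theorem 3.1

def M(x, y, lam, r):
    """Weighted power mean M_r(x, y; lam)."""
    return (lam * x ** r + (1 - lam) * y ** r) ** (1 / r) if r else x ** lam * y ** (1 - lam)

def G_erf(x, y, lam):
    """Weighted geometric mean G(erf(x), erf(y); lam)."""
    return math.erf(x) ** lam * math.erf(y) ** (1 - lam)

for x, y, lam in [(0.2, 0.9, 0.3), (0.5, 0.7, 0.6), (0.1, 0.4, 0.5)]:
    assert math.erf(M(x, y, lam, R0)) <= G_erf(x, y, lam)   # mu = r0 on the left
    assert G_erf(x, y, lam) <= math.erf(M(x, y, lam, 0))    # nu = 0 on the right
```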