1 Introduction

We recall that the logarithmic derivative of \(\varGamma (x)\) is called the psi or digamma function, denoted by

$$ \psi (x)=\frac{d}{dx}\ln \varGamma (x)=\frac{\varGamma ^{\prime }(x)}{\varGamma (x)} $$
(1.1)

for \(x>0 \), that the derivatives \(\psi '(x)\) and \(\psi ''(x)\) are, respectively, called the trigamma and tetragamma functions, and that the derivatives

$$ \psi ^{(n)}(x)= (-1 )^{n-1} \int _{0}^{\infty }e^{-xt}\frac{t ^{n}}{1-e^{-t}} \,dt= (-1 )^{n-1}n!\sum_{k=0}^{\infty } \frac{1}{(x+k)^{n+1}} $$
(1.2)

for \(n\in \mathbb{N} \) are called the polygamma functions.
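
For readers who want a quick numerical sanity check of (1.2), the following minimal Python sketch (not part of the mathematical argument; it uses SciPy's polygamma routine and arbitrary sample points) compares a truncated version of the series with a library evaluation:

```python
# Sketch: compare the truncated series in (1.2) with SciPy's polygamma.
from math import factorial
from scipy.special import polygamma

def psi_series(n, x, terms=200_000):
    """Truncated series (-1)^(n-1) * n! * sum_{k>=0} 1/(x+k)^(n+1) from (1.2)."""
    partial = sum(1.0 / (x + k) ** (n + 1) for k in range(terms))
    return (-1) ** (n - 1) * factorial(n) * partial

# Agreement improves with n and with the number of retained terms.
for n, x in [(1, 0.7), (2, 1.5), (3, 4.0)]:
    print(n, x, psi_series(n, x), polygamma(n, x))
```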

A wealth of the most fundamental properties involving the gamma, digamma, and polygamma functions can be found in the books [1,2,3,4]. These functions are key parts of the theory of special functions, and they also play a vital role in other areas such as analysis, physics, inequality theory, and statistics. Due to their significance, many scholars have explored their useful properties. In particular, properties such as monotonicity, convexity, and complete monotonicity yield numerous inequalities related to these functions; see, for example, [5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28].

The following recurrence and asymptotic formulas are often encountered in the literature:

$$\begin{aligned}& \psi ^{(n)}(x+1)=\psi ^{(n)}(x)+(-1)^{n} \frac{n!}{x^{n+1}}\quad (x>0, n=0,1, \ldots ), \end{aligned}$$
(1.3)
$$\begin{aligned}& \psi (x)\sim \ln x-\frac{1}{2x}-\frac{1}{12x^{2}}+ \frac{1}{120x^{4}}- \cdots\quad (x\rightarrow \infty ), \end{aligned}$$
(1.4)
$$\begin{aligned}& (-1)^{n-1}\psi ^{(n)}(x)\sim \biggl( \frac{(n-1)!}{x^{n}}+\frac{n!}{2x ^{n+1}}+\frac{(n+1)!}{12x^{n+2}}-\cdots \biggr)\quad (x \rightarrow \infty , n=1,2,\ldots ). \end{aligned}$$
(1.5)
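
The recurrence (1.3) and the asymptotic expansions (1.4) and (1.5) can likewise be checked numerically; the sketch below (SciPy-based, with arbitrarily chosen test values) illustrates this:

```python
# Sketch: check the recurrence (1.3) and the leading terms of (1.4)-(1.5).
from math import factorial, log
from scipy.special import polygamma, digamma

n, x = 2, 2.3
lhs = polygamma(n, x + 1)
rhs = polygamma(n, x) + (-1) ** n * factorial(n) / x ** (n + 1)
print(lhs, rhs)                                   # recurrence (1.3)

x = 50.0
print(digamma(x),                                 # expansion (1.4)
      log(x) - 1 / (2 * x) - 1 / (12 * x ** 2) + 1 / (120 * x ** 4))

psi_n = (-1) ** (n - 1) * polygamma(n, x)         # psi_n(x) = (-1)^(n-1) psi^(n)(x)
print(psi_n,                                      # leading terms of (1.5)
      factorial(n - 1) / x ** n
      + factorial(n) / (2 * x ** (n + 1))
      + factorial(n + 1) / (12 * x ** (n + 2)))
```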

For convenience, we define \(\psi _{n}(x)=(-1)^{n-1}\psi ^{(n)}(x)\) for \(n\in \mathbb{N}\). It follows from (1.2) that \(\psi _{n}(x)\) is strictly completely monotonic on \((0,\infty )\) (see [29, 30]). Moreover, from (1.3) and (1.5) it follows that

$$\begin{aligned}& \lim_{x \rightarrow 0^{+}}x^{n+1}\psi _{n}(x)=n!, \end{aligned}$$
(1.6)
$$\begin{aligned}& \lim_{x \rightarrow \infty }x^{n}\psi _{n}(x)=(n-1)!. \end{aligned}$$
(1.7)

These limits easily yield \(\lim_{x \rightarrow 0^{+}}\psi _{n}(x)=\infty \) and \(\lim_{x \rightarrow \infty }\psi _{n}(x)=0\).
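
A small numerical illustration of the scaled limits (1.6) and (1.7) (again a SciPy-based sketch, with the sample order n = 3):

```python
# Sketch: the scaled limits (1.6) and (1.7), illustrated for n = 3.
from math import factorial
from scipy.special import polygamma

n = 3
psi_n = lambda x: (-1) ** (n - 1) * polygamma(n, x)

for x in (1e-2, 1e-4, 1e-6):
    print(x ** (n + 1) * psi_n(x), factorial(n))        # -> n!      as x -> 0+
for x in (1e2, 1e4, 1e6):
    print(x ** n * psi_n(x), factorial(n - 1))          # -> (n-1)!  as x -> oo
```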

It was shown in [31, Thm. 1.9] that \(\psi _{n}(x)\) is logarithmically convex on \((0,\infty )\). This and Jensen’s theorem yield the following functional inequality:

$$ 1\leq \frac{\prod_{m=1}^{n} \psi _{k}^{p_{m}}(x_{m})}{\psi _{k}(\sum_{m=1}^{n} x_{m} p_{m})} \quad (n, k\in \mathbb{N}) $$
(1.8)

for all \(x_{m}>0\) and weights \(p_{m}\) (\(m=1,2,\ldots ,n\)), that is, \(p_{m}>0\) with \(\sum_{m=1}^{n} p_{m}=1\). Further, Alzer [32, Thm. 1] provided a refinement of (1.8) with the inequality

$$ \biggl(\frac{\sum_{m=1}^{n} p_{m} x_{m}}{\prod_{m=1}^{n} x_{m}^{p_{m}}} \biggr) ^{\alpha }\leq \frac{\prod_{m=1}^{n} \psi _{k}^{p_{m}}(x_{m})}{\psi _{k}(\sum_{m=1}^{n} x_{m} p_{m})} \quad (\alpha \in \mathbb{R}) $$
(1.9)

for all \(x_{m}>0\) and weights \(p_{m}\) (\(m=1,2,\ldots ,n\)) if and only if \(\alpha \leq k\). Inspired by these results, we aim to find \(\alpha _{1}\), \(\alpha _{2}\), \(\beta _{1}\), and \(\beta _{2} \in \mathbb{R}\) such that, for all \(n, k\in \mathbb{N}\), the double inequality

$$ \frac{\exp (\alpha _{1} A_{n}(x_{m},p_{m},\beta _{1}) )}{ \exp (\alpha _{1} G_{n}(x_{m},p_{m},\beta _{1}) )}\leq \frac{ \prod_{m=1}^{n} \psi _{k}^{p_{m}}(x_{m})}{\psi _{k}(\sum_{m=1}^{n} x _{m} p_{m})}\leq \frac{\exp (\alpha _{2} A_{n}(x_{m},p_{m},\beta _{2}) )}{\exp (\alpha _{2} G_{n}(x_{m},p_{m},\beta _{2}) )} $$

is valid for all \(x_{m}>0\) and weights \(p_{m}\) (\(m=1,2,\ldots ,n\)). Here and in what follows,

$$ A_{n}(x_{m},p_{m},\beta )=\psi \Biggl(\sum _{m=1}^{n} x_{m} p_{m}+ \beta \Biggr) \quad \text{and} \quad G_{n}(x_{m},p_{m}, \beta )=\sum_{m=1}^{n} p_{m}\psi (x_{m}+\beta ). $$

It was proven in [32, Thm. 2] that

$$ M_{n}^{[s]} \bigl(\psi _{k}(x_{m}),p_{m} \bigr)\geq \psi _{k} \bigl(M _{n}^{[r]}(x_{m},p_{m}) \bigr)\quad (n, k\in \mathbb{N} \text{ and } r, s\in \mathbb{R}) $$
(1.10)

for all \(x_{m}>0\) and weights \(p_{m}>0\) (\(m=1,2,\ldots ,n\)) if and only if \(s\geq 0\) and \(r\geq -sk\), or \(s<0\) and \(r\geq -s(k+1)\).

The power means \(M_{n}^{[t]}\) are defined as (see [33])

$$ M_{n}^{[t]}(x_{m},p_{m})=\Biggl(\sum _{m=1}^{n} p_{m}x_{m}^{t} \Biggr)^{1/t} \quad (t\neq 0), \qquad M_{n}^{[0]}(x_{m},p_{m})= \prod_{m=1}^{n} x_{m}^{p_{m}}, $$

for parameter \(t\in \mathbb{R}\) and all \(x_{m}>0\) and weights \(p_{m}\) (\(m=1,2,\ldots ,n\)).
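
The following sketch implements the weighted power mean \(M_{n}^{[t]}\) and spot-checks inequality (1.10) for one admissible choice of parameters (the values k = 1, s = 1, r = −1, the data points, and the weights below are arbitrary test inputs, not taken from the paper):

```python
# Sketch: weighted power means M_n^[t] and a spot check of inequality (1.10)
# for the hypothetical test parameters k = 1, s = 1, r = -s*k = -1.
import numpy as np
from scipy.special import polygamma

def power_mean(values, weights, t):
    values, weights = np.asarray(values, float), np.asarray(weights, float)
    if t == 0:
        return np.prod(values ** weights)        # geometric mean M_n^[0]
    return np.sum(weights * values ** t) ** (1.0 / t)

k, s, r = 1, 1.0, -1.0
psi_k = lambda x: (-1) ** (k - 1) * polygamma(k, x)

x = np.array([0.4, 1.3, 2.7])                    # arbitrary sample points
p = np.array([0.2, 0.5, 0.3])                    # weights summing to 1
lhs = power_mean(psi_k(x), p, s)
rhs = psi_k(power_mean(x, p, r))
print(lhs >= rhs, lhs, rhs)                      # inequality (1.10)
```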

In view of (1.10), it is natural to raise a new question: under which conditions on \(r\) and \(s\) is the reversed inequality of (1.10) valid for all \(x_{m}>0\) and weights \(p_{m}\) (\(m=1,2,\ldots ,n\))?

The main aim of this paper is to settle the two questions posed above. Our main results are the following.

Theorem 1.1

Let α, \(\beta \in \mathbb{R}\) and \(n\in \mathbb{N}\). Define the function

$$ F(x;\alpha ,\beta )=\ln \bigl(\exp \bigl(\alpha \psi (x+\beta )\bigr)\psi _{n}(x) \bigr)- \ln (n-1)!,\quad x>\max (0,-\beta ). $$

Then \(F(x;\alpha ,\beta )\) is strictly increasing and concave for \(\beta \leq 0\) if and only if \(\alpha \geq n\), and strictly decreasing and convex for \(\beta \geq \frac{1}{2}\) if and only if \(\alpha \leq n\).

Applying the asymptotic formulas (1.4) and (1.5) to the function \(F(x;n,\beta )\), we get \(\lim_{x\rightarrow \infty }F(x;n, \beta )=0\). This in combination with Theorem 1.1 yields the following bounds for \(\psi _{n}(x)\).

Corollary 1

Let \(n\in \mathbb{N}\), \(\alpha \leq 0\), and \(\beta \geq \frac{1}{2}\). Then we have the double inequality

$$ (n-1)!\exp \bigl(-n\psi (x+\beta ) \bigr)< \psi _{n}(x)< (n-1)!\exp \bigl(-n\psi (x+\alpha ) \bigr) $$
(1.11)

for \(x>-\alpha \).
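
A numerical spot check of (1.11) for the sample choice n = 2, α = 0, β = 1/2 (a sketch using SciPy; the test points are arbitrary):

```python
# Sketch: the two-sided bound (1.11) for the sample choice n = 2, alpha = 0, beta = 1/2.
from math import factorial, exp
from scipy.special import polygamma, digamma

n, alpha, beta = 2, 0.0, 0.5
psi_n = lambda x: (-1) ** (n - 1) * polygamma(n, x)

for x in (0.3, 1.0, 5.0, 20.0):                  # arbitrary points with x > -alpha
    lower = factorial(n - 1) * exp(-n * digamma(x + beta))
    upper = factorial(n - 1) * exp(-n * digamma(x + alpha))
    print(lower < psi_n(x) < upper)              # expected: True
```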

As a matter of fact, the double inequality (1.11) was first proved by Batir [34, Thm. 2.1]. In addition, using the concavity and convexity of \(F(x;n,\beta )\) and applying Jensen’s inequality give the following double inequality.

Theorem 1.2

Let \(k\geq 1\) and \(n\geq 2\) be integers, and let \(\alpha _{1}\), \(\alpha _{2}\in \mathbb{R}\), \(\beta _{1}\geq \frac{1}{2}\), and \(\beta _{2}\leq 0\). Then we have the double inequality

$$ \frac{\exp (\alpha _{1} A_{n}(x_{m},p_{m},\beta _{1}) )}{ \exp (\alpha _{1} G_{n}(x_{m},p_{m},\beta _{1}) )}\leq \frac{ \prod_{m=1}^{n} \psi _{k}^{p_{m}}(x_{m})}{\psi _{k}(\sum_{m=1}^{n} x _{m} p_{m})}\leq \frac{\exp (\alpha _{2} A_{n}(x_{m},p_{m},\beta _{2}) )}{\exp (\alpha _{2} G_{n}(x_{m},p_{m},\beta _{2}) )} $$
(1.12)

for all \(x_{m}>-\beta _{2}\) and \(p_{m}>0\) (\(m=1,2,\ldots ,n\)) with \(\sum_{m=1}^{n} p_{m}=1\) if and only if \(\alpha _{1}\leq k\) and \(\alpha _{2}\geq k\).
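
The double inequality (1.12) can be tested numerically as well; the sketch below uses the admissible sample parameters k = 1, α₁ = α₂ = k, β₁ = 1/2, β₂ = 0 together with arbitrary data:

```python
# Sketch: a numerical check of (1.12) with the admissible sample parameters
# k = 1, alpha_1 = alpha_2 = k, beta_1 = 1/2, beta_2 = 0 and arbitrary data.
import numpy as np
from scipy.special import polygamma, digamma

k = 1
psi_k = lambda x: (-1) ** (k - 1) * polygamma(k, x)

x = np.array([0.6, 1.4, 3.2])
p = np.array([0.3, 0.3, 0.4])                    # weights summing to 1

def ratio_bound(alpha, beta):
    A = digamma(np.dot(p, x) + beta)             # A_n(x_m, p_m, beta)
    G = np.dot(p, digamma(x + beta))             # G_n(x_m, p_m, beta)
    return np.exp(alpha * (A - G))

middle = np.prod(psi_k(x) ** p) / psi_k(np.dot(p, x))
print(ratio_bound(k, 0.5) <= middle <= ratio_bound(k, 0.0))   # expected: True
```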

Theorem 1.3

Let \(k\geq 1\) and \(n\geq 2\) be integers, and let r, \(s\in \mathbb{R}\). Then

$$ M_{n}^{[s]} \bigl(\psi _{k}(x_{m}),p_{m} \bigr)\leq \psi _{k} \bigl(M _{n}^{[r]}(x_{m},p_{m}) \bigr) $$
(1.13)

for all \(x_{m}>0\) and \(p_{m}>0\) (\(m=1,2,\ldots ,n\)) with \(\sum_{m=1} ^{n} p_{m}=1\) if and only if \(s\geq 1\) and \(r\leq -s(k+1)\), or \(s<1\) and \(r\leq -sk-M(s)\), where

$$ M(s)=\sup_{x>0} \bigl(f_{k+1}(x)-f_{k}(x)+sf_{k}(x) \bigr)\quad \textit{and} \quad f_{k}(x)=x\frac{\psi _{k+1}(x)}{\psi _{k}(x)}-k. $$

Remark 1

In brief, Theorem 1.3 can be restated as follows: inequality (1.13) is valid for all \(x_{m}>0\) and weights \(p_{m}\) (\(m=1,2, \ldots , n\)) if and only if \(r\leq -sk-M(s)\).
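
Since \(M(s)\) is defined as a supremum, it can be approximated by a simple grid search; the sketch below (SciPy-based, with the arbitrary test choice k = 1 and s = 0.5) only illustrates the definition and makes no claim of sharpness:

```python
# Sketch: a crude grid-search estimate of M(s) from Theorem 1.3 for the test
# choice k = 1, s = 0.5; this only illustrates the definition, it is not sharp.
import numpy as np
from scipy.special import polygamma

def f(k, x):
    """f_k(x) = x * psi_{k+1}(x) / psi_k(x) - k with psi_k = (-1)^(k-1) psi^(k)."""
    psi_k  = (-1) ** (k - 1) * polygamma(k, x)
    psi_k1 = (-1) ** k * polygamma(k + 1, x)
    return x * psi_k1 / psi_k - k

k, s = 1, 0.5
xs = np.logspace(-4, 4, 4000)
values = f(k + 1, xs) - f(k, xs) + s * f(k, xs)
print("M(s) is roughly", values.max())           # expected to lie in [s, 1) here
```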

In the last section, we give some sharp inequalities and star-shaped functions, which are consequences of Theorem 1.1.

2 Some lemmas

In this section, we establish monotonicity properties of some functions closely related to the function \(e^{x}\).

According to Theorem 2.2 in [35], we obtain the following:

Lemma 1

([35, Theorem 2.2])

Let \(z\in \mathbb{R}\). Then the function

$$ g(x)=\frac{e^{-zx}-e^{-(1-z)x}}{1-e^{-x}} $$

is strictly increasing on \((0,\infty )\) if and only if \(z\in (-\infty ,0)\cup (\frac{1}{2},1)\) and strictly decreasing on the same interval if and only if \(z\in (0,\frac{1}{2})\cup (1,\infty )\).
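
A quick numerical illustration of Lemma 1 for two sample values of z, checked on a finite grid (pure NumPy; the grid is arbitrary):

```python
# Sketch: the monotonicity claim of Lemma 1 for z = 0.75 (increasing)
# and z = 0.25 (decreasing), checked on a finite grid.
import numpy as np

def g(x, z):
    return (np.exp(-z * x) - np.exp(-(1 - z) * x)) / (1 - np.exp(-x))

xs = np.linspace(0.1, 20.0, 400)
for z in (0.75, 0.25):
    d = np.diff(g(xs, z))
    label = "increasing" if np.all(d > 0) else "decreasing" if np.all(d < 0) else "mixed"
    print(z, label)
```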

Lemma 2

Let a, \(x_{1}\), and \(x_{2}\in \mathbb{R}\) with \(x_{2}>x_{1}>0\).

  1. (1)

    Let

    $$ H_{a}(x)=\frac{e^{a x_{1} x}-e^{(a-1)x_{1} x}}{e^{a x_{2} x}-e^{(a-1)x _{2} x}}. $$

    Then \(H_{a}(x)\) is strictly increasing on \((0,\infty )\) if and only if \(a\leq 0\) and strictly decreasing on the same interval if and only if \(a\geq \frac{1}{2}\).

  2. (2)

    The function

    $$ h(x)=\frac{1-e^{-x_{1} (1-x)}}{1-e^{-x_{2} (1-x)}}\frac{1-e^{-x_{1} x}}{1-e ^{-x_{2} x}}\frac{e^{a x_{1} x}}{e^{ax_{2} x}} $$

    is strictly decreasing on \((0,\infty )\) for \(a\geq \frac{1}{2}\).

Proof

(1) Differentiating \(H_{a}(x)\) with respect to x leads to

$$ \begin{aligned} &H_{a}'(x)\\ &\quad =\frac{ [a(x_{1}-x_{2})e^{(x_{1}+x_{2})x}+(a x_{2}-a x _{1}-x_{2})e^{x_{1}x}+(x_{1}+a x_{2}-a x_{1})e^{x_{2}x}+(a-1)(x_{1}-x _{2}) ]}{e^{(x-a x)(x_{1}+x_{2})} (e^{a x_{2} x}-e^{(a-1)x _{2} x} )^{2}}, \end{aligned} $$

which implies

$$ H_{a}'(x)=\frac{e^{(a x-x)(x_{1}+x_{2})}}{ (e^{a x_{2} x}-e^{(a-1)x _{2} x} )^{2}}\sum _{n=2}^{\infty }P_{n}(a)\frac{x^{n}}{n!}, $$

where

$$ P_{n}(a)=(x_{2}-x_{1}) \Biggl[\sum _{k=1}^{n-1}x_{1}^{k} x_{2}^{n-k} \biggl(1-a \frac{n!}{k!(n-k)!} \biggr) \Biggr],\quad n=2,3,\ldots . $$

It is not difficult to verify that

$$ P_{n}(a)>0\quad \text{for all } n\geq 2, a\leq 0, $$
(2.1)

and

$$ P_{n}(a)< 0\quad \text{for all } n\geq 2, a\geq \frac{1}{2}. $$
(2.2)

Hence the function \(H_{a}(x)\) is strictly increasing on \((0,\infty )\) for \(a\leq 0\) and strictly decreasing on the same interval for \(a\geq \frac{1}{2}\). Now, we assume that the function \(H_{a}(x)\) is strictly increasing on \((0,\infty )\). Firstly, we need to notice that

$$ \rho (x)=e^{x_{1}x}+e^{x_{2}x}-e^{(x_{1}+x_{2})x}-1< 0 \quad \text{for } x>0. $$
(2.3)

Then, since \(H_{a}'(x)\geq 0\) and \(\rho (x)<0\), for \(x>0\) we have:

$$ a< \frac{x_{2}e^{x_{1}x}-x_{1}e^{x_{2}x}+x_{1}-x_{2}}{(x_{2}-x_{1})(e ^{x_{1}x}+e^{x_{2}x}-e^{(x_{1}+x_{2})x}-1)}. $$
(2.4)

Letting x tend to ∞, we obtain \(a\leq 0\).

If \(H_{a}(x)\) is strictly decreasing on \((0,\infty )\), then we have \(a\geq \frac{1}{2}\) as x tends to 0 in the reversed inequality of (2.4).

(2) We write

$$ h(x)=H_{0}(1-x)H_{a}(x). $$

From part (1) of the lemma we conclude that, for \(a\geq \frac{1}{2}\), \(h(x)\) is the product of two positive decreasing functions on \((0,\infty )\). Therefore it is easy to get the desired result.

This completes the proof of Lemma 2. □

Lemma 3

Let α, \(\beta \in \mathbb{R}\) and m, \(n\in \mathbb{N}\). The function

$$ f(x)=\frac{\psi _{m+n}(x+\beta )}{\psi _{m}(x+\alpha )\psi _{n}(x+\beta )} $$

is strictly increasing on \((\max (-\alpha ,-\beta ),\infty )\) for \(\alpha -\beta \leq 0\) and strictly decreasing on \((\max (-\alpha ,- \beta ),\infty )\) for \(\alpha -\beta \geq \frac{1}{2}\).
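
Before turning to the proof, here is a numerical illustration of Lemma 3 for the sample choice m = n = 1 and two values of θ = α − β (a SciPy-based sketch; the parameters and grid are arbitrary test values):

```python
# Sketch: Lemma 3 for m = n = 1 and two sample pairs (alpha, beta),
# one with alpha - beta <= 0 and one with alpha - beta >= 1/2.
import numpy as np
from scipy.special import polygamma

def psi(n, x):
    return (-1) ** (n - 1) * polygamma(n, x)

def f(x, m, n, alpha, beta):
    return psi(m + n, x + beta) / (psi(m, x + alpha) * psi(n, x + beta))

xs = np.linspace(0.5, 30.0, 500)
for alpha, beta in [(0.0, 1.0), (1.0, 0.0)]:      # theta = -1 and theta = 1
    d = np.diff(f(xs, 1, 1, alpha, beta))
    label = "increasing" if np.all(d > 0) else "decreasing" if np.all(d < 0) else "mixed"
    print((alpha, beta), label)
```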

Proof

Let \(k=m+n\), \(\theta =\alpha -\beta \), and \(y=x+\beta \). Then we have

$$ f(x)=L(y)=\frac{\psi _{k}(y)}{\psi _{m}(y+\theta )\psi _{n}(y)},\quad y> \max (-\theta ,0). $$
(2.5)

If we determine conditions for θ such that the function \(L(y)\) is strictly increasing and strictly decreasing on \((\max (- \theta ,0),\infty )\), then the desired result follows.

For convenience, we replace y by x in \(L(y)\). Using the well-known formula (1.2) and applying the convolution theorem for the Laplace transform, we get

$$ L(x)=\frac{\psi _{k}(x)}{\psi _{m}(x+\theta )\psi _{n}(x)}=\frac{\int _{0}^{\infty } q(t)e^{-xt} \,dt}{\int _{0}^{\infty } p(t)e^{-xt} \,dt}, $$

where

$$ q(t)=\frac{t^{k}}{1-e^{-t}} \quad \text{and} \quad p(t)= \int _{0}^{t} \frac{s^{m}e^{-\theta s}}{1-e^{-s}}\frac{(t-s)^{n}}{1-e ^{-(t-s)}} \,ds. $$

Differentiating \(L(x)\), we get

$$ L'(x)=\frac{\int _{0}^{\infty } tp(t)e^{-xt} \,dt \int _{0}^{\infty } q(t)e ^{-xt} \,dt-\int _{0}^{\infty } tq(t)e^{-xt} \,dt \int _{0}^{\infty } p(t)e ^{-xt} \,dt}{ (\int _{0}^{\infty } p(t)e^{-xt} \,dt )^{2}}, $$
(2.6)

so that using the convolution theorem for the Laplace transform again in (2.6) yields

$$ L'(x) \biggl( \int _{0}^{\infty } p(t)e^{-xt} \,dt \biggr)^{2}= \int _{0} ^{\infty } u(t)e^{-xt} \,dt, $$
(2.7)

where

$$ u(t)= \int _{0}^{t}p(s)q(t-s) (2s-t) \,ds. $$

We will prove that

$$ u(t)>0\quad \text{for } \theta \leq 0, t>0, $$
(2.8)

and

$$ u(t)< 0\quad \text{for } \theta \geq \frac{1}{2}, t>0. $$
(2.9)

Making the substitution \(s=\frac{t}{2}(1+y)\), we obtain

$$ u(t)=\frac{t^{2}}{2} \int _{-1}^{1} yp \biggl(\frac{t}{2}(1+y) \biggr)q \biggl(\frac{t}{2}(1-y) \biggr) \,dy. $$
(2.10)

For \(\delta >0\) and \(y\in (0,1)\), we define

$$ v(y)=yp\bigl(\delta (1+y)\bigr)q\bigl(\delta (1-y)\bigr) \quad \text{and} \quad w(y)=v(y)+v(-y). $$
(2.11)

Combining (2.10) with (2.11), we see that

$$ u(2\delta )=2\delta ^{2} \int _{-1}^{1}v(y) \,dy=2\delta ^{2} \int _{0}^{1}w(y) \,dy. $$
(2.12)

Hence, to prove (2.8) and (2.9), it suffices to show that

$$ w(y)>0\quad \text{for } \theta \leq 0, \delta >0, y\in (0,1), $$
(2.13)

and

$$ w(y)< 0\quad \text{for } \theta \geq \frac{1}{2}, \delta >0, y\in (0,1). $$
(2.14)

From (2.11) a direct computation immediately gives

$$ \begin{aligned}[b] w(y) &=y\delta ^{k} \frac{(1-y)^{k}}{1-e^{-\delta (1-y)}} \int _{0}^{ \delta (1+y)}\frac{s^{m} e^{-\theta s}}{1-e^{-s}}\frac{(\delta (1+y)-s)^{n}}{1-e ^{-(\delta (1+y)-s)}} \,ds \\ &\quad {}-y\delta ^{k}\frac{(1+y)^{k}}{1-e^{-\delta (1+y)}} \int _{0}^{\delta (1-y)}\frac{s ^{m} e^{-\theta s}}{1-e^{-s}}\frac{(\delta (1-y)-s)^{n}}{1-e^{-( \delta (1-y)-s)}} \,ds. \end{aligned} $$
(2.15)

Substituting \(s=\delta (1+y)x\) into the first integral and \(s=\delta (1-y)x\) into the second, we obtain

$$ \frac{w(y)}{y\delta ^{2k+1}(1-y^{2})^{k}}= \int _{0}^{1} x^{m}(1-x)^{n} \varphi (x;\theta ) \,dx, $$
(2.16)

where

$$ \begin{aligned}[b] \varphi (x;\theta ) &=\frac{1+y}{1-e^{-\delta (1-y)}} \frac{1}{1-e^{- \delta (1+y)x}} \frac{e^{-\theta \delta (1+y)x}}{1-e^{-\delta (1+y)(1-x)}} \\ &\quad{}-\frac{1-y}{1-e^{-\delta (1+y)}}\frac{1}{1-e^{-\delta (1-y)x}}\frac{e ^{-\theta \delta (1-y)x}}{1-e^{-\delta (1-y)(1-x)}}. \end{aligned} $$
(2.17)

It is sufficient to show that

$$ \varphi (x;\theta )>0 \quad \text{for } \theta \leq 0, \delta >0, x, y \in (0,1), $$
(2.18)

and

$$ \varphi (x;\theta )< 0\quad \text{for } \theta \geq \frac{1}{2}, \delta >0, x, y\in (0,1), $$
(2.19)

which imply that (2.13) and (2.14) are valid.

Firstly, we assume that \(\theta =0\). Clearly, (2.18) is equivalent to

$$ \phi (x)>0 \quad \text{for } \delta >0, x,y\in (0,1), $$
(2.20)

where

$$ \phi (x)=\frac{(1-e^{-\delta (1-y)(1-x)})(1-e^{-\delta (1-y)x})}{(1-e ^{-\delta (1-y)})(1-y)}-\frac{(1-e^{-\delta (1+y)(1-x)})(1-e^{-\delta (1+y)x})}{(1-e^{-\delta (1+y)})(1+y)}. $$

Differentiating \(\phi (x)\) with respect to x gives

$$ \phi '(x)=\delta \biggl(\frac{e^{-\delta (1-y)x}-e^{-\delta (1-y)(1-x)}}{1-e ^{-\delta (1-y)}}-\frac{e^{-\delta (1+y)x}-e^{-\delta (1+y)(1-x)}}{1-e ^{-\delta (1+y)}} \biggr). $$

Using Lemma 1, we get \(\phi '(x)>0\) for \(x\in (0,\frac{1}{2})\). Since \(\phi (0)=0\), we conclude that \(\phi (x)>0\) for \(x\in (0, \frac{1}{2})\). In combination with the fact that \(\phi (x)\) is symmetric about \(x=\frac{1}{2}\) on \((0,1)\), this establishes (2.20).

We suppose that \(\theta <0\). It is obvious that

$$ e^{-\theta \delta (1+y)x}>e^{-\theta \delta (1-y)x} \quad \text{for } \delta >0, x,y\in (0,1). $$
(2.21)

From (2.20) we see that

$$ \begin{aligned}[b] &\frac{1+y}{1-e^{-\delta (1-y)}}\frac{1}{1-e^{-\delta (1+y)x}}\frac{1}{1-e ^{-\delta (1+y)(1-x)}}\\ &\quad > \frac{1-y}{1-e^{-\delta (1+y)}}\frac{1}{1-e ^{-\delta (1-y)x}}\frac{1}{1-e^{-\delta (1-y)(1-x)}}. \end{aligned} $$
(2.22)

Hence, taking into consideration (2.21) and (2.22), we complete the proof of (2.18).

Next, we consider the case \(\theta \geq \frac{1}{2}\). Then (2.19) can be equivalently written as

$$ \omega (x)< \omega (0), $$
(2.23)

where

$$ \omega (x)= \frac{1-e^{-\delta (1-y) (1-x)}}{1-e^{-\delta (1+y) (1-x)}}\frac{1-e ^{-\delta (1-y) x}}{1-e^{-\delta (1+y) x}}\frac{e^{\theta \delta (1-y) x}}{e^{\theta \delta (1+y) x}}. $$

From part (2) of Lemma 2 it is easy to prove (2.23). Therefore the proof of (2.19) is complete.

The proof of Lemma 3 is complete. □

Using Lemma 3 and limit relations (1.6) and (1.7), we have the following:

Corollary 2

Let m, \(n\in \mathbb{N}\), \(\alpha \geq 0\) and \(\beta \geq \frac{1}{2}\). Then, for all \(x>0\), we have:

$$ 0< \frac{\psi _{m+n}(x+\alpha )}{\psi _{m}(x)\psi _{n}(x+\alpha )}< (m+n-1) (m+n-2)_{n-1}< \frac{ \psi _{m+n}(x)}{\psi _{m}(x+\beta )\psi _{n}(x)}< +\infty , $$

where \((m+n-2)_{n-1}=\frac{(m+n-2)!}{(n-1)! (m-1 )!}\). All bounds are optimal.
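
A numerical spot check of Corollary 2 for the sample parameters m = 2, n = 3, α = 0, β = 1/2 (SciPy-based sketch with arbitrary test points):

```python
# Sketch: a spot check of Corollary 2 for m = 2, n = 3, alpha = 0, beta = 1/2.
from math import factorial
from scipy.special import polygamma

def psi(n, x):
    return (-1) ** (n - 1) * polygamma(n, x)

m, n, alpha, beta = 2, 3, 0.0, 0.5
const = (m + n - 1) * factorial(m + n - 2) / (factorial(n - 1) * factorial(m - 1))

for x in (0.2, 1.0, 10.0):                        # arbitrary test points
    left  = psi(m + n, x + alpha) / (psi(m, x) * psi(n, x + alpha))
    right = psi(m + n, x) / (psi(m, x + beta) * psi(n, x))
    print(0 < left < const < right)               # expected: True
```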

Remark 2

Setting \(m=n=1\) in Corollary 2, we get the well-known inequality

$$ \psi _{1}^{2}(x)-\psi _{2}(x)>0 $$

with the optimal lower bound. This inequality was proved in many papers by different methods (see [8, 12, 15, 16, 34]) and used to establish many significant results related to the gamma and polygamma functions in [12, 13].

3 Proofs of the main results

We give a proof of Theorem 1.1.

Proof

Differentiation yields

$$ F'(x;\alpha ,\beta )=\psi _{1}(x+\beta ) \biggl(\alpha -\frac{\psi _{n+1}(x)}{ \psi _{1}(x+\beta )\psi _{n}(x)} \biggr)=\psi _{1}(x+\beta )F_{\alpha , \beta }(x), $$
(3.1)

and therefore

$$ F''(x;\alpha ,\beta )=F_{\alpha ,\beta }'(x)\psi _{1}(x+\beta )-F_{ \alpha ,\beta }(x)\psi _{2}(x+\beta ). $$
(3.2)

From Lemma 3 we conclude that \(F_{\alpha ,\beta }'(x)>0\) for \(\beta \geq \frac{1}{2}\) and \(F_{\alpha ,\beta }'(x)<0\) for \(\beta \leq 0\). In view of Corollary 2, it follows that \(F_{\alpha ,\beta }(x)<0\) for \(\beta \geq \frac{1}{2}\) and \(\alpha \leq n\) and \(F_{\alpha ,\beta }(x)>0\) for \(\beta \leq 0\) and \(\alpha \geq n\). By relations (3.1) and (3.2) it is easy to see that \(F'(x;\alpha ,\beta )<0\) and \(F''(x;\alpha ,\beta )>0\) for \(\beta \geq \frac{1}{2}\) and \(\alpha \leq n\) and that \(F'(x;\alpha , \beta )>0\) and \(F''(x;\alpha ,\beta )<0\) for \(\beta \leq 0\) and \(\alpha \geq n\).

Next, we assume that \(F(x;\alpha ,\beta )\) is strictly increasing and concave for \(\beta \leq 0\). From (3.1) we obtain \(F_{\alpha , \beta }(x)>0\), so that Lemma 3 and \(\lim_{x\rightarrow \infty }F_{\alpha ,\beta }(x)=\alpha -n\) imply \(\alpha \geq n\).

Besides, from (3.2) it follows that

$$ \alpha -\frac{\psi _{n+2}(x)\psi _{n}(x)-\psi _{n+1}^{2}(x)}{\psi _{n} ^{2}(x)\psi _{2}(x+\beta )}>0. $$
(3.3)

Letting x tend to ∞ and utilizing (1.7), we get \(\alpha \geq n\).

By similar arguments the necessary condition of the case where \(F(x;\alpha ,\beta )\) is strictly decreasing and convex for \(\beta \geq \frac{1}{2}\) can be proved.

The proof of Theorem 1.1 is complete. □

Proof of Theorem 1.2

Using the concavity and convexity in Theorem 1.1 and Jensen’s inequality, we can easily prove the sufficiency of the theorem. Assume that the left-hand side of (1.12) is valid and that the \(x_{m}\) are not all equal. Setting \(x_{2}=\cdots =x_{n}=y\), from (1.12) we get

$$ \alpha _{1}\leq \frac{p_{1}\ln \psi _{k}(x_{1})+(1-p_{1})\ln \psi _{k}(y)- \ln \psi _{k}(p_{1}x_{1}+(1-p_{1})y)}{\psi (p_{1}x_{1}+(1-p_{1})y+\beta _{1})-p_{1} \psi (x_{1}+\beta _{1})-(1-p_{1})\psi (y+\beta _{1})}. $$
(3.4)

Letting \(x_{1}\) tend to y and applying L’Hospital’s rule, we obtain

$$ \alpha _{1}\leq \frac{(p_{1}-1)\psi _{k+1}^{2}(y)-(p_{1}-1)\psi _{k}(y) \psi _{k+2}(y)}{(1-p_{1})\psi _{2}(y+\beta _{1})\psi _{k}^{2}(y)}, $$
(3.5)

so that (1.7) and (3.5) lead to \(\alpha _{1}\leq k\) as y tends to ∞. If the right-hand side of (1.12) is valid, then by letting y tend to ∞ in the reversed inequality of (3.5), we have \(\alpha _{2}\geq k\).

The proof of Theorem 1.2 is complete. □

Next we give a proof of Theorem 1.3.

Proof of Theorem 1.3

Firstly, we need to show that \(M(s)\) is well defined for \(s<1\). It follows from (1.6) and (1.7) that \(\lim_{x\rightarrow \infty } (f_{k+1}(x)-f_{k}(x)+sf_{k}(x) )=0\) and \(\lim_{x\rightarrow 0^{+}} (f_{k+1}(x)-f_{k}(x)+sf_{k}(x) )=s\). According to [32, Lemma 2], it is obvious that \(0\leq M(s)<1\) for \(s<0\) and \(s\leq M(s)<1\) for \(0\leq s<1\).

Since the power mean is increasing with respect to its order (see [33, p. 159]) and \(\psi _{k}(x)\) is decreasing, it suffices to show that (1.13) holds for \(s\geq 1\) and \(r=-s(k+1)\) and for \(s<1\) and \(r=-sk-M(s)\).

In fact, we make use of the techniques of the proof of [32, Thm. 2], which we do not repeat in detail. In view of the proof of [32, Thm. 2], we will prove that \(\varPhi (x)\) is increasing on \((0,\infty )\) for \(s\geq 1\) and \(r=-s(k+1)\) and for \(0\neq s<1\) and \(r=-sk-M(s)\), where

$$ \varPhi (x)=x^{1-r}\psi _{k}^{s-1}(x)\psi _{k+1}(x). $$

Furthermore, we get

$$ \varPhi '(x)=\frac{\varPhi (x)}{x}\varPsi (x), $$
(3.6)

where

$$ \varPsi (x)=1-r-x\frac{\psi _{k+2}(x)}{\psi _{k+1}(x)}+(1-s)x\frac{\psi _{k+1}(x)}{\psi _{k}(x)}. $$

Case 1. \(s\geq 1\) and \(r=-s(k+1)\). From [32, Lemma 2] we conclude that \(\varPsi (x)\) is strictly increasing on \((0,\infty )\), so that \(\varPsi (x)>\lim_{x\rightarrow 0^{+}}\varPsi (x)=0\). Hence \(\varPhi (x)\) is increasing on \((0,\infty )\).
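
As an aside, the positivity of \(\varPsi (x)\) in Case 1 can be observed numerically; the sketch below uses the hypothetical sample values k = 1 and s = 1 (so r = −2) and is not part of the proof:

```python
# Sketch: positivity of Psi(x) in Case 1 for the test values k = 1, s = 1, r = -2.
import numpy as np
from scipy.special import polygamma

def psi(n, x):
    return (-1) ** (n - 1) * polygamma(n, x)

k, s = 1, 1.0
r = -s * (k + 1)
xs = np.logspace(-3, 3, 200)
Psi = (1 - r - xs * psi(k + 2, xs) / psi(k + 1, xs)
       + (1 - s) * xs * psi(k + 1, xs) / psi(k, xs))
print(np.all(Psi > 0), Psi.min())
```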

Case 2. \(0\neq s<1\) and \(r=-sk-M(s)\). Then \(\varPsi (x)\geq 0\) is equivalent to

$$ M(s)\geq f_{k+1}(x)+sf_{k}(x)-f_{k}(x), $$
(3.7)

and therefore \(\varPhi (x)\) is increasing on \((0,\infty )\).

We consider the case \(s=0\). From Theorem 2 of [32] it follows that \(r<0\). Let \(x_{1}\geq x_{2}\geq \cdots \geq x_{n}> 0\). Then we define

$$ F_{m}(x)=F(x,\ldots ,x,x_{m+1},\ldots ,x_{n}), \quad m=1,2,\ldots ,n-1, $$

where

$$ F(x_{1},\ldots ,x_{n})=\sum_{m=1}^{n} p_{m} \ln \psi _{k}(x_{m})- \ln \psi _{k} \Biggl( \Biggl(\sum_{m=1}^{n} p_{m} x_{m}^{r} \Biggr)^{1/r} \Biggr). $$

Differentiating \(F_{m}(x)\) gives

$$ x^{1-r}F'_{m}(x)=\Biggl(\sum _{v=1}^{m} p_{v}\Biggr) \biggl((xz)^{1-r}\frac{\psi _{k+1}(xz)}{\psi _{k}(xz)}-x^{1-r}\frac{\psi _{k+1}(x)}{\psi _{k}(x)} \biggr), $$
(3.8)

where

$$ 0< z= \Biggl(\sum_{v=1}^{m} p_{v}+\sum_{v=m+1}^{n} p_{v}\biggl( \frac{x_{v}}{x}\biggr)^{r} \Biggr)^{1/r}\leq 1. $$

If \(r=-M(0)\), then we conclude that

$$ \biggl(x^{1-r}\frac{\psi _{k+1}(x)}{\psi _{k}(x)}\biggr)'\geq 0, $$

which, together with (3.8), implies \(F'_{m}(x)\leq 0\). By the techniques of the proof in [32, Thm. 2] we have \(F(x_{1}, \ldots ,x_{n})\leq 0\).

We assume that (1.13) is valid for all \(x_{m}>0\) and weights \(p_{m}\) (\(m=1,2,\ldots, n\)). We set \(x_{2}=\cdots =x_{n}=y\) and \(x_{1}=x\) in (1.13). Then, for \(s\neq 0\), we obtain

$$ f(x,y)=\psi _{k} \bigl(\bigl(p_{1}x^{r}+(1-p_{1})y^{r} \bigr)^{1/r} \bigr)- \bigl(p _{1}\psi _{k}^{s}(x)+(1-p_{1}) \psi _{k}^{s}(y) \bigr)^{1/s}\geq 0 $$

with \(f(y,y)=0\). In fact, a direct computation of the partial derivative of \(f(x,y)\) gives

$$ \frac{\partial f(x,y)}{\partial x}\bigg|_{x=y}=0. $$

In combination with \(f(x,y)\geq 0=f(y,y)\), this shows that \(x\mapsto f(x,y)\) attains a minimum at \(x=y\), and hence

$$ \frac{\partial ^{2} f(x,y)}{\partial x^{2}}\bigg|_{x=y}\geq 0, $$

which implies \(\varPsi (y)\geq 0\), or equivalently,

$$ -r\geq f_{k+1}(y)-f_{k}(y)+sy \frac{\psi _{k+1}(y)}{\psi _{k}(y)}. $$
(3.9)

If \(s\geq 1\), then letting y tend to 0 in (3.9) yields \(r\leq -s(k+1)\). For \(0\neq s<1\), from the definition of \(M(s)\) and (3.9) it follows that \(r\leq -sk-M(s)\). Finally, we prove the case \(s=0\). In the same way, from (1.13) we obtain

$$ f(x,y)=\psi _{k}^{p_{1}}(x)\psi _{k}^{1-p_{1}}(y)- \psi _{k} \bigl(\bigl(p_{1}x ^{r}+(1-p_{1})y^{r} \bigr)^{1/r} \bigr)\leq 0. $$

Since \(f(y,y)=0\) and

$$ \frac{\partial f(x,y)}{\partial x}\bigg|_{x=y}=0, $$

we obtain

$$ \frac{\partial ^{2} f(x,y)}{\partial x^{2}}\bigg|_{x=y}=-r-f _{k+1}(y)+f_{k}(y)\geq 0. $$
(3.10)

Hence (3.10), together with the definition of \(M(s)\), leads to \(r\leq -M(0)\).

The proof of Theorem 1.3 is complete. □

Remark 3

Define

$$ m(s)=\inf_{x>0} \bigl(f_{k+1}(x)-f_{k}(x)+sf_{k}(x) \bigr), $$

where \(f_{k}(x)\) is defined in Theorem 1.3. From [32, Lemma 2] we get

$$ \biggl(\ln \biggl(x\frac{\psi _{k+1}(x)}{\psi _{k}(x)} \biggr) \biggr)'< 0, $$

which implies \(f_{k+1}(x)-f_{k}(x)>0\). For \(s\geq 0\), it is easy to see that

$$ \bigl(f_{k+1}(x)-f_{k}(x)+sf_{k}(x) \bigr)>0. $$

Since \(\lim_{x\rightarrow \infty } (f_{k+1}(x)-f_{k}(x)+sf_{k}(x) )=0\), we conclude that \(m(s)=0\). For \(s<0\), from [32, Lemma 2] and (1.6) it follows that

$$ f_{k+1}(x)-f_{k}(x)+sf_{k}(x)>s, $$

which, together with \(\lim_{x\rightarrow 0^{+}} (f_{k+1}(x)-f_{k}(x)+sf_{k}(x) )=s\), implies \(m(s)=s\). As a consequence, the reversed inequality of (1.13) is valid for all \(x_{m}>0\) and weights \(p_{m}\) (\(m=1,2,\ldots, n\)) if and only if \(r\geq -sk-m(s)\).

4 Inequalities

In this section, we use Theorem 1.1 to derive some new inequalities for the polygamma functions.

We recall that a function f is said to be superadditive on an interval I if

$$ f(x)+f(y)\leq f(x+y)\quad \text{for all } x,y\in I \text{ with } x+y \in I. $$

If −f is superadditive, then f is called subadditive on I (see [36]). A function f is said to be star-shaped on \((0,\infty )\) if

$$ f(ax)\leq af(x) $$

is valid for all \(x>0\) and \(a\in (0,1)\). These two notions are closely related: every star-shaped function is superadditive (see [37,38,39]). In particular, Trimble et al. [39] showed that a function \(f(x)\) is star-shaped on \((0,\infty )\) if and only if \(f(x)/x\) is nondecreasing on \((0,\infty )\).

Theorem 4.1

For \(n\in \mathbb{N}\) and \(\beta \in \mathbb{R}\), let \(G(x)=e^{n \psi (x+\beta )}\psi _{n}(x)\).

  1. (1)

    For \(\beta \leq 0\), we have the inequality

    $$ \frac{1}{(n-1)!}< \frac{G(x+y)}{G(x)G(y)}< +\infty $$
    (4.1)

    for \(x,y>-\beta \).

  2. (2)

    For \(\beta \geq \frac{1}{2}\), we have the inequality

    $$ 0< \frac{G(x+y)}{G(x)G(y)}< \frac{1}{(n-1)!} $$
    (4.2)

    for \(x,y>0\). All bounds are optimal.

Proof

Since the proof of (4.2) is similar to that of (4.1), we only prove (4.1). We define

$$ g(x,y)=\frac{G(x+y)}{G(x)}. $$

Partial differentiation yields

$$ \frac{\partial g(x,y)}{\partial x}=\frac{G(x+y)}{G(x)} \bigl(g_{1}(x+y)-g _{1}(x) \bigr), $$

where \(g_{1}(x)=G'(x)/G(x)\). For \(\beta \leq 0\), due to Theorem 1.1, we conclude that \(g_{1}(x)\) is decreasing on \((-\beta , \infty )\), so that \(\partial g(x,y)/\partial x<0\). By the double inequality

$$ \ln x-\frac{1}{x}< \psi (x)< \ln x-\frac{1}{2x}, $$

which can be derived from [40], we get \(\lim_{x\rightarrow \infty }G(x)=(n-1)!\) for all \(\beta \in \mathbb{R}\), \(\lim_{x\rightarrow 0}G(x)=\infty \) for \(\beta >0\), and \(\lim_{x\rightarrow -\beta }G(x)=0\) for \(\beta \leq 0\). Together with \(\partial g(x,y)/\partial x<0\) and Theorem 1.1, this establishes

$$ \frac{G(x)}{(n-1)!}< 1< g(x,y)< +\infty , $$

from which inequality (4.1) easily follows. Finally, we check that the bounds of (4.1) are optimal. From

$$ \lim_{x\rightarrow \infty }\lim_{y\rightarrow \infty } \frac{G(x+y)}{G(x)G(y)}= \frac{1}{(n-1)!} $$

and

$$ \lim_{x\rightarrow -\beta }\lim_{y\rightarrow \infty } \frac{G(x+y)}{G(x)G(y)}=+ \infty $$

we get the desired result. □

Theorem 4.1 immediately leads to the upper and lower bounds for the ratio \(\psi _{n}(x)\psi _{n}(y)/ \psi _{n}(x+y)\).

Corollary 3

Let \(n\in \mathbb{N}\). We have the double inequality

$$ \biggl(\frac{e^{\psi (x+y+1/2)}}{e^{\psi (x+1/2)+\psi (y+1/2)}} \biggr) ^{n}< \frac{\psi _{n}(x)\psi _{n}(y)}{(n-1)!\psi _{n}(x+y)}< \biggl( \frac{e ^{\psi (x+y)}}{e^{\psi (x)+\psi (y)}} \biggr)^{n} $$

for \(x, y>0\).
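
A numerical spot check of Corollary 3 for n = 2 (SciPy-based sketch; the pairs (x, y) are arbitrary):

```python
# Sketch: a spot check of Corollary 3 for n = 2 at a few arbitrary pairs (x, y).
from math import factorial, exp
from scipy.special import polygamma, digamma

n = 2
psi_n = lambda t: (-1) ** (n - 1) * polygamma(n, t)

for x, y in [(0.4, 0.9), (1.0, 2.5), (3.0, 7.0)]:
    mid   = psi_n(x) * psi_n(y) / (factorial(n - 1) * psi_n(x + y))
    lower = exp(n * (digamma(x + y + 0.5) - digamma(x + 0.5) - digamma(y + 0.5)))
    upper = exp(n * (digamma(x + y) - digamma(x) - digamma(y)))
    print(lower < mid < upper)                    # expected: True
```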

From Theorem 4.1 we observe that, for \(\beta \leq 0\), the function \(\xi _{1}(x)=\ln (e^{n\psi (x)}\psi _{n}(x-\beta ))-\ln (n-1)!\) is superadditive on \((0,\infty )\), and for \(\beta \geq \frac{1}{2}\), the function \(\xi _{2}(x)=\ln (e^{n\psi (x+\beta )}\psi _{n}(x))-\ln (n-1)!\) is subadditive on \((0,\infty )\).

In fact, \(\xi _{1}(x)\) and \(-\xi _{2}(x)\) are not only superadditive but also star-shaped on \((0,\infty )\).

Theorem 4.2

The functions \(\xi _{1}(x)\) and \(-\xi _{2}(x)\) are both star-shaped on \((0,\infty )\).

Proof

Since a function \(f(x)\) is star-shaped on \((0,\infty )\) if and only if \(f(x)/x\) is nondecreasing on \((0,\infty )\) (see [39]), it suffices to prove that \(\xi _{1}(x)/x\) and \(-\xi _{2}(x)/x\) are increasing on \((0,\infty )\). Since

$$ \biggl(\frac{\xi _{1}(x)}{x} \biggr)'=\frac{x\xi '_{1}(x)-\xi _{1}(x)}{x ^{2}}, \qquad \biggl(-\frac{\xi _{2}(x)}{x} \biggr)'=\frac{-x\xi '_{2}(x)+\xi _{2}(x)}{x ^{2}}, $$

by Theorem 1.1 we complete the proof of the theorem: since \(\xi _{1}(x)\) is concave, the function \(x\xi _{1}'(x)-\xi _{1}(x)\) is decreasing on \((0,\infty )\) and, in view of the asymptotic formulas (1.4) and (1.5), tends to 0 as \(x\rightarrow \infty \), so it is positive and \((\xi _{1}(x)/x)'>0\); the argument for \(-\xi _{2}(x)/x\) is similar. □

Remark 4

Theorems 4.1 and 4.2 provide two distinct ways to prove that \(\xi _{1}(x)\) and \(-\xi _{2}(x)\) are superadditive.
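
The star-shape criterion of [39] used above can also be observed numerically: the sketch below (SciPy-based, with the arbitrary sample choices n = 2, β = −1 for \(\xi _{1}\) and β = 1/2 for \(\xi _{2}\)) checks that \(\xi _{1}(x)/x\) and \(-\xi _{2}(x)/x\) are nondecreasing on a grid:

```python
# Sketch: the star-shape criterion of [39] for xi_1 (n = 2, beta = -1) and
# xi_2 (n = 2, beta = 1/2), checked on a finite grid.
import numpy as np
from math import factorial
from scipy.special import polygamma, digamma

n = 2
log_psi_n = lambda t: np.log((-1) ** (n - 1) * polygamma(n, t))

xs = np.linspace(0.05, 40.0, 800)

beta1 = -1.0                                      # beta <= 0 case
xi1 = n * digamma(xs) + log_psi_n(xs - beta1) - np.log(factorial(n - 1))
print(np.all(np.diff(xi1 / xs) >= 0))             # xi_1(x)/x nondecreasing

beta2 = 0.5                                       # beta >= 1/2 case
xi2 = n * digamma(xs + beta2) + log_psi_n(xs) - np.log(factorial(n - 1))
print(np.all(np.diff(-xi2 / xs) >= 0))            # -xi_2(x)/x nondecreasing
```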

5 Conclusions

In this paper, Theorem 1.1 gives necessary and sufficient conditions on \(\alpha \in \mathbb{R}\) for the function \(F(x;\alpha ,\beta )\) to be strictly increasing (decreasing) and concave (convex) on \((\max (0,-\beta ),\infty )\) for \(\beta \in (-\infty ,0]\) (\(\beta \in [1/2,\infty )\)). Consequently, Theorem 1.1 provides an alternative proof of [34, Thm. 2.1] and yields a double mean-value inequality (Theorem 1.2). Theorem 1.3 gives necessary and sufficient conditions under which the reversed inequality of (1.10) is valid, which enhances Theorem 2 of [32].

Moreover, we find several new inequalities for the polygamma functions. In Theorem 4.1, we prove two sharp inequalities for the function \(G(x)\), which determine upper and lower bounds for the ratio \(\psi _{n}(x)\psi _{n}(y)/\psi _{n}(x+y)\) (Corollary 3). Finally, two star-shaped functions are presented in Theorem 4.2.