As discussed in the first chapter, the main objective of our work is to generalize Krull-Webster’s theory to multiple \(\log \Gamma \)-type functions and explore the properties of these functions that are analogues of classical properties of the gamma function.

In the previous chapters, we have presented and discussed several results related to these functions, including their differentiation and integration properties as well as important results on their asymptotic behaviors.

We are now in a position to explore further properties of multiple \(\log \Gamma \)-type functions. More precisely, in this chapter we provide for these functions analogues of Euler’s infinite product, Euler’s reflection formula, Gauss’ multiplication formula, Gautschi’s inequality, Raabe’s formula, Wallis’s product formula, Webster’s functional equation, and Weierstrass’ infinite product for the gamma function. We also discuss analogues of Fontana-Mascheroni’s series and Gauss’ digamma theorem and provide a Gregory’s formula-based series representation, a general asymptotic expansion formula, and a few related results.

8.1 Eulerian Form

Let g lie in \(\mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\). As we already observed in Chap. 1, the representation of Σg as the pointwise limit of the sequence \(n\mapsto f^p_n[g]\) is the analogue of Gauss’ limit for the gamma function. Using identity (3.8), we immediately see that this form of Σg can be translated into a series, namely

$$\displaystyle \begin{aligned} \Sigma g(x) ~=~ f^p_1[g](x) -\sum_{k=1}^{\infty}\rho^{p+1}_k[g](x),\qquad x>0. \end{aligned} $$
(8.1)

It is a simple exercise to see that, when \(g(x)=\ln x\) and p = 1, this latter formula reduces to the following series representation of the log-gamma function

$$\displaystyle \begin{aligned} \ln\Gamma(x) ~=~ -\ln x-\sum_{k=1}^{\infty}\textstyle{\left(\ln(x+k)-\ln k-x\ln\left(1+\frac{1}{k}\right)\right)}. \end{aligned} $$
(8.2)

Its multiplicative version is nothing other than the classical Eulerian form (or Euler’s product form) of the gamma function (see, e.g., Srivastava and Choi [93, p. 3]). We recall this form in the following proposition.

Proposition 8.1 (Eulerian Form of the Gamma Function)

The following identity holds

$$\displaystyle \begin{aligned} \Gamma(x) ~=~ \frac{1}{x}\,\prod_{k=1}^{\infty}\frac{(1+1/k)^x}{1+x/k}{\,},\qquad x>0. \end{aligned}$$
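This infinite product converges rather slowly, which is easy to observe numerically. The following Python sketch (the helper name euler_gamma is ours; only the standard library is used) evaluates a truncated product in log-space and compares it with math.gamma:

```python
import math

def euler_gamma(x, n_terms=100_000):
    """Partial Euler product for Gamma(x): (1/x) * prod_{k=1}^{N} (1+1/k)^x / (1+x/k).

    Computed in log-space to avoid overflow; the truncation error decays only like C/N.
    """
    log_p = -math.log(x)
    for k in range(1, n_terms + 1):
        log_p += x * math.log1p(1.0 / k) - math.log1p(x / k)
    return math.exp(log_p)

# The partial products approach Gamma(x) as the number of factors grows.
print(euler_gamma(2.5), math.gamma(2.5))
```

With \(10^5\) factors the partial product agrees with \(\Gamma (x)\) to about four or five decimal places, consistent with the O(1/N) decay of the truncated tail.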

We thus see that, for any multiple \(\log \Gamma \)-type function, the series representation (8.1) is the analogue of the Eulerian form of the gamma function in the additive notation. Moreover, we have shown in Theorem 7.5 that this series can be differentiated term by term on \(\mathbb {R}_+\). We have also shown in Proposition 5.18 that this series can be integrated term by term on any bounded interval of [0, ∞). Let us state these important facts in the following theorem.

Theorem 8.2 (Eulerian Form)

Let g lie in \(\mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\) . The following assertions hold.

(a)

    For any x > 0 we have

    $$\displaystyle \begin{aligned} \Sigma g(x) ~=~ -g(x)+\sum_{j=1}^p{\textstyle{{{x}\choose{j}}}}{\,}\Delta^{j-1}g(1) - \sum_{k=1}^{\infty}\left(g(x+k)-\sum_{j=0}^p{\textstyle{{{x}\choose{j}}}}{\,}\Delta^jg(k)\right) \end{aligned}$$

    and the series converges uniformly on any bounded subset of [0, ∞).

(b)

    If g lies in \(\mathcal {C}^0\), then Σg lies in \(\mathcal {C}^0\) and the series above can be (repeatedly) integrated term by term on any bounded interval of [0, ∞).

(c)

    If g lies in \(\mathcal {C}^r\cap \mathcal {K}^{\max \{p,r\}}\) for some \(r\in \mathbb {N}\) , then Σg lies in \(\mathcal {C}^r\) and the series above can be differentiated term by term up to r times.

Proof

Assertion (a) follows from identity (3.8) and the existence Theorem 3.6 (see also Remark 3.7). Assertion (b) follows from Proposition 5.18, especially its assertion (c2), and Remark 5.19. Assertion (c) follows from Theorem 7.5. □

Example 8.3

Let us apply Theorem 8.2 to \(g(x)=\ln x\) and p = 1. We immediately retrieve identity (8.2). Upon differentiation, we also obtain

$$\displaystyle \begin{aligned} \psi(x) ~=~ -\frac{1}{x}-\sum_{k=1}^{\infty}\left(\frac{1}{x+k}-\ln\left(1+\frac{1}{k}\right)\right) \end{aligned}$$

and, for any \(r\in \mathbb {N}^*\),

$$\displaystyle \begin{aligned} \psi_r(x) ~=~ (-1)^{r+1}{\,}r!\,\sum_{k=0}^{\infty}\frac{1}{(x+k)^{r+1}} ~=~ (-1)^{r+1}{\,}r!\,\zeta(r+1,x). \end{aligned}$$

Integrating on (0, x), we obtain

$$\displaystyle \begin{aligned} \psi_{-2}(x) ~=~ x-x\ln x-\sum_{k=1}^{\infty}\left((x+k)\ln\left(1+\frac{x}{k}\right) -x-\frac{x^2}{2}\ln\left(1+\frac{1}{k}\right)\right). \end{aligned}$$

Integrating once more on (0, x), we obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} & &\displaystyle {\psi_{-3}(x) ~=~ \frac{1}{4}{\,}x^2(3-2\ln x)}\\ & &\displaystyle -\sum_{k=1}^{\infty}\left(\frac{1}{2}(x+k)^2\ln\left(1+\frac{x}{k}\right) -\frac{k}{2}{\,}x-\frac{3}{4}{\,}x^2-\frac{1}{6}{\,}x^3\ln\left(1+\frac{1}{k}\right)\right). \end{array} \end{aligned} $$

We can actually integrate both sides on (0, x) repeatedly as we wish. \(\lozenge \)
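The series for ψ obtained above lends itself to a direct numerical check. The sketch below (function names are ours) truncates the series and compares the result with a central difference of math.lgamma, which serves as an independent reference value for the digamma function:

```python
import math

def psi_series(x, n_terms=500_000):
    """Truncation of psi(x) = -1/x - sum_{k>=1} (1/(x+k) - ln(1 + 1/k))."""
    s = -1.0 / x
    for k in range(1, n_terms + 1):
        s -= 1.0 / (x + k) - math.log1p(1.0 / k)
    return s

def psi_reference(x, h=1e-5):
    """Independent reference value: central difference of the log-gamma function."""
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

print(psi_series(3.5), psi_reference(3.5))
```

In particular, psi_series(1.0) approaches −γ, in agreement with the classical value ψ(1) = −γ.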

8.2 Weierstrassian Form

In the following proposition, we recall an alternative infinite product representation of the gamma function, which was proposed by Weierstrass. This representation is usually called the Weierstrass factorization of the gamma function or the Weierstrass canonical product form of the gamma function (see Artin [11, pp. 15–16] and Srivastava and Choi [93, p. 1]).

Proposition 8.4 (Weierstrassian Form of the Gamma Function)

The following identity holds

$$\displaystyle \begin{aligned} \Gamma(x) ~=~ \frac{e^{-\gamma x}}{x}\,\prod_{k=1}^{\infty}\frac{e^{\frac{x}{k}}}{1+\frac{x}{k}}{\,},\qquad x>0. \end{aligned} $$
(8.3)
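The factorization (8.3) can also be checked numerically. The sketch below (helper name ours; Euler's constant is hardcoded to standard precision) evaluates a truncated Weierstrass product in log-space:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler's constant

def weierstrass_gamma(x, n_terms=100_000):
    """Partial Weierstrass product: Gamma(x) = e^{-gamma x}/x * prod_{k>=1} e^{x/k}/(1+x/k)."""
    log_p = -EULER_GAMMA * x - math.log(x)
    for k in range(1, n_terms + 1):
        log_p += x / k - math.log1p(x / k)
    return math.exp(log_p)

print(weierstrass_gamma(2.5), math.gamma(2.5))
```

The truncation error of the log of the product behaves like x²/(2N), so \(10^5\) factors already give four or five correct decimal places.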

We now show that this factorization can be generalized to any \(\log \Gamma _p\)-type function that is of class \(\mathcal {C}^p\). This new result is presented in the following two theorems, which deal with the cases p = 0 and p ≥ 1 separately. We observe that the special case when p = 1 was previously established by John [49, Theorem B’] and in the multiplicative notation by Webster [98, Theorem 7.1].

It is important to note that, just as in Theorem 8.2, the partial sums that define the series of the theorems below are nothing other than the sequence \(n\mapsto f^p_n[g](x)\). Thus, these series can be integrated and differentiated term by term.

Theorem 8.5 (Weierstrassian Form When deg g = −1)

Let g lie in \(\mathcal {C}^0\cap \mathcal {D}^0\cap \mathcal {K}^0\) . The following assertions hold.

(a)

    We have γ[g] = σ[g].

(b)

    For any x > 0 we have

    $$\displaystyle \begin{aligned} \Sigma g(x) ~=~ \sigma[g]-g(x)-\sum_{k=1}^{\infty}\left(g(x+k)-\int_k^{k+1}g(t){\,}dt\right) \end{aligned}$$

    and the series converges uniformly on any bounded subset of [0, ∞).

(c)

    The function Σg lies in \(\mathcal {C}^0\) and the series above can be (repeatedly) integrated term by term on any bounded interval of [0, ∞).

(d)

    If g lies in \(\mathcal {C}^r\cap \mathcal {K}^r\) for some \(r\in \mathbb {N}\) , then Σg lies in \(\mathcal {C}^r\) and the series above can be differentiated term by term up to r times.

Proof

Assertion (a) follows from Proposition 6.36. Assertion (b) follows from Theorem 8.2 and identity (6.43). Assertions (c) and (d) follow from Theorem 8.2. □

To establish the second theorem (the case when deg g ≥ 0), we need the following technical lemma.

Lemma 8.6

Let g lie in \(\mathcal {C}^1\cap \mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}^*\) . Then

$$\displaystyle \begin{aligned} \Delta g(x) -\sum_{j=0}^{p-2}G_j\Delta^jg'(x) ~\to ~ 0\qquad \mathit{\mbox{as }}x\to\infty. \end{aligned}$$

If, in addition, \(g\in \mathcal {C}^{p-1}\) , then

$$\displaystyle \begin{aligned} \Delta^{p-1}g(x)-g^{(p-1)}(x) ~\to ~0\qquad \mathit{\mbox{as }}x\to\infty. \end{aligned}$$

Proof

By Proposition 4.12, we have that g′ lies in \(\mathcal {C}^0\cap \mathcal {D}^{p-1}\cap \mathcal {K}^{p-1}\). The first convergence result then follows immediately from the application of (6.22) to g′. That is,

$$\displaystyle \begin{aligned} J^{p-1}[g'](x) ~\to ~0\qquad \mbox{as }x\to\infty. \end{aligned}$$

Let us now assume that \(g\in \mathcal {C}^{p-1}\). By Propositions 4.11 and 4.12, for every i ∈{0, …, p − 2} the function

$$\displaystyle \begin{aligned} g_i ~=~ \Delta^ig^{(p-2-i)} \end{aligned}$$

lies in \(\mathcal {C}^1\cap \mathcal {D}^2\cap \mathcal {K}^2\) and hence, applying the first result to \(g_i\), we obtain that

$$\displaystyle \begin{aligned} \Delta g_i(x)-g_i^{\prime}(x) ~\to ~0\qquad \mbox{as }x\to\infty. \end{aligned}$$

Summing these limits for i = 0, …, p − 2, we obtain the claimed limit. □

Theorem 8.7 (Weierstrassian Form When deg g ≥ 0)

Let g lie in \(\mathcal {C}^p\cap \mathcal {D}^p\cap \mathcal {K}^p\) with deg g = p − 1 for some \(p\in \mathbb {N}^*\) . The following assertions hold.

(a)

    We have \(\gamma [g^{(p)}] = \sigma [g^{(p)}] = g^{(p-1)}(1)-(\Sigma g)^{(p)}(1)\).

(b)

    For any x > 0 we have

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \Sigma g(x) & =&\displaystyle \sum_{j=1}^{p-1}{\textstyle{{{x}\choose{j}}}}{\,}\Delta^{j-1}g(1)+{\textstyle{{{x}\choose{p}}}}(\Sigma g)^{(p)}(1)\\ & &\displaystyle -g(x)- \sum_{k=1}^{\infty}\left(g(x+k)-\sum_{j=0}^{p-1}{\textstyle{{{x}\choose{j}}}}{\,}\Delta^jg(k)-{\textstyle{{{x}\choose{p}}}}g^{(p)}(k)\right) \end{array} \end{aligned} $$

    and the series converges uniformly on any bounded subset of [0, ∞).

(c)

    The function Σg lies in \(\mathcal {C}^p\) and the series above can be (repeatedly) integrated term by term on any bounded interval of [0, ∞).

(d)

    If g lies in \(\mathcal {C}^{\max \{p,r\}}\cap \mathcal {K}^{\max \{p,r\}}\) for some \(r\in \mathbb {N}\) , then Σg lies in \(\mathcal {C}^{\max \{p,r\}}\) and the series above can be differentiated term by term up to \(\max \{p,r\}\) times.

Proof

By Proposition 4.12, we have that g (p) lies in \(\mathcal {C}^0\cap \mathcal {D}^0\cap \mathcal {K}^0\). Assertion (a) then follows from Propositions 6.36 and 7.7. Now, using (6.43) we get

$$\displaystyle \begin{aligned} \gamma[g^{(p)}] ~=~ \sum_{k=1}^{\infty}(g^{(p)}(k)-\Delta g^{(p-1)}(k)).\end{aligned} $$

Using Theorem 8.2, we then obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} \Sigma g(x) & =&\displaystyle \sum_{j=1}^{p-1}{\textstyle{{{x}\choose{j}}}}{\,}\Delta^{j-1}g(1)+{\textstyle{{{x}\choose{p}}}}\left(g^{(p-1)}(1)-\gamma[g^{(p)}]\right)\\ & &\displaystyle -g(x)- \lim_{n\to\infty}\sum_{k=1}^{n-1}\left(g(x+k)-\sum_{j=0}^{p-1}{\textstyle{{{x}\choose{j}}}}{\,}\Delta^jg(k)-{\textstyle{{{x}\choose{p}}}}{\,}g^{(p)}(k)\right)\\ & &\displaystyle + \lim_{n\to\infty}{\textstyle{{{x}\choose{p}}}}\left(\Delta^{p-1}g(n)-g^{(p-1)}(n)\right), \end{array} \end{aligned} $$

where the latter limit is zero by Lemma 8.6. This proves assertion (b). Assertions (c) and (d) follow from Theorem 8.2. □

Example 8.8

Let us apply Theorem 8.7 to \(g(x)=\ln x\) and p = 1. We immediately get

$$\displaystyle \begin{aligned} \ln\Gamma(x) ~=~ -\gamma x-\ln x-\sum_{k=1}^{\infty}\left(\ln(x+k)-\ln k-\frac{x}{k}\right), \end{aligned}$$

which is the additive version of the Weierstrassian form (8.3) of the gamma function. It is remarkable that we can now retrieve this formula in an effortless way. Upon differentiation, we also obtain (see, e.g., Srivastava and Choi [93, p. 24])

$$\displaystyle \begin{aligned} \psi(x) ~=~ -\gamma-\frac{1}{x}-\sum_{k=1}^{\infty}\left(\frac{1}{x+k}-\frac{1}{k}\right). \end{aligned}$$

Integrating on (0, x), we obtain

$$\displaystyle \begin{aligned} \psi_{-2}(x) ~=~ -\gamma\,\frac{x^2}{2}+x-x\ln x -\sum_{k=1}^{\infty}\left((x+k)\ln\left(1+\frac{x}{k}\right)-x-\frac{x^2}{2k}\right). \end{aligned}$$

Integrating once more on (0, x), we obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} & &\displaystyle {\psi_{-3}(x) ~=~ \frac{1}{12}{\,}x^2(9-2\gamma x-6\ln x)}\\ & &\displaystyle -\sum_{k=1}^{\infty}\left(\frac{1}{2}(x+k)^2\ln\left(1+\frac{x}{k}\right) -\frac{k}{2}{\,}x-\frac{3}{4}{\,}x^2-\frac{x^3}{6k}\right). \end{array} \end{aligned} $$

Just as in Example 8.3, we can integrate both sides on (0, x) repeatedly as we wish. \(\lozenge \)

Let us end this section with an aside about some potential consequences of the technical Lemma 8.6.

Remark 8.9

If g lies in \(\mathcal {C}^1\cap \mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}^*\), then by Propositions 4.8 and 4.12 we have \(g'\in \mathcal {R}^{p-1}_{\mathbb {R}}\). That is, for any a ≥ 0

$$\displaystyle \begin{aligned} g'(x+a)-\sum_{j=0}^{p-2}{\textstyle{{{a}\choose{j}}}}\Delta^jg'(x) ~\to ~ 0\qquad \mbox{as }x\to\infty. \end{aligned}$$

Combining this result with the first part of Lemma 8.6, we can derive surprising limits. For instance, we obtain for any p ∈{1, 2, 3}

$$\displaystyle \begin{aligned} \textstyle{\Delta g(x)-g'\left(x+\frac{1}{2}\right)} ~\to ~0\qquad \mbox{as }x\to\infty. \end{aligned}$$

This latter limit has the following interpretation. The mean value theorem tells us that \(\Delta g(x) = g'(x+\xi _x)\) for some \(\xi _x\in (0,1)\). The limit above then says that

$$\displaystyle \begin{aligned} \textstyle{g'(x+\xi_x)-g'(x+\frac{1}{2})} ~\to ~ 0\quad \mbox{as }x\to\infty. \end{aligned}$$

In particular, if g lies in \(\mathcal {C}^2\) and for instance eventually satisfies g″(x) ≥ c for some c > 0, then

$$\displaystyle \begin{aligned} \begin{array}{rcl} c\left|\xi_x-\frac{1}{2}\right| & \leq &\displaystyle \left|\int_{\frac{1}{2}}^{\xi_x}g''(x+t){\,}dt\right|\\ & =&\displaystyle |\textstyle{g'(x+\xi_x)-g'(x+\frac{1}{2})}| ~\to ~ 0\quad \mbox{as }x\to\infty, \end{array} \end{aligned} $$

which shows that \(\xi _x\to \frac {1}{2}\) as x → ∞. \(\lozenge \)
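For \(g=\ln \), this mean-value point is completely explicit: solving \(\Delta \ln x = 1/(x+\xi _x)\) gives \(\xi _x = 1/\ln (1+1/x)-x\), and a short expansion suggests \(\xi _x = \frac {1}{2}-\frac {1}{12x}+O(x^{-2})\). The following sketch (function name ours) displays the convergence to \(\frac {1}{2}\):

```python
import math

def xi(x):
    """Mean-value point of ln on [x, x+1]: solves Delta ln x = 1/(x + xi)."""
    return 1.0 / math.log1p(1.0 / x) - x

for x in (1.0, 10.0, 100.0, 1000.0):
    print(x, xi(x))
```

The observed gap \(\frac {1}{2}-\xi _x\) shrinks roughly like 1/(12x), in line with the expansion above.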

8.3 Gregory’s Formula-Based Series Representation

The following proposition provides series expressions for Σg and σ[g] in terms of Gregory’s coefficients (see also Proposition D.2 in Appendix D). This proposition follows from the next lemma, which in turn immediately follows from Corollary 6.12.

Lemma 8.10

Let g lie in \(\mathcal {C}^0\cap \mathcal {D}^p\cap \mathcal {K}^q\) for some \(p,q\in \mathbb {N}\) such that p ≤ q. Let x > 0 be so that for k = p, …, q the function g is k-convex or k-concave on [x, ∞). Then we have

$$\displaystyle \begin{aligned} |J^{k+1}[\Sigma g](x)| ~\leq ~ \overline{G}_k{\,}|\Delta^k g(x)|{\,},\qquad k=p,\ldots,q. \end{aligned}$$

Proposition 8.11

Let g lie in \(\mathcal {C}^0\cap \mathcal {D}^p\cap \mathcal {K}^{\infty }\) for some \(p\in \mathbb {N}\). Let x > 0 be so that for every integer q ≥ p the function g is q-convex or q-concave on [x, ∞). Suppose also that the sequence \(q\mapsto \Delta ^qg(x)\) is bounded. Then we have

$$\displaystyle \begin{aligned} J^{q+1}[\Sigma g](x) ~\to ~ 0\qquad \mathit{\mbox{as }}q\to_{\mathbb{N}}\infty, \end{aligned}$$

that is,

$$\displaystyle \begin{aligned} \Sigma g(x) ~=~ \sigma[g]+\int_1^xg(t){\,}dt-\sum_{n=1}^{\infty}G_n\,\Delta^{n-1}g(x). \end{aligned} $$
(8.4)

In particular, if the assumptions above are satisfied for x = 1, then we have

$$\displaystyle \begin{aligned} \sigma[g] ~=~ \sum_{n=1}^{\infty}G_n\,\Delta^{n-1}g(1). \end{aligned} $$
(8.5)

Proof

This result is an immediate consequence of Lemma 8.10 and the fact that the sequence \(n\mapsto \overline {G}_n\) decreases to zero. Identity (8.4) then follows from (6.18). □

Example 8.12

Applying Proposition 8.11 to the function \(g(x)=\ln x\) with p = 1, we obtain the following series representation of the log-gamma function for x > 0

$$\displaystyle \begin{aligned} \begin{array}{rcl} \ln\Gamma(x) & =&\displaystyle \frac{1}{2}\ln(2\pi)-x+x\ln x-\sum_{n=0}^{\infty}G_{n+1}\,\Delta^n\ln x{}\\ & =&\displaystyle \frac{1}{2}\ln(2\pi)-x+x\ln x-\sum_{n=0}^{\infty}|G_{n+1}|\,\sum_{k=0}^n(-1)^k{\textstyle{{{n}\choose{k}}}}{\,}\ln(x+k), \end{array} \end{aligned} $$
(8.6)

where we have used the classical identity (see, e.g., Graham et al. [41, p. 188])

$$\displaystyle \begin{aligned} \Delta^n f(x) ~=~ \sum_{k=0}^n(-1)^{n-k}\,{\textstyle{{{n}\choose{k}}}}{\,}f(x+k). \end{aligned}$$

Equivalently, using the Binet function J(x), identity (8.6) can take the form

$$\displaystyle \begin{aligned} J(x) ~=~ -\sum_{n=1}^{\infty}|G_{n+1}|\,\sum_{k=0}^n(-1)^k{\textstyle{{{n}\choose{k}}}}{\,}\ln(x+k),\qquad x>0, \end{aligned}$$

where, for any \(n\in \mathbb {N}^*\), the inner sum also reduces to the following integral (see, e.g., [41, p. 192])

$$\displaystyle \begin{aligned} (-1)^n\Delta^n\ln x ~=~ -\int_0^{\infty}\frac{e^{-xt}}{t}\left(1-e^{-t}\right)^n dt{\,},\qquad n\in\mathbb{N}^*. \end{aligned}$$

In particular,

$$\displaystyle \begin{aligned} |\Delta^n\ln x| ~\leq ~ \int_0^{\infty}\frac{e^{-xt}}{t}\left(1-e^{-t}\right) dt ~=~ \Delta\ln x ~=~ \ln\left(1+\frac{1}{x}\right). \end{aligned}$$

In the multiplicative notation, identity (8.6) takes the following form

$$\displaystyle \begin{aligned} \begin{array}{rcl} \Gamma(x) & =&\displaystyle \sqrt{2\pi}{\,}e^{-x}{\,}x^{x-\frac{1}{2}}\left(\frac{x+1}{x}\right)^{\frac{1}{12}} \left(\frac{(x+2)x}{(x+1)^2}\right)^{-\frac{1}{24}}\\ & &\displaystyle \times \left(\frac{(x+3)(x+1)^3}{(x+2)^3x}\right)^{\frac{19}{720}}\ldots. \end{array} \end{aligned} $$

Further infinite product representations and approximations of the gamma function can be found for instance in Feng and Wang [36]. \(\lozenge \)
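Identity (8.6) can be tested numerically. The sketch below (all names ours) computes the Gregory coefficients exactly from the recurrence obtained by multiplying \(z/\ln (1+z)\) with \(\ln (1+z)/z\); since the alternating binomial sums suffer from cancellation for large n, only the first twenty terms of the series are used:

```python
import math
from fractions import Fraction

def gregory_coeffs(n_max):
    """Gregory coefficients G_n from z/ln(1+z) = 1 + sum_{n>=1} G_n z^n."""
    G = [Fraction(1)]
    for n in range(1, n_max + 1):
        G.append(-sum(G[k] * Fraction((-1) ** (n - k), n - k + 1) for k in range(n)))
    return G

G = gregory_coeffs(20)
assert G[1] == Fraction(1, 2) and G[2] == Fraction(-1, 12) and G[3] == Fraction(1, 24)

def lgamma_gregory(x, n_max=20):
    """Truncation of identity (8.6) for lnGamma(x)."""
    s = 0.5 * math.log(2 * math.pi) - x + x * math.log(x)
    for n in range(n_max):
        inner = sum((-1) ** k * math.comb(n, k) * math.log(x + k) for k in range(n + 1))
        s -= abs(float(G[n + 1])) * inner
    return s

print(lgamma_gregory(3.0), math.lgamma(3.0))
```

The exponents \(\frac {1}{12}\), \(-\frac {1}{24}\), \(\frac {19}{720}\) appearing in the multiplicative form above are recovered as \(|G_2|\), \(-|G_3|\), \(|G_4|\).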

8.4 Analogue of Fontana-Mascheroni’s Series

Interestingly, when \(g(x)=\frac {1}{x}\) and p = 0, identity (8.5) reduces to the well-known formula

$$\displaystyle \begin{aligned} \gamma ~=~ \sum_{n=1}^{\infty}\frac{|G_n|}{n}{\,}, \end{aligned}$$

where γ is Euler’s constant and the series is called Fontana-Mascheroni’s series (see, e.g., Blagouchine [20, p. 379]). Thus, the series representation of the asymptotic constant σ[g] given in (8.5) provides the analogue of Fontana-Mascheroni’s series for any function g satisfying the assumptions of Proposition 8.11.

Example 8.13

The analogue of Fontana-Mascheroni’s series for the function \(g(x)=\ln x\) can be obtained by setting x = 1 in (8.6). We obtain

$$\displaystyle \begin{aligned} \sum_{n=0}^{\infty}|G_{n+1}|\,\sum_{k=0}^n(-1)^k{\textstyle{{{n}\choose{k}}}}{\,}\ln(k+1) ~=~ -1+\frac{1}{2}\ln(2\pi), \end{aligned}$$

or equivalently (see Example 8.12),

$$\displaystyle \begin{aligned} \sum_{n=0}^{\infty}|G_{n+1}|\,\int_0^{\infty}\frac{e^{-t}}{t}\left(1-e^{-t}\right)^n dt ~=~ 1-\frac{1}{2}\ln(2\pi). \end{aligned}$$

The following proposition provides a way to construct a function g(x) that has a prescribed associated asymptotic constant σ[g] given in the form (8.5).

Proposition 8.14

Suppose that the series

$$\displaystyle \begin{aligned} S ~=~ \sum_{n=1}^{\infty}G_n{\,}s_n \end{aligned}$$

converges for a given real sequence \(n\mapsto s_n\) and let \(g\colon \mathbb {R}_+\to \mathbb {R}\) be such that

$$\displaystyle \begin{aligned} g(n) ~=~ \sum_{k=1}^n{\textstyle{{{n-1}\choose{k-1}}}}{\,} s_k{\,},\qquad n\in\mathbb{N}^*. \end{aligned} $$
(8.7)

If g satisfies the assumptions of Proposition 8.11 with x = 1, then the following assertions hold.

(a)

    S = σ[g].

(b)

    \(\Sigma g(n) = \sum _{k=1}^{n-1}{n-1\choose k}{\,} s_k\) for any \(n\in \mathbb {N}^*\).

(c)

    \(s_n = \Delta ^{n-1}g(1) = \Delta ^n\Sigma g(1)\) for any \(n\in \mathbb {N}^*\).

Proof

Identity (8.7) can take the following alternative form

$$\displaystyle \begin{aligned} g(n+1) ~=~ \sum_{k=0}^n{\textstyle{{{n}\choose{k}}}}{\,}s_{k+1}{\,},\qquad n\in\mathbb{N}. \end{aligned}$$

Using the classical inversion formula (Graham et al. [41, p. 192]), we then obtain

$$\displaystyle \begin{aligned} s_{n+1} ~=~ \sum_{k=0}^n(-1)^{n-k}\,{\textstyle{{{n}\choose{k}}}}{\,}g(k+1) ~=~ \Delta^ng(1){\,},\qquad n\in\mathbb{N}.\end{aligned} $$

This establishes assertion (c) and then assertion (a) by Proposition 8.11. Assertion (b) is straightforward using (5.2). □
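The inversion at the heart of this proof is easy to verify in exact arithmetic. The sketch below takes \(s_k=(-1)^{k-1}/k^2\) (an arbitrary choice, made only to exercise the statement) and checks assertions (b) and (c) with rational arithmetic:

```python
from fractions import Fraction
from math import comb

# Sample sequence s_1, s_2, ... (an arbitrary choice, used only for the check).
s = [Fraction(0)] + [Fraction((-1) ** (k - 1), k * k) for k in range(1, 12)]

def g(n):
    """g(n) = sum_{k=1}^n C(n-1, k-1) s_k, as in (8.7)."""
    return sum(comb(n - 1, k - 1) * s[k] for k in range(1, n + 1))

def Sg(n):
    """Sigma g at the integer n: sum_{m=1}^{n-1} g(m) (empty sum for n = 1)."""
    return sum(g(m) for m in range(1, n))

def delta(n, x):
    """n-th forward difference of g at the integer x."""
    return sum((-1) ** (n - k) * comb(n, k) * g(x + k) for k in range(n + 1))

# Assertion (c): s_n = Delta^{n-1} g(1); assertion (b): Sigma g(n) = sum_k C(n-1, k) s_k.
for n in range(1, 11):
    assert delta(n - 1, 1) == s[n]
    assert Sg(n) == sum(comb(n - 1, k) * s[k] for k in range(1, n))
print("assertions (b) and (c) verified for n = 1, ..., 10")
```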

Example 8.15

Let us apply Proposition 8.14 to the series

$$\displaystyle \begin{aligned} S ~=~ \sum_{n=1}^{\infty}\frac{|G_n|}{n^2}{\,},\end{aligned} $$

that is,

$$\displaystyle \begin{aligned} S ~=~ \sum_{n=1}^{\infty}G_n{\,}s_n\qquad \mbox{with}\quad s_n ~=~ (-1)^{n-1}\frac{1}{n^2}{\,}. \end{aligned}$$

Let \(g\colon \mathbb {R}_+\to \mathbb {R}\) be a function such that

$$\displaystyle \begin{aligned} g(n) ~=~ \sum_{k=1}^n(-1)^{k-1}{\textstyle{{{n-1}\choose{k-1}}}}\,\frac{1}{k^2}{\,},\qquad n\in\mathbb{N}^*, \end{aligned}$$

or equivalently (see Graham et al. [41, p. 281] or Merlini et al. [72, Lemma 4.1]),

$$\displaystyle \begin{aligned} g(n) ~=~ \frac{1}{n}{\,}H_n{\,},\qquad n\in\mathbb{N}^*. \end{aligned}$$

We naturally take \(g(x)=\frac {1}{x}{\,}H_x\), from which we can derive (see, e.g., Graham et al. [41, p. 280])

$$\displaystyle \begin{aligned} \Sigma g(x) ~=~ \frac{\pi^2}{12}-\frac{1}{2}\,\psi_1(x)+\frac{1}{2}{\,}H_{x-1}^2{\,}. \end{aligned}$$

Thus, we have S = σ[g]. Combining this result with the definition of σ[g], we derive the surprising identity (compare with Blagouchine and Coppo [22, pp. 469–470])

$$\displaystyle \begin{aligned} \sum_{n=1}^{\infty}\frac{|G_n|}{n^2} ~=~ \frac{\pi^2}{12}-\frac{1}{2}+\frac{1}{2}\int_0^1 H_t^2{\,}dt{\,}. \end{aligned}$$

Proceeding similarly, with a bit of computation one also finds

$$\displaystyle \begin{aligned} \sum_{n=1}^{\infty}\frac{|G_n|}{n^3} ~=~ \frac{1}{3}\,\zeta(3)+\frac{\pi^2}{12}\,\gamma-\frac{5}{12}+\frac{1}{6}\int_0^1 H_t^3{\,}dt{\,}. \end{aligned}$$

Those formulas are worth comparing with the well-known identities (see Sect. 10.2)

$$\displaystyle \begin{aligned} \sum_{n=1}^{\infty}\frac{|G_n|}{n} ~=~ \gamma ~=~ \int_0^1 H_t{\,}dt{\,}. \end{aligned}$$

For similar formulas, see also Blagouchine and Coppo [22]. \(\lozenge \)
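The first of these identities can be checked numerically, computing \(H_t=\gamma +\psi (t+1)\) through a central difference of math.lgamma and evaluating the integral by Simpson's rule (an illustrative sketch; all names ours):

```python
import math
from fractions import Fraction

def gregory_coeffs(n_max):
    """Exact Gregory coefficients via the standard recurrence."""
    G = [Fraction(1)]
    for n in range(1, n_max + 1):
        G.append(-sum(G[k] * Fraction((-1) ** (n - k), n - k + 1) for k in range(n)))
    return G

G = gregory_coeffs(80)
lhs = sum(abs(float(G[n])) / n ** 2 for n in range(1, 81))

EULER_GAMMA = 0.5772156649015329

def harmonic(t, h=1e-5):
    """H_t = gamma + psi(t+1), with psi from a central difference of lgamma."""
    return EULER_GAMMA + (math.lgamma(t + 1 + h) - math.lgamma(t + 1 - h)) / (2 * h)

def simpson(f, a, b, n=200):
    """Composite Simpson rule (n even)."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

rhs = math.pi ** 2 / 12 - 0.5 + 0.5 * simpson(lambda t: harmonic(t) ** 2, 0.0, 1.0)
print(lhs, rhs)
```

Both sides agree to several decimal places; the residual gap comes from truncating the series after eighty terms.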

Example 8.16

Let us apply Proposition 8.14 to the series

$$\displaystyle \begin{aligned} S ~=~ \sum_{n=1}^{\infty}\frac{|G_n|}{n+a}{\,}, \end{aligned}$$

where a > 0. For this series, we can take

$$\displaystyle \begin{aligned} g(x) ~=~ \mathrm{B}(x,a+1)\qquad \mbox{and}\qquad \Sigma g(x) ~=~ \frac{1}{a}-\mathrm{B}(x,a), \end{aligned}$$

where (x, y)↦B(x, y) is the beta function. We then derive the identity

$$\displaystyle \begin{aligned} \sum_{n=1}^{\infty}\frac{|G_n|}{n+a} ~=~ \frac{1}{a}-\int_0^1\mathrm{B}(x+1,a){\,}dx. \end{aligned}$$

Using the definition of the beta function as an integral, this identity also reads

$$\displaystyle \begin{aligned} \sum_{n=1}^{\infty}\frac{|G_n|}{n+a} ~=~ \frac{1}{a}+\int_0^1\frac{x^a}{\ln(1-x)}{\,}dx. \end{aligned}$$

Setting \(a=\frac {1}{2}\) for instance, we obtain

$$\displaystyle \begin{aligned} \sum_{n=1}^{\infty}\frac{|G_n|}{2n+1} ~=~ 1+\frac{1}{2}\,\int_0^1\frac{\sqrt{x}}{\ln(1-x)}{\,}dx. \end{aligned}$$

We also observe that the decimal expansion of the latter integral is the sequence A094691 in the OEIS [90]. \(\lozenge \)

8.5 Analogue of Raabe’s Formula

Recall that Raabe’s formula yields, for any x > 0, a simple explicit expression for the integral of the log-gamma function over the interval (x, x + 1). We state this result in the following proposition (see Example 6.5). For recent references on Raabe’s formula, see, e.g., Cohen and Friedman [30, p. 366] and Srivastava and Choi [93, p. 29].

Proposition 8.17 (Raabe’s Formula)

The following identity holds

$$\displaystyle \begin{aligned} \int_x^{x+1}\ln\Gamma(t){\,}dt ~=~ \frac{1}{2}\,\ln(2\pi)+x\ln x-x{\,},\qquad x>0. \end{aligned} $$
(8.8)
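Raabe's formula is easy to confirm numerically with math.lgamma and a composite Simpson rule (a quick sketch; helper names ours):

```python
import math

def simpson(f, a, b, n=400):
    """Composite Simpson rule (n even)."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

def raabe_lhs(x):
    """Numerical value of the integral of lnGamma over (x, x+1)."""
    return simpson(math.lgamma, x, x + 1.0)

def raabe_rhs(x):
    """Closed form (8.8)."""
    return 0.5 * math.log(2 * math.pi) + x * math.log(x) - x

print(raabe_lhs(2.0), raabe_rhs(2.0))
```

Since the integrand is smooth on every interval (x, x + 1), the quadrature error is negligible and the two sides agree to high precision.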

Clearly, identities (6.10) and (6.11) provide the analogue of Raabe’s formula for any continuous multiple \(\log \Gamma \)-type function Σg. We recall this important and useful formula in the next proposition.

Proposition 8.18 (Analogue of Raabe’s Formula)

For any function g lying in \(\mathcal {C}^0\cap \mathrm {dom}(\Sigma )\) , we have

$$\displaystyle \begin{aligned} \int_x^{x+1}\Sigma g(t){\,}dt ~=~ \sigma[g]+\int_1^xg(t){\,}dt,\qquad x>0, \end{aligned} $$
(8.9)

where σ[g] is the asymptotic constant associated with g and defined by the equation

$$\displaystyle \begin{aligned} \sigma[g] ~=~ \int_0^1\Sigma g(t+1){\,}dt{\,}. \end{aligned} $$
(8.10)

The challenging part in this context is to find a nice expression for σ[g]. For instance, setting x = 1 in Raabe’s formula (8.8), we obtain the identity

$$\displaystyle \begin{aligned} \sigma[\ln] ~=~ \int_0^1\ln\Gamma(t+1){\,}dt ~=~ -1+\frac{1}{2}\,\ln(2\pi){\,}. \end{aligned}$$

However, in general such a closed-form expression for σ[g] is not easy to derive.

An expression for σ[g] as a limit can be obtained using Proposition 5.18(c2). Specifically, if g lies in \(\mathcal {C}^0\cap \mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\), then we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \sigma[g] & =&\displaystyle \lim_{n\to\infty}\int_0^1(f^p_n[g](t)+g(t)){\,}dt\\ & =&\displaystyle \lim_{n\to\infty}\left(\sum_{k=1}^{n-1}g(k)-\int_1^ng(t){\,}dt+\sum_{j=1}^pG_j\Delta^{j-1}g(n)\right),{} \end{array} \end{aligned} $$
(8.11)

which is nothing other than the restriction of the generalized Stirling formula (6.21) to the natural integers.
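For \(g=\ln \) and p = 1, the expression inside the limit (8.11) is \(\ln (n-1)!-(n\ln n-n+1)+\frac {1}{2}\ln n\), which tends to \(\sigma [\ln ]=-1+\frac {1}{2}\ln (2\pi )\). A short check (function name ours), using math.lgamma(n) for \(\ln (n-1)!\):

```python
import math

def sigma_ln_partial(n):
    """Partial expression in (8.11) for g = ln, p = 1:
    sum_{k<n} ln k - int_1^n ln t dt + G_1 * ln n, with G_1 = 1/2."""
    return math.lgamma(n) - (n * math.log(n) - n + 1) + 0.5 * math.log(n)

target = -1 + 0.5 * math.log(2 * math.pi)  # sigma[ln]
print(sigma_ln_partial(10), sigma_ln_partial(10_000), target)
```

The gap shrinks like 1/(12n), in agreement with the classical Stirling expansion.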

Series expressions for σ[g] can also be obtained by integrating on the interval (0, 1) the series representations of Σg + g given in Theorems 8.2 and 8.7. For instance, we have

$$\displaystyle \begin{aligned} \sigma[g] ~=~ \sum_{j=1}^pG_j{\,}\Delta^{j-1}g(1) - \sum_{k=1}^{\infty}\left(\int_{k}^{k+1}g(t){\,}dt-\sum_{j=0}^pG_j{\,}\Delta^jg(k)\right). \end{aligned} $$
(8.12)

Note also that, under certain assumptions, the latter series converges to zero as \({p\to _{\mathbb {N}}\infty }\). In this case, (8.12) reduces to the analogue of Fontana-Mascheroni’s series; see Proposition 8.11.

Example 8.19

Applying (8.11) and (8.12) to \(g(x)=\frac {1}{x}\) and p = 0, we obtain

$$\displaystyle \begin{aligned} \sigma[g] ~=~ \lim_{n\to\infty}\left(\sum_{k=1}^n\frac{1}{k}-\ln n\right) ~=~ \sum_{k=1}^{\infty}\left(\frac{1}{k}-\ln\left(1+\frac{1}{k}\right)\right), \end{aligned}$$

which is Euler’s constant γ. Identity (8.9) then immediately provides the following analogue of Raabe’s formula

$$\displaystyle \begin{aligned} \int_x^{x+1}\psi(t){\,}dt ~=~ \ln x{\,},\qquad x>0. \end{aligned}$$
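Both expressions for γ above converge at rate O(1/n) and are immediate to check numerically (an illustrative sketch; names ours):

```python
import math

def gamma_limit(n):
    """gamma as the limit of H_n - ln n."""
    return sum(1.0 / k for k in range(1, n + 1)) - math.log(n)

def gamma_series(n):
    """gamma as sum_{k>=1} (1/k - ln(1 + 1/k)), truncated after n terms."""
    return sum(1.0 / k - math.log1p(1.0 / k) for k in range(1, n + 1))

print(gamma_limit(10**6), gamma_series(10**6))
```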

The following proposition provides interesting identities that involve the antiderivative of Σg, where g is any function lying in \(\mathcal {C}^0\cap \mathrm {dom}(\Sigma )\). It also yields a formula for ΣG, where G is the antiderivative of g. This result is worth comparing with Example 7.19.

Proposition 8.20

Let g lie in \(\mathcal {C}^0\cap \mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\) and define the function \(G\colon \mathbb {R}_+\to \mathbb {R}\) by the equation

$$\displaystyle \begin{aligned} G(x) ~=~ \int_1^x g(t){\,}dt\qquad \mathit{\mbox{for }}x>0. \end{aligned}$$

Then G lies in \(\mathcal {C}^1\cap \mathcal {D}^{p+1}\cap \mathcal {K}^{p+1}\) . Moreover, for any x > 0 we have

$$\displaystyle \begin{aligned} \Sigma G(x) ~=~ \int_1^x\Sigma g(t){\,}dt-\sigma[g]{\,}(x-1) \end{aligned}$$

and

$$\displaystyle \begin{aligned} \Sigma_x\int_x^{x+1}\Sigma g(t){\,}dt ~=~ \int_1^x\Sigma g(t){\,}dt{\,}. \end{aligned}$$

Proof

We have that G lies in \(\mathcal {C}^1\cap \mathcal {D}^{p+1}\cap \mathcal {K}^{p+1}\) by Proposition 4.12. We then obtain

$$\displaystyle \begin{aligned} (\Sigma G)' ~=~ \Sigma g-\sigma[g] \end{aligned}$$

by Proposition 7.7. This establishes the first formula. Combining it with (8.9), we obtain

$$\displaystyle \begin{aligned} \Sigma_x\int_x^{x+1}\Sigma g(t){\,}dt ~=~ \sigma[g]{\,}(x-1)+\Sigma G(x) ~=~ \int_1^x\Sigma g(t){\,}dt, \end{aligned}$$

that is, the second formula. □

Example 8.21

Applying Proposition 8.20 to the function \(g(x)=\ln x\) with p = 1, we obtain

$$\displaystyle \begin{aligned} \Sigma_x\int_x^{x+1}\ln\Gamma(t){\,}dt ~=~ \int_1^x\ln\Gamma(t){\,}dt ~=~ \psi_{-2}(x)-\psi_{-2}(1). \end{aligned}$$

Using Raabe’s formula (8.8) in the left-hand side, we finally obtain

$$\displaystyle \begin{aligned} \frac{1}{2}\,\ln(2\pi)(x-1)+\Sigma_x(x\ln x) -{\textstyle{{{x}\choose{2}}}} ~=~ \psi_{-2}(x)-\psi_{-2}(1), \end{aligned}$$

from which we immediately derive a closed-form expression for \(\Sigma _x(x\ln x)\); see also Sect. 12.5. \(\lozenge \)

We now present a proposition, immediately followed by a corollary that provides interesting characterizations of multiple Γ-type functions based on the analogue of Raabe’s formula. Example 8.24 below illustrates this characterization in the special case of the log-gamma function.

Proposition 8.22

Let h lie in \(\mathcal {C}^1\cap \mathcal {D}^{p+1}\cap \mathcal {K}^{p+1}\) for some \(p\in \mathbb {N}\) and let \(f\colon \mathbb {R}_+\to \mathbb {R}\) be a function. Then f lies in \(\mathcal {C}^0\cap \mathcal {K}^p\) and satisfies the equation

$$\displaystyle \begin{aligned} \int_x^{x+1}f(t){\,}dt ~=~ h(x){\,},\qquad x>0, \end{aligned} $$
(8.13)

if and only if f = ( Σh)′.

Proof

The sufficiency is trivial. Let us prove the necessity. Differentiating both sides of (8.13), we obtain Δf = h′. Using the existence Theorem 3.6 and then Proposition 7.7, we then see that f = c + ( Σh)′ for some \(c\in \mathbb {R}\). Using (8.13) again, we then see that c must be 0. □

Corollary 8.23 (A Characterization Result)

Let g lie in \(\mathcal {C}^0\cap \mathcal {D}^p\cap \mathcal {K}^p\) and let \(f\colon \mathbb {R}_+\to \mathbb {R}\) be a function. Then f lies in \(\mathcal {C}^0\cap \mathcal {K}^p\) and satisfies the equation

$$\displaystyle \begin{aligned} \int_x^{x+1}f(t){\,}dt ~=~ \sigma[g]+\int_1^xg(t){\,}dt{\,},\qquad x>0, \end{aligned}$$

if and only if f =  Σg.

Proof

The sufficiency is trivial by (8.9). Let us prove the necessity. Define the function \(h\colon \mathbb {R}_+\to \mathbb {R}\) by the equation

$$\displaystyle \begin{aligned} h(x) ~=~ \sigma[g]+\int_1^xg(t){\,}dt\qquad \mbox{for }x>0. \end{aligned}$$

Then, h clearly lies in \(\mathcal {C}^1\cap \mathcal {D}^{p+1}\cap \mathcal {K}^{p+1}\). Using Proposition 8.22 and then Proposition 8.20, we immediately obtain that f = ( Σh)′ =  Σg. □

Example 8.24

Applying Corollary 8.23 to the function \(g(x)=\ln x\) with p = 1, we obtain the following alternative characterization of the gamma function. A function \(f\colon \mathbb {R}_+\to \mathbb {R}\) lies in \(\mathcal {C}^0\cap \mathcal {K}^1\) and satisfies the equation

$$\displaystyle \begin{aligned} \int_x^{x+1}f(t){\,}dt ~=~ \frac{1}{2}\,\ln(2\pi)+x\ln x-x{\,}, \qquad x>0, \end{aligned}$$

if and only if \(f(x)=\ln \Gamma (x)\). \(\lozenge \)

8.6 Analogue of Gauss’ Multiplication Formula

In the following proposition, we recall the Gauss multiplication formula for the gamma function, also called Gauss’ multiplication theorem (see Artin [11, p. 24]).

Proposition 8.25 (Gauss’ Multiplication Formula)

For any integer m ≥ 1, we have the following identity

$$\displaystyle \begin{aligned} \prod_{j=0}^{m-1}\Gamma\left(\frac{x+j}{m}\right) ~=~ \frac{\Gamma(x)}{m^{x-\frac{1}{2}}}{\,}(2\pi)^{\frac{m-1}{2}},\qquad x>0. \end{aligned} $$
(8.14)
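Since math.lgamma is available in Python's standard library, identity (8.14) can be verified in log form to near machine precision (a quick sketch; the helper name is ours):

```python
import math

def gauss_mult_gap(x, m):
    """Log of the left-hand side minus log of the right-hand side of (8.14);
    the gap should vanish identically."""
    lhs = sum(math.lgamma((x + j) / m) for j in range(m))
    rhs = math.lgamma(x) - (x - 0.5) * math.log(m) + 0.5 * (m - 1) * math.log(2 * math.pi)
    return lhs - rhs

for m in (2, 3, 5):
    print(m, gauss_mult_gap(2.7, m))
```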

When m = 2, identity (8.14) reduces to Legendre’s duplication formula

$$\displaystyle \begin{aligned} \Gamma\left(\frac{x}{2}\right)\Gamma\left(\frac{x+1}{2}\right) ~=~ \frac{\Gamma(x)}{2^{x-1}}{\,}\sqrt{\pi}{\,},\qquad x>0. \end{aligned}$$

Remark 8.26

For any fixed m ≥ 2, the Gauss multiplication formula (8.14) enables one to retrieve easily the value of the asymptotic constant associated with the function \(g(x)=\ln x\). In particular, this value can be retrieved from Legendre’s duplication formula. Indeed, taking the logarithm of both sides of (8.14) and then integrating on x ∈ (0, 1), we obtain

$$\displaystyle \begin{aligned} \sum_{j=0}^{m-1}\int_0^1\ln\Gamma\left(\frac{x+j}{m}\right)dx ~=~ \frac{m-1}{2}\ln(2\pi)+\int_0^1\ln\Gamma(x){\,}dx. \end{aligned}$$

Using the change of variable \(t=\frac {x+j}{m}\) in the left-hand integral, we then obtain almost immediately the following identity

$$\displaystyle \begin{aligned} \int_0^1\ln\Gamma(t){\,}dt ~=~ \frac{1}{2}\ln(2\pi). \end{aligned}$$

Combining this result with (8.9), we retrieve \(\sigma [\ln ]=-1+\frac {1}{2}\ln (2\pi )\). \(\lozenge \)

Webster [98, Theorem 5.2] showed how an analogue of Gauss’ multiplication formula can be partially constructed for any Γ-type function. His proof is very short and essentially relies on the uniqueness and existence theorems in the special case when p = 1. We now show how Webster’s approach can be further extended to all multiple Γ-type functions. As usual, we use the additive notation.

Theorem 8.27 (Analogue of Gauss’ Multiplication Formula)

Let g lie in dom( Σ) and let \(m\in \mathbb {N}^*\) . Define also the function \(g_m\colon \mathbb {R}_+\to \mathbb {R}\) by the equation

$$\displaystyle \begin{aligned} g_m(x) ~=~ g\left(\frac{x}{m}\right)\qquad \mathit{\mbox{for }}x>0. \end{aligned}$$

Then we have

$$\displaystyle \begin{aligned} \sum_{j=0}^{m-1}\Sigma g\left(\frac{x+j}{m}\right) ~=~ \sum_{j=1}^m\Sigma g\left(\frac{j}{m}\right)+\Sigma g_m(x),\qquad x>0, \end{aligned} $$
(8.15)

and

$$\displaystyle \begin{aligned} \Sigma g_m(m) ~=~ \sum_{j=1}^{m-1}g\left(\frac{j}{m}\right). \end{aligned}$$

Proof

Let g lie in \(\mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\). Then \(g_m\) also lies in \(\mathcal {D}^p\cap \mathcal {K}^p\) by Corollary 4.21. Now, we can readily check that the function \(f\colon \mathbb {R}_+\to \mathbb {R}\) defined by

$$\displaystyle \begin{aligned} f(x) ~=~ \sum_{j=0}^{m-1}\Sigma g\left(\frac{x+j}{m}\right)-\sum_{j=1}^m\Sigma g\left(\frac{j}{m}\right) \end{aligned}$$

is a solution to the equation \(\Delta f=g_m\) that lies in \(\mathcal {K}^p\) and satisfies f(1) = 0. By the uniqueness Theorem 3.1, it follows that \(f=\Sigma g_m\). This establishes (8.15). The last identity follows immediately. □

Theorem 8.27 actually provides a partial solution to the problem of finding the analogue of Gauss’ multiplication formula. A more complete result would also provide a closed-form expression for the right-hand side of identity (8.15).

Unfortunately, no general method to provide simple or compact expressions for \(\Sigma g_m\) seems to be known. However, such expressions can sometimes be found.

For instance, when \(g(x)=\ln x\), we obtain

$$\displaystyle \begin{aligned} g_m(x) ~=~ \ln x-\ln m\qquad \mbox{and}\qquad \Sigma g_m(x) ~=~ \ln\Gamma(x)-(x-1)\ln m. \end{aligned}$$

Substituting this latter expression in identity (8.15), we immediately obtain the formula

$$\displaystyle \begin{aligned} \sum_{j=0}^{m-1}\ln\Gamma\left(\frac{x+j}{m}\right) ~=~ \sum_{j=1}^m\ln\Gamma\left(\frac{j}{m}\right)+\ln\Gamma(x)-(x-1)\ln m{\,}, \end{aligned} $$
(8.16)

that is, in the multiplicative notation,

$$\displaystyle \begin{aligned} \prod_{j=0}^{m-1}\Gamma\left(\frac{x+j}{m}\right) ~=~ \frac{\Gamma(x)}{m^{x-1}}\,\prod_{j=1}^m\Gamma\left(\frac{j}{m}\right),\qquad x>0. \end{aligned}$$

It remains to find a nice expression for the latter product, and more generally for the right-hand sum of identity (8.15). On this issue, we have the following useful result.

Proposition 8.28

Let g lie in \(\mathcal {C}^0\cap \mathrm {dom}(\Sigma )\) and let \(m\in \mathbb {N}^*\) . Define also the function \(g_m\colon \mathbb {R}_+\to \mathbb {R}\) by the equation \(g_m(x)=g(\frac {x}{m})\) for x > 0. Then we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \sum_{j=1}^m\Sigma g\left(\frac{j}{m}\right) & =&\displaystyle m{\,}\sigma[g]-\int_m^{m+1}\Sigma g_m(t){\,}dt\\ & =&\displaystyle m{\,}\sigma[g]-\sigma[g_m]-m\,\int_{1/m}^1g(t){\,}dt. \end{array} \end{aligned} $$

Proof

The first identity can be proved simply by integrating both sides of (8.15) on x ∈ (m, m + 1). Indeed, using the change of variable \(t=\frac {x+j}{m}\) and identity (8.10), the left-hand side reduces to

$$\displaystyle \begin{aligned} m\,\sum_{j=0}^{m-1}\int_{1+\frac{j}{m}}^{1+\frac{j+1}{m}}\Sigma g(t){\,}dt ~=~ m\int_1^2\Sigma g(t){\,}dt ~=~ m\,\sigma[g]. \end{aligned}$$

The second identity then follows from a simple application of (8.9). □

Example 8.29

Let us apply Proposition 8.28 to the function \(g(x)=\ln x\). We obtain

$$\displaystyle \begin{aligned} \sum_{j=1}^m\ln\Gamma\left(\frac{j}{m}\right) ~=~ -\frac{1}{2}\ln m+\frac{1}{2}{\,}(m-1)\ln(2\pi). \end{aligned}$$

Substituting this expression in (8.16) and then translating the resulting formula into the multiplicative notation, we retrieve Gauss’ multiplication formula (8.14). \(\lozenge \)
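The closed form obtained in Example 8.29 is easy to verify numerically. The following sketch (our own naming) compares the sum of \(\ln \Gamma (j/m)\) over j = 1, …, m with the stated closed form.

```python
import math

def lngamma_root_sum(m):
    """Left-hand side: sum of ln Gamma(j/m) for j = 1, ..., m."""
    return sum(math.lgamma(j / m) for j in range(1, m + 1))

def closed_form(m):
    """Right-hand side from Example 8.29."""
    return -0.5 * math.log(m) + 0.5 * (m - 1) * math.log(2 * math.pi)

for m in (1, 2, 3, 10):
    assert abs(lngamma_root_sum(m) - closed_form(m)) < 1e-10
```

For m = 2 this reduces to \(\ln \Gamma (\frac {1}{2})=\frac {1}{2}\ln \pi \).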

In the following proposition, we provide a convergence result for the function defined on the left-hand side of (8.15) that does not require the computation of \(\Sigma g_m\). This result simply reduces to the generalized Stirling formula when m = 1.

Proposition 8.30

Let g lie in \(\mathcal {C}^0\cap \mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\) and let \(m\in \mathbb {N}^*\) . Define also the function \(g_m\colon \mathbb {R}_+\to \mathbb {R}\) by the equation \(g_m(x)=g(\frac {x}{m})\) for x > 0. Then we have

$$\displaystyle \begin{aligned} \sum_{j=0}^{m-1}\Sigma g\left(\frac{x+j}{m}\right) -\int_1^xg_m(t){\,}dt+\sum_{j=1}^pG_j\,\Delta^{j-1}g_m(x) ~\to ~ m\,\sigma_m[g] \end{aligned}$$

as x → ∞, where

$$\displaystyle \begin{aligned} \sigma_m[g] ~=~ \sigma[g]-\int_{1/m}^1g(t){\,}dt. \end{aligned}$$

Proof

Theorem 8.27 and Proposition 8.28 provide the following identity

$$\displaystyle \begin{aligned} \Sigma g_m(x)-\sigma[g_m] ~=~ \sum_{j=0}^{m-1}\Sigma g\left(\frac{x+j}{m}\right)-m\,\sigma_m[g],\qquad x>0. \end{aligned}$$

The result is then an immediate application of the generalized Stirling formula (Theorem 6.13) to the function \(\Sigma g_m\) (recall that \(g_m\) lies in \(\mathcal {C}^0\cap \mathcal {D}^p\cap \mathcal {K}^p\)). □

We end this section with three corollaries. Corollaries 8.31 and 8.32 yield properties of the derivatives and antiderivatives of the function g in the context of the analogue of Gauss’ multiplication formula. Corollary 8.33 shows how the antiderivative of g can be expressed as a limit involving the function \(\Sigma g_m\).

Corollary 8.31

Let g lie in \(\mathcal {C}^r\cap \mathcal {D}^p\cap \mathcal {K}^{\max \{p,r\}}\) for some \(p\in \mathbb {N}\) and \(r\in \mathbb {N}^*\) . Let also \(m\in \mathbb {N}^*\) and define the function \(g_m\colon \mathbb {R}_+\to \mathbb {R}\) by the equation \(g_m(x)=g(\frac {x}{m})\) for x > 0. Then the equation obtained by replacing g with \(g^{(r)}\) in (8.15) can also be obtained by differentiating r times both sides of (8.15).

Proof

Differentiating r times both sides of (8.15), multiplying through by \(m^r\), and then using (7.1), we obtain

$$\displaystyle \begin{aligned} \sum_{j=0}^{m-1}\Sigma g^{(r)}\left(\frac{x+j}{m}\right) + m(\Sigma g)^{(r)}(1) ~=~ m^r\,\Sigma g_m^{(r)}(x) + m^r(\Sigma g_m)^{(r)}(1). \end{aligned}$$

Setting x = 1, we then get

$$\displaystyle \begin{aligned} \sum_{j=1}^m\Sigma g^{(r)}\left(\frac{j}{m}\right) + m(\Sigma g)^{(r)}(1) ~=~ m^r(\Sigma g_m)^{(r)}(1). \end{aligned}$$

Subtracting this latter equation from the former one, we finally get

$$\displaystyle \begin{aligned} \sum_{j=0}^{m-1}\Sigma g^{(r)}\left(\frac{x+j}{m}\right) ~=~ \sum_{j=1}^m\Sigma g^{(r)}\left(\frac{j}{m}\right)+m^r\,\Sigma g_m^{(r)}(x), \end{aligned}$$

which is precisely the equation obtained by replacing g with \(g^{(r)}\) in (8.15). □

Corollary 8.32

Let \(p\in \mathbb {N}\), \(m\in \mathbb {N}^*\), \(c\in \mathbb {R}\) , and \(g\in \mathcal {C}^0\cap \mathcal {D}^p\cap \mathcal {K}^p\) . Define also the functions \(G,g_m,G_m\colon \mathbb {R}_+\to \mathbb {R}\) by the equations

$$\displaystyle \begin{aligned} G(x) ~=~ c+\int_1^xg(t){\,}dt,\quad g_m(x)=g\left(\frac{x}{m}\right),\quad G_m(x) ~=~ G\left(\frac{x}{m}\right)\quad \mathit{\mbox{for }}x>0. \end{aligned}$$

Then both functions G and \(G_m\) lie in \(\mathcal {C}^1\cap \mathcal {D}^{p+1}\cap \mathcal {K}^{p+1}\) . Moreover, for any x > 0 we have

$$\displaystyle \begin{aligned} \Sigma G_m(x) ~=~ \frac{1}{m}\int_1^x\Sigma g_m(t){\,}dt+(x-1)\left(c-\frac{1}{m}\int_m^{m+1}\Sigma g_m(t){\,}dt\right). \end{aligned}$$

Proof

The first part follows immediately from Proposition 8.20 and Corollary 4.21. Now, by definition of \(G_m\) we have

$$\displaystyle \begin{aligned} G_m(x) ~=~ c+\frac{1}{m}\int_m^xg_m(t){\,}dt ~=~ c+\frac{1}{m}\left(\int_1^xg_m(t){\,}dt-\int_1^mg_m(t){\,}dt\right). \end{aligned}$$

The claimed identity can then be established easily using Proposition 8.20 and then applying identity (8.9). □

Corollary 8.33

Let g lie in \(\mathcal {C}^0\cap \mathrm {dom}(\Sigma )\) . Define also the functions \(g_m\colon \mathbb {R}_+\to \mathbb {R}\) \((m\in \mathbb {N}^*)\) by the equation \(g_m(x)=g(\frac {x}{m})\) for x > 0. Then we have

$$\displaystyle \begin{aligned} \lim_{m\to\infty}\frac{\Sigma g_m(mx)-\Sigma g_m(m)}{m} ~=~ \int_1^x g(t){\,}dt{\,},\qquad x>0. \end{aligned}$$

Moreover, if g is integrable at 0, then

$$\displaystyle \begin{aligned} \lim_{m\to\infty}\frac{1}{m}\,\Sigma g_m(mx) ~=~ \int_0^x g(t){\,}dt{\,},\qquad x>0. \end{aligned}$$

Proof

Replacing x with mx in (8.15) and dividing through by m, we obtain

$$\displaystyle \begin{aligned} \frac{1}{m}\,\Sigma g_m(mx) ~=~ \frac{1}{m}\,\sum_{j=0}^{m-1}\Sigma g\left(x+\frac{j}{m}\right)-\frac{1}{m}\,\sum_{j=1}^m\Sigma g\left(\frac{j}{m}\right). \end{aligned}$$

Letting \(m\to _{\mathbb {N}}\infty \) in this identity and using (8.9), we see that the first Riemann sum on the right side converges to

$$\displaystyle \begin{aligned} \int_0^1\Sigma g(x+t){\,}dt ~=~ \sigma[g]+\int_1^xg(t){\,}dt \end{aligned}$$

while the second one converges (if g is integrable at 0) to

$$\displaystyle \begin{aligned} \int_0^1\Sigma g(t){\,}dt ~=~ \sigma[g]-\int_0^1g(t){\,}dt. \end{aligned}$$

This establishes the corollary. □

8.7 Asymptotic Expansions and Related Results

In this section, we provide and investigate asymptotic expansions of (higher order differentiable) multiple \(\log \Gamma \)-type functions. We also establish and discuss some important consequences of these expansions, including a variant of the generalized Stirling formula and an extension of the so-called Liu formula to multiple \(\log \Gamma \)-type functions.

To begin with, let us first recall the asymptotic expansion of the log-gamma function (see, e.g., Gel’fond [39, p. 342] and Srivastava and Choi [93, p. 7]).

Proposition 8.34

For any \(q\in \mathbb {N}^*\) , we have the following asymptotic expansion as x → ∞

$$\displaystyle \begin{aligned} \ln\Gamma(x) ~=~ \frac{1}{2}\ln(2\pi)-x+\left(x-\frac{1}{2}\right)\ln x+\sum_{k=1}^q\frac{B_{k+1}}{k(k+1){\,}x^k}+O\left(x^{-q-1}\right). \end{aligned} $$
(8.17)

For instance, setting q = 4 in equation (8.17), we obtain

$$\displaystyle \begin{aligned} \ln\Gamma(x) ~=~ \frac{1}{2}\ln(2\pi)-x+\left(x-\frac{1}{2}\right)\ln x+\frac{1}{12x}-\frac{1}{360x^3}+O\left(x^{-5}\right). \end{aligned}$$
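As a numerical illustration (not part of the text), the q = 4 truncation above already approximates \(\ln \Gamma (x)\) extremely well for moderate x, with an error that decays like \(x^{-5}\); the helper name below is ours.

```python
import math

def stirling_q4(x):
    """Right-hand side of the q = 4 expansion, without the O(x^-5) term."""
    return (0.5 * math.log(2 * math.pi) - x + (x - 0.5) * math.log(x)
            + 1 / (12 * x) - 1 / (360 * x ** 3))

# The error should shrink roughly like x^(-5) as x grows.
for x in (10.0, 20.0, 40.0):
    print(x, abs(math.lgamma(x) - stirling_q4(x)))
```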

We now provide a generalization of this result to multiple \(\log \Gamma \)-type functions. Even more generally, in the next proposition we provide for any integer \(m\in \mathbb {N}^*\) an asymptotic expansion of the function

$$\displaystyle \begin{aligned} x ~\mapsto ~\frac{1}{m}\sum_{j=0}^{m-1}\Sigma g\left(x+\frac{j}{m}\right). \end{aligned} $$
(8.18)

Proposition 8.35

  1. (a)

    Let g lie in \(\mathcal {C}^1\cap \mathcal {D}^p\cap \mathcal {K}^{\max \{p,1\}}\) for some \(p\in \mathbb {N}\) . Then, for any \(m\in \mathbb {N}^*\) and any x > 0, we have

    $$\displaystyle \begin{aligned} \frac{1}{m}\sum_{j=0}^{m-1}\Sigma g\left(x+\frac{j}{m}\right) ~=~ \int_x^{x+1}\Sigma g(t){\,}dt -\frac{1}{2m}{\,}g(x) + R_m(x){\,},{} \end{aligned}$$

    with

    $$\displaystyle \begin{aligned} R_m(x) ~=~ \frac{1}{m}\,\int_0^1B_1(\{mt\}){\,}(\Sigma g)'(x+t){\,}dt \end{aligned}$$

    and

    $$\displaystyle \begin{aligned} |R_m(x)| ~\leq ~ \frac{1}{2m}\,\int_0^1|(\Sigma g)'(x+t)|{\,}dt{\,}. \end{aligned}$$

    For large x the latter integral reduces to |g(x)|.

  2. (b)

    Let g lie in \(\mathcal {C}^{2q}\cap \mathcal {D}^p\cap \mathcal {K}^{\max \{p,2q\}}\) for some \(p\in \mathbb {N}\) and some \(q\in \mathbb {N}^*\) . Then, for any \(m\in \mathbb {N}^*\) and any x > 0, we have

    $$\displaystyle \begin{aligned} \frac{1}{m}\sum_{j=0}^{m-1}\Sigma g\left(x+\frac{j}{m}\right) ~=~ \int_x^{x+1}\Sigma g(t){\,}dt -\frac{1}{2m}{\,}g(x)+\sum_{k=1}^q\frac{B_{2k}}{(2k)!{\,}m^{2k}}{\,}g^{(2k-1)}(x) + R^q_m(x){\,}, \end{aligned}$$

    with

    $$\displaystyle \begin{aligned} R^q_m(x) ~=~ -\frac{1}{m^{2q}}\,\int_0^1\frac{B_{2q}(\{mt\})}{(2q)!}{\,}(\Sigma g)^{(2q)}(x+t){\,}dt \end{aligned}$$

    and

    $$\displaystyle \begin{aligned} |R^q_m(x)| ~\leq ~ \frac{1}{m^{2q}}{\,}\frac{|B_{2q}|}{(2q)!}\,\int_0^1|(\Sigma g)^{(2q)}(x+t)|{\,}dt{\,}. \end{aligned}$$

    For large x the latter integral reduces to \(|g^{(2q-1)}(x)|\).

Proof

Let us prove assertion (b) first. The first part follows from a straightforward application of Euler-Maclaurin’s formula (Proposition 6.31) to \(f=\Sigma g\), with a = x, b = x + 1, and N = m. Now, we see that the function \((\Sigma g)^{(2q)}\) lies in \(\mathcal {K}^{(p-2q)_+}\) by Proposition 4.12, and hence also in \(\mathcal {K}^{-1}\) by Proposition 4.7. Thus, for sufficiently large x we obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_0^1|(\Sigma g)^{(2q)}(x+t)|{\,}dt & =&\displaystyle \left|\int_0^1(\Sigma g)^{(2q)}(x+t){\,}dt\right|\\ & =&\displaystyle \left|(\Sigma g)^{(2q-1)}(x+1)-(\Sigma g)^{(2q-1)}(x)\right|. \end{array} \end{aligned} $$

By Proposition 7.7, the latter expression reduces to

$$\displaystyle \begin{aligned} \left|\Sigma g^{(2q-1)}(x+1)-\Sigma g^{(2q-1)}(x)\right| ~=~ |g^{(2q-1)}(x)|{\,}. \end{aligned}$$

Assertion (a) can be proved similarly. Here we observe that \((\Sigma g)'\) lies in \(\mathcal {K}^{(p-1)_+}\) and hence also in \(\mathcal {K}^{-1}\). Thus, for sufficiently large x we obtain

$$\displaystyle \begin{aligned} \int_0^1|(\Sigma g)'(x+t)|{\,}dt ~=~ \left|\int_0^1(\Sigma g)'(x+t){\,}dt\right| ~=~ |g(x)|. \end{aligned}$$

This completes the proof. □

Setting m = 1 in Proposition 8.35, we derive immediately an asymptotic expansion of the function Σg in terms of its trend and the higher order derivatives of g. As this special case is very important for the applications, we state it in the next proposition (in which we also use (8.9) to evaluate the integral of Σg on (x, x + 1)).

Proposition 8.36

The following assertions hold.

  1. (a)

    Let g lie in \(\mathcal {C}^1\cap \mathcal {D}^p\cap \mathcal {K}^{\max \{p,1\}}\) for some \(p\in \mathbb {N}\) . Then, for any x > 0 we have

    $$\displaystyle \begin{aligned} \Sigma g(x) ~=~ \sigma[g]+\int_1^xg(t){\,}dt -\frac{1}{2}{\,}g(x) + R_1(x){\,},{} \end{aligned}$$

    with

    $$\displaystyle \begin{aligned} R_1(x) ~=~ \int_0^1B_1(t){\,}(\Sigma g)'(x+t){\,}dt \end{aligned}$$

    and

    $$\displaystyle \begin{aligned} |R_1(x)| ~\leq ~ \frac{1}{2}\,\int_0^1|(\Sigma g)'(x+t)|{\,}dt{\,}. \end{aligned}$$

    For large x the latter integral reduces to |g(x)|.

  2. (b)

    Let g lie in \(\mathcal {C}^{2q}\cap \mathcal {D}^p\cap \mathcal {K}^{\max \{p,2q\}}\) for some \(p\in \mathbb {N}\) and some \(q\in \mathbb {N}^*\) . Then, for any x > 0 we have

    $$\displaystyle \begin{aligned} \Sigma g(x) ~=~ \sigma[g]+\int_1^xg(t){\,}dt -\frac{1}{2}{\,}g(x)+\sum_{k=1}^q\frac{B_{2k}}{(2k)!}{\,}g^{(2k-1)}(x) + R^q_1(x){\,}, \end{aligned} $$
    (8.19)

    with

    $$\displaystyle \begin{aligned} R^q_1(x) ~=~ -\int_0^1\frac{B_{2q}(t)}{(2q)!}{\,}(\Sigma g)^{(2q)}(x+t){\,}dt \end{aligned}$$

    and

    $$\displaystyle \begin{aligned} |R^q_1(x)| ~\leq ~ \frac{|B_{2q}|}{(2q)!}\,\int_0^1|(\Sigma g)^{(2q)}(x+t)|{\,}dt{\,}. \end{aligned}$$

    For large x the latter integral reduces to \(|g^{(2q-1)}(x)|\).

Example 8.37

Taking \(g(x)=\ln x\) and p = 1 in (8.19), we retrieve immediately the asymptotic expansion given in (8.17). The following equivalent, but more concise, formulation of this expansion is given in terms of Binet’s function. For any \(q\in \mathbb {N}^*\), we have

$$\displaystyle \begin{aligned} J(x) ~=~ \sum_{k=1}^q\frac{B_{k+1}}{k(k+1){\,}x^k}+O\left(x^{-q-1}\right)\qquad \mbox{as }x\to\infty. \end{aligned}$$

Remark 8.38

The following alternative asymptotic expansion of the Riemann sum (8.18) can be immediately obtained using the general form of Gregory’s formula (Proposition 6.30). If g lies in \(\mathcal {C}^0\cap \mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\) and if it is q-convex or q-concave on [x, ∞) for every integer q ≥ p, then we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_x^{x+1}\Sigma g(t){\,}dt & =&\displaystyle \frac{1}{m}\sum_{j=0}^{m-1}\Sigma g\left(x+\frac{j}{m}\right) + \frac{1}{m}\,\sum_{k=1}^q G_k\,\Delta^{k-1}g_m(mx) + R, \end{array} \end{aligned} $$

where

$$\displaystyle \begin{aligned} |R| ~\leq ~ \frac{1}{m}\,\overline{G}_q\left|\Delta^qg_m(mx)\right|\qquad \mbox{and}\qquad g_m(x) ~=~ g\left(\frac{x}{m}\right). \end{aligned}$$

(Compare with Proposition 8.30.) If we set m = 1 in this latter expansion, then we immediately retrieve the inequality of Lemma 8.10 as well as the Gregory formula-based series expression for Σg given in (8.4). It is then important to note that the asymptotic expansion (8.19) often leads to divergent series, contrary to its “cousin” formula (8.4), as already observed in Remark 6.32. For instance, setting x = 1 in (8.17) leads to a divergent series whereas setting x = 1 in the “cousin” formula (8.6) leads to an analogue of Fontana-Mascheroni’s series. In this regard, we observe that the Gregory coefficients have the asymptotic behavior

$$\displaystyle \begin{aligned} |G_n| ~\sim ~ \frac{1}{n(\ln n)^2}\qquad \mbox{as }n\to\infty{\,}, \end{aligned}$$

while the Bernoulli numbers satisfy

$$\displaystyle \begin{aligned} |B_{2n}| ~=~ \frac{2(2n)!}{(2\pi)^{2n}}{\,}\zeta(2n) ~\sim ~ 4\sqrt{\pi n}\left(\frac{n}{\pi e}\right)^{2n}\qquad \mbox{as }n\to\infty{\,}; \end{aligned}$$

see, e.g., Graham et al. [41, p. 286]. \(\lozenge \)

A Variant of the Generalized Stirling Formula

Interestingly, from Proposition 8.35 we can easily derive the following variant of the generalized Stirling formula.

Proposition 8.39 (A Variant of the Generalized Stirling Formula)

Let g lie in \(\mathcal {C}^{2q}\cap \mathcal {D}^p\cap \mathcal {K}^{2q}\) for some \(q\in \mathbb {N}^*\cup \{\frac {1}{2}\}\) and some \(p\in \mathbb {N}\) satisfying p ≤ 2q − 1. For any \(m\in \mathbb {N}^*\) we have

$$\displaystyle \begin{aligned} \frac{1}{m}\sum_{j=0}^{m-1}\Sigma g\left(x+\frac{j}{m}\right) -\int_1^x g(t){\,}dt -\sum_{k=1}^p\frac{B_k}{m^k k!}{\,}g^{(k-1)}(x) ~\to ~ \sigma[g]\qquad \mathit{\mbox{as }}x\to\infty. \end{aligned}$$

In particular,

$$\displaystyle \begin{aligned} \Sigma g(x)-\int_1^x g(t){\,}dt -\sum_{k=1}^p\frac{B_k}{k!}{\,}g^{(k-1)}(x) ~\to ~ \sigma[g]\qquad \mathit{\mbox{as }}x\to\infty. \end{aligned} $$
(8.20)

Proof

For every k ∈ {p, …, 2q} we clearly have that g lies in \(\mathcal {D}^k\cap \mathcal {K}^k\), and hence \(g^{(k)}\) vanishes at infinity by Theorem 4.14(b). The result then follows from Proposition 8.35. The particular case is obtained by setting m = 1. □

It is clear that the convergence result (8.20) coincides with the generalized Stirling formula (6.21) whenever p = 0 or p = 1. Thus, it does not bring anything new in these cases.

Now, we observe that if g lies in \(\mathcal {C}^{\max \{2q,r\}}\cap \mathcal {D}^p\cap \mathcal {K}^{\max \{2q,r\}}\) for some \(q\in \mathbb {N}^*\cup \{\frac {1}{2}\}\) and some \(p\in \mathbb {N}\) satisfying p ≤ 2q − 1, then the convergence result in (8.20) still holds if we replace g with \(g^{(r)}\) and p with \((p-r)_+\). Moreover, this modified result can also be obtained by differentiating r times both sides of (8.20) and then removing the terms that vanish at infinity. This important fact can be proved in the same way as for the generalized Stirling formula (see Proposition 7.12 and the comment that follows it).

Remark 8.40

We now see that the generalized Stirling formula (6.21) could also be established similarly as its variant (8.20), i.e., using the Gregory formula-based asymptotic expansion of Σg as discussed in Remark 8.38. However, formula (6.21) is a very elementary consequence of Lemma 2.7, as commented in Remark 6.16. Its proof is elementary, elegant, and leads to the whole Theorem 6.11, which is a strong result that also provides inequalities. \(\lozenge \)

The restriction of the limit (8.20) to the natural integers provides the following alternative formula to compute the asymptotic constant σ[g]. Under the assumptions of Proposition 8.39, we have

$$\displaystyle \begin{aligned} \sigma[g] ~=~ \lim_{n\to\infty}\left(\sum_{k=1}^{n-1} g(k)-\int_1^n g(t){\,}dt -\sum_{k=1}^p\frac{B_k}{k!}{\,}g^{(k-1)}(n)\right). \end{aligned} $$
(8.21)
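For \(g(x)=\ln x\) with p = 1 (and the convention \(B_1=-\frac {1}{2}\)), formula (8.21) can be checked numerically against \(\sigma [\ln ]=-1+\frac {1}{2}\ln (2\pi )\). The sketch below is our own; the function name `sigma_ln` is not from the text.

```python
import math

def sigma_ln(n):
    """Approximation of sigma[ln] via (8.21) with g = ln and p = 1.

    Uses B_1 = -1/2, so the subtracted correction term is -ln(n)/2.
    """
    partial_sum = math.lgamma(n)            # sum_{k=1}^{n-1} ln k
    integral = n * math.log(n) - n + 1      # int_1^n ln t dt
    correction = -0.5 * math.log(n)         # B_1/1! * g(n)
    return partial_sum - integral - correction

exact = -1 + 0.5 * math.log(2 * math.pi)
print(abs(sigma_ln(10 ** 6) - exact))  # small (the error decays like 1/(12n))
```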

Analogue of Liu’s Formula

Liu [64] (see also Mortici [75]) established the following formula. For any \(n\in \mathbb {N}^*\) we have

$$\displaystyle \begin{aligned} n! ~=~ \Gamma(n+1) ~=~ \sqrt{2\pi n}\,\left(\frac{n}{e}\right)^n \exp\left(\int_n^{\infty}\frac{\frac{1}{2}-\{t\}}{t}{\,}dt\right). \end{aligned}$$

This formula provides an exact (as opposed to asymptotic) expression for the gamma function with an integer argument.

We now propose a generalization of this identity to multiple \(\log \Gamma \)-type functions with real arguments. We call it the generalized Liu formula. Recall first the following Dirichlet test for convergence of improper integrals (see, e.g., Titchmarsh [96, p. 21]).

Lemma 8.41 (Dirichlet’s Test)

Let a ≥ 0 and let \(f\colon \mathbb {R}_+\to \mathbb {R}\) be so that the function \(x\mapsto \int _a^x f(t){\,}dt\) is bounded on [a, ∞). Let also g lie in \(\mathcal {C}^1\cap \mathcal {D}^0\cap \mathcal {K}^0\) . Then the improper integral

$$\displaystyle \begin{aligned} \int_a^{\infty}f(t)g(t){\,}dt \end{aligned}$$

converges.

Proposition 8.42 (Generalized Liu’s Formula)

  1. (a)

    If g lies in \(\mathcal {C}^2\cap \mathcal {D}^1\cap \mathcal {K}^2\) , then for any x > 0 we have

    $$\displaystyle \begin{aligned} \Sigma g(x) ~=~ \sigma[g]+\int_1^x g(t){\,}dt -\frac{1}{2}{\,}g(x)+\int_0^{\infty}\textstyle{\left(\frac{1}{2}-\{t\}\right)g'(x+t){\,}dt}. \end{aligned}$$
  2. (b)

    If g lies in \(\mathcal {C}^{2q+1}\cap \mathcal {D}^{2q}\cap \mathcal {K}^{2q+1}\) for some \(q\in \mathbb {N}^*\) , then for any x > 0 we have

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \Sigma g(x) & =&\displaystyle \sigma[g]+\int_1^x g(t){\,}dt -\frac{1}{2}{\,}g(x)+\sum_{k=1}^q\frac{B_{2k}}{(2k)!}{\,}g^{(2k-1)}(x)\\ & &\displaystyle + \int_0^{\infty}\frac{B_{2q}(\{t\})}{(2q)!}{\,}g^{(2q)}(x+t){\,}dt. \end{array} \end{aligned} $$

Proof

Let us prove assertion (b) first. We apply assertion (b) of Proposition 8.36 to the function g with p = 2q. Thus, for any x > 0 and any \(n\in \mathbb {N}\) we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} R^q_1(x) & =&\displaystyle \int_{x+1}^{x+n+1}\frac{B_{2q}(\{t-x\})}{(2q)!}{\,}(\Sigma g)^{(2q)}(t){\,}dt\\ & &\displaystyle -\int_x^{x+n+1}\frac{B_{2q}(\{t-x\})}{(2q)!}{\,}(\Sigma g)^{(2q)}(t){\,}dt. \end{array} \end{aligned} $$

By Proposition 7.7, we have

$$\displaystyle \begin{aligned} (\Sigma g)^{(2q)}(t+1)-(\Sigma g)^{(2q)}(t) ~=~ g^{(2q)}(t) \end{aligned}$$

and hence we obtain

$$\displaystyle \begin{aligned} R^q_1(x) ~=~ S^q_n(x) + T^q_n(x), \end{aligned}$$

where

$$\displaystyle \begin{aligned} \begin{array}{rcl} S^q_n(x) & =&\displaystyle \int_x^{x+n}\frac{B_{2q}(\{t-x\})}{(2q)!}{\,} g^{(2q)}(t){\,}dt{\,},\\ T^q_n(x) & =&\displaystyle -\int_{x+n}^{x+n+1}\frac{B_{2q}(\{t-x\})}{(2q)!}{\,}(\Sigma g)^{(2q)}(t){\,}dt{\,}. \end{array} \end{aligned} $$

Now, we observe that the sequence \(n\mapsto S^q_n(x)\) converges by Dirichlet’s test (see Lemma 8.41). Indeed, \(g^{(2q)}\) lies in \(\mathcal {C}^1\cap \mathcal {D}^0\cap \mathcal {K}^0\) by Proposition 4.12, and for every u ≥ x we have that

$$\displaystyle \begin{aligned} \begin{array}{rcl} \left|\int_x^u\frac{B_{2q}(\{t-x\})}{(2q)!}{\,}dt\right| & =&\displaystyle \left|\int_0^{u-x}\frac{B_{2q}(\{t\})}{(2q)!}{\,}dt\right|\\ & =&\displaystyle \left|\int_{\lfloor u-x\rfloor}^{u-x}\frac{B_{2q}(\{t\})}{(2q)!}{\,}dt\right| ~\leq ~ \frac{|B_{2q}|}{(2q)!}{\,}, \end{array} \end{aligned} $$

where we have used the well-known fact that the integral on (0, 1) of the Bernoulli polynomial B 2q is zero.

Let us now show that the sequence \(n\mapsto T^q_n(x)\) approaches zero as n → ∞. Using integration by parts, we obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} T^q_n(x) & =&\displaystyle -\int_0^1\frac{B_{2q}(t)}{(2q)!}{\,}(\Sigma g)^{(2q)}(x+n+t){\,}dt\\ & =&\displaystyle \int_0^1\frac{B_{2q+1}(t)}{(2q+1)!}{\,}(\Sigma g)^{(2q+1)}(x+n+t){\,}dt. \end{array} \end{aligned} $$

Since \((\Sigma g)^{(2q+1)}\) lies in \(\mathcal {K}^{-1}\), for large n we obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} |T^q_n(x)| & \leq &\displaystyle \frac{|B_{2q+1}|}{(2q+1)!}{\,}\left|\int_0^1(\Sigma g)^{(2q+1)}(x+n+t){\,}dt\right|\\ & =&\displaystyle \frac{|B_{2q+1}|}{(2q+1)!}{\,}\left|g^{(2q)}(x+n)\right|, \end{array} \end{aligned} $$

which approaches zero as n → ∞ by Theorem 4.14(b). This proves assertion (b).

Assertion (a) can be proved similarly by applying assertion (a) of Proposition 8.36 to the function g with p = 1. For any x > 0 and any \(n\in \mathbb {N}\) we have

$$\displaystyle \begin{aligned} R_1(x) ~=~ S_n(x) + T_n(x), \end{aligned}$$

where

$$\displaystyle \begin{aligned} \begin{array}{rcl} S_n(x) & =&\displaystyle -\int_x^{x+n}B_1(\{t-x\}){\,} g'(t){\,}dt{\,},\\ T_n(x) & =&\displaystyle \int_{x+n}^{x+n+1}B_1(\{t-x\}){\,}(\Sigma g)'(t){\,}dt{\,}. \end{array} \end{aligned} $$

We now see that the sequence \(n\mapsto S_n(x)\) converges by Dirichlet’s test. Moreover, the sequence \(n\mapsto T_n(x)\) approaches zero as n → ∞. Indeed, using integration by parts we obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} T_n(x) & =&\displaystyle \int_0^1B_1(t){\,}(\Sigma g)'(x+n+t){\,}dt\\ & =&\displaystyle \frac{B_2}{2}{\,}g'(x+n)-\int_0^1\frac{B_2(t)}{2}{\,}(\Sigma g)''(x+n+t){\,}dt, \end{array} \end{aligned} $$

and we conclude the proof as in assertion (b) since g′ lies in \(\mathcal {C}^1\cap \mathcal {D}^0\cap \mathcal {K}^0\). □

Example 8.43

Let us apply assertion (a) of Proposition 8.42 to \(g(x)=\ln x\). We obtain

$$\displaystyle \begin{aligned} \ln\Gamma(x) ~=~ \frac{1}{2}\ln(2\pi)-x+\left(x-\frac{1}{2}\right)\ln x+\int_0^{\infty}\frac{\frac{1}{2}-\{t\}}{t+x}{\,}dt, \end{aligned}$$

or equivalently,

$$\displaystyle \begin{aligned} J(x) ~=~ J^2[\ln\circ\Gamma](x) ~=~ \int_0^{\infty}\frac{\frac{1}{2}-\{t\}}{t+x}{\,}dt, \end{aligned}$$

which extends the original Liu formula to a real argument. \(\lozenge \)
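The extended Liu formula of Example 8.43 can be verified numerically. Over each interval [k, k + 1] the integrand integrates in closed form, since with a = x + k we have \(\int _0^1\frac {1/2-u}{u+a}{\,}du=(a+\frac {1}{2})\ln (1+\frac {1}{a})-1\). The following sketch (our own naming; a partial sum with a tail of order 1/terms) compares this with Binet’s function computed from \(\ln \Gamma \).

```python
import math

def binet_via_integral(x, terms=10 ** 5):
    """Partial sum for the integral in Example 8.43, interval by interval."""
    total = 0.0
    for k in range(terms):
        a = x + k
        total += (a + 0.5) * math.log1p(1 / a) - 1
    return total

def binet_exact(x):
    """J(x) = ln Gamma(x) - (1/2)ln(2 pi) + x - (x - 1/2) ln x."""
    return (math.lgamma(x) - 0.5 * math.log(2 * math.pi)
            + x - (x - 0.5) * math.log(x))

print(abs(binet_via_integral(1.0) - binet_exact(1.0)))  # small
```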

Example 8.44

Applying assertion (a) of Proposition 8.42 to \(g(x)=\frac {1}{x}\), we obtain the following integral expression for the digamma function

$$\displaystyle \begin{aligned} \psi(x) ~=~ \ln x-\frac{1}{2x}+\int_0^{\infty}\frac{\{t\}-\frac{1}{2}}{(t+x)^2}{\,}dt. \end{aligned}$$

This expression seems to be previously unknown. \(\lozenge \)
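The integral expression for the digamma function in Example 8.44 is also easy to check numerically: over [k, k + 1], with a = x + k, the integrand integrates to \(\ln (1+\frac {1}{a})-\frac {a+1/2}{a(a+1)}\). The sketch below (our own naming) evaluates the resulting series and compares with the known values \(\psi (1)=-\gamma \) and \(\psi (2)=1-\gamma \).

```python
import math

EULER_GAMMA = 0.5772156649015329

def digamma(x, terms=10 ** 4):
    """Digamma via the integral of Example 8.44 (tail is O(terms**-2))."""
    integral = 0.0
    for k in range(terms):
        a = x + k
        integral += math.log1p(1 / a) - (a + 0.5) / (a * (a + 1))
    return math.log(x) - 1 / (2 * x) + integral

print(abs(digamma(1.0) + EULER_GAMMA))  # psi(1) = -gamma
```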

Setting x = 1 in Proposition 8.42, we immediately derive an integral representation of the asymptotic constant σ[g]. We state this observation in the following corollary.

Corollary 8.45

  1. (a)

    If g lies in \(\mathcal {C}^2\cap \mathcal {D}^1\cap \mathcal {K}^2\) , then we have

    $$\displaystyle \begin{aligned} \sigma[g] ~=~ \frac{1}{2}{\,}g(1)+\int_1^{\infty}\textstyle{\left(\{t\}-\frac{1}{2}\right)g'(t){\,}dt}. \end{aligned}$$
  2. (b)

    If g lies in \(\mathcal {C}^{2q+1}\cap \mathcal {D}^{2q}\cap \mathcal {K}^{2q+1}\) for some \(q\in \mathbb {N}^*\) , then we have

    $$\displaystyle \begin{aligned} \sigma[g] ~=~ \frac{1}{2}{\,}g(1)-\sum_{k=1}^q\frac{B_{2k}}{(2k)!}{\,}g^{(2k-1)}(1) - \int_1^{\infty}\frac{B_{2q}(\{t\})}{(2q)!}{\,}g^{(2q)}(t){\,}dt. \end{aligned}$$

Remark 8.46

Proposition 8.42 and Corollary 8.45 enable one to evaluate certain improper integrals involving polynomial functions of the fractional part of the integration variable. For example, to establish the identity

$$\displaystyle \begin{aligned} \int_1^{\infty}\frac{\{x\}-\frac{1}{2}}{2x+1}{\,}dx ~=~ -\frac{3}{4}+\frac{1}{4}\ln 2+\frac{1}{2}\ln 3 \end{aligned}$$

(Srivastava and Choi [93, p. 600, Problem 11]), we simply use assertion (a) of Corollary 8.45 with \(g(x)=\frac {1}{2}\ln (2x+1)\). In this case, we have

$$\displaystyle \begin{aligned} \Sigma g(x) ~=~ \frac{1}{2}\ln 2{\,}(x-1)+\frac{1}{2}\ln\Gamma\left(x+\frac{1}{2}\right)-\frac{1}{2}\ln\Gamma\left(\frac{3}{2}\right) \end{aligned}$$

and the integral is simply equal to \(\sigma [g]-\frac {1}{2} g(1)\). \(\lozenge \)
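The value claimed in Remark 8.46 can be confirmed numerically. Over [k, k + 1], with a = 2k + 1, the integrand integrates to \(\frac {1}{2}-\frac {a+1}{4}\ln \frac {a+2}{a}\). The following sketch (our own naming; a partial sum whose tail is of order 1/terms) compares the sum with the closed form.

```python
import math

def frac_integral(terms=10 ** 5):
    """Partial sum for int_1^oo ({x} - 1/2)/(2x + 1) dx, interval by interval."""
    total = 0.0
    for k in range(1, terms + 1):
        a = 2 * k + 1
        total += 0.5 - ((a + 1) / 4) * math.log((a + 2) / a)
    return total

target = -0.75 + 0.25 * math.log(2) + 0.5 * math.log(3)
print(abs(frac_integral() - target))  # small
```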

Remark 8.47

In Proposition 8.42, we could substitute for σ[g] its expression given in Corollary 8.45. But then, the restriction to the natural integers of the resulting formulas simply reduces to the application of Euler-Maclaurin’s formula (Proposition 6.31) to g, with a = 1, b = n, h = 1, and N = n − 1. \(\lozenge \)

8.8 Analogue of Wallis’s Product Formula

In the following proposition, we recall one of the different versions of Wallis’s product formula (see, e.g., Finch [37, p. 21]).

Proposition 8.48 (Wallis’s Product Formula)

The following limit holds

$$\displaystyle \begin{aligned} \lim_{n\to\infty}\frac{1\cdot 3{\,}\cdots{\,}(2n-1)}{2\cdot 4{\,}\cdots{\,}(2n)}{\,}\sqrt{n} ~=~ \frac{1}{\sqrt{\pi}}{\,}. \end{aligned} $$
(8.22)

In the additive notation, identity (8.22) becomes

$$\displaystyle \begin{aligned} \lim_{n\to\infty}\left(\frac{1}{2}\,\ln(\pi n)+\sum_{k=1}^{2n}(-1)^{k-1}\,\ln k\right) ~=~ 0. \end{aligned}$$

The following proposition gives an analogue of this latter formula for any function g lying in \(\mathcal {C}^0\cap \mathrm {dom}(\Sigma )\).

Proposition 8.49

Let g lie in \(\mathcal {C}^0\cap \mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\) . Let \(\tilde {g}\colon \mathbb {R}_+\to \mathbb {R}\) be the function defined by the equation \(\tilde {g}(x)=2{\,}g(2x)\) for x > 0. Let also \(h\colon \mathbb {N}^*\to \mathbb {R}\) be the sequence defined by the equation

$$\displaystyle \begin{aligned} \begin{array}{rcl} h(n) & =&\displaystyle \sigma[\tilde{g}]-\sigma[g]+\int_1^2(g(2n+t)-g(t)){\,}dt\\ & &\displaystyle +\sum_{j=1}^pG_j\left(\Delta^{j-1}g(2n+1)-\Delta^{j-1}\tilde{g}(n+1)\right)\qquad \mathit{\mbox{for }}n\in\mathbb{N}^*. \end{array} \end{aligned} $$

Then we have

$$\displaystyle \begin{aligned} \lim_{n\to\infty}\left(h(n) + \sum_{k=1}^{2n}(-1)^{k-1}g(k)\right) ~=~ 0. \end{aligned} $$
(8.23)

Proof

The function \(\tilde {g}\) lies in \(\mathcal {C}^0\cap \mathcal {D}^p\cap \mathcal {K}^p\) by Corollary 4.21. By (5.2), for any \(n\in \mathbb {N}^*\) we thus have

$$\displaystyle \begin{aligned} \sum_{k=1}^{2n}(-1)^{k-1}g(k) ~=~ \sum_{k=1}^{2n}g(k)-\sum_{k=1}^n\tilde{g}(k) ~=~ \Sigma g(2n+1)-\Sigma\tilde{g}(n+1). \end{aligned}$$

Using the discrete version of the generalized Stirling formula (8.11), we get

$$\displaystyle \begin{aligned} \sigma[g] ~=~ \lim_{n\to\infty}\left(\sum_{k=1}^{2n}g(k)-\int_1^{2n+1}g(t){\,}dt+\sum_{j=1}^pG_j\,\Delta^{j-1}g(2n+1)\right) \end{aligned}$$

and

$$\displaystyle \begin{aligned} \sigma[\tilde{g}] ~=~ \lim_{n\to\infty}\left(\sum_{k=1}^{n}\tilde{g}(k)-\int_1^{n+1}\tilde{g}(t){\,}dt +\sum_{j=1}^pG_j\,\Delta^{j-1}\tilde{g}(n+1)\right). \end{aligned}$$

This establishes the claimed formula. □

Formula (8.23) actually holds for infinitely many sequences n ↦ h(n). Indeed, if it holds for a sequence h(n), then it also holds, for instance, for the sequence \(h(n)+n^{-q}\) for any \(q\in \mathbb {N}^*\). Thus, to obtain an elegant analogue of Wallis’s product formula, it is advisable to choose h among the simplest possible functions. For instance, we could consider the sequence obtained from the series expansion of h(n) about infinity after removing all the summands that vanish at infinity.

Example 8.50

Let us apply Proposition 8.49 to \(g(x)=\ln x\) with p = 1. We obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} h(n) & =&\displaystyle 2n\ln(2n+2)-\left(2n+\frac{1}{2}\right)\ln(2n+1)+\ln(n+1)-1+\frac{1}{2}\ln(2\pi)\\ & =&\displaystyle \frac{1}{2}\ln(\pi n)+O\left(n^{-2}\right). \end{array} \end{aligned} $$

Replacing h(n) with \(\frac {1}{2}\ln (\pi n)\) in (8.23) as recommended above, we retrieve the original Wallis product formula (8.22). \(\lozenge \)
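The original Wallis limit (8.22) is straightforward to check numerically; working in log space avoids overflow. The helper name below is ours.

```python
import math

def wallis_term(n):
    """(1*3*...*(2n-1)) / (2*4*...*(2n)) * sqrt(n), computed in log space."""
    log_ratio = sum(math.log(2 * k - 1) - math.log(2 * k)
                    for k in range(1, n + 1))
    return math.exp(log_ratio + 0.5 * math.log(n))

# Converges to 1/sqrt(pi), with an error of order 1/n.
print(wallis_term(10 ** 5), 1 / math.sqrt(math.pi))
```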

Example 8.51

Let us apply Proposition 8.49 to the harmonic number function \(g(x)=H_x\) with p = 1. After a bit of calculus we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} h(n) & =&\displaystyle \frac{1}{2}{\,}H_{2n+1}+\frac{1}{2}\,\ln 2+\ln(n+1)-\psi(2n+3)\\ & =&\displaystyle \frac{1}{2}(\gamma +\ln n)+O\left(n^{-1}\right). \end{array} \end{aligned} $$

We then obtain the following analogue of Wallis’s product formula

$$\displaystyle \begin{aligned} \lim_{n\to\infty}\left(-\ln n+2\,\sum_{k=1}^{2n}(-1)^kH_k\right) ~=~ \gamma{\,}, \end{aligned}$$

which provides an alternative definition of Euler’s constant γ. \(\lozenge \)
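This characterization of γ can be probed numerically with a short script (illustrative only; names are ours). Pairing consecutive terms shows the alternating sum collapses to \(H_n/2\), so the expression below converges to γ at rate O(1/n).

```python
import math

EULER_GAMMA = 0.5772156649015329

def gamma_approx(n):
    """-ln n + 2 sum_{k=1}^{2n} (-1)^k H_k, which tends to Euler's constant."""
    h = 0.0    # running harmonic number H_k
    alt = 0.0  # running alternating sum
    for k in range(1, 2 * n + 1):
        h += 1.0 / k
        alt += h if k % 2 == 0 else -h
    return -math.log(n) + 2.0 * alt

print(gamma_approx(100000))  # ~0.57722
```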

Example 8.52

Let us apply Proposition 8.49 to the harmonic number function of order 2

$$\displaystyle \begin{aligned} g(x) ~=~ H_x^{(2)} ~=~ \zeta(2)-\zeta(2,x+1) \end{aligned}$$

with p = 1. After some algebra we obtain the following analogue of Wallis’s product formula

$$\displaystyle \begin{aligned} \lim_{n\to\infty}\sum_{k=1}^{2n}(-1)^kH_k^{(2)} ~=~ \frac{\pi^2}{24}{\,}. \end{aligned}$$

\(\lozenge \)
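A similar numerical check applies here (illustrative sketch; π²/24 ≈ 0.411234):

```python
import math

def alt_h2_sum(n):
    """sum_{k=1}^{2n} (-1)^k H_k^(2), which tends to pi^2/24."""
    h2 = 0.0   # running generalized harmonic number H_k^(2)
    alt = 0.0
    for k in range(1, 2 * n + 1):
        h2 += 1.0 / (k * k)
        alt += h2 if k % 2 == 0 else -h2
    return alt

print(alt_h2_sum(100000), math.pi ** 2 / 24)
```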

Remark 8.53

Alternative sequences for h(n) may be considered in Proposition 8.49. For instance, if g lies in \(\mathcal {C}^0\cap \mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\), then it is easy to see that

$$\displaystyle \begin{aligned} \sum_{k=1}^{2n}(-1)^{k-1}g(k) ~=~ -\Sigma\tilde{g}(n+1),\qquad n\in\mathbb{N}^*, \end{aligned}$$

where \(\tilde {g}\colon \mathbb {R}_+\to \mathbb {R}\) is the function defined by the equation \(\tilde {g}(x)=\Delta g(2x-1)\) for x > 0. Thus, assuming that \(\tilde {g}\) lies in \(\mathcal {K}^0\), identity (8.23) also holds for

$$\displaystyle \begin{aligned} h(n) ~=~ \sigma[\tilde{g}]+\int_1^{n+1}\tilde{g}(t){\,}dt-\sum_{j=1}^{(p-1)_+}G_j\,\Delta^{j-1}\tilde{g}(n+1). \end{aligned}$$

Similarly, we can easily see that

$$\displaystyle \begin{aligned} \sum_{k=1}^{2n}(-1)^{k-1}g(k) ~=~ g(1)-g(2n)+\Sigma\tilde{g}(n),\qquad n\in\mathbb{N}^*,\end{aligned} $$

where \(\tilde {g}\colon \mathbb {R}_+\to \mathbb {R}\) is the function defined by the equation \(\tilde {g}(x)=\Delta g(2x)\) for x > 0. Thus, assuming again that \(\tilde {g}\) lies in \(\mathcal {K}^0\), identity (8.23) also holds for

$$\displaystyle \begin{aligned} h(n) ~=~ g(2n)-g(1)-\sigma[\tilde{g}]-\int_1^n\tilde{g}(t){\,}dt+\sum_{j=1}^{(p-1)_+}G_j\,\Delta^{j-1}\tilde{g}(n). \end{aligned}$$

It is clear that the most appropriate function h among these possibilities strongly depends on the form of the function g. \(\lozenge \)

Remark 8.54

Using summation by parts with the classical indefinite sum operator (see, e.g., Graham et al. [41, p. 55]), it is not difficult to show that

$$\displaystyle \begin{aligned} \Sigma_x g(2x) ~=~ x{\,}g(2x)-g(2)-\Sigma_x\left((x+1)(\Delta g(2x)+\Delta g(2x+1))\right) \end{aligned} $$
(8.24)

(provided both sides exist). More generally, for any \(m\in \mathbb {N}^*\), we can show that

$$\displaystyle \begin{aligned} \Sigma_x g(mx) ~=~ x{\,}g(mx)-g(m)-\sum_{j=0}^{m-1}\Sigma_x\left((x+1)\,\Delta g(mx+j)\right). \end{aligned}$$

For instance, using (8.24) we obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} \Sigma_x\psi(2x) & =&\displaystyle x\,\psi(2x)-\psi(2)-\Sigma_x\left(1+\frac{1}{2x}+\frac{1}{4(x+\frac{1}{2})}\right)\\ & =&\displaystyle x\,\psi(2x)-\psi(1)-x-\frac{1}{2}(\psi(x)+\gamma) -\frac{1}{4}\left(\psi\left(x+\frac{1}{2}\right)-\psi\left(\frac{3}{2}\right)\right)\\ & =&\displaystyle x\,\psi(2x)-\frac{1}{2}\,\psi(x)-x-\frac{1}{4}\,\psi\left(x+\frac{1}{2}\right)+\frac{1}{4}\left(2-2\ln 2+\gamma\right). \end{array} \end{aligned} $$

As this example demonstrates, formula (8.24) can sometimes be very useful in Proposition 8.49 for the computation of \(\sigma [\tilde {g}]\). \(\lozenge \)
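The closed form obtained above for \(\Sigma_x\psi(2x)\) is easy to validate numerically: writing F for the right-hand side, we should have F(x + 1) − F(x) = ψ(2x) and F(1) = 0. A Python sketch (illustrative; the digamma routine below, based on the standard recurrence and asymptotic expansion, is our own helper):

```python
import math

def digamma(x):
    """psi(x) for x > 0, via the recurrence psi(x) = psi(x+1) - 1/x
    and the asymptotic expansion for large arguments."""
    acc = 0.0
    while x < 10.0:
        acc -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return acc + math.log(x) - 0.5 / x - inv2 * (1/12 - inv2 * (1/120 - inv2 / 252))

GAMMA = 0.5772156649015329

def F(x):
    """Closed form for Sigma_x psi(2x) derived in Remark 8.54."""
    return (x * digamma(2 * x) - 0.5 * digamma(x) - x
            - 0.25 * digamma(x + 0.5) + 0.25 * (2 - 2 * math.log(2) + GAMMA))

for x in (0.7, 1.0, 2.3):
    print(F(x + 1) - F(x) - digamma(2 * x))  # should be ~0
print(F(1.0))  # should be ~0
```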

8.9 Analogue of Euler’s Reflection Formula

Recall that the identity

$$\displaystyle \begin{aligned} \Gamma(z)\Gamma(1-z) ~=~ \pi\csc{}(\pi z)\end{aligned} $$
(8.25)

holds for any \(z\in \mathbb {C}\setminus \mathbb {Z}\). This identity, known by the name Euler’s reflection formula (see, e.g., Artin [11, p. 26] and Srivastava and Choi [93, p. 3]), can be proved for instance using the Weierstrassian form of the gamma function.

Motivated by this and similar examples, it is then natural to wonder if an analogue of Euler’s reflection formula holds for any multiple \(\log \Gamma \)-type function, at least on \(\mathbb {R}\setminus \mathbb {Z}\), or even on the interval (0, 1). However, this question seems rather difficult and reflection formulas as beautiful as (8.25) are relatively exceptional.

Now, if we logarithmically differentiate both sides of (8.25), we obtain the following reflection formula for the digamma function (see [93, p. 25])

$$\displaystyle \begin{aligned} \psi(x)-\psi(1-x) ~=~ -\pi\cot{}(\pi x){\,}.\end{aligned} $$
(8.26)

Using an appropriate integration, we also obtain the following reflection formula for the Barnes G-function (see [93, p. 45])

$$\displaystyle \begin{aligned} \ln G(1+x)-\ln G(1-x) ~=~ x\ln(2\pi)-\int_0^x\pi t\cot{}(\pi t){\,}dt{\,}. \end{aligned} $$
(8.27)

These and other examples show that reflection formulas usually share a common pattern. Their right sides typically involve 1-periodic functions or integrals of 1-periodic functions, while their left sides are of one of the following forms

$$\displaystyle \begin{aligned} \Sigma g(x)\pm \Sigma g(1-x)\qquad \mbox{or}\qquad \Sigma g(1+x)\pm \Sigma g(1-x) \end{aligned}$$

for some appropriate functions g.

In this section, we investigate this important topic in the light of our theory. To get straight to the point, we have not found an analogue of Euler’s reflection formula that is systematically applicable to any multiple \(\log \Gamma \)-type function. We nevertheless present a few interesting results that could hopefully be the starting point of a larger theory.

First of all, due to the presence of the arguments x and 1 − x in most of the reflection formulas, it is important to see how the domain of the functions considered in this work can be extended to a larger set. Since many functions g involved in the difference equation Δf = g have singularities at 0 (e.g., \(g(x)=\frac {1}{x}\)), we suggest extending the domain of all these functions to the set \(\mathbb {R}\setminus \{0\}\). Due to the nature of the difference operator Δ, any solution f is then required to be defined on \(\mathbb {R}\setminus (-\mathbb {N})\). The domains of many other associated functions and identities of this theory can be extended likewise. For instance, for any \(p\in \mathbb {N}\) and any \(n\in \mathbb {N}^*\), the domain of the function \(f_n^p[g]\) defined in (1.4) can be extended to \(\mathbb {R}\setminus (-\mathbb {N})\). Similarly, for any \(p\in \mathbb {N}\) and any \(a\in \mathbb {R}\setminus \{0\}\), the domain of the function \(\rho _a^p[g]\) defined in (1.7) can be extended to \(\mathbb {R}\setminus \{-a\}\).

We now have the following important result.

Lemma 8.55

Let \(g\colon \mathbb {R}\setminus \{0\}\to \mathbb {R}\) be a function whose restriction \(g|{ }_{\mathbb {R}_+}\) to \(\mathbb {R}_+\) lies in \(\mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\) . Then, there exists a unique function \(f\colon \mathbb {R}\setminus (-\mathbb {N})\to \mathbb {R}\) such that Δf = g and \(f|{ }_{\mathbb {R}_+}=\Sigma (g|{ }_{\mathbb {R}_+})\) . Moreover,

$$\displaystyle \begin{aligned} f(x) ~=~ \lim_{n\to\infty}f_n^p[g](x){\,},\qquad x\in \mathbb{R}\setminus (-\mathbb{N}). \end{aligned}$$

Proof

For any \(m\in \mathbb {N}\) and any solution \(f\colon \mathbb {R}\setminus (-\mathbb {N})\to \mathbb {R}\) to the equation Δf = g, we must have

$$\displaystyle \begin{aligned} f(x-m) ~=~f(x) - \sum_{k=1}^mg(x-k),\qquad x\in\mathbb{R}_+\setminus\mathbb{N}. \end{aligned} $$
(8.28)

This clearly establishes the first part of the lemma.

Let us now prove that for any \(x\in \mathbb {R}_+\setminus \mathbb {N}\) and any integers 0 ≤ m ≤ n we have

$$\displaystyle \begin{aligned} f_n^p[g](x)-\sum_{k=1}^mg(x-k) ~=~ f_n^p[g](x-m)-\sum_{k=1}^m\rho_n^p[g](x-k). \end{aligned} $$
(8.29)

On the one hand, for j = 1, …, p, we have

$$\displaystyle \begin{aligned} \sum_{k=1}^m{\textstyle{{{x-k}\choose{j-1}}}} ~=~ \sum_{k=0}^{m-1}{\textstyle{{{k+x-m}\choose{j-1}}}} ~=~ \sum_{k=0}^{m-1}\Delta_k{\textstyle{{{k+x-m}\choose{j}}}} ~=~ {\textstyle{{{x}\choose{j}}}}-{\textstyle{{{x-m}\choose{j}}}} \end{aligned}$$

and hence using (1.7) we obtain

$$\displaystyle \begin{aligned} \sum_{k=1}^m\rho_n^p[g](x-k) ~=~ \sum_{k=1}^mg(x+n-k)-\sum_{j=1}^p\left({\textstyle{{{x}\choose{j}}}}-{\textstyle{{{x-m}\choose{j}}}}\right)\Delta^{j-1}g(n){\,}. \end{aligned}$$

On the other hand, subtracting the right side of (8.29) from the left side and using this latter identity together with (1.4), we obtain

$$\displaystyle \begin{aligned} \sum_{k=0}^{n-1}(g(x-m+k)-g(x+k))-\sum_{k=1}^mg(x-k)+\sum_{k=1}^mg(x+n-k), \end{aligned}$$

which is identically zero. This establishes (8.29).

Let us now show that the sequence \(n\mapsto \rho ^p_n[g](x-k)\) converges to zero for any \(x\in \mathbb {R}_+\setminus \mathbb {N}\) and any \(k\in \mathbb {N}\). By (2.12) it is actually enough to show that the sequence

$$\displaystyle \begin{aligned} n ~\mapsto ~ g[n,n+1,\ldots,n+p-1,n+x-k] \end{aligned}$$

converges to zero. However, by Lemma 2.5 this latter sequence can be sandwiched between the sequences

$$\displaystyle \begin{aligned} n ~\mapsto ~ g[n-k,n+1-k,\ldots,n+p-1-k,n+x-k] \end{aligned}$$

and

$$\displaystyle \begin{aligned} n ~\mapsto ~ g[n,n+1,\ldots,n+p-1,n+x], \end{aligned}$$

which both converge to zero by (2.12).

Finally, let \(f\colon \mathbb {R}\setminus (-\mathbb {N})\to \mathbb {R}\) be the unique function defined in the first part of this lemma. Using (8.28) and (8.29), since g lies in \(\mathcal {D}^p\cap \mathcal {K}^p\) we obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} f(x-m) & =&\displaystyle \Sigma g(x)- \sum_{k=1}^mg(x-k) ~=~ \lim_{n\to\infty}f_n^p[g](x)-\sum_{k=1}^mg(x-k)\\ & =&\displaystyle \lim_{n\to\infty}f_n^p[g](x-m), \end{array} \end{aligned} $$

which establishes the second part of the lemma. □

Lemma 8.55 shows that the domain of the function Σg can be extended to \(\mathbb {R}\setminus (-\mathbb {N})\) whenever g is defined on \(\mathbb {R}\setminus \{0\}\). We then use the same symbol Σg for this extended function. Moreover, in this case we have

$$\displaystyle \begin{aligned} \Sigma g(x) ~=~ \lim_{n\to\infty}f_n^p[g](x){\,},\qquad x\in \mathbb{R}\setminus (-\mathbb{N}) \end{aligned}$$

and the Eulerian form (8.1) of Σg extends similarly. Actually, when g is a function of a complex variable, Lemma 8.55 can be easily adapted to extend the function Σg to an appropriate complex domain.

Let us now establish reflection formulas on \(\mathbb {R}\setminus \mathbb {Z}\) for functions Σg when the restriction of g to \(\mathbb {R}_+\) lies in \(\mathcal {D}^0\cap \mathcal {K}^0\). The result is presented in the following two propositions, which deal separately with the cases when \(g|{ }_{\mathbb {R}\setminus \mathbb {Z}}\) is odd or even. The proofs of these propositions are similar and we therefore omit the second one.

Proposition 8.56

Let \(g\colon \mathbb {R}\setminus \{0\}\to \mathbb {R}\) be such that \(g|{ }_{\mathbb {R}_+}\) lies in \(\mathcal {D}^0\cap \mathcal {K}^0\) and let \(\omega \colon \mathbb {R}\setminus \mathbb {Z}\to \mathbb {R}\) be the function defined by the equation

$$\displaystyle \begin{aligned} \omega(x) ~=~ \Sigma g(x)-\Sigma g(1-x)\qquad \mathit{\mbox{for }}x\in\mathbb{R}\setminus\mathbb{Z}. \end{aligned}$$

Then the following assertions are equivalent.

  1. (i)

    The function \(g|{ }_{\mathbb {R}\setminus \mathbb {Z}}\) is odd.

  2. (ii)

    The function ω is 1-periodic.

  3. (iii)

    We have that \(g|{ }_{\mathbb {R}\setminus \mathbb {Z}}\) vanishes at ∞ and

    $$\displaystyle \begin{aligned} \omega(x) ~=~ -\lim_{N\to\infty}\sum_{|k|\leq N}g(x+k),\qquad x\in\mathbb{R}\setminus\mathbb{Z}. \end{aligned}$$

Proof

The equivalence (i) ⇔ (ii) is trivial since Δω(x) = g(x) + g(−x). Let us prove the implication (iii) ⇒ (ii). We have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \Delta\omega(x) & =&\displaystyle -\lim_{N\to\infty} \sum_{|k|\leq N} (g(x+k+1)-g(x+k))\\ & =&\displaystyle -\lim_{N\to\infty} (g(x+N+1)-g(x-N)) ~=~ 0. \end{array} \end{aligned} $$

Finally, let us prove the implication (i) ⇒ (iii). Using Lemma 8.55 we obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} \omega(x) & =&\displaystyle -\sum_{k=0}^{\infty}(g(x+k)+g(x-k-1))\\ & =&\displaystyle \lim_{N\to\infty} \left(-g(x-N-1)-\sum_{|k|\leq N} g(x+k)\right). \end{array} \end{aligned} $$

This completes the proof. □

Proposition 8.57

Let \(g\colon \mathbb {R}\setminus \{0\}\to \mathbb {R}\) be such that \(g|{ }_{\mathbb {R}_+}\) lies in \(\mathcal {D}^0\cap \mathcal {K}^0\) and let \(\omega \colon \mathbb {R}\setminus \mathbb {Z}\to \mathbb {R}\) be the function defined by the equation

$$\displaystyle \begin{aligned} \omega(x) ~=~ \Sigma g(x)+\Sigma g(1-x)\qquad \mathit{\mbox{for }}x\in\mathbb{R}\setminus\mathbb{Z}. \end{aligned}$$

Then the following assertions are equivalent.

  1. (i)

    The function \(g|{ }_{\mathbb {R}\setminus \mathbb {Z}}\) is even.

  2. (ii)

    The function ω is 1-periodic.

  3. (iii)

    We have that \(g|{ }_{\mathbb {R}\setminus \mathbb {Z}}\) vanishes at ∞ and

    $$\displaystyle \begin{aligned} \omega(x) ~=~ -g(x)+\lim_{N\to\infty}\sum_{1\leq |k|\leq N}(g(k)-g(x+k)),\qquad x\in\mathbb{R}\setminus\mathbb{Z}. \end{aligned}$$

Example 8.58 (The Digamma Function)

Consider the odd function g(x) = 1∕x on \(\mathbb {R}\setminus \{0\}\) for which we have the identity Σg(x) = ψ(x) + γ (see Sect. 10.2). This identity actually holds not only on \(\mathbb {R}_+\) but also on \(\mathbb {R}\setminus (-\mathbb {N})\) since by Lemma 8.55 the digamma function ψ extends to this larger domain through the following Eulerian form (see also Srivastava and Choi [93, p. 24])

$$\displaystyle \begin{aligned} \psi(x) ~=~ -\gamma-\frac{1}{x}+\sum_{k=1}^{\infty}\left(\frac{1}{k}-\frac{1}{x+k}\right),\qquad x\in\mathbb{R}\setminus(-\mathbb{N}). \end{aligned}$$

Now, using Proposition 8.56 we immediately obtain the identity

$$\displaystyle \begin{aligned} \psi(x)-\psi(1-x) ~=~ -\lim_{N\to\infty}\sum_{|k|\leq N}\frac{1}{x+k}{\,},\qquad x\in\mathbb{R}\setminus\mathbb{Z}, \end{aligned}$$

where the right-hand function is 1-periodic. Finally, it can be proved (see, e.g., Aigner and Ziegler [3, Chapter 26], Berndt [18, p. 4], and Graham et al. [41, Eq. (6.88)]) that the limit above reduces to \(\pi \cot {}(\pi x)\). We then retrieve the reflection formula (8.26) for the digamma function. \(\lozenge \)
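The convergence of the symmetric partial sums to the cotangent can be observed directly (illustrative sketch; the symmetric tail behaves like O(1/N)), so that ψ(x) − ψ(1 − x) = −π cot(πx), in agreement with (8.26):

```python
import math

def symmetric_sum(x, N):
    """sum_{|k| <= N} 1/(x+k), grouped symmetrically for numerical stability."""
    s = 1.0 / x
    for k in range(1, N + 1):
        s += 1.0 / (x + k) + 1.0 / (x - k)   # = 2x / (x^2 - k^2)
    return s

for x in (0.25, 0.7):
    print(symmetric_sum(x, 10 ** 5), math.pi / math.tan(math.pi * x))
```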

Example 8.59 (A Variant of the Digamma Function)

Consider the even function g(x) = 1∕|x| on \(\mathbb {R}\setminus \{0\}\). Using Lemma 8.55, we then obtain the following expression for Σg on \(\mathbb {R}\setminus (-\mathbb {N})\)

$$\displaystyle \begin{aligned} \Sigma g(x) ~=~ \sum_{k=0}^{\infty}\left(\frac{1}{k+1}-\frac{1}{|x+k|}\right), \end{aligned}$$

or equivalently,

$$\displaystyle \begin{aligned} \Sigma g(x) ~=~ \sum_{k=0}^{\infty}\left(\frac{1}{k+1}-\frac{1}{x+k}\right)+\sum_{k=0}^{\infty}\left(\frac{1}{x+k}-\frac{1}{|x+k|}\right), \end{aligned}$$

where the first series reduces to ψ(x) + γ. If x > 0, then the second series is zero. If x < 0, it reduces to

$$\displaystyle \begin{aligned} \begin{array}{rcl} \sum_{k=0}^{\infty}\min\left\{\frac{2}{x+k}{\,},0\right\} & =&\displaystyle \sum_{k=0}^{\lfloor -x\rfloor}\frac{2}{x+k} ~=~ 2\sum_{k=0}^{\lfloor -x\rfloor}\Delta_k\psi(x+k)\\ & =&\displaystyle 2{\,}(\psi(1-\{-x\})-\psi(x)). \end{array} \end{aligned} $$

Using Proposition 8.57, we then obtain that the function

$$\displaystyle \begin{aligned} \Sigma g(x)+\Sigma g(1-x) ~=~ -\frac{1}{|x|} +\lim_{N\to\infty}\sum_{1\leq |k|\leq N}\left(\frac{1}{|k|}-\frac{1}{|x+k|}\right){\,},\qquad x\in\mathbb{R}\setminus\mathbb{Z}, \end{aligned}$$

is 1-periodic. Using the reflection formula for ψ, we also obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} \Sigma g(x)+\Sigma g(1-x) & =&\displaystyle \psi(\{x\})+\psi(1-\{x\})+2\gamma\\ & =&\displaystyle 2\,\psi(\{x\}) +\pi\cot{}(\pi x)+2\gamma{\,},\qquad x\in\mathbb{R}\setminus\mathbb{Z}, \end{array} \end{aligned} $$

which provides a closed expression for this periodic function. \(\lozenge \)

Example 8.60

Consider the function \(g\colon \mathbb {R}\to \mathbb {R}\) defined by the equation

$$\displaystyle \begin{aligned} g(x) ~=~ \frac{x+1}{x^2+1}\qquad \mbox{for }x\in\mathbb{R}. \end{aligned}$$

We observe that both functions g(x) and \(\tilde {g}(x)=g(-x)\) have restrictions to \(\mathbb {R}_+\) that lie in \(\mathcal {D}^0\cap \mathcal {K}^0\). However, the function g is neither even nor odd. Denoting its even and odd parts by \(g_+\) and \(g_-\), respectively, we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} g_+(x) & =&\displaystyle \frac{g(x)+g(-x)}{2} ~=~ \frac{1}{x^2+1}{\,};\\ g_-(x) & =&\displaystyle \frac{g(x)-g(-x)}{2} ~=~ \frac{x}{x^2+1}{\,}. \end{array} \end{aligned} $$

We can then derive a reflection formula for each of these functions.

Now, it is not difficult to see that (see Example 5.10)

$$\displaystyle \begin{aligned} \begin{array}{rcl} \Sigma g_+(x) & =&\displaystyle \Im (\psi(1+i)-\psi(x+i)){\,};\\ \Sigma g_-(x) & =&\displaystyle \Re (-\psi(1+i)+\psi(x+i)){\,}. \end{array} \end{aligned} $$

Using Propositions 8.56 and 8.57, we then see that both functions

$$\displaystyle \begin{aligned} \Sigma g_+(x)+\Sigma g_+(1-x)\qquad \mbox{and}\qquad \Sigma g_-(x)-\Sigma g_-(1-x) \end{aligned}$$

are 1-periodic. Moreover, their sum \(\Sigma g(x)+\Sigma \tilde {g}(1-x)\) is also 1-periodic. Equivalently, the function

$$\displaystyle \begin{aligned} \Re(\psi(x+i)-\psi(1-x+i))-\Im(\psi(x+i)+\psi(1-x+i)) \end{aligned}$$

is 1-periodic. However, we do not have a reflection formula for Σg or \(\Sigma \tilde {g}\). \(\lozenge \)

Although Propositions 8.56 and 8.57 constitute major steps in the investigation of reflection formulas, they do not provide closed-form expressions for the 1-periodic functions involved in these formulas. For instance, considering the reflection formula for the digamma function (see Example 8.58), we see that Proposition 8.56 does not yield the right-hand side of identity (8.26). Moreover, it seems that such an expression, obtained for example using Herglotz’s trick (see Aigner and Ziegler [3, Chapter 26]), is very specific to the case when g(x) = 1∕x. Now, finding a closed-form expression in the general case remains a very interesting open problem: such a result would provide an analogue of Euler’s reflection formula for a wide class of functions. In this regard, we observe that Herglotz’s trick uses an analogue of Legendre’s duplication formula in the additive notation. Thus, a suitable adaptation of this trick could be helpful to tackle this problem.

Let us now investigate the more general case when the function \(g|{ }_{\mathbb {R}_+}\) lies in \(\mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\). We observe that some reflection formulas can be obtained by integrating or differentiating both sides of a given reflection formula. Thus, if \(g|{ }_{\mathbb {R}_+}\) lies in \(\mathcal {C}^1\cap \mathcal {D}^1\cap \mathcal {K}^1\) for instance, we know from Proposition 4.12 that \(g'|{ }_{\mathbb {R}_+}\) lies in \(\mathcal {C}^0\cap \mathcal {D}^0\cap \mathcal {K}^0\) and we may try to find a reflection formula for Σg′ using Propositions 8.56 and 8.57. Since Σg′ and (Σg)′ differ by a constant by Proposition 7.7, a reflection formula for Σg can then be obtained by integrating both sides of the reflection formula for Σg′. This approach is inspired by the elevator method (as discussed in Sect. 7.3).

For instance, integrating both sides of (8.26) on \((\frac {1}{2},x)\), where \(\frac {1}{2}<x<1\), we get the identity

$$\displaystyle \begin{aligned} \ln\Gamma(x)+\ln\Gamma(1-x) ~=~ \ln(\pi\csc{}(\pi x)). \end{aligned}$$

Thus, we retrieve Euler’s reflection formula on the interval \((\frac {1}{2},1)\) and this formula can be extended to the complex domain \(\mathbb {C}\setminus \mathbb {Z}\) by analytic continuation. The identity (8.27) can be obtained similarly, observing that

$$\displaystyle \begin{aligned} \ln G(x+1) ~=~ \ln\Gamma(x)+\ln G(x). \end{aligned}$$
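The retrieved identity is easy to confirm numerically with the standard library's log-gamma (illustrative check):

```python
import math

# Check ln Gamma(x) + ln Gamma(1-x) = ln(pi / sin(pi x)) on (0, 1).
for x in (0.1, 0.25, 0.5, 0.8):
    lhs = math.lgamma(x) + math.lgamma(1.0 - x)
    rhs = math.log(math.pi / math.sin(math.pi * x))
    print(x, lhs - rhs)  # ~0
```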

Now, let \(g\colon \mathbb {R}\setminus \{0\}\to \mathbb {R}\) be a function such that \(g|{ }_{\mathbb {R}_+}\) lies in \(\mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\). Let also \(\omega _+[g]\colon \mathbb {R}\setminus \mathbb {Z}\to \mathbb {R}\) and \(\omega _-[g]\colon \mathbb {R}\setminus \mathbb {Z}\to \mathbb {R}\) be the functions defined by the equation

$$\displaystyle \begin{aligned} \omega_{\pm}[g](x) ~=~ \Sigma g(x)\pm\Sigma g(1-x)\qquad \mbox{for }x\in\mathbb{R}\setminus\mathbb{Z}. \end{aligned}$$

We then observe that

$$\displaystyle \begin{aligned} \Delta\omega_{\pm}[g](x) ~=~ g(x) \mp g(-x),\qquad x\in\mathbb{R}\setminus\mathbb{Z}. \end{aligned}$$

It follows that \(\omega _+[g]\) (resp. \(\omega _-[g]\)) is 1-periodic if and only if \(g|{ }_{\mathbb {R}\setminus \mathbb {Z}}\) is even (resp. odd).

The following proposition provides an explicit expression for the function \(\omega _{\pm }[g]\) whenever it is 1-periodic. This expression is constructed from the very definition of Σg.

Proposition 8.61

Let \(g\colon \mathbb {R}\setminus \{0\}\to \mathbb {R}\) be such that \(g|{ }_{\mathbb {R}_+}\) lies in \(\mathcal {D}^p\cap \mathcal {K}^p\) for some \(p\in \mathbb {N}\) . Then the following assertions hold.

  1. (a)

    If \(g|{ }_{\mathbb {R}\setminus \mathbb {Z}}\) is odd, then the function ω [g] is 1-periodic and is equal to

    $$\displaystyle \begin{aligned} \lim_{n\to\infty}\bigg(-\sum_{|k|\leq n-1}g(x+k)-g(x-n) +\sum_{j=1}^p\left({\textstyle{{{x}\choose{j}}}}-{\textstyle{{{1-x}\choose{j}}}}\right)\Delta^{j-1}g(n)\bigg). \end{aligned}$$
  2. (b)

    If \(g|{ }_{\mathbb {R}\setminus \mathbb {Z}}\) is even, then the function ω +[g] is 1-periodic and is equal to

    $$\displaystyle \begin{aligned}\displaystyle \lim_{n\to\infty}\bigg(-g(x)+\sum_{1\leq |k|\leq n-1}(g(k)-g(x+k))\\\displaystyle -g(x-n)+\sum_{j=1}^p\left({\textstyle{{{x}\choose{j}}}}+{\textstyle{{{1-x}\choose{j}}}}\right)\Delta^{j-1}g(n)\bigg). \end{aligned} $$

Proof

Let us prove assertion (a). That ω [g] is 1-periodic is clear from the discussion above. Now, using Lemma 8.55 we obtain

$$\displaystyle \begin{aligned}\displaystyle \omega_-[g](x) ~=~ \lim_{n\to\infty}(f_n^p[g](x)-f_n^p[g](1-x))\\\displaystyle =~ \lim_{n\to\infty}\left(\sum_{k=0}^{n-1}(g(1-x+k)-g(x+k)) +\sum_{j=1}^p\left({\textstyle{{{x}\choose{j}}}}-{\textstyle{{{1-x}\choose{j}}}}\right)\Delta^{j-1}g(n)\right). \end{aligned} $$

This proves assertion (a). Assertion (b) can be established similarly. □

Example 8.62

Consider the odd function \(g\colon \mathbb {R}\to \mathbb {R}\) defined by the equation

$$\displaystyle \begin{aligned} g(x)=x-\frac{x}{x^2+1}\qquad \mbox{for }x\in\mathbb{R}. \end{aligned}$$

The function \(g|{ }_{\mathbb {R}_+}\) clearly lies in \(\mathcal {D}^2\cap \mathcal {K}^2\) and we have (see Example 5.10)

$$\displaystyle \begin{aligned} \Sigma g(x)={\textstyle{{{x}\choose{2}}}}+\Re(\psi(1+i)-\psi(x+i)). \end{aligned}$$

By Proposition 8.61, the function

$$\displaystyle \begin{aligned} \Sigma g(x)-\Sigma g(1-x) ~=~ \Re(\psi(1-x+i)-\psi(x+i)) \end{aligned}$$

is 1-periodic and is equal to the limit

$$\displaystyle \begin{aligned} \lim_{n\to\infty}\left(-\sum_{|k|\leq n-1}h(x+k)-h(x-n)+(2x-1)h(n)\right), \end{aligned}$$

where h(x) = g(x) − x. \(\lozenge \)

Example 8.63 (Euler’s Reflection Formula)

Consider the even function \(g\colon \mathbb {R}\setminus \{0\}\to \mathbb {R}\) defined by the equation \(g(x) = \ln |x|\) for \(x\in \mathbb {R}\setminus \{0\}\). The function \(g|{ }_{\mathbb {R}_+}\) clearly lies in \(\mathcal {D}^1\cap \mathcal {K}^1\) and, since \(\Delta _x\ln |\Gamma (x)|=\ln |x|\) on \(\mathbb {R}\setminus (-\mathbb {N})\), we must have

$$\displaystyle \begin{aligned} \Sigma g(x) ~=~ \ln|\Gamma(x)|{\,},\qquad x\in\mathbb{R}\setminus (-\mathbb{N}).\end{aligned} $$

By Proposition 8.61, the function | Γ(x) Γ(1 − x)| on \(\mathbb {R}\setminus \mathbb {Z}\) is 1-periodic and is equal to

$$\displaystyle \begin{aligned} \lim_{n\to\infty}\left|\frac{1}{x}\prod_{1\leq |k|\leq n}\frac{k}{x+k}\right|. \end{aligned}$$

Euler’s reflection formula then shows that this limit is also \(|\pi \csc {}(\pi x)|\), as expected (see Artin [11, p. 27]). \(\lozenge \)

Remark 8.64

We observe the following interesting link between the analogue of Euler’s reflection formula and the logarithm of the generalized Stirling constant (see Definition 6.17). Let \(g\colon \mathbb {R}\setminus \{0\}\to \mathbb {R}\) be an even function such that \(g|{ }_{\mathbb {R}_+}\) lies in \(\mathcal {C}^0\cap \mathrm {dom}(\Sigma )\). Assume also that g is integrable at 0. Then, we have

$$\displaystyle \begin{aligned} \overline{\sigma}[g|{}_{\mathbb{R}_+}] ~=~ \int_0^1\Sigma g(t){\,}dt ~=~ \frac{1}{2}\int_0^1(\Sigma g(t)+\Sigma g(1-t))dt{\,},\end{aligned} $$

that is,

$$\displaystyle \begin{aligned} \overline{\sigma}[g|{}_{\mathbb{R}_+}] ~=~ \frac{1}{2}\int_0^1\omega_+[g](t){\,}dt. \end{aligned}$$

For instance, for the function \(g(x) = \ln |x|\) (see Example 8.63), we obtain

$$\displaystyle \begin{aligned} \overline{\sigma}[g|{}_{\mathbb{R}_+}] ~=~ \frac{1}{2}\int_0^1\ln(\pi\csc{}(\pi t)){\,}dt \end{aligned}$$

and it is not difficult to see that this expression reduces to \(\frac {1}{2}\ln (2\pi )\). \(\lozenge \)
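The last reduction can be confirmed by a midpoint-rule quadrature, which avoids the logarithmic singularities at the endpoints (illustrative sketch; the function name is ours):

```python
import math

def raabe_type_integral(N):
    """Midpoint-rule approximation of (1/2) * int_0^1 ln(pi csc(pi t)) dt."""
    total = 0.0
    for i in range(N):
        t = (i + 0.5) / N
        total += math.log(math.pi / math.sin(math.pi * t))
    return 0.5 * total / N

print(raabe_type_integral(10 ** 5), 0.5 * math.log(2 * math.pi))
```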

8.10 Analogue of Gauss’ Digamma Theorem

The following formula, due to Gauss, enables one to compute the values of the digamma function ψ for rational arguments. If \(a,b\in \mathbb {N}^*\) with a < b, then we have

$$\displaystyle \begin{aligned} \psi\left(\frac{a}{b}\right) ~=~ -\gamma-\ln(2b)-\frac{\pi}{2}\,\cot\frac{a\pi}{b} +2\sum_{j=1}^{\lfloor(b-1)/2\rfloor}\cos\left(2j\pi\,\frac{a}{b}\right)\ln\left(\sin\frac{j\pi}{b}\right) \end{aligned} $$
(8.30)

(see, e.g., Knuth [53, p. 95] and Srivastava and Choi [93, p. 30]). This formula can be extended to all integers \(a,b\in \mathbb {N}^*\) by means of the difference equation ψ(x + 1) − ψ(x) = 1∕x.

For instance, we have

$$\displaystyle \begin{aligned} \psi\left(\frac{3}{4}\right) ~=~ -\gamma +\frac{\pi}{2}-3\ln 2. \end{aligned}$$

It is natural to wonder if an analogue of formula (8.30) holds for any multiple \(\log \Gamma \)-type function. Finding an analogue as beautiful as this formula seems to be hard. However, we have the following partial result.

Proposition 8.65

Let \(g\in \mathcal {D}^0\cap \mathcal {K}^0\) and let \(a,b\in \mathbb {N}^*\) with a < b. Then

$$\displaystyle \begin{aligned} \Sigma g\left(\frac{a}{b}\right) ~=~ \frac{1}{b}\,\sum_{j=0}^{b-1}\left(1-\omega_b^{-aj}\right)S^b_j[g], \end{aligned}$$

where

$$\displaystyle \begin{aligned} \omega_b ~=~ e^{\frac{2\pi i}{b}}\qquad \mathit{\mbox{and}}\qquad S^b_j[g] ~=~ \sum_{k=1}^{\infty}\omega_b^{jk}{\,}g\left(\frac{k}{b}\right). \end{aligned}$$

Proof

By definition of the map Σ, we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \Sigma g\left(\frac{a}{b}\right) & =&\displaystyle \lim_{n\to\infty}\left(\sum_{k=1}^{n-1}g\left(\frac{bk}{b}\right)-\sum_{k=0}^{n-1}g\left(\frac{bk+a}{b}\right)\right)\\ & =&\displaystyle \lim_{n\to\infty}\sum_{k=1}^{bn-1}(u_b(k)-u_b(k-a)){\,}g\left(\frac{k}{b}\right), \end{array} \end{aligned} $$

where \(u_b(k) = 1\), if b divides k, and \(u_b(k) = 0\), otherwise; that is,

$$\displaystyle \begin{aligned} u_b(k) ~=~ \frac{1}{b}\sum_{j=0}^{b-1}\omega_b^{jk}. \end{aligned}$$

This completes the proof. □

Proposition 8.65 provides a first step in the search for an explicit expression for \(\Sigma g(\frac {a}{b})\). Depending upon the function g, more computations may be necessary to obtain a useful expression. In this respect, the derivation of formula (8.30) by means of Proposition 8.65 can be found in Marichal [66, p. 13].
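To illustrate Proposition 8.65, take g(x) = 1/x, so that Σg = ψ + γ. For j ≥ 1, the series \(S_j^b[g]=\sum_k \omega_b^{jk}\,b/k\) has the classical closed form \(-b\ln(1-\omega_b^j)\) (principal branch), while the divergent j = 0 term may be dropped since its coefficient \(1-\omega_b^0\) vanishes. A numerical sketch (illustrative; the reference values ψ(1/2) = −γ − 2 ln 2 and ψ(1/4) = −γ − π/2 − 3 ln 2 are classical):

```python
import cmath
import math

def sigma_reciprocal(a, b):
    """Sigma g(a/b) for g(x) = 1/x, i.e. psi(a/b) + gamma, via Proposition 8.65.

    Uses the closed form S_j^b[g] = -b * log(1 - omega_b^j) for j = 1, ..., b-1."""
    omega = cmath.exp(2j * math.pi / b)
    total = 0.0 + 0.0j
    for jj in range(1, b):
        total += (1 - omega ** (-a * jj)) * (-b) * cmath.log(1 - omega ** jj)
    return (total / b).real

# psi(1/2) + gamma = -2 ln 2 ;  psi(1/4) + gamma = -pi/2 - 3 ln 2
print(sigma_reciprocal(1, 2), -2 * math.log(2))
print(sigma_reciprocal(1, 4), -math.pi / 2 - 3 * math.log(2))
```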

Example 8.66

Let us apply Proposition 8.65 to the function \(g_s(x) = -x^{-s}\), where s > 1. This function lies in \(\mathcal {D}^0\cap \mathcal {K}^0\) and we have \(\Sigma g_s(x) = \zeta (s,x)-\zeta (s)\); see Example 1.7. Let \(a,b\in \mathbb {N}^*\) with a < b. For j = 0, …, b − 1, we then have

$$\displaystyle \begin{aligned} S_j^b[g_s] ~=~ -b^s\,\mathrm{Li}_s(\omega_b^j),\end{aligned} $$

where

$$\displaystyle \begin{aligned} \mathrm{Li}_s(z) ~=~ \sum_{k=1}^{\infty}\frac{z^k}{k^s}\end{aligned} $$

is the polylogarithm function. Using Proposition 8.65, we then obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} \zeta\left(s,\frac{a}{b}\right) & =&\displaystyle \zeta(s)-b^{s-1}\,\sum_{j=0}^{b-1}\left(1-\omega_b^{-aj}\right)\mathrm{Li}_s(\omega_b^j)\\ & =&\displaystyle b^{s-1}\,\sum_{j=0}^{b-1}\omega_b^{-aj}\,\mathrm{Li}_s(\omega_b^j). \end{array} \end{aligned} $$

The inverse conversion formula is simply given by

$$\displaystyle \begin{aligned} \mathrm{Li}_s(\omega_b^j) ~=~ b^{-s}\,\sum_{k=1}^b\omega_b^{jk}\,\zeta\left(s,\frac{k}{b}\right),\qquad j=1,\ldots,b-1. \end{aligned}$$

\(\lozenge \)
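The conversion formula can be checked numerically, say for s = 3 (illustrative sketch; both series are simply truncated, with a crude Euler-Maclaurin tail correction for the Hurwitz zeta function):

```python
import cmath
import math

def hurwitz_zeta(s, q, K=2000):
    """zeta(s, q) = sum_{k>=0} (k+q)^(-s), truncated with an Euler-Maclaurin tail."""
    head = sum((k + q) ** (-s) for k in range(K))
    x = K + q
    return head + x ** (1 - s) / (s - 1) + 0.5 * x ** (-s) + s * x ** (-s - 1) / 12

def polylog_unit(s, z, K=200000):
    """Li_s(z) for |z| = 1, z != 1, by direct (slowly converging) summation."""
    term, total = z, 0.0 + 0.0j
    for k in range(1, K + 1):
        total += term / k ** s
        term *= z
    return total

def rhs(s, a, b):
    """b^(s-1) * sum_j omega_b^(-a j) Li_s(omega_b^j); the j = 0 term is zeta(s)."""
    omega = cmath.exp(2j * math.pi / b)
    total = hurwitz_zeta(s, 1.0) + 0j       # j = 0: Li_s(1) = zeta(s)
    for jj in range(1, b):
        total += omega ** (-a * jj) * polylog_unit(s, omega ** jj)
    return (b ** (s - 1)) * total.real

print(hurwitz_zeta(3, 1 / 3), rhs(3, 1, 3))
print(hurwitz_zeta(3, 2 / 3), rhs(3, 2, 3))
```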

8.11 Generalized Gautschi’s Inequality

Gautschi [38] showed that the following double inequality holds for any 0 ≤ a ≤ 1

$$\displaystyle \begin{aligned} e^{(a-1)\,\psi(x+1)} ~\leq ~ \frac{\Gamma(x+a)}{\Gamma(x+1)} ~\leq ~ x^{a-1},\qquad x>0. \end{aligned}$$

As a consequence, since \(\psi (x)<\ln x\) for any x > 0, he also obtained that

$$\displaystyle \begin{aligned} (x+1)^{a-1} ~\leq ~ \frac{\Gamma(x+a)}{\Gamma(x+1)} ~\leq ~ x^{a-1},\qquad x>0, \end{aligned}$$

which is also a straightforward consequence of the Wendel inequality (6.5). We refer to these inequalities as the Gautschi inequality.

We now provide an analogue of Gautschi’s inequality for certain multiple \(\log \Gamma \)-type functions and for any a ≥ 0. We call it the generalized Gautschi’s inequality. As usual, we use the additive notation.

Proposition 8.67 (Generalized Gautschi’s Inequality)

Suppose that g lies in \(\mathcal {C}^2\cap \mathcal {D}^p\cap \mathcal {K}^{\max \{p,2\}}\) for some \(p\in \mathbb {N}\) and let a ≥ 0 and x > 0 be such that Σg is convex on [x + ⌊a⌋, ∞). Then we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} (a-\lceil a\rceil){\,}g(x+\lceil a\rceil) & \leq &\displaystyle (a-\lceil a\rceil){\,}(\Sigma g)'(x+\lceil a\rceil)\\ & \leq &\displaystyle \Sigma g(x+a)-\Sigma g(x+\lceil a\rceil) ~\leq ~ (a-\lceil a\rceil){\,}g(x+\lfloor a\rfloor){\,}. \end{array} \end{aligned} $$

(The inequalities are to be reversed if Σg is concave on [x + ⌊a⌋, ∞).)

Proof

We follow the same steps as in Gautschi’s proof. We can assume that k ≤ a < k + 1 for some fixed \(k\in \mathbb {N}\). Let x > 0 be fixed so that Σg is convex on [x + k, ∞). Let also \(f\colon [k,k+1)\to \mathbb {R}\) and \(\varphi \colon [k,k+1)\to \mathbb {R}\) be the functions defined by the equations

$$\displaystyle \begin{aligned} f(a) ~=~ \frac{1}{k+1-a}{\,}(\Sigma g(x+a)-\Sigma g(x+k+1)) \end{aligned}$$

and

$$\displaystyle \begin{aligned} \varphi(a) ~=~ (k+1-a)^2f'(a) \end{aligned}$$

for k ≤ a < k + 1. We then observe that

$$\displaystyle \begin{aligned} (k+1-a){\,}f'(a) ~=~ f(a)+D_a{\,}((k+1-a){\,}f(a)) ~=~ f(a)+(\Sigma g)'(x+a). \end{aligned}$$

It then follows that

$$\displaystyle \begin{aligned} \varphi(a) ~=~ (k+1-a){\,}(f(a)+(\Sigma g)'(x+a)) \end{aligned}$$

and

$$\displaystyle \begin{aligned} \varphi'(a) ~=~ (k+1-a){\,}(\Sigma g)''(x+a). \end{aligned}$$

We also have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \varphi(k) & =&\displaystyle \Sigma g(x+k)-\Sigma g(x+k+1)+(\Sigma g)'(x+k)\\ & =&\displaystyle (\Sigma g)'(x+k)-g(x+k), \end{array} \end{aligned} $$

where

$$\displaystyle \begin{aligned} g(x+k) ~=~ \int_0^1(\Sigma g)'(x+k+t){\,}dt. \end{aligned}$$

Since Σg is convex on [x + k, ∞), its derivative is increasing on [x + k, ∞), and hence we must have φ(k) ≤ 0 and φ′(a) ≥ 0. Since φ(a) → 0 as a → k + 1, it follows that the function φ is nonpositive and hence that the function f is decreasing. Using L’Hospital’s rule and the fact that φ(k) ≤ 0, we then obtain the following chain of inequalities

$$\displaystyle \begin{aligned} \begin{array}{rcl} -g(x+k+1) & \leq &\displaystyle -(\Sigma g)'(x+k+1)\\ & \leq &\displaystyle \lim_{a\to k+1}f(a) ~\leq ~ f(a) ~\leq ~ f(k) ~=~ -g(x+k). \end{array} \end{aligned} $$

This proves the result. □

Example 8.68

Applying Proposition 8.67 to \(g(x)=\ln x\) and p = 1, we obtain for any a ≥ 0 and any x > 0

$$\displaystyle \begin{aligned} (x+\lceil a\rceil)^{a-\lceil a\rceil} ~\leq ~ e^{(a-\lceil a\rceil)\,\psi(x+\lceil a\rceil)} ~\leq ~ \frac{\Gamma(x+a)}{\Gamma(x+\lceil a\rceil)} ~\leq ~ (x+\lfloor a\rfloor)^{a-\lceil a\rceil}{\,}. \end{aligned}$$

If we assume that 0 ≤ a ≤ 1, then we retrieve the original Gautschi inequality. \(\lozenge \)
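The four-term chain in Example 8.68 is easy to check numerically. The following Python sketch (an illustration, not part of the text) approximates the digamma function ψ by a central difference of `math.lgamma` and verifies the chain at a few sample points:

```python
import math

def digamma(y, h=1e-6):
    # Numerical psi(y) = (ln Gamma)'(y) via a central difference of lgamma
    return (math.lgamma(y + h) - math.lgamma(y - h)) / (2 * h)

def gautschi_chain(x, a):
    # Returns the four members of the inequality chain, left to right
    ca, fa = math.ceil(a), math.floor(a)
    lower = (x + ca) ** (a - ca)
    middle = math.exp((a - ca) * digamma(x + ca))
    ratio = math.gamma(x + a) / math.gamma(x + ca)
    upper = (x + fa) ** (a - ca)
    return lower, middle, ratio, upper

for x, a in [(0.7, 2.3), (1.5, 0.4), (3.0, 5.9)]:
    lo, mid, ratio, up = gautschi_chain(x, a)
    assert lo <= mid <= ratio <= up
```

When a is an integer the four quantities all equal 1, so the sample points above use noninteger values of a.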

Remark 8.69

If we wish to bracket the function Σg(x + a) − Σg(x + 1) in Proposition 8.67, we can use the identity

$$\displaystyle \begin{aligned} \Sigma g(x+\lceil a\rceil) ~=~ \Sigma g(x+1)+\sum_{k=1}^{\lceil a\rceil-1}g(x+k), \end{aligned}$$

which immediately follows from (5.3). For instance, for \(g(x)=\ln x\) we obtain the double inequality

$$\displaystyle \begin{aligned} \begin{array}{rcl} e^{(a-\lceil a\rceil)\,\psi(x+\lceil a\rceil)}(x+\lceil a\rceil -1)^{\underline{\lceil a\rceil -1}} & \leq &\displaystyle \frac{\Gamma(x+a)}{\Gamma(x+1)}\\ & \leq &\displaystyle (x+\lfloor a\rfloor)^{a-\lceil a\rceil}(x+\lceil a\rceil -1)^{\underline{\lceil a\rceil -1}}{\,}. \end{array} \end{aligned} $$

This double inequality holds for any a ≥ 0 and any x > 0. \(\lozenge \)
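The double inequality of Remark 8.69 can also be checked numerically. The sketch below (illustrative only, with ψ again approximated through `math.lgamma`) computes the falling factorial \((x+\lceil a\rceil -1)^{\underline{\lceil a\rceil -1}}\) directly and tests the bracketing of Γ(x + a)∕Γ(x + 1):

```python
import math

def digamma(y, h=1e-6):
    # Numerical psi via a central difference of lgamma
    return (math.lgamma(y + h) - math.lgamma(y - h)) / (2 * h)

def falling(y, k):
    # Falling factorial y^{\underline{k}} = y (y - 1) ... (y - k + 1)
    out = 1.0
    for i in range(k):
        out *= y - i
    return out

def bracket(x, a):
    ca, fa = math.ceil(a), math.floor(a)
    ff = falling(x + ca - 1, ca - 1)
    lower = math.exp((a - ca) * digamma(x + ca)) * ff
    ratio = math.gamma(x + a) / math.gamma(x + 1)
    upper = (x + fa) ** (a - ca) * ff
    return lower, ratio, upper

for x, a in [(0.7, 2.3), (1.5, 3.4)]:
    lo, mid, up = bracket(x, a)
    assert lo <= mid <= up
```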

We end this section with the following corollary, which is obtained by integrating over a ∈ (0, 1) the expressions in the generalized Gautschi inequality (Proposition 8.67).

Corollary 8.70

Suppose that g lies in \(\mathcal {C}^2\cap \mathcal {D}^p\cap \mathcal {K}^{\max \{p,2\}}\) for some \(p\in \mathbb {N}\) and let x > 0 be such that Σg is convex on [x, ∞). Then we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} -\frac{1}{2}{\,}g(x+1) & \leq &\displaystyle -\frac{1}{2}{\,}(\Sigma g)'(x+1)\\ & \leq &\displaystyle \int_x^{x+1}\Sigma g(t){\,}dt -\Sigma g(x+1) ~\leq ~ -\frac{1}{2}{\,}g(x){\,}. \end{array} \end{aligned} $$

(The inequalities are to be reversed if Σg is concave on [x, ∞).) In particular, the following assertions hold.

  1. (a)

    If Σg is not eventually identically zero and if

    $$\displaystyle \begin{aligned} \lim_{x\to\infty}\frac{g(x)}{\Sigma g(x)} ~=~ 0, \end{aligned} $$
    (8.31)

    then

    $$\displaystyle \begin{aligned} \lim_{x\to\infty}\frac{(\Sigma g)'(x)}{\Sigma g(x)} ~=~ 0\qquad \mathit{\mbox{and}}\qquad \Sigma g(x) ~\sim ~ \int_x^{x+1}\Sigma g(t){\,}dt\quad \mathit{\mbox{as }}x\to\infty. \end{aligned}$$
  2. (b)

    If g is not eventually identically zero and if

    $$\displaystyle \begin{aligned} \lim_{x\to\infty}\frac{g(x+1)}{g(x)} ~=~ 1, \end{aligned}$$

    then

    $$\displaystyle \begin{aligned} \lim_{x\to\infty}\frac{(\Sigma g)'(x)}{g(x)} ~=~ 1\qquad \mathit{\mbox{and}}\qquad \lim_{x\to\infty}\frac{\int_x^{x+1}\Sigma g(t){\,}dt -\Sigma g(x)}{g(x)} ~=~ \frac{1}{2}{\,}. \end{aligned}$$

Proof

The inequalities are obtained by integrating over a ∈ (0, 1) the expressions in the generalized Gautschi inequality. Let us now prove assertion (a); assertion (b) can be established similarly. If Σg is not eventually identically zero, then it eventually never vanishes since it lies in \(\mathcal {K}^0\). If condition (8.31) holds, then we must have

$$\displaystyle \begin{aligned} \lim_{x\to\infty}\frac{\Sigma g(x+1)}{\Sigma g(x)} ~=~ \lim_{x\to\infty}\left(1+\frac{g(x)}{\Sigma g(x)}\right) ~=~ 1\quad \mbox{and}\quad \lim_{x\to\infty}\frac{g(x)}{\Sigma g(x+1)} ~=~ 0. \end{aligned}$$

We then complete the proof by dividing all the expressions in the inequalities by Σg(x + 1) and letting x → ∞. □
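For \(g=\ln\) (so that \(\Sigma g=\ln \Gamma \)), the hypotheses of assertion (b) clearly hold, and Raabe’s formula \(\int_x^{x+1}\ln \Gamma (t)\,dt = x\ln x - x + \frac {1}{2}\ln (2\pi )\) makes the two limits easy to test numerically. The following Python sketch (illustrative only, with ψ approximated through `math.lgamma`) checks both limits at x = 1000:

```python
import math

def digamma(y, h=1e-6):
    # Numerical psi via a central difference of lgamma
    return (math.lgamma(y + h) - math.lgamma(y - h)) / (2 * h)

def avg_loggamma(x):
    # Raabe's formula: int_x^{x+1} ln Gamma(t) dt = x ln x - x + (1/2) ln(2 pi)
    return x * math.log(x) - x + 0.5 * math.log(2 * math.pi)

x = 1000.0
# (Sigma g)'(x) / g(x) -> 1, i.e. psi(x) / ln x -> 1
assert abs(digamma(x) / math.log(x) - 1) < 1e-2
# (int_x^{x+1} ln Gamma(t) dt - ln Gamma(x)) / ln x -> 1/2
assert abs((avg_loggamma(x) - math.lgamma(x)) / math.log(x) - 0.5) < 1e-2
```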

8.12 Generalized Webster’s Functional Equation

In the framework of Γ-type functions, Webster [98, Section 8] investigated the multiplicative version of the functional equation

$$\displaystyle \begin{aligned} \textstyle{f(x)+f(x+\frac{1}{2})} ~=~ h(x),\qquad x>0, \end{aligned}$$

and, more generally, of the functional equation

$$\displaystyle \begin{aligned} \sum_{j=0}^{m-1}f\left(x+\frac{j}{m}\right) ~=~ h(x),\qquad x>0, \end{aligned}$$

for any \(m\in \mathbb {N}^*\), where \(h\colon \mathbb {R}_+\to \mathbb {R}\) is a given function satisfying certain conditions.

In this section, we extend Webster’s result by considering and solving the more general equation

$$\displaystyle \begin{aligned} \sum_{j=0}^{m-1}f(x+a{\,}j) ~=~ h(x),\qquad x>0, \end{aligned} $$
(8.32)

where a > 0 is also a given parameter. We call it the generalized Webster functional equation. For instance, we can prove that the unique monotone solution \(f\colon \mathbb {R}_+\to \mathbb {R}\) to the equation

$$\displaystyle \begin{aligned} f(x)+f(x+a) ~=~ \frac{1}{x} \end{aligned}$$

is given by

$$\displaystyle \begin{aligned} f(x) ~=~ \frac{1}{2a}\,\psi\left(\frac{x+a}{2a}\right)-\frac{1}{2a}\,\psi\left(\frac{x}{2a}\right). \end{aligned}$$
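That this function does solve the equation follows from the recurrence ψ(z + 1) = ψ(z) + 1∕z, since the sum f(x) + f(x + a) telescopes to \(\frac {1}{2a}\big (\psi (\frac {x}{2a}+1)-\psi (\frac {x}{2a})\big )=\frac {1}{x}\). A quick numerical confirmation (illustrative only, with ψ approximated through `math.lgamma`):

```python
import math

def digamma(y, h=1e-6):
    # Numerical psi via a central difference of lgamma
    return (math.lgamma(y + h) - math.lgamma(y - h)) / (2 * h)

def f(x, a):
    # Claimed monotone solution of f(x) + f(x + a) = 1/x
    return (digamma((x + a) / (2 * a)) - digamma(x / (2 * a))) / (2 * a)

for x, a in [(0.5, 1.0), (2.3, 0.7), (5.0, 3.1)]:
    assert abs(f(x, a) + f(x + a, a) - 1 / x) < 1e-6
```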

Our general result is stated in the following theorem, a variant of which was established by Webster [98, Theorem 8.1] in the special case when p = 1 and \(a=\frac {1}{m}\).

Theorem 8.71 (Generalized Webster’s Functional Equation)

Let \(p\in \mathbb {N}\), \(m\in \mathbb {N}^*\) , a > 0, and \(h\in \mathcal {D}^q\cap \mathcal {K}^q\) for some integer q ≥ p. Define also the function \(h_a\colon \mathbb {R}_+\to \mathbb {R}\) by the equation

$$\displaystyle \begin{aligned} h_a(x) ~=~ h(ax)\qquad \mathit{\mbox{for }}x>0. \end{aligned}$$

If \(\Delta h_a\) lies in \(\mathcal {D}^p\cap \mathcal {K}^p_+\cap \mathcal {K}^q\) (resp. \(\mathcal {D}^p\cap \mathcal {K}^p_-\cap \mathcal {K}^q\) ), then there is a unique solution to equation (8.32) lying in \(\mathcal {K}^p\) , namely

$$\displaystyle \begin{aligned} f(x) ~=~ \Sigma h_{am}\left(\frac{x+a}{am}\right)-\Sigma h_{am}\left(\frac{x}{am}\right). \end{aligned}$$

Moreover, this solution lies in \(\mathcal {K}^p_-\) (resp. \(\mathcal {K}^p_+\) ).

Proof

Suppose for instance that \(\Delta h_a\) lies in \(\mathcal {D}^p\cap \mathcal {K}^p_+\cap \mathcal {K}^q\) and let \(g_a^m\colon \mathbb {R}_+\to \mathbb {R}\) be defined by the equation \(g_a^m(x)=\Delta h_a(mx)\) for x > 0. By Corollary 4.21, the function \(g_a^m\) lies in \(\mathcal {D}^p\cap \mathcal {K}^p_+\cap \mathcal {K}^q\). Suppose that \(f\colon \mathbb {R}_+\to \mathbb {R}\) is a solution to equation (8.32). Then necessarily

$$\displaystyle \begin{aligned} g_a^m(x) ~=~ h(amx+a)-h(amx) ~=~ \sum_{j=0}^{m-1}\Delta_jf(amx+aj) ~=~ \Delta_x f(amx). \end{aligned}$$

If f lies in \(\mathcal {K}^p\), then by the uniqueness and existence theorems we have that

$$\displaystyle \begin{aligned} f(amx) ~=~ f(am)+\Sigma g_a^m(x) \end{aligned}$$

and f must lie in \(\mathcal {K}^p_-\). Since both \(g_a^m\) and h lie in \(\mathcal {D}^q\cap \mathcal {K}^q\), by Propositions 5.7 and 5.8 we then have

$$\displaystyle \begin{aligned} \begin{array}{rcl} f(amx) & =&\displaystyle f(am)+\Sigma_x h(amx+a)-\Sigma h(amx)\\ & =&\displaystyle f(am)+\Sigma_x h_{am}\left(x+\frac{1}{m}\right)-\Sigma h_{am}(x)\\ & =&\displaystyle c+\Sigma h_{am}\left(x+\frac{1}{m}\right)-\Sigma h_{am}(x), \end{array} \end{aligned} $$

or equivalently,

$$\displaystyle \begin{aligned} f(x) ~=~ c+\Sigma h_{am}\left(\frac{x+a}{am}\right)-\Sigma h_{am}\left(\frac{x}{am}\right) \end{aligned} $$
(8.33)

for some \(c\in \mathbb {R}\). But the function f specified by (8.33) satisfies (8.32) if and only if c = 0; indeed, we then have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \sum_{j=0}^{m-1}f(x+aj) & =&\displaystyle mc+\sum_{j=0}^{m-1}\Delta_j\,\Sigma h_{am}\left(\frac{x+aj}{am}\right)\\ & =&\displaystyle mc+\Delta\Sigma h_{am}\left(\frac{x}{am}\right) ~=~ mc+h(x). \end{array} \end{aligned} $$

This completes the proof. □

Example 8.72

Theorem 8.71 shows that the unique eventually monotone or eventually log-convex solution to the functional equation

$$\displaystyle \begin{aligned} f(x)f(x+a){\,}x^p ~=~ 1,\qquad x>0,{\,}a>0,{\,}p>0, \end{aligned}$$

is the function

$$\displaystyle \begin{aligned} f(x) ~=~ \left(\frac{\Gamma(\frac{x}{2a})}{\sqrt{2a}{\,}\Gamma(\frac{x+a}{2a})}\right)^p. \end{aligned}$$

This result was established by Thielman [95] (see also Anastassiadis [5]). The special case when p = 1 was previously shown by Mayer [70]. \(\lozenge \)
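The identity \(f(x)f(x+a)\,x^p=1\) for this f follows from the telescoping product \(\Gamma (u)/\big (2a\,u\,\Gamma (u)\big )=1/x\) with \(u=x/2a\), and it can be confirmed numerically. A minimal Python sketch (illustrative only):

```python
import math

def f(x, a, p):
    # Claimed solution of f(x) f(x + a) x^p = 1
    return (math.gamma(x / (2 * a))
            / (math.sqrt(2 * a) * math.gamma((x + a) / (2 * a)))) ** p

for x, a, p in [(1.3, 0.6, 2.0), (0.8, 1.5, 1.0), (4.2, 2.0, 3.5)]:
    assert abs(f(x, a, p) * f(x + a, a, p) * x ** p - 1) < 1e-9
```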

Combining both Theorems 8.27 and 8.71, we can derive immediately the following corollary, which in a sense provides yet another characterization of multiple Γ-type functions. For a similar result on the gamma function, see Artin [11, p. 35].

Corollary 8.73

Let \(p\in \mathbb {N}\), \(m\in \mathbb {N}^*\) , and \(g\in \mathcal {D}^p\cap \mathcal {K}^{p+1}\) . Define also the function \(g_m\colon \mathbb {R}_+\to \mathbb {R}\) by the equation \(g_m(x)=g(\frac {x}{m})\) for x > 0. Then the function f =  Σg is the unique solution lying in \(\mathcal {K}^p\) to the equation

$$\displaystyle \begin{aligned} \sum_{j=0}^{m-1}f\left(\frac{x+j}{m}\right) ~=~ \sum_{j=1}^m\Sigma g\left(\frac{j}{m}\right)+\Sigma g_m(x),\qquad x>0. \end{aligned}$$

Example 8.74

For any \(m\in \mathbb {N}^*\) the gamma function is the unique log-convex solution \(f\colon \mathbb {R}_+\to \mathbb {R}_+\) to the equation

$$\displaystyle \begin{aligned} \prod_{j=0}^{m-1}f\left(\frac{x+j}{m}\right) ~=~ \frac{\Gamma(x)}{m^{x-\frac{1}{2}}}{\,}(2\pi)^{\frac{m-1}{2}},\qquad x>0. \end{aligned}$$

Equivalently, for any \(m\in \mathbb {N}^*\) the gamma function is the unique log-convex solution \(f\colon \mathbb {R}_+\to \mathbb {R}_+\) to the equation

$$\displaystyle \begin{aligned} \prod_{j=0}^{m-1}f\left(\frac{x+j}{m}\right) ~=~ \prod_{j=0}^{m-1}\Gamma\left(\frac{x+j}{m}\right),\qquad x>0. \end{aligned}$$
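The first of the two equations in Example 8.74 is, of course, Gauss’ multiplication formula in disguise, and it is straightforward to verify numerically that f = Γ satisfies it. A short Python check (illustrative only):

```python
import math

def lhs(x, m):
    # Product of Gamma((x + j)/m) for j = 0, ..., m - 1
    return math.prod(math.gamma((x + j) / m) for j in range(m))

def rhs(x, m):
    # Gamma(x) m^{1/2 - x} (2 pi)^{(m-1)/2}
    return math.gamma(x) * m ** (0.5 - x) * (2 * math.pi) ** ((m - 1) / 2)

for x, m in [(0.4, 2), (1.7, 3), (3.2, 5)]:
    assert abs(lhs(x, m) / rhs(x, m) - 1) < 1e-10
```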