In this chapter, we discuss the higher order differentiability properties of Σg when g lies in \(\mathcal {C}^r\cap \mathcal {D}^p\cap \mathcal {K}^{\max \{p,r\}}\) for any \(p,r\in \mathbb {N}\). In particular, we show the fundamental fact that Σg also lies in \(\mathcal {C}^r\) and that the sequence \(n \mapsto D^rf^p_n[g]\) converges uniformly on any bounded subinterval of \(\mathbb {R}_+\) to \(D^r\Sigma g\).

We also show that the functions \((\Sigma g)^{(r)}\) and \(\Sigma g^{(r)}\) differ by a constant, and we investigate some properties of these functions, including their asymptotic behaviors and an analogue of Euler’s series representation of the constant γ. We present and discuss a procedure, which we call the “elevator” method, to compute Σg by first evaluating \(\Sigma g^{(r)}\). Finally, we provide an alternative uniqueness result for higher order differentiable solutions to the equation Δf = g.

7.1 Differentiability of Multiple \(\log \Gamma \)-Type Functions

In this first section we investigate the higher order differentiability of the function Σg when g is of class \(\mathcal {C}^r\) for some \(r\in \mathbb {N}\). We start with the following preliminary but very important result.

Proposition 7.1

If g lies in \(\mathcal {C}^r\cap \mathcal {D}^p\cap \mathcal {K}^{\max \{p,r\}}\) for some \(r,p\in \mathbb {N}\) , then the function Σg lies in \(\mathcal {C}^r\cap \mathcal {D}^{p+1}\cap \mathcal {K}^{\max \{p,r\}}\).

Proof

If g lies in \(\mathcal {C}^r\cap \mathcal {D}^p\cap \mathcal {K}^{\max \{p,r\}}\) for some \(r,p\in \mathbb {N}\), then clearly it also lies in \(\mathcal {C}^r\cap \mathcal {D}^{\max \{p,r\}}\cap \mathcal {K}^{\max \{p,r\}}\). By Proposition 5.6, Σg must lie in \(\mathcal {D}^{p+1}\cap \mathcal {K}^{\max \{p,r\}}\). Let us now show that it also lies in \(\mathcal {C}^r\).

We first observe that \(g^{(r)}\) lies in \(\mathcal {C}^0\cap \mathcal {D}^{(p-r)_+}\cap \mathcal {K}^{(p-r)_+}\). This is clear if r ≤ p by Proposition 4.12. If r > p, then we first see that \(g^{(p)}\) lies in \(\mathcal {C}^{r-p}\cap \mathcal {D}^0\cap \mathcal {K}^{r-p}\), and hence also in \(\mathcal {K}^0\cap \mathcal {K}^1\). Using Proposition 4.16(b) repeatedly, we then see that \(g^{(r)}\) lies in \(\mathcal {C}^0\cap \mathcal {D}^{-1}\cap \mathcal {K}^0\).

By Proposition 5.18, \(\Sigma g^{(r)}\) must lie in \(\mathcal {C}^0\cap \mathcal {D}^{(p-r)_++1}\cap \mathcal {K}^{(p-r)_+}\). Hence, there exists \(F\in \mathcal {C}^r\) such that \(F^{(r)}=\Sigma g^{(r)}\). By Proposition 4.12, F must lie in \(\mathcal {K}^{\max \{p,r\}}\). Now, we also have

$$\displaystyle \begin{aligned} D^r\Delta F ~=~ \Delta F^{(r)} ~=~ \Delta\Sigma g^{(r)} ~=~ g^{(r)}, \end{aligned}$$

which shows that Δ(F + P) = g for some polynomial P of degree at most r. By Corollary 4.6 we have that F + P lies in \(\mathcal {K}^{\max \{p,r\}}\). But then, by the uniqueness Theorem 3.1 we must have F + P =  Σg + c for some \(c\in \mathbb {R}\). Hence Σg lies in \(\mathcal {C}^r\). □

Remark 7.2

If g lies in \(\mathcal {C}^r\cap \mathcal {D}^p\cap \mathcal {K}^p\) for some integers 0 ≤ r < p, then the function Σg lies in \(\mathcal {C}^r\) by Proposition 7.1. Interestingly, this result can also be established very easily using the following argument. Let \(n\in \mathbb {N}\) be such that Σg is p-convex or p-concave on \(I_n=(n,\infty )\). By Lemma 2.6(a), the function Σg lies in \(\mathcal {C}^{p-1}(I_n)\) and hence also in \(\mathcal {C}^r(I_n)\). Using (5.3), we immediately obtain that Σg lies in \(\mathcal {C}^r\). \(\lozenge \)

We now present the following important and rather surprising result. It shows that Proposition 7.1 no longer holds when r > p if we only require g to lie in \(\mathcal {K}^p\) instead of \(\mathcal {K}^{\max \{p,r\}}\). Since the proof is somewhat technical, we defer it to Appendix F.

Proposition 7.3

For every \(p\in \mathbb {N}\) , there exists a function g lying in \(\mathcal {C}^{p+1}\cap \mathcal {D}^p\cap \mathcal {K}^p\) for which Σg does not lie in \(\mathcal {C}^{p+1}\) . Thus, the operator Σ does not always preserve differentiability when the order of differentiability exceeds that of convexity.

Proof

See Appendix F. □

The next theorem is the central result of this section. In this theorem, we recall the fundamental result given in Proposition 7.1 and we show that, under the same assumptions, the sequence \(n \mapsto D^rf^p_n[g]\) converges uniformly on any bounded subinterval of \(\mathbb {R}_+\) to \(D^r\Sigma g\). We first consider a technical lemma.

Lemma 7.4

Let g lie in \(\mathcal {C}^r\cap \mathcal {D}^p\cap \mathcal {K}^p\) for some integers 0 ≤ r ≤ p. Then, for any \(n\in \mathbb {N}\) the function \(\rho ^{p+1}_n[\Sigma g]\) lies in \(\mathcal {C}^r\) . Moreover, the sequence \(n\mapsto D^r\rho ^{p+1}_n[\Sigma g]\) converges uniformly on any bounded subset of \(\mathbb {R}_+\) to zero.

Proof

By Proposition 7.1, we have that Σg lies in \(\mathcal {C}^r\). Using (1.7) it is then clear that, for any \(n\in \mathbb {N}\), the function \(\rho ^{p+1}_n[\Sigma g]\) lies in \(\mathcal {C}^r\).

Let us now show the second part of the lemma. Negating g if necessary, we may assume that it lies in \(\mathcal {K}^p_-\). In this case, \(D^r\Sigma g\) must lie in \(\mathcal {K}^{p-r}_+\) by Proposition 4.12. Let n ≥ p be an integer such that g is p-concave on \([n,\infty )\). Using Proposition 2.1 repeatedly, we can see that there exist p − r + 1 pairwise distinct points \(\xi _0^n,\ldots ,\xi ^n_{p-r}\in (0,p)\) such that

$$\displaystyle \begin{aligned} D^r_xP_p[\Sigma g](n,\ldots,n+p;n+x) ~=~ P_{p-r}[D^r\Sigma g](n+\xi_0^n,\ldots,n+\xi^n_{p-r};n+x). \end{aligned}$$

Let us now fix x > 0. Using (2.11) and then (2.2) and (2.3), we obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} D^r\rho^{p+1}_n[\Sigma g](x) & =&\displaystyle D^r\Sigma g[n+\xi_0^n,\ldots,n+\xi^n_{p-r},n+x]{\,}\prod_{i=0}^{p-r}(x-\xi^n_i)\\ & =&\displaystyle A_n\,\prod_{i=1}^{p-r}(x-\xi^n_i), \end{array} \end{aligned} $$

if \(x\neq \xi _i^n\) for i = 0, …, p − r, and \(D^r\rho ^{p+1}_n[\Sigma g](x)=0\), otherwise, where

$$\displaystyle \begin{aligned} A_n ~=~ D^r\Sigma g[n+\xi_1^n,\ldots,n+\xi^n_{p-r},n+x] -D^r\Sigma g[n+\xi_0^n,\ldots,n+\xi^n_{p-r}]. \end{aligned}$$

Now, on the one hand, we clearly have

$$\displaystyle \begin{aligned} \prod_{i=1}^{p-r}|x-\xi^n_i| ~\leq ~ c_x^{p-r}, \end{aligned}$$

where \(c_x=\max \{p,\lceil x\rceil \}\). On the other hand, using Lemma 2.5 (with the fact that \(D^r\Sigma g\) lies in \(\mathcal {K}^{p-r}_+\)) and then (2.8), we obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} |A_n| & \leq &\displaystyle \left|D^r\Sigma g[n+c_x,\ldots,n+c_x+p-r]-D^r\Sigma g[n-p+r,\ldots,n]\right|\\ & =&\displaystyle \frac{1}{(p-r)!}\left|\Delta^{p-r}D^r\Sigma g(n+c_x)-\Delta^{p-r}D^r\Sigma g(n-p+r)\right|\\ & =&\displaystyle \frac{1}{(p-r)!}{\,}\sum_{j=-p+r}^{c_x-1}|\Delta^{p-r}D^rg(n+j)|. \end{array} \end{aligned} $$

Thus, for any bounded subinterval E of \(\mathbb {R}_+\), we obtain the inequality

$$\displaystyle \begin{aligned} \sup_{x\in E}\left|D^r\rho^{p+1}_n[\Sigma g](x)\right| ~\leq ~ \frac{c_{\sup E}^{p-r}}{(p-r)!}{\,}\sum_{j=-p+r}^{c_{\sup E}-1}|\Delta^{p-r}D^rg(n+j)|. \end{aligned}$$

But the latter sum converges to zero as \(n\to _{\mathbb {N}}\infty \) since \(D^rg\) lies in \(\mathcal {D}^{p-r}\cap \mathcal {K}^{p-r}\) by Proposition 4.12. This completes the proof of the lemma. □

Theorem 7.5 (Higher Order Differentiability of Multiple \(\boldsymbol \log \ \boldsymbol {\Gamma }\)-Type Functions)

Let g lie in \(\mathcal {C}^r\cap \mathcal {D}^p\cap \mathcal {K}^{\max \{p,r\}}\) for some \(r,p\in \mathbb {N}\) . The following assertions hold.

  1. (a)

    Σg lies in \(\mathcal {C}^r\cap \mathcal {D}^{p+1}\cap \mathcal {K}^{\max \{p,r\}}\).

  2. (b)

    The sequence \(n\mapsto D^rf^p_n[g]\) converges uniformly on any bounded subset of \(\mathbb {R}_+\) to \(D^r\Sigma g\).

Proof

Assertion (a) immediately follows from Proposition 7.1. When r ≤ p, assertion (b) immediately follows from Lemma 7.4 and identity (5.4). Let us now assume that r > p. Using (5.4) and then (1.7) and (5.3) we obtain

$$\displaystyle \begin{aligned} D^rf^p_n[g](x) ~=~ D^r\Sigma g(x)-D^r\Sigma g(x+n) ~=~ -\sum_{k=0}^{n-1}g^{(r)}(x+k). \end{aligned}$$

By Proposition 4.12, we have that \(g^{(p)}\) lies in \(\mathcal {C}^{r-p}\cap \mathcal {D}^0\cap \mathcal {K}^{r-p}\), and hence also in \(\mathcal {K}^0\cap \mathcal {K}^1\). Using Proposition 4.16(b) repeatedly, we then see that \(g^{(r)}\) lies in \(\mathcal {C}^0\cap \mathcal {D}^{-1}\cap \mathcal {K}^0\). Thus, we can apply Theorem 3.12 to the function \(g^{(r)}\), with \(f=D^r\Sigma g\). Since f lies in \(\mathcal {C}^0\cap \mathcal {D}^0\cap \mathcal {K}^0\) by assertion (a) and Proposition 4.12, it follows from Theorem 3.12 that the sequence \(n\mapsto D^rf^p_n[g]\) converges uniformly on \(\mathbb {R}_+\) to \(f-f(\infty )=f=D^r\Sigma g\). □

Example 7.6

The function \(g(x)=\ln x\) clearly lies in \(\mathcal {C}^{\infty }\cap \mathcal {D}^1\cap \mathcal {K}^{\infty }\). Using Theorem 7.5, we now see that the function \(\Sigma g(x)=\ln \Gamma (x)\) lies in \(\mathcal {C}^{\infty }\cap \mathcal {D}^2\cap \mathcal {K}^{\infty }\). Moreover, for any \(r\in \mathbb {N}^*\), we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \psi_{r-1}(x) & =&\displaystyle D^r\ln\Gamma(x) ~=~ \lim_{n\to\infty}D^rf^1_n[\ln](x)\\ & =&\displaystyle \lim_{n\to\infty}\left(0^{r-1}\ln n +(-1)^r(r-1)!\,\sum_{k=0}^{n-1}\frac{1}{(x+k)^r}\right). \end{array} \end{aligned} $$

If r = 1, then we obtain

$$\displaystyle \begin{aligned} \psi(x) ~=~ \lim_{n\to\infty}\left(\ln n-\sum_{k=0}^{n-1}\frac{1}{x+k}\right). \end{aligned}$$

If r ≥ 2, then we get (compare with, e.g., Srivastava and Choi [93, p. 33])

$$\displaystyle \begin{aligned} \psi_{r-1}(x) ~=~ (-1)^r(r-1)!\,\zeta(r,x), \end{aligned}$$

where \(s\mapsto \zeta (s,x)\) is the Hurwitz zeta function (see Example 1.7). \(\lozenge \)
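As a quick numerical sanity check (not part of the original text), the two limit formulas above can be tested with a short Python sketch using only the standard library; `psi_via_limit` and `trigamma_via_series` are ad hoc helper names.

```python
import math

# Check the limit formulas of Example 7.6 for g(x) = ln x.
# For r = 1: psi(x) = lim_{n->inf} ( ln n - sum_{k=0}^{n-1} 1/(x+k) ).
def psi_via_limit(x, n=10**6):
    return math.log(n) - sum(1.0 / (x + k) for k in range(n))

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

psi1 = psi_via_limit(1.0)  # should be close to -gamma

# For r = 2 the limit reduces to psi_1(x) = zeta(2, x) = sum_{k>=0} 1/(x+k)^2;
# we add the integral tail 1/(x+n) to correct the truncation.
def trigamma_via_series(x, n=10**6):
    return sum(1.0 / (x + k) ** 2 for k in range(n)) + 1.0 / (x + n)

trigamma1 = trigamma_via_series(1.0)  # should be close to zeta(2) = pi^2/6
```

At x = 1 these recover the classical values ψ(1) = −γ and ψ₁(1) = π²∕6.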

7.2 Some Properties of the Derivatives

In this section, we investigate the functions \((\Sigma g)^{(r)}\) and \(\Sigma g^{(r)}\) and some of their properties. We also show how the asymptotic behaviors of these functions can be analyzed from results of Chap. 6, including the generalized Stirling formula. Finally, we provide a series representation of the asymptotic constant σ[g] as an analogue of Euler’s series representation of γ.

In the next proposition, we essentially establish the fact that the functions \((\Sigma g)^{(r)}\) and \(\Sigma g^{(r)}\) are equal up to an additive constant. This result will have several important consequences in this chapter and the following ones.

Proposition 7.7

Let g lie in \(\mathcal {C}^r\cap \mathcal {D}^p\cap \mathcal {K}^{\max \{p,r\}}\) for some \(p\in \mathbb {N}\) and \(r\in \mathbb {N}^*\) . Then \(g^{(r)}\) lies in \(\mathcal {C}^0\cap \mathcal {D}^{(p-r)_+}\cap \mathcal {K}^{(p-r)_+}\) . Moreover, for any x > 0 we have

$$\displaystyle \begin{aligned} (\Sigma g)^{(r)}(x)-\Sigma g^{(r)}(x) ~=~ (\Sigma g)^{(r)}(1) ~=~ g^{(r-1)}(1) - \sigma[g^{(r)}]{\,}. \end{aligned} $$
(7.1)

If r > p, then

$$\displaystyle \begin{aligned} \sigma[g^{(r)}] ~=~ g^{(r-1)}(1) + \sum_{k=1}^{\infty}g^{(r)}(k). \end{aligned}$$

Proof

As already observed in the proof of Proposition 7.1, the first claim follows from Propositions 4.12 and 4.16(b). Moreover, we have that Σg lies in \(\mathcal {C}^r\cap \mathcal {D}^{p+1}\cap \mathcal {K}^{\max \{p,r\}}\). Let us now prove (7.1). By Proposition 4.12, the function \(\varphi _1=(\Sigma g)^{(r)}\) is a solution in \(\mathcal {K}^{(p-r)_+}\) to the equation \(\Delta \varphi =g^{(r)}\). By the existence Theorem 3.6, the function \(\varphi _2=\Sigma g^{(r)}\) is also a solution in \(\mathcal {K}^{(p-r)_+}\). Thus, by the uniqueness Theorem 3.1, we must have \((\Sigma g)^{(r)}-\Sigma g^{(r)}=c\) for some \(c\in \mathbb {R}\), and hence we also have \((\Sigma g)^{(r)}(1)=c\).

Now, for any x > 0, using (6.11) we then get

$$\displaystyle \begin{aligned} \begin{array}{rcl} g^{(r-1)}(1)-\sigma[g^{(r)}] & =&\displaystyle g^{(r-1)}(x)-\int_x^{x+1}\Sigma g^{(r)}(t){\,}dt\\ & =&\displaystyle c+g^{(r-1)}(x)-\int_x^{x+1}(\Sigma g)^{(r)}(t){\,}dt. \end{array} \end{aligned} $$

Evaluating the latter integral, we then obtain

$$\displaystyle \begin{aligned} \begin{array}{rcl} g^{(r-1)}(1)-\sigma[g^{(r)}] & =&\displaystyle c+g^{(r-1)}(x)-(\Sigma g)^{(r-1)}(x+1)+(\Sigma g)^{(r-1)}(x)\\ & =&\displaystyle c+g^{(r-1)}(x)-\Delta (\Sigma g)^{(r-1)}(x)\\ & =&\displaystyle c+g^{(r-1)}(x)-(\Delta\Sigma g)^{(r-1)}(x)\\ & =&\displaystyle c, \end{array} \end{aligned} $$

which proves (7.1). Finally, if r > p, then we have that \(g^{(r-1)}\) lies in \(\mathcal {C}^1\cap \mathcal {D}^0\cap \mathcal {K}^1\) and that \(g^{(r)}\) lies in \(\mathcal {C}^0\cap \mathcal {D}^{-1}\cap \mathcal {K}^0\) by Proposition 4.16(b). The last part of the statement then follows from applying Proposition 6.14 to the function \(g^{(r)}\). □

Example 7.8

The function \(g(x)=\frac {1}{x}\) lies in \(\mathcal {C}^{\infty }\cap \mathcal {D}^0\cap \mathcal {K}^{\infty }\) and all its derivatives lie in \(\mathcal {K}^0\). By Theorem 7.5, the function

$$\displaystyle \begin{aligned} \Sigma g(x) ~=~ \sum_{k=0}^{\infty}\left(\frac{1}{k+1}-\frac{1}{x+k}\right) ~=~ H_{x-1} ~=~ \psi(x)+\gamma \end{aligned}$$

lies in \(\mathcal {C}^{\infty }\cap \mathcal {D}^1\cap \mathcal {K}^{\infty }\). Moreover, the series can be differentiated term by term infinitely many times and hence, for any \(r\in \mathbb {N}^*\), we have

$$\displaystyle \begin{aligned} (\Sigma g)^{(r)}(x) ~=~ \sum_{k=0}^{\infty}(-1)^{r+1}\frac{r!}{(x+k)^{r+1}} ~=~ \psi_r(x). \end{aligned}$$

By Proposition 7.7, we also have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \sigma[g^{(r)}] & =&\displaystyle -(-1)^r(r-1)! + (-1)^r{\,}r!\,\sum_{k=1}^{\infty}\frac{1}{k^{r+1}}\\ & =&\displaystyle (-1)^r{\,}(r-1)!\left(r\,\zeta(r+1)-1\right), \end{array} \end{aligned} $$

where \(s\mapsto \zeta (s)\) is the Riemann zeta function. \(\lozenge \)
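The closed form just obtained can be cross-checked numerically (a sketch, not part of the original text) against the series representation of Proposition 7.7, which applies here since r > p = 0; `zeta`, `sigma_series`, and `sigma_closed` are ad hoc helper names.

```python
import math

# For g(x) = 1/x we have g^{(r)}(x) = (-1)^r r!/x^(r+1) and
# g^{(r-1)}(1) = (-1)^(r-1) (r-1)!, so Proposition 7.7 gives
#   sigma[g^{(r)}] = g^{(r-1)}(1) + sum_{k>=1} g^{(r)}(k).
def zeta(s, terms=10**5):
    # partial sum plus a crude Euler-Maclaurin tail correction
    head = sum(n ** -s for n in range(1, terms))
    return head + terms ** (1 - s) / (s - 1) + 0.5 * terms ** -s

def sigma_series(r, terms=10**5):
    head = (-1) ** (r - 1) * math.factorial(r - 1)
    tail = sum((-1) ** r * math.factorial(r) / k ** (r + 1) for k in range(1, terms))
    return head + tail

def sigma_closed(r):
    # the closed form of Example 7.8: (-1)^r (r-1)! (r zeta(r+1) - 1)
    return (-1) ** r * math.factorial(r - 1) * (r * zeta(r + 1) - 1)
```

For r = 2 both expressions evaluate to 2ζ(3) − 1, as the example predicts.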

In the next proposition we show the remarkable fact that the asymptotic equivalence (6.31) still holds if we differentiate both sides.

Proposition 7.9

Let g lie in \(\mathcal {C}^r\cap \mathcal {D}^p\cap \mathcal {K}^{\max \{p,r\}}\) for some \(p\in \mathbb {N}\) and \(r\in \mathbb {N}^*\) , and let a ≥ 0. When \(D^r\Sigma g\) vanishes at infinity, we also assume that

$$\displaystyle \begin{aligned} D^r\Sigma g(n+1) ~\sim ~ D^r\Sigma g(n)\qquad \mathit{\mbox{as }}n\to_{\mathbb{N}}\infty. \end{aligned}$$

Then we have

$$\displaystyle \begin{aligned} D^r\Sigma g(x+a) ~\sim ~ D^r_x\int_x^{x+1}\Sigma g(t){\,}dt ~=~ g^{(r-1)}(x)\qquad \mathit{\mbox{as }}x\to\infty. \end{aligned}$$

Proof

By Proposition 7.7, we have that \(g^{(r)}\) lies in \(\mathcal {C}^0\cap \mathcal {D}^{(p-r)_+}\cap \mathcal {K}^{(p-r)_+}\). Moreover, for any x > 0 we have

$$\displaystyle \begin{aligned} D^r\Sigma g(x+a) ~=~ c+\Sigma g^{(r)}(x+a) \end{aligned}$$

and, using (6.11),

$$\displaystyle \begin{aligned} \begin{array}{rcl} D^r_x\int_x^{x+1}\Sigma g(t){\,}dt & =&\displaystyle g^{(r-1)}(x) ~=~ \int_x^{x+1}(\Sigma g)^{(r)}(t){\,}dt\\ & =&\displaystyle c+\int_x^{x+1}\Sigma g^{(r)}(t){\,}dt, \end{array} \end{aligned} $$

where \(c=g^{(r-1)}(1)-\sigma [g^{(r)}]\). The result then immediately follows from applying Proposition 6.20 to the function \(g^{(r)}\). □

Example 7.10

Applying Proposition 7.9 to the function \(g(x)=\ln x\), for any a ≥ 0 we obtain the equivalences

$$\displaystyle \begin{aligned} \ln\Gamma(x+a) ~\sim ~ x\ln x{\,},\qquad \psi(x+a) ~\sim ~ \ln x\qquad \mbox{as }x\to\infty, \end{aligned}$$

and for any \(\nu \in \mathbb {N}\),

$$\displaystyle \begin{aligned} \psi_{\nu +1}(x+a) ~\sim ~ (-1)^{\nu}\,\frac{\nu !}{x^{\nu +1}}\qquad \mbox{as }x\to\infty. \end{aligned}$$
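These equivalences are easy to observe numerically. The following sketch (not part of the original text; `psi` and `trigamma` are ad hoc helpers) approximates ψ by a central finite difference of `math.lgamma` and ψ₁ by its series, and checks the ratios at a large argument.

```python
import math

def psi(x, h=1e-5):
    # central difference of lgamma approximates psi = (ln Gamma)'
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def trigamma(x, terms=10**5):
    # psi_1(x) = sum_{k>=0} 1/(x+k)^2, with an integral tail correction
    return sum(1.0 / (x + k) ** 2 for k in range(terms)) + 1.0 / (x + terms)

x, a = 500.0, 2.0
digamma_ratio = psi(x + a) / math.log(x)  # psi(x+a) ~ ln x, so ratio -> 1
trigamma_ratio = trigamma(x + a) * x      # nu = 0 case: psi_1(x+a) ~ 1/x
```

Both ratios are already within one percent of 1 at x = 500.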

In the next two propositions, we mainly investigate how the convergence results in (6.4) and (6.21) are modified when the function g is replaced with one of its higher order derivatives. The second proposition can be regarded as the “integrated” version of the first one, and hence it naturally involves the generalized Binet function.

Proposition 7.11

Let g lie in \(\mathcal {C}^r\cap \mathcal {D}^p\cap \mathcal {K}^{\max \{p,r\}}\) for some \(p\in \mathbb {N}\) and \(r\in \mathbb {N}^*\) , and let a ≥ 0. The following assertions hold.

  1. (a)

    \(g^{(r)}\) lies in \(\mathcal {R}_{\mathbb {R}}^{(p-r)_+}\) and both \(\Sigma g^{(r)}\) and \((\Sigma g)^{(r)}\) lie in \(\mathcal {R}_{\mathbb {R}}^{(p-r)_++1}\).

  2. (b)

    For any \(q\in \mathbb {N}\) , the function \(x\mapsto \rho _x^{q+1}[\Sigma g](a)\) lies in \(\mathcal {C}^r\) and we have

    $$\displaystyle \begin{aligned} D_x^r\rho_x^{q+1}[\Sigma g](a) ~=~ \rho_x^{q+1}[\Sigma g^{(r)}](a). \end{aligned}$$
  3. (c)

    We have that \(\rho _x^{(p-r)_++1}[\Sigma g^{(r)}](a) \to 0\) and \(D_x^r\rho _x^{p+1}[\Sigma g](a) \to 0\) as \(x\to \infty \).

Proof

By Proposition 7.7, the function \(g^{(r)}\) lies in \(\mathcal {C}^0\cap \mathcal {D}^{(p-r)_+}\cap \mathcal {K}^{(p-r)_+}\). This immediately proves assertion (a). Now, using (1.7) and then (7.1) we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} D_x^r\rho_x^{q+1}[\Sigma g](a) & =&\displaystyle \Sigma g^{(r)}(x+a)-\Sigma g^{(r)}(x)-\sum_{j=1}^q{\textstyle{{{a}\choose{j}}}}\,\Delta^{j-1}g^{(r)}(x)\\ & =&\displaystyle \rho_x^{q+1}[\Sigma g^{(r)}](a), \end{array} \end{aligned} $$

which proves assertion (b). Assertion (c) follows from assertions (a) and (b) and the fact that \(\mathcal {R}_{\mathbb {R}}^{(p-r)_++1}\subset \mathcal {R}_{\mathbb {R}}^{p+1}\). □

Proposition 7.12

Let g lie in \(\mathcal {C}^r\cap \mathcal {D}^p\cap \mathcal {K}^{\max \{p,r\}}\) for some \(p\in \mathbb {N}\) and \(r\in \mathbb {N}^*\) . The following assertions hold.

  1. (a)

    For any \(q\in \mathbb {N}\) , the function \(J^{q+1}[\Sigma g]\) lies in \(\mathcal {C}^r\) and we have

    $$\displaystyle \begin{aligned} D^r J^{q+1}[\Sigma g] ~=~ J^{q+1}[\Sigma g^{(r)}]. \end{aligned}$$

    In particular, we have \(\sigma [g^{(r)}]=-D^rJ^1[\Sigma g](1)\).

  2. (b)

    We have that \(J^{(p-r)_++1}[\Sigma g^{(r)}](x) \to 0\) and \(D^rJ^{p+1}[\Sigma g](x)\to 0\) as \(x\to \infty \). In particular, if r > p, then \((\Sigma g)^{(r)}(x)\to 0\) as \(x\to \infty \).

  3. (c)

    We have

    $$\displaystyle \begin{aligned} D_x^r \int_0^1\rho_x^{p+1}[\Sigma g](t){\,}dt ~=~ \int_0^1D_x^r \rho_x^{p+1}[\Sigma g](t){\,}dt. \end{aligned}$$

Proof

Using (6.18) and (7.1), we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} D^r J^{q+1}[\Sigma g](x) & =&\displaystyle \Sigma g^{(r)}(x)-\sigma[g^{(r)}]-\int_1^xg^{(r)}(t){\,}dt +\sum_{j=1}^qG_j\,\Delta^{j-1}g^{(r)}(x)\\ & =&\displaystyle J^{q+1}[\Sigma g^{(r)}](x), \end{array} \end{aligned} $$

which proves assertion (a). Now, setting q = p in these equations we obtain

$$\displaystyle \begin{aligned} D^r J^{p+1}[\Sigma g](x) ~=~ J^{(p-r)_++1}[\Sigma g^{(r)}](x)+\sum_{j=(p-r)_++1}^pG_j\,\Delta^{j-1}g^{(r)}(x). \end{aligned}$$

Since \(g^{(r)}\) lies in \(\mathcal {C}^0\cap \mathcal {D}^{(p-r)_+}\cap \mathcal {K}^{(p-r)_+}\), this latter expression vanishes at infinity. This proves assertion (b). Finally, using Proposition 7.11 and assertion (a) we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_0^1D_x^r \rho_x^{p+1}[\Sigma g](t){\,}dt & =&\displaystyle \int_0^1\rho_x^{p+1}[\Sigma g^{(r)}](t){\,}dt ~=~ -J^{p+1}[\Sigma g^{(r)}](x)\\ & =&\displaystyle -D^r J^{p+1}[\Sigma g](x) ~=~ D_x^r \int_0^1\rho_x^{p+1}[\Sigma g](t){\,}dt, \end{array} \end{aligned} $$

which proves assertion (c). □

Assertion (c) of Proposition 7.11 reveals a very important fact. It shows that the convergence result in (6.4) still holds if we replace g with \(g^{(r)}\) and p with \((p-r)_+\). But it also says that this new result can be obtained by differentiating both sides of (6.4) r times and then removing the terms that vanish at infinity.

Similarly, assertion (b) of Proposition 7.12 shows that this property also applies to the generalized Stirling formula (6.21).

Example 7.13

The function \(g(x)=\ln x\) lies in \(\mathcal {C}^{\infty }\cap \mathcal {D}^1\cap \mathcal {K}^{\infty }\) and its derivative \(g'(x)=\frac {1}{x}\) lies in \(\mathcal {C}^{\infty }\cap \mathcal {D}^0\cap \mathcal {K}^{\infty }\). For any a ≥ 0, the limit in (6.4) reduces to

$$\displaystyle \begin{aligned} \ln\Gamma(x+a)-\ln\Gamma(x)-a\ln x ~\to ~ 0\qquad \mbox{as }x\to\infty. \end{aligned}$$

If we replace g with g′ and set p = 0 in (6.4), we get

$$\displaystyle \begin{aligned} \psi(x+a)-\psi(x) ~\to ~ 0\qquad \mbox{as }x\to\infty. \end{aligned}$$

However, this latter limit can also be obtained by differentiating both sides of the previous limit and then removing the term (\(-\frac {a}{x}\)) that vanishes at infinity.

Now, applying the generalized Stirling formula (6.21) to the function \(g(x)=\ln x\), we clearly retrieve the classical Stirling formula

$$\displaystyle \begin{aligned} \ln\Gamma(x)-\frac{1}{2}\ln(2\pi)+x-\left(x-\frac{1}{2}\right)\ln x ~\to ~ 0\qquad \mbox{as }x\to\infty. \end{aligned}$$

Proceeding similarly as above, we then obtain

$$\displaystyle \begin{aligned} \psi(x)-\ln x ~\to ~ 0\qquad \mbox{as }x\to\infty, \end{aligned}$$

which is actually the analogue of Stirling’s formula for the digamma function. \(\lozenge \)
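Both Stirling-type remainders of this example can be observed to vanish numerically. The following sketch (not part of the original text; `stirling_remainder` and `psi` are ad hoc helpers) evaluates them at a large argument.

```python
import math

def stirling_remainder(x):
    # ln Gamma(x) - (1/2) ln(2 pi) + x - (x - 1/2) ln x, which tends to 0
    return math.lgamma(x) - 0.5 * math.log(2 * math.pi) + x - (x - 0.5) * math.log(x)

def psi(x, h=1e-5):
    # central difference of lgamma approximates the digamma function
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

rem_loggamma = stirling_remainder(1000.0)       # classical Stirling remainder
rem_digamma = psi(1000.0) - math.log(1000.0)    # digamma analogue of Stirling
```

Both remainders are already small at x = 1000 and shrink further as x grows.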

Remark 7.14

To emphasize the similarities between Propositions 7.11 and 7.12, we could for instance extend our formalism a bit further as follows. For any \(p\in \mathbb {N}\) and any \(\mathrm {S}\in \{\mathbb {N},\mathbb {R}\}\), let \(\mathcal {J}^p_{\mathrm {S}}\) denote the set of continuous functions \(g\colon \mathbb {R}_+\to \mathbb {R}\) having the asymptotic property that

$$\displaystyle \begin{aligned} J^p[g](t) ~\to ~0\qquad \mbox{as }t\to_{\mathrm{S}}\infty. \end{aligned}$$

This new definition enables one to formalize some results more easily. For instance, using (6.17) we clearly obtain that

$$\displaystyle \begin{aligned} \mathcal{J}^p_{\mathrm{S}}\cap\mathcal{D}^p_{\mathrm{S}} ~=~ \mathcal{J}^{p+1}_{\mathrm{S}}\cap\mathcal{D}^p_{\mathrm{S}} \end{aligned}$$

and this identity could be used to establish assertion (b) of Proposition 7.12 from assertion (a). To give another example, we can see that (6.22) actually means that

$$\displaystyle \begin{aligned} \mathcal{C}^0\cap\mathcal{D}^p\cap\mathcal{K}^p ~\subset ~\mathcal{J}^p_{\mathbb{R}}. \end{aligned}$$

Note also that the generalized Stirling formula simply states that Σg lies in \(\mathcal {J}^{p+1}_{\mathbb {R}}\) whenever g lies in \(\mathcal {C}^0\cap \mathcal {D}^p\cap \mathcal {K}^p\). \(\lozenge \)

Taylor Series Expansion of Σg

Suppose that g lies in \(\mathcal {C}^{\infty }\cap \mathcal {D}^p\cap \mathcal {K}^{\infty }\) for some \(p\in \mathbb {N}\). We know from Proposition 7.12 that

$$\displaystyle \begin{aligned} \sigma[g^{(k)}] ~=~ -D^kJ^1[\Sigma g](1),\qquad k\in\mathbb{N}. \end{aligned}$$

Thus, the exponential generating function (see, e.g., Graham et al. [41, Chapter 7]) for the sequence \(n\mapsto \sigma [g^{(n)}]\) is defined by the equation

$$\displaystyle \begin{aligned} \begin{array}{rcl} \sum_{k=0}^{\infty}\sigma[g^{(k)}]\,\frac{x^k}{k!} & =&\displaystyle -J^1[\Sigma g](x+1){}\\ & =&\displaystyle \sigma[g]+\int_1^{x+1}g(t){\,}dt - \Sigma g(x+1). \end{array} \end{aligned} $$
(7.2)

Denoting this exponential generating function by \(\mathrm {egf}_{\sigma }[g](x)\), the previous equation reduces to

$$\displaystyle \begin{aligned} \mathrm{egf}_{\sigma}[g](x) ~=~ -J^1[\Sigma g](x+1){\,}. \end{aligned}$$

If the function J 1[ Σg] is real analytic at 1, then the series in (7.2) converges in some neighborhood of x = 0. Similarly, if the function Σg is real analytic at 1, then the following Taylor series expansion

$$\displaystyle \begin{aligned} \Sigma g(x+1) ~=~ \sum_{k=1}^{\infty}(\Sigma g)^{(k)}(1)\,\frac{x^k}{k!} \end{aligned} $$
(7.3)

holds in some neighborhood of x = 0, where the numbers \((\Sigma g)^{(k)}(1)\) for \(k\in \mathbb {N}^*\) can also be computed through (7.1).

Example 7.15

Consider again the functions \(g(x)=\ln x\) and \(\Sigma g(x)=\ln \Gamma (x)\). We know from Example 7.6 that

$$\displaystyle \begin{aligned} D\ln\Gamma(1) ~=~ \psi(1) ~=~ \lim_{n\to\infty}\left(\ln n-\sum_{k=1}^n\frac{1}{k}\right) ~=~ -\gamma{\,}, \end{aligned}$$

and that for any integer k ≥ 2

$$\displaystyle \begin{aligned} D^k\ln\Gamma(1) ~=~ \psi_{k-1}(1) ~=~ (-1)^k{\,}(k-1)!\,\zeta(k). \end{aligned}$$

We then obtain the following Taylor series expansion

$$\displaystyle \begin{aligned} \ln\Gamma(x+1) ~=~ -\gamma x+\sum_{k=2}^{\infty}(-1)^k\,\frac{\zeta(k)}{k}{\,}x^k{\,},\qquad |x|<1. \end{aligned}$$

The values of the sequence \(n\mapsto \sigma [g^{(n)}]\) can be obtained using (7.1) or (7.2). We get

$$\displaystyle \begin{aligned} \sigma[g] ~=~ -1+\frac{1}{2}\ln(2\pi),\qquad \sigma[g'] ~=~ \gamma, \end{aligned}$$

and for any integer k ≥ 2

$$\displaystyle \begin{aligned} \sigma[g^{(k)}] ~=~ (-1)^k(k-2)!{\,}(1-(k-1)\zeta(k)){\,}. \end{aligned}$$
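The Taylor expansion of \(\ln \Gamma (x+1)\) above converges quickly for |x| < 1 and is easy to test numerically; the sketch below (not part of the original text, with an ad hoc `zeta` helper) compares a truncated series at x = 1∕2 with `math.lgamma`.

```python
import math

def zeta(s, terms=10**4):
    # partial sum plus a crude Euler-Maclaurin tail correction
    head = sum(n ** -s for n in range(1, terms))
    return head + terms ** (1 - s) / (s - 1) + 0.5 * terms ** -s

EULER_GAMMA = 0.5772156649015329

# ln Gamma(x+1) = -gamma x + sum_{k>=2} (-1)^k zeta(k)/k x^k for |x| < 1
x = 0.5
taylor = -EULER_GAMMA * x + sum((-1) ** k * zeta(k) / k * x ** k for k in range(2, 60))
exact = math.lgamma(x + 1)  # ln Gamma(3/2)
```

Truncating at k = 60 leaves a tail far below the accuracy of the `zeta` helper.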

Analogues of Euler’s Series Representation of γ

Integrating both sides of (7.3) on (0, 1) (assuming that the series can be integrated term by term), we obtain the identity

$$\displaystyle \begin{aligned} \sigma[g] ~=~ \sum_{k=1}^{\infty}(\Sigma g)^{(k)}(1)\,\frac{1}{(k+1)!}{\,}. \end{aligned} $$
(7.4)

Similarly, integrating both sides of (7.2) on (0, 1) (assuming again that the series can be integrated term by term), we obtain the identity

$$\displaystyle \begin{aligned} \sum_{k=0}^{\infty}\sigma[g^{(k)}]\,\frac{1}{(k+1)!} ~=~ \int_1^2(2-t){\,}g(t){\,}dt. \end{aligned} $$
(7.5)

Taking for instance \(g(x)=\frac {1}{x}\) in (7.4), we immediately retrieve Euler’s series representation of γ (see, e.g., Srivastava and Choi [93, p. 272])

$$\displaystyle \begin{aligned} \gamma ~=~ \sum_{k=2}^{\infty}(-1)^k\,\frac{\zeta(k)}{k}{\,}. \end{aligned}$$

This formula can also be obtained taking \(g(x)=\frac {1}{x}\) in (7.5) and using the straightforward identity

$$\displaystyle \begin{aligned} \sigma[g^{(k)}] ~=~ (-1)^kk!\left(\zeta(k+1)-\frac{1}{k}\right),\qquad k\in\mathbb{N}^*. \end{aligned}$$
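Euler’s series converges slowly (its terms behave like (−1)ᵏ∕k), but averaging two consecutive partial sums of the alternating series accelerates it enough for a numerical check. The sketch below is not part of the original text; `zeta` is an ad hoc helper.

```python
import math

def zeta(s, terms=100):
    # partial sum plus a crude Euler-Maclaurin tail correction
    head = sum(n ** -s for n in range(1, terms))
    return head + terms ** (1 - s) / (s - 1) + 0.5 * terms ** -s

# gamma = sum_{k>=2} (-1)^k zeta(k)/k; average S_K and S_{K-1} to accelerate
partial = prev = 0.0
for k in range(2, 2002):
    prev = partial
    partial += (-1) ** k * zeta(k) / k
gamma_approx = 0.5 * (partial + prev)

EULER_GAMMA = 0.5772156649015329
```

The averaged partial sums agree with γ to several decimal places already at K ≈ 2000.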

Considering different functions g(x) in (7.4) and (7.5) enables one to derive various interesting identities. A few applications are given in the following example.

Example 7.16

Taking g(x) = ψ(x) in (7.5) and using the straightforward identity

$$\displaystyle \begin{aligned} \sigma[g^{(k)}] ~=~ \sigma[\psi_k] ~=~ (-1)^{k-1}(k-1)(k-1)!\,\zeta(k),\qquad k\in\mathbb{N},~k\geq 2, \end{aligned}$$

we obtain

$$\displaystyle \begin{aligned} \sum_{k=2}^{\infty}(-1)^k\frac{k-1}{k(k+1)}\,\zeta(k) ~=~ 2-\ln(2\pi){\,}. \end{aligned}$$

Similarly, taking \(g(x)=\ln x\) and then \(g(x)=\ln \Gamma (x)\) in (7.4) and (7.5) we obtain the identities

$$\displaystyle \begin{aligned} \begin{array}{rcl} \sum_{k=2}^{\infty}(-1)^k\frac{1}{k(k+1)}\,\zeta(k) & =&\displaystyle \frac{1}{2}\,\gamma -1+\frac{1}{2}\ln(2\pi){\,},\\ \sum_{k=2}^{\infty}(-1)^k\,\frac{1}{(k+1)(k+2)}{\,}\zeta(k) & =&\displaystyle \frac{1}{2}+\frac{1}{6}{\,}\gamma -2\ln A{\,},\\ \sum_{k=2}^{\infty}(-1)^k\frac{k-1}{k(k+1)(k+2)}{\,}\zeta(k) & =&\displaystyle \frac{5}{4}-\frac{1}{4}\ln(2\pi)-3\ln A{\,}, \end{array} \end{aligned} $$

where A is the Glaisher–Kinkelin constant; see also Srivastava and Choi [93, Section 3.4]. \(\lozenge \)
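The second of these identities converges absolutely fast enough (terms of order 1∕k²) to check with a plain partial sum. The sketch below is not part of the original text; `zeta` is an ad hoc helper, and the numerical value used for the Glaisher–Kinkelin constant is the standard one.

```python
import math

def zeta(s, terms=100):
    # partial sum plus a crude Euler-Maclaurin tail correction
    head = sum(n ** -s for n in range(1, terms))
    return head + terms ** (1 - s) / (s - 1) + 0.5 * terms ** -s

GLAISHER_A = 1.2824271291006226  # Glaisher-Kinkelin constant (known value)
EULER_GAMMA = 0.5772156649015329

# sum_{k>=2} (-1)^k zeta(k)/((k+1)(k+2)) = 1/2 + gamma/6 - 2 ln A
lhs = sum((-1) ** k * zeta(k) / ((k + 1) * (k + 2)) for k in range(2, 5000))
rhs = 0.5 + EULER_GAMMA / 6 - 2 * math.log(GLAISHER_A)
```

Both sides evaluate to roughly 0.0987.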

7.3 Finding Solutions from Derivatives

Given \(r\in \mathbb {N}^*\) and a function \(g\in \mathcal {C}^r\), a solution \(f\in \mathcal {C}^r\) to the equation Δf = g can sometimes be found more easily by first searching for an appropriate solution \(\varphi \in \mathcal {C}^0\) to the equation Δφ = g (r) and then calculating f as an rth antiderivative of φ.

Let us first examine a very simple example to illustrate the extent to which this approach can be easily and usefully applied.

Example 7.17

Let \(g\colon \mathbb {R}_+\to \mathbb {R}\) be defined by the equation

$$\displaystyle \begin{aligned} g(x) ~=~ \int_1^x\ln t{\,}dt\qquad \mbox{for }x>0. \end{aligned}$$

Suppose that we search for a simple expression for the indefinite sum Σg. We can apply Proposition 7.7 and observe that g′ lies in \(\mathcal {C}^{\infty }\cap \mathcal {D}^1\cap \mathcal {K}^{\infty }\) and hence that g lies in \(\mathcal {C}^{\infty }\cap \mathcal {D}^2\cap \mathcal {K}^{\infty }\). Moreover, we have

$$\displaystyle \begin{aligned} (\Sigma g)'(x) ~=~ c+\Sigma g'(x) ~=~ c+\ln\Gamma(x) \end{aligned}$$

for some \(c\in \mathbb {R}\). Thus, we obtain

$$\displaystyle \begin{aligned} \Sigma g(x) ~=~ c(x-1)+\int_1^x\ln\Gamma(t){\,}dt. \end{aligned}$$

To find the value of c, we then observe that

$$\displaystyle \begin{aligned} 0 ~=~ g(1) ~=~\Delta\Sigma g(1) ~=~ c+\int_1^2\ln\Gamma(t){\,}dt \end{aligned}$$

and hence \(c=1-\frac {1}{2}\ln (2\pi )\) (see Example 6.5). Alternatively, this value can also be obtained directly from (7.1); we have

$$\displaystyle \begin{aligned} c ~=~ g(1)-\sigma[g'] ~=~ -\sigma[g'] ~=~ 1-\frac{1}{2}\ln(2\pi){\,}. \end{aligned}$$

Thus, this approach amounts to first searching for a simple expression for Σg′, and then computing Σg using an antiderivative of Σg′.

Finally, we get

$$\displaystyle \begin{aligned} \Sigma g(x) ~=~ -1+\left(1-\frac{1}{2}\ln(2\pi)\right)x+\psi_{-2}(x), \end{aligned}$$

where \(\psi _{-2}\) is the polygamma function \(\psi _{-2}(x)=\int _0^x\ln \Gamma (t){\,}dt\). \(\lozenge \)
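The closed form just obtained can be verified numerically by checking that Δ Σg = g, using the identity Σg(x+1) − Σg(x) = c + ∫ₓˣ⁺¹ ln Γ(t) dt from the example. The sketch below is not part of the original text; `integral_lgamma` and `delta_f` are ad hoc helpers.

```python
import math

def integral_lgamma(a, b, n=2000):
    # composite Simpson rule for the integral of ln Gamma over [a, b]; n even
    h = (b - a) / n
    s = math.lgamma(a) + math.lgamma(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * math.lgamma(a + i * h)
    return s * h / 3

def delta_f(x):
    # with c = 1 - ln(2 pi)/2, we have Sigma g(x+1) - Sigma g(x) = c + int_x^{x+1} ln Gamma
    c = 1 - 0.5 * math.log(2 * math.pi)
    return c + integral_lgamma(x, x + 1)

def g(x):
    # g(x) = int_1^x ln t dt = x ln x - x + 1
    return x * math.log(x) - x + 1
```

At x = 1 this also recovers Raabe-type value ∫₁² ln Γ = ln(2π)∕2 − 1, so that Δ Σg(1) = g(1) = 0.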

The approach described in Example 7.17 is rather simple and can sometimes be very efficient. We will refer to this technique as the elevator method. In very basic terms, to find Σg one proceeds as follows.

Step 1.:

We take the elevator, go down from the ground floor to the rth basement level, and obtain the function \(\Sigma g^{(r)}\) easily.

Step 2.:

We go back to the ground floor by converting the latter function into the sought function Σg using an rth antiderivative.

$$\displaystyle \begin{aligned} \begin{array}{ccc} \Delta f ~=~ g & & f ~=~ \Sigma g \\ \downarrow & & \uparrow \\ \Delta\varphi ~=~ g^{(r)} & \quad \to\quad & \varphi ~=~ \Sigma g^{(r)} \end{array} \end{aligned}$$

To our knowledge, this trick was investigated thoroughly by Krull [55] and then by Dufresnoy and Pisot [34].

In the next theorem we provide a general result based on this idea. This result is actually very general: it applies to any function \(g\in \mathcal {C}^r\), even if Σg is not defined (e.g., \(g(x)=2^x\)).

We first observe that if \(\varphi \in \mathcal {C}^0\) is a solution to the equation Δφ = g (r), then the map

$$\displaystyle \begin{aligned} x ~ \mapsto ~ \int_x^{x+1}\varphi(t){\,}dt-g^{(r-1)}(x) \end{aligned}$$

has a zero derivative and hence it is constant on \(\mathbb {R}_+\). In particular, it has a finite right limit at x = 0.

Theorem 7.18 (The Elevator Method)

Let \(r\in \mathbb {N}^*\) , a > 0, \(g\in \mathcal {C}^r\) , and let \(\varphi \colon \mathbb {R}_+\to \mathbb {R}\) be a continuous solution to the equation Δφ = g (r) . Then there exists a solution \(f\in \mathcal {C}^r\) to the equation Δf = g such that f (r) = φ if and only if

$$\displaystyle \begin{aligned} \int_a^{a+1}\varphi(t){\,}dt ~=~ g^{(r-1)}(a). \end{aligned} $$
(7.6)

If any of these equivalent conditions holds, then f is uniquely determined (up to an additive constant) by

$$\displaystyle \begin{aligned} f(x) ~=~ f(a) + \sum_{k=1}^{r-1}c_k{\,}\frac{(x-a)^k}{k!} + \int_a^x \frac{(x-t)^{r-1}}{(r-1)!}{\,}\varphi(t){\,}dt, \end{aligned} $$
(7.7)

where, for k = 1, …, r − 1,

$$\displaystyle \begin{aligned} c_k ~=~ \sum_{j=0}^{r-k-1}\frac{B_j}{j!}{\,}\left(g^{(j+k-1)}(a)-\int_a^{a+1} \frac{(a+1-t)^{r-j-k}}{(r-j-k)!}{\,}\varphi(t){\,}dt\right). \end{aligned} $$
(7.8)

Proof

Condition (7.6) is clearly necessary. Indeed, we have

$$\displaystyle \begin{aligned} \int_a^{a+1}\varphi(t){\,}dt ~=~ f^{(r-1)}(a+1)-f^{(r-1)}(a) ~=~ g^{(r-1)}(a). \end{aligned}$$

Let us show that it is sufficient. Since φ is continuous, there exists \(f\in \mathcal {C}^r\) such that \(f^{(r)}=\varphi \). Taylor’s theorem then provides the expansion formula (7.7) with arbitrary parameters \(c_k=f^{(k)}(a)\) for k = 1, …, r − 1. Now we need to determine the parameters \(c_1,\ldots ,c_{r-1}\) for f to be a solution to the equation Δf = g. To this end, we need the following claim.

Claim

The function f satisfies the equation Δf = g if and only if \(f^{(r)}\) satisfies the equation \(\Delta f^{(r)}=g^{(r)}\) and \(\Delta f^{(j)}(a)=g^{(j)}(a)\) for j = 0, …, r − 1.

Proof of the Claim

The condition is clearly necessary. To see that it is sufficient, we simply show by decreasing induction on j that \(\Delta f^{(j)}=g^{(j)}\). Clearly, this is true for j = r. Suppose that it is true for some integer j satisfying 1 ≤ j ≤ r. For any x > 0 we have

$$\displaystyle \begin{aligned} \Delta f^{(j-1)}(x)-\Delta f^{(j-1)}(a) ~&=~ \int_a^x \Delta f^{(j)}(t){\,}dt ~=~ \int_a^x g^{(j)}(t){\,}dt\\ &=~ g^{(j-1)}(x)-g^{(j-1)}(a) ~=~ g^{(j-1)}(x)-\Delta f^{(j-1)}(a), \end{aligned} $$

which shows that the result still holds for j − 1. □

By the claim, f satisfies the equation Δf = g if and only if Δf (j)(a) = g (j)(a) for j = 0, …, r − 1. When j = r − 1, the latter condition is nothing other than condition (7.6) and hence it is satisfied. Applying Taylor’s theorem to f (j), we obtain

$$\displaystyle \begin{aligned} f^{(j)}(a+1) - f^{(j)}(a) ~=~ \sum_{k=1}^{r-j-1}\frac{1}{k!}{\,}f^{(j+k)}(a) + \int_a^{a+1} \frac{(a+1-t)^{r-j-1}}{(r-j-1)!}{\,}\varphi(t){\,}dt{\,}, \end{aligned}$$

and hence we see that the remaining r − 1 conditions are

$$\displaystyle \begin{aligned} \sum_{k=1}^{r-j-1}\frac{1}{k!}{\,}c_{j+k} ~=~ d_j,\qquad j=0,\ldots,r-2, \end{aligned}$$

where

$$\displaystyle \begin{aligned} \begin{array}{rcl} d_j & =&\displaystyle g^{(j)}(a)-\int_a^{a+1} \frac{(a+1-t)^{r-j-1}}{(r-j-1)!}{\,}\varphi(t){\,}dt,\qquad j=0,\ldots,r-2,\\ c_k & =&\displaystyle f^{(k)}(a),\qquad k=1,\ldots,r-1. \end{array} \end{aligned} $$

It is not difficult to see that these r − 1 conditions form a consistent triangular system of r − 1 linear equations in the r − 1 unknowns c 1, …, c r−1. This establishes the uniqueness of f up to an additive constant.

Let us now show that formula (7.8) holds. For k = 1, …, r − 1, we have

$$\displaystyle \begin{aligned} \sum_{j=0}^{r-k-1}\frac{B_j}{j!}{\,}d_{j+k-1} ~=~ \sum_{j=0}^{r-k-1}\frac{B_j}{j!}{\,}\sum_{i=1}^{r-j-k}\frac{1}{i!}{\,}c_{i+j+k-1}. \end{aligned}$$

Replacing i with i − j − k + 1 and then permuting the resulting sums, the latter expression reduces to

$$\displaystyle \begin{aligned} \sum_{j=0}^{r-k-1}\frac{B_j}{j!}{\,}\sum_{i=j+k}^{r-1}\frac{1}{(i-j-k+1)!}{\,}c_i ~=~ \sum_{i=k}^{r-1}\frac{c_i}{(i-k+1)!}{\,}\sum_{j=0}^{i-k}{\textstyle{{{i-k+1}\choose{j}}}}{\,}B_j{\,}, \end{aligned}$$

that is, using (6.40),

$$\displaystyle \begin{aligned} \sum_{i=k}^{r-1}\frac{c_i}{(i-k+1)!}{\,}0^{i-k} ~=~ c_k{\,}. \end{aligned}$$

This completes the proof of the theorem. □
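The identity (6.40) invoked in the last step, namely \(\sum_{j=0}^{n}\binom{n+1}{j}B_j=0^n\) with the convention \(B_1=-\frac{1}{2}\), is easy to confirm numerically. The following sketch (an illustration, not part of the text) computes the Bernoulli numbers from their defining recurrence in exact rational arithmetic, checks them against the classical values, and then checks the identity itself:

```python
from fractions import Fraction
from math import comb

def bernoulli(n_max):
    """Bernoulli numbers B_0..B_n_max (convention B_1 = -1/2), from the
    recurrence sum_{j=0}^{m-1} C(m+1, j) B_j = -(m+1) B_m for m >= 1."""
    B = [Fraction(1)]
    for m in range(1, n_max + 1):
        B.append(-sum(Fraction(comb(m + 1, j)) * B[j] for j in range(m))
                 / (m + 1))
    return B

B = bernoulli(12)
# Classical values, independent of the recurrence above.
assert B[1] == Fraction(-1, 2) and B[2] == Fraction(1, 6)
assert B[12] == Fraction(-691, 2730)
# The identity (6.40) used in the proof: sum_{j=0}^{n} C(n+1, j) B_j = 0^n.
for n in range(13):
    total = sum(Fraction(comb(n + 1, j)) * B[j] for j in range(n + 1))
    assert total == (1 if n == 0 else 0)
```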

Adding an appropriate constant to φ if necessary in Theorem 7.18, we can always assume that condition (7.6) holds. More precisely, the function \(\varphi ^{\star } = \varphi + C\), where

$$\displaystyle \begin{aligned} C ~=~ g^{(r-1)}(a)-\int_a^{a+1}\varphi(t){\,}dt, \end{aligned}$$

satisfies

$$\displaystyle \begin{aligned} \int_a^{a+1}\varphi^{\star}(t){\,}dt ~=~ g^{(r-1)}(a). \end{aligned}$$

Example 7.19

Let us see how we can apply Theorem 7.18 to somewhat generalize Example 7.17. Let \(g\in \mathcal {C}^0\), let \(G\in \mathcal {C}^1\) be defined by the equation

$$\displaystyle \begin{aligned} G(x)~=~ \int_1^xg(t){\,}dt\qquad \mbox{for }x>0,\end{aligned} $$

and let \(f\in \mathcal {C}^0\) be any solution to the equation Δf = g. To find a solution F to the equation ΔF = G such that F′ = f, we just need to apply Theorem 7.18 to the function G with r = 1 and a = 1. Defining the function

$$\displaystyle \begin{aligned} f^{\star} ~=~ f-\int_1^2f(t){\,}dt{\,},\end{aligned} $$

we then obtain that the function \(F\in \mathcal {C}^1\) defined by the equation

$$\displaystyle \begin{aligned} F(x) ~=~ \int_1^xf^{\star}(t){\,}dt ~=~ \int_1^xf(t){\,}dt-(x-1)\int_1^2f(t){\,}dt\qquad \mbox{for }x>0,\end{aligned} $$

is the unique (up to an additive constant) solution to the equation ΔF = G such that F′ = f. For similar results, see Krull [55, p. 254] and Kuczma [58, Section 2]. \(\lozenge \)
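A numerical sanity check of this construction is straightforward. The sketch below (an illustration, not from the text) takes g(x) = ln x and f = lnΓ, so that Δf = g by the functional equation of the gamma function; then \(G(x)=x\ln x-x+1\) in closed form, and a composite Simpson rule stands in for the integrals:

```python
import math

def simpson(fn, a, b, n=2000):
    """Composite Simpson rule on [a, b] (n even); also valid for a > b."""
    h = (b - a) / n
    s = fn(a) + fn(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * fn(a + i * h)
    return s * h / 3

# Illustration: g = ln and f = lgamma, so Delta f = g by the functional
# equation Gamma(x+1) = x Gamma(x); then G(x) = x ln x - x + 1.
f = math.lgamma
G = lambda x: x * math.log(x) - x + 1

c = simpson(f, 1, 2)                         # int_1^2 f(t) dt
F = lambda x: simpson(f, 1, x) - (x - 1) * c

# F solves Delta F = G (with F' = f - c), as the example asserts:
for x in (0.5, 1.0, 2.5, 7.0):
    assert abs((F(x + 1) - F(x)) - G(x)) < 1e-6
```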

The next corollary particularizes the elevator method when the function g lies in \(\mathcal {C}^r\cap \mathcal {D}^p\cap \mathcal {K}^{\max \{p,r\}}\) for some \(p\in \mathbb {N}\) and \(r\in \mathbb {N}^*\). We omit the proof, since it immediately follows from Theorem 7.5, Proposition 7.7, and Theorem 7.18.

Corollary 7.20 (The Elevator Method)

Let g lie in \(\mathcal {C}^r\cap \mathcal {D}^p\cap \mathcal {K}^{\max \{p,r\}}\) for some \(p\in \mathbb {N}\) and \(r\in \mathbb {N}^*\) . Then Σg lies in \(\mathcal {C}^r\cap \mathcal {D}^{p+1}\cap \mathcal {K}^{\max \{p,r\}}\) and we have

$$\displaystyle \begin{aligned} (\Sigma g)^{(r)}-\Sigma g^{(r)} ~=~ g^{(r-1)}(1)-\sigma[g^{(r)}]. \end{aligned}$$

(This latter value reduces to \(-\sum _{k=1}^{\infty }g^{(r)}(k)\) if r > p.) Moreover, for any a > 0, we have

$$\displaystyle \begin{aligned} \Sigma g ~=~ f_a-f_a(1), \end{aligned}$$

where \(f_a\in \mathcal {C}^r\) is defined by

$$\displaystyle \begin{aligned} f_a(x) ~=~ \sum_{k=1}^{r-1}c_k(a){\,}\frac{(x-a)^k}{k!} + \int_a^x \frac{(x-t)^{r-1}}{(r-1)!}{\,}(\Sigma g)^{(r)}(t){\,}dt \end{aligned}$$

and, for k = 1, …, r − 1,

$$\displaystyle \begin{aligned} c_k(a) ~=~ \sum_{j=0}^{r-k-1}\frac{B_j}{j!}{\,}\left(g^{(j+k-1)}(a)-\int_a^{a+1} \frac{(a+1-t)^{r-j-k}}{(r-j-k)!}{\,}(\Sigma g)^{(r)}(t){\,}dt\right). \end{aligned}$$

Corollary 7.20 has important practical value: it provides an explicit integral expression for Σg from an explicit expression for Σg (r). Setting a = 1 in this result, we simply obtain

$$\displaystyle \begin{aligned} \Sigma g(x) ~=~ \sum_{k=1}^{r-1}c_k{\,}\frac{(x-1)^k}{k!} + \int_1^x \frac{(x-t)^{r-1}}{(r-1)!}{\,}(\Sigma g)^{(r)}(t){\,}dt, \end{aligned}$$

with, for k = 1, …, r − 1,

$$\displaystyle \begin{aligned} c_k ~=~ \sum_{j=0}^{r-k-1}\frac{B_j}{j!}{\,}\left(g^{(j+k-1)}(1)-\int_1^2 \frac{(2-t)^{r-j-k}}{(r-j-k)!}{\,}(\Sigma g)^{(r)}(t){\,}dt\right). \end{aligned}$$

The following three examples illustrate the use of Corollary 7.20. In the first one, we revisit Example 7.17.

Example 7.21

The function

$$\displaystyle \begin{aligned} g(x) ~=~ \int_1^x\ln t{\,}dt \end{aligned}$$

lies in \(\mathcal {C}^{\infty }\cap \mathcal {D}^2\cap \mathcal {K}^{\infty }\). Choosing r = 1 and a = 1 in Corollary 7.20, we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} g'(x) & =&\displaystyle \ln x{\,},\\ \Sigma g'(x) & =&\displaystyle \ln\Gamma(x){\,},\\ (\Sigma g)'(x) & =&\displaystyle \textstyle{\ln\Gamma(x)+1-\frac{1}{2}\ln(2\pi)}, \end{array} \end{aligned} $$

and

$$\displaystyle \begin{aligned} \Sigma g(x) ~=~ \left(1-\frac{1}{2}\ln(2\pi)\right)(x-1)+\int_1^x\ln\Gamma(t){\,}dt. \end{aligned}$$
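These formulas can be checked numerically. The sketch below (illustrative, with a composite Simpson rule standing in for the integral of lnΓ) verifies that the stated expression satisfies Σg(1) = 0 and ΔΣg = g, where \(g(x)=\int_1^x\ln t\,dt = x\ln x - x + 1\) in closed form:

```python
import math

def simpson(fn, a, b, n=2000):
    """Composite Simpson rule on [a, b] (n even); also valid for a > b."""
    h = (b - a) / n
    s = fn(a) + fn(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * fn(a + i * h)
    return s * h / 3

g = lambda x: x * math.log(x) - x + 1        # int_1^x ln t dt, closed form
Sg = lambda x: ((1 - 0.5 * math.log(2 * math.pi)) * (x - 1)
                + simpson(math.lgamma, 1, x))

assert abs(Sg(1.0)) < 1e-12                  # Sigma g(1) = 0
for x in (0.5, 2.0, 5.5):                    # Delta(Sigma g) = g
    assert abs((Sg(x + 1) - Sg(x)) - g(x)) < 1e-6
```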

Example 7.22

The function

$$\displaystyle \begin{aligned} g(x) ~=~ \int_0^x(x-t)\ln t{\,}dt \end{aligned}$$

lies in \(\mathcal {C}^{\infty }\cap \mathcal {D}^3\cap \mathcal {K}^{\infty }\). Choosing r = 2 and a = 0 (as a limiting value) in Corollary 7.20, we get

$$\displaystyle \begin{aligned} \begin{array}{rcl} g''(x) & =&\displaystyle \ln x{\,},\\ \Sigma g''(x) & =&\displaystyle \ln\Gamma(x){\,},\\ (\Sigma g)''(x) & =&\displaystyle \textstyle{\ln\Gamma(x)-\frac{1}{2}\ln(2\pi)}, \end{array} \end{aligned} $$

and

$$\displaystyle \begin{aligned} \Sigma g(x) ~=~ -(\ln A){\,}x-\frac{1}{4}\ln(2\pi){\,}x^2+\int_0^x(x-t)\ln\Gamma(t){\,}dt, \end{aligned}$$

where A is Glaisher-Kinkelin’s constant and the integral is the polygamma function ψ −3(x). (Here we use the identity \(\psi _{-3}(1)=\ln A+\frac {1}{4}\ln (2\pi )\).)
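The identity \(\psi _{-3}(1)=\ln A+\frac {1}{4}\ln (2\pi )\) can also be confirmed numerically. In the sketch below, the logarithmic singularity of lnΓ at the origin is split off via \(\ln \Gamma (t)=\ln \Gamma (1+t)-\ln t\), using the elementary value \(\int_0^1(1-t)(-\ln t)\,dt=\frac{3}{4}\); the numerical value \(\ln A = \frac{1}{12}-\zeta'(-1)\approx 0.2487544770\) is taken as given:

```python
import math

def simpson(fn, a, b, n=2000):
    """Composite Simpson rule on [a, b] (n even)."""
    h = (b - a) / n
    s = fn(a) + fn(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * fn(a + i * h)
    return s * h / 3

# psi_{-3}(1) = int_0^1 (1 - t) ln Gamma(t) dt.  Splitting off the
# singular part with ln Gamma(t) = ln Gamma(1 + t) - ln t and using
# int_0^1 (1 - t)(-ln t) dt = 3/4 leaves a smooth integrand:
psi_m3_at_1 = 0.75 + simpson(lambda t: (1 - t) * math.lgamma(1 + t), 0, 1)

LN_A = 0.2487544770            # ln A = 1/12 - zeta'(-1), assumed value
assert abs(psi_m3_at_1 - (LN_A + 0.25 * math.log(2 * math.pi))) < 1e-6
```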

We can also investigate the asymptotic properties of Σg using our results. For instance, the generalized Stirling formula (6.21) yields the following asymptotic behavior of Σg:

$$\displaystyle \begin{aligned} &\Sigma g(x)+\frac{1}{72}{\,}(22x^3-27x^2+9x)-\frac{1}{48}{\,}x^2(8x-15)\ln x\\ &\qquad -\frac{1}{12}(x+1)^2\ln(x+1)+\frac{1}{48}(x+2)^2\ln(x+2) ~\to ~ \frac{\zeta(3)}{8\pi^2}\qquad \mbox{as }x\to\infty. \end{aligned} $$

Example 7.23

The function \(g(x)=\arctan (x)\) lies in \(\mathcal {C}^{\infty }\cap \mathcal {D}^1\cap \mathcal {K}^{\infty }\). Choosing r = 1 and a = 0 (as a limiting value) in Corollary 7.20, we get (see also Example 5.10)

$$\displaystyle \begin{aligned} \begin{array}{rcl} g'(x) & =&\displaystyle (x^2+1)^{-1} ~=~ -\Im (x+i)^{-1},\\ \Sigma g'(x) & =&\displaystyle \Im\psi(1+i)-\Im\psi(x+i),\\ (\Sigma g)'(x) & =&\displaystyle c-\Im\psi(x+i), \end{array} \end{aligned} $$

for some \(c\in \mathbb {R}\), and hence

$$\displaystyle \begin{aligned} \Sigma g(x) ~=~ c{\,}(x-1)+\Im\ln\Gamma(1+i)-\Im\ln\Gamma(x+i). \end{aligned}$$

Applying the operator Δ to both sides of this identity and then setting x = 1, we obtain \(c=\frac {\pi }{2}\). Thus, we have

$$\displaystyle \begin{aligned} \Sigma g(x) ~=~ \frac{\pi}{2}{\,}(x-1)+\Im\ln\Gamma(1+i)-\Im\ln\Gamma(x+i). \end{aligned}$$
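Since \(\Im \psi (x+i)=\sum _{k\geq 0}\frac {1}{(k+x)^2+1}\) by the series representation of the digamma function, the intermediate formula for Σg′ above can be checked numerically. The sketch below (with a truncated series and an integral tail estimate) verifies that \(\Sigma g'(x)=\Im \psi (1+i)-\Im \psi (x+i)\) satisfies ΔΣg′ = g′ and vanishes at x = 1:

```python
import math

def im_psi(x, terms=100000):
    """Im psi(x + i) = sum_{k>=0} 1/((k+x)^2 + 1), truncated, plus the
    integral tail estimate pi/2 - arctan(terms + x)."""
    s = sum(1.0 / ((k + x) ** 2 + 1.0) for k in range(terms))
    return s + (math.pi / 2 - math.atan(terms + x))

# (Sigma g')(x) = Im psi(1+i) - Im psi(x+i) must satisfy
# Delta(Sigma g')(x) = g'(x) = 1/(x^2 + 1) and vanish at x = 1.
Sg1 = lambda x: im_psi(1.0) - im_psi(x)

assert abs(Sg1(1.0)) < 1e-12
for x in (0.5, 2.0, 4.5):
    assert abs((Sg1(x + 1) - Sg1(x)) - 1.0 / (x * x + 1.0)) < 1e-6
```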

Some properties of Σg can be investigated. For instance, using Corollary 6.12 together with the identity

$$\displaystyle \begin{aligned} \int_1^x\arctan(t){\,}dt ~=~ x\arctan(x)-\frac{1}{2}\ln(x^2+1)-\frac{\pi}{4}+\frac{1}{2}\ln 2{\,}, \end{aligned}$$

we obtain the inequality

$$\displaystyle \begin{aligned} &\left|\Sigma g(x)-\left(x-\frac{1}{2}\right)\arctan(x)+\frac{1}{2}\ln(x^2+1)-1+\frac{\pi}{4}-\Im\ln\Gamma(1+i)\right|\\ &\qquad \leq ~ \frac{1}{2}\arctan\frac{1}{x^2+x+1} \end{aligned} $$

and hence the left side approaches zero as x →∞, which provides the asymptotic behavior of the function Σg for large values of its argument. \(\lozenge \)

7.4 An Alternative Uniqueness Result

The following theorem provides a uniqueness result for higher order differentiable solutions to the equation Δf = g. These solutions can be computed from their derivatives using Theorem 7.18. We first state a surprising and useful fact.

Fact 7.24

A periodic function \(\omega \colon \mathbb {R}_+\to \mathbb {R}\) is constant if and only if it lies in \(\mathcal {K}^0\) . In particular, if \(\varphi _1,\varphi _2\colon \mathbb {R}_+\to \mathbb {R}\) are two solutions to the equation Δφ = g such that φ 1 − φ 2 lies in \(\mathcal {K}^0\) , then φ 1 − φ 2 is constant.

Theorem 7.25 (Uniqueness)

Let \(r\in \mathbb {N}^*\) and \(g\in \mathcal {C}^r\) , and assume that there exists \(\varphi \in \mathcal {C}^r\) such that Δφ = g and \(\varphi ^{(r)}\in \mathcal {R}^0_{\mathbb {N}}\) . Then, the following assertions hold.

  1. (a)

    For each x > 0, the series \(\sum _{k=0}^{\infty }g^{(r)}(x+k)\) converges and we have

    $$\displaystyle \begin{aligned} \varphi^{(r)}(x) ~=~ -\sum_{k=0}^{\infty}g^{(r)}(x+k){\,}. \end{aligned}$$
  2. (b)

    For any \(f\in \mathcal {C}^r\cap \mathcal {K}^{r-1}\) such that Δf = g, we have f = c + φ for some \(c\in \mathbb {R}\).

Proof

Assertion (a) follows immediately from (3.2). Now, let \(f\in \mathcal {C}^r\cap \mathcal {K}^{r-1}\) be such that Δf = g. By Lemma 2.6(c), f (r) must lie in \(\mathcal {K}^{-1}\). Setting ω = f − φ and using (3.2) again, we then obtain

$$\displaystyle \begin{aligned} \omega^{(r)}(x) ~=~ f^{(r)}(x)-\varphi^{(r)}(x) ~=~ \lim_{n\to\infty} f^{(r)}(x+n), \end{aligned}$$

which shows that ω (r) also lies in \(\mathcal {K}^{-1}\). By Lemma 2.6(d), ω lies in \(\mathcal {K}^{r-1}\subset \mathcal {K}^0\) and, since it is 1-periodic, it must be constant by Fact 7.24. This proves assertion (b). □

Example 7.26

The assumptions of Theorem 7.25 hold if \(g(x)=\ln x\), \(\varphi (x)=\ln \Gamma (x)\), and r = 2. It then follows that all solutions to the equation Δf = g that lie in \(\mathcal {C}^2\cap \mathcal {K}^1\) are of the form \(f(x)=c+\ln \Gamma (x)\), where \(c\in \mathbb {R}\). We thus easily retrieve Bohr-Mollerup’s theorem with the additional assumption that f lies in \(\mathcal {C}^2\). It is remarkable that this latter result can be obtained here from a very elementary theorem that relies only on Lemma 2.6 and Fact 7.24. \(\lozenge \)
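As a numerical illustration of assertion (a) in this setting, one can compare a central second difference of lnΓ against the truncated series \(-\sum _{k}g''(x+k)=\sum _{k}(x+k)^{-2}\). The sketch below assumes standard double precision and adds an integral tail estimate to the truncated series:

```python
import math

def trigamma_series(x, terms=100000):
    """psi'(x) = sum_{k>=0} 1/(x+k)^2 = -sum_k g''(x+k) for g = ln,
    truncated, with the integral tail estimate 1/(terms + x)."""
    return sum(1.0 / (x + k) ** 2 for k in range(terms)) + 1.0 / (terms + x)

def second_difference(f, x, h=1e-4):
    """Central second difference approximating f''(x)."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

# Assertion (a) for phi = ln Gamma, g = ln, r = 2:
# (ln Gamma)''(x) = -sum_k g''(x + k) = sum_k 1/(x + k)^2.
for x in (0.5, 1.0, 3.25):
    assert abs(second_difference(math.lgamma, x) - trigamma_series(x)) < 1e-4
```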