1 Introduction

Series acceleration is a widely investigated problem in the literature, dating back to the times of Euler and Stirling. Many different methods to accelerate the speed of convergence of a series have been given; a few of them will be discussed in Sect. 3 when considering particular examples.

In this paper, we propose a new method close in spirit to the linear acceleration methods for alternating series developed by Cohen et al. [1], Borwein [2], and Coffey [3], among other authors. Such a method is based on the fact that many special functions and analytic constants allow for a probabilistic representation in terms of quantities of the form

$$\begin{aligned} \mathbb {E}\dfrac{g(U)}{(1+tU)^\alpha },\quad t>0,\quad \alpha >0, \end{aligned}$$
(1.1)

where \(\mathbb {E}\) stands for mathematical expectation, U is a random variable taking values in [0, 1], and g is a bounded measurable function defined on this interval.

As we will see in Theorem 2.2 in Sect. 2, the proposed method computes (1.1) by means of a sum of k terms with an explicitly bounded remainder term of the order of \((t/(t+2))^k\). From a computational point of view, it may be of interest to point out that, given a series of the form

$$\begin{aligned} \sum _{n=0}^\infty f(n)r^n,\quad 0<r<1; \end{aligned}$$

it is not only important that the geometric rate r be as small as possible, but also that the coefficients f(n) be easy to compute, since such a series can also be written as

$$\begin{aligned} \sum _{n=0}^\infty \left( f(2n)+rf(2n+1)\right) r^{2n}, \end{aligned}$$

thus improving the geometric rate from r to \(r^2\), but at the price of complicating the coefficients. Of course, this procedure can be successively applied. In view of these considerations, one of the main features of our method is that the coefficients of the main term of the approximation always involve point or tail negative binomial probabilities which, in turn, can be precomputed and stored, as done in many statistical packages. Another interesting feature is that we can give simple sufficient conditions to obtain rational approximations (see the comments following Theorem 2.2).
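
As a small illustration of this pairing device, the following Python sketch (ours, not part of the original discussion) compares plain and paired partial sums for the arbitrary choices \(f(n)=1/(n+1)\) and \(r=1/2\), for which the series sums to \(-\log (1-r)/r=2\log 2\); all names and parameter values are ours.

```python
import math

# Minimal sketch (our illustration): pairing consecutive terms of
# sum_{n>=0} f(n) r^n improves the geometric rate from r to r^2,
# at the price of the more involved coefficients f(2n) + r f(2n+1).

def partial_sum(f, r, k):
    """First k terms of sum_{n>=0} f(n) r^n."""
    return sum(f(n) * r**n for n in range(k))

def paired_partial_sum(f, r, k):
    """First k terms of sum_{n>=0} (f(2n) + r f(2n+1)) r^(2n)."""
    return sum((f(2*n) + r * f(2*n + 1)) * r**(2*n) for n in range(k))

f = lambda n: 1.0 / (n + 1)          # example coefficients (our choice)
r = 0.5
exact = -math.log(1.0 - r) / r       # sum_{n>=0} r^n/(n+1) = -log(1-r)/r
for k in (5, 10, 15):
    print(k, abs(partial_sum(f, r, k) - exact),
             abs(paired_partial_sum(f, r, k) - exact))
```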

The paper is organized as follows. In Sect. 2, we consider the negative binomial process and give a geometric bound for its tail probabilities to state our main result (Theorem 2.2), which consists of the computation of (1.1). Section 3 is devoted to applications. Specifically, we give fast computations of the arctangent function, Dirichlet functions and their nth derivatives, and the Catalan, Gompertz and Stieltjes constants. In each case, a brief comparative discussion with other methods and results in the literature is provided.

2 The Main Result

Denote by \(\mathbb {N}\) the set of positive integers and set \(\mathbb {N}_0=\mathbb {N}\cup \{0\}\). Let \(t>0\). Recall (cf. Çinlar [4, pp. 279–282]) that the negative binomial process \((X_\alpha (t))_{\alpha \ge 0}\) is a stochastic process starting at the origin, having independent stationary increments and right-continuous nondecreasing paths, and such that, for any \(\alpha >0\), the random variable \(X_\alpha (t)\) has the negative binomial distribution with parameters \(\alpha \) and success probability \(1/(t+1)\), that is

$$\begin{aligned} \begin{aligned} P(X_\alpha (t)=j)&=\left( {\begin{array}{c}\alpha -1+j\\ j\end{array}}\right) \left( \dfrac{t}{t+1}\right) ^j\dfrac{1}{(t+1)^\alpha }\\&=\left( {\begin{array}{c}-\alpha \\ j\end{array}}\right) \left( -\dfrac{t}{t+1}\right) ^j\dfrac{1}{(t+1)^\alpha },\quad j\in \mathbb {N}_0. \end{aligned} \end{aligned}$$
(2.1)

Note that, for any \(v\in \mathbb {R}\) with \(|v|<(t+1)/t\), we have

$$\begin{aligned} \mathbb {E}v^{X_\alpha (t)}=\sum _{j=0}^\infty \left( {\begin{array}{c}-\alpha \\ j\end{array}}\right) \left( -\dfrac{tv}{t+1}\right) ^j \dfrac{1}{(t+1)^\alpha }=\dfrac{1}{(1+t(1-v))^\alpha }. \end{aligned}$$
(2.2)
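
For concreteness, the probabilities (2.1) and the identity (2.2) can be checked numerically. The following Python sketch (ours) does so for arbitrary parameter values, truncating the series on the left-hand side of (2.2); the helper name nb_pmf and the truncation length are our choices.

```python
import math

def nb_pmf(alpha, t, j):
    """P(X_alpha(t) = j) as in (2.1); the generalized binomial coefficient
    binom(alpha-1+j, j) is built multiplicatively."""
    coef = 1.0
    for i in range(j):
        coef *= (alpha + i) / (i + 1)
    return coef * (t / (t + 1))**j / (t + 1)**alpha

alpha, t, v = 2.5, 1.0, 0.3                                   # arbitrary values
series = sum(nb_pmf(alpha, t, j) * v**j for j in range(200))  # truncated lhs of (2.2)
closed = 1.0 / (1.0 + t * (1.0 - v))**alpha                   # rhs of (2.2)
print(series, closed)
```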

The following estimate of the tail probabilities of \(X_\alpha (t)\) will be needed.

Lemma 2.1

Let \(\alpha , t>0\). For any \(k\in \mathbb {N}\), we have

$$\begin{aligned} P(X_\alpha (t)\ge k)\le C_\alpha (t,k)\left( \dfrac{t}{t+1}\right) ^k, \end{aligned}$$

where

$$\begin{aligned} C_\alpha (t,k)=\left\{ \begin{array}{ll} (t+1)^{1-\alpha } e^{-(1-\alpha )H_k}, &{} 0<\alpha \le 1 \\ \\ \left( \dfrac{ke^{1+\alpha /{2k}}}{\alpha (t+1)}\right) ^\alpha , &{} 1<\alpha , \end{array} \right. \end{aligned}$$
(2.3)

and \(H_k\) is the kth harmonic number.

Proof

Suppose first that \(0<\alpha \le 1\). Using the inequality \(1-x\le e^{-x}\), \(0\le x\le 1\), we have from (2.1)

$$\begin{aligned} P(X_\alpha (t)=j)\le e^{-(1-\alpha )H_j}\left( \dfrac{t}{t+1}\right) ^j \dfrac{1}{(t+1)^\alpha },\quad j\in \mathbb {N}. \end{aligned}$$

This readily implies the result in this case. Suppose now that \(\alpha >1\). Let \(\theta >0\) be such that

$$\begin{aligned} e^\theta <\dfrac{t+1}{t}. \end{aligned}$$
(2.4)

Using (2.2) and Chebyshev’s inequality (cf. Petrov [5, pp. 54–58]), we have

$$\begin{aligned} P(X_\alpha (t)\ge k)\le \dfrac{\mathbb {E}e^{\theta X_\alpha (t)}}{e^{\theta k}}=e^{-\alpha \log (1-t(e^\theta -1))-k\theta }. \end{aligned}$$
(2.5)

Choose in (2.5) the value of \(\theta \) minimizing the exponent, that is,

$$\begin{aligned} 1-t(e^\theta -1)=\dfrac{\alpha t e^\theta }{k}\quad \Leftrightarrow \quad \theta =\log \dfrac{k}{\alpha +k}+\log \dfrac{t+1}{t}. \end{aligned}$$
(2.6)

Observe that this value satisfies (2.4). We therefore have from (2.5) and (2.6)

$$\begin{aligned} \begin{aligned}&P(X_\alpha (t)\ge k)\\&\quad \le \exp \left( \alpha \log \dfrac{k}{\alpha t}-(\alpha +k)\log \left( 1-\dfrac{\alpha }{\alpha +k}\right) -(\alpha +k)\log \dfrac{t+1}{t}\right) . \end{aligned} \nonumber \\ \end{aligned}$$
(2.7)

Note that

$$\begin{aligned} \begin{aligned}&-(\alpha +k)\log \left( 1-\dfrac{\alpha }{\alpha +k}\right) =\sum _{l=1}^\infty \dfrac{1}{l}\dfrac{\alpha ^l}{(\alpha +k)^{l-1}}\\&\quad =\alpha + \alpha \sum _{j=1}^\infty \dfrac{1}{j+1}\left( \dfrac{\alpha }{\alpha +k}\right) ^j\le \alpha +\dfrac{\alpha }{2}\sum _{j=1}^\infty \left( \dfrac{\alpha }{\alpha +k}\right) ^j= \alpha \left( 1+\dfrac{\alpha }{2k}\right) . \end{aligned} \nonumber \\ \end{aligned}$$
(2.8)

We finally obtain from (2.3), (2.7), and (2.8)

$$\begin{aligned} P(X_\alpha (t) \ge k)\le e^{\alpha (1+\alpha /{2k})} \left( \dfrac{k}{\alpha t}\right) ^\alpha \left( \dfrac{t}{t+1}\right) ^{\alpha +k}=C_\alpha (t,k) \left( \dfrac{t}{t+1}\right) ^k. \end{aligned}$$

This completes the proof. \(\square \)
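
The sharpness of Lemma 2.1 is easy to inspect numerically. The following sketch (ours, with arbitrary parameter choices) compares the exact tail probabilities, obtained by summing (2.1) directly, with the bound \(C_\alpha (t,k)(t/(t+1))^k\).

```python
import math

def nb_pmf(alpha, t, j):              # point probability (2.1)
    coef = 1.0
    for i in range(j):
        coef *= (alpha + i) / (i + 1)
    return coef * (t / (t + 1))**j / (t + 1)**alpha

def C(alpha, t, k):
    """The constant C_alpha(t, k) defined in (2.3)."""
    if alpha <= 1:
        H_k = sum(1.0 / i for i in range(1, k + 1))   # kth harmonic number
        return (t + 1)**(1 - alpha) * math.exp(-(1 - alpha) * H_k)
    return (k * math.exp(1 + alpha / (2 * k)) / (alpha * (t + 1)))**alpha

def tail(alpha, t, k):
    """P(X_alpha(t) >= k) = 1 - P(X_alpha(t) <= k-1), summed directly."""
    return 1.0 - sum(nb_pmf(alpha, t, j) for j in range(k))

t = 1.0
for alpha in (0.5, 3.0):              # one case from each branch of (2.3)
    for k in (5, 10, 20):
        print(alpha, k, tail(alpha, t, k), C(alpha, t, k) * (t / (t + 1))**k)
```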

Let \(g:[0,1]\rightarrow [-1,1]\) be a measurable function. Bearing in mind the constants defined in (2.3), we state our main result.

Theorem 2.2

Let U be a random variable taking values in [0, 1] and \(\alpha , t>0\). For any \(k\in \mathbb {N}\), we have

$$\begin{aligned} \left| \mathbb {E}\dfrac{g(U)}{(1+tU)^\alpha }-M_\alpha (t,k)\right| \le C_\alpha (t/2,k)\left( \dfrac{t}{t+2}\right) ^k, \end{aligned}$$
(2.9)

where

$$\begin{aligned} \begin{aligned} M_\alpha (t,k)&=\sum _{j=0}^{k-1}P(X_\alpha (t/2)=j)\mathbb {E}g(U)(1-2U)^j\\&=\sum _{j=0}^{k-1}\left( {\begin{array}{c}-\alpha \\ j\end{array}}\right) t^jP(X_{\alpha +j}(t/2)+j\le k-1)\mathbb {E}g(U) U^j. \end{aligned} \nonumber \\ \end{aligned}$$
(2.10)

Proof

Let \(u\in [0,1]\). Replacing v by \(1-2u\) and t by t/2 in (2.2), we get

$$\begin{aligned} \dfrac{g(u)}{(1+tu)^\alpha }=g(u)\mathbb {E}(1-2u)^{X_\alpha (t/2)}. \end{aligned}$$
(2.11)

Set

$$\begin{aligned} M_k(u)=\sum _{n=0}^{k-1}P(X_\alpha (t/2)=n)g(u)(1-2u)^n,\quad k\in \mathbb {N}. \end{aligned}$$
(2.12)

Since \(\max (|g(u)|,|1-2u|)\le 1\), we have from (2.11) and (2.12)

$$\begin{aligned} \left| \dfrac{g(u)}{(1+tu)^\alpha }-M_k(u)\right| \le P(X_\alpha (t/2)\ge k)\le C_\alpha (t/2,k)\left( \dfrac{t}{t+2}\right) ^k, \end{aligned}$$
(2.13)

where the last inequality follows from Lemma 2.1. On the other hand, using the combinatorial identity

$$\begin{aligned} \left( {\begin{array}{c}n\\ j\end{array}}\right) \left( {\begin{array}{c}-\alpha \\ n\end{array}}\right) =\left( {\begin{array}{c}-\alpha \\ j\end{array}}\right) \left( {\begin{array}{c}-(\alpha +j)\\ n-j\end{array}}\right) ,\quad n\in \mathbb {N}_0,\quad j=0,1,\ldots ,n, \end{aligned}$$

we can rewrite (2.12) as

$$\begin{aligned} \dfrac{M_k(u)}{g(u)}= & {} \sum _{n=0}^{k-1}P(X_\alpha (t/2)=n)\sum _{j=0}^n \left( {\begin{array}{c}n\\ j\end{array}}\right) (-2u)^j \nonumber \\= & {} \sum _{j=0}^{k-1}\left( {\begin{array}{c}-\alpha \\ j\end{array}}\right) (-2u)^j\sum _{n=j}^{k-1} \left( {\begin{array}{c}-(\alpha +j)\\ n-j\end{array}}\right) \left( -\dfrac{t/2}{t/2+1}\right) ^n \dfrac{1}{(t/2+1)^\alpha } \nonumber \\= & {} \sum _{j=0}^{k-1}\left( {\begin{array}{c}-\alpha \\ j\end{array}}\right) (tu)^j\sum _{l=0}^{k-1-j}\left( {\begin{array}{c}-(\alpha +j)\\ l\end{array}}\right) \left( \dfrac{t/2}{t/2+1}\right) ^l\dfrac{1}{(t/2+1)^{\alpha +j}} \nonumber \\= & {} \sum _{j=0}^{k-1}\left( {\begin{array}{c}-\alpha \\ j\end{array}}\right) (tu)^jP(X_{\alpha +j}(t/2)\le k-1-j). \end{aligned}$$
(2.14)

Hence, the conclusion follows by replacing u by the random variable U in (2.12), (2.13), and (2.14), and then taking expectations. \(\square \)

With respect to Theorem 2.2, some remarks are in order. First, the negative binomial probabilities in (2.10) do not depend upon the random variable U, and therefore can be precomputed to evaluate any special function or analytic constant of form (1.1). Second, in many usual cases, we find \(g(U)=U^m\), for some \(m\in \mathbb {N}_0\). This implies that the approximants in (2.10) are rational, whenever \(\alpha \), t and the moments \(\mathbb {E}U^j\), \(j\in \mathbb {N}\), are rational, too. Third, formula (2.10) gives us two alternative ways to compute the main term of the approximation. In each particular example, we are free to choose the one providing the simplest expression.
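
To make the two expressions in (2.10) concrete, the following Python sketch (ours) evaluates both of them for the illustrative choice of U uniformly distributed on [0, 1] and \(g\equiv 1\), for which \(\mathbb {E}(1+tU)^{-\alpha }\) has an elementary closed form to check against. The helper names and parameter values are our choices.

```python
import math

def nb_pmf(alpha, t, j):              # point probability (2.1)
    coef = 1.0
    for i in range(j):
        coef *= (alpha + i) / (i + 1)
    return coef * (t / (t + 1))**j / (t + 1)**alpha

def nb_cdf(alpha, t, m):              # P(X_alpha(t) <= m)
    return sum(nb_pmf(alpha, t, j) for j in range(m + 1)) if m >= 0 else 0.0

# Moments for U uniform on [0,1] and g == 1 (our test case):
# E (1-2U)^j vanishes for odd j and equals 1/(j+1) for even j; E U^j = 1/(j+1).
mom_1m2u = lambda j: 1.0 / (j + 1) if j % 2 == 0 else 0.0
mom_u = lambda j: 1.0 / (j + 1)

def M_first(alpha, t, k):
    """First expression of M_alpha(t, k) in (2.10)."""
    return sum(nb_pmf(alpha, t / 2, j) * mom_1m2u(j) for j in range(k))

def M_second(alpha, t, k):
    """Second expression of M_alpha(t, k) in (2.10); uses
    binom(-alpha, j) = (-1)^j binom(alpha-1+j, j)."""
    total, coef = 0.0, 1.0
    for j in range(k):
        total += ((-1)**j * coef * t**j
                  * nb_cdf(alpha + j, t / 2, k - 1 - j) * mom_u(j))
        coef *= (alpha + j) / (j + 1)
    return total

alpha, t = 2.0, 1.0
exact = ((1 + t)**(1 - alpha) - 1) / (t * (1 - alpha))  # E (1+tU)^(-alpha), alpha != 1
for k in (5, 10, 20):
    print(k, M_first(alpha, t, k), M_second(alpha, t, k),
          abs(M_first(alpha, t, k) - exact), (t / (t + 2))**k)  # factor in (2.9)
```

As the proof of Theorem 2.2 shows, both expressions produce the same value; the printed errors decrease at the geometric rate \(t/(t+2)\).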

Finally, the binomial expansion

$$\begin{aligned} \mathbb {E}\dfrac{g(U)}{(1+tU)^\alpha }=\sum _{j=0}^\infty \left( {\begin{array}{c}-\alpha \\ j\end{array}}\right) t^j \mathbb {E}g(U) U^j \end{aligned}$$
(2.15)

is only a formal power series, unless \(0<t\le 1\). In many of the examples considered in the following section, we find \(t=1\). In such a case, the series in (2.15) has a poor rate of convergence. By the second expression in (2.10), if we multiply the jth term in (2.15) by the probability \(P(X_{\alpha +j}(1/2)+j\le k-1)\), \(j=0,1,\ldots ,k-1\), we obtain an approximation at the geometric rate 1/3. As follows from (2.1), such probabilities decrease from \(P(X_\alpha (1/2)\le k-1)\) to

$$\begin{aligned} P(X_{\alpha +k-1}(1/2)=0)=(2/3)^{\alpha +k-1}. \end{aligned}$$

This decreasing property follows from the general fact that the negative binomial process has nondecreasing paths, which implies that

$$\begin{aligned} X_{\alpha +j}(t)+j\le X_{\alpha + j+1}(t)+j+1,\quad \alpha , t>0,\quad j\in \mathbb {N}_0, \end{aligned}$$

and therefore

$$\begin{aligned} P(X_{\alpha +j+1}(t)+j+1\le k)\le P(X_{\alpha +j}(t)+j\le k). \end{aligned}$$

3 Applications

In this section, we apply Theorem 2.2 to obtain fast computations of various analytic functions and constants. In each case, we use the simplest expression of the approximant in (2.10).

3.1 The Arctangent Function

This function can be represented as

$$\begin{aligned} \dfrac{\arctan (t)}{t}=\mathbb {E}\dfrac{1}{1+t^2 U^2},\quad t\in \mathbb {R}, \end{aligned}$$
(3.1)

where U is a random variable uniformly distributed on [0, 1].

Corollary 3.1

Let \(t\in \mathbb {R}\). For any \(k\in \mathbb {N}\), we have

$$\begin{aligned} \left| \dfrac{\arctan (t)}{t}-2\sum _{j=0}^{k-1}A_j \dfrac{t^{2j}}{(t^2+2)^{j+1}}\right| \le \left( \dfrac{t^2}{t^2+2}\right) ^k, \end{aligned}$$

where

$$\begin{aligned} A_j=\sum _{l=0}^j \left( {\begin{array}{c}j\\ l\end{array}}\right) \dfrac{(-2)^l}{2l+1}. \end{aligned}$$

Proof

In view of (3.1), it suffices to apply Theorem 2.2 with \(\alpha =1\), \(g(x)=1\), \(0\le x\le 1\), replacing tU by \((tU)^2\), and using the first expression in (2.10). In this last respect, note that

$$\begin{aligned} \mathbb {E}(1-2U^2)^j=\sum _{l=0}^j \left( {\begin{array}{c}j\\ l\end{array}}\right) (-2)^l \mathbb {E}U^{2l}=A_j,\quad j=0,1,\ldots ,k-1. \end{aligned}$$

\(\square \)
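
For illustration, the following Python sketch (ours) evaluates the main term of Corollary 3.1 at the arbitrary choice \(t=1\), computing the coefficients \(A_j\) in exact rational arithmetic to avoid the cancellation in the alternating sum, and prints the actual error against the guaranteed geometric bound.

```python
import math
from fractions import Fraction

def A(j):
    """A_j = sum_{l<=j} binom(j,l)(-2)^l/(2l+1), computed exactly to
    avoid cancellation among the large alternating terms."""
    return sum(Fraction(math.comb(j, l) * (-2)**l, 2 * l + 1)
               for l in range(j + 1))

def main_term(t, k):
    """Main term of Corollary 3.1, approximating arctan(t)/t."""
    return 2 * sum(A(j) * t**(2 * j) / (t * t + 2)**(j + 1) for j in range(k))

t = 1.0
for k in (5, 10, 20):
    err = abs(main_term(t, k) - math.atan(t) / t)
    print(k, main_term(t, k), err, (t * t / (t * t + 2))**k)  # guaranteed bound
```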

Euler’s classical formula for the arctangent function (see Chien-Lih [6] for a short proof of it) reads as

$$\begin{aligned} \dfrac{\arctan (t)}{t}=\sum _{j=0}^\infty \dfrac{2^{2j}(j!)^2}{(2j+1)!}\, \dfrac{t^{2j}}{(t^2+1)^{j+1}},\quad t\in \mathbb {R}. \end{aligned}$$
(3.2)

Observe that the geometric rate of convergence given in Corollary 3.1 is slightly better than that in (3.2). In exchange, the coefficients of the main term of the approximation in Corollary 3.1 are slightly more involved than those in (3.2).

3.2 Dirichlet Functions

For any \(a>0\), we consider the Dirichlet function

$$\begin{aligned} \eta _a(s)=\sum _{j=0}^\infty \dfrac{(-1)^j}{(aj+1)^s},\quad s>0. \end{aligned}$$

For any \(s>0\), let \(X_s\) be a random variable having the gamma density

$$\begin{aligned} \rho _s (\theta )=\dfrac{1}{\Gamma (s)}\theta ^{s-1}e^{-\theta },\quad \theta >0. \end{aligned}$$
(3.3)

Observe that the Laplace transform of \(X_s\) is given by

$$\begin{aligned} \mathbb {E}e^{-\lambda X_s}=\dfrac{1}{(\lambda +1)^s},\quad \lambda \ge 0. \end{aligned}$$
(3.4)

We can therefore represent the Dirichlet function as

$$\begin{aligned} \eta _a(s)=\sum _{j=0}^\infty (-1)^j\mathbb {E}e^{-ajX_s}=\mathbb {E}\left( \sum _{j=0}^\infty (-e^{-aX_s})^j\right) =\mathbb {E} \dfrac{1}{1+e^{-aX_s}},\quad s>0. \nonumber \\ \end{aligned}$$
(3.5)

Since \(X_s\rightarrow 0\), a.s., as \(s\rightarrow 0\), this representation readily implies that \(\eta _a(0)=1/2\).

Corollary 3.2

Let \(a>0\) and \(s\ge 0\). For any \(k\in \mathbb {N}\), we have

$$\begin{aligned} \left| \eta _a(s)-\sum _{j=0}^{k-1}\dfrac{(-1)^j}{(aj+1)^s}P\left( X_{1+j}(1/2)+j\le k-1\right) \right| \le \dfrac{1}{3^k}. \end{aligned}$$

Proof

Starting from (3.5), it is enough to apply Theorem 2.2 with \(\alpha =t=1\), \(g(x)=1\), \(0\le x \le 1\), and \(U=e^{-aX_s}\). The result follows using the second representation of the main term in (2.10) and taking into account that \(C_1(1/2,k)=1\) and

$$\begin{aligned} \mathbb {E}U^j=\dfrac{1}{(aj+1)^s},\quad j\in \mathbb {N}_0, \end{aligned}$$

as follows from (3.4). \(\square \)

Note that Corollary 3.2 gives us a uniform approximation in \(s\ge 0\) of the Dirichlet function at the geometric rate 1/3. On the other hand, this result also provides a fast computation of the Catalan constant \(\kappa =\eta _2(2)\).
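
The following Python sketch (ours) implements Corollary 3.2 and checks it on the Catalan constant \(\kappa =\eta _2(2)\); the reference value of \(\kappa \) is included only for the comparison, and the helper names are our choices.

```python
import math

def nb_pmf(alpha, t, j):              # point probability (2.1)
    coef = 1.0
    for i in range(j):
        coef *= (alpha + i) / (i + 1)
    return coef * (t / (t + 1))**j / (t + 1)**alpha

def nb_cdf(alpha, t, m):              # P(X_alpha(t) <= m)
    return sum(nb_pmf(alpha, t, j) for j in range(m + 1)) if m >= 0 else 0.0

def eta_approx(a, s, k):
    """Main term of Corollary 3.2 for eta_a(s); the error is at most 3^(-k)."""
    return sum((-1)**j / (a * j + 1)**s * nb_cdf(1 + j, 0.5, k - 1 - j)
               for j in range(k))

CATALAN = 0.915965594177219015        # reference value of kappa = eta_2(2)
for k in (10, 20, 30):
    approx = eta_approx(2.0, 2.0, k)
    print(k, approx, abs(approx - CATALAN), 3.0**(-k))
```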

Integral representations for the Lerch transcendent function, which includes \(\eta _1(s)\) and \(\eta _2(s)\) as particular cases, were obtained by Guillera and Sondow [7]. Borwein [2] gave various efficient algorithms to compute \(\eta _1(s)\) for complex numbers s with \(\textrm{Re}(s)>c\), for some \(c\in \mathbb {R}\), at geometric rates \(1/(3+\sqrt{8})\) or 1/4, depending on the algorithm. Using the Markov–Wilf–Zeilberger method, Hessami Pilehrood and Hessami Pilehrood [8] produced fast convergent series for \(\eta _2(2n+1)\), \(\eta _2 (2n+2)\), and \(\eta _2 (2n+3)\), \(n\in \mathbb {N}_0\). In particular, these authors showed that the Catalan constant can be computed by the following convergent series at the geometric rate \(2^{-10}\):

$$\begin{aligned} \kappa =\dfrac{1}{64}\sum _{n=1}^\infty \dfrac{(-1)^{n-1}{256}^n q(n)}{\left( {\begin{array}{c}8n\\ 4n\end{array}}\right) \left( {\begin{array}{c}2n\\ n\end{array}}\right) n^3 (2n-1)(4n-1)^2(4n-3)^2}, \end{aligned}$$

q(n) being a polynomial of degree 6.

3.3 Derivatives of Dirichlet Functions

The nth derivative of \(\eta _a (s)\) is given by

$$\begin{aligned} \eta _a^{(n)}(s)=\sum _{j=0}^\infty (-1)^j \dfrac{\log ^n (aj+1)}{(aj+1)^s},\quad s>0,\quad n\in \mathbb {N}. \end{aligned}$$
(3.6)

To give a probabilistic representation of such derivatives, we will need the following two ingredients. In the first place, recall that the Stirling numbers of the second kind S(n, m) are defined by

$$\begin{aligned} x^n=\sum _{m=1}^n S(n,m) (x)_m,\quad n\in \mathbb {N},\quad x\in \mathbb {R}, \end{aligned}$$
(3.7)

where \((x)_m\) is the falling factorial, i.e., \((x)_m=x(x-1)\cdots (x-m+1)\), \(m\in \mathbb {N}\) (\((x)_0=1\)).

Lemma 3.3

Let \(0\le r<1\). For any \(n\in \mathbb {N}\), we have

$$\begin{aligned} \sum _{j=0}^{\infty }(-1)^j j^nr^j=\sum _{m=1}^n {(-1)^m}S(n,m) m! \dfrac{r^m}{(1+r)^{m+1}}. \end{aligned}$$

Proof

From (3.7), we see that

$$\begin{aligned} \begin{aligned}&\sum _{j=0}^\infty (-1)^j j^n r^j=\sum _{j=0}^\infty (-1)^jr^j\sum _{m=1}^n S(n,m)(j)_m=\sum _{m=1}^n S(n,m) \sum _{j=m}^\infty (-1)^j (j)_m r^j\\&\quad =\sum _{m=1}^n (-1)^m S(n,m) m!\dfrac{r^m}{(1-r)^{m+1}}\sum _{l=0}^\infty \left( {\begin{array}{c}m+l\\ l\end{array}}\right) (-1)^l r^l (1-r)^{m+1}\\&\quad =\sum _{m=1}^n (-1)^m S(n,m) {m!}\dfrac{r^m}{(1+r)^{m+1}}, \end{aligned} \end{aligned}$$

where the last equality follows by choosing in (2.2) \(v=-1\), \(\alpha =m+1\), and \(r=t/(t+1)\). \(\square \)
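
Lemma 3.3 is easy to check numerically. The following sketch (ours) compares a long truncation of the left-hand side with the closed form, for arbitrary choices of n and r; the Stirling numbers are generated by their standard recurrence.

```python
import math

def stirling2(n, m):
    """Stirling numbers of the second kind via the usual recurrence
    S(i,j) = j*S(i-1,j) + S(i-1,j-1)."""
    S = [[0] * (m + 1) for _ in range(n + 1)]
    S[0][0] = 1
    for i in range(1, n + 1):
        for j in range(1, min(i, m) + 1):
            S[i][j] = j * S[i - 1][j] + S[i - 1][j - 1]
    return S[n][m]

def lhs(n, r, terms=400):
    """Truncation of sum_{j>=0} (-1)^j j^n r^j."""
    return sum((-1)**j * j**n * r**j for j in range(terms))

def rhs(n, r):
    """Closed form given by Lemma 3.3."""
    return sum((-1)**m * stirling2(n, m) * math.factorial(m)
               * r**m / (1 + r)**(m + 1) for m in range(1, n + 1))

for n in (1, 2, 5):
    print(n, lhs(n, 0.4), rhs(n, 0.4))   # the two sides should agree
```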

In the second place, let U and T be two independent random variables, such that U is uniformly distributed on [0, 1] and T has the exponential density \(\rho _1(\theta )\) defined in (3.3). Denote by \((U_k)_{k\ge 1}\) and \((T_k)_{k\ge 1}\) two sequences of independent copies of U and T, respectively, both of them mutually independent. Set

$$\begin{aligned} W_n=U_1T_1+\cdots +U_nT_n,\quad n\in \mathbb {N}\quad (W_0=0). \end{aligned}$$

Since U and T are independent, we have from (3.4)

$$\begin{aligned} \mathbb {E}e^{-\lambda UT}= \mathbb {E}\dfrac{1}{1+\lambda U}=\dfrac{\log (\lambda +1)}{\lambda },\quad \lambda \ge 0, \end{aligned}$$
(3.8)

thus implying that

$$\begin{aligned} \mathbb {E}e^{-\lambda W_n}=(\mathbb {E}e^{-\lambda UT})^n=\dfrac{\log ^n (\lambda +1)}{\lambda ^n},\quad n\in \mathbb {N}_0,\quad \lambda \ge 0. \end{aligned}$$
(3.9)

Assume that \(W_n\) and \(X_s\), as defined in (3.3), are independent and define the (0, 1)-valued random variable

$$\begin{aligned} V:=V_a(n,s)=e^{-a(W_n+X_s)},\quad a,s>0,\quad n\in \mathbb {N}. \end{aligned}$$
(3.10)

The following auxiliary result provides a probabilistic representation of \(\eta _a^{(n)}(s)\).

Lemma 3.4

Let \(a,s>0\). For any \(n\in \mathbb {N}\), we have

$$\begin{aligned} \eta _a^{(n)}(s)=a^n \sum _{m=1}^n (-1)^m S(n,m) m! \mathbb {E}\dfrac{V^m}{(1+V)^{m+1}}. \end{aligned}$$

Proof

Using (3.4), (3.9), (3.10), and the independence of the random variables involved, we have

$$\begin{aligned} \begin{aligned}&\dfrac{\eta _a^{(n)}(s)}{a^n}=\sum _{j=0}^\infty (-1)^j \dfrac{\log ^n (aj+1)}{a^n}\mathbb {E}e^{-ajX_s}=\sum _{j=0}^\infty (-1)^j j^n \mathbb {E}V^j \\&\quad =\mathbb {E}\left( \sum _{j=0}^\infty (-1)^j j^n V^j\right) =\mathbb {E}\left( \sum _{m=1}^n (-1)^m S(n,m) m! \dfrac{V^m}{(1+V)^{m+1}}\right) , \end{aligned} \end{aligned}$$

where, in the last equality, we have applied Lemma 3.3 with \(r=V\). \(\square \)

As in the preceding example, by setting \(s=0\) in Lemma 3.4, we obtain a closed form expression for \(\eta _a^{(n)}(0)\). For instance, it follows from (3.10) that:

$$\begin{aligned} \eta _a' (0)=-a \mathbb {E}\dfrac{\textrm{e}^{-aUT}}{(1+\textrm{e}^{-aUT})^2}. \end{aligned}$$

We are in a position to give fast computations of \(\eta _a^{(n)}(s)\).

Corollary 3.5

Let \(a>0\), \(s\ge 0\), and \(n\in \mathbb {N}\). For any \(k\in \mathbb {N}\), we have

$$\begin{aligned} \begin{aligned}&\Biggl | \eta _a^{(n)}(s)-\sum _{m=1}^n (-1)^m S(n,m)m! \sum _{j=0}^{k-1} \left( {\begin{array}{c}-(m+1)\\ j\end{array}}\right) \dfrac{\log ^n (a(m+j)+1)}{(m+j)^n (a(m+j)+1)^s} \\&\quad \times P(X_{m+1+j}(1/2)+j\le k-1) \Biggr | \le D_a(n,k) \dfrac{1}{3^k}, \end{aligned} \end{aligned}$$

where

$$\begin{aligned} D_a(n,k)=a^n \sum _{m=1}^n S(n,m) m! \left( \dfrac{2k \textrm{e}^{1+(m+1)/2k}}{3(m+1)}\right) ^{m+1}. \end{aligned}$$
(3.11)

Proof

We apply Theorem 2.2 with \(t=1\), \(\alpha =m+1\), and \(g(x)=x^m\), and use the second expression in (2.10) for the main term of the approximation to obtain

$$\begin{aligned} \begin{aligned}&\left| \mathbb {E}\dfrac{V^m}{(1+V)^{m+1}} - \sum _{j=0}^{k-1}\left( {\begin{array}{c}-(m+1)\\ j\end{array}}\right) P \left( X_{m+1+j}(1/2)+j\le k-1\right) \mathbb {E}V^{m+j}\right| \\&\quad \le C_{m+1}(1/2,k)\dfrac{1}{3^k}=\left( \dfrac{2k \textrm{e}^{1+(m+1)/2k}}{3(m+1)}\right) ^{m+1}\dfrac{1}{3^k}, \end{aligned}\nonumber \\ \end{aligned}$$
(3.12)

where the last equality follows from (2.3). By (3.4), (3.9), and the independence of the random variables involved, we have

$$\begin{aligned} \mathbb {E}V^{m+j}=\mathbb {E}\textrm{e}^{-a(m+j)W_n}\mathbb {E}\textrm{e}^{-a(m+j)X_s}=\dfrac{\log ^n (a(m+j)+1)}{(a(m+j))^n}\, \dfrac{1}{(a(m+j)+1)^s}. \end{aligned}$$

This, together with (3.12) and Lemma 3.4, shows the result. \(\square \)

As in Corollary 3.2, Corollary 3.5 gives a uniform approximation in \(s\ge 0\) of \(\eta _a^{(n)}(s)\) at the geometric rate 1/3. For large values of k, the bound \(D_a(n,k)\) in (3.11) grows as a polynomial in k of degree \(n+1\). We finally point out that fast computations of \(\eta _a^{(n)}(s)\) at the geometric rate 1/3, with more involved coefficients in the main term of the approximation, were obtained in [9] using some differentiation formulas for the gamma process.
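
The following Python sketch (ours) implements the main term of Corollary 3.5, under the convention (3.6). For the check, we use that, with \(n=a=s=1\), the limit value is \(-\eta '(1)=-(\gamma \log 2-\log ^2 2/2)\approx -0.1598689\) in the classical notation; this reference value, like all helper names, is our addition.

```python
import math

def nb_pmf(alpha, t, j):              # point probability (2.1)
    coef = 1.0
    for i in range(j):
        coef *= (alpha + i) / (i + 1)
    return coef * (t / (t + 1))**j / (t + 1)**alpha

def nb_cdf(alpha, t, m):              # P(X_alpha(t) <= m)
    return sum(nb_pmf(alpha, t, j) for j in range(m + 1)) if m >= 0 else 0.0

def stirling2(n, m):                  # second-kind Stirling numbers
    S = [[0] * (m + 1) for _ in range(n + 1)]
    S[0][0] = 1
    for i in range(1, n + 1):
        for j in range(1, min(i, m) + 1):
            S[i][j] = j * S[i - 1][j] + S[i - 1][j - 1]
    return S[n][m]

def eta_deriv_approx(a, s, n, k):
    """Main term of Corollary 3.5 (convention (3.6) for the nth derivative
    of eta_a); uses binom(-(m+1), j) = (-1)^j binom(m+j, j)."""
    total = 0.0
    for m in range(1, n + 1):
        inner = 0.0
        for j in range(k):
            x = a * (m + j) + 1.0
            inner += ((-1)**j * math.comb(m + j, j)
                      * math.log(x)**n / ((m + j)**n * x**s)
                      * nb_cdf(m + 1 + j, 0.5, k - 1 - j))
        total += (-1)**m * stirling2(n, m) * math.factorial(m) * inner
    return total

# With n = a = s = 1 the limit is -eta'(1) = -(gamma*log 2 - (log 2)^2/2),
# about -0.1598689 (reference value; our check, classical notation).
for k in (10, 20, 40):
    print(k, eta_deriv_approx(1.0, 1.0, 1, k))
```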

3.4 The Gompertz Constant

This constant is the value of several improper integrals, such as

$$\begin{aligned} G=\int _0^\infty \log (x+1)\textrm{e}^{-x}\, \textrm{d}x=\int _0^\infty \dfrac{\textrm{e}^{-x}}{x+1}\, \textrm{d}x=1-e\int _0^\infty \dfrac{\textrm{e}^{-(x+1)}}{(x+1)^2}\, \textrm{d}x.\nonumber \\ \end{aligned}$$
(3.13)

A series representation of G is given by (cf. Mező [10])

$$\begin{aligned} G=e\sum _{n=1}^\infty \dfrac{(-1)^{n+1}}{n! n}-e\gamma =0.596347\cdots ,\nonumber \\ \end{aligned}$$
(3.14)

where \(\gamma \) is Euler’s constant. It turns out that the integral on the right-hand side in (3.13) can be written as

$$\begin{aligned} I:=\int _0^\infty \dfrac{\textrm{e}^{-(x+1)}}{(x+1)^2}\, \textrm{d}x=\sum _{i=0}^m \dfrac{1}{\lambda _i}\int _0^1 \dfrac{\textrm{e}^{-\lambda _i (x+1)}}{(x+1)^2}\, \textrm{d}x+ \dfrac{1}{\lambda _m}\int _1^\infty \dfrac{\textrm{e}^{-\lambda _m(x+1)}}{(x+1)^2}\, \textrm{d}x,\nonumber \\ \end{aligned}$$
(3.15)

where \(m\in \mathbb {N}_0\) and \(\lambda _i=2^i\), \(i\in \mathbb {N}_0\). Formula (3.15) follows by induction after noting that, with the change of variable \(x=2u+1\), we have for any \(\lambda >0\)

$$\begin{aligned} \dfrac{1}{\lambda }\int _1^\infty \dfrac{\textrm{e}^{-\lambda (x+1)}}{(x+1)^2}\, \textrm{d}x=\dfrac{1}{2\lambda }\left( \int _0^1 \dfrac{\textrm{e}^{-2\lambda (u+1)}}{(u+1)^2}\, \textrm{d}u + \int _1^\infty \dfrac{\textrm{e}^{-2\lambda (u+1)}}{(u+1)^2}\, \textrm{d}u\right) . \end{aligned}$$

In view of (3.15), denote

$$\begin{aligned} I_\lambda =\int _0^1 \dfrac{\textrm{e}^{-\lambda x}}{(x+1)^2}\, \textrm{d}x,\quad \lambda >0. \end{aligned}$$
(3.16)

The Poisson–Gamma relation (cf. Johnson et al. [11, p. 164]) states that

$$\begin{aligned} q_j(\lambda ):= 1- \sum _{l=0}^j \dfrac{\lambda ^l\textrm{e}^{-\lambda }}{l!}=\int _0^\lambda \dfrac{x^j\textrm{e}^{-x}}{j!}\, \textrm{d}x,\quad \lambda >0,\quad j\in \mathbb {N}_0. \end{aligned}$$
(3.17)

The following auxiliary result gives a fast computation of \(I_\lambda \).

Lemma 3.6

Let \(\lambda >0\). For any \(k\in \mathbb {N}\), we have

$$\begin{aligned} \left| I_\lambda - \sum _{j=0}^{k-1} (-1)^j\dfrac{(j+1)!}{\lambda ^{j+1}}q_j(\lambda )P\left( X_{2+j}(1/2)+j\le k-1\right) \right| \le \dfrac{\textrm{e}^4}{9\lambda }\, \dfrac{k^2}{3^k}. \end{aligned}$$

Proof

Let \(T_\lambda \) be a random variable having the truncated exponential density

$$\begin{aligned} \rho _\lambda (\theta )=\dfrac{\lambda \textrm{e}^{-\lambda \theta }}{1-\textrm{e}^{-\lambda }},\quad 0\le \theta \le 1. \end{aligned}$$
(3.18)

By (3.16), we see that

$$\begin{aligned} I_\lambda =\dfrac{1-\textrm{e}^{-\lambda }}{\lambda }\int _0^1 \dfrac{1}{(1+\theta )^2}\rho _\lambda (\theta )\, \textrm{d}\theta =\dfrac{1-\textrm{e}^{-\lambda }}{\lambda }\mathbb {E}\dfrac{1}{(1+T_\lambda )^2}. \end{aligned}$$
(3.19)

On the other hand, by (3.17) and (3.18), we have for any \(j\in \mathbb {N}_0\)

$$\begin{aligned} \begin{aligned} \mathbb {E}T_\lambda ^j&=\dfrac{\lambda }{1-\textrm{e}^{-\lambda }}\int _0^1 \theta ^j \textrm{e}^{-\lambda \theta }\, \textrm{d}\theta = \dfrac{j!}{(1-\textrm{e}^{-\lambda })\lambda ^j}\int _0^\lambda \dfrac{x^j\textrm{e}^{-x}}{j!}\, \textrm{d}x\\&=\dfrac{j!}{(1-\textrm{e}^{-\lambda })\lambda ^j} q_j(\lambda ). \end{aligned} \end{aligned}$$
(3.20)

Finally, applying Theorem 2.2 with \(t=1\), \(\alpha =2\), and \(g(x)=1\), \(0\le x\le 1\), and using the second expression in (2.10), we get

$$\begin{aligned} \begin{aligned}&\left| \mathbb {E}\dfrac{1}{(1+T_\lambda )^2}-\sum _{j=0}^{k-1}\left( {\begin{array}{c}-2\\ j\end{array}}\right) P \left( X_{2+j}(1/2)+j \le k-1 \right) \mathbb {E}T_\lambda ^j\right| \\&\quad \le C_2(1/2,k) \dfrac{1}{3^k}\le \dfrac{\textrm{e}^4}{9}\, \dfrac{k^2}{3^k}, \end{aligned} \end{aligned}$$
(3.21)

where the last inequality follows from (2.3). The conclusion follows from (3.19)–(3.21) and some simple computations. \(\square \)

In the following result, we give a fast computation of I, as defined in (3.15). By (3.13), this is tantamount to computing G in a fast way.

Corollary 3.7

Let \(q_j(\lambda )\) be as in (3.17) and \(\lambda _j=2^j\), \(j\in \mathbb {N}_0\). For any \(k,m\in \mathbb {N}\), such that \(2^{m+1}\ge k\log 3-2\log k\), we have

$$\begin{aligned} \begin{aligned}&\left| I-\sum _{j=0}^{k-1}(-1)^j (j+1)! P\left( X_{2+j}(1/2)+j\le k-1 \right) \sum _{i=0}^m \dfrac{1}{\lambda _i^{j+2}\textrm{e}^{\lambda _i}}q_j(\lambda _i)\right| \\&\quad \le \left( \dfrac{\textrm{e}^4}{9}\sum _{i=0}^m \dfrac{1}{\lambda _i^2\textrm{e}^{\lambda _i}}+ \dfrac{1}{2^{m+1}}\right) \dfrac{k^2}{3^k}. \end{aligned}\nonumber \\ \end{aligned}$$
(3.22)

Proof

From (3.15) and (3.16), we have

$$\begin{aligned} \left| I-\sum _{i=0}^m \dfrac{1}{\lambda _i \textrm{e}^{\lambda _i}}I_{\lambda _i}\right| \le \dfrac{\textrm{e}^{-2\lambda _m}}{2\lambda _m}=\dfrac{1}{2^{m+1}\exp (2^{m+1})}. \end{aligned}$$
(3.23)

Applying Lemma 3.6 to each one of the terms \(I_{\lambda _i}\), we see from (3.23) that the left-hand side in (3.22) is bounded above by

$$\begin{aligned} \dfrac{\textrm{e}^4}{9}\dfrac{k^2}{3^k}\sum _{i=0}^m \dfrac{1}{\lambda _i^2 \textrm{e}^{\lambda _i}} + \dfrac{1}{2^{m+1}\exp (2^{m+1})}\le \left( \dfrac{\textrm{e}^4}{9}\sum _{i=0}^m \dfrac{1}{\lambda _i^2\textrm{e}^{\lambda _i}}+\dfrac{1}{2^{m+1}}\right) \dfrac{k^2}{3^k}, \end{aligned}$$

since, by assumption, \(\exp (2^{m+1})\ge 3^k/k^2\). This, together with (3.23), shows (3.22) and completes the proof. \(\square \)
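
Putting Lemma 3.6 and Corollary 3.7 together, the Gompertz constant can be computed via (3.13). The following Python sketch (ours) does so, evaluating \(q_j(\lambda )\) as an upper Poisson tail for numerical stability and choosing m from a slightly stronger version of the hypothesis of Corollary 3.7; the names and the choice of k are ours.

```python
import math

def nb_pmf(alpha, t, j):              # point probability (2.1)
    coef = 1.0
    for i in range(j):
        coef *= (alpha + i) / (i + 1)
    return coef * (t / (t + 1))**j / (t + 1)**alpha

def nb_cdf(alpha, t, m):              # P(X_alpha(t) <= m)
    return sum(nb_pmf(alpha, t, j) for j in range(m + 1)) if m >= 0 else 0.0

def q(j, lam, terms=300):
    """q_j(lambda) in (3.17), written as the upper Poisson tail
    sum_{l>j} lambda^l e^(-lambda)/l! and accumulated term by term,
    which avoids the cancellation in 1 - sum_{l<=j}."""
    term = math.exp(-lam)
    for l in range(1, j + 2):
        term *= lam / l               # term = lambda^(j+1) e^(-lambda)/(j+1)!
    total = 0.0
    for l in range(j + 1, j + 1 + terms):
        total += term
        term *= lam / (l + 1)
    return total

def I_approx(k, m):
    """Main term of Corollary 3.7, with lambda_i = 2^i."""
    lams = [2.0**i for i in range(m + 1)]
    total = 0.0
    for j in range(k):
        inner = sum(q(j, L) / (L**(j + 2) * math.exp(L)) for L in lams)
        total += ((-1)**j * math.factorial(j + 1)
                  * nb_cdf(2 + j, 0.5, k - 1 - j) * inner)
    return total

k = 25
m = max(0, math.ceil(math.log2(k * math.log(3.0))) - 1)  # ensures 2^(m+1) >= k log 3
G = 1.0 - math.e * I_approx(k, m)     # Gompertz constant via (3.13)
print(G)                              # reference value: 0.596347...
```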

Note that the number m of terms on the left-hand side in (3.22) is of the order of \(\log k\), as \(k\rightarrow \infty \). We point out that Mező [10] proved the identity

$$\begin{aligned} G=\sum _{n=0}^\infty \dfrac{\log (n+1)}{n!}-\sum _{n=0}^\infty C_{n+1}\left\{ en!\right\} -\dfrac{1}{2}, \end{aligned}$$

where \(C_n\) is the nth Gregory coefficient and \(\{x\}\) stands for the fractional part of x.

Using (3.14), we can obtain from Corollary 3.7 a fast, but not rational, approximation of Euler’s constant \(\gamma \). Rational approximations of \(\gamma \) are available in the literature. For instance, Hessami Pilehrood and Hessami Pilehrood [12] provided a continued fraction expansion converging subexponentially to \(\gamma \) and Prévost and Rivoal [13] used Padé approximation methods to obtain sequences approximating \(\gamma \) at a geometric rate (see also [14] for a fast rational approximation to \(\gamma \) using the standard Poisson process).

3.5 Stieltjes Constants

These constants are the coefficients \((\gamma _m)_{m\ge 0}\) in the Laurent expansion of the Riemann zeta function \(\zeta (z)\) about its simple pole at \(z=1\), that is

$$\begin{aligned} (z-1)\zeta (z)=1+\sum _{m=0}^\infty \dfrac{(-1)^m\gamma _m}{m!}(z-1)^{m+1},\quad z\in \mathbb {C}, \end{aligned}$$

\(\gamma _0=\gamma \) being Euler’s constant. Nan Yue and Williams [15] (see also Coffey [3]) showed that each \(\gamma _m\) can be written as a finite sum

$$\begin{aligned} \gamma _m=-\dfrac{1}{m+1}\sum _{n=0}^{m+1} \left( {\begin{array}{c}m+1\\ n\end{array}}\right) B_{m+1-n} (\log 2)^{m-n}\eta _1^{(n)}(1),\quad m\in \mathbb {N}_0, \end{aligned}$$
(3.24)

where \(B_n\) is the nth Bernoulli number and \(\eta _1^{(n)}(1)\) is the nth derivative of the Dirichlet function \(\eta _1(s)\) at \(s=1\), as defined in (3.6).

In view of (3.24), to evaluate \(\gamma _m\), it suffices to compute \(\eta _1^{(n)}(1)\), \(n=0,1,\ldots ,m+1\). This is done in Corollaries 3.2 and 3.5.
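
As a sketch (ours) of this procedure, the following code assembles (3.24) from the approximants eta_approx and eta_deriv_approx defined in the sketches of Sects. 3.2 and 3.3 (which must be run first), generating the Bernoulli numbers (convention \(B_1=-1/2\)) by the standard recurrence; the reference values in the comments are our additions, for checking only.

```python
import math
from fractions import Fraction

def bernoulli(n):
    """B_0, ..., B_n (convention B_1 = -1/2) via the standard recurrence
    sum_{j<=i} binom(i+1, j) B_j = 0, in exact rational arithmetic."""
    B = [Fraction(1)] + [Fraction(0)] * n
    for i in range(1, n + 1):
        B[i] = -sum(Fraction(math.comb(i + 1, j)) * B[j]
                    for j in range(i)) / (i + 1)
    return [float(b) for b in B]

def stieltjes(m, k=40):
    """gamma_m via (3.24), with eta_1^{(n)}(1) evaluated through eta_approx
    (n = 0) and eta_deriv_approx (n >= 1) from the previous sketches."""
    B = bernoulli(m + 1)
    ln2 = math.log(2.0)
    total = 0.0
    for n in range(m + 2):            # the sum in (3.24) runs up to m + 1
        eta_n = (eta_approx(1.0, 1.0, k) if n == 0
                 else eta_deriv_approx(1.0, 1.0, n, k))
        total += math.comb(m + 1, n) * B[m + 1 - n] * ln2**(m - n) * eta_n
    return -total / (m + 1)

print(stieltjes(0))   # Euler's constant, reference 0.5772156649...
print(stieltjes(1))   # gamma_1, reference -0.0728158454...
```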

We finally mention that computations of generalized Stieltjes constants at a geometric rate of convergence have been obtained by using different methods. For instance, Johansson [16] gave efficient algorithms based on the Euler–Maclaurin summation formula, Johansson and Blagouchine [17] used complex integration, and Prévost and Rivoal [18] presented approximating sequences obtained by generalizing the remainder Padé approximants method. See also [19] for computations based on a differential calculus for the gamma process.