Series Acceleration via Negative Binomial Probabilities

Many special functions and analytic constants allow for a probabilistic representation in terms of inverse moments of [0, 1]-valued random variables. Under this assumption, we give fast computations of them with an explicit upper bound for the remainder term. One of the main features of the method is that the coefficients of the main term of the approximation always contain negative binomial probabilities which, in turn, can be precomputed and stored. Applications to the arctangent function, Dirichlet functions and their nth derivatives, and the Catalan, Gompertz, and Stieltjes constants are provided.


Introduction
Series acceleration is a widely investigated problem in the literature, dating back to the times of Euler and Stirling. Many different methods to accelerate the speed of convergence of a series have been proposed; a few of them will be discussed in Sect. 3 when considering particular examples.
In this paper, we propose a new method close in spirit to the linear acceleration methods for alternating series developed by Cohen et al. [1], Borwein [2], and Coffey [3], among other authors. Such a method is based on the fact that many special functions and analytic constants allow for a probabilistic representation in terms of quantities of the form
$$E\,\frac{g(U)}{(1+tU)^{\alpha}},\quad t>0,\ \alpha>0, \qquad (1.1)$$
where $E$ stands for mathematical expectation, $U$ is a random variable taking values in $[0,1]$, and $g$ is a bounded measurable function defined on this interval.
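For concreteness, the acceleration scheme of Cohen et al. [1] for an alternating series $\sum_{k\ge 0}(-1)^k a_k$ can be sketched in a few lines. This is a minimal Python sketch of the published algorithm from [1]; the function name and the test series are our own choices:

```python
import math

def cvz_accelerate(a, n):
    """Accelerate the alternating series sum_{k>=0} (-1)**k * a(k)
    using the scheme of Cohen et al. [1]. With n evaluation points,
    the error decreases roughly like (3 + sqrt(8))**(-n)."""
    d = (3.0 + math.sqrt(8.0)) ** n
    d = (d + 1.0 / d) / 2.0
    b, c, s = -1.0, -d, 0.0
    for k in range(n):
        c = b - c
        s += c * a(k)
        # Update the Chebyshev-type weight for the next term.
        b *= (k + n) * (k - n) / ((k + 0.5) * (k + 1.0))
    return s / d

# Example: log 2 = 1 - 1/2 + 1/3 - 1/4 + ...
approx = cvz_accelerate(lambda k: 1.0 / (k + 1), 20)
```

With only 20 terms of a series whose plain partial sums converge like $1/k$, the accelerated value already agrees with $\log 2$ to roughly machine precision.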
The author is supported by Research Projects DGA (E48 23R).

MJOM
As we will see in Theorem 2.2 in Sect. 2, the proposed method computes (1.1) by means of a sum of $k$ terms with an explicitly bounded remainder term of the order of $(t/(t+2))^{k}$. From a computational point of view, it may be of interest to point out that, given a series of the form $\sum_{n=0}^{\infty}f(n)r^{n}$, $0<r<1$, it is not only important that the geometric rate $r$ be as small as possible, but also that the coefficients $f(n)$ be easy to compute. Indeed, such a series can also be written as $\sum_{n=0}^{\infty}\big(f(2n)+rf(2n+1)\big)r^{2n}$, thus improving the geometric rate from $r$ to $r^{2}$, but at the price of complicating the coefficients. Of course, this procedure can be applied successively. In this regard, one of the main features of our method is that the coefficients of the main term of the approximation always involve point or tail negative binomial probabilities which, in turn, can be precomputed and stored, as is done in many statistical packages. Another interesting feature is that we can give simple sufficient conditions to obtain rational approximations (see the comments following Theorem 2.2).
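The rate-squaring device just described is easy to check numerically. The sketch below (function names ours) compares partial sums of $\sum f(n)r^{n}$ with the paired series $\sum\big(f(2n)+rf(2n+1)\big)r^{2n}$, which converge to the same limit:

```python
import math

def partial_sum(f, r, terms):
    """Partial sum of sum_{n>=0} f(n) * r**n."""
    return sum(f(n) * r**n for n in range(terms))

def paired_partial_sum(f, r, terms):
    """Same series with consecutive terms paired:
    sum_{n>=0} (f(2n) + r*f(2n+1)) * r**(2n),
    converging at the squared geometric rate r**2."""
    return sum((f(2 * n) + r * f(2 * n + 1)) * r**(2 * n) for n in range(terms))

# Example: sum_{n>=0} r**n/(n+1) = -log(1-r)/r; for r = 1/2 this equals 2*log(2).
f = lambda n: 1.0 / (n + 1)
exact = 2.0 * math.log(2.0)
plain = partial_sum(f, 0.5, 10)          # error of order r**10
paired = paired_partial_sum(f, 0.5, 10)  # error of order r**20
```

With the same number of summed terms, the paired version is accurate to about $r^{20}$ instead of $r^{10}$, at the cost of two coefficient evaluations per term.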
The paper is organized as follows. In Sect. 2, we consider the negative binomial process and give a geometric bound for its tail probabilities in order to state our main result (Theorem 2.2), which consists of the computation of (1.1). Section 3 is devoted to applications. Specifically, we give fast computations of the arctangent function, Dirichlet functions and their nth derivatives, and the Catalan, Gompertz, and Stieltjes constants. In each case, a brief comparative discussion with other methods and results in the literature is provided.

The Main Result
Denote by $\mathbb{N}$ the set of positive integers and by $\mathbb{N}_0=\mathbb{N}\cup\{0\}$. Let $t>0$. Recall (cf. Çinlar [4, pp. 279–282]) that the negative binomial process $(X_\alpha(t))_{\alpha\ge 0}$ is a stochastic process starting at the origin, having independent stationary increments and right-continuous nondecreasing paths, and such that, for any $\alpha>0$, the random variable $X_\alpha(t)$ has the negative binomial distribution with parameters $\alpha$ and success probability $1/(t+1)$, that is,
$$P(X_\alpha(t)=j)=\binom{\alpha+j-1}{j}\left(\frac{t}{t+1}\right)^{j}\left(\frac{1}{t+1}\right)^{\alpha},\quad j\in\mathbb{N}_0. \qquad (2.1)$$
Note that, for any $v\in\mathbb{R}$ with $|v|<(t+1)/t$, writing $r=t/(t+1)$, we have
$$\sum_{j=0}^{\infty}\binom{\alpha+j-1}{j}(rv)^{j}=(1-rv)^{-\alpha}. \qquad (2.2)$$
The following estimate of the tail probabilities of $X_\alpha(t)$ will be needed.
Lemma 2.1. Let $\alpha, t > 0$. For any $k \in \mathbb{N}$, we have a geometric bound for the tail probability $P(X_\alpha(t)\ge k)$, in which $H_k$ is the $k$th harmonic number.
With respect to Theorem 2.2, some remarks are in order. First, the negative binomial probabilities in (2.10) do not depend upon the random variable $U$, and can therefore be precomputed to evaluate any special function or analytic constant of the form (1.1). Second, in many usual cases, we have $g(U)=U^m$ for some $m\in\mathbb{N}_0$. This implies that the approximants in (2.10) are rational whenever $\alpha$, $t$, and the moments $EU^j$, $j\in\mathbb{N}$, are rational, too. Third, formula (2.10) gives us two alternative ways to compute the main term of the approximation. In each particular example, we are free to choose the one providing the simplest expression.
Finally, the binomial expansion in (2.15) is only a formal power series unless $0<t\le 1$. In many of the examples considered in the following section, we have $t=1$. In such a case, the series in (2.15) has a poor rate of convergence. By the second expression in (2.10), if we multiply the $j$th term in (2.15) by the probability $P(X_{\alpha+j}(1/2)+j\le k-1)$, $j=0,1,\ldots,k-1$, we obtain an approximation at the geometric rate $1/3$. As follows from (2.1), such probabilities decrease in $j$. This decreasing property follows from the general fact that the negative binomial process has nondecreasing paths, which implies that $X_{\alpha+j+1}(1/2)+j+1\ge X_{\alpha+j}(1/2)+j$, and therefore $P(X_{\alpha+j+1}(1/2)+j+1\le k-1)\le P(X_{\alpha+j}(1/2)+j\le k-1)$.
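As for the precomputation mentioned in the first remark above, point and cumulative negative binomial probabilities can be tabulated once by a stable recurrence. The following is a minimal sketch (function names ours), using the parametrization with success probability $1/(t+1)$ stated in (2.1):

```python
def negative_binomial_pmf_table(alpha, t, kmax):
    """Return [P(X_alpha(t) = j) for j = 0..kmax], where
    P(X_alpha(t) = j) = C(alpha+j-1, j) * (t/(t+1))**j * (1/(t+1))**alpha,
    computed via the recurrence P(j+1) = P(j) * (alpha+j)/(j+1) * t/(t+1)."""
    q = t / (t + 1.0)
    pmf = [(1.0 / (t + 1.0)) ** alpha]
    for j in range(kmax):
        pmf.append(pmf[-1] * (alpha + j) / (j + 1.0) * q)
    return pmf

def cumulative(pmf):
    """Cumulative probabilities P(X <= j), stored alongside the pmf."""
    out, acc = [], 0.0
    for p in pmf:
        acc += p
        out.append(acc)
    return out

# With alpha = 1 and t = 1 the distribution is geometric: P(X = j) = 2**-(j+1).
table = negative_binomial_pmf_table(1.0, 1.0, 10)
cdf = cumulative(table)
```

Such tables are exactly what a statistical package stores: they are independent of the random variable $U$ and can be reused for every application of (1.1).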

Applications
In this section, we apply Theorem 2.2 to obtain fast computations of various analytic functions and constants. In each case, we use the simplest expression of the approximant in (2.10).

The Arctangent Function
This function can be represented as
$$\arctan t = E\,\frac{t}{1+(tU)^{2}}=\int_{0}^{1}\frac{t}{1+t^{2}u^{2}}\,du,\quad t\in\mathbb{R}, \qquad (3.1)$$
where $U$ is a random variable uniformly distributed on $[0,1]$.
Proof. In view of (3.1), it suffices to apply Theorem 2.2 with $\alpha=1$ and $g(x)=1$, $0\le x\le 1$, replacing $tU$ by $(tU)^2$, and using the first expression in (2.10). In this last respect, note that Euler's classical formula for the arctangent function (see Chien-Lih [6] for a short proof of it) reads as
$$\arctan t=\sum_{n=0}^{\infty}\frac{2^{2n}(n!)^{2}}{(2n+1)!}\,\frac{t^{2n+1}}{(1+t^{2})^{n+1}}. \qquad (3.2)$$
Observe that the geometric rate of convergence given in Corollary 3.1 is slightly better than that in (3.2). As a counterpart, the coefficients of the main term of the approximation in Corollary 3.1 are slightly more involved than those in (3.2).
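Euler's series (3.2) itself is straightforward to evaluate through a term-by-term recurrence; the sketch below (ours) does so and compares against the built-in arctangent. The ratio of consecutive terms is $\frac{2(n+1)}{2n+3}\cdot\frac{t^{2}}{1+t^{2}}$, so the geometric rate is $t^{2}/(1+t^{2})$:

```python
import math

def euler_arctan(t, terms=60):
    """Evaluate Euler's series
    arctan t = sum_{n>=0} (2**(2n) (n!)**2 / (2n+1)!) * t**(2n+1) / (1+t**2)**(n+1)
    via the term ratio (2(n+1)/(2n+3)) * t**2/(1+t**2)."""
    x = t * t / (1.0 + t * t)  # geometric rate of the series
    term = t / (1.0 + t * t)   # n = 0 term
    s = term
    for n in range(terms - 1):
        term *= 2.0 * (n + 1) / (2.0 * n + 3.0) * x
        s += term
    return s

# At t = 1 the rate is 1/2, so 60 terms already reach machine precision.
approx = euler_arctan(1.0)
```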

Dirichlet Functions
For any $s>0$, let $X_s$ be a random variable having the gamma density
$$\rho_{s}(\theta)=\frac{\theta^{s-1}e^{-\theta}}{\Gamma(s)},\quad \theta>0. \qquad (3.3)$$
Observe that the Laplace transform of $X_s$ is given by
$$E\,e^{-\lambda X_{s}}=\frac{1}{(1+\lambda)^{s}},\quad \lambda\ge 0. \qquad (3.4)$$
We can therefore represent the Dirichlet function as
$$\eta_{a}(s)=\sum_{j=0}^{\infty}\frac{(-1)^{j}}{(aj+1)^{s}}=E\,\frac{1}{1+e^{-aX_{s}}}. \qquad (3.5)$$
Since $X_s\to 0$, a.s., as $s\to 0$, this representation readily implies that $\eta_a(0)=1/2$.
Corollary 3.2. Let $a>0$ and $s\ge 0$. For any $k\in\mathbb{N}$, we have the stated approximation.
Proof. Starting from (3.5), it is enough to apply Theorem 2.2 with $\alpha=t=1$, $g(x)=1$, $0\le x\le 1$, and $U=e^{-aX_s}$. The result follows using the second representation of the main term in (2.10) and taking into account that $C_1(1/2,k)=1$ and
$$EU^{j}=\frac{1}{(aj+1)^{s}},\quad j\in\mathbb{N}_{0},$$
as follows from (3.4).
Note that Corollary 3.2 gives us a uniform approximation in $s\ge 0$ of the Dirichlet function at the geometric rate $1/3$. On the other hand, this result also provides a fast computation of the Catalan constant $\kappa=\eta_2(2)$.
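For reference, the Catalan constant can be checked directly from the defining alternating series of $\eta_2(2)$. This plain partial-sum sketch (ours) converges far more slowly than the rate-$1/3$ method of Corollary 3.2, which is precisely the point of the acceleration:

```python
def eta(a, s, terms):
    """Partial sum of the Dirichlet-type function
    eta_a(s) = sum_{j>=0} (-1)**j / (a*j + 1)**s."""
    return sum((-1) ** j / (a * j + 1.0) ** s for j in range(terms))

# Catalan's constant: kappa = eta_2(2) = 1 - 1/3**2 + 1/5**2 - ...
# For an alternating series the error is below the first omitted term,
# here about 1/(2*20001 + 1)**2, so ~6e-10 after 20001 terms.
catalan_approx = eta(2, 2, 20001)
```

By contrast, $k$ terms of the accelerated scheme would give an error of order $(1/3)^k$, i.e. ten-digit accuracy with around twenty terms rather than twenty thousand.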

Derivatives of Dirichlet Functions
The $n$th derivative of $\eta_a(s)$ is given by
$$\eta_{a}^{(n)}(s)=(-1)^{n}\sum_{j=0}^{\infty}\frac{(-1)^{j}\log^{n}(aj+1)}{(aj+1)^{s}},\quad s>0,\ n\in\mathbb{N}. \qquad (3.6)$$
To give a probabilistic representation of such derivatives, we will need the following two ingredients. In the first place, recall that the Stirling numbers of the second kind $S(n,m)$ are defined by
$$x^{n}=\sum_{m=0}^{n}S(n,m)(x)_{m}, \qquad (3.7)$$
where $(x)_m=x(x-1)\cdots(x-m+1)$ is the falling factorial.
Proof. From (3.7) and choosing in (2.2) $v=-1$, $\alpha=m+1$, and $r=t/(t+1)$, the result follows.
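The Stirling numbers of the second kind satisfy the standard recurrence $S(n,m)=m\,S(n-1,m)+S(n-1,m-1)$; the following sketch (ours) computes them and verifies the defining identity (3.7) at sample integer values:

```python
def stirling2(n):
    """Table of Stirling numbers of the second kind S(i, m), 0 <= m <= i <= n,
    via S(i, m) = m*S(i-1, m) + S(i-1, m-1), with S(0, 0) = 1."""
    S = [[0] * (n + 1) for _ in range(n + 1)]
    S[0][0] = 1
    for i in range(1, n + 1):
        for m in range(1, i + 1):
            S[i][m] = m * S[i - 1][m] + S[i - 1][m - 1]
    return S

def falling_factorial(x, m):
    """(x)_m = x(x-1)...(x-m+1)."""
    out = 1
    for i in range(m):
        out *= x - i
    return out

# Verify x**n = sum_m S(n, m) * (x)_m at x = 5, n = 6 (exact integer identity).
S = stirling2(6)
check = sum(S[6][m] * falling_factorial(5, m) for m in range(7))
```

Since both sides of (3.7) are integers here, the check is exact rather than approximate.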
In the second place, let $U$ and $T$ be two independent random variables such that $U$ is uniformly distributed on $[0,1]$ and $T$ has the exponential density $\rho_1(\theta)$ defined in (3.3). Denote by $(U_k)_{k\ge 1}$ and $(T_k)_{k\ge 1}$ two sequences of independent copies of $U$ and $T$, respectively, both of them mutually independent, and let $W_n$ be defined from these sequences. Since $U$ and $T$ are independent, we have from (3.4) the corresponding moment identity, thus determining the law of $W_n$. Assume that $W_n$ and $X_s$, as defined in (3.3), are independent, and define the $(0,1)$-valued random variable $V$. The following auxiliary result provides a probabilistic representation of $\eta_a^{(n)}(s)$.
Proof. Using (3.4), (3.9), (3.10), and the independence of the random variables involved, we obtain the representation in Lemma 3.4, where, in the last equality, we have applied Lemma 3.3 with $r=V$.
As in the preceding example, by setting $s=0$ in Lemma 3.4, we obtain a closed form expression for $\eta_a^{(n)}(0)$, as follows from (3.10). We are now in a position to give fast computations of $\eta_a^{(n)}(s)$.
Corollary 3.5. Let $a>0$, $s\ge 0$, and $n\in\mathbb{N}$. For any $k\in\mathbb{N}$, we have the stated approximation with constant $D_a(n,k)$.
Proof. We apply Theorem 2.2 with $t=1$, $\alpha=m+1$, and $g(x)=x^m$, and use the second expression in (2.10) for the main term of the approximation, where the last equality follows from (2.3). By (3.4), (3.9), and the independence of the random variables involved, this, together with (3.12) and Lemma 3.4, shows the result.
As in Corollary 3.2, Corollary 3.5 gives a uniform approximation in $s\ge 0$ of $\eta_a^{(n)}(s)$ at the geometric rate $1/3$. For large values of $k$, the bound $D_a(n,k)$ in (3.11) grows as a polynomial in $k$ of degree $n+1$. We finally point out that fast computations of $\eta_a^{(n)}(s)$ at the geometric rate $1/3$, with more involved coefficients in the main term of the approximation, were obtained in [9] using some differentiation formulas for the gamma process.

The Gompertz Constant
This constant is the value of several improper integrals, such as
$$G=\int_{0}^{\infty}\frac{e^{-x}}{1+x}\,dx. \qquad (3.13)$$
A series representation of $G$ involving Euler's constant $\gamma$ is given in (3.14) (cf. Mező [10]). It turns out that the integral on the right-hand side in (3.13) can be written as the quantity $I$ in (3.15), a combination of integrals with parameters $m\in\mathbb{N}_0$ and $\lambda_i=2^i$, $i\in\mathbb{N}_0$. Formula (3.15) follows by induction after noting that, with the change of variables $x=2u+1$, we have an analogous identity for any $\lambda>0$. In view of (3.15), denote by $I_\lambda$ the integrals defined in (3.16). The Poisson–Gamma relation (cf. Johnson et al. [11, p. 164]) states that
$$q_{j}(\lambda):=\sum_{i=0}^{j}\frac{\lambda^{i}e^{-\lambda}}{i!}=\int_{\lambda}^{\infty}\frac{x^{j}e^{-x}}{j!}\,dx,\quad \lambda>0,\ j\in\mathbb{N}_{0}. \qquad (3.17)$$
The following auxiliary result gives a fast computation of $I_\lambda$.
In the following result, we give a fast computation of $I$, as defined in (3.15). By (3.13), this is tantamount to computing $G$ in a fast way.
Corollary 3.7. Let $q_j(\lambda)$ be as in (3.17) and $\lambda_j=2^j$, $j\in\mathbb{N}_0$. For any $k,m\in\mathbb{N}$ such that $2^{m+1}\ge k\log 3-2\log k$, we have the stated approximation.
Proof. From (3.15) and (3.16), the result follows.
Note that, for a fixed $k$, the number $m$ of terms on the left-hand side in (3.22) is of the order of $\log k$, as $k\to\infty$. We point out that Mező [10] proved an identity for $G$ in which $C_n$ is the $n$th Gregory coefficient and $\{x\}$ stands for the fractional part of $x$. Using (3.14), we can obtain from Corollary 3.7 a fast, but not rational, approximation of Euler's constant $\gamma$. Rational approximations of $\gamma$ are available in the literature. For instance, Hessami Pilehrood and Hessami Pilehrood [12] provided a continued fraction expansion converging subexponentially to $\gamma$, and Prévost and Rivoal [13] used Padé approximation methods to obtain sequences approximating $\gamma$ at a geometric rate (see also [14] for a fast rational approximation to $\gamma$ using the standard Poisson process). The authors of [15] (see also Coffey [3]) showed that each Stieltjes constant $\gamma_m$ can be written as a finite sum.
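As a numerical sanity check on the quantities of this subsection (all function names ours): one standard integral representation of the Gompertz constant is $G=\int_0^\infty e^{-x}/(1+x)\,dx$, and the Poisson–Gamma relation equates a Poisson cumulative probability with a gamma-type tail integral. Both can be verified by elementary quadrature:

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson rule on [a, b] with n subintervals (n even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

# Gompertz constant via its integral representation, truncating at x = 60
# (the neglected tail is below e**-60).
G = simpson(lambda x: math.exp(-x) / (1.0 + x), 0.0, 60.0, 60000)

# Poisson-Gamma relation: sum_{i<=j} lam**i * e**-lam / i! equals the
# integral of x**j * e**-x / j! over [lam, infinity).
lam, j = 2.0, 3
poisson_cdf = sum(lam**i * math.exp(-lam) / math.factorial(i) for i in range(j + 1))
gamma_tail = simpson(lambda x: x**j * math.exp(-x) / math.factorial(j), lam, 60.0, 60000)
```

Of course, the point of Corollary 3.7 is that the same accuracy is reached with a handful of precomputed Poisson probabilities $q_j(\lambda)$ instead of quadrature.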
We finally mention that computations of generalized Stieltjes constants at a geometric rate of convergence have been obtained by using different methods.For instance, Johansson [16] gave efficient algorithms based on the Euler-Maclaurin summation formula, Johansson and Blagouchine [17] used complex integration, and Prévost and Rivoal [18] presented approximating sequences obtained by generalizing the remainder Padé approximants method.See also [19] for computations based on a differential calculus for the gamma process.
Applying Lemma 3.6 to each one of the terms $I_{\lambda_i}$, we see from (3.23) that the left-hand side in (3.22) is bounded above as asserted, thus showing the result.