
Special zeta Mahler functions


We study the zeta Mahler function (ZMF, also zeta Mahler measure), which is closely related to the Mahler measure. Here we discuss a family of ZMFs attached to the Laurent polynomials \(k + (x_1 + x_1^{-1}) \cdots \left( x_r + x_r^{-1}\right) \), where k is real. We give explicit formulae, present examples and establish properties for these ZMFs, such as an RH-type phenomenon. Further, we explore connections with the Mahler measure.


For a Laurent polynomial \(P \in \mathbb {C}[x_1^{\pm 1}, \dots , x_r^{\pm 1}] \setminus \{ 0 \},\) define the zeta Mahler function (ZMF) as

$$\begin{aligned} Z(P;s) := \frac{1}{(2 \pi i)^r}\int \limits _{\mathbb {T}^r} \left| P(x_1, \dots , x_r) \right| ^s \, \frac{\mathrm {d}x_1}{x_1} \cdots \frac{\mathrm {d}x_r}{x_r}, \end{aligned}$$

where s is a complex parameter. This was first introduced by Gelfand and Bernstein in [1]. The function, though still unnamed, was later studied by Atiyah [2], Bernstein [3] and by Cassaigne and Maillot [4]. Finally, in 2009 these functions were re-introduced by Akatsuka [5]. The values \(Z(P;s)\) can be interpreted as the average value of \(\left| P(x_1, \dots , x_r) \right| ^s\) on the torus \(\mathbb {T}^r = \{ (x_1, \dots , x_r) \in \mathbb {C}^r :|x_1| = \dots = |x_r| = 1\}\). If we let \(X_1, \dots , X_r\) be uniformly distributed random variables on the complex unit circle, we can also interpret \(Z(P;s)\) as the s-th moment of the random variable \(|P(X_1, \dots , X_r)|\). The logarithmic Mahler measure of P is defined as

$$\begin{aligned} {\text {m}}(P) := \frac{1}{(2 \pi i)^r}\int _{\mathbb {T}^r} \log |P(x_1, \dots , x_r)| \, \frac{\mathrm {d}x_1}{x_1} \dots \frac{\mathrm {d}x_r}{x_r} = \frac{\mathrm {d}Z(P;s)}{\mathrm {d}s}\Bigr |_{s = 0}, \end{aligned}$$

see [6]. In this paper we discuss properties of the zeta Mahler function of the polynomials \(k + (x_1 + x_1^{-1}) \cdots (x_r + x_r^{-1})\) for real k. We denote this function by \(W_r(k;s)\),

$$\begin{aligned} W_r(k;s) = \frac{1}{(2 \pi i)^r} \int \limits _{\mathbb {T}^r} \left| k + \left( x_1 + x_1^{-1}\right) \cdots \left( x_r + x_r^{-1}\right) \right| ^s \frac{\mathrm {d}x_1}{x_1} \dots \frac{\mathrm {d}x_r}{x_r}. \end{aligned}$$

It is easy to see that for real k, the quantity \(k + (x_1 + x_1^{-1}) \cdots (x_r + x_r^{-1})\) is real-valued on the torus \(\mathbb {T}^r\). Further, the substitution \(x_1 \mapsto -x_1\) shows that \(W_r(|k|;s) = W_r(k;s)\), so it suffices to consider only \(k \ge 0\). In the computation we make a clear distinction between the cases \(k \ge 2^r\) and \(0 \le k < 2^r\). In the case \(k \ge 2^r\) (the “light” case) the structure is much simpler, as we can drop the absolute value in the integral (2). The ZMFs are originally defined in a half-plane \({\text {Re}}(s)>s_0\) for some \(s_0<0\), and the hypergeometric expressions that we prove in this paper provide an efficient way to continue the ZMF analytically to a meromorphic function of s, with an explicit location and structure of poles.
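Such integrals can be probed numerically. As a quick illustration (a sketch of ours, not part of the paper's argument), writing \(x = e^{i\theta }\) turns \(W_1(k;s)\) into the average of \(|k + 2\cos \theta |^s\) over \([0, 2\pi )\), and the symmetry \(W_1(k;s) = W_1(-k;s)\) is visible already at this level:

```python
import math

def W1_numeric(k, s, n=20000):
    # Midpoint rule for W_1(k;s) = (1/2pi) int_0^{2pi} |k + 2 cos(theta)|^s dtheta,
    # obtained from the torus integral by writing x = e^{i theta}.
    total = 0.0
    for j in range(n):
        theta = 2.0 * math.pi * (j + 0.5) / n
        total += abs(k + 2.0 * math.cos(theta)) ** s
    return total / n

# The substitution x -> -x leaves the integral invariant, so only k >= 0 matters.
print(W1_numeric(3.0, 1.5), W1_numeric(-3.0, 1.5))
print(W1_numeric(3.0, 0.0))   # any ZMF equals 1 at s = 0
```

For \(|k| > 2\) the integrand is smooth and periodic, so the midpoint rule converges rapidly.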

For \(r=1\), these values are already recorded in [5, Theorem 4]. We give a simpler expression for \(W_1(k;s)\).

Theorem 1.1

For \(s \in \mathbb {C}\), the following expressions are valid.

  1. (i)

    For \(|k|>2\), we have

    $$\begin{aligned} W_1(k;s) = |k|^s \cdot {}_{2} F_{1} \left( \frac{-s}{2}, \frac{1-s}{2} ; 1; \frac{4}{k^2} \right) . \end{aligned}$$
  2. (ii)

    For \(|k| = 2\), we have

    $$\begin{aligned} W_1(k;s) = \frac{2^s \Gamma \left( \frac{1}{2} + s\right) }{\Gamma \left( 1+\frac{s}{2}\right) \Gamma \left( \frac{1+s}{2}\right) }. \end{aligned}$$
  3. (iii)

    For \(|k| < 2\), we have

    $$\begin{aligned} W_{1}(k;s) = \frac{4^s \Gamma \left( \frac{1 + s}{2}\right) ^2}{\pi \Gamma \left( 1 + s\right) } \cdot {}_{2} F_{1} \left( \frac{-s}{2}, \frac{-s}{2} ; \frac{1}{2}; \frac{k^2}{4} \right) . \end{aligned}$$

Here and in what follows

$$\begin{aligned} {}_{r+1} F_{r} \left( a_1, \dots , a_{r+1}; b_1, \dots ,b_{r}; z\right) := \sum _{n \ge 0} \frac{(a_1)_n \cdots (a_{r+1})_n}{(b_1)_n \cdots (b_r)_n} \frac{z^n}{n!} \end{aligned}$$

is the hypergeometric function, where

$$(x)_n := x (x + 1) \cdots (x + n - 1)$$

denotes the Pochhammer symbol. A generalization of the hypergeometric function we also need is the Meijer G-function,

$$\begin{aligned} G_{p,q}^{m,n}(a_1, \dots , a_p;b_1, \dots , b_q; z); \end{aligned}$$

see [7, Sect. 16.17] for the definition.
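For numerical experiments the truncated hypergeometric series is easy to implement. The sketch below (ours; it assumes the series argument lies in the unit disc, so truncation is harmless) checks Theorem 1.1(i) and (ii) against direct numerical integration of (2):

```python
import math

def hyp(a_list, b_list, z, terms=300):
    # Truncated series sum_n [prod (a_i)_n / prod (b_j)_n] z^n / n!,
    # accumulated term by term; adequate for |z| < 1.
    term, total = 1.0, 1.0
    for n in range(terms):
        for a in a_list:
            term *= a + n
        for b in b_list:
            term /= b + n
        term *= z / (n + 1)
        total += term
    return total

def W1_numeric(k, s, n=40000):
    # Direct midpoint evaluation of (1/2pi) int_0^{2pi} |k + 2 cos(theta)|^s dtheta.
    return sum(abs(k + 2.0 * math.cos(2.0 * math.pi * (j + 0.5) / n)) ** s
               for j in range(n)) / n

k, s = 3.0, 1.2
thm_i = k ** s * hyp([-s / 2, (1 - s) / 2], [1.0], 4 / k ** 2)   # Theorem 1.1(i)
thm_ii = (2.0 ** s * math.gamma(0.5 + s)
          / (math.gamma(1 + s / 2) * math.gamma((1 + s) / 2)))   # Theorem 1.1(ii)
print(thm_i, W1_numeric(k, s))
print(thm_ii, W1_numeric(2.0, s))
```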

The explicit formulae in Theorem 1.1 allow us to derive a functional equation in the variable s, using the symmetry of the \({}_{2} F_{1}\) hypergeometric function.

Theorem 1.2

  1. (i)

    For \(|k|>2\) and \(s \in \mathbb {C}\), we have

    $$\begin{aligned} W_1(k;-s-1) = (k^2 - 4)^{-s-\frac{1}{2}} W_1(k;s). \end{aligned}$$
  2. (ii)

    For \(|k|<2\) and \(-1< {\text {Re}}(s) < 0\), we have

    $$\begin{aligned} W_1(k;-s-1) = \cot \left( -\frac{\pi s}{2} \right) (4-k^2)^{-s-\frac{1}{2}} W_1(k;s). \end{aligned}$$

The functional equation for \(|k|>2\) was already recorded in [5, Theorem 3]; the \(|k|<2\) functional equation is new. Note that in both cases the line of symmetry of \(W_1(k;s)\) is \({\text {Re}}(s) = -\frac{1}{2}\). In fact, all the zeros of \(W_1(k;s)\) lie on this line. Furthermore, equation (5) defines an analytic continuation of \(W_1(k;s)\) to a meromorphic function on \(\mathbb {C}\).
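The functional equation in part (ii) can be confirmed numerically by evaluating both sides through the hypergeometric expression of Theorem 1.1(iii); the following check (ours) uses \(k = 1\) and \(s = -0.3\), which lies in the admissible strip \(-1< {\text {Re}}(s) < 0\):

```python
import math

def hyp2f1(a, b, c, z, terms=400):
    # Truncated Gauss series, adequate for |z| < 1.
    term, total = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        total += term
    return total

def W1_inside(k, s):
    # Theorem 1.1(iii): W_1(k;s) for |k| < 2.
    return (4.0 ** s * math.gamma((1 + s) / 2) ** 2 / (math.pi * math.gamma(1 + s))
            * hyp2f1(-s / 2, -s / 2, 0.5, k ** 2 / 4))

k, s = 1.0, -0.3
lhs = W1_inside(k, -s - 1)
rhs = (1.0 / math.tan(-math.pi * s / 2)) * (4 - k ** 2) ** (-s - 0.5) * W1_inside(k, s)
print(lhs, rhs)   # the two sides should agree
```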

Theorem 1.3

For any \(k \in \mathbb {R}\), all the non-trivial zeros of \(W_1(k;s)\) lie on the critical line \({\text {Re}}(s) = -\frac{1}{2}\).

Furthermore, we present a general formula for all r in the “light” case.

Theorem 1.4

Let \(r \ge 1\).

  1. (i)

    For \(|k| > 2^r\) and all \(s \in \mathbb {C}\),

    $$\begin{aligned} W_r(k;s) = |k|^s \cdot {}_{r+1} F_{r} \left( \frac{-s}{2}, \frac{1-s}{2}, \frac{1}{2}, \dots , \frac{1}{2}; 1, \dots , 1; \frac{4^r}{k^2} \right) . \end{aligned}$$
  2. (ii)

    For \(|k| = 2^r\) and \({\text {Re}}(s) > -\frac{r}{2}\),

    $$\begin{aligned} W_r(k;s) = 2^{rs} {}_{r+1} F_{r} \left( \frac{-s}{2}, \frac{1-s}{2}, \frac{1}{2}, \dots , \frac{1}{2}; 1, \dots , 1; 1 \right) . \end{aligned}$$

The case \(r=2\), \(|k|>4\) was already recorded in [5, Theorem 6].
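As an illustration (ours), the case \(r = 2\) of Theorem 1.4(i) can be tested numerically: on the torus \(x_i + x_i^{-1} = 2\cos \theta _i\), so \(W_2(k;s)\) is a two-dimensional average of \(|k + 4 \cos \theta _1 \cos \theta _2|^s\):

```python
import math

def hyp(a_list, b_list, z, terms=400):
    # Truncated generalized hypergeometric series, adequate for |z| < 1.
    term, total = 1.0, 1.0
    for n in range(terms):
        for a in a_list:
            term *= a + n
        for b in b_list:
            term /= b + n
        term *= z / (n + 1)
        total += term
    return total

def W2_numeric(k, s, n=400):
    # 2D midpoint rule for the average of |k + 4 cos(t1) cos(t2)|^s over the torus.
    cos_vals = [2.0 * math.cos(2.0 * math.pi * (j + 0.5) / n) for j in range(n)]
    total = 0.0
    for c1 in cos_vals:
        for c2 in cos_vals:
            total += abs(k + c1 * c2) ** s
    return total / n ** 2

k, s = 5.0, 1.5
formula = k ** s * hyp([-s / 2, (1 - s) / 2, 0.5], [1.0, 1.0], 16 / k ** 2)
print(formula, W2_numeric(k, s))
```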

For \(0< k < 2^r\) we show that, as a function of k, \(W_r(k;s)\) satisfies the same differential equation as it does for \(|k|>2^r\), so that knowledge of the solutions of this differential equation allows us to give an explicit formula for \(W_r(k;s)\) for \(|k| < 2^r\). For real s we have the following result.

Theorem 1.5

For real \(s>0\), s not an odd integer, and real k,

$$\begin{aligned} W_r(k;s)&= |k|^s {\text {Re}} \, {}_{r+1} F_{r} \left( \frac{-s}{2}, \frac{1-s}{2}, \frac{1}{2}, \dots , \frac{1}{2}; 1, \dots , 1; \frac{4^r}{k^2} \right) \nonumber \\&\quad + \tan \left( \frac{\pi s}{2} \right) |k|^s {\text {Im}} \, {}_{r+1} F_{r} \left( \frac{-s}{2}, \frac{1-s}{2}, \frac{1}{2}, \dots , \frac{1}{2}; 1, \dots , 1; \frac{4^r}{k^2} \right) . \end{aligned}$$

More generally, for \(r=2\) we find the following formula.

Theorem 1.6

For \(|k| < 4\) and \({\text {Re}}(s) > -1\) and s not an odd integer, we have

$$\begin{aligned} W_{2}(k;s)&=\frac{1}{2 \pi } \frac{\tan (\frac{\pi s}{2})}{s+1} |k|^{1+s} {}_3 F_{2} \left( \frac{1}{2},\frac{1}{2} ,\frac{1}{2};1 +\frac{s}{2}, \frac{3}{2} + \frac{s}{2} ; \frac{k^2}{16} \right) \nonumber \\&\quad + \frac{\Gamma (s+1)^2}{\Gamma (\frac{s}{2}+1)^4} {}_3 F_{2} \left( \frac{-s}{2} ,\frac{-s}{2},\frac{-s}{2}; \frac{1-s}{2},\frac{1}{2}; \frac{k^2}{16} \right) . \end{aligned}$$

Furthermore, for \(r=3\) we find the following extension.

Theorem 1.7

For \(|k|<8\) and \({\text {Re}}(s) > -1\), s not an odd positive integer, we have

$$\begin{aligned} W_3(k;s)&= \frac{\Gamma (1+s)^3}{\Gamma (1+\frac{s}{2})^6} \cdot {}_4 F_{3} \left( \frac{-s}{2},\frac{-s}{2},\frac{-s}{2},\frac{-s}{2};\frac{1-s}{2},\frac{1-s}{2},\frac{1}{2};\frac{k^2}{64}\right) \nonumber \\&\quad -\frac{\tan (\frac{\pi s}{2})^2}{4 \pi (1+s)} |k|^{1+s} \cdot {}_4 F_{3} \left( \frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2};1,1+\frac{s}{2},\frac{3+s}{2};\frac{k^2}{64}\right) \nonumber \\&\quad +\frac{4^s \tan (\frac{\pi s}{2}) \Gamma (s+1)}{\pi ^{7/2}} \cdot G_{4,4}^{2,4} \left( \frac{2+s}{2}, \frac{2+s}{2}, \frac{2+s}{2}, \frac{2+s}{2};\frac{1+s}{2}, \frac{1+s}{2}, 0 ,\frac{1}{2} ;\frac{k^2}{64} \right) , \nonumber \\ \end{aligned}$$

where \(G_{p,q}^{m,n}\) denotes the Meijer G-function.

Apart from the hypergeometric expressions for \(W_r(k;s)\) above, we compute the probability density \(p_r(k;-)\) of the random variable \(|k + (X_1 + X_1^{-1}) \cdots (X_r + X_r^{-1})|\), where the \(X_i\) are independent uniformly distributed random variables on the complex unit circle. We can relate this \(p_r\) to \(W_r\) via

$$\begin{aligned} W_r(k;s) = \int _0^\infty x^s p_r(k;x)\, \mathrm {d}x, \end{aligned}$$

so that \(p_r(k;-)\) is the inverse Mellin transform of \(W_r(k;s)\) (see Sect. 4.1.2).

This paper is structured as follows. In Sect. 2 we prove Theorem 1.1, and in Sect. 3 we prove Theorems 1.2 and 1.3. In Sect. 4, we study properties of the probability densities \(p_r\) and prove Theorems 1.4 and 1.5, using methods different from Sect. 2 and generalizing Theorem 1.1. We finish Sect. 4 with the proofs of Theorems 1.6 and 1.7. Furthermore, we use our findings to give representations of Mahler measures not only in a hypergeometric form but also as multiple Euler type integrals.

Proof of Theorem 1.1

Proof of Theorems 1.1.(i) and 1.1.(ii)

Assume that \(k>2\). We compute the probability density \(p_1(k;-)\) of the random variable \(|k + X + X^{-1}|\), where X is uniformly distributed on the unit circle \(\{z \in \mathbb {C} :|z| = 1 \}\). It is easily seen that for \(k>2\) the density of this random variable is given by

$$\begin{aligned} p_1(k;x) = \frac{1}{2\pi } \frac{1}{\sqrt{1 - \frac{(x - k)^2}{4}}} \end{aligned}$$

supported on \((k-2 , k + 2)\), so that

$$\begin{aligned} W_1(k;s) = \int \limits _{k-2}^{k+2} x^s p_1(k;x) \mathrm {d}x&= \frac{1}{2 \pi } \int \limits _{k-2}^{k+2} \frac{x^s \mathrm {d}x}{\sqrt{1 - \frac{(x-k)^2}{4}}}. \end{aligned}$$

Using the substitution \(y = (x-k+2)/4\) to normalize the integral, we find that

$$\begin{aligned} W_1(k;s) = \frac{(k-2)^s}{\pi } \int \limits _0^1 y^{-\frac{1}{2}} (1-y)^{-\frac{1}{2}} \left( 1 - \frac{4}{2 - k} y \right) ^s \, \mathrm {d}y. \end{aligned}$$

Recall now the Euler integral

$$\begin{aligned} {}_2 F_{1} \left( a,b;c;z \right) = \frac{\Gamma (c)}{\Gamma (b) \Gamma (c-b)} \int \limits _0^1 t^{b-1} (1-t)^{c-b-1} (1 - zt)^{-a} \, \mathrm {d}t, \end{aligned}$$

valid for \({\text {Re}}(c)> {\text {Re}}(b) > 0\) and \(z \not \in [1, \infty )\), see [8, p. 4]. We may write

$$\begin{aligned} W_1(k;s) = (k-2)^s \cdot {}_{2} F_{1} \left( -s, \frac{1}{2} ; 1; \frac{4}{2 - k} \right) . \end{aligned}$$
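This intermediate formula can already be tested numerically; the sketch below (ours) takes \(k = 7\), so that the series argument \(4/(2-k) = -0.8\) lies inside the unit disc:

```python
import math

def hyp2f1(a, b, c, z, terms=500):
    # Truncated Gauss series, adequate for |z| < 1.
    term, total = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        total += term
    return total

def W1_numeric(k, s, n=20000):
    # (1/2pi) int_0^{2pi} |k + 2 cos(theta)|^s dtheta by the midpoint rule.
    return sum(abs(k + 2.0 * math.cos(2.0 * math.pi * (j + 0.5) / n)) ** s
               for j in range(n)) / n

k, s = 7.0, 0.8
formula = (k - 2) ** s * hyp2f1(-s, 0.5, 1.0, 4 / (2 - k))
print(formula, W1_numeric(k, s))
```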

Applying the quadratic transformation

$$\begin{aligned} {}_2 F_{1} \left( a,b;2b;z \right) = \left( 1 - \frac{1}{2}z\right) ^{-a} {}_2 F_{1} \left( \frac{1}{2}a,\frac{1}{2}a+\frac{1}{2};b+\frac{1}{2};\left( \frac{z}{2-z}\right) ^2 \right) \end{aligned}$$

for \(|{\text {arg}}(1-z)|<\pi \) (see [9, Eq. (9.6.17)]), Eq. (11) simplifies to

$$\begin{aligned} W_1(k;s) = k^s \cdot {}_{2} F_{1} \left( \frac{-s}{2}, \frac{1-s}{2} ; 1; \frac{4}{k^2} \right) . \end{aligned}$$

For \(k = \pm 2\), we find

$$\begin{aligned} W_1(\pm 2;s) = 2^s \cdot {}_{2} F_{1} \left( \frac{-s}{2}, \frac{1-s}{2} ; 1; 1 \right) \end{aligned}$$

valid for \({\text {Re}}(s) > -\frac{1}{2}\). Letting \(z \rightarrow 1\) in (10) (in other words, using the Gauss summation formula) gives

$$\begin{aligned} W_1(\pm 2;s) = \frac{2^s \Gamma \left( \frac{1}{2} + s\right) }{\Gamma \left( 1+\frac{s}{2}\right) \Gamma \left( \frac{1+s}{2}\right) }. \end{aligned}$$

Proof of Theorem 1.1.(iii)

We now consider \(0 \le k<2\). Then the probability density of the random variable \(|k + X_1 + X_1^{-1}|\) is given by

$$\begin{aligned} p_1(k;x) = {\left\{ \begin{array}{ll} \frac{1}{2\pi } \frac{1}{\sqrt{1 - \frac{(x - k)^2}{4}}} &{}\text {for } 2 - k \le x< 2+k,\\ \frac{1}{2\pi } \frac{1}{\sqrt{1 - \frac{(x - k)^2}{4}}} + \frac{1}{2\pi } \frac{1}{\sqrt{1 - \frac{(x + k)^2}{4}}}&{}\text {for } 0 \le x<2-k,\\ 0 &{}\text {for }x<0 \text { or } x \ge 2 +k, \end{array}\right. } \end{aligned}$$

leading to

$$\begin{aligned} W_1(k;s)&= \int \limits _{0}^{2+k} x^s p_1(k;x) \,\mathrm {d}x \\&= \frac{1}{2\pi }\int \limits _{0}^{2+k} \frac{x^s}{\sqrt{1 - (\frac{x-k}{2})^2}} \,\mathrm {d}x + \frac{1}{2\pi }\int \limits _{0}^{2-k} \frac{x^s}{\sqrt{1 - (\frac{x+k}{2})^2}} \,\mathrm {d}x.\\ \end{aligned}$$

Substituting \(y = \frac{x}{k+2}\), we obtain

$$\begin{aligned} \frac{1}{2 \pi } \int \limits _{0}^{2+k} \frac{x^s}{\sqrt{1 - (\frac{x-k}{2})^2}} \,\mathrm {d}x&= \frac{1}{\pi }\int \limits _{0}^{1} \frac{(2+k)^{s+1} y^s}{\sqrt{(1-y)(2+k)(y(2+k) + 2 - k)}} \, \mathrm {d}y\\&= \frac{(2+k)^{s + \frac{1}{2}}}{\pi (2 - k)^{\frac{1}{2}}} \int \limits _{0}^{1} y^s(1 - y)^{-\frac{1}{2}}\left( 1 - \frac{k+2}{k - 2}y \right) ^{-\frac{1}{2}} \, \mathrm {d}y\\&= \frac{(2+k)^{s + \frac{1}{2}}}{\pi (2 - k)^{\frac{1}{2}}} \frac{\Gamma (s+1)\Gamma (\frac{1}{2})}{\Gamma (s + \frac{3}{2})} {}_2 F_{1} \left( \frac{1}{2}, s + 1; s + \frac{3}{2}; \frac{k+2}{k - 2}\right) . \end{aligned}$$

Using this and the symmetric representation for \(\int _{0}^{2-k}\), we deduce that

$$\begin{aligned} W_1(k;s)&= \frac{\Gamma (s+1)}{\sqrt{\pi }\Gamma (s + \frac{3}{2})} \biggl (\frac{(2+k)^{s + \frac{1}{2}}}{(2 - k)^{\frac{1}{2}}}{}_2 F_{1} \left( \frac{1}{2}, s + 1; s + \frac{3}{2}; \frac{2+k}{-2 + k}\right) \\&\quad + \frac{(2-k)^{s + \frac{1}{2}}}{(2 + k)^{\frac{1}{2}}}{}_2 F_{1} \left( \frac{1}{2}, s + 1; s + \frac{3}{2}; \frac{-2 + k}{2 + k}\right) \biggr ) \end{aligned}$$

for \({\text {Re}}(s) > - 1\). Applying the transformation

$$\begin{aligned} {}_{2}F_{1}(a,b;c;z)=(1-z)^{-a}\,{}_{2}F_{1} \left( a,c - b;c;{\frac{z}{z-1}} \right) \end{aligned}$$

valid for \(|z| \le \frac{1}{2}\), we find

$$\begin{aligned} W_1(k;s)&= \frac{\Gamma (s+1)}{2 \sqrt{\pi }\Gamma (s + \frac{3}{2})} \biggl ( (2 + k)^{s + \tfrac{1}{2}} {}_2 F_{1} \left( \frac{1}{2},\frac{1}{2}; s + \frac{3}{2}; \frac{2 + k}{4}\right) \\&\quad + (2 - k)^{s + \tfrac{1}{2}} {}_2 F_{1} \left( \frac{1}{2},\frac{1}{2}; s + \frac{3}{2}; \frac{2-k}{4}\right) \biggr ). \end{aligned}$$

The latter expression can be simplified to the form (3), using the quadratic transformation

$$\begin{aligned} \frac{2\Gamma (\frac{1}{2})\Gamma (a+b+\frac{1}{2})}{ \Gamma (a+\frac{1}{2})\Gamma (b+\frac{1}{2})} {}_{2} F_{1}\left( a,b; \tfrac{1}{2};z^2\right)&={}_{2} F_{1}\left( 2a,2b;a+b+\tfrac{1}{2};\tfrac{1}{2}-\tfrac{1}{2} z\right) \\&\quad + \, {}_{2} F_{1}\left( 2a,2b;a+b+\tfrac{1}{2};\tfrac{1}{2}+\tfrac{1}{2}z\right) . \end{aligned}$$

for \(|z| \le 1\) (see [7, Eq. (15.8.27)]).

Proof of Theorems 1.2 and 1.3

In this section we discuss special properties of \(W_r(k;s)\) for \(r=1\). We give a functional equation for \(W_1(k;s)\) and we state and prove the “Riemann hypothesis” for \(s \mapsto W_1(k;s)\). It is not clear to what extent these results generalize to \(r > 1\).

Functional Equations for \(W_1(k;s)\)

Proof of Theorem 1.2.(i)

We give two proofs of this statement: one uses hypergeometric transformations, the other the probability density \(p_1\).

For the first proof, we use Euler’s transformation formula

$$\begin{aligned} {}_2 F_{1} \left( a,b; c; z \right) = (1-z)^{c-a-b} {}_2 F_{1} \left( c-a,c -b; c; z \right) , \end{aligned}$$

which is just the double iteration of the Pfaff transformation used in the proof of Theorem 1.1.(iii). We find that

$$\begin{aligned} W_1(k;-s-1)&= |k|^{-s-1} {}_2 F_{1} \left( \frac{1+s}{2},1 + \frac{s}{2}; 1; \frac{4}{k^2} \right) \\&= |k|^{-s-1}\left( 1 - \frac{4}{k^2}\right) ^{ - s-\frac{1}{2}} {}_2 F_{1} \left( \frac{1-s}{2},-\frac{s}{2}; 1; \frac{4}{k^2} \right) \\&= (k^2 - 4)^{-s-\frac{1}{2}} W_1(k;s). \end{aligned}$$

For the second proof, we use the symmetry of formula (9):

$$\begin{aligned} p_1 \left( k; \frac{k^2 - 4}{x} \right) = \frac{x}{\sqrt{k^2 - 4}} p_1(k;x) \end{aligned}$$

for \(x \in (k-2,k+2)\). Now applying it to

$$\begin{aligned} W_1(k;s) = \int \limits _{|k|-2}^{|k|+2} x^s p_1(k;x) \, \mathrm {d}x \end{aligned}$$

gives the functional equation (4).
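The symmetry of the density used in this second proof is elementary and can be checked directly; in the following sketch (ours), note that for \(k = 3\) the map \(x \mapsto (k^2-4)/x\) preserves the support \((1, 5)\):

```python
import math

def p1(k, x):
    # Density (9) of |k + X + X^{-1}| for k > 2, supported on (k-2, k+2).
    return 1.0 / (2.0 * math.pi * math.sqrt(1.0 - (x - k) ** 2 / 4.0))

k = 3.0
for x in [1.2, 2.0, 3.7, 4.9]:
    lhs = p1(k, (k ** 2 - 4) / x)
    rhs = x / math.sqrt(k ** 2 - 4) * p1(k, x)
    assert abs(lhs - rhs) < 1e-12
print("symmetry verified on sample points")
```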

Proof of Theorem 1.2.(ii)

Using Euler's transformation formula (13), we obtain

$$\begin{aligned} W_1(k;-s-1)&= \frac{4^{-s-1} \Gamma (-\frac{s}{2})^2}{\pi \Gamma (-s)} \cdot {}_2 F_1 \left( \frac{1+s}{2}, \frac{1+s}{2}; \frac{1}{2} ; \frac{k^2}{4}\right) \\&= \frac{4^{-s-1} \Gamma (-\frac{s}{2})^2}{\pi \Gamma (-s)} \left( 1 - \frac{k^2}{4} \right) ^{-s-\frac{1}{2}} {}_2 F_1 \left( -\frac{s}{2}, -\frac{s}{2}; \frac{1}{2} ; \frac{k^2}{4}\right) \\&= \frac{4^{-s-\frac{1}{2}}\Gamma (-\frac{s}{2})^2 \Gamma (1+s)}{\Gamma (-s) \Gamma (\frac{1+s}{2})^2} (4 - k^2)^{-s-\frac{1}{2}} W_1(k;s)\\&= 4^{-s-\frac{1}{2}} 2^{2s + 1} \cot \left( -\frac{\pi s}{2} \right) \cdot (4-k^2)^{-s-\frac{1}{2}} W_1(k;s)\\&= \cot \left( -\frac{\pi s}{2} \right) \cdot (4-k^2)^{-s-\frac{1}{2}} W_1(k;s), \end{aligned}$$

which is precisely the functional Eq. (5).

Zeros of \(W_1(k;s)\)

For \(\lambda \in \mathbb {C}\) and \((\alpha ,\beta ) \in \mathbb {C}^2\), consider the Jacobi function

$$\begin{aligned} \varphi _{\lambda }^{(\alpha ,\beta )}(t) := (\cosh {t})^{-\alpha - \beta -1 - i\lambda } {}_2 F_1 \left( \frac{\alpha + \beta + 1 + i\lambda }{2}, \frac{\alpha - \beta + 1 + i\lambda }{2}; \alpha + 1; \tanh ^2{t} \right) , \end{aligned}$$

see [10]. When \((\alpha ,\beta ) \in \mathbb {C}^2\) is fixed, the family \(\{ \varphi ^{(\alpha ,\beta )}_{\lambda } \}_{\lambda \ge 0}\) forms a continuous orthogonal system on \(\mathbb {R}_{\ge 0}\) with respect to the weight function

$$\begin{aligned} \Delta _{\alpha ,\beta }(t) := (2 \sinh {t})^{2 \alpha + 1} (2 \cosh {t})^{2 \beta + 1}. \end{aligned}$$

In this way \(\varphi _{\lambda }^{(\alpha , \beta )}(t)\) becomes the unique even \(C^{\infty }\)-function f on \(\mathbb {R}\) with \(f(0) = 1\) satisfying

$$\begin{aligned} \frac{\mathrm {d}^2f}{\mathrm {d}t^2} + \frac{\Delta '(t)}{\Delta (t)}\frac{\mathrm {d}f}{\mathrm {d}t} + \left( \lambda ^2 + (\alpha + \beta + 1)^2 \right) f =0, \end{aligned}$$

where the dash denotes the derivative with respect to t. For brevity we write \(\varphi _{\lambda }\) for \(\varphi ^{(\alpha ,\beta )}_{\lambda }\), similarly for \(\Delta \).

Lemma 3.1

For any \(x > 0\), \(\lambda , \mu \in \mathbb {C}\) with \(\lambda \ne \pm \mu \) and \(\alpha , \beta \in \mathbb {R}\),

$$\begin{aligned} \int \limits _0^x \varphi _\lambda (t) \varphi _\mu (t) \Delta (t) \, \mathrm {d}t = (\mu ^2 - \lambda ^2)^{-1} \Delta (x) \left( \varphi _\lambda '(x)\varphi _\mu (x) - \varphi _\lambda (x)\varphi _\mu '(x) \right) . \end{aligned}$$


Proof of Lemma 3.1

Note that we can write (15) as

$$\begin{aligned} \left( \Delta (t)\varphi _{\lambda }'(t) \right) ' = - \left( \lambda ^2 + (\alpha + \beta + 1)^2 \right) \Delta (t)\varphi _{\lambda }(t). \end{aligned}$$

Now performing integration by parts twice and using \(\varphi _\lambda '(0) = \varphi _\mu '(0) = 0\) (the functions \(\varphi _\lambda \) are even) gives

$$\begin{aligned} -(\mu ^2 + (\alpha + \beta + 1)^2)\int _0^x \varphi _\lambda (t) \varphi _\mu (t) \Delta (t) \, \mathrm {d}t&= \int \limits _0^x \varphi _\lambda (t) \left( \Delta (t)\varphi _{\mu }'(t) \right) ' \, \mathrm {d}t \\&=\Delta (x)\varphi _\lambda (x)\varphi _\mu '(x) - \int \limits _0^x \varphi _\lambda '(t)\Delta (t) \varphi _{\mu }'(t) \, \mathrm {d}t \\&=\Delta (x) \left( \varphi _\lambda (x)\varphi _\mu '(x) - \varphi _\lambda '(x)\varphi _\mu (x) \right) \\&\quad + \int \limits _0^x \left( \varphi _\lambda '(t) \Delta (t) \right) ' \varphi _{\mu }(t) \, \mathrm {d}t \\&= \Delta (x) \left( \varphi _\lambda (x)\varphi _\mu '(x) - \varphi _\lambda '(x)\varphi _\mu (x) \right) \\&\quad -(\lambda ^2 + (\alpha + \beta + 1)^2) \int \limits _0^x \varphi _\lambda (t) \varphi _{\mu }(t) \Delta (t) \, \mathrm {d}t, \quad \end{aligned}$$

and rearranging yields Eq. (15). \(\square \)

Lemma 3.2

For any \(x > 0\) and \(\alpha , \beta \in \mathbb {R}\), if \(\varphi _\lambda (x) = 0\) then \(\lambda \in \mathbb {R} \cup i \mathbb {R}\). Moreover, if \(\min (\alpha +\beta +1,\alpha -\beta +1) > 0\) then \(\varphi _{i \mu }(x) > 0\) for \(\mu \le 0\).

The idea of the proof of Lemma 3.2 is based on the proof of Lommel’s theorem on the zeros of Bessel function, see [11, p. 482].

Proof of Lemma 3.2

Fix \(x > 0\) and assume that there is a \(\lambda \not \in \mathbb {R} \cup i \mathbb {R}\) such that \(\varphi _\lambda (x) = 0\). Choose \(\mu = \overline{\lambda }\) and apply Lemma 3.1. Clearly, \(\overline{\varphi _\lambda } = \varphi _\mu \), hence

$$\begin{aligned} \int \limits _0^x |\varphi _\lambda (t)|^2 \Delta (t) \, \mathrm {d}t = (\overline{\lambda }^2 - \lambda ^2)^{-1} \Delta (x) \left( \varphi _\lambda '(x)\overline{\varphi _\lambda (x)} - \varphi _\lambda (x)\overline{\varphi _\lambda '(x)} \right) = 0. \end{aligned}$$

But this gives a contradiction, as \(\int _0^x |\varphi _\lambda (t)|^2 \Delta (t) \, \mathrm {d}t\) is strictly positive. Thus \(\lambda \in \mathbb {R} \cup i \mathbb {R}\). If additionally \(\min (\alpha +\beta +1,\alpha -\beta +1) > 0\), the coefficients of the hypergeometric function in (14) are strictly positive provided that \(\lambda = i \mu \) for \(\mu \le 0\), showing that \(\varphi _{i\mu }(x) > 0\). \(\square \)

If \((\alpha , \beta ) = (-\frac{1}{2},0)\) or \((0, -\frac{1}{2})\), then a consequence of this result is Theorem 1.3.

Proof of Theorem 1.3

By Lemma 3.2, it suffices to show that \(s \mapsto W_1(k;s)\) has no real zeros. For \(s \ge 0\) this is obvious, as \(W_1(k;s)\), according to its definition (2), coincides with an integral of a positive function. \(\square \)

The zeta Mahler function \(W_r(k;s)\) for general r

We begin this section by noting that for \(r = 2\), the value of \(W_2(k;s)\) coincides with \(Z(k + x + x^{-1} + y + y^{-1}; s)\) (see (1)). Indeed, the substitution \(x = x_1x_2\) and \(y = x_1x_2^{-1}\) in the latter leads to

$$\begin{aligned} k + x_1x_2 + (x_1x_2)^{-1} + x_1x_2^{-1} + \left( x_1x_2^{-1}\right) ^{-1} = k + \left( x_1 + x_1^{-1}\right) \left( x_2 + x_2^{-1}\right) . \end{aligned}$$

For \(r \ge 3\), the value of (2) is different from \(Z(k + x_1 + x_1^{-1} + \dots + x_r + x_r^{-1};s)\).

In this section we discuss the densities \(p_r(k;-)\).

The probability densities \(p_r(k;-)\)

In this section we explicitly compute the probability distributions \(p_r(k;-)\) for \(r = 1,2,3\) and discuss how to obtain them for any r.

Define \(\hat{p}_r\) to be the density of the random variable \((X_1 + X_1^{-1}) \cdots (X_r + X_r^{-1})\); note that this quantity assumes real values only. We can relate \(\hat{p}_r\) to \(p_r(k;-)\) in the following way (compare with Sect. 2.2).

Lemma 4.1

For \(|k| < 2^r\), we have

$$\begin{aligned} p_r(k;x) = {\left\{ \begin{array}{ll} \hat{p}_r(x - |k|)&{}\text {for } 2^r - |k| \le x< 2^r+|k|,\\ \hat{p}_r(x - |k|) + \hat{p}_r(x + |k|)&{}\text {for } 0 \le x<2^r-|k|,\\ 0 &{}\text {for }x<0 \text { or } x \ge 2^r +|k|. \end{array}\right. } \end{aligned}$$

For \(|k| \ge 2^r\), we have \(p_r(k;x) = \hat{p}_r(x - |k|)\).


Proof of Lemma 4.1

We have

$$\begin{aligned} W_r(k;s)&= \int \limits _{\mathbb {R}} |z|^s \hat{p}_r(z-|k|) \, \mathrm {d}z\\&= \int \limits _{|k| - 2^r}^{|k| + 2^r} |z|^s \hat{p}_r(z-|k|) \, \mathrm {d}z. \end{aligned}$$

If \(|k| \ge 2^r\), then

$$\begin{aligned} W_r(k;s) = \int \limits _{|k| - 2^r}^{|k| + 2^r} z^s \hat{p}_r(z-|k|) \, \mathrm {d}z, \end{aligned}$$

so that \(p_r(k;x) = \hat{p}_r(x - |k|)\).

If \(|k| < 2^r\), then

$$\begin{aligned} W_r(k;s)&= \int \limits _{|k| - 2^r}^{|k| + 2^r} |z|^s\hat{p}_r(z - |k|) \, \mathrm {d}z\\&= \int \limits _0^{2^r + |k|} z^s\hat{p}_r(z - |k|) \, \mathrm {d}z + \int \limits _0^{2^r - |k|} z^s\hat{p}_r(- z - |k|) \, \mathrm {d}z \\&= \int \limits _0^{2^r + |k|} z^s\hat{p}_r(z - |k|) \, \mathrm {d}z + \int \limits _0^{2^r - |k|} z^s\hat{p}_r(z + |k|) \, \mathrm {d}z\\&= \int \limits _0^{2^r - |k|} z^s \left( \hat{p}_r(z - |k|) + \hat{p}_r(z + |k|) \right) \, \mathrm {d}z + \int \limits _{2^r - |k|}^{2^r + |k|} z^s\hat{p}_r(z - |k|) \, \mathrm {d}z, \end{aligned}$$

where the symmetry \(\hat{p}_r(-z) = \hat{p}_r(z)\) was employed. \(\square \)

Using Lemma 4.1, it is clear that it suffices to compute the distributions \(\hat{p}_r\).

Computation of \(\hat{p}_r\)

We will now compute \(\hat{p}_r\) explicitly for \(r = 1,2\) and 3.

It follows from (9) that

$$\begin{aligned} \hat{p}_1(x) = \frac{1}{2\pi } \frac{1}{\sqrt{1 - \frac{x^2}{4}}}, \end{aligned}$$

for \(|x| < 2\) and \(\hat{p}_1(x) = 0\) otherwise.

For \(r \ge 1\), define \(G_r(y) = \hat{p}_r(2^r \sqrt{1-y})\) for \(0 < y \le 1\) and \(G_r(y) = 0\) otherwise, so that \(\hat{p}_r(x) = G_r(1-x^2/4^r)\). By the above,

$$\begin{aligned} G_1(y) = \frac{1}{2 \pi } \frac{1}{\sqrt{y}} \end{aligned}$$

for \(0 < y \le 1\) and \(G_1(y) = 0\) otherwise.

Further, using basic properties of the Mellin transform, it follows that the \(G_r\) satisfy a recurrence for \(r \ge 2\):

$$\begin{aligned} G_r \left( 1 - \frac{x^2}{4^r} \right)&= \int \limits _{\mathbb {R}} G_{r-1} \left( 1-\frac{t^2}{4^{r-1}} \right) G_{1}\left( 1 - \frac{x^2}{4t^2} \right) \frac{\mathrm {d}t}{|t|} \nonumber \\&= 2 \int \limits _{x/2}^{2^{r-1}} G_{r-1} \left( 1-\frac{t^2}{4^{r-1}} \right) G_{1}\left( 1 - \frac{x^2}{4t^2} \right) \frac{\mathrm {d}t}{t} \nonumber \\&= \frac{1}{\pi } \int \limits _{x/2}^{2^{r-1}} G_{r-1} \left( 1 - \frac{t^2}{4^{r-1}} \right) \frac{\mathrm {d}t}{\sqrt{t^2 - \frac{x^2}{4}}} . \end{aligned}$$

Then using the substitution \(u = t^2/4^{r-1}\) in (16), we find

$$\begin{aligned} G_r \left( 1 - \frac{x^2}{4^r} \right)&= \frac{1}{2 \pi } \int \limits _{x^2/4^r}^{1} G_{r-1}(1 - u) \frac{\mathrm {d}u}{\sqrt{u(u - \frac{x^2}{4^r})}}\\&= \frac{1}{2 \pi } \int \limits _{0}^{1 - x^2/4^r} G_{r-1}(u) \frac{\mathrm {d}u}{\sqrt{(1-u)(1 - \frac{x^2}{4^r} - u)}}. \end{aligned}$$

Finally, let \(y = 1 - x^2/4^r\) and \(v = u/y\) to arrive at the following result.

Theorem 4.2

(Recursive formula for the \(G_r\)) For \(G_r\) with \(r \ge 2\) we have the following recursion:

$$\begin{aligned} G_r(y)&= \frac{\sqrt{y}}{2\pi } \int _{0}^1 G_{r-1}(yv) \frac{\mathrm {d}v}{\sqrt{(1-v)(1-yv)}}. \end{aligned}$$

Applying this recursion with \(r = 2\) we obtain for \(0< y < 1\),

$$\begin{aligned} G_2(y)&= \frac{\sqrt{y}}{2\pi } \int \limits _{0}^1 G_1 (yv) \frac{\mathrm {d}v}{\sqrt{(1-v)(1-yv)}}\\&= \frac{1}{4 \pi ^2} \int \limits _{0}^1 \frac{\mathrm {d}v}{\sqrt{v(1-v)(1-yv)}} \\&= \frac{1}{4 \pi } \cdot {}_2 F_{1} \left( \frac{1}{2}, \frac{1}{2};1;y \right) . \end{aligned}$$

In the last step we use the integral representation (10). This shows that

$$\begin{aligned} \hat{p}_2(x) = \frac{1}{4 \pi } \cdot {}_2 F_{1} \left( \frac{1}{2}, \frac{1}{2};1;1 - \frac{x^2}{16} \right) , \end{aligned}$$

for \(0 < |x| \le 4\) and \(\hat{p}_2(x) = 0\) otherwise.
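Numerically it is convenient to evaluate this density via the classical identity \({}_2 F_{1}(\frac{1}{2}, \frac{1}{2};1;m) = 1/{\text {AGM}}(1, \sqrt{1-m})\), where \({\text {AGM}}\) denotes the arithmetic-geometric mean; this avoids the slow convergence of the series as \(x \rightarrow 0\). The sketch below (ours) checks that \(\hat{p}_2\) has total mass 1 and second moment 4, the latter being \(\mathsf {E}[(2\cos \Theta _1)^2 (2\cos \Theta _2)^2] = 2 \cdot 2\):

```python
import math

def agm(a, b):
    # Arithmetic-geometric mean; quadratically convergent, so a fixed
    # number of iterations reaches machine precision.
    for _ in range(30):
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return a

def p2_hat(x):
    # hat{p}_2(x) = (1/4pi) 2F1(1/2,1/2;1;1-x^2/16) = 1/(4 pi AGM(1, |x|/4)).
    return 1.0 / (4.0 * math.pi * agm(1.0, abs(x) / 4.0))

n = 20000
h = 4.0 / n
mass = second = 0.0
for j in range(n):
    x = (j + 0.5) * h
    mass += 2.0 * p2_hat(x) * h            # density is even in x
    second += 2.0 * x * x * p2_hat(x) * h
print(mass, second)   # close to 1 and 4 respectively
```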

In the case \(r = 3\) we proceed similarly. For \(0< y < 1\),

$$\begin{aligned} G_3(y)&= \frac{\sqrt{y}}{2 \pi } \int \limits _0^1 G_2(yv) \frac{\mathrm {d}v}{\sqrt{(1-v)(1-yv)}} \\&= \frac{\sqrt{y}}{8 \pi ^2} \int \limits _{0}^1 {}_2 F_{1} \left( \frac{1}{2}, \frac{1}{2};1; yv \right) \frac{\mathrm {d}v}{\sqrt{(1 - v)(1 - yv)}}. \end{aligned}$$

The expression

$$\begin{aligned} Y(a)&:= \int _{0}^1 {}_2 F_{1} \left( \frac{1}{2}, \frac{1}{2};1; av \right) \frac{\mathrm {d}v}{\sqrt{(1 - v)(1 - av)}}\\&= \sum _{\ell \ge 0} \frac{\ell ! \, \Gamma (1/2)}{\Gamma (\ell +3/2)} \left( \sum _{n + m = \ell } \frac{(\frac{1}{2})_n^2 (\frac{1}{2})_m}{(n!)^2 m!}\right) a^\ell \end{aligned}$$

satisfies a third-order linear differential equation

$$\begin{aligned} 8a^2(a-1)^2\frac{d^3Y}{da^3} + 24a(a-1)(2a-1)\frac{d^2Y}{da^2} + (56a^2 - 54a + 6) \frac{dY}{da} + (8a - 3)Y = 0. \end{aligned}$$

After solving this differential equation (for example, with Mathematica [12]) and checking the initial conditions, we find that

$$\begin{aligned} G_3(y) = \frac{\sqrt{y}}{4 \pi ^2} {}_2 F_{1} \left( \frac{1}{4}, \frac{1}{4} ;\frac{1}{2};y\right) {}_2 F_{1} \left( \frac{3}{4}, \frac{3}{4} ;\frac{3}{2};y\right) , \end{aligned}$$

so that

$$\begin{aligned} \hat{p}_3(x) = \frac{\sqrt{1 - \frac{x^2}{64}}}{4 \pi ^2} {}_2 F_{1} \left( \frac{1}{4}, \frac{1}{4} ;\frac{1}{2};1 - \frac{x^2}{64}\right) {}_2 F_{1} \left( \frac{3}{4}, \frac{3}{4} ;\frac{3}{2};1 - \frac{x^2}{64}\right) \end{aligned}$$

for \(0< |x| \le 8\) and \(\hat{p}_3(x) = 0\) for \(|x| > 8\).
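The closed form for \(G_3\) can be tested against the recursion of Theorem 4.2. In the sketch below (ours), the substitution \(v = 1 - u^2\) removes the endpoint singularity of the recursion integral:

```python
import math

def hyp2f1(a, b, c, z, terms=300):
    # Truncated Gauss series, adequate for |z| < 1.
    term, total = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        total += term
    return total

def G2(y):
    # G_2(y) = (1/4 pi) 2F1(1/2,1/2;1;y), as computed above.
    return hyp2f1(0.5, 0.5, 1.0, y) / (4.0 * math.pi)

y = 0.5
n = 4000
integral = 0.0
for j in range(n):
    u = (j + 0.5) / n               # v = 1 - u^2, so dv / sqrt(1-v) = -2 du
    w = y * (1.0 - u * u)           # w = y v
    integral += 2.0 * G2(w) / math.sqrt(1.0 - w)
integral /= n
recursion = math.sqrt(y) / (2.0 * math.pi) * integral
closed = (math.sqrt(y) / (4.0 * math.pi ** 2)
          * hyp2f1(0.25, 0.25, 0.5, y) * hyp2f1(0.75, 0.75, 1.5, y))
print(recursion, closed)   # the two evaluations of G_3(1/2) should agree
```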

Already at this stage it is strongly suggestive that \(G_r(1-y)\) satisfies the same differential equation as

$$\begin{aligned} {}_r F_{r-1} \left( \frac{1}{2}, \dots , \frac{1}{2};1, \dots , 1; y \right) . \end{aligned}$$

We discuss this in the next subsection.

Mellin Transform of \(\hat{p}_r\)

In this section we find an expression for

$$\begin{aligned} \int _{-2^r}^{2^r} x^{v} \hat{p}_r(x) \, \mathrm {d}x \end{aligned}$$

when \(v \in \mathbb {C}\). This is the v-th moment of the random variable \((X_1 + X_1^{-1}) \dots (X_r + X_r^{-1})\).

Lemma 4.3

For \(\text {Re}(v) > -1\), we have

$$\begin{aligned} \int \limits _0^{2^r} x^v \hat{p}_r(x) \, \mathrm {d}x =\frac{1}{2}\left( \frac{2^v}{\pi } \frac{\Gamma \left( \frac{1}{2} \right) \Gamma (\frac{v+1}{2})}{\Gamma \left( 1 + \frac{v}{2} \right) } \right) ^r \end{aligned}$$


and

$$\begin{aligned} \int _{-2^r}^{2^r} x^v \hat{p}_r(x) \, \mathrm {d}x = \frac{1+e^{\pi i v}}{2}\left( \frac{2^v}{\pi } \frac{\Gamma \left( \frac{1}{2} \right) \Gamma (\frac{v+1}{2})}{\Gamma \left( 1 + \frac{v}{2} \right) } \right) ^r. \end{aligned}$$



Proof of Lemma 4.3

We have

$$\begin{aligned} |(X_1 + X_1^{-1}) \cdots (X_r + X_r^{-1})|^v =|X_1 + X_1^{-1}|^v \cdots |X_r + X_r^{-1}|^v \end{aligned}$$

and for the expected value

$$\begin{aligned} \mathsf {E}[|(X_1 + X_1^{-1}) \cdots (X_r + X_r^{-1})|^v] = 2 \int _0^{2^r} x^v \hat{p}_r(x) \, \mathrm {d}x, \end{aligned}$$

we have

$$\begin{aligned} 2 \int \limits _{0}^{2^r} x^v \hat{p}_r(x) \, \mathrm {d}x = \left( 2 \int _{0}^{2} x^v \hat{p}_1(x) \, \mathrm {d}x \right) ^r. \end{aligned}$$

For \(r = 1\), we obtain

$$\begin{aligned} \int \limits _{0}^2 x^v \hat{p}_1(x) \, \mathrm {d}x&= \frac{1}{2 \pi } \int \limits _{0}^2 \frac{x^v}{\sqrt{1 - \frac{x^2}{4}}} \, \mathrm {d}x \\&=\frac{2^{v}}{\pi } \int \limits _{0}^1 \frac{x^v}{\sqrt{1 - x^2}} \, \mathrm {d}x\\&=\frac{2^{v}}{\pi } \frac{\Gamma (v+1)\Gamma (\frac{1}{2})}{\Gamma (\frac{3}{2}+v)} \cdot {}_2 F_{1} \left( \frac{1}{2} ,v+1;\frac{3}{2} + v; -1 \right) \\&= \frac{ 2^{v}}{\pi } \frac{\Gamma (\frac{1}{2}) \Gamma (1 + \frac{v+1}{2})}{(v+1) \Gamma (1 + \frac{v}{2})}. \end{aligned}$$


Therefore,

$$\begin{aligned} \int \limits _0^{2^r} x^v \hat{p}_r(x) \, \mathrm {d}x = \frac{1}{2} \left( \frac{2^{v+1}}{\pi } \frac{\Gamma (\frac{1}{2}) \Gamma (1 + \frac{v+1}{2})}{(v+1) \Gamma (1 + \frac{v}{2})} \right) ^r \end{aligned}$$

and, clearly,

$$\begin{aligned} \int _{-2^r}^{2^r} x^v \hat{p}_r(x) \, \mathrm {d}x = \left( 1 + e^{\pi i v} \right) \int _{0}^{2^r} x^v \hat{p}_r(x) \, \mathrm {d}x. \end{aligned}$$

\(\square \)
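For \(r = 1\) the lemma is easy to confirm by quadrature (a sketch of ours); the case \(v = 2\) also serves as a sanity check, since \(\mathsf {E}[(2 \cos \Theta )^2] = 2\):

```python
import math

def moment_numeric(v, n=40000):
    # E[|2 cos(theta)|^v] = (1/2pi) int_0^{2pi} |2 cos(theta)|^v dtheta.
    return sum(abs(2.0 * math.cos(2.0 * math.pi * (j + 0.5) / n)) ** v
               for j in range(n)) / n

def moment_formula(v):
    # Lemma 4.3 with r = 1: the full absolute moment, i.e. twice the
    # one-sided integral over (0, 2).
    return (2.0 ** v / math.pi * math.gamma(0.5) * math.gamma((v + 1) / 2)
            / math.gamma(1 + v / 2))

print(moment_numeric(0.6), moment_formula(0.6))
print(moment_numeric(2.0), moment_formula(2.0))   # both close to 2
```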

Let \(f :(0, \infty ) \rightarrow \mathbb {R}\) be continuously differentiable and \(s \in \mathbb {C}\). Denote by \(M(f;s)\) its Mellin transform

$$\begin{aligned} M(f;s) := \int \limits _0^\infty x^s f(x) \, \mathrm {d}x. \end{aligned}$$

Note that our definition of the Mellin transform differs slightly from the standard one; this does not affect the properties discussed below. Further, let \(\theta \) be the differential operator \(x \frac{\mathrm {d}}{\mathrm {d}x}\).

Proposition 4.4

(General properties of Mellin transforms; see [13, Sect. 3.1.2])

  1. (i)

    \(M(\theta f(x);s) = -(s+1) M(f(x);s)\);

  2. (ii)

    \(M(xf(x);s) = M(f(x);s+1) \).

Let \(H_r(y) := G_r(1-y)\) and consider the Mellin transform

$$\begin{aligned} M(H_r;s) = \int \limits _0^\infty y^s H_r(y) \, \mathrm {d}y. \end{aligned}$$

Substituting \(y = x^2/4^r\) and using Lemma 4.3, we obtain

$$\begin{aligned} M(H_r;s)&=\int \limits _0^1 y^s H_r(y) \, \mathrm {d}y = \frac{2}{4^{r(s+1)}} \int _0^{2^r} x^{2s+1} \hat{p}_r(x) \, \mathrm {d}x \\&=\frac{1}{4^{r(s+1)}} \left( \frac{1}{\pi } 2^{2s+1} \frac{\Gamma \left( \frac{1}{2} \right) \Gamma \left( \frac{(2s+1)+1}{2}\right) }{\Gamma \left( 1 + \frac{2s+1}{2} \right) } \right) ^r\\&= \left( \frac{\Gamma \left( \frac{1}{2} \right) \Gamma (s+1)}{2\pi \Gamma \left( s+\frac{3}{2}\right) } \right) ^r . \end{aligned}$$

It is clear that \(M(H_r;s)\) satisfies a recursion:

$$\begin{aligned} \left( s+\frac{1}{2}\right) ^r M(H_r;s) = s^r M(H_r;s-1). \end{aligned}$$

If \(H_r\) were continuously differentiable, it would follow that

$$\begin{aligned} s^r M(H_r;s-1) = (-1)^r M(\theta ^r H_r;s-1) \end{aligned}$$

and

$$\begin{aligned} \left( s+\frac{1}{2} \right) ^r M(H_r;s) = (-1)^r M \left( \left( \theta + \frac{1}{2} \right) ^r H_r;s \right) . \end{aligned}$$

In other words,

$$M(\theta ^r H_r;s-1) = M \left( \left( \theta + \frac{1}{2} \right) ^r H_r;s \right) ,$$

so that \(H_r(y)\) is annihilated by the differential operator \(\theta ^{r} - y(\theta + 1/2)^r\). This means that \(H_r\) satisfies the same differential equation as \({}_r F_{r-1} \left( \frac{1}{2}, \dots , \frac{1}{2};1, \dots , 1; y \right) \). Though \(H_r\) is not continuously differentiable, an argument similar to that in the proof of [14, Theorem 2.4] shows rigorously that \(H_r\) satisfies the same differential equation in a distributional sense.

\(W_r\) for \(|k| \ge 2^r\)

In this part, we compute the value of \(W_r(k;s)\) for \(|k| \ge 2^r\).

Define for \(s \in \mathbb {C}\) the following function of \(z \in \mathbb {C}\):

$$\begin{aligned} F_{r,s}(z) := \int _{z - 2^r}^{z + 2^r} x^s \hat{p}_r(x- z) \, \mathrm {d}x. \end{aligned}$$

Proposition 4.5

(Properties of \(F_{r,s}\))

  1. (i)

    \(F_{r,s}\) is analytic on \(\mathbb {C} \setminus (-\infty , 2^r]\).

  2. (ii)

\(F_{r,s}\) is continuous on \(\{ z \in \mathbb {C} \ :{\text {Im}}(z) \ge 0\}\); in particular, at real \(z \ge 0\).


This follows immediately as we can write

$$\begin{aligned} F_{r,s}(z) = \int \limits _{- 2^r}^{2^r} (x + z)^s \hat{p}_r(x) \, \mathrm {d}x \end{aligned}$$

and notice that \(z \mapsto (x + z)^s\) is holomorphic for \(z \not \in (-\infty , 2^r]\). \(\square \)

First of all, we can find an explicit expression for \(F_{r,s}(z)\) if \(|z|>2^r\). The value of \(F_{r,s}(z)\) when \(|z| \le 2^r\) will follow from the analytic continuation of \(F_{r,s}\), with the help of Proposition 4.5.

Lemma 4.6

For \(|z| > 2^r\), we have

$$F_{r,s}(z) = z^s \cdot {}_{r+1} F_{r} \left( \frac{-s}{2}, \frac{1-s}{2}, \frac{1}{2}, \dots , \frac{1}{2}; 1, \dots , 1; \frac{4^r}{z^2} \right) . $$


Note that

$$\begin{aligned} F_{r,s}(z)&= \int \limits _{-2^r}^{2^r} (x+z)^s \hat{p}_r(x) \, \mathrm {d}x, \\&= z^s\int \limits _{-2^r}^{2^r} \left( 1+ \frac{x}{z} \right) ^s \hat{p}_r(x) \, \mathrm {d}x, \end{aligned}$$

as \({\text {Re}}\left( 1+ \frac{x}{z} \right) > 0\). Since \(|x/z|<1\), we can write

$$\begin{aligned} \left( 1 + \frac{x}{z} \right) ^s = \sum _{n \ge 0} \left( {\begin{array}{c}s\\ n\end{array}}\right) \frac{x^n}{z^n}, \end{aligned}$$

so that

$$\begin{aligned} F_{r,s}(z) = z^s \sum _{n \ge 0} \left( {\begin{array}{c}s\\ n\end{array}}\right) \frac{1}{z^n} \int \limits _{-2^r}^{2^r} x^n \hat{p}_r(x) \, \mathrm {d}x. \end{aligned}$$

Now using Lemma 4.3, we find

$$\begin{aligned} \int \limits _{-2^r}^{2^r} x^n \hat{p}_r(x) \, \mathrm {d}x&= \frac{1+(-1)^n}{2}\left( \frac{2^n}{\pi } \frac{\Gamma (\frac{1}{2}) \Gamma (\frac{n+1}{2})}{\Gamma (1 + \frac{n}{2})} \right) ^r \\&= {\left\{ \begin{array}{ll} \left( {\begin{array}{c}n\\ n/2\end{array}}\right) ^r \qquad &{}\hbox {if } n\hbox { is even,} \\ 0 \qquad &{}\text {otherwise.} \end{array}\right. } \end{aligned}$$

Hence

$$\begin{aligned} F_{r,s}(z)&= z^s \sum _{n \ge 0} \left( {\begin{array}{c}s\\ 2n\end{array}}\right) \left( {\begin{array}{c}2n\\ n\end{array}}\right) ^r \frac{1}{z^{2n}}\\&= z^s \sum _{n \ge 0} \left( {\begin{array}{c}s\\ 2n\end{array}}\right) \left( \frac{(\frac{1}{2})_n \cdot 4^n}{n!} \right) ^r \frac{1}{z^{2n}}\\&= z^s \sum _{n \ge 0} \frac{(\frac{1-s}{2})_n (\frac{-s}{2})_n}{n!^2} \left( \frac{(\frac{1}{2})_n}{n!} \right) ^{r-1} \left( \frac{4^r}{z^{2}}\right) ^n\\&= z^s \cdot {}_{r+1} F_{r} \left( \frac{-s}{2}, \frac{1-s}{2}, \frac{1}{2}, \dots , \frac{1}{2}; 1, \dots , 1; \frac{4^r}{z^2} \right) . \end{aligned}$$

\(\square \)
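For non-negative integer \(s\) the hypergeometric series in Lemma 4.6 terminates, which makes the identity easy to test numerically in the case \(r = 1\), where \(\hat{p}_1(x) = \frac{1}{\pi \sqrt{4-x^2}}\) on \((-2,2)\). A Python sketch, purely illustrative (the test pairs \((s,z)\) and grid size are arbitrary):

```python
import math

def F1(s, z):
    # Lemma 4.6 for r = 1: F_{1,s}(z) = z^s · 2F1(-s/2, (1-s)/2; 1; 4/z^2);
    # for non-negative integer s the series terminates after s//2 + 1 terms
    a, b = -s / 2, (1 - s) / 2
    total, term = 1.0, 1.0
    for n in range(1, s // 2 + 1):
        term *= (a + n - 1) * (b + n - 1) / (n * n) * (4 / (z * z))
        total += term
    return z**s * total

def F1_direct(s, z, N=100_000):
    # ∫_{-2}^{2} (x+z)^s p̂_1(x) dx as the average of (z + 2 cos θ)^s
    return math.fsum((z + 2 * math.cos(2 * math.pi * (j + 0.5) / N)) ** s
                     for j in range(N)) / N

for s, z in [(2, 3.0), (3, 5.0), (4, 3.0)]:
    assert abs(F1(s, z) - F1_direct(s, z)) < 1e-9
```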

Now for \(|k| > 2^r,\) \(k \in \mathbb {R}\) we can deduce an explicit formula for \(W_r\).

Proof of Theorem 1.4.(i)

For \(|k| > 2^r\), we have

$$\begin{aligned} W_r(k;s) = F_{r,s}(|k|), \end{aligned}$$

so that

$$\begin{aligned} W_r(k;s) = |k|^s \cdot {}_{r+1} F_{r} \left( \frac{-s}{2}, \frac{1-s}{2}, \frac{1}{2}, \dots , \frac{1}{2}; 1, \dots , 1; \frac{4^r}{k^2} \right) , \end{aligned}$$

where Lemma 4.6 was used. \(\square \)

We see that, for a fixed \(|k| > 2^r\), the mapping \(s \mapsto W_r(k;s)\) defines an entire function.

Remark 1

If s is a non-negative integer, the hypergeometric sum terminates; hence \(W_r(k;s)\) is a polynomial in |k|.

The value of \(W_r(k;s)\) for \(k = \pm 2^r\) when \(r>1\) can be found by taking the corresponding limit.

Proof of Theorem 1.4.(ii)

This follows from the absolute convergence of

$$\begin{aligned} {}_{r+1} F_r \left( a_1, \dots , a_{r+1};b_1, \dots , b_r;z \right) \end{aligned}$$

on the domain \(|z| \le 1\) if

$$\begin{aligned} {\text {Re}}\left( \sum _{i =1}^{r} b_i - \sum _{j=1}^{r+1} a_j \right) >0 \end{aligned}$$

(see [15, p. 156]). \(\square \)

Example 4.7

For \(r = 1\) and \(|k|>2\), Eq. (6) gives

$$\begin{aligned} W_1(k;s) = |k|^s \cdot {}_{2} F_{1} \left( \frac{-s}{2}, \frac{1-s}{2} ; 1; \frac{4}{k^2} \right) \end{aligned}$$

and for \(|k| = 2\), Eq. (7) gives

$$\begin{aligned} W_1(k;s) = \frac{2^s \Gamma (\frac{1}{2} + s)}{\Gamma (1+\frac{s}{2}) \Gamma (\frac{1+s}{2})}, \end{aligned}$$

for \({\text {Re}}(s) > -\frac{1}{2}\). This leads to a second proof of Theorem 1.1.
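The boundary value \(W_1(\pm 2;s)\) above can be checked numerically against the defining average of \(|2 + 2\cos \theta |^s\) on the circle. A Python sketch (an illustration only; the integer test exponents are arbitrary choices):

```python
import math

def W1_boundary(s):
    # W_1(2; s) = 2^s Γ(1/2 + s) / (Γ(1 + s/2) Γ((1+s)/2)),  Re(s) > -1/2
    return (2**s * math.gamma(0.5 + s)
            / (math.gamma(1 + s / 2) * math.gamma((1 + s) / 2)))

def W1_direct(s, N=100_000):
    # average of |2 + 2 cos θ|^s over a uniform grid on the circle
    return math.fsum(abs(2 + 2 * math.cos(2 * math.pi * (j + 0.5) / N)) ** s
                     for j in range(N)) / N

for s in (1, 2, 3):
    assert abs(W1_boundary(s) - W1_direct(s)) < 1e-9
```

For instance, \(W_1(2;2)\) is the average of \((2+2\cos \theta )^2\), which is \(4 + 4 \cdot \tfrac{1}{2} \cdot 4/2 = 6\), in agreement with the gamma-quotient formula.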

\(W_r\) for \(|k| < 2^r\)

For \(|k|<2^r\), we write

$$\begin{aligned} W_r(k;s) = \int \limits _{0}^{|k|+2^r} x^s \hat{p}_r(x-|k|) \, \mathrm {d}x + \int _{0}^{-|k| + 2^r} x^s \hat{p}_r(x+|k|) \, \mathrm {d}x. \end{aligned}$$

Note that

$$\begin{aligned} \int _{|k|- 2^r}^{|k|+2^r} x^s \hat{p}_r(x-|k|) \, \mathrm {d}x = \int _{0}^{|k|+2^r} x^s \hat{p}_r(x-|k|) \, \mathrm {d}x + e^{\pi i s} \int _{0}^{-|k|+2^r} x^s \hat{p}_r(x+|k|) \, \mathrm {d}x, \end{aligned}$$

and similarly for \(F_{r,s}(-|k|)\), so that

$$\begin{aligned} W_r(k;s)&= \frac{1}{1+e^{\pi i s}} \left( \int \limits _{|k| - 2^r}^{|k|+2^r} x^s \hat{p}_r(x-|k|) \, \mathrm {d}x + \int \limits _{-|k| - 2^r}^{-|k|+2^r} x^s \hat{p}_r(x+|k|) \, \mathrm {d}x \right) \\&= \frac{1}{1+e^{\pi i s}} \left( F_{r,s}(|k|) + F_{r,s}(-|k|) \right) . \end{aligned}$$

We now define the following analytic continuations \(F_{r,s}^{+}\) and \(F_{r,s}^{-}\). Let \(F_{r,s}^{+}(z)\) be the analytic continuation of the function

$$\begin{aligned} z^s \cdot {}_{r+1} F_{r} \left( \frac{-s}{2}, \frac{1-s}{2}, \frac{1}{2}, \dots , \frac{1}{2}; 1, \dots , 1; \frac{4^r}{z^2} \right) \end{aligned}$$

on the complement of the upper half-disk \(D(0,2^r)^\mathsf {c} \cap \mathbb {H}^{+}\) to \(\mathbb {C} \setminus ((-\infty ,0] \cup [2^r, \infty ))\), while \(F_{r,s}^{-}(z)\) is the analytic continuation of the function

$$(-z)^s \cdot {}_{r+1} F_{r} \left( \frac{-s}{2}, \frac{1-s}{2}, \frac{1}{2}, \dots , \frac{1}{2}; 1, \dots , 1; \frac{4^r}{z^2} \right) $$

on the complement of the lower half-disk \(D(0,2^r)^\mathsf {c} \cap \mathbb {H}^{-}\) to \(\mathbb {C} \setminus ((-\infty ,0] \cup [2^r, \infty ))\). Here \(\mathbb {H}^+\) and \(\mathbb {H}^{-}\) denote the (strict) upper- and lower-half planes, respectively.

Using Proposition 4.5, it is clear that for \(|k| < 2^r\) we have

$$\begin{aligned} F_{r,s}(|k|) = F_{r,s}^{+}(|k|) \quad \text { and } \quad F_{r,s}(-|k|) = F_{r,s}^{-}(|k|). \end{aligned}$$

Define

$$\begin{aligned} H_{r,s}(z) := \frac{1}{1+e^{\pi i s}} \left( F_{r,s}^{+}(z) + F_{r,s}^{-}(z) \right) , \end{aligned}$$

which is analytic on \(\mathbb {C} \setminus ((-\infty ,0] \cup [2^r, \infty ))\), motivated by the following theorem.

Theorem 4.8

For \(|k| < 2^r\),

$$\begin{aligned} W_r(k;s) = H_{r,s}(|k|). \end{aligned}$$

In fact, for real \(s\) we have additional structure.

Lemma 4.9

For real positive s, we have

$$\begin{aligned} \overline{F_{r,s}(-|k|)} = e^{-\pi i s}F_{r,s}(|k|). \end{aligned}$$


For notational convenience assume \(0 \le k < 2^r\). Then

$$\begin{aligned} F_{r,s}(-k) = \int \limits _{k}^{2^r} (x-k)^s \hat{p}_r(x) \,\mathrm {d}x + e^{\pi i s} \int _{-k}^{2^r} (x+k)^s \hat{p}_r(x) \,\mathrm {d}x \end{aligned}$$

and taking the complex conjugate, we deduce that

$$\begin{aligned} \overline{F_{r,s}(-k)}&= e^{-\pi i s}\int \limits _{-2^r}^{-k} (x+k)^s \hat{p}_r(x) \,\mathrm {d}x + e^{-\pi i s} \int \limits _{-k}^{2^r} (x+k)^s \hat{p}_r(x) \,\mathrm {d}x\\&= e^{-\pi i s}F_{r,s}(k), \end{aligned}$$

which is the desired claim. \(\square \)

Now we establish Theorem 1.5.

Proof of Theorem 1.5

Observe that for real s Theorem 1.4 coincides with the statement of Theorem 1.5, as

$$\begin{aligned} {}_{r+1} F_{r} \left( \frac{-s}{2}, \frac{1-s}{2}, \frac{1}{2}, \dots , \frac{1}{2}; 1, \dots , 1; \frac{4^r}{k^2} \right) \end{aligned}$$

is real-valued for \(|k| \ge 2^r\). Therefore, we can assume \(|k| < 2^r\). Then by Lemma 4.9 and Theorem 4.8 we obtain

$$\begin{aligned} W_r(k;s)&= \frac{1}{1+e^{\pi i s}} (F_{r,s}(|k|) + e^{\pi i s}\overline{F_{r,s}(|k|)}) \\&= {\text {Re}} F_{r,s}(|k|) + \tan \left( \frac{\pi s}{2} \right) {\text {Im}} F_{r,s}(|k|). \end{aligned}$$

\(\square \)
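The algebra in the last display can be illustrated numerically for \(r = 1\): splitting the integral defining \(F_{1,s}(|k|)\) into the parts where \(k + 2\cos \theta \) is positive and negative, the combination \({\text {Re}}\, F + \tan (\pi s/2)\, {\text {Im}}\, F\) recovers \(W_1(k;s)\) exactly. A Python sketch (the values of \(k\), \(s\) and the grid size are arbitrary choices):

```python
import math
import cmath

k, s, N = 1.0, 0.7, 100_000
ys = [2 * math.cos(2 * math.pi * (j + 0.5) / N) for j in range(N)]

# Split the average of (k + Y)^s by the sign of k + Y, using the branch
# (negative number)^s = e^{iπs} |negative number|^s as in the text.
P = math.fsum((k + y) ** s for y in ys if k + y > 0) / N
Q = math.fsum((-(k + y)) ** s for y in ys if k + y < 0) / N

F = P + cmath.exp(1j * math.pi * s) * Q   # discretisation of F_{1,s}(|k|)
W = P + Q                                 # discretisation of W_1(k;s)

# Theorem 1.5 for r = 1: W = Re F + tan(πs/2) Im F
assert abs(F.real + math.tan(math.pi * s / 2) * F.imag - W) < 1e-9
```

The identity holds exactly for the discretised sums (not just in the limit), since \(\cos \pi s + 2\sin ^2(\pi s/2) = 1\).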

Remark 2

Note that \(F_{r,s}^{+}(z), F_{r,s}^{-}(z)\) and hence also \(H_{r,s}(z)\) satisfy the same differential equation as

$$\begin{aligned} {}_{r+1} F_{r} \left( \frac{-s}{2}, \frac{1-s}{2}, \frac{1}{2}, \dots , \frac{1}{2}; 1, \dots , 1; \frac{4^r}{z^2} \right) . \end{aligned}$$

It is clear from the definition that \(F_{r,s}^{+}(z) = F_{r,s}^{-}(-z)\) for \(z \in D(0,2^r)^\mathsf {c} \cap \mathbb {H}^{+}\); therefore, by uniqueness of analytic continuation we conclude the following.

Proposition 4.10

For all \(z \in \mathbb {H}^{+},\)

$$\begin{aligned} F_{r,s}^{+}(z) = F_{r,s}^{-}(-z). \end{aligned}$$

Proposition 4.11

(Real differentiability at \(k = 2^r\)) Suppose \({\text {Re}}(s) > n - \frac{r}{2}\). Then \(k \mapsto W_r(k;s)\) is n times (real) differentiable at \(k = 2^r\).


We will prove that \(k \mapsto W_r(k;s)\) is differentiable at \(k = 2^r \) if \({\text {Re}}(s) > 1 - \frac{r}{2}\). The general case follows by induction. For \(k > 2^r\), we have \(W_r(k;s) = F_{r,s}(k)\), so that the right derivative at \(k = 2^r\) is given by

$$\begin{aligned} \int _{-2^r}^{2^r} s(x + 2^r)^{s-1} \hat{p}_r(x) \, \mathrm {d}x = s F_{r,s-1}(2^r) \end{aligned}$$

when \({\text {Re}}(s) > 1 - \frac{r}{2}\). For \(|k|< 2^r\), we have

$$\begin{aligned} W_r(k;s) = \frac{1}{1 + e^{\pi i s}} \left( F_{r,s}(k) + F_{r,s}(-k) \right) . \end{aligned}$$

Therefore, the left derivative at \(k = 2^r\) is given by

$$\begin{aligned} \frac{1}{1+e^{\pi i s}} \left( s F_{r,s-1}(2^r) + e^{\pi i s} sF_{r,s-1}(2^r) \right) = s F_{r,s-1}(2^r). \end{aligned}$$

This means that at \(k = 2^r\) the left and right derivatives coincide. \(\square \)

Proposition 4.12

(Real derivatives at \(k = 0\)) The right real derivatives \((\mathrm {d}^j/\mathrm {d}k^j)^{+}\) for \(0 \le j \le \lfloor {{\text {Re}}(s)}\rfloor \) of \(k \mapsto W_r(k;s)\) at \(k=0\) are given by

$$\begin{aligned} \frac{\mathrm {d}^jW_r(k;s)}{\mathrm {d}k^j}^{+}\Biggr |_{k = 0} = {\left\{ \begin{array}{ll} 0 \qquad &{}\hbox {if }j\hbox { is odd,}\\ \frac{\Gamma (s+1)\, \Gamma (s-j+1)^{r-1} }{\Gamma (1+\frac{s-j}{2})^{2r}} \qquad &{}\hbox {if}\, j\hbox { is even.} \end{array}\right. } \end{aligned}$$


For \(k \ge 0\), we have

$$\begin{aligned} \frac{\mathrm {d}^jW_r(k;s)}{\mathrm {d}k^j}^{+} = s(s-1)\frac{\mathrm {d}^{j-2} W_r(k;s-2)}{\mathrm {d}k^{j-2}}^{+}. \end{aligned}$$

By induction,

$$\begin{aligned} \frac{\mathrm {d}^jW_r(k;s)}{\mathrm {d}k^j}^{+}&= s(s-1) \cdots (s-j+1) W_r(k;s-j)\\&= \frac{\Gamma (s+1)}{\Gamma (s-j+1)}W_r(k;s-j) \end{aligned}$$

for even j. Finally, notice that

$$\begin{aligned} \frac{\mathrm {d}W_r(k;s)}{\mathrm {d}k}^{+}\Bigr |_{k = 0} = 0. \end{aligned}$$

\(\square \)

We now return to Theorem 1.1.(iii) and give another proof of it.

Proof of Theorem 1.1.(iii)

For \(|z|>2\), we have that

$$\begin{aligned} F_{1,s}(z) = z^s \cdot {}_{2} F_{1} \left( \frac{-s}{2}, \frac{1-s}{2} ; 1; \frac{4}{z^2} \right) \end{aligned}$$

implying that \(F_{1,s}\) is a solution to the differential equation

$$\begin{aligned} \left( -\frac{1}{4}z^2+1 \right) \frac{\mathrm {d}^2Y}{\mathrm {d}z^2}+\left( \frac{1}{2}sz - \frac{1}{4}z\right) \frac{\mathrm {d}Y}{\mathrm {d}z} -\frac{1}{4}s^2Y = 0. \end{aligned}$$

Equation (17) has a basis of solutions

$$\begin{aligned} Y_0(z)&= {}_{2} F_{1} \left( \frac{-s}{2}, \frac{-s}{2} ; \frac{1}{2}; \frac{z^2}{4} \right) ,\\ Y_1(z)&= z \cdot {}_{2} F_{1} \left( \frac{1-s}{2}, \frac{1-s}{2} ; \frac{3}{2}; \frac{z^2}{4} \right) , \end{aligned}$$

so that \(F_{1,s}^{+}(z) = C_0 Y_0(z) + C_1 Y_1(z)\), \(F_{1,s}^{-}(z) = \tilde{C_0} Y_0(z) + \tilde{C_1} Y_1(z)\) and \(H_{1,s}(z) = D_0 Y_0(z) + D_1 Y_1(z)\) for constants \(C_0, C_1, \tilde{C_0}, \tilde{C_1}, D_0\) and \(D_1\) depending only on s. Using Proposition 4.10, it follows that \(C_1 = -\tilde{C_1}\), hence \(D_1 = 0\).

As \(\lim _{z \rightarrow 0} H_{1,s}(z) = W_1(0;s)\) and we have

$$W_1(0;s) = \frac{2^{s} \Gamma (\frac{1}{2}) \Gamma (\frac{s+1}{2})}{\pi \Gamma (1+\frac{s}{2})},$$

it follows that

$$\begin{aligned} H_{1,s}(z) = \frac{2^{s} \Gamma \left( \frac{1}{2}\right) \Gamma \left( \frac{s+1}{2}\right) }{\pi \Gamma \left( 1+\frac{s}{2}\right) } \cdot {}_{2} F_{1} \left( \frac{-s}{2}, \frac{-s}{2} ; \frac{1}{2}; \frac{z^2}{4} \right) . \end{aligned}$$

By Theorem 4.8 and the duplication formula for the gamma function, we conclude that

$$\begin{aligned} W_{1}(k;s) = \frac{4^s \Gamma (\frac{1 + s}{2})^2}{\pi \Gamma (1 + s)} \cdot {}_{2} F_{1} \left( \frac{-s}{2}, \frac{-s}{2} ; \frac{1}{2}; \frac{k^2}{4} \right) \end{aligned}$$

for \(|k|<2\). \(\square \)
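For even integer \(s\) the \({}_2F_1\) in this closed form terminates, and the formula can be compared directly with the defining average of \((k + 2\cos \theta )^s\) for \(|k| < 2\). A Python sanity check (the test values are arbitrary choices):

```python
import math

def W1_inside(k, s):
    # W_1(k;s) = 4^s Γ((1+s)/2)^2 / (π Γ(1+s)) · 2F1(-s/2, -s/2; 1/2; k^2/4);
    # the series terminates for even non-negative integer s
    pref = 4**s * math.gamma((1 + s) / 2) ** 2 / (math.pi * math.gamma(1 + s))
    a, total, term = -s / 2, 1.0, 1.0
    for n in range(1, s // 2 + 1):
        term *= (a + n - 1) ** 2 / ((n - 0.5) * n) * (k * k / 4)
        total += term
    return pref * total

def W1_direct(k, s, N=100_000):
    # average of (k + 2 cos θ)^s over a uniform grid on the circle
    return math.fsum((k + 2 * math.cos(2 * math.pi * (j + 0.5) / N)) ** s
                     for j in range(N)) / N

for k, s in [(1.0, 2), (1.0, 4), (0.5, 6)]:
    assert abs(W1_inside(k, s) - W1_direct(k, s)) < 1e-9
```

For example, \(W_1(1;2)\) is the average of \((1 + 2\cos \theta )^2 = 1 + 4\cos \theta + 4\cos ^2 \theta \), namely \(1 + 0 + 2 = 3\), matching the closed form.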

Remark 3

In the proof above we see that \(H_{1,s}(z)\) extends to an analytic function in a neighborhood of \(z = 0\). We expect that this happens only in the case \(r = 1\).

Remark 4

We used the symmetry of \(H_{1,s}\) to show that \(D_1 = 0\). Alternatively, using

$$\begin{aligned} \lim _{z \rightarrow 2} H_{1,s}(z) = W_1(2;s) = 2^s \cdot {}_{2} F_{1} \left( \frac{-s}{2}, \frac{1-s}{2}; 1 ;1 \right) \end{aligned}$$

would lead to the same conclusion.

For \(r=2\) we follow the strategy used in the case \(r = 1\).

Proof of Theorem 1.6

We have, for \(|z|>4\),

$$\begin{aligned} F_{2,s}(z) = z^s \cdot {}_{3} F_{2} \left( \frac{-s}{2}, \frac{1-s}{2}, \frac{1}{2} ; 1, 1; \frac{16}{z^2} \right) , \end{aligned}$$

hence \(F_{2,s}\) is a solution to the differential equation

$$\begin{aligned}&\left( -\frac{1}{8}z^3+2z\right) \frac{\mathrm {d}^3Y}{\mathrm {d}z^3} + \left( \frac{3}{8}sz^2 - \frac{3}{8}z^2 - 2s + 2 \right) \frac{\mathrm {d}^2Y}{\mathrm {d}z^2}\nonumber \\&\quad +\left( -\frac{3}{8}s^2z + \frac{3}{8}sz - \frac{1}{8}z \right) \frac{\mathrm {d}Y}{\mathrm {d}z} + \frac{1}{8}s^3Y =0. \end{aligned}$$

A basis of solutions for (18) is given by

$$\begin{aligned} Y_0(z)&= {}_{3} F_{2} \left( \frac{-s}{2}, \frac{-s}{2}, \frac{-s}{2} ; \frac{1-s}{2}, \frac{1}{2}; \frac{z^2}{16} \right) ,\\ Y_1(z)&= z \cdot {}_{3} F_{2} \left( \frac{1-s}{2}, \frac{1-s}{2}, \frac{1-s}{2} ; 1-\frac{s}{2}, \frac{3}{2}; \frac{z^2}{16} \right) , \\ Y_2(z)&= z^{1+s} \cdot {}_{3} F_{2} \left( \frac{1}{2}, \frac{1}{2}, \frac{1}{2} ; 1+\frac{s}{2}, \frac{3}{2} + \frac{s}{2}; \frac{z^2}{16} \right) . \end{aligned}$$

It follows that \(H_{2,s}(z) = D_0 Y_0(z) + D_1 Y_1(z) + D_2 Y_2(z)\) for some constants \(D_0,D_1\) and \(D_2\) depending only on s. Using the same argument as in the second proof of Theorem 1.1.(iii), it follows that \(D_1 = 0\). Since

$$\begin{aligned} \lim _{z \rightarrow 0}H_{2,s}(z) = W_{2}(0;s) = \frac{\Gamma (s+1)^2}{\Gamma (\frac{s}{2}+1)^4} \end{aligned}$$

and

$$\begin{aligned} \lim _{z \rightarrow 4}H_{2,s}(z) = W_{2}(4;s) = 4^s \cdot {}_{3} F_{2} \left( \frac{-s}{2}, \frac{1-s}{2}, \frac{1}{2} ; 1, 1; 1 \right) , \end{aligned}$$

we find that

$$\begin{aligned} {\begin{matrix} H_{2,s}(z) =&{}\frac{\Gamma (s+1)^2}{\Gamma (\frac{s}{2}+1)^4} \cdot Y_0(z) \\ {} &{}-\frac{4^s {}_3 F_{2} \left( \frac{1}{2},\frac{-s}{2},\frac{1-s}{2};1,1;1 \right) - \frac{\Gamma (s+1)^2}{\Gamma (\frac{s}{2}+1)^4} {}_3 F_{2} \left( \frac{-s}{2} ,\frac{-s}{2},\frac{-s}{2}; \frac{1-s}{2},\frac{1}{2}; 1 \right) }{4^{1+s} \cdot {}_3 F_{2} \left( \frac{1}{2},\frac{1}{2} ,\frac{1}{2};1 +\frac{s}{2}, \frac{3}{2} + \frac{s}{2} ; 1 \right) } \cdot Y_2(z).\\ \end{matrix}} \end{aligned}$$

We can further simplify the coefficient in front of \(Y_2\). We have the following hypergeometric identity for the special value at \(z=1\) (see [8]):

$$\begin{aligned} {}_3 F_{2} \left( a_1,a_2,a_3;b_1,b_2;1 \right) =&\frac{\Gamma (1-a_2)\Gamma (a_3-a_1)\Gamma (b_1)\Gamma (b_2)}{\Gamma (a_1-a_2+1)\Gamma (a_3)\Gamma (b_1-a_1)\Gamma (b_2-a_1)} \\ {}&\quad \times {}_3 F_{2} \left( a_1,a_1-b_1+1,a_1-b_2+1;a_1-a_2+1,a_1-a_3+1;1 \right) \\&+\frac{\Gamma (1-a_2)\Gamma (a_1-a_3)\Gamma (b_1)\Gamma (b_2)}{\Gamma (a_1)\Gamma (a_3-a_2+1)\Gamma (b_1-a_3)\Gamma (b_2-a_3)} \\ {}&\quad \times {}_3 F_{2} \left( a_3,a_3-b_1+1,a_3-b_2+1;a_3-a_1+1,a_3-a_2+1;1 \right) . \end{aligned}$$

Applying it with \(a_1 = 1/2, a_2 = 1/2-s/2, a_3 = -s/2, b_1 = 1\) and \( b_2 = 1\) gives

$$\begin{aligned} {\begin{matrix} {}_3 F_{2} \left( \frac{1}{2},\frac{1-s}{2},\frac{-s}{2};1,1;1 \right) =\frac{\Gamma (\frac{1}{2}+\frac{s}{2})\Gamma (\frac{-s}{2}-\frac{1}{2})}{\Gamma (\frac{s}{2}+1)\Gamma (\frac{-s}{2}) \pi } &{} {}_3 F_{2} \left( \frac{1}{2},\frac{1}{2},\frac{1}{2};1+\frac{s}{2},\frac{3}{2}+\frac{s}{2};1 \right) \\ +\frac{\Gamma (\frac{1}{2}+\frac{s}{2})^2}{\Gamma (1+\frac{s}{2})\pi } &{} {}_3 F_{2} \left( \frac{-s}{2},\frac{-s}{2},\frac{-s}{2};\frac{1-s}{2},\frac{1}{2};1\right) . \end{matrix}} \end{aligned}$$

Hence \(H_{2,s}\) can be written as

$$\begin{aligned} H_{2,s}(z)&= \frac{1}{2 \pi } \frac{\tan (\frac{\pi s}{2})}{s+1} z^{1+s} \cdot {}_3 F_{2} \left( \frac{1}{2},\frac{1}{2} ,\frac{1}{2};1 +\frac{s}{2}, \frac{3}{2} + \frac{s}{2} ; \frac{z^2}{16} \right) \\&\quad + \frac{\Gamma (s+1)^2}{\Gamma (\frac{s}{2}+1)^4} \cdot {}_3 F_{2} \left( -\frac{s}{2} ,-\frac{s}{2},-\frac{s}{2}; \frac{1-s}{2},\frac{1}{2}; \frac{z^2}{16} \right) . \end{aligned}$$

It remains to apply Theorem 4.8 to arrive at the formula for \(W_2(k;s)\). \(\square \)
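For even integer \(s\) the factor \(\tan (\pi s/2)\) in this formula vanishes and the remaining \({}_3F_2\) terminates, so Theorem 1.6 can be checked against the defining average over the 2-torus. A Python sanity check (an illustration only; grid size and test values are arbitrary choices):

```python
import math

def W2_even(k, s):
    # Theorem 1.6 at even integer s: the tan(πs/2) term vanishes, leaving
    # W_2(k;s) = Γ(s+1)^2/Γ(s/2+1)^4 · 3F2(-s/2,-s/2,-s/2; (1-s)/2,1/2; k^2/16),
    # a terminating sum.
    pref = math.gamma(s + 1) ** 2 / math.gamma(s / 2 + 1) ** 4
    a, b1, b2 = -s / 2, (1 - s) / 2, 0.5
    total, term = 1.0, 1.0
    for n in range(1, s // 2 + 1):
        term *= (a + n - 1) ** 3 / ((b1 + n - 1) * (b2 + n - 1) * n) * (k * k / 16)
        total += term
    return pref * total

def W2_direct(k, s, N=1_000):
    # average of (k + 4 cos θ1 cos θ2)^s over the 2-torus, midpoint grid
    cs = [math.cos(2 * math.pi * (j + 0.5) / N) for j in range(N)]
    return math.fsum((k + 4 * c1 * c2) ** s for c1 in cs for c2 in cs) / N**2

for k, s in [(1.0, 2), (2.0, 4)]:
    assert abs(W2_even(k, s) - W2_direct(k, s)) < 1e-8
```

For instance, at \(s = 2\) both sides reduce to \(k^2 + 4\).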

Remark 5

Notice that \(H_{2,s}(z)\) is no longer analytic at \(z = 0\).

For odd positive values of s we need to compute the corresponding limits. We present an explicit formula, which can no longer be written in terms of hypergeometric functions.

Theorem 4.13

For odd positive integers n and \(|k|<4\),

$$\begin{aligned} W_2(k;n) = (-1)^{\frac{n+1}{2}} \frac{2^n n!}{\pi ^3} \cdot G_{3,3}^{2,3} \left( 1 + \frac{n}{2}, 1 + \frac{n}{2},1 + \frac{n}{2};0,\frac{n+1}{2},\frac{1}{2};\frac{k^2}{16}\right) . \end{aligned}$$


For \({\text {Re}}(s)>-1\) with \(s\) not an odd integer, write

$$\begin{aligned} \cos \left( \frac{\pi s}{2} \right) W_2(k;s) = \frac{1}{2 \pi } \frac{\sin \left( \frac{\pi s}{2} \right) }{s+1} F_1 + \frac{\Gamma (s+1)^2 \pi }{\Gamma (\frac{s}{2} + 1)^4 \Gamma (\frac{1+s}{2})} \frac{F_2}{\Gamma (\frac{1-s}{2})}, \end{aligned}$$

where

$$\begin{aligned} F_1 = |k|^{1+s} \cdot {}_3 F_{2} \left( \frac{1}{2},\frac{1}{2} ,\frac{1}{2};1 +\frac{s}{2}, \frac{3}{2} + \frac{s}{2} ; \frac{k^2}{16} \right) \end{aligned}$$

and

$$\begin{aligned} F_2 = {}_3 F_{2} \left( -\frac{s}{2} ,-\frac{s}{2},-\frac{s}{2}; \frac{1-s}{2},\frac{1}{2}; \frac{k^2}{16} \right) . \end{aligned}$$

Using Mathematica, we have

$$\begin{aligned} \cos \left( \frac{\pi s}{2} \right)&G = -\frac{2^{-1-s} \pi ^2}{\Gamma (2+s)} F_1 + \sqrt{\pi } \Gamma \left( -\frac{s}{2} \right) ^3 \frac{F_2}{\Gamma (\frac{1-s}{2})}, \end{aligned}$$

where

$$\begin{aligned} G = G_{3,3}^{2,3} \left( 1 + \frac{s}{2}, 1 + \frac{s}{2},1 + \frac{s}{2};0,\frac{s+1}{2},\frac{1}{2};\frac{k^2}{16}\right) . \end{aligned}$$

Therefore,

$$\begin{aligned} W_2(k;s)&= \left( \frac{\tan \left( \frac{\pi s}{2} \right) }{2 \pi (s+1)} + \frac{2^{-1-s} \pi ^{5/2} \Gamma (s+1)^2}{\cos \left( \frac{\pi s}{2} \right) \Gamma (2+s)\Gamma (\frac{s}{2}+1)^4 \Gamma (\frac{1+s}{2}) \Gamma (-\frac{s}{2})^3}\right) F_1\\&\quad + \left( \frac{\Gamma (s+1)^2 \sqrt{\pi }}{\Gamma \left( \frac{s}{2} + 1\right) ^4 \Gamma \left( \frac{1+s}{2} \right) \Gamma \left( -\frac{s}{2}\right) ^3}\right) G. \end{aligned}$$

Taking the limit \(s \rightarrow n\), for n odd and positive, gives the result. \(\square \)

Using the explicit expression for \(W_2(k;s)\) in Theorem 1.6, the Mahler measure of the Laurent polynomial \(k + (x+x^{-1})(y+y^{-1})\) can be computed for \(|k|<4\).

Corollary 4.14

([16, Theorem 3.1]) For \(|k| < 4\),

$$\begin{aligned} {\text {m}} (k + (x+x^{-1})(y+y^{-1})) = \frac{|k|}{4} {}_3 F_{2} \left( \frac{1}{2},\frac{1}{2} ,\frac{1}{2};1, \frac{3}{2} ; \frac{k^2}{16} \right) . \end{aligned}$$


The Mahler measure of \(k + (x+x^{-1})(y+y^{-1})\) can be recovered as

$$\begin{aligned} \frac{\mathrm {d}}{\mathrm {d}s} W_2(k;s) |_{s = 0}. \end{aligned}$$

Note that in the neighborhood of \(s=0\),

$$\begin{aligned}&\frac{\Gamma (s+1)^2}{\Gamma (\frac{s}{2}+1)^4} = 1 + \mathcal {O}(s^2),\\&{}_3 F_{2} \left( \frac{-s}{2} ,\frac{-s}{2},\frac{-s}{2}; \frac{1-s}{2},\frac{1}{2}; \frac{k^2}{16} \right) = 1 + \mathcal {O}(s^2) \end{aligned}$$

and

$$\begin{aligned} \frac{\tan (\frac{\pi s}{2})}{s+1} = \frac{\pi s}{2}+\mathcal {O}(s^2). \end{aligned}$$

Therefore,

$$\begin{aligned}\frac{\mathrm {d}W_{2}(k;s)}{\mathrm {d}s}\Bigr |_{s = 0}&= \frac{|k|}{4} {}_3 F_{2} \left( \frac{1}{2},\frac{1}{2} ,\frac{1}{2};1, \frac{3}{2} ; \frac{k^2}{16} \right) . \end{aligned}$$

\(\square \)
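Corollary 4.14 can be verified numerically: integrating \(y\) out first with the classical evaluation \(\frac{1}{2\pi }\int _0^{2\pi } \log |a + b\cos \theta |\, \mathrm {d}\theta = \log \frac{|a| + \sqrt{a^2-b^2}}{2}\) for \(|a| \ge |b|\) (and \(\log \frac{|b|}{2}\) otherwise; a consequence of Jensen's formula) reduces the Mahler measure to a one-dimensional integral. A Python sketch, not part of the argument (grid size and test values are arbitrary choices):

```python
import math

def m_hyper(k, terms=60):
    # Corollary 4.14: |k|/4 · 3F2(1/2,1/2,1/2; 1, 3/2; k^2/16)
    z, total, term = k * k / 16, 1.0, 1.0
    for n in range(1, terms):
        term *= (n - 0.5) ** 3 / (n * (n + 0.5) * n) * z
        total += term
    return abs(k) / 4 * total

def m_direct(k, N=200_000):
    # Integrate y out first: (1/2π) ∫ log|k + b cos θ| dθ equals
    # log((|k| + sqrt(k² - b²))/2) if |k| ≥ |b|, and log(|b|/2) otherwise.
    def inner(b):
        if abs(k) >= abs(b):
            return math.log((abs(k) + math.sqrt(max(k * k - b * b, 0.0))) / 2)
        return math.log(abs(b) / 2)
    return math.fsum(inner(4 * math.cos(2 * math.pi * (j + 0.5) / N))
                     for j in range(N)) / N

for k in (1.0, 2.5):
    assert abs(m_hyper(k) - m_direct(k)) < 1e-5
```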

Remark 6

The Mahler measure \({\text {m}} (k + (x+x^{-1})(y+y^{-1}))\) can also be written as the double integral

$$\begin{aligned} {\text {m}} (k + (x+x^{-1})(y+y^{-1})) = \frac{|k|}{8 \pi }\int _{[0,1]^2} \frac{\mathrm {d}x_1 \, \mathrm {d}x_2}{\sqrt{x_1 x_2(1-x_2)(1-x_1x_2\frac{k^2}{16})}} \end{aligned}$$

using Corollary 4.14 and [7, Eq. (16.5.2)].

For \(r \ge 3\) and \(|k|<2^r\) we expect that \(W_r(k;s)\) can no longer be written as a linear combination of hypergeometric functions (nor of their real and imaginary parts, as in Theorem 1.5). We now give the proof of Theorem 1.7.

Proof of Theorem 1.7

For \(|k| < 8\) and real \(s>1\), we have

$$\begin{aligned} W_3(k;s)&= |k|^s{\text {Re}} \left( {}_{4} F_{3} \left( \frac{-s}{2}, \frac{1-s}{2}, \frac{1}{2}, \dots , \frac{1}{2}; 1, \dots , 1; \frac{64}{k^2} \right) \right) \nonumber \\&\quad + \tan \left( \frac{ \pi s}{2} \right) |k|^s {\text {Im}} \left( {}_{4} F_{3} \left( \frac{-s}{2}, \frac{1-s}{2}, \frac{1}{2}, \dots , \frac{1}{2}; 1, \dots , 1; \frac{64}{k^2} \right) \right) . \end{aligned}$$

Let \(\epsilon > 0\). Then for \(|k|<8\) we have

$$\begin{aligned} |k|^s {}_{4} F_{3} \left( \frac{-s}{2}, \frac{1-s}{2}, \frac{1}{2},\frac{1}{2} + \epsilon ; 1, 1 , 1; \frac{64}{k^2} \right) = \alpha _1(s)Y_1(k) + \dots + \alpha _4(s)Y_4(k), \end{aligned}$$

where

$$\begin{aligned} Y_1(k)&= {}_4 F_{3} \left( \frac{-s}{2},\frac{-s}{2},\frac{-s}{2},\frac{-s}{2};\frac{1-s}{2}-\epsilon ,\frac{1-s}{2},\frac{1}{2};\frac{k^2}{64}\right) ,\\ Y_2(k)&= |k| \cdot {}_4 F_{3} \left( \frac{1-s}{2},\frac{1-s}{2},\frac{1-s}{2},\frac{1-s}{2};\frac{3}{2},1 - \frac{s}{2},1 - \epsilon - \frac{s}{2};\frac{k^2}{64}\right) ,\\ Y_3(k)&= |k|^{1+s} {}_4 F_{3} \left( \frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2};1-\epsilon ,1+\frac{s}{2},\frac{3+s}{2};\frac{k^2}{64}\right) ,\\ Y_4(k)&= |k|^{1+s+2 \epsilon } {}_4 F_{3} \left( \frac{1}{2} + \epsilon ,\frac{1}{2} + \epsilon ,\frac{1}{2} + \epsilon ,\frac{1}{2} + \epsilon ;1+\epsilon ,1+\frac{s}{2} + \epsilon ,\frac{3+s}{2}+\epsilon ;\frac{k^2}{64}\right) \end{aligned}$$

and

$$\begin{aligned} \alpha _1(s)&= \frac{(8 i)^s \Gamma \left( \frac{s+1}{2}\right) \Gamma \left( \epsilon +\frac{s}{2}+\frac{1}{2}\right) }{\Gamma \left( \epsilon +\frac{1}{2}\right) \Gamma \left( \frac{1-s}{2}\right) \Gamma \left( \frac{s}{2}+1\right) ^3}, \\ \alpha _2(s)&= \frac{i^{s+1} 2^{3 s-2} \Gamma \left( \frac{s}{2}\right) \Gamma \left( \epsilon +\frac{s}{2}\right) }{\Gamma \left( \epsilon +\frac{1}{2}\right) \Gamma \left( -\frac{s}{2}\right) \Gamma \left( \frac{s+1}{2}\right) ^3},\\ {} \alpha _3(s)&=-\frac{i \Gamma (\epsilon ) \Gamma \left( -\frac{s}{2}-\frac{1}{2}\right) }{8 \pi ^{3/2} \Gamma \left( \epsilon +\frac{1}{2}\right) \Gamma \left( \frac{1-s}{2}\right) },\\ \alpha _4(s)&= -\frac{i (-1)^{-\epsilon } 8^{-2\epsilon -1} \Gamma (-\epsilon ) \Gamma \left( -\epsilon -\frac{s}{2}-\frac{1}{2}\right) \Gamma \left( -\epsilon -\frac{s}{2}\right) }{\sqrt{\pi } \Gamma \left( \frac{1}{2}-\epsilon \right) ^3 \Gamma \left( \frac{1-s}{2}\right) \Gamma \left( -\frac{s}{2}\right) }. \end{aligned}$$

We expand \(\alpha _3(s)\) and \(\alpha _4(s)\) in powers of \(\epsilon \). Note that \(\Gamma (\epsilon ) = \Gamma (\epsilon +1)/ \epsilon \), so that for s fixed

$$\begin{aligned} \alpha _3(s) = \frac{1}{\epsilon } \cdot \frac{i}{4 \pi ^2 (1+s)} + \mathcal {O}(1) \end{aligned}$$

and

$$\begin{aligned} \alpha _4(s) = \frac{1}{\epsilon } \cdot \frac{-i}{4 \pi ^2 (1+s)} + \mathcal {O}(1), \end{aligned}$$

so that

$$\begin{aligned} \lim _{\epsilon \rightarrow 0} \left( \alpha _3(s) Y_3(k) + \alpha _4(s) Y_4(k) \right)&= \lim _{\epsilon \rightarrow 0} (\alpha _3(s) + \alpha _4(s)) Y_3(k) - \lim _{\epsilon \rightarrow 0} \alpha _4(s) \epsilon \left( \frac{Y_3(k) - Y_4(k)}{\epsilon } \right) . \end{aligned}$$

By L’Hôpital’s rule,

$$\begin{aligned} \lim _{\epsilon \rightarrow 0}&(\alpha _3(s) + \alpha _4(s)) = \frac{\mathrm {d}}{\mathrm {d}\epsilon } \epsilon (\alpha _3(s) + \alpha _4(s)) |_{\epsilon =0}\\ =&-\frac{1}{4 \pi (s+1)} + i\frac{1}{4 \pi ^2(s+1)} \left( \psi (1) \right. \\&\left. + \psi \left( \frac{-s}{2}\right) +\psi \left( \frac{-s-1}{2} \right) - 3 \psi \left( \frac{1}{2} \right) + \log (256) \right) , \end{aligned}$$

where \(\psi \) is the digamma function. Furthermore, notice that

$$\begin{aligned}&G_{4,4}^{2,4} \left( \frac{2+s}{2}, \frac{2+s}{2}, \frac{2+s}{2}, \frac{2+s}{2};\frac{1+s}{2}, \frac{1+s}{2}, 0 ,\frac{1}{2} ;\frac{k^2}{64} \right) \\&= \frac{\mathrm {d}}{\mathrm {d}\epsilon } \left( - \frac{\Gamma (1-\epsilon ) \Gamma (\frac{1}{2} + \epsilon )^4}{8^{1+s+2 \epsilon }\Gamma (\frac{3+s}{2} + \epsilon ) \Gamma (1 + \frac{s}{2} + \epsilon )} Y_4(k)+\frac{\Gamma (1+\epsilon ) \Gamma (\frac{1}{2})^4}{8^{1+s}\Gamma (\frac{3+s}{2}) \Gamma (1+\frac{s}{2})} Y_3(k)\right) \Biggr |_{\epsilon = 0}\\&= \frac{\pi ^2}{8^{1+s}\Gamma (\frac{3+s}{2}) \Gamma (1+\frac{s}{2})} \left( \frac{\mathrm {d}}{\mathrm {d}\epsilon } (Y_3(k) - Y_4(k))\Bigr |_{\epsilon =0} - C Y_3(k) \Bigr |_{\epsilon = 0} \right) , \end{aligned}$$

where

$$\begin{aligned} C=-2\psi (1) + 4 \psi \left( \frac{1}{2}\right) - \psi \left( 1+\frac{s}{2}\right) - \psi \left( \frac{3+s}{2}\right) - \log (64). \end{aligned}$$

Thus we can write

$$\begin{aligned} \lim _{\epsilon \rightarrow 0}&\left( \alpha _3(s) Y_3(k) + \alpha _4(s) Y_4(k) \right) \\ {}&= \left( \frac{-1}{4 \pi (s+1)} + i\frac{ \cot (\pi s)}{2 \pi (s+1)} \right) |k|^{1+s} {}_4 F_{3} \left( \frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2};1,1+\frac{s}{2},\frac{3+s}{2};\frac{k^2}{64}\right) \\&\quad + i \frac{4^{s} \Gamma (1+s)}{\pi ^{7/2}} G_{4,4}^{2,4} \left( \frac{2+s}{2}, \frac{2+s}{2}, \frac{2+s}{2}, \frac{2+s}{2};\frac{1+s}{2}, \frac{1+s}{2}, 0 ,\frac{1}{2} ;\frac{k^2}{64} \right) . \end{aligned}$$

Finally, using (22) we arrive at the result. \(\square \)

We can alternatively represent the Meijer G-function in (8) as the following triple integral.

Proposition 4.15

For \({\text {Re}}(s)>-1\) and \(0< |k| < 8\),

$$\begin{aligned}&G^{2,4}_{4,4} \left( \frac{2+s}{2},\frac{2+s}{2},\frac{2+s}{2},\frac{2+s}{2};\frac{1+s}{2}, \frac{1+s}{2},0,\frac{1}{2}; \frac{k^2}{64}\right) \\&\quad = \frac{ \sqrt{\pi }|k|^{1+s}}{\Gamma (1 +s) 2^{3+2s}}\int _{[0,1]^3} \frac{ (1-x_2)^{\frac{s}{2}} (1-x_3)^{\frac{s-1}{2}} \, \mathrm {d}x_1 \, \mathrm {d}x_2 \, \mathrm {d}x_3}{\sqrt{x_1 x_2 x_3 (1-x_1)(1-x_1 + \frac{k^2}{64}x_1x_2x_3)}}. \end{aligned}$$


First, for all \(z \in \mathbb {C}\) we have

$$\begin{aligned}&G^{2,4}_{4,4} \left( \frac{2+s}{2},\frac{2+s}{2},\frac{2+s}{2},\frac{2+s}{2};\frac{1+s}{2}, \frac{1+s}{2},0,\frac{1}{2}; z \right) \\&= z^{\frac{1+s}{2}} G^{2,4}_{4,4} \left( \frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2};0, 0,-\frac{1+s}{2},-\frac{s}{2}; z\right) . \end{aligned}$$

Applying Nesterenko’s theorem [17, Proposition 1] with \(a_0 = \dots = a_3 = \frac{1}{2}\), \(b_1 = 1\), \(b_2 =1 + \frac{s}{2}\) and \(b_3 = \frac{3+s}{2}\) to the right-hand side, at \(z = \frac{k^2}{64}\), gives

$$\begin{aligned} G^{2,4}_{4,4}&\left( \frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2};0, 0,-\frac{1+s}{2},-\frac{s}{2}; \frac{k^2}{64}\right) \\&= \frac{2^s \sqrt{\pi }}{\Gamma (1+s)}\int _{[0,1]^3} \frac{ (1-x_2)^{\frac{s-1}{2}} (1-x_3)^{\frac{s}{2}} \mathrm {d}x_1 \mathrm {d}x_2 \mathrm {d}x_3}{\sqrt{x_1 x_2 x_3 (1-x_1)(1-x_1 + \frac{k^2}{64}x_1x_2x_3)}}. \end{aligned}$$

\(\square \)

As a consequence of Theorem 1.7 we can compute the Mahler measure of the polynomial \(k + (x+x^{-1})(y+y^{-1})(z+z^{-1})\) for \(|k|<8\).

Corollary 4.16

The Mahler measure of \(k + (x+x^{-1})(y+y^{-1})(z+z^{-1})\) for \(|k|<8\) is given by

$$\begin{aligned} \frac{1}{2 \pi ^{5/2}} G_{4,4}^{2,4} \left( 1, 1, 1, 1;\frac{1}{2}, \frac{1}{2}, 0 ,\frac{1}{2} ;\frac{k^2}{64} \right) . \end{aligned}$$


We expand Eq. (8) in s. Note that

$$\begin{aligned}&\frac{\tan (\frac{\pi s}{2})^2}{4 \pi (1+s)} = \mathcal {O}(s^2),\\&\quad {}_4 F_{3} \left( \frac{-s}{2},\frac{-s}{2},\frac{-s}{2},\frac{-s}{2};\frac{1-s}{2},\frac{1-s}{2},\frac{1}{2};\frac{k^2}{64}\right) = 1 + \mathcal {O}(s^2), \end{aligned}$$

and

$$\begin{aligned} \frac{4^s \tan (\frac{\pi s}{2}) \Gamma (s+1)}{\pi ^{7/2}}&=\frac{1}{2 \pi ^{5/2}}s+\mathcal {O}(s^2). \end{aligned}$$

Therefore,

$$\begin{aligned} \frac{\mathrm {d}}{\mathrm {d}s}W_3(k;s) |_{s = 0}&= \frac{1}{2 \pi ^{5/2}} G_{4,4}^{2,4} \left( 1, 1, 1, 1;\frac{1}{2}, \frac{1}{2}, 0 ,\frac{1}{2} ;\frac{k^2}{64} \right) . \end{aligned}$$

\(\square \)

We can further write this Mahler measure for \(0<|k|<8\) as the triple integral (compare with equation (21)):

$$\begin{aligned}&{\text {m}}(k + (x+x^{-1})(y+y^{-1})(z+z^{-1})) \nonumber \\&\qquad =\frac{|k|}{16 \pi ^2} \int _{[0,1]^3} \frac{\mathrm {d}x_1 \mathrm {d}x_2 \mathrm {d}x_3}{\sqrt{x_1 x_2 x_3(1-x_1)(1-x_2)(1-x_1 + \frac{k^2}{64} x_1x_2x_3)}}. \end{aligned}$$

Concluding remarks

Using Theorem 1.5, explicit formulas for \(W_r\) in terms of hypergeometric functions and Meijer G-functions can be found for general r; see Theorems 1.6 and 1.7. In this way, the Mahler measure of the polynomials \(k + (x_1+x_1^{-1}) \cdots (x_r + x_r^{-1})\) could be written in terms of certain Meijer G-functions, although it is not expected that this Mahler measure can be written as a single Meijer G-function, like in formulae (20) and (23).

The result in Theorem 1.3 about the location of the zeros of \(W_1\) does not seem to generalize to \(W_r\). Furthermore, the numerics suggest that there is no functional equation in the general case. The zeros do seem to exhibit a pattern, though one that is hard to recognize.

From an arithmetic point of view, it could be interesting to simplify the expression (19) of \(W_2(k;n)\) for specific odd integers n and integers k. This can be compared to the case \(r=1\), where it can be shown, for example, that \(W_1(1;n) \in \mathbb {Q} + \frac{\sqrt{3}}{\pi } \mathbb {Q}\).

For non-negative integers \(s\), the expression (6) of \(W_r(k;s)\) for \(k > 2^r\) is a polynomial in k. For the case \(r=1\), the induced polynomials are, up to an appropriate transformation of the variable, equal to the Legendre polynomials. This implies that the zeros of the induced polynomials have a very specific structure. This structure seems to generalize to the induced polynomials \(W_r(k;s)\) for general r. More specifically, we expect all the zeros of these polynomials to lie on the imaginary axis. Discussion of this theme is outside the scope of this paper.
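The transformation alluded to here can be made explicit through the Laplace integral representation \(P_n(t) = \frac{1}{\pi }\int _0^{\pi } (t + \sqrt{t^2-1}\cos \theta )^n \, \mathrm {d}\theta \) of the Legendre polynomials, which after a direct substitution yields \(W_1(k;n) = (k^2-4)^{n/2}\, P_n\bigl ( k/\sqrt{k^2-4}\bigr )\) for \(k > 2\). A Python sanity check of this identity (the test values are arbitrary choices):

```python
import math

def legendre(n, t):
    # Bonnet recursion: (m+1) P_{m+1}(t) = (2m+1) t P_m(t) - m P_{m-1}(t)
    p0, p1 = 1.0, t
    for m in range(1, n):
        p0, p1 = p1, ((2 * m + 1) * t * p1 - m * p0) / (m + 1)
    return p1 if n > 0 else 1.0

def W1(k, n, N=100_000):
    # W_1(k; n) = average of (k + 2 cos θ)^n over the circle, k > 2
    return math.fsum((k + 2 * math.cos(2 * math.pi * (j + 0.5) / N)) ** n
                     for j in range(N)) / N

k = 3.0
for n in (2, 3, 5):
    scaled = (k * k - 4) ** (n / 2) * legendre(n, k / math.sqrt(k * k - 4))
    assert abs(W1(k, n) - scaled) < 1e-8
```

For example, \(W_1(3;2) = 9 + 2 = 11\) and \(5 \cdot P_2(3/\sqrt{5}) = 5 \cdot \frac{11}{5} = 11\).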


  1. Bernstein, I.N., Gelfand, S.I.: Meromorphy of the function \(P^{\lambda }\). Funkcional. Anal. i Priložen. 3(1), 84–85 (1969)


  2. Atiyah, M.F.: Resolution of singularities and division of distributions. Commun. Pure Appl. Math. 23, 145–150 (1970)


  3. Bernstein, I.N.: Analytic continuation of generalized functions with respect to a parameter. Funkcional. Anal. i Priložen. 6(4), 26–40 (1972)


  4. Cassaigne, J., Maillot, V.: Hauteur des hypersurfaces et fonctions zêta d’Igusa. J. Number Theory 83(2), 226–255 (2000)


  5. Akatsuka, H.: Zeta Mahler measures. J. Number Theory 129(11), 2713–2734 (2009)


  6. Brunault, F., Zudilin, W.: Many Variations of Mahler Measures: A Lasting Symphony. Australian Mathematical Society Lecture Series. Cambridge University Press, Cambridge (2020)

  7. Olver, F.W., Lozier, D.W., Boisvert, R.F., Clark, C.W. (eds.): NIST Handbook of Mathematical Functions. U.S. Department of Commerce, National Institute of Standards and Technology, Washington, DC; Cambridge University Press, Cambridge (2010)

  8. Bailey, W.N.: Generalized Hypergeometric Series, Cambridge Tracts in Mathematics and Mathematical Physics. The University Press, Cambridge (1935)


  9. Lebedev, N.N.: Special Functions and Their Applications. Revised English edition, translated and edited by Richard A. Silverman. Prentice-Hall, Englewood Cliffs, NJ (1965)

  10. Koornwinder, T.H.: Jacobi functions and analysis on noncompact semisimple Lie groups. In: Special Functions: Group Theoretical Aspects and Applications, Math. Appl., pp. 1–85. Reidel, Dordrecht (1984)

  11. Watson, G.N.: A Treatise on the Theory of Bessel Functions. Cambridge University Press, Cambridge (1944)


  12. Wolfram Research, Inc.: Wolfram programming lab, Version 12.1, Champaign, IL (2020)

  13. Paris, R.B., Kaminski, D.: Asymptotics and Mellin-Barnes integrals, Encyclopedia of Mathematics and its Applications, vol. 85. Cambridge University Press, Cambridge (2001)

  14. Borwein, J.M., Straub, A., Wan, J., Zudilin, W., Zagier, D.: Densities of short uniform random walks. Can. J. Math. 64(5), 961–990 (2012)


  15. Luke, Y.L.: Mathematical Functions and Their Approximations. Academic Press, New York (1975)


  16. Rogers, M.: Hypergeometric formulas for lattice sums and Mahler measures. Int. Math. Res. Not. IMRN 17, 4027–4058 (2011)


  17. Zudilin, W.: Arithmetic of linear forms involving odd zeta values. J. Théor. Nombres Bordeaux 16(1), 251–291 (2004)




Acknowledgements

The author would like to thank Tom Koornwinder for his help with the proof of Theorem 1.3, and Frits Beukers for his help with the proof of Theorem 1.4. Thanks also to Riccardo Pengo for his helpful comments, and to Wadim Zudilin for his support and useful comments. Finally, the author thanks the referee for the careful reading and valuable comments.



Corresponding author

Correspondence to Berend Ringeling.

Ethics declarations

Data Availability

Data sharing is not applicable.


This work is supported by NWO Grant OCENW.KLEIN.006.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License.


Cite this article

Ringeling, B. Special zeta Mahler functions. Res. number theory 8, 29 (2022).



Keywords

  • Mahler measure
  • Hypergeometric function
  • Zeta function