1 Introduction

A classical approach to special functions, developed in the 19th century, was to classify them from the point of view of solutions to second-order differential equations with analytic coefficients

$$\begin{aligned} a(z) \frac{d^{2}u}{dz^{2}} + b(z) \frac{du}{dz} + c(z) u = 0. \end{aligned}$$
(1.1)

Gauss understood that the singularities of this equation are central to this classification. In particular, whether \(\infty \) is a singularity of (1.1) plays an important role. The notion of regular singularities and the well-known Frobenius method to produce solutions arose from this point of view. The reader will find information about these ideas in [20, 27].

It is a classical result that any differential equation with 3 regular singular points (on \({\mathbb {C}} \cup \{ \infty \}\)) can be transformed to the hypergeometric differential equation

$$\begin{aligned} z(1-z) \frac{d^{2}w}{dz^{2}} + [ c - (a+b-1)z] \frac{dw}{dz} - a b w = 0, \end{aligned}$$
(1.2)

with singularities at \(0, \, 1, \, \infty \). The solution of (1.2), normalized by \(w(0) = 1\), is the hypergeometric function, defined by the series expansion

$$\begin{aligned} {}_{2}F_{1} \left( \genfrac{}{}{0.0pt}{}{a \,\,\, b}{c} \bigg | {z} \right) = \sum _{n=0}^{\infty } \frac{(a)_{n} (b)_{n}}{(c)_{n} \, n!} z^{n}, \end{aligned}$$
(1.3)

where \((q)_{n}\) is the Pochhammer symbol defined by

$$\begin{aligned} (q)_{n} = {\left\{ \begin{array}{ll} 1 &{} \text { if } n = 0 \\ q(q+1) \cdots (q+n-1) &{} \text { if } n > 0. \end{array}\right. } \end{aligned}$$
(1.4)
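As a quick numerical illustration of (1.3) and (1.4), the following minimal sketch (written in Python with the mpmath library; the parameter values and the truncation order N are arbitrary choices, not part of the text) sums the series directly and compares the result with mpmath's built-in hypergeometric function.

```python
# Sketch: partial sums of the 2F1 series (1.3), built from the Pochhammer
# symbol (1.4), compared with mpmath's built-in hyp2f1.
from mpmath import mp, rf, factorial, hyp2f1

mp.dps = 30                       # working precision (arbitrary choice)

def f21_series(a, b, c, z, N=200):
    # sum_{n=0}^{N} (a)_n (b)_n / ((c)_n n!) z^n
    return sum(rf(a, n) * rf(b, n) / (rf(c, n) * factorial(n)) * z**n
               for n in range(N + 1))

a, b, c, z = 0.7, 1.3, 2.1, 0.4   # sample parameters with |z| < 1
print(f21_series(a, b, c, z))
print(hyp2f1(a, b, c, z))         # the two values should agree
```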

Among the many representations of the hypergeometric function we single out the so-called Mellin–Barnes integral

$$\begin{aligned} {}_{2}F_{1} \left( \genfrac{}{}{0.0pt}{}{a \,\,\, b}{c} \bigg | {z} \right)= & {} \frac{\Gamma (c)}{\Gamma (a) \Gamma (b)} \frac{1}{2 \pi i } \nonumber \\&\times \!\int _{\gamma }\! \frac{\Gamma (a+s) \Gamma (b+s) \Gamma (-s)}{\Gamma (c+s)} (-z)^{s} \, ds,\nonumber \\ \end{aligned}$$
(1.5)

where the contour of integration \(\gamma \) joins \(- i \infty \) to \(+ i \infty \) and separates the poles of \(\Gamma (-s)\) (namely \(0, \, 1, \, 2, \ldots \)) from those of \(\Gamma (a+s) \Gamma (b+s)\) (located at \(-a, \, -a-1, \ldots , -b, -b-1, \ldots \)). In the general case, the poles of the gamma factors are located on horizontal semi-axes. The contour \(\gamma \) has to be chosen to separate those moving to the right from those moving to the left. Examples may be found in [25, 26].
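The representation (1.5) can be tested numerically by quadrature along a vertical line. The sketch below is only a sanity check: the contour abscissa, the parameter values and the truncation of the contour to a finite segment are ad hoc choices.

```python
# Sketch: numerical check of the Mellin-Barnes representation (1.5) along a
# vertical contour Re(s) = s0 separating the two families of poles.
from mpmath import mp, mpc, gamma, quad, hyp2f1, pi

mp.dps = 25
a, b, c = 0.7, 1.3, 2.1
z = mp.mpf('-0.3')                # z < 0 keeps (-z)^s unambiguous
s0 = -0.4                         # -min(a, b) < s0 < 0

def integrand(t):
    s = mpc(s0, t)
    return gamma(a + s) * gamma(b + s) * gamma(-s) / gamma(c + s) * (-z)**s

# ds = i dt, so 1/(2*pi*i) becomes 1/(2*pi); the integrand decays like
# exp(-pi*|t|), so a finite segment of the line suffices.
mb = gamma(c) / (gamma(a) * gamma(b)) * quad(integrand, [-40, 40]) / (2 * pi)
print(mb)                         # tiny imaginary part is numerical noise
print(hyp2f1(a, b, c, z))
```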

The main type of integral considered in this work comes from Mellin transforms. This concept is recalled next.

Definition 1.1

Given a function f(x), defined on the positive real axis \({\mathbb {R}}^{+}\), its Mellin transform is defined by

$$\begin{aligned} \varphi (s) = ({\mathcal {M}}f)(s) = \int _{0}^{\infty } x^{s-1} f(x) \, dx. \end{aligned}$$
(1.6)

This relation may be inverted by a line integral

$$\begin{aligned} f(x) = ({\mathcal {M}}^{-1} \varphi )(x) = \frac{1}{2 \pi i } \int _{\gamma } x^{-s} \varphi (s) \, ds, \end{aligned}$$
(1.7)

by an appropriate choice of the contour \(\gamma \) of the type described above.

The goal of the current work is to present a procedure to evaluate inverse Mellin transforms based on the method of brackets. This method evaluates definite integrals and is described in Sect. 2. The class of functions considered here is of the form

$$\begin{aligned} \varphi (s) = \frac{\prod \nolimits _{j=1}^{M} \Gamma \left( a_{j}+A_{j}s\right) }{\prod \nolimits _{j=1}^{N}\Gamma \left( b_{j}+B_{j}s\right) }\frac{\prod \nolimits _{j=1}^{P}\Gamma \left( c_{j}-C_{j}s\right) }{\prod \nolimits _{j=1}^{Q}\Gamma \left( d_{j}-D_{j}s\right) }. \end{aligned}$$
(1.8)

For simplicity the parameters \(A_{j}, \, B_{j}, \, C_{j}, \, D_{j}\) are taken real and positive. Integrals of the form (1.7) with an integrand of the type (1.8) will be referred to as Mellin–Barnes integrals.

Remark 1.2

In high energy physics Mellin–Barnes integrals appear frequently at intermediate steps of calculations [2, 3, 12, 14,15,16, 21, 25]. These are contour integrals in which the integrands are quotients of Euler gamma functions, as in (1.8). Such contour integrals provide a typical representation of generalized hypergeometric functions. The Mellin–Barnes representation of denominators in integrands corresponding to Feynman diagrams in the momentum space representation is a standard procedure in quantum field theory. This approach helps to calculate very difficult Feynman diagrams [25], and this is the reason why Mellin–Barnes integrals usually appear at the penultimate step of a calculation.

Remark 1.3

In the previous work [13] it was mentioned that it would be interesting to combine Mellin–Barnes integrals and the method of brackets, due to the similarity of these methods. This combination is presented here. In addition, several known relations for generalized hypergeometric functions have been reestablished by combining the method of brackets and Mellin–Barnes transformations. Our investigation has been motivated by the work of Prausa [23], which proposes an interesting symbolic procedure; the present work compares it with the method of brackets.

2 The method of brackets

This is a method that evaluates definite integrals over the half line \([0, \, \infty )\). The application of the method consists of a small number of rules, deduced in heuristic form, some of which have been placed on solid ground [5, 17, 18].

For \(a \in {\mathbb {R}}\), the symbol

$$\begin{aligned} \langle a \rangle = \int _{0}^{\infty }x^{a-1} \, dx \end{aligned}$$
(2.1)

is the bracket associated to the (divergent) integral on the right. The symbol

$$\begin{aligned} \phi _{n} = \frac{(-1)^{n}}{\Gamma (n+1)} \end{aligned}$$
(2.2)

is called the indicator associated to the index n. The notation \(\phi _{i_{1}i_{2}\cdots i_{r}}\), or simply \(\phi _{12 \cdots r}\), denotes the product \(\phi _{i_{1}} \phi _{i_{2}} \cdots \phi _{i_{r}}\).

Rules for the production of bracket series

The first part of the method is to associate to the integral

$$\begin{aligned} I(f) = \int _{0}^{\infty } f(x) \, dx \end{aligned}$$
(2.3)

a bracket series according to

\({\mathbf {Rule \, \, P_{1}}}\). Assume f has the expansion

$$\begin{aligned} f(x) = \sum _{n=0}^{\infty } \phi _{n} a_{n} x^{\alpha n + \beta -1 }. \end{aligned}$$
(2.4)

Then I(f) is assigned the bracket series

$$\begin{aligned} I(f) = \sum _{n \ge 0} \phi _{n} a_{n} \langle \alpha n + \beta \rangle . \end{aligned}$$
(2.5)

\({\mathbf {Rule \, \, P_{2}}}\). For \(\alpha \in {\mathbb {R}}\), the multinomial power \((a_{1} + a_{2} + \cdots + a_{r})^{\alpha }\) is assigned the r-dimensional bracket series

$$\begin{aligned} \sum _{n_{1} \ge 0} \sum _{n_{2} \ge 0} \cdots \sum _{n_{r} \ge 0} \phi _{n_{1}\, n_{2} \, \cdots n_{r}} a_{1}^{n_{1}} \cdots a_{r}^{n_{r}} \frac{\langle -\alpha + n_{1} + \cdots + n_{r} \rangle }{\Gamma (-\alpha )}.\nonumber \\ \end{aligned}$$
(2.6)

(See [17] for details.)

Example 2.1

The effect of Rule \(P_{2}\) is to replace (part of) an integrand by a bracket series. For example, the integral

$$\begin{aligned} I(a,c;\alpha , \beta ) = \int _{0}^{\infty } \frac{dx}{(ax^{\beta } + c)^{\alpha }} \end{aligned}$$
(2.7)

can be evaluated directly with the change of variables \(x = (c/a)^{1/\beta } u^{1/\beta }\) to produce

$$\begin{aligned} I(a,c;\alpha ,\beta ) = \frac{1}{\beta } c^{1/\beta - \alpha } a^{-1/\beta } \int _{0}^{\infty } \frac{u^{1/\beta - 1}}{(u+1)^{\alpha }}\, du \end{aligned}$$
(2.8)

and the classical formula

$$\begin{aligned} \int _{0}^{\infty } \frac{u^{x-1} \, du}{(u+1)^{x+y}} = \frac{\Gamma (x) \, \Gamma (y)}{\Gamma (x+y)}, \end{aligned}$$
(2.9)

gives the result

$$\begin{aligned} I(a,c;\alpha ,\beta ) = \frac{c^{1/\beta -\alpha }}{\beta a^{1/\beta }} \frac{\Gamma \left( \alpha - 1/\beta \right) \Gamma \left( 1/\beta \right) }{\Gamma (\alpha )}. \end{aligned}$$
(2.10)

To evaluate this example by the method of brackets, start by using Rule \(P_{2}\) and associate to the integrand a bracket series in the form

$$\begin{aligned} (ax^{\beta }+c)^{-\alpha }= & {} \sum _{n_{1} \ge 0} \sum _{n_{2} \ge 0} \phi _{n_{1}} \phi _{n_{2}} (a x^{\beta })^{n_{1}} (c)^{n_{2}}\frac{\langle \alpha + n_{1} + n_{2} \rangle }{\Gamma (\alpha )}.\nonumber \\ \end{aligned}$$
(2.11)

The next step is to integrate over \([0, \infty )\) to produce

$$\begin{aligned} I(a,c;\alpha ,\beta )= & {} \sum _{n_{1} \ge 0 }\sum _{n_{2} \ge 0} \phi _{12} \frac{a^{n_{1}} c^{n_{2}}}{\Gamma (\alpha ) } \langle \alpha + n_{1} {+} n_{2} \rangle \, \langle \beta n_{1} {+} 1 \rangle .\nonumber \\ \end{aligned}$$
(2.12)

Up to this point, the integral has been reduced to a bracket series with two sums and two brackets. The evaluation of such series is described below using Rule \(E_{2}\), stated next.

\({\mathbf {Rule \, \, P_{3}}}\). Each representation of an integral by a bracket series has associated a complexity index of the representation via

$$\begin{aligned} \text {complexity index} = \text {number of sums} - \text {number of brackets}. \end{aligned}$$
(2.13)

It is important to observe that the complexity index is attached to a specific representation of the integral and not just to the integral itself. The experience obtained by the authors using this method suggests that, among all representations of an integral as a bracket series, the one with minimal complexity index should be chosen. The level of difficulty in the analysis of the resulting bracket series increases with the complexity index.

Rules for the evaluation of a bracket series

\({\mathbf {Rule \, \, E_{1}}}\). Let \(a, \, b \in {\mathbb {R}}\). The one-dimensional bracket series is assigned the value

$$\begin{aligned} \sum _{n \ge 0} \phi _{n} f(n) \langle an + b \rangle = \frac{1}{|a|} f(n^{*}) \Gamma (-n^{*}), \end{aligned}$$
(2.14)

where \(n^{*}\) is obtained from the vanishing of the bracket; that is, \(n^{*}\) solves \(an+b = 0\). This is precisely Ramanujan's master theorem.
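A minimal numerical illustration of Rule \(E_{1}\): for \(f(x) = 1/(1+x) = \sum \phi _{n} \, \Gamma (n+1) \, x^{n}\), the rule predicts \(\int _{0}^{\infty } x^{s-1} f(x) \, dx = \Gamma (s) \Gamma (1-s)\). The sketch below (Python with mpmath; the value of s is an arbitrary choice in \(0< s < 1\)) checks this by direct quadrature.

```python
# Sketch: Rule E1 (Ramanujan's master theorem) for f(x) = 1/(1+x),
# whose expansion is sum phi_n Gamma(n+1) x^n.  The rule predicts
#   int_0^inf x^(s-1)/(1+x) dx = Gamma(s)*Gamma(1-s) = pi/sin(pi*s).
from mpmath import mp, quad, gamma, pi, sin, inf

mp.dps = 25
s = 0.3                           # any 0 < s < 1 (needed for convergence)

lhs = quad(lambda x: x**(s - 1) / (1 + x), [0, inf])
rhs = gamma(s) * gamma(1 - s)
print(lhs, rhs, pi / sin(pi * s))
```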

The next rule provides a value for multi-dimensional bracket series of index 0, that is, the number of sums is equal to the number of brackets.

\({\mathbf {Rule \, \, E_{2}}}\). Let \(a_{ij} \in {\mathbb {R}}\). Assuming the matrix \(A = (a_{ij})\) is non-singular, then the assignment is

$$\begin{aligned}&\sum _{n_{1} \ge 0} \cdots \sum _{n_{r} \ge 0} \phi _{n_{1} \cdots n_{r}} f(n_{1},\cdots ,n_{r}) \\&\qquad \langle a_{11}n_{1} + \cdots + a_{1r}n_{r} + c_{1} \rangle \cdots \langle a_{r1}n_{1} + \cdots + a_{rr}n_{r} + c_{r} \rangle \\&\quad = \frac{1}{| \text {det}(A) |} f(n_{1}^{*}, \cdots , n_{r}^{*}) \Gamma (-n_{1}^{*}) \cdots \Gamma (-n_{r}^{*}) \end{aligned}$$

where \(\{ n_{i}^{*} \}\) is the (unique) solution of the linear system obtained from the vanishing of the brackets. There is no assignment if A is singular.

Example \(\mathbf {2.1}\): continuation. The two-dimensional bracket series in Example 2.1

$$\begin{aligned} I(a,c;\alpha ,\beta )\!= & {} \! \sum _{n_{1} \ge 0 }\sum _{n_{2} \ge 0} \phi _{12} \frac{a^{n_{1}} c^{n_{2}}}{\Gamma (\alpha ) } \langle \alpha + n_{1} + n_{2} \rangle \, \langle \beta n_{1} {+} 1 \rangle .\nonumber \\ \end{aligned}$$
(2.15)

is now evaluated using Rule \(E_{2}\). The linear system to be solved is

$$\begin{aligned} n_{1} = - \frac{1}{\beta }, \quad n_{1}+n_{2} = - \alpha , \end{aligned}$$

and since the determinant of the associated matrix is \(-\beta \ne 0\), there is a unique solution. This is given by

$$\begin{aligned} n_{1}^{*} = -\frac{1}{\beta }, \quad \text {and} \quad n_{2}^{*} = \frac{1}{\beta } - \alpha . \end{aligned}$$
(2.16)

Using these values in Rule \(E_{2}\) reproduces (2.10).
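The same computation can be reproduced symbolically. The sketch below (Python with sympy) simply transcribes Rule \(E_{2}\) for the two brackets in (2.15); it is an illustration under the stated assumptions rather than a general implementation of the method.

```python
# Sketch: Rule E2 applied to the bracket series (2.15).  The brackets are
# <alpha + n1 + n2> and <beta*n1 + 1>, and f(n1, n2) = a^n1 c^n2 / Gamma(alpha).
import sympy as sp

a, c, alpha, beta, n1, n2 = sp.symbols('a c alpha beta n1 n2', positive=True)

A = sp.Matrix([[1, 1], [beta, 0]])     # coefficient matrix of (n1, n2) in the brackets
rhs = sp.Matrix([-alpha, -1])          # right-hand sides from the vanishing brackets
n1s, n2s = tuple(A.solve(rhs))         # n1* = -1/beta, n2* = 1/beta - alpha

f = a**n1 * c**n2 / sp.gamma(alpha)
value = (f.subs({n1: n1s, n2: n2s}) * sp.gamma(-n1s) * sp.gamma(-n2s)
         / sp.Abs(A.det()))
print(sp.simplify(value))              # should reproduce (2.10)
```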

\({\mathbf {Rule \, \, E_{3}}}\). The value of a multi-dimensional bracket series of positive complexity index is obtained by computing all the contributions of maximal rank by Rule \(E_{2}\). These contributions to the integral appear as series in the free indices. Series converging in a common region are added and divergent or null series are discarded. There is no assignment to a bracket series of negative complexity index. If all the resulting series are discarded, then the method is not applicable.

Remark 2.2

There is a small collection of formal operational rules for brackets. These will be used in the calculations presented below.

Rule 2.1

For any \(\alpha \in {\mathbb {R}}\) the bracket satisfies \(\langle - \alpha \rangle = \langle \alpha \rangle \).

Proof

This follows from the change of variables \(x \mapsto 1/x\) in \(\langle - \alpha \rangle = \int _{0}^{\infty } x^{-\alpha - 1} \, dx. \) \(\square \)

A similar change of variables gives the next scaling rule:

Rule 2.2

For any \(\alpha , \, \beta , \, \gamma \in {\mathbb {R}}\) with \(\alpha \ne 0\) the bracket satisfies

$$\begin{aligned} \langle \alpha \gamma + \beta \rangle = \frac{1}{| \alpha |} \left\langle \gamma + \frac{\beta }{\alpha } \right\rangle . \end{aligned}$$
(2.17)

This can be deduced from Rule \(E_{1}\).

Rule 2.3

For any \(\alpha , \, \beta \in {\mathbb {R}}\) with \(\alpha \ne 0\), any \(n \in {\mathbb {N}}\) appearing as the index of a sum, and any allowable function F, the identity

$$\begin{aligned} F(n) \langle \alpha n + \beta \rangle = \frac{1}{| \alpha |} F \left( - \frac{\beta }{\alpha } \right) \left\langle n + \frac{\beta }{\alpha } \right\rangle \end{aligned}$$
(2.18)

holds, in the sense that any appearance of the left-hand side in a bracket series may be replaced by the right-hand side.

Proof

This follows directly from the rule \(E_{1}\) to evaluate bracket series. \(\square \)

3 Some operational rules for integration

This section describes the relation between line integrals, like those appearing in the inverse Mellin transform, and the method of brackets. The rules given here complement those stated above for discrete sums. The results presented here have appeared in [23] as an extension of the method of brackets and were used to produce minimal Mellin–Barnes representations of integrals appearing in connection with Feynman diagrams.

3.1 Brackets and the fundamental rule of integration

Let f be a function defined on \({\mathbb {R}}^{+}\) and consider its Mellin transform

$$\begin{aligned} F(s) = \int _{0}^{\infty } x^{s-1} f(x) \, dx, \end{aligned}$$
(3.1)

with inversion rule

$$\begin{aligned} f(x) = \frac{1}{2 \pi i } \int _{\gamma } x^{-s} F(s) \, ds, \end{aligned}$$
(3.2)

with the usual convention on the contour \(\gamma \). Then, replacing (3.2) into (3.1) produces

$$\begin{aligned} F(s)= & {} \int _{0}^{\infty } x^{s-1} \left[ \frac{1}{2 \pi i } \int _{\gamma } x^{-s'} F(s') \, ds' \right] \, dx \nonumber \\= & {} \frac{1}{2 \pi i } \int _{\gamma } F(s') \left[ \int _{0}^{\infty } x^{s - s' - 1} \, dx \right] \, ds' \nonumber \\= & {} \frac{1}{2 \pi i } \int _{\gamma } F(s') \langle s - s' \rangle \, ds'. \end{aligned}$$
(3.3)

This proves:

Rule 3.1

The rule for integration with respect to brackets is given by

$$\begin{aligned} \int _{\gamma } F(s) \langle s + \alpha \rangle \, ds = 2 \pi i F(-\alpha ) \end{aligned}$$
(3.4)

where \(\gamma \) is a contour of the usual type. The generalization

$$\begin{aligned} \int _{\gamma } F(s) \langle \beta s + \alpha \rangle \, ds = \frac{2 \pi i }{| \beta | } F \left( - \frac{\alpha }{\beta } \right) , \quad \text {with} \,\, \beta \in {\mathbb {R}}, \, \beta \ne 0 \end{aligned}$$
(3.5)

can be obtained from Rule 2.2.

3.2 Mellin transform

This section contains a brief review of the Mellin transform. Recall that this is defined by

$$\begin{aligned} {\mathcal {M}}(f)(z) = \int _0^{\infty } x^{z-1}f(x)~dx, \end{aligned}$$
(3.6)

where the arguments are the function f to be transformed and the variable z appearing in the integral. The inverse Mellin transformation is

$$\begin{aligned} f(x) = \frac{1}{2\pi i}\int _{c-i\infty }^{c + i\infty }x^{-z}{\mathcal {M}}(f)(z) \, dz \quad \text { for } x \in (0,\infty ).\nonumber \\ \end{aligned}$$
(3.7)

The point c associated to the contour of integration must be in the vertical strip \(c_1< c < c_2\), with boundaries determined by the condition that

$$\begin{aligned} \int _0^{1} x^{c_1-1}f(x)~dx&\mathrm{and }&\int _1^{\infty } x^{c_2-1}f(x)~dx \end{aligned}$$
(3.8)

must be finite. This is satisfied if the function f satisfies the growth conditions

$$\begin{aligned} |f(x)|< & {} 1/x^{c_1} \quad \mathrm{when} \; x \rightarrow +0, \text { and }\\ |f(x)|< & {} 1/x^{c_2} \quad \text { when} \; x \rightarrow + \infty . \end{aligned}$$

The conditions (3.8) imply that the Mellin transform \({\mathcal {M}}(f)(z)\) is holomorphic in the vertical strip \(c_1< \mathrm{Re}~z < c_2\). The asymptotic behavior of the integrand is then used to determine the direction in which a finite segment of the vertical line contour is closed, in order to produce a closed contour to which Cauchy's integral theorem may be applied. The singularities of the integrand are then used to analyze the behavior of the integrals as the finite segment becomes infinite.

One of the simplest examples of the Mellin transformation is

$$\begin{aligned}&\Gamma (z) = \int _0^{\infty } e^{-x}x^{z-1}~dx \quad \mathrm{and }\\&e^{-x} = \frac{1}{2\pi i}\int _{c-i\infty }^{c + i\infty }x^{-z} \Gamma (z) ~dz. \end{aligned}$$

The contour in the complex plane is the vertical line \(\mathrm{Re}~z = c\), with c in the strip \(0< c < A\), where A is any positive real number; to recover \(e^{-x}\) the vertical line contour must be closed to the left. It is convenient to include here a proof of equations (3.6)–(3.7). This is classical and may be found in any textbook on the theory of complex variables; it is reproduced here for pedagogical reasons. First, use the fact that \({\mathcal {M}}(f)(z)\) is holomorphic in the strip \(c_1< \mathrm{Re}~z < c_2\). Then, taking \(\delta > 0\) to be infinitesimally small,

$$\begin{aligned} {\mathcal {M}}(f) (z)= & {} \int _0^{\infty } x^{z-1}f(x)~dx \nonumber \\= & {} \frac{1}{2\pi i}\int _0^{\infty } x^{z-1} dx\int _{c-i\infty }^{c + i\infty }x^{-\omega } {\mathcal {M}}(f)(\omega ) ~d\omega \nonumber \\= & {} \frac{1}{2\pi i}\int _0^1 x^{z-1} dx\int _{c-i\infty }^{c + i\infty }x^{-\omega } {\mathcal {M}}(f)(\omega ) ~d\omega \nonumber \\&+ \frac{1}{2\pi i}\int _1^\infty x^{z-1} dx\int _{c-i\infty }^{c + i\infty }x^{-\omega } {\mathcal {M}}(f) (\omega ) ~d\omega \nonumber \\= & {} \frac{1}{2\pi i}\int _0^1 x^{z-1} dx\int _{c_1-\delta -i\infty }^{c_1-\delta + i\infty }x^{-\omega } {\mathcal {M}}(f)(\omega ) ~d\omega \nonumber \\&+ \frac{1}{2\pi i}\int _1^\infty \! x^{z-1} dx\int _{c_2+\delta -i\infty }^{c_2+\delta + i\infty }\!\! x^{-\omega } {\mathcal {M}}(f)(\omega ) ~d\omega \nonumber \\= & {} \frac{1}{2\pi i}\int _{c_1-\delta -i\infty }^{c_1-\delta + i\infty }\frac{{\mathcal {M}}(f)(\omega )}{z-\omega } ~d\omega \nonumber \\&- \frac{1}{2\pi i}\int _{c_2+\delta -i\infty }^{c_2+\delta + i\infty }\frac{{\mathcal {M}}(f)(\omega )}{z-\omega } ~d\omega \nonumber \\= & {} \frac{1}{2\pi i}\int _{c_1-\delta +i\infty }^{c_1-\delta - i\infty }\frac{{\mathcal {M}}(f)(\omega )}{\omega -z} ~d\omega \nonumber \\&+ \frac{1}{2\pi i}\int _{c_2+\delta -i\infty }^{c_2+\delta + i\infty }\frac{{\mathcal {M}}(f)(\omega )}{\omega -z} ~d\omega \nonumber \\= & {} \frac{1}{2\pi i}\oint _{CR}\frac{{\mathcal {M}}(f)(\omega )}{\omega -z} ~d\omega , \end{aligned}$$
(3.9)

where CR is a rectangular contour constructed from the two vertical lines appearing in (3.9), closed by two horizontal segments at imaginary infinity within the strip \(c_1< \mathrm{Re}~z < c_2\). The contour CR is traversed in the counterclockwise orientation.
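As a numerical sanity check of the model pair \(\Gamma (z) \leftrightarrow e^{-x}\) discussed above, the line integral can be evaluated by quadrature along a vertical line. In the sketch below the abscissa c, the sample point x and the truncation of the contour are ad hoc choices.

```python
# Sketch: numerical inversion of the Mellin pair Gamma(z) <-> exp(-x)
# along the vertical line Re(z) = c.
from mpmath import mp, mpc, gamma, quad, exp, pi

mp.dps = 25
c = 1.5                            # any abscissa to the right of the poles of Gamma
x = mp.mpf('0.8')

def integrand(t):
    z = mpc(c, t)
    return x**(-z) * gamma(z)

val = quad(integrand, [-60, 60]) / (2 * pi)   # dz = i dt cancels the i in 1/(2*pi*i)
print(val)                         # negligible imaginary part
print(exp(-x))
```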

The proof of the inverse transformation is even simpler and may be used to define the Dirac \(\delta \)-function. Observe that

$$\begin{aligned} f(x)= & {} \frac{1}{2\pi i}\int _{c-i\infty }^{c + i\infty }x^{-z} {\mathcal {M}}(f)(z) ~dz \nonumber \\= & {} \frac{1}{2\pi i}\int _{c-i\infty }^{c + i\infty }x^{-z}~dz\int _0^{\infty } y^{z-1}f(y)~dy \nonumber \\= & {} \int _0^{\infty }\delta (\ln {(y/x)}) y^{-1}f(y)~dy = f(x), \end{aligned}$$
(3.10)

which is valid in view of the relation

$$\begin{aligned} \frac{1}{2\pi i} \int _{c - i\infty }^{c + i\infty }e^{(x-y)z} dz= & {} \frac{1}{2\pi } \int _{-\infty }^{\infty }e^{(x-y)(c +i\tau )} d\tau \nonumber \\= & {} \frac{e^{(x-y)c}}{2\pi }\int _{-\infty }^{\infty }e^{i(x-y)\tau } d\tau \nonumber \\= & {} e^{(x-y)c}\delta (x-y) = \delta (x-y).\nonumber \\ \end{aligned}$$
(3.11)

This proof of the inverse Mellin transformation is due to D. Hilbert and may be found in classical textbooks on complex analysis. In this paper we show that the method of brackets, when applied to Mellin integrals, is in some sense equivalent to this old proof. More precisely, we argue that by using the same procedure, namely dividing the integration over \(x \in [0,\infty )\) into \(x \in [0,1]\) and \(x \in [1,\infty )\) and creating a closed contour in the complex plane, one may map all the rules of the method of brackets to the Cauchy integral formula.

In high energy particle physics, the transformation to Mellin moments is frequently used to solve integro-differential equations describing the evolution of important physical quantities (see, for example, [4]). The inverse transformation of a Mellin moment has exactly the same form as the inverse Mellin transformation. The question then arises: if the inverse Mellin transformation returns some function of a real variable, how do we know whether we came back to this function from its Mellin moments or from its Mellin transform in the complex plane? The Mellin moments can be distinguished from the Mellin transforms by studying the asymptotic behaviour of the given function at complex infinity.

Remark 3.1

The question of complexity, as introduced in Rule \(\mathbf {P_{3}}\), is now extended. In the process of evaluating an integral by the method of brackets, define \(\sigma \) to be the number of sums plus the number of contour integrals appearing and \(\kappa \) to be the number of brackets plus the number of integrals on the half-line \([0, \, \infty )\) that appear. The (generalized) index of complexity is \(\iota = \sigma - \kappa \). This index should be seen as a measure of difficulty in the evaluation of the integral by the method of brackets. In the case \(\iota = 0\), the answer is given by a single term. For \(\iota > 0\), the gamma factors appearing in the numerators of the line integrals must be expanded in bracket series. This guarantees that the method provides all series representations of the solution. As usual, series converging in a common region must be added. The heuristic observation is that the bracket/line-integral representation of a problem should be chosen so as to minimize the index \(\iota \).

3.3 Multiple integrals

The method discussed here can be extended to evaluate multiple integrals with a bracket representation

$$\begin{aligned} J= & {} \left( \frac{1}{2 \pi i} \right) ^{N} \int _{\gamma _{1}} ds_{1} \cdots \int _{\gamma _{N}} ds_{N} \, F(s_{1}, \ldots , s_{N}) \nonumber \\&\langle a_{11}s_{1} + \cdots + a_{1N}s_{N} + c_{1} \rangle \cdots \nonumber \\&\langle a_{N1}s_{1} + \cdots + a_{NN}s_{N} + c_{N} \rangle , \end{aligned}$$
(3.12)

by using the one-dimensional rule in iterated form. The expression for J has the form

$$\begin{aligned} J = \frac{1}{| \det (A) |} F(s_{1}^{*}, \cdots , s_{N}^{*}) \end{aligned}$$
(3.13)

where \(A = (a_{ij})\), with \(a_{ij} \in {\mathbb {R}}\) and \(\{ s_{i}^{*} \} \, \, (i=1, \ldots , N)\) is the solution of the system \(A \vec {s} = - \vec {c}\) produced by the vanishing of the arguments in the brackets appearing in this process.

4 Transforming Mellin–Barnes integrals to bracket series

This section evaluates Mellin–Barnes integrals by transforming them into a bracket series. The rules of Sect. 2 are then used to produce an analytic expression for the integral.

Lemma 4.1

The gamma function has the bracket series representation

$$\begin{aligned} \Gamma (\alpha ) = \sum _{n=0}^{\infty } \phi _{n} \langle \alpha + n \rangle . \end{aligned}$$
(4.1)

Proof

This follows simply from expanding \(e^{-t}\) in power series in the integral representation of the gamma function to obtain

$$\begin{aligned} \Gamma (\alpha )= & {} \int _{0}^{\infty } t^{\alpha -1} e^{-t} \, dt = \int _{0}^{\infty } t^{\alpha - 1} \sum _{n=0}^{\infty } \frac{(-1)^{n}}{n!} t^{n} \, dt \nonumber \\= & {} \sum _{n=0}^{\infty } \phi _{n} \langle \alpha + n \rangle . \end{aligned}$$
(4.2)

\(\square \)

In this section we present a systematic procedure to evaluate Mellin–Barnes integrals of the form

$$\begin{aligned} I(x)\!= & {} \!\frac{1}{2 \pi i }\! \int _{\gamma }\! x^{-s} \frac{ \prod \nolimits _{j=1}^{M} \Gamma (a_{j} \!+\! A_{j}s) \prod \nolimits _{j=1}^{P} \Gamma (c_{j} - C_{j}s) }{ \prod \nolimits _{j=1}^{N} \Gamma (b_{j} \!+\! B_{j}s) \prod \nolimits _{j=1}^{Q} \Gamma (d_{j} - D_{j} s) } \, ds\nonumber \\ \end{aligned}$$
(4.3)

where \(\gamma \) is the usual vertical line contour. A similar argument has appeared in [23]. The idea is to use the method of brackets to produce the bracket series associated to (4.3). The parameters satisfy \(a_{j}, \, b_{j}, \, c_{j}, \, d_{j} \in {\mathbb {C}}\) and \(A_{j}, \, B_{j}, \, C_{j}, \, D_{j} \in {\mathbb {R}}^{+}\), with the index j in the corresponding range; for instance, the index j associated to \(a_{j}, \, A_{j}\) varies from 1 to M.

The procedure to obtain the bracket series is systematic: the gamma factors in the numerator are replaced using formula (4.1) (the gamma factors in the denominator do not contribute):

$$\begin{aligned} \prod _{j=1}^{M} \Gamma (a_{j}+A_{j}s)= & {} \prod _{j=1}^{M} \left[ \sum _{k_{j}} \phi _{k_{j}} \langle a_{j} + A_{j} s + k_{j} \rangle \right] \\= & {} \sum _{k_{1}} \cdots \sum _{k_{M}} \phi _{k_{1} \cdots k_{M}} \prod _{j=1}^{M} \langle a_{j} + A_{j} s + k_{j} \rangle \nonumber \end{aligned}$$
(4.4)

and similarly

$$\begin{aligned} \prod _{j=1}^{P} \Gamma (c_{j}-C_{j}s)= & {} \prod _{j=1}^{P} \left[ \sum _{\ell _{j}} \phi _{\ell _{j}} \langle c_{j} - C_{j} s + \ell _{j} \rangle \right] \nonumber \\= & {} \sum _{\ell _{1}} \cdots \sum _{\ell _{P}} \phi _{\ell _{1} \cdots \ell _{P}} \prod _{j=1}^{P} \langle c_{j} - C_{j} s + \ell _{j} \rangle .\nonumber \\ \end{aligned}$$
(4.5)

The rules of the method of brackets described in Sect. 2 now yield a bracket series associated with the integral (4.3). To illustrate these ideas, introduce the function

$$\begin{aligned} G(s) = \frac{1}{\prod \limits _{j=1}^{N} \Gamma (b_{j} + B_{j}s)} \times \frac{1}{\prod \limits _{j=1}^{Q} \Gamma (d_{j} - D_{j}s)}. \end{aligned}$$
(4.6)

The previous rules transform the integral I(x) in (4.3) to

$$\begin{aligned} I(x)= & {} \frac{1}{2 \pi i } \sum _{k_{1}} \cdots \sum _{k_{M}} \sum _{\ell _{1}} \cdots \sum _{\ell _{P}} \phi _{k_{1} \cdots k_{M} \ell _{1} \cdots \ell _{P}} \nonumber \\&\times \int _{\gamma } x^{-s} G(s) \left[ \prod _{j=1}^{M} \langle a_{j} + A_{j}s + k_{j} \rangle \right] \, \nonumber \\&\times \left[ \prod _{j=1}^{P} \langle c_{j} - C_{j}s + \ell _{j} \rangle \right] \, ds. \end{aligned}$$
(4.7)

This will be written in the more compact form

$$\begin{aligned} I(x)= & {} \frac{1}{2 \pi i } \sum _{\{ k \} } \sum _{\{ \ell \}} \phi _{\{k\}, \, \{\ell \}} \int _{\gamma } x^{-s} G(s)\nonumber \\&\times \left[ \prod _{j=1}^{M} \langle a_{j} + A_{j}s + k_{j} \rangle \right] \left[ \prod _{j=1}^{P} \langle c_{j} - C_{j}s + \ell _{j} \rangle \right] \, ds. \nonumber \\ \end{aligned}$$
(4.8)

Now select the bracket \(\langle a_{M} + A_{M}s + k_{M} \rangle \) to evaluate the integral (4.8). Any other choice of bracket gives an equivalent value for I(x). Start with

$$\begin{aligned} I(x)= & {} \sum _{\{ k \} } \sum _{\{ \ell \}} \phi _{ \{ k \}, \{ \ell \}} \int _{\gamma } x^{-s} G(s) \\&\times \left[ \prod _{j=1}^{M-1} \langle a_{j} + A_{j}s + k_{j} \rangle \right] \, \left[ \prod _{j=1}^{P} \langle c_{j} - C_{j}s + \ell _{j} \rangle \right] \, \\&\frac{\langle a_{M} + A_{M} s + k_{M} \rangle }{2 \pi i } \, ds. \end{aligned}$$

The rules of the method of brackets require solving the linear equation coming from the vanishing of the last bracket and then applying Rule 3.1. This produces

$$\begin{aligned} s^{*} = - \frac{a_{M} + k_{M}}{A_{M}}. \end{aligned}$$
(4.9)

Therefore

$$\begin{aligned} I(x)= & {} \frac{1}{|A_{M}|} \sum _{\{ k \} } \sum _{\{ \ell \}} \phi _{ \{ k \}, \{ \ell \}} x^{-s^{*}} G(s^{*})\nonumber \\&\times \prod _{j=1}^{M-1} \langle a_{j} + A_{j}s^{*} + k_{j} \rangle \prod _{j=1}^{P} \langle c_{j} - C_{j}s^{*} + \ell _{j} \rangle \nonumber \\ \end{aligned}$$
(4.10)

Thus, the value of the integral I(x) obtained from the selection of the bracket \(\langle a_{M} + A_{M}s + k_{M} \rangle \) is given by

$$\begin{aligned}&I(x) = \frac{x^{a_{M}/A_{M}} }{|A_{M} |} \sum _{\{ k \}} \sum _{\{\ell \}} \phi _{\{ k \}, \{ \ell \}} x^{k_{M}/A_{M}} \nonumber \\&\quad \times \frac{ \prod \nolimits _{j=1}^{M-1} \left\langle a_{j} - \frac{A_{j} a_{M} }{A_{M} } - \frac{A_{j} k_{M} }{A_{M} } + k_{j} \right\rangle \prod \nolimits _{j=1}^{P} \left\langle c_{j} + \frac{C_{j} a_{M} }{A_{M} } + \frac{C_{j} k_{M} }{A_{M} } + \ell _{j} \right\rangle }{ \prod \nolimits _{j=1}^{N} \left\langle b_{j} - \frac{B_{j} a_{M} }{A_{M} } - \frac{B_{j} k_{M} }{A_{M} } \right\rangle \prod \nolimits _{j=1}^{Q} \left\langle d_{j} + \frac{D_{j} a_{M} }{A_{M} } + \frac{D_{j} k_{M} }{A_{M} } \right\rangle }.\nonumber \\ \end{aligned}$$
(4.11)

Observe that one obtains a total of \(M+P\) series representations for the integral I(x). There are P of them in the argument \(x^{-1/C_{j}}\) and the remaining M of them in the argument \(x^{1/A_{j}}\). This procedure extends without difficulty to multiple integrals.

These ideas are illustrated next.

Example 4.2

The hypergeometric function \({_{2}}F_{1}\), defined by the series

$$\begin{aligned} {}_{2}F_{1} \left( \genfrac{}{}{0.0pt}{}{a,b}{c} \bigg | {x} \right) = \sum _{n=0}^{\infty } \frac{(a)_{n} (b)_{n}}{(c)_{n} \, n!} x^{n}, \end{aligned}$$
(4.12)

for \(|x| < 1\), admits the Mellin–Barnes representation

$$\begin{aligned} {}_{2}F_{1} \left( \genfrac{}{}{0.0pt}{}{a,b}{c} \bigg | {x} \right)= & {} \frac{\Gamma (c)}{\Gamma (a) \Gamma (b)} \frac{1}{2 \pi i } \nonumber \\&\times \int _{\gamma } \frac{\Gamma (-s) \Gamma (s+a) \Gamma (s+b)}{\Gamma (s+c)} (-x)^{s} \, ds\nonumber \\ \end{aligned}$$
(4.13)

as a contour integral. This appears as entry 9.113 in [19].

The integral in (4.13) is now used to obtain the series representation (4.12). As an added consequence, this method will also produce an analytic continuation of the series (4.12) to the domain \(|x|>1\).

The starting point is now the right-hand side of (4.13)

$$\begin{aligned}&G(a,b,c;x) = \frac{\Gamma (c)}{\Gamma (a) \Gamma (b)} \frac{1}{2 \pi i } \nonumber \\&\quad \times \int _{\gamma } \frac{\Gamma (-s) \Gamma (s+a) \Gamma (s+b)}{\Gamma (s+c)} (-x)^{s} \, ds \end{aligned}$$
(4.14)

and using (4.1) in the three gamma factors yields

$$\begin{aligned}&G(a,b,c;x)= \frac{\Gamma (c)}{\Gamma (a) \Gamma (b)} \frac{1}{2 \pi i } \sum _{n_{1},n_{2},n_{3}} \phi _{123} \nonumber \\&\quad \times \int _{\gamma } \frac{(-x)^{s} \langle -s+n_{1} \rangle \langle s+ a + n_{2} \rangle \langle s+b + n_{3} \rangle }{\Gamma (s+c)} ds.\nonumber \\ \end{aligned}$$
(4.15)

The gamma term in the denominator has no poles, so it is not expanded.

In order to evaluate the expression (4.15), select the bracket containing the index \(n_{1}\) and use Rule 3.1 to obtain the bracket series:

$$\begin{aligned}&G(a,b,c;x) = \frac{\Gamma (c)}{\Gamma (a) \Gamma (b)}\nonumber \\&\quad \times \sum _{n_{1},n_{2},n_{3}} \phi _{123} \frac{(-x)^{n_{1}}}{\Gamma (n_{1}+c)} \langle a + n_{1} + n_{2} \rangle \langle b+n_{1}+n_{3} \rangle .\nonumber \\ \end{aligned}$$
(4.16)

The evaluation of this series is done according to the rules given in Sect. 2.

Take \(n_{1}\) as the free index. Then the indices \(n_{2}, \, n_{3}\) are determined by the system

$$\begin{aligned} a+n_{1}+n_{2} = 0 \quad \text { and } \quad b+n_{1}+n_{3} = 0, \end{aligned}$$
(4.17)

which gives \(n_{2} = - a - n_{1}\) and \(n_{3} = -b-n_{1}\). Then (2.14) produces

$$\begin{aligned}&G_{1}(a,b,c;x) = \frac{\Gamma (c)}{\Gamma (a) \Gamma (b)}\nonumber \\&\quad \times \sum _{n_{1}=0}^{\infty } \phi _{1} \frac{(-x)^{n_{1}}}{\Gamma (n_{1}+c)} \Gamma (a+n_{1}) \Gamma (b+n_{1}), \end{aligned}$$
(4.18)

where the index on \(G_{1}\) is used to indicate that this sum comes from the free index \(n_{1}\). This reduces to (4.12), showing that

$$\begin{aligned} G_{1}(a,b,c;x) = \sum _{n=0}^{\infty } \frac{(a)_{n} (b)_{n}}{(c)_{n} \, n!} x^{n}. \end{aligned}$$
(4.19)

The series on the right converges for \(|x|<1\). This recovers Eq. (4.12).

Take \(n_{2}\) as the free index. Then the vanishing of the brackets gives \(n_{1} = -a - n_{2}\) and \(n_{3} = -b+a+n_{2}\). Then

$$\begin{aligned}&G_{2}(a,b,c;x) = \frac{\Gamma (c)}{\Gamma (a) \Gamma (b)} \nonumber \\&\quad \times \sum _{n_{2}=0}^{\infty } \phi _{2} \frac{(-x)^{-a-n_{2}}}{\Gamma (-a-n_{2}+c)} \Gamma (a+n_{2}) \Gamma (b-a-n_{2}).\nonumber \\ \end{aligned}$$
(4.20)

Using \(\Gamma (u+m) = \Gamma (u) (u)_{m}\) converts (4.20) into

$$\begin{aligned} G_{2}(a,b,c;x)= & {} \frac{\Gamma (c) \Gamma (b-a)}{ \Gamma (b) \Gamma (c-a)} \nonumber \\&\times \sum _{n_{2}=0}^{\infty } \phi _{2} \frac{(-x)^{-a-n_{2}}}{ (c-a)_{-n_{2}}} (a)_{n_{2}} (b-a)_{-n_{2}}.\nonumber \\ \end{aligned}$$
(4.21)

The final step uses the transformation rule

$$\begin{aligned} (u)_{-n} = \frac{(-1)^{n}}{(1-u)_{n}} \end{aligned}$$
(4.22)

to eliminate the negative indices on the Pochhammer symbols and convert (4.21) into

$$\begin{aligned} G_{2}(a,b,c;x)= & {} \frac{\Gamma (c) \Gamma (b-a)}{ \Gamma (b) \Gamma (c-a)}\nonumber \\&\times \sum _{n_{2}=0}^{\infty } \phi _{2} \frac{(-x)^{-a-n_{2}}}{ (1-b+a)_{n_{2}}} (a)_{n_{2} } (1-c+a)_{n_{2}}\nonumber \\= & {} (-x)^{-a} \frac{\Gamma (c) \Gamma (b-a)}{\Gamma (b) \Gamma (c-a)}\nonumber \\&\times \sum _{n_{2}=0}^{\infty } \frac{(a)_{n_{2}} (1-c+a)_{n_{2}}}{(1-b+a)_{n_{2}} \, n_{2}!} x^{-n_{2}}. \end{aligned}$$
(4.23)

The series on the right is identified as a hypergeometric series and it yields

$$\begin{aligned} G_{2}(a,b,c;x)= & {} (-x)^{-a} \frac{\Gamma (c) \Gamma (b-a)}{\Gamma (b) \Gamma (c-a)} \nonumber \\&{}_{2}F_{1} \left( \genfrac{}{}{0.0pt}{}{a \,\,\,\, \,\, 1-c+a}{1-b+a} \bigg | {\frac{1}{x}} \right) \end{aligned}$$
(4.24)

and this series converges for \(|x|>1\).

Finally take \(n_{3}\) as the free index. This case is similar to the previous one and it produces

$$\begin{aligned} G_{3}(a,b,c;x)= & {} (-x)^{-b} \frac{\Gamma (c) \Gamma (a-b)}{\Gamma (a) \Gamma (c-b)}\nonumber \\&{}_{2}F_{1} \left( \genfrac{}{}{0.0pt}{}{b \,\,\,\,\,\, 1-c+b}{1-a+b} \bigg | {\frac{1}{x}} \right) \end{aligned}$$
(4.25)

and this series also converges for \(|x|>1\).

The rules in Sect. 2 state that if, in the evaluation of an integral, one obtains a collection of series coming from different choices of free indices, then those converging in a common region must be added. Thus, the integral \(G(a,b,c;x)\) in (4.16) has the representations

$$\begin{aligned} G(a,b,c;x) = {}_{2}F_{1} \left( \genfrac{}{}{0.0pt}{}{a \,\, b}{c} \bigg | {x} \right) \quad \text {for} \,\, |x| < 1 \end{aligned}$$
(4.26)

and

$$\begin{aligned}&G(a,b,c;x) \nonumber \\&\quad = (-x)^{-a} \frac{\Gamma (c) \Gamma (b-a)}{\Gamma (b) \Gamma (c-a)} {}_{2}F_{1} \left( \genfrac{}{}{0.0pt}{}{a \,\,\,\, \,\, 1-c+a}{1-b+a} \bigg | {\frac{1}{x}} \right) \nonumber \\&\qquad + (-x)^{-b} \frac{\Gamma (c) \Gamma (a-b)}{\Gamma (a) \Gamma (c-b)} {}_{2}F_{1} \left( \genfrac{}{}{0.0pt}{}{b \,\,\,\,\,\, 1-c+b}{1-a+b} \bigg | {\frac{1}{x}} \right) , \nonumber \\&\qquad \qquad \text {for} \,\, |x|>1. \end{aligned}$$
(4.27)

Therefore we have obtained an analytic continuation of the hypergeometric function \(_{2}F_{1}(a,b,c;x)\) from \(|x|<1\) to the exterior of the unit circle. The identity (4.27) appears as entry 9.132.2 in [19].
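The continuation formula (4.27) can be checked numerically against mpmath's built-in \({}_{2}F_{1}\), which is already analytically continued. In the sketch below the parameters and the sample point \(x=-2.5\) are arbitrary; a negative x is chosen so that all quantities stay real.

```python
# Sketch: numerical check of the analytic continuation (4.27) for |x| > 1.
from mpmath import mp, gamma, hyp2f1

mp.dps = 25
a, b, c = 0.7, 1.3, 2.1
x = -2.5                           # a sample point outside the unit disk

term_a = ((-x)**(-a) * gamma(c) * gamma(b - a) / (gamma(b) * gamma(c - a))
          * hyp2f1(a, 1 - c + a, 1 - b + a, 1 / x))
term_b = ((-x)**(-b) * gamma(c) * gamma(a - b) / (gamma(a) * gamma(c - b))
          * hyp2f1(b, 1 - c + b, 1 - a + b, 1 / x))

print(term_a + term_b)             # right-hand side of (4.27)
print(hyp2f1(a, b, c, x))          # mpmath's analytically continued 2F1
```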

5 Inverse Mellin transforms

The method of brackets is now used to evaluate integrals of the form (1.7)

$$\begin{aligned} f(x) = \frac{1}{2 \pi i } \int _{\gamma } x^{-s} \varphi (s) \, ds, \end{aligned}$$
(5.1)

where \(\gamma \) is a vertical line on \({\mathbb {C}}\), adjusted to each problem. Given \(\varphi (s)\), the function f(x) has Mellin transform \(\varphi \).

Example 5.1

Consider the function \(\varphi (s) = \Gamma (s-a).\) Its inverse Mellin transform is given by

$$\begin{aligned} f(x)= \frac{1}{2 \pi i} \int _{\gamma } x^{-s} \Gamma (s-a) \, ds. \end{aligned}$$
(5.2)

Now use (4.1) to write

$$\begin{aligned} \Gamma (s-a) = \sum _{n} \phi _{n} \langle s - a + n \rangle \end{aligned}$$
(5.3)

and (5.2) yields

$$\begin{aligned} f(x) = \sum _{n} \phi _{n} \frac{1}{2 \pi i } \int _{\gamma } x^{-s} \langle s-a+n \rangle \, ds \end{aligned}$$
(5.4)

Rule 3.1 now gives

$$\begin{aligned} f(x)= \sum _{n} \phi _{n} x^{n-a} = x^{-a}e^{-x}. \end{aligned}$$
(5.5)

This is written as

$$\begin{aligned} \frac{1}{2 \pi i } \int _{\gamma } x^{-s} \Gamma (s-a) \, ds = x^{-a} e^{-x}, \end{aligned}$$
(5.6)

or equivalently

$$\begin{aligned} \int _{0}^{\infty } x^{s-a-1}e^{-x} \, dx = \Gamma (s-a). \end{aligned}$$
(5.7)

Replacing \(s-a\) by s, this is the integral definition of the gamma function.
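A numerical check of (5.6) by quadrature along the vertical line \(\mathrm{Re}~s = c > a\) (the values of a, x, c and the truncation of the contour are ad hoc choices):

```python
# Sketch: numerical check of (5.6), the inverse Mellin transform of Gamma(s-a).
from mpmath import mp, mpc, gamma, quad, exp, pi

mp.dps = 25
a, c = 0.4, 1.5                    # the abscissa c must satisfy c > a
x = mp.mpf('1.3')

def integrand(t):
    s = mpc(c, t)
    return x**(-s) * gamma(s - a)

print(quad(integrand, [-60, 60]) / (2 * pi))
print(x**(-a) * exp(-x))
```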

Example 5.2

The inverse Mellin transform of \(\varphi (s) = \Gamma (a-s)\) is obtained as in Example 5.1. The result is

$$\begin{aligned} f(x) = x^{-a} e^{-1/x}, \end{aligned}$$
(5.8)

also written as

$$\begin{aligned} \frac{1}{2 \pi i } \int _{\gamma } x^{-s} \Gamma (a-s) \, ds = x^{-a} e^{-1/x}, \end{aligned}$$
(5.9)

or equivalently

$$\begin{aligned} \int _{0}^{\infty } x^{s-1}x^{-a}e^{-1/x} \, dx = \Gamma (a-s). \end{aligned}$$
(5.10)

The change of variables \(u = x^{-1}\) gives the integral representation of the gamma function.

Example 5.3

The inversion of \(\varphi (s) = \Gamma (s-a) \Gamma (s-b)\) amounts to the evaluation of the line integral

$$\begin{aligned} f(x)= & {} {\mathcal {M}}^{-1} (\Gamma (s-a) \Gamma (s-b))(x)\nonumber \\= & {} \frac{1}{2 \pi i} \int _{\gamma } x^{-s} \Gamma (s-a) \Gamma (s-b) \, ds. \end{aligned}$$
(5.11)

Now use (4.1) to write

$$\begin{aligned} \Gamma (s-a)= & {} \sum _{n_{1}} \phi _{n_{1}} \langle s - a + n_{1} \rangle \quad \text {and} \quad \nonumber \\ \Gamma (s-b)= & {} \sum _{n_{2}} \phi _{n_{2}} \langle s - b + n_{2} \rangle \end{aligned}$$
(5.12)

and produce

$$\begin{aligned} f(x)= & {} \frac{1}{2 \pi i} \int _{\gamma } x^{-s} \sum _{n_{1},n_{2}} \phi _{12} \langle s-a+n_{1} \rangle \langle s - b + n_{2} \rangle \, ds.\nonumber \\ \end{aligned}$$
(5.13)

Now select the bracket containing the index \(n_{1}\) and write (5.13) using Rule 3.1 as

$$\begin{aligned} f(x)= & {} \sum _{n_{1},n_{2}} \phi _{12} \frac{1}{2 \pi i } \int _{\gamma } x^{-s} \langle s - b + n_{2} \rangle \langle s - a + n_{1} \rangle \, ds \nonumber \\= & {} \sum _{n_{1}, n_{2} } \phi _{12} \, x^{-a+n_{1} } \langle a-n_{1} - b + n_{2} \rangle . \end{aligned}$$
(5.14)

This is a two-dimensional bracket series and its evaluation is achieved using the rules in Sect. 2:

\(n_{1}\) is a free index. Then \(n_{2} = n_{1}-a+b\) and this produces the value

$$\begin{aligned} f_{1}(x)= & {} \sum _{n_{1}=0}^{\infty } \phi _{1} x^{-a+n_{1}} \Gamma (-n_{1}+a-b) \nonumber \\= & {} x^{-a} \Gamma (a-b) \sum _{n_{1}=0}^{\infty } \phi _{1} (a-b)_{-n_{1}} x^{n_{1}} \nonumber \\= & {} x^{-a} \Gamma (a-b) \sum _{n_{1}=0}^{\infty } \frac{x^{n_{1}}}{n_{1}! \, (1-a+b)_{n_{1}}} \nonumber \\= & {} x^{-a} \Gamma (a-b)\, {}_{0}F_{1} \left( \genfrac{}{}{0.0pt}{}{-}{1-a+b} \bigg | {x} \right) . \end{aligned}$$
(5.15)

\(n_{2}\) is a free index. A similar argument gives

$$\begin{aligned} f_{2}(x) = x^{-b} \Gamma (b-a) \, {}_{0}F_{1} \left( \genfrac{}{}{0.0pt}{}{-}{1-b+a} \bigg | {x} \right) . \end{aligned}$$
(5.16)

Since both representations always converge, one obtains

$$\begin{aligned} f(x)= & {} x^{-a} \Gamma (a-b)\, {}_{0}F_{1} \left( \genfrac{}{}{0.0pt}{}{-}{1-a+b} \bigg | {x} \right) \nonumber \\&+ x^{-b} \Gamma (b-a) \, {}_{0}F_{1} \left( \genfrac{}{}{0.0pt}{}{-}{1-b+a} \bigg | {x} \right) . \end{aligned}$$
(5.17)

The function \(_{0}F_{1}\) is now expressed in terms of the modified Bessel function \(I_{\nu }(z)\). This is defined in [22, 10.25.2] by the power series

$$\begin{aligned} I_{\nu }(z) = \left( \frac{z}{2} \right) ^{\nu } \sum _{k=0}^{\infty } \frac{1}{k! \, \Gamma (\nu +k+1)} \left( \frac{z^{2}}{4} \right) ^{k}. \end{aligned}$$
(5.18)

Lemma 5.4

For \(\alpha \in {\mathbb {R}}\), the identity

$$\begin{aligned} {}_{0}F_{1} \left( \genfrac{}{}{0.0pt}{}{-}{\alpha } \bigg | {x} \right) = \Gamma (\alpha ) x^{(1-\alpha )/2} I_{\alpha -1}(2 \sqrt{x}) \end{aligned}$$
(5.19)

holds.

Proof

This follows directly from (5.18). \(\square \)

Replacing the expression in Lemma 5.4 in (5.17) gives

$$\begin{aligned} f(x)= & {} \frac{\pi }{\sin (\pi (a-b))} x^{-(a+b)/2} \left( I_{b-a}(2 \sqrt{x}) \right. \nonumber \\&\left. - I_{a-b}(2 \sqrt{x}) \right) . \end{aligned}$$
(5.20)

The relation [22, 10.27.4]

$$\begin{aligned} K_{\nu }(z) = \frac{\pi }{2} \frac{I_{-\nu }(z) - I_{\nu }(z)}{\sin ( \pi \nu )} \end{aligned}$$
(5.21)

(which is taken here as the definition of \(K_{\nu }(z)\)) now implies

$$\begin{aligned} f(x) = 2 x^{-\nu /2 - b} K_{\nu }(2 \sqrt{x}); \end{aligned}$$
(5.22)

with \(\nu = a-b\). This is

$$\begin{aligned}&\frac{1}{2 \pi i } \int _{\gamma } x^{-s} \Gamma (s-a) \Gamma (s-b) \, ds \nonumber \\&\quad = 2x^{-(a+b)/2} K_{a-b}(2 \sqrt{x}). \end{aligned}$$
(5.23)

After some elementary changes, this is written as

$$\begin{aligned} K_{\nu }(x) = \frac{1}{4 \pi i } \left( \frac{x}{2} \right) ^{\nu } \int _{\gamma } \Gamma (s) \Gamma (s- \nu ) \left( \frac{x}{2} \right) ^{-2s} \, ds, \end{aligned}$$
(5.24)

the form appearing in [22, 10.32.13]. The expression (5.24) is now written in the equivalent form

$$\begin{aligned} \int _{0}^{\infty }x^{s-1} K_{\nu }(x) \, dx = 2^{s-2} \Gamma \left( \frac{s+\nu }{2} \right) \Gamma \left( \frac{s-\nu }{2} \right) . \end{aligned}$$
(5.25)
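A direct numerical verification of (5.25) (the sample values of s and \(\nu \) are arbitrary, subject to \(\mathrm{Re}~s > |\mathrm{Re}~\nu |\)):

```python
# Sketch: numerical check of the Mellin transform (5.25) of K_nu.
from mpmath import mp, quad, besselk, gamma, inf

mp.dps = 25
s, nu = 2.3, 0.4                   # convergence requires Re(s) > |Re(nu)|

lhs = quad(lambda x: x**(s - 1) * besselk(nu, x), [0, inf])
rhs = 2**(s - 2) * gamma((s + nu) / 2) * gamma((s - nu) / 2)
print(lhs, rhs)
```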

Example 5.5

Consider the inversion of \(\varphi (s) = \Gamma (s-a) \Gamma (b-s)\). Observe that there is a change in the order of the argument of the second gamma factor with respect to Example 5.3. To evaluate this example, expand the term \(\Gamma (s-a)\) in a bracket series using (4.1) to obtain

$$\begin{aligned} f(x) = \sum _{n} \phi _{n} \frac{1}{2 \pi i } \int _{\gamma } x^{-s} \langle s - a + n \rangle \Gamma (b-s) \, ds. \end{aligned}$$
(5.26)

Rule 3.1 yields

$$\begin{aligned} f(x) = x^{-a} \sum _{n=0}^{\infty } \phi _{n} \Gamma (b-a+n) x^{n}. \end{aligned}$$
(5.27)

To simplify this answer write \(\Gamma (b-a+n)= \Gamma (b-a) (b-a)_{n}\), use

$$\begin{aligned} {}_{1}F_{0} \left( \genfrac{}{}{0.0pt}{}{\alpha }{-} \bigg | {x} \right) = (1-x)^{-\alpha } \end{aligned}$$
(5.28)

and conclude that

$$\begin{aligned} f(x) = \frac{\Gamma (b-a)}{x^{a}(1+x)^{b-a}}. \end{aligned}$$
(5.29)

This is equivalent to the evaluation

$$\begin{aligned} \frac{1}{2 \pi i } \int _{\gamma } x^{-s} \Gamma (s-a) \Gamma (b-s) \, ds = \frac{\Gamma (b-a)}{x^{a}(1+x)^{b-a}}. \end{aligned}$$
(5.30)

Expanding the other gamma factor instead produces the same analytic expression for the integral.
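Equivalently, (5.30) states that the Mellin transform of \(\Gamma (b-a)\, x^{-a}(1+x)^{a-b}\) is \(\Gamma (s-a)\Gamma (b-s)\) for \(a< \mathrm{Re}~s < b\). A quick quadrature check (sample parameters chosen arbitrarily in that range):

```python
# Sketch: Mellin transform of f(x) = Gamma(b-a) * x^(-a) * (1+x)^(a-b),
# which by (5.30) should equal Gamma(s-a)*Gamma(b-s) for a < Re(s) < b.
from mpmath import mp, quad, gamma, inf

mp.dps = 25
a, b, s = 0.3, 2.1, 1.0

lhs = quad(lambda x: x**(s - 1) * gamma(b - a) * x**(-a) * (1 + x)**(a - b),
           [0, inf])
rhs = gamma(s - a) * gamma(b - s)
print(lhs, rhs)
```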

Example 5.6

This example considers the simplest case of an integrand where a quotient of gamma factors appears. This is the inversion of

$$\begin{aligned} \varphi (s) = \frac{\Gamma (s-a)}{\Gamma (s-b)}. \end{aligned}$$
(5.31)

The usual formulation now gives

$$\begin{aligned} f(x) = \sum _{n} \phi _{n} \frac{1}{2 \pi i } \int _{\gamma } \left( \frac{x^{-s}}{\Gamma (s-b)} \right) \langle s-a+n \rangle \, ds. \end{aligned}$$
(5.32)

Rule 3.1 now yields

$$\begin{aligned} f(x) = \sum _{n=0}^{\infty } \phi _{n} \frac{x^{n-a}}{\Gamma (a-b-n)}. \end{aligned}$$
(5.33)

This expression is simplified using

$$\begin{aligned} \Gamma (a-b-n)= & {} \Gamma (a-b) (a-b)_{-n}\nonumber \\= & {} (-1)^{n} \frac{\Gamma (a-b)}{(1-a+b)_{n}} \end{aligned}$$
(5.34)

to obtain

$$\begin{aligned} f(x)= & {} \frac{x^{-a}}{\Gamma (a-b)} \sum _{n=0}^{\infty } \frac{(1-a+b)_{n}}{n!} x^{n} \nonumber \\= & {} \frac{1}{\Gamma (a-b)} x^{-a} (1-x)^{-1+a-b}. \end{aligned}$$
(5.35)

This can be written as

$$\begin{aligned}&\frac{1}{2 \pi i } \int _{\gamma } x^{-s} \frac{\Gamma (s-a)}{\Gamma (s-b)} \, ds = \frac{1}{\Gamma (a-b)} x^{-a}(1-x)^{-1+a-b}.\nonumber \\ \end{aligned}$$
(5.36)
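Reading (5.36) in the forward direction, the function \(x^{-a}(1-x)^{a-b-1}/\Gamma (a-b)\), supported on \(0< x < 1\), should have Mellin transform \(\Gamma (s-a)/\Gamma (s-b)\). The sketch below checks this Beta-type integral numerically (the parameters are arbitrary, with \(a>b\) and \(\mathrm{Re}~s > a\) so the integral converges):

```python
# Sketch: Mellin transform of f(x) = x^(-a)*(1-x)^(a-b-1)/Gamma(a-b) on (0, 1),
# which by (5.36) should reproduce Gamma(s-a)/Gamma(s-b).
from mpmath import mp, quad, gamma

mp.dps = 25
a, b, s = 0.6, -0.9, 1.4           # a > b and Re(s) > a

lhs = quad(lambda x: x**(s - 1) * x**(-a) * (1 - x)**(a - b - 1) / gamma(a - b),
           [0, 1])
rhs = gamma(s - a) / gamma(s - b)
print(lhs, rhs)
```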

Example 5.7

The inverse Mellin transform f(x) of

$$\begin{aligned} \varphi (s) = \frac{\Gamma (s-a)}{\Gamma (b-s)} \end{aligned}$$
(5.37)

is computed from the line integral

$$\begin{aligned} f(x) = \frac{1}{2 \pi i } \int _{\gamma } x^{-s} \frac{\Gamma (s-a)}{\Gamma (b-s)} \, ds. \end{aligned}$$
(5.38)

The usual procedure now yields

$$\begin{aligned} f(x)= & {} \sum _{n=0}^{\infty } (-1)^{n} \frac{x^{n-a}}{n! \, \Gamma (b-a+n)} \nonumber \\= & {} \frac{x^{-a}}{\Gamma (b-a)} \sum _{n=0}^{\infty } \frac{(-1)^{n}}{n! \, (b-a)_{n}} x^{n}. \end{aligned}$$
(5.39)

The series is identified as an \({_{0}F_{1}}\) and using the identity

$$\begin{aligned} J_{\nu }(z) = \frac{1}{\Gamma (\nu +1)} \left( \frac{z}{2} \right) ^{\nu } {}_{0}F_{1} \left( \genfrac{}{}{0.0pt}{}{-}{\nu +1} \bigg | {- \frac{z^{2}}{4}} \right) \end{aligned}$$
(5.40)

produces

$$\begin{aligned} f(x) = x^{(1-a-b)/2} J_{-1-a+b}( 2 \sqrt{x}) \end{aligned}$$
(5.41)

and gives the evaluation

$$\begin{aligned} \frac{1}{2 \pi i } \int _{\gamma } x^{-s} \frac{\Gamma (s-a)}{\Gamma (b-s)} \, ds = x^{(1-a-b)/2} J_{-1-a+b}( 2 \sqrt{x}).\nonumber \\ \end{aligned}$$
(5.42)

Example 5.8

The inverse Mellin transform f(x) of

$$\begin{aligned} \varphi (s) = \frac{\Gamma (a-s)}{\Gamma (s-b)} \end{aligned}$$
(5.43)

is computed as in Example 5.7. The result is

$$\begin{aligned}&\frac{1}{2 \pi i } \int _{\gamma } x^{-s} \frac{\Gamma (a-s)}{\Gamma (s-b)} \, ds \nonumber \\&\quad = x^{-(a+b+1)/2} J_{a-b-1} \left( \frac{2}{\sqrt{x}} \right) \end{aligned}$$
(5.44)

and thus

$$\begin{aligned} \int _{0}^{\infty } x^{s-1} x^{-(a+b+1)/2} J_{a-b-1} \left( \frac{2}{\sqrt{x}} \right) \, dx = \frac{\Gamma (a-s)}{\Gamma (s-b)}.\nonumber \\ \end{aligned}$$
(5.45)

The identity (5.44), with \(a=0\) and \(b=-\nu -1\), can be written as

$$\begin{aligned} J_{\nu }(x) = \frac{1}{2 \pi i } \int _{\gamma } \frac{\Gamma (-s)}{\Gamma (s+ \nu +1)} \left( \frac{x}{2} \right) ^{2s + \nu } \, ds. \end{aligned}$$
(5.46)

Example 5.9

The Mellin inversion of the function

$$\begin{aligned} \varphi (s) = \frac{\Gamma (s) \Gamma (1-s)}{\Gamma (\beta - \alpha s)} \end{aligned}$$
(5.47)

is given by

$$\begin{aligned} f(x) = \frac{1}{2 \pi i} \int _{\gamma } x^{-s} \frac{\Gamma (s) \Gamma (1-s)}{\Gamma (\beta - \alpha s)} \, ds. \end{aligned}$$
(5.48)

The standard procedure gives

$$\begin{aligned} f(x) = \sum _{n_{1},n_{2}} \phi _{12} \, \frac{1}{2 \pi i } \int _{\gamma } \left( \frac{\langle 1 - s + n_{2} \rangle x^{-s}}{\Gamma (\beta - \alpha s)} \right) \, \langle s + n_{1} \rangle \, ds.\nonumber \\ \end{aligned}$$
(5.49)

Rule 3.1 then produces

$$\begin{aligned} f(x) = \sum _{n_{1},n_{2}} \phi _{12} \frac{x^{n_{1}}}{\Gamma (\beta + \alpha n_{1})} \langle 1 + n_{1}+n_{2} \rangle . \end{aligned}$$
(5.50)

To evaluate this two-dimensional bracket series proceed as in Example 5.3. This gives

$$\begin{aligned} f(x) = \sum _{n=0}^{\infty } \frac{(-x)^{n} }{\Gamma (\beta + \alpha n)} \quad \text {when} \,\, |x| < 1 \end{aligned}$$
(5.51)

and

$$\begin{aligned} f(x) = \frac{1}{x} \sum _{n=0}^{\infty } \frac{1}{\Gamma (\beta - \alpha - \alpha n)} \frac{(-1)^{n}}{x^{n}} \quad \text {when} \,\, |x| > 1.\nonumber \\ \end{aligned}$$
(5.52)

The function appearing in (5.51) is the Mittag–Leffler function, defined in [22] by

$$\begin{aligned} E_{\alpha ,\beta }(z) = \sum _{n=0}^{\infty } \frac{z^{n}}{\Gamma (\alpha n + \beta )}. \end{aligned}$$
(5.53)

Writing (5.51) and (5.52) in terms of this function produces the final expression

$$\begin{aligned} f(x) = {\left\{ \begin{array}{ll} E_{\alpha , \beta }(-x) &{} \quad \text {if} \,\, |x| < 1 \\ x^{-1} E_{-\alpha , \beta -\alpha }(-1/x) &{} \quad \text {if} \,\, |x|>1. \end{array}\right. } \end{aligned}$$
(5.54)

6 Direct computations of Mellin transforms

This section describes how to use the method of brackets to produce the evaluation of the Mellin transform

$$\begin{aligned} {\mathcal {M}}(f(x))(s) = \int _{0}^{\infty } x^{s-1} f(x) \, dx. \end{aligned}$$
(6.1)

Example 6.1

Example 5.3 has produced the evaluation of

$$\begin{aligned} \int _{0}^{\infty }x^{\alpha -1} K_{\nu }(x) \, dx = 2^{\alpha -2} \Gamma \left( \frac{\alpha +\nu }{2} \right) \Gamma \left( \frac{\alpha -\nu }{2} \right) , \end{aligned}$$
(6.2)

from the Mellin inversion of \(\Gamma (s-a)\Gamma (s-b)\).

Example 6.2

The next example evaluates

$$\begin{aligned} I(\alpha ,\mu ,\nu ) = \int _{0}^{\infty } x^{\alpha -1} K_{\mu }(x) K_{\nu }(x) \, dx \end{aligned}$$
(6.3)

by the methods developed here.

Entry 10.32.19 in [22] contains the representation

$$\begin{aligned}&K_{\mu }(x)K_{\nu }(x) \nonumber \\&\quad = \frac{1}{8 \pi i } \int _{\gamma } \left( \frac{x}{2} \right) ^{-2s} \frac{1}{\Gamma (2s)} \Gamma \left( s + \frac{\mu +\nu }{2} \right) \nonumber \\&\qquad \times \Gamma \left( s + \frac{\mu -\nu }{2} \right) \Gamma \left( s{-} \frac{\mu {-}\nu }{2} \right) \Gamma \left( s{-} \frac{\mu {+}\nu }{2} \right) \, ds\nonumber \\ \end{aligned}$$
(6.4)

Replacing this in (6.3) and identifying the x-integral as a bracket yields

$$\begin{aligned} I(\alpha ,\mu ,\nu )= & {} \frac{1}{8 \pi i } \int _{\gamma } \frac{2^{2s} }{\Gamma (2s)} \Gamma \left( s + \frac{\mu +\nu }{2} \right) \Gamma \left( s + \frac{\mu -\nu }{2} \right) \nonumber \\&\times \Gamma \left( s - \frac{\mu -\nu }{2} \right) \Gamma \left( s - \frac{\mu +\nu }{2} \right) \langle \alpha - 2s \rangle \, ds\nonumber \\ \end{aligned}$$
(6.5)

Since this problem contains one bracket and one contour integral, there is no need to expand the gamma factors in bracket series; the result follows directly from Rule 3.1:

$$\begin{aligned}&\int _{0}^{\infty } x^{\alpha -1} K_{\mu }(x) K_{\nu }(x) \, dx \nonumber \\&\quad = \frac{2^{\alpha -3}}{\Gamma (\alpha )} \Gamma \left( \frac{\alpha + \mu + \nu }{2} \right) \Gamma \left( \frac{\alpha + \mu - \nu }{2} \right) \nonumber \\&\qquad \times \Gamma \left( \frac{\alpha - \mu + \nu }{2} \right) \Gamma \left( \frac{\alpha - \mu - \nu }{2} \right) . \end{aligned}$$
(6.6)
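A numerical check of (6.6) (the sample values of \(\alpha , \, \mu , \, \nu \) are arbitrary, chosen so that \(\mathrm{Re}~\alpha > |\mathrm{Re}~\mu | + |\mathrm{Re}~\nu |\) for convergence):

```python
# Sketch: numerical check of (6.6) for the Mellin transform of K_mu(x)*K_nu(x).
from mpmath import mp, quad, besselk, gamma, inf

mp.dps = 25
alpha, mu, nu = 2.4, 0.3, 0.5      # convergence: Re(alpha) > |Re(mu)| + |Re(nu)|

lhs = quad(lambda x: x**(alpha - 1) * besselk(mu, x) * besselk(nu, x), [0, inf])
rhs = (2**(alpha - 3) / gamma(alpha)
       * gamma((alpha + mu + nu) / 2) * gamma((alpha + mu - nu) / 2)
       * gamma((alpha - mu + nu) / 2) * gamma((alpha - mu - nu) / 2))
print(lhs, rhs)
```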

Example 6.3

The evaluation of

$$\begin{aligned} \int _{0}^{\infty } x^{2a-1} K_{\nu }^{2}(x) \, dx = \frac{\sqrt{\pi }\, \Gamma (a+\nu ) \Gamma (a-\nu ) \Gamma (a)}{4\Gamma \left( a + \tfrac{1}{2} \right) } \end{aligned}$$
(6.7)

is the special case of Example 6.2 with \(\mu = \nu \). Note that the parameter \(\alpha \) has been replaced by 2a in order to write the answer in a more compact form. In particular, with \(\nu =0\), this becomes

$$\begin{aligned} \int _{0}^{\infty } x^{2a-1} K_{0}^{2}(x) \, dx = \frac{\sqrt{\pi } \Gamma ^{3}(a)}{4 \, \Gamma \left( a + \tfrac{1}{2} \right) }. \end{aligned}$$
(6.8)

The final special case mentioned here has \(a = \tfrac{1}{2}\):

$$\begin{aligned} \int _{0}^{\infty } K_{0}^{2}(x) \, dx = \frac{\pi ^{2}}{4}. \end{aligned}$$
(6.9)

These examples have been evaluated in [11] by a different procedure.

Example 6.4

Now consider the integral

$$\begin{aligned} \varphi _{3}(a) = \int _{0}^{\infty } K_{0}^{3}(ax) \, dx \end{aligned}$$
(6.10)

with an auxiliary parameter a that can naturally be scaled out.

The evaluation begins with a more general problem

$$\begin{aligned} I = I(a,b;\mu ,\nu ,\alpha ) = \int _{0}^{\infty } K_{\mu }(ax) K_{\nu }(ax) K_{\alpha }(bx) \, dx \nonumber \\ \end{aligned}$$
(6.11)

and then

$$\begin{aligned} \varphi _{3}(a) = \lim \limits _{{\mathop {b \rightarrow a}\limits ^{\alpha = \mu = \nu \rightarrow 0}}} I(a,b;\mu ,\nu ,\alpha ). \end{aligned}$$
(6.12)

The Mellin–Barnes representations of the factors in the integrand are

$$\begin{aligned}&K_{\mu }(ax) K_{\nu }(ax) \nonumber \\&\quad = \frac{1}{8 \pi i } \int _{\gamma } \Gamma \left( t + \frac{\mu + \nu }{2} \right) \Gamma \left( t + \frac{\mu - \nu }{2} \right) \nonumber \\&\qquad \times \Gamma \left( t - \frac{\mu + \nu }{2} \right) \Gamma \left( t - \frac{\mu - \nu }{2} \right) \left( \frac{ax}{2} \right) ^{-2t} \frac{dt}{\Gamma (2t)},\nonumber \\ \end{aligned}$$
(6.13)

and

$$\begin{aligned} K_{\alpha }(bx) = \frac{1}{4 \pi i } \left( \frac{bx}{2} \right) ^{\alpha } \int _{\gamma } \Gamma (s) \Gamma (s- \alpha ) \left( \frac{bx}{2} \right) ^{-2s} \, ds. \nonumber \\ \end{aligned}$$
(6.14)

Replacing in (6.11) gives

$$\begin{aligned}&I = \frac{1}{8(2 \pi i )^{2}} \int _{\gamma _{1}} \int _{\gamma _{2}} \nonumber \\&\quad \times \frac{ \Gamma \left( t + \frac{\mu + \nu }{2} \right) \Gamma \left( t + \frac{\mu - \nu }{2} \right) \Gamma \left( t - \frac{\mu + \nu }{2} \right) \Gamma \left( t - \frac{\mu - \nu }{2} \right) \Gamma (s) \Gamma (s- \alpha )}{a^{2t} b^{2s - \alpha } 2^{-2t - 2s + \alpha } \Gamma (2t)} \nonumber \\&\quad \times \langle -2s - 2t + \alpha +1 \rangle \, dt \, ds. \end{aligned}$$
(6.15)

Now replace the gamma factors in the numerator by their corresponding bracket series to obtain

$$\begin{aligned} I= & {} \frac{1}{8} \sum _{\{ n\}} \phi _{n_{1} \cdots n_{6}} \frac{1}{(2 \pi i)^{2}} \int _{\gamma _{1}} \int _{\gamma _{2}} \frac{1}{a^{2t} b^{2s - \alpha } 2^{-2t - 2s + \alpha } \Gamma (2t)} \nonumber \\&\times \langle t + \frac{\mu + \nu }{2} + n_{1} \rangle \langle t + \frac{\mu - \nu }{2} + n_{2} \rangle \langle t - \frac{\mu + \nu }{2} + n_{3 } \rangle \nonumber \\&\times \langle t - \frac{\mu - \nu }{2} + n_{4} \rangle \langle s + n_{5} \rangle \langle s - \alpha \nonumber \\&+ n_{6} \rangle \langle -2s - 2t + \alpha + 1 \rangle \, dt \, ds. \end{aligned}$$
(6.16)

To evaluate this expression, use the brackets containing the indices \(n_{1}\) and \(n_{5}\) to perform the two contour integrals. The resulting sums now depend on the four indices \(n_{2}, \, n_{3}, \, n_{4}\) and \(n_{6}\), and the variables of integration t and s take the values

$$\begin{aligned} t^{*} = - \frac{\mu +\nu }{2} - n_{1} \quad \text {and} \,\, s^{*} = - n_{5}. \end{aligned}$$
(6.17)

This yields

$$\begin{aligned} I= & {} \frac{1}{8} \sum _{\{ n \}} \phi _{n_{2}n_{3}n_{4}n_{6}} \nonumber \\&\times \frac{ \langle t^{*} + \frac{\mu -\nu }{2} + n_{2} \rangle \langle t^{*} - \frac{\mu +\nu }{2} + n_{3} \rangle \langle t^{*} - \frac{\mu - \nu }{2} + n_{4} \rangle \langle s^{*} - \alpha + n_{6} \rangle \langle -2s^{*} - 2t^{*} + \alpha + 1 \rangle }{ a^{2t^{*}} b^{2s^{*} - \alpha } 2^{-2t^{*} - 2s^{*}+ \alpha } \Gamma (2t^{*}) }. \end{aligned}$$
(6.18)

Under the assumption \(|4a^{2}| < |b^{2}|\) the integral in (6.11) is expressed as

$$\begin{aligned} I = I(a,b;\mu ,\nu ,\alpha ) = T_{1}+T_{2}+T_{3}+T_{4} \end{aligned}$$
(6.19)

with

$$\begin{aligned} \quad T_{1}= & {} \frac{1}{8} \frac{a^{\mu +\nu }}{b^{1+ \mu + \nu }} \Gamma \left( \frac{1-\alpha + \mu + \nu }{2} \right) \nonumber \\&\times \Gamma \left( \frac{1 + \alpha + \mu + \nu }{2} \right) \Gamma (-\mu ) \Gamma (-\nu ) \nonumber \\&\times {}_{4}F_{3} \left( \genfrac{}{}{0.0pt}{}{1 + \tfrac{\mu +\nu }{2} \,\, \tfrac{1+ \mu + \nu }{2} \,\, \frac{1+ \alpha + \mu + \nu }{2} \,\, \frac{1 - \alpha + \mu + \nu }{2} }{1+ \mu \,\, 1+ \nu \,\, 1+ \mu + \nu } \bigg | {\frac{4a^{2}}{b^{2}}} \right) \nonumber \\ T_{2}= & {} \frac{1}{8} \frac{a^{\mu -\nu }}{b^{1+ \mu - \nu }} \Gamma \left( \frac{1-\alpha + \mu - \nu }{2} \right) \nonumber \\&\times \Gamma \left( \frac{1 + \alpha + \mu - \nu }{2} \right) \Gamma (-\mu ) \Gamma (\nu ) \nonumber \\&\times {}_{4}F_{3} \left( \genfrac{}{}{0.0pt}{}{1 + \tfrac{\mu -\nu }{2} \,\, \tfrac{1+ \mu - \nu }{2} \,\, \frac{1- \alpha + \mu - \nu }{2} \,\, \frac{1 + \alpha + \mu - \nu }{2} }{1+ \mu \,\, 1- \nu \,\, 1+ \mu - \nu } \bigg | {\frac{4a^{2}}{b^{2}}} \right) \nonumber \\ T_{3}= & {} \frac{1}{8} \frac{b^{\mu +\nu -1}}{a^{\mu + \nu }} \Gamma \left( \frac{1-\alpha - \mu - \nu }{2} \right) \nonumber \\&\times \Gamma \left( \frac{1 + \alpha - \mu - \nu }{2} \right) \Gamma (\mu ) \Gamma (\nu ) \nonumber \\&\times {}_{4}F_{3} \left( \genfrac{}{}{0.0pt}{}{1 - \tfrac{\mu +\nu }{2} \,\, \tfrac{1- \mu - \nu }{2} \,\, \frac{1-\alpha - \mu - \nu }{2} \,\, \frac{1 + \alpha - \mu - \nu }{2} }{1- \mu \,\, 1- \nu \,\, 1- \mu - \nu } \bigg | {\frac{4a^{2}}{b^{2}}} \right) \nonumber \\ T_{4}= & {} \frac{1}{8} \frac{b^{\mu -\nu -1}}{a^{\mu - \nu }} \Gamma \left( \frac{1-\alpha - \mu + \nu }{2} \right) \nonumber \\&\times \Gamma \left( \frac{1 + \alpha - \mu + \nu }{2} \right) \Gamma (\mu ) \Gamma (-\nu ) \nonumber \\&\times {}_{4}F_{3} \left( \genfrac{}{}{0.0pt}{}{1 - \tfrac{\mu -\nu }{2} \,\, \tfrac{1- \mu + \nu }{2} \,\, \frac{1-\alpha - \mu + \nu }{2} \,\, \frac{1 + \alpha - \mu + \nu }{2} }{1- \mu \,\, 1+ \nu \,\, 1- \mu + \nu } \bigg | {\frac{4a^{2}}{b^{2}}} \right) .\nonumber \\ \end{aligned}$$
(6.20)

Now letting \(\alpha , \, \nu \rightarrow 0\) and then passing to the limit as \(\mu \rightarrow 0\) yields

$$\begin{aligned}&\int _{0}^{\infty } K_{0}^{2}(ax) K_{0}(bx) \, dx \\&\quad = \lim \limits _{\mu \rightarrow 0} \frac{1}{8b} \left[ \left( \frac{a^{2}}{b^{2}} \right) ^{\mu } \Gamma \left( \frac{1 + 2 \mu }{2} \right) ^{2} \right. \\&\qquad \times \Gamma (-\mu )^{2} {}_{3}F_{2} \left( \genfrac{}{}{0.0pt}{}{ \tfrac{1+2 \mu }{2} \,\, \tfrac{1+ 2 \mu }{2} \,\, \tfrac{1+ 2 \mu }{2} }{1+ \mu \,\, 1+ 2 \mu } \bigg | {\frac{4a^{2}}{b^{2}}} \right) \\&\qquad + 2 \pi \Gamma (-\mu ) \Gamma (\mu ) {}_{3}F_{2} \left( \genfrac{}{}{0.0pt}{}{ \tfrac{1}{2} \,\,\, \tfrac{1}{2} \,\,\, \tfrac{1}{2} }{1+\mu \,\, 1 - \mu } \bigg | { \frac{4a^{2}}{b^{2}}} \right) \\&\qquad \left. + \left( \frac{a^{2}}{b^{2}} \right) ^{-\mu } \Gamma ^{2} \left( \tfrac{1- 2 \mu }{2} \right) \Gamma ^{2}(\mu )\right. \nonumber \\&\qquad \left. {}_{3}F_{2} \left( \genfrac{}{}{0.0pt}{}{ \tfrac{1- 2 \mu }{2} \,\, \tfrac{1- 2 \mu }{2} \,\, \tfrac{1 - 2 \mu }{2} }{1- \mu \,\,\, 1 - 2 \mu } \bigg | { \frac{4a^{2}}{b^{2}}} \right) \right] \end{aligned}$$

In a similar way, in the case \(|4a^{2}| >|b^{2}|\) one obtains

$$\begin{aligned} I = I(a,b;\mu ,\nu ,\alpha ) = T_{5}+T_{6} \end{aligned}$$
(6.21)

with

$$\begin{aligned} T_{5}= & {} \frac{1}{8} \frac{b^{\alpha }}{a^{\alpha +1}} \frac{\Gamma (- \alpha )}{\Gamma (\alpha +1)} \\&\times \Gamma \left( \frac{1+\alpha - \mu + \nu }{2} \right) \Gamma \left( \frac{1 + \alpha - \mu - \nu }{2} \right) \\&\times \Gamma \left( \frac{1 + \alpha + \mu - \nu }{2} \right) \Gamma \left( \frac{1 + \alpha + \mu + \nu }{2} \right) \\&\times {}_{4}F_{3} \left( \genfrac{}{}{0.0pt}{}{\tfrac{1+\alpha + \mu +\nu }{2} \,\, \tfrac{1+\alpha - \mu + \nu }{2} \,\, \frac{1+ \alpha + \mu - \nu }{2} \,\, \frac{1 + \alpha - \mu - \nu }{2} }{1+ \alpha \,\, 1+ \tfrac{\alpha }{2} \,\, \tfrac{1+ \alpha }{2} } \bigg | {\frac{b^{2}}{4a^{2}}} \right) \\ T_{6}= & {} \frac{1}{8} \frac{a^{\alpha -1}}{b^{\alpha }} \frac{\Gamma (\alpha )}{\Gamma (1-\alpha )} \Gamma \left( \frac{1-\alpha - \mu + \nu }{2} \right) \\&\times \Gamma \left( \frac{1 - \alpha - \mu - \nu }{2} \right) \\&\Gamma \left( \frac{1 - \alpha + \mu - \nu }{2} \right) \Gamma \left( \frac{1 - \alpha + \mu + \nu }{2} \right) \\&\times {}_{4}F_{3} \left( \genfrac{}{}{0.0pt}{}{\tfrac{1-\alpha + \mu +\nu }{2} \,\, \tfrac{1-\alpha - \mu + \nu }{2} \,\, \frac{1- \alpha + \mu - \nu }{2} \,\, \frac{1 - \alpha - \mu - \nu }{2} }{1- \alpha \,\, 1- \tfrac{\alpha }{2} \,\, \tfrac{1- \alpha }{2} } \bigg | {\frac{b^{2}}{4a^{2}}} \right) \end{aligned}$$

and then setting \(\mu = \nu = 0, \, b=a\) and letting \(\alpha \rightarrow 0\) yields, after scaling the parameter \(a\),

$$\begin{aligned} \int _{0}^{\infty } K_{0}^{3}(x) \, dx= & {} \lim \limits _{\alpha \rightarrow 0} \frac{1}{8} \left[ \frac{\Gamma (- \alpha ) \Gamma ^{4} \left( \frac{1+ \alpha }{2} \right) }{\Gamma (1 + \alpha )}\right. \nonumber \\&{}_{3}F_{2} \left( \genfrac{}{}{0.0pt}{}{ \tfrac{1+ \alpha }{2} \,\, \tfrac{1+ \alpha }{2} \,\, \tfrac{1+\alpha }{2} }{ 1+ \alpha \,\, 1+ \tfrac{\alpha }{2} } \bigg | {\frac{1}{4} } \right) + \frac{\Gamma (\alpha ) \Gamma ^{4} \left( \tfrac{1 - \alpha }{2} \right) }{\Gamma (1 - \alpha )} \nonumber \\&\left. {}_{3}F_{2} \left( \genfrac{}{}{0.0pt}{}{ \tfrac{1- \alpha }{2} \,\, \tfrac{1- \alpha }{2} \,\, \tfrac{1-\alpha }{2} }{ 1 - \alpha \,\, 1- \tfrac{\alpha }{2} } \bigg | {\frac{1}{4} } \right) \right] . \end{aligned}$$
(6.22)

The authors have been unable to produce a simpler analytic expression for this limiting value.
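The limiting value in (6.22) can nevertheless be compared with a direct numerical evaluation of the integral. The sketch below assumes the Python library mpmath; since the bracketed expression is even in \(\alpha \), the limit is approximated by evaluating it at a small value of \(\alpha \), where the divergent gamma factors cancel between the two terms.

```python
# A minimal numerical check of (6.22), assuming mpmath. The limit is approximated
# by evaluating the bracketed expression at a small alpha (the error is O(alpha^2))
# and compared with direct quadrature of the integral.
from mpmath import mp, gamma, hyper, besselk, quad, inf

mp.dps = 30
alpha = mp.mpf('1e-6')                      # small value standing in for alpha -> 0

def term(a):
    return gamma(-a) * gamma((1 + a) / 2)**4 / gamma(1 + a) \
        * hyper([(1 + a) / 2] * 3, [1 + a, 1 + a / 2], mp.mpf(1) / 4)

limit_value = (term(alpha) + term(-alpha)) / 8
direct = quad(lambda x: besselk(0, x)**3, [0, inf])
print(limit_value, direct)                  # both are approximately 6.95
```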

Example 6.5

An argument similar to the one presented in the previous example yields

$$\begin{aligned}&\int _{0}^{\infty } K_{0}^{4}(x) \, dx \nonumber \\&\quad = \frac{\sqrt{\pi }}{16} \lim \limits _{\alpha \rightarrow 0} \left[ \frac{\Gamma ^{2}(-\alpha ) \Gamma ^{2} \left( \alpha + \tfrac{1}{2} \right) \Gamma \left( 2 \alpha + \tfrac{1}{2} \right) }{\Gamma (2 \alpha + 1)}\right. \nonumber \\&\qquad \times {}_{4}F_{3} \left( \genfrac{}{}{0.0pt}{}{ \tfrac{1}{2} \,\, \alpha + \tfrac{1}{2} \,\, \alpha + \tfrac{1}{2} \,\, 2 \alpha + \tfrac{1}{2}}{1+ \alpha \,\, 1+ \alpha \,\, 1+ 2 \alpha } \bigg | {1} \right) \nonumber \\&\qquad + 2 \sqrt{\pi } \Gamma (-\alpha ) \Gamma (\alpha ) \Gamma ( \tfrac{1}{2} + \alpha ) \Gamma ( \tfrac{1}{2} - \alpha )\nonumber \\&\qquad \times {}_{4}F_{3} \left( \genfrac{}{}{0.0pt}{}{ \tfrac{1}{2} \,\,\,\, \tfrac{1}{2} \,\,\,\, \tfrac{1}{2} + \alpha \,\,\,\, \tfrac{1}{2} - \alpha }{1 \,\,\,\, 1+ \alpha \,\,\,\, 1- \alpha } \bigg | {1} \right) \nonumber \\&\qquad + \frac{\Gamma ^{2}(\alpha ) \Gamma ^{2}( \tfrac{1}{2} - \alpha ) \Gamma ( \tfrac{1}{2} - 2 \alpha )}{\Gamma (1 -2 \alpha )}\nonumber \\&\left. \qquad \times {}_{4}F_{3} \left( \genfrac{}{}{0.0pt}{}{ \tfrac{1}{2} \,\,\,\, \tfrac{1}{2} - \alpha \,\,\,\, \tfrac{1}{2} - \alpha \,\,\,\, \tfrac{1}{2} - 2 \alpha }{1- \alpha \,\,\,\, 1 - \alpha \,\,\,\, 1- 2 \alpha } \bigg | {1} \right) \right] \end{aligned}$$
(6.23)

Details are omitted.

7 Mellin transforms of products

This section presents a method to evaluate the Mellin transform

$$\begin{aligned} I(\alpha ,b) = \int _{0}^{\infty } x^{\alpha -1} f(x) g(bx) \,dx, \end{aligned}$$
(7.1)

given a series for f of the form

$$\begin{aligned} f(x) = \sum _{n} \phi _{n} F(n)x^{\beta n} \end{aligned}$$
(7.2)

and the inverse Mellin transform

$$\begin{aligned} g(x) = \frac{1}{2 \pi i } \int _{\gamma }x^{-s} \varphi (s) \, ds. \end{aligned}$$
(7.3)

For \(\kappa > 0\), the change of variables \(s = \kappa s'\) gives

$$\begin{aligned} g(x) = \frac{1}{2 \pi i } \int _{- i \infty }^{i \infty }x^{-\kappa s} {\widetilde{\varphi }}(s) \, ds, \end{aligned}$$
(7.4)

where \({\widetilde{\varphi }}(s) = \kappa \varphi (\kappa s)\). The formula (7.4) is now written as

$$\begin{aligned} g(x) = \frac{1}{2 \pi i } \int _{\gamma }x^{-\kappa s} \varphi (s) \, ds, \end{aligned}$$
(7.5)

that is, the tilde notation is dropped and the parameter \(\kappa \) is kept.

Then

$$\begin{aligned} I= & {} \int _{0}^{\infty } x^{\alpha -1} f(x) g(bx) \, dx = \frac{1}{2 \pi i } \int _{\gamma } \varphi (s) b^{- \kappa s}\nonumber \\&\left[ \sum _{n} \phi _{n} F(n) \left[ \int _{0}^{\infty } x^{\alpha + \beta n - \kappa s -1} \, dx \right] \right] \, ds \nonumber \\= & {} \frac{1}{2 \pi i } \int _{\gamma } \varphi (s) b^{- \kappa s} \left[ \sum _{n} \phi _{n} F(n) \langle \alpha + \beta n - \kappa s \rangle \right] \, ds \nonumber \\= & {} \frac{1}{2 \pi i } \int _{\gamma } \frac{\varphi (s) b^{- \kappa s} }{| \beta |} \left[ \sum _{n} \phi _{n} F(n) \left\langle \frac{\alpha }{\beta } - \frac{\kappa s}{\beta } + n \right\rangle \right] \, ds \nonumber \\= & {} \frac{1}{ | \beta | } \frac{1}{2 \pi i } \int _{\gamma } \varphi (s) b^{-\kappa s} \Gamma \left( \frac{\alpha - \kappa s}{\beta } \right) F \left( \frac{\kappa s - \alpha }{\beta } \right) \, ds.\nonumber \\ \end{aligned}$$
(7.6)

This is stated as a theorem.

Theorem 7.1

Assume the function f(x) has an expansion given by

$$\begin{aligned} f(x) = \sum _{n} \phi _{n} F(n) x^{\beta n} \end{aligned}$$
(7.7)

and the function g(x) is given by a rescaled version of the inverse Mellin transform

$$\begin{aligned} g(x) = \frac{1}{2 \pi i } \int _{\gamma } \varphi (s) x^{-\kappa s} \, ds. \end{aligned}$$
(7.8)

Then

$$\begin{aligned}&\int _{0}^{\infty } x^{\alpha - 1} f(x) g(bx) \, dx \\&\quad = \frac{1}{ | \beta | } \frac{1}{2 \pi i } \int _{\gamma } \varphi (s) \Gamma \left( \frac{\alpha - \kappa s}{\beta } \right) F \left( \frac{\kappa s - \alpha }{\beta } \right) b^{- \kappa s} \, ds. \end{aligned}$$
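A numerical illustration of Theorem 7.1 is sketched below, assuming the Python library mpmath. The choices \(f(x) = e^{-x}\) (so that \(F(n) = 1\) and \(\beta = 1\)), \(g = K_{0}\) with the representation (7.15) (so that \(\kappa = 2\) and \(\varphi (s) = 2^{2s} \, \Gamma ^{2}(s)/2\)), as well as the values of \(\alpha \), b and the contour abscissa, are illustrative assumptions.

```python
# A sketch of Theorem 7.1, assuming mpmath: the left-hand side is computed by direct
# quadrature and the right-hand side as a line integral along Re(s) = c, with
# f(x) = exp(-x) (F(n) = 1, beta = 1) and g = K_0 (kappa = 2, phi(s) = 2^(2s) Gamma(s)^2 / 2).
from mpmath import mp, mpc, gamma, besselk, exp, quad, inf, pi

mp.dps = 25
alpha, b, c = mp.mpf('1.5'), mp.mpf('0.7'), mp.mpf('0.5')    # need 0 < c < alpha/2

lhs = quad(lambda x: x**(alpha - 1) * exp(-x) * besselk(0, b * x), [0, inf])

def integrand(t):
    s = mpc(c, t)                                 # point on the contour gamma
    phi = gamma(s)**2 * mp.mpf(4)**s / 2          # phi(s) for K_0
    return phi * gamma(alpha - 2 * s) * b**(-2 * s)   # F = 1, beta = 1, kappa = 2

rhs = quad(integrand, [-inf, inf]) / (2 * pi)     # ds = i dt cancels the i in 1/(2 pi i)
print(lhs, rhs.real)                              # the two values coincide
```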

Example 7.2

Entry 6.532.4 in [19] states that

$$\begin{aligned} \int _{0}^{\infty } \frac{x \, J_{0}(Ax) \, dx}{x^{2}+k^{2}} = K_{0}(Ak). \end{aligned}$$
(7.9)

Theorem 7.1 is now used to establish this evaluation.

Start with the expansion

$$\begin{aligned} f(x) = \frac{1}{x^{2}+k^{2}} = \sum _{n} \phi _{n} \left[ \Gamma (n+1) k^{-2n-2} \right] x^{2n}. \end{aligned}$$
(7.10)

This gives (7.7) with \(F(n) = \Gamma (n+1)k^{-2n-2}\) and \(\beta = 2\).

Now use [22, 10.9.23]

$$\begin{aligned} J_{\nu }(z) = \frac{1}{2 \pi i } \int _{\gamma } \frac{\Gamma (t) }{\Gamma (\nu -t+1)} \left( \frac{z}{2} \right) ^{\nu - 2t} \, dt, \end{aligned}$$
(7.11)

and replace z by Ax to produce the desired representation of the Bessel function:

$$\begin{aligned} J_{0}(Ax) = \frac{1}{2 \pi i } \int _{\gamma } \frac{2^{2s} \, \Gamma (s) }{(Ax)^{2s} \, \Gamma (1-s)} \, ds. \end{aligned}$$
(7.12)

In the notation of (7.8), the parameters are \(\kappa =2, \, \beta = 2, \, \alpha = 2, \, b=A\) and \(\varphi (s) = \frac{2^{2s} \, \Gamma (s)}{\Gamma (1-s)} \). Theorem 7.1 now gives

$$\begin{aligned} \int _{0}^{\infty } \frac{x J_{0}(Ax) }{x^{2}+k^{2}} \, dx = \frac{1}{4 \pi i } \int _{ \gamma } \Gamma ^{2}(s) \left( \frac{Ak}{2} \right) ^{-2s} \, ds. \end{aligned}$$
(7.13)

Now recall the formula [22, 10.32.13]

$$\begin{aligned} K_{\nu }(z) = \frac{1}{4 \pi i } \left( \frac{z}{2} \right) ^{\nu } \int _{\gamma } \Gamma (t) \Gamma (t- \nu ) \left( \frac{z}{2} \right) ^{-2t} \, dt, \end{aligned}$$
(7.14)

which, in particular for \(\nu = 0\), becomes

$$\begin{aligned} K_{0}(z) = \frac{1}{4 \pi i } \int _{\gamma } \Gamma ^{2}(t) \left( \frac{z}{2} \right) ^{-2t} \, dt. \end{aligned}$$
(7.15)

Formula (7.9) now follows from (7.13) and (7.15).
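The contour integrals (7.13) and (7.15) can also be checked numerically. The sketch below assumes the Python library mpmath and sample values of A and k; it evaluates the Mellin–Barnes integral along a vertical contour and compares it with \(K_{0}(Ak)\).

```python
# A numerical check of (7.13)/(7.15), assuming mpmath: the line integral
# (1/(4 pi i)) \int Gamma(s)^2 (Ak/2)^{-2s} ds along Re(s) = 1/2 is compared with K_0(Ak).
from mpmath import mp, mpc, gamma, besselk, quad, inf, pi

mp.dps = 25
A, k = mp.mpf(2), mp.mpf('0.75')            # sample parameters
c = mp.mpf('0.5')                           # contour abscissa, to the right of s = 0

def integrand(t):
    s = mpc(c, t)                           # point on the contour gamma: s = c + i t
    return gamma(s)**2 * (A * k / 2)**(-2 * s)

mb_value = quad(integrand, [-inf, inf]) / (4 * pi)   # ds = i dt cancels the i in 1/(4 pi i)
print(mb_value.real, besselk(0, A * k))              # the two values agree
```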

Example 7.3

The next evaluation is

$$\begin{aligned} I = \int _{0}^{\infty } x K_{\nu }(ax) J_{\nu }(bx) \, dx = \frac{b^{\nu }}{a^{\nu }(a^{2}+b^{2})}. \end{aligned}$$
(7.16)

This is entry 6.521.2 in [19].

Formulas (5.24) and (5.46) give, respectively, the representations

$$\begin{aligned} K_{\nu }(z) = \frac{1}{4 \pi i } \int _{\gamma } \Gamma (s) \Gamma (s-\nu ) \, \left( \frac{z}{2} \right) ^{-2s+\nu } \, ds \end{aligned}$$
(7.17)

and

$$\begin{aligned} J_{\nu }(z) = \frac{1}{2 \pi i} \int _{\gamma } \frac{\Gamma (-s)}{\Gamma (s + \nu +1)} \left( \frac{z}{2} \right) ^{2s+\nu } \, ds. \end{aligned}$$
(7.18)

Replacing these representations in (7.16) and recognizing the x-integral as a bracket yields

$$\begin{aligned} I= & {} \frac{1}{2 (2 \pi i)^{2}} \left( \frac{ab}{4} \right) ^{\nu } \int _{\gamma _{1}} \int _{\gamma _{2}} \frac{\Gamma (-t) \Gamma (s) \Gamma (s- \nu ) }{\Gamma (t + \nu + 1) }\nonumber \\&\times \left( \frac{a}{2} \right) ^{- 2s} \left( \frac{b}{2} \right) ^{2t} \frac{1}{2} \langle 1 + \nu + t - s \rangle \, ds \, dt. \end{aligned}$$
(7.19)

The bracket is now used to eliminate the s-integral; the result is

$$\begin{aligned} I = \frac{1}{a^{2}} \left( \frac{b}{a} \right) ^{\nu } \frac{1}{2 \pi i } \int _{\gamma } \Gamma (-t) \Gamma (1+t) \left( \frac{b^{2}}{a^{2}} \right) ^{t} \, dt. \end{aligned}$$
(7.20)

The gamma factors are expanded as bracket series using (4.1), and the line integral is then eliminated with one of the brackets to obtain

$$\begin{aligned} I= & {} \frac{b^{ \nu }}{a^{ \nu +2}} \frac{1}{2 \pi i } \int _{\gamma } \sum _{n_{1}, n_{2}} \left( \frac{b^{2}}{a^{2}} \right) ^{t} \phi _{n_{1}n_{2}} \langle n_{1} - t \rangle \langle n_{2} + 1 + t \rangle \, dt \nonumber \\= & {} \frac{b^{\nu }}{a^{\nu +2}} \sum _{n_{1},n_{2}} \phi _{n_{1}n_{2}} \left( \frac{b^{2}}{a^{2}} \right) ^{n_{1}} \langle n_{2}+n_{1}+1 \rangle . \end{aligned}$$
(7.21)

The method of brackets now gives two different expressions, obtained by using \(n_{1}\) or \(n_{2}\) as the free index of summation. These options give series converging in the disjoint regions \(|b|< |a|\) and \(|b|> |a|\). The rules of the method of brackets show that each sum gives the value of the integral in the corresponding region. In this case, both choices give the same expression

$$\begin{aligned} I = \frac{b^{\nu }}{a^{\nu }(a^{2}+b^{2})}, \end{aligned}$$
(7.22)

for the value of the integral. This confirms (7.16).
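A direct numerical verification of (7.16) is immediate, since the factor \(K_{\nu }(ax)\) makes the integrand decay exponentially. A sketch assuming the Python library mpmath, with illustrative values of a, b and \(\nu \), is given next.

```python
# A quick numerical check of (7.16), assuming mpmath; the parameter values are illustrative.
from mpmath import mp, besselk, besselj, quad, inf

mp.dps = 25
a, b, nu = mp.mpf('1.5'), mp.mpf('0.7'), mp.mpf(1) / 3

lhs = quad(lambda x: x * besselk(nu, a * x) * besselj(nu, b * x), [0, inf])
rhs = b**nu / (a**nu * (a**2 + b**2))
print(lhs, rhs)   # both evaluate to the same number
```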

8 An integral involving exponentials and Bessel modified functions

The method developed in this work is now applied to the computation of some definite integrals. The idea is relatively simple: given a function f(x) with a Mellin transform \(\varphi (s)\) containing gamma factors, the integral

$$\begin{aligned} F = \int _{0}^{\infty } e^{-x} f(x) \, dx \end{aligned}$$
(8.1)

can be evaluated by writing the exponential as

$$\begin{aligned} e^{-x} = \sum _{n} \phi _{n}x^{n} \end{aligned}$$
(8.2)

and then using the method of brackets. The examples below illustrate this process.

Integrals involving the Bessel function \(K_{\nu }(x)\) have been presented in [11]. The computation there is based on the concept of totally null/divergent representations. The first type includes

$$\begin{aligned} K_{0}(x) = \frac{1}{x} \sum _{n=0}^{\infty } \frac{\Gamma ^{2}(n + \tfrac{1}{2})}{n! \, \Gamma (-n)} \left( - \frac{4}{x^{2}} \right) ^{n} \end{aligned}$$
(8.3)

in which every term vanishes. There is a similar expression for a series in which every term diverges. In spite of the lack of rigor, these series have proved useful in the evaluation of definite integrals. See [11] for details. A second technique used previously starts from an integral representation of \(K_{\nu }(x)\), such as

$$\begin{aligned} K_{\nu }(x) = \frac{2^{\nu } \Gamma \left( \nu + \tfrac{1}{2} \right) }{\Gamma (\tfrac{1}{2})} x^{\nu } \int _{0}^{\infty } \frac{\cos t \, dt}{(x^{2}+t^{2})^{\nu +1/2}}, \end{aligned}$$
(8.4)

and then applies the method of brackets. The method presented next is an improvement over those employed in [11].

Example 8.1

Consider the integral

$$\begin{aligned} F(b,\nu ) = \int _{0}^{\infty } e^{-x} K_{\nu }(bx) \, dx. \end{aligned}$$
(8.5)

Naturally the parameter b can be scaled out, but it is instructive to leave it as is.

Write the exponential function as in (8.2) and the Bessel function from (5.24) as

$$\begin{aligned} K_{\nu }(bx) = \frac{1}{4 \pi i } \left( \frac{bx}{2} \right) ^{\nu } \int _{\gamma } \Gamma (s) \Gamma (s- \nu ) \left( \frac{bx}{2} \right) ^{-2s} \, ds,\nonumber \\ \end{aligned}$$
(8.6)

where the contour \(\gamma \) is a vertical line to the right of the poles at \(s=0\) and \(s = \nu \). Then

$$\begin{aligned} F(b,\nu )= & {} \frac{1}{2} \sum _{n_{1}} \phi _{n_{1}} \frac{1}{2 \pi i } \int _{\gamma } \left( \frac{b}{2} \right) ^{\nu - 2s}\nonumber \\&\times \Gamma (s) \Gamma (s- \nu ) \left( \int _{0}^{\infty } x^{n_{1}-2s + \nu } \, dx \right) \, ds \\= & {} \frac{1}{2} \sum _{n_{1}} \phi _{n_{1}} \frac{1}{2 \pi i } \int _{\gamma } \left( \frac{b}{2} \right) ^{\nu - 2s} \nonumber \\&\times \Gamma (s) \Gamma (s- \nu ) \langle n_{1} - 2 s + \nu +1 \rangle \, ds \end{aligned}$$

Now replace the gamma factors by their bracket expansions as in (4.1) to produce

$$\begin{aligned} F(b,\nu )= & {} \frac{1}{2} \sum _{n_{1},n_{2},n_{3}} \phi _{123} \frac{1}{2 \pi i } \int _{\gamma } \left( \frac{b}{2} \right) ^{\nu - 2s} \nonumber \\&\times \langle s + n_{2} \rangle \, \langle s - \nu + n_{3} \rangle \, \langle n_{1} - 2s + \nu + 1 \rangle \, ds \end{aligned}$$

Using the bracket \(\langle s + n_{2} \rangle \) to apply Rule 3.1 gives a bracket series for the integral \(F(b,\nu )\):

$$\begin{aligned} F(b,\nu )= & {} \frac{1}{2} \sum _{\vec {n}} \phi _{123} \left( \frac{b}{2} \right) ^{\nu +2n_{2}} \nonumber \\&\times \langle -n_{2} -\nu + n_{3} \rangle \langle n_{1} + 2 n_{2} + \nu + 1 \rangle , \end{aligned}$$
(8.7)

where \(\vec {n} = \{n_{1},n_{2},n_{3}\}\).

The method of brackets is now used to produce three sums for (8.7):

$$\begin{aligned} T_{1}= & {} \frac{1}{4} \sum _{n=0}^{\infty } \frac{(-1)^{n}}{n!} \Gamma \left( \frac{1 - \nu +n}{2} \right) \nonumber \\&\Gamma \left( \frac{1+ \nu + n}{2} \right) \left( \frac{b}{2} \right) ^{-n-1} \end{aligned}$$
(8.8)
$$\begin{aligned} T_{2}= & {} \frac{1}{2} \sum _{n=0}^{\infty } \frac{(-1)^{n}}{n!} \Gamma (-n-\nu ) \Gamma (1+2n+\nu ) \left( \frac{b}{2} \right) ^{\nu + 2n} \end{aligned}$$
(8.9)
$$\begin{aligned} T_{3}= & {} \frac{1}{2} \sum _{n=0}^{\infty } \frac{(-1)^{n}}{n!} \Gamma (\nu -n) \Gamma (1+2n - \nu ) \left( \frac{b}{2} \right) ^{-\nu +2n} \end{aligned}$$
(8.10)

Observe that \(T_{1}\) diverges since it is of type \({_{2}}F_{0}\) (this sum plays a role in the asymptotic study of the integral when \(b \rightarrow \infty \), not described here) and the series \(T_{2}, \, T_{3}\) converge when \(|b|<1\). The method of brackets now states that \(F(b,\nu ) = T_{2}+T_{3}\), when \(|b| <1\). Under this assumption and since \(T_{3}(b,\nu ) = T_{2}(b, - \nu )\), it suffices to obtain an expression for \(T_{2}(b, \nu )\). This is the next result.
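The statement \(F(b,\nu ) = T_{2} + T_{3}\) for \(|b| <1\) can be tested numerically before proceeding. The sketch below assumes the Python library mpmath and sample values of \(\nu \) and b; it sums the series (8.9) and (8.10) and compares the result with a direct quadrature of (8.5).

```python
# A numerical check, assuming mpmath, that the convergent series (8.9) and (8.10)
# reproduce the integral F(b, nu) of (8.5) when |b| < 1; nu and b are sample values.
from mpmath import mp, gamma, besselk, exp, nsum, quad, inf

mp.dps = 25
nu, b = mp.mpf('0.3'), mp.mpf('0.6')

def T2(nu):      # the series (8.9); (8.10) is obtained by nu -> -nu
    return nsum(lambda n: (-1)**int(n) * gamma(-n - nu) * gamma(1 + 2*n + nu)
                * (b / 2)**(nu + 2*n) / gamma(n + 1), [0, inf]) / 2

F_series = T2(nu) + T2(-nu)
F_direct = quad(lambda x: exp(-x) * besselk(nu, b * x), [0, inf])
print(F_series, F_direct)   # the two values agree
```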

Proposition 8.2

The function \(T_{2}(b, \nu )\) is given by

$$\begin{aligned} T_{2}(b,\nu ) = - \frac{\pi }{2 \sin (\pi \nu )} \frac{b^{\nu } (1 + \sqrt{1 - b^{2}})^{- \nu }}{\sqrt{1-b^{2}}}. \end{aligned}$$
(8.11)

Proof

The proof is divided into a sequence of steps.

Step 1. The function \(T_{2}(b,\nu )\) is given by

$$\begin{aligned} T_{2}(b, \nu ) = - \frac{\pi b^{\nu }}{2^{\nu +1} \sin (\pi \nu )} \sum _{n=0}^{\infty } \frac{\left( \frac{1+\nu }{2} \right) _{n} \, \left( 1 + \frac{\nu }{2} \right) _{n}}{(1+\nu )_{n}} \, \frac{b^{2n}}{n!}.\nonumber \\ \end{aligned}$$
(8.12)

Proof. In the definition of \(T_{2}(b,\nu )\) use \((w)_{2n} = 2^{2n} \left( \frac{w}{2} \right) _{n} \left( \frac{w+1}{2} \right) _{n},\)

$$\begin{aligned} \Gamma (-n-\nu )= & {} \frac{(-1)^{n} \Gamma (-\nu )}{(1+\nu )_{n}}, \,\,\\ \Gamma (1+\nu + 2n)= & {} \Gamma (1+\nu ) 2^{2n} \left( \frac{1+\nu }{2} \right) _{n} \left( 1 + \frac{\nu }{2} \right) _{n} \end{aligned}$$

and \(\Gamma (-\nu ) \Gamma (1+\nu ) = - \pi /\sin (\pi \nu )\), to obtain the result.

This identity shows that the statement of the Proposition is equivalent to proving

$$\begin{aligned} \frac{1}{2^{\nu }} \sum _{n=0}^{\infty } \frac{\left( \frac{1+\nu }{2} \right) _{n} \, \left( 1 + \frac{\nu }{2} \right) _{n}}{(1+\nu )_{n}} \frac{b^{2n}}{n!} = \frac{(1+ \sqrt{1-b^{2}} \,\, )^{-\nu }}{\sqrt{1-b^{2}}}. \end{aligned}$$
(8.13)

Step 2. Formula (2.5.16) in H. Wilf [28]

$$\begin{aligned} \left( \frac{1 - \sqrt{1-4x} }{2x} \right) ^{k} = \sum _{n=0}^{\infty } \frac{k \, (2n+k-1)!}{n! \, (n+k)!} x^{n} \end{aligned}$$
(8.14)

and the identity \((1+ \sqrt{1-b^{2}})(1 - \sqrt{1- b^{2}}) = b^{2}\) shows, after some elementary simplifications, that the statement of the Proposition is equivalent to

$$\begin{aligned}&\sum _{n=0}^{\infty } \frac{\left( \frac{1+\nu }{2} \right) _{n} \, \left( 1 + \frac{\nu }{2} \right) _{n}}{(1+\nu )_{n}} \frac{c^{n}}{n!} \nonumber \\&\quad = \frac{1}{\sqrt{1-c}} \sum _{n=0}^{\infty } \frac{\left( \frac{\nu }{2} \right) _{n} \, \left( \frac{1+\nu }{2} \right) _{n}}{(1+\nu )_{n}} \frac{c^{n}}{n!}, \end{aligned}$$
(8.15)

where \(c = b^{2}\).

Step 3. The identity at the end of Step 2 is equivalent to

$$\begin{aligned} {}_{2}F_{1} \left( \genfrac{}{}{0.0pt}{}{\frac{1+\nu }{2} \,\, 1 + \frac{\nu }{2}}{1+ \nu } \bigg | {u} \right)= & {} {}_{1}F_{0} \left( \genfrac{}{}{0.0pt}{}{\tfrac{1}{2}}{-} \bigg | {u} \right) {}_{2}F_{1} \left( \genfrac{}{}{0.0pt}{}{\frac{1+\nu }{2} \,\, \frac{\nu }{2}}{1+ \nu } \bigg | {u} \right) .\nonumber \\ \end{aligned}$$
(8.16)

Proof. Simply use the binomial theorem

$$\begin{aligned} (1 - t)^{-\alpha } = \sum _{n=0}^{\infty } \frac{(\alpha )_{n}}{n!} t^{n}. \end{aligned}$$
(8.17)

with \(\alpha = 1/2\).

Step 4. The conclusion of the Proposition is equivalent to the identity

$$\begin{aligned}&\sum _{j=0}^{n} \frac{\left( {\begin{array}{c}n\\ j\end{array}}\right) }{(1+ \nu )_{n-j}} \left( \frac{1}{2} \right) _{j} \left( \frac{1+ \nu }{2} \right) _{n-j} \left( \frac{\nu }{2} \right) _{n-j} \nonumber \\&\quad = \frac{\left( \frac{1+\nu }{2} \right) _{n} \, \left( 1 + \frac{\nu }{2} \right) _{n}}{(1+\nu )_{n}}, \end{aligned}$$
(8.18)

for every \(n \in {\mathbb {N}}\). \(\square \)

Proof

The umbral method [9, 24] shows that the identity is equivalent to the one formed by replacing Pochhammer symbols \((a)_{k}\) by \(a^{k}\). In this case, (8.18) becomes

$$\begin{aligned} \sum _{j=0}^{n} \left( {\begin{array}{c}n\\ j\end{array}}\right) 2^{j} \nu ^{n-j} = (\nu +2)^{n}, \end{aligned}$$
(8.19)

and this follows from the binomial theorem. \(\square \)
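The identity (8.18) is also easy to confirm symbolically for small values of n. A short sketch assuming the Python library sympy is shown below.

```python
# A symbolic check of (8.18) for n = 0, ..., 5, assuming sympy;
# rf denotes the rising factorial (Pochhammer symbol).
from sympy import symbols, rf, binomial, Rational, simplify

nu = symbols('nu')
for n in range(6):
    lhs = sum(binomial(n, j) * rf(Rational(1, 2), j)
              * rf((1 + nu) / 2, n - j) * rf(nu / 2, n - j) / rf(1 + nu, n - j)
              for j in range(n + 1))
    rhs = rf((1 + nu) / 2, n) * rf(1 + nu / 2, n) / rf(1 + nu, n)
    assert simplify(lhs - rhs) == 0       # the identity holds for this n
print("identity (8.18) verified for n = 0, ..., 5")
```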

Proof

An alternative proof of (8.18) is presented next. Expressing the Pochhammer symbols in terms of binomial coefficients, it is routine to check that the desired identity is equivalent to

$$\begin{aligned} \sum _{j=0}^{n} \left( {\begin{array}{c}2j\\ j\end{array}}\right) \left( {\begin{array}{c}\nu +2(n-j)\\ n-j\end{array}}\right) \frac{\nu }{\nu +2(n-j)} = \left( {\begin{array}{c}\nu +2n\\ n\end{array}}\right) . \nonumber \\ \end{aligned}$$
(8.20)

The left-hand side of this identity is the coefficient of \(x^{n}\) in the product of the two series

$$\begin{aligned} A(x)= & {} \sum _{j=0}^{\infty } \left( {\begin{array}{c}2j\\ j\end{array}}\right) x^{j} \quad \text {and} \quad \nonumber \\ B(x)= & {} \sum _{k=0}^{\infty } \nu \left( {\begin{array}{c}\nu +2k\\ k\end{array}}\right) \frac{x^{k}}{\nu +2k}. \end{aligned}$$
(8.21)

The first sum is given by the binomial theorem as

$$\begin{aligned} A(x) = \frac{1}{\sqrt{1-4x}}. \end{aligned}$$
(8.22)

To obtain an analytic expression for B(x) start with entry 2.5.15 in [28]

$$\begin{aligned} \frac{1}{\sqrt{1-4x}} \left( \frac{1 - \sqrt{1-4x}}{2x} \right) ^{\nu } = \sum _{k=0}^{\infty } \left( {\begin{array}{c}\nu +2k\\ k\end{array}}\right) x^{k} \end{aligned}$$
(8.23)

where the factor raised to the power \(\nu \) is the generating function of the Catalan numbers. Some elementary manipulations give

$$\begin{aligned} B(x) = \nu x^{-\nu /2} \int _{0}^{\sqrt{x}} \frac{t^{\nu -1}}{\sqrt{1-4t^{2}}} \left( \frac{1 - \sqrt{1-4t^{2}}}{2t^{2}} \right) ^{\nu } \!\! dt.\nonumber \\ \end{aligned}$$
(8.24)

Then (8.20) is equivalent to the identity

$$\begin{aligned}&\int _{0}^{\sqrt{x}} \frac{t^{\nu -1}}{\sqrt{1-4t^{2}}} \left( \frac{1 - \sqrt{1-4t^{2}}}{2t^{2}} \right) ^{\nu } \, dt \nonumber \\&\quad = \frac{1}{\nu } x^{\nu /2} \left( \frac{1-\sqrt{1- 4x}}{2x} \right) ^{\nu }. \end{aligned}$$
(8.25)

This can now be verified by observing that both sides vanish at \(x=0\) and a direct computation shows that the derivatives match. \(\square \)
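Equation (8.25) can also be confirmed numerically. The sketch below assumes the Python library mpmath, uses sample values of \(\nu \) and x with \(4x < 1\), and rewrites \((1-\sqrt{1-4t^{2}})/(2t^{2})\) as \(2/(1+\sqrt{1-4t^{2}})\) to avoid cancellation near \(t = 0\).

```python
# A numerical check of (8.25), assuming mpmath; nu and x are sample values with 4x < 1.
from mpmath import mp, sqrt, quad

mp.dps = 25
nu, x = mp.mpf('0.7'), mp.mpf('0.15')

def integrand(t):
    r = sqrt(1 - 4 * t**2)
    # (1 - r)/(2 t^2) = 2/(1 + r), since (1 - r)(1 + r) = 4 t^2
    return t**(nu - 1) / r * (2 / (1 + r))**nu

lhs = quad(integrand, [0, sqrt(x)])
rhs = x**(nu / 2) * ((1 - sqrt(1 - 4 * x)) / (2 * x))**nu / nu
print(lhs, rhs)   # the two sides agree
```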

The value of the integral (8.5) is stated next. It appears as entry 6.611.3 in [19].

Corollary 8.1

For \(\nu , \, b \in {\mathbb {R}}\)

$$\begin{aligned}&\int _{0}^{\infty } e^{-x} K_{\nu }(bx) \, dx = \frac{\pi }{2 b^{\nu } \sin (\pi \nu )} \nonumber \\&\quad \times \left[ \frac{(1+ \sqrt{1-b^{2}})^{\nu } - (1 - \sqrt{1 - b^{2}})^{\nu }}{\sqrt{1-b^{2}}} \right] . \end{aligned}$$
(8.26)

In particular, as \(b \rightarrow 1\),

$$\begin{aligned}&\int _{0}^{\infty } e^{-x} K_{\nu }(x) \, dx \nonumber \\&\quad = \frac{\pi \nu }{ \sin (\pi \nu )}, \end{aligned}$$
(8.27)

and as \(\nu \rightarrow 0\),

$$\begin{aligned}&\int _{0}^{\infty } e^{-x}K_{0}(bx) \, dx \nonumber \\&\quad = \frac{\log (1+\sqrt{1-b^{2}}) - \log (1-\sqrt{1-b^2})}{2 \sqrt{1-b^2}}. \end{aligned}$$
(8.28)

Finally, letting \(\nu \rightarrow 0\) and \(b \rightarrow 1\) gives

$$\begin{aligned} \int _{0}^{\infty } e^{-x} K_{0}(x) \, dx = 1. \end{aligned}$$
(8.29)
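The evaluations in Corollary 8.1 are easily tested numerically. A sketch assuming the Python library mpmath, with sample values \(0< b < 1\) and non-integer \(\nu \), is shown below for (8.26).

```python
# A numerical check of (8.26), assuming mpmath; nu and b are sample values with 0 < b < 1.
from mpmath import mp, besselk, exp, sqrt, sin, quad, inf, pi

mp.dps = 25
nu, b = mp.mpf('0.45'), mp.mpf('0.35')

lhs = quad(lambda x: exp(-x) * besselk(nu, b * x), [0, inf])
r = sqrt(1 - b**2)
rhs = pi / (2 * b**nu * sin(pi * nu)) * ((1 + r)**nu - (1 - r)**nu) / r
print(lhs, rhs)   # both give the same value
```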

9 Flexibility of the method of brackets

This section illustrates the flexibility of the method of brackets. To achieve this goal, we present four different evaluations of the integral

$$\begin{aligned} I(a,b) = \int _{0}^{\infty } \frac{e^{-a^{2}x^{2}} \, dx}{x^{2} + b^{2}}. \end{aligned}$$
(9.1)

The parameters \(a,\, b\) are assumed to be real and positive. This formula appears as entry 3.466.1 in [19] with value

$$\begin{aligned} I(a,b) = \frac{\pi }{2b} e^{a^{2}b^{2} }( 1 - \text {Erf}(ab) ) \end{aligned}$$
(9.2)

and the reader will find an elementary proof of it in [1].

Method 1. Start with the bracket series representations

$$\begin{aligned} \exp (- a^{2} x^{2})= & {} \sum _{n_{1}} \phi _{n_{1}} a^{2n_{1}} x^{2n_{1}} \nonumber \\ \frac{1}{x^{2}+b^{2}}= & {} \sum _{n_{2}} \sum _{n_{3}} \phi _{n_{2}n_{3}} b^{2n_{2}} x^{2n_{3}} \langle 1 + n_{2} + n_{3} \rangle \end{aligned}$$
(9.3)

and produce the bracket series

$$\begin{aligned} I(a,b)= & {} \sum _{n_{1}} \sum _{n_{2}} \sum _{n_{3}} \phi _{n_{1}n_{2}n_{3}} a^{2n_{1}} b^{2n_{2}} \nonumber \\&\times \langle 1+ n_{2} + n_{3} \rangle \, \langle 2n_{1} + 2n_{3} + 1 \rangle . \end{aligned}$$
(9.4)

Method 2. The second form begins with the Mellin–Barnes representations

$$\begin{aligned} \exp (-a^{2} x^{2} )= & {} \frac{1}{4 \pi i } \int _{\gamma } x^{-t} a^{-t} \Gamma \left( \frac{t}{2} \right) \, dt \nonumber \\ \frac{1}{x^{2}+b^{2}}= & {} \frac{1}{4 \pi b^{2} i} \int _{\gamma } x^{-s} b^{s} \Gamma \left( \frac{s}{2} \right) \Gamma \left( 1 - \frac{s}{2} \right) \, ds\nonumber \\ \end{aligned}$$
(9.5)

Replacing these in (9.1), a bracket appears and one obtains the representation

$$\begin{aligned} I(a,b)= & {} \frac{1}{4b^{2} (2 \pi i )^{2}} \int _{\gamma _{1}} \int _{\gamma _{2}} a^{-t} b^{s} \Gamma \left( \tfrac{t}{2} \right) \nonumber \\&\times \Gamma \left( \tfrac{s}{2} \right) \Gamma \left( 1 - \tfrac{s}{2} \right) \langle 1 - t - s \rangle \, ds \, dt. \end{aligned}$$
(9.6)

The bracket is used to evaluate the s-integral to produce \(s^{*} = 1-t\) and the expression

$$\begin{aligned} I(a,b) = \frac{1}{4b^{2} (2 \pi i )} \int _{\gamma } a^{-t} b^{1-t} \Gamma \left( \tfrac{t}{2} \right) \Gamma \left( \tfrac{1-t}{2} \right) \Gamma \left( \tfrac{1+t}{2} \right) \, dt.\nonumber \\ \end{aligned}$$
(9.7)

The next step is to express the gamma factors in the numerator as bracket series to obtain

$$\begin{aligned} I(a,b)= & {} \frac{1}{4b^{2} (2 \pi i )} \sum _{n_{1}} \sum _{n_{2}} \sum _{n_{3}} \phi _{n_{1} n_{2} n_{3}} \int _{\gamma } a^{-t} b^{1-t} \langle \tfrac{t}{2} + n_{1} \rangle \nonumber \\&\times \langle \tfrac{1}{2} - \tfrac{t}{2} + n_{2} \rangle \langle \tfrac{1}{2} + \tfrac{t}{2} + n_{3} \rangle \, dt \end{aligned}$$

and selecting, for instance, the bracket in \(n_{1}\) to eliminate the integral and using

$$\begin{aligned} \left\langle \tfrac{t}{2} + n_{1} \right\rangle = 2 \langle t + 2 n_{1} \rangle \end{aligned}$$
(9.8)

gives

$$\begin{aligned} I(a,b)= & {} \frac{1}{2b} \sum _{n_{1}} \sum _{n_{2}} \sum _{n_{3}} \phi _{n_{1}n_{2}n_{3}} a^{2n_{1}} b^{2n_{1}} \langle \tfrac{1}{2} + n_{1} \nonumber \\&+ n_{2} \rangle \langle \tfrac{1}{2} - n_{1} + n_{3} \rangle . \end{aligned}$$
(9.9)

Method 3. The next way to evaluate the integral (9.1) is a mixture of the previous two. Using

$$\begin{aligned} \frac{1}{x^{2} + b^{2}}= & {} \sum _{n_{1}} \sum _{n_{2}} \phi _{n_{1}n_{2}} b^{2n_{1}} x^{2n_{2}} \langle 1 + n_{1} + n_{2} \rangle \nonumber \\ \exp (-a^{2}x^{2})= & {} \frac{1}{4 \pi i } \int _{\gamma } x^{-t} a^{-t} \Gamma \left( \tfrac{t}{2} \right) \, dt \end{aligned}$$
(9.10)

and the usual procedure gives

$$\begin{aligned} I(a,b)= & {} \frac{1}{4 \pi i } \sum _{n_{1}} \sum _{n_{2}} \phi _{n_{1}n_{2}} b^{2n_{1}} \langle 1+ n_{1} + n_{2} \rangle \nonumber \\&\times \int _{\gamma } a^{-t} \Gamma \left( \tfrac{t}{2} \right) \langle 2n_{2} - t +1 \rangle \, dt \end{aligned}$$

Now use the bracket in \(n_{2}\) to eliminate the integral and produce

$$\begin{aligned} I(a,b)= & {} \frac{1}{2} \sum _{n_{1}} \sum _{n_{2}} \phi _{n_{1}n_{2}} a^{-2 n_{2}-1} b^{2n_{1}} \langle 1+ n_{1} + n_{2} \rangle \nonumber \\&\Gamma \left( n_{2} + \tfrac{1}{2} \right) \end{aligned}$$
(9.11)

and replacing the gamma factor by its bracket series yields

$$\begin{aligned} I(a,b)= & {} \frac{1}{2} \sum _{n_{1}} \sum _{n_{2}} \sum _{n_{3}} \phi _{n_{1}n_{2}n_{3}} b^{2n_{1}} a^{-2n_{2}-1} \nonumber \\&\times \langle 1+ n_{1} + n_{2} \rangle \langle \tfrac{1}{2} + n_{2} + n_{3} \rangle . \end{aligned}$$
(9.12)

Method 4. The final form uses the representations

$$\begin{aligned} \frac{1}{x^{2} + b^{2}}= & {} \frac{1}{4 \pi b^{2}i} \int _{\gamma } x^{-s} b^{s} \Gamma \left( \frac{s}{2} \right) \Gamma \left( 1 - \frac{s}{2} \right) \, ds \nonumber \\ \exp (- a^{2}x^{2} )= & {} \sum _{n_{1}} \phi _{n_{1}} a^{2n_{1}} x^{2n_{1}} \end{aligned}$$
(9.13)

to write

$$\begin{aligned} I(a,b)= & {} \frac{1}{4 b^{2} \pi i } \sum _{n_{1}} \phi _{n_{1}} a^{2n_{1}} \nonumber \\&\times \int _{\gamma } b^{s} \Gamma \left( \tfrac{s}{2} \right) \Gamma \left( 1 - \tfrac{s}{2} \right) \langle 2n_{1} - s + 1 \rangle \, ds, \end{aligned}$$
(9.14)

where the last bracket comes from the original integral. Now write the gamma factors as bracket series to produce

$$\begin{aligned} I(a,b)= & {} \frac{1}{2b^{2}(2 \pi i )} \sum _{n_{1}} \sum _{n_{2}} \sum _{n_{3}} \phi _{n_{1} n_{2} n_{3}} a^{2n_{1}} \nonumber \\&\times \left[ \int _{\gamma } b^{s} \langle \tfrac{s}{2} + n_{2} \rangle \langle 1 - \tfrac{s}{2} + n_{3} \rangle \langle 2n_{1} - s +1 \rangle \, ds \right] , \end{aligned}$$

and using the \(n_{2}\)-bracket to eliminate the integral yields

$$\begin{aligned} I(a,b)= & {} \frac{1}{b^{2}} \sum _{n_{1}} \sum _{n_{2}} \sum _{n_{3}} \phi _{n_{1}n_{2}n_{3}} a^{2n_{1}} b^{-2n_{2}} \nonumber \\&\times \langle 1+ n_{2} + n_{3} \rangle \, \langle 2n_{1} + 2n_{2} + 1 \rangle . \end{aligned}$$
(9.15)

These four bracket series lead to the terms

$$\begin{aligned} T_{1}= & {} \frac{1}{2b} \sum _{k=0}^{\infty } \frac{1}{k!} \Gamma ( \tfrac{1}{2} - k ) \Gamma ( \tfrac{1}{2} +k ) (-a^{2} b^{2})^{k} \nonumber \\= & {} \frac{\pi }{2b} \exp (a^{2}b^{2}) \end{aligned}$$
(9.16)
$$\begin{aligned} T_{2}= & {} \frac{a}{2} \sum _{k=0}^{\infty } \frac{1}{k!} \Gamma (1+k) \Gamma ( - \tfrac{1}{2} - k) (-a^{2} b^{2} )^{k} \nonumber \\= & {} - \frac{\pi }{2b} \exp (a^{2}b^{2}) \text {erf}(ab) \end{aligned}$$
(9.17)
$$\begin{aligned} T_{3}= & {} \frac{1}{2a b^{2}} \sum _{k=0}^{\infty } \frac{1}{k!} \Gamma (1+k) \Gamma ( \tfrac{1}{2} + k ) \left( - \frac{1}{a^{2}b^{2}} \right) ^{k} \nonumber \\= & {} \frac{\sqrt{\pi }}{2ab^{2}} {}_{2}F_{0} \left( \genfrac{}{}{0.0pt}{}{1 \,\,\,\, \tfrac{1}{2}}{-} \bigg | {- \frac{1}{a^{2}b^{2}}} \right) . \end{aligned}$$
(9.18)

The values \(T_{1}\) and \(T_{2}\) and Rule \(E_{3}\) in Sect. 2 are combined to give the value

$$\begin{aligned} I(a,b) = \frac{\pi }{2b} \exp (a^{2}b^{2}) \left[ 1 - \text {erf}(ab) \right] , \end{aligned}$$
(9.19)

confirming (9.2). The value of \(T_{3}\) is useful in the asymptotic study of \(I(a,b)\) as \(a^{2}b^{2} \rightarrow \infty \).
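Both the closed form (9.19) and the asymptotic role of \(T_{3}\) are easy to examine numerically. The sketch below assumes the Python library mpmath and sample positive values of a and b; a truncation of the divergent series \(T_{3}\) from (9.18) is included to illustrate the large-\(a^{2}b^{2}\) behaviour.

```python
# A numerical check of (9.2)/(9.19), assuming mpmath; a and b are sample positive values.
# A truncation of the divergent series T3 in (9.18) is included to illustrate its role
# in the asymptotic regime a^2 b^2 -> infinity.
from mpmath import mp, exp, erf, gamma, quad, inf, pi

mp.dps = 25
a, b = mp.mpf('1.5'), mp.mpf(2)

integral = quad(lambda x: exp(-a**2 * x**2) / (x**2 + b**2), [0, inf])
closed_form = pi / (2 * b) * exp(a**2 * b**2) * (1 - erf(a * b))
T3_truncated = sum(gamma(k + mp.mpf('0.5')) * (-1 / (a**2 * b**2))**k
                   for k in range(4)) / (2 * a * b**2)
print(integral, closed_form)   # these two agree to the working precision
print(T3_truncated)            # close to the value above, since a^2 b^2 = 9 is fairly large
```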

The final example illustrates a different combination of ideas.

Example 9.1

Consider the two-dimensional integral

$$\begin{aligned} J(a,b) = \int _{0}^{\infty } \int _{0}^{\infty } x^{a-1} y^{b-1} \text {Ei}(-x^{2}y) K_{0} \left( \frac{x}{y} \right) \, dx \, dy\nonumber \\ \end{aligned}$$
(9.20)

where \(\text {Ei}\) is the exponential integral defined by

$$\begin{aligned} \text {Ei}(-x) = - \int _{1}^{\infty } \frac{\exp (- x t )}{t} \, dt \quad \text {for } x>0 \end{aligned}$$
(9.21)

with divergent representation

$$\begin{aligned} \text {Ei}(-x^{2}y) = \sum _{\ell \ge 0} \phi _{\ell } \frac{x^{2 \ell } y^{\ell }}{\ell } \end{aligned}$$
(9.22)

(see [11] for the concept of divergent expansion in the context of the method of brackets). Using this series and the Mellin–Barnes representation of \(K_{0}\)

$$\begin{aligned} K_{0} \left( \frac{x}{y} \right) = \frac{1}{4 \pi i } \int _{\gamma } \Gamma ^{2}(t) \left( \frac{x}{2y} \right) ^{-2t} \, dt \end{aligned}$$
(9.23)

yields

$$\begin{aligned} J(a,b)= & {} \frac{1}{4 \pi i } \sum _{\ell \ge 0} \phi _{\ell } \frac{1}{\ell } \int _{\gamma } \Gamma ^{2}(t) 2^{2t} \langle a + 2 \ell \nonumber \\&- 2t \rangle \langle b + \ell + 2t \rangle \, dt, \end{aligned}$$
(9.24)

where the two brackets come from the integrals on the half-line. To evaluate this integral use the bracket \(\langle a + 2 \ell - 2 t \rangle \) to produce

$$\begin{aligned}&J(a,b) \nonumber \\&\quad = \frac{1}{8 \pi i } \sum _{\ell \ge 0} \frac{\phi _{\ell }}{\ell } \left[ \int _{\gamma } \Gamma ^{2}(t) 2^{2t} \langle \tfrac{a}{2} + \ell - t \rangle \langle b + \ell + 2t \rangle \, dt \right] \nonumber \\&\quad = \frac{1}{8 \pi i } \sum _{\ell \ge 0} \frac{\phi _{\ell }}{\ell } \left[ 2 \pi i \Gamma ^{2}(t) 2^{2t} \langle b+ \ell + 2 t \rangle \right] _{t = \tfrac{a}{2} + \ell } \nonumber \\&\quad = \frac{1}{4} \sum _{\ell \ge 0} \frac{\phi _{\ell }}{\ell } \Gamma ^{2} \left( \tfrac{a}{2}+ \ell \right) 2^{a + 2 \ell } \langle a + b + 3 \ell \rangle , \end{aligned}$$
(9.25)

and eliminating the series with the bracket yields the value

$$\begin{aligned} J(a,b) = - \frac{4^{-1 - b/3 + a/6}}{a+b} \Gamma ^{2} \left( \frac{a - 2 b }{6} \right) \Gamma \left( \frac{a+b}{3} \right) . \end{aligned}$$
(9.26)
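The value (9.26) may be tested by brute-force numerical integration of (9.20). The sketch below assumes the Python library mpmath and the sample choice \(a = 5\), \(b = 1\), for which the double integral converges and (9.26) reduces to \(-\pi /12\); the nested quadrature is slow but reproduces this value.

```python
# A numerical check of (9.26), assuming mpmath, with the sample values a = 5, b = 1;
# the nested quadrature is slow but reproduces the closed form -pi/12 ~ -0.2618.
from mpmath import mp, mpf, ei, besselk, gamma, quad, inf

mp.dps = 15
a, b = mpf(5), mpf(1)

def inner(x):      # integral over y for fixed x
    return quad(lambda y: y**(b - 1) * ei(-x**2 * y) * besselk(0, x / y), [0, inf])

J_numeric = quad(lambda x: x**(a - 1) * inner(x), [0, inf])
J_formula = -mpf(4)**(-1 - b/3 + a/6) / (a + b) * gamma((a - 2*b) / 6)**2 * gamma((a + b) / 3)
print(J_numeric, J_formula)
```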

10 Feynman diagrams

A variety of interesting integrals appear in the evaluation of Feynman diagrams. See [25, 26] for details. The example presented here corresponds to a one-loop propagator diagram in scalar field theory, with internal propagators of equal masses [8].

Fig. 1 The loop diagram

The integral attached to this diagram is

$$\begin{aligned} J(\alpha , \beta , m,m) = \int d^{D}Q \frac{1}{ \left[ Q^{2}-m^{2} \right] ^{\alpha } \left[ (p+Q)^{2} - m^{2} \right] ^{\beta }}\nonumber \\ \end{aligned}$$
(10.1)

with Mellin–Barnes representation given by

$$\begin{aligned}&J(\alpha ,\beta ,m,m) = \pi ^{D/2} i^{1-D} \frac{ (-m^{2})^{D/2-\alpha -\beta }}{\Gamma (\alpha ) \Gamma (\beta )} \nonumber \\&\quad \times \frac{1}{2 \pi i } \int _{\gamma } (p^{2})^{u} (-m^{2})^{u}\nonumber \\&\quad \times \frac{\Gamma (-u) \Gamma (\alpha + u) \Gamma (\beta + u) \Gamma (\alpha + \beta - D/2 +u)}{\Gamma (\alpha + \beta + 2u)} \, du\nonumber \\ \end{aligned}$$
(10.2)

Replacing the gamma factors in the numerator by their corresponding bracket series gives

$$\begin{aligned}&J(\alpha , \beta , m,m) \leftarrow \pi ^{D/2} i^{1-D} \frac{(-m^{2})^{D/2-\alpha - \beta }}{\Gamma (\alpha ) \Gamma (\beta )} \sum _{\{ n \}} \phi _{n_{1}, \ldots , n_{4}} \nonumber \\&\quad \times \frac{1}{2 \pi i} \int _{\gamma } (p^{2})^{u} (- m^{2})^{u}\nonumber \\&\quad \times \frac{ \langle -u +n_{1} \rangle \langle \alpha + u + n_{2} \rangle \langle \beta + u + n_{3} \rangle \langle \alpha + \beta - D/2+ u + n_{4} \rangle }{\Gamma (\alpha + \beta + 2u) } \, du.\nonumber \\ \end{aligned}$$
(10.3)

To evaluate the integral, the vanishing of the bracket \(\langle - u + n_{1} \rangle \) gives \(u^{*} = n_{1}\) and J is expressed as a bracket series

$$\begin{aligned}&J(\alpha , \beta , m,m) = \pi ^{D/2} i^{1-D} \frac{(-m^{2})^{D/2- \alpha - \beta }}{\Gamma (\alpha ) \Gamma (\beta )} \nonumber \\&\quad \times \sum _{\{ n \}} \phi _{n_{1}, \ldots , n_{4}} (p^{2})^{u^{*}} (-m^{2})^{u^{*}} \nonumber \\&\quad \times \frac{ \langle \alpha + u^{*} + n_{2} \rangle \langle \beta + u^{*} + n_{3} \rangle \langle \alpha + \beta - D/2 + u^{*} + n_{4} \rangle }{ \Gamma (\alpha + \beta + 2 u^{*}) }.\nonumber \\ \end{aligned}$$
(10.4)

The terms obtained from the bracket series above are given as hypergeometric values

$$\begin{aligned} T_{1}= & {} \pi ^{D/2} i^{1-D} (-m^{2})^{D/2 - \alpha - \beta }\nonumber \\&\times \frac{\Gamma (\alpha + \beta - D/2)}{\Gamma (\alpha + \beta )} {}_{3}F_{2} \left( \genfrac{}{}{0.0pt}{}{\alpha \,\,\,\,\,\, \beta \,\,\,\,\,\, \alpha + \beta - \tfrac{D}{2}}{\tfrac{\alpha + \beta }{2} \,\,\,\,\,\, \tfrac{\alpha + \beta +1}{2} } \bigg | { \frac{p^{2}}{4m^{2}}} \right) \\ T_{2}= & {} \pi ^{D/2} i^{1-D} (-m^{2})^{D/2 - \beta } (p^{2})^{-\alpha } \, \nonumber \\&\times \frac{\Gamma ( \beta - D/2)}{\Gamma ( \beta )} {}_{3}F_{2} \left( \genfrac{}{}{0.0pt}{}{\alpha \,\,\,\,\,\, 1+ \frac{\alpha - \beta }{2} \,\,\,\,\,\, \tfrac{1+ \alpha - \beta }{2}}{ 1 + \alpha - \beta \,\,\,\,\,\, 1 - \beta + \tfrac{D}{2} } \bigg | { \frac{4m^{2}}{p^{2}}} \right) \\ T_{3}= & {} \pi ^{D/2} i^{1-D} (-m^{2})^{D/2 - \alpha } (p^{2})^{-\beta } \, \nonumber \\&\times \frac{\Gamma ( \alpha - D/2)}{\Gamma ( \alpha )} {}_{3}F_{2} \left( \genfrac{}{}{0.0pt}{}{\beta \,\,\,\,\,\, 1+ \frac{\beta - \alpha }{2} \,\,\,\,\,\, \tfrac{1+ \beta - \alpha }{2}}{ 1 + \beta - \alpha \,\,\,\,\,\, 1 - \alpha + \tfrac{D}{2} } \bigg | { \frac{4m^{2}}{p^{2}}} \right) \\ T_{4}= & {} \pi ^{D/2} i^{1-D} (p^{2})^{D/2 - \alpha -\beta } \,\nonumber \\&\times \frac{\Gamma (D/2 - \alpha ) \Gamma ( D/2 - \beta ) \Gamma ( \alpha + \beta - D/2)}{\Gamma (\alpha ) \Gamma (\beta ) \Gamma (D- \alpha - \beta )} \\&\times {}_{3}F_{2} \left( \genfrac{}{}{0.0pt}{}{ \alpha + \beta - \frac{D}{2} \,\,\,\,\,\, 1 + \tfrac{\alpha + \beta - D }{2} \,\,\,\,\, \tfrac{1 + \alpha + \beta -D}{2} }{ 1 +\beta - \tfrac{D}{2} \,\,\,\,\, 1 + \alpha - \tfrac{D}{2} } \bigg | { \frac{4m^{2}}{p^{2}}} \right) . \end{aligned}$$

The conclusion is that the value of the Feynman diagram shown in Fig. 1 is given by

$$\begin{aligned} J(\alpha , \beta , m, m ) = {\left\{ \begin{array}{ll} T_{1} \quad &{} \text {for} \,\, \left| \frac{p^{2}}{4m^{2}} \right|< 1 \\ T_{2} + T_{3} + T_{4} \quad &{} \text {for} \,\, \left| \frac{4m^{2}}{p^{2}} \right| < 1. \end{array}\right. } \end{aligned}$$
(10.5)

11 Conclusions

The work presented here gives a new procedure to evaluate Mellin–Barnes integral representations. The main idea is to replace the gamma factors appearing in such representations by their corresponding bracket series. The method of brackets is then used to evaluate these integrals. This method has been shown to be simple and efficient, given its algorithmic nature. A collection of representative examples has been provided.

The paper presents a comparison of the classical methods based on Mellin transforms with the newly developed method of brackets. Many integral relations are reproduced here using the method of brackets. The results presented here are compared with those developed by Prausa [23], whose work relies on a Mathematica package [10]. Some of the integral representations established here may be obtained with this Mathematica package; the same holds for results such as (6.22), (6.23), (7.16), (8.26), (9.1), as well as for the inverse Mellin transformations studied here, such as the contour integrals (5.30), (5.36), (5.42), the results (4.27) and (4.26) for the contour integral (4.14), the result (5.54) for the contour integral (5.48), and the result (7.15) for the contour integral (7.9). The technique developed in the present paper for the evaluation of the improper integrals (6.7), (6.22), (6.23) may be applied to the Mellin transforms of some products of Bessel functions. Such products appeared in the calculation of the sunset diagrams in [6]. A partial proof of these conjectures may be found in [29].

The present paper establishes that the method of brackets simplifies the calculation of inverse Mellin transformations. In particular, this is the case for problems involving multi-fold inverse Mellin transformations applied to Feynman diagrams. This may be seen already at the level of one-fold Mellin–Barnes integrals. For example, the inverse one-fold Mellin–Barnes transform (4.3) results in the Meijer G-function under the parameter condition \(A_j = B_j = C_j = D_j = 1\). Such simplified structures, in all the particular cases of the G-function considered in the present paper, may be proved without the use of the method of brackets, but the corresponding proofs are usually longer. This illustrates the flexibility and efficiency of the method of brackets.