Mellin-Barnes meets Method of Brackets: A novel approach to Mellin-Barnes representations of Feynman integrals

In this paper, we present a new approach to the construction of Mellin-Barnes representations for Feynman integrals inspired by the Method of Brackets. The novel technique helps to lower the dimensionality of Mellin-Barnes representations in complicated cases; some examples are given.


Introduction
The evaluation of multi-loop Feynman integrals is one of the basic building blocks in phenomenological and theoretical studies in quantum field theory. For this purpose, many techniques have been developed over the years. For an overview see e.g. [1]. Among the most successful ones are the method of differential equations [2][3][4], Mellin-Barnes (MB) integral representations [5,6], and, for numerical evaluations, the method of sector decomposition [7][8][9].
After the construction of a MB representation for a given Feynman integral, one has a large number of public tools at hand for its subsequent evaluation. For example, one can resolve singularities [10,11], expand in dimensional and analytic regulators [10], perform an asymptotic expansion [12], add up residues in terms of multi-fold sums [13], or numerically evaluate the integrals in the Euclidean domain [10]. For a long time, the numerical evaluation of MB integrals with physical kinematics was an unresolved problem.
The Method of Brackets defines a small set of simple rules which, when applied to a Schwinger parametrized Feynman integral, yield a set of multi-fold sums. Unfortunately, in many cases not all of these sums contribute to the final result, and it is sometimes hard to tell which sums contribute and which should be neglected.
In this paper we modify the Method of Brackets so that it leads to a set of multi-dimensional MB integrals instead of a set of multi-fold sums. Any single multi-dimensional MB integral from this set already contains the full result of the Feynman integral. The ambiguity of the original method is therefore not present.
Reference [21] describes a factorization procedure for the Symanzik polynomials that appear in the Schwinger parametrization of Feynman integrals. This factorization reduces the multiplicity of the resulting multi-fold sums in the context of the original Method of Brackets. In our adapted version, the same optimization helps to minimize the number of MB integrations in the constructed representation. This number is in some cases even smaller than that of the best representation the Mathematica package AMBRE can provide. The modified Method of Brackets is applicable to both planar and non-planar Feynman diagrams.
In Section 2 we derive a set of rules for the adapted Method of Brackets in analogy to the rules defined in [23]. Section 3 discusses the optimization of Symanzik polynomials. In Section 4 an example of the method is presented in great detail. Finally, we compare our approach with the results of the AMBRE package for a couple of Feynman integrals in Section 5.

The modified Method of Brackets
The original Method of Brackets is based on Ramanujan's master theorem [25], which states that if a function g(x) admits a Taylor expansion
$$g(x) = \sum_{n=0}^{\infty} \varphi(n)\, \frac{(-x)^n}{n!}\,,$$
the integral over the parameter x is given by
$$\int_0^\infty \mathrm{d}x\, x^{s-1}\, g(x) = \Gamma(s)\, \varphi(-s)\,. \qquad (2.1)$$
The inverse of this relation is a Mellin-Barnes integral,
$$g(x) = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} \mathrm{d}s\, x^{-s}\, \Gamma(s)\, \varphi(-s)\,. \qquad (2.2)$$
The similarity of this relation to the well-known Mellin transform allows reformulating the Method of Brackets in a way that leads to MB representations instead of multi-fold sums.
Utilizing (2.1), the original Method of Brackets formulates a set of simple rules to rewrite a Schwinger parametrized Feynman integral (2.4) into a so-called presolution of the diagram: a multi-fold sum over Γ-functions and newly introduced objects called brackets [23]. The brackets in the presolution can then be eliminated using only linear algebra.
In this section we present a similar set of rules, but our presolution will be a multidimensional MB integral instead of a multi-fold sum.

Schwinger parametrization
The starting point to apply the Method of Brackets to Feynman integrals is Schwinger parametrization. An L-loop Feynman integral in Euclidean space-time is given by
$$I(a_1,\cdots,a_N) = \int \prod_{l=1}^{L}\frac{\mathrm{d}^d k_l}{\pi^{d/2}} \prod_{i=1}^{N}\frac{1}{(P_i^2+m_i^2)^{a_i}}\,, \qquad (2.3)$$
where the momenta P_i are linear combinations of loop momenta and external momenta. For physical Feynman integrals with a Minkowski space-time metric, one can usually perform a Wick rotation [26] to transform the integral into the form (2.3).
The Schwinger parameters x_i are introduced for all propagators with the well-known formula
$$\frac{1}{(P_i^2+m_i^2)^{a_i}} = \frac{1}{\Gamma(a_i)}\int_0^\infty \mathrm{d}x_i\, x_i^{a_i-1}\, e^{-x_i(P_i^2+m_i^2)}\,.$$
Afterwards, the integrations over the loop momenta can be performed loop-by-loop via
$$\int\frac{\mathrm{d}^d k}{\pi^{d/2}}\, e^{-\alpha k^2 - 2 k\cdot q} = \alpha^{-d/2}\, e^{q^2/\alpha}\,.$$
The result can be written as
$$I(a_1,\cdots,a_N) = \frac{1}{\prod_i\Gamma(a_i)} \int_0^\infty \prod_{i=1}^{N}\mathrm{d}x_i\, x_i^{a_i-1}\; U^{-d/2}\, \exp\!\left(-\frac{F}{U} - \sum_i x_i m_i^2\right). \qquad (2.4)$$
The Symanzik polynomials U and F depend on the Schwinger parameters x_i and can be read off directly from the Feynman graph. For an overview of the properties of these graph polynomials, see [27].
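The parametric form (2.4) can be sanity-checked in the simplest case, the one-loop tadpole with a single massive line, where U = x and F = 0. The following mpmath sketch (an illustration, not from the original text) compares the parametric integral with the textbook closed form Γ(a − d/2) m^{d−2a}/Γ(a):

```python
import mpmath as mp

# One-loop tadpole via the Schwinger-parametrized form (2.4):
# with U = x and F = 0,
#   I(a) = 1/Gamma(a) * int_0^inf dx x^(a-1) x^(-d/2) exp(-x m^2),
# which should reproduce Gamma(a - d/2) * m^(d - 2a) / Gamma(a).
d, a, m = mp.mpf(3), mp.mpf(2), mp.mpf("1.5")
integral = mp.quad(lambda x: x**(a - 1 - d/2) * mp.exp(-x * m**2), [0, mp.inf])
tadpole = integral / mp.gamma(a)
closed_form = mp.gamma(a - d/2) * m**(d - 2*a) / mp.gamma(a)
```

The values d = 3, a = 2, m = 3/2 are arbitrary choices for which both expressions are finite.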

The Bracket
The central object of the technique is the bracket, which is defined as
$$\langle\alpha\rangle \equiv \int_0^\infty \mathrm{d}x\, x^{\alpha-1}\,. \qquad (2.5)$$
Of course, this object by itself is not well-defined, as the integral on the right-hand side is divergent for all α. However, it makes sense inside a MB integral:
$$\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\mathrm{d}z\,\Gamma(-z)\,\varphi(z)\,\langle\alpha+z\rangle = \int_0^\infty \mathrm{d}x\, x^{\alpha-1}\, \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\mathrm{d}z\,\Gamma(-z)\,\varphi(z)\, x^{z} = \Gamma(\alpha)\,\varphi(-\alpha)\,, \qquad (2.6)$$
where in the last step we used equation (2.2). In contrast, the original Method of Brackets interprets this object inside a multi-fold sum using Ramanujan's master theorem (2.1).

The Rules
The rules provided in this sub-section have to be applied successively to a Schwinger parameterized Feynman integral (2.4). In doing so, rule B has to be used multiple times if the Symanzik polynomials are given in optimized form (see Section 3 for details).

Rule A: Exponential functions
The exponential function in (2.4) is first split into factors using
$$e^{-\sum_i A_i} = \prod_i e^{-A_i}\,,$$
so that every exponent A_i consists only of a monomial or a monomial divided by U. Afterwards, the exponential functions are rewritten into contour integrals using the Cahen-Mellin formula
$$e^{-A_i} = \frac{1}{2\pi i}\int_{c_i-i\infty}^{c_i+i\infty}\mathrm{d}z_i\,\Gamma(-z_i)\,A_i^{z_i}\,. \qquad (2.7)$$
The contour is chosen such that all singularities coming from Γ(−z_i) are to the right of the contour (i.e. c_i < 0). The validity of this equation can be checked by closing the contour at |z_i| → ∞ to the right and using the residue theorem.
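Formula (2.7) can also be verified numerically. The sketch below (illustrative, not from the original text) parametrizes a straight contour z = c + it with c = −1/2 and integrates along the imaginary direction with mpmath; the result should reproduce e^{−A}:

```python
import mpmath as mp

# Cahen-Mellin formula (2.7): with z = c + i*t and c < 0,
#   e^(-A) = 1/(2*pi) * int_{-inf}^{inf} dt Gamma(-z) * A^z .
# The result is real, so it suffices to integrate the real part; the
# integrand decays like exp(-pi*|t|/2) from the Gamma function.
A, c = mp.mpf("0.7"), mp.mpf("-0.5")
integrand = lambda t: (mp.gamma(-(c + 1j * t)) * A**(c + 1j * t)).real
val = mp.quad(integrand, [-mp.inf, mp.inf]) / (2 * mp.pi)
```

The choices A = 0.7 and c = −1/2 are arbitrary; any A > 0 and c < 0 work.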
The factor $A_i^{z_i}$ on the right-hand side of (2.7) should then be expanded into a product of powers, where the base is a single Schwinger parameter, the polynomial U, or one of the symbols introduced by the optimization procedure described in Section 3. After this, all powers of a common base have to be combined, e.g. $x_j^{a_j-1}\, x_j^{z_1}\, x_j^{z_2} = x_j^{a_j+z_1+z_2-1}$.
This rule corresponds to rule I in [23].

Rule B: Multinomials
Powers of multinomials occur after the insertion of the Symanzik polynomial U or the re-substitution of the symbols introduced by the optimization procedure in Section 3. These powers can also be rewritten in terms of MB integrals using the formula
$$(A_1+\cdots+A_J)^{\alpha} = \frac{1}{\Gamma(-\alpha)}\, \prod_{j=1}^{J}\left[\frac{1}{2\pi i}\int_{c_j-i\infty}^{c_j+i\infty}\mathrm{d}z_j\,\Gamma(-z_j)\,A_j^{z_j}\right] \big\langle -\alpha + z_1 + \cdots + z_J \big\rangle\,. \qquad (2.8)$$
The formula can be derived by first applying Schwinger parametrization to the left-hand side of (2.8) and then using rule A and the definition of the bracket (2.5).
The factors $A_1^{z_1},\cdots,A_J^{z_J}$ on the right-hand side of (2.8) are treated in the same way as described for rule A.
This rule corresponds to rule III in [23].

Rule C: Schwinger parameters
After the application of rules A and B, the Schwinger integrals should all be of the form
$$\int_0^\infty \mathrm{d}x_j\, x_j^{L(a_1,\cdots;z_1,\cdots)-1}\,,$$
where L(a_1, · · · ; z_1, · · · ) is a linear combination of the indices a_j and the Mellin-Barnes variables z_j. These integrals can now be written as brackets using the definition (2.5):
$$\int_0^\infty \mathrm{d}x_j\, x_j^{L(a_1,\cdots;z_1,\cdots)-1} = \big\langle L(a_1,\cdots;z_1,\cdots)\big\rangle\,.$$
This rule corresponds to rule II in [23].

Rule D: Eliminating the brackets
Applying the rules A, B and C to a Schwinger parametrized Feynman integral results in a presolution of the form
$$I = \prod_{j=1}^{J}\left[\frac{1}{2\pi i}\int_{c_j-i\infty}^{c_j+i\infty}\mathrm{d}z_j\,\Gamma(-z_j)\right] f(z_1,\cdots,z_J)\, \prod_{k=1}^{K}\Big\langle \beta_k + \sum_{j=1}^{J}\alpha_{kj}\, z_j \Big\rangle\,.$$
We first consider the case J = K and define a K × K-matrix A by
$$(A)_{kj} = \alpha_{kj}\,,$$
where we assume for now its invertibility. A change of basis $\vec z = -A^{-1}\vec s$ leads to
$$I = \frac{1}{|\det A|}\, \prod_{k=1}^{K}\left[\frac{1}{2\pi i}\int_{d_k-i\infty}^{d_k+i\infty}\mathrm{d}s_k\right] \left[\prod_{j=1}^{K}\Gamma(-z_j)\right] f(z_1,\cdots,z_K)\, \prod_{k=1}^{K}\big\langle \beta_k - s_k \big\rangle\, \Bigg|_{\vec z = -A^{-1}\vec s}\,.$$
Note the change of the integration contour to $(d_1,\cdots,d_K)^T = -A\,(c_1,\cdots,c_K)^T$. Now all MB integrations can be solved one by one using (2.6):
$$I = \frac{1}{|\det A|} \left[\prod_{j=1}^{K}\Gamma(-z_j)\right] f(z_1,\cdots,z_K)\, \Bigg|_{\vec z = -A^{-1}\vec\beta}\,.$$
In the case J > K, this formula can be used to solve K out of the J MB integrations. The result will be a (J − K)-dimensional MB integral. Without loss of generality, we solve the MB integrals over $z_1,\cdots,z_K$ using the K brackets, while the J − K integrals over $z_{K+1},\cdots,z_J$ remain. Therefore, we arrange the first K integration variables into a vector $\vec z_1 = (z_1,\cdots,z_K)^T$ and the variables of the remaining integrals into a vector $\vec z_2 = (z_{K+1},\cdots,z_J)^T$. Now we can write down our last rule:
$$I = \frac{1}{|\det A|}\, \prod_{j=K+1}^{J}\left[\frac{1}{2\pi i}\int_{c_j-i\infty}^{c_j+i\infty}\mathrm{d}z_j\,\Gamma(-z_j)\right] \left[\prod_{k=1}^{K}\Gamma(-z_k)\right] f(z_1,\cdots,z_J)\, \Bigg|_{\vec z_1 = -A^{-1}(\vec\beta + C\,\vec z_2)}\,,$$
where the K × (J − K)-matrix C is given by $C = (\vec\gamma_1,\cdots,\vec\gamma_K)^T$. The vector $\vec\beta$ and the matrix A are again defined as K-dimensional quantities in the same way as before.
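As an illustrative toy case (not worked out in the original text), applying rule B to (A₁ + A₂)^α produces two MB integrals and the bracket ⟨−α + z₁ + z₂⟩; rule D with K = 1, β = −α, A = (1) and C = (1) then fixes z₁ = α − z₂ and contributes Γ(−z₁) = Γ(z₂ − α), leaving the familiar one-fold MB splitting, which can be checked numerically:

```python
import mpmath as mp

# Rules B + D for (A1 + A2)^alpha: the bracket <-alpha + z1 + z2> fixes
# z1 = alpha - z2 (beta = -alpha, A = (1), C = (1), |det A| = 1), leaving
#   (A1+A2)^alpha = 1/Gamma(-alpha) * 1/(2*pi*i) *
#                   int dz2 Gamma(-z2) Gamma(z2 - alpha) A1^(alpha-z2) A2^z2 .
alpha, A1, A2 = mp.mpf("-1.3"), mp.mpf("0.4"), mp.mpf("0.9")
c = mp.mpf("-0.5")  # contour with Re(alpha) < c < 0 separates the pole series

def integrand(t):
    z2 = c + 1j * t
    return (mp.gamma(-z2) * mp.gamma(z2 - alpha)
            * A1**(alpha - z2) * A2**z2).real

mb = mp.quad(integrand, [-mp.inf, mp.inf]) / (2 * mp.pi) / mp.gamma(-alpha)
```

The numerical values α = −1.3, A₁ = 0.4, A₂ = 0.9 are arbitrary; the contour condition Re α < c < 0 is what matters.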
The choice of which integrals are solved in rule D and which remain is somewhat arbitrary. There are $\binom{J}{K}$ possibilities. Some of them lead to a singular matrix A and yield no solution. All other choices lead to a possible MB representation for the full result, which implies that we only have to consider one of them. This is a major improvement over the original Method of Brackets, where an individual sum only gives a partial result for the Feynman integral and various choices (but not all) have to be considered to obtain the full result.
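The count of viable choices can be explored with elementary linear algebra. The sketch below uses a made-up 3 × 5 bracket coefficient matrix (purely illustrative, not the matrix of any diagram in this paper): each choice of K = 3 columns gives a candidate matrix A, and only the non-singular ones yield a valid representation:

```python
import numpy as np
from itertools import combinations
from math import comb

# Hypothetical coefficient matrix of K = 3 brackets in J = 5 MB variables.
M = np.array([[1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [1, 0, 1, 1, 1]])
K, J = M.shape

# Count the column selections whose K x K submatrix A is non-singular.
viable = sum(
    1 for cols in combinations(range(J), K)
    if abs(np.linalg.det(M[:, cols].astype(float))) > 1e-9
)
total = comb(J, K)
```

For this particular matrix, 4 of the 10 selections are viable; the analogous count for the example in Section 4 is 957 out of 2002.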
We leave unanswered the question of whether some of the obtained MB representations are in some sense better than others; more studies are necessary to tackle this problem.
This rule corresponds to rule IV in [23].

Optimization procedure
Rules A to D applied to (2.4) are sufficient to obtain a MB representation for a given Feynman integral. However, the naïve application of these rules often leads to a huge number of MB integrations in the result.
Better MB representations can be achieved by first analyzing the Symanzik polynomials U and F as well as the polynomial $\sum_i x_i m_i^2$ for sub-expressions (polynomials of Schwinger parameters) that appear multiple times. These common sub-expressions are then substituted by new variables, which are treated as Schwinger parameters. This can be done recursively.
The Schwinger parametrized Feynman integral (2.4) can now be treated with the set of rules given in Sub-Section 2.3 as before. However, after the first application of rule B (to the power with base U), the intermediate result contains powers of the variables introduced by the optimization, so rule C cannot yet be applied. After the corresponding sub-expressions are re-substituted, these powers have to be treated by rule B as well. Only after all optimization variables have been eliminated can one continue with rule C.
This procedure reduces the number of MB integrals in the result significantly. If a polynomial ξ with N terms appears J times in U, F and $\sum_i x_i m_i^2$, the substitution of ξ decreases the number of terms in these polynomials by J(N − 1). After rule A and the first application of rule B, the number of MB integrals is therefore reduced by J(N − 1) as well. However, the additional application of rule B, after ξ is re-inserted, produces N additional MB integrals and one additional bracket. Since every bracket eliminates one MB integration in rule D, this optimization leads in the end to a reduction of the number of MB integrals in the final result by (J − 1)(N − 1).
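This bookkeeping is easy to encode. The following helper (illustrative, mirroring the counting argument above) returns the net reduction:

```python
def mb_reduction(J, N):
    """Net reduction in the number of MB integrals from substituting a
    polynomial xi with N terms that occurs J times (bookkeeping sketch)."""
    saved_by_substitution = J * (N - 1)  # fewer terms after rule A + first rule B
    cost_of_reinsertion = N              # extra MB integrals when xi is re-inserted
    gained_bracket = 1                   # the extra bracket removes one integral in rule D
    return saved_by_substitution - cost_of_reinsertion + gained_bracket

# e.g. a binomial (N = 2) appearing J = 3 times saves (3-1)*(2-1) = 2 integrals
```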
This optimization approach was first proposed in [21] in the context of the original Method of Brackets. An algorithm to find common sub-expressions in a list of polynomials is given in Appendix A.

Example
As an example, we consider the two-loop propagator diagram in fig. 4.1. The corresponding Feynman integral and its Schwinger parametrization are of the general forms (2.3) and (2.4), with $\sum_i x_i m_i^2 = m^2(x_3 + x_4 + x_5)$. A naïve application of rules A to D without optimization would lead to a 13-fold MB representation.
In order to reduce this number, we first identify common sub-expressions in U, F and $\sum_i x_i m_i^2 = m^2(x_3 + x_4 + x_5)$ and replace them by new variables $r_i$. Rule A then leads to an expression containing factors such as $x_4^{a_4+z_{14}-1}$ and $x_5^{a_5+z_{124}-1}$, where we introduced the notation $z_{ijk\cdots} = z_i + z_j + z_k + \cdots$ (and later also $a_{ijk\cdots} = a_i + a_j + a_k + \cdots$). Note that we have combined all powers of a common base. As a next step, the Symanzik polynomial U in optimized form is re-inserted and rule B applied, which produces factors such as $x_5^{a_5+z_{1248}-1}$. Now, rule B must be used again three times for $r_3$, $r_2$ and $r_1$, in that order. We can then apply rule C to obtain the presolution $I(a_1,\cdots,a_5)$ with 14 MB integrals and nine brackets, which leads to only five MB integrations at the end.
From the $\binom{14}{9} = 2002$ possibilities, only 957 lead to a non-singular matrix A. We choose for the example the MB integrals over $z_1, z_2, z_3, z_4, z_7$ to remain; this choice fixes the vectors $\vec z_1$, $\vec z_2$ and $\vec\beta$ as well as the matrices A and C defined in rule D, and the substitutions of rule D then yield the final five-dimensional MB representation.

Comparison with AMBRE
The diagram in fig. 5.1(e) is the example from Section 4.
For planar diagrams, we used the loop-by-loop approach implemented in AMBRE version 2.1 [19] and tried all permutations of the loop momenta to find the representation with a minimum number of MB integrations. We note that the quality of the loop-by-loop approach also depends on the momentum flow through the diagram. For non-planar diagrams, we used the global approach implemented in AMBRE version 3.1.1 [14,20]. Afterwards, we tried to apply Barnes' first lemma to all representations. Unfortunately, the lemma could not be applied to any of the representations constructed by the Method of Brackets.
The results are given in tab. 5.1. As shown there, our novel method cannot provide a full replacement for the techniques implemented in AMBRE, but it could be helpful in some complicated cases.
We checked the representations obtained by AMBRE and the Method of Brackets for numerical agreement using the Mathematica packages MBresolve [11] and MB [10]. The numerical integration was performed via the MBintegrate function of the package MB using the integration method Cuhre [28,29] implemented in the Cuba library [30]. For the kinematic variables, we chose the Euclidean values s = −1/2, t = −1/3 and a mass m = 1 for the massive propagators. All propagator powers were set to one, and the ε-expansion was performed to next-to-leading order. The most complicated representation for a numerical integration was the seven-dimensional representation for fig. 5.1(g) obtained by the Method of Brackets. Here, the maximum number of evaluation points had to be set to 2 · 10^9, and the runtime was about 16 hours on 16 CPU cores to achieve an agreement of the three most significant digits of the numerical values. However, more advanced integration algorithms may help to improve the accuracy reached on a reasonable time scale even for such high-dimensional MB integrals. For recent developments in that direction, see [14,15]. All numerical values also agree with independent results of the sector decomposition implementation FIESTA [31]. Tab. 5.1 compares the number of MB integrations of our method with those of AMBRE [14,18-20]; the smaller number is marked in bold, and the last column gives the planarity of the diagram (P = planar, NP = non-planar).

Conclusion
In this article we presented a new technique to construct MB representations. The approach is based on a reformulation of the Method of Brackets. Our modified Method of Brackets yields not just one but many possible MB representations, every single one of which is a valid representation of the full Feynman integral. This is a major improvement over the original Method of Brackets, where the question of which solutions contribute to the full result was sometimes hard to answer.
A crucial part of the method is the optimization procedure, in which one analyzes the graph polynomials for common sub-expressions. With this optimization, the method is able to produce low-dimensional MB representations. A simple algorithm for this purpose is given in Appendix A.
The presented method can easily be implemented in computer code.
Besides the practical applications, the reformulation of the Method of Brackets might help to deepen the understanding of the original Method of Brackets. It seems to be possible to relate the solutions of the original Method of Brackets in terms of multi-fold sums to the sums over residues of the MB representations obtained from our modified version.

Acknowledgments
The author wants to thank Robert Harlander, Fabian Lange, and the AMBRE collaboration for useful discussions and comments on the manuscript, and especially Ievgen Dubovyk for his help in compiling tab. 5.1.
This work was supported by BMBF contract 05H15PACC1. The computing resources were granted by RWTH Aachen University under project rwth0119. The Feynman diagrams in this article have been drawn with JaxoDraw [32] based on Axodraw [33].

A Common Sub-Expressions
In this appendix, we present a possible algorithm to find common sub-expressions in a given list of polynomials. The recursive algorithm shown in alg. A.1 is far from optimal, but it nevertheless proved successful in all our tests.
The main function of the algorithm is commonBinomials, starting at line 46. The argument P is an array of polynomials. Polynomials are in turn represented as arrays of terms (monomials). All arrays start at index one. The function commonBinomials should be called with p0 = t0 = 1 and an empty set r. The best optimization found by the algorithm is returned in two global variables: the optimized array of polynomials and a set of rules which, when repeatedly applied to that array, leads back to P.
The quality of an optimization is measured by the rank ρ, calculated by the function calcRank starting at line 6. The rank ρ minus the number of propagators gives the number of MB integrals in the result. The goal of the algorithm is therefore to find an optimization where ρ is minimal.
The first part (lines 48 to 78) of the algorithm fills an array J with all possible optimizations which can be performed at the current level of the recursion. In this step we scan for binomials appearing in P more than once. The actual scan starts at term t0 in polynomial p0; these arguments of the function commonBinomials are used to prevent re-scanning regions already completed at a lower recursion level. Lines 55 and 70 find all occurrences of the binomial b in all polynomials in P. These occurrences are stored as triplets (p, t1, t2) in the set B, where the first term of the binomial is found at P[p, t1] and the second term at P[p, t2], with t2 > t1.
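The occurrence scan can be sketched in a few lines. The version below is a simplified illustration with a hypothetical data layout: it matches only literal monomial pairs and omits the division by common monomial factors that the full algorithm performs, but it collects the (p, t1, t2) triplets in the same spirit:

```python
from itertools import combinations
from collections import defaultdict

def find_common_binomials(P):
    """Simplified scan: P is a list of polynomials, each a list of hashable
    monomials. Returns every binomial occurring more than once, with its
    occurrences stored as (p, t1, t2) triplets, t2 > t1."""
    occurrences = defaultdict(list)
    for p, poly in enumerate(P):
        for t1, t2 in combinations(range(len(poly)), 2):
            binomial = frozenset((poly[t1], poly[t2]))
            if len(binomial) == 2:  # skip degenerate pairs of equal terms
                occurrences[binomial].append((p, t1, t2))
    return {b: occ for b, occ in occurrences.items() if len(occ) > 1}

# Toy input: the binomial {x3, x4} appears in both polynomials.
P = [["x3", "x4", "x5"], ["x1", "x3", "x4"]]
common = find_common_binomials(P)
```

Note the 0-based indices here, in contrast to the 1-based arrays of alg. A.1.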
If the polynomials in P have no common binomials anymore, J is empty at line 80. In that case the optimization is complete. If the rank ρ of this optimization is the lowest so far, the optimization is stored in the global variables.
If J is not empty, there are still common binomials in P . In principle, we could now try out all optimizations in J one-by-one and then recursively call commonBinomials. Unfortunately, for large polynomials the algorithm would not terminate in a feasible time.
For that reason, we only try out the first N optimizations with the largest number of common binomials. For a finite N < ∞, it is therefore not guaranteed that the algorithm finds the best possible optimization. In most test cases, even very small values of N, e.g. 3 or 4, were sufficient to find very good optimizations in only a few seconds.
Lines 87 and 91 cause an early exit if the optimization in its current state is, even in the best-case scenario, not capable of producing a final optimization with a new minimum rank.
The function commonBinomials only returns rules with binomials on the right-hand side. In case a symbol introduced by the algorithm does not appear in the returned list of optimized polynomials and appears only once on the right-hand side of a single rule, it can be re-substituted without changing the rank of the optimization. This re-substitution then leads to new rules whose right-hand sides have more than two terms.