Transforming differential equations of multi-loop Feynman integrals into canonical form

The method of differential equations has proven to be a powerful tool for the computation of multi-loop Feynman integrals appearing in quantum field theory. It has been observed that in many instances a canonical basis can be chosen, which drastically simplifies the solution of the differential equation. In this paper, an algorithm is presented that computes the transformation to a canonical basis, starting from some basis that is, for instance, obtained by the usual integration-by-parts reduction techniques. The algorithm requires the existence of a rational transformation to a canonical basis, but is otherwise completely agnostic about the differential equation. In particular, it is applicable to problems involving multiple scales and allows for a rational dependence on the dimensional regulator. It is demonstrated that the algorithm is suitable for current multi-loop calculations by presenting its successful application to a number of non-trivial examples.


Introduction
With the observation of the Higgs boson in 2012 [1,2] the last missing building block predicted by the standard model has been discovered. Despite this great success of the standard model as the fundamental theory of particle physics, it is well known that physics beyond the standard model must exist: neither dark matter nor neutrino masses are explained by the standard model in its present form.
With a steadily increasing experimental precision, the LHC experiments are currently searching for possible standard model extensions. In the experimental analyses, both direct searches and precision measurements are utilized to look for deviations from the standard model. While the former profit from the increased center-of-mass energy of run II, the latter benefit from higher statistics and a better understanding of systematic uncertainties. However, in both cases precise theoretical predictions for the signal reactions as well as the background reactions are mandatory.
Since most processes at the LHC are dominated by QCD, leading-order predictions suffer in general from large theoretical uncertainties and provide only a rough estimate of the respective cross sections. Higher order corrections in the perturbative expansion are therefore necessary in order to achieve percent level precision.
As far as next-to-leading order (NLO) calculations are concerned, tremendous progress has been made in the last twenty years. Today, NLO calculations are often considered an algorithmically solved problem. Various publicly available tools (cf. [3] and references therein) allow the automated calculation of NLO corrections for typical LHC processes. However, as the recent calculation of higher order corrections for Higgs production via gluon fusion [4] illustrates, in general NLO is not sufficient to achieve theoretical accuracies below ten percent, and even higher order corrections are required. Beyond NLO, the same level of maturity has not yet been reached. While solutions exist, at least in principle, for some of the steps required in multi-loop calculations (though their practical application is often limited by the available computer resources), the evaluation of the scalar multi-loop integrals still represents a major bottleneck. In particular, no general algorithm is known to perform this integration in an automated way. This situation is very different from the NLO case, where all the scalar one-loop integrals are known. It was in fact this knowledge, combined with field theoretical insights, which triggered the progress in NLO calculations mentioned above. A better understanding of multi-loop integrals is thus crucial for any progress beyond the one-loop level. In higher order calculations various techniques have been employed to evaluate the required multi-loop integrals. One of the most powerful ones is the method of differential equations [5][6][7].
A major improvement of the differential equations approach has been made by the observation that very often a particularly simple form of the differential equation can be achieved by changing the basis of integrals [8]. In this form, often called canonical or ǫ-form, the integration is, up to the determination of integration constants, reduced to a merely combinatorial task. This refined method has been successfully applied to many recent multi-loop calculations. Often, the most difficult part of these calculations is to find a basis of master integrals in which the differential equation attains a canonical form. Several methods to find a canonical form have already been discussed [8,12,14,15,18,34,38,39]. For the algorithm presented in [38] there is a publicly available implementation [40]. However, this method is limited to the case of integrals depending on only one dimensionless scale. The aim of this article is to present an algorithm for computing a canonical basis (provided it exists) in the most general case of differential equations depending on multiple scales with a rational dependence on the dimensional regulator. The proposed algorithm has been implemented in Mathematica and has been successfully tested on non-trivial examples [41]. The outline of this article is as follows. In section 2, basic properties of the differential equations are recalled, which also serves the purpose of fixing the notation. Based on the assumption that a rational transformation exists that transforms a differential equation into canonical form, section 3 first explores some general features of such transformations, which are useful for devising the algorithm. Subsequently, it is shown that the transformation may be obtained as a rational solution of a finite number of differential equations.
Using a generalized partial fractions technique [42,43], it is argued that rational solutions of these equations can be expressed as linear combinations of a particular class of rational functions. Section 4 discusses the application of the presented algorithm to double box topologies, which are relevant for NNLO corrections to single top-quark production [44,45] and vector boson pair production [46][47][48]. The full results of these examples are provided in ancillary files. The conclusions are drawn in section 5. For easy reference, appendix A lays out standard definitions and results about polynomial algebra that are needed in section 3.

Preliminaries
Higher order corrections in quantum field theory involve integrations over the unconstrained loop momenta, in general in the form of tensor integrals. It is straightforward to express the tensor integrals in terms of scalar integrals with raised powers of the propagators [49][50][51][52][53][54]: I(a_1, …, a_N) = ∫ ∏_{l=1}^{L} d^d k_l 1/(P_1^{a_1} ⋯ P_N^{a_N}), where the P_i denote inverse propagators, which are functions of the loop momenta, the external momenta and the masses of the particles running in the loops. For a given set of propagators, each integral is assigned a sector-id by id = Σ_{i=1}^{N} 2^{i−1} Θ(a_i), where Θ(a_i) equals one for a_i > 0 and zero otherwise. A set of integrals with the same sector-id is called a sector. Note that a sector is completely specified by the set of propagators with positive powers. A sector is said to be a subsector of another sector if its set of propagators with positive powers is a proper subset of the other sector's set. In practice, the tensor reduction often leads to a large number of scalar integrals. However, there also exists a large number of linear relations among them. These integration-by-parts identities [55][56][57][58][59][60][61] and Lorentz identities [7] can be used to express all scalar integrals as linear combinations of a finite number of independent master integrals [62,63]. The reduction to master integrals, while in practice still challenging, is often considered a solved problem. For the Laporta algorithm [64], which allows this reduction to be performed systematically, various implementations are publicly available [65][66][67][68][69]. A different strategy has been presented and implemented in [70].
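The sector bookkeeping above can be made concrete in a short snippet. The helper names are hypothetical, and the binary encoding (propagator P_i contributing the bit 2^{i−1} when a_i > 0) is one common convention for the sector-id:

```python
# Sector bookkeeping for scalar integrals: an integral is labeled by its
# propagator powers (a_1, ..., a_N); its sector is the set of propagators
# with positive power, encoded as an integer sector-id.

def sector_id(powers):
    """Binary encoding: propagator P_i (i = 1, ..., N) sets bit 2**(i-1)."""
    return sum(2**i for i, a in enumerate(powers) if a > 0)

def is_subsector(powers_a, powers_b):
    """True if the sector of powers_a is a proper subsector of that of
    powers_b, i.e. its positive-power propagators form a proper subset."""
    set_a = {i for i, a in enumerate(powers_a) if a > 0}
    set_b = {i for i, b in enumerate(powers_b) if b > 0}
    return set_a < set_b

print(sector_id((1, 1, 0, 2)))                    # bits 1 + 2 + 8 -> 11
print(is_subsector((1, 0, 0, 0), (1, 1, 0, 2)))   # True
```

Note that raised propagator powers (here a_4 = 2) do not change the sector: dotted integrals belong to the same sector-id.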
As mentioned before, one method to attempt the evaluation of the master integrals is to derive a set of coupled differential equations for the m-dimensional vector of master integrals f(ǫ, {x_j}) [5][6][7]. The master integrals are assumed to be normalized such that their mass dimension is zero. Then, the master integrals can be considered as functions of a set {x_j} of M dimensionless kinematic invariants and the dimensional regulator ǫ defined through d = 4 − 2ǫ, where d denotes the number of spacetime dimensions. The differential equation is obtained by taking the derivatives of f(ǫ, {x_j}) with respect to all kinematic invariants. Each derivative of a master integral equals a linear combination of scalar integrals with the same or lower sector-id. Applying the Laporta reduction to these integrals, one may express the derivative of a master integral again as a linear combination of master integrals. Thus, upon differentiating with respect to all kinematic invariants, the following linear system of differential equations is obtained: ∂f/∂x_i = a_i(ǫ, {x_j}) f, i = 1, …, M, (2.4) with the a_i(ǫ, {x_j}) being m × m matrices of rational functions in the kinematic invariants {x_j} and ǫ. The fact that the matrices a_i(ǫ, {x_j}) are rational functions of the kinematic invariants and ǫ follows from the structure of the integration-by-parts relations. The reduction to master integrals can be done such that each scalar integral is expressed as a linear combination of master integrals with the same or lower sector-id. If the components of f(ǫ, {x_j}) are then ordered by their sector-ids, the a_i attain a block-triangular form where each sector corresponds to a block. In the more compact differential notation, eq. (2.4) can be written as df = a f, a = Σ_{i=1}^{M} a_i dx_i. (2.5) Taking the exterior derivative of eq. (2.5) implies the following integrability condition: da − a ∧ a = 0, where it has been used that the master integrals are linearly independent over the field of rational functions in the invariants.
In practice, this condition can serve as a consistency check of the differential equation.
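In components, the integrability condition states ∂_i a_j − ∂_j a_i = [a_i, a_j] for every pair of invariants. A minimal SymPy sketch with an invented two-letter, two-variable system (the residue matrices are chosen to commute precisely so that the condition holds):

```python
import sympy as sp

x, y, eps = sp.symbols('x y epsilon')

# toy residue matrices with [A, B] = 0 (an assumption made so that
# the integrability condition is satisfied)
A = sp.Matrix([[0, 1], [0, 0]])
B = sp.eye(2) + A

# a = eps*(A dlog(x) + B dlog(x - y)); dx- and dy-components:
a_x = eps*(A/x + B/(x - y))
a_y = eps*(-B/(x - y))

# integrability: d a_y/dx - d a_x/dy - [a_x, a_y] must vanish
check = sp.simplify(sp.diff(a_y, x) - sp.diff(a_x, y) - (a_x*a_y - a_y*a_x))
print(check)   # zero 2x2 matrix
```

In an actual calculation the same check is run on the reduced matrices a_i, exactly as the consistency check mentioned above.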
Transforming the basis of master integrals with an invertible transformation T, f = T f′, as suggested in [8], leads to the following transformation law for a(ǫ, {x_j}): a′ = T^{−1}(aT − dT). (2.9) The differential equation is said to be in dlog-form if a(ǫ, {x_j}) can be written as follows: a = Σ_{l=1}^{N} A_l(ǫ) dlog(L_l({x_j})). Here the L_l({x_j}) denote polynomials in the kinematic invariants and the A_l are m × m matrices, which solely depend on ǫ. The set of polynomials is commonly referred to as the alphabet of the differential equation. The individual polynomials are called the letters of the differential equation. In [8] it was observed that with a suitable change of the basis of master integrals it is often possible to arrive at a form in which the dependence on ǫ factorizes: a = ǫ Σ_{l=1}^{N} Ã_l dlog(L_l({x_j})), with Ã_l being constant m × m matrices. In this form, which is called canonical form or ǫ-form, it is particularly easy to solve the differential equation in terms of iterated integrals [71,72].
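A one-dimensional toy example (not taken from the paper) illustrates the transformation law and the ǫ-form: for a single master integral with a(ǫ, x) = (1 + ǫ)/x dx, the rational transformation T = x removes the ǫ-independent part of the connection:

```python
import sympy as sp

x, eps = sp.symbols('x epsilon')

a = (1 + eps)/x        # dx-coefficient of the original connection
T = x                  # candidate rational transformation

# transformation law a' = T^(-1) (a T - dT), written for the dx-coefficient
a_prime = sp.simplify((a*T - sp.diff(T, x))/T)
print(a_prime)         # eps/x, i.e. a' = eps * dlog(x): canonical form
```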

Algorithm
Finding a basis in which the differential equation attains ǫ-form is in general a highly non-trivial task. In fact, it is not even known whether such a basis can always be found for master integrals that evaluate to multiple polylogarithms. It is assumed throughout this section that an ǫ-form exists for the a(ǫ, {x j }) under consideration and that it can be attained with a rational transformation. In addition to that, it is assumed that a(ǫ, {x j }) itself is also rational in ǫ and the invariants. By making only these mild assumptions, it is possible to both learn more about general properties of the problem and construct an algorithm, which has a broad scope of application. In subsection 3.1 some general properties of a transformation to an ǫ-form are presented. Then, it is shown in subsection 3.2 that such transformations are determined by a finite number of differential equations, which are obtained by expanding a reformulated version of the transformation law. An extension of this strategy to off-diagonal blocks is described in subsection 3.3, allowing for a more efficient recursive application of the algorithm. A generalized partial fractions technique is described in subsection 3.4, which is used in subsection 3.5 to solve the aforementioned differential equations for a rational transformation.

General properties of the transformation
It is useful to first look into general properties of the transformation law eq. (2.9). Let T be a transformation that transforms the differential equation into ǫ-form. Then an Ã exists such that a′ = T^{−1}(aT − dT) = ǫ dÃ (3.2) holds. In this case, eq. (2.9) can be written in the form dT = aT − ǫ T dÃ. A simple, but important observation is that a subsequent constant transformation C does not spoil the ǫ-form of the differential equation, as it leads to a′′ = C^{−1}(ǫ dÃ)C = ǫ d(C^{−1} Ã C), which is again in ǫ-form. Similarly, a subsequent transformation of the form T = g(ǫ)I with a nonzero rational function g(ǫ) does not alter the differential equation at all and thus in particular preserves the ǫ-form.
Taking the trace on both sides of eq. (3.2) leads to Tr[a] − Tr[T^{−1} dT] = ǫ Tr[dÃ]. Applying Jacobi's formula for the differential of determinants, Tr[T^{−1} dT] = dlog(det(T)), this becomes dlog(det(T)) = Tr[a] − ǫ Tr[dÃ]. (3.8) Note that the term dlog-form is used here in a more general sense, since det(T) may depend on ǫ. As the components of T are required to be rational in the invariants and ǫ, det(T) will also have this property. Therefore, the summands of det(T) can be put on a common denominator and the resulting numerator and denominator polynomials can then be factorized into irreducible polynomials in K[ǫ, {x_j}]. Here, K[ǫ, {x_j}] denotes the ring of polynomials in the invariants and ǫ with coefficients in a field K. There is no need to specify the field at this point; for the present application one may have the real or complex numbers in mind. Thus, det(T) can be written as det(T) = F(ǫ) ∏_i p_i({x_j})^{e_i} ∏_j q_j(ǫ, {x_j})^{d_j}, with e_i ∈ Z and d_j ∈ Z. The irreducible factors, which only depend on the invariants, are labeled by p and those, which depend on both ǫ and the invariants, are labeled by q. The product of all factors that solely depend on ǫ is denoted by F(ǫ). The factorization allows to rewrite eq. (3.8) as Tr[a] = Σ_i e_i dlog(p_i) + Σ_j d_j dlog(q_j) + ǫ Tr[dÃ]. (3.10) This equation can be understood as a necessary condition on the form of Tr[a] for a rational transformation T to exist that transforms the differential equation into ǫ-form. In particular, it implies Tr[a^{(k)}] = 0, ∀ k < 0, (3.14) where the a^{(k)} denote the coefficients of the ǫ-expansion of a(ǫ, {x_j}). The coefficients of the dlog-terms stemming from det(T) are integers, whereas the coefficients of the dlog-terms from Tr[dÃ] are proportional to ǫ. The determinant of T can therefore be calculated up to a rational function F(ǫ). Moreover, the traces of the Ã_l of the resulting ǫ-form can be read off as well. In practice, it can be tested whether Tr[a] is of the form (3.10). If this is not the case, it follows that no rational transformation exists that transforms a(ǫ, {x_j}) into ǫ-form. Otherwise, it is possible to extract det(T)/F(ǫ) and the traces of the Ã_l from the coefficients of the dlog-terms.
As will be argued later, both equations provide useful information for the determination of T. Often, the factors q_j are absent and therefore det(T) = F(ǫ) ∏_i p_i({x_j})^{e_i}. In this case, the above observations turn into statements about the coefficients of the ǫ-expansion of a(ǫ, {x_j}): Tr[a^{(0)}] = Σ_i e_i dlog(p_i), (3.15) Tr[a^{(1)}] = Tr[dÃ]. Note that for one-dimensional sectors eq. (3.15) already fixes the transformation up to a rational function in ǫ. It was shown earlier that the choice of this function does not alter the resulting a′. Therefore, the undetermined F(ǫ) may be set to one, which then completely fixes the transformation. The determinant provides valuable information for the computation of T for higher-dimensional sectors as well.
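Jacobi's formula employed above, d det(T) = det(T) Tr[T^{−1} dT], is easy to verify symbolically; a small sketch for an arbitrarily chosen invertible 2×2 matrix T(x):

```python
import sympy as sp

x = sp.symbols('x')
T = sp.Matrix([[x + 1, x], [x**2, 2*x + 3]])   # generic invertible T(x)

# Jacobi's formula: d/dx det(T) = det(T) * Tr[T^(-1) * dT/dx]
lhs = sp.diff(T.det(), x)
rhs = T.det() * (T.inv() * T.diff(x)).trace()
print(sp.simplify(lhs - rhs))   # 0
```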

Expanding the transformation
Every invertible transformation T that transforms the differential equation into ǫ-form has to satisfy eq. (3.2) for some dÃ, which has to be determined as well. For invertible T, eq. (3.2) can equivalently be written as dT − aT + ǫ T dÃ = 0. (3.21) This form has the advantage of not containing the inverse of T. The strategy to find a solution of this equation is to expand T in ǫ and solve for its coefficients order by order.

Reformulation in terms of quantities with finite expansion
In general, the ǫ-expansion of T may have infinitely many non-vanishing coefficients. This poses a problem for the algorithmic computation of these coefficients. In the following, it will be shown how this problem can be circumvented. It is evident that eq. (3.21) is invariant under the multiplication of T by a rational function g(ǫ). Any such rational function can be written as a product of some power of ǫ and a rational function η(ǫ) with non-vanishing constant coefficient: g(ǫ) = ǫ^τ η(ǫ), η(0) ≠ 0. One part of the freedom to choose g(ǫ) can be exploited by demanding the expansion of T to start at order ǫ^0. This condition only fixes the value of τ and leaves η(ǫ) unaffected.
As a(ǫ, {x_j}) is required to be rational in both the invariants and ǫ, a polynomial h(ǫ, {x_j}) exists such that â = ah has a finite Taylor expansion in ǫ: â = Σ_{k=0}^{k_max} ǫ^k â^{(k)}. Similarly, let f(ǫ, {x_j}) be a polynomial with a minimal number of irreducible factors such that T̄ = Tf has a finite expansion, and let this expansion be denoted by T̄ = Σ_{k=0}^{n_max} ǫ^k T̄^{(k)}.
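The factor h is read off from the ǫ-denominators of a. A small illustration (invented a, not from the paper): for a(ǫ, x) = 1/(ǫx) + ǫ/(x − 1), the choice h = ǫ clears the ǫ-pole and â = ah has a terminating Taylor expansion in ǫ:

```python
import sympy as sp

x, eps = sp.symbols('x epsilon')

a = 1/(eps*x) + eps/(x - 1)    # has a simple pole at eps = 0
h = eps                        # minimal polynomial clearing the eps-pole
a_hat = sp.cancel(a*h)         # a_hat = 1/x + eps**2/(x - 1)

# the Taylor expansion of a_hat in eps terminates at k_max = 2
series = sp.series(a_hat, eps, 0, 5).removeO()
print(sp.expand(series))
```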

Investigating the relation of f and h
It is straightforward to compute h for a given a(ǫ, {x_j}). However, f could only be computed directly if T was known. Since this is not the case, the relation of f and h will be investigated in the following. With the above definitions, eq. (3.21) reads Σ_i h T̄ df_i/f_i = h dT̄ − â T̄ + ǫ h T̄ dÃ, (3.29) where the f_i denote the irreducible factors of f. The right-hand side of eq. (3.29) only consists of sums and products of quantities, which are assumed to have a finite expansion. Therefore, both sides of the above equation have a finite expansion. For the left-hand side this means that Σ_i h T̄ df_i/f_i (3.30) has a finite expansion. In the following, it is shown that already each summand of the above sum has a finite expansion. Note that it is sufficient to show that there is no number n of such terms with infinite expansion that can sum up to give a finite expansion. For n = 1 this is obvious and therefore it remains to be shown that if the assertion holds for n terms, it also holds for n + 1 terms. Consider f_1, …, f_{n+1} and assume that the assertion is not true, i.e. each h T̄ df_i/f_i has an infinite expansion but the sum of all of these terms has a finite expansion. Defining F_n = f_1 ⋯ f_n one may write h T̄ dF_n/F_n + h T̄ df_{n+1}/f_{n+1} = h T̄ d(F_n f_{n+1})/(F_n f_{n+1}). (3.31) The second term on the left-hand side has by assumption an infinite expansion and the first term has to have an infinite expansion because the assertion holds for n terms. Since the right-hand side is assumed to have a finite expansion, both F_n and f_{n+1} have to be canceled by corresponding factors in the numerator. However, neither h nor T̄ can be a product of one or both of these factors with a quantity with finite expansion, since this would render the expansions of the terms on the left-hand side finite. Thus, the only possibility left to investigate is d(F_n f_{n+1})/(F_n f_{n+1}) = r(ǫ, {x_j}), (3.32) where r(ǫ, {x_j}) denotes a rational one-form with finite expansion. Upon integration, this relation leads to F_n f_{n+1} = ρ(ǫ) exp(∫ r), with ρ denoting a polynomial in ǫ. Since F_n · f_{n+1} is polynomial in the invariants and ǫ, the finiteness of the expansion of r implies F_n f_{n+1} = ρ(ǫ) p({x_j}), with p being a polynomial in the invariants.
However, since f is required to be minimal, it cannot contain any irreducible factors that are independent of ǫ, and therefore p has to be a constant; thus one finds F_n f_{n+1} = c ρ(ǫ). (3.36) Both F_n and f_{n+1} need to have non-vanishing differentials, because otherwise both terms on the left-hand side of eq. (3.31) would have a finite expansion. Consequently, both factors have a non-trivial dependence on the invariants. Since F_n and f_{n+1} are polynomials, their product has a non-trivial dependence on the invariants as well, which contradicts eq. (3.36). Thus, the assertion has to be true for n + 1 terms as well and therefore, by induction, it holds for all n > 0. Altogether, this shows that each summand in eq. (3.30) has to have a finite expansion.
The minimality of f implies that T̄ cannot be of the form T̄ = r f_i for some rational r(ǫ, {x_j}) with finite expansion, because otherwise the factor f_i would not be necessary to render the expansion of T̄ finite and consequently f would not be minimal. Also note that the minimality of f implies that its irreducible factors must all depend non-trivially on ǫ. There are only the following two possibilities for a summand of (3.30) to have a finite expansion: df_i = r_i f_i or h = r_i f_i, (3.37) where again r_i denotes a rational function of the invariants and ǫ that has a finite expansion. However, since the left-hand sides of (3.37) are polynomial, a denominator of r_i would have to be canceled by f_i, but this would imply that r_i has an infinite expansion. Thus, r_i has in fact to be a polynomial. The first of the above possibilities implies f_i = c_i(ǫ) by an argument analogous to the one around (3.32), where c_i(ǫ) denotes an irreducible polynomial in ǫ. In the second case, f_i is equal to one of the irreducible factors of h(ǫ, {x_j}). Thus, the irreducible factors of f that are not given by an irreducible factor of h are independent of the invariants.
Obtaining a finite expansion with h

As mentioned above, f cannot be used to render the transformation finite, since it cannot be determined directly prior to the computation of T. In the following it will be argued how this can be overcome by exploiting the relation of f and h and by using the remaining freedom in the choice of η(ǫ). Let S denote the set of indices of the irreducible factors of h, which both depend non-trivially on the invariants and are equal to an irreducible factor of f. The product of all irreducible factors of f that only depend on ǫ is denoted by c(ǫ). Using this notation, f can be written as follows: f = c(ǫ) ∏_{i∈S} f_i. The remaining freedom in the choice of the overall factor g(ǫ) can be used to absorb c(ǫ) by demanding η(ǫ) = c(ǫ). This completely fixes g(ǫ) and reduces f to f = ∏_{i∈S} f_i. Although f contains the smallest possible number of irreducible factors that is needed in order to render the expansion of the transformation finite, it cannot directly be used in practice, since the set S is a priori unknown. However, by multiplying with all irreducible factors of h, the resulting transformation will also have a finite expansion. This amounts to defining T̂ = Th, which can now easily be seen to have a finite expansion: since ∏_{i∈S} f_i divides h, the quotient h/∏_{i∈S} f_i is polynomial and T̂ = T̄ h/∏_{i∈S} f_i is a product of quantities with finite expansion.

Expansion of the reformulated transformation law
The transformation law eq. (3.21) can now be rewritten entirely in terms of quantities with finite expansion: −T̂ dh + h dT̂ − â T̂ + ǫ h T̂ dÃ = 0.
Note that the equation at some order k only involves T̂^{(n)} with n ≤ k. Therefore, the T̂^{(n)} can be computed successively, starting with the lowest order. Given some a(ǫ, {x_j}), the first step is to calculate h and â, which fixes the values of l_min, l_max and k_max. The value of n_max remains unknown until the solution for T̂ is known. Therefore, it is tested at each order k whether k = n_max. In order to do so, it has to be checked if T̂^{(n)} = 0 for all n > k solves the equations of the remaining max(k_max, l_max + 1) subsequent orders. The algorithm stops as soon as this test is successful and returns T = T̂/h.
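The reformulated law can be checked on a one-dimensional toy case (illustrative, not from the paper): for a(ǫ, x) = (1 + ǫ)/x dx one has h = 1 and â = a, and the candidate T̂ = x together with dÃ = dlog(x) satisfies −T̂ dh + h dT̂ − â T̂ + ǫ h T̂ dÃ = 0 identically, hence at every order in ǫ:

```python
import sympy as sp

x, eps = sp.symbols('x epsilon')

a_hat = (1 + eps)/x      # a has no eps-poles, so h = 1 and a_hat = a*h
h = sp.Integer(1)
T_hat = x                # finite expansion: only T_hat^(0) = x is non-zero
dA = 1/x                 # dx-coefficient of d(A-tilde) = dlog(x)

# dx-coefficient of -T_hat dh + h dT_hat - a_hat T_hat + eps h T_hat dA
residual = (-T_hat*sp.diff(h, x) + h*sp.diff(T_hat, x)
            - a_hat*T_hat + eps*h*T_hat*dA)
print(sp.simplify(residual))   # 0, so the equation holds at every order
```

In the general case one instead inserts ansätze for the T̂^{(n)} and solves the vanishing of each ǫ-coefficient as a linear problem.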

Recursion over subsectors
The algorithm presented in subsection 3.2 is applicable to differential equations a(ǫ, {x_j}) of arbitrary dimension. However, if a(ǫ, {x_j}) comprises more than one sector, the computational cost can be significantly reduced by making use of its block-triangular form. In particular, the block-triangular form allows the transformation to an ǫ-form to be computed by means of a recursion over the subsectors of a(ǫ, {x_j}). Starting from the lowest subsector, at each step of the recursion the next diagonal block is transformed into ǫ-form with the algorithm presented in subsection 3.2. The off-diagonal blocks are transformed into ǫ-form in a subsequent part of the recursion step, which will be the topic of this subsection. Similar considerations have been made in [14,15,38].

The interplay of overall and subsector transformations
In order to investigate the recursion step, it is assumed that the first p subsectors have already been transformed into a block-triangular ǫ-form by a transformation t_p. Using the algorithm from subsection 3.2, a transformation t_{p+1} can be computed that transforms the next diagonal block into ǫ-form. Up to this point, the transformation has been applied to the original a(ǫ, {x_j}), and the resulting differential equation is block-triangular with diagonal blocks ǫc̃ and ǫẽ and an off-diagonal block b, where c̃ and ẽ are in dlog-form with c̃ being block-triangular. The goal of this subsection is to devise an algorithm to compute the remaining transformation t_r, such that the differential equation attains a block-triangular ǫ-form, i.e. the off-diagonal block is brought into the form ǫb′ with b′ in dlog-form. In subsection 3.1 it has already been established that there is some freedom in the choice of the transformations t_p and t_{p+1}. However, the algorithm from subsection 3.2 just returns one particular choice. It is conceivable that these independently made choices do not fit together and therefore have to be modified. It is thus important to investigate the relation between t_p and t_{p+1} and a transformation T that transforms the full differential equation a(ǫ, {x_j}) into ǫ-form. The superdiagonal block of T has been set to zero in order to preserve the block-triangular form. As the goal is only to find some transformation, a subsequent invertible transformation C and the multiplication by a rational function g(ǫ) may be chosen freely. The relation between the diagonal blocks of T and t_p, t_{p+1} is assumed to be T_p = t_p c_p g_p(ǫ) and T_{p+1} = t_{p+1} c_{p+1} g_{p+1}(ǫ), with invertible constant transformations c_p and c_{p+1}. Here g_p(ǫ) and g_{p+1}(ǫ) are matrices rational in ǫ, which only encompass the additional degrees of freedom that are not already accounted for by the constant matrices c_p and c_{p+1}. Note that this assumption is stronger than the mere assumption of the existence of T, since it has only been shown that a subsequent constant transformation and rescaling with a rational function will preserve the ǫ-form of any differential equation. However, there also exist cases in which there is more freedom than that.
A simple example is given by a differential equation with two sectors, where neither is a subsector of the other. Then there is no off-diagonal block and the integrals of the two sectors can be rescaled with different ǫ-dependent rational functions without altering the ǫ-form.
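This extra freedom is easy to see explicitly: for two uncoupled sectors already in ǫ-form, rescaling each sector by a different rational function of ǫ leaves the differential equation unchanged (toy 2×2 sketch, not taken from the paper):

```python
import sympy as sp

x, eps = sp.symbols('x epsilon')

# two uncoupled one-dimensional sectors in eps-form: a = eps*diag(1, 3)/x
a = eps*sp.diag(1, 3)/x

# sector-wise rescaling by different rational functions of eps
T = sp.diag(1/eps, (1 + eps)/eps**2)

# a' = T^(-1) (a T - dT); T is x-independent, so dT vanishes
a_prime = sp.simplify(T.inv()*a*T)
print(sp.simplify(a_prime - a))   # zero matrix: the eps-form is untouched
```

With a non-vanishing off-diagonal block the two rescalings would no longer commute through a, which is why this freedom is specific to decoupled sectors.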
Let the blocks of the constant transformation C be denoted in the same block structure as T. The freedom in the choice of C can be used to set its diagonal blocks to unit matrices; again, the superdiagonal block has been set to zero in order to preserve the block-triangular form of the differential equation. Together with eq. (3.54) it follows that the computation of t_r may be split into two consecutive steps by means of the factorization t_r = t_D t_KJ, where t_D carries the dependence on the invariants through a block D, and t_KJ is built from blocks K and J depending only on ǫ. At this point it becomes apparent that in general K and J cannot be completely fixed with the choice of g(ǫ). Instead, both K and J need to be determined by eq. (3.51). The choice of g can be used to fix one of the components of K or J. The quantities D, K and J are determined by the equations implied by eq. (3.51). In the latter equation, the product of three unknown quantities occurs in the term ǫ J b′ K^{−1}. A linear ansatz for these quantities would result in nonlinear equations in the coefficients of the ansatz. This can be prevented by defining b̂′ = ǫ J b′ K^{−1} and first solving for D and b̂′. Note that b̂′ has to be in dlog-form, since J and K are independent of the invariants and therefore do not alter the dlog-form of b′.

The determination of t KJ
In a second step, the equations for K and J are solved, which is equivalent to finding a t_KJ that transforms a^D into a′, which is in ǫ-form. This can be achieved by a procedure outlined in [38], which is reproduced here for convenience. Since a^D is in dlog-form, it can be written as a^D = Σ_{l=1}^{N} a^D_l(ǫ) dlog(L_l({x_j})). Every transformation V(ǫ) that transforms a^D into ǫ-form has to satisfy V(ǫ)^{−1} (a^D_l(ǫ)/ǫ) V(ǫ) = õ_l (3.68) for constant matrices õ_l. A necessary condition for V(ǫ) to exist is that the eigenvalues of a^D_l(ǫ)/ǫ are constant. The following argument shows that this is indeed the case. Each of the a^D_l is again of the same block-triangular form as a^D. The determinant of a block-triangular matrix equals the product of the determinants of its diagonal blocks. This leads to a factorization of the characteristic polynomials of the a^D_l: det(a^D_l − λI) = det(ǫc̃_l − λI) det(ǫẽ_l − λI). (3.69) In this form it is obvious that the eigenvalues of a^D_l(ǫ) are proportional to ǫ. Therefore, the eigenvalues of a^D_l(ǫ)/ǫ must be constant. In order to calculate such a transformation, eq. (3.68) has to be solved. Since the constant matrices on the right-hand side are unknown, the components of V(ǫ) cannot be solved for directly. However, as the right-hand side of eq. (3.68) is manifestly independent of ǫ, the following holds for any µ: V(ǫ)^{−1} (a^D_l(ǫ)/ǫ) V(ǫ) = V(µ)^{−1} (a^D_l(µ)/µ) V(µ), or equivalently, in terms of V(ǫ, µ) = V(ǫ) V(µ)^{−1}: (a^D_l(ǫ)/ǫ) V(ǫ, µ) − V(ǫ, µ) (a^D_l(µ)/µ) = 0. In the last form, for each l = 1, …, N there is a linear equation for V(ǫ, µ). This set of equations can now be solved for the components of V(ǫ, µ) subject to the constraint that the block-triangular form is preserved. Finally, a constant µ_0 needs to be chosen such that t_KJ = V(ǫ, µ_0) is non-singular. It is straightforward to check that this t_KJ transforms a^D into ǫ-form.

Setting up a recursion over subsectors for t_D

In the following part of this section the determination of t_D is considered. The goal is to find a rational D and a b′ in dlog-form that satisfy eq. (3.64), which is restated as eq. (3.77). The block-triangular form of c̃ can be used to solve eq. (3.77) in a recursion over subsectors.
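The determination of V(ǫ, µ) described above can be illustrated on an invented 2×2 block. Take a single letter with a^D = M(ǫ) dlog(x) and M(ǫ) = ((ǫ, 0), (1, 2ǫ)), whose eigenvalues ǫ and 2ǫ are indeed proportional to ǫ. Solving the linear system for a triangular V(ǫ, µ) and choosing µ_0 = 1 produces a valid t_KJ:

```python
import sympy as sp

eps, mu = sp.symbols('epsilon mu')

def M(e):
    # dlog(x)-coefficient of a^D; eigenvalues e and 2e are proportional to e
    return sp.Matrix([[e, 0], [1, 2*e]])

# lower-triangular ansatz for V(eps, mu), preserving block-triangularity
v11, v21, v22 = sp.symbols('v11 v21 v22')
V = sp.Matrix([[v11, 0], [v21, v22]])

# linear system (M(eps)/eps) V - V (M(mu)/mu) = 0
sys = (M(eps)/eps)*V - V*(M(mu)/mu)
eqs = [e for e in sys if e != 0]
sol = sp.solve(eqs, v21, dict=True)[0]        # v21 = v22/mu - v11/eps

# a convenient non-singular choice: v11 = eps, v22 = mu, then mu -> mu0 = 1
t_KJ = V.subs(sol).subs({v11: eps, v22: mu}).subs(mu, 1)

# t_KJ is x-independent, so the transformed block is t_KJ^(-1) M(eps) t_KJ
a_prime = sp.simplify(t_KJ.inv()*M(eps)*t_KJ)
print(a_prime)   # eps times a constant matrix: the block is in eps-form
```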
To this end, all quantities are split according to the block-triangular structure into subsectors (3.79). In this notation, eq. (3.77) may equivalently be written as a system of p equations, one for each subsector. Note that the equation for a subsector k only depends on the D_n of higher subsectors n ≥ k. It is therefore possible to solve for the D_k in a recursion that starts with the highest subsector. As for the recursion step, suppose that the equations for the topmost p − k subsectors have already been solved. The contribution of the higher subsectors to the equation of subsector k is most naturally absorbed into the definition of a modified inhomogeneity b̄. Thus, the corresponding equation has to be solved for D_k. Expanding this equation in ǫ, one can argue order by order that the lowest coefficients of D_k are constant. The argument can only be repeated until the equation of order n_min − 1, since at higher orders also contributions from b̄ appear. As the constant values of the D^{(k)} with k < n_min − 1 do not affect the equations of the orders n_min or higher, they can be set to zero without loss of generality. It has now been established that the ǫ-expansion of D can be assumed to start at order n_min − 1 or higher. Note that this assertion incorporates the case m_min > n_min as well. Moreover, the coefficient at order n_min − 1 can be assumed to be constant.

Obtaining finite expansions
The ǫ-expansion of D may still have infinitely many non-vanishing terms. Using ideas similar to those in subsection 3.2, it will be shown that eq. (3.82) can be reformulated such that a solution for D can be obtained by solving only finitely many differential equations. Since D is assumed to be rational in ǫ and the invariants, a polynomial f(ǫ, {x_j}) has to exist such that Ď = Df has a finite ǫ-expansion. Similarly, there exists a polynomial k(ǫ, {x_j}) such that b̌ = b̄k has a finite ǫ-expansion as well. In order to fix f and k up to constant factors, both are required to only contain the minimal number of irreducible factors that are necessary to satisfy the aforementioned conditions. The products of all irreducible factors of f and k that are independent of the invariants are subsequently denoted by f̄ and k̄, respectively. Then their factorizations read f = f̄ f̃ and k = k̄ k̃, where f̃ and k̃ collect the irreducible factors that depend on the invariants. Furthermore, let γ(ǫ) be a polynomial with a minimal number of irreducible factors, such that b′γ has a finite expansion. Note that γ(ǫ) does not depend on the invariants since b′ is in dlog-form.
For a given b̄ it is straightforward to compute k, but as D is not known in advance, f cannot be calculated directly. Therefore, the relation between f and k has to be investigated. In order to do so, consider eq. (3.82) rewritten in terms of Ď. (3.97) The right-hand side obviously has a finite expansion and thus also the left-hand side has to have a finite expansion. By similar arguments as in the previous subsection, each of the summands on the left-hand side has to have a finite expansion. Note that df_i/f_i cannot be equal to a rational function with finite expansion. The same holds for Ď/f_i due to the minimality of f. Since γ does not depend on the invariants, it follows that each f̃_i is equal to some k̃_j and thus k̃ = p f̃, (3.98) with p(ǫ, {x_j}) being a polynomial and f̃ denoting the product of all irreducible factors of f that depend on the invariants. By applying this relation to eq. (3.97) and dividing by f̃, the following equation is obtained. (3.99) The same argument as above leads to p = r f̃. Combining this relation with eq. (3.98), it is evident that the product k̃ of all irreducible factors of k that depend on the invariants contains two powers of f̃: k̃ = r f̃². (3.100) In order to learn about f̄, the above equation is applied to eq. (3.99), which is subsequently divided by f̃ k̄. (3.101) The irreducible factors of r(ǫ, {x_j}) cannot be equal to irreducible factors of f̄(ǫ), because they are not independent of the invariants. The other factors in the numerator can be a product of an irreducible factor of f̄ and a quantity with finite expansion. Since only k̄ is known prior to solving the equations, some irreducible factors of f̄ remain unknown.

Reformulation in terms of quantities with finite expansion
Since f cannot be used in practice, as it is not computable before solving for D, an alternative factor h(ǫ, {x_j}) is defined such that the expansion of D̂ is finite, where the polynomial s(ǫ, {x_j}) is required to have the minimal number of irreducible factors. By virtue of eq. (3.100), this definition ensures that h captures all irreducible factors of f̃. As for the irreducible factors of f̄, it is only known that some of them may be equal to irreducible factors of k̄. The definition of D̂ incorporates all of these factors and leaves the missing factors to a factor g(ǫ) that has to be solved for. Note that the minimality of f implies that g(ǫ) has a non-vanishing constant coefficient. With the definitions b̂ = bh²k̄ and b̂′ = k̄gb′, the differential equation eq. (3.82) can be rewritten entirely in terms of quantities with finite ǫ-expansion (3.107). All quantities on the left-hand side have finite expansions by definition. The expansion of b̂ must be finite as well, because b̂ = b̌s and the expansion of b̌ is finite by definition. Together, this implies that b̂′h² must have a finite expansion. Since b′ is in dlog-form, only factors that are independent of the invariants can render its expansion infinite. However, these factors could not be compensated by h², which is a product of irreducible factors depending on both ǫ and the invariants, and therefore b̂′ itself has to have a finite expansion. Thus, all quantities in eq. (3.107) indeed have a finite expansion. Altogether, the procedure is as follows: first, k̄ and k̃ are computed from the given b, which then allows one to infer h and b̂. Subsequently, eq. (3.107) is expanded in ǫ, and the resulting equations E^(n) = 0 are solved order by order, starting at the lowest order n = n_min − 1.
Since the highest order n_max of the expansion is not known until the solution is known, it is tested at each order n whether n = n_max; to this end, it is checked whether the equations of all higher orders can be satisfied by setting the corresponding coefficients to zero. Once this test has been successful, eq. (3.107) is satisfied to all orders upon setting the coefficients of D̂, g and b̂′ at all still undetermined higher orders to zero as well. The algorithm stops and returns D = D̂/(hk̄g).
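The order-by-order strategy can be illustrated with a deliberately simple toy problem (a pure-Python sketch with invented names; the actual equations E^(n) = 0 are matrix differential equations, not scalar products): determine a polynomial p(ǫ) satisfying a(ǫ)p(ǫ) = r(ǫ) by solving one coefficient equation per order and testing at each order whether the truncated solution already satisfies all remaining equations.

```python
from fractions import Fraction

def solve_order_by_order(a, rhs, max_order=20):
    """Toy model of the order-by-order strategy: find the coefficients
    p_0, p_1, ... of a polynomial p(eps) with a(eps) * p(eps) = rhs(eps).
    The eps^n coefficient equation E^(n) = 0 determines one new unknown
    p_n per order; at each order it is tested whether truncating the
    solution (setting all higher coefficients to zero) already satisfies
    every remaining equation."""
    p = []
    for n in range(max_order):
        # E^(n): sum_i a_i * p_{n-i} = rhs_n, solved for p_n (a_0 != 0)
        known = sum(a[i] * p[n - i] for i in range(1, min(n, len(a) - 1) + 1))
        rhs_n = rhs[n] if n < len(rhs) else Fraction(0)
        p.append((rhs_n - known) / a[0])
        if p[-1] == 0:
            q = p[:-1]  # candidate truncation
            finite_horizon = range(len(q), len(q) + len(a) + len(rhs))
            if all(
                sum(a[i] * q[m - i] for i in range(len(a)) if 0 <= m - i < len(q))
                == (rhs[m] if m < len(rhs) else 0)
                for m in finite_horizon
            ):
                return q
    raise ValueError("no polynomial solution found up to max_order")
```

For a(ǫ) = 1 + ǫ and r(ǫ) = 1 + 3ǫ + 3ǫ² + ǫ³ the routine returns [1, 2, 1], i.e. p(ǫ) = (1 + ǫ)². The termination test mirrors the check for n = n_max above: once all remaining equations hold with vanishing higher coefficients, the truncated solution is returned.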

Leinartas decomposition
In the previous subsections it was shown that the computation of a transformation to ǫ-form is equivalent to finding a rational solution of finitely many differential equations in the invariants. These equations in general admit transcendental solutions as well. The strategy to find a rational solution is to make a rational ansatz. It is favorable to use an ansatz that depends linearly on its parameters, since this translates linear differential equations into equations in the parameters that are again linear. This leaves the question of which type of rational functions is sufficient to express any other rational function as a linear combination. An answer will be given in this subsection by showing that any rational function can be decomposed as a linear combination of a certain simple type of rational functions. In the univariate case, a partial fractions decomposition of the denominator polynomial could be used. However, in the multivariate case a naive generalization of partial fractioning may run into an infinite loop, as the following example illustrates.
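A standard instance of this phenomenon (reproduced here for illustration, with x and y standing for two generic invariants) is the function 1/(xy(x + y)):

```latex
\frac{1}{x y (x+y)} = \frac{1}{x y^{2}} - \frac{1}{y^{2}(x+y)}\,, \qquad
\frac{1}{y^{2}(x+y)} = \frac{1}{x y^{2}} - \frac{1}{x^{2} y} + \frac{1}{x^{2}(x+y)}\,.
```

Partial fractioning the last term 1/(x²(x+y)) with respect to x reproduces the term 1/(y²(x+y)) again, so the naive alternation between the two variables never terminates.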
In the first equation, the partial fractions decomposition was applied with respect to x, and in the second equation it was applied with respect to y. Apparently, this procedure runs into a loop. This can be avoided by a more careful generalization of the partial fractioning procedure, as outlined in [42,43]. In the following, a brief account of this decomposition method is given, based on the above references and [73]. The focus will be on the computational aspects, and only those proofs will be shown that are relevant for the implementation of the decomposition. For the reader's convenience, some definitions and standard results about polynomial rings that are used throughout this subsection are collected in appendix A.

Denominator decomposition
Let K[X] denote the ring of polynomials in d variables X = {x_1, . . . , x_d} with coefficients in a field K. Again, the cases K = R and K = C are the most relevant for the present application, but there is no need to specify the field for the following considerations.
It is straightforward to check that the elements of the ideal E = I ∩ K[Y_1, . . . , Y_m] are annihilating polynomials. The following theorem provides a means to actually compute the elements of E.
Theorem 1 (Elimination Theorem). Let I ⊂ K[X, Y_1, . . . , Y_m] be an ideal and G be a Gröbner basis of I with respect to the lexicographic order with X > Y_1 > · · · > Y_m. Then G ∩ K[Y_1, . . . , Y_m] is a Gröbner basis of the elimination ideal E = I ∩ K[Y_1, . . . , Y_m]. The decomposition itself is based on the following theorem, which provides a generalization of the partial fractions decomposition to the multivariate case.
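As a small illustration of the elimination theorem (a toy example added here, not one from the original text), consider computing an annihilating polynomial for q_1 = x and q_2 = x². With the lexicographic order x > Y_1 > Y_2, a Gröbner basis of the ideal I = ⟨Y_1 − x, Y_2 − x²⟩ ⊂ K[x, Y_1, Y_2] is

```latex
G = \{\, x - Y_1,\; Y_1^{2} - Y_2 \,\}, \qquad
E = I \cap K[Y_1, Y_2] = \langle\, Y_1^{2} - Y_2 \,\rangle ,
```

so κ = Y_1² − Y_2 is recovered as the annihilating polynomial expressing the algebraic dependence q_1² = q_2.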
Theorem 2 (Leinartas). Let f = p/q be a rational function with p, q ∈ K[X], let q = q_1^{e_1} · · · q_m^{e_m} be the unique factorization of q in K[X], and let V_i = {x ∈ K^d | q_i(x) = 0}. Then f can be written as a sum of rational functions whose denominators are products of powers of the q_i, with the sum running over all subsets S ⊆ {1, . . . , m} such that ∩_{i∈S} V_i ≠ ∅ and {q_i | i ∈ S} is algebraically independent.
The proof of this theorem will be presented, because it directly translates into an algorithm that decomposes rational functions into the above form. The decomposition can be separated into two consecutive steps. In the first step, a form is attained that satisfies ∩_{i∈S} V_i ≠ ∅ for each summand. This step is called the Nullstellensatz decomposition. Let f = p/q be a rational function. In the case ∩_{i=1}^m V_i ≠ ∅, the Nullstellensatz decomposition is already complete. Thus, it remains to consider the case ∩_{i=1}^m V_i = ∅. As q_i has the same zero-set as q_i^{e_i}, it follows that {q_1^{e_1}, . . . , q_m^{e_m}} has no common zero in K^d. According to corollary 1, a Nullstellensatz certificate 1 = Σ_{i=1}^m h_i q_i^{e_i} exists in this situation. Multiplying f with this representation of one cancels, in each resulting summand, the factor q_i^{e_i} in the denominator. This step is applied repeatedly until the denominator factors of each term have a common zero. Note that this procedure will eventually stop, since a single irreducible factor always has a non-empty zero-set V_i ≠ ∅. In the second step, the goal is to achieve that {q_1, . . . , q_m} is algebraically independent for each summand. Let f = p/q be a summand of the Nullstellensatz decomposition. If {q_1, . . . , q_m} is algebraically independent, then this term is already in the desired form. If this is not the case, the set {q_1^{e_1}, . . . , q_m^{e_m}} is also algebraically dependent by virtue of lemma 2. Therefore, an annihilating polynomial κ = Σ_{ν∈S} c_ν Y^ν ∈ K[Y_1, . . . , Y_m] exists, which has been written in multi-index notation with S ⊂ N^m. Let µ ∈ S refer to the powers of the monomial with the smallest norm |µ| = Σ_{i=1}^m µ_i. Since the annihilating polynomial vanishes on Q = (q_1^{e_1}, . . . , q_m^{e_m}), the monomial Q^µ can be expressed through the remaining monomials Q^ν, which yields a representation of one that can be used to decompose f. As µ has the smallest norm in S, there has to exist some j for each ν ∈ S such that µ_j + 1 ≤ ν_j, and therefore e_j(µ_j + 1) ≤ e_j ν_j. So in each summand at least one factor in the denominator cancels.
Again, this step is applied repeatedly to all summands whose denominator factors are algebraically dependent. Eventually, this procedure will stop, since a single irreducible factor is obviously algebraically independent. This completes the proof of the Leinartas theorem. Following this proof, a recursive algorithm can be built that computes the above decomposition of rational functions.
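As an illustration of the second step (again a generic example, not one from the original text), consider f = 1/(xy(x + y)). The denominator factors share the common zero (0, 0) but are algebraically dependent, with annihilating polynomial κ = Y_1 + Y_2 − Y_3 for (q_1, q_2, q_3) = (x, y, x + y). Choosing µ = (0, 0, 1) gives the representation of one 1 = (q_1 + q_2)/q_3, and multiplying f with it yields

```latex
\frac{1}{x y (x+y)} = \frac{x+y}{x y (x+y)^{2}}
= \frac{1}{x(x+y)^{2}} + \frac{1}{y(x+y)^{2}}\,,
```

where each summand now has algebraically independent denominator factors with a common zero, i.e. is of the form required by theorem 2.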

Numerator decomposition
The Leinartas decomposition as presented in [42,43] leaves the numerator polynomial untouched. However, by employing multivariate polynomial division, the above decomposition can be extended to the numerator polynomial as well, which results in summands with simpler numerator polynomials. The precise meaning of simple in this context will be stated below.
Consider a summand f = p/(q_1^{e_1} · · · q_m^{e_m}) of the above decomposition, i.e. with ∩_i V_i ≠ ∅ and the q_i being algebraically independent. The numerator polynomial p can be decomposed according to the following theorem (cf. [73]).
Theorem 3 (Division Algorithm). Fix some monomial ordering on Z^d_{≥0} and let (f_1, . . . , f_m) be an ordered m-tuple of polynomials in K[X]. Then every p ∈ K[X] can be written as p = β_1 f_1 + · · · + β_m f_m + r, with β_1, . . . , β_m, r ∈ K[X] and either r = 0 or r a linear combination of monomials with coefficients in K such that no monomial is divisible by any of LT(f_1), . . . , LT(f_m). Moreover, for all β_i f_i ≠ 0 the following holds: multideg(p) ≥ multideg(β_i f_i). It should be noted that the resulting decomposition depends on both the ordering of the tuple (f_1, . . . , f_m) and the monomial ordering. Let the ordered tuple of polynomials be given by the set of denominator polynomials (q_1, . . . , q_m) and apply the above theorem to the numerator polynomial, p = β_1 q_1 + · · · + β m q_m + r, (3.126) to arrive at eq. (3.127). The denominator factors of the resulting summands are still algebraically independent, since every subset of an algebraically independent set of polynomials is algebraically independent. Moreover, every subset of a set of polynomials that share a common zero has a common zero as well. So after decomposing the numerator as above, the denominator polynomials of the resulting summands still have a common zero and are algebraically independent. Therefore, this decomposition can be applied recursively. The recursion stops at a summand whenever no monomial of the numerator polynomial is divisible by the leading term of any of the denominator polynomials. It has to be shown that the recursion always stops after a finite number of steps. For the first summand in eq. (3.127), the one with numerator r, the recursion trivially stops. Concerning the other terms, it is sufficient to show that the multidegree strictly decreases, multideg(p) > multideg(β_i), (3.128) at each step, due to property 3 of definition 5 given in appendix A. Lemma 4 implies multideg(q_i) ≥ 0 with respect to any monomial ordering. However, the case multideg(q_i) = 0 cannot occur, since it would imply q_i = const.
Thus, multideg(q_i) is strictly greater than zero. Using property 2 of definition 5 and lemma 3, it follows that multideg(β_i q_i) = multideg(q_i) + multideg(β_i) > multideg(β_i), which, together with multideg(p) ≥ multideg(β_i q_i) from theorem 3, establishes eq. (3.128).
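A minimal implementation of the division algorithm of theorem 3 can be sketched in a few lines (a self-contained illustration over K = Q; the data representation, function names and the textbook example below are not taken from the original text). Polynomials are stored as dictionaries mapping exponent tuples to rational coefficients, so that Python's tuple comparison realizes the lexicographic order.

```python
from fractions import Fraction

# Polynomials over Q are dicts {exponent tuple: Fraction}; comparing
# exponent tuples with Python's tuple order is exactly the lexicographic
# monomial order, so max(p) is the leading exponent.

def leading_exponent(p):
    return max(p)

def divides(a, b):
    return all(x <= y for x, y in zip(a, b))

def multivariate_division(p, divisors):
    """Division with remainder by an ordered tuple of divisors (theorem 3):
    returns (quotients, remainder) with p = sum_i beta_i f_i + r such that
    no monomial of r is divisible by any leading term LT(f_i)."""
    quotients = [{} for _ in divisors]
    remainder = {}
    p = dict(p)
    while p:
        lt = leading_exponent(p)
        for beta, f in zip(quotients, divisors):
            ltf = leading_exponent(f)
            if divides(ltf, lt):
                shift = tuple(s - t for s, t in zip(lt, ltf))
                c = p[lt] / f[ltf]
                beta[shift] = beta.get(shift, Fraction(0)) + c
                for e, ce in f.items():  # p -= c * x^shift * f
                    e2 = tuple(s + t for s, t in zip(e, shift))
                    p[e2] = p.get(e2, Fraction(0)) - c * ce
                    if p[e2] == 0:
                        del p[e2]
                break
        else:
            remainder[lt] = p.pop(lt)  # LT(p) divisible by no LT(f_i)
    return quotients, remainder
```

For p = x²y + xy² + y² divided by (xy − 1, y² − 1) with lex x > y, this yields the classical textbook decomposition p = (x + y)(xy − 1) + 1 · (y² − 1) + (x + y + 1).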
The terms in such a decomposition are not necessarily linearly independent over K. In the last step, this redundancy is removed by eliminating all linear relations from the set of summands. Altogether, it has been demonstrated that every multivariate rational function can be decomposed into K-linearly independent summands such that the denominator polynomials of each summand share a common zero and are algebraically independent, and such that no monomial of the numerator polynomial is divisible by the leading term of any of its denominator polynomials. In the following, this decomposition is referred to as the Leinartas decomposition, and the individual summands are said to be in Leinartas form.

Solving for a rational transformation
In this subsection the Leinartas decomposition of multivariate rational functions is employed to solve the differential equations that appear at each order of the expansion of eq. (3.42) and eq. (3.107). In both cases these differential equations in general admit transcendental solutions for T̂^(n) and D̂^(n). Since only rational solutions are of interest for the present application, it is natural to solve these equations with a rational ansatz. In the previous subsection it has been shown that any multivariate rational function can be written as a linear combination of rational functions in Leinartas form. In particular, this implies that the rational solutions of the above differential equations can also be written as a linear combination of such functions. Therefore, the ansatz can be chosen to be a linear combination of rational functions in Leinartas form with unknown coefficients, without losing generality.
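The translation of a linear ansatz into linear equations for its parameters can be illustrated with a toy example (all names and the sampling strategy are illustrative only; in the actual algorithm the constraints come from the differential equations, not from fitting a known function). Here the ansatz t_1/x + t_2/(x + 1) is matched to 1/(x(x + 1)) by evaluating both sides at generic rational points:

```python
from fractions import Fraction

def solve_linear(rows, rhs):
    """Solve rows * t = rhs over the rationals by Gauss-Jordan elimination."""
    n = len(rows[0])
    aug = [row[:] + [b] for row, b in zip(rows, rhs)]
    for col in range(n):
        piv = next(r for r in range(col, len(aug)) if aug[r][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        aug[col] = [v / aug[col][col] for v in aug[col]]
        for r in range(len(aug)):
            if r != col and aug[r][col] != 0:
                factor = aug[r][col]
                aug[r] = [v - factor * w for v, w in zip(aug[r], aug[col])]
    return [aug[i][-1] for i in range(n)]

# Linear ansatz t1/x + t2/(x+1) for the target function 1/(x(x+1)):
# evaluating both sides at generic rational points turns the matching
# condition into a linear system for the parameters t1, t2.
basis = [lambda x: 1 / x, lambda x: 1 / (x + 1)]

def target(x):
    return 1 / (x * (x + 1))

points = [Fraction(1), Fraction(2)]
rows = [[f(x) for f in basis] for x in points]
rhs = [target(x) for x in points]
t1, t2 = solve_linear(rows, rhs)
```

The solution t_1 = 1, t_2 = −1 reproduces the partial fractions decomposition; in the same spirit, each order of the expansion yields equations for the matrices of parameters in the ansatz.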

The strategy for diagonal blocks
First, consider the part of the algorithm for the diagonal blocks, outlined in subsection 3.2.
The following ansatz is used for each Taylor coefficient of T̂: T̂^(n) = Σ_k τ_k^(n) r_k, where the r_k are taken from a set R_T of rational functions in Leinartas form and the τ_k^(n) denote m × m matrices of unknown parameters. It is necessary to determine a set R_T that is sufficiently large to encompass a solution. To this end, it is very useful that the determinant of T̂ can easily be computed by virtue of eq. (3.15). The powers of the irreducible factors in the determinant can be used as input for a heuristic procedure to generate an ansatz, which will be described in detail in a future publication [41].
Note that dÃ is also unknown in eq. (3.42). However, the dependence of Ã on the invariants is restricted by the requirement that dÃ be in dlog-form, dÃ = Σ_l α_l dlog(L_l({x_j})), where the α_l are considered to be m × m matrices of unknown parameters and the L_l denote the letters. The set of letters has to be chosen such that it contains all letters that are necessary for a resulting ǫ-form.
A natural choice is to take the set of all irreducible denominator factors occurring in â. In subsection 3.1 it was shown that eq. (3.16) fixes the traces of all α_l and thereby reduces the number of free parameters that have to be solved for. Upon inserting this ansatz into the expansion of eq. (3.42) and requiring the resulting equations to hold for all allowed values of the invariants, a system of equations in the unknown parameters is obtained. It is possible that T̂^(n) is not fully determined by the equations of order n or lower. If Ã is not fully determined by these equations either, terms that are nonlinear in the parameters may arise in the equations of order n + 1. This is due to the term ǫT dÃ in eq. (3.21). Therefore, the system of equations in the unknown parameters is, in general, polynomial.

The strategy for off-diagonal blocks
A similar strategy is employed for the part of the algorithm that is concerned with the off-diagonal blocks, which is discussed in subsection 3.3. For the coefficients of D̂ in the expansion of eq. (3.107), the ansatz D̂^(n) = Σ_l β_l^(n) r_l is made, where the r_l are rational functions in Leinartas form and the β_l^(n) denote matrices of unknown parameters. The set of letters is taken to be the set of irreducible denominator factors in b̂. Since the constant coefficient g^(0) of g(ǫ) is nonzero, eq. (3.107) can be divided by g^(0). Subsequently, this factor can be absorbed into the definitions of D̂ and b̂′. Effectively, this amounts to setting g^(0) = 1 without loss of generality. All higher Taylor coefficients of g(ǫ) are treated as unknown parameters. Once all of the above is inserted into the expansion of eq. (3.107), linear equations in the unknown parameters are obtained at each order.

Beyond the canonical form
The presented algorithm is able to compute a rational transformation of a given differential equation into ǫ-form whenever such a transformation exists and is decomposable in terms of the ansatz that is used. If no such transformation exists for the given ansatz, the equations in the parameters of the ansatz will have no solution. In this case, either the ansatz is not general enough or a rational transformation to ǫ-form does not exist at all. A sufficient condition for the latter case is the presence of non-rational factors in the determinant of the transformation T, which can be computed with eq. (3.15). In this case an ǫ-form may still be attainable with a non-rational transformation.
However, it is well known [36,76–84] that there exist Feynman integrals satisfying higher-order differential equations, for which a canonical form as in (2.13) therefore cannot exist. It has been observed that for the differential equations of these integrals a dlog-form with linear dependence on ǫ can be attained, with the differential matrix taking the form Σ_l (Ā_l + ǫÃ_l) dlog(L_l({x_j})), (3.138) where the Ā_l and Ã_l denote constant matrices. In this more general case, the transformation law (3.21) generalizes to eq. (3.139). Note that the term on the right-hand side of eq. (3.139) has not been present in the original transformation law (3.21). The main ideas of the presented algorithm carry over to the problem of finding a transformation that satisfies the more general equation (3.139). Since eq. (3.139) is invariant under the multiplication of T with a rational function g(ǫ), a procedure similar to the one described in section 3 can be used to construct a transformation with finite expansion. By expanding (3.139) in ǫ and making the ansatz for dĀ in the same way as for dÃ, the algorithm generalizes naturally to this situation. If no canonical form exists, there will be no solution with Ā = 0. In this case there may still be a solution with non-vanishing Ā, which then corresponds to the more general form (3.138) of the differential equation.

Applications
In this section, the algorithm described in the previous section is applied to a set of non-trivial examples. These are given by four two-loop double box topologies, which can be specified by seven propagators and two irreducible scalar products (4.1). A basis of master integrals has been computed for each of the examples with Reduze [66,67].

Two loop single top-quark production
The integrals considered in this example are necessary to include certain color-suppressed contributions in the NNLO QCD corrections to single top-quark production [45], which have been neglected in a previous calculation [44]. These integrals have not been considered before and therefore represent a new result. The algorithm is applied to the planar topology 1 and the non-planar topology 2, which are given by the following sets of propagators.

Conclusion
Assuming the existence of a rational transformation that transforms a differential equation of master integrals to an ǫ-form, the algorithm presented here can be used to compute such a transformation. It is applicable to differential equations involving multiple scales and allows for a rational dependence of the differential equation on the dimensional regulator, and thus extends previous approaches. It has been shown that the transformation can be obtained as the solution of finitely many differential equations. These are solved with an ansatz that is given by a linear combination of rational functions in Leinartas form. Any multivariate rational function can be expressed as a linear combination of functions of this type. After choosing a sufficiently large set of these functions for the ansatz, a transformation can be constructed by solving polynomial equations in the parameters of the ansatz. As already suggested in previous approaches, it is beneficial to make use of the block-triangular form of the differential equation by computing the transformation in a recursion over subsectors. This strategy has been incorporated into the presented algorithm as well. The algorithm has been implemented in Mathematica; this implementation will be the topic of a further publication [41]. The power of the algorithm has been demonstrated by its application to non-trivial integral topologies, some of which were previously unknown. With its broad scope of application, the presented algorithm may prove particularly useful in facilitating multi-loop calculations that involve multiple scales.
The set ⟨f_1, . . . , f_m⟩ = {Σ_{i=1}^m h_i f_i | h_1, . . . , h_m ∈ K[X]} is an ideal, which is called the ideal generated by {f_1, . . . , f_m}. Definition 5 (Monomial ordering). A monomial ordering on K[X] is a relation > on the set of monomials x^α, α ∈ Z^d_{≥0}, which satisfies:
1. > is a total ordering on Z^d_{≥0}.
2. If α > β and γ ∈ Z^d_{≥0}, then α + γ > β + γ.
3. For every non-empty A ⊆ Z^d_{≥0} there exists an α ∈ A such that β > α for all β ≠ α in A, i.e. > is a well-ordering.
While the considerations in subsection 3.4 are agnostic about the monomial ordering, in practice the lexicographic ordering has proven to be a good choice.

Definition 6 (Lexicographic ordering). For α, β ∈ Z^d_{≥0}, it is said that α >_lex β if the leftmost nonzero entry of α − β ∈ Z^d is positive.
Note that different orderings of the variables give rise to different lexicographic orderings.
Definition 7. Let f = Σ_α a_α x^α be a nonzero polynomial in K[X] with α ∈ Z^d_{≥0}, and let > be a monomial order.
1. The multidegree of f is multideg(f) = max{α ∈ Z^d_{≥0} | a_α ≠ 0}, where the maximum is taken with respect to the monomial order >.
2. The leading coefficient of f is LC(f) = a_{multideg(f)} ∈ K.
3. The leading monomial of f is LM(f) = x^{multideg(f)}.
4. The leading term of f is LT(f) = LC(f) · LM(f).
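For concreteness, the notions of definitions 6 and 7 can be realized in a few lines of Python (an illustrative sketch, not part of the original text; polynomials are represented as dictionaries mapping exponent tuples to coefficients, so that Python's tuple comparison is precisely the lexicographic order):

```python
def multideg(f):
    # maximum exponent vector with respect to lex order (definition 7.1);
    # Python compares tuples lexicographically, matching definition 6
    return max(f)

def leading_coefficient(f):
    return f[multideg(f)]  # LC(f), definition 7.2

def leading_monomial(f):
    return multideg(f)  # LM(f), identified with its exponent vector

def leading_term(f):
    return (multideg(f), leading_coefficient(f))  # LT(f) as (monomial, coefficient)

# Example: f = -5x^3 + 7x^2z^2 + 4xy^2z + 4z^2 in the variables (x, y, z)
f = {(3, 0, 0): -5, (2, 0, 2): 7, (1, 2, 1): 4, (0, 0, 2): 4}
```

With respect to lex order with x > y > z, this gives multideg(f) = (3, 0, 0), LC(f) = −5 and LT(f) = −5x³.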