Reducing differential equations for multiloop master integrals

We present an algorithm for reducing the differential equations for master integrals to the Fuchsian form with the right-hand side matrix depending linearly on the dimensional regularization parameter $\epsilon$. We consider linear transformations of the column of functions which are rational in the variable and in $\epsilon$. Apart from some degenerate cases described below, the algorithm allows one either to obtain the required transformation or to ascertain that the system is irreducible to the required form. The degenerate cases are quite anticipated and are likely to correspond to irreducible systems.


Introduction
For the last few decades, the demand for multiloop calculations has been constantly growing, and the methods of such calculations have evolved accordingly. For multiscale integrals, probably the most powerful technique is the differential equations method [1][2][3][4][5]. Within this method, the master integrals are found as solutions of the differential equations obtained with the help of the IBP reduction [6][7][8].
Recently, a remarkable observation concerning the differential equations method has been made by Henn in Ref. [9]. Namely, it appeared that in many cases the $\epsilon$ dependence of the right-hand side of the differential equations for the master integrals can be reduced to a single overall factor of $\epsilon$ by a judicious choice of the master integrals. With this form (and also the initial conditions) at hand, finding the solution up to any fixed order in $\epsilon$ becomes a trivial task. Moreover, the solution manifestly possesses a remarkable property of homogeneous transcendentality. Since then a number of papers have successfully applied this approach to the calculation of various classes of integrals [10][11][12][13][14][15][16][17][18][19].
In general, finding an appropriate basis is not easy. In Refs. [14,20] algorithms of reduction have been presented assuming a very special form of the differential system. Despite these advances, finding an appropriate basis has so far been rather an art than a skill. In particular, explicit parametric representations of the integrals have been used to determine integrals of homogeneous transcendentality. While such methods have already proved their validity, devising a practical algorithm for finding the described form of the differential system is nevertheless of essential interest.
In the present paper we describe a method of finding an appropriate basis which is based on the differential system alone. Our main algorithm can be divided into three stages: 1. Reduction to a Fuchsian form; 2. Normalization of the eigenvalues of the matrix residues; 3. Factoring out $\epsilon$, Section 6.
Except for the last step, it is not specific to systems depending on a parameter. In particular, it can be used to eliminate apparent singularities and to find the monodromy matrices around singular points. We give one nontrivial example of its application.

Preliminaries
The system of the differential equations for master integrals can be written in the matrix form
$$\partial_x J(x,\epsilon) = M(x,\epsilon)\, J(x,\epsilon)\,, \qquad (2.1)$$
where $\epsilon$ is the dimensional regularization parameter ($d = 4-2\epsilon$), $x$ is some parameter, $J$ is the column of the master integrals, and $M$ is an $n\times n$ matrix, rational in both $\epsilon$ and $x$. Under the change of functions
$$J = T\,\tilde J \qquad (2.2)$$
the system transforms into an equivalent one,
$$\partial_x \tilde J = \tilde M\,\tilde J\,, \qquad \tilde M = T^{-1}\left(M\,T - \partial_x T\right). \qquad (2.3,\ 2.4)$$
The observation of Ref. [9] states that it is often possible to find a transformation $T$ such that the new column $\tilde J$ satisfies a simple equation
$$\partial_x \tilde J = \epsilon\, S(x)\, \tilde J\,. \qquad (2.5)$$
For brevity we will refer to such a form of the differential system as $\epsilon$-form. Though it is not stated explicitly in Ref. [9], we will require that the matrix $S$ has a Fuchsian form, i.e.,
$$S(x) = \sum_k \frac{S_k}{x - x_k}\,, \qquad (2.6)$$
where $k$ runs over a finite set. This condition is very important on its own because the form (2.6) allows one to express the result in terms of generalized harmonic polylogarithms. In what follows we will often omit $\epsilon$ in the arguments of functions unless it may lead to confusion.
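As a minimal illustration of the transformation rule (2.4), the following sympy sketch (a toy $2\times 2$ system and a hand-picked rational $T$, both invented for this example and not taken from the paper) shows a change of basis bringing a system to $\epsilon$-form:

```python
import sympy as sp

x, eps = sp.symbols('x epsilon')

def transform(M, T):
    # Under J = T*Jt the system dJ/dx = M*J becomes dJt/dx = Mt*Jt
    # with Mt = T^{-1} (M*T - dT/dx), Eq. (2.4).
    return sp.simplify(T.inv() * (M * T - T.diff(x)))

# Toy system: the off-diagonal x^{-2} term spoils both the Fuchsian
# property and the eps-form.
M = sp.Matrix([[eps/x, 1/x**2],
               [0, -eps/x]])
# Hand-picked rational transformation removing that term.
T = sp.Matrix([[1, -1/(x*(2*eps + 1))],
               [0, 1]])
Mt = transform(M, T)
# Mt == (eps/x) * diag(1, -1): Fuchsian, with eps factored out.
```

Any rational $T$, singular in $x$, is allowed here; the art discussed in this paper is finding such a $T$ systematically.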
Definition 1. The differential system (2.1) is said to have a regular singularity at $x = x_0$ (at $x = \infty$) if its solutions grow at most power-like when $x \to x_0$ ($x \to \infty$) inside any sector with the vertex at this point. The power-like growth of the master integrals (which are the solutions of the system) in the vicinity of any point follows from their parametric representation. Therefore, it is natural to expect that all singular points of the differential system for the master integrals are regular singularities.
An apparent singularity is a regular singularity which is a finite-order pole or a regular point of any solution of the system. Therefore, the monodromy around an apparent singularity is an identity matrix. As we shall see, this means that, locally, we can always remove an apparent singularity by a rational transformation. Near a singular point $x = x_0$ we write the Laurent expansion of $M$ as $M(x) = (x - x_0)^{-p-1} A(x - x_0)$, where $A(y)$ is regular at $y = 0$, $A(0) \neq 0$, and the integer $p \geqslant 0$ is the Poincaré rank of the system at $x_0$. If $p = 0$, we say that the system is Fuchsian at $x = x_0$ and call $A(0)$ a matrix residue. Respectively, we call $x_0$ a Fuchsian point of the system.
It is easy to show that when the Poincaré rank of a system is zero at some point, this point is a regular singularity of the system, but the converse is not always true. However, if some point is a regular singularity, it is possible to transform the system to an equivalent one with zero Poincaré rank at that point. More generally, Moser [21] has given a necessary and sufficient condition for the possibility of reducing the (generalized) Poincaré rank of the system and has also presented an algorithm for finding the appropriate transformation matrix. Barkatou and Pflügel have given an improved version of the algorithm in Refs. [22,23]. Their algorithm consists of a sequence of rational transformations, each lowering the generalized Poincaré rank $p + r/n - 1$, where $r = \operatorname{rank} A(0)$ and $n$ is the size of $A(0)$. Applying these transformations several times for each singularity, one can minimize the Poincaré rank of all singularities, except maybe one (usually chosen to be $x = \infty$). In particular, if all singularities are regular, after the application of the algorithm the Poincaré ranks of all but one singularity can be nullified, and thus the system is reduced to a Fuchsian form everywhere except, maybe, one point. In fact, their algorithm also allows one to transform a regular system to Fuchsian form globally, at the price of introducing some apparent singularities.
The possibility of transforming a regular system to Fuchsian form at all points and eliminating all apparent singularities would mean a positive solution of the 21st Hilbert problem, which consists in proving the existence of linear differential equations having a prescribed monodromy group. However, Bolibrukh in Ref. [24] has proved, by presenting an explicit counterexample, that this is not always possible, and thus the 21st Hilbert problem has a negative solution. Nevertheless, the problem of reducing, when it is possible, a rational differential system to Fuchsian form without apparent singularities is very important. An ultimate solution of this problem in the most general case and, in particular, a way of deciding whether such a reduction is possible, is not known so far to the best of our knowledge.
In this definition the condition $\det T_0 \neq 0$ simply states that $T^{-1}(x)$ is also a power series near the point $x = x_0$ ($x = \infty$). Naturally, regular transformations can not change the pole order of $M$, so we have to consider singular transformations. While there are transformations singular at only one point of the extended complex plane, their form appears to be too restrictive for our purposes. The key tool of our approach is a transformation singular at two points.

Definition 4. A balance is a transformation generated by a matrix $T$ of the form
$$\mathcal{B}(P, x_1, x_2|x) = \bar P + c\,\frac{x - x_2}{x - x_1}\,P\,, \qquad (2.7)$$
where $c$ is some constant and $P$, $\bar P$ are two complementary projectors, i.e., $P^2 = P$ and $\bar P = I - P$.
More specifically, we call the transformation generated by (2.7) the $P$-balance between $x_1$ and $x_2$.
Note that this transformation appears in the consideration of the Riemann problem in complex analysis, see, e.g., Ref. [25]. We will always put $c = 1$ when both $x_1$ and $x_2$ are finite. When $x_1 = \infty$ (when $x_2 = \infty$), we put $c = x_1$ ($c = 1/x_2$) and understand $c(x - x_2)/(x - x_1)$ as the corresponding limit $x_1 \to \infty$ ($x_2 \to \infty$). The inverse of a balance is also a balance, since
$$\mathcal{B}(P, x_1, x_2|x)^{-1} = \mathcal{B}(P, x_2, x_1|x)\,.$$
Therefore, the transformation (2.7) is regular everywhere except the points $x = x_1$ and $x = x_2$, where $T(x)$ and $T^{-1}(x)$, respectively, have simple poles.
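A direct implementation of the balance (2.7), including the limiting conventions for $c$ at infinity, can be sketched as follows (the projector below is an arbitrary example chosen for this sketch):

```python
import sympy as sp

x = sp.Symbol('x')

def balance(P, x1, x2):
    # B(P, x1, x2 | x) = (I - P) + c*(x - x2)/(x - x1)*P, Eq. (2.7).
    # c = 1 for finite x1, x2; c = x1 (c = 1/x2) when x1 (x2) is infinite,
    # with c*(x - x2)/(x - x1) understood as the corresponding limit.
    I = sp.eye(P.shape[0])
    if x1 == sp.oo:
        return (I - P) - (x - x2)*P
    if x2 == sp.oo:
        return (I - P) - P/(x - x1)
    return (I - P) + (x - x2)/(x - x1)*P

# Any projector works; this one is a non-orthogonal example.
P = sp.Matrix([[1, 1], [0, 0]])              # P*P == P
B12 = balance(P, 0, 1)
B21 = balance(P, 1, 0)
# The inverse of a balance is the balance with x1 and x2 interchanged:
assert sp.simplify(B12*B21) == sp.eye(2)
```

The same check goes through when one of the points is infinite, confirming that the limiting conventions are mutually consistent.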

Reduction at one point
The basic idea of reducing the Poincaré rank is to find a projector $P$ such that the transformation generated by (2.7) lowers the rank of $A_0$. For a regular singularity, the idea is to use (2.7) to normalize the eigenvalues of the matrix residue. Let us concentrate on the reduction of the differential system at one point. Without loss of generality, we assume that $x = 0$ is a singular point of the system (2.1) and the Laurent series expansion of $M(x)$ near $x = 0$ has the form
$$M(x) = \frac{A_0 + A_1 x + \ldots}{x^{p+1}}\,, \qquad A_0 \neq 0\,. \qquad (3.1)$$

Lowering Poincaré rank
First, let us consider the problem of lowering the Poincaré rank, so $p > 0$ in this subsection. We assume that $A_0$ is a nilpotent matrix, since this is a necessary condition for the existence of a transformation lowering the Poincaré rank [21]. Therefore, $A_0$ can be reduced to Jordan form with zero diagonal. Let $r = \operatorname{rank} A_0$; then a necessary and sufficient condition for the existence of a transformation lowering the generalized Poincaré rank $p + r/n - 1$ introduced in Ref. [21] is that
$$x^{r}\,\det\!\left(\lambda I - x\,M(x)\right)\Big|_{x=0} = 0 \qquad (3.2)$$
identically as a function of $\lambda$. It is convenient to use an equivalent form of this condition, which was introduced in Ref. [22]. Let $\{u^{(\alpha)}_k\,|\,k = 1,\ldots,N,\ \alpha = 0,\ldots,n_k\}$ be a basis constructed of the generalized eigenvectors of $A_0$ with the properties
$$A_0\, u^{(\alpha)}_k = u^{(\alpha-1)}_k\,, \qquad u^{(-1)}_k = 0\,.$$
Here $N$ is the number of Jordan cells (including the trivial ones) and $n_k$ is the rank of the $k$-th Jordan cell, which is its dimension minus one. In what follows we assume that the Jordan cells are ordered by their sizes, so that $n_1 \geqslant n_2 \geqslant \ldots \geqslant n_N$. Let $U = (u^{(0)}_1, \ldots, u^{(n_1)}_1, u^{(0)}_2, \ldots)$ be the matrix formed by these vectors as columns. This matrix generates the similarity transformation $A_0 \to \tilde A_0 = U^{-1} A_0 U$ reducing $A_0$ to Jordan form. The rows $v^{(\alpha)\dagger}_k$ of $U^{-1}$ then satisfy
$$v^{(\alpha)\dagger}_k A_0 = v^{(\alpha+1)\dagger}_k\,, \qquad v^{(n_k+1)\dagger}_k = 0\,, \qquad v^{(\alpha)\dagger}_k u^{(\beta)}_l = \delta_{kl}\,\delta_{\alpha\beta}\,, \qquad (3.7)$$
so that $\{u^{(\alpha)}_k\}$ and $\{v^{(\alpha)\dagger}_k\}$ form dual bases. We will call $v^{(\alpha)\dagger}_k$ the left generalized eigenvectors of $A_0$, in contrast to $u^{(\alpha)}_k$, which we will call the right generalized eigenvectors of $A_0$. The basis is not unique: in particular, one may perform the transformations
$$u^{(\alpha)}_l \to u^{(\alpha)}_l + c\, u^{(\alpha)}_k\,, \qquad \alpha = 0, \ldots, n_k\,, \qquad (3.9)$$
where $c$ is an arbitrary number and $k$ and $l$ are some fixed Jordan cell numbers, $k > l$ (we remind that $n_1 \geqslant n_2 \geqslant \ldots \geqslant n_N$ in our convention). The above transformation corresponds to the transformation of the matrix $U$
$$U \to U\,(I + c\,E^{(l,k)})\,, \qquad (3.10)$$
where $(E^{(l,k)})^{i\alpha}_{j\beta} = \delta_{il}\,\delta_{jk}\,\delta_{\alpha\beta}$. Here we denoted by $k\alpha$ the number of the column in which $u^{(\alpha)}_k$ stands in $U$. The condition (3.2) can be written as [22,23] $\det L(\lambda) = \det(L_0 + \lambda L_1) = 0$, and under (3.9) the matrix $L_0$ transforms as
$$L_0 \to (I - c\,\delta_{n_k n_l}\,\Delta^{(l,k)})\,L_0\,(I + c\,\Delta^{(l,k)})\,, \qquad (3.12)$$
where $\Delta^{(l,k)}$ is the matrix with unity at the intersection of the $l$-th row and the $k$-th column and zero elsewhere, i.e., $\Delta^{(l,k)}_{ij} = \delta_{il}\,\delta_{jk}$. It is easy to check that $L_1$ is invariant under these transformations.
A general composition of the transformations of the form (3.9) can be written as
$$U \to U\,(I + E)\,, \qquad (3.13)$$
with the corresponding transformation (3.14) of $L_0$. The expression for $\Delta$ can be derived from the representation $I + \Delta = \prod_i (I + c_i\, \Delta^{(l_i,k_i)})$, but its explicit form is irrelevant for the further discussion. What is relevant is that, given an arbitrary upper-triangular matrix $\Delta$ with zero diagonal, we can easily reconstruct $E$.
Our idea now is to use the transformations (3.12) to reduce the matrix $L$ to a form which allows for a simple determination of the appropriate projector $P$ for the rank-reducing transformation (2.7). Namely, we have the following

Claim 1. Using the transformations (3.12) it is possible to secure that $(L_0)_{jk} = 0$ for any $j$ and $k$ satisfying
$$j \in S \ \&\ k \in S \cup \{k_0\}\,, \qquad (3.16)$$
where $k_0$ is the number of a nontrivial Jordan cell (so that $n_{k_0} \neq 0$) and $S$ is some set of numbers of trivial Jordan cells, i.e., $n_i = 0$ for any $i \in S$.
A constructive proof of this claim is given in Algorithm 1.
Output: $\{k_0, S, \Delta\}$, where $\Delta$ is upper-triangular with zero diagonal, such that the transformation (3.14) results in $L_0$ of the form described in Claim 1 with the corresponding $k_0$ and $S$.
Construct $L_0' = (a_1, a_2, \ldots)$ by striking out from $L_0$ all rows with numbers from $S$; below $a_i$ denotes the $i$-th column of this matrix. On line 6, find the minimal $i$ such that $i \notin S$ and the $i$-th column of $L_0'$ is linearly dependent on the first $i - 1$ columns. The transformation on line 9 guarantees that any $i$-th column of $L_0$ with $i \in S$ is zero. It may not be obvious why it is always possible to find an appropriate $i$ on line 6 when $S$ contains only numbers larger than $k$. To explain this, let us examine the form of the matrix $L(\lambda)$ after $m$ passes of the 'repeat' loop. Then $S = \{i_1, \ldots, i_m\}$, where $i_j > k$ is the number appearing at pass #$j$. Let $L'(\lambda)$ denote the matrix obtained from $L(\lambda)$ by a simultaneous rearrangement of columns and rows such that the $i_j$-th column (and row) of the latter is the $j$-th-to-last of the former. Then $L'(\lambda)$ has a block-triangular form whose lower-right block $Z(\lambda)$ is a lower-triangular $m \times m$ matrix with diagonal elements equal to $\lambda$. Then, from the condition $\det L'(\lambda) = \det L(\lambda) = 0$, we obtain $\det X(\lambda) = 0$ for the complementary diagonal block $X(\lambda)$ and, in particular,
$$\det X(0) = 0\,. \qquad (3.18)$$
Now we note that the columns of $X(0)$ coincide, up to a rearrangement, with the eligible columns of $L_0'$ on line 5 of the algorithm, and the condition (3.18) tells us that there is a linear dependence between them. Thus, it is indeed possible to find $i$ as prescribed on line 6. The algorithm terminates, at the latest, when all $i > k$ are already included in $S$.

Now we can use the output of Algorithm 1 for the construction of the appropriate projector, such that the transformation (2.7) strictly lowers the rank of $A_0$. First, we use $\Delta$ for the reconstruction of the matrix $E$. To this end it suffices to represent $\Delta$ as a linear combination of the $\Delta^{(l,k)}$: trivially, $\Delta = \sum_{l<k} \Delta_{lk}\, \Delta^{(l,k)}$, so $E = \sum_{l<k} \Delta_{lk}\, E^{(l,k)}$. Using this matrix, we apply the transformation (3.13) to $U$ and pass to the new basis $\{u^{(\alpha)}_k\}$. Then we have the following

Claim 2. The balance $\mathcal{B}(P, 0, x_2|x)$ (3.19), where $x_2 \neq 0$ is an arbitrary point and
$$P = \sum_{k \in S \cup \{k_0\}} u^{(0)}_k\, v^{(0)\dagger}_k\,, \qquad (3.20)$$
strictly lowers the rank of $A_0$.
The proof is very simple. We note that $A_0 P = 0$, so the Laurent expansion of the transformed matrix $\tilde M$ near $x = 0$ has the same form (3.1) with a new leading coefficient $\tilde A_0$. In order to prove that $\tilde A_0$ has matrix rank strictly smaller than that of $A_0$, it is sufficient to demonstrate that $\tilde A_0$ has more eigenvectors (with zero eigenvalue) than $A_0$. One checks that any left eigenvector $v^{(n_k)\dagger}_k$ of $A_0$ remains a left eigenvector of $\tilde A_0$: the potentially dangerous terms are proportional to $(L_0)_{jk}$ with $j \in S$ and $k \in S \cup \{k_0\}$, and, according to Claim 1, these $(L_0)_{jk}$ vanish. So all eigenvectors of $A_0$ remain eigenvectors of $\tilde A_0$, and the latter obviously has at least one extra eigenvector. Applying the transformation (3.19) several times, we lower the rank of the leading coefficient $A_0$ until it becomes zero (and thus the leading term vanishes). This lowers the Poincaré rank by one. Acting in the same way, we finally lower the Poincaré rank to zero.
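The mechanism can be illustrated on a toy example (the matrices below are invented for this sketch, not taken from the construction above): for a nilpotent leading coefficient, balancing with the projector onto its kernel vector $u^{(0)}$, so that $A_0 P = 0$, lowers the Poincaré rank:

```python
import sympy as sp

x, a, b = sp.symbols('x a b')

def transform(M, T):
    # Mt = T^{-1} (M*T - dT/dx), Eq. (2.4)
    return sp.simplify(T.inv() * (M * T - T.diff(x)))

# Poincare rank p = 1 at x = 0 with nilpotent leading coefficient A0.
A0 = sp.Matrix([[0, 1], [0, 0]])
A1 = sp.diag(a, b)
M = A0/x**2 + A1/x

# Projector onto u^(0) = (1, 0)^T; note that A0*P = 0.
P = sp.Matrix([[1, 0], [0, 0]])
# Balance B(P, 0, 1 | x) = (I - P) + (x - 1)/x * P.
T = (sp.eye(2) - P) + (x - 1)/x*P
Mt = transform(M, T)
# The pole of Mt at x = 0 is now simple (Fuchsian), at the price of a
# new singularity at the second point x = 1.
assert sp.simplify(x**2*Mt).subs(x, 0) == sp.zeros(2, 2)
```

Note that the second point $x_2 = 1$ picks up a simple pole, in line with the discussion of Section 4 on keeping track of the form at the other point.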
Algorithm 1, as well as the transformation (3.19), is very similar to those presented in Refs. [22,23]. Moreover, our transformation is not optimal in the sense of [22]. The only advantage of the transformation (3.19) is that it gives as few terms in the sum in Eq. (3.20) as possible. This will be helpful for the constructions of Section 4.

Normalizing eigenvalues in Fuchsian singularities
The results of the previous subsection allow one to reduce the Poincaré rank at one point in a stepwise manner, provided $A_0$ is nilpotent and (3.2) holds. If at some step either of these two conditions fails, then the point is an irregular singularity. Otherwise, we can lower the Poincaré rank to zero, i.e., make the system Fuchsian at the given point. The question remains whether we can do still better: can we find a rational transformation that will restrict the form of the matrix residue? In this subsection we assume that $p = 0$ in Eq. (3.1), i.e., that the Laurent series expansion of $M(x)$ near $x = 0$ has the form
$$M(x) = \frac{A_0}{x} + A_1 + \ldots\,.$$
Similar to the previous subsection, let $\{u^{(\alpha)}_k\}$ be a basis constructed of the generalized eigenvectors of $A_0$, with $u = u^{(0)}_1$ an eigenvector corresponding to the eigenvalue $\lambda_1$ of $A_0$. The vectors of the dual basis $\{v^{(\alpha)\dagger}_k\}$ obey the orthonormality condition (3.7). Let us consider the transformation generated by $\mathcal{B}(P, 0, x_2|x)$ with $P = u\,v^\dagger$, where $v^\dagger$ is the corresponding vector of the dual basis, $v^\dagger u = 1$. Since $\bar P A_0 P = \lambda_1 \bar P P = 0$, the Laurent series expansion of the transformed matrix $\tilde M$ near $x = 0$ again starts from $x^{-1}$. We have the following

Proposition 1. With account of multiplicity, only one eigenvalue of $\tilde A_0$ is different from the corresponding eigenvalue of $A_0$: namely, $\lambda_1$ changes to $\lambda_1 + 1$.

The proof of this proposition becomes obvious if one examines the form of $\tilde A_0$ in the chosen basis and calculates its characteristic polynomial: in this basis $\tilde A_0$ differs from $A_0$ only in the first column and the first row (the first column being shifted by the first column $c_1$ of the matrix $A_1$), so the characteristic polynomial changes by the factor $(\lambda - \lambda_1 - 1)/(\lambda - \lambda_1)$. Similarly, the inverse balance $\mathcal{B}(P, x_2, 0|x)$ shifts one eigenvalue down. Thus we come to the following

Claim 3. Using balances it is possible to reduce the matrix residue to a normalized form in which all its eigenvalues have real parts lying in the interval $[a, a + 1)$, where $a$ is a real number.
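A toy demonstration of Proposition 1 (the system below is invented for illustration): balancing with $P = u\,v^\dagger$ built from the eigenvector of $A_0$ for the eigenvalue $-1+\epsilon$ shifts that eigenvalue up by one, which in this small example normalizes the whole system at once:

```python
import sympy as sp

x, eps = sp.symbols('x epsilon')

def transform(M, T):
    # Mt = T^{-1} (M*T - dT/dx), Eq. (2.4)
    return sp.simplify(T.inv() * (M * T - T.diff(x)))

# Fuchsian system with residue A0 at x = 0 and -A0 at x = 1;
# the eigenvalues of A0 are -1 + eps and eps.
A0 = sp.diag(-1 + eps, eps)
M = A0/x - A0/(x - 1)

# P = u v^dag with u = v = (1, 0): eigenprojector for lambda_1 = -1 + eps.
P = sp.Matrix([[1, 0], [0, 0]])
T = (sp.eye(2) - P) + (x - 1)/x*P          # B(P, 0, 1 | x)
Mt = transform(M, T)
# lambda_1 is shifted to eps: both residues become eps*diag(1, 1),
# i.e. the system is normalized and in fact already in eps-form.
assert sp.simplify(Mt - eps*(1/x - 1/(x - 1))*sp.eye(2)) == sp.zeros(2, 2)
```

The opposite balance $\mathcal{B}(P, 1, 0|x)$ would instead shift the eigenvalue down by one, in accordance with the "balancing of scales" picture.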
The usual choice is $a = 0$; however, we will prefer $a = -1/2$ for reasons which should become clear from the consideration below. Note that in this normalized form the monodromy matrix for a small loop around $x = 0$ is given by
$$M_0 = e^{2\pi i A_0}\,. \qquad (3.33)$$
Thus, using the results of this subsection and the previous one, we can easily find the monodromy matrix around any regular singular point of the differential system. In particular, we can detect whether a given point is an apparent singularity (i.e., whether the monodromy is an identity). Note that if the matrix residue is not normalized, the monodromy matrix is, in general, not given by Eq. (3.33) due to resonances (eigenvalues of $A_0$ whose difference is a nonzero integer).
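Eq. (3.33) can be checked mechanically. For instance, for a normalized but non-diagonalizable residue (a toy example invented here), the monodromy differs from the identity even though all eigenvalues vanish, so such a point is not an apparent singularity:

```python
import sympy as sp

# Normalized matrix residue: eigenvalues {0, 0} (real parts in [-1/2, 1/2)),
# but with a nontrivial Jordan cell.
A0 = sp.Matrix([[0, 1], [0, 0]])
# Monodromy for a small loop around the point, Eq. (3.33): exp(2*pi*i*A0).
M0 = (2*sp.pi*sp.I*A0).exp()
# For nilpotent A0 the exponential series truncates: M0 = I + 2*pi*i*A0,
# which is not the identity, so the singularity is not apparent.
assert sp.simplify(M0 - (sp.eye(2) + 2*sp.pi*sp.I*A0)) == sp.zeros(2, 2)
```

Conversely, for an apparent singularity the normalized residue must vanish entirely, so that (3.33) gives the identity matrix.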

Global reduction
The transformations considered in the previous section have a serious flaw: while improving the form of the matrix at one point, they, in general, worsen its form at another. In principle, the reduction of the Poincaré rank to zero can always be done at the cost of introducing some apparent Fuchsian singularities. This is because balances may increase the pole order at most by one, so, choosing at each step a regular point as $x_2$, we can globally reduce the Poincaré rank to zero. However, we would, of course, like to avoid generating unnecessary apparent singularities in the process of reducing the Poincaré rank. The situation is different when we want to normalize all Fuchsian singularities. In this case we definitely do not want to generate apparent singularities, since an apparent singularity is never normalized (otherwise there would be no singularity at all). In the present section we show that, except for some degenerate cases, it is possible to slightly modify the projectors constructed in the previous section so that the resulting balances respect the Poincaré rank at the second point.
Let us first describe transformations which do not increase the Poincaré rank at any point. Suppose $x_1$ and $x_2$ are two finite singular points of the matrix $M(x)$, so that the Laurent series around $x_1$ and $x_2$ have the form
$$M(x) = \frac{A_0 + A_1 (x - x_1) + \ldots}{(x - x_1)^{p_1 + 1}} = \frac{B_0 + B_1 (x - x_2) + \ldots}{(x - x_2)^{p_2 + 1}}\,,$$
with $p_1 \geqslant 0$ and $p_2 \geqslant 0$.
Claim 4. If Q is a projector such that Im Q and Ker Q are invariant subspaces of A 0 and B 0 , respectively, then the transformation B(Q, x 1 , x 2 |x) does not increase the Poincaré rank of M at any point.
The proof is straightforward after observing that $Q$ satisfies
$$A_0\, Q = Q\, A_0\, Q\,, \qquad Q\, B_0 = Q\, B_0\, Q\,,$$
which express the invariance of $\operatorname{Im} Q$ under $A_0$ and of $\operatorname{Ker} Q$ under $B_0$. We stress that the claim is also valid when one or both points are Fuchsian. More explicitly, let $u_1, \ldots, u_m$ be a basis of an invariant subspace of $A_0$ and let $w^\dagger_1, \ldots, w^\dagger_m$ be a basis of a left invariant subspace of $B_0$, normalized so that $w^\dagger_j u_k = \delta_{jk}$ (4.6). Such a basis for the $m$-dimensional left subspace exists iff the subspace does not contain a vector orthogonal to all of $u_1, \ldots, u_m$. Then
$$Q = \sum_{k=1}^{m} u_k\, w^\dagger_k$$
is a projector satisfying the conditions of Claim 4. Let us now consider the $Q$-balance between $x_1$ and $x_2$ with $\operatorname{Im} Q = \operatorname{Im} P$, where $P$ is the projector of Eq. (3.20) and all notations are as in Eq. (3.20).

Claim 5. The balance $\mathcal{B}(Q, 0, x_2|x)$ lowers the rank of the leading coefficient at $x = 0$ in the same way as $\mathcal{B}(P, 0, x_2|x)$, while not increasing the Poincaré rank at $x = x_2$.

In order to prove this claim, let us use the identities $PQ = Q$ and $QP = P$, which simply follow from the definitions of the projectors $P$ and $Q$, Eqs. (3.20) and (4.6). Taking into account that $(\bar P + Q)^{-1} = \bar Q + P$, we see that the transformed leading coefficient $\tilde A_0$ after the transformation $T_1 = \mathcal{B}(Q, 0, x_2|x)$ coincides with that after the transformation $T_2 = \mathcal{B}(P, 0, x_2|x)(\bar P + Q)$ (note that these transformations are nevertheless different, since $T_1 = (\bar Q + P)\,T_2$). The correctness of Claim 5 then follows from the fact that, on the one hand, $\mathcal{B}(Q, 0, x_2|x)$ satisfies the conditions of Claim 4 and, on the other hand, the leading coefficient is transformed as though by a product of $\mathcal{B}(P, 0, x_2|x)$, which satisfies the conditions of Claim 2, and a constant nonsingular matrix (which does not change the rank of $\tilde A_0$). Similar modifications should also be made for the balances used for the normalization of the matrix residue eigenvalues: we simply replace in their definitions the dual vector $v^\dagger$ by a left eigenvector of $B_0$ normalized to the eigenvector $u$ of $A_0$. Again, the leading coefficient is transformed as though by the transformation generated by $\mathcal{B}(P, 0, x_2|x)$ times a constant nonsingular matrix, and, since $\tilde A_0$ is, up to a similarity transformation, the same as before, Proposition 1 proves the claim.
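A quick sympy check of Claim 4 on an invented example: with $\operatorname{Im} Q$ invariant under $A_0$ and $\operatorname{Ker} Q$ invariant under $B_0$, the $Q$-balance leaves both points Fuchsian and creates no new singularities:

```python
import sympy as sp

x = sp.Symbol('x')
a, b, al, be, ga, de = sp.symbols('a b alpha beta gamma delta')

def transform(M, T):
    # Mt = T^{-1} (M*T - dT/dx), Eq. (2.4)
    return sp.simplify(T.inv() * (M * T - T.diff(x)))

# Fuchsian residues at x1 = 0 and x2 = 1 (toy example):
A = sp.Matrix([[al, a], [0, be]])    # upper-triangular: span(e1) invariant
B = sp.Matrix([[ga, 0], [b, de]])    # lower-triangular: span(e2) invariant
M = A/x + B/(x - 1)

Q = sp.Matrix([[1, 0], [0, 0]])      # Im Q = span(e1), Ker Q = span(e2)
T = (sp.eye(2) - Q) + (x - 1)/x*Q    # B(Q, 0, 1 | x)
Mt = transform(M, T)

# Both points stay Fuchsian and no new pole appears:
# x*(x - 1)*Mt must be a polynomial matrix in x.
poly = (x*(x - 1)*Mt).applyfunc(sp.cancel)
assert all(e.is_polynomial(x) for e in poly)
```

With a generic projector violating the invariance conditions, the same check fails: the balance would raise the pole order at one of the points.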
If the second point is also Fuchsian, this transformation simultaneously shifts in the opposite direction the eigenvalue of the matrix $B_0$ corresponding to $v^\dagger$ and $u$, respectively. Therefore, the process of normalization resembles the balancing of scales, which is the reason why we call the transformation (2.7) a balance.

Definition 5. We say that a Fuchsian point $x_1$ can be balanced with a point $x_2 \neq x_1$ if at least one of the two following conditions holds: 1. there exist $u$ and $v^\dagger$, right and left eigenvectors of $A_0$ and $B_0$, respectively, such that $v^\dagger u = 1$ and the real part of the eigenvalue of $A_0$ corresponding to $u$ is less than $-1/2$; 2. there exist $u$ and $v^\dagger$, right and left eigenvectors of $B_0$ and $A_0$, respectively, such that $v^\dagger u = 1$ and the real part of the eigenvalue of $A_0$ corresponding to $v^\dagger$ is greater than or equal to $1/2$.
Here $A_0$ and $B_0$ are the matrix residues of the Laurent expansion of $M(x)$ near $x = x_1$ and $x = x_2$, respectively. More specifically, we say that $x_1$ can be balanced with $x_2$ via $\mathcal{B}(uv^\dagger, x_1, x_2|x)$ or via $\mathcal{B}(uv^\dagger, x_2, x_1|x)$, depending on whether the first or the second condition holds.
Definition 6. We say that two Fuchsian points $x_1$ and $x_2 \neq x_1$ can be mutually balanced if at least one of the two following conditions holds: 1. there exist $u$ and $v^\dagger$, $A_0 u = \lambda u$, $v^\dagger B_0 = \mu v^\dagger$, such that $\Re\lambda < -1/2$, $\Re\mu \geqslant 1/2$, and $v^\dagger u = 1$; 2. the same condition with the roles of $A_0$ and $B_0$ interchanged.
Here $A_0$ and $B_0$ are the matrix residues of the Laurent expansion of $M(x)$ near $x = x_1$ and $x = x_2$, respectively. More specifically, we say that $x_1$ and $x_2$ can be mutually balanced via $\mathcal{B}(uv^\dagger, x_1, x_2|x)$ or via $\mathcal{B}(uv^\dagger, x_2, x_1|x)$, depending on whether the first or the second condition holds.
The reason for these definitions is clear: if $x_1$ can be balanced with some point, there exists a balance which moves one eigenvalue of the matrix residue at $x = x_1$ towards the interval $[-1/2, 1/2)$. If the two points can be mutually balanced, there exists a balance which simultaneously moves one eigenvalue of the matrix residue at $x = x_1$ and one at $x = x_2$ towards the interval $[-1/2, 1/2)$.

Reduction process
The transformations described in two previous sections give one much freedom in reducing a given system to a Fuchsian form and in normalizing eigenvalues of the matrix residues at Fuchsian points.
Let us summarize the basic line of the reduction process in the form of two algorithms. Note that Algorithm 2 assumes that all singular points of the system are regular, so the transformation on line 13 can always be constructed. Let us comment on condition 2 on line 5. This condition holds if it is possible to find an invariant subspace of the matrix $B_0$ which has a basis dual to $\{u^{(0)}_k,\ k \in S \cup \{l\}\}$, see (4.6). This appears to be a nontrivial task due to the complexity of the set of invariant subspaces of an arbitrary matrix, see, e.g., Ref. [26]. However, one might try the subspace formed by the eigenvectors of $B_0$ and consecutively add vectors from the Jordan chains if needed. If these attempts fail, one may simply go to line 10 at the price of possibly introducing an extra apparent singularity. Given that at the next stage this singularity is likely to disappear, this is not a real problem.
Though being very useful, the above algorithms do not necessarily give a canonical form of $M(x)$ in any sense. In particular, the outcome depends on the sequence of pairs of points chosen at each specific step. However, in many tested cases, this algorithm succeeds in normalizing the system at all but one singular point and, in particular, in removing all apparent singularities. As was already mentioned, the possibility of removing all apparent singularities is equivalent to the content of the 21st Hilbert problem. As proved by Bolibrukh [24], this task can not always be completed, and, therefore, the 21st Hilbert problem has a negative solution. In his paper Bolibrukh presents an example of a system which can not be reduced to Fuchsian form without apparent singularities. We have checked that our algorithm indeed fails to reduce this system: at some step it appears to be impossible to balance an apparent singularity with any other singular point due to the orthogonality of the corresponding eigenvectors.
On the other hand, in the same paper it was proved that for $n = 2$ the 21st Hilbert problem can always be solved. For our setup, this translates to the statement that, given a Fuchsian system of two equations, it is always possible to get rid of the apparent singularities. Let us show that the tools developed in this section easily allow one to perform this task, thus giving a constructive proof of the statement. Our line of reasoning is very simple: we show that it is always possible to shift the eigenvalues of the matrix residue at the apparent singularity towards the interval $[-1/2, 1/2)$ without introducing new apparent points or increasing the pole order. The eigenvalues of the matrix residue at an apparent singularity are necessarily integer; otherwise, we may show that the point is not an apparent singularity by normalizing the system at this point (possibly spoiling its form at others) and calculating the monodromy from Eq. (3.33). Moreover, when both eigenvalues are zero, the whole matrix residue is zero. Then, in a finite sequence of shifts we eventually eliminate the singularity. Eliminating the singularities one by one, we obtain the desired form.
Suppose $x = 0$ is the apparent singularity and $A_0 \neq 0$ is the $2 \times 2$ matrix residue at this point. Note that a differential system in Fuchsian form can not have only one singular point, so we may rely on the existence of at least one singularity different from $x = 0$. If both eigenvalues of $A_0$ are nonzero and of the same sign, we may use the transformation $T = \frac{x}{x - x_2}\, I$ or $T = \frac{x - x_2}{x}\, I$ to lower or raise both eigenvalues, where $x_2$ is some other singular point. Thus, we may restrict ourselves to the case when, say, one eigenvalue is negative and the other one is non-negative. Suppose that $A_0 = \operatorname{diag}(n_1, n_2)$ with $n_1 < 0 \leqslant n_2$. The right eigenvector of $A_0$ corresponding to $n_1$ is $u = (1, 0)^\intercal$. Suppose that all left eigenvectors of the matrix residues at the other singular points are orthogonal to $u$. Then it is easy to show that the general form of these matrix residues is $\begin{pmatrix} a_i & b_i \\ 0 & a_i \end{pmatrix}$. But this form is in obvious contradiction with the requirement that the sum of all matrix residues is zero: the diagonal elements of this sum are $n_1 + \sum_i a_i$ and $n_2 + \sum_i a_i$, which can not both be zero. Therefore, there is a left eigenvector $v^\dagger$ of the matrix residue at some point $x_2$ such that $v^\dagger u = 1$, and $x = 0$ can be balanced with $x = x_2$ via $\mathcal{B}(uv^\dagger, 0, x_2|x)$.
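The scalar transformations used above act on the residues in an easily verified way; a sketch (with toy residues invented here) for $T = \frac{x}{x - x_2}\, I$:

```python
import sympy as sp

x = sp.Symbol('x')

def transform(M, T):
    # Mt = T^{-1} (M*T - dT/dx), Eq. (2.4)
    return sp.simplify(T.inv() * (M * T - T.diff(x)))

# Toy 2x2 Fuchsian system: residue diag(2, 1) at x = 0, opposite at x = 1.
A0 = sp.diag(2, 1)
M = A0/x - A0/(x - 1)

T = x/(x - 1)*sp.eye(2)              # T = x/(x - x2)*I with x2 = 1
Mt = transform(M, T)
# For a scalar T the rule reduces to Mt = M - (T'/T)*I, so both
# eigenvalues of the residue at x = 0 are lowered by 1 (and both at
# x = 1 are raised by 1):
res0 = (x*Mt).applyfunc(lambda e: sp.limit(e, x, 0))
assert res0 == sp.diag(1, 0)
```

The transformation $T = \frac{x - x_2}{x}\, I$ acts in the opposite direction, raising both eigenvalues at $x = 0$.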
Factoring out $\epsilon$

So far, we have described constructions which are not specific to systems depending on a parameter. However, the idea of their application to the reduction of systems depending on $\epsilon$ should be clear. First, we use Algorithm 2 to reduce the system to Fuchsian form. A necessary condition for the existence of the $\epsilon$-form (2.5) is that the eigenvalues of all matrix residues have the form $n + \alpha\epsilon$, where $n$ is integer. If this condition is not satisfied, the system definitely can not be transformed to the form (2.5); in this case one might try some change of variable. If the condition holds, one may pass to Algorithm 3 in order to normalize the eigenvalues of the matrix residues at all but one point $x = x_1$, assuming $\epsilon$ is sufficiently small (i.e., assuming that $n + \alpha\epsilon$ belongs to the interval $[-1/2, 1/2)$ only if $n = 0$). If this step appears to be doable, the normalized eigenvalues are all proportional to $\epsilon$. The sum of the eigenvalues at $x = x_1$ is also proportional to $\epsilon$, since the matrix residue at this last point is simply minus the sum of the matrix residues at the normalized points (and so its trace is minus the sum of their traces). Then one should try to balance $x = x_1$ in two steps: first, shift down one of the positive unnormalized eigenvalues by means of a balance with some point $x = x_2$, either singular or regular, and then mutually balance $x_1$ and $x_2$, shifting up one of the negative unnormalized eigenvalues of the matrix residue at $x = x_1$. Let us assume from now on that it appeared possible to secure by the above method that the system is Fuchsian and normalized at all points. Then we have a system
$$\partial_x J = M(\epsilon, x)\, J\,, \qquad M(\epsilon, x) = \sum_k \frac{M_k(\epsilon)}{x - x_k}\,, \qquad (6.1)$$
and the eigenvalues of all matrices $M_k$ are proportional to $\epsilon$. Clearly, this does not necessarily mean that the matrices $M_k$ themselves are proportional to $\epsilon$. If we had only one matrix $M_1(\epsilon)$, we could factor out $\epsilon$ by making a transformation which reduces $M_1(\epsilon)/\epsilon$ to Jordan form.
In the general case we need to find an $x$-independent transformation matrix which simultaneously transforms all matrices $M_k(\epsilon)$ to the form $\epsilon S_k$, where the $S_k$ are constant matrices. Let $T(\epsilon)$ be such a matrix. Then we have
$$T^{-1}(\epsilon)\, M_k(\epsilon)\, T(\epsilon) = \epsilon\, S_k = \frac{\epsilon}{\mu}\, T^{-1}(\mu)\, M_k(\mu)\, T(\mu)\,. \qquad (6.2)$$
Multiplying this equation by $T(\epsilon)$ from the left and by $T^{-1}(\mu)$ from the right, we obtain a linear system
$$M_k(\epsilon)\, T(\epsilon, \mu) = \frac{\epsilon}{\mu}\, T(\epsilon, \mu)\, M_k(\mu) \qquad (6.3)$$
for the elements of the matrix $T(\epsilon, \mu) = T(\epsilon)\, T^{-1}(\mu)$. If the general solution of this system (found routinely) determines an invertible matrix, the transformation we are looking for can be chosen as $T(\epsilon) = T(\epsilon, \mu_0)$, where $\mu_0$ is some arbitrarily chosen number, provided $T(\epsilon, \mu)$ is nonsingular at $\mu = \mu_0$.

Using block-triangular form
The size $n$ of the matrices $M(\epsilon, x)$ appearing in the differential equations for master integrals may be quite large (several tens). This may cause computational complications for the transformations that we need. Fortunately, the very process of the derivation of the differential equations, the IBP reduction, shows that $M(\epsilon, x)$ contains a lot of zeros. Namely, the integral $J_1$ may enter the right-hand side of the differential equation for the integral $J_2$ only if the graph corresponding to $J_1$ can be obtained from that corresponding to $J_2$ by contraction of some edges. In particular, this means that the matrix $M(\epsilon, x)$ has a block-triangular form with diagonal blocks corresponding to the integrals with a given set of denominators (i.e., the integrals of a given sector).
Let us show that we can use this block-triangular form to essentially alleviate the reduction process. Suppose from now on that we have already reduced all diagonal blocks of $M(\epsilon, x)$ to $\epsilon$-form. Basically, the idea of the further reduction is simple: in order to reduce the pole order of the off-diagonal elements, we redefine the integrals by adding some suitable combination of the simpler integrals, similar to the approach of Ref. [14]. Let us prove that it is always possible to make this redefinition so as to reduce the Poincaré rank at a given point to zero without changing either the block-triangular structure of the system or the Poincaré rank at other points. Therefore, it gives one a tool to reduce the system to Fuchsian form.
We proceed by induction over sectors. Without loss of generality, we may assume that we are interested in reducing the Poincaré rank to zero at $x = 0$. Suppose $J_1$ is the column-vector of master integrals in a certain sector $\theta$. By the induction hypothesis, the differential system for the integrals in the subsectors of $\theta$ already has zero Poincaré rank, and no master integral in the subsectors will be changed at this and later steps. We can write the differential system for $J_1$ in the form
$$x\,\partial_x J_1 = \epsilon\, A(x)\, J_1 + x^{-r}\, B(\epsilon)\, J_2 + \ldots\,, \qquad (7.1)$$
where $J_2$ is the column-vector of the master integrals in the most complex subsector of $\theta$ entering the right-hand side of the equation with a singular coefficient, whose Laurent expansion starts with $x^{-r} B(\epsilon)$, $r > 0$. By assumption, $A(x)$ is regular at $x = 0$. Naturally, the number of entries in $J_1$ and $J_2$ is not required to be the same, so, in general, $B$ is a rectangular matrix. In Eq. (7.1) the dots denote terms which are either nonsingular, or contain integrals in less complex sectors than the sector of $J_2$, or contain the integrals $J_2$ with coefficients less singular than $x^{-r}$. The differential equation for $J_2$ has the form
$$x\,\partial_x J_2 = \epsilon\, C(x)\, J_2 + \ldots\,, \qquad (7.2)$$
where $C(x)$ is regular at $x = 0$ and the dots denote the contribution of the subsectors. Let us make the substitution
$$J_1 = \tilde J_1 + x^{-r}\, D\, J_2\,, \qquad (7.3)$$
where $D$ is a constant matrix. Substituting (7.3) into (7.1) and using (7.2), we find that, in order to cancel the $x^{-r}$ singularity, we need to find a $D$ such that
$$r\, D + \epsilon\left[A(0)\, D - D\, C(0)\right] + B(\epsilon) = 0\,.$$
This is a system of linear equations for the matrix elements of $D$. This system obviously has a solution, since the linear operator acting on $D$, after dividing by $r$, is arbitrarily close to unity for small $\epsilon$. Note that this line of reasoning does not work when the diagonal blocks are not in $\epsilon$-form and/or when $r = 0$. Therefore, starting from the most complex integrals in the right-hand side and from the highest poles in their coefficients, we can eliminate the singular coefficients in the right-hand side step by step.
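The cancellation condition above is a Sylvester-like linear system; with hypothetical small blocks $A(0)$, $C(0)$, $B$ invented for this sketch, the matrix $D$ is found routinely:

```python
import sympy as sp

eps = sp.Symbol('epsilon')

r = 1                                    # pole order of the coupling term
A0 = sp.Matrix([[0, 1], [0, 0]])         # A(0), upper diagonal block
C0 = sp.Matrix([[0, 0], [1, 0]])         # C(0), lower diagonal block
B = sp.eye(2)                            # leading coupling coefficient B(eps)

d = sp.symbols('d0:4')
D = sp.Matrix(2, 2, d)
# Cancellation of the x^{-r} term: r*D + eps*(A0*D - D*C0) + B = 0.
eqs = list(r*D + eps*(A0*D - D*C0) + B)
D = D.subs(sp.solve(eqs, d, dict=True)[0])
# The operator r + eps*(A0 . - . C0) is close to r times unity for
# small eps, so the solution exists and is unique here.
assert sp.simplify(r*D + eps*(A0*D - D*C0) + B) == sp.zeros(2, 2)
```

For rectangular $B$ the unknown $D$ is rectangular as well, but the linear system has exactly the same structure.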
Note that the substitution (7.3) corresponds to the transformation generated by
$$ T = 1 + x^{-r} N, $$
where N is a matrix whose nonzero elements coincide with the elements of D. It is easy to see that $N^2 = 0$, so that the inverse matrix has the form
$$ T^{-1} = 1 - x^{-r} N. $$
Therefore, this transformation is regular everywhere except at x = 0. Now we may assume that we have a Fuchsian block-triangular matrix M($\epsilon$, x) such that each diagonal block is in $\epsilon$-form. Since the characteristic polynomial of this matrix is a product of those of the diagonal blocks, the eigenvalues of M($\epsilon$, x) are proportional to $\epsilon$, and we have a system of the form (6.1). In order to find a transformation matrix T($\epsilon$) from (6.2) which, in addition, preserves the block-triangular form of M(x), we may nullify all elements of T($\epsilon$, µ) corresponding to zero elements of M($\epsilon$, x) before solving the system (6.2).
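As a quick sanity check of the nilpotency argument, the following sketch (with hypothetical block sizes and values of D) verifies that $N^2 = 0$ and that $1 - x^{-r}N$ indeed inverts $1 + x^{-r}N$:

```python
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def identity(n):
    return [[F(i == j) for j in range(n)] for i in range(n)]

# N carries D = (d1, d2) in the off-diagonal block coupling a
# 2-element sector J1 to a 1-element subsector J2 (hypothetical sizes).
d1, d2 = F(5), F(-7)
N = [[F(0), F(0), d1],
     [F(0), F(0), d2],
     [F(0), F(0), F(0)]]

assert matmul(N, N) == [[F(0)] * 3 for _ in range(3)]  # N is nilpotent

# T = 1 + x^{-r} N has inverse 1 - x^{-r} N; check at a sample point.
c = F(1, 8)  # value of x^{-r} at some x != 0
I3 = identity(3)
T    = [[I3[i][j] + c * N[i][j] for j in range(3)] for i in range(3)]
Tinv = [[I3[i][j] - c * N[i][j] for j in range(3)] for i in range(3)]
assert matmul(T, Tinv) == I3
```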

Example
Let us demonstrate in some detail how our method works for the master integrals of the topology shown in Fig. 1. There are 28 master integrals, shown in Fig. 2. We use an experimental version of LiteRed [27,28] for the IBP reduction. Unfortunately, due to the complexity of the IBP reduction, we have not been able to obtain the starting differential equations for the 3 master integrals in the highest sector, shown in the last row, so we had to limit ourselves to the differential equations for the 25 master integrals $J = (J_1, \ldots, J_{25})^T$. They depend nontrivially on the dimensionless variable x = t/s. The differential system has the form (2.1); the explicit form of the matrix M($\epsilon$, x) is not presented here to save space and avoid clutter. There are three singular points of the system, x = 0, −1, ∞.

Since the block $M_{\{23-25\}}(\epsilon, x)$ already has Fuchsian form, we skip the steps described in Algorithm 2 and pass directly to Algorithm 3. Let us denote the matrix residues of this block at x = 0, x = −1, and x = ∞ as A($\epsilon$), B($\epsilon$), and C($\epsilon$), respectively. For the eigenvectors u and v chosen to balance the unnormalized eigenvalues, $v^\dagger u = 1 \neq 0$, so the points x = 0 and x = ∞ can be mutually balanced via B(uv†, 0, ∞, x). After the transformation we have the same form (8.1) with new residues; note that a pair of eigenvalues has been shifted towards the interval [−1/2, 1/2). Next we take the right and left eigenvectors of the matrices B and C, corresponding to the eigenvalues $-3\epsilon - 1$ and $2\epsilon + 1$, respectively, and balance again. After this transformation we again have the form (8.1). Now the system is normalized at x = 0 and x = −1, but not at x = ∞. In order to normalize the system at all points, we need to perform an intermediate transformation moving one unnormalized eigenvalue to another point. In particular, we may use the right and left eigenvectors of the matrices C and B, corresponding to the eigenvalues $-4\epsilon - 1$ and $\epsilon$, respectively, with $u = (0, \epsilon + 1, 4\epsilon + 1)^T$ and the corresponding left eigenvector $v^\dagger$, and make the transformation B(uv†/(v†u), ∞, −1, x).
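The balances B(uv†/(v†u), x₁, x₂, x) used in this example can be illustrated by a small numerical check. The sketch below assumes the balance of Eq. (2.7) has the form $\mathcal{B}(P, x_1, x_2; x) = (1 - P) + \frac{x - x_2}{x - x_1} P$ with P a projector built from right and left eigenvectors (the vectors u, v here are hypothetical); it verifies the projector property and that swapping the two points gives the inverse balance.

```python
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def projector(u, v):
    """P = u v^dagger / (v^dagger u); satisfies P^2 = P provided v^dagger u != 0."""
    s = sum(vi * ui for vi, ui in zip(v, u))
    return [[ui * vj / s for vj in v] for ui in u]

def balance(P, x1, x2, x):
    """B(P, x1, x2; x) = (1 - P) + (x - x2)/(x - x1) * P  (assumed convention)."""
    n = len(P)
    c = (x - x2) / (x - x1)
    return [[F(i == j) - P[i][j] + c * P[i][j] for j in range(n)]
            for i in range(n)]

# hypothetical right/left eigenvectors with v^dagger u != 0
u = [F(1), F(2)]
v = [F(3), F(-1)]
P = projector(u, v)
assert matmul(P, P) == P  # projector property

x = F(7)  # any sample point distinct from x1 = 0 and x2 = -1
B1 = balance(P, 0, -1, x)
B2 = balance(P, -1, 0, x)
I2 = [[F(1), F(0)], [F(0), F(1)]]
assert matmul(B1, B2) == I2  # the inverse balance swaps the two points
```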
After the transformation we again have the form (8.1). Inspecting the eigenvalues of A, B, and C, it is easy to check that x = −1 and x = ∞ can now be mutually balanced via B(uv†/(v†u), −1, ∞, x), where
$$ u = (0, \epsilon + 1, 4\epsilon + 1)^T, \qquad v^\dagger = (-2(6\epsilon + 1), 1, 0) \qquad(8.14) $$
are the corresponding eigenvectors of B and C. After that, we have succeeded in normalizing all the matrix residues A, B, and C. Finally, we solve the system of linear equations with respect to the matrix elements of T and obtain T($\epsilon$, µ) up to an arbitrary factor. We can now set µ to any constant provided that T remains invertible (in particular, we cannot set µ to 0, −1, or −1/5). We choose µ = 1. Making the transformation with T($\epsilon$, 1), we finally obtain the desired $\epsilon$-form. At this stage one may want to make yet another transformation with a constant matrix which reduces one of the matrix residues to diagonal form; e.g., we can take the matrix transforming A to diagonal form, after which the resulting matrix has a somewhat simpler form. In a similar way we reduce all remaining diagonal blocks to $\epsilon$-form. Finally, using the approach of Section 7, we obtain the system
$$ \partial_x J = \epsilon\left(\frac{S_1}{x} + \frac{S_2}{x+1}\right) J, $$
where $S_1$ and $S_2$ are presented in the appendix. To avoid clutter, we do not present the transformation matrix T here. Both this matrix and the original form of the system are available upon request from the author.
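The last step, finding the constant matrix T($\epsilon$, µ), amounts to solving a linear system. A toy Python check (assuming the factorization condition can be written as $\mu M(\epsilon, x) T = \epsilon\, T M(\mu, x)$, up to the conventions of Section 6, and using a hypothetical one-point residue $A(\epsilon)$ with eigenvalues $\epsilon$ and $2\epsilon$) constructs T from the eigenvector matrices and verifies the condition:

```python
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def A(e):
    # toy Fuchsian residue with eigenvalues e and 2e (hypothetical example)
    return [[e, F(1)], [F(0), 2 * e]]

def V(e):
    # columns: eigenvectors of A(e) for the eigenvalues e and 2e
    return [[F(1), F(1)], [F(0), e]]

def inv2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

eps, mu = F(1, 3), F(1, 2)
# T maps the mu-eigenbasis to the eps-eigenbasis
T = matmul(V(eps), inv2(V(mu)))

# factorization condition: mu * A(eps) * T == eps * T * A(mu)
lhs = [[mu * x for x in row] for row in matmul(A(eps), T)]
rhs = [[eps * x for x in row] for row in matmul(T, A(mu))]
assert lhs == rhs
```

Equivalently, $T^{-1} A(\epsilon)\, T = (\epsilon/\mu)\, A(\mu)$, i.e. the transformed residue is proportional to $\epsilon$ with an $\epsilon$-independent matrix, which is the defining property of the $\epsilon$-form.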

Conclusion
We have presented a practical algorithm for the reduction of a differential system to $\epsilon$-form. The main tool of our approach is the transformation (2.7), which we call a balance. We have shown how to construct a balance which does not increase the Poincaré rank of the system at any point of the extended complex plane. Moreover, we have shown how to construct balances which can be used to lower the Poincaré rank p at a point with p > 0 and to normalize the eigenvalues of the matrix residue at a point with p = 0. The reduction to $\epsilon$-form can be divided into three stages: 1. Reduction to Fuchsian form, Algorithm 2. 2. Normalization of the eigenvalues of the matrix residues, Algorithm 3. 3. Factoring out $\epsilon$ by a constant transformation.
We have also shown how to use the block-triangular form of the system to alleviate the computation. Namely, we first apply the above three stages to each diagonal block and find the corresponding matrices $T_i$ transforming each block to $\epsilon$-form. After the block-diagonal transformation $T = \mathrm{diag}(T_1, T_2, \ldots)$, the diagonal blocks of the transformed system are in $\epsilon$-form. Then we use the prescriptions of Section 7 and, finally, factor out $\epsilon$ from the whole system. The latter can be done in such a way as to preserve the block-triangular structure of the system, as explained at the end of Section 7. There may be obstructions to the construction of an appropriate balance due to the orthogonality of the left and right eigenvectors. However, the appearance of obstructions is expected in view of the negative solution of the 21st Hilbert problem by Bolibrukh [24]. For a Fuchsian system with normalized eigenvalues, we have shown how to find the constant transformation reducing the system to $\epsilon$-form. We have successfully applied our method to the reduction of several differential systems. We have also checked that for the three-loop all-massive sunrise propagator master integrals the obstruction to the reduction does appear. This obstruction naturally corresponds to the fact that these master integrals cannot be expressed in terms of harmonic polylogarithms [29].
The example presented in Section 8 did not require the reduction of the system to Fuchsian form as described by Algorithm 2, since all diagonal blocks were already in Fuchsian form. Though this may be considered a poor choice of example, we stress that the reduction to Fuchsian form can, in principle, be done solely by means of the Barkatou & Pflügel algorithm [22,23]; thus, a demonstration of the viability of our algorithm at this stage is not crucial. On the other hand, the system (8.1) is not of the form assumed in Refs. [14,20], and therefore its reduction to $\epsilon$-form with the tools developed in the present paper appears to be quite instructive.
Finally, we note that, although it is possible to perform the reduction manually, it is highly desirable to automate the process as much as possible. A dedicated Mathematica package is currently under development and will be presented elsewhere.
Appendix. The form of the matrices $S_1$ and $S_2$.