Hypergeometric structures in Feynman integrals

For precision calculations in perturbative Quantum Chromodynamics (QCD), gigantic expressions (several GB in size) in terms of highly complicated divergent multi-loop Feynman integrals have to be reduced analytically to compact expressions in terms of special functions and constants. In this article we derive new symbolic tools to support such large-scale computer algebra calculations in QCD. Here we exploit the fact that hypergeometric structures in single- and multi-scale Feynman integrals emerge in a wide class of topologies. Using integration-by-parts relations, the associated master or scalar integrals have to be calculated. For this purpose it appears useful to devise an automated method which recognizes the respective (partial) differential equations related to the corresponding higher transcendental functions. We solve these equations through the associated recursions for the expansion coefficients of the multivariate formal Taylor series. The expansion coefficients can be determined using either the package Sigma in the case of linear difference equations or by applying heuristic methods in the case of partial linear difference equations. In the present context a new type of sums occurs, the Hurwitz harmonic sums, together with generalized versions of them. The code HypSeries, which transforms classes of differential equations into analytic series expansions, is described. Also partial difference equations having rational solutions and rational function solutions built from Pochhammer symbols are considered, for which the code solvePartialLDE is designed. Generalized hypergeometric functions, Appell-, Kampé de Fériet-, Horn-, Lauricella-Saran-, Srivastava-, and Exton-type functions are considered. We illustrate the algorithms by examples.


Introduction
It is a general observation that the Feynman parameter integrals for certain classes of topologies can be expressed in terms of higher transcendental functions of the hypergeometric type, cf. e.g. [1][2][3][4]. This concerns their representation before expanding in the dimensional parameter ε = D − 4, with D the dimension of space-time. Here we consider the generalization of the Euler integrals to the generalized hypergeometric functions pFq and the multiple hypergeometric functions of the Appell-, Kampé de Fériet-, Horn-, Lauricella-, Saran-, Srivastava-, and Exton type [4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20][21][22][23]. In physical applications a standard integration method is that of solving systems of ordinary and partial differential equations [24] generated by the integration-by-parts (IBP) relations [25]. In this context it is important to recognize the differential equations of the classes of the aforementioned functions, since their mathematical structure is widely known. This allows the direct analytic solution of at least this part of the physical problem. Starting with certain topologies, more general differential equations will contribute, requiring different solution technologies. Structures of the above kind have been obtained in general off-shell representations at the one-loop level for multi-leg diagrams, cf. e.g. [26][27][28][29][30]. At the two- and three-loop level related structures are found for various scattering processes, cf. e.g. [17,18,31,32].
For all the above quantities the (partial) differential equations are known, and they partly turn out to be of rather high order. On the other hand, one may consider the formal multiple Taylor expansion of these higher transcendental functions, which allows one to obtain difference equations for the corresponding expansion coefficients. This is advisable, since the latter are remarkably simpler.
In this paper we describe a systematic classification of partial differential equations for scalar or master integrals of one or more scales w.r.t. known solutions in the hypergeometric classes. These differential equations have multivariate multiple series solutions in the parameters x_1, ..., x_n in the vicinity of {0, ..., 0}, as formal Taylor series. We determine the expansion coefficients, which are obtained as rational product solutions. By identifying the associated non-linear coefficient pattern in the respective case, the expansion coefficients can be factorized into rational expressions of Pochhammer symbols.
The method works for general values of the space-time dimension D. The (multiple) infinite series representations found can finally be expanded in the dimensional parameter ε, which transforms the summand, given in terms of Pochhammer symbols and general hypergeometric products, by introducing in addition (cyclotomic) harmonic sums [33][34][35] or generalized versions, like Hurwitz harmonic sums. One may try to simplify the obtained multiple sums over hypergeometric products and indefinite nested sums to expressions that are purely given in terms of indefinite nested sums using the package EvaluateMultiSums [36][37][38][39]. The underlying summation engine is based on the package Sigma [40,41], which contains non-trivial algorithms in the setting of difference rings [42][43][44].
Hypergeometric structures emerge in the calculation of Feynman integrals using Feynman parameter representations from the simplest topologies onward, cf. e.g. [45]. In the most simple cases they can be calculated in terms of Euler Beta functions,

B(a, b) = ∫_0^1 dz z^(a−1) (1 − z)^(b−1) = Γ(a)Γ(b)/Γ(a + b). (1)

Here and in the following we represent the respective higher transcendental functions in their convergence region, but allow for analytic continuations to their whole analyticity range, cf. [46]. The next more involved function is 2F1, followed by p+1Fp through the iterative Euler integral

p+1Fp(a_1, ..., a_{p+1}; b_1, ..., b_p; x) = [Γ(b_p)/(Γ(a_{p+1}) Γ(b_p − a_{p+1}))] ∫_0^1 dz z^(a_{p+1}−1) (1 − z)^(b_p−a_{p+1}−1) pF_{p−1}(a_1, ..., a_p; b_1, ..., b_{p−1}; xz).

Other topologies [32] lead to the integral representation of the Appell function F1 and others. The solution of Feynman integrals through Feynman parameterizations mapping to higher transcendental functions is not a method which can easily be made uniform. At a certain stage it will also require the use of Mellin-Barnes integral representations [47], to be solved by the residue theorem. Although this method can in principle establish links to higher transcendental functions, since those have Pochhammer Umlauf-integral representations [7,48,49], it may easily lead to non-minimal representations [50], which are difficult to reduce analytically if one is not only interested in numerical results [51]. The advantage of all these representations lies in the fact that multiple Feynman parameter integrals are reduced to much lower dimensional infinite sum representations, which are onefold in the case of generalized hypergeometric functions, twofold e.g. for Appell and Horn functions [8,9,12,17,18], threefold for the Srivastava functions [19], and given by multisum Lauricella-type functions [20] in more involved cases, with an early application in [31].
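The onefold sum representation of a generalized hypergeometric function can be made concrete in a small numeric sketch (ours, not part of the paper's toolchain; the helper name and truncation order are illustrative): the 2F1 series is built term by term from the ratio of successive Pochhammer-symbol coefficients and checked against the known closed form 2F1(1, 1; 2; x) = −ln(1 − x)/x.

```python
import math

def hyp2f1(a, b, c, x, terms=200):
    # Truncated series of 2F1(a,b;c;x) for |x| < 1: successive terms obey
    # t_{n+1} = t_n * (n+a)(n+b) / ((n+c)(n+1)) * x, with t_0 = 1.
    t, s = 1.0, 1.0
    for n in range(terms):
        t *= (n + a) * (n + b) / ((n + c) * (n + 1)) * x
        s += t
    return s

x = 0.5
series = hyp2f1(1.0, 1.0, 2.0, x)
closed = -math.log(1.0 - x) / x  # known closed form of 2F1(1,1;2;x)
```

For x = 1/2 the truncated series agrees with 2 ln 2 to machine precision, illustrating how the multi-dimensional Feynman parameter integral collapses to a rapidly convergent single sum.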
In this paper we will consider (partial) differential equations for master integrals w.r.t. their parameters. The master integrals are obtained as the result of the IBP reduction. Usually these are first-order equations. One will then decouple the corresponding systems, cf. [52,53], using algorithms which are available, e.g., in OreSys [54]. In this way higher-order differential equations will emerge. In the case of partial differential equations one may use, e.g., Janet bases [55].
These (partial) differential equations can be mapped to corresponding multivariate difference equations by expanding the associated ansatz in terms of the multivariate formal Taylor series

f(x_1, ..., x_n) = Σ_{k_1, ..., k_n ≥ 0} f[k_1, ..., k_n] x_1^{k_1} · · · x_n^{k_n}. (5)

The recurrences obeyed by the expansion coefficients f[k_1, ..., k_n] can be solved using difference ring techniques [40,41] in the linear case. For the multivariate case we will utilize ideas from [56,57] that led to the new package solvePartialLDE, which may partly support the solving task. Finally, one may express f[k_1, ..., k_n] as a rational term of (multi-indexed) Pochhammer symbols, which contain all parameters of the original differential equations, like particle masses and kinematic invariants of the processes considered, including the dimensional parameter ε.
In the general case first product solutions emerge, which can be factored into Pochhammer structures by solving algebraic equations. Alternatively, one can keep the multiplicands in non-linear form and apply a new function implemented in the package EvaluateMultiSums that produces the ε-expansions without introducing algebraic extensions. In most cases we will limit our consideration to the principal structure of the known classes of higher transcendental functions of the hypergeometric type, i.e. those where f[k_1, ..., k_n] is given by the Pochhammer ratios

f[k_1, ..., k_n] = Π_{i=1}^p (a(i))_{l(i)} / Π_{j=1}^q (b(j))_{m(j)}, l(i), m(j), p, q ∈ N, (6)

where l(i) and m(j) are linear functions of the k_r ∈ N with integer coefficients. Here the Pochhammer symbols are defined by

(a)_n = Γ(a + n)/Γ(a), a ∈ C\Z^−, n ∈ N, with (a)_0 = 1.

The coefficients a(i) and b(j) will in general depend on the dimensional parameter ε, and we will further consider the expansion of the higher transcendental functions in this parameter.
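As a sketch of the coefficient form (6) (our own illustration; the helper names are hypothetical), the Appell F1 series has coefficients f[m, n] = (a)_{m+n} (b1)_m (b2)_n / ((c)_{m+n} m! n!); for a = b1 = b2 = 1, c = 2 the double sum collapses to a known logarithm, which gives a quick numeric check:

```python
import math

def poch(a, n):
    # Pochhammer symbol (a)_n = a (a+1) ... (a+n-1), with (a)_0 = 1
    r = 1.0
    for i in range(n):
        r *= a + i
    return r

def appell_f1(a, b1, b2, c, x, y, N=60):
    # Truncated double series with coefficients of the Pochhammer-ratio
    # form (6); ratios are grouped to avoid floating-point overflow
    s = 0.0
    for m in range(N):
        for n in range(N):
            s += ((poch(a, m + n) / poch(c, m + n))
                  * (poch(b1, m) / math.factorial(m))
                  * (poch(b2, n) / math.factorial(n))
                  * x ** m * y ** n)
    return s

x, y = 0.3, 0.1
series = appell_f1(1, 1, 1, 2, x, y)
# closed form of F1(1,1,1;2;x,y), from its Euler integral representation
closed = (math.log(1 - y) - math.log(1 - x)) / (x - y)
```

The parameter choice is arbitrary; any values inside the convergence region |x|, |y| < 1 would do.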
The paper is organized as follows. In Section 2 we list the differential equations of the multivariate generalized hypergeometric functions up to four variables in explicit form, parameterizing them linearly. There are also general differential equations, such as those for the hypergeometric functions pFq, the Kampé de Fériet function [11], and the Lauricella-Saran functions [20][21][22]. In Section 3 we derive the recursions for the multivariate expansion coefficients of these functions. An algorithm is presented in Section 4 to find hypergeometric product solutions for first-order linear recurrence systems. In this way the multivariate functions f(x_1, ..., x_n) can be represented in the vicinity of (0, ..., 0). The parameters of the differential and difference equations also depend on the dimensional parameter ε. Usually one would like to perform corresponding expansions in this quantity, which we describe in Section 5. Here the so-called Hurwitz harmonic sums and more general versions occur, the summation problems of which can be dealt with using the packages EvaluateMultiSums and Sigma. In Section 6 we demonstrate the full machinery to obtain, for a given system of linear differential equations, the first coefficients of the ε-expansions in terms of indefinite nested sums and products. In Section 7 we supplement the solving tools from Section 4 and turn to general partial difference equations with rational coefficients. Based on the algorithms presented in [56,57] we present different strategies to find solutions in terms of hypergeometric products and iterative sums over such products that appear in the calculation of Feynman integrals. Section 8 contains the conclusions.
In Appendix A we provide, for convenience, a list of the main functions dealt with in the present paper, defined by their series representations; see also the file cases.m. Appendix B illustrates the matching conditions to be met in order to obtain the Pochhammer-type solutions directly from the general solutions. They are given in computer-readable form in the file Mconditions.m. In Appendix C a brief description of the commands of the code HypSeries is given, and Appendix D provides a brief description of the code solvePartialLDE. For both codes we provide Mathematica notebooks illustrating the corresponding operations with examples. In Appendix E a special constant is evaluated, which appears in one of the examples. Appendix F lists the Mathematica and other software packages required to execute the example notebooks.

The differential equations
Multivariate master integrals obey partial differential equations, which are obtained after the IBP reduction and, if necessary, the decoupling of coupled systems. In this way differential equations of higher than first order are obtained. The first class concerns the univariate case of the generalized hypergeometric functions; for its definition we refer to Appendix A.
The differential operator of Gauß' 2F1 function reads, cf. [7],

x(1 − x) ∂_x^2 + [c − (a + b + 1)x] ∂_x − ab,

which we write more generally with free linear coefficients. For the function 3F2 one obtains, with ϑ = x(d/dx) ≡ x∂_x,

[ϑ(ϑ + b_1 − 1)(ϑ + b_2 − 1) − x(ϑ + a_1)(ϑ + a_2)(ϑ + a_3)] f(x) = 0.

In general, we get for p+1Fp the linear differential equation

[ϑ(ϑ + b_1 − 1) · · · (ϑ + b_p − 1) − x(ϑ + a_1) · · · (ϑ + a_{p+1})] f(x) = 0. (12)

The pFq function is the homogeneous solution of the corresponding differential operator. The products of the differential operators in (12) can be expanded in powers of ∂_x, which implies the corresponding general differential operator of the form

Σ_k P_k(x) ∂_x^k,

with polynomials P_k(x) and m = max{p, q}. The coefficient polynomials result from the expansion of (12).
Here and in the following we will first parameterize the differential operators in general terms. In the literature the different coefficients are usually related by algebraic equations, which is possible, but not necessary. The list of these equations is given in a subsidiary file to this paper. The differential operators for the two-variable Horn-type functions [8][9][10][11][12] F1 to F4, G1 to G3, and H1 to H7, including the Appell functions [8,9], are of the form

[... + f y ∂_y + (g y + h x y) ∂^2_{x,y} + j y^2 ∂^2_y] F(x, y) = 0,

with the Appell F1 function as an example. Here ∂^2_{x,y} = ∂_x ∂_y, etc. In physical applications two more differential operators appeared in the bivariate case, [17,18], to which the functions S1 and S2 belong. For the Kampé de Fériet function one obtains annihilating differential operators built from products of the form Π_{j=1}^p (x∂_x + y∂_y + a_j), cf. [10,11]. The differential operators for the triple hypergeometric series [19] and for the quadruple versions are of analogous structure; the latter cover the functions K_i, i = 1, ..., 21 of Refs. [13,14].

The Recursions
The formal power series ansatz (5) allows one to obtain difference equations for the expansion coefficients from the differential equations given in Section 2.¹ In the two-variable case the expansion coefficients of the Horn-type functions obey recurrences of the form

[a + b m + e(m − 1)m + n(f + h m) + j(n − 1)n] f[m, n] + ... = 0,

and analogous recurrences hold for the S1- and S2-functions, for the expansion coefficients f[r, s] of the Kampé de Fériet functions, as well as for the coefficients in the 3-variable and 4-variable cases.

¹There are also contiguous relations for the corresponding functions, cf. [4,58].
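The passage from the differential equation to the recursion can be made concrete in the simplest univariate case (a sketch of ours, exact over the rationals): applying the ϑ-form operator of 2F1 to Σ f[n] x^n and comparing coefficients yields the first-order recurrence (n + 1)(n + c) f[n + 1] = (n + a)(n + b) f[n], which the known Taylor coefficients satisfy identically.

```python
from fractions import Fraction

def poch(a, n):
    # Pochhammer symbol (a)_n in exact rational arithmetic
    r = Fraction(1)
    for i in range(n):
        r *= a + i
    return r

def coeff(a, b, c, n):
    # Taylor coefficient of 2F1: f[n] = (a)_n (b)_n / ((c)_n n!)
    fact = Fraction(1)
    for i in range(1, n + 1):
        fact *= i
    return poch(a, n) * poch(b, n) / (poch(c, n) * fact)

# theta x^n = n x^n turns the operator equation into the recurrence
# (n+1)(n+c) f[n+1] = (n+a)(n+b) f[n]; check it exactly for sample parameters
a, b, c = Fraction(1, 2), Fraction(3), Fraction(5, 2)
ok = all((n + 1) * (n + c) * coeff(a, b, c, n + 1)
         == (n + a) * (n + b) * coeff(a, b, c, n) for n in range(20))
```

The parameter values are arbitrary rationals; the identity holds symbolically for any a, b, c for which the coefficients are defined.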

The Solution of the Recursions
Let K be a field of characteristic 0 (i.e., a field that contains Q as a subfield). A power series

Σ_{n_1, ..., n_r ≥ 0} A(n_1, ..., n_r) x_1^{n_1} · · · x_r^{n_r} (56)

in r variables is called a multiple hypergeometric series if the multivariate sequence A : N^r → K is hypergeometric, i.e., we have

s_i A(n_1, ..., n_i, ..., n_r) = t_i A(n_1, ..., n_i + 1, ..., n_r), i = 1, ..., r, (57)

for coprime polynomials s_i, t_i ∈ K[n_1, ..., n_r]. Often the hypergeometric sequence A is given in terms of binomial coefficients, Pochhammer symbols, Γ-functions and related special functions. However, in concrete applications one often starts with a given system of partial linear differential equations and searches for a hypergeometric series solution as specified above. Plugging this ansatz into the equations and performing coefficient comparison w.r.t. x_1^{n_1} · · · x_r^{n_r} yields a system of partial linear difference equations; for concrete examples see Section 3. In the general case not too many methods are known that can support the user in solving these difference equations; for some first steps in this direction we refer the reader to Section 7. In the following we concentrate on first-order systems of the form (57).

An algorithm for hypergeometric products
Given a hypergeometric sequence A with (57), we seek a representation in terms of indefinite nested products that can be modeled, e.g., within the summation package Sigma.
For the univariate case r = 1 this task is immediate. Since s_1, t_1 ∈ K[n_1] have only finitely many roots, there is a λ_1 ∈ N such that s_1(k) ≠ 0 ≠ t_1(k) for all k ≥ λ_1. Thus for n_1 ≥ λ_1 we get

A(n_1) = A(λ_1) Π_{k=λ_1}^{n_1 − 1} s_1(k)/t_1(k).

Next, we turn to the multivariate case. As introduced in [59] we call a sequence nontrivial if its zero points vanish on a nonzero polynomial from K[n_1, ..., n_r]. In other words, A is almost everywhere a nonzero sequence. An important consequence of [59, Prop. 4] is that for such a hypergeometric sequence with (57) the following compatibility property holds for R_i = s_i/t_i ∈ K(n_1, ..., n_r): for 1 ≤ i ≤ j ≤ r,

R_i(n_1, ..., n_j + 1, ..., n_r) / R_i(n_1, ..., n_j, ..., n_r) = R_j(n_1, ..., n_i + 1, ..., n_r) / R_j(n_1, ..., n_i, ..., n_r). (72)
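The compatibility property can be checked directly on an example (our sketch; parameter values arbitrary): for the Appell F1 coefficients A(m, n) = (a)_{m+n} (b1)_m (b2)_n / ((c)_{m+n} m! n!) the two shift ratios R1 = A(m+1, n)/A(m, n) and R2 = A(m, n+1)/A(m, n) satisfy the criterion exactly.

```python
from fractions import Fraction

# Shift ratios of the Appell F1 coefficients
# A(m, n) = (a)_{m+n} (b1)_m (b2)_n / ((c)_{m+n} m! n!)
a, b1, b2, c = Fraction(1, 3), Fraction(2), Fraction(1, 2), Fraction(7, 3)

def R1(m, n):  # A(m+1, n) / A(m, n)
    return (a + m + n) * (b1 + m) / ((c + m + n) * (m + 1))

def R2(m, n):  # A(m, n+1) / A(m, n)
    return (a + m + n) * (b2 + n) / ((c + m + n) * (n + 1))

# Compatibility: shifting R1 in n agrees with shifting R2 in m
ok = all(R1(m, n + 1) / R1(m, n) == R2(m + 1, n) / R2(m, n)
         for m in range(8) for n in range(8))
```

Exact rational arithmetic makes this a genuine identity check on the sampled lattice points rather than a floating-point comparison.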
In particular, the Ore-Sato Theorem [60] holds: A can be written as a product in terms of geometric products and factorial terms; for a rigorous (and rather involved) proof see [59] and for further generalizations see [61]. In the following we will introduce a special case of the Ore-Sato theorem that deals with the problem of representing A in terms of hypergeometric products which are valid for all (n_1, ..., n_r) ∈ N^r where the n_i are chosen sufficiently large. This is precisely the situation that we require for hypergeometric power series as given in (56).
As it turns out, such a representation is always possible if we require the following additional assumption (which holds automatically in the univariate case): we can choose λ_i ∈ N such that for all (n_1, ..., n_r) ∈ N^r with n_i ≥ λ_i we have s_i(n_1, ..., n_r) ≠ 0 ≠ t_i(n_1, ..., n_r).
Similarly to the univariate case we get the following consequence: either A(λ_1, ..., λ_r) = 0 and A(n_1, ..., n_r) is the zero sequence for all n_i ≥ λ_i, or A is nonzero for all n_i ≥ λ_i.

Remark:
In the second case all zero points of A lie in a finite range and thus vanish on a particularly chosen nonzero polynomial. Thus we can apply [59, Prop. 4] as above. If the compatibility criterion (72) does not hold, then A must be the zero sequence or the hypergeometric system is inconsistent.
We remark that in all the examples of this article we can set λ i = 0 for 1 ≤ i ≤ r.

Examples
Let us illustrate the solution of some of the recursions in explicit form. Here we refer first to the general representation of the corresponding differential and difference equations. We consider the differential equation (9), which leads to the recurrence (38) for the expansion coefficient. The recursion is of order one and is solved with the initial value f[0] = 1. Sigma obtains the product solution (75), which is not yet expressed in terms of Pochhammer symbols. Mathematica allows one to obtain the factorization of the product in (75) in terms of Pochhammer symbols. Replacing A_1, B_1 and C directly yields the most simple structure; this choice of variables is therefore instrumental. However, it will sometimes not appear naturally in the physical differential equations, requesting associated variable transformations in general. This becomes more and more involved in higher hypergeometric cases, as is already illustrated in the case of the generalized hypergeometric function 3F2. Its differential equation (10) implies the recurrence (39) for f[n], with f[0] = 1, the solution (80) of which can be rewritten in terms of radicals. One observes that the substitutions (90) are related to the root relations by Vieta's theorem [63] for the roots r_i of the algebraic equation

x^n + a_{n−1} x^{n−1} + ... + a_0 = 0,

which obey

−a_{n−1} = r_1 + ... + r_n,
a_{n−2} = r_1(r_2 + ... + r_n) + r_2(r_3 + ... + r_n) + ... + r_{n−1} r_n,
. . .
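The factorization of a first-order product solution into Pochhammer symbols can be verified mechanically (a minimal sketch of ours, with arbitrary rational parameters): iterating f[n + 1] = (n + A)(n + B)/((n + C)(n + 1)) f[n] from f[0] = 1 reproduces exactly the Pochhammer ratio (A)_n (B)_n / ((C)_n n!).

```python
from fractions import Fraction

def poch(a, n):
    # Pochhammer symbol (a)_n = a (a+1) ... (a+n-1)
    r = Fraction(1)
    for i in range(n):
        r *= a + i
    return r

A, B, C = Fraction(1, 2), Fraction(5, 3), Fraction(9, 4)

# Iterate the first-order recurrence f[n+1] = (n+A)(n+B)/((n+C)(n+1)) f[n]
f = [Fraction(1)]
for n in range(15):
    f.append(f[n] * (n + A) * (n + B) / ((n + C) * (n + 1)))

# Compare with the factorized Pochhammer form (A)_n (B)_n / ((C)_n n!)
fact = Fraction(1)
ok = True
for n in range(16):
    if n > 0:
        fact *= n
    ok = ok and f[n] == poch(A, n) * poch(B, n) / (poch(C, n) * fact)
```

In the physical applications the nontrivial step is finding A, B, C as roots of algebraic equations in the original parameters; once found, the identity above is exact.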
In all cases which can be solved by a single recurrence at a time, the above procedures apply. Considering the generalized hypergeometric function p+1Fp, the product solution for the expansion coefficient is obtained analogously, and one may factorize the corresponding product as in the above examples. However, the corresponding roots can in general only be obtained numerically. Still one may work with the corresponding symbolic expressions. This is in particular useful w.r.t. their expansion in the dimensional parameter ε, which is contained in the quantities A_n, B_n and C in polynomial form.
We turn now to the multivariate case. Here we have explicit formal solutions which apply to all concrete cases listed in the appendix, resp. in the attached files, but they may cover even more cases. To find this out for concrete parameter settings one is advised to check whether these particular solutions obey the corresponding difference equations.
For the Horn-type functions one obtains the product solutions from Eqs. (19), (20), and for the functions S1 and S2 one proceeds analogously. Eq. (95) can be rewritten in the Pochhammer form (98), which directly allows the ε-expansion if the free parameters in the Pochhammer symbols are replaced accordingly by expressions that also contain the ε-parameter. Further simplifications are obtained using the replacement rules given in Appendix A; e.g., for the Appell function F1 one obtains the corresponding form by using the replacements given in (305), Appendix B.
In the trivariate and four-variable cases one obtains analogous product solutions. Pochhammer solutions are advantageous, since the ε-expansion can be derived more easily than for the product solutions. The contributing powers in i_1 in the above products determine the degree d of the algebraic equations needed to switch to the associated Pochhammer form, which is d = 2 in many cases and d = 3 for the S1,2 functions, for the functions considered in the present paper. In the pFq case it can be of even higher order. Here complex solutions will in general appear for the Pochhammer symbols. The first index of the Pochhammer symbol will imply a new constant in the ground field to be used in the summation problem. However, the solutions still remain real. If the corresponding algebraic equations can be solved in closed form, the special conditions discussed in Appendix B need not be obeyed. In any case, if the degree d of the algebraic equations is too high, in particular if the algebraic extensions get too complicated, one can use the general tools developed in Section 5.1 to derive the ε-expansion by introducing generalized versions of harmonic sums and Hurwitz-type sums whose summands have denominators which do not factorize linearly.

Computing the expansion in ε
To perform the expansion in the dimensional parameter ε on the basis of series representations around x_i in the vicinity of zero, the convergence region of the respective series has to be known in general. For the one-parameter series we consider the pFq functions, which converge for |x| < 1 in the case p = q + 1 and for all finite x if p ≤ q, [7]. For the two-variable case the convergence conditions are given in [7]. In the attachment converg.m we present the corresponding convergence conditions for all functions up to three variables, as given in [19], in computer-readable form for the convenience of the user. These conditions are partly very involved; an example is the function f_26 of [19]. More involved conditions are obtained in the four-variable case. They may be derived by applying d'Alembert's ratio test [46,64] to these cases.
In general, multi-sums appear with complicated hypergeometric products, and one may try to apply, e.g., the package EvaluateMultiSums [36][37][38][39] (utilizing the difference ring algorithms [42][43][44] available in Sigma) to represent these sums in terms of indefinite nested sums. In general this is not possible, but we will show how this goal can be accomplished for various interesting cases with our computer algebra tools. In particular, if the products depend on the dimensional parameter ε and one is interested in the ε-expansion, the best tactic is to perform the ε-expansion of the innermost summand, given in terms of hypergeometric products, and to apply afterwards the summation quantifiers to the coefficients of the expansion; here one has to take care that the interchange of the infinite summation quantifiers and the differential operator w.r.t. ε is possible. To accomplish this task, we will first explain how such products can be expanded in full generality. Afterwards we will focus on the task of carrying out the summations on top of the ε-expansion.

The ε-expansion of the summand
In general the summand is built by a product of the form

Π_{i=ℓ}^n h(ε, i), (108)

where h(ε, x) ∈ K(ε, x) is a rational function in the variables ε and x, or by a linear combination of power products of such products; for concrete examples see (95) and below. For simplicity we suppress further summation variables that may arise in h and move them to the ground field (e.g., for the variables m, n in (95) we take the rational function field K = K̃(m, n) over a field K̃ of characteristic 0). Before expanding in the dimensional parameter ε one may map to Pochhammer symbols. In such a representation ε usually occurs in the form

(a + rε)_n, with r ∈ Q. (109)

The series expansion in ε is then given in terms of harmonic sums [33,34] at the arguments a and a + n, with a ∈ C\Z^−; to first order one has

(a + rε)_n = (a)_n [1 + rε Σ_{i=0}^{n−1} 1/(a + i) + O(ε²)]. (110)

Analogous expressions are obtained in the case that the Pochhammer symbols depend on ε polynomially. The harmonic sums at shifted argument, S_c(a; n), will be called Hurwitz harmonic sums, since they converge to the Hurwitz ζ-values [65] in the limit n → ∞.
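The first-order term of this Pochhammer expansion can be checked exactly with truncated series arithmetic (our sketch; a, r, n are arbitrary sample values): expanding the product Π_{i=0}^{n−1} (a + i + rε) to first order in ε reproduces r (a)_n times the Hurwitz-type sum Σ_{i=0}^{n−1} 1/(a + i).

```python
from fractions import Fraction

a, r, n = Fraction(1, 3), Fraction(2), 12

# Multiply out (a + r*eps)_n = prod_{i=0}^{n-1} (a + i + r*eps), keeping
# the eps-series coefficients [s0, s1] (orders eps^0 and eps^1) exactly;
# the tuple assignment evaluates the right-hand side before updating.
s0, s1 = Fraction(1), Fraction(0)
for i in range(n):
    s0, s1 = s0 * (a + i), s1 * (a + i) + s0 * r

hurwitz = sum(Fraction(1) / (a + i) for i in range(n))  # Hurwitz-type sum
# eps^0 coefficient is (a)_n; eps^1 coefficient is r (a)_n * hurwitz
ok = s1 == r * s0 * hurwitz
```

Carrying more series coefficients in the same way produces the higher ε-orders, which involve the deeper Hurwitz harmonic sums.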
These sums are defined over denominators shifted by a; the single Hurwitz harmonic sums are given by

S_c(a; n) = Σ_{i=1}^n 1/(i + a)^c, c ∈ N, a ∈ C\Z^−.

Here the harmonic sums are understood as derived from their Mellin transformation, cf. Ref. [34].
More involved relations of this kind also hold for nested sums. In the course of further summations in the multivariate case also the Hurwitz generalizations of the sums dealt with in Refs. [35,[66][67][68]] can occur. If the multiplicands h(ε, i) of the arising products (108) do not factorize linearly over the given field, one has to introduce algebraic extensions, such as given in (99), in order to obtain product representations as given in (109). In the case that one wants to avoid such nontrivial field extensions (which are often hard to handle with symbolic tools), we propose the following general and rather flexible method.
Let ℓ be an integer and suppose that the f_i(ε) are functions in ε which are nonzero and complex differentiable around 0 (and thus infinitely often complex differentiable) for all integers i ≥ ℓ. By the product rule (and the quotient rule for the second identity) it follows that

∂_ε Π_{i=ℓ}^n f_i(ε) = (Π_{i=ℓ}^n f_i(ε)) Σ_{i=ℓ}^n (∂_ε f_i(ε))/f_i(ε), (115)

∂_ε Π_{i=ℓ}^n 1/f_i(ε) = −(Π_{i=ℓ}^n 1/f_i(ε)) Σ_{i=ℓ}^n (∂_ε f_i(ε))/f_i(ε) (116)

hold for all n ≥ ℓ.
Since f_i(ε) for i ≥ ℓ is infinitely often differentiable around 0, also the summands (∂_ε f_i(ε))/f_i(ε) and thus the finite sums in (115) and (116) are infinitely often differentiable around 0. As a consequence, we can apply D_ε^r = ∂^r/∂ε^r iteratively to Π_{i=ℓ}^n f_i(ε) and obtain an explicit expression F_r(ε), given by the product Π_{i=ℓ}^n f_i(ε) itself times a polynomial expression in terms of sums where the denominator is of the form f_i(ε)^r and the numerator is built by a linear combination of power products of the form f_i(ε)^{e_0} (∂_ε f_i(ε))^{e_1} · · · (∂_ε^r f_i(ε))^{e_r} with e_1 + · · · + e_r = r and e_1 < r.
To calculate ε-expansions for such products, we assume from now on in addition that f_i(0) ≠ 0 holds for all i ≥ ℓ. Then Π_{i=ℓ}^n f_i(0) is well defined, and by Taylor's formula we get the power series expansion

Π_{i=ℓ}^n f_i(ε) = Σ_{r≥0} (F_r(0)/r!) ε^r.

Within the package EvaluateMultiSums we specialized this general mechanism to the product case (108), i.e., we assume that h(ε, x) is a rational function in the variables ε and x. If h(0, i) is zero for some i ≥ ℓ, we take ℓ′ ≥ ℓ as the minimal value such that this is not the case and extract the critical part

r(ε) = Π_{i=ℓ}^{ℓ′−1} h(ε, i) = ε^s p(ε)/q(ε)

for an integer s, where p, q are coprime polynomials in ε with p(0) q(0) ≠ 0. Applying now the above machinery to Π_{i=ℓ′}^n h(ε, i) leads to a power series expansion of order u ≥ 0 as stated above. In addition, we can compute a Laurent series expansion of r(ε) in ε of order s. Thus by the Cauchy product we end up with the Laurent series expansion of order t = u + s. Here each coefficient H_r is given by Π_{i=ℓ′}^n h(0, i) times a polynomial expression in terms of single sums Σ_{i=ℓ′}^n a(i)/h(0, i)^r, where a(i) is built by a linear combination of power products of the form h(ε, i)^{e_0} (∂_ε h(ε, i))^{e_1} · · · (∂_ε^r h(ε, i))^{e_r} |_{ε=0} with e_1 + · · · + e_r = r and e_1 < r.
In order to obtain a nicer output, we factorize the input multiplicand h(ε, i) over the fixed field K and pull the product sign over to each irreducible factor. In addition, we replace any product with negative exponent by the reciprocal product with positive exponent and use (116) instead of (115). Carrying out the expansions for each product and combining them by the Cauchy product yields an expression in terms of the input product Π_{i=ℓ}^n h(0, i) (or a power product built by the irreducible parts) together with sums whose summands are rational functions, the numerator being a polynomial of degree smaller than the degree of the denominator polynomial. The following remarks should be stated. First, applying this general method to (a + rε)_n = Π_{i=1}^n p(ε, i) with p(ε, x) = (−1 + a + εr + x), we rediscover precisely the ε-expansion given in (110). Second, the polynomial p(0, i) may be reducible and thus the denominators in the sums can be split further. In this case the routines of the package EvaluateMultiSums (using the summation tools of Sigma) split the sums automatically further. However, for a generic irreducible polynomial p(ε, x), also p(0, x) is irreducible; consider, for instance, the product expression in which f[0, n] equals the product given in (75). Applying the command SeriesForProduct to this expression gives the corresponding expansion. If necessary or appropriate, depending on the application, the found sum solutions (and the products) can be factorized further (within an appropriate algebraic field extension).
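The mechanism behind such a series-for-products expansion can be imitated with exact truncated series arithmetic (a toy sketch of ours, not the SeriesForProduct implementation): each rational factor h(ε, i) = (i + ε)/(i + 2ε) is expanded to order ε², the factor series are multiplied out, and the resulting ε¹ coefficient matches the log-derivative prediction Π h(0, i) · Σ ∂_ε h(0, i)/h(0, i) = −S_1(n).

```python
from fractions import Fraction

ORDER = 3  # keep the coefficients of eps^0, eps^1, eps^2

def mul(u, v):
    # product of two truncated eps-series
    w = [Fraction(0)] * ORDER
    for p in range(ORDER):
        for q in range(ORDER - p):
            w[p + q] += u[p] * v[q]
    return w

def inv(d):
    # reciprocal of a truncated eps-series with d[0] != 0
    e = [Fraction(0)] * ORDER
    e[0] = 1 / d[0]
    for k in range(1, ORDER):
        e[k] = -sum(d[j] * e[k - j] for j in range(1, k + 1)) / d[0]
    return e

n = 10
prod = [Fraction(1), Fraction(0), Fraction(0)]
for i in range(1, n + 1):
    num = [Fraction(i), Fraction(1), Fraction(0)]   # i + eps
    den = [Fraction(i), Fraction(2), Fraction(0)]   # i + 2 eps
    prod = mul(prod, mul(num, inv(den)))

# h(0, i) = 1 and d/deps h(0, i) = -1/i, so the eps^1 coefficient is -S_1(n)
S1 = sum(Fraction(1, i) for i in range(1, n + 1))
ok = prod[0] == 1 and prod[1] == -S1
```

The harmonic sum S_1(n) appears here exactly as in the ε-expansions of the text; deeper coefficients bring in the higher-weight sums.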

Symbolic summation
We consider now multi-sums over such products (e.g., a single sum over the discrete parameter n, or sums over further discrete parameters that appear in other products or even inside of products). Applying the package EvaluateMultiSums, one can now try to work from the inner sum towards the outermost sum and to transform the definite sums stepwise to indefinite nested versions. Internally, one computes stepwise recurrences and tries to solve these recurrences within the class of indefinite nested sums; for details see [41]. In the case that a sum has an infinite upper bound, one first considers a truncated version with the upper bound N, applies the symbolic summation tools to this version, and performs afterwards the limit N → ∞ using procedures available in the package HarmonicSums [33][34][35][66][67][68][69][70]. Since in our application also the formal parameters x_i are involved, it may turn out in the course of the summation that the summation problem cannot be solved for certain classes of cases, while it is possible in others. In particular, if the ε parameter appears in the innermost summand, it is of great advantage to first expand in ε and to apply afterwards the summation tools to the coefficients of the expansion, which are free of ε. To carry out the infinite sums after the ε-expansion, the infinite power series have to be considered in their convergence region around zero; see converg.m for the cases up to three variables.
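The truncate-then-take-the-limit step can be illustrated numerically (a toy sketch of ours, not the symbolic HarmonicSums procedure): the truncated sum Σ_{i=1}^N 1/i² tends to ζ(2) = π²/6; since the tail is ≈ 1/N, adding that leading tail term accelerates the limit N → ∞.

```python
import math

def truncated(N):
    # truncated version of the infinite sum: sum_{i=1}^{N} 1/i^2
    return sum(1.0 / (i * i) for i in range(1, N + 1))

N = 10_000
# the tail sum_{i>N} 1/i^2 = 1/N - 1/(2 N^2) + ..., so 1/N is the
# leading correction towards the N -> infinity limit zeta(2) = pi^2/6
approx = truncated(N) + 1.0 / N
limit = math.pi ** 2 / 6
```

The symbolic machinery performs this limit exactly; the numeric version is only meant to show why working with the truncated sum at upper bound N loses no information.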
For instance, we take the summand in (117) and specialize it further to A → 3, B → −2, C → −1. Then with the expansion (118) we obtain the expansion coefficients G_0 and G_1. Given this expansion, we apply our summation tools to the ε-free sums in the second step. For G_0 we consider first the truncated version, obtain the corresponding simplification, and finally perform the limit N → ∞, which yields a closed-form constant.
The sum G_1 is more complicated, and the command EvaluateMultiSum[G_1] produces a more involved output. As will be shown by non-trivial considerations in Appendix E, this convergent sum can be simplified to an expression involving γ_E, the Euler-Mascheroni constant, and ψ(x), the digamma function. If this transformation works successfully (in particular, if the recurrences arising within the course of the transformation can be fully solved within the class of indefinite nested sums defined over hypergeometric products), one finally obtains an expression in terms of special functions f(x_1, ..., x_n), which are the results of the ε-expansion of the respective higher transcendental function. In this process one tries to keep the parameters symbolic and finally inserts the respective functions of the parameters of the original differential equations. This will in general lead to representations in radicals. For numerical representations this is not problematic, while the analytic representations are involved. Calculating the respective amplitudes for off-shell invariants, one may in principle use these quantities in higher loop diagrams by observing the respective kinematics. Whether this will be a practical method compared to the direct calculation of the higher loop diagrams remains to be seen in the respective cases.
Summarizing, the ε-expansion leads to (multiple) infinite sums which can be simplified further by symbolic summation in many non-trivial applications. The results are functions of the corresponding set of variables, either in terms of functions which also appear in other quantum field theoretic calculations [35,66,67,70] or of higher transcendental functions. Frequently the different letters appear within root-valued expressions.
In the examples that we will present in Section 6 below, or in the Mathematica notebooks attached, one obtains, e.g., the following sums, and much more complicated structures and variables, as shown in several examples in the attachment. In general, one has to introduce integral representations successively, as described in detail in Ref. [67].

The full machinery
In the following we consider two-variable examples, starting from their partial differential equations down to their infinite sum representations and ε-expansions, to illustrate the general formalism.

Example 2
Consider for example the system of equations. We can write its solution as
$$ F(x, y) \;=\; \sum_{m,n \ge 0} A(m, n)\, x^m y^n , $$
with the coefficients A(m, n) determined by the system. The quantity A(m, n) can also be expressed in an alternative form, and F(x, y) can be rewritten as $\sum_{n \ge 0} y^n f_2(n, \varepsilon)$.
Expanding F_1 and F_2 in a series in ε using EvaluateMultiSums, one can write an expression containing infinite (nested) sums. These are rewritten as iterated integrals following [67]. Two of the sums are written in semi-analytic form as definite integrals, by writing part of the summand as the Mellin transform of a function. For example, we encounter one such sum; by isolating the term i = 1 and applying the Legendre duplication formula together with a further identity, we rewrite it accordingly. The first three sums are treated following [67], while the fourth sum is rewritten separately. The ε-expansion of F_1(x, ε) and F_2(x, ε) can then be written in terms of these building blocks.
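For reference, the Legendre duplication formula invoked in this step is the standard Γ-function identity, here stated together with its Pochhammer-symbol form:

```latex
\Gamma(2z) \;=\; \frac{2^{2z-1}}{\sqrt{\pi}}\,\Gamma(z)\,
\Gamma\!\Bigl(z+\tfrac{1}{2}\Bigr),
\qquad
(a)_{2n} \;=\; 2^{2n}\,\Bigl(\tfrac{a}{2}\Bigr)_{n}
\Bigl(\tfrac{a+1}{2}\Bigr)_{n}.
```

The second form is the one typically used to split a summand with argument 2n into factors depending on n only.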

Example 3
Consider as an example the system of two differential equations in two variables implied by the following differential operators. Assuming a hypergeometric solution, one is led to a quantity that cannot be analytically expressed as a product of Pochhammer symbols, due to the high degree of the polynomials appearing.

Partial difference equations with rational coefficients
In a number of problems, partial linear difference equations in several variables with polynomial coefficients also occur; the target solution space is that of rational functions in several variables, possibly also including harmonic sums or Pochhammer symbols in the numerator.
In various interesting situations, see Section 6, one can derive solutions iteratively by solving first-order linear recurrences. More generally, one may solve higher-order linear recurrences using difference ring algorithms [44,71] implemented in Sigma. In the general case of multivariate linear difference equations, however, only very few algorithms are available to find the solution, compared to the univariate case. To support possible future challenges in applications, we developed a Mathematica implementation of the algorithms of Refs. [56,57], which are a multivariate generalization of the algorithm described in Ref. [71].
In addition, we enhanced these methods by further heuristic techniques that may be useful for the calculation of Feynman integrals. The basic idea of these algorithms is to constrain the denominator of the solution. Given this constraint, finding the numerator of the solution using an ansatz becomes easier; in particular, it only requires the solution of a linear system of equations. In the following, we give a survey on how to constrain the denominator of the solution of a partial linear difference equation (PLDE). In Section 7.1 we describe the notation used, in Section 7.2 we describe the concepts of aperiodic and periodic denominator bounds given in the literature, and in Section 7.3 we discuss the determination of the numerator of the solution. In particular, we explain how one can deal with a hypergeometric prefactor in the solution in Section 7.3.1 and how one can additionally search for solutions in terms of nested sums in Section 7.3.2. After commenting on the problem of combining the solutions using initial values in Section 7.3.3, we turn in Section 7.3.4 to tools that obtain a Laurent expansion in the dimensional parameter ε efficiently by successively solving a set of difference equations in which the parameter no longer appears in the coefficients. This section is supplemented by Appendix D, where we describe the commands available in our Mathematica implementation solvePartialLDE.

The basic problem description
With y(n 1 , . . . , n r ) ∈ K(n 1 , . . . , n r ) a rational function in r variables, we define the shift operators N s with respect to the shift s = (s 1 , . . . , s r ) ∈ Z r as N s y = y(n 1 + s 1 , . . . , n r + s r ).
Partial linear difference equations are equations of the type $\sum_{s \in S} a_s\, \mathbf{N}_s\, y = f$, where S is a finite subset of Z^r, a_s and f are polynomials in the variables n_1, . . . , n_r, and y is an unknown rational function to be determined; the set S of all shifts appearing in the equation is called the shift set or structure set. Because the equation is linear, the general solution is the sum of a particular solution of (207) and the general solution of the homogeneous equation with f = 0. An example of the type of equation under consideration is: −(1 + k + n^2) y(n, k) + (4 + k + 2n + n^2) y(1 + n, 2 + k) = 0.
It has the shift set S = {(0, 0), (1, 2)}, and its coefficients are −(1 + k + n^2) and (4 + k + 2n + n^2). A distinction used in the literature [42,44,56] is the one between periodic and aperiodic polynomials. A polynomial p is periodic if there exist infinitely many shifts, mapping p into p′, such that gcd(p, p′) ≠ 1. A polynomial is called aperiodic if it is not periodic. For example, the polynomial (n + k + 2) is periodic (shifting n → n + t, k → k − t reproduces it for every t), while the polynomial (n^2 + k + 6) is aperiodic. An important fact is that any polynomial can be factorized into a periodic and an aperiodic part.
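For illustration, the example equation above admits a rational solution with an aperiodic denominator; y(n, k) = 1/(n^2 + k + 1) is one such solution, found by inspection (our own check, sketched here with exact rationals):

```python
from fractions import Fraction

def y(n, k):
    # Candidate rational solution, found by inspection of the equation.
    return Fraction(1, n * n + k + 1)

def residual(n, k):
    # -(1 + k + n^2) y(n, k) + (4 + k + 2n + n^2) y(1 + n, 2 + k)
    return (-(1 + k + n * n) * y(n, k)
            + (4 + k + 2 * n + n * n) * y(n + 1, k + 2))

# The residual vanishes on a whole grid of sample points, since
# (n+1)^2 + (k+2) + 1 equals exactly the coefficient 4 + k + 2n + n^2.
assert all(residual(n, k) == 0 for n in range(0, 20) for k in range(0, 20))
print("y(n,k) = 1/(n^2+k+1) solves the PLDE")
```

Note that the denominator n^2 + k + 1 is aperiodic, in the same way as the example (n^2 + k + 6) above.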
Given a partial linear difference equation, algorithms exist to constrain what denominators may appear in the solution. These algorithms target separately the periodic and the aperiodic part of the denominator of the solution of (207). In our package we have implemented and enhanced the algorithms described in [56,57] and we describe our implementation choices and their rationale in the following.

Denominator bounds
Let us first review why the calculation of a denominator bound for the solution of a PLDE is valuable. One naive way, which one aims to avoid, to search for solutions of (207) among the rational functions would be to start with an ansatz; for example, one can naively write a generic rational function in the variables n_1, . . . , n_r with undetermined coefficients c_k and c_k′. By plugging the ansatz (211) into (207), one obtains equations for the unknown coefficients c_k and c_k′ by imposing the equality of every monomial in the variables n_i on both sides of the equation, and one finds in this way, if they exist, the solutions having numerator and denominator of degree lower than or equal to the degree chosen in the ansatz. However, the equations obtained in this way will in general be non-linear, and therefore difficult to solve. As observed in the univariate case [71], the situation improves if one is able to find a denominator bound for the solutions. A denominator bound d is a polynomial such that for any solution y = n/p of (207) it must hold that p | d: the denominator of the solution is a divisor of the denominator bound.
If we were able to calculate d algorithmically, then only an ansatz for the numerator of the solution would be required, and the equations for the unknown coefficients c k would be linear, and therefore easier to solve. It is possible to formulate an ansatz for the numerator which also includes terms involving harmonic sums [33,34], satisfying a wide class of recurrence equations.
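A univariate toy example shows how a denominator bound turns the search into a linear problem (a sketch under our own assumptions: the recurrence (n + 2) y(n + 1) − (n + 1) y(n) = 0 and the denominator bound d = n + 1 are both chosen for illustration):

```python
from fractions import Fraction

# Toy recurrence: (n + 2) y(n + 1) - (n + 1) y(n) = 0.
# Suppose the denominator bound d(n) = n + 1 is known; the ansatz
# y(n) = (c0 + c1*n)/(n + 1) then leaves only the numerator undetermined.
def residual(n, c0, c1):
    yans = lambda m: Fraction(c0 + c1 * m, m + 1)
    return (n + 2) * yans(n + 1) - (n + 1) * yans(n)

# The residual is exactly linear in (c0, c1), so its coefficients can be
# read off by evaluating with unit vectors.
coeff_c0 = residual(3, 1, 0)   # contribution multiplying c0
coeff_c1 = residual(3, 0, 1)   # contribution multiplying c1
assert coeff_c0 == 0 and coeff_c1 == 1   # linear condition: c1 = 0

# Hence every rational solution with this bound is y(n) = c0/(n + 1):
assert all(residual(n, 5, 0) == 0 for n in range(0, 30))
print("solution space: y(n) = c0/(n+1)")
```

Without the bound, the denominator's coefficients would multiply the numerator's, and the resulting conditions would be non-linear, as described above.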
If we write the solution of a partial linear difference equation as y = n/(u v), with u aperiodic and v periodic, it is always possible to calculate a bound d_a for the aperiodic part u of the denominator. We refer to [56] for a description of how the aperiodic denominator bound is calculated.
For the periodic part v it is not always possible to obtain a complete denominator bound for a PLDE. This is illustrated, for example, by equation (212), which is satisfied by 1/(n + k)^α for any α ∈ N. Clearly, no polynomial can be a denominator bound for this equation.
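An equation with this behavior can be sketched explicitly (our illustrative stand-in, not necessarily the text's equation (212)): if the shift (1, 0) acts like the shift (0, 1), any function of n + k is a solution, so the periodic factor (n + k) can occur with arbitrary power.

```python
from fractions import Fraction

# Illustrative PLDE of this kind (our stand-in):
#     y(n + 1, k) - y(n, k + 1) = 0.
# Any function of n + k solves it, in particular 1/(n + k)^alpha
# for every alpha in N -- so no single polynomial bounds all denominators.
def residual(y, n, k):
    return y(n + 1, k) - y(n, k + 1)

for alpha in range(1, 6):
    y = lambda n, k, a=alpha: Fraction(1, (n + k) ** a)
    assert all(residual(y, n, k) == 0
               for n in range(1, 10) for k in range(1, 10))
print("1/(n+k)^alpha solves the equation for every alpha")
```

The denominator (n + k)^α is built from the periodic polynomial n + k, which is exactly the obstruction described in the text.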
In other words, one cannot expect to obtain a complete denominator bound, due to the intrinsic problem that periodic factors may arise with arbitrary powers. Nevertheless, it is often possible to calculate a partial bound and to identify what shape the unpredictable factors of v must have. (A partial bound is a bound for some, but not all, of the periodic factors.) The algorithm in [57] works by successively examining the periodic factors of the coefficients a_p when p is a "corner point", see [57] for a definition. Applying all the tactics described in that article, one obtains an explicitly given polynomial d_p, a finite set of polynomials P, and a set of generators spanning a lattice V in Z^r, such that for any solution of the given PLDE one can predict the periodic denominator part v in the form v = d_p · v_semi-known · v_unknown, where v_semi-known is a polynomial whose factors are taken from the set {N_s p^m | s ∈ Z^r, p ∈ P, m ∈ N} and v_unknown is a polynomial such that spread(v_unknown) = V.
If P ≠ {} or V ≠ {}, the implementation prints out the corresponding data in order to support the user in guessing the missing parts v_semi-known and/or v_unknown. Summarizing, the user obtains guidance in formulating an ansatz d_user for the missing factors in the denominator of the solution. To force their inclusion in the search when looking for the numerator of the solution, one can use the option InsertDenFactor → d_user of our package, cf. Section 7.3 and Appendix D.

Determination of the numerator
Once the aperiodic and periodic denominator bounds d_a, d_p have been calculated, and possibly an ansatz d_user for missing factors in the denominator has been set, one can search for the numerator contribution. In general, it has been shown in [72], based on [62], that this problem is unsolvable: given a homogeneous PLDE with polynomial coefficients, no algorithm exists that determines all polynomial solutions. Nevertheless, one can search for the desired polynomial solutions by taking as ansatz a general polynomial num(c_i) with undetermined coefficients c_i, where the polynomial degree is set sufficiently high. One then substitutes the resulting rational function into equation (207). Finding non-trivial solutions of the underlying linear system then allows one to specialize the c_i such that y is a solution of the given PLDE. In many cases it is the determination of the c_i which requires the largest computation time, whereas the denominator bounds can be computed quite quickly. For this reason we propose the following strategy to reduce the computation time.
In the cases where the PLDE does not contain any symbolic parameters (such as the dimensional regulator ε, or ratios of invariants) other than the shift variables, one may obtain constraints on the undetermined c_i simply by plugging in, sufficiently many times, random numerical values for the shift variables. One then quickly obtains a linear system for the c_i.
If symbols are present, instead, one may perform a first pass with the symbols replaced by random numbers, in order to identify and remove redundant constraints. After removing the redundant equations for the c_i, the system can be solved in a stepwise manner, i.e., considering the constraints produced by one monomial at a time and plugging the result into the rest of the equation. This is what our package does when the function SolvePLDE is called, cf. Appendix D.
It is certainly possible that the use of random numbers to generate constraints causes the system to contain two equations for the c_i which are not independent. The probability of such an occurrence can be made arbitrarily small by choosing a sufficiently large range from which the random numbers are drawn. In any event, an unfortunate draw of random numbers can only cause the software to output functions misidentified as solutions; it cannot cause the software to miss any solutions. By explicitly checking the result, one can guard against this remote possibility, at the expense of additional computation time.
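The random-evaluation strategy can be sketched on the example equation of Section 7.1 (our own toy run, assuming for illustration the denominator bound n^2 + k + 1 and a degree-1 numerator ansatz): each random point yields one numeric row of a linear system for the c_i.

```python
from fractions import Fraction
from random import randint, seed

# PLDE: -(1 + k + n^2) y(n, k) + (4 + k + 2n + n^2) y(1 + n, 2 + k) = 0,
# ansatz y = (c0 + c1*n + c2*k)/(n^2 + k + 1).  The residual is linear in
# (c0, c1, c2); evaluating with unit vectors at a random point gives one row.
def residual_row(n, k):
    row = []
    for unit in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
        y = lambda m, j: Fraction(unit[0] + unit[1] * m + unit[2] * j,
                                  m * m + j + 1)
        row.append(-(1 + k + n * n) * y(n, k)
                   + (4 + k + 2 * n + n * n) * y(n + 1, k + 2))
    return row

seed(0)
rows = {tuple(residual_row(randint(0, 10**6), randint(0, 10**6)))
        for _ in range(8)}
# All random evaluations reproduce the single constraint c1 + 2*c2 = 0:
assert rows == {(Fraction(0), Fraction(1), Fraction(2))}
# e.g. (c0, c1, c2) = (0, 2, -1) gives the solution (2n - k)/(n^2 + k + 1).
print("constraint: c1 + 2*c2 = 0")
```

Here the redundant rows are detected immediately (the set collapses to one row); with symbolic coefficients, the same numeric pass is used to discard redundant constraints before the symbolic solve.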
In the following we elaborate further enhancements in order to extend the solution space from the rational function case to more general classes of functions. Besides the examples below, further examples for each aspect can be found in the Mathematica notebook auxiliary to this paper.

Treatment of a hypergeometric prefactor
Given a partial linear difference equation (207), it is possible to derive another difference equation whose solution y′ is related to y by y′ = r y, with r = r(n_i) a hypergeometric function of its arguments, i.e., a function such that the ratio $\frac{\mathbf{N}_{e_i} r}{r} = \frac{r(n_1, \dots, n_i + 1, \dots, n_r)}{r(n_1, \dots, n_i, \dots, n_r)}$ is, for all i, a rational function of the variables n_i. Examples of hypergeometric functions are Pochhammer symbols, factorials, Γ-functions, binomial coefficients, and obviously rational functions and polynomials.
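The defining property can be checked directly for a Pochhammer symbol (a small sketch with exact rationals; the parameter value is arbitrary): the shift quotient (a)_{n+1}/(a)_n is the rational function a + n.

```python
from fractions import Fraction

def poch(a, n):
    """Pochhammer symbol (a)_n = a (a+1) ... (a+n-1), with (a)_0 = 1."""
    r = Fraction(1)
    for j in range(n):
        r *= a + j
    return r

# Hypergeometric property: the shift quotient is rational in n.
a = Fraction(3, 7)   # arbitrary rational parameter, chosen for the check
assert all(poch(a, n + 1) / poch(a, n) == a + n for n in range(0, 25))
print("(a)_{n+1}/(a)_n = a + n")
```

The same check applies verbatim to factorials (a = 1) and, up to shifts, to Γ-function and binomial prefactors.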
The transformation from (214) to (215) is useful whenever it is possible to formulate an ansatz for r. Once some specific form can be postulated for r, the equation (215) is obtained by substitution and by exploiting the hypergeometric property.
We can now solve the new equation. From its solution we conclude the solution of (218), up to some constant C ∈ K(ε).

Finding solutions in terms of nested sums
Solutions connected to Feynman integrals often also involve indefinite nested sums, such as (cyclotomic) harmonic sums [33][34][35] or generalized versions, like Hurwitz harmonic sums. A straightforward modification of the ansatz (213) is to search for a numerator num(c_i) that is built not only from a polynomial in K[n_1, . . . , n_r] with the unknown coefficients c_i, but from polynomial expressions in a finite set of nested sums, i.e., one takes a linear combination of power products of the given nested sums whose coefficients are polynomials in K[n_1, . . . , n_r] with unknown coefficients. In practice, it is important for this list of nested sums to be shift-stable, meaning that a shift in any of the variables must not introduce new sums not already included in the list; the sums should also be linearly independent. To guarantee this property, one can use quasi-shuffle algebras or difference ring methods [43,73]. The nested sums at shifted arguments can then be rewritten through the repeated application of identities of the type (224), and similarly for all other nested sums, until only unshifted nested sums appear. After clearing denominators, the PLDE implies a set of linear constraints on the undetermined parameters c_i, obtained by coefficient comparison in all the power products which appear when y is plugged back into equation (207). Note that the number of unknowns c_i increases strongly: one has to determine not only one numerator polynomial, but one polynomial for each power product. In this regard, the homomorphic image techniques described at the beginning of Section 7.3 are instrumental to perform these calculations in reasonable time.
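The simplest instance of such a shift identity is the one for the harmonic sum S_1 (our minimal sketch with exact rationals): S_1(n + 1) = S_1(n) + 1/(n + 1), so shifting the argument only produces a rational-function correction and the list {S_1} stays shift-stable.

```python
from fractions import Fraction

def S1(n):
    """Harmonic sum S_1(n) = sum_{i=1}^n 1/i."""
    return sum((Fraction(1, i) for i in range(1, n + 1)), Fraction(0))

# Shift rule: S_1(n + 1) = S_1(n) + 1/(n + 1).  Repeated application
# rewrites any shifted argument in terms of the unshifted sum plus
# rational functions -- no new sums are introduced.
assert all(S1(n + 1) == S1(n) + Fraction(1, n + 1) for n in range(0, 40))
print("S1(n+1) - S1(n) = 1/(n+1)")
```

For deeper nested sums the same mechanism applies recursively, with the correction terms being nested sums of lower depth.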
Remark. A more advanced (and also less heuristic) tactic is to apply a recursive strategy as worked out in [44]: one defines an order of the nested sums (s_1, s_2, . . . , s_e) such that a sum s_i does not arise inside any of the sums s_1, . . . , s_{i−1}. Then one makes an ansatz for the solution y = p_0 + p_1 s_e + · · · + p_d s_e^d, where p_0, . . . , p_d are polynomial expressions in terms of the remaining nested sums s_1, . . . , s_{e−1} with coefficients from the ground field K(n_1, . . . , n_r). Here one has to choose d sufficiently high in order to guarantee that the desired solution can be found. Then one plugs y into (207), applies shift rules such as (224), clears denominators and compares the coefficients of the highest term s_e^d. This yields a new PLDE in terms of the unknown p_d. One then computes by recursion all the solutions of this new PLDE in terms of the remaining sums s_1, . . . , s_{e−1}, plugs the solutions into p_d of the original system and obtains an updated PLDE of (207) in which s_e occurs only up to degree d − 1. One then proceeds by degree reduction to compute the remaining coefficients p_0, . . . , p_{d−1} and obtains the final solution y. We remark that in the base cases, i.e., when all sums have been removed within the recursion, one ends up solving several PLDEs purely over the ground field K(n_1, . . . , n_r), i.e., the machinery described at the beginning of Section 7 is applied. It is our plan for the near future to implement this more advanced machinery within the formal setting of RΠΣ-difference ring extensions [42], based on the reduction strategy given in [44].

Matching the solution to initial values
If initial values are provided, one can look for a general solution that conforms to them. This general solution is found by building a linear combination, with undetermined coefficients, of the solutions of the homogeneous equation, plus a particular solution of the equation. Next, the initial values are plugged in, and a system of equations is obtained. If the system contains symbolic parameters other than the shift variables, the undetermined coefficients to be searched for are not just numbers; in that case, the coefficients of the linear combination are taken to be general rational functions in the parameters up to some chosen degree. The combination of the solutions will be of particular importance for the next subsection.

Finding the solution in a series expansion
In many applications it is desirable to obtain the Laurent series expansion of the solution of a difference equation. This may be easier to achieve than the derivation of a complete solution because, at each order in the expansion, it is possible to derive a difference equation in which the expansion parameter is absent; therefore the linear system for the coefficients c_i can potentially be solved much more quickly. The procedure described in the following generalizes the univariate case given in [50]. It assumes that the initial values of the solution in its ε-expansion are known.
Consider, for instance, an equation of the form (207), possibly containing a parameter ε in the coefficients, where the coefficients a_s(n_i, ε) are polynomials in the shift variables and in the parameter ε.
Assume that the solution of (228) has, around ε = 0, a Laurent expansion starting at the power ε^{−ℓ} of the parameter, with ℓ known, y_ε(n_i) = ε^{−ℓ} y_{−ℓ}(n_i) + · · · + y_0(n_i) + ε y_1(n_i) + · · · + ε^c y_c(n_i), and that the right-hand side of the equation can be expanded in a series in ε as well. Assume also that the a_s(n_i, ε = 0) are not all zero, so that an overall power of ε, if present in the equation, has been factored out. Then one may proceed by inserting (229) and (230) into (228) and comparing the coefficients of the lowest power of ε, which yields equation (231). Equation (231) is now free of ε, which facilitates the task of finding a solution and reduces the computational time required. If (231) can be uniquely solved for y_{−ℓ} and the solution matched to initial values, one can move to the next higher power in ε by plugging the solution into (229).
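A univariate toy model (our own, with recurrence y(n + 1) = (1 + ε/(n + 1)) y(n) and y(0) = 1, chosen for illustration) shows the cascade: the ε^0 equation gives y_0 ≡ 1, and the ε^1 equation, sourced by the already-known y_0, is ε-free and produces harmonic numbers.

```python
from fractions import Fraction

# Toy recurrence with a parameter:  y(n+1) = (1 + eps/(n+1)) y(n), y(0) = 1.
# Writing y = y0 + eps*y1 + O(eps^2) and comparing powers of eps gives
#   order eps^0:  y0(n+1) = y0(n),                   y0(0) = 1,
#   order eps^1:  y1(n+1) = y1(n) + y0(n)/(n+1),     y1(0) = 0.
# Each order is an eps-free recurrence sourced only by lower orders.
def expansion(nmax):
    y0, y1 = Fraction(1), Fraction(0)
    out = [(y0, y1)]
    for n in range(nmax):
        y0, y1 = y0, y1 + y0 / (n + 1)
        out.append((y0, y1))
    return out

vals = expansion(30)
harmonic = lambda n: sum((Fraction(1, i) for i in range(1, n + 1)), Fraction(0))
assert all(v0 == 1 and v1 == harmonic(n) for n, (v0, v1) in enumerate(vals))
print("y0(n) = 1,  y1(n) = H_n")
```

This also illustrates why harmonic-type sums appear order by order in the ε-expansion, as discussed in the earlier sections.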

Conclusions
We reviewed techniques, algorithms and implementation choices for the solution of partial linear differential equations in the form of multivariate power series representations. Here we extract the underlying partial linear difference equations of the power series coefficients (see, e.g., Section 3) and try to solve them in terms of special functions. For this task we presented an algorithm that can solve frequently arising hypergeometric systems in terms of hypergeometric products (see Section 4) and elaborated heuristic methods to find such solutions (also in terms of nested sums) for the general higher-order case (see Section 7). Special care has been devoted to the ε-expansion of such solutions (see Sections 5.1 and 7.3.4), where in addition, e.g., Hurwitz harmonic sums and generalized versions may arise. Finally, we utilize the available summation tools in the setting of difference rings to simplify the found sum solutions in terms of indefinite nested sums over hypergeometric products. Various concrete examples of this computer algebra machinery have been elaborated (see Section 6).

A The multiple series representation
In the following we summarize the different multiple series representations that can be found in the existing literature. We note that, starting from their partial linear differential representations, the methods described above can also reproduce the presented sum representations. The expansion coefficients are given as rational functions of Pochhammer symbols, which requires a corresponding re-parameterization of the coefficients of the foregoing differential and difference equations.
One of the simplest functions of these classes is Gauß' hypergeometric function
$$ {}_2F_1(a, b; c; x) \;=\; \sum_{n=0}^{\infty} \frac{(a)_n (b)_n}{(c)_n}\, \frac{x^n}{n!} . $$
Its generalizations, the generalized hypergeometric functions, are given by
$$ {}_pF_q(a_1, \dots, a_p; b_1, \dots, b_q; x) \;=\; \sum_{n=0}^{\infty} \frac{(a_1)_n \cdots (a_p)_n}{(b_1)_n \cdots (b_q)_n}\, \frac{x^n}{n!} . $$
The bi-variate Appell series [8,9] have analogous double-series representations. This set is extended to the Horn-type bi-variate series [12], whose summands all contain the factor $x^m y^n/(m!\, n!)$.
As in the case of the generalized hypergeometric functions there are confluent forms of the bi-variate functions.

B Mapping conditions to the Pochhammer case
The parameters of the general representations of the (partial) differential equations given in Section 2, leading to product solutions, obey the well-known Pochhammer solutions provided a number of relations among them hold. Here we present some typical examples of these relations.
returns a corresponding recurrence for f[m,n,...]. The last two commands implement the techniques presented in the beginning of Section 4.
To prepare for the expansion in the dimensional parameter ε, which frequently occurs in the parameters of the differential and difference equations, one usually needs to check the convergence domain of the corresponding solution, in order to be able to perform the respective limit N → ∞ in the sums involved. Generally it is assumed that {x, y, z, t} ∈ ]−1, 1[. However, often stronger conditions are needed in the multivariate cases. Internal tables, cf. [19], allow one to check for this in the two- and three-variate cases using the commands findCond2 and findCond3. One first has to determine the corresponding function label fcn via classifier2 or classifier3, as e.g.
classifier3 returns the convergence conditions, which are in some cases given in implicit form. The ε-expansion is performed using algorithms implemented in Sigma. The attached notebooks ExHypSeries.nb and ExSolvePartialLDE.nb give a more detailed explanation of this.
In the cases in which the ε-expansion of the considered higher transcendental functions can be performed to a certain power O(ε^k), one may want to check whether the solution satisfies the corresponding differential equations. This is provided by the command CheckDE[sol,eq], where sol denotes the solution up to the corresponding degree in ε and eq the differential equation.

CheckDE[sol, eq]
then returns a result which is of higher order in ε.

D A brief description of the commands of solvePartialLDE
The Mathematica package SolvePLDE.m implements the aforementioned algorithms for solving partial linear difference equations. It requires Sigma and HarmonicSums to be loaded. Additionally, the software Singular [75] must be installed and made available through the interface given in [76]. The installation path of Singular can be set using the command appropriate for the user's system.
• SolvePLDE[eq == rhs, f[n, k, ...], (options)]. This command solves the linear partial difference equation. It has the following available options:
– UseObject → list of harmonic sums and/or Pochhammer symbols. Allows one to define a list of harmonic sums and Pochhammer symbols to be searched for in the numerator of the solution.
– PLDEdegBound → d. Allows one to choose the total degree d of the ansatz for the numerator of the solution. Defaults to 0.
– InsertDenFactor → factors. In case the periodic denominator bound was not complete, the user may force the search to include factors in the denominator.
– PLDESymbols → list. Any symbols appearing other than the shift variables must be declared in list.
– InitialValues → list. A list of initial values in the form {{var_1 → val_1, var_2 → val_2, . . . , initialvalue}, . . .}.
– SymbolDegree → d. When initial conditions are provided, a linear combination of the homogeneous solutions is built, having as coefficients rational functions in the symbols. This option sets the maximum total degree of the numerator and denominator of those rational functions.
• expandHypergPref[eq == rhs, f[n, k, ...], fac]. This command derives a new equation whose solution has the hypergeometric factor fac removed, as described in Section 7.3.1.

E A constant
We calculate the constant C given in (119). The two contributions to this infinite sum both diverge, while their combination is finite. Unfortunately, (119) cannot simply be written in terms of a hypergeometric function at main argument 1, since this diverges, as is easily seen by applying Gauß' summation formula. However, one can define it as a suitable limit. Before expanding in ε one should transform the main argument, cf. e.g. [7]. Furthermore, a known relation for the digamma function is used. While the notebook ExSolvePartialLDE.nb needs only a few minutes of computation time, ExHypSeries.nb needs 1.32 days.