Reduction to master integrals via intersection numbers and polynomial expansions

Intersection numbers are rational scalar products among functions that admit suitable integral representations, such as Feynman integrals. Using these scalar products, the decomposition of Feynman integrals into a basis of linearly independent master integrals is reduced to a projection. We present a new method for computing intersection numbers that only uses rational operations and does not require any integral transformation or change of basis. We achieve this by systematically employing the polynomial series expansion, namely the expansion of functions in powers of a polynomial. We also introduce a new prescription for choosing dual integrals, de facto removing the explicit dependence on additional analytic regulators in the computation of intersection numbers. We describe a proof-of-concept implementation of the algorithm over finite fields and its application to the decomposition of Feynman integrals at one and two loops.

1 Introduction

Scattering amplitudes at the loop level are linear combinations of Feynman integrals. These integrals, in dimensional regularization, obey linear relations, most notably integration by parts identities [1,2], Lorentz invariance identities [3] and symmetry relations. Using these, we can reduce a large number of integrals to a much smaller set of linearly independent ones called master integrals. This reduction is an essential step of most modern multi-loop predictions.
The most successful and widely used integral decomposition method is the Laporta algorithm [4], which consists in generating and solving a large and sparse system of equations obeyed by loop integrals. This approach has produced extraordinary results and has been a key ingredient of the bulk of multi-loop predictions in the last two decades, also thanks to efficient public implementations [5][6][7]. More recently, this approach has been combined with finite fields and functional reconstruction techniques [8,9], where complex algebraic manipulations are replaced with efficient numerical evaluations over machine-size integers and analytic results are reconstructed from several numerical evaluations. This method, also implemented in both private and public [10][11][12][13] programs, has significantly pushed the state of the art of multi-loop integral decomposition and, more generally, of multi-loop theoretical predictions.
Despite its tremendous success, the reduction to master integrals is still a major bottleneck of higher-order theoretical predictions, and it very often determines whether a multi-loop calculation is feasible or not. Moreover, this approach does not make manifest the vector-space structure obeyed by Feynman integrals. Indeed, since all the Feynman integrals in a family (i.e. those having a common denominator structure) can be decomposed into a finite [14] basis of master integrals, they can be viewed as the elements of a finite-dimensional vector space over the field of rational functions of the invariants that describe a process. In traditional reduction approaches, this property emerges only indirectly, from a large number of linear identities, and it is not directly exploited. The search for decomposition methods that more directly exploit this structure is of clear interest from a purely theoretical point of view, as it improves our understanding of Feynman integrals. From a more pragmatic and phenomenological viewpoint, this improved understanding may inspire the development of more efficient decomposition methods, opening up a number of important theoretical and phenomenological studies that are currently beyond our technical capability.
An approach that has been gaining a lot of interest is the study of Feynman integrals within the mathematical framework of intersection theory [15][16][17][18][19][20][21][22][23][24]. The latter offers a method for defining rational scalar products, called intersection numbers, in vector spaces of functions with a suitable integral representation, among which are Feynman integrals. Using a scalar product, in turn, it is straightforward to project any element of a vector space onto its components with respect to a vector basis. Hence, the problem of decomposing any Feynman integral into a basis of master integrals is reduced to the problem of computing scalar products.
This approach, while promising, is currently not competitive with more traditional decomposition methods. One of the main drawbacks of current approaches to the calculation of intersection numbers is the appearance of non-rational expressions in intermediate stages of the calculation. Indeed, intersection numbers are typically computed as sums of residues of local solutions of differential equations around the poles of some rational functions. These poles can generally appear at non-rational locations. Because intersection numbers are rational, all non-rational contributions eventually combine into a rational expression after summing over all residues. The handling of non-rational expressions is however a major algebraic bottleneck. Moreover, their appearance prevents us from using the available finite-fields-based techniques and programs in this context, since these apply to algorithms that are a sequence of rational operations. Finally, since both the inputs and the outputs of intersection numbers are rational, it would be mathematically and theoretically more satisfactory to avoid non-rational expressions in all steps of the calculation. A possible solution to this problem was proposed in [22], where a rational algorithm for computing intersection numbers was formulated in terms of a sequence of changes of bases, integral transformations and the application of the global residue theorem. The required changes of bases and integral transformations are however highly non-trivial and can themselves become a bottleneck, especially for more complex integrals.
In this paper, we present a new purely rational method for computing intersection numbers which does not involve any integral transformation or change of basis. The explicit appearance of irrational poles is sidestepped via the systematic usage of the polynomial series expansion, or p(z)-adic expansion for brevity, namely the series expansion of rational functions in powers of a prime polynomial (conceptually similar to p-adic numbers, which are instead series expansions of rational numbers in powers of a prime integer). A p(z)-adic expansion is effectively equivalent to a local expansion around all the (potentially irrational) zeroes of a polynomial p(z) at the same time. However, it can be computed simply in terms of a polynomial division algorithm, which is obviously rational, and does not require knowing the location of the roots of p(z). We thus locally solve the aforementioned differential equations in terms of p(z)-adic (rather than Laurent) expansions of their solutions. For the final sum over residues we then use a simple generalization of the global residue theorem that applies to functions with a known p(z)-adic expansion.
We implement the new method over finite fields using the Mathematica package FiniteFlow and test it on several one- and two-loop examples, finding agreement with reductions obtained with the Laporta algorithm.
Another drawback of intersection theory, when applied to loop integrals, is the need to introduce additional analytic regulators that significantly increase the complexity of the calculation and obfuscate the typical block-triangular structure of integral reductions. A mathematical formalism for dealing with this was proposed in [25,26]. In this paper we propose an alternative approach, based on a suitable choice of master integrals for the dual space of Feynman integrals and on performing operations on the coefficients of the leading terms of an expansion in which the additional regulators disappear. This "effectively" removes any analytic dependence on the regulators in the calculation and provides a number of major simplifications, besides recovering the aforementioned block-triangular structure.
The paper is structured as follows. In section 2 we give a pedagogical review of intersection theory and its application to Feynman integrals, setting the notation and the main concepts used afterwards. Section 3 contains the main result of this paper, namely the description of a new rational algorithm for computing intersection numbers. In section 4 we describe our new approach for dealing with analytic regulators. An implementation of our new method over finite fields is described in section 5. Finally, in section 6 we validate our results with some examples, and in section 7 we draw our conclusions and discuss possible future developments. We give more details about the p(z)-adic expansion and the global residue theorem in the appendices.

2 Intersection numbers of Feynman integrals
In this section we set the notation and review some basic concepts of intersection theory and its application to the reduction of Feynman integrals to a basis of master integrals. This overview is meant to be accessible to readers who are not familiar with intersection theory, and we do not attempt to be mathematically rigorous or thorough. For a more comprehensive treatment of the subject and some of its applications, we refer readers to [15][16][17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32] and references therein.
We are interested in analyzing classes of integrals, such as Feynman integrals, that can be viewed as elements of a vector space. In particular, we consider n-fold integrals in the variables z = (z_1, ..., z_n) of the form

|φ_R⟩ = ∫ u(z) φ_R(z) d^n z,   (2.1)

and dual integrals

⟨φ_L| = ∫ u(z)^{−1} φ_L(z) d^n z.   (2.2)

We also refer to these as right and left integrals respectively. In this work, we are not concerned with studying the dependence on the integration domain, which never varies and which we thus leave implicit, our only assumption being that the integrands vanish at its boundary. In the previous equations φ_R and φ_L are rational functions of z, while u(z) is a multivalued function that regulates the singularities of φ_R and φ_L. In particular, in our applications, u takes the form

u(z) = ∏_j B_j(z)^{γ_j},   (2.3)

where the B_j(z) are polynomials and the γ_j are generic exponents, i.e. exponents that are not identically equal to any integer. For Feynman integrals, the γ_j are functions of symbolic regulators, such as the dimensional regulator. The explicit form of u (or B_j) and of the rational functions φ_{R,L} depends on the specific problem one is interested in; for our identifications see subsection 2.2. We study classes of integrals with the same u(z) and different rational functions φ_{L,R}, namely Feynman integrals within the same family. Since these (regulated) integrands vanish at the boundaries of integration, we can write integration by parts identities (IBPs) as¹

0 = ∫ d^n z ∂_{z_i} ( u(z) ξ(z) ) = |∂_{z_i} ξ + (∂_{z_i} log u) ξ⟩,   (2.5)

for any rational function ξ(z).

¹ A more rigorous definition considers the elements |φ_R⟩ and ⟨φ_L| as equivalence classes of n-forms, such that forms in the same class yield integrands that differ by total derivatives, as in eq. (2.5). In this paper, with an abuse of notation, we identify these with the integrals themselves, since we only compute linear relations between integrals that can be written as integration by parts identities, i.e. in the form of eq. (2.5).
or equivalently, for a list of n rational functions ξ_i,

0 = Σ_{i=1}^{n} |∂_{z_i} ξ_i + (∂_{z_i} log u) ξ_i⟩.   (2.6)

Identities of this form can be generated and combined to decompose Feynman integrals into master integrals; in this work we are however interested in a more direct method for achieving such decompositions.
Our main goal is the computation of rational scalar products between left and right integrals, referred to as intersection numbers. Intersection numbers are obviously linear in both φ_L and φ_R and consistent with the identities in eq. (2.5). By calculating these scalar products and combining them, we can project left and right integrals onto master integrals, in the same way a vector belonging to a vector space can be decomposed into a basis via projections. Therefore we express the decomposition of any integral |φ_R⟩ and dual integral ⟨φ_L| into bases of master integrals |e_i^{(R)}⟩ and dual master integrals ⟨e_j^{(L)}| as

|φ_R⟩ = Σ_{i,j} |e_i^{(R)}⟩ (C^{−1})_{ij} ⟨e_j^{(L)}|φ_R⟩,   (2.7)

⟨φ_L| = Σ_{i,j} ⟨φ_L|e_i^{(R)}⟩ (C^{−1})_{ij} ⟨e_j^{(L)}|,   (2.8)

where we introduced the metric C_{ij} = ⟨e_i^{(L)}|e_j^{(R)}⟩, which contains the intersection numbers between all the master integrals.
In the following, we will thus focus on methods for computing intersection numbers and on their application to the decomposition of Feynman integrals into master integrals.

2.1 Computing intersection numbers
We now review a known method for computing multivariate intersection numbers [17,18], which is recursive in the integration variables. We start from the univariate case, which is the base case of the recursion, and then describe the recursive step, which computes multivariate intersection numbers of n-fold integrals in terms of intersection numbers of (n − 1)-fold integrals.
While intersection numbers are rational, the method reviewed in this section generally involves non-rational intermediate expressions, namely the locations of the poles of rational functions. In section 3 we will propose a new method that avoids non-rational expressions in all steps of the calculation.

2.1.1 Univariate case
For the univariate case, we consider intersection numbers between 1-fold integrals of the form

|φ_R⟩ = ∫ u(z) φ_R(z) dz,   ⟨φ_L| = ∫ u(z)^{−1} φ_L(z) dz.   (2.10)

Intersection numbers are calculated as a sum of residues,

⟨φ_L|φ_R⟩ = Σ_{p ∈ P_ω} Res_{z=p} ( ψ φ_R ),   (2.11)

where the sum extends over the set P_ω of the poles of ω (including the possible pole at z = ∞) and ψ is the local solution, around the pole p, of the differential equation

∂_z ψ + ω ψ = φ_L,   (2.13)

where

ω ≡ ∂_z log u(z) = ∂_z u(z) / u(z).

The local solution can be computed as a Laurent expansion around z = p,

ψ = Σ_i c_i (z − p)^i,

which, inserted in eq. (2.13), yields a linear system of equations for the unknown coefficients c_i. The Laurent expansion can be truncated at a finite order that is sufficient for the purpose of taking the residue in eq. (2.11).
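To make the mechanics concrete, the following sketch (Python with exact rational arithmetic; the helper names are ours, not from the paper's implementation) solves the differential equation locally in the simplest situation, ω = a/z with no regular part: matching the coefficient of z^(i−1) on both sides gives (i + a) c_i = φ_{L,i−1}, so each order decouples. For a generic ω the same matching produces a triangular linear system instead.

```python
from fractions import Fraction as Q

def local_psi(a, phi_L, orders):
    """Laurent coefficients c_i of the local solution of
         psi'(z) + (a/z) psi(z) = phi_L(z)
    around z = 0, for omega = a/z. phi_L is a dict {power: coeff}
    of the Laurent expansion of phi_L; order z^(i-1) of the
    equation gives (i + a) c_i = phi_L[i-1]."""
    return {i: Q(phi_L.get(i - 1, 0)) / (i + a) for i in orders}

def residue_at_pole(psi, phi_R):
    """Residue at z = 0 of psi * phi_R: the coefficient of 1/z."""
    return sum(c * Q(phi_R.get(-1 - i, 0)) for i, c in psi.items())

# Example: u = z^a with a = 5 and phi_L = phi_R = 1/z.
a = Q(5)
psi = local_psi(a, {-1: 1}, range(-1, 3))   # c_0 = 1/(0 + a) = 1/5, others 0
res = residue_at_pole(psi, {-1: 1})         # contribution of the pole z = 0
```

The full intersection number in eq. (2.11) also requires the analogous local solution and residue at every other pole of ω, including z = ∞.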

2.1.2 Multivariate case
We now consider n-fold integrals of the form in eqs. (2.1) and (2.2). We rewrite these as nested integrations, where the innermost (n − 1)-fold integrals |φ_R⟩_{n−1} and ⟨φ_L|_{n−1} carry a parametric dependence on z_n. Since this is a recursive method, we assume to be able to compute intersection numbers of (n − 1)-fold integrals, which we write as ⟨φ_L|φ_R⟩_{n−1} and which are rational functions of z_n. These, in turn, can be used to project any (n − 1)-fold right and left integral into bases of master integrals |e_j^{(R)}⟩_{n−1} and ⟨e_j^{(L)}|_{n−1},

|φ_R⟩_{n−1} = Σ_j φ_{R,j} |e_j^{(R)}⟩_{n−1},   (2.17)

⟨φ_L|_{n−1} = Σ_j φ_{L,j} ⟨e_j^{(L)}|_{n−1},   (2.18)

where the coefficients φ_{R,j} and φ_{L,j} are also rational functions of z_n. Intersection numbers of n-fold integrals can be computed as

⟨φ_L|φ_R⟩ = Σ_p Res_{z_n=p} ( Σ_j ψ_j ⟨e_j^{(L)}|φ_R⟩_{n−1} ),   (2.19)

where the ψ_j are solutions of the system of differential equations

∂_{z_n} ψ_j + Σ_k ψ_k Ω_{kj} = φ_{L,j},   (2.20)

with φ_{L,j} defined as in eq. (2.18), while the matrix Ω_{jk} is the differential-equation matrix of the (n − 1)-fold left master integrals with respect to z_n,

∂_{z_n} ⟨e_j^{(L)}|_{n−1} = Σ_k Ω_{jk} ⟨e_k^{(L)}|_{n−1}.

In practice, one can take the derivative of the (n − 1)-fold master integrals with respect to z_n under the integral sign and decompose the resulting expression into the same master integrals to obtain the matrix Ω_{jk}. The sum in eq. (2.19) runs over the poles of the matrix elements of Ω. On a side note, we observe that eq. (2.19) can be rewritten as a sum of residues of the (n − 1)-fold scalar product ⟨ψ|φ_R⟩_{n−1}, where ⟨ψ|_{n−1} solves a differential equation analogous to eq. (2.13) at the level of (n − 1)-fold integrals; the more practically useful eq. (2.20) is obtained from it by decomposing both ⟨φ_L|_{n−1} and ⟨ψ|_{n−1} into (n − 1)-fold left master integrals, with the ψ_j defined as the coefficients of ⟨ψ|_{n−1} in this decomposition.

2.2 Application to Feynman integrals
Multiloop Feynman integrals in dimensional regularization are usually written, as a first step, in momentum representation. After a tensor decomposition, a generic L-loop integral of a given integral family with E + 1 external momenta p_1, ..., p_{E+1} (of which only E are independent because of momentum conservation) takes the form

I_{α_1 ⋯ α_n} = ∫ ( ∏_{j=1}^{L} d^d k_j ) 1 / ( z_1^{α_1} ⋯ z_n^{α_n} ),   (2.26)

where z_1, ..., z_n are a complete set of denominators of loop propagators and auxiliary denominators, such that any scalar product of the form k_i·k_j or k_i·p_j is a linear combination of the z_i. It follows that n = L(L + 1)/2 + LE. The exponents α_j are integers, which can be positive, negative or zero (negative exponents thus correspond to numerators).
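The counting n = L(L + 1)/2 + LE is easy to check on familiar integral families (a small sketch; the example assignments are ours):

```python
def n_baikov(L, E):
    """Number of variables z_i: scalar products k_i.k_j
    (L(L+1)/2 of them) plus k_i.p_j (L*E of them)."""
    return L * (L + 1) // 2 + L * E

# one-loop box: L = 1, E = 3 independent external momenta -> n = 4
# two-loop planar double box: L = 2, E = 3 -> n = 9
#   (7 propagators plus 2 auxiliary denominators)
```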
Because total derivatives vanish in dimensional regularization, we can write IBP identities [1,2]

0 = ∫ ( ∏_{i=1}^{L} d^d k_i ) ∂/∂k_j^μ ( v^μ / ( z_1^{α_1} ⋯ z_n^{α_n} ) ),   (2.27)

where v^μ = k_i^μ or v^μ = p_i^μ. We can exploit these identities, possibly combined with Lorentz invariance identities [3] and symmetry relations, to perform the decomposition of a given set of Feynman integrals into master integrals. The most widely used method, i.e. the Laporta algorithm [4], consists in writing a long list of linear identities of the form of eq. (2.27) (and similarly for Lorentz invariance identities and sometimes symmetries) for different choices of v^μ and α_j, and combining them into a large and sparse system of linear equations satisfied by the integrals. The solution of such a system yields the reduction to master integrals (further modded out by symmetries, if these are included in the system). As mentioned, solving this system is one of the main bottlenecks of many higher-order predictions. Moreover, the vector-space structure obeyed by Feynman integrals is not manifest in this approach.
In order to work in the framework of intersection theory, we need to rewrite eq. (2.26) in a representation that mimics the one in eq. (2.1). Throughout this work, we use the Baikov representation [33], which consists in changing the integration variables from the loop momenta k_j^μ to the propagators z_i. Other representations are also usable in this context, see e.g. [25,26,34]. In the Baikov representation, Feynman integrals read

I_{α_1 ⋯ α_n} = K ∫ d^n z  B(z)^{(d−L−E−1)/2} / ( z_1^{α_1} ⋯ z_n^{α_n} ).   (2.28)

The Baikov polynomial B is the Gram determinant of the loop momenta and the independent external momenta, expressed as a function of the z_i. K is a constant that depends neither on the integration variables nor on the exponents. Feynman integrals in the Baikov representation also obey IBP identities, which state the vanishing of total derivatives with respect to the z_j.
Comparing eq. (2.28) to the integrals defined in section 2, we identify the multivalued function u and the rational functions φ_{R,L} as

u(z) = B(z)^{(d−L−E−1)/2} ∏_j z_j^{ρ_j},   φ_{R,L} = ∏_j z_j^{−α_j},   (2.29)

respectively. As in [18], we inserted the factors z_j^{ρ_j} in u(z) because φ_{R,L} can have singularities at z_j = 0 that are generally not regulated by the Baikov polynomial alone, while having them regulated is a requirement for applying the framework of intersection theory, as we stated in section 2. Indeed, one can easily check that the differential equations we need to solve locally in order to compute intersection numbers have no solution around unregulated singularities. It is understood that we are interested in the ρ_j → 0 limit of a decomposition into master integrals. We will refer to these additional variables ρ_j as analytic regulators. Analytic regulators can generally be dropped for auxiliary denominators z_i which only appear with a negative exponent in φ_{R,L}, if the corresponding poles at infinity are regulated by the Baikov polynomial (which is usually the case).
The need for analytic regulators is a further complication in the application of intersection theory to Feynman integrals, since it adds extra parameters to the problem and obfuscates the typical block-triangular structure of integral reduction identities. We will show in section 4, however, that if we are interested in the decomposition of Feynman integrals, we can effectively remove the explicit dependence on the ρ_j from the calculation of intersection numbers, by making a suitable choice of basis for the dual integrals and systematically working in the ρ_j → 0 limit. An alternative mathematical framework for dual Feynman integrals which also removes the need for regulators was proposed in [25,26].
In this paper we identify Feynman integrals in the Baikov representation as right integrals |φ_R⟩ and their duals as left integrals ⟨φ_L|. We therefore apply the framework of intersection theory to computing intersection numbers between Feynman integrals and their duals. This approach allows us to decompose an integral directly into the chosen basis of master integrals via the projections in (2.7), without resorting to the construction of an IBP system. For a given integral, this approach requires the calculation of intersection numbers of n-fold integrals, with n being the number of independent denominators.

3 Intersection numbers via polynomial series expansions
In this section, we present the main result of this paper, namely a new rational method for computing intersection numbers via the systematic use of polynomial series expansions, or p(z)-adic expansions for brevity.
As mentioned, one of the main drawbacks of the method described in section 2 is the appearance of non-rational poles in the calculation of intersection numbers. While intersection numbers are rational, irrational intermediate expressions represent a bottleneck in complicated algebraic manipulations and make the usage of efficient techniques based on finite fields impractical.
Because we always deal with rational integrands, all their (rational and non-rational) poles are roots of the polynomial factors, with rational coefficients, of their denominators. In the following, we will thus consider each of these polynomial factors, which are prime (i.e. irreducible) over Q, rather than the individual poles, as the building blocks of our calculation.

3.1 p(z)-adic series expansions
We now elaborate on the expansion of a rational function in powers of a polynomial p(z), a necessary step for calculating intersection numbers with our newly proposed rational algorithm.
Consider a univariate polynomial p(z) of degree deg p in the variable z, prime over the field of rational numbers. We define the p(z)-adic series expansion of a rational function f(z) as

f(z) = Σ_{i=min}^{max} c_i(z) p(z)^i + O(p(z)^{max+1}),   (3.1)

where the coefficients c_i(z) are polynomials in z over the rational field, whose degree is lower than that of p(z), and which we write as

c_i(z) = Σ_{j=0}^{deg p − 1} c_{ij} z^j.   (3.2)

The notation O(p(z)^k) stands for terms proportional to the k-th power of the polynomial p(z). If z is such that p(z) = δ, where δ is some small quantity, then we obviously have

f(z) = c_min(z) δ^min + c_{min+1}(z) δ^{min+1} + ⋯ .

Hence, the p(z)-adic expansion allows us to consider the limit p(z) → 0 without resorting to non-rational operations and without knowing the explicit location of the (irrational) roots of p(z). In particular we have

f(z) = O(p(z)^min),

where i = min is the starting order of the expansion. Similarly to Laurent expansions, the latter can be deduced from polynomial factors of the form p(z)^k in f(z), but unlike Laurent expansions around irrational points, here it can be detected using only operations over the rational field.
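As a concrete sketch of the expansion in eq.s (3.1)-(3.2) for the polynomial case (a rational function additionally requires inverting the co-factor of its denominator modulo p(z)), the coefficients c_i(z) can be generated by repeated division with remainder. The code below uses exact rational arithmetic and is our illustration, not the paper's implementation:

```python
from fractions import Fraction as Q

def pdiv(f, p):
    """Polynomial long division f = q*p + r with deg r < deg p.
    Polynomials are coefficient lists, lowest degree first."""
    f = [Q(c) for c in f]
    p = [Q(c) for c in p]
    q = [Q(0)] * max(len(f) - len(p) + 1, 1)
    def strip(g):
        while g and g[-1] == 0:
            g.pop()
        return g
    while len(strip(f)) >= len(p):
        k = len(f) - len(p)        # degree of the current quotient term
        c = f[-1] / p[-1]
        q[k] = c
        for i, pi in enumerate(p):
            f[k + i] -= c * pi
    return strip(q), f

def padic(f, p):
    """Coefficients [c_0(z), c_1(z), ...] with f = sum_i c_i(z) p(z)^i
    and deg c_i < deg p, obtained by repeated division by p."""
    out = []
    while any(c != 0 for c in f):
        f, r = pdiv(f, p)
        out.append(r)
    return out

# Example: expand z^4 in powers of p(z) = z^2 - 2:
#   z^4 = 4 + 4*p(z) + p(z)^2, i.e. c_0 = 4, c_1 = 4, c_2 = 1.
coeffs = padic([0, 0, 0, 0, 1], [-2, 0, 1])
```

Note that no root of z^2 - 2 (i.e. no square root of 2) ever appears: the whole computation happens over the rationals.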
When calculating intersection numbers, we often need to expand around the irrational poles of ω or Ω. These irrational poles are the roots of polynomials p(z) that are irreducible over Q. A p(z)-adic expansion is therefore effectively equivalent to expanding a function around all the roots of such a polynomial at once, ultimately avoiding their explicit appearance.
The expansion in (3.1) can be obtained by repeated polynomial division modulo p(z) with remainder, as illustrated in appendix A. As a simpler alternative, we observe that we can implement the same expansion via a shortcut that avoids repeated polynomial divisions. We introduce an auxiliary parameter δ and define, for a generic function f(z), its remainder modulo p(z), which is a polynomial of degree deg p − 1 in z with a rational dependence on the parameter δ. We highlight that this step amounts to the substitution p(z) = δ and is exact, i.e. we obtain back the original function by substituting δ → p(z) in the right-hand side. After doing this, we series expand for small δ, obtaining

Σ_{i,j} c_{ij} z^j δ^i,

where the coefficients c_ij are identified with those appearing in the p(z)-adic expansion, as in eqs. (3.1) and (3.2). In this way we obtain the p(z)-adic expansion of a rational function up to order O(p(z)^{max+1}).
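The shortcut can be sketched on a simple concrete case. Take f(z) = 1/((z^2 − 2)(z^2 − 3)) and p(z) = z^2 − 2; substituting p(z) = δ turns the denominator into δ(δ − 1), and a Laurent expansion in δ gives the p(z)-adic coefficients c_{−1} = c_0 = c_1 = −1. In this example the reduced numerator and denominator happen to be independent of z, so a plain power-series inversion in δ suffices; in general the coefficients are polynomials of degree deg p − 1 in z and products must additionally be reduced modulo p(z). The code is our illustration, in exact rational arithmetic:

```python
from fractions import Fraction as Q

def pdiv(f, p):
    # polynomial division f = q*p + r; coefficient lists, lowest degree first
    f = [Q(c) for c in f]; p = [Q(c) for c in p]
    q = [Q(0)] * max(len(f) - len(p) + 1, 1)
    def strip(g):
        while g and g[-1] == 0: g.pop()
        return g
    while len(strip(f)) >= len(p):
        k = len(f) - len(p); c = f[-1] / p[-1]; q[k] = c
        for i, pi in enumerate(p): f[k + i] -= c * pi
    return strip(q), f

def to_delta(f, p):
    """Rewrite a polynomial f as a polynomial in delta = p(z).
    Works when every remainder is a constant (true in our example)."""
    out = []
    while any(c != 0 for c in f):
        f, r = pdiv(f, p)
        assert len(r) <= 1, "z-dependent remainder: needs reduction mod p(z)"
        out.append(r[0] if r else Q(0))
    return out                      # coefficients of powers of delta

def laurent_ratio(num, den, terms):
    """Laurent coefficients of num(delta)/den(delta) around delta = 0."""
    v = next(i for i, c in enumerate(den) if c != 0)   # order of the zero
    d = den[v:]
    inv = [1 / d[0]]                                   # series inverse of d
    for n in range(1, terms):
        inv.append(-sum(inv[k] * (d[n - k] if n - k < len(d) else 0)
                        for k in range(n)) / d[0])
    prod = [sum((num[k] if k < len(num) else 0) * inv[n - k]
                for k in range(n + 1)) for n in range(terms)]
    return {n - v: c for n, c in enumerate(prod)}      # starts at delta^(-v)

# f = 1/((z^2 - 2)(z^2 - 3)),  p(z) = z^2 - 2
p = [-2, 0, 1]
den = to_delta([6, 0, -5, 0, 1], p)       # (z^2-2)(z^2-3) -> delta^2 - delta
coeffs = laurent_ratio([Q(1)], den, 3)    # {-1: -1, 0: -1, 1: -1}
```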
Polynomial series expansions can be combined with the univariate global residue theorem (see appendix B) to compute, with purely rational operations, the sum of the residues of a rational function at the roots of a polynomial with rational coefficients. A polynomial p(z) can always be factorized over the complex field as

p(z) = l_c ∏_{k=1}^{deg p} (z − y_k),

where l_c is the leading coefficient, i.e. the coefficient of z^{deg p} in p(z), and the y_k are the roots of p(z). For a generic function f(z) we define

Res_{p(z)} f ≡ Σ_k Res_{z=y_k} f(z),

i.e. the sum of the residues of f(z) at the roots of p(z). If f(z) admits the p(z)-adic expansion in eq. (3.1), then this sum of residues can be computed without resorting to irrational operations and without explicitly knowing the location of the roots y_k, using the following generalization of the global residue theorem:

Res_{p(z)} f = c_{−1, deg p − 1} / l_c,   (3.8)

where the coefficients c_ij are defined as in eq. (3.2). We now apply p(z)-adic expansions to the computation of univariate and multivariate intersection numbers.
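For the special case of a function f(z) = g(z)/p(z)^k with polynomial g, this version of the global residue theorem is easy to verify in code: c_{−1}(z) is the (k−1)-th remainder in the repeated division of g by p, and the sum of residues is its top coefficient divided by l_c. A sketch in exact rational arithmetic (the helper names are our own):

```python
from fractions import Fraction as Q

def pdiv(f, p):
    # polynomial division f = q*p + r; coefficient lists, lowest degree first
    f = [Q(c) for c in f]; p = [Q(c) for c in p]
    q = [Q(0)] * max(len(f) - len(p) + 1, 1)
    def strip(g):
        while g and g[-1] == 0: g.pop()
        return g
    while len(strip(f)) >= len(p):
        k = len(f) - len(p); c = f[-1] / p[-1]; q[k] = c
        for i, pi in enumerate(p): f[k + i] -= c * pi
    return strip(q), f

def sum_residues(g, p, k):
    """Sum of the residues of g(z)/p(z)^k at all roots of p:
    the coefficient of z^(deg p - 1) in c_{-1}(z), divided by l_c."""
    c = []                          # p-adic coefficient polynomials of g
    for _ in range(k):
        g, r = pdiv(g, p)
        c.append(r)
    c_m1 = c[k - 1]                 # coefficient of p(z)^(-1) in g/p^k
    top = c_m1[len(p) - 2] if len(c_m1) == len(p) - 1 else Q(0)
    return top / p[-1]

p = [-2, 0, 1]                            # p(z) = z^2 - 2, l_c = 1
r1 = sum_residues([0, 0, 0, 1], p, 1)     # z^3/(z^2-2): residues y^2/2 = 1 each
r2 = sum_residues([0, 0, 0, 1], p, 2)     # z^3/(z^2-2)^2: residues 1/2 each
```

Both results (2 and 1 respectively) agree with the residues computed directly at z = ±√2, yet no square root is ever taken.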

3.2 Univariate intersection numbers
For the computation of intersection numbers of univariate integrals in the framework of p(z)-adic expansions, we rewrite eq. (2.11) as

⟨φ_L|φ_R⟩ = ⟨φ_L|φ_R⟩_∞ + Σ_{p(z)} ⟨φ_L|φ_R⟩_{p(z)},   (3.9)

where the sum runs over all the prime factors p(z) of the denominators of ω. The contribution ⟨φ_L|φ_R⟩_∞ is computed exactly as the contribution at p = ∞ to eq. (2.11). Each other addend of the form ⟨φ_L|φ_R⟩_{p(z)} is instead the sum of all contributions to (2.11) coming from the roots of p(z).
For each factor p(z), we make an ansatz for the local solution ψ of the differential equation (2.13) close to all the roots of p(z). This takes the form of a p(z)-adic expansion,

ψ = Σ_{i,j} c_{ij} z^j p(z)^i,  with 0 ≤ j < deg p.   (3.11)

We stress that, even when p(z) has irrational roots, these do not explicitly appear in eq. (3.11), which, once plugged into eq. (2.13) and p(z)-adic expanded again, yields a linear system of equations with rational coefficients that is then solved for the unknowns c_ij.
The solution for ψ is then multiplied by φ_R and the product is again p(z)-adic expanded with respect to p(z). We obtain

ψ φ_R = Σ_{i,j} c̃_{ij} z^j p(z)^i,   (3.12)

and apply the univariate global residue theorem as in eq. (3.8),

⟨φ_L|φ_R⟩_{p(z)} = c̃_{−1, deg p − 1} / l_c,   (3.13)

where l_c is the leading coefficient of p(z). As we stated, this is the sum of the contributions to the intersection number ⟨φ_L|φ_R⟩ given by all the roots of p(z).

3.3 Multivariate intersection numbers
The generalization to the multivariate case is straightforward. In this subsection we let z ≡ z_n. We rewrite eq. (2.19) as

⟨φ_L|φ_R⟩ = ⟨φ_L|φ_R⟩_∞ + Σ_{p(z)} ⟨φ_L|φ_R⟩_{p(z)},   (3.14)

where the sum is over the prime factors p(z) of the denominators of Ω.
As before, the contribution ⟨φ_L|φ_R⟩_∞ is computed exactly as the contribution at p = ∞ to eq. (2.19). Each other addend of the form ⟨φ_L|φ_R⟩_{p(z)} is instead the sum of all contributions to (2.19) coming from the roots of p(z).
We make an ansatz for the local solutions ψ_i of the system of differential equations (2.20) around all the roots of p(z) which, as in the univariate case, takes the form of p(z)-adic expansions,

ψ_i = Σ_{j,k} c_{ijk} z^k p(z)^j.   (3.15)

Eq. (2.20) thus yields a linear system that we solve for the unknowns c_ijk.
We then take the scalar product of the ψ_j with the ⟨e_j^{(L)}|φ_R⟩_{n−1} and p(z)-adic expand it with respect to p(z). We obtain

Σ_k ψ_k ⟨e_k^{(L)}|φ_R⟩_{n−1} = Σ_{i,j} c̃_{ij} z^j p(z)^i.

The contribution of the roots of p(z) to the intersection number ⟨φ_L|φ_R⟩ is again given by

⟨φ_L|φ_R⟩_{p(z)} = c̃_{−1, deg p − 1} / l_c.

4 Dual integrals and analytic regulators
As explained in subsection 2.2, one of the drawbacks of the application of intersection theory to Feynman integrals is the need to introduce an analytic regulator ρ_j for each integration variable z_j that may appear in a denominator of φ_R or φ_L. However, since we are interested in computing intersection numbers for the purpose of reducing Feynman integrals φ_R to master integrals, we can exploit the freedom in the choice of the dual basis to simplify our problem. Indeed, while intersection numbers obviously depend on the dual integrals, the coefficients of the decomposition of a Feynman integral |φ_R⟩ are independent of the choice of dual basis. In this section we discuss a simple strategy that "effectively" removes the explicit dependence on the analytic regulators from the calculation, without substantially changing the algorithm for computing intersection numbers. An alternative mathematical formalism for choosing dual integrals and dealing with this issue was presented in [25,26]. A comparison between the two approaches, as well as an in-depth study of all implications and possible limitations of our new approach, is outside the scope of this work and is left to future investigations.

4.1 Choice of dual integrals
Our approach simply consists in choosing dual integrals of the form

φ_L = ∏_{j : α_j > 0} ( ρ_j z_j^{−α_j} ) ∏_{j : α_j ≤ 0} z_j^{−α_j},   (4.1)

and systematically working in the ρ_j → 0 limit. In other words, we multiply each factor z_j^{−α_j} by the analytic regulator ρ_j if (and only if) α_j > 0, i.e. when z_j appears in the denominator of φ_L. Then, each step of the algorithm is worked out by keeping only the leading contributions in the ρ_j → 0 limit.
One can easily see that the dependence of the intersection numbers on ρ_j only appears at the j-th level of the recursion. On the other hand, intersection numbers of j-fold integrals are systematically computed in the ρ_j → 0 limit, which is finite when we use left integrands of the form (4.1). Hence, we only have to consider one regulator at a time, namely ρ_j (and only ρ_j) at the j-th step of the recursion.
Consider a generic integration variable z = z_j and the corresponding regulator ρ = ρ_j. When solving eq. (2.13) for ψ, at each order of the Laurent or p(z)-adic series expansion in z of the equation, we only consider the leading term of a ρ → 0 expansion (similar statements hold for the ψ_j solving eq. (2.20)). Since we systematically work on the leading coefficients of a ρ → 0 expansion and obtain finite intersection numbers after each step of the recursion, the calculation is effectively as simple as one without any dependence on ρ, or even simpler in most cases.
For a more detailed discussion, we distinguish the two cases of exponents α ≤ 0 and exponents α > 0 inside φ_L ∼ z^{−α}. If α ≤ 0, then φ_L has no singularity at z = 0, hence we do not expect to need an analytic regulator at all, and one may set ρ = 0 before solving for ψ. If instead α > 0, since poles at z = 0 are generally not regulated by ω, eq. (2.13) has no solution unless we add an analytic regulator. The solution for ψ, in turn, develops a pole at ρ = 0, which cancels against the prefactor ρ we add according to the prescription in (4.1), yielding a finite solution for ψ in the ρ → 0 limit. We observe that, therefore, for α > 0 the solution for ψ is non-zero (in the ρ → 0 limit) only around poles at z = 0 and possibly at z = ∞, hence we can skip the calculation at all other poles or denominator factors.
This strategy has a number of advantages.
• The calculation is effectively independent of ρ j since we are directly working on the leading coefficients of expansion around ρ j → 0. The full dependence on ρ j is never considered and, when we implement the algorithm over finite fields, we do not sample over or reconstruct the ρ j dependence.
• Intermediate analytic expressions (if reconstructed) are dramatically simpler.
• The matrix of intersection numbers ⟨φ_L|φ_R⟩ for a list of φ_L and φ_R, as well as the matrices C_ij and Ω_ij, have a block-triangular structure, similar to what we expect from integral decompositions in traditional approaches, where blocks correspond to so-called sectors. In particular, ⟨φ_L|φ_R⟩ ≠ 0 only if the set of indices k such that α_k > 0 for the integrand φ_L is either the same as, or a subset of, the corresponding set for φ_R; in the language of Feynman integrals, we say that φ_L belongs to a sector that is a subsector of φ_R. The same structure appears in the matrices C_ij and Ω_ij (examples are given in section 6). We note that this structure is the transpose of the one observed in the reduction or in the differential equations for (right) Feynman integrals |φ_R⟩.
• A large set of intersection numbers and contributions of poles to intersection numbers vanishes, for the reasons illustrated above.
• At intermediate steps of the recursive algorithm, there are sometimes fewer master integrals than in the case where the full ρ dependence is considered.

4.2 Subtleties of analytic regulators
A few subtleties need to be discussed. The system of equations for the coefficients of the expansion of ψ in z can sometimes be underdetermined, regardless of the sign of the exponent α of z. In this case, we add more constraints coming from higher-order terms in the ρ → 0 expansion of the equations. This can also happen when α ≤ 0, in which case a regulator is temporarily added, as well as for contributions to intersection numbers from denominator factors p(z) ≠ z, ∞ (see eqs. (3.14) and (3.15)), where we can regulate the system with the prescription

u(z) → p(z)^{ρ_p} u(z).   (4.2)

Even in such cases, we still solve the equations for the coefficients of ψ in the ρ_p → 0 limit, keeping higher-order terms in this limit only as necessary. In most cases the additional constraints simply amount to setting the previously undetermined coefficients to zero, hence the next-to-leading equations in the ρ → 0 (or ρ_p → 0) expansion can almost always be dropped after a numerical check. Another subtlety is that, even if a variable z is not a factor of the Baikov polynomial, Ω may still have z as a denominator factor at the corresponding step of the recursion. This, in turn, might regulate the singularity at z → 0 for at least some of the integrals φ_L, without the need for the corresponding analytic regulator ρ. In such cases ψ does not develop the 1/ρ singularity that cancels the ρ prefactor in eq. (4.1), and therefore we obtain ψ = 0 for all poles, which for the purpose of computing intersection numbers is equivalent to setting ⟨φ_L| = 0. While in principle we may remove the ρ prefactor from eq. (4.1) when this happens, in all our tests this was not needed. Indeed, even in such cases we have always found enough integrals of the form in (4.1) to obtain a complete basis of dual integrals in the ρ → 0 limit.
As a final remark, our approach is only meant to deal with singularities at z = 0 or underdetermined systems for expansions around arbitrary factors p(z). In cases where a system with no solution appears for a p(z) ≠ z, ∞, both this approach and the traditional one, which considers the full dependence of intersection numbers on the analytic regulators in (2.29), are problematic and deserve further investigation. In principle the prescription in (4.2) may fix the singular point, but the solution for ψ generally develops a singularity at ρ_p = 0 which only cancels after the intersection numbers are combined into a decomposition as in eq. (2.8). It is possible that a generalization of the strategy proposed here, namely multiplying the offending integrands by ρ_p and systematically working in the ρ_p → 0 limit after applying eq. (4.2), may also simplify the calculation in these cases.

Implementation over finite fields
One of the main motivations for developing a rational algorithm for intersection numbers, besides avoiding the algebraic difficulties of handling non-rational roots, is the possibility of implementing it over finite fields. Finite fields are numerical fields with a finite number of elements. In the context of scattering amplitudes, the most common choice is the field of integers modulo a prime number, namely Z_p = {0, ..., p − 1} with p a prime. In Z_p we can perform efficient numerical evaluations using machine-size integers (for primes p < 2^64) which are also exact, i.e. not affected by numerical inaccuracies. We can implement over Z_p any algorithm that is rational, i.e. a sequence of rational arithmetic operations. From these numerical evaluations, we can then use rational reconstruction techniques to infer full analytic results as rational functions of the free parameters of the problem (see e.g. [8-10, 12] for more details).
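To make the reconstruction step concrete, the following is a minimal standalone sketch (our own illustration, not FiniteFlow code) of Wang's rational-reconstruction algorithm, which recovers a fraction n/d from its image a = n · d⁻¹ mod m via a half-extended Euclidean algorithm:

```python
from math import gcd, isqrt

def rational_reconstruct(a, m):
    # Recover (n, d) with n/d = a (mod m) and |n|, d <= isqrt(m // 2);
    # the bound guarantees uniqueness of the reconstructed fraction.
    bound = isqrt(m // 2)
    r0, r1 = m, a % m
    t0, t1 = 0, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    if t1 == 0 or abs(t1) > bound or gcd(r1, abs(t1)) != 1:
        return None  # reconstruction failed: sample with a larger prime
    if t1 < 0:
        r1, t1 = -r1, -t1
    return (r1, t1)

p = 2**31 - 1
image = (2 * pow(3, -1, p)) % p   # image of 2/3 in Z_p
print(rational_reconstruct(image, p))  # -> (2, 3)
```

In practice one combines evaluations over several primes via the Chinese remainder theorem before reconstructing, so that the bound above is large enough for the actual coefficients.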
In this section we describe a proof-of-concept implementation of our new algorithm over finite fields that uses the Mathematica package FiniteFlow [12]. This package offers the possibility of building new algorithms by combining core algorithms into computational graphs (dataflow graphs) from a high-level interface, and of reconstructing analytic results from numerical evaluations.
We give an overview of how our implementation of multivariate intersection numbers works. At first, for simplicity, we assume that a basis of master integrals and dual master integrals is known in advance for all levels of the recursive algorithm. We address this point more thoroughly in subsection 5.4.

Setting up the recursive calculation
Before doing any explicit calculation, we make a list of the intersection numbers we need to compute at each step of the recursive algorithm. Our starting point is a list of n-variate intersection numbers we wish to compute. This will typically include all the intersection numbers needed for the decomposition of a selection of integrals |φ_R⟩ via eqs. (2.7) and (2.8), namely ⟨e_i|φ_R⟩ and the ones appearing in the metric C_ij = ⟨e_i|e_j⟩, but we consider a generic list of n-variate intersection numbers ⟨φ_L|φ_R⟩ for the sake of generality. From these, by reviewing the algorithm in subsection 2.1, we can easily make the list of (n − 1)-variate intersection numbers which are required as input for the recursive algorithm. We proceed recursively, applying the same analysis to this new list to obtain all the required (n − 2)-variate intersection numbers, and so on until we reach the univariate case.
Using this method, we easily generate a list of required intersection numbers for each step of the recursive algorithm, before any explicit calculation is done. We can thus proceed with the implementation of the algorithm described in section 3, starting from univariate intersection numbers and moving up to the n-variate case.

Laurent and p(z)-adic expansions
In this subsection we describe how we implement Laurent and p(z)-adic expansions over finite fields, which are used extensively in all steps of the recursion. While FiniteFlow already has algorithms for reconstructing Laurent expansions, we find it more convenient to adopt a different method that uses the sparse linear solvers implemented in the package.
The starting point is a rational function f(z) for which we want to compute the series expansion, which we write as

f(z) = n(z)/d(z) = (∑_j n_j z^j) / (∑_j d_j z^j),   (5.1)

and for which we have reconstructed the full dependence on z, while the coefficients n_j and d_j are generally only known numerically, i.e. they can be evaluated via a rational algorithm. A Laurent expansion of f(z) around z = 0 can easily be obtained by making the ansatz

f(z) = ∑_{i=min}^{max} c_i z^i,   (5.2)

where min is easily determined from the lowest-degree non-vanishing terms in the denominator and numerator of f, while max is the order at which the expansion is needed. We then plug eq. (5.2) into the relation

f(z) d(z) = n(z),   (5.3)

truncated at the suitable order determined by max, and impose that the coefficients of all terms proportional to z^k in this equation vanish. This gives a linear system of equations for the coefficients c_i in eq. (5.2). It is easy to see that this system is triangular, hence equivalent to a list of substitutions. Expansions around z = ∞ can be obtained similarly, by considering instead the function

g(δ) = −(1/δ^2) f(1/δ),   (5.4)

where the prefactor −1/δ^2 is the Jacobian of the transformation z → 1/δ. We then expand g(δ) for δ → 0 using the same method illustrated above. Note that the coefficients in the numerator and the denominator of g coincide with those of f, except that they multiply different monomials, hence there is no need to shift variables in this case.

The calculation of p(z)-adic expansions is instead done in two steps. Let

p(z) = ∑_j p_j z^j   (5.5)

be the polynomial with respect to which we want to expand our functions. In the first step we compute the p(z)-adic expansion of the monomials z^k, for all needed values of the exponent k. This is done by writing linear relations of the form

z^α δ^(β+1) = z^α δ^β p(z) = ∑_j p_j z^(α+j) δ^β   (5.6)

for integers α and β, where the unknowns are the monomials z^k δ^j. We solve these linear relations for the monomials, expressing monomials with higher powers in z as linear combinations of monomials with lower powers in z (and higher powers in δ). This yields the p(z)-adic expansion of every monomial we need, which takes the form

z^k = ∑_j c_j^(k)(z) δ^j,   with deg c_j^(k) < deg p.   (5.7)

We stress that this expansion is exact, i.e. it will produce an exact identity after substituting δ → p(z), although in practice it can also be truncated at the needed order in δ. We also notice that the system is triangular, hence equivalent to a sequence of linear substitutions.
In the second step, we proceed similarly to the expansion around z = 0, by making an ansatz for the p(z)-adic expansion of f,

f(z) = ∑_j ∑_{k=0}^{deg p − 1} c_jk z^k δ^j,   (5.8)

where we have set p(z) = δ for convenience. The minimum order in δ can be inferred from the order of the expansion of the denominator d(z), which in turn can easily be derived by substituting eq. (5.8) into its expression. We then substitute this expansion into eq. (5.3), perform the monomial substitutions in (5.7), and impose that all the coefficients of terms z^k δ^j of the (appropriately truncated) expansion vanish. This yields a linear system of equations for the coefficients c_jk. Notice that, in the special case where p(z) is linear, eq. (5.8) is equivalent to a shift of variables and the second step becomes identical to the expansion around zero discussed above.
We observe that we typically need to expand several rational functions having the same denominator. As a shortcut, we expand a generic numerator n(z) = ∑_j n_j z^j, where the coefficients n_j are symbolic. In practice, we treat the n_j as additional unknowns in our linear system, so that the solution for the coefficients of the expansion is a linear combination of the n_j. Moreover, for expansions around z = 0 and z = ∞, we remove factors of z in the denominator by allowing the numerator n(z) to contain terms with negative powers of z. This allows us to combine the expansion of a larger set of functions together.
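The forward substitution that solves the triangular system of eqs. (5.2)-(5.3) can be sketched as follows (our own illustration with hypothetical names; exact fractions stand in for the finite-field arithmetic used in practice):

```python
from fractions import Fraction

def laurent_coeffs(num, den, max_order):
    # Laurent expansion of f = num/den around z = 0.
    # num, den map exponent j -> coefficient of z^j; returns {i: c_i}.
    nmin = min(j for j, c in num.items() if c)
    dmin = min(j for j, c in den.items() if c)
    min_order = nmin - dmin           # lowest order of the expansion
    c = {}
    for k in range(min_order, max_order + 1):
        # match the coefficient of z^(k + dmin) in c(z) * d(z) = n(z);
        # only already-known c_i enter, so the system is triangular
        s = sum(c[i] * den.get(k + dmin - i, 0) for i in range(min_order, k))
        c[k] = (Fraction(num.get(k + dmin, 0)) - s) / den[dmin]
    return c

# 1/(z**2 * (1 - z)) = z**-2 + z**-1 + 1 + z + ...
print(laurent_coeffs({0: 1}, {2: 1, 3: -1}, 1))
```

Each new coefficient c_k only requires previously computed ones, which is why the system is equivalent to a list of substitutions.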

Polynomial factors and multivariate intersection numbers
We now describe the most important step of the implementation, namely the recursive step in the calculation of multivariate intersection numbers. Both the univariate and the multivariate algorithm illustrated in section 3 consist in the computation of Laurent or p(z)-adic expansions, which are plugged into eq. (2.13) to set up linear systems for the coefficients of ψ, which are ultimately used to take the sum of residues contributing to the intersection numbers. All of these are clearly combinations of rational operations. The non-trivial part, which we discuss in the following, is how to generate the input for each step of the recursion without reconstructing (potentially large) intermediate results of the previous steps.
For the univariate case, the input is just the analytic expression for u(z), or equivalently the polynomials B j (z) and the generic exponents γ j .
For the multivariate case, we start from a numerical implementation of the required list of all (n − 1)-variate intersection numbers. These, via straightforward algebraic operations, are combined to obtain the following quantities:

• the matrix elements Ω_ij defined in eq. (2.21),
• the coefficients φ_{L,j} of the decomposition into left master integrals of ⟨φ_L|_{n−1} needed in the differential equation (2.20),
• the intersection numbers ⟨e

In the following we call X_n the union of these three lists of quantities, which we are thus able to evaluate numerically (over finite fields) as a function of z_n and the other parameters it depends on, including the other integration variables z_k with k > n. It is easy to see that X_n is the input required for the computation of n-variate intersection numbers via the recursive algorithm illustrated in this paper. More precisely, we need to know the dependence on z_n of each function f ∈ X_n. Moreover, we would like to have the list of factors p(z_n) appearing in eq. (3.15). In general, we want to avoid computing the polynomial factors p(z_n) of the denominators of X_n every time we evaluate intersection numbers. Knowing the full analytic dependence of the factors p(z_n) (not just on z_n but on every other parameter) avoids the need to perform polynomial factorization at each numerical evaluation and also allows us to significantly simplify the reconstruction in z_n. Moreover, as we will now see, the reconstruction of the full analytic dependence of p(z_n) is generally very simple and can be performed with a relatively small number of numerical evaluations.
In order to set up the recursion in our numerical implementation, we follow three steps.

1. We first reconstruct every f ∈ X_n in the variable z_n modulo a prime number p, while every other parameter is set to a random numerical value. We then factorize the denominators of the reconstructed functions, modulo p, to obtain a semi-numerical list of factors p(z_n).
2. We proceed to reconstruct the full analytic dependence of the denominator factors p(z_n). We first identify a "simple" subset S_n of X_n such that i) the union of the z_n-dependent denominator factors in S_n coincides with the one in X_n, and ii) the degrees of the numerators and denominators of the functions in S_n are as low as possible. The subset S_n is easily identified from the z_n reconstruction of step 1. We then reconstruct the z_n-dependent denominator factors in S_n as follows. For each f ∈ S_n, we make an ansatz for its dependence on z_n of the form

f = (∑_j n_j z_n^j) / (∑_j d_j z_n^j),   (5.10)

which, again, is easily built from the result of step 1. We stress that the coefficients n_j and d_j have a rational dependence on the other parameters, which is implicit in the previous equation. We also fix the arbitrary normalization of the numerator and denominator by setting the lowest-degree coefficient of the denominator equal to one, i.e. d_min(j) = 1. This choice also has the effect of moving any factor of f that does not depend on z_n into the numerator coefficients n_j, hence keeping the coefficients d_j very simple. We can thus write the previous equation as

f ∑_j d_j z_n^j − ∑_j n_j z_n^j = 0,   (5.11)

which, evaluated for several values of z_n, yields a linear system that we solve for the n_j and d_j. From numerical solutions of this linear system, obtained with different values of the parameters they depend on, we reconstruct the full analytic dependence of the denominator coefficients d_j only. Given their relative simplicity, this usually requires a small number of evaluations and the reconstruction is thus quite efficient. By factorizing the full analytic form of these denominators in a computer algebra system, we obtain the full list of factors defined in eq. (3.15).
3. By combining the results obtained in steps 1 and 2, we can easily map each denominator factor (or each suitable product of factors) appearing in the semi-numerical reconstruction of step 1 into its fully analytic counterpart reconstructed in step 2. This allows us to make an ansatz for each f ∈ X_n of the form

f = ∑_{k, α_1,...,α_m} c_{α_1,...,α_m,k} z_n^k / (p_1(z_n)^{α_1} ⋯ p_m(z_n)^{α_m}),   (5.12)

where the unknown coefficients c_{α_1,...,α_m,k} are independent of z_n. By evaluating X_n for several values of z_n, we obtain a linear system of equations that we solve for the unknown coefficients. This list of numerically evaluated coefficients, together with the functional form in z_n given by the ansatz, is used as input for the calculation of n-variate intersection numbers, which proceeds following subsection 3.3.
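The fit of ansatz coefficients from repeated evaluations (steps 2 and 3 above) boils down to a small linear solve. Here is a minimal sketch with hypothetical names, using exact fractions in place of the Z_p arithmetic used in practice, for the normalized rational ansatz of step 2:

```python
from fractions import Fraction

def solve_linear(A, b):
    # dense Gauss-Jordan elimination over exact fractions
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[piv] = M[piv], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[r][n] for r in range(n)]

def fit_rational(samples, num_deg, den_deg):
    # Fit f(z) = (sum_j n_j z^j) / (1 + sum_{j>=1} d_j z^j) from samples
    # [(z, f(z))], normalizing d_0 = 1 as in the text; each sample gives
    # one linear equation from f(z) * d(z) - n(z) = 0.
    rows, rhs = [], []
    for z, fz in samples:
        z, fz = Fraction(z), Fraction(fz)
        rows.append([z**j for j in range(num_deg + 1)]
                    + [-fz * z**j for j in range(1, den_deg + 1)])
        rhs.append(fz)
    return solve_linear(rows, rhs)  # -> [n_0..n_num_deg, d_1..d_den_deg]

# recover f(z) = (1 + 2 z)/(1 + 3 z) from three evaluations
print(fit_rational([(1, Fraction(3, 4)), (2, Fraction(5, 7)),
                    (3, Fraction(7, 10))], 1, 1))
```

Normalizing d_0 = 1 removes the overall scale ambiguity of the ansatz, exactly as the text does for the lowest-degree denominator coefficient.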

Master integrals
In each step of the recursive algorithm, we need to select a basis and a dual basis of master integrals, which for simplicity were assumed to be known in advance in the discussion above. In ref. [34] it is shown that the number of master integrals can be found as the number of solutions of a system of rational equations. As observed in [18], the same number can also be computed as the dimension of a polynomial ideal, which in turn can be found via rational operations. A guess for the list of master integrals can be made from the list of independent monomials in such an ideal; see e.g. refs. [22, 32] for more details.
As an alternative, we can also find bases of master integrals with a more pragmatic approach based on the computation of intersection numbers. We may start from a list of integrals which form an overcomplete basis (i.e. a set of spanning vectors that is not minimal) and compute an "enlarged metric" of intersection numbers among them, which is obviously not invertible since the basis is overcomplete. We then column-reduce this matrix (or better, a numerical evaluation of it over finite fields) to find a minimal list of independent columns, which corresponds to a basis of independent dual integrals. The independent columns can then be row-reduced, and the linearly independent rows correspond to a basis of master integrals. In each step of the recursive algorithm, this strategy is used to find a basis E_R and a dual basis E_L of master integrals.
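The column selection used here can be sketched as follows (our own minimal example, not the FiniteFlow implementation): the pivot columns of the Gauss-Jordan-reduced matrix over Z_p are the linearly independent columns of the enlarged metric.

```python
def pivot_columns(mat, p):
    # Gauss-Jordan elimination over Z_p; returns the indices of the
    # pivot (i.e. linearly independent) columns of mat.
    m = [[x % p for x in row] for row in mat]
    rows, cols = len(m), len(m[0])
    pivots, r = [], 0
    for c in range(cols):
        pr = next((i for i in range(r, rows) if m[i][c]), None)
        if pr is None:
            continue           # column c depends on the previous pivots
        m[r], m[pr] = m[pr], m[r]
        inv = pow(m[r][c], -1, p)
        m[r] = [(x * inv) % p for x in m[r]]
        for i in range(rows):
            if i != r and m[i][c]:
                f = m[i][c]
                m[i] = [(a - f * b) % p for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
        if r == rows:
            break
    return pivots

# third column = first + second: only two independent (dual) integrals
print(pivot_columns([[1, 2, 3], [2, 4, 6], [1, 1, 2]], 2**31 - 1))  # -> [0, 1]
```

Applying the same routine to the transpose of the selected columns would pick out the independent rows, i.e. a basis of master integrals.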

Examples
We present some one- and two-loop examples of reduction to master integrals that use the new rational algorithm presented in this paper. All the examples we showcase also use our new prescription of section 4 for dealing with analytic regulators. The two methods are however independent of each other, and our new rational algorithm has also been extensively tested on several examples retaining the full dependence on analytic regulators (see also [32]).
In the following subsections we report some of the decompositions we obtained and the bases of master integrals we used, including the ones for intermediate layers of integration. We also include, in some cases, the reconstructed metric of intersection numbers between master integrals. We stress that one never needs to analytically reconstruct the metric when performing a reduction; we report it in some examples for the sole purpose of showing its simplicity and block-triangular structure, a feature that is absent when keeping the full dependence on analytic regulators.
In the following, we use the notation in eq. (2.26) to identify Feynman integrals |φ_R⟩. As in the previous sections, the integration variables are z_1, ..., z_n, and in the recursive algorithm we choose the ordering of integration where z_1 is the innermost integration variable and z_n the outermost one. Even if we do not write it explicitly, to avoid cluttering the notation, we understand that all dual integrals ⟨φ_L| are multiplied by analytic regulators according to the prescription in section 4. In other words, the replacement in eq. (6.1) is understood for all the integrands of left integrals (i.e. dual integrals). For all families we produced reductions for a wide selection of integrals with various combinations of positive and negative powers of denominators. All of these have been successfully checked against the decomposition obtained using the traditional Laporta algorithm. A small selection among the simplest ones is quoted for illustration purposes.

One-loop families
At one loop, we consider the integral families of the massless box (Fig. 1) and of the one-loop box with two off-shell external legs, also known as the two-mass-hard box (Fig. 2). We compute the reduction of several integrals belonging to these two families.

Massless box

We consider the family of the one-loop massless box with four external legs p_1, ..., p_4 satisfying p_i^2 = 0 and momentum conservation p_1 + p_2 + p_3 + p_4 = 0. The integrals in this family depend on two Mandelstam invariants, s and t. The loop propagators z_i yield the Baikov polynomial B(z).

We start with the one-fold integral in z_1. The input for univariate intersection numbers is the Baikov polynomial and its exponent γ = (d − 5)/2. The same basis is used for both left and right integrals, where, as stated above, for left integrals the replacement in eq. (6.1) is understood. From the univariate intersection numbers between the first-layer master integrals we obtain the corresponding metric. Using one-fold intersection numbers, we are able to reduce the one-fold integrals in z_1 to master integrals and compute the differential equation matrix w.r.t. z_2, defined as in eq. (2.21), whose denominators contain two z_2-dependent polynomial factors, p_1 and p_2. We notice that both the metric and Ω_ij present a triangular structure. This is a general feature of our approach for dealing with analytic regulators. We also observe that p_2 is quadratic in z_2, as well as in the other variables z_i it depends on. Hence, even in this simple example, Ω_ij has non-rational poles.
After calculating the relevant univariate intersection numbers, we move to the second integration variable, z_2, and compute the needed bivariate intersection numbers, together with the bases of left and right integrals for the second layer and the corresponding metric of bivariate intersection numbers. We proceed in the same way for the third integration variable z_3. We finally move on to the fourth and last integration variable, z_4. The left and right bases for this integral family correspond to one box and two bubbles, and this last step produces the metric of the full family.

Two-mass-hard box

The master integrals for the last layer coincide with those of the integral family and consist of one box, one triangle and four bubbles. The metric presents a triangular structure, while the metric for the last layer of master integrals is diagonal, due to the fact that no master integral belongs to a subsector of the others. We quote a simple example of a reduction for this family.

Massive two-loop sunrise

The 5-fold metric presents a block-triangular structure. We quote an example of a reduction for this family. We observe that the master integrals in this example obey additional symmetry relations which cannot be cast as IBP identities. Indeed, as already stated in section 2, the framework of intersection theory does not take these into account. This is not an issue, because these additional relations can be easily identified and added afterwards. In this case, one can easily see that integrals of the form I[0, α_2, α_3, 0, α_5] are invariant under permutations of the exponents α_j. This reduces the number of independent integrals from six to three.

Massless pentabox

Since we are on the maximal cut, all the integrals we are considering belong to the same sector and thus all entries of the metric C_ij are non-vanishing. We reconstructed the reduction of all integrals up to degree 5 in the variables z_i (which are those that contribute e.g. to QCD amplitudes) and successfully checked them against the Laporta method.

Conclusions and Outlook
We presented a new method for computing intersection numbers via a purely rational algorithm that does not require any change of basis or integral transformation. This is achieved via the systematic use of the p(z)-adic series expansion. The latter expands functions in powers of a polynomial, allowing us to study their behaviour close to potentially irrational singular points without having to perform any irrational operation or to know the explicit location of these points. This result represents significant progress in the application of intersection theory to the decomposition of Feynman integrals. The new algorithm is elegant and satisfactory from a mathematical and theoretical point of view, since every step of the calculation is rational. It also sidesteps the algebraic bottleneck of dealing with irrational expressions and opens up the possibility of combining intersection theory with efficient computational techniques, such as finite fields and functional reconstruction. As a proof of concept, we implemented the new algorithm over finite fields using the Mathematica package FiniteFlow, and tested it on several one- and two-loop examples.
We also proposed a new strategy for dealing with analytic regulators, de facto removing any analytic dependence of the calculation on them and showing dramatic simplifications in both the analytic expressions and the structure of the results. These results open up several possible lines of research aimed at making intersection theory a viable approach for phenomenological applications. Besides optimizing and simplifying our proof-of-concept implementation, exploring different integral representations which reduce the number of integration variables may lead to substantial improvements. Examples are the loop-by-loop Baikov representation and the Lee-Pomeransky representation [34]. Even more interestingly, a method for the direct calculation of multivariate intersection numbers, thus stepping away from the recursive univariate method, has recently been proposed [24]. A possible generalization of our method could build on [24].

To find the next term in the expansion we repeat the procedure, considering q_1(z)/d(z) as a new rational function. In general we have the recursive formula

c_i(z) = (q_i(z)/d(z)) mod p(z),   (A.7)

where q_i(z) for i > 0 can be computed from q_{i−1}(z) and c_{i−1}(z) using

q_{i−1}(z) = c_{i−1}(z) d(z) + q_i(z) p(z),   (A.8)

with q_0(z) = n(z). Following this procedure we can compute the c_i(z) up to the order we need.
Readers familiar with the concept of p-adic numbers may recognize that the construction of the p(z)-adic expansion of a rational function is analogous to the construction of the p-adic expansion of a rational number, where p is a prime number rather than a prime polynomial over the rational field and the remainder of integer division is used instead of the polynomial remainder.
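The recursion of eqs. (A.7)-(A.8) can be sketched in a few lines of SymPy (our own illustration of the construction, not the finite-field implementation described in section 5):

```python
import sympy as sp

z = sp.symbols('z')

def padic_expansion(n, d, p, order):
    # p(z)-adic expansion of f = n/d: returns [c_0(z), c_1(z), ...]
    # with f = sum_i c_i(z) * p(z)**i and deg c_i < deg p.
    d_inv = sp.invert(d, p, z)                     # 1/d(z) modulo p(z)
    q, coeffs = n, []
    for _ in range(order):
        c = sp.rem(sp.expand(q * d_inv), p, z)     # c_i = q_i/d mod p(z), eq. (A.7)
        coeffs.append(c)
        q = sp.div(sp.expand(q - c * d), p, z)[0]  # q_{i+1} from eq. (A.8)
    return coeffs

# expand z/(z + 1) in powers of the prime polynomial p(z) = z**2 - 2
print(padic_expansion(z, z + 1, z**2 - 2, 2))  # -> [2 - z, z - 1]
```

The division in the last line of the loop is exact, since q_i − c_i(z) d(z) is divisible by p(z) by construction.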

B The univariate global residue theorem
We give a brief overview of the univariate global residue theorem, whose generalization is a key ingredient of the calculation of intersection numbers with the rational algorithm described in section 3. In the following, Res_{p(z)} is defined as in eq. (3.7).

Figure 4. Two-loop massive sunrise with equal internal masses and off-shell external momenta.

Figure 5. Massless pentabox; in the example we consider its maximal cut.
Consider a rational function f(z) = n(z)/d(z), with d(z) coprime with a polynomial p(z); we can calculate the polynomial remainder of f(z) modulo p(z) with eq. (A.4). It takes the explicit form

f̄(z) ≡ f(z) mod p(z) = ∑_{j=0}^{deg p − 1} f̄_j z^j.   (B.2)

The univariate global residue theorem states that the global residue of f(z)/p(z), namely the sum of all the local residues of f(z)/p(z) at the zeroes of p(z), is

Res_{p(z)} ( f(z)/p(z) ) = f̄_{deg p − 1} / l_c,   (B.3)

where l_c is the leading coefficient of p(z).
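Eq. (B.3) is easy to verify numerically. The following SymPy sketch (our own check, assuming simple zeroes of p(z)) compares it with the direct sum of local residues:

```python
import sympy as sp

z = sp.symbols('z')

def global_residue(f, p):
    # eq. (B.3): coefficient of z**(deg p - 1) in (f mod p),
    # divided by the leading coefficient l_c of p
    fbar = sp.Poly(sp.rem(f, p, z), z)
    return fbar.coeff_monomial(z**(sp.degree(p, z) - 1)) / sp.LC(p, z)

p = sp.expand((z - 1) * (z - 2) * (z + 3))   # simple zeroes at 1, 2, -3
f = z**2 + 1
direct = sum(sp.residue(f / p, z, r) for r in [1, 2, -3])
print(direct, global_residue(f, p))   # the two values agree
```

Note that the right-hand side of (B.3) never requires knowing the zeroes of p(z), which is precisely what makes the theorem useful for a rational algorithm.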