Polyhedral Omega: A New Algorithm for Solving Linear Diophantine Systems

Polyhedral Omega is a new algorithm for solving linear Diophantine systems (LDS), i.e., for computing a multivariate rational function representation of the set of all non-negative integer solutions to a system of linear equations and inequalities. Polyhedral Omega combines methods from partition analysis with methods from polyhedral geometry. In particular, we combine MacMahon's iterative approach based on the Omega operator and explicit formulas for its evaluation with geometric tools such as Brion decompositions and Barvinok's short rational function representations. In this way, we connect two recent branches of research that have so far remained separate, unified by the concept of symbolic cones which we introduce. The resulting LDS solver Polyhedral Omega is significantly faster than previous solvers based on partition analysis and it is competitive with state-of-the-art LDS solvers based on geometric methods. Most importantly, this synthesis of ideas makes Polyhedral Omega the simplest algorithm for solving linear Diophantine systems available to date. Moreover, we provide an illustrated geometric interpretation of partition analysis, with the aim of making ideas from both areas accessible to readers from a wide range of backgrounds.

In this article we present Polyhedral Omega, a new algorithm for computing a multivariate rational function expression for the set of all solutions to a linear Diophantine system. This algorithm connects two branches of research, partition analysis and polyhedral geometry, between which there has been little interaction in the past. To make this paper accessible to researchers from either field, as well as to readers with other backgrounds, we give an elementary presentation of the algorithm itself and we take care to motivate the key ideas behind it, in particular their geometric content. To begin, we use this introduction to define the problem of computing rational function solutions to linear Diophantine systems and to give an overview of the algorithms developed in partition analysis and polyhedral geometry to solve it, before pointing out the benefits of Polyhedral Omega.
Linear Diophantine Systems and Rational Functions. Let A ∈ Z^{m×n} be an integer matrix and let b ∈ Z^m be an integer vector. In this article, we are interested in finding the set of non-negative integer vectors x ∈ Z^n_{≥0} such that Ax ≥ b. Since we are restricting our attention to non-negative solutions x ∈ Z^n_{≥0}, it is equivalent to consider systems consisting of any combination of equations and inequalities. However, to streamline this article, we are going to focus on the case Ax ≥ b. We call such a system of constraints, given by A and b, a linear Diophantine system (LDS).
Linear Diophantine systems are of great importance, both in practice and in theory. For example, the Integer Programming (IP) problem, which is about computing a solution to an LDS that maximizes a given linear functional, plays a pivotal role in operations research and combinatorial optimization [55]. In this article, however, we are not interested in finding one optimal solution. Instead, we wish to compute a rational function representation of the set of all solutions to an LDS.
Of course, the set of solutions to a linear Diophantine system may be infinite. For example, the equation x_1 − x_2 = 0, which is equivalent to the system x_1 − x_2 ≥ 0 and x_2 − x_1 ≥ 0, has the solution set {(0, 0), (1, 1), (2, 2), . . .}. One way to represent such infinite sets of solutions is via multivariate rational functions. We identify a vector x ∈ Z^n with the monomial z^x := z_1^{x_1} · z_2^{x_2} · . . . · z_n^{x_n} in n variables. Then, using the geometric series expansion formula, we can represent the above set of solutions by the rational function

Σ_{k=0}^{∞} (z_1 z_2)^k = 1 / (1 − z_1 z_2).
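The correspondence between a solution set and its rational function can be checked mechanically. The following sketch (the truncation bound N is our own choice) compares the monomials in the geometric series expansion of 1/(1 − z_1 z_2) with the solutions of x_1 − x_2 = 0.

```python
# 1/(1 - z1*z2) = sum_k (z1*z2)^k, so its monomials are z1^k z2^k; these
# should enumerate exactly the non-negative integer solutions of x1 - x2 = 0.
N = 20  # truncation bound
monomials = {(k, k) for k in range(N)}
solutions = {(x1, x2) for x1 in range(N) for x2 in range(N) if x1 - x2 == 0}
ok = monomials == solutions
print(ok)  # True
```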

Problem 1.1. Rational Function Solution of Linear Diophantine System (rfsLDS)
Input: A ∈ Z^{m×n}, b ∈ Z^m
Output: An expression for the rational function ρ ∈ Q(z_1, . . . , z_n) representing the set of all non-negative integer vectors x ∈ Z^n_{≥0} such that Ax ≥ b.
Such rational function solutions to LDS are of great importance in many applications. For example, they can be used to prove theorems in number theory and combinatorics [4,5,50], compute volumes [16], count integer points in polyhedra [17], maximize non-linear functions over lattice points in polyhedra [32], compute Pareto optima in multi-criteria optimization [30], integrate and sum functions over polyhedra [12], compute Gröbner bases of toric ideals [29], perform various operations on rational functions and quasipolynomials [15], and sample objects from polyhedra [52], from combinatorial families [35] and from statistical distributions [28]. We recommend the textbooks [13,20,31] for an introduction.
Note that, while rfsLDS is stated above in terms of pure inequality systems, this restriction is not essential. The methods and algorithms we present in this article can easily be extended to handle mixed systems directly, as does our implementation [23].
Partition Analysis. One of the major landmarks in the long history of Problem 1.1 is MacMahon's seminal work Combinatory Analysis [50], published in 1915, where he presents partition analysis as a general framework for solving linear Diophantine systems in the above sense, particularly in the context of partition theory. The general approach of partition analysis is to employ a set of explicit analytic formulas for manipulating rational function expressions that can be iteratively applied to obtain a solution to a given linear Diophantine system. We will examine partition analysis in detail in Section 2; for now, we will confine ourselves to a quick preview to convey the general flavor.
Consider the linear Diophantine inequality

2x_1 + 3x_2 − 5x_3 ≥ 4.    (1)

To solve this system, MacMahon introduces the Omega operator Ω_≥, defined on rational functions in K(x_1, x_2, x_3)(λ) in terms of their series expansion by

Ω_≥ Σ_{s=−∞}^{∞} c_s λ^s := Σ_{s=0}^{∞} c_s,

where c_s ∈ K(x_1, x_2, x_3) for all s. In other words, Ω_≥ acts by truncating all terms in the series expansion with negative λ exponent and dropping the variable λ. Using the Omega operator, the generating function φ_S of the set of solutions S of (1) can be written as

φ_S = Ω_≥ λ^{−4} / ((1 − x_1 λ^2)(1 − x_2 λ^3)(1 − x_3 λ^{−5})).

The expression on the right-hand side of this equation is known as the crude generating function. Partition analysis is about developing calculi of explicit formulas for evaluating the Omega operator when applied to rational functions of a certain form. MacMahon's 1st Rule provides a typical example:

Ω_≥ 1 / ((1 − λx)(1 − y/λ)) = 1 / ((1 − x)(1 − xy)).

Applied iteratively, such formulas allow us to symbolically transform the crude generating function into a rational function expression for φ_S, as desired. At the turn of the 20th century, Elliott was the first to develop such a calculus that yielded an algorithmic method for solving a linear Diophantine equation. At the same time, MacMahon was pursuing the ambitious and visionary project of applying such methods to partition theory. While algorithmic, Elliott's method was not practical for MacMahon's purposes, as it was too computationally intensive to carry out by hand. Therefore, MacMahon developed and continually extended his own calculus, which enabled him to solve many partition theoretic problems, even though it did not constitute a general algorithm. Despite many successes, MacMahon did not achieve the goal he had originally set himself of using his partition analysis calculus to prove his famous formula for plane partitions [5]. It is perhaps for this reason that partition analysis fell out of interest for much of the 20th century, before it started to attract renewed attention in recent decades, beginning with Stanley [56].
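MacMahon's 1st Rule can be verified by brute force. The sketch below takes the rule in its standard form, Ω_≥ 1/((1 − λx)(1 − y/λ)) = 1/((1 − x)(1 − xy)), expands both sides as truncated series, and compares monomials.

```python
# Brute-force check of MacMahon's 1st Rule via truncated series: expand the
# left side as sum_{i,j} x^i y^j lam^(i-j), keep the terms with non-negative
# lam-exponent (the action of Omega_>=), and match against the right side.
N = 12  # truncation bound on exponents

# left side after applying Omega_>=: monomials x^i y^j with i - j >= 0
lhs = {(i, j) for i in range(N) for j in range(N) if i - j >= 0}

# right side: 1/((1-x)(1-x*y)) = sum_{a,b} x^(a+b) y^b
rhs = {(a + b, b) for a in range(N) for b in range(N) if a + b < N}

ok = lhs == rhs
print(ok)  # True
```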
In 1993 Barvinok was able to show in a seminal paper [16] that for every LDS there does exist a rational function expression for its set of solutions whose encoding length is bounded polynomially in terms of the encoding length of the input, provided that the number of variables is fixed. Moreover, Barvinok gave an algorithm for computing such a short rational function representation. The key ingredient that makes these short decompositions possible is Barvinok's method for computing a short signed decomposition of a simplicial cone into unimodular simplicial cones. These Barvinok decompositions will play an important role in this article as well.
Barvinok's algorithm for solving rfsLDS combines Barvinok decompositions with a number of other sophisticated tools from polyhedral geometry, ranging from the computation of vertices and edge directions of polyhedra [40,41] to the triangulation of polyhedral cones [34,53]. This combination makes Barvinok's algorithm very complex and much more involved than the relatively straightforward collection of rules based on explicit formulas that MacMahon had envisioned in the partition analysis framework. A testament to this difficulty is the fact that even though Barvinok's algorithm quickly grabbed the attention of the scientific community, it took 10 years and a sequence of additional research papers before Barvinok's algorithm was first implemented in 2004 by De Loera et al. [33].
The importance of Barvinok's short rational function representations in this area cannot be overstated, as they have given rise to a whole family of algorithms and theoretical complexity results for a large range of problems, many of which we have mentioned above. Of particular note in our context is that Barvinok and Woods proved a theorem in [15] which implies in particular that MacMahon's Omega operator can be evaluated in polynomial time in fixed dimension, not only on crude generating functions as they arise in partition analysis but on a wider class of rational functions.
To conclude our overview of prior literature, we briefly mention a few further related publications [25,43,44], a detailed discussion of which is out of the scope of this article. The articles [43,44] give two different algorithms for solving rfsLDS via an explicit formula, phrased in terms of a summation over all submatrices of the original system. This can be viewed as a brute-force iteration over all simplicial cones formed by the hyperplane arrangement given by the system, together with a recursive algorithm for computing the numerators of the corresponding rational functions. In contrast to an explicit formula given in [25], the formulas from [43,44] are algorithmically tractable, though the running time can be exponential, even if the dimension is fixed. Since the iterative application of Ω is not the focus of [43,44], we do not view these algorithms as part of the partition analysis family for the purposes of this article. To our knowledge these algorithms have not been implemented.
Our Contribution. In this article, we bring partition analysis and polyhedral geometry together. Somewhat surprisingly, these two branches of research have so far had relatively little interaction. To bridge this gap, we provide a detailed and illustrated study of these two fields from a unified point of view. In particular, we interpret the partition analysis calculi provided by Elliott, MacMahon and Andrews-Paule-Riese from a geometric perspective. For example, MacMahon's ansatz of setting up the crude generating function corresponds, in the language of geometry, to a clever way of writing an arbitrary polyhedron as the intersection of a simplicial cone with a collection of coordinate half-spaces, a construction which we call the MacMahon lifting. While the connection between rational functions and polyhedral cones is well-known and a discussion of the Elliott-MacMahon partition analysis algorithm from this point of view can be found in [56], some of this material is new, including our geometric discussion of Andrews-Paule-Riese and Xin.
The main contribution of this article is Polyhedral Omega, a new algorithm for solving linear Diophantine systems which is a synthesis of key ideas from both partition analysis and polyhedral geometry. Just like partition analysis algorithms, Polyhedral Omega is an iterative algorithm given in terms of a few simple explicit formulas. The key difference is that the formulas are not given in terms of rational functions, but instead in terms of symbolic cones. These are symbolic expressions for simplicial polyhedral cones, given in terms of their generators. Applying ideas like Brion's theorem from polyhedral geometry gives rise to an explicit calculus of rules for intersecting polyhedra with coordinate half-spaces, i.e., for applying the Omega operator on the level of symbolic cones.
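To make the notion concrete, here is a minimal sketch of a symbolic cone as a data structure. The naming and the 2-dimensional membership test are our own illustration, not the paper's implementation [23]: a simplicial cone is stored purely by its generators, its apex, and a sign for use in signed decompositions.

```python
from dataclasses import dataclass
from fractions import Fraction
from typing import Tuple

Vec = Tuple[Fraction, ...]

@dataclass(frozen=True)
class SymbolicCone:
    generators: Tuple[Vec, ...]  # linearly independent generators
    apex: Vec                    # rational translation vector q
    sign: int = 1                # +1 or -1 within a signed decomposition

    def contains(self, p: Vec) -> bool:
        """Membership test for a cone with two generators in the plane
        (2x2 case only, for illustration)."""
        u, v = self.generators
        x = tuple(pi - qi for pi, qi in zip(p, self.apex))
        det = u[0] * v[1] - u[1] * v[0]
        # coordinates of p - q in the generator basis, by Cramer's rule
        a = (x[0] * v[1] - x[1] * v[0]) / det
        b = (u[0] * x[1] - u[1] * x[0]) / det
        return a >= 0 and b >= 0

C = SymbolicCone(generators=((Fraction(1), Fraction(0)), (Fraction(1), Fraction(3))),
                 apex=(Fraction(0), Fraction(0)))
print(C.contains((Fraction(2), Fraction(3))))  # True: (2,3) = 1*(1,0) + 1*(1,3)
```

The point of the representation is that the transformation rules of the algorithm manipulate only this finite symbolic data, never series or rational functions.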
After the Omega operator has been applied, the final symbolic cone expression can then be converted to a rational function expression using either an explicit formula based on the Smith Normal Form of a matrix, or using Barvinok decompositions. The Smith Normal Form approach has the advantage of being simpler and easier to implement, while the Barvinok decomposition approach is much faster on many classes of LDSs and produces shorter rational function expressions. However, for many applications it can be advantageous to not convert the symbolic cone expression to a rational function expression at all, since this conversion can increase the size of the output dramatically. Especially when the output is intended for inspection by humans, we strongly recommend working with the symbolic cone expression as it is typically much more intelligible.
Polyhedral Omega has several advantages. In comparison with partition analysis methods, Polyhedral Omega is much faster. In fact, on certain classes of examples it offers exponential speedups over previous partition analysis algorithms (even if Barvinok decompositions are not used!) by virtue of working with symbolic cones instead of rational functions and through an improved choice of transformation rules. In comparison with polyhedral geometry methods, Polyhedral Omega is much simpler. It does not require sophisticated algorithms for explicitly computing all vertices and edge directions of a polyhedron or for triangulating non-simplicial cones. These computations happen implicitly through the straightforward application of a few simple rules given by explicit formulas. At the same time, if Barvinok decompositions are used for the conversion of symbolic cones into rational functions, then Polyhedral Omega runs in polynomial time in fixed dimension, i.e., it lies in the same complexity class as the fastest known algorithms for the rfsLDS problem. Polyhedral Omega is the first algorithm from the partition analysis family that achieves polynomial run time in fixed dimension. Moreover, Polyhedral Omega is the first partition analysis algorithm for which a detailed complexity analysis is available. This analysis is made possible by our geometric point of view.
On the whole, Polyhedral Omega is the overall simplest algorithm for solving linear Diophantine systems, while at the same time its performance is competitive with the current state-of-the-art.
In this article, we give a theoretical description and analysis of Polyhedral Omega, together with a detailed motivation and illustration of the central ideas that makes the exposition accessible to readers without prior experience in either partition analysis or polyhedral geometry. The definition of the algorithm is explicit enough to make it possible for interested readers to implement the algorithm even without being experts in the field. Moreover, we have implemented Polyhedral Omega on top of the Sage system [57] and the code is publicly available [23].
Organization of this Article. In Section 2, we give a geometric interpretation of previous partition analysis calculi by Elliott, MacMahon, Andrews-Paule-Riese and Xin. In Section 3, we present the Polyhedral Omega algorithm and prove its correctness. We take particular care to motivate and illustrate the key ideas in an accessible manner. In Section 4, we give a detailed complexity analysis of our algorithm. In Section 5, we then compare Polyhedral Omega to other algorithms from both partition analysis and polyhedral geometry. Finally, we conclude in Section 6, where we also point out directions for future research.

2. PARTITION ANALYSIS AND ITS GEOMETRIC INTERPRETATION
In this section we present some of the major landmarks of partition analysis and interpret the results, which are typically phrased in the language of analysis, from the perspective of polyhedral geometry.
In the following, we will introduce the required concepts and terminology from polyhedral geometry as we go along. However, a comprehensive introduction to polyhedral geometry is out of scope of this article. We therefore refer the reader to the excellent textbooks [20,31,54,66] for further details.

2.1. MacMahon.
The Ω operator and the crude generating function. MacMahon presented his investigations concerning integer partition theory in a series of (seven) memoirs, published between 1895 and 1916. In the second memoir [49], MacMahon observes that the theory of partitions of numbers develops in parallel to that of linear Diophantine equations. In particular, he defines the Ω operator in Article 66: "Suppose we have a function F(x, a) which can be expanded in ascending powers of x. Such expansion being either finite or infinite, the coefficients of the various powers of x are functions of a which in general involve both positive and negative powers of a. We may reject all terms containing negative powers of a and subsequently put a equal to unity. We thus arrive at a function of x only, which may be represented after Cayley (modified by the association with the symbol ≥) by Ω_≥ F(x, a), the symbol ≥ denoting that the terms retained are those in which the power of a is ≥ 0." A modern wording of the definition is given in [4]. The Ω_≥ operator is defined on functions with absolutely convergent multisum expansions in an open neighborhood of the complex circles |λ_i| = 1. The action of Ω_≥ is given by

Ω_≥ Σ_{s_1=−∞}^{∞} · · · Σ_{s_m=−∞}^{∞} A_{s_1,...,s_m} λ_1^{s_1} · · · λ_m^{s_m} := Σ_{s_1=0}^{∞} · · · Σ_{s_m=0}^{∞} A_{s_1,...,s_m}.

MacMahon proceeds with defining what he calls the crude generating function. This is a (multivariate) generalization of an idea appearing in Cayley's as well as in Elliott's work (see next section). Given a linear system of Diophantine inequalities, MacMahon introduces an extra variable for each inequality. These extra variables are denoted by λ. Given A ∈ Z^{m×d} and b ∈ Z^m we consider the generating function

Φ_λ(z) = Σ_{x ∈ Z^d_{≥0}} λ^{Ax−b} z^x,

and based on the geometric series expansion formula (1 − z)^{−1} = Σ_{x≥0} z^x we can transform the series into a rational function. The rational form of the λ-generating function is

Φ_λ(z) = λ^{−b} / Π_{j=1}^{d} (1 − z_j λ^{A_j}),

where A_j denotes the j-th column of A. The crude generating function is the syntactic expression obtained by prepending Ω_≥ to the λ-generating function. We present an example that we also use for the geometric interpretation.
Let A = [2 3] and b = 5, and consider the Diophantine system Ax ≥ b. Then

Φ_λ(z) = λ^{−5} / ((1 − z_1 λ^2)(1 − z_2 λ^3)).

Thus, the crude generating function for this system is

Ω_≥ λ^{−5} / ((1 − z_1 λ^2)(1 − z_2 λ^3)).

After defining the crude generating function, MacMahon proceeds to evaluate this expression by the use of formal rules, such as the ones we present below.
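For this small example the answer can also be found by hand, which gives a useful sanity check. The closed form below is our own derivation, not taken from the text: the solution set of 2x_1 + 3x_2 ≥ 5 is all of Z^2_{≥0} minus the four non-solutions (0,0), (1,0), (2,0), (0,1), so ρ = 1/((1 − z_1)(1 − z_2)) − (1 + z_1 + z_1^2 + z_2). We verify this on a truncated grid of exponents.

```python
# Compare the hand-derived rational function with the inequality directly,
# monomial by monomial, up to a truncation bound N.
N = 15
full = {(a, b) for a in range(N) for b in range(N)}   # 1/((1-z1)(1-z2))
removed = {(0, 0), (1, 0), (2, 0), (0, 1)}            # subtracted monomials
from_rational_function = full - removed
from_inequality = {(a, b) for (a, b) in full if 2 * a + 3 * b >= 5}
ok = from_rational_function == from_inequality
print(ok)  # True
```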
In what follows we will explain what this means geometrically.
Cones and fundamental parallelepipeds. The connection between partition analysis and geometry hinges on the fact that certain rational function expressions correspond directly to generating functions of lattice points in polyhedral cones. A lattice point is any integer linear combination of a fixed set of basis vectors. Without loss of generality, in this article, we will always work with the standard unit vectors as our basis, so that "lattice point" is synonymous with "integer point". We already used Φ to denote the formal power series generating function for linear Diophantine systems. More generally, for a set S ⊆ Z^d of lattice points we will use Φ_S to denote the generating function of S, i.e., Φ_S = Σ_{s∈S} z^s. For convenience, if S ⊆ R^d is a set of real vectors, we will also write Φ_S to denote the generating function of all lattice points in S, namely Φ_S = Σ_{s∈S∩Z^d} z^s. Of course, if and where Φ_S converges depends on the set S, but these questions will be of no particular concern to us, as we will see below.

[Figure 2. (A) The semigroup generated by (1, 0) and (1, 3). (B) Translates of the semigroup. (C) The corresponding tiling of C_R((1, 0), (1, 3)) by translates of its fundamental parallelepiped.]

We are primarily interested in generating functions Φ_C where C is a simplicial polyhedral cone. We define the cone C generated by v_1, v_2, . . . , v_d as

C_R(v_1, . . . , v_d) := {a_1 v_1 + · · · + a_d v_d : a_i ∈ R_{≥0}}.

The v_i are the generators of C. If the v_i are linearly independent, the cone C is simplicial. All cones in this article are rational polyhedra, which means that they can also be defined as the set of real vectors satisfying a finite system of linear inequalities with integer coefficients. A cone C is pointed or line-free if it does not contain a line. All simplicial cones are pointed. For pointed cones C the power series Φ_C always has a non-trivial radius of convergence, see also Section 2.4. If a given cone C is pointed and the chosen set of generators v_i is inclusion-minimal, then the sets R_{≥0} v_i are called the extreme rays of the cone. The set of extreme rays R_i of a pointed cone is uniquely determined, but within each ray the choice of generators 0 ≠ v_i ∈ R_i is arbitrary. A vector v ∈ Z^n is primitive if the greatest common divisor of its entries is 1. For every vector 0 ≠ v ∈ Q^n there exists a unique primitive vector prim(v) that is a positive multiple of v. Equivalently, prim(v) is the unique shortest non-zero integer vector on the ray R_{≥0} v. With these normalizations we find that each pointed rational cone has a unique minimal set of primitive generators. Later, we are also going to need the notion of affine cones, which just means cones like those defined above translated by a rational vector q. For translates of pointed cones, the translation vector q is uniquely determined, in which case q is called the apex of the affine cone. We will frequently drop the adjective "affine" and just refer to cones in general.
It will be clear from context whether a given cone has its apex at the origin or not. For a given q ∈ Q^d we adopt the notation

C_R(v_1, . . . , v_d; q) := q + C_R(v_1, . . . , v_d).

Here, + denotes the Minkowski sum, i.e., for two sets A and B we have A + B = {a + b : a ∈ A, b ∈ B}. If instead of considering all conic combinations of the generators with real coefficients, we consider only non-negative integer combinations, then we obtain the semigroup

C_Z(v_1, . . . , v_d) := {a_1 v_1 + · · · + a_d v_d : a_i ∈ Z_{≥0}},

which we also call the discrete cone generated by the v_i. As above we write C_Z(v_1, . . . , v_d; q) := q + C_Z(v_1, . . . , v_d).
As an example, consider the semigroup generated by (1, 0) and (1, 3) as shown in Figure 2a. If we take the geometric series 1/(1 − z_1) and expand it, i.e., 1 + z_1 + z_1^2 + . . ., we observe that this series is the generating function of the lattice points on the non-negative x-axis. In order to obtain the generating function for the lattice points at height 3, we need to multiply the series by z_1 z_2^3. Accordingly, for the lattice points at height 6 we multiply by (z_1 z_2^3)^2, and so on. It is easy to see that the generating function of the semigroup is the product of two geometric series, namely 1/(1 − z_1) and 1/(1 − z_1 z_2^3). Following the same idea, we can translate the whole semigroup by multiplying by a monomial. If we multiply 1/((1 − z_1)(1 − z_1 z_2^3)) by z_1 z_2 we obtain the generating function of the red points in Figure 2b. Similarly, we multiply by z_1 z_2^2 to obtain the generating function of the green points. Now, since there are no overlaps, we can sum the generating functions in order to obtain the generating function for all lattice points in the cone generated by (1, 0) and (1, 3) with real coefficients,

Φ_{C_R((1,0),(1,3))} = (1 + z_1 z_2 + z_1 z_2^2) / ((1 − z_1)(1 − z_1 z_2^3)).

The numerator of this rational function encodes the lattice points (0, 0), (1, 1) and (1, 2), which are precisely the lattice points in the parallelogram spanned by the generators. For a simplicial cone C = C_R(v_1, . . . , v_d) we define the fundamental parallelepiped as Π_R(C) := {a_1 v_1 + · · · + a_d v_d : 0 ≤ a_i < 1} and the set of lattice points in the fundamental parallelepiped as Π_{Z^d}(C) = Π_R(C) ∩ Z^d. Given this definition, the general form of the generating function of a simplicial cone C is

Φ_C(z_1, . . . , z_d) = Φ_{Π_{Z^d}(C)}(z_1, . . . , z_d) / Π_{i=1}^{d} (1 − z^{v_i}),    (2)

where Φ_{Π_{Z^d}(C)}(z_1, . . . , z_d) is the generating function for the lattice points in the fundamental parallelepiped of C. This fundamental observation goes back to Ehrhart [36,37,38]. From this identity it is immediately obvious that cones C with few lattice points in their fundamental parallelepiped Π_{Z^d}(C) have particularly short representations as rational functions. Cones C where Π_{Z^d}(C) contains just a single lattice point are called unimodular.
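The lattice points of the fundamental parallelepiped can be recovered by brute force. The following sketch (search range and helper are our own) finds the generator coordinates of each candidate lattice point for the cone C_R((1, 0), (1, 3)) using exact rational arithmetic.

```python
# A lattice point (x, y) lies in the fundamental parallelepiped of
# C_R((1,0),(1,3)) iff its coordinates (a1, a2) in the generator basis
# satisfy 0 <= a_i < 1. Solve for (a1, a2) by Cramer's rule.
from fractions import Fraction

v1, v2 = (1, 0), (1, 3)
det = v1[0] * v2[1] - v1[1] * v2[0]  # = 3

points = set()
for x in range(-2, 5):
    for y in range(-2, 5):
        a1 = Fraction(x * v2[1] - y * v2[0], det)
        a2 = Fraction(v1[0] * y - v1[1] * x, det)
        if 0 <= a1 < 1 and 0 <= a2 < 1:
            points.add((x, y))

print(sorted(points))  # [(0, 0), (1, 1), (1, 2)]
```

The three points found are exactly the ones encoded by the numerator 1 + z_1 z_2 + z_1 z_2^2 above.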
An important property of the fundamental parallelepiped is that its lattice points are coset representatives for the lattice points in the cone modulo the semigroup of the generators (as exemplified by the construction of the generating function). To put it another way: the set of lattice points in the cone is the set of lattice points in the fundamental parallelepiped, translated by all non-negative integral combinations of the generators of the cone, i.e.,

C_R(v_1, . . . , v_d) ∩ Z^d = ⨄_{w ∈ Π_{Z^d}(C)} w + C_Z(v_1, . . . , v_d).

Here ⨄ denotes an ordinary union together with the assertion that the operands are disjoint.
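This coset property can be verified directly for the running example. The sketch below (search bounds are our own choice) checks that every lattice point of C_R((1, 0), (1, 3)) has exactly one representation as a parallelepiped point plus a semigroup element.

```python
from fractions import Fraction

def in_cone(x, y):
    # generator coordinates w.r.t. (1, 0) and (1, 3); determinant is 3
    a2 = Fraction(y, 3)
    a1 = x - a2
    return a1 >= 0 and a2 >= 0

N = 12
cone_points = [(x, y) for x in range(N) for y in range(N) if in_cone(x, y)]
parallelepiped = [(0, 0), (1, 1), (1, 2)]
# each cone point should equal w + k1*(1,0) + k2*(1,3) for exactly one (w, k1, k2)
ok = all(
    sum(1 for w in parallelepiped for k1 in range(N) for k2 in range(N)
        if (w[0] + k1 + k2, w[1] + 3 * k2) == (x, y)) == 1
    for (x, y) in cone_points
)
print(ok)  # True
```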
Geometry of the Ω operator and the MacMahon lifting. Now we can see the connection of MacMahon's construction to polyhedral geometry. In Figure 3, we see the geometric steps for creating the λ-cone (the geometric counterpart of the λ-generating function) by means of the MacMahon lifting and then applying the Ω operator.
Given an LDS Ax ≥ b with m inequalities and d variables, we first lift the standard generators of the positive orthant of R^d by appending the exponents of the λ variables (thus lifting the generators into R^{d+m}). This lifting idea first appears in an analytic context in Cayley's work [27]. Consider the cone generated by the MacMahon lifting, i.e., the cone generated by the columns of the matrix V obtained by stacking the d×d identity matrix Id_d on top of A. The generating function of this cone, due to (2), is exactly

Φ_{C_R(V)} = 1 / Π_{j=1}^{d} (1 − z_j λ^{A_j}),

where A_j denotes the j-th column of A. This is true because the matrix V is unimodular by construction, whence the fundamental parallelepiped of the cone contains exactly one lattice point, namely the origin. Next, we translate the cone by q = (0, −b) and denote the new cone by C = C_R(V; q). Then the generating function of the cone C is exactly the λ-generating function, i.e.,

Φ_C = λ^{−b} / Π_{j=1}^{d} (1 − z_j λ^{A_j}).

Applying the Omega operator to Φ_C now corresponds geometrically to intersecting C with all halfspaces where the λ-variables are non-negative and then applying a projection that forgets the λ-variables.
To make this precise, let us denote by H_λ the intersection of coordinate halfspaces where all λ variables are non-negative, i.e.,

H_λ := {(x, λ) ∈ R^{d+m} : λ_i ≥ 0 for i = 1, . . . , m}.

Next, let π denote the projection from R^{d+m} to R^d that forgets the last m coordinates, which correspond to the λ-variables. Given this notation, the action of Ω_≥ on the λ-generating function corresponds to first intersecting C with H_λ and then projecting the resulting polyhedron using π, i.e.,

Ω_≥ Φ_C = Φ_{π(C ∩ H_λ)}.

Continuing with the example we had chosen previously, assume A = [2 3] and b = 5. In Figure 3a, we construct the vectors (1, 0, 2) and (0, 1, 3), lifting the two standard basis vectors (1, 0) and (0, 1) according to the entries of the matrix A. In Figure 3b, we shift the cone generated by (1, 0, 2) and (0, 1, 3) by (0, 0, −5), and consider the halfspace above the xy-plane, where λ is non-negative. The two intersection points of the cone with the hyperplane are marked. Now, in Figure 3c, we consider the intersection of the halfspace defined above with the shifted cone. The lattice points in the intersection are marked in blue, and their projections onto the λ = 0 plane are shown in red. These projections are the non-negative solutions to the original inequality, as shown in Figure 3d.
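The whole lift-intersect-project pipeline can be traced numerically for the running example A = [2 3], b = 5 (variable names and bounds below are ours): lift the orthant generators, shift the apex, keep the lattice points with λ ≥ 0, and forget the λ-coordinate.

```python
# Lift the standard basis vectors of R^2 along A = [2 3], shift by (0, 0, -5),
# intersect with {lam >= 0}, and project away the lam-coordinate.
N = 8  # search bound (our choice)
lifted_projected = set()
for x1 in range(N):       # coefficient of the lifted generator (1, 0, 2)
    for x2 in range(N):   # coefficient of the lifted generator (0, 1, 3)
        point = (x1, x2, 2 * x1 + 3 * x2 - 5)  # lattice point of the shifted cone
        if point[2] >= 0:                      # intersect with {lam >= 0}
            lifted_projected.add(point[:2])    # project: forget lam

direct = {(x1, x2) for x1 in range(N) for x2 in range(N) if 2 * x1 + 3 * x2 >= 5}
ok = lifted_projected == direct
print(ok)  # True
```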
MacMahon's rules. MacMahon's method was general and in principle algorithmic, based on Elliott's work (see next section). As we shall see, Elliott's method is simple but computationally very inefficient. MacMahon introduced a set of rules in order to deal with particular combinatorial problems computationally, in particular since he had to carry out his computations by hand. We present some of MacMahon's rules from a geometric point of view. The first two rules MacMahon gave are

Ω_≥ 1 / ((1 − λx)(1 − y/λ^2)) = 1 / ((1 − x)(1 − x^2 y))

and

Ω_≥ 1 / ((1 − λ^2 x)(1 − y/λ)) = (1 + xy) / ((1 − x)(1 − xy^2)).

There is an apparent symmetry in the input, but elimination results in structurally different numerators, i.e., 1 versus 1 + xy. The fourth rule in MacMahon's list is

Ω_≥ 1 / ((1 − λx)(1 − λy)(1 − z/λ)) = (1 − xyz) / ((1 − x)(1 − y)(1 − xz)(1 − yz)).

Although the rational function on which Ω_≥ acts has three factors in the denominator, the resulting rational generating function has four factors in the denominator and the numerator has terms with both positive and negative signs. This is because the initial cone defined by the inequality x + y − z ≥ 0 is not simplicial. The cone admits a signed decomposition into simplicial cones. When summing up the generating functions of the cones we obtain the right-hand side of the rule, with a negative term due to the signed decomposition. The decomposition is shown in Figure 6. We start with the non-simplicial cone C_R((1, 0, 0), (0, 1, 0), (1, 0, 1), (0, 1, 1)). Then we consider the simplicial cones C_R((1, 0, 0), (0, 1, 0), (1, 0, 1)) and C_R((0, 1, 0), (1, 0, 1), (0, 1, 1)) as well as their intersection C_R((0, 1, 0), (1, 0, 1)). By inclusion-exclusion we have a decomposition of the initial cone.
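The fourth rule can be verified coefficient by coefficient. The sketch below takes the rule in its standard form, Ω_≥ 1/((1 − λx)(1 − λy)(1 − z/λ)) = (1 − xyz)/((1 − x)(1 − y)(1 − xz)(1 − yz)), and compares truncated expansions of both sides.

```python
from itertools import product

N = 8  # truncation: exponents 0..N-1

def coeff_lhs(i, j, k):
    # Omega_>= of sum x^i y^j z^k lam^(i+j-k): keep the term iff i + j >= k
    return 1 if i + j >= k else 0

def coeff_rhs(i, j, k):
    # (1 - xyz) / ((1-x)(1-y)(1-xz)(1-yz)): the coefficient of x^i y^j z^k in
    # 1/((1-x)(1-y)(1-xz)(1-yz)) counts pairs (c, d) with c + d = k, c <= i, d <= j
    def count(i, j, k):
        if min(i, j, k) < 0:
            return 0
        return sum(1 for c in range(k + 1) if c <= i and (k - c) <= j)
    return count(i, j, k) - count(i - 1, j - 1, k - 1)

ok = all(coeff_lhs(i, j, k) == coeff_rhs(i, j, k)
         for i, j, k in product(range(N), repeat=3))
print(ok)  # True
```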
2.2. Elliott. One of the first references relevant to partition analysis is Elliott's article "On linear homogeneous Diophantine equations" [39]. Elliott's work is remarkable, if for nothing else, because it speaks to mathematicians who lived a century apart. It was of interest to MacMahon, who based his method on Elliott's decomposition, but also to 21st-century mathematicians, for explicitly giving an important algorithm. Although the algorithm has very bad computational complexity, it can be considered an early algorithm for lattice point enumeration (among other things).
The problem Elliott considers is to find all non-negative integer solutions of one homogeneous linear Diophantine equation

a_1 x_1 + a_2 x_2 + · · · + a_n x_n = 0,  a_i ∈ Z.    (4)

The starting point for Elliott's work is the fact that even if one computes the set of "simple solutions", i.e., solutions that are not combinations of others, there are syzygies preventing us from writing down formulas giving each and every solution to the equation exactly once. He proceeds by explaining that his method computes a generating function whose terms are in one-to-one correspondence with the solutions of the equation.
The basic idea employed by Elliott (and the basis of partition analysis) is the introduction of an extra variable, denoted by λ, encoding the equation. This idea, as we saw, was employed by MacMahon and can be traced back to Cayley. Elliott then observes that the terms in the series expansion of

Π_{i=1}^{n} 1 / (1 − λ^{a_i} z_i)

with λ-exponent zero correspond to solutions of (4). The question then is how to decompose this rational function into a sum of rational functions, such that the expansion of each of them contains either only terms with non-negative λ exponents or only terms with non-positive λ exponents. Given such a decomposition it is safe to drop the components that give terms containing λ. Elliott's algorithm computes such a partial fraction decomposition. It operates on rational function expressions Π_i p_i / Π_j (1 − q_j), with p_i being a monomial in z_1, z_2, . . . , z_k, λ and q_j a monomial in z_1, z_2, . . . , z_k, λ^{−1}, based on the identity

1 / ((1 − xλ^α)(1 − yλ^{−β})) = 1 / (1 − xyλ^{α−β}) · ( 1 / (1 − xλ^α) + 1 / (1 − yλ^{−β}) − 1 ).    (5)

To achieve the goal of only having summands where the exponents of λ in the denominator all have the same sign, the above identity has to be applied iteratively. The left-hand side of (5), with exponents (α, −β), is reduced to (α − β, α) or (α − β, −β), depending on whether α ≥ β. This leads to a recurrence corresponding exactly to the Euclidean algorithm, see also [22].
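Since (5) is an identity of rational functions, it can be sanity-checked by evaluating both sides at sample rational points. The values of x, y, λ below are arbitrary choices of ours, picked to avoid poles; the arithmetic is exact.

```python
# Numeric check of Elliott's identity (5) at sample points, using exact
# rational arithmetic so that equality can be tested with ==.
from fractions import Fraction as F

def lhs(x, y, lam, a, b):
    return 1 / ((1 - x * lam**a) * (1 - y * lam**(-b)))

def rhs(x, y, lam, a, b):
    return (1 / (1 - x * y * lam**(a - b))
            * (1 / (1 - x * lam**a) + 1 / (1 - y * lam**(-b)) - 1))

x, y, lam = F(1, 3), F(1, 5), F(2)
ok = all(lhs(x, y, lam, a, b) == rhs(x, y, lam, a, b)
         for a in range(1, 5) for b in range(1, 5))
print(ok)  # True
```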
Elliott's identity has a very nice geometric interpretation, which is illustrated in Figure 7. The left-hand side of (5) is the generating function of the cone C = C_R((1, 0, α), (0, 1, −β)), where the last coordinate records the exponent of λ. The vector v = (1, 1, α − β) lies in the interior of the cone C, and the cones A = C_R((1, 0, α), v) and B = C_R(v, (0, 1, −β)) cover C. Their intersection is exactly the ray starting from the origin and passing through v = (1, 1, α − β). By a simple inclusion-exclusion argument we have the signed decomposition C = A + B − (A ∩ B). This decomposition, translated back to the level of generating functions, is exactly the partial fraction decomposition employed by Elliott. One should mention here that Elliott insists that the numerators be ±1, which geometrically means that he deals with unimodular cones, which also play a key role in Barvinok's algorithm.

Andrews-Paule-Riese.
In 1997, Bousquet-Mélou and Eriksson presented a theorem on lecture hall partitions in [21]. This theorem attracted a lot of attention from the community (and it still does, with many lecture-hall-type theorems appearing to this day [18]). Andrews, who had already studied MacMahon's method and was aware of its computational potential, recognized that lecture hall partitions offered the right problems to attack algorithmically via partition analysis. At the same time, he was planning to spend a semester of his sabbatical at RISC in Austria to work with Paule. It is only natural that the result was a fully algorithmic version of MacMahon's method powered by symbolic computation. This collaboration produced a series of 10 papers [4,1,10,5,6,7,3,8,9,2]. Many interesting theorems and different kinds of partitions are defined and explored in this series, but for us the most important two papers are [4] and [5], which contain the algorithmic improvements on partition analysis. Namely, in [4] the authors introduce OMEGA, a Mathematica package based on a fully algorithmic version of partition analysis, while in [5] they present a more advanced partial fraction decomposition (and the related Mathematica package OMEGA2) solving some of the problems appearing in OMEGA. While presenting the methods, we will point out some of their geometric aspects.
The main tool in OMEGA is the Fundamental Recurrence (Theorem 2.1 in [4]) for the Ω operator. Following MacMahon, or Elliott for that matter, iterative application of this recurrence is enough for computing the action of Ω. The interested reader can find the definition of the recurrence in [4]. The fundamental recurrence assumes that the exponents of λ in the denominator are ±1. This is not a strong assumption, as noted in [4], since we can always employ a decomposition into linear factors. The obvious drawback of this approach is that we introduce complex coefficients instead of ±1. This motivates the desire for a better recurrence. In [5] the same authors introduce an improved partial fraction decomposition method, the generalized partial fraction decomposition (6), given by the following recurrence. Let $\alpha \ge \beta \ge 1$, $\gcd(\alpha, \beta) = 1$, and let $\mathrm{inv}_b(a)$ denote the multiplicative inverse of $a$ modulo $b$. The recurrence then eliminates a λ variable in one step, with polynomial numerators $P_{\alpha,\beta}$ and $Q_{\alpha,\beta}$ in place of ±1. Before interpreting the rational functions involved in the Fundamental Recurrence of [5] as cones in Figure 9, we introduce some notation and the concept of a half-open cone, which is useful when dealing with cone decompositions. The dimension of a polyhedron is the dimension of its affine hull. A face of a polyhedron P is a subset of P on which some linear functional is maximized. Faces of polyhedra are polyhedra themselves and capture the intuitive notion of a "side" of a polyhedron. 0-dimensional faces are called vertices, 1-dimensional faces are called edges and the (d − 1)-dimensional faces of a polyhedron of dimension d are called facets. For example, a 3-dimensional cube has 8 vertices, 12 edges and 6 facets. Assume C is simplicial and let $I = [k] \subset \mathbb{Z}_{\ge 0}$ be the index set for its generators $v_1, \ldots, v_k$. Then each face F of C can be identified by a set $I_F \subseteq I$, since a face has the form $F = C_{\mathbb{R}}(v_i \mid i \in I \setminus I_F)$. Essentially, this means that the face is generated by a subset of the generators.
Note that I F contains the indices of the generators of the simplicial cone C that are not used to generate F as in Figure 8a. A facet is identified with the index of the single generator not used to generate it.
A half-open cone C is a cone from which some facets have been removed. By the previous discussion, for each facet we need only mention the generator corresponding to it. We will use a 0-1 vector o to express the openness of a cone, i.e., the openness vector has an entry of 1 in position k if the facet corresponding to the k-th generator is open. To this end, we define
$$C^{o}_{\mathbb{R}}(v_1, \ldots, v_d; q) := \left\{ q + \sum_{i=1}^{d} \lambda_i v_i \;\middle|\; \lambda_i \ge 0 \text{ if } o_i = 0 \text{ and } \lambda_i > 0 \text{ if } o_i = 1 \right\}.$$
The above notation will be used throughout the paper. To improve readability, we will drop the o from the notation when o = 0, i.e., when all facets are closed. Similarly, we drop q when q is the zero vector. Moreover, if V is a matrix with vectors $v_1, \ldots, v_d$ as columns, we may write V instead of listing the vectors explicitly, such that, for example, $C_{\mathbb{R}}(V) := C^{(0,\ldots,0)}_{\mathbb{R}}(v_1, \ldots, v_d; 0)$. Note that both (2) and (3) extend naturally to the case of affine half-open cones. Let V be the matrix with $v_1, \ldots, v_d \in \mathbb{Z}^n$ as columns. The notion of a fundamental parallelepiped generalizes to the half-open setting via
$$\Pi^{o}(V; q) := \left\{ q + \sum_{i=1}^{d} \lambda_i v_i \;\middle|\; 0 \le \lambda_i < 1 \text{ if } o_i = 0 \text{ and } 0 < \lambda_i \le 1 \text{ if } o_i = 1 \right\}.$$
Returning to the cone interpretation of OMEGA2 (Figure 9), we observe that the 2-dimensional cone $C = C_{\mathbb{R}}\big((1,0,\alpha), (0,1,\beta)\big)$ is the left-hand side of the generalized partial fraction decomposition (6). We intersect the xy-plane with the plane the cone lives in and choose a ray in this intersection (conveniently, one such ray is generated by $(-\beta, \alpha, 0)$, shown in red in the figure). Then consider the two cones $A = C_{\mathbb{R}}\big((-\beta,\alpha,0), (1,0,\alpha)\big)$ and $B = C^{(1,0)}_{\mathbb{R}}\big((-\beta,\alpha,0), (0,1,\beta)\big)$. We observe that the cone C is decomposed as the cone A minus the half-open cone B, as can be seen in Figure 9. In particular, the generating function for the fundamental parallelepiped of cone A is exactly the numerator $P_{\alpha,\beta}$ and, similarly, the generating function for the fundamental parallelepiped of B is precisely the numerator $Q_{\alpha,\beta}$. See Figure 10 for an example.
Figure 10: The decomposition (6), for α = 13 and β = 5, decomposes $C = C(g_1, g_2)$ into the difference of $A = C(g, g_1)$ and the half-open cone B. While the fundamental parallelepiped of C contains just a single lattice point, the lattice points in the fundamental parallelepipeds of A and B correspond to the numerators $P_{\alpha,\beta}$ and $Q_{\alpha,\beta}$, respectively. The number of lattice points is linear in α, β, i.e., it is exponential in the encoding length of α, β.

This connection between polynomials of this form and fundamental parallelepipeds is well known in the context of Dedekind-Carlitz polynomials [19]. In particular, these polynomials have a short recursive description [22], which can be exponentially more compact than (7) and (8). This shows that the Andrews-Paule-Riese decomposition adds a layer of complexity to the geometry of partition analysis in comparison with previous methods. Elliott deals only with unimodular cones, but has to resort to a recursive procedure for the elimination of a λ variable. In contrast, the generalized partial fraction decomposition of [5] eliminates a λ variable in one step, at the expense of cones whose fundamental parallelepipeds contain potentially exponentially many lattice points. Moreover, the Andrews-Paule-Riese decomposition is more complex than previous rules in that it works with half-open cones.
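For small two-dimensional examples, the half-open fundamental parallelepiped $\Pi^o(V; q)$ defined above can be enumerated by brute force. The following sketch (our own illustration, not part of the OMEGA packages) computes its lattice points for a cone with two generators, using exact rational arithmetic.

```python
from fractions import Fraction

def parallelepiped_points_2d(V, o, q=(0, 0)):
    """Lattice points of the half-open fundamental parallelepiped
    Pi^o(V; q): points q + lam1*v1 + lam2*v2 with 0 <= lam_i < 1 if
    the i-th facet is closed (o_i = 0) and 0 < lam_i <= 1 if it is
    open (o_i = 1).  V is a 2x2 integer matrix with the generators
    v1, v2 as columns."""
    v1 = (V[0][0], V[1][0])
    v2 = (V[0][1], V[1][1])
    det = v1[0] * v2[1] - v2[0] * v1[1]
    assert det != 0, "generators must be linearly independent"
    # scan the integer bounding box spanned by the four corners
    xs = [q[0] + i * v1[0] + j * v2[0] for i in (0, 1) for j in (0, 1)]
    ys = [q[1] + i * v1[1] + j * v2[1] for i in (0, 1) for j in (0, 1)]
    pts = []
    for x in range(min(xs), max(xs) + 1):
        for y in range(min(ys), max(ys) + 1):
            rx, ry = x - q[0], y - q[1]
            # coordinates w.r.t. the generators, via Cramer's rule
            lam1 = Fraction(rx * v2[1] - v2[0] * ry, det)
            lam2 = Fraction(v1[0] * ry - rx * v1[1], det)
            ok1 = (0 < lam1 <= 1) if o[0] else (0 <= lam1 < 1)
            ok2 = (0 < lam2 <= 1) if o[1] else (0 <= lam2 < 1)
            if ok1 and ok2:
                pts.append((x, y))
    return sorted(pts)
```

In both the closed and the half-open case the number of lattice points equals |det V|, only the representatives on the boundary change.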

Questions of Convergence.
In the partition analysis series of papers by Andrews-Paule-Riese, and in particular in [4], the authors stress the importance of working with absolutely convergent series. This is because the Omega operator is not well defined if rational function expansions do not have a common region of convergence. Quoting from [4]: "While MacMahon did not carefully distinguish whether his Laurent series were analytic or merely formal, we emphasize that it is essential to treat everything analytically rather than formally because the method relies on unique Laurent series representations of rational functions. For instance, if we were to proceed formally, then
$$\Omega_{\ge} \sum_{n=0}^{\infty} \lambda^n q^n = \frac{1}{1-q} \qquad \text{whereas} \qquad \Omega_{\ge} \left( - \sum_{n=1}^{\infty} \lambda^{-n} q^{-n} \right) = 0.$$
But if we allowed a purely formal application of the geometric series, then both initial expressions are $\frac{1}{1-\lambda q}$." While this observation about the Omega operator is indeed crucial, the conclusion that it is therefore "essential to treat everything analytically rather than formally" is not valid, as there are several methods to treat these issues from a formal point of view.
Xin, who also gives an improved set of rules for computing the Omega operators in [64], notes that by working in the field of iterated Laurent series and by normalizing the rational function expression accordingly, it is possible to disregard questions of convergence. An in-depth study of different ways to define multivariate Laurent series appears in [11]. There, Aparicio Monforte and Kauers introduce multivariate Laurent series as linear combinations of series that are supported in possibly shifted pointed cones compatible with a given ordering. They prove that this construction ensures that the support is well ordered and embed their theory in the Malcev-Neumann series construction presented by Xin in [64]. Their construction is a natural and "large" field of multivariate Laurent series.
Independently from these developments, researchers in the polyhedral geometry community have been dealing with these analytical phenomena from a geometric perspective for a long time. Recall that, intuitively, we think of $\frac{1}{1-q}$ as corresponding to the 1-dimensional cone $[0, +\infty)$ and of $\frac{1}{1-q^{-1}}$ as corresponding to $(-\infty, 0]$. The identity $\frac{1}{1-q} + \frac{1}{1-q^{-1}} = 1$ on the level of rational functions then leads us to considering equivalence classes of cones "modulo lines", see Section 3. This is routine practice in the polyhedral geometry literature [13], is used to great effect (for example in Brion's famous theorem, see Theorem 3.1 below), and should therefore be regarded as "a feature, not a bug".
In this article, we have to be careful, however, since, as the above example shows, the Omega operator is not well-defined on equivalence classes modulo lines. Nonetheless, as we will see in the next section, polyhedral methods can still be applied to evaluate the Omega operator in an entirely symbolic manner, as long as we work with the right representatives of these equivalence classes modulo lines. To choose representatives consistently, we define a cone C to be forward if all generators of C are lexicographically larger than the origin, i.e., if their first non-zero entry is positive. The notion of forward cones appears in [45] and plays a prominent role in the Lawrence-Varchenko decomposition (Theorem 3.2) that we use below. For the Omega operator to be well defined, we require that all series appearing in the expression on which Omega acts are supported in the half-open half-space of vectors that are lexicographically larger than or equal to the origin. In general, one can define forwardness with respect to an arbitrary half-open half-space, but the above definition will suffice for our purposes. By working with expansions of rational functions that are forward in this geometric sense, we can apply the Omega operator without recourse to a full analytic treatment. In the example given above, the series $-\sum_{n=1}^{\infty} q^{-n} \lambda^{-n}$ is not a forward expansion of $\frac{1}{1-\lambda q}$. Note that Xin's iterated Laurent series field construction corresponds precisely to this geometric idea of working with linear combinations of cones that are all forward in the lexicographic sense, again see [11]. In Section 3, we will discuss both the method of working with cones modulo lines as well as the notion of forward cones in detail.
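On the implementation side, the lexicographic forwardness test just described is a one-liner; a small sketch (function names are ours):

```python
def is_forward(v):
    """A vector is forward if its first non-zero entry is positive,
    i.e., if it is lexicographically larger than the origin."""
    for entry in v:
        if entry != 0:
            return entry > 0
    return False  # the zero vector is not forward

def is_forward_cone(generators):
    """A cone is forward if all of its generators are forward."""
    return all(is_forward(g) for g in generators)
```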

Applying Omega vs. Solving LDS.
MacMahon invented the Ω operator for solving linear Diophantine systems. In this framework, Ω is applied to rational functions that arise from the MacMahon lifting. However, once Ω is defined, the question arises how to evaluate Ω when applied to a general rational function r whose denominator factors into binomials. It turns out that this seemingly more general problem can be reduced to solving linear Diophantine systems, and, in particular, to applying Ω to rational functions as produced by the MacMahon lifting.
Let $r \in K(x_1, \ldots, x_d)$ be a rational function whose denominator factors into binomials. Let Ω eliminate the last k of the variables $x_i$. Due to linearity of Ω, we can, without loss of generality, restrict our attention to rational functions of the form
$$r = \frac{x^b}{\prod_{i=1}^{n} (1 - \alpha_i x^{a_i})},$$
where $b, a_1, \ldots, a_n \in \mathbb{Z}^d \setminus \{0\}$, $\alpha_1, \ldots, \alpha_n \in K$ and the vectors $b, a_1, \ldots, a_n$ are forward with respect to the given ordering of the variables $x_i$. Now, replace r with the rational function $s \in \mathbb{Q}(z_1, \ldots, z_n, x_1, \ldots, x_d)$ obtained by substituting a fresh variable $z_i$ for each coefficient $\alpha_i$,
$$s = \frac{x^b}{\prod_{i=1}^{n} (1 - z_i x^{a_i})}.$$
This is the rational function constructed by the MacMahon lifting from the system $Ax \ge -b$, where A is the matrix with the $a_i$ as columns. It corresponds to the simplicial cone with generator matrix $\binom{\mathrm{Id}_n}{A}$ and apex $\binom{0}{b}$. Therefore the methods discussed in this paper can be applied to compute a rational function expression for Ω(s).
To obtain Ω(r) from Ω(s), all that is necessary is to substitute $z_i = \alpha_i$ for all i in Ω(s). This substitution is possible by construction, as long as the rational function expression obtained for Ω(s) is brought into normal form first. However, as we will see below, the rational function expressions we usually obtain are sums of rational functions, and $z_i = \alpha_i$ may well be a pole of individual summands in such an expression. Fortunately, there are methods to perform this substitution efficiently, and obtain Ω(r) from Ω(s), without bringing Ω(s) into normal form first.
There are a number of references that discuss such methods in depth, so we will not concern ourselves further with these issues in this paper and instead refer the interested reader to the relevant literature. Substitutions into sums of rational functions of the above form are an essential part of Barvinok's algorithm for counting lattice points in polytopes [16,17,13,31]. In [15], Barvinok and Woods extend these techniques and develop a set of algorithms for performing a number of operations on rational function expressions of the above form. In particular, they give an algorithm that computes the Hadamard product of two rational functions and runs in polynomial time if the number of factors in the denominators is fixed [15, Lemma 3.4]. This immediately gives an algorithm for computing Ω(r) for any r of the form given above, which runs in polynomial time if d and n are fixed. Computing Hadamard products with this method has been implemented, e.g., in the barvinok package [62,61]. In [65], Xin describes another algorithm that reduces the computation of Ω(r) to solving linear Diophantine systems. In [64,63,65], Xin proposes several partition analysis algorithms based on iterated partial fraction decomposition (PFD) in the multivariate rational function field. While [63] focuses on PFD of rational functions with two factors in the denominator, [65] applies PFD directly to rational functions with n factors in the denominator.
As it turns out, partial fraction decomposition has a clear interpretation in the language of polyhedral geometry: given a simplicial cone $C \subset \mathbb{R}^d$, PFD writes C as an inclusion-exclusion of simplicial cones $C_1, \ldots, C_d$ such that, for each $C_i$, all but one of the generators of $C_i$ lie in the coordinate hyperplane $x_d = 0$. This decomposition is closely related to a Brion/Lawrence-Varchenko decomposition of a section of C.
Computing the PFD, we find a sum of rational functions of this form. On the level of symbolic cones, this corresponds precisely to a signed decomposition, which geometrically can be obtained as follows. The intersection of C with the hyperplane $\{(x, y, z) \mid z = 2\}$ is the triangle ∆ with vertices (2, 0, 2), (0, 2, 2) and (2, 1, 2). The height z = 2 is the lowest possible such that the triangle has all integer vertices. ∆ has a Lawrence-Varchenko decomposition into signed vertex cones at these three vertices. Each of these three 2-dimensional cones we now translate to the origin and take the Minkowski sum with the ray through its former apex. Thus, we get precisely the decomposition given in (10). We have now obtained a (signed) decomposition into cones for which the action of Ω is much easier to describe. On the rational function level, Xin [65] makes use of this fact by using the PFD of a rational function $\rho = \rho_1 + \ldots + \rho_d$ as a starting point and then giving a recursive procedure, in the spirit of Euclidean algorithm formulas, for computing the $\Omega(\rho_i)$. We will come back to Xin's algorithm in Section 5. By contrast, the algorithm we describe in Section 3 makes direct use of the Lawrence-Varchenko decomposition: we give explicit formulas on the level of symbolic cones that provide a Lawrence-Varchenko decomposition of the polyhedron $C \cap \{x \mid x_d \ge 0\}$ itself.

POLYHEDRAL OMEGA: THE NEW ALGORITHM
In this section, we introduce our new algorithm, Polyhedral Omega, which is a synthesis of the partition analysis approach to solving linear Diophantine systems and methods from polyhedral geometry.
The Polyhedral Omega algorithm solves the rational function LDS problem, as defined in the introduction. Given a matrix $A \in \mathbb{Z}^{m \times d}$ and a vector $b \in \mathbb{Z}^m$, it computes a rational function expression ρ in d variables $z_1, \ldots, z_d$ whose multivariate Laurent series expansion represents the set of all non-negative integer solutions $x \in \mathbb{Z}^d_{\ge 0}$ of $Ax \ge b$. As motivated in Section 2, this rational function expression corresponds to a representation of the set $\{x \in \mathbb{Z}^d_{\ge 0} \mid Ax \ge b\}$ as a signed linear combination of simplicial cones. Such a representation in terms of what we call symbolic cones, defined in Section 3.2 below, is central to the algorithm and can serve as a useful output of the algorithm in its own right.
Without loss of generality, we will deal only with systems $Ax \ge b$ of inequalities in this paper. However, it is straightforward to extend the algorithm presented here to handle mixed systems containing both inequalities and equations directly. In particular, we have added this feature to our implementation of Polyhedral Omega, to which we refer the interested reader for details [23].
The Polyhedral Omega algorithm consists of the following steps: (1) MacMahon Lifting. Given a matrix $A \in \mathbb{Z}^{m \times d}$ and a vector $b \in \mathbb{Z}^m$, we use the MacMahon lifting to compute a unimodular simplicial cone $C \subset \mathbb{R}^{d+m}$ such that
$$\{x \in \mathbb{Z}^d_{\ge 0} \mid Ax \ge b\} = \Omega^m(C) \cap \mathbb{Z}^d,$$
i.e., the desired set of solutions can be obtained by applying the Omega operator, which eliminates the last coordinate, m times to the cone C. (2) Iterative Elimination of the Last Coordinate using Symbolic Cones. This is the main step of the algorithm, in which we compute $\Omega^m(C)$ iteratively, by eliminating one variable at a time.
Each application of Ω corresponds to intersecting a polyhedron $P \subset \mathbb{R}^n$ with the half-space $H^+_n := \{x \in \mathbb{R}^n \mid x_n \ge 0\}$ of points with non-negative last coordinate and then projecting away that last coordinate. The crucial property of this procedure is that, throughout, polyhedra are represented as linear combinations of simplicial symbolic cones C, which are stored as triples (V, q, o) of a generator matrix V, an apex q and an openness vector o. In the end, we obtain a representation $\Omega^m(C) = \sum_i \alpha_i C_i$ in terms of symbolic cones, which is much more compact than a representation in terms of rational functions.
(3) Conversion to Rational Functions. The representation of the solution in terms of symbolic cones is then converted to a rational function representation. For this, two different methods can be used that each have their own advantages and that can also be used in tandem.
(a) One option is to explicitly enumerate the lattice points in the fundamental parallelepiped for each of the C i and then apply equation (2) to obtain a rational function. (b) Another option is to use Barvinok decompositions to represent each C i as a short signed sum of unimodular cones, which can be immediately converted to rational functions. (4) Rational Functions in Normal Form. Each of the alternatives in the previous step gives a rational function expression for the solution. However, this rational function expression will not in general be in normal form. Standard algebraic procedures may be used to bring the rational functions on a common denominator and to simplify the resulting expression. Note that conversion to normal form may increase the size of the output exponentially.
Depending on the application, it may be a good idea to work with the intermediate representations computed over the course of the algorithm and skip the later steps. As already mentioned, the representation in terms of symbolic cones is in most cases significantly more concise than the rational function representations. Especially if the output of the algorithm is to be examined by hand, we recommend taking a look at the symbolic cone representation first. (Here and throughout, the rational function expression represents the solution set in the sense that the coefficient of $z^x := z_1^{x_1} \cdots z_d^{x_d}$ is 1 if x is a solution of $Ax \ge b$, $x \ge 0$, and it is 0 otherwise. In the following, we will sometimes abuse terminology and refer to the coordinates of solution vectors and the corresponding variables of the generating function interchangeably.) Among the rational function representations, Barvinok decompositions can be exponentially shorter than expressions obtained from enumerating fundamental parallelepipeds. On the other hand, the enumeration of fundamental parallelepipeds can be described in terms of explicit formulas and yields expressions with fewer distinct factors in the denominators of the rational functions. Computing the normal form of a rational function can lead to useful cancellations, but is computationally expensive and can also increase the size of the output exponentially.
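As an illustration of step (3a), once the lattice points of the fundamental parallelepiped are known, assembling the rational function is purely mechanical. The following sketch (our own rendering helper, with a naive string output) implements the standard formula $\Phi_C = \big(\sum_{p \in \Pi} z^p\big) / \prod_i (1 - z^{v_i})$.

```python
def cone_generating_function(pts, generators, varnames):
    """Render the rational function of a simplicial symbolic cone as a
    plain string: the numerator sums one monomial z^p per lattice point
    p of the fundamental parallelepiped, and the denominator has one
    factor (1 - z^v) per generator v.  Exponents may be negative."""
    def mono(e):
        factors = [f"{x}^{k}" for x, k in zip(varnames, e) if k != 0]
        return "*".join(factors) if factors else "1"
    num = " + ".join(mono(p) for p in pts)
    den = " * ".join(f"(1 - {mono(v)})" for v in generators)
    return f"({num}) / ({den})"
```

For example, the cone generated by (1, 0) and (1, 2), whose fundamental parallelepiped contains the lattice points (0, 0) and (1, 1), yields the expression `(1 + x^1*y^1) / ((1 - x^1) * (1 - x^1*y^2))`.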
We now go through each of the steps of the algorithm in detail.
3.1. MacMahon Lifting. Just as MacMahon did, we begin by using the MacMahon lifting, which we already met in Section 2, to transform the representation of the problem. The MacMahon lifting is implemented by the function macmahon defined in Algorithm 1.
Given $A \in \mathbb{Z}^{m \times d}$ and $b \in \mathbb{Z}^m$, we construct a matrix $V \in \mathbb{Z}^{(d+m) \times d}$ and a vector $q \in \mathbb{Z}^{d+m}$ by
$$V = \begin{pmatrix} \mathrm{Id}_d \\ A \end{pmatrix}, \qquad q = \begin{pmatrix} 0 \\ -b \end{pmatrix}.$$
Geometrically, we view this construction as defining a cone C = C(V; q) with apex q and the columns of V as generators. As already mentioned, the crucial property of C is that
$$\Omega^m(C) = \{x \in \mathbb{R}^d \mid Ax \ge b,\ x \ge 0\},$$
where for any polyhedron $P \subset \mathbb{R}^n$, $\Omega(P) = \pi(H^+_n \cap P)$ and $\pi : \mathbb{R}^n \to \mathbb{R}^{n-1}$ is the projection that forgets the last coordinate. Moreover, this construction of C is such that for any polyhedron P obtained throughout the process of applying Ω, the projection π maps P bijectively onto its image and moreover preserves the lattice structure of P. More precisely, let X denote the affine hull of C and let $X_j = \pi^j(X)$ denote the result of projecting away the last j coordinates. Then $\pi : \mathbb{R}^{n-j+1} \to \mathbb{R}^{n-j}$ induces a bijection between $X_{j-1}$ and $X_j$. It follows that π also induces bijections between P and π(P) for any $P \subset X_{j-1}$. This property of the MacMahon lifting will be crucial for our construction below. In fact, we have the even stronger property that π gives a bijection between $\mathbb{Z}^{n-j+1} \cap X_{j-1}$ and $\mathbb{Z}^{n-j} \cap X_j$. This latter property is essential when working with rational functions, but it is irrelevant for our purposes, as we will be working with symbolic cones instead.
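The lifting step can be sketched in a few lines (mirroring the role of Algorithm 1; plain Python lists instead of a matrix type, with V given row-wise as the identity stacked on top of A):

```python
def macmahon(A, b):
    """MacMahon lifting, sketched: from A (m x d, as a list of rows)
    and b (length m), build the symbolic cone C = (V, q, o) with
    generator matrix V = [Id_d; A] (stored row-wise), apex
    q = (0, ..., 0, -b_1, ..., -b_m), and all d facets closed."""
    m, d = len(A), len(A[0])
    V = [[1 if i == j else 0 for j in range(d)] for i in range(d)]
    V += [row[:] for row in A]  # stack A below the identity
    q = [0] * d + [-bi for bi in b]
    o = [0] * d
    return (V, q, o)
```

For a single inequality $x_1 + 2x_2 \ge 3$ this produces the cone with generator columns (1, 0, 1) and (0, 1, 2) and apex (0, 0, -3).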

Iterative Elimination of the Last Coordinate using Symbolic Cones.
A symbolic cone C is simply a triple (V, q, o), where V is a matrix of generators, q is the apex of the cone and o is a vector indicating which facets of C are open. This is the form in which we store cones throughout the algorithm. The triple (V, q, o) represents the set $C^o_{\mathbb{R}}(V; q)$, just as defined in Section 2. A linear combination of symbolic cones is a formal sum $\sum_i \alpha_i C_i$, i.e., a list of pairs $(\alpha_i, C_i)$ where each $C_i$ is a symbolic cone given as a triple. The meaning of these linear combinations is best understood in terms of indicator functions. For any set $S \subset \mathbb{R}^n$ we define the indicator function $[S] : \mathbb{R}^n \to \mathbb{R}$ by
$$[S](x) = \begin{cases} 1 & \text{if } x \in S, \\ 0 & \text{otherwise.} \end{cases}$$
Indicator functions form a vector space. In particular, addition of indicator functions is defined pointwise, as is multiplication by a scalar. For a symbolic cone C = (V, q, o) we will write [C] to denote the indicator function $[C^o_{\mathbb{R}}(V; q)]$ of the associated simplicial cone. Thus, the formal sum $\sum_i \alpha_i C_i$ represents the indicator function $\sum_i \alpha_i [C_i]$. Later on we will also talk about equivalence classes of such indicator functions, but it is important to stress that by default [S] stands for the actual indicator function and not an equivalence class thereof.
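In an implementation, symbolic cones can be stored as nested tuples (hence hashable) and linear combinations as dictionaries from cones to coefficients; a sketch (our data layout, one of several possible):

```python
def symbolic_cone(V, q, o):
    """A symbolic cone is the triple (V, q, o), stored as nested tuples
    so that it can be used as a dictionary key."""
    return (tuple(map(tuple, V)), tuple(q), tuple(o))

def add_combination(comb, cone, coeff):
    """Linear combinations of symbolic cones are dictionaries mapping
    cones to coefficients; addition is coefficient-wise, and cones
    whose coefficient becomes 0 are dropped.  Returns a new dict."""
    comb = dict(comb)
    comb[cone] = comb.get(cone, 0) + coeff
    if comb[cone] == 0:
        del comb[cone]
    return comb
```

Dropping zero-coefficient cones keeps the combinations short, which matters since every elimination step multiplies the number of cones.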
In the second step of the algorithm, we compute $\Omega^m(C)$ by iteratively applying a function elimLastCoord, summarized in Algorithm 3, that takes a linear combination $\sum_i \alpha_i C_i$ of symbolic cones and returns a representation of $\Omega(\sum_i \alpha_i [C_i])$ as a linear combination of symbolic cones.
The action of the Omega operator on indicator functions is induced by its action on sets. There is, however, an important subtlety. The intersection with the half-space $H^+_n$ is straightforward to define for indicator functions: for any indicator function f,
$$(H^+_n \cap f)(x) := f(x) \cdot [H^+_n](x).$$
To define the projection, we make use of the property of the MacMahon lifting that for any set S appearing throughout the algorithm and any $(x_1, \ldots, x_{n-1}) \in \pi(S)$ there exists a unique $x_n$ such that $(x_1, \ldots, x_n) \in S$, and the same holds for indicator functions. This allows us to define π(f) for an indicator function f by
$$\pi(f)(x_1, \ldots, x_{n-1}) := \begin{cases} f(x_1, \ldots, x_{n-1}, x_n) & \text{if there is an } x_n \text{ with } f(x_1, \ldots, x_{n-1}, x_n) \ne 0, \\ 0 & \text{otherwise.} \end{cases}$$
Since, if it exists, the $x_n$ in the above definition is uniquely determined, it follows that π(f) is well-defined. In fact, all indicator functions f to which we apply π take only the values 0 and 1; however, we will not make use of this fact. With these preparations we can now define $\Omega(f) := \pi(H^+_n \cap f)$. Since the Omega operator is linear, we have the important consequence that
$$\Omega\Big(\sum_i \alpha_i [C_i]\Big) = \sum_i \alpha_i\, \Omega([C_i]).$$
Therefore it suffices to define elimLastCoord on symbolic cones $C_i$; we do not need to handle general polyhedra or indicator functions directly.
To compute Ω(C) for a simplicial symbolic cone C, we decompose $C \cap H^+_n$ into a signed sum of cones. The most well-known decomposition of this type is the Brion decomposition, which states, roughly, that "modulo lines" a polyhedron is equal to the sum of the tangent cones at its vertices. Let us make this statement precise. For a given polyhedron P and a point v, the tangent cone of P at v is
$$\mathrm{tcone}(P, v) = \{v + u \mid v + \delta u \in P \text{ for all sufficiently small } \delta > 0\}.$$
Similarly, the feasible cone of P at v is $\mathrm{fcone}(P, v) = \{u \mid v + \delta u \in P \text{ for all sufficiently small } \delta > 0\}$, i.e., $\mathrm{tcone}(P, v) = v + \mathrm{fcone}(P, v)$. We call the tangent cones of P at its vertices v the vertex cones of P. For each vertex cone $C_i = \mathrm{tcone}(P, v_i)$ of P, let $\rho_{C_i}$ denote a rational function expression for the generating function $\Phi_{C_i}$ of lattice points in $C_i$. For simplicial cones, $\rho_{C_i}$ can be obtained via (9), as we have seen in Section 2. For non-simplicial cones, such expressions can be obtained via triangulation, for example, as in Figure 6. Fortunately, it turns out that we only have to deal with the simplicial case in our algorithm; we never have to triangulate. Given the above notation, Brion's theorem states the following. Theorem 3.1 (Brion [24]). For any polyhedron P with vertices $v_1, \ldots, v_N$,
$$\Phi_P \equiv \sum_{i=1}^{N} \Phi_{\mathrm{tcone}(P, v_i)} \quad \text{modulo lines.}$$
Geometrically, i.e., on the level of linear combinations of indicator functions, this identity can be read as
$$[P] \equiv \sum_{i=1}^{N} [\mathrm{tcone}(P, v_i)] \quad \text{modulo lines.}$$
The phrase "modulo lines" means that this identity holds true up to a finite sum of indicator functions of polyhedra which contain lines. A line is a set of the form $a + \mathbb{R}b$ for some vectors a, b with $b \ne 0$. See [13] for details, in particular Theorem 6.4 therein.
For our purposes, working modulo lines is not ideal, since the Omega operator is not well-defined modulo lines, as we already discussed in Section 2.4. Another example that shows this, even in the simple case of one-dimensional intervals, is the following:
$$\Omega\big([(-\infty, 1]]\big) \ne \Omega\big(-[(1, \infty)]\big),$$
even though $[(-\infty, 1]] \equiv -[(1, \infty)]$ modulo lines. We therefore have to work with a decomposition that holds exactly, not just modulo lines.
One option is the Lawrence-Varchenko decomposition, which makes use of the notion of forward cones [45]. In general, "forward" can be defined with respect to an arbitrary half-open half-space. For our purposes, the following more particular definition will suffice, however: A vector v is forward if its first non-zero entry is positive and a symbolic cone is forward if all of its generators v i are forward. Note that the cone obtained from the MacMahon lifting is forward by construction.
For any non-zero vector v, we have that v is forward if and only if −v is not forward. Therefore, we can turn a simplicial cone into a forward simplicial cone by reversing the direction of all "backward" generators, as long as we adjust the sign and which facets of the new cone are open. The resulting forward cone will be equivalent modulo lines to the original cone. Here we make use of the fact that for any vector $v \ne 0$ and any apex q, the one-dimensional rays satisfy
$$[q + \mathbb{R}_{\ge 0}\, v] \equiv -[q + \mathbb{R}_{> 0}\, (-v)] \quad \text{modulo the line } q + \mathbb{R}v.$$
Applying this operation to Brion's theorem, we obtain the following identity of indicator functions, which is essentially the theorem of Lawrence and Varchenko, even though their theorem was originally stated in terms of generating functions. Theorem 3.2 (Lawrence-Varchenko [45,60]). For any polyhedron P with vertices $v_1, \ldots, v_N$,
$$[P] = \sum_{i=1}^{N} (-1)^{b_i} \big[\widetilde{\mathrm{tcone}}(P, v_i)\big],$$
where $\widetilde{\mathrm{tcone}}(P, v_i)$ denotes the forward cone obtained from $\mathrm{tcone}(P, v_i)$ by reversing its $b_i$ backward generators and opening the corresponding facets.

Algorithm 2: Flip
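A sketch of the flip operation along the lines just described (our bookkeeping convention: each reversed generator toggles the openness of its facet and flips the overall sign; generators are stored as rows here for simplicity):

```python
def flip(cone):
    """Flip a symbolic cone (V, q, o) forward.  V is stored as a tuple
    of generator vectors.  Reversing every backward generator yields a
    forward cone equivalent to the original one modulo lines; each
    reversal toggles the openness of the corresponding facet and flips
    the overall sign.  Returns (sign, flipped_cone)."""
    V, q, o = cone
    sign = 1
    new_V, new_o = [], []
    for v, oi in zip(V, o):
        first = next((x for x in v if x != 0), 0)
        if first < 0:  # backward generator: reverse it
            v = tuple(-x for x in v)
            oi = 1 - oi
            sign = -sign
        new_V.append(tuple(v))
        new_o.append(oi)
    return sign, (tuple(new_V), q, tuple(new_o))
```

Note that a forward cone passes through unchanged with sign +1, so flipping is idempotent on forward cones.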
The crucial point here is that this identity holds exactly, without taking equivalence classes modulo lines. The advantage of this exact identity over Brion's result is that we can use it safely in conjunction with the Omega operator. Since we are not taking equivalence classes, the problem of well-definedness does not even arise. Working with rational function representations necessitates working with equivalence classes, a complication we can avoid by working on the level of symbolic cones instead.
The key observation behind the Polyhedral Omega algorithm is now that for the simplicial cones C appearing through the course of the algorithm, we can give explicit formulas for a Lawrence-Varchenko decomposition of the polyhedron $C \cap H^+_n$ into simplicial cones. This elimination of the last variable is performed by the function elimLastCoord given in Algorithm 3. To see the correctness of this algorithm, we proceed just as we did above: we first derive explicit formulas that determine a Brion decomposition and then we convert this Brion decomposition into a Lawrence-Varchenko decomposition by flipping all cones forward. Since the resulting decomposition is an exact identity of indicator functions, the Omega operator can safely be applied to the resulting decomposition in the next iteration.

Algorithm 3: Eliminate Last Coordinate
Input : A symbolic cone C = (V, q, o) with $V \in \mathbb{Z}^{n \times k}$, $q \in \mathbb{Q}^n$ and $o \in \{0, 1\}^k$, such that the projection π that forgets the last coordinate induces a bijection between aff(C) and its image in $\mathbb{R}^{n-1}$. Output: A linear combination $\sum_i \alpha_i C_i$ of symbolic cones, represented as a dictionary mapping each symbolic cone $C_i$ to its coefficient $\alpha_i$. To derive these formulas, we have to distinguish three cases: whether the apex q of C lies above the hyperplane $H_n = \{x \mid x_n = 0\}$, on the hyperplane, or below the hyperplane.
Note that our original matrix A has m rows and d columns. After the MacMahon lifting, we are dealing with a d-dimensional cone C in $\mathbb{R}^{d+m}$. Through successive eliminations of the last coordinate, we will eventually obtain a linear combination of d-dimensional cones in $\mathbb{R}^d$. Throughout this process we will denote the index of the last coordinate by $n = d+m, d+m-1, \ldots, d$ and the number of generators by $k = d$.

Figure 11: Intersecting a cone with the half-space $H^+_n$ when the apex is above the hyperplane $H_n$.
Case A: Apex above the hyperplane. Let C be a simplicial cone in R n with apex q, openness o and generators v 1 , . . . , v k such that q n > 0. This situation is illustrated in Figure 11. Since q is contained in the half-space H + n , the polyhedron C ∩ H + n has q as one of its vertices. The other vertices of C ∩ H + n are in one-to-one correspondence with the intersections w j of the extreme rays R j of C with the hyperplane H n . Since q n > 0, the extreme ray R j generated by v j intersects H n if and only if v j ,n < 0, i.e., the generator v j points "down" with respect to the last coordinate. Let denote the set of all indices j such that the corresponding generator v j points down. The intersection w j = H n ∩R j of the intersection hyperplane with such a "downward" ray R j , j ∈ J − can be computed by Applying Brion's theorem, we find that, modulo lines, C ∩ H + n can be written as a sum of vertex cones at the vertices q and w j for j ∈ J − . The vertex cone at q is identical to C , in particular it has the same generators. The generators g j 1 , . . . , g j k of the vertex cone C j := tcone(w j ,C ∩ H + n ) fall into three classes: (1) One of the generators of C j , let us call it g j j , is just the reverse of the generator v j of C , i.e., In other words, g j j points from the apex w j of C j to the vertex q of C ∩ H + n . (2) For each i ∈ J − with i = j , there is a generator g j i of C j that points to another vertex w i of C ∩H + n . These generators g j i are given by (3) Finally, for each i ∈ [k] \ J − , there is a generator g j i that lies in the intersection of H n with the two-dimensional face of C that is generated by v j and v i . Note that v j points down while v i points up or is horizontal with respect to the last coordinate. We can therefore find a suitable g j i by taking a linear combination of v j and v i with non-negative coefficients, such that the last coordinate of g j i is zero. 
Thus, we obtain
$$ g^j_i = v_{i,n} v_j - v_{j,n} v_i. $$
If $v_i$ is horizontal, i.e., $v_{i,n} = 0$, then $g^j_i$ is just a positive multiple of $v_i$.
Note that even though, intuitively, the generators of the second and third kind arise from different constructions, the formulas defining $g^j_i$ in both cases are identical: $g^j_i$ is a linear combination of $v_j$ and $v_i$ such that the last coordinate of $g^j_i$ is zero. It is important to note that the cones $\operatorname{tcone}(v, C \cap H_n^+)$, for $v$ a vertex of $C \cap H_n^+$, are all simplicial. If $v = q$, then this is true by assumption. If $v = w_j$ for some $j$, then we can observe this by noting that the matrix $G_j$, which has the generators $g^j_i$ of $C_j$ as columns, arises from the matrix $V$, which has the generators $v_i$ of $C$ as columns, via the transformation $G_j = V \cdot T$, where $T$ is the $k \times k$ matrix with $T_{jj} = -1$, $T_{ii} = -v_{j,n}$ and $T_{ji} = v_{i,n}$ for $i \neq j$, and all other entries zero. This matrix is of full rank as $v_{j,n} < 0$.
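To make the transformation concrete, the following sketch (our own illustration, with hypothetical helper names) builds a case-A matrix $T$ for a small example and checks that the columns of $G_j = V \cdot T$ are exactly the generators described above, with last coordinate zero in every column except column $j$:

```python
def case_a_transform(V, j):
    """Case A: build the k x k matrix T with T[j][j] = -1, T[i][i] = -v_{j,n}
    and T[j][i] = v_{i,n} for i != j, so that G_j = V * T has columns
    g^j_j = -v_j and g^j_i = v_{i,n} v_j - v_{j,n} v_i."""
    n, k = len(V), len(V[0])
    vjn = V[n - 1][j]  # last coordinate of generator v_j (negative in case A)
    T = [[0] * k for _ in range(k)]
    for i in range(k):
        if i == j:
            T[j][j] = -1
        else:
            T[i][i] = -vjn
            T[j][i] = V[n - 1][i]
    return T

def matmul(A, B):
    return [[sum(A[r][t] * B[t][c] for t in range(len(B)))
             for c in range(len(B[0]))] for r in range(len(A))]

# Generators as columns: v_1 = (1, 0, -2) points down, v_2 = (0, 1, 3) points up.
V = [[1, 0],
     [0, 1],
     [-2, 3]]
G = matmul(V, case_a_transform(V, j=0))
```

Column $0$ of `G` is $-v_1 = (-1, 0, 2)$ and column $1$ is $v_{2,n} v_1 - v_{1,n} v_2 = (3, 2, 0)$, which lies in the hyperplane $x_n = 0$ as expected.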
We have now determined apex and generators for each of the cones $C_j$. Moreover, we observe that the half-space $H_n^+$ is closed. Therefore the facet of $C_j$ corresponding to generator $g^j_j$ is closed, while all other facets inherit their openness from $C$; the openness vector of $C_j$ is obtained from $o$ accordingly. We thus have found an explicit Brion decomposition
$$ [C \cap H_n^+] \equiv [C] + \sum_{j \in J^-} [C_j] \pmod{\text{lines}}. $$
On the other hand, the affine hull $\operatorname{aff}(C)$ of $C$ is such that $\pi|_{\operatorname{aff}(C)}$ is a bijection onto its image, so that in particular $\pi|_{C_j}$ is one-to-one for each cone $C_j$ in the decomposition. With these observations, flipping each cone $C_j$ forward and applying $\pi$, we obtain a decomposition of $\pi(C \cap H_n^+)$ which holds exactly, i.e., not just modulo lines. Note that $C$ does not need to be flipped since we know that $C$ is forward, by construction.
Case B: Apex below the hyperplane. The case where the apex $q$ of the cone $C$ lies strictly below the hyperplane $H_n$, i.e., $q_n < 0$, is similar to the previous one. The difference is that here $q$ is not a vertex of $C \cap H_n^+$, as it is cut off by the intersection, as illustrated in Figure 12. The vertices $w_j$ of $C \cap H_n^+$ are thus in one-to-one correspondence with the generators $v_j$ of $C$ that point "up", i.e., that satisfy $v_{j,n} > 0$. Let $J^+ := \{ j \in [k] : v_{j,n} > 0 \}$ and let $w_j := H_n \cap R_j$ denote the intersection of $H_n$ with the ray $R_j = q + \mathbb{R}_{\geq 0} v_j$ generated by $v_j$ for each $j \in J^+$. Then $w_j$ can be computed via
$$ w_j = q - \frac{q_n}{v_{j,n}} v_j. $$
Let $C_j = \operatorname{tcone}(w_j, C \cap H_n^+)$ denote the vertex cone at $w_j$. Again, we can think of the generators $g^j_1, \ldots, g^j_k$ of $C_j$ as belonging to three different types.
(1) One generator, which we call $g^j_j$, is just the corresponding generator of $C$, i.e., $g^j_j = v_j$. (2) For every $i \in J^+$, $i \neq j$, there is one generator $g^j_i$ which points from $w_j$ to the vertex $w_i$ of $C \cap H_n^+$, i.e.,
$$ g^j_i = v_{j,n} v_i - v_{i,n} v_j. $$
Note that the sign is opposite from what we had in the previous case, to ensure that the factor relating $g^j_i$ to $w_i - w_j$ is positive. (3) Finally, for each $i \in [k] \setminus J^+$, there is a generator $g^j_i$ that lies in the intersection of $H_n$ with the two-dimensional face of $C$ generated by $v_j$ and $v_i$, given by the same expression
$$ g^j_i = v_{j,n} v_i - v_{i,n} v_j. $$
If $v_i$ is horizontal, i.e., $v_{i,n} = 0$, then $g^j_i$ is just a multiple of $v_i$. Again, the terms in this expression have opposite signs compared to the previous case, in order to guarantee that the coefficients $v_{j,n}$ and $-v_{i,n}$ are both non-negative.
Just as in the previous case, the expressions for generators of the second and third kind are identical. Also, we can set up a transformation matrix $T'$ to express the generator matrix $G_j$ of $C_j$ as a transformation of the generator matrix $V$ of $C$, via $G_j = V \cdot T'$. In this case, $T'$ is the $k \times k$ matrix with $T'_{jj} = 1$, $T'_{ii} = v_{j,n}$ and $T'_{ji} = -v_{i,n}$ for $i \neq j$, and all other entries zero. First of all, we note that this matrix again has full rank since $v_{j,n} > 0$, which guarantees that the cones $C_j$ are simplicial. But more importantly, $T' = -T$, where $T$ is the transformation matrix from case A, which means that both cases A and B can be treated with one common transformation
$$ G_j = \operatorname{sg}(q_n) \cdot V \cdot T, \qquad \text{where } \operatorname{sg}(x) = \begin{cases} 1 & \text{if } x \geq 0, \\ -1 & \text{if } x < 0. \end{cases} $$
The definition $\operatorname{sg}(0) = 1$ will come in handy in case C, as we will see below.
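The case-B transformation can be checked on a small example. The sketch below (our own, with hypothetical helper names) builds $T'$ directly from its definition and confirms that column $j$ of $G_j = V \cdot T'$ reproduces $v_j$ while the remaining columns land in the hyperplane $x_n = 0$:

```python
def case_b_transform(V, j):
    """Case B: T'[j][j] = 1, T'[i][i] = v_{j,n}, T'[j][i] = -v_{i,n} for
    i != j, which is exactly the negative of the case-A matrix T."""
    n, k = len(V), len(V[0])
    vjn = V[n - 1][j]  # last coordinate of v_j (positive in case B)
    T = [[0] * k for _ in range(k)]
    for i in range(k):
        if i == j:
            T[j][j] = 1
        else:
            T[i][i] = vjn
            T[j][i] = -V[n - 1][i]
    return T

def matmul(A, B):
    return [[sum(A[r][t] * B[t][c] for t in range(len(B)))
             for c in range(len(B[0]))] for r in range(len(A))]

# v_1 = (1, 0, 2) points up (so j = 0 is in J+), v_2 = (0, 1, -3) points down.
V = [[1, 0],
     [0, 1],
     [2, -3]]
G = matmul(V, case_b_transform(V, j=0))
```

Column $0$ of `G` equals $v_1$ (the generator $g^j_j = v_j$), and column $1$ is $v_{1,n} v_2 - v_{2,n} v_1 = (3, 2, 0)$, with last coordinate zero.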
We have now computed generators and apices for the vertex cones $C_j$. Again, the half-space $H_n^+$ is closed, whence the facet of $C_j$ corresponding to generator $g^j_j$ is closed, while all other facets inherit their openness from $C$. Therefore we now have an explicit Brion decomposition
$$ [C \cap H_n^+] \equiv \sum_{j \in J^+} [C_j] \pmod{\text{lines}}. $$
We proceed just as in the previous case and apply flip and $\pi$ to obtain a Lawrence–Varchenko decomposition that holds exactly, not just modulo lines.
Case C: Apex on the hyperplane. At first glance, the case where the apex $q$ of the cone $C$ lies on the hyperplane $H_n$, i.e., $q_n = 0$, appears special. The reason is that in this case $C \cap H_n^+$ can be a non-simplicial cone, as shown in Figure 13(a), and that the only vertex cone of $C \cap H_n^+$ is $C \cap H_n^+$ itself. Thus a simple Brion decomposition would take us out of the regime of formal sums of simplicial cones. Of course we could solve this issue by triangulating $C \cap H_n^+$. But triangulation algorithms are a complex subject in their own right [34,46,53], at odds with the simple approach based on explicit decomposition formulas we have applied thus far.
Fortunately, it turns out that the same formula we used to handle case A can also be applied in case C to obtain a decomposition into simplicial cones. It just so happens that all simplicial cones that appear have the same apex $q$. This phenomenon can be explained with a deformation argument: For a small $\epsilon > 0$, the apex of $\epsilon e_n + C$ lies above the hyperplane $H_n$, so that case A applies and we obtain a decomposition of $(\epsilon e_n + C) \cap H_n^+$ into the vertex cones at $\epsilon e_n + q$ and at the points $w_j(\epsilon)$ for $j \in J^-$. The generators of the cones $\epsilon e_n + C$ and $C_j(\epsilon)$ are independent of $\epsilon$. The apices $\epsilon e_n + q$ and $w_j(\epsilon)$ of the cones $\epsilon e_n + C$ and $C_j(\epsilon)$, on the other hand, do depend on $\epsilon$. However, this dependence is continuous, so that $\epsilon e_n + q \to q$ and $w_j(\epsilon) \to q$ as $\epsilon \to 0$. In the limit, we thus obtain the desired decomposition of $C \cap H_n^+$, as illustrated in Figure 13(b). That this works in general is the content of Liu's theorem.

Theorem 3.3 (Liu [48]). Let $A \in \mathbb{R}^{m \times n}$, let $b : (-1, 1) \to \mathbb{R}^m$ be a continuous function and let $P(t) = \{x \mid Ax \leq b(t)\}$.
Suppose that for each $0 \neq t \in (-1, 1)$ the polyhedron $P(t)$ is non-empty and has precisely $\ell$ vertices $w_{t,1}, \ldots, w_{t,\ell}$. Suppose further that for each $j = 1, \ldots, \ell$ there exists a fixed cone $C_j$ such that $\operatorname{fcone}(w_{t,j}, P(t)) = C_j$ for all $0 \neq t \in (-1, 1)$, i.e., the apex of each tangent cone $\operatorname{tcone}(w_{t,j}, P(t))$ may change depending on $t$, but the feasible cone $\operatorname{fcone}(w_{t,j}, P(t))$ does not. Then, for each $j$, the vertex $w_{t,j}$ converges to some vertex of $P(0)$ as $t \to 0$. Let $v$ be a vertex of $P(0)$ and let $J_v$ be the set of all $j$ such that $w_{t,j}$ converges to $v$. Then
$$ [\operatorname{tcone}(v, P(0))] \equiv \sum_{j \in J_v} [v + C_j] \pmod{\text{lines}}. $$
To apply this theorem, we first observe that there exists a matrix $A$ and a continuous function $b$ defined on the half-open interval $[0, 1)$ such that $(\epsilon e_n + C) \cap H_n^+ = \{x \mid Ax \leq b(\epsilon)\}$. To extend $b$ continuously to $(-1, 1)$ we simply let $b(-\epsilon) := b(\epsilon)$. By construction, $(\epsilon e_n + C) \cap H_n^+$ is non-empty for $\epsilon > 0$. Moreover, $(\epsilon e_n + C) \cap H_n^+$ has the same combinatorial type for each $\epsilon > 0$. In particular, the number of vertices of $(\epsilon e_n + C) \cap H_n^+$ and the feasible cones at these vertices do not change for $\epsilon > 0$. For $\epsilon = 0$ we obtain the polyhedron $C \cap H_n^+$ that we are interested in. $C \cap H_n^+$ has just one vertex, namely $q$, to which all the vertices of $(\epsilon e_n + C) \cap H_n^+$ converge. Therefore, the conclusion of Theorem 3.3 gives us the desired decomposition, where each $C_j$ is given by precisely the same formulas as in case A. In particular, the formula for $w_j$ gives $q$ as the apex of each of the $C_j$, as expected. We can view this decomposition as giving, implicitly, a triangulation of the dual cone of $C \cap H_n^+$. Just as in the previous two cases, we obtain an exact decomposition of $C \cap H_n^+$ (not just modulo lines) by flipping all cones on the right-hand side forward. By projection we then obtain the desired decomposition of $\pi(C \cap H_n^+)$. Note that our convention $\operatorname{sg}(0) = 1$ ensures that $G_j = \operatorname{sg}(q_n) \cdot V \cdot T$ holds for all three cases.

Summary and Optimizations.
We have now seen that elimLastCoord correctly applies $\Omega$ to a linear combination of symbolic cones of the form produced by macmahon. The function elim, defined in Algorithm 4, iterates this elimination of the last variable $m$ times. As the output of elim we thus obtain a list of simplicial symbolic cones $C_1, \ldots, C_N$ and multiplicities $\alpha_1, \ldots, \alpha_N$ such that
$$ [P] = \sum_{i=1}^{N} \alpha_i [C_i], \tag{11} $$
where each symbolic cone $C_i$ is given in terms of a triple $(V, o, q)$, where $V$ is an integer matrix of generators, $o$ is a vector indicating which faces of the cone are open and $q$ is a rational vector giving the apex of the cone. Note that each $C_i$ is a $d$-dimensional cone in $\mathbb{R}^d$. The above identity of indicator functions holds exactly, not just modulo lines.
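The iteration has a simple dictionary-based shape. The following runnable sketch is our own: the real elimLastCoord implements the case analysis above, and is replaced here by a clearly labeled toy stand-in, so that only the driver loop and the term collection are illustrated:

```python
def elim(cone, m, elim_last_coord):
    """Driver loop sketched from the text: apply the last-coordinate
    elimination m times, collecting terms in a dictionary that maps
    (hashable) symbolic cones to integer multiplicities.
    `elim_last_coord` returns a list of (cone, multiplicity) pairs."""
    terms = {cone: 1}
    for _ in range(m):
        new_terms = {}
        for c, mult in terms.items():
            for c2, mult2 in elim_last_coord(c):
                coeff = new_terms.get(c2, 0) + mult * mult2
                if coeff == 0:
                    new_terms.pop(c2, None)  # cancellation: drop zero terms
                else:
                    new_terms[c2] = coeff
        terms = new_terms
    return terms

# Toy stand-in for elimLastCoord (NOT the real geometry): drop the last
# apex coordinate and emit an extra pair of terms that cancels exactly.
def toy_step(c):
    base = c[:-1]
    return [(base, 1), (base + ('flipped',), 1), (base + ('flipped',), -1)]

result = elim(('q1', 'q2', 'q3'), m=2, elim_last_coord=toy_step)
```

After two rounds the canceling pairs have been removed and only the single surviving cone `('q1',)` remains with multiplicity 1.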

Algorithm 4: Eliminate Coordinates
Input: A symbolic cone $C = (V, q, o)$ with $V \in \mathbb{Z}^{(d+m) \times d}$, $q \in \mathbb{Q}^{d+m}$ and $o \in \{0,1\}^d$ such that the projection that forgets the last $m$ coordinates induces a bijection between $\operatorname{aff}(C)$ and $\mathbb{R}^d$.
Output: A linear combination $\sum_i \alpha_i C_i$ of symbolic cones, represented as a dictionary mapping cones to multiplicities.
def elim(C):
    D ← {C : 1}
    repeat m times: D ← map(elimLastCoord, D)
    return D

The elimination algorithm is easy to implement, since all cones are stored in terms of integer matrices and all transformations are given explicitly. However, it is important to highlight two implementation aspects for performance reasons.
First, it is important to ensure that the generators of a given cone $C_i$ are stored in primitive form, as defined in Section 2. Algorithm 5 shows how to compute $\operatorname{prim}(v)$ for integer vectors. Replacing a column $v$ of $V$ with $\operatorname{prim}(v)$ does not change the cone $C_i$ that the generator matrix $V$ defines, but it may significantly reduce the encoding size of the matrix $V$. Therefore, after each round of elimination, every column $v$ of $V$ should be replaced by $\operatorname{prim}(v)$, since the transformations applied in each round may introduce non-primitive columns.
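For integer vectors, making a vector primitive amounts to dividing out the gcd of its entries. A minimal Python version (our own sketch of this step):

```python
from math import gcd
from functools import reduce

def prim(v):
    """Shortest integer vector that is a positive multiple of the
    non-zero integer vector v: divide all entries by their gcd."""
    g = reduce(gcd, (abs(c) for c in v))
    return tuple(c // g for c in v)

# Example: the entries of (4, -6, 8) have gcd 2.
p = prim((4, -6, 8))
```

Since the gcd is positive, the direction of the vector is preserved, so the cone spanned by the column is unchanged.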

Algorithm 5: Make a vector primitive
Input: A non-zero integer vector $v \in \mathbb{Z}^n$. Output: The shortest integer vector that is a positive multiple of $v$.
def prim(v): return $v / \gcd(v_1, \ldots, v_n)$

Second, one should note that after each round of elimination, a given cone $C_i$ may appear multiple times in the sum (11), with multiplicities of opposite signs. It is therefore crucial to collect terms after each elimination and sum up the multiplicities of all instances of a given cone $C_i$, as a significant amount of cancellation may occur in some cases. For this purpose, the generators of the cones $C_i$ should be stored in (lexicographically) sorted order, since when generators are primitive and sorted, every rational simplicial cone has a unique representation as a symbolic cone. Also, it may be expedient to store the sum (11) as a dictionary mapping symbolic cones $C_i$ to multiplicities $\alpha_i$ throughout the computation; see the definition of map (Algorithm 6).

Algorithm 6: Map Linear Combination of Symbolic Cones
Input: A function $f$ which maps a symbolic cone to a linear combination of cones, and a linear combination $\sum_i \alpha_i C_i$ of symbolic cones, represented as a dictionary $D$ mapping symbolic cones $C_i$ to multiplicities $\alpha_i$.

Output: A dictionary representing the linear combination $\sum_i \alpha_i f(C_i)$.
Writing $f(C_i) = \sum_j \beta_{i,j} C_{i,j}$, coefficients are collected by the $C_{i,j}$, i.e., if $C := C_{i_1,j_1} = \ldots = C_{i_k,j_k}$ are all the terms equal to a given cone $C$, then the dictionary contains only the single key-value pair $\left( C,\ \sum_{l=1}^{k} \alpha_{i_l} \beta_{i_l,j_l} \right)$.
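A minimal version of this bookkeeping might look as follows. This is our own sketch (named `map_cones` to avoid shadowing Python's built-in `map`); the canonical key simply sorts the generator tuples, assuming they are already primitive:

```python
def canonical(generators):
    """Canonical key for a symbolic cone's generator set: lexicographically
    sorted tuple (generators assumed already primitive)."""
    return tuple(sorted(generators))

def map_cones(f, D):
    """Apply f (cone -> list of (cone, coefficient) pairs) to a linear
    combination of cones stored as dict {cone: multiplicity}, collecting
    coefficients of equal cones and dropping exact cancellations."""
    out = {}
    for cone, alpha in D.items():
        for image, beta in f(cone):
            key = canonical(image)
            coeff = out.get(key, 0) + alpha * beta
            if coeff == 0:
                out.pop(key, None)
            else:
                out[key] = coeff
    return out

# Two input cones whose images overlap: one shared term cancels exactly.
D = {((1, 0), (0, 1)): 1, ((1, 0), (1, 1)): 1}
f = lambda c: [((c[1], c[0]), 1), (((0, 1), (1, 0)), -1)]
result = map_cones(f, D)
```

Here the image of the first cone cancels against its own correction term, while the second cone contributes one surviving term and one term with multiplicity $-1$.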

Conversion to Rational Functions.
The decomposition (11) into symbolic cones that we have so far obtained has many advantages over a rational function expression for Φ P . In particular, (11) is very compact, highly structured and has a clear geometric meaning, which makes it suitable for further processing by human and machine alike. Therefore, when applying our algorithm, we recommend investigating whether the representation (11) can be used instead of a rational function expression.
In case a rational function expression for $\Phi_P$ is indeed required, we present two different methods for computing it in this section. The first method explicitly enumerates the lattice points in the fundamental parallelepiped of a cone, using an explicit formula based on the Smith normal form of the matrix of generators. The second method uses Barvinok's recursive decomposition to express a given cone as a short signed sum of unimodular cones. The first approach has the advantage of giving the rational function explicitly, in terms of a simple formula, rather than appealing to a complex recursive algorithm. The second approach has the advantage that it produces much shorter formulas. In fact, if the dimension is fixed, Barvinok's algorithm is guaranteed to produce polynomial-size expressions, whereas the number of points in the fundamental parallelepipeds involved may very well be exponential. However, the representation (11) in terms of symbolic cones is shorter than either of the two rational function expressions. Moreover, it turns out that in practice, conversion to rational functions is usually the bottleneck of the algorithm, taking much longer than the computation of (11) on a large class of inputs, no matter which of the two conversion methods is used; see Section 4. Both approaches have in common that they convert (11) into a rational function identity one simplicial cone at a time. If $\rho_{C_i}(z)$ is a rational function expression for $\Phi_{C_i}(z)$ for each $i$, then
$$ \rho(z) = \sum_{i=1}^{N} \alpha_i \, \rho_{C_i}(z) $$
is a rational function expression for $\Phi_P(z)$. Therefore, we can restrict our attention to computing rational function representations $\rho_C$ for a given simplicial symbolic cone $C$.
Fundamental Parallelepiped Enumeration. As we have already seen in Section 2,
$$ \Phi_C(z) = \frac{\sum_{u \in \Pi^o(v_1, \ldots, v_d; q) \cap \mathbb{Z}^d} z^u}{\prod_{i=1}^{d} (1 - z^{v_i})}, $$
where the $v_i$ are the generators of $C$ and $\Pi^o(v_1, \ldots, v_d; q)$ is the fundamental parallelepiped of $C$ with given openness $o$ and apex $q$. The number of lattice points in the fundamental parallelepiped $\Pi$ is equal to the absolute value of the determinant of the matrix $V$ of generators (provided $V$ is square), and this determinant can be exponential in the encoding length of $V$ even if the dimensions of $V$ are fixed. Therefore the polynomial in the numerator of the above expression can be exponential in size, even in fixed dimension.
Nonetheless, this representation of ρ C is attractive because the numerator is very simple to compute. Contrary to what one might think at first glance, enumerating the lattice points in the fundamental parallelepiped does not require solving another linear Diophantine system. Instead, this set of points can be generated directly, given the Smith normal form of the matrix V . This technique is well-known [26,42], but the method is very instructive and the formula we use is slightly different from [26,42] so we provide its derivation here. To simplify the exposition, we will assume that the apex q = 0 is at the origin and that o = 0, i.e., the cone C is closed.
Every integer matrix $V \in \mathbb{Z}^{m \times n}$ has a Smith normal form, which we define as a triple of integer matrices $U, S, W$ with $V = USW$ such that the following properties hold: First, $U \in \mathbb{Z}^{m \times m}$ and $W \in \mathbb{Z}^{n \times n}$ are unimodular, i.e., they are invertible over the integers or, equivalently, $\det U = \det W = \pm 1$. Second, $S \in \mathbb{Z}^{m \times n}$ is a diagonal matrix with diagonal entries $s_1, \ldots, s_{\min(m,n)}$ such that if $k = \operatorname{rank} V$ then $s_1, \ldots, s_k > 0$ and $s_{k+1}, \ldots, s_{\min(m,n)} = 0$ and, moreover, $s_1 \mid s_2$, $s_2 \mid s_3$, \ldots, $s_{k-1} \mid s_k$. The matrix $S$ is uniquely determined, but the transformations $U$ and $W$ are not. Nonetheless we will simply say that $USW = V$ is "the" Smith normal form of $V$. Note that if $V$ is square then $\det V = \pm\, s_1 s_2 \cdots s_k$. Since in our situation $C$ is always a simplicial cone, $V$ will always have full rank, so that $k$ is the number of generators of $C$. For now, we will make the additional assumption that $C$ is full-dimensional, which means that $V$ is square and $k = m = n$.
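For a concrete instance (a hand-computed example of ours, not taken from the text), $V = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$ has Smith normal form with $S = \operatorname{diag}(1, 2)$. The snippet below checks the defining properties:

```python
# Hand-computed Smith normal form V = U S W (example of ours).
V = [[1, 2], [3, 4]]
U = [[1, 0], [3, 1]]   # unimodular: det U = 1
S = [[1, 0], [0, 2]]   # diagonal, s_1 = 1 divides s_2 = 2
W = [[1, 2], [0, -1]]  # unimodular: det W = -1

def matmul(A, B):
    return [[sum(A[r][t] * B[t][c] for t in range(len(B)))
             for c in range(len(B[0]))] for r in range(len(A))]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

USW = matmul(matmul(U, S), W)
```

The product recovers $V$, both transformations have determinant $\pm 1$, $s_1 \mid s_2$ holds, and $|\det V| = s_1 s_2 = 2$.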
One way to interpret the Smith normal form is the following. Let $L$ denote the lattice spanned by the columns of $V$; $L$ is a sublattice of the integer lattice $\mathbb{Z}^n$. In this setting, the transformations $U$ and $W$ can be viewed as changing bases on both $L$ and $\mathbb{Z}^n$ so that, with respect to the new bases, the fundamental parallelepiped of $L$ is just a rectangle with side-lengths given by the diagonal elements $s_i$. Figure 14 illustrates this idea with an example. Suppose we start out with the standard basis $e_1, \ldots, e_n$ of $\mathbb{Z}^n$ and pass to the new basis $e'_i := U e_i$ of $\mathbb{Z}^n$ and the new basis $v'_i := V W^{-1} e_i$ of $L$. Since $V W^{-1} = US$, the matrix $S$ now gives the coordinates of the $v'_i$ relative to the basis $e'_1, \ldots, e'_n$. The fact that $S$ is diagonal now means that $v'_i$ is simply a multiple of $e'_i$, or, in other words, that, with respect to the basis $e'_1, \ldots, e'_n$, the fundamental parallelepiped $\Pi(v'_1, \ldots, v'_n)$ of the basis $v'_1, \ldots, v'_n$ of $L$ is simply a rectangle with side-lengths given by the $s_i$. Thus, the set of lattice points in $\Pi(v'_1, \ldots, v'_n)$ is extremely simple: it is (a linear transformation of) a Cartesian product
$$ [0, s_1)_{\mathbb{Z}} \times \cdots \times [0, s_n)_{\mathbb{Z}}, $$
where the intervals $[0, s_i)_{\mathbb{Z}}$ are taken to denote the integer values in the given range.
We now understand $\Pi(v'_1, \ldots, v'_n)$ very well, but what is the relationship between $\Pi(v'_1, \ldots, v'_n)$ and the fundamental parallelepiped $\Pi(v_1, \ldots, v_n)$ that we set out to enumerate? It turns out we can easily transform the set of lattice points in one into the set of lattice points in the other through linear transformations and simple modular arithmetic, as shown in Figure 15. Since the matrices $U$ and $W$ are unimodular, it is clear that both $\Pi(v'_1, \ldots, v'_n)$ and $\Pi(v_1, \ldots, v_n)$ contain the same number of lattice points. Moreover, for both fundamental parallelepipeds $\Pi$ it is true that for every $x \in \mathbb{Z}^n$ there exists a unique point $y \in \mathbb{Z}^n \cap \Pi$ such that $x - y \in L$. Therefore, the function $f : \mathbb{Z}^n \to \mathbb{Z}^n \cap \Pi(v_1, \ldots, v_n)$ that maps a lattice point $x$ to the corresponding $y \in \mathbb{Z}^n \cap \Pi(v_1, \ldots, v_n)$ induces a bijection between the set that we can describe and the set we want to describe. As it turns out, $f$ has a very simple explicit description: to compute $f(x)$ we simply take coordinates with respect to the basis $v_1, \ldots, v_n$, then we take the vector of fractional parts, and then undo the coordinate transformation. More precisely, for any $\alpha \in \mathbb{R}$ let $\operatorname{fract}(\alpha) = \alpha - \lfloor \alpha \rfloor$ denote the fractional part and for any vector $x \in \mathbb{R}^n$ let $\operatorname{fract}(x) = (\operatorname{fract}(x_i))_{i=1,\ldots,n}$ denote the vector of fractional parts. Then
$$ f(x) = V \cdot \operatorname{fract}(V^{-1} x). $$
Putting these two observations together (using that the lattice points of $\Pi(v'_1, \ldots, v'_n)$ are the coset representatives $U\mu$ and that $V^{-1} U = W^{-1} S^{-1}$), we obtain
$$ \mathbb{Z}^n \cap \Pi(v_1, \ldots, v_n) = \left\{\, V \cdot \operatorname{fract}\!\left( W^{-1} S^{-1} \mu \right) \;:\; \mu \in [0, s_1)_{\mathbb{Z}} \times \cdots \times [0, s_n)_{\mathbb{Z}} \,\right\}. $$
Note that $S^{-1}$ is the diagonal matrix with diagonal entries $\frac{1}{s_i}$. All other matrices are integer. For computational efficiency it is expedient to work with integer values throughout. The common denominator of all values in $S^{-1}$ is simply $s_n$. Let $s'_i := \frac{s_n}{s_i} \in \mathbb{Z}$ and let $S'$ denote the diagonal matrix with entries $s'_1, \ldots, s'_n$. Then we can rewrite the previous identity to
$$ \mathbb{Z}^n \cap \Pi(v_1, \ldots, v_n) = \left\{\, \tfrac{1}{s_n}\, V \cdot \left( (W^{-1} S' \mu) \bmod s_n \right) \;:\; \mu \in [0, s_1)_{\mathbb{Z}} \times \cdots \times [0, s_n)_{\mathbb{Z}} \,\right\}, \tag{12} $$
where the modulus is taken componentwise, i.e., $x \bmod \alpha := (x_i \bmod \alpha)_{i=1,\ldots,n}$ for a vector $x = (x_i)_{i=1,\ldots,n}$. The computation (12) can be performed entirely in the integers. In particular, the final division by $s_n$ is guaranteed to produce only integer values.
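The parametrization can be checked on a small $2 \times 2$ example: map each $\mu \in [0, s_1) \times [0, s_2)$ to $V \cdot \operatorname{fract}(V^{-1} U \mu)$ and inspect the resulting lattice points. This is our own illustration using exact rational arithmetic, not the optimized integer-only form of (12):

```python
from fractions import Fraction
from itertools import product

# V = U S W (hand-computed Smith normal form; example of ours).
V = [[1, 2], [3, 4]]
U = [[1, 0], [3, 1]]
s = [1, 2]  # diagonal of S

def matvec(M, x):
    return [sum(Fraction(M[r][c]) * x[c] for c in range(len(x)))
            for r in range(len(M))]

def inverse2(M):
    d = Fraction(M[0][0] * M[1][1] - M[0][1] * M[1][0])
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

Vinv = inverse2(V)

def f(x):
    """Map x in Z^n to the unique lattice point y in the fundamental
    parallelepiped Pi(v_1,...,v_n) with x - y in the lattice spanned by V."""
    lam = matvec(Vinv, x)
    frac = [l - (l.numerator // l.denominator) for l in lam]  # fract(lam)
    return tuple(matvec(V, frac))

# Enumerate via the SNF parametrization: coset representatives U * mu.
points = {f(matvec(U, [Fraction(m0), Fraction(m1)]))
          for m0, m1 in product(range(s[0]), range(s[1]))}
```

For this $V$ the two lattice points in the fundamental parallelepiped come out as $(0,0)$ and $(1,2)$, and their number equals $|\det V| = 2$, as stated above.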
Translating (12) into the language of generating functions, we get the following explicit rational function formula $\rho_C(z)$ for the generating function of a symbolic cone $C$:
$$ \rho_C(z) = \frac{\displaystyle\sum_{\mu} z^{\frac{1}{s_n} V \left( (W^{-1} S' \mu) \bmod s_n \right)}}{\displaystyle\prod_{i=1}^{n} (1 - z^{v_i})}, \tag{13} $$
where $\mu$ ranges over $[0, s_1)_{\mathbb{Z}} \times \cdots \times [0, s_n)_{\mathbb{Z}}$. The formulas (12) and (13) give a very concise and simple solution to the problem of listing lattice points in fundamental parallelepipeds and thus for converting cones to rational function expressions. The drawback of the rational function representation (13) is that the polynomial in the numerator may be exponential in the encoding size of $C$, even if the dimension is fixed. Nonetheless, computing (13) is much more efficient than solving a general linear Diophantine system, in the following sense: the Smith normal form (SNF) of $V$ can be computed in polynomial time, even if the dimension is not fixed, and once the SNF of $V$ is known, (12) provides an explicit parametrization of the lattice points in the fundamental parallelepiped. In particular, this gives an output-sensitive polynomial-time algorithm for listing the points in the fundamental parallelepiped.
As already mentioned, this method is well-known. The expression (12) is essentially the same as [42,Lemma 11], the main difference being that (12) avoids rational arithmetic for performance reasons. [26] gives a similar method based on the Hermite normal form.
The approach presented above can be extended to accommodate general symbolic cones $C$. In particular, it is possible to handle cones with open facets, cones with rational apices, and non-full-dimensional cones. The fully general statement of (12) and (13) is given in the following theorem.
To deal with open facets, we introduce notation for half-open fundamental parallelepipeds, in which the coefficient interval $[0, 1)$ is replaced by $(0, 1]$ for each open facet; the generating function $\Phi_C(z)$ of $C$ is then given by a rational function expression of the same shape as (13). The precise general statement, Theorem 3.4, gives rise to the function enumFundPar, defined in Algorithm 7, for enumerating fundamental parallelepipeds of symbolic cones.

Algorithm 7: Enumerate Fundamental Parallelepiped
Barvinok decomposition. An alternative way to convert a simplicial symbolic cone C to a rational function is to construct a signed decomposition of C into unimodular cones C i . A cone C i is unimodular if it contains just a single lattice point in its fundamental parallelepiped, which implies that the rational function representation (9) has just a single monomial in its numerator.
It is crucial that we are looking for a signed decomposition of $C$ into unimodular cones $C_i$, i.e., for a way of representing $C$ as an inclusion-exclusion of the $C_i$. While it is always possible to write $C$ as a union of unimodular cones $C_i$, the number of such cones $C_i$ required may well be exponential in the encoding size of $C$, as the left-hand side of Figure 16 shows: The cone generated by $(1, 0)$ and $(1, a)$ has encoding size $\log a$. While it can be written as a union of unimodular cones $C_i$ generated by $(1, i)$ and $(1, i+1)$ for $i = 0, \ldots, a-1$, this positive decomposition requires $a$ cones, which is exponential in $\log a$. Signed decompositions, on the other hand, allow us to write $C$ as a difference of just two unimodular cones, as shown on the right in Figure 16: if $C_1$ is generated by $(1, 0)$ and $(0, 1)$ and $C_2$ is generated by $(0, 1)$ and $(1, a)$ with openness vector $o = (1, 0)$, then $[C] = [C_1] - [C_2]$. Barvinok's theorem [16], which is one of the landmark achievements in the field in the last decades, states that this works in any dimension: Every simplicial cone $C$ can be written as a signed sum of half-open unimodular cones $C_i$ such that the number of cones $C_i$ is bounded by a polynomial in the encoding size of $C$, provided that the dimension of $C$ is fixed. That is, the number of cones will grow exponentially with the dimension of the matrix $V$ of generators of $C$, but once the dimensions of $V$ are fixed, the number of cones depends only polynomially on the encoding length of the numbers in $V$. Such a decomposition is called a Barvinok decomposition.
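The two-dimensional example can be verified point by point: with $a = 3$, the indicator of $C$ equals that of $C_1$ minus that of the half-open $C_2$ at every lattice point. A brute-force check (our own) over a finite grid:

```python
a = 3

def in_C(x, y):   # cone generated by (1, 0) and (1, a): 0 <= y <= a*x
    return x >= 0 and 0 <= y <= a * x

def in_C1(x, y):  # unimodular cone generated by (1, 0) and (0, 1)
    return x >= 0 and y >= 0

def in_C2(x, y):  # cone generated by (0, 1) and (1, a), half-open:
    return x >= 0 and y > a * x  # the facet y = a*x is removed

# Indicator identity [C] = [C1] - [C2] on a grid of lattice points.
ok = all(in_C(x, y) == in_C1(x, y) - in_C2(x, y)
         for x in range(-6, 7) for y in range(-20, 21))
```

The comparison uses that Python booleans behave as 0/1 integers, so `in_C1 - in_C2` is exactly the signed sum of indicators.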
The construction of Barvinok decompositions is beyond the scope of this article. We refer the interested reader to the textbooks [13,31] for details. Briefly, the basic idea is the following. Given a simplicial $d$-dimensional cone $C$ with generators $v_1, \ldots, v_d$, use the LLL algorithm to find a vector $w$ that is "short" with respect to a certain basis. Then decompose $C$ into cones $C_1, \ldots, C_d$ where $C_i$ is generated by $v_1, \ldots, v_{i-1}, w, v_{i+1}, \ldots, v_d$. The vector $w$ may lie outside of $C$, just as in the example in Figure 16 where $w = (0, 1)$. If $w$ lies outside of $C$, then some of the $C_i$ will have a negative sign in the decomposition (depending on whether the determinants $\det C$ and $\det C_i$ have the same sign or not). Also, some of the $C_i$ will be half-open. Which faces of the $C_i$ are open can be determined via an entirely combinatorial case analysis, as given in [42]. The algorithm then proceeds by recursion, constructing a decomposition of each of the $C_i$ in turn.
The parameter that guarantees termination of this recurrence is the number of lattice points in the fundamental parallelepiped of $C$, which we call the index $\operatorname{ind}(C)$ of $C$. In each recursive step, a given cone $C$ is split into $d$ new cones $C_i$, until unimodular cones are obtained. This gives rise to a $d$-ary recursion tree $T$. The key property is that $w$ is "short" enough so that the index of all $d$ new cones is significantly smaller than the index of $C$. In fact, the index decreases so rapidly that the depth of $T$ is guaranteed to be doubly logarithmic in the index of $C$, more precisely in $O(d \cdot \ln(\ln(\operatorname{ind}(C))))$. The double logarithm is crucial, since $\ln(\operatorname{ind}(C))$ is polynomial in the encoding size of $C$. The number of terms in the final decomposition, i.e., the number of leaves of $T$, is then bounded by $\ln(\operatorname{ind}(C))^{O(d \cdot \ln(d))}$. If $d$ is fixed, the number of terms is thus bounded by a polynomial in the encoding size of $C$. See [13, pp. 140–141] for details.
In summary, Barvinok's algorithm provides us with the following result.
Theorem 3.5 (Barvinok [16], see also [14,33,42]). Let $C$ be a simplicial symbolic cone in dimension $d$ with encoding size $s$. Then there exist an $N \in \mathbb{Z}_{\geq 0}$, unimodular $d$-dimensional half-open symbolic cones $C_1, \ldots, C_N$ with the same apex as $C$, and signs $\alpha_1, \ldots, \alpha_N \in \{-1, 1\}$ such that
$$ [C] = \sum_{i=1}^{N} \alpha_i [C_i] $$
and, for any fixed $d$, $N$ is bounded by a polynomial in $s$. Moreover, let $v_{i,1}, \ldots, v_{i,d}$ denote the generators of the $C_i$ and let $u_i$ denote the unique lattice point in the fundamental parallelepiped of $C_i$. Then
$$ \Phi_C(z) = \sum_{i=1}^{N} \alpha_i \frac{z^{u_i}}{\prod_{j=1}^{d} (1 - z^{v_{i,j}})} $$

and, again, the encoding size of this rational function is bounded by a polynomial in s, provided that d is fixed.
Comparing Theorem 3.4 with Theorem 3.5, we find that Barvinok's decomposition has the distinct advantage that the size of the rational function expressions it produces can be exponentially smaller than the expression obtained through explicit fundamental parallelepiped enumeration. This difference can be particularly pronounced when the number of variables of the linear Diophantine system is small, but the coefficients in the system are very large and give rise to cones in the decomposition that have large indices. (However, in Section 5 we are going to see an example of a class of linear Diophantine systems with large coefficients where the indices of the cones produced in the previous step are still small.) The drawbacks of Barvinok decompositions are that they are determined by a non-trivial recursive algorithm, as opposed to the explicit formula of Theorem 3.4, and that the number of distinct factors $(1 - z^v)$ appearing in the denominators of the final rational function expression is much larger using the Barvinok approach than using fundamental parallelepiped enumeration.
It is important to remark on the dimension d that drives the exponential growth of Barvinok's decomposition. In the very first step of our algorithm, we apply the MacMahon lifting and transform a linear system with d variables and m constraints into a d -dimensional cone in (d + m)-dimensional space. However, the m extra dimensions are all eliminated during the second phase of the algorithm! Therefore the cones C for which we apply Barvinok's algorithm all have dimension (at most) d and lie in d -dimensional space again. Thus, the MacMahon-style iterative elimination approach we employ does not lead to an exponential growth of the Barvinok decomposition, because we work with symbolic cones throughout the elimination and convert to rational function only after all extra variables have been eliminated.
In practice, both approaches should be used in conjunction, as is also done in software packages like LattE [33]. If a symbolic cone C has a small index, fundamental parallelepiped enumeration is applied to directly obtain the rational function. If the index of C is large, Barvinok's algorithm is used to recursively decompose C . However, instead of stopping the recurrence when ind(C i ) = 1, i.e., when the cones C i become unimodular, the recurrence is stopped when the indices become small enough so that fundamental parallelepipeds can be explicitly enumerated efficiently. The threshold when the index becomes small enough is implementation dependent.

Rational Function in Normal Form.
No matter which mechanism for conversion to rational functions is used, the result is a rational function expression for $\Phi_P$, not a rational function in normal form. In a majority of applications, the desired output will be a rational function expression, as opposed to a rational function in normal form. In particular, if the linear Diophantine system has finitely many solutions, the normal form of the rational function will be a polynomial that explicitly lists all solutions, in which case it is often preferable to stick to a more compact expression. Even if the solution set is infinite, bringing the rational function into normal form (by bringing all summands over a common denominator, expanding and canceling terms) will typically destroy a lot of the structure of the rational function expression and may vastly increase its size. In fact, it is easy to construct families of cases where the size of the normal form is exponential in the size of the output produced by either of the conversion mechanisms presented above.
That said, there are certainly cases where computing the normal form can involve helpful cancellations that indeed simplify the rational function expressions. In cases where this behavior can be expected, it may be useful to bring the rational function into normal form or to at least perform some simplifications. Normal form algorithms are widely available in current computer algebra systems and can readily be used for this purpose. We will therefore discuss this subject no further. We merely remark that in our implementation using the Sage computer algebra system, Sage's built-in mechanism for bringing the rational function into normal form often takes orders of magnitude longer than all the previous steps of the algorithm combined, due to the size of the expressions involved.

COMPUTATIONAL COMPLEXITY
In this section, we give a detailed complexity analysis of Polyhedral Omega, providing bounds on the running time and space requirements of the algorithm. Polyhedral Omega is the first partition analysis algorithm for which this has been done. A key reason for this is that the geometric point of view enables us to prove stronger invariants than a naive complexity analysis would allow. In particular, we prove a structure theorem that implies strong bounds on the number of symbolic cones that can appear during elimination and on their encoding size.

4.1. Structure Theorem. In order to prove a structure theorem for the symbolic cones that can appear during elimination inductively, we need to set up a suitable invariant that is preserved by elimLastCoord.
In what follows we will assume a dense representation of matrices and vectors in order to simplify the presentation. We note, though, that in practice most of the matrices are sparse. We also assume constant time for accessing any element of a matrix. Generally, the estimates we obtain in this section are coarse. In particular, we work with naive bounds on the complexity of integer and matrix multiplication. A finer analysis would thus yield tighter bounds.
Any simplicial cone can be described both in terms of its generators, as we have done in our symbolic cone representation, and in terms of a system of equations and inequalities. For the purpose of characterizing the cones that can appear during one run of elim, it will be convenient to work with inequality descriptions of symbolic cones. Given a matrix $A$, a vector $b$ and an openness vector $o$, we write $A x \geq^{o} b$ for the system whose $i$-th constraint is $A_i x \geq b_i$ if $o_i = 0$ and $A_i x < b_i$ if $o_i = 1$; note that a superscript $1$ reverses both the direction and the strictness of the inequality. Given this notation, we can observe that $\operatorname{flip}(C) = (\hat{V}, q, \hat{o})$ has an inequality description $\hat{A} x \geq^{\hat{o}} \hat{b}$, where, without loss of generality, we assume that the columns $\hat{v}_i$ of $\hat{V}$ are indexed such that the generator $\hat{v}_i$ lies on the extreme ray of the cone that is given by letting all the constraints $\hat{A} x \geq^{\hat{o}} \hat{b}$ hold at equality, except for the $i$-th one. So the openness vector $\hat{o}$ describes both which facets of the cone are open and, correspondingly, which generators have been reversed by flip so that the cone points forward. One important observation is that no matter how a cone has been flipped in the past, the forward cone is uniquely determined by the original system, for every openness vector $o$.
With these preparations we can characterize all cones that can appear throughout the elimination process.
Theorem 4.1. Let $C = (V, q, o)$ be a symbolic cone with $V \in \mathbb{Z}^{(d+m) \times d}$, $q \in \mathbb{R}^{d+m}$ and $o = 0 \in \{0,1\}^d$ such that the columns of $V$ are primitive and linearly independent and such that $C$ is forward. Moreover, let $C$ have an inequality description of the form
$$ \begin{pmatrix} I_d & 0 \end{pmatrix} x \geq 0_d, \qquad \begin{pmatrix} A & -I_m \end{pmatrix} x = b $$
for some $A \in \mathbb{Z}^{m \times d}$ and $b \in \mathbb{Z}^m$. Then any symbolic cone $C' = (V', q', o')$ produced at iteration $i = 0, \ldots, m$ of elim(C) is a forward cone determined by a system of the form
$$ \hat{A} x = \hat{b}, \qquad A' x \geq^{o'} b', $$
where $\hat{A} x = \hat{b}$ consists of the last $m - i$ rows of the equation system above, restricted to the remaining coordinates, and $A' x \geq b'$ is a subsystem consisting of $d$ rows of the $d + i$ inequalities
$$ \begin{pmatrix} I_d & 0 \end{pmatrix} x \geq 0_d, \qquad A_{\leq i}\, x \geq b_{\leq i}, $$
where $0_d \in \mathbb{R}^d$ denotes the zero vector and $A_{\leq i}$, $b_{\leq i}$ denote the first $i$ rows of $A$ and $b$ (with the eliminated slack variables removed). In particular, $A' \in \mathbb{Z}^{d \times (d+m-i)}$ and $b' \in \mathbb{Z}^d$.
The above theorem can be generalized to input cones C that have open faces, but this is not necessary for our purposes.
Proof. The proof proceeds by induction over $i$. For $i = 0$, i.e., before elimLastCoord is applied for the first time, the claim is trivially true: it simply asserts that $(I_d \; 0)\,x \ge 0$, $(A \; {-I_m})\,x = b$ is an inequality description of $C$, which is true by assumption.
So we assume that the claim holds for some $i$ and show that it then holds for $i + 1$. Let $C' = (V', q', o')$ be a $d$-dimensional cone produced at iteration $i$ and let
$$\hat A x = \hat b, \qquad \tilde A x \le_{\tilde o} \tilde b$$
be the system determining $C'$, where $\hat A, \hat b, \tilde A, \tilde b$ are as described in the statement of the theorem and the columns of $V'$ are indexed to match the order of the rows of $\tilde A$ as described above.
At iteration $i + 1$ we intersect $C'$ with the half-space given by $x_{d+m-i} \ge 0$. By the assumption on $\hat A$ and $\tilde A$, the variable $x_{d+m-i}$ has a non-zero coefficient only in the last row of $\hat A$ and can therefore be seen as a slack variable. We rewrite the last row of $\hat A x = \hat b$ to express $x_{d+m-i}$ in terms of the remaining variables, where the $a_i$ are the first $d$ entries of the last row of $\hat A$. Projecting away the coordinate $x_{d+m-i}$, we obtain the polyhedron $P = \pi(C')$. Consider first cases A and B from Section 3.2. As we observed, the cones produced by elimLastCoord are forward flips of the vertex cones of $P$. The vertex cones of a $d$-dimensional polyhedron in $(d+m-i-1)$-dimensional space are given by a system of $m-i-1$ equations and $d$ inequalities. In our case, the $m-i-1$ equations are precisely the projected equation system $\hat A x = \hat b$ and the $d$ inequalities are some subsystem of $d$ inequalities chosen from the projected inequality system $\hat A x \le_{\hat o} \hat b$. As shown in Section 3.2, case C is a deformation of case A. In particular, the cones produced in case C arise in the same way as in case A and hence are determined by the same type of system.
We have now seen that any cone $\hat C$ produced at iteration $i + 1$ is of the form asserted, where $\tilde A x \le_{\tilde o} \tilde b$ is a subsystem consisting of $d$ rows of $\hat A x \le_{\hat o} \hat b$ and the second equality follows from (14). The system (15) is precisely of the form asserted in the theorem, which completes the induction proof.
In particular, Theorem 4.1 applies to the symbolic cones C given by the MacMahon lifting.
To use Theorem 4.1 for our complexity analysis, we need to bound the size of a symbolic cone in terms of the size of its inequality description.

Lemma 4.3. Let $C$ be a $d$-dimensional forward symbolic cone in $\mathbb{R}^{d+m-i}$ with primitive generators that is determined by a system of the form given in Theorem 4.1.
Let the encoding size of all the entries in $\hat A, \hat b, \tilde A, \tilde b$ be bounded above by $s$. Then the encoding size of $C$ is bounded above by $2(d+1)(d+m-i)^2 \log(d+m-i)s + d$, with the size of each entry of $V$ and $q$ bounded by $L_i := 2(d+m-i)\log(d+m-i)s$. While $L_i$ is a function of $d, m, i, s$, we will in the following write just $L_i$ whenever the values of $d, m, s$ are understood from context. Note that the vectors $v$ are rational, while $\mathrm{prim}(v)$ is always integer. In particular, $\mathrm{prim}(v)$ may actually be longer than $v$. However, the entries of $\mathrm{prim}(v)$ are factors of the integers $\det(A_k)$, whence their encoding size is bounded above by $(d+m-i)\log(d+m-i)s$. Therefore, $2(d+m-i)\log(d+m-i)s$ gives a uniform bound on the encoding size of the entries of $V$ and $q$, and the total encoding size of $C$ is at most $2(d+1)(d+m-i)^2 \log(d+m-i)s + d$.

Putting these results together, we obtain bounds on the number and encoding size of the symbolic cones produced during elimination. These bounds are much tighter than a naive analysis of the algorithm would imply, due to the geometric insight used. In particular, the encoding sizes of all cones produced during the iterative procedure can be bounded in terms of the encoding size of the original system. Here we use that rational simplicial cones have a unique representation as a symbolic cone with primitive generators (which are stored in sorted order).

Complexity of Lifting and Elimination.
We can now give a complexity analysis of the algorithms in Sections 3.1 and 3.2.

Proof. To make each column primitive, we compute the greatest common divisor of all the entries in the column and divide by it. For every column, the greatest common divisor computation and the division each take time $O(ns^2)$.
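As a concrete illustration, the column-normalization step can be sketched as follows. This is a minimal sketch: the function name and the dense-list representation are ours, not the paper's.

```python
from math import gcd
from functools import reduce

def make_primitive(column):
    """Divide an integer vector by the gcd of its entries,
    yielding the primitive vector on the same ray."""
    g = reduce(gcd, (abs(c) for c in column))
    if g == 0:
        return list(column)  # the zero column stays as-is
    return [c // g for c in column]

# Example: (6, -9, 15) has gcd 3, so the primitive vector is (2, -3, 5).
print(make_primitive([6, -9, 15]))  # → [2, -3, 5]
```

Applying this to every column of the generator matrix gives the primitive representation assumed throughout the analysis.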
Proof. By Theorem 4.1 and Corollary 4.4, we know that the encoding sizes of the entries in $V$ and $q$ are bounded above by $L_{i-1}$, so computing the entries of each new cone takes time polynomial in $L_{i-1}$. In order to make comparison of cones faster in the next step, we sort the generators of the symbolic cone lexicographically. This requires $d\log(d)$ comparisons of vectors of encoding length $(d+m-i)L_i$.
Finally, we insert the at most $d$ resulting symbolic cones into a dictionary mapping each symbolic cone to its multiplicity. The total complexity of eliminating the last coordinate follows by combining these bounds, where, in the first step, we use $L_{i-1} \le L_i$ and, in the last step, we substitute the value of $L_i$. Then we insert each cone appearing in these dictionaries into a new dictionary that will contain at most $\binom{d+i}{d}$ cones. Iterating for $i = 1, 2, \ldots, m$ results in the stated time complexity. Regarding the space used, we observe that in each iteration we need to store at most $\binom{d+m}{d}$ cones and that each cone has encoding size at most $O(d(d+m)L_0)$, which yields the desired bound.

Complexity of Rational Function Conversion. Next, we analyze how long it takes to explicitly enumerate the lattice points in the fundamental parallelepiped of a symbolic cone $C = (V, q, o)$, using the Smith normal form as described in enumFundPar (Algorithm 7). The total number of lattice points we have to enumerate is $D = |\det(V)|$, which can be exponential in the encoding length of $V$. As the size of the output can be exponential in the size of the input, it makes sense to give an output-sensitive analysis of the algorithm. In particular, the following lemma states the preprocessing time of enumFundPar, i.e., the time it takes to perform a one-off calculation at the beginning of the algorithm, and then the delay, which is the time it takes to generate the next lattice point in the list of lattice points in the fundamental parallelepiped. It turns out that the preprocessing time and the delay are polynomial in the input size and independent of the output size. The space required during preprocessing and generation of individual lattice points is polynomial in $s$ and $d$ and independent of the total number $D$ of generated lattice points.
Proof. The preprocessing phase of enumFundPar consists of computing the Smith normal form and then computing $\tilde q$, $W^{-1}S\tilde q$, $\tilde q_{\mathrm{int}}$, $\tilde q_{\mathrm{frac}}$ and $-V\tilde q_{\mathrm{frac}} + s_k q$. Since we assume that $C$ is full-dimensional, $p$ does not have to be computed, but can be assumed to be $p = 0$. These operations involving $q$ are relatively expensive, as we need to work with rational vectors. Bounds on the running time required for computing the Smith normal form $S$, $U^{-1}$ and $W^{-1}$, as well as bounds on the encoding size of the entries in these matrices, are taken from [58, 59]. In total, this gives a bound of $O(d^9(\log(d)+s)^2)$ on the running time of the preprocessing step.
To generate the list of all lattice points, one lattice point at a time, we do not first generate $x \in P$ and then compute $W^{-1}Sx$, as this would be inefficient. Instead, we observe that the lattice points in $P$ can be listed in such a way that two successive entries in the list differ by a unit vector. Thus we can list the lattice points $W^{-1}SP$ by starting at $0$ and then adding or subtracting one column of $W^{-1}S$ in each step. After the modulus has been taken, a multiplication with $V$ is required, though. It is also important to note that the vectors $V(W^{-1}Sx \bmod s_k)$ and $-V\tilde q_{\mathrm{frac}} + s_k q$ are both integer and that the final division by $s_k$ is guaranteed to produce an integer vector. Since the final result is guaranteed to lie in the fundamental parallelepiped of $C$, the encoding size of every entry in every single output vector is bounded by $s + d$. In total, this gives a bound of $O(d^4(\log(d)+s)^2)$ on the time taken to generate the next point in the list.
While generating lattice points, only one point in W −1 S P has to be stored at a time. Therefore the space required is independent of the total number of points generated.
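To make the object under discussion concrete, the following brute-force check verifies that the half-open fundamental parallelepiped of a cone with generator matrix $V$ contains exactly $|\det V|$ lattice points. This is emphatically not the Smith-normal-form method of enumFundPar (which achieves polynomial preprocessing and delay); the function name and the restriction to $2\times 2$ matrices are ours.

```python
from fractions import Fraction
from itertools import product

def fundamental_parallelepiped_points(V):
    """Brute-force enumeration of the lattice points in the half-open
    fundamental parallelepiped Pi(V) = { V t : t in [0,1)^2 } of a
    2x2 integer generator matrix V (generators as columns, apex at the
    origin, all facets closed at 0)."""
    (a, b), (c, d) = V
    det = a * d - b * c
    assert det != 0
    # Inverse of V with exact rational arithmetic.
    inv = [[Fraction(d, det), Fraction(-b, det)],
           [Fraction(-c, det), Fraction(a, det)]]
    # Every lattice point of Pi(V) lies in the box spanned by the corners
    # 0, v1, v2, v1 + v2 of the parallelepiped.
    lo_x, hi_x = min(0, a, b, a + b), max(0, a, b, a + b)
    lo_y, hi_y = min(0, c, d, c + d), max(0, c, d, c + d)
    points = []
    for x, y in product(range(lo_x, hi_x + 1), range(lo_y, hi_y + 1)):
        t0 = inv[0][0] * x + inv[0][1] * y
        t1 = inv[1][0] * x + inv[1][1] * y
        if 0 <= t0 < 1 and 0 <= t1 < 1:
            points.append((x, y))
    return points

# The cone generated by (7, 3) and (0, 1) has determinant 7, so its
# fundamental parallelepiped contains exactly 7 lattice points.
print(len(fundamental_parallelepiped_points([[7, 0], [3, 1]])))  # → 7
```

The brute force runs in time proportional to the bounding box, i.e., exponential in the encoding size; the point of enumFundPar is precisely to avoid this by listing the points with polynomial preprocessing and delay.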
To analyze the complexity of computing a Barvinok decomposition, we cite the following result due to Barvinok [16, 17, 14]; see [13, 31] for details. Unfortunately, an explicit formula for the bound $t(d, s)$ is not available in the literature. The derivation of such a bound, while straightforward in principle, is beyond the scope of this article.
Proof. The time bound, originally obtained in [17], is given in [31, Theorem 7.1.10]. We use the bound on the depth of the decomposition tree from [31, Lemma 7.1.9] to obtain the bound on $N_d$.

Proof. Let $C = (V, q, o)$ be a symbolic cone output by elim(macmahon$(A, b)$). By Theorem 4.1, each generator of $C$ is determined by a $d \times d$ subsystem $\tilde A$ of the constraints derived from the original system. Again using the Hadamard bound, we obtain $|\det(\tilde A)| \le (da)^d$. Combining these inequalities, we get $|\det(V)| \le (da)^{d^2}$ as desired.
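The Hadamard-style estimate invoked here is easy to sanity-check numerically. The following sketch uses a matrix and helper function of our own choosing; the bound it tests is the one from the text: a $d \times d$ integer matrix with entries of absolute value at most $a$ has $|\det| \le (da)^d$.

```python
from itertools import permutations

def det_int(M):
    """Exact determinant of a small integer matrix via the Leibniz formula."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        # Count inversions to get the sign of the permutation.
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

# Entries bounded by a = 5 in absolute value, dimension d = 3,
# so the text's bound gives |det| <= (d*a)^d = 15^3 = 3375.
d, a = 3, 5
M = [[5, -4, 3], [2, 5, -5], [-3, 1, 5]]
assert all(abs(e) <= a for row in M for e in row)
print(abs(det_int(M)), (d * a) ** d)  # → 181 3375
```

The Leibniz expansion is only practical for tiny matrices, but it suffices to illustrate how loose the worst-case bound typically is.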
4.4. Summary of Complexity Analysis. The results of the above complexity analysis are summarized in the following theorem.
Theorem 4.13. Let A ∈ Z m×d and b ∈ Z m such that the encoding size of all entries in A and b is bounded above by s. Then the different variants of the algorithm described in Section 3 have the following running times.
(1) Computing a list of symbolic cones $C_1, \ldots, C_n$ with multiplicities $\alpha_1, \ldots, \alpha_n$ representing the solution set, using macmahon and elim. (2) Converting these cones into a rational function expression, either (2a) via fundamental parallelepiped enumeration or (2b) via Barvinok decomposition. In (2a), the exponent vectors that appear are determined by the arrangement of hyperplanes given by the original system. In (2b), the set of exponents $w_{i,j}$ will typically be much larger, since the $w_{i,j}$ arise from the $v_{i,j}$ through a recursive arithmetic procedure. Combining (1) and (2b), we obtain an algorithm that solves the rfsLDS problem in polynomial time if the dimension of the problem is fixed. Thus, the Polyhedral Omega algorithm lies in the same complexity class as the best known polyhedral algorithms for solving rfsLDS. None of the previous partition analysis algorithms is known to have this property; see Section 5.2.
A key bottleneck is the number of symbolic cones output by elim, for which we have the upper bound $\binom{d+m}{d}$. This upper bound is polynomial if either $d$ or $m$ is fixed, but can be exponential otherwise, e.g., along the diagonal $d = m$. A bottleneck of this type is inherent in all algorithms that have to touch each vertex of the polyhedron defined by the inequality system at least once (e.g., when using Brion decompositions): by McMullen's Upper Bound Theorem [51], we know that the maximal number of vertices that a polytope with a given number of facets can have is realized by duals of cyclic polytopes. In our context, this yields inequality systems with exponentially many vertices [66]. It is important to remark that the actual number of symbolic cones is typically much lower than $\binom{d+m}{d}$, but can be larger than the number of vertices, even in the case of simple polyhedra. See Section 5.3 for further discussion and open research questions related to this phenomenon.
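The dichotomy between fixed and growing parameters is easy to see numerically. A short sketch using Python's `math.comb`; the specific parameter values are our own choices for illustration.

```python
from math import comb

# Upper bound on the number of symbolic cones output by elim: C(d+m, d).
# For fixed d this grows polynomially in m (degree d), but along the
# diagonal d = m it grows exponentially, roughly like 4^d.
for d in (2, 5, 10):
    print(f"d={d}: C(2d, d)={comb(2 * d, d)}, C(d+100, d)={comb(d + 100, d)}")
```

For instance, $\binom{102}{2} = 5151$ is modest, while $\binom{20}{10} = 184756$ already dwarfs $\binom{12}{2} = 66$ despite the smaller total parameter budget.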

COMPARISON WITH RELATED WORK
In this section, we compare Polyhedral Omega with previous algorithms from both partition analysis and polyhedral geometry.

Speedup via Symbolic Cones.
Partition analysis algorithms are characterized by the iterative application of a collection of explicitly given formulas. However, iteration can appear in the various algorithms on many different levels and in many different forms, which can have important performance implications. Table 1 gives an overview of the iterative structure of the classic algorithm by Elliott/MacMahon, the Omega2 algorithm by Andrews-Paule-Riese [5], Xin's algorithms Ell [63] and CTEuclid [65] and the two variants of our Polyhedral Omega algorithm, using fundamental parallelepiped enumeration (PolyOmega-FP) and Barvinok decompositions (PolyOmega-BD). Iteration can take either the form of a recurrence (R) or the form of a loop (L). By a loop we mean explicit formulas involving a long summation. The defining feature of all partition analysis algorithms is the iterative elimination (Elim) of one variable at a time using a recursion that splits one rational function/cone into several rational functions/cones. In analogy with geometry, we will call the binomial factors in the denominator of a rational function generators. Within the elimination of one variable, the algorithms have very different structure. Elliott's algorithm uses an outer recurrence over pairs of generators (P) as well as an inner recurrence, similar to the Euclidean algorithm, to apply Ω to one pair of generators (G2). A key contribution of Omega2 was to replace the inner recurrence by an explicit formula (i.e., a loop) leading to a significant speedup. Xin's Ell algorithm has a similar structure. In contrast, both Polyhedral Omega and Xin's CTEuclid algorithm do not iterate over pairs of generators, but apply Ω to all n generators at once (Gn). While CTEuclid uses a recursive procedure in this phase, Polyhedral Omega does not need to use any iteration at all here, due to the use of symbolic cones. 
In Polyhedral Omega, the corresponding iteration can happen outside the elimination recurrence, when converting symbolic cones to rational functions (RF). The fact that in Polyhedral Omega it is not necessary to iterate at all inside a single elimination can already provide an exponential speedup in comparison with other partition analysis methods, as we will see in a concrete example below. The conversion to rational functions can be performed after elimination is complete using either fundamental parallelepiped enumeration (i.e., an explicit summation formula) or the Barvinok decomposition (i.e., a recursive procedure).
Polyhedral Omega is the only partition analysis algorithm for which a comprehensive complexity analysis is available. Furthermore, Polyhedral Omega is the only partition analysis algorithm whose elimination phase runs in polynomial time if either the number of variables or the number of constraints is fixed, and, if Barvinok decompositions are used, it is the only partition analysis algorithm that, in total, runs in polynomial time if the dimension is fixed. These are not just better bounds due to a refined analysis that exploits the geometric structure, but genuine exponential speedups, even if Barvinok decompositions are not used. To illustrate these performance gains, we will discuss two structural differences between the algorithms that lead to exponential improvements on large classes of instances.
First, it is generally desirable to handle all generators at once, instead of iterating over pairs of generators. Omega2, for example, acts on a rational function by taking a product of two generators with positive last coordinate and splitting it into a sum of two rational functions that each have one generator with positive last coordinate and one generator with zero last coordinate. Now, assume $d$ is even and $\rho$ is a rational function with $d$ generators, half of which have positive and half of which have negative last coordinate. If we recursively apply the Omega2 rule until each summand has at most one generator with positive last coordinate (which is a base case of the Omega2 algorithm), we end up with a total of $2^{d/2-1}$ summands. In contrast, Polyhedral Omega's elimLastCoord produces at most $d + 1$ summands in the worst case.
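The count of summands can be checked with a two-line recursion. This sketch models only the number of summands under the splitting rule described above, not the rational functions themselves; the function name is ours.

```python
def omega2_summands(p):
    """Number of summands produced when the Omega2 splitting rule is applied
    until each summand has at most one generator with positive last
    coordinate.  Each application turns a summand with p positive generators
    into two summands with p - 1 positive generators each."""
    if p <= 1:
        return 1
    return 2 * omega2_summands(p - 1)

# With d generators, half of them positive, Omega2 yields 2^(d/2 - 1)
# summands, while Polyhedral Omega produces at most d + 1.
for d in (4, 8, 16):
    print(d, omega2_summands(d // 2), d + 1)
```

Already at $d = 16$ the pairwise rule produces $2^7 = 128$ summands against Polyhedral Omega's worst case of $17$.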
Second, and most importantly, using symbolic cones during elimination can lead to exponentially more compact intermediate representations, which in turn implies exponential speedups during the elimination phase, compared with algorithms working with rational functions. This can save a lot of work in total, as soon as several variables need to be eliminated, as the following example shows. Let $a_1$ be any large positive integer and let $a_2$ be any positive integer coprime to $a_1$. Let $b_1, b_2$ denote positive integers such that $a_1 b_2 - a_2 b_1 = 1$. Now consider the linear Diophantine system $S$ consisting of the two constraints $b_2 x_1 - b_1 x_2 \ge 0$ and $-a_2 x_2 + a_1 x_1 \ge 1$. If we solve $S$ iteratively by, in the first step, computing a full rational function representation of the set of all Diophantine solutions $x \ge 0$ to the inequality $-a_2 x_2 + a_1 x_1 \ge 0$, we will typically obtain a rational function expression such as
$$\rho = \frac{\sum_{k=0}^{a_1-1} x_1^k x_2^{\lceil k a_2 / a_1 \rceil}}{(1 - x^{(a_1,a_2)})(1 - x^{(0,1)})}, \tag{16}$$
or a lifted version thereof, which has $a_1$ terms in the numerator. This is because the fundamental parallelepiped of the corresponding cone $C = \mathrm{Cone}\begin{pmatrix} a_1 & 0 \\ a_2 & 1 \end{pmatrix}$ contains $a_1$ lattice points. Using rational function representations, the first elimination has thus increased the encoding size of the problem exponentially, which will slow down subsequent eliminations correspondingly. Using symbolic cones, on the other hand, the encoding size does not increase during elimination: Polyhedral Omega starts with a symbolic cone $C_0$ of determinant 1 with openness vector $0$.

The first elimination produces a symbolic cone $C_1$. As expected, the projection of $C_1$ onto the first two coordinates is the solution set of the system $x \ge 0$ and $-a_2 x_2 + a_1 x_1 \ge 1$. Note that, from $C_0$ to $C_1$, the determinant of the cone has increased from $1$ to $a_1$, but the encoding size has not. The second elimination then produces a cone $C_2$ whose projection is precisely the solution set of the system $S$. The cone $C_2$ again has determinant $1$, so the fundamental parallelepiped can be enumerated in just a single iteration and the resulting rational function expression is short. Since we used symbolic cones, we did not have to enumerate the fundamental parallelepiped explicitly at the intermediate stage and can reach the short end result quickly. Rational function representations, however, produce an exponential blowup at the intermediate stage that can prevent partition analysis algorithms from reaching the short final result.
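The determinant bookkeeping in this example can be verified directly. In this sketch, the concrete values $a_1 = 7$, $a_2 = 3$, $b_1 = 2$, $b_2 = 1$ and the generator matrices are our own choices, consistent with the requirement $a_1 b_2 - a_2 b_1 = 1$; the generators of the final cone shown here are a plausible instance, not taken from the text.

```python
# Concrete instance of the example: a1*b2 - a2*b1 = 7*1 - 3*2 = 1.
a1, a2, b1, b2 = 7, 3, 2, 1
assert a1 * b2 - a2 * b1 == 1

def det2(V):
    """Determinant of a 2x2 matrix given as a list of rows."""
    return V[0][0] * V[1][1] - V[0][1] * V[1][0]

# Intermediate cone: generators (a1, a2) and (0, 1) as columns.
# Its determinant is a1, so a rational-function representation would
# need a1 numerator terms -- yet the symbolic cone stores only V1.
V1 = [[a1, 0], [a2, 1]]
print(abs(det2(V1)))  # → 7

# Final cone: generators (b1, b2) and (a1, a2) as columns.
# |det| = |b1*a2 - b2*a1| = 1, so the fundamental parallelepiped
# contains a single lattice point and the end result is short.
V2 = [[b1, a1], [b2, a2]]
print(abs(det2(V2)))  # → 1
```

The determinant climbs to $a_1$ at the intermediate stage and drops back to $1$ at the end, while the entries of the generator matrices never grow beyond the input numbers.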
Of course there are shorter expressions for the rational function ρ given in (16), which can be obtained, e.g., using Barvinok decomposition or methods from [22,65]. This would avoid an exponential blowup in one elimination step. But the increase in representation size in a single elimination step would still be larger than using symbolic cones. It is thus preferable to apply Barvinok techniques only after elimination has been completed, as we do in Polyhedral Omega. Initial experiments with our implementation of Polyhedral Omega [23] confirm these benefits of symbolic cone representations in practice.
As the above example shows, symbolic cones are the key to Polyhedral Omega's improvements in running time. However, symbolic cones are not tied to the particulars of our algorithm. On the contrary, symbolic cones have great potential for further applications in partition analysis and beyond. One particularly promising direction for future research is to develop a symbolic cone version of partial fraction decomposition, following the polyhedral interpretation of PFD given in Section 2.6. This might lead, in particular, to a polyhedral version of Xin's CTEuclid algorithm with improved performance when several variables are eliminated.

First Polynomial-Time Algorithm from Partition Analysis Family.
Polyhedral Omega is the first algorithm from the partition analysis family for which a comprehensive complexity analysis is available (Section 4). Geometric insight played a crucial role in obtaining strong bounds on the running time. Most importantly, by Theorem 4.13, Polyhedral Omega runs in polynomial time if the dimension is fixed, provided that the Barvinok decomposition is used. It is the first partition analysis algorithm with this property. This latter statement requires some discussion. First, it is important to draw the following careful distinction: The Barvinok decomposition is an algorithm for computing a short signed decomposition of a simplicial cone into unimodular simplicial cones, which runs in polynomial time if the dimension is fixed. More generally, however, there is Barvinok's algorithm for solving the rfsLDS problem. Barvinok's algorithm makes crucial use of Barvinok decompositions, but performs a whole range of other tasks as well, including but not limited to computing vertices and edge directions as well as triangulations.
As we mentioned in Section 2.5, [65, Section 3] describes a procedure for reducing the problem of applying Ω to a rational function to the rfsLDS problem. The rfsLDS problem itself, though, is solved by using Barvinok's algorithm as a black box. In our view, the procedure from [65, Section 3] therefore does not fall within the domain of partition analysis algorithms. In contrast to [65, Section 3], Polyhedral Omega does not use all of Barvinok's algorithm as a black box, but only Barvinok decompositions. The Barvinok decomposition itself can be described in terms of a recursive formula [42], very much in the spirit of partition analysis. Thus, we argue, Polyhedral Omega with Barvinok decompositions (PolyOmega-BD) is the first rfsLDS algorithm that follows the partition analysis paradigm and achieves polynomial running time in fixed dimension.
The CTEuclid algorithm given in [65, Section 4] is quite separate from the procedure in [65, Section 3] mentioned above, and does not use Barvinok decompositions. Whether CTEuclid runs in polynomial time in fixed dimension is an open question; one obstacle is that the elimination step can produce cones that do not cancel when collecting terms. Especially in higher dimension, situations like these can arise even if the starting set S is convex and no cut goes through the apex of a cone.
However, as Xin acknowledges, even if the answer were positive, polynomiality would likely break down as soon as several variables are eliminated one after another. Here symbolic cones could be of help, as explained in Section 5.1.

Comparison with Polyhedral Methods.
In comparison with polyhedral methods, the main benefit of Polyhedral Omega is simplicity. Polyhedral algorithms for rfsLDS typically make use of a number of non-trivial and highly specialized algorithms for enumerating vertices and edges of a polyhedron and for triangulating a cone, since these tasks are essential for decomposing a polyhedron into simplicial cones. In contrast, Polyhedral Omega obtains such a decomposition implicitly through the iterative application of a few simple rules. This fact is the main insight transferred from partition analysis to polyhedral geometry as part of this research project. Crucially, this simple iterative approach still results in an algorithm that lies in the same complexity class as the best known polyhedral methods for solving rfsLDS: Polyhedral Omega runs in polynomial time in fixed dimension. However, there is one interesting regard in which there is a conceptual gap between the performance of Polyhedral Omega and polyhedral algorithms using specialized methods for vertex and edge enumeration. Applying our iterative approach, we obtain a representation of a polyhedron $P$ as a sum of (indicator functions of) symbolic cones $C_i$ with multiplicities $\alpha_i \in \mathbb{Z}$. As we discussed in Section 3.2, normalizing and collecting terms $\alpha_i[C_i]$ is crucial for performance. However, over the course of the algorithm there may still arise groups of symbolic cones that sum to the empty set, i.e., $\sum_{i=k}^{l} \alpha_i [C_i] = 0$, without our algorithm being able to recognize this fact and simplify the sum accordingly. To see how such zero sums can arise, consider the example given in Figure 17. The underlying issue here is that on the rational function level such zero sums could be recognized by bringing the rational function expression into normal form, while no analogous normal form is available for symbolic cone expressions. This calls for the development of a symbolic method for bringing symbolic cone expressions into a normal form. Such a method could speed up Polyhedral Omega significantly and is a promising topic for future research.

SUMMARY AND FUTURE RESEARCH
In this article, we have given a detailed polyhedral interpretation of existing partition analysis algorithms solving the rfsLDS problem (Section 2) and used these insights to develop Polyhedral Omega (Section 3), a new algorithm solving rfsLDS which combines the polyhedral and the partition analysis approaches in a common framework. A key insight is the use of symbolic cones instead of rational functions, which allows us to retain the simple iterative approach based on explicit formulas that is inherent to partition analysis algorithms and still achieve a running time that had previously been attained only by polyhedral algorithms (Sections 3.2 and 5.1). For conversion from symbolic cones to rational functions, we present two methods (Section 3.3). The first, based on fundamental parallelepiped enumeration (Theorem 3.4), has the advantage of being described by a simple closed formula, while the second, based on Barvinok decompositions (Theorem 3.5), is a recursive algorithm that yields short rational function expressions. When using Barvinok decomposition, Polyhedral Omega runs in polynomial time in fixed dimension (Theorem 4.13). Thus it lies in the same complexity class as the fastest known polyhedral algorithms for rfsLDS and it is the first algorithm from the partition analysis family to do so (Section 5.2). Moreover the geometric point of view allows us to do the first comprehensive complexity analysis of any partition analysis algorithm, and obtain strong bounds on the running time from the geometric invariants we establish (Theorem 4.1). Compared to previous polyhedral methods, the key advantage of Polyhedral Omega is its simplicity: instead of using several sophisticated algorithmic tools from polyhedral geometry, a simple recursive algorithm based on a few explicit formulas suffices.
Our results point to several promising directions for future research. On the practical side, the next task is to complete our implementation of Polyhedral Omega [23]. So far, we have implemented the elimination algorithm using symbolic cones, as well as rational function conversion using fundamental parallelepiped enumeration, in the Sage system. The next step is the implementation of Barvinok decompositions on this platform. This would enable a comprehensive benchmark comparison of the different rfsLDS algorithms available today. In this context, it would also be desirable to carry out a detailed complexity analysis of the Barvinok decomposition, giving an explicit formula for an upper bound on the running time of that algorithm. While such an analysis is straightforward in principle, no such formula has been published so far.
On the theoretical side, the most interesting direction for future research is a detailed study of symbolic cones. Symbolic cones form an interesting algebraic structure closely related to, but distinct from, both the rational function field and the algebra of polyhedra [13]. In particular, a good definition of a normal form of a symbolic cone expression, along with an efficient algorithm for bringing a symbolic cone expression into normal form, would be of great use, both in the context of Polyhedral Omega (see Section 5.1) and for working with rational functions in general. Moreover, symbolic cones may be of use outside the context of Brion/Lawrence-Varchenko decompositions. Especially our polyhedral interpretation of partial fraction decomposition in Section 2.6 calls for closer investigation. A symbolic cone variant of partial fraction decomposition could, for example, lead to a polyhedral version of the CTEuclid algorithm.
Finally, a key feature of partition analysis algorithms has always been that they lend themselves very well to use in induction proofs, since they are based on the recursive application of explicit rules. Polyhedral Omega promises to be of great use in both manual and automatic induction proofs, since the intermediate results obtained are given as symbolic cones and are therefore more compact and easier to manipulate. We are especially looking forward to combining Polyhedral Omega with automatic induction provers, as this may lead to algorithms for proving structural results about infinite families of polytopes of varying dimension, and even to algorithms for manipulating infinite-dimensional polyhedra. A first milestone in this direction would be to finally realize one of the visions that motivated Andrews-Paule-Riese to revive partition analysis in the first place: a fully automatic proof of the Lecture Hall Partition Theorem [4, 21] in full generality, with the dimension treated as a symbolic variable.