Computing the non-properness set of real polynomial maps in the plane

We introduce novel mathematical and computational tools to develop a complete algorithm for computing the set of non-properness of polynomial maps in the plane. This set, which we call the Jelonek set, is the subset of K^2 at which a dominant polynomial map f : K^2 → K^2 is not proper; K can be either C or R. Unlike all previously known approaches, we make no assumptions on f whenever K = R; this is the first algorithm with this property. The algorithm takes into account the Newton polytopes of the polynomials. As a byproduct we provide a finer representation of the set of non-properness as a union of semi-algebraic curves that correspond to edges of the Newton polytopes, which is of independent interest. Finally, we present a precise Boolean complexity analysis of the algorithm and a prototype implementation in maple.


Introduction
Let f = (f_1, ..., f_n) : K^n → K^n be a polynomial map, where K ∈ {C, R}. We say that f is non-proper at a point y ∈ K^n if for every neighborhood O of y, the preimage of the Euclidean closure of O under f is not compact. Namely, at every point y of the set J_f ⊂ K^n at which f is non-proper, there exists a sequence {x_k}_{k∈N} ⊂ K^n such that ∥x_k∥ → ∞ and f(x_k) → y. We call J_f the Jelonek set of f. Jelonek first studied [19] the non-properness of maps C^n → C^n for the purpose of pushing forward the state-of-the-art results around the Jacobian conjecture [44]. In this context, if the Jacobian matrix of such a map f is everywhere non-singular, then the invertibility of f becomes equivalent to the emptiness of its Jelonek set. To this end, substantial work has been directed towards the study of this set [20,22,25], which led to solving many problems regarding the topology of polynomial maps [16,20,23,26].
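For intuition, the behavior in the definition can be observed numerically on the classical toy map f(x_1, x_2) = (x_1, x_1 x_2) (our own illustrative choice, not an example from this paper): for y_1 ≠ 0 the fiber is the single point (y_1, y_2/y_1), which escapes to infinity as y approaches the line {y_1 = 0}, so the points of that line belong to J_f. A minimal sketch:

```python
import math

# Toy map f(x1, x2) = (x1, x1*x2); for y1 != 0 the unique preimage
# of y = (y1, y2) is (y1, y2/y1).
def preimage(y1, y2):
    return (y1, y2 / y1)

# As y approaches the point (0, 1) on the line {y1 = 0}, the preimage
# escapes to infinity, witnessing non-properness at that point.
norms = [math.hypot(*preimage(10.0**-k, 1.0)) for k in range(1, 5)]
print(norms)  # strictly increasing and unbounded
```

No compactness argument is needed here: the fibers are single points whose norms blow up, which is exactly the sequence {x_k} from the definition.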
Besides the Jacobian conjecture, the description of the Jelonek set is essential for computing the atypical values of a polynomial function C^n → C [23,26], for classifying the algebraic subsets of R^n that are images of R^n under a polynomial morphism [16], and for characterizing the set of fixed points under a polynomial morphism [24]. As for applications to real-life problems, polynomial maps appear as models in algebraic statistics [33], robotics [36], computer vision [13], and chemical reaction networks [11], to mention a few. In such a setting, the input of the problem is a point in the target space, while the output is a point in the preimage. Then, the Jelonek set represents some of those inputs that result in a sub-optimal output.
There is no shortage of effective algorithms for computing exactly the Jelonek set of some polynomial maps, even in more general settings [19,20,37,43]. Unfortunately, they are not universal; for example, in some cases [19,21,37] one requires K to be an algebraically closed field, while in other cases these methods require the maps to be finite [43]. In addition, to our knowledge, there are no precise bit complexity estimates for the various algorithms. Moreover, the known algorithms rely on black-box elimination techniques based on Gröbner basis computations and do not take into account the structure and the Newton polytopes of the input. When K = R, the situation is more dire; even though the geometry of the Jelonek set is fairly well understood [19,22,25,43], to our knowledge, there are no dedicated algorithms to compute it. Up until now, we had to rely on algorithms that assume K = C and compute a superset of the (real) Jelonek set.

Our contribution
We consider a dominant polynomial map f = (f_1, f_2) : K^2 → K^2, where K ∈ {C, R}, i.e., f(K^2) is dense in K^2. We present a complete and efficient algorithm to compute the Jelonek set of f, SparseJelonek-2 (Alg. 3), along with its mathematical foundations, complexity analysis, and a prototype implementation.
The algorithm makes no assumptions on the input polynomials f. It outputs a partition of the (equations of the) Jelonek set into subsets of semi-algebraic (or algebraic) curves of smaller degree; hence it provides a more accurate and detailed picture of the topology and the geometry of the Jelonek set than was known before. The algorithm depends on the Newton polytopes of the input polynomials; the latter encode the non-zero terms of the polynomials, see Sec. 3.1.1 for the definition and various properties. This feature provides us with tools from combinatorial and toric geometry, and makes the algorithm input- and output-sensitive. We present a precise bit complexity analysis of the algorithm when the input consists of polynomials with rational coefficients, and a (prototype) implementation in maple. To our knowledge this is the first dedicated algorithm for computing the Jelonek set in the plane when K = R, under no assumptions.
An important aspect of our approach is the partition of the Jelonek set into sets of irreducible (semi-)algebraic curves. The equations of the curves in each partition set are obtained by analyzing the coefficients of f restricted to some distinguished edge Γ of the polygon NP(f); the latter is the Minkowski sum of the Newton polytopes NP(f_1) and NP(f_2) of f_1 and f_2, respectively. As distinguished edges, we consider those whose corresponding inner normal vector has at least one negative coordinate; we call them infinity edges (Definition 3.1).
Consider a generic point q of the Jelonek set of f, i.e., q ∈ J_f, and y sufficiently close to q. By the definition of J_f, there is an isolated point x ∈ f^{-1}(y) whose Euclidean norm takes arbitrarily large values. One can thus construct a change of variables that sends x = (x_1, x_2) to (z^{M_1}, z^{M_2}) := (z_1^{M_{11}} z_2^{M_{12}}, z_1^{M_{21}} z_2^{M_{22}}), where M_{ij} ∈ Z, so that, in the new variables, the (transformed) preimage is arbitrarily close to one of the coordinate axes of K^2. This suitable change of variables depends on an edge of the Newton polytope of f in the following way: the inverse U of the matrix M := (M_1, M_2) ∈ Z^{2×2} is such that its first row spans an edge of NP(f) (Section 4.1). Accordingly, after the change of variables, the new polynomial system f(x) − y = 0 is written as U(f − y)(z) = 0.
The topological face multiplicity set corresponding to the face Γ, or T-multiplicity set for short, collects all points q for which any y sufficiently close to q gives rise to a solution z of the system U(f − y) = 0 satisfying ∥z − (ρ, 0)∥ → 0 for some ρ ∈ K^*.
We denote it by TF_K(Γ) (Definition 4.5). Going over all the infinity edges of the Newton polytope NP(f), we obtain the T-multiplicity set of f, and through it the Jelonek set of f. The following theorem summarizes these properties; it appears as Thm. 4.15 along with its proof.
Theorem (The Jelonek set as a union of T-multiplicity sets). Let f : K^2 → K^2 be a dominant polynomial map, where K ∈ {C, R}. Then, the Jelonek set of f is the union, over all the edges of its Newton polytope NP(f) intersecting (R^*)^2, of its T-multiplicity sets.
To compute the T-multiplicity sets we note that, as ∥y − q∥ → 0, if a solution z(y) ∈ K^2 of any (parametrized by y) polynomial system F_y = 0 converges to a point z(q), then the multiplicity of z(q) (as a solution of F_y = 0) is usually higher for y = q than for most other values of y.
This observation shows that we can detect points in T-multiplicity sets by capturing the change in the multiplicity of some solutions of the transformed system. The set of all such q ∈ K^2 that increase the multiplicity of a solution of F_y = 0 is called the algebraic face multiplicity set corresponding to Γ, or A-multiplicity set for short, and we denote it by AF_K(Γ) (Definition 4.10). We develop an algorithm, ms_resultant, to compute the T-multiplicity set from the A-multiplicity set by employing tools from elimination theory. The following theorem supports ms_resultant; it appears as Prop. 6.1 along with its proof.
Theorem (ms_resultant and computation of the T-multiplicity sets). Let f : K^2 → K^2 be a dominant polynomial map, where K ∈ {C, R}. Then, for any edge Γ of NP(f), ms_resultant (Alg. 4) correctly computes the T-multiplicity set. ms_resultant relies on resultant computations and exploits the property that the multiplicity of a root of the resultant accumulates the multiplicities of the solutions of the system in the corresponding fiber. It is the best choice for maps that are quite degenerate, and it has polynomial worst-case complexity. ms_resultant is straightforward for complex polynomial maps. For polynomial maps f : R^2 → R^2, however, we need to perform a more refined analysis. Namely, let g := Cf : C^2 → C^2 denote the complexification of f, and let D_g be its discriminant; this is the image, under g, of the critical points of g in C^2. For each edge Γ ≺ NP(g) we consider the decomposition of AF_C(Γ) ∩ R^2 into a collection C of smooth disjoint segments C_1, ..., C_r, separated by isolated points from D_g ∩ AF_C(Γ) and singular points of AF_C(Γ). That is, these points are boundary points of the curve segments, and J_f is the Euclidean closure of the disjoint union C_1 ⊔ ... ⊔ C_r. We show in §4 that each segment C ∈ C either lies entirely in the T-multiplicity set of f, or is disjoint from it. Then, once the A-multiplicity set of the complex map g is computed, determining which of its segments C ∈ C is part of the T-multiplicity set of f reduces to picking a random point in C and counting the number of its real preimages under f. One has to make sure that this sampling procedure avoids other A-multiplicity sets. We describe this in §6. Note that, thanks to our first theorem above, this approach remains valid if we replace the A-multiplicity set by J_g and the T-multiplicity set by J_f.
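The sampling step, counting the real preimages of a random point on a candidate segment, can be sketched on a hypothetical toy map (x_1, x_2) ↦ (x_1^2, x_2), chosen only for illustration; we use sympy's general polynomial solver here, whereas the paper's ms_resultant works with resultants:

```python
from sympy import symbols, solve

x1, x2 = symbols('x1 x2')

# Toy map (x1, x2) -> (x1**2, x2).  The complex map has two preimages over
# any y with y1 != 0, but the REAL count depends on which side of the
# curve {y1 = 0} the sample point lies on.
def count_real_preimages(y1, y2):
    sols = solve([x1**2 - y1, x2 - y2], [x1, x2], dict=True)
    return sum(1 for s in sols
               if all(v.is_real for v in s.values()))

print(count_real_preimages(1, 0))   # 2 real preimages
print(count_real_preimages(-1, 0))  # 0 real preimages
```

Sampling one point on each side of {y_1 = 0} thus distinguishes the segments over which the real fiber count drops, which is the role the sampling procedure plays in ms_resultant.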
The complexity bound of ms_resultant for maps R^2 → R^2 (Thm. 6.5) is much higher than the one in the complex case (Thm. 6.5). However, this worst-case bound is attained only for very particular polynomial maps.
Using the previous tools, a straightforward method to compute the Jelonek set is the following: for each edge Γ of NP(f), compute the corresponding T-multiplicity set. This is the backbone of the proof of correctness of the algorithm SparseJelonek-2.
Theorem (Computation of the Jelonek set). Let f : K^2 → K^2 be a dominant polynomial map, where K ∈ {C, R}. Then, SparseJelonek-2 (Alg. 3) correctly computes the Jelonek set of f. Unlike previously known methods for computing J_f, SparseJelonek-2 depends on the Newton polytope of f. Consequently, for maps that are mildly degenerate with respect to their Newton polytopes, our method outperforms standard algorithms for complex maps (cf. Thm. 2.5 and Cor. 5.2). As a byproduct of our methods for describing the Jelonek set, we obtain in Thm. 4.22 of §4 a partial characterization of real polynomial maps whose Jelonek set is a union of algebraic curves in R^2. Consequently (Corollary 4.23), we show that the Jelonek set of real maps with a birational complexification is algebraic.

An example
The following example illustrates our first main theorem, which relates the Jelonek set to T-multiplicity sets. Let f : K^2 → K^2 be a polynomial map whose Newton polytopes and their Minkowski sum are shown in Figure 1. The preimage of y is such that x_1 x_2 = 2(y_1 − 1)^2 / (−2y_1 + y_2 − 3). Thus, the norm of x takes arbitrarily large values if y converges to, say, (−2, −1).

Related work
There are already works that exploit the structure of Newton polyhedra for the computation of topological data of polynomial maps. For polynomial functions this includes computing the Milnor number at the origin [29] and the bifurcation set [32,46]. For maps, Newton polyhedra were used to compute the Łojasiewicz exponents [3], to prove non-properness conditions [42], and to compute the set of atypical values [7,15]. In all these cases, some form of face non-degeneracy condition on the corresponding maps is required. That is, the tuple of polynomials is required to lie in a Zariski open set in the space of all polynomial maps with a given set of Newton polyhedra. Our approach does not need any such assumption whatsoever.

Notation
We denote by O, resp. O_B, the arithmetic, resp. bit, complexity, and we use Õ, resp. Õ_B, to ignore (poly-)logarithmic factors. For a polynomial f ∈ Z[x] or f ∈ Z[x_1, x_2], its infinity norm ∥f∥_∞ equals the maximum of the absolute values of its coefficients. We denote by L(f) the logarithm of its infinity norm. We also call the latter the bitsize of the polynomial, as a shortcut for the maximum bitsize of all its coefficients. A univariate polynomial is of size (d, τ) when its degree is at most d and its bitsize is τ. We represent a real algebraic number α ∈ R using the isolating interval representation; it includes a square-free polynomial A which vanishes at α and an interval with rational endpoints that contains α and no other root of A. If α ∈ C, then instead of an interval we use a rectangle in R^2 whose vertices have rational coordinates.
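As an illustration of the isolating interval representation, sympy's real-root isolation produces exactly such intervals with rational endpoints (a sketch for intuition, not part of the paper's maple implementation):

```python
from sympy import Poly, Symbol

x = Symbol('x')

# Isolating-interval representation of the real algebraic number sqrt(2):
# a square-free polynomial A vanishing at it, together with an interval
# with rational endpoints containing it and no other root of A.
A = Poly(x**2 - 2, x)
intervals = A.intervals()   # list of ((a, b), multiplicity) pairs
print(intervals)
```

Each returned pair isolates one real root of A; the last interval brackets sqrt(2) and contains no other root, which is precisely the data the representation requires.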
For a polynomial f ∈ K[x_1, x_2], where K ∈ {R, C}, we denote its zero set by V_K(f) ⊂ K^2, and V_{K^*}(f) ⊂ (K^*)^2 is its zero set over the corresponding torus. We use the same notation when f is a polynomial system, that is, f = (f_1, f_2).

Organization of the paper
Sec. 2 gives the state of the art of the problem. We present the known methods for computing the Jelonek set for maps C^n → C^n (Algs. 1 and 2), and deduce their complexity in Thms. 2.4 and 2.5. In Sec. 3 we give the necessary notation to introduce the algorithm SparseJelonek-2. This includes a classification of the faces of NP(f) and the introduction of the toric change of coordinates. The first half of Section 5 gives a detailed description of the functionality of SparseJelonek-2, while in the second half we present its complexity analysis (Thm. 5.1) and a detailed example. Sections 4 and 6 concern the correctness of Algorithm SparseJelonek-2; in particular, we define T-multiplicity sets in terms of appropriate toric transformations introduced in Sec. 4.1. Then, we reformulate Thm. 1.1 as Thm. 4.15 and prove it. The proof of Thm. 1.2 is the proof of Prop. 6.1 in Sec. 6. Finally, Sec. 7 presents our prototype implementation.

Complexity of known methods
We start with a definition of the Jelonek set, see [19,20]. Jelonek proved [19,22] that for a dominant polynomial map f = (f_1, ..., f_n) : K^n → K^n the set J_f is K-uniruled; that is, for every point y ∈ J_f there exists a non-constant polynomial map φ : K → J_f such that φ(0) = y. Moreover, if K = C, then J_f is a hypersurface [19], while in the real case its dimension is less than or equal to n − 1. These two properties are also valid for maps over algebraically closed fields [38] and when the domain, or the codomain, is an affine variety [25]. In this setting, there are methods for testing properness [20] and for computing the Jelonek set [19,37]. There is also an upper bound on the degree of J_f when K = C [19], expressed in terms of µ(f), the number of points in a generic fiber of f (cf. [19]). Consider the polynomial defining the equation of the hypersurface J_f, where each A_i^0 is the i-th non-properness polynomial of f; for the special case n = 2, a sharper statement holds (cf. Thm. 2.3). The computation of the implicit equation of the parametrized hypersurfaces F_i(C^n) requires eliminating n − 1 of the variables x_1, ..., x_n. Thus, to compute the Jelonek set we need effective computations with resultants or Gröbner bases.
In Alg. 2 and Alg. 1 we present the pseudo-code of the algorithms supported by Thms. 2.2 and 2.3 for computing the Jelonek set in C^n and C^2, respectively. The proofs of the following two results are in the appendix.
Here ω is the exponent in the complexity of matrix multiplication, and the output consists of polynomials in the target variables. For the special case of two variables a slightly better bound is possible.

Preliminaries
We present the terminology needed in the following sections.

Polytopes, Minkowski sums, and mixed volume
A convex polygon ∆ ⊂ R^2 is an intersection of finitely many half-planes of the form {x ∈ R^2 : a_1 x_1 + a_2 x_2 ≥ a_0}, where a_0, a_1, a_2 ∈ R. The latter are the supporting half-spaces of ∆, and their boundary intersects the boundary ∂∆ of ∆ at a connected subset of ∆ that we call a face. Thus, any face of ∆ is a vertex, an edge, or ∆ itself. In this case a = (a_1, a_2) ∈ R^2 is an interior normal vector to the face F, and we say that it supports F.

Mixed volume
Given a convex set ∆ ⊂ R^2, let Vol(∆) be its volume with respect to the standard Lebesgue measure on R^2. Minkowski's mixed volume is the unique real-valued function of two convex sets that is multi-linear with respect to the Minkowski sum. We denote the mixed volume by MV(∆_1, ∆_2), and we can compute it using the inclusion-exclusion formula MV(∆_1, ∆_2) = Vol(∆_1 ⊕ ∆_2) − Vol(∆_1) − Vol(∆_2). The other direction of this statement is also true; it is a particular case of Minkowski's theorem for the higher-dimensional mixed volume, see [28, Section 2].
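The inclusion-exclusion formula is straightforward to implement for polygons given as vertex lists; the sketch below (with helper names of our own choosing, not the paper's code) computes areas via the shoelace formula and the Minkowski sum via a convex hull of pairwise vertex sums:

```python
from itertools import product

def cross(o, a, b):
    """2D cross product of the vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull(points):
    """Andrew's monotone chain convex hull; vertices in ccw order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    return half(pts)[:-1] + half(pts[::-1])[:-1]

def area(poly):
    """Shoelace formula for a polygon given by its ordered vertices."""
    n = len(poly)
    s = sum(poly[i][0] * poly[(i + 1) % n][1]
            - poly[(i + 1) % n][0] * poly[i][1] for i in range(n))
    return abs(s) / 2

def minkowski_sum(P, Q):
    return hull([(p[0] + q[0], p[1] + q[1]) for p, q in product(P, Q)])

def mixed_volume(P, Q):
    # Inclusion-exclusion: MV(P, Q) = Vol(P + Q) - Vol(P) - Vol(Q).
    return area(minkowski_sum(P, Q)) - area(hull(P)) - area(hull(Q))

# Two copies of the standard simplex conv{(0,0),(1,0),(0,1)}:
simplex = [(0, 0), (1, 0), (0, 1)]
print(mixed_volume(simplex, simplex))  # 1.0
```

With this normalization MV(∆, ∆) = 2 Vol(∆); for the simplex above the formula gives 2 − 0.5 − 0.5 = 1.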
A pair

Characterization of the faces
In what follows, we distinguish types of edges of the Minkowski sum ∆ := ∆_1 ⊕ ∆_2. The following definition details these types, while Figure 3 gives a pictorial overview.
Figure 3: The partition of the edges that SparseJelonek-2 exploits; Γ is an edge of the Minkowski sum and its summands are Γ_1 and Γ_2.
• short if it is not long.
• semi-origin if at least one of its summands contains the origin, that is, (0, 0) ∈ Γ_1 or (0, 0) ∈ Γ_2.
• origin if both of its summands contain the origin, that is, (0, 0) ∈ Γ_1 and (0, 0) ∈ Γ_2.
• infinity if the corresponding (inner) normal vector has a negative coordinate.
Example 3.2. Consider the three (Newton) polytopes in Figure 2. The third polytope is the Minkowski sum of the first two, and we have the following characterization of its edges:
• Semi-origin long edges: a_1 ⊕ b_3, a_4 ⊕ b_6.
• Pertinent edges:
• All of the edges are infinity edges.

Polynomials restricted to faces
Let P ∈ K[x_1^{±1}, x_2^{±1}] be a bivariate Laurent polynomial, that is, P = Σ_{a ∈ Z^2} c_a x^a, where c_a ∈ K. The support of P is the set supp(P) := {a ∈ Z^2 : c_a ≠ 0}. The Newton polytope of P is the convex hull of its support; we denote it by NP(P).
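In practice the support is read off directly from the non-zero terms of the polynomial; e.g., with sympy (the polynomial below is an illustrative choice of ours, not one of the paper's examples):

```python
from sympy import Poly, symbols

x1, x2 = symbols('x1 x2')

# The support of a polynomial is the set of exponent vectors of its
# non-zero terms; NP(P) is the convex hull of this set.
P = Poly(3*x1**2*x2 + x1*x2**3 - 5*x1 + 7, x1, x2)
support = P.monoms()
print(support)  # exponent vectors, e.g. (2, 1) for the term 3*x1**2*x2
```

Feeding these exponent vectors to a convex hull routine yields the vertices of NP(P).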

Consider the pair of polynomials
The corresponding Newton polytopes (in our case polygons) are

Decomposing the Jelonek set
Let f : K^2 → K^2 be a dominant polynomial map such that f(0, 0) ∈ (K^*)^2. The main goal of this section is to describe J_f in terms of the multiplicities and the existence of solutions of polynomial systems resulting from suitable toric transformations of f. The results of this section contribute to the correctness proof of the main algorithm in Sec. 6. We keep using ∆_1, ∆_2, and ∆ to denote NP(f_1), NP(f_2), and NP(f_1) ⊕ NP(f_2), respectively.

Toric change of variables using edges
To each edge Γ ≺ ∆, we associate a toric change of coordinates U ∈ SL(2, Z) in order to deduce a description of the points in the preimage of f that escape to infinity. Assume that (Γ_1, Γ_2) is the pair of summands of Γ, and let a_1 be any endpoint of Γ. We use v to denote the primitive vector starting from a_1 along the direction of Γ.
Then, there exists a vector w ∈ Z^2 satisfying det(v, w) = ±1 and such that, for any point in ∆ ⊕ {−a_1} expressed in the basis ẽ := (v, w), the coefficient in front of w is positive. In other words, w points towards the direction of the polygon. We call (v, w) a Γ-basis for ∆.
The linear transformation U : Z^2 → Z^2 maps the new basis ẽ to the canonical basis of Z^2. We can also relate the computation of ẽ to the shortest vector problem in Z^2 [45, Lecture VIII]. Now consider the matrix T built from the Γ-basis; it corresponds to a monomial change of variables z = x^T. The transformation U that we are looking for is the transpose of the inverse of T, that is, U = T^{−⊤}, which induces an isomorphism (K^*)^2 → (K^*)^2, where D = det(T) = ±1 is the determinant of T. We deduce that T, and thus also U, depends on Γ and ∆. We also have the following immediate consequence. Lemma 4.2. Let α ∈ Z^2 be the primitive integer vector supporting an edge Γ ≺ ∆. Then, for any U ∈ TM_Γ(Z^2), UΓ is an edge of U∆. Moreover, there is a U such that the vector U^{−⊤} · α equals (0, 1) and it supports UΓ.
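The computation of a Γ-basis amounts to completing the primitive edge vector v to a unimodular basis of Z^2, which the extended Euclidean algorithm provides. The sketch below (helper names are our own) omits the sign choice that makes w point into the polygon:

```python
from math import gcd

def egcd(a, b):
    """Return (g, s, t) with a*s + b*t = g = gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, s, t = egcd(b, a % b)
    return (g, t, s - (a // b) * t)

def gamma_basis(v):
    """Complete a primitive v in Z^2 to a basis (v, w) with det(v, w) = 1."""
    v1, v2 = v
    assert gcd(v1, v2) == 1, "v must be primitive"
    g, s, t = egcd(v1, v2)   # s*v1 + t*v2 = 1
    w = (-t, s)              # det(v, w) = v1*s + v2*t = 1
    return w

v = (2, 3)
w = gamma_basis(v)
print(w, v[0]*w[1] - v[1]*w[0])  # the determinant v1*w2 - v2*w1 equals 1
```

Stacking v and w as the rows of T then gives a unimodular matrix inducing the monomial change of variables described above.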
If U f consists of Laurent polynomials, then there might be monomials with negative exponents. We transform them into polynomials by multiplying with suitable monomials. We denote this transformation by U f as well; it is the map (U x^{r_1} f_1, U x^{r_2} f_2) for suitable r_1, r_2 ∈ Z^2. In other words, we multiply by monomials of the smallest possible degree to clear the denominators.
The following observations will be useful in the sequel, as they relate the roots of the transformed polynomial system to the roots of the original one. (i) For any U ∈ TM_Γ(Z^2), the system U f_Γ = 0 is univariate and U f = 0 is bivariate.

The topological and algebraic multiplicity sets of f
Let f : K^2 → K^2 be a polynomial map whose Newton polytope is denoted by ∆, and let U ∈ TM_Γ(Z^2) for some fixed Γ ≺ ∆. Define the graph G := {(z, y) | U(f − y)(z) = 0} ⊂ (K^*)^2 × K^2 and consider the projection π : (z, y) → y. Definition 4.5 (TF_K(Γ), topological face multiplicity set). For any ρ ∈ K^*, consider the set TF_K^ρ(Γ) of all points q ∈ K^2 such that any y sufficiently close to q gives rise to a solution z of U(f − y) = 0 with ∥z − (ρ, 0)∥ → 0. The topological face multiplicity set of f corresponding to Γ, TF_K(Γ), is the union of the sets TF_K^ρ(Γ), where ρ takes all distinct values in K^* satisfying U(f − ỹ)((ρ, 0)) = 0 for some ỹ ∈ K^2. For brevity, we call it the T-multiplicity set.
We prove later in Thm.4.15 that the Jelonek set of f is formed by the union of all Tmultiplicity sets, running over all the infinity edges of NP(f ).
Depending on the type of the edge Γ ≺ ∆, a non-empty set TF_K^ρ(Γ) can be either a finite set or a semi-algebraic curve. We illustrate these cases in the following two examples.
Example 4.6. Consider a case g := U(f − y) for which, on the one hand, for each i ∈ {1, 2}, it holds that g_i(z_1, 0) = 0 ⇔ z_1 = 1, making the set TF_K^ρ(Γ) empty whenever ρ ≠ 1. On the other hand, as the sequence z(k) := (1 + 1/k, 1/k) converges to (1, 0), the sequence y(k) := (1, 4 + 3/k) satisfies g(z(k), y(k)) = 0 and converges to the line L. The following observation follows from the definitions. To compute T-multiplicity sets we need to introduce the multiplicity of a solution of a polynomial system. Definition 4.9 [8, Ch. 4, Def. 2.1]. Let g := (g_1, g_2) ∈ K[z_1, z_2]^2 and let J = ⟨g_1, g_2⟩ be the corresponding ideal. The multiplicity of a solution x ∈ K^2 of the system g = 0 is the dimension of the ring obtained by localizing K[z_1, z_2] at the maximal ideal m := ⟨z_1 − x_1, z_2 − x_2⟩ and considering the quotient by J. For our purposes, to compute the multiplicity of a solution of a bivariate polynomial system we proceed as follows. We augment the system with a linear form, say g_0 = s − r_1 z_1 − r_2 z_2, where s is a new variable and r_1, r_2 are sufficiently generic integers. After we eliminate the variables z_1, z_2 from the system g_0 = g_1 = g_2 = 0, we obtain a univariate polynomial in s. The multiplicities of the roots of this last polynomial correspond to the multiplicities of the solutions of the system. For further details on this approach we refer the reader to [5,6,35].
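The elimination just described can be sketched with successive resultants. The toy system below is our own choice: it has the single solution (0, 0) of multiplicity 2, and the fixed values r_1 = 3, r_2 = 5 stand in for "sufficiently generic" integers:

```python
from sympy import symbols, resultant, roots, Poly

z1, z2, s = symbols('z1 z2 s')

# Toy system with the single solution (0, 0) of multiplicity 2.
g1, g2 = z1**2, z2

# Augment with a linear form g0 = s - r1*z1 - r2*z2 (r1 = 3, r2 = 5).
g0 = s - 3*z1 - 5*z2

# Eliminate z1, then z2, by successive resultants; the result is a
# univariate polynomial in s whose root multiplicities reproduce the
# multiplicities of the solutions of the system.
r = resultant(resultant(g0, g1, z1), g2, z2)
print(roots(Poly(r, s)))  # {0: 2}: the solution (0, 0) has multiplicity 2
```

Here the eliminant is s^2, and its double root at s = 0 (the value of the linear form at the solution) recovers the multiplicity 2 of (0, 0).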
For brevity, we also call it the A-multiplicity set.
These equalities follow from the fact that y_1 = y_2 if and only if the system U(f − y) = 0 has a solution of the form (ρ, 0) for some ρ ∈ C^*. Similarly, consider the system from Example 4.7. For any K ∈ {R, C}, the point (1, 0) is a simple solution of the system U(f − y) = 0 whenever y ∉ L. Indeed, the line L is the zero locus of the polynomial in K[y_1, y_2] obtained as the determinant of the Jacobian matrix det Jac_{(1,0)} U(f − y). Therefore, it holds that L = AF_K(Γ).
As the previous examples indicate, for any Γ ≺ ∆ it holds that TF_K(Γ) ⊆ AF_K(Γ). Indeed, since the system U(f − y) = 0 from Def. 4.5 has a solution ϱ ∈ K^* × {0} whose multiplicity increases at points y ∈ TF_K(Γ), we get the inclusion "⊆". We will see (Prp. 4.16) that we have equality if K = C, and that the inclusion may be strict for K = R (Rem. 4.17 and Thm. 4.22). Furthermore, notice (see, e.g., Example 4.11) that, in general, TF_R(Γ) can be a proper subset of AF_R(Γ). In Section 6.1.2 we present an algorithm to identify the components (if any) of AF_R(Γ) contributing to TF_R(Γ), and hence to J_f.

The Jelonek set as a union of multiplicity sets
We start by introducing some useful notation and auxiliary results. We use Sol_K(Γ) to denote the set of numbers ρ ∈ K for which (ρ, 0) is a solution of the system U(f − y) = 0 for some y ∈ K^2 and U ∈ TM_Γ(Z^2), where Γ is an edge of ∆. Since f is a dominant map, the following holds.
Lemma 4.13. The set Sol_K(Γ) is finite. We need the following technical lemma for the proof of Thm. 4.15; it identifies the edges that we can safely exclude from our computations. Lemma 4.14. Let Γ ≺ ∆ be an edge. If one of the summands of Γ is a vertex that is not the origin, then for any y ∈ K^2, the system (f − y)_Γ = 0 has no solutions in (K^*)^2.
Proof. Let Γ = Γ_1 ⊕ Γ_2 and assume that Γ_1 is the vertex that is not the origin, say Γ_1 = {w}. Then (f_1 − y_1)_Γ = c_w x^w with c_w ≠ 0. Thus its only solution is 0, and so the system (f − y)_Γ = 0 has no solutions in the torus (K^*)^2. Now we are ready to state and prove our main result. Theorem 4.15. Consider a dominant polynomial map f : K^2 → K^2 and its Newton polytope ∆, where K ∈ {C, R}. The Jelonek set of f is J_f = ⋃_Γ TF_K(Γ), where Γ runs over all infinity edges of ∆.
Proof. Consider the subset J°_f ⊆ J_f containing the points y such that f^{-1}(y) is finite. On the other hand, the preimage of any point in J_f \ J°_f shares an irreducible curve with C_f. In addition, the set J_f \ J°_f is finite, i.e., it has dimension 0, as there are finitely many points y ∈ K^2 for which the system (f − y)(x) = 0 is not zero-dimensional. So J°_f is dense in J_f. Let T_ρ(Γ) be the subset of points y ∈ TF_K(Γ) as in Def. 4.5 (without taking the Euclidean closure). It is enough to show that J°_f = ⋃_Γ T_ρ(Γ) (Eq. (6)), where the faces Γ are the infinity edges of ∆. This is so because both sets, on the left- and on the right-hand side of Eq. (6), are dense subsets of the corresponding closed (under the Euclidean topology) sets.
To prove the inclusion J°_f ⊆ ⋃_Γ T_ρ(Γ), we borrow the arguments of the proof of [2, Thm. B], which relates the branches of the curves at infinity to the faces of the corresponding Newton polytopes (cf. also [18, Lemma 1]). Lem. 4.13 and the fact that the Jelonek set is K-uniruled [19,22] (see also the proof of Prop. 6.1) imply that the discriminant, the T-multiplicity sets, and the image of the coordinate axes form a union of curves in K^2. Therefore, for any y ∈ J°_f, one can choose a point q ∈ K^2 and a line segment λ(]0, 1]) outside this union of curves, with λ(0) = y, and consider the system f(x) − λ(t) = 0. (7) Our assumptions on λ imply that all the solutions x(t) ∈ K^2 of (7) are simple and contained in (K^*)^2. Moreover, each coordinate of every solution is a function of t. In particular, the coordinates of the solutions have the Puiseux series expansion x_i(t) = a_i t^{α_i} + higher order terms in t, for i ∈ {1, 2}, (8) where a_i ∈ K^* and α_i ∈ Q. If we substitute x(t) in f_i − λ_i(t) and set to zero the coefficient of the smallest power of t, say ω^*, then we obtain (f_i − y_i)_Γ(a_1, a_2) = 0 for some edge Γ ≺ NP(f). Indeed, the value ω^* equals min(⟨α, ω⟩ | ω ∈ NP(f_i)), which is attained only for points ω in the face of NP(f_i) supported by the vector α = (α_1, α_2). Therefore, the point a = (a_1, a_2) ∈ (K^*)^2 is a solution of (f − y)_Γ = 0.
To prove that Γ is an infinity edge, we use the definition of the Jelonek set (Def. 2.1). As y ∈ J°_f, one of the solutions x(t) of (7) converges to infinity as t → 0 (we may take, e.g., a branch from f^{-1}(λ(]0, 1[))). Then the value α_i appearing in (8) is negative for some i. Together with Lem. 4.14, this implies that Γ is an infinity edge.
To prove the inclusion J°_f ⊇ ⋃_Γ T_ρ(Γ), let Γ ≺ NP(f) be an infinity edge, and assume that T_ρ(Γ) ≠ ∅ for some matrix U ∈ TM_Γ(Z^2). For any y ∈ T_ρ(Γ), let q be close enough to y; this yields a solution p ∈ (K^*)^2 of U(f − q) = 0 as in Def. 4.5. As q converges to y, the second coordinate of p converges to 0, while the first one remains close to a non-zero constant. Then we can express p by a Puiseux series of the form (9), with β = (0, 1). The matrix U ∈ TM_Γ(Z^2) transforms the point p into a solution p_U of the system f − q = 0, whose coordinates are expressed as in (8). The assumptions on Γ imply that one of the values α_i appearing in (8) is negative. Then the solution p_U converges to infinity as p → ϱ and q → y. This proves that y ∈ J°_f.

The T-multiplicity sets in C
In what follows, we describe the relation between T-multiplicity and the A-multiplicity sets for different cases of K.
Proposition 4.16. Consider a dominant polynomial map f : C^2 → C^2, its Newton polytope ∆, and let ρ ∈ Sol_C(Γ). Then, for any edge Γ ≺ ∆, it holds that TF_C^ρ(Γ) = AF_C^ρ(Γ). Proof. We first prove the statement for the case where Γ is a semi-origin edge. Recall the surface G ⊂ (C^*)^2 × C^2 and its closure G ⊂ C^4 defined in Sec. 4.2. Then, Def. 4.5 shows that TF_C(Γ) = π(G ∩ {z_2 = 0}). Furthermore, since Γ is semi-origin, the value y_i appears linearly in the equation U(f_i − y_i)(z_1, 0) = 0. Note that, for a semi-origin edge Γ, the above arguments hold true after we replace C by R; thus, we also get TF_R(Γ) = AF_R(Γ). The same cannot be said for pertinent edges.
Assume now that Γ is pertinent, and let T(Γ) be the subset of TF_C^ρ(Γ) defined as in Def. 4.5 but without taking the Euclidean closure.
The inclusion T(Γ) ⊆ AF_C^ρ(Γ) follows directly from the definition of the multiplicity sets. Since AF_C^ρ(Γ) is closed, the closure of T(Γ) (i.e., the set TF_C^ρ(Γ)) is included in AF_C^ρ(Γ). To prove the inclusion TF_C(Γ) ⊇ AF_C^ρ(Γ) we proceed as follows. Consider the set A(Γ) of points y ∈ AF_C^ρ(Γ) at which the system U(f − y) = 0 has finitely many solutions. One can check (see, e.g., the proof of Thm. 4.15) that AF_C^ρ(Γ) \ A(Γ) is finite, and hence A(Γ) is dense in AF_C^ρ(Γ). Next, we choose a point q ∈ A(Γ) and show that it belongs to TF_C(Γ). Then, the point (ρ, 0) is an isolated solution of U(f − q) = 0.
Remark 4.17. Using the same arguments as in the proof of Prop. 4.16, we can extract two further statements: 1. the set AF_C(Γ) is a union of irreducible algebraic curves, and 2. if f is a real map instead, and Γ is a semi-origin edge, then AF_R^ρ(Γ) = TF_R^ρ(Γ). Let us further describe the multiplicity sets in the complex case. Lemma 4.18. Let f and ∆ be as in Prop. 4.16, and let S be an irreducible component of AF_C(Γ) (see Rem. 4.17). Then, for any p ∉ S and any q ∈ S \ Σ, for some finite subset Σ ⊂ S, the following two quantities are equal and independent of the choice of p and q.
We denote the common value of these two quantities by m, which we also call the weight of the algebraic multiplicity.
Proof. Since the map is complex, the set G ∩ {z_2 = 0} is a union of irreducible algebraic curves. Then, Prop. 4.16 shows that S is the image of a component C ⊂ G ∩ {z_2 = 0} under the projection π : (x, y) → y.
Assume first that Γ is semi-origin. Then, for some i ∈ {1, 2}, the equation U(f_i − y_i)(z_1, 0) = 0 can be written as U(f_i)(z_1, 0) − y_i = 0. Hence, the map K : C → {z_2 = 0}, given by the restriction to C of the other projection (z, y) → z, is a generically finite map, where K^{-1}(q) is a set of solutions of U(f − q) = 0 in (C^*) × {0}. The value m ∈ N is thus exactly the topological degree of K.
Assume now that Γ is pertinent. Then the above map K is not dominant, and S = AF_C^ρ(Γ) for some ρ ∈ C^*. Following the steps of the proof of Prop. 4.16 and using its notation, we conclude that AF_C^ρ(Γ) is given by a polynomial Q ∈ C[y_1, y_2] that divides ϕ_0. Since the polynomial (12) is obtained from the ideal (11), the value m appearing therein is exactly the difference of multiplicities µ_ρ(p) − µ_ρ(q).

The T-multiplicity sets in R
Let f : K^2 → K^2 be a dominant polynomial map, let ∆ be its Newton polytope, let Γ be an edge of ∆, and let U ∈ TM_Γ(Z^2). We use g to denote the pair of polynomials (g_1, g_2) := U(f − y)(z). Recall that G is given by {(z, y) | g = 0} ⊂ (K^*)^2 × K^2, and that G also denotes its closure in K^4. Let P := π|_G, where π : (z, y) → y. Proof. First, note that P(Crit P) can be expressed as follows. From its construction, the map g can be written as g_i(z) = z^{s_i} (f_i − y_i)(φ(z)) for some s_1, s_2 ∈ N^2, where φ : z → x is the coordinate change of variables induced by U.
Let (z, y) ∈ G. The multivariate chain rule implies that det Jac_z(f ∘ φ) = det Jac_{φ(z)}(f) · det Jac_z(φ), where det Jac_z φ is non-zero for any z ∈ (K^*)^2. Therefore, we have (z, y) ∈ G and det Jac_z(f ∘ φ) = 0 if and only if there exists x ∈ (K^*)^2 such that x = φ(z), f(x) = y, and det Jac_x(f) = 0. This readily yields the proof.
In what follows, we assume that K = R. Let S ⊂ R^2 be a semi-algebraic curve such that there exists a surjective polynomial map ϕ : R → S. Then S will be called a parametric semi-line. Jelonek showed in [22] that J_f can be decomposed into a finite union of parametric semi-lines. In the rest of this section, we use Theorem 4.15 to obtain several results on this decomposition.
We say that p ∈ S is an endpoint if, for any small enough ε > 0, the open ball B_ε(p) ⊂ R^2 intersected with S is homeomorphic (in the Euclidean topology) to the half-open interval [0, 1[. Theorem 4.20. Let S be a parametric semi-line of J_f, and assume that S has an endpoint q. Then q belongs to a component of the algebraic curve D_f not containing S.
Proof. Let Γ be an edge of ∆. We keep the same notation as in this section for U, g, and G for K = R. We use CG to denote the complex surface in (C^*)^2 × C^2 given by V(g), and CG̅ to denote its Zariski closure in C^4.
Even though the closure G̅ ⊂ R^4 is smooth over all points in G, the same cannot be said about its boundary. Then, using a sequence of blow-ups of C^4 at the complex algebraic curves CG̅ ∩ {z_1 · z_2 = 0}, if necessary, we may assume (with abuse of notation) that G̅ is smooth. The set G̅ ∩ {z_2 = 0} is now a union of real irreducible algebraic curves and of points.
Recall that q is an endpoint of S. Then, if π|_C denotes the restriction of π to the curve C, we conclude that there exists a point p in its critical locus Crit(π|_C) such that π(p) = q. In fact, since C ⊂ G̅, the point p is in the critical locus Crit(π|_G̅) of π restricted to G̅.
Thanks to Lemma 4.19, it is enough to show that π(Crit(π|_G̅)) contains a curve D with the required property. Since C is an algebraic curve and π(C) is a semi-algebraic curve with an endpoint q, it holds that |π^{-1}(y) ∩ C| ≥ 2 for infinitely many y ∈ π(C). Let U be a neighborhood of p, and let K be a connected component of G̅ ∩ U adjacent to C. Since π|_G̅ is dominant, for all y ∈ π(K), the preimage π|_K^{-1}(y) also has at least two points. It follows that π(K) is non-empty and contained in the desired curve; here, the non-emptiness follows from π|_K being a proper, locally algebraic morphism over the connected set K, whereas the inclusion is obvious.
Proof. Let S denote the component AF^ρ_C(Γ). We retain the set Σ ⊂ S from Lemma 4.18, and define V := S \ Σ. Then, the set RV := R^2 ∩ V is (homeomorphic to) a disjoint union of open intervals in S ∩ R^2. Prop. 4.16 shows that for any q ∈ RV, there exists a point y ∈ R^2 \ S, converging to q, and a set K_y of points z ∈ (C^*)^2, converging to a subset K_q ⊂ C^* × {0}, such that U(f − y)(z) = 0. Lemma 4.18 shows that there exists m ∈ N such that #K_q = m if Γ is a semi-origin edge, and K_q is a point of multiplicity m otherwise. Therefore, we get m = #K_y.
Recall that U(f − y) = 0 is a real polynomial system for each y ∈ R^2 above, and K_q has at least one point in R^* × {0}. Then, if z is a point from K_y converging to a point of K_q ∩ R^2, so does its complex conjugate z̄. Since m = #K_y is odd, at least one of the points in K_y is real. We get q ∈ TF^ρ_R(Γ), and thus q ∈ S_f from Thm. 4.15. Corollary 4.23. Let f : R^2 → R^2 be a polynomial map such that its complexification f := Cf : C^2 → C^2 is birational. Proof. Since the map is birational, for any edge Γ of the Newton polytope ∆ of f, each non-empty component S of AF_C(Γ) has odd weight. Thm. 4.22 shows that J_f contains each component S ∩ R^2, and Thm. 4.15 shows that J_f is formed by all such components S ∩ R^2. The proof thus follows from Remark 4.17 (1).

Characterizing the edges
The first phase of SparseJelonek-2 characterizes the edges (and the corresponding summands), see Sec. 3.1.2, of the Minkowski sum ∆ = ∆_1 ⊕ ∆_2. In particular, it identifies the infinity edges Γ ≺ ∆ that are either semi-origin or pertinent. The characterization relies on the summands of Γ, say (Γ_1, Γ_2). The algorithm for computing the Minkowski sum should check whether dim(Γ_i) is zero or not, and whether Γ_i contains {0} or not, for i ∈ {1, 2}; see Def. 3.1.

Computing the multiplicity sets
We consider only the semi-origin and the pertinent edges (Lem. 4.14). For each one of them, say Γ with summands (Γ_1, Γ_2), the algorithm computes the A-multiplicity set AF_K(Γ), which is a superset of the T-multiplicity set TF_K(Γ). The A-multiplicity set consists of the points y ∈ K^2 for which there exists a particular transformation U ∈ SL(2, Z) (see Section 4.1) such that U(f − y) = 0 has more isolated solutions (counted with multiplicities) in K^* × {0} than Uf = 0 has in K^* × {0} (see Defs. 4.5 and 4.10). First, we consider the semi-origin edges, Line 4 in SparseJelonek-2 (Alg. 3). For each such Γ, we deduce from Remark 4.4 that after we restrict f − y to Γ, we obtain univariate polynomials in a monomial x_1^k x_2^l for some k, l ∈ N, which in turn we regard as a new variable t. Thus, restricting f − y to Γ yields a parametric representation of TF_K(Γ); see Lines 6, 7, and 8 of SparseJelonek-2. We can obtain the implicit equation of TF_K(Γ) by eliminating the common variable t from the system. The multiplicity set is a point, a parametrized curve, or a union of lines.
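The elimination of t can be carried out with a resultant; the following SymPy sketch uses illustrative polynomials P_1, P_2 standing in for the restrictions of f_1, f_2 to Γ (these particular polynomials are our assumption, not taken from the text):

```python
from sympy import symbols, resultant

t, y1, y2 = symbols('t y1 y2')

# Illustrative restrictions of f1, f2 to a semi-origin edge, written in the
# common monomial t (these particular polynomials are assumptions):
P1 = t**3 + t
P2 = t**2 - 1

# Eliminating t from y1 = P1(t), y2 = P2(t) via a resultant yields the
# implicit equation of the parametrized curve (P1(t), P2(t)).
implicit = resultant(y1 - P1, y2 - P2, t)
print(implicit)
```

Over R the parametric form (P_1(t), P_2(t)) can be kept as is, while over C the implicit polynomial above represents the same component.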
Next, we consider the pertinent edges, Line 9 of SparseJelonek-2. Let Γ ≺ ∆ be pertinent. First, we transform the polynomials f − y, using a toric change of variables, to g (Lines 12 and 13), i.e., g := U(f − y). The change of variables corresponds to a Γ-basis (see Sec. 4.1) of the lattice Z^2, Line 10.
Following Remarks 4.3 and 4.4 and Lem. 4.14, the change of basis preserves the number of the 0-dimensional solutions and results in a simpler system to solve; in fact, a univariate one (Line 14).
We consider Γ only if Uf_Γ = 0 has solutions in K^* × {0} (see Remark 4.4 and Lem. 4.14); this excludes the edges Γ that are neither semi-origin nor pertinent (Lem. 4.14). Therefore, g = 0 has solutions (ρ, 0) ∈ K^* × {0} whose multiplicity changes depending on whether y belongs to the A-multiplicity set or not (Def. 4.5). For each such solution ρ we compute a bivariate polynomial, chosen so as to avoid the discriminant and all other multiplicity sets. If p_i belongs to the Jelonek set, then TF_R(Γ) contains the corresponding interval, and the union of all such intervals forms TF_R(Γ). If p_i is not in the Jelonek set, then we ignore this line segment.
We present Alg. ms_resultant in Sec. 6.1 for computing the multiplicity of a solution of a bivariate system and the corresponding T-multiplicity sets.

Complexity and representation of the output
Let f_1, f_2 be polynomials of degree d and bitsize τ. Also, assume that their Newton polytopes have at most n edges. We go over the various steps of SparseJelonek-2 to estimate their complexity. Initial computations. First, we compute the Minkowski sum, Line 3. This costs O(n) and results in a convex polygon with at most O(n) edges. We do this by slightly modifying the well-known optimal algorithm [9, Sec. 13.3] for computing the Minkowski sum of two polygons, so that it remembers the summands of each edge of the sum.
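The modification mentioned above can be sketched as follows; this is the classical edge-merge algorithm for convex polygons (vertices listed counter-clockwise), augmented so that every edge of the sum is tagged with its summand. The function name and the tagging convention are ours, not the text's:

```python
def minkowski_sum(P, Q):
    """Minkowski sum of two convex polygons (CCW vertex lists) by merging
    their edge sequences in polar-angle order. Each edge of the sum is
    tagged 'P', 'Q', or 'PQ' according to which polygon(s) contributed it."""
    def reorder(V):  # start at the lowest (then leftmost) vertex
        i = min(range(len(V)), key=lambda k: (V[k][1], V[k][0]))
        return V[i:] + V[:i]
    P, Q = reorder(P), reorder(Q)
    P, Q = P + P[:2], Q + Q[:2]          # sentinels
    vertices, tags = [], []
    i = j = 0
    while i < len(P) - 2 or j < len(Q) - 2:
        vertices.append((P[i][0] + Q[j][0], P[i][1] + Q[j][1]))
        ep = (P[i + 1][0] - P[i][0], P[i + 1][1] - P[i][1])
        eq = (Q[j + 1][0] - Q[j][0], Q[j + 1][1] - Q[j][1])
        cross = ep[0] * eq[1] - ep[1] * eq[0]
        step_p = cross >= 0 and i < len(P) - 2
        step_q = cross <= 0 and j < len(Q) - 2
        tags.append('PQ' if step_p and step_q else ('P' if step_p else 'Q'))
        i += step_p
        j += step_q
    return vertices, tags

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
vertices, tags = minkowski_sum(square, square)
print(vertices)   # [(0, 0), (2, 0), (2, 2), (0, 2)]
```

Since both edge sequences are already sorted by angle, the merge is linear, and the tags identify, for each edge Γ of ∆, its summands (Γ_1, Γ_2).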
Computations with semi-origin edges. Next, we consider the semi-origin edges of the Minkowski sum, Lines 4 to 8; that is, edges having at least one summand containing 0. The most computationally expensive operation is the computation of the roots (real or complex) of univariate polynomials; these are the polynomials P_1(t) and P_2(t) appearing at Lines 6 and 7. They come from the restriction of f_1 or f_2 to Γ, so their degree is at most d and their bitsize at most τ. The computation of their roots (real or complex) costs O_B(d^3 + d^2 τ) [34]. As there are at most O(n) semi-origin edges, the total cost of the first phase is O_B(n(d^3 + d^2 τ)).
Regarding the output of this phase, first assume that K = R. If the real roots of P_1 ∈ Z[t] are γ_1, ..., γ_r, then whenever 0 ∉ Γ_2, the output is a union of horizontal lines defined by numbers in Z[γ_i], for i ∈ [r]; similarly for the case 0 ∉ Γ_1. When both Γ_1 and Γ_2 contain 0 (Line 8), J_f is a parametrized polynomial curve, defined by polynomials in Z[t] of degree at most d and bitsize at most τ. For K = C, its implicit representation consists of a polynomial in Z[y_1, y_2] of degree at most d and maximum coefficient bitsize O(dτ); we compute it in O_B(d^3 τ), e.g., [31]. If K = C and 0 ∈ Γ_1 (or 0 ∈ Γ_2), then we represent the union of lines in a more unified way by considering the resultant R_2(y) = res(y_2 − Σ_j b_j t^j, P_1(t), t) ∈ Z[y_2]. In this way we have the implicit representation (y_1, R_2(y)) of J_f whenever K = C, and a parametrized one when K = R.
Computations with pertinent edges. The last phase of the algorithm deals with pertinent edges. The first task consists in computing a unimodular basis that fits our needs (Line 10 and Section 4.1). The most costly part of this procedure is the computation of the Smith Normal Form (SNF) of a matrix. As the degree of the input polynomials is O(d), this also bounds the dimension of the matrices. The cost of the SNF is O_B(d^{ω+1}) [39], where ω is the exponent of matrix multiplication.
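In dimension two, a unimodular matrix whose first row is the primitive direction vector of Γ can also be obtained directly from the extended Euclidean algorithm; the following SymPy sketch is our own helper, not the text's procedure (which goes through the SNF):

```python
from sympy import gcdex, Matrix

def gamma_basis(a, b):
    """Given a primitive vector (a, b) with gcd(a, b) = 1, return a matrix
    in SL(2, Z) whose first row is (a, b). gcdex returns (s, t, g) with
    s*a + t*b = g = 1, so [[a, b], [-t, s]] has determinant a*s + b*t = 1."""
    s, t, g = gcdex(a, b)
    assert g == 1, "input vector must be primitive"
    return Matrix([[a, b], [-t, s]])

U = gamma_basis(3, 5)
assert U.det() == 1
```

The SNF-based computation generalizes this to higher dimensions and larger matrices, which is why the complexity estimate above is stated in terms of the SNF.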
After the toric change of variables we obtain a univariate polynomial and we compute its roots. Then, we compute the multiplicity sets by exploiting the multiplicities of the roots of the corresponding bivariate polynomial system. ms_resultant (Alg. 4) computes the corresponding multiplicity sets in O_B(d^7 + d^6 τ) in the complex case and O_B(d^{19.11} + d^{18.11} τ) in the real case. As there are at most n pertinent edges, we multiply the previous bounds by n to get the overall cost. The latter bounds dominate the complexity of the algorithm. If the polynomials are generic, then the complexity bound becomes significantly better. Proof. When the input polynomials are generic, the Minkowski sum of their Newton polytopes does not contain a pertinent edge with probability 1. Therefore, the complexity of the algorithm is dominated by the computations with the semi-origin edges.
Figure 2 shows that NP(f) = ∆ has exactly six infinity edges; they are either pertinent or semi-origin (see Def. 3.1). Let S_{01}, S_{02}, S_{13}, S_{24}, S_{35}, S_{46} ⊂ K^2 denote the corresponding multiplicity sets in K^2 (see Def. 4.5). By Thm. 4.15, the union of the latter is J_f. We apply SparseJelonek-2 to compute J_f.

Computing the multiplicity sets
Let f : K^2 → K^2 be a dominant polynomial map sending (0, 0) to (K^*)^2, and let ∆ := NP(f). In this section we present how to compute the T-multiplicity set TF_K(Γ) corresponding to a pertinent edge Γ ≺ ∆.
Following the process of SparseJelonek-2, for a pertinent edge Γ, after we perform a toric change of variables, we obtain a bivariate polynomial system that has solutions of the form (ρ, 0), for some complex (or real) ρ, Line 14. Each ρ gives rise to the set AF^ρ_K(Γ). The computation of AF^ρ_K(Γ) depends on the multiplicity of (ρ, 0). We present an algorithm, ms_resultant, for computing the multiplicity of (ρ, 0) as a root of a bivariate polynomial system, and then use it to compute the multiplicity set AF^ρ_K(Γ). The algorithm exploits resultant computations, and its worst-case complexity is polynomial.

Resultant computations for the multiplicity set
The pseudo-code of the algorithm appears in Alg. 4, and its proof of correctness presents the details of the various steps. Proposition 6.1. ms_resultant (Alg. 4) correctly computes the A-multiplicity set AF_K(Γ) that corresponds to a pertinent edge Γ ≺ ∆. The result is a (possibly empty) union ∪_ρ K_ρ of finitely many curves K_ρ that are the zero loci of polynomials in K[y_1, y_2], where K ∈ {R, C} and ρ runs over all points in K^*.
Proof. Let (ρ, 0) be a solution of g_1 = g_2 = 0, say of multiplicity µ. We will exploit the fact that if we project onto z_1, then ρ is a root of the resultant of multiplicity at least µ.
We consider the resultant of g_1 and g_2 with respect to z_2. This results in a polynomial R_1. Obviously, ρ is a root of R_1 of multiplicity at least µ, say µ_1 ≥ µ. We also know that R_1 = R_{11} · R_{12}, where R_{11} depends only on z_1. The presence of the factor R_{11} is guaranteed because the system has solutions of the form (ρ, 0). We divide out the factors of R_1 that depend only on z_1, that is, R_{11}, and we end up with the polynomial R_{12}. If we substitute z_1 = ρ into R_{12}, then the resulting polynomial J_1 is in Z[ρ][y_1, y_2] and is non-zero. For ρ to be a root of R_1 of higher multiplicity, J_1 must be zero. Thus, the zero locus of J_1 is a superset of the part of the multiplicity set AF^ρ_K(Γ) that emanates from (ρ, 0). Then, we consider the resultant of g_1 and g_2 with respect to z_1. We proceed as before, mutatis mutandis, where now R_{21} is a power of z_2. At the end we compute a superset of the part of the multiplicity set AF_K(Γ) emanating from (ρ, 0).
If we want the solution (ρ, 0) to have multiplicity higher than µ as a solution of the system g_1 = g_2 = 0, then both ρ and 0 must be roots of higher multiplicity of the corresponding resultants, that is, of R_1 and R_2, respectively. Thus, the gcd of J_1 and J_2 must vanish. The latter is J_ρ ∈ Z[ρ][y_1, y_2], which is the implicit equation of a curve K_ρ = AF^ρ_K(Γ), the A-multiplicity set. For the complex case, the A-multiplicity set AF^ρ_C(Γ) is the T-multiplicity set TF^ρ_C(Γ). For the real case, following Cor. 4.21, we have to check whether a line segment of AF^ρ_C(Γ) ∩ R^2 belongs to TF^ρ_R(Γ) or not.
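The R_1/J_1 computation above can be traced on a toy system (our own, not from the text) in which g_1 = g_2 = 0 has the solution (ρ, 0) = (1, 0), whose multiplicity jumps from 1 to 2 exactly on the candidate curve J_1 = 0:

```python
from sympy import symbols, resultant, quo

z1, z2, y1 = symbols('z1 z2 y1')

# Toy system with solution (rho, 0) = (1, 0) (an assumption for illustration):
g1 = z2 - (z1 - 1)**2
g2 = z2 - y1*(z1 - 1)

# Project out z2; rho = 1 is a root of R1 of multiplicity at least mu.
R1 = resultant(g1, g2, z2)        # up to sign: (z1 - 1)*(z1 - 1 - y1)

# Divide out the factor depending only on z1 (here R11 = z1 - 1),
# then substitute z1 = rho to obtain J1.
R12 = quo(R1, z1 - 1, z1)
J1 = R12.subs(z1, 1)

# J1 vanishes exactly where the multiplicity of (1, 0) jumps: at y1 = 0
# the system degenerates to z2 = (z1 - 1)**2 = 0, a double point.
assert J1 != 0 and J1.subs(y1, 0) == 0
```

The second resultant R_2 = res(g_1, g_2, z_1) is treated in the same way, and the gcd of J_1 and J_2 yields J_ρ.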
By repeating the same procedure over all solutions (ρ, 0) of the system g_1 = g_2 = 0, we obtain the T-multiplicity set TF_R(Γ). To compute R_1 we employ fast subresultant algorithms, e.g., [31]. We perform O(d) operations, each consisting of multiplying two trivariate polynomials in z_1, y_1, y_2 with degrees and bitsizes as mentioned before. Each multiplication costs O_B(d^5 τ), so the overall cost for computing R_1 is O_B(d^6 τ).
To compute R_{12} we consider R_1 as a bivariate polynomial in y_1 and y_2 with coefficients in Z[z_1], and we compute its primitive part; that is, we compute the gcd of all the coefficients and then divide all of them by it. The coefficients are polynomials in z_1. The same complexity and bitsize bounds hold for the computation of R_2 and R_{22}.
The polynomial g is of size (d, τ), so the cost of solving it is O_B(d^3 + d^2 τ) [34]. Then, for each root ρ of g, we make the substitution (z_1, z_2) ← (ρ, 0) to obtain the polynomials J_1 and J_2, and we compute their gcd. The latter is of degree d with respect to both y_1 and y_2, and its coefficients are polynomials in ρ of bitsize O(d^3 + d^2 τ). The expected cost of the gcd is O_B(d^6 + d^5 τ) [30], which is almost linear in the size of the output. As we have to do this at most d times in the worst case, corresponding to the distinct roots of g, the bound follows for the complex case. Remark 6.3. For the complex case, we can avoid working with algebraic numbers. We notice that we perform computations with all the roots of g, and so we exploit the Poisson formula for the resultant. Thus, we first consider the resultant of R_{12} and g, that is, H ← res(R_{12}, g, z_1) ∈ Z[y_1, y_2]; this is the evaluation of R_{12} over all the roots of g. Then, we consider the gcd with R_{22}. The theoretical complexity is the same as considering each root independently and performing gcd computations in an extension field. The gain is that all the output polynomials have integer coefficients.
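Remark 6.3 rests on the multiplicative property of the resultant: res(R_{12}, g, z_1) equals, up to a power of the leading coefficient of g, the product of the values of R_{12} at all roots of g, so it stays over the integers. A minimal SymPy illustration (the polynomials are ours):

```python
from sympy import symbols, resultant, expand

z1, y1 = symbols('z1 y1')

g = z1**2 - 2        # roots ±sqrt(2): algebraic, not rational
R12 = y1 - z1        # polynomial to evaluate at all roots of g

# res(R12, g, z1) multiplies out (y1 - sqrt(2))*(y1 + sqrt(2)) without
# ever leaving the integers:
H = resultant(R12, g, z1)
assert expand(H) == y1**2 - 2
```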
6.1.2 Testing the emptiness of TF^ρ_R(Γ) and its complexity
Our setting is as follows. Assume that for a pertinent edge Γ ≺ ∆ we have performed the toric transformation of the system and have computed a real value ρ, resulting in a polynomial J_ρ defining a curve; let us call this curve C_ρ. The degree of J_ρ with respect to y_1 and y_2 is O(d), and its coefficients are polynomials in ρ of bitsize O(d^3 + d^2 τ); also, ρ is a real root of a polynomial of size (O(d), O(d + τ)).
First, we check the condition of Thm. 4.22: we pick a point p ∈ C_ρ and compute the value δ := µ_ρ(p) − µ_ρ(0). Due to the preprocessing procedure, recall that 0 ∉ J_f. If δ is odd, Thm. 4.22 shows that C_ρ ⊂ J_f, and we are done. The complexity of this step is dominated by that of the subsequent steps, so we do not elaborate further.
If Thm. 4.22 does not apply, then we need to consider the intersection points of C_ρ with the discriminant curve of f, say D. Recall that D is defined by a polynomial in Z[y_1, y_2] and has size (O(d^2), O(d^2 + dτ)). Consequently, we partition C_ρ using these intersection points, and we test which curve segments contribute to the Jelonek set of f.
To do so, we choose a generic point, say p, on each curve segment, and we rely on the fact that the following two conditions, (C1) and (C2), are equivalent. We can solve this polynomial, which has coefficients in an extension field, in O_B(d^{11} + d^{10} τ) [40]. The latter bound dominates the complexity of computing the topology of C_ρ.
Then we need to consider the intersection points of C_ρ and D_f. This costs less than computing the topology of C_ρ, as the latter (implicitly) requires solving the system of J_ρ and its derivatives, which involves polynomials of higher degree and bitsize than the polynomial defining D_f.
At this point we can assume that we have partitioned C_ρ into curve segments and have computed a generic point, say p, on each. One of the coordinates is a rational number of bitsize at most O(d^7 + d^6 τ).
Therefore, it suffices to test condition (C2) for a point p on C_ρ. To do this, we perform the following steps.
(S1) We compute a point q = (q_1, q_2) ∈ R^2 such that the segment from p to q intersects neither any other curve C_{ρ′} nor the discriminant curve of f. The curves C_{ρ′} correspond to A-multiplicity sets emanating from other roots ρ′ and/or edges of NP(f). We obtain the discriminant curve by eliminating x_1 and x_2 from the equations {f_1(x_1, x_2) − y_1, f_2(x_1, x_2) − y_2, det(Jac(f_1, f_2))}.
(S2) We check whether the systems f − p = 0 and f − q = 0 have the same number of real solutions. If this is the case, then the curve segment of C_ρ that p belongs to is not part of the multiplicity set TF_R(Γ), following Def. 4.5.
The above steps are enough for our purposes: the set R^2 \ (D_f ∪ J_f) is a union of connected components, each of which has a constant number of real preimages under f. A small enough ball B around a generic point p ∈ C_ρ is either included in one of the components of R^2 \ (D_f ∪ J_f) or intersects exactly two of them. Assume that p ∉ D_f. We conclude from condition (C2) that if we sample two points q and q′, following (S1), then they belong to two different connected components of R^2 \ (D_f ∪ J_f) adjacent to J_f if and only if the number of real solutions (counted with multiplicities) of the system f − p = 0 is smaller than that of either f − q = 0 or f − q′ = 0.
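Step (S2) can be illustrated on the toy map f(x_1, x_2) = (x_1, x_1 x_2), whose Jelonek set is the line {y_1 = 0}; the map and the test points are ours, purely for illustration:

```python
from sympy import symbols, solve, Rational

x1, x2 = symbols('x1 x2')

def real_fiber_count(p):
    """Number of real solutions of f(x1, x2) = p for f = (x1, x1*x2)."""
    sols = solve([x1 - p[0], x1*x2 - p[1]], [x1, x2], dict=True)
    return sum(1 for s in sols if all(v.is_real for v in s.values()))

p = (0, 1)                 # a test point on the candidate curve {y1 = 0}
q = (Rational(1, 2), 1)    # a nearby point off the curve

# The fiber counts differ (0 vs 1), so the segment through p belongs
# to the Jelonek set: preimages escape to infinity as y1 -> 0.
assert real_fiber_count(p) == 0
assert real_fiber_count(q) == 1
```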
There is a straightforward way of realizing the steps that test whether the corresponding curve segment contributes to the Jelonek set, namely by solving various polynomial systems. Instead, we will avoid solving systems (as much as possible) and rely on (separation) bounds on their roots [14] to compute the points of interest.
The polynomial J_ρ has degree O(d) with respect to ρ, y_1, and y_2, and its bitsize is O(d^3 + d^2 τ) [30]; the latter follows from the gcd computations in an extension field.
(S1) Compute the intermediate point q. One of the coordinates of p is a rational number; assume, without loss of generality, that this is the second one and that its value is c. That is, p = (p_1, c), where p_1 is a root of J_{ρ,c} ∈ Z[ρ][y_1] of degree d and bitsize O(d^8 + d^7 τ). Consider the line y_2 = c and the intersections of this line with all the other curves, say C_{ρ′} with equations J_{ρ′}(y_1, y_2), that correspond to other multiplicity sets, and with the discriminant curve of f. Let the intersection closest to p be a point p′. Then it suffices to choose the middle point of the segment p p′. Even more, we can estimate, using root bounds, how close to p the point p′ could be in the worst case, and then choose a point q closest to p with respect to this worst-case estimate. Using root bounds, there is no need to solve the corresponding equations.
Regarding the curves corresponding to multiplicity sets, the polynomials J_{ρ′} are as above, and [41] gives a way to choose (the first coordinate of) the intermediate point q. Hence, the bitsize of the coordinates of q is O(d^7 + d^6 τ).
Consider the system f_1(x_1, x_2) − y_1 = f_2(x_1, x_2) − y_2 = det(Jac(f_1, f_2)) = 0. This is a system of three equations in four variables. If we eliminate x_1 and x_2, then we obtain a polynomial D ∈ Z[y_1, y_2], which is the discriminant of f. It has degree O(d^2) and its bitsize is O(d^2 τ). Then, we work similarly as in the case of the curves that correspond to the multiplicity sets; the bounds for q are similar.
To summarize, if we add a number of order 2^{−O(d^{10} + d^9 τ)} to the coordinates of p, then we obtain a point q such that the segment p q intersects neither any other curve corresponding to another multiplicity set nor the discriminant curve.
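The root bounds invoked above can be made concrete with, e.g., Mignotte's root-separation bound: for a square-free f ∈ Z[y] of degree d, any two distinct roots are at distance more than sqrt(3) · d^{−(d+2)/2} · ‖f‖_2^{−(d−1)}, so stepping away from p by half of such a bound cannot cross another intersection point. A sketch (the helper function is our own):

```python
from sympy import Poly, Rational, sqrt, symbols

y = symbols('y')

def mignotte_sep_bound(f):
    """Mignotte's lower bound on the distance between distinct roots of a
    square-free integer polynomial: sqrt(3) * d**(-(d+2)/2) * norm2**(1-d)."""
    p = Poly(f, y)
    d = p.degree()
    norm2 = sqrt(sum(c**2 for c in p.all_coeffs()))
    return sqrt(3) * d**Rational(-(d + 2), 2) * norm2**(1 - d)

# Roots of y**2 - 2 are ±sqrt(2), at distance 2*sqrt(2) ≈ 2.83; the bound
# is a valid (much smaller) lower estimate for that distance.
b = mignotte_sep_bound(y**2 - 2)
assert float(b) > 0 and float(b) < float(2*sqrt(2))
```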
(S2) Count the number of real roots. Having computed the points p = (p_1, p_2) ∈ K_ρ and q = (q_1, q_2), it remains to compute the real roots of the systems f(x_1, x_2) − p = 0 and f(x_1, x_2) − q = 0.
The second system consists of polynomials in Z[x_1, x_2] of degree d and bitsize O(d^{10} + d^9 τ).
Using [6], we can solve the system, and thus compute its real roots, in O_B(d^{14.37} + d^{13.37} τ), using the fastest algorithm for matrix multiplication.
The polynomials of the first system have coefficients in an extension field. Recall that p_1 = c ∈ Z, where c ≤ 2^{O(d^7 + d^6 τ)}, and p_2 is a root of a polynomial J_ρ(c, y_1) ∈ Z[ρ][y_1], where ρ is a real root of the polynomial g(z_1) = 0. Thus, we can compute the real roots of f(x_1, x_2) − p = 0 by computing the real solutions of the system f_1(x_1, x_2) − y_1 = f_2(x_1, x_2) − c = J_{z_1}(c, y_1) = g(z_1) = 0. This is a system of four equations in four unknowns, namely x_1, x_2, y_1, z_1. The degrees of the polynomials are bounded by O(d) and their bitsize is O(d^7 + d^6 τ). We can solve this system in O_B(d^{22.11} + d^{21.11} τ) [6]. The latter bound dominates the complexity of the overall procedure.

Figure 1: Two Newton polytopes and their Minkowski sum. They correspond to the polynomials of the map in Eq. (2).

Figure 2: Two Newton polytopes and their Minkowski sum. They correspond to the polynomials of the map in Section 5.4. The edges of the blue, resp. the black, polytope are labeled by a_i, resp. b_j, for i, j ≠ 0. The vertices that correspond to (0, 0) are a_0 and b_0.

Figure 4: Toric change of basis. The face Γ is the edge delimited by a_1 and a_2, while Γ′ is delimited by a_1 and a_3. The corresponding primitive vectors are v and v′, respectively. The vectors v and w form a basis of Z^2. In addition, they define a cone that contains all the lattice points of the polygon. Thus, the toric change of basis consists of the vectors v and w and ẽ = (v, w).

Lemma 4.8. The T-multiplicity set of f corresponding to Γ, TF_K(Γ), does not depend on the choice of the toric transformation U from TM_Γ(Z^2).

Corollary 4.21. With the same notation as in Theorem 4.20, let Σ ⊂ R^2 be the finite set of points formed by the endpoints of S and its singularities. Then S is homeomorphic to a line if Σ is empty. Otherwise, each connected component of S \ Σ is a segment homeomorphic (in the induced Euclidean topology) to the open interval ]0, b[, where b ∈ {1, +∞}.
Theorem 4.22. Let g := Cf : C^2 → C^2 be the complexification of f, and let AF^ρ_C(Γ) be a component of the multiplicity set AF_C(Γ) of f having odd weight (see Lemma 4.18), for some ρ ∈ R^* and some Γ ≺ ∆. Then, it holds that

Theorem 5.1. Let f = (f_1, f_2) : K^2 → K^2 be a dominant polynomial map, where f_1, f_2 ∈ Z[x_1, x_2] are polynomials of size (d, τ) and their Newton polytopes have at most n edges. SparseJelonek-2 computes the Jelonek set of f in O_B(n(d^7 + d^6 τ)) over the complex numbers and in O_B(n(d^{19.11} + d^{18.11} τ)) over the real numbers.
Let f = (f_1, f_2) : K^2 → K^2 be a dominant polynomial map, where f_1, f_2 ∈ Z[x_1, x_2] are polynomials of size (d, τ) and their Newton polytopes have at most n edges. If f_1 and f_2 are generic, then SparseJelonek-2 computes the Jelonek set of f in O_B(n(d^3 + d^2 τ)).

Figure 5: How to test whether a curve segment of C_ρ contributes to the Jelonek set. The horizontal line y_1 = p_1 intersects the other curves of other multiplicity sets. If p′ is the closest intersection to p, then q is a point in the segment p p′.
Definition 2.1 (Jelonek set). Given two affine varieties, X and Y, and a map F : X → Y, we say that F is non-proper at a point y ∈ Y if there is no neighborhood U ⊂ Y of y such that the preimage F^{-1}(U̅) is compact, where U̅ is the Euclidean closure of U. In other words, F is non-proper at y if there is a sequence of points {x_k}_{k∈N} in X such that ∥x_k∥ → +∞ and F(x_k) → y. The Jelonek set of F, J_F, consists of all points y ∈ Y at which F is non-proper.
and the T-multiplicity set is the union of all AF^ρ_C(Γ); Prop. 4.16. If K = R, then we have to test whether certain distinguished points, say {p_1, ..., p_r}, in AF^ρ_R(Γ) (if any) are in the Jelonek set or not. That is, according to Cor. 4.21, we subdivide AF^ρ_C(Γ)