Sparse resultant based minimal solvers in computer vision and their connection with the action matrix

Many computer vision applications require robust and efficient estimation of camera geometry from a minimal number of input data measurements, i.e., solving minimal problems in a RANSAC framework. Minimal problems are usually formulated as complex systems of polynomial equations. Many state-of-the-art efficient polynomial solvers are based on the action matrix method, which has been automated and highly optimized in recent years. In this paper we explore the theory of sparse resultants for generating minimal solvers and propose a novel approach based on using an extra polynomial with a special form. We show that for some camera geometry problems our extra polynomial-based method leads to smaller and more stable solvers than the state-of-the-art Gröbner basis-based solvers. The proposed method can be fully automated and incorporated into existing tools for automatic generation of efficient polynomial solvers. It provides a competitive alternative to popular Gröbner basis-based methods for minimal problems in computer vision. Additionally, we study the conditions under which the minimal solvers generated by the state-of-the-art action matrix-based methods and by the proposed extra polynomial resultant-based method are equivalent. Specifically, we present a step-by-step comparison of the action matrix-based and sparse resultant-based approaches, followed by a set of substitutions that lead to equivalent minimal solvers.


Introduction
The robust estimation of camera geometry, one of the most important tasks in computer vision, is usually based on solving so-called minimal problems (Nistér, 2004; Kukelova et al, 2008; Kukelova, 2013), i.e., problems that are solved from minimal samples of input data, inside a RANSAC framework (Fischler and Bolles, 1981; Chum et al, 2003; Raguram et al, 2013). Since the camera geometry estimation has to be performed many times inside RANSAC (Fischler and Bolles, 1981), fast solvers to minimal problems are of high importance. Minimal problems often result in complex polynomial systems in several variables. Further, the systems tend to be overdetermined and consist of polynomials whose coefficients are non-generic, i.e., algebraically constrained (Cox et al, 1998, p. 109). A popular approach for solving minimal problems is to design procedures that can efficiently solve only a special class of systems of equations, e.g., systems resulting from the 5-pt relative pose problem (Nistér, 2004), and to move as much computation as possible from the "online" stage of solving equations to an earlier pre-processing "offline" stage.
Most of the state-of-the-art specific minimal solvers are based on Gröbner bases and the action matrix method (Cox et al, 1998; Stetter, 1996). The Gröbner basis method was popularized in computer vision by Stewenius (Stewenius, 2005). The first efficient Gröbner basis-based solvers were mostly handcrafted (Stewenius et al, 2005a, 2006) and sometimes very unstable (Stewenius et al, 2005b). However, in the last 15 years much effort has been put into making the process of constructing the solvers more automatic (Kukelova et al, 2008; Larsson et al, 2017a,b) and the solvers stable (Byröd et al, 2007, 2008) and more efficient (Larsson et al, 2017a,b; Larsson and Åström, 2016; Bujnak et al, 2012; Larsson et al, 2018; Martyushev et al, 2022). There are now powerful tools available for the automatic generation of efficient Gröbner basis-based solvers (Kukelova et al, 2008; Larsson et al, 2017a).
Note that while all these methods are known in the computer vision community as Gröbner basis methods, most of them do not generate solvers that directly compute Gröbner bases. They usually compute polynomials from a border basis (Mourrain and Trébuchet, 2008) or go "beyond" Gröbner and border bases (Larsson et al, 2018). However, all these methods are based on computing an action matrix. Therefore, for the sake of simplicity, in this paper we will interchangeably write "action matrix method" and "Gröbner basis method" to refer to the same class of methods (Stetter, 1996; Kukelova et al, 2008; Larsson et al, 2017a,b; Larsson and Åström, 2016; Bujnak et al, 2012; Larsson et al, 2018; Martyushev et al, 2022).
The first step in such action matrix methods is to compute a linear basis B of the quotient ring A = C[X]/I, where I denotes the ideal generated by the input polynomial system. An action matrix method based on Gröbner bases computes the basis of A by defining the division w.r.t. some Gröbner basis of the ideal I. In general, computing a Gröbner basis requires fixing a monomial ordering. Alternatively, a heuristic basis sampling method (Larsson et al, 2018) that goes beyond Gröbner bases and monomial orderings can be used to compute B. The basis sampling method is related to the border basis methods (Mourrain and Trébuchet, 2008; Mourrain et al, 2021), which generalize the concept of monomial ordering and even propose a non-monomial basis of A. The second step in these action matrix methods is to compute a linear map T_f representing the multiplication in A w.r.t. some polynomial f. The representative matrix M_f of the map T_f is what we call the action matrix. Recently, (Telen and Van Barel, 2018; Telen et al, 2018) generalized the method used for generating the basis B. The proposed methods, known as normal form methods, were developed for general systems of polynomial equations; they are not tailored to problems that appear in computer vision.
While the action matrix method for generating efficient minimal solvers has been thoroughly studied in computer vision, and all such recently generated solvers are highly optimized in terms of efficiency and stability, little attention has been paid to an alternative algebraic method for solving systems of polynomial equations, i.e., the resultant method (Cox et al, 1998, Chapter 3). The resultant method was manually applied to several computer vision problems (Kukelova et al, 2012; Hartley and Li, 2012; Kasten et al, 2019; Kukelova, 2013). However, in contrast to the action matrix method, there is no method for automatically generating sparse resultant-based minimal solvers. The most relevant works in this direction are the methods using the subdivision of the Minkowski sum of all the Newton polytopes (Emiris, 2012; Canny and Emiris, 2000; Checa and Emiris, 2022) and the iterative method based on the Minkowski sum of a subset of Newton polytopes (Heikkilä, 2017). However, the subdivision methods are not applicable to polynomial systems with non-generic coefficients, whereas the method in (Heikkilä, 2017) leads (due to linearizations) to larger and less efficient solvers than the action matrix solvers.
Our first objective in this paper is to study the theory of sparse resultants and propose an approach for generating efficient solvers for camera geometry problems by using an extra polynomial with a special form. Our approach, based on the Newton polytopes and their Minkowski sums, is used to generate sparse resultant matrices, followed by a Schur complement computation and a conversion of the resultant constraint into an eigenvalue problem.
Note that our approach differs from the subdivision methods (Canny and Emiris, 2000; Emiris, 2012) in how the Newton polytopes are used for generating minimal solvers. The subdivision methods compute the Minkowski sum of the Newton polytopes of all the polynomials and divide its lattice interior into cells, which are used to construct the sparse resultant matrices. In this paper, by contrast, we make this process iterative: we compute the Minkowski sum of each subset of Newton polytopes (of the input polynomials) and avoid computing a subdivision. We directly use the lattice interior to generate a candidate sparse resultant matrix. For systems with non-generic coefficients, such an iterative approach has been effective in generating sparse resultant matrices for the minimal problems in computer vision. The sparse resultant matrix and the eigenvalue problem together represent our minimal solver. The solvers based on our proposed approach have achieved significant improvements over the state-of-the-art sparse resultant-based solvers (Heikkilä, 2017) and have achieved efficiency and/or accuracy comparable or better than state-of-the-art Gröbner basis-based solvers. Moreover, our proposed approach has been fully automated and can be incorporated into the existing tools for automatic generation of efficient minimal solvers (Kukelova et al, 2008; Larsson et al, 2017a, 2018).
There is a similarity between the solvers obtained using our sparse resultant-based approach and the solvers based on the action matrix methods (Kukelova et al, 2008; Larsson et al, 2017a, 2018; Stetter, 1996; Byröd et al, 2009; Martyushev et al, 2022, 2023). Therefore, the second objective in this paper is to investigate this similarity. In a step-by-step manner we demonstrate that for a minimal solver generated by the action matrix method, we can change the steps performed by the sparse resultant method such that it leads to an equivalent solver. Similarly, we demonstrate that for a given minimal solver generated by the sparse resultant method, we can change the steps performed by the action matrix method such that it leads to an equivalent solver. Specifically, our contributions are:
1. A novel approach (Sec. 3) of adding an extra polynomial with a special form for generating a sparse resultant matrix whose sparse resultant constraint can be decomposed (Sec. 3.3) into an eigenvalue problem. In particular:
• A scheme (Sec. 3.2) to iteratively test the Minkowski sum of the Newton polytopes of each subset of polynomials, searching for the most efficient minimal solver, in the presence of algebraically constrained coefficients.
• Two procedures (Sec. 3.4) to reduce the sparse resultant matrix size, leading to comparable or better solver speeds than those generated by many state-of-the-art Gröbner basis-based methods.
• A general method for automatic generation of efficient minimal solvers. The automatic generator is publicly available at (Bhayani et al, 2020a).
2. A study of the constraints (Sec. 4) to be satisfied so that the solvers based on our sparse resultant method and on the action matrix method are equivalent, i.e., they involve the eigenvalue decomposition of exactly the same matrix, and the steps performed for constructing that matrix can be interchanged.
This paper is an extension of our work (Bhayani et al, 2020b), where we proposed an extra polynomial sparse resultant method for generating efficient minimal solvers.

Theoretical background and related work
In this section, we summarize the basic concepts and notation used in the paper. This notation is based on the book by Cox et al (1998), to which we refer the reader for more details.
Our goal is to compute the solutions to a system of m polynomial equations (1) in n variables X = {x_1, . . . , x_n}. Let us denote the set of polynomials from (1) as F = {f_1, . . . , f_m}. The variables in X can be ordered and formed into a vector. W.l.o.g. let us assume that this vector has the form x = [x_1 · · · x_n]^⊤.
Then for a vector of non-negative integer exponents α = (α_1, . . . , α_n), we write x^α for the monomial x_1^{α_1} · · · x_n^{α_n}. We collect all monomials present in F and denote the set as mon(F). Let C[X] denote the set of all polynomials in unknowns X with coefficients in C.
The ideal I = ⟨f_1, . . . , f_m⟩ is the set of all polynomial combinations of the generators f_1, . . . , f_m. The set V of all solutions of the system (1) is called the affine variety. Each polynomial f ∈ I vanishes on the solutions of (1). Here we assume that the ideal I generates a zero-dimensional variety, i.e., the system (1) has a finite number of solutions (say r). Using the ideal I we can define the quotient ring A = C[X]/I, which is the set of equivalence classes over C[X] defined by the relation a ∼ b ⇐⇒ (a − b) ∈ I. If I has a zero-dimensional variety, then the quotient ring A is a finite-dimensional vector space over C (Cox et al, 1998). For an ideal I there exist special sets of generators called Gröbner bases, which have the nice property that the remainder after polynomial division is unique. Using a Gröbner basis we can define a linear basis for the quotient ring A.
Non-generic polynomial system: An important property of the polynomial systems arising in computer vision is the non-genericity of their coefficients. The genericity of a property of a system is formally defined in (Cox et al, 1998, p. 109), which we adapt here to define non-genericity (Def. 1).
Matrix form: In this paper we will need to express a given set of polynomials F via matrix multiplication. To achieve this we fix an order for the polynomials in F, and also for the monomials in B = mon(F). The matrix representation of F then has the form of Eq. (2), a coefficient matrix C([c_{i,α}]) multiplying the vector b = vec(B) of ordered monomials.
Monomial multiples of a polynomial set: We begin with a polynomial system F and the set of monomials in F, i.e., B = mon(F). Let B′ ⊃ B be another set of monomials.
Then an important step in our proposed method is to extend F to the largest possible set, say F′, such that mon(F′) ⊆ B′. This extension is done by multiplying each f_i ∈ F with monomials, i.e., we generate polynomials from the ideal I = ⟨f_1, . . . , f_m⟩. For this, we compute for each f_i ∈ F the set T_i = {x^α} of all monomials such that mon(x^α f_i) ⊆ B′. We will use the shorthand notation (4) to express such an operation, where T = {T_1, . . . , T_m}. Subsequently, in this operation we assume to have removed from B′ all monomials which are not present in the extended set F′, and by abuse of notation we denote the modified set again as B′. In other words, we will assume that B′ = mon(F′).
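To make this extension step concrete, the following sketch builds the monomial multiples and the linearized coefficient matrix for a toy system. The representation (polynomials as dicts mapping exponent tuples to coefficients) and the names `extend_system` and `coeff_matrix` are our own illustrative choices, not the paper's notation.

```python
import itertools
import numpy as np

def mul_monomial(f, alpha):
    """Multiply polynomial f by the monomial x^alpha (shift all exponents)."""
    return {tuple(a + b for a, b in zip(e, alpha)): c for e, c in f.items()}

def extend_system(F, B_prime):
    """For each f_i, collect every multiple x^alpha * f_i whose monomials
    all lie in the target set B_prime (the sets T_i of the text)."""
    B_prime = set(B_prime)
    n = len(next(iter(F[0])))
    # candidate multiplier exponents are bounded by the target monomial set
    max_deg = tuple(max(e[k] for e in B_prime) for k in range(n))
    F_ext = []
    for f in F:
        for alpha in itertools.product(*(range(d + 1) for d in max_deg)):
            g = mul_monomial(f, alpha)
            if set(g) <= B_prime:
                F_ext.append(g)
    return F_ext

def coeff_matrix(F_ext, B):
    """Linearize F_ext = 0 as C b = 0 w.r.t. the ordered monomial list B."""
    col = {m: j for j, m in enumerate(B)}
    C = np.zeros((len(F_ext), len(B)))
    for i, f in enumerate(F_ext):
        for e, c in f.items():
            C[i, col[e]] = c
    return C
```

For example, extending F = {x + y} to all monomials of degree at most two yields the three multiples x + y, x² + xy, and xy + y², and a 3 × 6 coefficient matrix.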

The action matrix method
One of the well-known SOTA methods for polynomial solving is the Gröbner basis-based action matrix method (Cox et al, 1998; Stetter, 1996; Sturmfels, 2002). It has recently been used to efficiently solve many minimal problems in computer vision (Kukelova et al, 2008; Kukelova, 2013; Stetter, 1996; Larsson et al, 2017a; Byröd et al, 2007, 2009; Larsson et al, 2018). It transforms the problem of finding the solutions to (1) into the problem of the eigendecomposition of a special matrix. We list the steps performed by an action matrix method in Appendix A, and note here that the algorithm can essentially be distilled into a sequence of three important steps, viz. constructing the set T_j of monomial multiples for each input polynomial f_j ∈ F, extending F via monomial multiplication to F′, and linearizing the resulting system as a matrix product C b. A G-J elimination of C is then used to extract the required action matrix, whose eigenvalues give us the roots of F.
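As a minimal illustration of the action matrix idea (not the paper's automated generator): for a single univariate polynomial f, the action matrix of multiplication by x in C[x]/⟨f⟩ w.r.t. the basis {1, x, …, x^(d−1)} is the classical companion matrix, and its eigenvalues are the roots of f. The helper name below is ours.

```python
import numpy as np

def action_matrix_univariate(coeffs):
    """Action matrix of multiplication by x in C[x]/<f>, where
    coeffs = [a_0, ..., a_d] gives f = a_0 + a_1 x + ... + a_d x^d (a_d != 0).
    Quotient-ring basis: {1, x, ..., x^(d-1)}."""
    a = np.asarray(coeffs, dtype=float)
    d = len(a) - 1
    M = np.zeros((d, d))
    M[1:, :-1] = np.eye(d - 1)      # x * x^k = x^(k+1) for k < d-1
    M[:, -1] = -a[:-1] / a[-1]      # x * x^(d-1) reduced modulo f
    return M

# f(x) = x^2 - 3x + 2 = (x-1)(x-2): the eigenvalues of the action matrix
# are exactly the roots 1 and 2
roots = np.linalg.eigvals(action_matrix_univariate([2.0, -3.0, 1.0]))
```

The multivariate case replaces the companion matrix by the action matrix M_f obtained after the extension and G-J elimination steps described above.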

Resultants
As an alternative to the action matrix method, we have the method of resultants for polynomial solving. Originally, resultants were used to determine whether a system of n + 1 polynomial equations in n unknowns has a common root or not. Let us have a system of polynomial equations as defined in (1) and assume that m = n + 1. Classically, a resultant is defined as an irreducible polynomial in the coefficients of the polynomials in F (see Eq. (2)), which vanishes exactly when the system has a non-trivial solution. The resultant is thus a polynomial Res([c_{i,α}]) with the coefficients c_{i,α} as variables. A more formal treatment of the theory of resultants can be found in (Cox et al, 1998, Chapter 3).

Polynomial solving using resultants
A step in a sparse resultant method is to expand the given system of polynomials F to a set of linearly independent polynomials. This is usually done by adding some monomial multiples of the original polynomials, i.e., using the operation (4).
The expanded set of polynomials, say F′, can be expressed in matrix form as in (5). Usually, C([c_{i,α}]) in (5) is a rectangular matrix (tall or wide).
The resultant-based method here requires the coefficient matrix C([c_{i,α}]) to be square and invertible for randomly assigned values of the coefficients c_{i,α} ∈ C, i.e., det C([c_{i,α}]) ≠ 0. A matrix with such properties is called the resultant matrix, and in this paper we will denote it as M([c_{i,α}]). If C([c_{i,α}]) is a square invertible matrix, we can rewrite (5) as

F′ = M([c_{i,α}]) b. (6)

Now, F = 0 implies F′ = 0, and it leads to

M([c_{i,α}]) b = 0. (7)

Thus the requirement for F = 0 to have common solutions is that (7) has a non-trivial solution b, which gives the following condition on the coefficients:

det M([c_{i,α}]) = 0. (8)

We call this the resultant constraint. It is a polynomial with the c_{i,α} as variables. From the definition of a resultant Res([c_{i,α}]) (Cox et al, 1998, Theorem 2.3) we have that Res([c_{i,α}]) is a polynomial with the c_{i,α} as variables and that it vanishes iff the system of equations F = 0 has common roots. This gives us a necessary condition for the existence of roots of the system F = 0: det M([c_{i,α}]) must vanish if the resultant vanishes, i.e.,

Res([c_{i,α}]) = 0 =⇒ det M([c_{i,α}]) = 0. (9)

This implies that, given a polynomial system F, the resultant constraint (8) is a non-trivial multiple of its resultant Res([c_{i,α}]).
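The classical univariate instance of such a resultant matrix is the Sylvester matrix of two polynomials, whose determinant equals their resultant up to sign. The sketch below (with our own helper name `sylvester`) shows the determinant vanishing exactly when the two polynomials share a root.

```python
import numpy as np

def sylvester(p, q):
    """Sylvester matrix of p (degree m) and q (degree n); coefficient
    lists are ordered from highest to lowest degree."""
    m, n = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):               # n shifted copies of p
        S[i, i:i + m + 1] = p
    for i in range(m):               # m shifted copies of q
        S[n + i, i:i + n + 1] = q
    return S

# p = (x-1)(x-2) and q = (x-1)(x+3) share the root x = 1,
# so det(S) -- their resultant -- vanishes
p = [1.0, -3.0, 2.0]
q = [1.0, 2.0, -3.0]
```

Replacing q by a polynomial with no root in common with p (e.g., x² + 1) makes the determinant non-zero, in line with the iff characterization of the resultant above.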
While resultants are defined for a system with one more polynomial than the number of variables, we can employ them for solving a system of n polynomials in n variables. One way to do this is to hide one of the variables in the coefficient field (in other words, consider it to be a constant); another way is to add an extra polynomial by introducing a new variable, and then hide this variable in the coefficient field. In both approaches, we end up with a system that has one more polynomial than the number of variables.

Hiding a variable
By hiding one variable, say x_n, in the coefficient field, we are left with n polynomials F in n − 1 variables. This gives us a way to use the concept of resultants and to compute Res([c_{i,α}], x_n), which now becomes a function of the c_{i,α} and x_n. In this case, (6) becomes

F′ = M([c_{i,α}], x_n) b, (10)

where the symbols have their usual meaning. For simplicity we will denote M([c_{i,α}], x_n) as M(x_n) in the rest of this paper. Its determinant det M(x_n) is a multiple of the resultant Res(x_n). This is known as a hidden variable resultant, and it is a polynomial in x_n whose roots are the x_n-coordinates of the solutions of the system of polynomial equations. For theoretical details and proofs we refer to (Cox et al, 1998, Chapter 7). Such a hidden variable approach has been used in the past to solve various minimal problems (Kukelova, 2013; Kukelova et al, 2012; Hartley and Li, 2012; Kasten et al, 2019).
This approach leads to computing the roots of the polynomial det M(x_n) = 0. However, computing the determinant of a polynomial matrix M(x_n) and then its roots may be slow and unstable. Therefore, the most common way to solve the original system of polynomial equations is to first transform the matrix equation M(x_n) b = 0 into a polynomial eigenvalue problem (PEP) (Cox et al, 2007), expressed as

(M_0 + x_n M_1 + · · · + x_n^l M_l) b = 0, (12)

where l is the degree of the matrix M(x_n) in the hidden variable x_n and M_0, . . . , M_l are matrices that depend only on the coefficients c_{i,α} of the original system of polynomials. The PEP (12) can be easily converted to a generalized eigenvalue problem (GEP), written as

A y = x_n B y, (13)

and solved using standard efficient eigenvalue algorithms (Kukelova et al, 2012). Basically, the eigenvalues give us the solutions to x_n, and the rest of the variables can be extracted from the corresponding eigenvectors y. Such a transformation to a GEP relaxes the original problem of finding the solutions to the input polynomial system, since the eigenvectors in general do not satisfy the monomial dependencies induced by the monomial vector b as well as by the monomial vector y. Moreover, this relaxation may also introduce extra parasitic (zero) eigenvalues, leading to slower polynomial solvers.
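A standard companion linearization that converts a PEP of the form (12) into a GEP of the form (13), with y stacking b, x_n b, …, x_n^(l−1) b, can be sketched as follows; the name `pep_to_gep` is our own, and the example is a 1 × 1 toy instance.

```python
import numpy as np

def pep_to_gep(Ms):
    """Companion linearization of (M_0 + x M_1 + ... + x^l M_l) b = 0
    into A y = x B y, with y = [b; x b; ...; x^(l-1) b]."""
    l, k = len(Ms) - 1, Ms[0].shape[0]
    A = np.zeros((l * k, l * k))
    B = np.eye(l * k)
    for i in range(l - 1):                  # identity blocks: y_{i+1} = x y_i
        A[i * k:(i + 1) * k, (i + 1) * k:(i + 2) * k] = np.eye(k)
    for i in range(l):                      # last block row encodes the PEP
        A[(l - 1) * k:, i * k:(i + 1) * k] = -Ms[i]
    B[(l - 1) * k:, (l - 1) * k:] = Ms[-1]
    return A, B

# Toy 1x1 PEP: M(x) = 2 - 3x + x^2 has roots x = 1 and x = 2
Ms = [np.array([[2.0]]), np.array([[-3.0]]), np.array([[1.0]])]
A, B = pep_to_gep(Ms)
# B happens to be invertible here; in general a generalized eigensolver
# (e.g., the QZ algorithm) should be used on the pencil (A, B)
vals = np.linalg.eigvals(np.linalg.solve(B, A))
```

The doubling of the problem size from k to l·k is exactly where the extra parasitic eigenvalues mentioned above can enter.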

Adding an extra polynomial
Alternatively, we can add a new polynomial

f_{m+1} = u_0 + u_1 x_1 + · · · + u_n x_n (14)

to F and compute the so-called u-resultant (see (Waerden et al, 1950) and (Cox et al, 1998, Chapter 3)) by hiding u_0, . . . , u_n. In general, random values are assigned to u_1, . . . , u_n, and u_0 is a new unknown. Just like the hidden variable method, this method hides the variable u_0 in the coefficient field and generates the resultant matrix M(u_0), i.e., in this case the matrix form of the extended system of polynomials (6) takes the form

M([c_{i,α}], u_0) b = 0, (15)

where u_0 is a new unknown variable and b is a vector of all monomials in this extended system. We will denote M([c_{i,α}], u_0) as M(u_0) in order to simplify the notation. Again, it holds that det M(u_0) is a multiple of the resultant Res(u_0). Thus, in order to compute the common zeros of F = 0, we need to solve det M(u_0) = 0, similar to the case of the hidden variable resultant. Instead of computing the determinant of a polynomial matrix, here we also solve it as a GEP, as described in Sec. 2.2.2. However, as mentioned for the hidden variable method, such a method introduces spurious solutions, arising from the linearization of the original system in (15), i.e., from not considering the monomial dependencies in b, as well as from transforming the PEP to a GEP and not considering the monomial dependencies in y in (13). Some of these spurious solutions (eigenvalues) can be removed by using the Schur complement, described in the section below, or by applying the method for removing zero eigenvalues, similar to (Kukelova, 2013; Bhayani et al, 2021).

Schur complement
One way to remove the spurious solutions introduced by the linearization of a hidden variable resultant matrix M(u_0) is to use the Schur complement of one of its submatrices. Here, we briefly review this method. Let us first consider some partition of the set of monomials B as B = B_1 ⊔ B_2, with b_1 = vec(B_1) and b_2 = vec(B_2). This imposes a column partition on M(u_0) in (15). Moreover, we can order the rows of M(u_0) such that its upper block is independent of u_0. Together, we obtain the following block partition of M(u_0):

M(u_0) = [ M_11 M_12 ; M_21(u_0) M_22(u_0) ]. (17)

Here, the upper block [ M_11 M_12 ] is independent of u_0. Thus we can write (15) as

[ M_11 M_12 ; M_21(u_0) M_22(u_0) ] [ b_1 ; b_2 ] = 0. (18)

The requirement for the existence of solutions of F = 0 is that the vector in (18) should vanish. We thus obtain the following two vector equations:

M_11 b_1 + M_12 b_2 = 0, (19)
M_21(u_0) b_1 + M_22(u_0) b_2 = 0. (20)

If we are able to partition M(u_0) such that M_12 is a square invertible matrix, we can eliminate b_2 from these two equations to obtain

X(u_0) b_1 = 0, with X(u_0) = M_21(u_0) − M_22(u_0) M_12^{−1} M_11. (21)

The matrix X(u_0) is the Schur complement of the block M_12 of M(u_0), which has the following property:

det M(u_0) = ± det(M_12) det(X(u_0)), (22)

where det(M_12) ≠ 0 by our assumption. Therefore, for a generic value of u_0, det(M(u_0)) = 0 ⇔ det(X(u_0)) = 0, which means that X(u_0) is a square matrix whose determinant is also a multiple of the resultant. However, X(u_0) corresponds to a smaller eigenvalue problem as compared to the sparse resultant matrix M(u_0). Both resultant-based methods, described in Secs. 2.2.2 and 2.2.3, were proposed for square generic systems (Emiris, 2012) via a mixed subdivision of polytopes. However, the polynomial systems studied here are usually sparse, overdetermined, and consist of polynomials with non-generic coefficients (see Def. 1). The sparsity of the systems can be exploited to obtain more compact resultants using specialized algorithms. Such resultants are commonly referred to as sparse resultants.
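The elimination of b_2 and the determinant property can be checked numerically on random blocks; the block sizes and the helper name `schur_reduce` below are our own illustrative choices.

```python
import numpy as np

def schur_reduce(M11, M12, M21, M22):
    """Eliminate b2 from  M11 b1 + M12 b2 = 0  and  M21 b1 + M22 b2 = 0
    (M12 square invertible): the reduced system is X b1 = 0 with
    X = M21 - M22 M12^{-1} M11 (the Schur complement of M12)."""
    return M21 - M22 @ np.linalg.solve(M12, M11)

# Random blocks with |b1| = 3, |b2| = 2 (M12 is 2x2 and generically invertible)
rng = np.random.default_rng(0)
p, q = 3, 2
M11, M12 = rng.standard_normal((q, p)), rng.standard_normal((q, q))
M21, M22 = rng.standard_normal((p, p)), rng.standard_normal((p, q))
M = np.block([[M11, M12], [M21, M22]])
X = schur_reduce(M11, M12, M21, M22)
# |det M| = |det M12| * |det X|, so det M = 0 iff det X = 0
```

The check confirms that the zeros of det X(u_0) are exactly the zeros of det M(u_0), while X is the smaller eigenvalue problem.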

Polyhedral geometry
Sparse resultants are studied via the theory of polytopes (Cox et al, 1998, Chapter 7).Therefore, we define the important terms and notations related to polytopes, which we will later use in the text.
The Newton polytope NP(f) of a polynomial f is defined as the convex hull of the exponent vectors of all the monomials occurring in the polynomial (this set of exponent vectors is also known as the support of the polynomial). Therefore we have NP(f_i) = Conv(A_i), where A_i is the set of all integer vectors that are exponents of monomials with non-zero coefficients in f_i. The Minkowski sum of any two convex polytopes P_1, P_2 is defined as P_1 + P_2 = {p_1 + p_2 | p_1 ∈ P_1, p_2 ∈ P_2}. We demonstrate the concept of a Newton polytope using a simple example.
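The definitions above can be sketched in a few lines; the polynomials f_1, f_2 below and the helper names `support` and `minkowski` are our own (they are not the polynomials of Example 1). Note that the Minkowski sum of the convex hulls of two point sets equals the convex hull of their pointwise sums.

```python
def support(f):
    """Exponent vectors of the monomials of f (dict: exponent tuple -> coeff)."""
    return {e for e, c in f.items() if c != 0}

def minkowski(A1, A2):
    """Pointwise Minkowski sum of two sets of exponent vectors;
    Conv(A1) + Conv(A2) = Conv(A1 + A2)."""
    return {tuple(a + b for a, b in zip(e1, e2)) for e1 in A1 for e2 in A2}

# f1 = x^2 + y, f2 = x*y + 1 (illustrative polynomials of our own choosing)
f1 = {(2, 0): 1, (0, 1): 1}
f2 = {(1, 1): 1, (0, 0): 1}
Q = minkowski(support(f1), support(f2))   # vertices/points of NP(f1) + NP(f2)
```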
Example 1 Let us consider a system of two polynomials f_1, f_2. The Newton polytopes P_1 = NP(f_1) and P_2 = NP(f_2), as well as the Minkowski sum Q = P_1 + P_2, are depicted in Fig. 1.

Sparse resultants
Consider the SOTA methods in (Canny and Emiris, 2000; Emiris and Canny, 1993; Emiris, 2012) for computing the sparse resultant of a well-determined generic system F. The main idea in all such methods is to compute the Minkowski sum of the Newton polytopes of all the polynomials f_i ∈ F and divide its interior into cells. Each cell determines a multiple of one of the input polynomials f_i ∈ F with a monomial. All such monomial multiples collectively lead to an extended polynomial system F′. The extended polynomial system, via linearization into a matrix product, then leads to a sparse resultant matrix M(x_n), if using the hidden-variable technique of Sec. 2.2.2, or M(u_0), if using the u-resultant of the general form of Sec. 2.2.3. As such, the resulting solvers are usually quite large and not very efficient. Moreover, if F has non-generic coefficients (see Def. 1), these methods may fail, as the algebraic constraints on the coefficients lead to algebraic constraints on the elements of the sparse resultant matrices M(x_n) or M(u_0), and hence these matrices may not satisfy the necessary rank conditions. For such systems, Heikkilä (2017) recently proposed an iterative approach based on the hidden variable resultant to test and extract M(x_n) in (6). Thereafter, it transforms (6) into a GEP (13) and solves for eigenvalues and eigenvectors to compute the roots. Our proposed approach extends this iterative scheme for generating a sparse resultant matrix M(u_0), specifically for the extra polynomial approach of Sec. 2.2.3.

Proposed extra polynomial resultant method
In this paper, we propose an iterative approach based on (Heikkilä, 2017) and apply it to a modified version of the u-resultant method described in Sec. 2.2.3. Specifically, we propose the following modifications to the u-resultant method.
1. Instead of the general form (14) of the extra polynomial in the u-resultant based method, we propose to use the following special form of the extra polynomial:

f_{m+1} = x_k − u_0, (27)

where u_0 is a new variable and x_k ∈ X is one of the input variables. In general, we can select any x_k ∈ X. However, since in practice selecting different x_k's leads to solvers of different sizes and with different numerical stability, we test all x_k ∈ X when generating the final solver.
2. For non-generic and overdetermined polynomial systems F, we avoid computing the Minkowski sum of all the Newton polytopes NP(f), ∀f ∈ F_a, as proposed in the methods in (Canny and Emiris, 2000; Emiris and Canny, 1993; Emiris, 2012). Instead, we iterate through each subset F_sub ⊂ F_a and compute the Minkowski sum of NP(f), ∀f ∈ F_sub. Instead of dividing the Minkowski sum into cells, we simply use its lattice interior to determine the monomials B in the extended polynomial system F′, i.e., B = mon(F′).
3. We exploit the form of the extra polynomial (27) and propose a block partition of M(u_0) (15), which facilitates its decomposition, using a Schur complement, directly into a regular eigenvalue problem. This regular eigenvalue problem, compared to the GEP that arises for the general u-resultant polynomial (14), leads to fewer spurious eigenvalues and hence a faster solver.
Note that our method of iteratively testing various polynomial subsets has been quite effective in generating efficient solvers, as demonstrated on many minimal problems (see Sec. 5). The generated solvers are comparable in efficiency and speed with those generated by the SOTA action matrix-based methods (Larsson et al, 2017a, 2018; Martyushev et al, 2022).
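The outer loops of this offline search (over polynomial subsets and displacement vectors; the loop over x_k is analogous) can be enumerated as in the following sketch; `search_space` is our own illustrative name, and indices stand in for the polynomials of F_a.

```python
import itertools

def search_space(m_plus_1, n):
    """Enumerate (subset of F_a, displacement delta) candidates tried by the
    offline search: all non-empty subsets (here largest first, one possible
    order) crossed with all delta in {-1e-3, 0, 1e-3}^n."""
    subsets = [s for r in range(m_plus_1, 0, -1)
               for s in itertools.combinations(range(m_plus_1), r)]
    deltas = list(itertools.product((-1e-3, 0.0, 1e-3), repeat=n))
    return [(s, d) for s in subsets for d in deltas]
```

For an augmented system of three polynomials in two variables this gives 7 subsets × 9 displacements = 63 candidate configurations, each of which is then subjected to the favourable-set test of Sec. 3.2.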

Method outline
In the following, we first go through the important steps performed by our method.
1. Let us consider a system of m polynomial equations F = 0 (1) in n variables X = {x_1, . . . , x_n}. For all variables x_k ∈ X, we perform the steps 2-5 in the offline phase.

2. [Offline] We fix the form of the extra polynomial to f_{m+1} = x_k − u_0 (27) and augment F with f_{m+1}, obtaining F_a (28). We hide the new variable u_0 in the coefficient field, i.e., we view F_a as a system in the variables X whose coefficients depend on u_0.

3. [Offline] We execute steps 3(a)-3(c) for each subset of polynomials F_sub ⊂ F_a and for every variable x_k ∈ X.
(a) From the set F_sub, we attempt to construct a favourable monomial set B, using a polytope-based method described in Sec. 3.2.
(b) We extend the polynomial system F_a using the computed monomial set B, via the operation (4), where T = {T_1, . . . , T_{m+1}}.
(c) The set of equations F′_a = 0 can be expressed in the form of a matrix equation C(u_0) b = 0, where b = vec(B). The output of this step, described in detail in Sec. 3.2, is all favourable monomial sets B and the corresponding coefficient matrices C(u_0).
4. [Offline] For each favourable monomial set B and the corresponding coefficient matrix C(u_0) we perform the following:
(a) We partition B = B_1 ⊔ B_2 in two different ways; note that B_2 = B \ B_1.
(b) Based on the two set-partitions of B, we attempt to partition the matrix C(u_0) in the two ways (32) and (33), such that the matrix A_12 has full column rank. The output of this step is the coefficient matrix C(u_0) for which a partitioning as in (32) or (33), with a full-column-rank matrix A_12, is possible, and which corresponds to the smallest set B_1. If we have more than one such choice of C(u_0), we select the smallest matrix C(u_0). This step is described in more detail in Sec. 3.3.

5. [Offline] In this step we aim to reduce the size of the matrix C(u_0) selected in the previous step.
(a) We first try to remove a combination of rows and columns from C(u_0), and the corresponding monomials from the favourable monomial set B, such that the resulting monomial set is still a favourable monomial set (Sec. 3.2) and the coefficient matrix C(u_0) can still be partitioned as in (32) or (33).
(b) We next remove the extra rows from C(u_0) to obtain a sparse resultant matrix M(u_0) (6), while still respecting its block partition as in (32) or (33). This step is described in Sec. 3.4.
6. [Online] In the final online solver, we fill the precomputed sparse resultant matrix M(u_0) with the coefficients coming from the input data/measurements. Then we compute the Schur complement of a block of M(u_0), as described in Sec. 2.2.4, and formulate it as an eigenvalue problem of the matrix X (49) (or (51)). Finally, the eigendecomposition of the matrix X gives us the solutions to the input system of polynomial equations F, i.e., to the given instance of the minimal problem.
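The steps above can be sketched for the case where the reduced Schur-complement matrix is affine in u_0, i.e., X(u_0) = X_0 + u_0 X_1 with X_1 invertible; this is one way a regular eigenvalue problem can arise, and the paper's construction in Sec. 3.3 fixes the exact form, so the helper below is only an illustrative assumption.

```python
import numpy as np

def solve_from_schur(X0, X1):
    """If X(u0) = X0 + u0 X1 with X1 invertible, then det X(u0) = 0 is
    equivalent to the regular eigenvalue problem (-X1^{-1} X0) v = u0 v.
    (A sketch under our own affine-form assumption.)"""
    return np.linalg.eigvals(-np.linalg.solve(X1, X0))

# Toy instance: X(u0) = diag(u0 - 1, u0 - 2), so u0 in {1, 2}
X0 = np.array([[-1.0, 0.0], [0.0, -2.0]])
X1 = np.eye(2)
u_values = solve_from_schur(X0, X1)
```

With f_{m+1} = x_k − u_0, the eigenvalues recovered this way are the x_k-coordinates of the solutions, and the remaining coordinates come from the associated eigenvectors.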

Computing a favourable monomial set
Let the extra polynomial f_{m+1} have the form f_{m+1} = x_k − u_0 (27) for x_k ∈ X, and let us assume a subset F_sub ⊂ F_a of the augmented system F_a = F ⊔ {f_{m+1}} (28).
In step 3(a) of our algorithm (Sec. 3.1) we attempt to generate so-called favourable monomial sets. Our method for constructing these sets is based on the polytope-based algorithm proposed in (Heikkilä, 2017), where such sets were generated for the original system F and its subsets. Here we describe our method, which constructs such favourable sets for subsets F_sub of the augmented system F_a (28). In this subsection, we use the basic terminology and notation described in Sec. 2.2.5.
In our method, we begin by computing the support A_j = supp(f_j) and the Newton polytope NP(f_j) = Conv(A_j) for each polynomial f_j ∈ F_a. We also compute the unit simplex NP_0 ⊂ Z^n and the Minkowski sum Q of these polytopes. Let us construct a small displacement vector δ ∈ R^n, such that each of its elements is assigned one of the three values {−10^{−3}, 0, 10^{−3}}; in other words, δ ∈ {−10^{−3}, 0, 10^{−3}}^n (34). Then, using this value of δ, we compute the set of integer points inside Q + δ, from which we obtain a set of monomials, i.e., a potential candidate for a favourable monomial set. We demonstrate this step on a system of two polynomials in Ex. 2.
Example 2 Let us continue with Example 1, where we computed the Newton polytopes P_1 = NP(f_1) and P_2 = NP(f_2) as well as their Minkowski sum Q = P_1 + P_2. Let us assume the displacement vector to be δ = [−0.1, −0.1]. In Fig. 2 we depict the integer points in the interior of Q + δ, i.e., the points in the set Z^2 ∩ (Q + δ). This set of points gives us a favourable set of monomials B.
In the next step (see step 3(b) in Sec. 3.1), we extend the input polynomial system F_a using the set of monomials B, via the operation (4), where T = {T_1, . . . , T_{m+1}}. Each T_i denotes the set of monomials to be multiplied with the polynomial f_i ∈ F_a. The extended set of polynomials F′_a can be written in matrix form as C([c_{i,α}], u_0) b = 0, where b is a vector form of B w.r.t. some ordering of monomials. In the rest of the paper we will denote the coefficient matrix C([c_{i,α}], u_0) as C(u_0) for the sake of simplicity.
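The computation of the integer points inside a shifted polytope can be sketched in 2-D as follows (a brute-force scan over the bounding box with a convexity test; the helper name `lattice_interior` and the square example polygon are ours).

```python
import math

def lattice_interior(vertices, delta):
    """Integer points inside the shifted convex polygon Q + delta.
    2-D sketch; vertices must be given in counter-clockwise order."""
    verts = [(x + delta[0], y + delta[1]) for x, y in vertices]

    def inside(p):
        # p lies inside a CCW convex polygon iff it is to the left of
        # (or on) every directed edge
        m = len(verts)
        for i in range(m):
            (x1, y1), (x2, y2) = verts[i], verts[(i + 1) % m]
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) < 0:
                return False
        return True

    xs = [v[0] for v in verts]
    ys = [v[1] for v in verts]
    return [(x, y)
            for x in range(math.ceil(min(xs)), math.floor(max(xs)) + 1)
            for y in range(math.ceil(min(ys)), math.floor(max(ys)) + 1)
            if inside((x, y))]

# Square [0,2]^2 shifted by delta = (-0.1, -0.1): the integer points inside
# are (0,0), (0,1), (1,0), (1,1); boundary points of Q itself drop out,
# which is the purpose of the small displacement
pts = lattice_interior([(0, 0), (2, 0), (2, 2), (0, 2)], (-0.1, -0.1))
```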
Our approach then evaluates three conditions, (38)–(40), for a random value of u_0 ∈ C. The first condition (38) ensures that the matrix C(u_0) has at least as many rows as columns. The second condition (39) ensures that there is at least one row corresponding to every polynomial f_i ∈ F_a. The third condition (40) ensures that C(u_0) has full column rank, so that we can extract a sparse resultant matrix from it. If these conditions are satisfied, we consider the set of monomials B to be a favourable monomial set.
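The checks (38)–(40) amount to simple shape and rank tests. Below is a minimal pure-Python sketch; the matrix and the row-to-polynomial bookkeeping are synthetic stand-ins, and the names `rank` and `is_favourable` are ours, not from the actual generator:

```python
import random

def rank(M, tol=1e-9):
    """Numerical rank via Gaussian elimination with partial pivoting."""
    A = [row[:] for row in M]
    rows, cols = len(A), len(A[0])
    r = 0
    for c in range(cols):
        # find the largest pivot in column c at or below row r
        piv = max(range(r, rows), key=lambda i: abs(A[i][c]), default=None)
        if piv is None or abs(A[piv][c]) < tol:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, rows):
            f = A[i][c] / A[r][c]
            for j in range(c, cols):
                A[i][j] -= f * A[r][j]
        r += 1
    return r

def is_favourable(C, row_owners, m_plus_1):
    """Check conditions (38)-(40) on a numeric instance C of C(u0):
    p >= eps, every polynomial owns at least one row, full column rank."""
    p, eps = len(C), len(C[0])
    cond_38 = p >= eps
    cond_39 = all(i in row_owners for i in range(1, m_plus_1 + 1))
    cond_40 = rank(C) == eps
    return cond_38 and cond_39 and cond_40

# A toy 4x3 instance: polynomials f1, f2, f3 each contribute some row.
random.seed(0)
C = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
owners = {1, 2, 3}   # indices of polynomials contributing at least one row
print(is_favourable(C, owners, 3))
```

A generic random matrix passes the rank test; in the generator the test is instead run on C(u_0) instantiated with random coefficients and a random u_0.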
Note that we repeat this procedure for all variables x_k ∈ X, for each subset F_sub ⊂ F_a, and for each value of the displacement vector δ (34). The outputs of this step are all the generated favourable monomial sets B and the corresponding coefficient matrices C(u_0).
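The polytope computations of step 3(a) can be sketched in pure Python. The two supports below are hypothetical stand-ins for Example 1 (not the paper's actual polynomials); the sketch forms the Minkowski sum of the supports, takes its convex hull Q, shifts it by δ = (−0.1, −0.1) as in Example 2, and enumerates the lattice points of Z^2 ∩ (Q + δ):

```python
from itertools import product

# Hypothetical supports of two bivariate polynomials f1, f2
# (stand-ins for Example 1; the real supports may differ).
A1 = [(0, 0), (1, 0), (0, 1)]   # supp(f1)
A2 = [(0, 0), (2, 0), (0, 2)]   # supp(f2)

def minkowski_sum(P, Q):
    """Pointwise Minkowski sum of two planar point sets."""
    return {(p[0] + q[0], p[1] + q[1]) for p in P for q in Q}

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(points)
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = chain(pts), chain(pts[::-1])
    return lower[:-1] + upper[:-1]

def inside(pt, hull):
    """True iff pt lies inside (or on) the CCW convex polygon `hull`."""
    n = len(hull)
    return all(cross(hull[i], hull[(i + 1) % n], pt) >= 0 for i in range(n))

Q = convex_hull(minkowski_sum(A1, A2))
delta = (-0.1, -0.1)
Q_shifted = [(x + delta[0], y + delta[1]) for (x, y) in Q]

# Candidate favourable monomial set: exponent vectors in Z^2 ∩ (Q + delta)
B = sorted(p for p in product(range(-1, 5), repeat=2) if inside(p, Q_shifted))
print(B)
```

For these stand-in supports, Q is the triangle with vertices (0, 0), (3, 0), (0, 3); the shift by δ trims the boundary lattice points on the line x + y = 3, leaving six exponent vectors, i.e., the candidate monomial set {1, y, y^2, x, xy, x^2}.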

Block partition of C(u_0)
In step 4 of our method (Sec. 3.1), we iterate through all the favourable monomial sets B and coefficient matrices C(u_0) output by the previous step.
For each favourable monomial set B, we know the corresponding sets of monomial multiples T_i for each polynomial f_i ∈ F_a and the set T = {T_1, ..., T_m, T_{m+1}}. We have also computed the corresponding coefficient matrix C(u_0). Assume the extra polynomial (27) to have been fixed as f_{m+1} = x_k − u_0, for x_k ∈ X, while computing this favourable monomial set.
Since B was constructed so that it satisfies (40), C(u_0) has full column rank. Let ε = |B| and p = Σ_{j=1}^{m+1} |T_j|. As C(u_0) satisfies condition (38), we have p ≥ ε. Therefore, we can remove the p − ε extra rows from C(u_0), leading to a square invertible matrix. The row removal algorithm is described in B.3. The square matrix so obtained is a sparse resultant matrix M(u_0), and it satisfies the resultant matrix constraint (7) if the set of equations F_a = 0 has solutions.
Instead of directly solving the resultant constraint (8) or converting (7) to a GEP, we exploit the structure of the extra polynomial f_{m+1} (27) and propose a special ordering of the rows and columns of M(u_0). This facilitates a block partition of M(u_0) such that the Schur complement of one of its block submatrices reduces the matrix constraint (7) to a regular eigenvalue problem.
Rather than block partitioning the sparse resultant matrix M(u_0) directly, we first fix a block partition of the coefficient matrix C(u_0) and then remove its rows, while respecting this partition, to obtain the sparse resultant matrix M(u_0). Note that we obtain two different block partitions of b and C(u_0) in (37), based on the selected set-partition of the favourable monomial set B = B_1 ⊔ B_2, given in (41) or (42), where B_2 = B \ B_1. We consider both of these partitions in Prop. 3.1. Thereafter, we describe our method of removing rows from C(u_0) such that we can compute the Schur complement of a block of M(u_0).
Proposition 3.1 Consider a block partition of C(u_0) and b in (37), as in (43). We can achieve this in two different ways, based on our choice of the set-partition of B in (41) or (42). In either case, if C_12 has full column rank, then the resultant matrix constraint (7) can be converted to an eigenvalue problem, of the form X b_1 = u_0 b_1 if we set-partition B as in (41), and of the form X b_1 = (1/u_0) b_1 if we set-partition B as in (42). The matrix X is square and its entries are functions of the coefficients of F.
Proof: Let us first consider the set-partition of B as in (41). Then the special form of the extra polynomial (27) implies (44). We next order the monomials in B such that b can be partitioned as b = [vec(B_1) vec(B_2)]^T = [b_1 b_2]^T. This induces a column partition of C(u_0). Moreover, we can row partition C(u_0) by indexing the rows in its lower block by the monomial multiples of f_{m+1}(x'). All such multiples are linear in u_0, i.e., these multiples have the form x^{α_j}(x_k − u_0), ∀x^{α_j} ∈ T_{m+1}. The upper block is row-indexed by the monomial multiples of f_1(x'), ..., f_m(x'). Such a row and column partition of C(u_0) gives us a block partition as in (43). As [C_11 C_12] contains polynomials independent of u_0 and [C_21 C_22] contains polynomials of the form x^{α_j}(x_k − u_0), we obtain (45). We can order the monomials so that vec(T_{m+1}) = b_1. Now, the block partition of C(u_0) implies that C_21 is column-indexed by b_1 and row-indexed by T_{m+1}. As [C_21 C_22] has rows of the form x^{α_j}(x_k − u_0), x^{α_j} ∈ T_{m+1} implies x^{α_j} ∈ B_1. This gives us B_21 = −I, where I is an identity matrix of size |B_1| × |B_1|. This also means that A_21 is a square matrix of the same size as B_21. Thus we have (46), where C(u_0) is a p × ε matrix. If C(u_0) is a tall matrix, so is A_12. Therefore, we can eliminate the extra rows from the upper block [A_11 A_12] so that we obtain a square invertible matrix Â_12 while preserving the above-mentioned structure. Such a row removal also gives us the sparse resultant matrix M(u_0), which satisfies the resultant matrix constraint (7). We will describe our method for removing the extra rows in Sec. 3.4.
Let us now use the Schur complement technique of Sec. 2.2.4 to reduce the size of the eigenvalue problem. From (47), we obtain the pair of equations (48). Eliminating b_2 from this pair of equations, we obtain the regular eigenvalue problem (49). The matrix X is the Schur complement of Â_12 w.r.t. Ĉ_0.
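The reduction above can be traced numerically on a toy instance. The 2 × 2 blocks below are synthetic (they do not come from any minimal problem), and the block convention M(u_0) = [[Â_11, Â_12], [A_21 − u_0 I, A_22]] with Schur complement X = A_21 − A_22 Â_12^{-1} Â_11 is our illustrative reading of (47)–(49). The sketch computes X, extracts one eigenpair, and verifies that the lower block rows of M(u_0) b = 0 reproduce u_0 b_1:

```python
from math import sqrt

# Synthetic 2x2 blocks of a square matrix
#   M(u0) = [[A11, A12], [A21 - u0*I, A22]]   (illustrative convention)
A11 = [[1.0, 2.0], [0.0, 1.0]]
A12 = [[2.0, 1.0], [1.0, 1.0]]   # the block that must be invertible
A21 = [[3.0, 1.0], [0.0, 2.0]]
A22 = [[1.0, 0.0], [2.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(len(v))) for i in range(len(A))]

def inv2(A):
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# Schur complement: X = A21 - A22 * A12^{-1} * A11
W = matmul(inv2(A12), A11)
X = sub(A21, matmul(A22, W))

# One eigenpair (u0, b1) of the 2x2 matrix X via its characteristic polynomial.
tr = X[0][0] + X[1][1]
det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
u0 = (tr + sqrt(tr * tr - 4.0 * det)) / 2.0
b1 = ([X[0][1], u0 - X[0][0]] if abs(X[0][1]) + abs(u0 - X[0][0]) > 1e-12
      else [u0 - X[1][1], X[1][0]])

# With b2 = -A12^{-1} A11 b1 (the upper block rows), the lower block rows
# of M(u0) b = 0 reduce to A21 b1 + A22 b2 = u0 b1.
b2 = [-x for x in matvec(W, b1)]
lower = [l + a for l, a in zip(matvec(A21, b1), matvec(A22, b2))]
print(u0, lower, [u0 * x for x in b1])
```

In an actual solver the eigendecomposition is of course done by a numerical library on the full-size X; the 2 × 2 characteristic polynomial here only keeps the sketch dependency-free.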
If we select the alternative set-partition of B, as in (42), we obtain a different block partition of b and C(u_0). Specifically, in (46) we have A_21 = I and A_22 = 0. By assumption, A_12 has full column rank. Therefore, we can remove the extra rows from C(u_0) in (47) and obtain (50). Substituting this value of M(u_0) in (7) gives a pair of equations; eliminating b_2 from them, we obtain the alternative eigenvalue formulation (51). The matrix X here is an alternative representation, to the one in (49), of the Schur complement.
Note that this proposition allows us to test the existence of the Schur complement X (49) (or (51)) for a given favourable monomial set B and the corresponding coefficient matrix C(u_0). This condition is tested for each B and its corresponding C(u_0), for both choices of set-partition, (41) and (42). Out of those that succeed, we select the coefficient matrix C(u_0) that leads to the smallest possible Schur complement X. If we have more than one such choice, we choose the smallest coefficient matrix C(u_0). Note that our iterative approach is crucial in increasing the chances of obtaining a minimal solver, even when the polynomials in F have non-generic coefficients.

Row-column removal
The next step in our method is to attempt to reduce the size of the matrix C(u_0) selected in the previous step. For this, we employ a method inspired by (Kukelova, 2013). We select columns of C(u_0) one by one, in a random order, and test them for removal. For each such column, we select the rows (say r_1, ..., r_s) that contain non-zero entries in that column, and also consider all columns (say c_1, ..., c_l) that have non-zero entries in the rows r_1, ..., r_s. Then we can remove these s rows and l columns from C(u_0) only if the following conditions hold for the resulting reduced matrix C_red(u_0).
1. After eliminating the monomials from T, there is at least one monomial left in each T_i.
2. If C(u_0) is of size p × ε, the reduced matrix C_red(u_0) is of size (p − s) × (ε − l). We require p − s ≥ ε − l and rank(C_red(u_0)) = ε − l.
3. C_red(u_0) can still be block partitioned and decomposed as in Proposition 3.1.

We repeat the above process until there are no more columns that can be removed. Note that the last condition is important, as it ensures that at each stage the reduced matrix can still be partitioned and decomposed into an eigenvalue formulation (49) (or alternatively (51)). Now, by abuse of notation, let us denote by C(u_0) the reduced matrix, and by B and T the reduced set of monomials and the set of monomial multiples, respectively.
If C(u_0) still has more rows than columns, we transform it into a square matrix by removing extra rows (say q_1, ..., q_j) and the monomials from T indexing these rows, chosen in such a way that the three conditions mentioned above are still satisfied. Note that it is always possible to choose such rows. Moreover, our proposed approach first tries to remove as many rows as possible from the lower block, indexed by T_{m+1}. This reduces |T_{m+1}| (= |B_1|) as much as possible and ensures that the matrix A_21, and hence the eigenvalue problem on X (49) (or (51)), is as small as possible. If more rows still need to be removed, the rest are chosen randomly from the upper block, indexed by {T_1, ..., T_m}. Algorithms for this matrix reduction step are provided in Appendix B.
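One iteration of the column-removal test can be sketched as follows. The matrix is a synthetic numeric stand-in for an instance of C(u_0), and only the shape and rank checks of condition 2 are implemented; the monomial bookkeeping of conditions 1 and 3 is omitted in this sketch:

```python
def rank(M, tol=1e-9):
    """Numerical rank via Gaussian elimination."""
    A = [row[:] for row in M]
    r = 0
    for c in range(len(A[0])):
        piv = next((i for i in range(r, len(A)) if abs(A[i][c]) > tol), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, len(A)):
            f = A[i][c] / A[r][c]
            A[i] = [x - f * y for x, y in zip(A[i], A[r])]
        r += 1
    return r

def try_remove_column(C, j, tol=1e-9):
    """Return the reduced matrix if column j can be removed, else None."""
    rows = {i for i in range(len(C)) if abs(C[i][j]) > tol}          # r1..rs
    cols = {k for i in rows for k in range(len(C[0]))
            if abs(C[i][k]) > tol}                                   # c1..cl
    red = [[C[i][k] for k in range(len(C[0])) if k not in cols]
           for i in range(len(C)) if i not in rows]
    if not red or not red[0]:
        return None
    p_s, e_l = len(red), len(red[0])
    # condition 2: still at least as many rows as columns, full column rank
    return red if p_s >= e_l and rank(red) == e_l else None

# Column 2 is coupled only to rows 2 and 3, which touch no other column,
# so the bottom-right block can be dropped together with that column:
C = [[1.0, 1.0, 0.0],
     [1.0, 2.0, 0.0],
     [0.0, 0.0, 3.0],
     [0.0, 0.0, 4.0]]
print(try_remove_column(C, 2))   # -> [[1.0, 1.0], [1.0, 2.0]]
```

In the generator, this test is wrapped in a loop over randomly ordered columns and is rerun until no further column can be removed.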
The sparse resultant matrix M(u_0) is constructed offline through these three steps. In the online stage, we fill in the coefficients of M(u_0) from the input measurements and compute the Schur complement X (49) (or (51)). Then we extract the solutions to x_1, ..., x_n by computing the eigenvectors of X. The speed of the solver depends on the size of b_1 (= |B_1|) as well as the size of Â_12, while the accuracy of the solver largely depends on the condition number as well as the size of the matrix to be inverted, i.e., Â_12.
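Reading off the solutions from an eigenvector can be sketched as follows, assuming (hypothetically) that b_1 is ordered as [1, x_1, x_2]; in the generated solvers this ordering is fixed once in the offline stage:

```python
# Each eigenvector of X is a numeric instance of the monomial vector b1,
# so variable values are read off as ratios of its entries.
def extract_solution(v, order):
    """Normalize eigenvector v by the coordinate of the monomial 1 and
    read off the entries corresponding to x1 and x2."""
    one = order.index((0, 0))                 # position of the monomial 1
    return [v[order.index(e)] / v[one] for e in [(1, 0), (0, 1)]]

order = [(0, 0), (1, 0), (0, 1)]              # exponent vectors: 1, x1, x2
v = [0.5, 1.5, -1.0]                          # synthetic eigenvector of X
print(extract_solution(v, order))             # -> [3.0, -2.0]
```

The normalization by the coordinate of the monomial 1 removes the arbitrary scale of the eigenvector; if 1 is not in B_1, ratios of two monomials differing by one variable serve the same purpose.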

Action matrix vs sparse resultant
In this section, we study the connection between a solver generated using an action matrix method (Kukelova et al, 2008; Kukelova, 2013; Stetter, 1996; Larsson et al, 2017a; Byröd et al, 2007, 2009; Larsson et al, 2018) and a solver based on the sparse resultant method proposed in Sec. 3. Observe that our sparse resultant method exploits only the structure of the input polynomials, via the geometry of their Newton polytopes, whereas the state-of-the-art (SOTA) action matrix methods investigate the input polynomial system algebraically, via a Gröbner basis. Though seemingly different, the generated solvers exhibit the same properties, and hence our objective in this section is to shed some light on the similarities. Some of the symbols used in the two methods are the same. To distinguish them, in this section we use the prefixes a and r to denote the symbols used in the action matrix method of Sec. 2.1 and in the sparse resultant method of Sec. 3.1, respectively. Let us assume that both methods are used to generate a solver for a system F = 0 (1) of m polynomial equations in n variables with r solutions. An action matrix method results in a solver that performs G-J elimination (or alternatively LU/QR decomposition) of an elimination template a C (A8), followed by the eigendecomposition of an action matrix M_f (A3) of size r × r. The proposed sparse resultant method, on the other hand, leads to a solver which computes a Schur complement X (49) (or (51)) of size N × N from the sparse resultant matrix M(u_0), followed by its eigendecomposition. In general, N ≥ r.
While the final solvers generated by these two methods are in general different, i.e., they perform different operations, they demonstrate some similar properties: e.g., both solvers extract solutions from eigenvectors of some matrix, and both involve computing a matrix inverse (G-J elimination can be replaced by a matrix inverse). This motivates us to study the similarities of these solvers. Let us first define the equivalence of a solver generated using an action matrix method (AM) and a solver generated using the sparse resultant method from Sec. 3 (SRes).
Definition 2 For a given system of polynomial equations F = 0 (1), an action matrix-based solver and a sparse resultant-based solver are defined to be equivalent iff the following two conditions are satisfied:
1. The action matrix M_f (A3) is the same as the Schur complement X (49) (or (51)), i.e., M_f = X.
2. The step of G-J elimination of the template matrix a Ĉ in the online stage (step 7) of the action matrix method (Sec. 2.1) can be replaced with the matrix product Â_12^{-1} Â_11 used to compute the Schur complement in the online stage (step 6) of our sparse resultant method (Sec. 3.1), or vice versa. The size of the matrix a Ĉ is the same as the size of the upper block [Â_11 Â_12] of the sparse resultant matrix M(u_0) (48) (or (50)).

Our goal now is to define the conditions under which the final online solvers generated by these two methods are equivalent.
We next demonstrate that if we have an action matrix-based solver (AM) for a given system F = 0 (1), we can modify the steps performed by the sparse resultant method proposed in Sec. 3.1 such that the two solvers are equivalent according to Def. 2.
Note: the size of the action matrix in the action matrix-based methods cited above is equal to the number of solutions to the input system. The only exceptions are the methods proposed in (Byröd et al, 2009) and (Martyushev et al, 2023), which can result in larger action matrices.

4.1 X (SRes) from M_f (AM)
Let us assume that we have generated a solver for a system of polynomial equations F = 0 (1) using the action matrix method described in Sec. 2.1. The action polynomial is assumed to be f = x_1, where x_1 ∈ X.
We first propose the following changes to the steps performed in the offline stage of the sparse resultant method in Sec. 3.1. In what follows, step i refers to step i in Sec. 3.1.
C1R Step 1 is skipped, as there is no need to test each variable x_k ∈ X.
C2R In step 2, the extra polynomial is assumed to be f_{m+1} = x_1 − u_0.
C3R In step 3(a), the favourable set of monomials r B is directly constructed from a B, as r B = a B. Moreover, the sets of monomial multiples are constructed as r T_i = a T_i, i = 1, ..., m, and r T_{m+1} = B_a. Step 3(b) is skipped, as the coefficient matrix r C(u_0) will be obtained directly from the action matrix-based solver in the subsequent steps. The output of this step is the favourable monomial set r B.
C4R In step 4, the upper block of r C(u_0) (32) is directly obtained from a Ĉ. Multiplying f_{m+1} with the monomials in r T_{m+1}, the expanded set of polynomials is obtained as {x^{α_j}(x_1 − u_0) | x^{α_j} ∈ r T_{m+1}}. It is possible to order the polynomials in this set such that its matrix form determines the lower block of r C(u_0).
Here, A_21 and A_22 are binary matrices, with entries either 0 or 1. The upper and the lower blocks together give us r C(u_0). The output of the modified step 4 is the coefficient matrix r C(u_0).
C6R By construction, r C(u_0) is already a square invertible matrix and has no extra rows to be removed. As we do not need to attempt row-column removal from r C(u_0), step 5 is skipped, and the sparse resultant matrix is directly constructed as M(u_0) = r C(u_0). In this case, the submatrices Â_11 and Â_12 of M(u_0) are equal to the submatrices A_11 and A_12 of r C(u_0), respectively.
Applying these changes to steps 1–5 in Sec. 3.1, i.e., the offline stage, we obtain the sparse resultant matrix M(u_0). After that, the online solver computes the Schur complement X of the block submatrix Â_12 of M(u_0), followed by the eigenvalue decomposition (49) of X, as described in step 6 of Sec. 3.1. In Proposition C.1 we prove that the sparse resultant-based solver so obtained is equivalent to the action matrix-based solver.
4.2 M_f (AM) from X (SRes)

Let us assume that for a system of polynomial equations F = 0 (1), we have a sparse resultant-based solver generated by following the steps in Sec. 3.1. In this solver the coefficient matrix r C(u_0) is assumed to be partitioned as in (32); the alternative partition (33) will be considered in Sec. 4.3. Moreover, the Schur complement X is assumed to have as many eigenvalues as the number of solutions to F = 0, i.e., N = r. (If the Schur complement X has more eigenvalues N than the number of solutions r, we can still change the steps performed by the action matrix method such that it leads to a solver equivalent to the sparse resultant-based solver; however, such a case would be too complicated, and is not discussed here. Recently, Martyushev et al (2023) have proposed an action matrix-based automatic generator which attempts to generate solvers with an eigenvalue problem larger than the number of roots of the system.) The extra polynomial is supposed to be of the form f_{m+1} = x_1 − u_0 (27).

We first propose the following changes to the steps performed in the offline stage of the action matrix method in Sec. 2.1. In the following list, step i refers to step i in Sec. 2.1.
C1A There is no change to step 1.
C2A In step 2, set the basis of the quotient ring A to be B_A = {[x^α] : x^α ∈ B_1}. By assumption, A is an r-dimensional vector space over C, spanned by the elements of B_A. (It was proved in (Cox et al, 1998, Theorem 6.17) that if F is a generic polynomial system, then A is spanned by the elements of the set B_A; even if the polynomial system is not generic, we can still follow the steps performed in the same proof.) Construct the monomial set B_a = B_1.
C3A In step 3, assume the action polynomial f = x_1.
C4A The monomial multiples required in step 4 are obtained as a T_i = r T_i, i = 1, ..., m. This also gives us the extended polynomial set F'. Note that a B = r B = mon(F').
C5A In step 5, the required sets of reducible and excess monomials are determined from B_1 and r B. Note that by (44), x^α ∈ B_1 implies x_1 x^α ∈ r B.
C6A In step 6, the vector of monomials is determined as a b = r b. This also determines the vectors b_a, b_r and b_e. Thus, the monomials in the action matrix method are ordered in the same way as the monomials in the sparse resultant method. Moreover, in step 6, the elimination template a Ĉ and its block partition are determined from M(u_0); the column partition of a Ĉ is determined by the partition of the vector a b. By the construction of M(u_0), Â_12 is a square invertible matrix. Therefore, we can always find a row partition such that Ĉ_22 is a square invertible matrix. As Ĉ_22 and Â_12 are square matrices, Ĉ_11 is also a square matrix. Therefore, there are no extra columns to be removed, and the elimination template a Ĉ is such that its G-J elimination leads to the required form, as in (A11).
Applying these changes to steps 1–6 in Sec. 2.1, i.e., the offline stage, we obtain the elimination template a Ĉ. Subsequently, the online action matrix-based solver (step 7 in Sec. 2.1) computes the G-J elimination of a Ĉ, from which the entries of the action matrix M_f are read off. This action matrix-based solver is equivalent to the sparse resultant-based solver, as proved in Proposition C.2.

4.3 M_f (AM) from X (SRes) with alternative form
Let us assume that for a system of polynomial equations F = 0 (1), we have used the alternative partition of r B (31) and of r C(u_0) (33), and generated a resultant-based solver by following the steps in Sec. 3.1. This means that we need to assume that no solution of F = 0 has x_1 = 0. The other assumptions remain the same as in Sec. 4.2. The alternative partition of the favourable monomial set r B (31) implies that the Schur complement X (51) gives us a representation of each coset [x^{α_j} x_1], x^{α_j} ∈ B_1, as a linear combination of the cosets [x^{α_j}], ∀x^{α_j} ∈ B_1. However, the approach in Sec. 4.2 does not work here, because in this case we would need to set the action polynomial f to be 1/x_1, which is not a polynomial. Therefore, we use the so-called Rabinowitsch trick (Cox et al, 1998) and propose the following changes to the offline stage of the action matrix method in Sec. 2.1. In the following list, step i refers to step i in Sec. 2.1.
C1A′ In step 1, the polynomial system F is augmented with an extra polynomial a f_{m+1} = x_1 λ − 1, where λ is a new variable. The augmented polynomial system is denoted as a F_a, and the augmented set of variables as a X_a = X ⊔ {λ}. Consider the ideal I_a = ⟨a F_a⟩ and the quotient ring A_a = C[a X_a]/I_a.
C2A′ Note that the number of solutions to F = 0 is the same as that of a F_a = 0, because we have assumed x_1 ≠ 0. Therefore, A_a is an r-dimensional vector space over C. Its basis is constructed from the cosets of the monomials in B_1. Construct the monomial set B_a = B_1.
C3A′ In step 3, the action polynomial is assumed to be f = λ.
C4A′ The sets of monomial multiples required in step 4 are constructed from the sets r T_i. The extended polynomial set a F'_a is obtained by multiplying each f_i ∈ a F_a with the monomials in a T_i. In step 4, the set of monomials is obtained as a B = mon(a F'_a).
C5A′ In step 5, the required reducible and excess monomial sets are determined accordingly.

Applying these changes to steps 1–6 in Sec. 2.1, i.e., the offline stage, we obtain the elimination template a Ĉ. Then, the action matrix-based solver (step 7 in Sec. 2.1) computes the G-J elimination of a Ĉ, from which the entries of the action matrix M_f are read off. This action matrix-based solver is equivalent to the sparse resultant-based solver, as proved in Proposition C.3.

Experiments
In this section, we evaluate the performance of the minimal solvers generated using our sparse resultant method proposed in Sec. 3. We compare the stability as well as the computational complexity of these solvers with the state-of-the-art Gröbner basis-based solvers for many interesting minimal problems. The minimal problems selected for comparison represent a wide variety of relative and absolute pose problems and correspond to those studied in (Larsson et al, 2018). Note that for the two minimal problems Rel. pose f+E+f 6pt and Abs. pose refractive P5P, we generated solvers from a simplified polynomial system as described in (Martyushev et al, 2022), instead of the original formulation. In comparison to the state-of-the-art Gröbner basis-based action matrix methods such as (Larsson et al, 2017a, 2018; Martyushev et al, 2022), the recent method for Laurent polynomial systems (Martyushev et al, 2023) generates a minimal solver by iteratively testing different elimination template matrices for rank conditions and does not rely on Gröbner basis computations. As the goal of this paper is to compare our sparse resultant method with the Gröbner basis-based action matrix method, we have only compared the performance of our minimal solvers with those based on the three Gröbner basis methods (Larsson et al, 2017a, 2018; Martyushev et al, 2022).

Computational complexity
The biggest contributor to the computational complexity of a minimal solver is the size of the matrices that undergo the crucial numerical operations. The solvers based on our sparse resultant method in Sec. 3, as well as the state-of-the-art Gröbner basis solvers, involve two crucial operations: a matrix inverse and an eigenvalue decomposition. This is indicated by the size of the solver in the table; e.g., a sparse resultant-based solver of size 11 × 20 computes a matrix inverse Â_12^{-1} of size 11 × 11, ultimately leading to a Schur complement X of size 20 − 11 = 9, and an eigenvalue decomposition of this matrix X. Similarly, an action matrix-based solver of size 11 × 20 performs a G-J elimination of an 11 × 20 matrix C and then an eigenvalue decomposition of an action matrix M_f of size 20 − 11 = 9. So in Tab. 1, we compare the size of the upper block [Â_11 Â_12] of the sparse resultant matrix M(u_0) in our extra polynomial sparse resultant-based solvers with the size of the elimination template C used in the state-of-the-art Gröbner basis solvers as well as in the original solvers proposed by the respective authors (column 4).
The Gröbner basis-based solvers used for comparison include the solvers generated using the approach in (Larsson et al, 2017a) (column 5), the Gröbner fan-based and heuristic-based approaches in (Larsson et al, 2018) (columns 6 and 8, respectively), and the template optimization approach using a greedy parameter search in (Martyushev et al, 2022) (column 9). Of the two methods for generating minimal solvers proposed in (Martyushev et al, 2022), we consider the smallest solver for each minimal problem.
As we can see from Tab. 1, for most minimal problems our sparse resultant method leads to smaller solvers than the Gröbner basis solvers based on (Larsson et al, 2017a, 2018), and to solvers of exactly the same size as those based on (Martyushev et al, 2022). The smallest solvers, where the size of the eigenvalue decomposition problem equals the number of solutions, are written in bold in Tab. 1.
Moreover, for some minimal problems, our sparse resultant method leads to a larger eigenvalue problem than the number of solutions; these are written in blue in Tab. 1. For three such minimal problems, i.e., Rel. pose E+fλ 7pt, Unsynch. Rel. pose, and Optimal pose 2pt v2, our sparse resultant method leads to smaller solvers than those based on the Gröbner basis methods (Larsson et al, 2017a, 2018), whereas for the problem Abs. pose quivers, our sparse resultant method leads to a smaller solver than those based on (Larsson et al, 2017a, 2018) and one of comparable size w.r.t. the solver based on (Martyushev et al, 2022).
Note that for the two minimal problems Optimal pose 2pt v2 and Rel. pose E angle+4pt, the elimination template C in the solvers based on the approach in (Martyushev et al, 2022) underwent an extra Schur complement-based step in the offline stage. Therefore, the template sizes reported for these two solvers are smaller than those of the solvers based on the other Gröbner basis methods in (Larsson et al, 2017a, 2018). For the problem Rel. pose E angle+4pt, our sparse resultant method failed to generate a solver.
Note that here we do not compare our solvers' sizes with those of the resultant-based solvers generated by (Heikkilä, 2017) and (Emiris, 2012). These methods cannot be directly applied to most of the studied minimal problems, as they cannot handle more equations than unknowns. Though these methods can be adjusted to work with more equations than unknowns and to generate full-rank matrices, even then they lead to larger solvers than the new method proposed in Sec. 3.1. With the method in (Heikkilä, 2017), we also failed to generate full-rank solvers for some minimal problems, while for some other problems the generated solvers were larger than ours, and the GEP involved in (Heikkilä, 2017) also led to many unwanted solutions.

Stability comparison
We evaluate and compare the stability of our solvers from Tab. 1 with that of the Gröbner basis-based solvers. As it is not feasible to generate scene setups for all considered problems, we instead evaluated the stability of the minimal solvers using 5K instances of random data points. The stability measures include the mean and median of Log_10 of the normalized equation residuals for the computed solutions, as well as the solver failures. Solver failures are an important measure of stability for real-world applications, and are computed as the percentage of the 5K instances for which at least one solution has a normalized equation residual > 10^−3. These measures on randomly generated inputs have been shown to be sufficiently good indicators of solver stability (Larsson et al, 2017a). Tab. 2 shows the stability of the solvers for all the minimal problems from Tab. 1. We observe that for most of the minimal problems, our proposed sparse resultant-based solvers have no failures. Among those with some failures, for all except two minimal problems our sparse resultant-based solvers have fewer failures than the Gröbner basis-based solvers.
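The residual metric can be sketched as follows. The normalization used here (dividing each equation's residual by the 2-norm of its coefficient vector) is our assumption for illustration; the paper's exact normalization may differ:

```python
import math

def log10_normalized_residual(polys, root):
    """polys: list of polynomials, each a list of (coeff, exponent tuple);
    root: candidate solution. Returns Log10 of the worst normalized
    residual over all equations."""
    worst = 0.0
    for poly in polys:
        # evaluate the polynomial at the candidate root
        val = sum(c * math.prod(x ** e for x, e in zip(root, exp))
                  for c, exp in poly)
        # normalize by the 2-norm of the coefficient vector (our assumption)
        scale = math.sqrt(sum(c * c for c, _ in poly))
        worst = max(worst, abs(val) / scale)
    return math.log10(worst) if worst > 0 else -math.inf

# Toy system x^2 - 4 = 0, x*y - 2 = 0 evaluated at a slightly
# perturbed root near (2, 1):
F = [[(1.0, (2, 0)), (-4.0, (0, 0))],
     [(1.0, (1, 1)), (-2.0, (0, 0))]]
res = log10_normalized_residual(F, (2.0 + 1e-8, 1.0))
print(res)   # a large negative number, i.e., a stable solution
```

Under the failure criterion above, a computed solution would count as a failure whenever this value exceeds −3.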
In general, our new method generates solvers that are stable, with only very few failures. For a selected subset of nine minimal problems from Tab. 1, we computed Log_10 of the normalized equation residuals and depict their histograms in Fig. 4. The histograms agree with our observation from Tab. 2 that our proposed sparse resultant method leads to solvers with only few failures.
Note that as our new solvers solve the same formulations of the problems as the existing state-of-the-art solvers, the performance on noisy measurements and real data would be the same as the performance of the state-of-the-art solvers. The only difference in performance comes from numerical instabilities that already appear in the noise-less case and are detailed in Tab. 2 (fail%). For the performance of the solvers in real applications, we refer the reader to the papers where the original formulations of the studied problems were presented (see Tab. 1, column 3). Here we select two interesting problems, one relative and one absolute pose problem, and perform experiments on synthetically generated scenes and on real images, respectively. For comparison, we generated solvers using our sparse resultant method (green), the Gröbner basis method (Larsson et al, 2017a) (black), the heuristic method (Larsson et al, 2018) (red), and the greedy parameter search method (Martyushev et al, 2022) (blue).
E+fλ solver on synthetic scenes: We study the numerical stability of our sparse resultant-based solver for the problem of estimating the relative pose of one calibrated camera and one camera with unknown focal length and radial distortion from 7-point correspondences, i.e., the Rel. pose E+fλ 7pt problem from Tab. 1. We consider the formulation "elim. λ" proposed in (Larsson et al, 2018), which leads to the smallest solvers. We study the performance on noise-free data and compare it to the results of the Gröbner basis solvers from Tab. 1.

Table 2 Stability comparison of solvers for the minimal problems from Tab. 1: solvers generated by our new method, solvers generated using (Larsson et al, 2017a) (col 4), heuristic-based solvers (Larsson et al, 2018) (col 5), and greedy parameter search (Martyushev et al, 2022) (col 6). For each minimal problem, the solvers which led to the minimum number of failures are written in bold. Mean and median are computed from Log_10 of normalized equation residuals. (†): In the offline stage in (Martyushev et al, 2022), the elimination template was reduced using a Schur complement. (−): Failed to extract solutions to all variables. (*): Using a simplified input system of polynomials, as in (Martyushev et al, 2022).
We generated 10K scenes with 3D points drawn uniformly from a [−10, 10]^3 cube. Each 3D point was projected by two cameras with random feasible orientation and position. The focal length of the first camera was randomly drawn from the interval f_gt ∈ [0.5, 2.5], and the focal length of the second camera was set to 1, i.e., the second camera was calibrated. The image points in the first camera were corrupted by radial distortion following the one-parameter division model. The radial distortion parameter λ_gt was drawn at random from the interval [−0.7, 0], representing distortions of cameras with a small distortion up to slightly more than GoPro-style cameras.

(Figure: histograms of the errors in the estimated λ (left) and f (right), obtained by selecting the real root closest to the ground truth, for the solvers based on (Larsson et al, 2017a), heuristics (Larsson et al, 2018), greedy parameter search (Martyushev et al, 2022), and our sparse resultant method.)

All tested solvers provide stable results with only a small number of runs with larger errors. The solver based on our sparse resultant method (green) is not only smaller, but also slightly more stable than the heuristic-based solver (Larsson et al, 2018) (red) and the solver based on (Larsson et al, 2017a) (black). For most minimal problems, the solver based on our proposed sparse resultant method has fewer failures than the solver based on the greedy parameter search approach in (Martyushev et al, 2022) (blue).
P4Pfr solver on real images: We evaluated our sparse resultant-based solver for the practical problem of estimating the absolute pose of a camera with unknown focal length and radial distortion from four 2D-to-3D point correspondences, i.e., the P4Pfr solver, on real data. We consider the Rotunda dataset, which was proposed in (Kukelova et al, 2015) and used in (Larsson et al, 2017c) for evaluating P4Pfr solvers. This dataset consists of 62 images captured by a GoPro Hero4 camera. An example input image from this dataset (left), as well as the undistorted (middle) and registered (right) images obtained using our new solver, is shown in Figure 5 (top). The Reality Capture software (rea, 2016) was used to build a 3D reconstruction of this scene. We used the 3D model to estimate the pose of each image using the new P4Pfr resultant-based solver (28 × 40) in a RANSAC framework. Similar to (Larsson et al, 2017c), we used the camera and distortion parameters obtained from rea (2016) as ground truth for the experiment. Figure 5 shows the errors for the focal length, radial distortion, and the camera pose. We have summarized these results in Tab. 3, where we present the errors for the focal length, radial distortion, and the camera pose obtained using our proposed solver; for the sake of comparison, we also list the errors reported in (Larsson et al, 2017c), where the P4Pfr (40 × 50) solver was tested on the same dataset. Overall, the errors are quite small; e.g., most of the focal lengths are within 0.1% of the ground truth and almost all rotation errors are less than 0.1 degrees, which shows that our new solver, as well as the original solver, work well for real data. The results of both solvers are very similar. Note that the slightly different results reported in (Larsson et al, 2017c) are due to RANSAC's random nature and the slightly different P4Pfr formulation (40 × 50) used in (Larsson et al, 2017c).

Conclusion
In this paper, we have proposed a sparse resultant method for generating efficient solvers for minimal problems in computer vision. It uses a polynomial with a special form to augment the initial polynomial system F in (1) and to construct a sparse resultant matrix M(u_0) (6), using the theory of polytopes (Cox et al, 1998). The special form enables us to decompose the resultant matrix constraint (7) into a regular eigenvalue problem (49) (or (51)) using the Schur complement. As demonstrated in Sec. 5, our sparse resultant method leads to minimal solvers with comparable speed and stability w.r.t. the SOTA solvers based on the action matrix method (Kukelova, 2013; Larsson et al, 2017a, 2018; Martyushev et al, 2022). Note that, by design, for some minimal problems our sparse resultant method leads to solvers with larger eigenvalue problems but a smaller matrix inverse, and with comparable or better stability than the Gröbner basis-based solvers.
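The reduction of the resultant matrix constraint to a regular eigenvalue problem via the Schur complement can be sketched numerically as follows. This is our own illustration: the block names (A_11, ..., B_22) follow the paper's notation from the appendix, but random matrices stand in for the structured coefficient matrices of an actual minimal problem, and the exact block layout here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch: M(u0) b = 0 with M(u0) = [[A11, A12], [A21 + u0*B21, A22 + u0*B22]],
# where only the lower block row depends (linearly) on the unknown u0 and
# A11 is assumed invertible.
n1, n2 = 4, 3
A11 = rng.standard_normal((n1, n1))
A12 = rng.standard_normal((n1, n2))
A21, A22 = rng.standard_normal((n2, n1)), rng.standard_normal((n2, n2))
B21, B22 = rng.standard_normal((n2, n1)), rng.standard_normal((n2, n2))

# Eliminate b1 = -A11^{-1} A12 b2 from the upper block row; the lower block
# row then becomes (S_A + u0 * S_B) b2 = 0 with the Schur complements:
T = np.linalg.solve(A11, A12)
S_A = A22 - A21 @ T
S_B = B22 - B21 @ T

# If S_B is invertible, this is the regular eigenvalue problem
# (-S_B^{-1} S_A) b2 = u0 * b2:
u0s, _ = np.linalg.eig(-np.linalg.solve(S_B, S_A))

# Sanity check: each recovered u0 makes the full matrix M(u0) rank-deficient.
for u0 in u0s:
    M = np.block([[A11, A12], [A21 + u0 * B21, A22 + u0 * B22]])
    assert np.linalg.svd(M, compute_uv=False)[-1] < 1e-8
```

The design trade-off mentioned above is visible here: the eigenvalue problem has size n2, while the only matrix inverse needed online is of the (often smaller) block A11.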
While the action matrix method and the sparse resultant method are based on different mathematical theories, we have observed that the resulting solvers involve similar numerical operations, such as eigenvalue computation. This raises the question: "Under what conditions, for a given minimal problem (a given system of polynomial equations), are the two solvers the same?" In Sec. 4, we have attempted to answer this question. Specifically, if we begin with an action matrix-based solver for a given minimal problem, we propose a list of changes to the steps in our sparse resultant method so that it leads to an equivalent sparse resultant-based solver. In the opposite direction, we also study the case when we begin with a sparse resultant-based solver and establish a list of changes to the steps performed by an action matrix method such that it leads to an equivalent solver. In other words, if we begin with an action matrix-based solver, it determines the extra polynomial f_{m+1} and the favourable monomial set to be used in the sparse resultant method. Conversely, if we begin with a sparse resultant-based solver, it determines the monomial ordering, the extended polynomial set F′, and the basis of the quotient ring to be used for computing an action matrix-based solver.
We hope that this discussion paves a path towards unifying the two approaches for generating minimal solvers, i.e., the action matrix methods and the sparse resultant method based on an extra polynomial. It is known that both these approaches are examples of the normal form methods for a given polynomial system F, recently studied in (Telen and Van Barel, 2018; Telen et al, 2018; Mourrain and Trébuchet, 2008). To the best of our knowledge, normal forms have not yet been incorporated into the automatic generators for minimal solvers in computer vision. This may be an interesting topic for future investigation.

... set of polynomials F′ is constructed such that each q_j can be computed as a linear combination of the polynomials in F′.

5. [Offline] Let B = mon(F′), which is partitioned (Byröd et al, 2009) as B = B_e ⊔ B_r ⊔ B_a, where B_r and B_e are, respectively, what we call the reducible monomials and the excess monomials.

6. [Offline] The set of equations F′ = 0 is expressed in matrix form as C [b_e; b_r; b_a] = 0, where b_e = vec(B_e) and b_r = vec(B_r). The rows of C are assumed to be partitioned such that C_22 is a square invertible matrix. The matrix C is known as an elimination template. As C represents a coefficient matrix of F′ constructed as described in step 4, performing a G-J elimination of C yields a matrix whose lower block row can be rewritten as b_r = −C′_23 b_a. The entries of the action matrix M_f can then be read off from the entries of the matrix C′_23 (Byröd et al, 2009; Kukelova, 2013). For x^{α_j} ∈ B_a, if x_k x^{α_j} = x^{α_{i_1}} for some x^{α_{i_1}} ∈ B_a, then the j-th row of M_f contains 1 in the i_1-th column and 0 in the remaining columns. But if x_k x^{α_j} ∉ B_a, then x_k x^{α_j} = x^{α_{i_2}} for some x^{α_{i_2}} ∈ B_r; in this case, the j-th row of M_f is the i_2-th row of −C′_23. We recover solutions to F = 0 from the eigenvalues (and eigenvectors) of the action matrix M_f. Specifically, u_0 ∈ C is an eigenvalue of the matrix M_f iff u_0 is a value of the function f on the variety V. We refer to the book (Cox et al, 1998) for further details. In other words, if f = x_k, then the eigenvalues of M_f are the x_k-coordinates of the solutions of (1). The solutions for the remaining variables can be obtained from the eigenvectors of M_f. This means that after finding the multiplication matrix M_f, we can recover the solutions by its eigendecomposition, for which efficient algorithms exist. The output of the offline stage is the elimination template Ĉ from step 6. In the online stage, step 7, for a given instance of the minimal problem, we fill in the coefficients of the elimination template Ĉ using the input data and perform its G-J elimination to compute the action matrix M_f. Eigendecomposition of M_f then gives us the solutions to the given instance of the minimal problem.
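As a toy illustration of this eigendecomposition step (our own example, not one from the paper), consider the system f_1 = x^2 − 2 = 0, f_2 = y − x = 0 with quotient-ring basis B_a = {1, x}. The 2 × 2 multiplication matrix for f = x has the x-coordinates of the solutions as eigenvalues, and its eigenvectors, being the basis monomials evaluated at a solution, yield the remaining variable:

```python
import numpy as np

# Multiplication by f = x in C[x, y]/(f1, f2), basis B_a = (1, x):
#   x * 1 = x          -> row (0, 1)
#   x * x = x^2 = 2    -> row (2, 0)   (reduced modulo f1 = x^2 - 2)
M = np.array([[0.0, 1.0],
              [2.0, 0.0]])

# M b = x * b for b = (1, x) evaluated at a solution, so the eigenvalues are
# the x-coordinates and the eigenvectors encode the basis monomials.
vals, vecs = np.linalg.eig(M)
for lam, v in zip(vals, vecs.T):
    v = v / v[0]              # scale so the coordinate of the monomial 1 is 1
    x_sol = v[1]              # second coordinate is the monomial x
    y_sol = x_sol             # remaining variable recovered via f2: y = x
    assert np.isclose(x_sol, lam)       # here f = x, so eigenvalue = x
    assert np.isclose(x_sol**2, 2.0)    # the solution satisfies f1
```

The recovered solutions are (±√2, ±√2), matching the variety of the toy system.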
The first automatic approach for generating elimination templates and Gröbner basis solvers was presented in (Kukelova et al, 2008). Recently, an improvement to the automatic generator (Kukelova et al, 2008) was proposed in (Larsson et al, 2017a). It exploits the inherent relations between the input polynomial equations and results in more efficient solvers than (Kukelova et al, 2008). The automatic method from (Larsson et al, 2017a) was later extended by a method for dealing with saturated ideals (Larsson et al, 2017b) and a method for detecting symmetries in polynomial systems (Larsson and Åström, 2016).

B.3 Row removal
It may happen that the reduced matrix C_red(u_0) still has more rows than columns. Therefore, we propose an approach to remove the excess rows from C_red(u_0) and transform it into a square matrix, which will be our sparse resultant matrix M(u_0). Towards this, we provide Alg. 3, which removes the extra rows from C_red(u_0) by removing some monomial multiples from T_red. It accepts the favourable monomial set B_red and its corresponding monomial multiples T_red as input, and returns a reduced set of monomial multiples T_red which, together with the basis B_red, leads to a square matrix. If we partition B_red using the alternative approach (42), we just need to change step 17 in Alg. 3.

Proposition C.2 For a given system of polynomial equations F = 0 (1), let us assume we have generated a sparse resultant-based solver. Let us also consider an action matrix-based solver, generated after applying the changes C1A-C6A in Sec. 4.2 to the offline stage, i.e., steps 1-6 in Sec. 2.1. Then the two solvers are equivalent as defined in Def. 2.
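A simplified sketch of this row-removal idea (our own illustration; it is not Alg. 3 itself, whose exact steps are not reproduced here): starting from a tall matrix whose rows correspond to monomial multiples, greedily keep only rows that increase the rank, until the kept rows form a square nonsingular matrix.

```python
import numpy as np

def select_rows_to_square(C):
    """Greedily pick a subset of rows of a tall matrix C (rows = monomial
    multiples, columns = monomials) so the result is square and nonsingular.
    A sketch of the row-removal idea, not the paper's Alg. 3."""
    m, n = C.shape
    assert m >= n, "expects more rows than columns"
    kept, rank = [], 0
    for i in range(m):
        r = np.linalg.matrix_rank(C[kept + [i], :])
        if r > rank:          # row i is independent of the rows kept so far
            kept.append(i)
            rank = r
        if rank == n:         # n independent rows found -> square matrix
            break
    return kept

rng = np.random.default_rng(2)
C = rng.standard_normal((6, 4))      # stand-in for a tall C_red(u0)
rows = select_rows_to_square(C)
M = C[rows, :]
assert M.shape == (4, 4) and abs(np.linalg.det(M)) > 1e-9
```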
Proof: From (56), note that the elimination template ᵃĈ can be expressed as in (C15). This is exactly the same situation as in Prop. C.1. Specifically, the Schur complement X and the action matrix M_f are exactly the same matrices, with each row either being a row from −C′_23 or a row containing 1's and 0's. Moreover, (C15) implies that the G-J elimination of ᵃĈ can be replaced by the matrix product Â_12^{−1} Â_11. Therefore, as both conditions in Def. 2 are satisfied, the action matrix-based solver is equivalent to the sparse resultant-based solver.
Proposition C.3 For a given system of polynomial equations F = 0 (1), let us assume we have generated a sparse resultant-based solver using the alternative partition of ʳB (31) and ʳC(u_0) (33). Let us also consider an action matrix-based solver, generated after applying the changes C1A′-C6A′ in Sec. 4.3 to steps 1-6 in Sec. 2.1, i.e., the offline stage. Then the two solvers are equivalent as defined in Def. 2.
Proof: In step C4A′, the monomial multiples are set to be the same as those in the sparse resultant method, i.e., ᵃT_i = ʳT_i, i = 1, ..., m + 1. Therefore, the upper block of the elimination template ᵃC in (A8) is fixed by setting it to [Â_11 Â_12], where [Â_11 Â_12] (33) is the upper block of the sparse resultant matrix M(u_0).
where b is a column vector of the monomials in B, ordered w.r.t. the given monomial ordering, and C([c_{i,α}]) is the matrix of coefficients c_{i,α} of F. The entries of C([c_{i,α}]) are row-indexed by the ordered set of polynomials in F and column-indexed by the monomials in b.
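For a toy system of our own choosing (not an example from the paper), this matrix form looks as follows: rows are indexed by the polynomials, columns by the ordered monomials, and the product C b vanishes exactly at the common roots.

```python
import numpy as np

# Toy system: f1 = x^2 + y^2 - 5, f2 = x*y - 2,
# with common roots (1,2), (2,1), (-1,-2), (-2,-1).
# Fix a monomial ordering for b = vec(mon(F)): [x^2, y^2, x*y, 1].
C = np.array([
    [1.0, 1.0, 0.0, -5.0],   # row indexed by f1, columns indexed by b
    [0.0, 0.0, 1.0, -2.0],   # row indexed by f2
])

# Evaluating b at any common root of F gives a vector in the null space of C.
for (x, y) in [(1, 2), (2, 1), (-1, -2), (-2, -1)]:
    b = np.array([x**2, y**2, x * y, 1.0])
    assert np.allclose(C @ b, 0)
```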

Fig. 1 An example of the Newton polytopes of two polynomials as well as their Minkowski sum: a. P_1 = NP(f_1), b. P_2 = NP(f_2), and c. Q = P_1 + P_2.

Fig. 2 The integer points (shown in blue) in the interior of the Minkowski sum of the two Newton polytopes, Q = P_1 + P_2, after shifting it by δ = [−0.1, −0.1].
C4R Replace step 4(a) by directly determining the set partition of ʳB = B_1 ⊔ B_2 (30)-(31), as B_1 = B_a and B_2 = B̃_e ⊔ B_r. Moreover, the monomial ordering in ᵃb determines the vectors b_1 = b_a and b_2 = [b̃_e; b_r]. The action matrix-based solver corresponds to the first version of the set partition of ʳB considered in our sparse resultant method, i.e., the set partition in (30).

C5R In step 4(b), the upper block [A_11 A_12] of ʳC(u_0)
Note that the set of monomials ᵃB = B_e ⊔ B_r ⊔ B_a, and B_a ∩ B_r = ∅.

C6A′ In step 6, the monomial vectors are set as b_a = b_1, b_r = vec(B_r), and b_e = b_2. This also fixes the vector of monomials ᵃb; the monomials in the vector ᵃb in the action matrix method are ordered in the same way as the monomials in the sparse resultant method in Sec. 3.1. Moreover, as the monomials in B_r are in a one-to-one correspondence with the monomials in B_1, the monomial ordering in b_1 fixes the order of the monomials in b_r as well.

Fig. 3 Histograms of Log10 relative error in radial distortion (left) and focal length (right) for the Rel. pose E+fλ 7pt (elim. λ) problem, for 10K randomly generated synthetic scenes. These scenes represent cameras with different radial distortions, poses and focal lengths. For comparison, we generated solvers using our sparse resultant method (green), the Gröbner basis method (Larsson et al, 2017a) (black), the heuristic method (Larsson et al, 2018) (red) and the greedy parameter search method (Martyushev et al, 2022) (blue).

Fig. 3 shows Log10 of the relative error of the distortion parameter λ (left) and the focal length f (right).

Fig. 4 Histograms of Log10 of the normalized equation residual error for nine selected minimal problems. The methods tested for comparison are based on the Gröbner basis (Larsson et al, 2017a), heuristics (Larsson et al, 2017a), greedy parameter search (Martyushev et al, 2022) and our sparse resultant method.
The errors are relative to the ground truth for all quantities except rotation, which is shown in degrees. The results for the SOTA P4Pfr solver of size 40 × 50 are taken from (Larsson et al, 2017c).

Fig. 5 Top row: Example of an input image (left). Undistorted image using the proposed resultant-based P4Pfr solver (middle). Input 3D point cloud and an example of a registered camera (right). Bottom row: Histograms of errors for 62 images. The measured errors are (left) the Log10 relative focal length error |f − f_GT|/f_GT, radial distortion error |k − k_GT|/|k_GT|, and relative translation error ‖t − t_GT‖/‖t_GT‖, and (right) the rotation error in degrees.

Note that C_22 is a square invertible matrix. The submatrix C′_11 may not be square. This implies that some columns in [C_11; C_21] are linearly dependent, and they can be removed (Kukelova, 2013), along with the corresponding monomials from the set of excess monomials B_e. Let the resulting monomial set be denoted as B̃_e.
ᵃĈ = [Ĉ_11 C_12 C_13; Ĉ_21 C_22 C_23] = [Â_12 Â_11]. Therefore, a G-J elimination of ᵃĈ can be achieved by computing the matrix product Â_12^{−1} ᵃĈ = [I Â_12^{−1} Â_11]. Comparing it with the G-J eliminated form of the matrix ᵃĈ

Definition 1 The coefficients {c_{i,α}} of the polynomial system F in (2) are non-generic if and only if there exists a non-zero polynomial, in {c_{i,α}} as variables, which vanishes at the coefficients of F.
(45) where A_11, A_12, A_21, A_22, B_21 and B_22 are matrices dependent only on the coefficients of the input polynomials in F_a (28). Based on our assumption in the statement of this proposition, C_12, and hence A_12, have full column rank. Substituting (45) into (43) gives

Table 3 Errors for the real Rotunda dataset.