1 Introduction

The properness of parameterizations defined by rational functions concerns the injectivity of the rational map they induce between a nonempty Zariski open subset of the parameter space and the parameterized variety. It is therefore a very important property, whether from the point of view of applications, of algorithmic efficiency, or of the theoretical analysis of algebraic varieties.

Let us be more precise. Let \({{\mathcal {P}}}(t_1,\ldots ,t_r)\) be a rational parameterization of a variety \(\mathcal {V}\) over a field \(\mathbb {K}\). If the induced rational map \({{\mathcal {P}}}: \mathbb {K}^r \rightarrow \mathcal {V}\) is not injective, the question arises of deciding whether an injective rational parameterization of \(\mathcal {V}\) exists and, if so, of computing it. The existence problem is answered by Lüroth’s Theorem in the case of dimension one (see e.g. [16]), and by Castelnuovo’s rationality Theorem in the case of dimension two (see e.g. [20]); in the first case, the existence of birational (i.e. injective) parameterizations is ensured over any field, while in the second the field is required to be algebraically closed of characteristic zero. We recall that throughout this paper we refer to generically injective rational parameterizations as either proper or birational parameterizations.

Concerning the computation of proper parameterizations, the problem can be stated either from the implicit equations or from a parameterization. In the first case, one derives a birational parameterization using a parameterization algorithm. In the second case, the question is to compute a birational reparameterization of an input non-birational parameterization. The case of curves (i.e. dimension one) has been analyzed by several authors and there are effective answers in both approaches (see e.g. [7, 16]). For surfaces, the problem still lacks some answers. In the implicit case, the problem is solved (see e.g. [14]). However, if the starting point is parametric, and the implicit equation is not used, the problem is open, although there are some partial solutions (see [7, 8]) where the problem can be reduced, in a certain sense, to the case of curves.

In this paper, we focus on the case of surfaces from the parametric point of view. We give a general algorithm (see Algorithm 2), which does not require parameterization algorithms for implicitly defined surfaces, and therefore covers the open question mentioned above. To do this, in Theorem 1, it is shown how the problem is directly related to the generic fiber of the starting parameterization. Using this fact, solutions to the problem are determined. Additionally, we see how, by imposing two additional hypotheses, both related to the base points of the parameterization, the general algorithm is considerably simplified. This is described in Algorithm 3. These two hypotheses are: the existence of a birational parameterization without base points and the transversality of the input parameterization. The first hypothesis allows knowing in advance the degree of the solution space (see Lemma 6) and the second allows reducing the dimension of the solution space (see Theorem 3).

At this point in the introduction, the reader may wonder about the need to solve the problem without leaving the parametric environment; that is, the natural question that arises is why not implicitize and then use the solution provided by the parameterization algorithms. We would like to mention some reasons that, for us, although we do not include any comparison analysis, justify the strategic option of looking directly for a reparameterization when the data is given parametrically. A first reason is that the parameterization algorithms are not particularly simple. On the other hand, developing an algorithmic solution to the problem that does not require parameterizing techniques means having a theoretical methodology with which to address, as a future line of research, other problems such as, for instance, the properness problem for rational varieties of any dimension for which a unirational parameterization is known; we recall that the problem of parameterizing in dimensions other than one and two is open. Let us emphasize that dealing with rational varieties implies, by definition, the existence of birational parameterizations. In Sect. 6 we briefly comment on this issue.

The paper is structured as follows. In Sect. 2 we introduce, through several subsections, the preliminaries of the paper on the generic fiber and the base locus of rational maps. Section 3 is devoted to the theoretical analysis of the problem. In this section, in Theorem 1, we state the keys to computationally approach the properness. Section 4 deals with the development of the general algorithm. For this purpose, in Sects. 4.1 and 4.2 we discuss how to effectively deal with the generic fiber of the input parameterization and of the reparameterizing functions, respectively. In Sect. 4.3 the degree of the reparameterizing functions is studied. Finally, in Sect. 4.4 the general algorithm is outlined. In Sect. 5, the particular case of surfaces admitting a birational parameterization with empty base locus is considered. The paper ends with a section devoted to conclusions and open related problems (see Sect. 6).

We finish this section by introducing the main notation used throughout this paper, and stating the problem we deal with.

Notation. Let \(\mathbb {K}\) be an algebraically closed field of characteristic zero. \(\mathbb {P}^{k}(\mathbb {K})\) denotes the k–dimensional projective space. Furthermore, for a generically finite rational map

$$\begin{aligned} \begin{array}{cccc} \mathcal {M}: &{} \mathbb {P}^{k_1}(\mathbb {K}) &{} \dashrightarrow &{} \mathbb {P}^{k_2}(\mathbb {K}) \\ &{} {{\overline{h}}}=(h_1:\cdots :h_{k_{1}+1}) &{} \longmapsto &{} (m_1({{\overline{h}}}):\cdots : m_{k_2+1}({{\overline{h}}})), \end{array} \end{aligned}$$

where the non-zero \(m_i\) are homogeneous polynomials in \({{\overline{h}}}\) of the same degree, we denote by \(\textrm{deg}(\mathcal {M})\) the degree \(\textrm{deg}_{{{\overline{h}}}}(m_i)\), for \(m_i\) non-zero, and by \(\textrm{degMap}(\mathcal {M})\) the degree of the map \(\mathcal {M}\); that is, the cardinality of the generic fiber of \(\mathcal {M}\) (see e.g. [4]).
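For instance, for \(\mathcal {M}=(t_1^2:t_2^2:t_3^2):\mathbb {P}^{2}(\mathbb {K}) \dashrightarrow \mathbb {P}^{2}(\mathbb {K})\) one has \(\textrm{deg}(\mathcal {M})=2\) and \(\textrm{degMap}(\mathcal {M})=4\), since a generic point \((a:b:c)\), with \(abc\ne 0\), has exactly the four preimages \((\pm \sqrt{a}:\pm \sqrt{b}:\sqrt{c})\).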

Let \(f\in \mathbb {L}[t_1,t_2,t_3]\) be homogeneous and non-zero, where \(\mathbb {L}\) is a field extension of \(\mathbb {K}\). Then \(\mathscr {C}_{\overline{\mathbb {L}}}(f)\) denotes the projective plane curve defined by f over the algebraic closure of \(\mathbb {L}\). When there is no risk of ambiguity, we will simply write \(\mathscr {C}(f)\). For \(A\in \mathbb {P}^{2}(\overline{\mathbb {L}})\), we represent by \(\textrm{mult}_A(\mathscr {C}(f),\mathscr {C}(g))\) the multiplicity of intersection of \(\mathscr {C}(f)\) and \(\mathscr {C}(g)\) at A. Also, we denote by \(\textrm{mult}(A,\mathscr {C}(f))\) the multiplicity of \(\mathscr {C}(f)\) at A.

Finally, \(\mathscr {S}\subset \mathbb {P}^3(\mathbb {K})\) represents a rational projective surface and we denote by \(\textrm{deg}(\mathscr {S})\), the degree of \(\mathscr {S}\). We assume that

$$\begin{aligned} {{\mathcal {P}}}(\,{{\overline{t}}}\,)=\left( p_1(\,{{\overline{t}}}\,): p_2(\,{{\overline{t}}}\,): p_3(\,{{\overline{t}}}\,): p_4(\,{{\overline{t}}}\,) \right) , \,\,{{\overline{t}}}\,=(t_1,t_2,t_3),\,\, \textrm{gcd}(p_1,\ldots ,p_4)=1,\nonumber \\ \end{aligned}$$
(1.1)

is a fixed non-birational projective rational parameterization of \(\mathscr {S}\). We assume w.l.o.g. that \(p_4\) is not the zero polynomial.

Throughout the paper we identify the set of all projective curves, including multiple component curves, of a fixed degree d, with the projective space (see [6, 16] or [19] for further details)

$$\begin{aligned} \mathscr {V}_d:=\mathbb {P}^{\frac{d (d+3)}{2}}(\mathbb {K}). \end{aligned}$$
(1.2)

More precisely, we identify the projective curves of degree d with the forms in \(\mathbb {K}[\,{{\overline{t}}}\,]\) of degree d, up to multiplication by non-zero \(\mathbb {K}\)-elements. Now, these forms are identified with the elements in \(\mathscr {V}_d\) corresponding to their coefficients, after fixing an order of the monomials. By abuse of notation, we will refer to the elements in \(\mathscr {V}_d\) by either their tuple of coefficients, or the associated form, or the corresponding curve.
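For instance, for \(d=2\) a projective conic is determined by the six coefficients of the monomials \(t_1^2, t_2^2, t_3^2, t_1t_2, t_1t_3, t_2t_3\), up to a non-zero common factor, so \(\mathscr {V}_2=\mathbb {P}^{5}(\mathbb {K})\).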

Birational reparameterization problem statement. Given \(\mathscr {S}\) and \({{\mathcal {P}}}(\,{{\overline{t}}}\,)\) as above, determine a birational parameterization \({{\mathcal {Q}}}(\,{{\overline{t}}}\,)\) of \(\mathscr {S}\) as well as a rational map \({{\mathcal {S}}}:\mathbb {P}^{2}(\mathbb {K})\dashrightarrow \mathbb {P}^{2}(\mathbb {K})\) such that

$$\begin{aligned} {{\mathcal {P}}}(\,{{\overline{t}}}\,)={{\mathcal {Q}}}({{\mathcal {S}}}(\,{{\overline{t}}}\,)). \end{aligned}$$
(1.3)

In this case, we will say that \(({{\mathcal {Q}}},{{\mathcal {S}}})\) solves the birational reparameterization problem for \({{\mathcal {P}}}\).

2 Preliminaries, notation and problem statement

In this section we recall the main preliminaries to be used throughout the paper.

2.1 The generic fiber

Let \(\mathcal {M}\) be a generically finite rational map. That is, \(\mathcal {M}\) is a rational map

$$\begin{aligned} \mathcal {M}=(m_1(t_1,\ldots ,t_{k_1+1}):\cdots :m_{k_2+1}(t_1,\ldots ,t_{k_1+1})): \mathbb {P}^{k_1}(\mathbb {K}) \dashrightarrow \mathbb {P}^{k_2}(\mathbb {K}) \end{aligned}$$

where all non-zero polynomials \(m_i\) are homogeneous of the same degree and such that there exists a non-empty Zariski open subset \(\Omega \subset \textrm{im}(\mathcal {M})\) such that, for every \(\overline{b}\in \Omega \), the fiber \(\mathcal {M}^{-1}(\overline{b})\) is finite and its cardinality does not depend on \(\overline{b}\) (see e.g. [4] and [17], p. 76). We call this number the degree of the map and we denote it by \(\textrm{degMap}(\mathcal {M})\). We define the generic fiber of \(\mathcal {M}\) as

$$\begin{aligned} \mathscr {F}_g(\mathcal {M}):=\{\overline{\alpha }\in \mathbb {P}^{k_1}(\overline{\mathbb {K}({{\overline{h}}})}) \,|\, \mathcal {M}(\overline{\alpha })=\mathcal {M}({{\overline{h}}}) \} \end{aligned}$$

where \({{\overline{h}}}=(h_1,\ldots ,h_{k_1+1})\) is a tuple of independent parameters and \(\overline{\mathbb {K}({{\overline{h}}})}\) is the algebraic closure of \(\mathbb {K}({{\overline{h}}})\). Note that \(\#(\mathscr {F}_g(\mathcal {M}))=\textrm{degMap}(\mathcal {M})\).

Let us assume w.l.o.g. that \(m_{k_2+1}\) is not the zero polynomial; if not, a change of coordinates can be applied. Associated to \(\mathcal {M}\) we may consider the affine rational map

$$\begin{aligned} \mathcal {M}_a=\left( \frac{m_1(t_1,\ldots ,t_{k_1},1)}{m_{k_2+1}(t_1,\ldots ,t_{k_1},1)},\ldots , \frac{m_{k_2}(t_1,\ldots ,t_{k_1},1)}{m_{k_2+1}(t_1,\ldots ,t_{k_1},1)}\right) : \mathbb {A}^{k_1}(\mathbb {K}) \dashrightarrow \mathbb {A}^{k_2}(\mathbb {K}).\nonumber \\ \end{aligned}$$
(2.1)

Then, the generic fiber of \(\mathcal {M}_a\) is defined as

$$\begin{aligned} \mathscr {F}_g(\mathcal {M}_a):=\{\overline{\alpha }\in \mathbb {A}^{k_1}(\overline{\mathbb {K}({{\overline{h}}})}) \,|\, \mathcal {M}_a(\overline{\alpha })=\mathcal {M}_a({{\overline{h}}}) \} \end{aligned}$$

where \({{\overline{h}}}=(h_1,\ldots ,h_{k_1})\) is a tuple of independent parameters and \(\overline{\mathbb {K}({{\overline{h}}})}\) is the algebraic closure of \(\mathbb {K}({{\overline{h}}})\). Note that

$$\begin{aligned} \#(\mathscr {F}_g(\mathcal {M}))=\textrm{degMap}(\mathcal {M})=\textrm{degMap}(\mathcal {M}_a)=\#(\mathscr {F}_g(\mathcal {M}_a)). \end{aligned}$$

Moreover, let us consider the dehomogenization and homogenization maps

$$\begin{aligned} \begin{array}{cccc} \mathscr {D}: &{} \mathbb {P}^{k_1}(\overline{\mathbb {K}({{\overline{h}}})}) &{} \dashrightarrow &{} \mathbb {A}^{k_1}(\overline{\mathbb {K}({{\overline{h}}})}) \\ &{} (\alpha _1:\cdots :\alpha _{k_1+1}) &{} \longmapsto &{} \left( \dfrac{\alpha _1}{\alpha _{k_1+1}},\ldots ,\dfrac{\alpha _{k_1}}{\alpha _{k_1+1}}\right) , \end{array} \qquad \begin{array}{cccc} \mathscr {H}: &{} \mathbb {A}^{k_1}(\overline{\mathbb {K}({{\overline{h}}})}) &{} \longrightarrow &{} \mathbb {P}^{k_1}(\overline{\mathbb {K}({{\overline{h}}})}) \\ &{} (\alpha _1,\ldots ,\alpha _{k_1}) &{} \longmapsto &{} (\alpha _1:\cdots :\alpha _{k_1}:1). \end{array} \end{aligned}$$
(2.2)

Taking into account that for \(\overline{\alpha }\in \mathscr {F}_{g}(\mathcal {M})\), \(m_{k_2+1}(\overline{\alpha })\ne 0\) because \(m_{k_2+1}({{\overline{h}}})\ne 0\), we have that, abusing notation,

$$\begin{aligned} \mathscr {D}(\mathscr {F}_g(\mathcal {M}))=\mathscr {F}_g(\mathcal {M}_a), \,\, \mathscr {H}(\mathscr {F}_g(\mathcal {M}_a))=\mathscr {F}_g(\mathcal {M}). \end{aligned}$$

Remark 1

In Sects. 4.1 and 4.2 we will deal with the question of computing or describing the generic fiber of a parameterization and/or of a dominant rational map from \(\mathbb {P}^{2}(\mathbb {K})\) onto \(\mathbb {P}^{2}(\mathbb {K})\). Some of the techniques that will be used come from [10]. In the following we see that, for our purposes, one of the two hypotheses required in [10] can be avoided.

Given a rational affine surface parameterization \((a_1 /a_2,b_1 /b_2,c_1 /c_2)\), in reduced form, in Sect. 1.2. of [10] two general assumptions are introduced, namely, \(\{\nabla (a_1/a_2),\nabla (b_1/b_2)\}\) must be linearly independent as vectors in the \(\mathbb {K}\)-vector space \(\mathbb {K}(t_1,t_2)^2\), and (0 : 1 : 0) must belong to none of the projective curves defined by the numerators and denominators of the parameterization. Nevertheless, this second hypothesis can be omitted when dealing with the generic fiber of the parameterization. Indeed, taking into account Theorem 2 in [10] and Proposition 1 in [9], one may determine the degree of the rational map induced by the rational parameterization from the polynomial \(R_1\) or \(R_2\), introduced in [9], without imposing the second assumption. The underlying idea is that the resultant w.r.t. \(t_2\) may misread the multiplicity of the point (0 : 1 : 0) (similarly, the resultant w.r.t. \(t_1\) may misread (1 : 0 : 0)). Nevertheless, the polynomials \(R_i\) encode the coordinates of the non-constant intersection points. This second hypothesis is however needed for the specialization of the computation of the fiber at a particular point (see Lemma 6 and Theorems 5 and 6 in [10]).

Similarly, the partial degrees of an implicit equation can be computed without the assumption on the point at infinity (0:1:0). More precisely, in [11], Theorems 1, 2 and 4 are obtained from the results stated above and thus, if we do not specialize the resultant and we work with generic points, we do not need to impose that none of the curves defined by the numerators and denominators of the parameterization components passes through the point at infinity (0:1:0).

In addition, the techniques in [10] are easily extended to the case of dominant rational maps from \(\mathbb {P}^{2}(\mathbb {K})\) onto \(\mathbb {P}^{2}(\mathbb {K})\) (see Sect. 4.2) by associating an auxiliary surface parameterization. As a consequence, the comments above on the hypothesis on (0 : 1 : 0) are also applicable.

2.2 The base locus

We recall some basic notions on base points; for further information we refer to [3] and [13]. The base points of a projective rational map are the points where the map is not well-defined. In our case, we will need to speak of base points of rational maps induced by surface parameterizations and/or rational maps from \(\mathbb {P}^{2}(\mathbb {K})\) onto \(\mathbb {P}^{2}(\mathbb {K})\). So, we unify both cases considering a rational map

$$\begin{aligned} \mathcal {M}=(m_1(\,{{\overline{t}}}\,):\cdots :m_{k+1}(\,{{\overline{t}}}\,)): \mathbb {P}^{2}(\mathbb {K}) \dashrightarrow \mathbb {P}^{k}(\mathbb {K}) \end{aligned}$$

where all non-zero polynomials \(m_i\) are homogeneous of the same degree and \(\textrm{gcd}(m_1,\ldots ,m_{k+1})=1\). Then, \(A\in \mathbb {P}^{2}(\mathbb {K})\) is called a base point of \(\mathcal {M}\) if \(A\in \bigcap _{i=1}^{k+1} \mathscr {C}(m_i)\). The set of all base points of \(\mathcal {M}\) is called the base locus of \(\mathcal {M}\), and we represent it by \(\mathscr {B}(\mathcal {M})\); note that, since \(\textrm{gcd}(m_1,\ldots ,m_{k+1})=1\), the base locus is either empty or finite. The multiplicity of a base point of \(\mathcal {M}\) is its multiplicity as an element of the base locus. The multiplicity of a base point can also be seen as follows. Associated to \(\mathcal {M}\) we introduce the polynomials

$$\begin{aligned} W_1(\,{{\overline{x}}}\,,\,{{\overline{t}}}\,):=\sum _{i=1}^{k+1} x_i\, m_i(\,{{\overline{t}}}\,),\,\, W_2(\,{{\overline{y}}}\,,\,{{\overline{t}}}\,):=\sum _{i=1}^{k+1} y_i\, m_i(\,{{\overline{t}}}\,), \end{aligned}$$
(2.3)

where \(x_i, y_i\) are new variables, and we consider the corresponding projective plane curves \(\mathscr {C}(W_{i})\) in \(\mathbb {P}^{2}(\mathbb {F})\) where \(\mathbb {F}\) is the algebraic closure of \(\mathbb {K}(\,{{\overline{x}}}\,,\,{{\overline{y}}}\,)\). Note that \(\mathscr {B}(\mathcal {M})\subset \mathscr {C}(W_1)\cap \mathscr {C}(W_2)\). Then, the multiplicity of a base point \(A\in \mathscr {B}(\mathcal {M})\) is the multiplicity of intersection of the curves \(\mathscr {C}(W_1)\) and \(\mathscr {C}(W_2)\) at A. We denote the multiplicity of a base point \(A\in \mathscr {B}(\mathcal {M})\) as

$$\begin{aligned} \textrm{mult}(A,\mathscr {B}(\mathcal {M})). \end{aligned}$$
(2.4)

We say \(\mathcal {M}\) is transversal if, for every \(A\in \mathscr {B}(\mathcal {M})\), it holds that

$$\begin{aligned} \textrm{mult}(A,\mathscr {B}(\mathcal {M}))=\textrm{mult}(A,\mathscr {C}(W_1))^2. \end{aligned}$$

In order to check the transversality of a parameterization, one may apply Algorithm 1 presented in Section 5 in [13].

Furthermore, we introduce the notion of multiplicity of the base locus of \({\mathcal {M}}\), denoted \(\textrm{mult}(\mathscr {B}(\mathcal {M}))\), as

$$\begin{aligned} \textrm{mult}(\mathscr {B}(\mathcal {M})):=\sum _{A\in \mathscr {B}({\mathcal {M}})} \textrm{mult}(A,\mathscr {B}(\mathcal {M})) \end{aligned}$$
(2.5)

Note that, since \(\mathscr {B}({\mathcal {M}})\) is either empty or finite, the sum above is well defined.
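To make these notions concrete, the following is a minimal SymPy sketch (ours, purely illustrative; the helper names and the chart-by-chart strategy are our own choices, not part of the paper) that computes the base locus of the parameterization \({{\mathcal {P}}}(\,{{\overline{t}}}\,)= (t_1^3+t_2t_{3}^{2}: t_1^3: t_2t_{3}^{2}:t_3^3)\) used later in Example 1, as the set of common projective zeros of its components.

```python
# A sketch (not the paper's implementation): base locus of
# P = (t1^3 + t2*t3^2 : t1^3 : t2*t3^2 : t3^3), i.e. the common projective
# zeros of its components, computed chart by chart.
from sympy import symbols, gcd_list, solve

t1, t2, t3 = symbols('t1 t2 t3')
p = [t1**3 + t2*t3**2, t1**3, t2*t3**2, t3**3]

# gcd(p1,...,p4) = 1, so the base locus is either empty or finite
assert gcd_list(p) == 1

base_points = set()
for chart in (t1, t2, t3):                        # affine charts chart = 1 covering P^2
    eqs = [q.subs(chart, 1) for q in p]
    if any(q.is_number and q != 0 for q in eqs):  # a non-zero constant equation: no zeros here
        continue
    unknowns = [v for v in (t1, t2, t3) if v != chart]
    for sol in solve(eqs, unknowns, dict=True):
        # representative of the projective point with the chart coordinate set to 1
        base_points.add(tuple(1 if v == chart else sol.get(v, v) for v in (t1, t2, t3)))

print(base_points)   # {(0, 1, 0)}: the only base point is (0:1:0)
```

In particular, \(\mathscr {B}({{\mathcal {P}}})=\{(0:1:0)\}\) for that parameterization.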

3 Theoretical approach

In this section, we prove a characterization of the pairs \(({{\mathcal {Q}}},{{\mathcal {S}}})\) solving the birational reparameterization problem for \({{\mathcal {P}}}\), where \({{\mathcal {P}}}\) is as in (1.1). This result will be crucial for the algorithmic approach. We start with some technical lemmas.

Lemma 1

Let \(({{\mathcal {Q}}},{{\mathcal {S}}})\) be a solution of the birational reparameterization problem. Then, \(\mathscr {F}_g({{\mathcal {S}}})=\mathscr {F}_g({{\mathcal {P}}})\).

Proof

Let \(\overline{\alpha }\in \mathscr {F}_g({{\mathcal {S}}})\). Then, \({{\mathcal {S}}}(\overline{\alpha })={{\mathcal {S}}}({{\overline{h}}})\), where \({{\overline{h}}}=(h_1,h_2)\) is the pair of new variables. So, \({{\mathcal {P}}}(\overline{\alpha })={{\mathcal {Q}}}({{\mathcal {S}}}(\overline{\alpha }))={{\mathcal {Q}}}({{\mathcal {S}}}({{\overline{h}}}))={{\mathcal {P}}}({{\overline{h}}})\), and hence \(\overline{\alpha }\in \mathscr {F}_g({{\mathcal {P}}})\). Conversely, let \(\overline{\beta }\in \mathscr {F}_g({{\mathcal {P}}})\). Then, \({{\mathcal {P}}}(\overline{\beta })={{\mathcal {P}}}({{\overline{h}}})\). Thus, \({{\mathcal {S}}}(\overline{\beta })={{\mathcal {Q}}}^{-1}({{\mathcal {P}}}(\overline{\beta }))= {{\mathcal {Q}}}^{-1}({{\mathcal {P}}}({{\overline{h}}}))={{\mathcal {S}}}({{\overline{h}}})\). So, \(\overline{\beta }\in \mathscr {F}_g({{\mathcal {S}}})\). \(\square \)

Lemma 2

Let \({{\mathcal {S}}}=(s_1:s_2:s_3): \mathbb {P}^{2}(\mathbb {K}) \dashrightarrow \mathbb {P}^{2}(\mathbb {K})\) be a generically finite rational map. Then,

$$\begin{aligned} \left\{ \nabla \left( \dfrac{s_1(t_1,t_2,1)}{ s_3(t_1,t_2,1)}\right) , \nabla \left( \dfrac{s_2(t_1,t_2,1)}{s_3(t_1,t_2,1)}\right) \right\} \end{aligned}$$

are linearly independent as vectors in the \(\mathbb {K}\)-vector space \(\mathbb {K}(\,{{\overline{t}}}\,)^2\).

Proof

Assume, for contradiction, that the gradients are linearly dependent, and let \(\lambda ,\mu \in \mathbb {K}\), not both zero, be such that \(\lambda \nabla s_1+\mu \nabla s_2={\bar{0}}\), where, abusing notation, we write \(s_i\) for \(s_i(t_1,t_2,1)/s_3(t_1,t_2,1)\). Integrating w.r.t. \(t_1\) in \(\lambda \frac{\partial s_1}{\partial t_1}+\mu \frac{\partial s_2}{\partial t_1}=0\), we get that there exists \(g(t_2)\) such that \(\lambda s_1+\mu s_2+g(t_2)=0\). Now, differentiating the previous equality w.r.t. \(t_2\), and taking into account that \(\lambda \frac{\partial s_1}{\partial t_2}+\mu \frac{\partial s_2}{\partial t_2}=0\), we get that \(g(t_2)\) is a constant k. So, \(\lambda s_1+\mu s_2=-k\). But this implies that \((s_1,s_2)\) maps \(\mathbb {K}^2\) into the line \(\lambda x+\mu y=-k\), which is impossible because \({{\mathcal {S}}}\) is generically finite and hence \((s_1,s_2)\) is dominant in \(\mathbb {K}^2\). \(\square \)

Lemma 3

Let \({{\mathcal {S}}}=(s_1:s_2:s_3): \mathbb {P}^{2}(\mathbb {K}) \dashrightarrow \mathbb {P}^{2}(\mathbb {K})\) be a generically finite rational map such that \(\mathscr {F}_g({{\mathcal {P}}})=\mathscr {F}_g({{\mathcal {S}}})\). For \(j\in \{ 1,2,3\}\),

$$\begin{aligned} \Phi _{j}(t_1,t_2):=\left( \dfrac{s_1(t_1,t_2,1)}{s_3(t_1,t_2,1)}, \dfrac{s_2(t_1,t_2,1)}{s_3(t_1,t_2,1)},\dfrac{p_j(t_1,t_2,1)}{p_4(t_1,t_2,1)}\right) \end{aligned}$$

parameterizes an affine surface whose irreducible defining polynomial has degree one w.r.t. z.

Proof

Let \({{\mathcal {S}}}_a\) be the affine version of \({{\mathcal {S}}}\), as in (2.1). We start by proving that \(\mathscr {F}_g({{\mathcal {S}}}_a)=\mathscr {F}_g(\Phi _j)\). Let \(\overline{\alpha }\in \mathscr {F}_g(\Phi _j)\). Then, \(\Phi _j(\overline{\alpha })=\Phi _j({{\overline{h}}})\), where \({{\overline{h}}}=(h_1,h_2)\) is the pair of new variables. In particular \({{\mathcal {S}}}_a(\overline{\alpha })={{\mathcal {S}}}_a({{\overline{h}}})\). So, \(\overline{\alpha }\in \mathscr {F}_g({{\mathcal {S}}}_a)\). Conversely, let \(\overline{\alpha }\in \mathscr {F}_g({{\mathcal {S}}}_a)\). Then, \({{\mathcal {S}}}_a(\overline{\alpha })={{\mathcal {S}}}_a({{\overline{h}}})\). On the other hand, since \(\mathscr {F}_g({{\mathcal {S}}}_a)=\mathscr {F}_g({{\mathcal {P}}}_a)\), then \({{\mathcal {P}}}_a(\overline{\alpha })={{\mathcal {P}}}_a({{\overline{h}}})\). In particular \(p_j(\overline{\alpha })/p_4(\overline{\alpha })=p_j({{\overline{h}}})/p_4({{\overline{h}}})\). So \(\Phi _j(\overline{\alpha })=\Phi _{j}({{\overline{h}}})\) and hence \(\overline{\alpha }\in \mathscr {F}_g(\Phi _j)\). In particular, this implies that \(\textrm{degMap}(\Phi _j)=\textrm{degMap}({{\mathcal {S}}})\) and that \(\Phi _j\) parameterizes a surface. Let \(H_j(x,y,z)\) be its irreducible defining polynomial. Now, applying Lemma 2, we have that \(\Phi _j\) satisfies the hypothesis in [11], page 120; see Remark 1. Applying Theorem 6 in [11], we get that

$$\begin{aligned} \textrm{deg}_z(H_j)=\dfrac{\textrm{degMap}({{\mathcal {S}}})}{\textrm{degMap}(\Phi _j)}=1. \end{aligned}$$

\(\square \)

The next theorem characterizes the solutions of the birational reparameterization problem for \({{\mathcal {P}}}\).

Theorem 1

Let \({{\mathcal {Q}}}\) and \({{\mathcal {S}}}\) be rational maps

$$\begin{aligned} {{\mathcal {Q}}}:\mathbb {P}^{2}(\mathbb {K}) \dashrightarrow \mathscr {S}\subset \mathbb {P}^{3}(\mathbb {K}), \,\, {{\mathcal {S}}}=(s_1:s_2:s_3):\mathbb {P}^{2}(\mathbb {K}) \dashrightarrow \mathbb {P}^{2}(\mathbb {K}), \end{aligned}$$

where \(s_3\ne 0\) and \({{\mathcal {Q}}}({{\mathcal {S}}})={{\mathcal {P}}}\). The following statements are equivalent

  1.

    \({{\mathcal {Q}}}\) and \({{\mathcal {S}}}\) solve the birational reparameterization problem for \({{\mathcal {P}}}\).

  2.

    \(\mathscr {F}_g({{\mathcal {P}}})=\mathscr {F}_g({{\mathcal {S}}})\).

Furthermore, if any of the above two equivalent conditions holds then

  (a)

    For \(j\in \{ 1,2,3\}\),

    $$\begin{aligned} \Phi _{j}(t_1,t_2):=\left( \dfrac{s_1(t_1,t_2,1)}{s_3(t_1,t_2,1)}, \dfrac{s_2(t_1,t_2,1)}{s_3(t_1,t_2,1)},\dfrac{p_j(t_1,t_2,1)}{p_4(t_1,t_2,1)}\right) \end{aligned}$$
    (3.1)

    parameterizes an affine surface defined by an irreducible polynomial of the form

    $$\begin{aligned} H_{j}(x,y,z):=A_{j,1}(x,y)-A_{j,0}(x,y) z, \end{aligned}$$
    (3.2)

    with \(A_{j,0}\) not zero.

  (b)

    \({{\mathcal {Q}}}(\,{{\overline{t}}}\,)\) is the homogenization, with \(t_3\) as homogenization variable, of

    $$\begin{aligned} \mathcal {T}:=\left( \dfrac{A_{1,1}(t_1,t_2)}{A_{1,0}(t_1,t_2)},\dfrac{A_{2,1}(t_1,t_2)}{A_{2,0}(t_1,t_2)},\dfrac{ A_{3,1}(t_1,t_2)}{A_{3,0}(t_1,t_2)}\right) . \end{aligned}$$
    (3.3)

Proof

We argue with affine coordinates; note that by hypothesis \(p_4\ne 0\) and \(s_3\ne 0\). Let \({{\mathcal {P}}}_a\) be the affine version of \({{\mathcal {P}}}\), as in (2.1), and let \({{\mathcal {S}}}_a\) be the affine version of \({{\mathcal {S}}}\), as in (2.1).

Let us see that (2) implies (1). By (2), we know that \({{\mathcal {S}}}\) is generically finite, in fact \(\textrm{degMap}({{\mathcal {S}}})=\textrm{degMap}({{\mathcal {P}}})\). So, \({{\mathcal {S}}}_a\) is dominant in \(\mathbb {K}^2\). Moreover, by Lemma 3, \(\Phi _{j}\) (see (3.1)) defines a surface whose implicit equation has the form of \(H_j\) in (3.2). Since \({{\mathcal {S}}}_a\) is dominant in \(\mathbb {K}^2\), \(A_{j,0}({{\mathcal {S}}}_a)\) is not identically zero and, hence, \(\mathcal {T}({{\mathcal {S}}}_a)\) is well-defined (see (3.3)). Now, using that \(H_j(\Phi _j)=0\), we get that \(\mathcal {T}({{\mathcal {S}}}_a)={{\mathcal {P}}}_a\). Furthermore, since \(\textrm{degMap}\) is multiplicative under composition, we have that \(\textrm{degMap}({{\mathcal {Q}}})=1\), and hence \({{\mathcal {Q}}}\) is a birational parameterization of \(\mathscr {S}\). Thus, (1) holds.

(1) implies (2) follows from Lemma 1.

Note that the second part of the theorem, statements (a) and (b), has been shown to be a consequence of (2). \(\square \)

Let

$$\begin{aligned} \mathscr {U}_d:=\{(s_1,s_2,s_3)\in \mathscr {V}_{d}^{3} \,|\, \textrm{gcd}(s_1,s_2,s_3)=1\}. \end{aligned}$$
(3.4)

Note that every \(\overline{s}\in \mathscr {U}_d\) defines a rational map, namely, \({{\mathcal {S}}}:\mathbb {P}^{2}(\mathbb {K})\dashrightarrow \mathbb {P}^{2}(\mathbb {K}); \,{{\overline{t}}}\,\mapsto \overline{s}(\,{{\overline{t}}}\,)\) with \(\textrm{deg}({{\mathcal {S}}})=d\). Conversely, every rational map \({{\mathcal {S}}}=(s_1:s_2:s_3):\mathbb {P}^{2}(\mathbb {K})\dashrightarrow \mathbb {P}^{2}(\mathbb {K})\), with \(\textrm{deg}({{\mathcal {S}}})=d\), defines an element in \(\mathscr {U}_d\), namely \((s_1,s_2,s_3)\). So, we will identify the rational maps of \(\mathbb {P}^{2}(\mathbb {K})\) with the elements in \(\mathscr {U}_d\) for the suitable d.

Based on the previous result we introduce the following notion.

Definition 1

Let \({{\mathcal {P}}}\) be as in (1.1). We define the birational reparameterization solution space of a fixed degree d, denoted \(\textrm{SolSpace}_d({{\mathcal {P}}})\), as the subset of \(\mathscr {U}_d\) (see (3.4)) consisting of those elements \({{\mathcal {S}}}\) whose associated rational map of \(\mathbb {P}^{2}(\mathbb {K})\) satisfies statement 2 in Theorem 1. Let \(\textrm{SolSpace}({{\mathcal {P}}})\) be the union of all \(\textrm{SolSpace}_d({{\mathcal {P}}})\).

Remark 2

Note that

  1.

    by Castelnuovo’s Theorem, \(\textrm{SolSpace}({{\mathcal {P}}})\ne \emptyset \), and

  2.

    for every \({{\mathcal {S}}}\in \textrm{SolSpace}({{\mathcal {P}}})\) there exists a rational surface parameterization \({{\mathcal {Q}}}\) such that \(({{\mathcal {Q}}},{{\mathcal {S}}})\) solves the reparameterization problem for \({{\mathcal {P}}}\). For deriving \({{\mathcal {Q}}}\) from \({{\mathcal {S}}}\) and \({{\mathcal {P}}}\) see the next section.

4 The computational approach: the general case

The computational strategy will be to determine algorithmically a degree d, as well as \(\textrm{SolSpace}_d({{\mathcal {P}}})\), such that \(\textrm{SolSpace}_d({{\mathcal {P}}})\ne \emptyset \). More precisely, let us assume that for such a d we are able to compute \(\textrm{SolSpace}_d({{\mathcal {P}}})\ne \emptyset \). Then, for \({{\mathcal {S}}}\in \textrm{SolSpace}_d({{\mathcal {P}}})\), we implicitize the surface parameterizations \(\Phi _1, \Phi _2,\Phi _3\) (see Theorem 1 (3.1)) to get \({{\mathcal {Q}}}\) as in Theorem 1 (b). Now, \(({{\mathcal {Q}}},{{\mathcal {S}}})\) is a solution of the birational reparameterization problem for \({{\mathcal {P}}}\).

4.1 On the generic fiber of \({{\mathcal {P}}}\)

In order to compute \(\textrm{SolSpace}_d({{\mathcal {P}}})\) we need to determine \(\textrm{degMap}({{\mathcal {P}}})\) and moreover the generic fiber \(\mathscr {F}_g({{\mathcal {P}}})\). Using Sect. 2.1, we may work affinely. So, let \({{\mathcal {P}}}_a\) be the affine parameterization obtained from \({{\mathcal {P}}}\) as in (2.1). Let \({{\mathcal {P}}}_a(t_1,t_2)\) be expressed as

$$\begin{aligned} {{\mathcal {P}}}_a(t_1,t_2)=\left( \dfrac{P_1(t_1,t_2)}{Q_1(t_1,t_2)},\dfrac{P_2(t_1,t_2)}{Q_2(t_1,t_2)},\dfrac{P_3(t_1,t_2)}{Q_3(t_1,t_2)} \right) \end{aligned}$$
(4.1)

where the rational functions are in reduced form. We show how to describe the points of \(\mathscr {F}_g({{\mathcal {P}}}_a)\). We consider the polynomials

$$\begin{aligned} G_ i=P_i(h_1,h_2) Q_i(t_1,t_2)-P_i(t_1,t_2)Q_i(h_1,h_2), \,\,i\in \{1,2,3\}. \end{aligned}$$
(4.2)

Furthermore, let \(W:=w \cdot \textrm{lcm}(Q_1,Q_2,Q_3)-1\) where w is a new variable. Also, we consider the projection

$$\begin{aligned} \begin{array}{ccc}\pi : \overline{\mathbb {K}(h_1,h_2)}^3 &{} \rightarrow &{} \overline{\mathbb {K}(h_1,h_2)}^2 \\ (t_1,t_2,w) &{}\mapsto &{} (t_1,t_2) \end{array}. \end{aligned}$$

Then, we have the following lemma.

Lemma 4

\(\mathscr {F}_g({{\mathcal {P}}}_a) = \overline{\pi (\mathbb {V}_{\overline{\mathbb {K}(h_1,h_2)}}(G_1,G_2,G_3,W))}\).

Proof

Let \(A:=\textrm{lcm}(Q_1,Q_2,Q_3)\) and \(\mathscr {W}:=\mathbb {V}_{\overline{\mathbb {K}(h_1,h_2)}}(G_1,G_2,G_3,W)\).

Let \(\overline{\alpha }\in \mathscr {F}_g({{\mathcal {P}}}_a)\subset \overline{\mathbb {K}(h_1,h_2)}^2\). Then \({{\mathcal {P}}}_a(\overline{\alpha })={{\mathcal {P}}}_a(h_1,h_2)\). Furthermore, \({{\mathcal {P}}}_a(\overline{\alpha })\) is well defined and hence \(A(\overline{\alpha })\ne 0\). Therefore, \((\overline{\alpha },1/A(\overline{\alpha }))\in \mathscr {W}\). So, \(\pi ((\overline{\alpha },1/A(\overline{\alpha })))=\overline{\alpha }\in \pi (\mathscr {W})\subset \overline{\pi (\mathscr {W})}\).

Conversely, let \((a,b,c)\in \mathscr {W}\). Then \(G_i(h_1,h_2,a,b)=0\) for all \(i\in \{1,2,3\}\), and \(c \,A(a,b)=1\). So, \(A(a,b)\ne 0\). Therefore, \({{\mathcal {P}}}_a(a,b)\) is well defined and \({{\mathcal {P}}}_a(a,b)={{\mathcal {P}}}_a(h_1,h_2)\). Thus, \((a,b)=\pi (a,b,c)\in \mathscr {F}_g({{\mathcal {P}}}_a)\). \(\square \)

Therefore, by the elimination property of Gröbner bases and the Closure Theorem (see e.g. [2] page 125, and [18] page 192), if \(\tilde{\mathscr {K}}\) is a Gröbner basis, w.r.t. the lexicographic ordering with \(t_1<t_2<w\), of the ideal \(\tilde{\textrm{J}}:=<G_1,G_2,G_3,W> \subset \mathbb {K}(h_1,h_2)[t_1,t_2,w]\), and \(\textrm{J}:=\tilde{\mathscr {K}} \cap \mathbb {K}(h_1,h_2)[t_1,t_2]\), then

$$\begin{aligned} \mathbb {V}_{\overline{\mathbb {K}(h_1,h_2)}}(\textrm{J})=\mathscr {F}_g({{\mathcal {P}}}_a). \end{aligned}$$
(4.3)

Furthermore, if \(\tilde{\mathscr {K}}\) is minimal then \(\tilde{\mathscr {K}}=\{k_{1,1}, k_{2,1},\ldots ,k_{2,k_2}, k_{3,1},\ldots ,k_{3,k_3}\}\) where \(k_{1,1}\in \mathbb {K}(h_1,h_2)[t_1]\), \(k_{2,j}\in \mathbb {K}(h_1,h_2)[t_1,t_2]\) and \(k_{3,j}\in \mathbb {K}(h_1,h_2)[t_1,t_2,w]\) (see e.g. [18] page 194). Thus, \(\mathscr {F}_g({{\mathcal {P}}}_a)\) is described by

$$\begin{aligned} \mathscr {K}=\{k_{1,1}, k_{2,1},\ldots ,k_{2,k_2}\}. \end{aligned}$$
(4.4)

That is

$$\begin{aligned} \mathscr {F}_g({{\mathcal {P}}}_a)=\mathbb {V}(\mathscr {K})=\{ \overline{\alpha }\in \overline{\mathbb {K}( {h_1,h_2})}^2 \, | \, k(\overline{\alpha })=0 \,\,\hbox { for}\ k\in \mathscr {K}\}. \end{aligned}$$
(4.5)

On the other hand, since \(\mathscr {F}_g({{\mathcal {P}}}_a)\) is zero-dimensional, we may assume, maybe after a linear change of \(\{t_1,t_2\}\), that \(\textrm{J}\) is in general position w.r.t. \(t_1\) (see [18] page 194). So, if we work with \(\sqrt{\textrm{J}}\), we may apply the Shape Lemma (see [18] page 195). More precisely, the reduced Gröbner basis of \(\sqrt{\textrm{J}}\), w.r.t. the lexicographic order with \(t_1<t_2\), is of the form

$$\begin{aligned} \mathscr {G}:= \{ u(t_1),t_2-v(t_1) \}, \end{aligned}$$
(4.6)

with u square-free and \(\textrm{deg}(v)<\textrm{deg}(u)\). Therefore

$$\begin{aligned} \mathscr {F}_g({{\mathcal {P}}}_a)=\mathbb {V}(\mathscr {G})=\{ \overline{\alpha }:=(\alpha _1,v(\alpha _1))\in \overline{\mathbb {K}( {h_1,h_2})}^2 \, | \, u(\alpha _1)=0 \}. \end{aligned}$$
(4.7)
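For instance, for the parameterization of Example 1 below one gets \(u(t_1)=t_1^{3}-h_1^{3}\) and \(v(t_1)=h_2\), so that \(\mathscr {F}_g({{\mathcal {P}}}_a)=\{(\alpha _1,h_2)\in \overline{\mathbb {K}(h_1,h_2)}^2 \,|\, \alpha _1^{3}=h_1^{3}\}\).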

In order to determine \(\sqrt{\textrm{J}}\) we can, for instance, use Seidenberg’s Lemma (see e.g. [15] or [5]). More precisely, if \(\textrm{J}\cap \mathbb {K}(h_1,h_2) [t_1]=<f(t_1)>\) and \(\textrm{J}\cap \mathbb {K}(h_1,h_2) [t_2]=<g(t_2)>\), then (see (4.5))

$$\begin{aligned} \sqrt{\textrm{J}}=<k_{1,1}, k_{2,1},\ldots ,k_{2,k_2}, {\tilde{f}}, {\tilde{g}}>, \end{aligned}$$
(4.8)

where \({\tilde{f}}= f/\textrm{gcd}(f,f')\) and \({\tilde{g}}= g/\textrm{gcd}(g,g')\).

In [10], the authors introduce two polynomials whose roots describe, respectively, the coordinates of the elements in \(\mathscr {F}_g({{\mathcal {P}}}_a)\), and hence whose degrees provide \(\textrm{degMap}({{\mathcal {P}}})\). More precisely, let us assume that \(\{\nabla (P_1/Q_1),\nabla (P_2/Q_2) \}\) are linearly independent as vectors in the \(\mathbb {K}\)-vector space \(\mathbb {K}(t_1,t_2)^2\); see Remark 1. Observe that this condition can be achieved after a (x, y, z)-affine change of coordinates. In addition, \(\textrm{degMap}({{\mathcal {P}}})\) will not change and \(\mathscr {F}_g({{\mathcal {P}}}_a)\) will be obtained by applying the corresponding inverse of the linear transformation. So, we assume that \({{\mathcal {P}}}_a\) does indeed satisfy the hypothesis. In this situation, let

$$\begin{aligned} \left\{ \begin{array}{l} {\hat{R}}_1(h_1,h_2,t_1)=\textrm{PrimPart}_{\{h_1,h_2\}}(\textrm{Content}_{Z}({\textrm{Res}}_{t_2}(G_1,G_2+Z G_3))) \\ {\hat{R}}_2(h_1,h_2,t_2)=\textrm{PrimPart}_{\{h_1,h_2\}}(\textrm{Content}_{Z}({\textrm{Res}}_{t_1}(G_1,G_2+Z G_3))).\end{array} \right. \end{aligned}$$
(4.9)

We observe that, by computing \(\textrm{PrimPart}_{\{h_1,h_2\}}\) in (4.9), we are avoiding the base points of \({{\mathcal {P}}}(\,{{\overline{t}}}\,)\). So, the polynomials above can be simplified further by also avoiding the base points of \({{\mathcal {P}}}({{\overline{h}}})\). That is, we introduce the polynomials

$$\begin{aligned} \left\{ \begin{array}{l} {R}_1(h_1,h_2,t_1)=\textrm{PrimPart}_{t_1}({\hat{R}}_1) \\ {R}_2(h_1,h_2,t_2)=\textrm{PrimPart}_{t_2}(\hat{R_2}).\end{array} \right. \end{aligned}$$
(4.10)

Then, it holds that

$$\begin{aligned} \textrm{degMap}({{\mathcal {P}}})=\textrm{deg}_{t_1}({R}_1)=\textrm{deg}_{t_2}({R}_2). \end{aligned}$$
(4.11)

Moreover, if \({R}_1, {R}_2\) are considered as polynomials in \(\mathbb {K}(h_1,h_2)[t_1]\) and \(\mathbb {K}(h_1,h_2)[t_2]\), respectively, the roots of \({R}_1\) (resp. of \({R}_2)\) are the first coordinates (resp. second coordinates) of the elements in the fiber. In addition, note that the polynomials f and g involved in (4.8) are \(R_1\) and \(R_2\), respectively.
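As an illustration, the following minimal SymPy sketch (ours; the helper prim_part and the variable names are illustrative choices, and this is not the Maple implementation mentioned in Sect. 4.4) reproduces (4.2) and (4.9)–(4.11) for the affine form \({{\mathcal {P}}}_a=(t_1^3+t_2,\,t_1^3,\,t_2)\) of the parameterization of Example 1 below.

```python
# A sketch of (4.2) and (4.9)-(4.11) for P_a = (t1^3 + t2, t1^3, t2),
# the affine form of the parameterization of Example 1 (Q_1 = Q_2 = Q_3 = 1).
from sympy import symbols, S, resultant, Poly, gcd_list, cancel

t1, t2, h1, h2, Z = symbols('t1 t2 h1 h2 Z')
P = [t1**3 + t2, t1**3, t2]
Q = [S(1), S(1), S(1)]
swap = {t1: h1, t2: h2}

# polynomials G_i of (4.2)
G = [P[i].subs(swap) * Q[i] - P[i] * Q[i].subs(swap) for i in range(3)]

def prim_part(f, main_vars):
    """Primitive part of f viewed as a polynomial in main_vars
    (f divided by the gcd of its coefficients w.r.t. main_vars)."""
    content = gcd_list(Poly(f, *main_vars).coeffs())
    return cancel(f / content)

def content_Z(f):
    # content of f w.r.t. the auxiliary variable Z: gcd of its coefficients in Z
    return gcd_list(Poly(f, Z).coeffs())

# (4.9)
hatR1 = prim_part(content_Z(resultant(G[0], G[1] + Z*G[2], t2)), [h1, h2])
hatR2 = prim_part(content_Z(resultant(G[0], G[1] + Z*G[2], t1)), [h1, h2])

# (4.10): also remove the factors coming from the base points of P(h)
R1 = prim_part(hatR1, [t1])
R2 = prim_part(hatR2, [t2])

print(R1, R2)                                        # h1^3 - t1^3 and (h2 - t2)^3, up to sign
print(Poly(R1, t1).degree(), Poly(R2, t2).degree())  # 3 3, i.e. degMap(P) = 3, as in (4.11)
```

Up to a constant factor, the square-free part of \(R_1\) divided by \(t_1-h_1\) is the polynomial \(u^*(t_1)=h_1^{2}+h_1t_1+t_1^{2}\) obtained in Example 1.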

4.2 On the generic fiber of \({{\mathcal {S}}}\)

The reasonings in Sect. 4.1 can be analogously performed to compute the generic fiber of rational maps from \(\mathbb {P}^2(\mathbb {K})\) onto \(\mathbb {P}^2(\mathbb {K})\). In the sequel, we see how this can be reduced to the case in Sect. 4.1. More precisely, let \({{\mathcal {S}}}=(s_1:s_2:s_3): \mathbb {P}^2(\mathbb {K}) \dashrightarrow \mathbb {P}^2(\mathbb {K})\) be a generically finite rational map, where we assume w.l.o.g. that \(s_3\) is not zero. We associate to \({{\mathcal {S}}}\) the map

$$\begin{aligned} \begin{array}{lccc} \tilde{{{\mathcal {S}}}}: &{} \mathbb {P}^2(\mathbb {K}) &{} \dashrightarrow &{} \mathbb {P}^{3}(\mathbb {K}) \\ &{} \,{{\overline{t}}}\,&{}\longmapsto &{} (s_1(\,{{\overline{t}}}\,):s_2(\,{{\overline{t}}}\,):s_3(\,{{\overline{t}}}\,): s_3(\,{{\overline{t}}}\,)). \end{array} \end{aligned}$$

Observe that \(\tilde{{{\mathcal {S}}}}(\mathbb {P}^2(\mathbb {K}))\) is dense in \(\{(x_1:\cdots :x_4)\in \mathbb {P}^3(\mathbb {K}) \,|\, x_3=x_4\}\). So, \(\tilde{{{\mathcal {S}}}}\) parameterizes the plane \(x_3=x_4\). Let (see (2.1))

$$\begin{aligned} \tilde{{{\mathcal {S}}}}_a=\left( \dfrac{A_1(t_1,t_2)}{B_1(t_1,t_2)},\dfrac{A_2(t_1,t_2)}{B_2(t_1,t_2)}, 1\right) , \end{aligned}$$

where the fractions are in reduced form. Let us check that \(\tilde{{{\mathcal {S}}}}_a\) satisfies the hypotheses in Sect. 1.2. in [10]; see Remark 1. We need to ensure the hypothesis requiring that \(\{\nabla (A_1/B_1),\nabla (A_2/B_2)\}\) are linearly independent as vectors in the \(\mathbb {K}\)-vector space \(\mathbb {K}(t_1,t_2)^2\). However, if they were linearly dependent, reasoning as in the proof of Lemma 2, we would get that the image of \({{\mathcal {S}}}\) would be included in a line, contradicting the fact that \({{\mathcal {S}}}(\mathbb {P}^2(\mathbb {K}))\) is dense in \(\mathbb {P}^2(\mathbb {K})\). In this situation, we consider the polynomials

$$\begin{aligned} {\tilde{G}}_i:= A_i(h_1,h_2)B_i(t_1,t_2)-A_i(t_1,t_2)B_i(h_1,h_2),\,\,\,i\in \{1,2\}. \end{aligned}$$
(4.12)

and

$$\begin{aligned} \left\{ \begin{array}{l} {\tilde{R}}_1=\textrm{PrimPart}_{t_1}(\textrm{PrimPart}_{\{h_1,h_2\}}({\textrm{Res}}_{t_2}({\tilde{G}}_1,{\tilde{G}}_2))),\\ {\tilde{R}}_2=\textrm{PrimPart}_{t_2}(\textrm{PrimPart}_{\{h_1,h_2\}}({\textrm{Res}}_{t_1}({\tilde{G}}_1,{\tilde{G}}_2))). \end{array}\right. \end{aligned}$$
(4.13)

Then

$$\begin{aligned} \textrm{degMap}({{\mathcal {S}}})=\textrm{deg}_{t_1}({\tilde{R}}_1)=\textrm{deg}_{t_2}({\tilde{R}}_2). \end{aligned}$$
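For instance, for \({{\mathcal {S}}}=(t_1^2:t_2^2:t_3^2)\) one gets \({\tilde{G}}_1=h_1^2-t_1^2\), \({\tilde{G}}_2=h_2^2-t_2^2\), \({\tilde{R}}_1=(h_1^2-t_1^2)^2\) and \({\tilde{R}}_2=(h_2^2-t_2^2)^2\), so that \(\textrm{degMap}({{\mathcal {S}}})=4\); indeed, a generic point \((a:b:c)\) has exactly the four preimages \((\pm \sqrt{a}:\pm \sqrt{b}:\sqrt{c})\).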

4.3 On the degree of \({{\mathcal {S}}}\)

The first problem that we find, in order to determine the solution space, is to know a degree d such that \(\textrm{SolSpace}_{d}({{\mathcal {P}}})\ne \emptyset \).

Lemma 5

Let \({{\mathcal {S}}}\in \textrm{SolSpace}({{\mathcal {P}}})\) (see Remark 2 (2)) then

$$\begin{aligned} \textrm{deg}({{\mathcal {S}}})=\sqrt{\textrm{degMap}({{\mathcal {P}}})+\textrm{mult}(\mathscr {B}({{\mathcal {S}}}))}\ge \lceil \sqrt{\textrm{degMap}({{\mathcal {P}}})} \rceil . \end{aligned}$$

Proof

By Theorem 7 (1) in [3], \(\textrm{deg}({{\mathcal {S}}})^2=\textrm{degMap}({{\mathcal {S}}})+\textrm{mult}(\mathscr {B}({{\mathcal {S}}}))\). Since \({{\mathcal {S}}}\in \textrm{SolSpace}({{\mathcal {P}}})\), by Remark 2 (2), there exists \({{\mathcal {Q}}}\) proper such that \({{\mathcal {Q}}}({{\mathcal {S}}})={{\mathcal {P}}}\). So, \(\textrm{degMap}({{\mathcal {S}}})=\textrm{degMap}({{\mathcal {P}}})\) and hence the statement follows. \(\square \)

Observe that the minimal expected degree of an element in \(\textrm{SolSpace}({{\mathcal {P}}})\) is \(\lceil \sqrt{\textrm{degMap}({{\mathcal {P}}})}\rceil \), which is attained when \(\mathscr {B}({{\mathcal {S}}})=\emptyset \). In the next lemma, we analyze the case where the birational parameterization \({{\mathcal {Q}}}\) has empty base locus.
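For instance, for the parameterization of Example 1 below one has \(\textrm{degMap}({{\mathcal {P}}})=3\), so every \({{\mathcal {S}}}\in \textrm{SolSpace}({{\mathcal {P}}})\) has degree at least \(\lceil \sqrt{3}\,\rceil =2\); accordingly, the search in Example 1 starts with \(d=2\).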

Lemma 6

Let us assume that \(\mathscr {S}\) admits a birational parameterization without base points. Then, there exists \({{\mathcal {S}}}\in \textrm{SolSpace}({{\mathcal {P}}})\) such that

$$\begin{aligned} \textrm{deg}({{\mathcal {S}}})=\dfrac{\textrm{deg}({{\mathcal {P}}})}{\sqrt{\textrm{deg}(\mathscr {S})}}. \end{aligned}$$

Proof

Let \({{\mathcal {Q}}}\) be a birational parameterization of \(\mathscr {S}\) such that \(\mathscr {B}({{\mathcal {Q}}})=\emptyset \). Then, \(({{\mathcal {Q}}},{{\mathcal {Q}}}^{-1}\circ {{\mathcal {P}}})\) solves the birational reparameterization problem. So, by Theorem 1, \({{\mathcal {S}}}:={{\mathcal {Q}}}^{-1}\circ {{\mathcal {P}}}\in \textrm{SolSpace}({{\mathcal {P}}})\). Moreover, by Corollary 10 in [3], \(\textrm{deg}({{\mathcal {P}}})=\textrm{deg}({{\mathcal {Q}}})\textrm{deg}({{\mathcal {S}}})\). Now, by Theorem 3 in [3] applied to \({{\mathcal {Q}}}\), \(\textrm{deg}({{\mathcal {Q}}})=\sqrt{\textrm{deg}(\mathscr {S})}\). This concludes the proof. \(\square \)

4.4 General algorithm

In this subsection, combining all previous results and ideas, the general algorithm is derived. For this purpose, let \(\mathscr {V}_d\) be as in (1.2).

In the sequel, let us denote by

$$\begin{aligned} E_d(\Lambda ,\,{{\overline{t}}}\,)\in \mathbb {K}[\Lambda ,\,{{\overline{t}}}\,] \end{aligned}$$
(4.14)

the generic \(\,{{\overline{t}}}\,\)–homogeneous polynomial in \(\mathscr {V}_d\), where \(\Lambda \) is the tuple of undetermined coefficients.

First, we outline the auxiliary Algorithm 1 that computes the set \(\textrm{SolSpace}_d\). In the sequel, we describe the basic ideas of our approach. We will see that \(\textrm{SolSpace}_d\) is an open subset, maybe empty, of a closed subset \(\mathscr {W}\) of \(\mathscr {U}_d\) (see (3.4)). For this purpose, let

$$\begin{aligned} E_{d}^{i}:=E_d(\Lambda _{i},\,{{\overline{t}}}\,), \,\, i\in \{1,2,3\} \end{aligned}$$
(4.15)

where \(\Lambda _{1},\Lambda _2,\Lambda _{3}\) are different tuples of undetermined coefficients; we will use the notation \(\overline{\Lambda }:= (\Lambda _1,\Lambda _2,\Lambda _3)\). We will find finite sets \(U_1,\ldots ,U_r\subset \mathbb {K}[\Lambda _1,\Lambda _2,\Lambda _3]\) and a variety \(\mathscr {W}\subset \mathscr {U}_{d}\) such that \(\textrm{SolSpace}_d({{\mathcal {P}}})=\mathscr {W}{\setminus } \cup _{i=1}^{r}\mathbb {V}(U_i)\). The reasoning is as follows. \(\mathcal {E}:=(E_{d}^{1}:E_{d}^{2}:E_{d}^{3})\) defines a generic rational map from \(\mathbb {P}^2(\mathbb {K})\) into \(\mathbb {P}^2(\mathbb {K})\). We look for those \(\mathcal {E}\) such that \(\mathscr {F}_g(\mathcal {E})=\mathscr {F}_g({{\mathcal {P}}})\). To achieve this, we will ensure that \(\mathscr {F}_g({{\mathcal {P}}})\subset \mathscr {F}_g(\mathcal {E})\) and that \(\textrm{degMap}({{\mathcal {P}}})=\textrm{degMap}(\mathcal {E})\).

First let \(U_1\) contain the parameters in \(\Lambda _3\); this guarantees that for \(\mathcal {E}\in \mathscr {U}_{d}{\setminus } \mathbb {V}(U_1)\), \(E_{d}^{3}\) is not zero. We need to impose first that \(\mathscr {F}_g({{\mathcal {P}}})\subset \mathscr {F}_g(\mathcal {E})\). But, since \(E_{d}^{3}\ne 0\), according to Sect. 2.1, it is enough to ask that \(\mathscr {F}_g({{\mathcal {P}}}_a)\subset \mathscr {F}_g(\mathcal {E}_a)\), where \(e_{d}^{i}:=E_{d}^{i}(\Lambda _{i},t_1,t_2,1)\) and

$$\begin{aligned} \mathcal {E}_a:=\left( \dfrac{e_{d}^{1}}{e_{d}^{3}}, \dfrac{e_{d}^{2}}{e_{d}^{3}}\right) :\mathbb {A}^{2}(\mathbb {K})\dashrightarrow \mathbb {A}^{2}(\mathbb {K}). \end{aligned}$$
(4.16)

Let \(U_2\) be the set of conditions on the parameters \(\overline{\Lambda }\) under which the rank of the Jacobian matrix of \(\mathcal {E}_a\) is smaller than 2; this guarantees that for \(\mathcal {E}\in \mathscr {U}_{d}{\setminus } \cup _{i=1}^{2} \mathbb {V}(U_i)\) the hypothesis in Sect. 1.2. in [10] on the linear independence is satisfied (see Remark 1) and that \(\mathcal {E}_a\) is dominant. We observe that the denominator of the Jacobian determinant is a power of \(e_{d}^{3}\) and, because of \(U_1\), the Jacobian determinant is always well-defined. Thus, in \(U_2\), we only need to collect the coefficients w.r.t. \(\{t_1,t_2\}\) of the numerator of the Jacobian determinant. Now, we introduce the polynomials (compare to (4.12))

$$\begin{aligned} {\tilde{G}}_{i}^{\mathcal {E}}:=e_{d}^{i}(\Lambda _{i},h_1,h_2)e_{d}^{3}(\Lambda _{3},t_1,t_2)- e_{d}^{i}(\Lambda _{i},t_1,t_2)e_{d}^{3}(\Lambda _{3},h_1,h_2),\,\,i\in \{1,2\},\nonumber \\ \end{aligned}$$
(4.17)

and we require that \({\tilde{G}}_{i}^{\mathcal {E}}(\Lambda _{i},\Lambda _{3},h_1,h_2,\overline{\alpha })=0\), \(i\in \{1,2\}\), for every \(\overline{\alpha }\in \mathscr {F}_g({{\mathcal {P}}}_a)\). We observe that \((h_1,h_2)\in \mathscr {F}_g({{\mathcal {P}}}_a)\) and that \({\tilde{G}}_{i}^{\mathcal {E}}(\Lambda _{i},\Lambda _{3},h_1,h_2,h_1,h_2)=0\). So, we may work with \(\mathscr {F}_g({{\mathcal {P}}}_a){\setminus } \{(h_1,h_2)\}\). Furthermore, \(\mathscr {F}_g({{\mathcal {P}}}_a)=\mathbb {V}(\mathscr {G})\), see (4.7), where \(\mathscr {G}\) is as in (4.6). Therefore, let us replace the univariate polynomial \(u(t_1)\) of \(\mathscr {G}\), that is, the square-free part of \(R_1\) (see (4.10)), by

$$\begin{aligned} u^{*}(t_1):=u(t_1)/(t_1-h_1) \end{aligned}$$

in \(\mathscr {G}\); let

$$\begin{aligned} \mathscr {G}^*:=\{u^*(t_1),t_2-v(t_1)\} \end{aligned}$$
(4.18)

be the resulting set. Then, \(\mathscr {G}^*\) is still a Gröbner basis and

$$\begin{aligned} \mathscr {F}_g({{\mathcal {P}}}_a){\setminus } \{(h_1,h_2)\}=\mathbb {V}(\mathscr {G}^*). \end{aligned}$$
(4.19)

Thus, to achieve the condition above, one has to require the normal form \(N_i\) of \({\tilde{G}}_{i}^{\mathcal {E}}, i\in \{1,2\}\), w.r.t. \(\mathscr {G}^*\) to be zero. We have the following result.

Lemma 7

Let \(N_i\) be the normal form of \({\tilde{G}}_{i}^{\mathcal {E}}, i\in \{1,2\}\), w.r.t. \(\mathscr {G}^*\). It holds that

  1.

    \(N_i, i\in \{1,2\}\), is the remainder of the division of \({\tilde{G}}_{i}^{\mathcal {E}}(t_1,v(t_1))\) by \(u^*(t_1)\) w.r.t. \(t_1\).

  2.

For \(\mathcal {E}\in \mathscr {V}_{d}^{3}{{\setminus }} \cup _{i=1}^{2} \mathbb {V}(U_i)\), \(N_1\) and \(N_2\) specialize properly.

Proof

It follows from the special form of the Gröbner basis \(\mathscr {G}^*\), and from the fact that the polynomials in \(\mathscr {G}^*\) do not depend on \(\Lambda _i\). \(\square \)

If \( {N_1}\) is not zero and does not depend on \(\Lambda _{1},\Lambda _{3}\), then there exists no element \(\mathcal {E}\in \mathscr {U}_{d}{{\setminus }} \cup _{i=1}^{2} \mathbb {V}(U_i)\) satisfying Theorem 1 (2). So, let us assume that \( {N_1}\) does depend on \(\Lambda _{1},\Lambda _3\). The condition \(N_1=0\) provides a finite set \(C_{1,3}(\Lambda _1,\Lambda _3)\) of polynomials in the parameters \(\Lambda _{1},\Lambda _{3}\), namely the non-zero coefficients w.r.t. \(\{h_1,h_2\}\) of the coefficients w.r.t. \(\{t_1,t_2\}\) of \(N_1\). Let \(C_{2,3}(\Lambda _2,\Lambda _3)\) be the corresponding set for \(N_2\). By symmetry, \(C_{2,3}(\Lambda _2,\Lambda _3)= C_{1,3}(\Lambda _2,\Lambda _3)\).

As a consequence, we have the following result.

Lemma 8

Let \(C_{1,3}\) and \(U_i\) be as above. For every \((E_1,E_3),(E_2,E_3)\in \mathbb {V}(C_{1,3})\) such that \(\mathcal {E}:=(E_1:E_2:E_3)\not \in \cup _{i=1}^{2} \mathbb {V}(U_i)\) it holds that \(\mathscr {F}_g({{\mathcal {P}}})\subset \mathscr {F}_g(\mathcal {E})\).

We observe that the polynomials in \(C_{1,3}\) are easy to handle.

Lemma 9

The polynomials in \(C_{1,3}\) are bilinear forms in the undetermined parameters \(\Lambda _{1}\) and \(\Lambda _{3}\).

Proof

The result follows taking into account the reduction process for computing the normal form, that \({\tilde{G}}_{1}^{\mathcal {E}}\) is bilinear in the parameters \(\Lambda _{1},\Lambda _{3}\), and that no polynomial in \(\mathscr {G}^*\) depends on \(\Lambda _{1},\Lambda _{3}\). \(\square \)

In this situation, solving the algebraic system of bilinear forms \(\{h(\Lambda _{1},\Lambda _{3})=0\}_{h\in C_{1,3}}\) yields, in general, a finite set of (parametric) solutions which provides a description, by means of generic rational elements, of the irreducible components of \(\mathbb {V}(C_{1,3})\). In this case, the denominators of each generic point will be included in the set \(U_2\); equivalently, we add the condition requiring that the projective point does not collapse to the zero tuple. Now, for every generic solution \((E_1,E_3)\) of \(\mathbb {V}(C_{1,3})\), let \(\,{{\overline{v}}}\,=(E_1:E_2:E_3)\) where \(E_2\) is the result of substituting in \(E_1\) the parameters \(\Lambda _1\) by \(\Lambda _2\). We denote by \(\mathcal {V}_{1,2,3}\) the set of all points constructed in this way.

Let \(\,{{\overline{v}}}\,\in \mathcal {V}_{1,2,3}\). We look for the conditions such that \(\textrm{degMap}({{\mathcal {P}}})=\textrm{degMap}(\mathcal {E})\). For this we use Sect. 4.2. More precisely, let (see (4.17))

$$\begin{aligned} \breve{G}_{i}^{\mathcal {E}}:={\tilde{G}}_{i}^{\mathcal {E}}(\,{{\overline{v}}}\,,h_1,h_2,t_1,t_2),\,\,i\in \{1,2\}, \end{aligned}$$
(4.20)

and

$$\begin{aligned} \breve{R}_{1}^{\mathcal {E}}=\textrm{PrimPart}_{\{t_1,t_2\}}(\textrm{PrimPart}_{\{h_1,h_2\}} ({\textrm{Res}}_{t_2}(\breve{G}_{1}^{\mathcal {E}},\breve{G}_{2}^{\mathcal {E}}))). \end{aligned}$$
(4.21)

Let \(\breve{R}_{1}^{\mathcal {E}}\) be expressed as

$$\begin{aligned} \breve{R}_{1}^{\mathcal {E}}=\sum c_{i}(\Lambda _1,\Lambda _{2},\Lambda _{3},h_1,h_2)\, t_1^i. \end{aligned}$$
(4.22)

Since \(\textrm{degMap}(\mathcal {E})=\textrm{deg}_{t_1}(\breve{R}_{1}^{\mathcal {E}})\), let \(C^*\) be the set of all non-zero coefficients of the \(c_{i}\) w.r.t. \(\{h_1,h_2\}\) for \(i>\textrm{degMap}({{\mathcal {P}}})\). In addition, let \(U_3\) be the set of all non-zero coefficients of \(c_{\textrm{degMap}({{\mathcal {P}}})}\) w.r.t. \(\{h_1,h_2\}\). Then, we have the following result.

Theorem 2

Let \(C^*\) and \(U_i\) be as above. For every \(\mathcal {E}\in \mathbb {V}(C^*){{\setminus }} \cup _{i=1}^{3} \mathbb {V}(U_i)\) it holds that \(\mathscr {F}_g({{\mathcal {P}}})= \mathscr {F}_g(\mathcal {E})\).

In the following algorithm, we summarize all these ideas.

[Algorithm 1: computation of \(\textrm{SolSpace}_d({{\mathcal {P}}})\)]

Finally, the general algorithm is derived. We assume that the algorithm in [8] has been applied and has not provided an answer to the problem.

[Algorithm 2: general algorithm for the birational reparameterization of \({{\mathcal {P}}}\)]

We finish this section with an example where we illustrate how the general algorithm works. The interested reader may find, at https://jct.web.uah.es/research.html (Software section below), a Maple 2021 sheet where all details on the computation of this example can be followed.

Example 1

Let \({{\mathcal {P}}}(\,{{\overline{t}}}\,)= (t_1^3+t_2t_{3}^{2}: t_1^3: t_2t_{3}^{2}:t_3^3)\) be the input of Algorithm 2; that is, it is a rational projective parameterization of an algebraic surface \(\mathscr {S}\) as in (1.1). The parameterization is quite simple and one could implicitize and compute a proper rational parameterization. Nevertheless, let us use this parameterization to illustrate carefully Algorithm 2 and Algorithm 1. For this purpose, in Steps 1 and 2 of Algorithm 2, we first compute \(\ell =\sqrt{\textrm{degMap}({{\mathcal {P}}})}=\sqrt{3}\) and the set of polynomials

$$\begin{aligned} \mathscr {G}^*:=\{ h_{1}^{2}+h_{1} t_{1}+t_{1}^{2}, t_2-h_2\} \end{aligned}$$

as in (4.18). Thus, \(u^*(t_1)= h_{1}^{2}+h_{1} t_{1}+t_{1}^{2}\) and \(v(t_1)=h_2\).

The for-loop in Step 3 starts with \(d=2\). We apply Algorithm 1 to \(\mathscr {G}^*\) and \(d=2\):

Algorithm 1 starts.

Step 1. \(C=U_1=U_2=\emptyset \).

Step 2. \(E_2^i(\Lambda _i, \,{{\overline{t}}}\,):= \lambda _{i,1}t_1^2+\lambda _{i,2}t_2^2+\lambda _{i,3}t_1t_2+\lambda _{i,4}t_1t_3+\lambda _{i,5}t_2t_3+\lambda _{i,6}t_3^2,\,i\in \{1,2,3\}\).

Steps 3–6. \(U_1=\{\lambda _{3,1},\ldots ,\lambda _{3,6}\}\). For determining \(U_2\), let

$$\begin{aligned} \mathcal {E}_a=\left( \frac{e_2^1}{e_2^3},\frac{e_2^2}{e_2^3}\right) , \end{aligned}$$

where \(e_2^i=E_2^i(\Lambda _i, t_1, t_2, 1)\) for \(i=1,2,3\). Let T be the determinant of the Jacobian matrix of \(\mathcal {E}_a\) w.r.t. \(\{t_1,t_2\}\). Then, \(U_2\) collects the coefficients of the numerator of T w.r.t. \(\{t_1,t_2\}\). In this case, \(U_2\) consists of ten 3-linear polynomials in the \(\lambda _{1,i}, \lambda _{2,j}, \lambda _{3,k}\).
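The following minimal SymPy sketch (ours; all names are illustrative and not part of the paper's implementation) reproduces these steps for \(d=2\): it builds the dehomogenized generic forms \(e_2^i\), the Jacobian determinant of \(\mathcal {E}_a\), and collects the coefficients of its numerator.

```python
# A sketch of Steps 3-6 for d = 2: the sets U_1 and U_2 for the generic map
# E = (E_2^1 : E_2^2 : E_2^3).
from sympy import symbols, Matrix, cancel, fraction, Poly

t1, t2 = symbols('t1 t2')
lam = {(i, j): symbols(f'lambda_{i}_{j}') for i in (1, 2, 3) for j in range(1, 7)}

def e2(i):
    # E_2^i(Lambda_i, t1, t2, 1): generic form of degree 2, dehomogenized with t3 = 1
    return (lam[(i, 1)]*t1**2 + lam[(i, 2)]*t2**2 + lam[(i, 3)]*t1*t2
            + lam[(i, 4)]*t1 + lam[(i, 5)]*t2 + lam[(i, 6)])

U1 = [lam[(3, j)] for j in range(1, 7)]        # E_2^3 must not be identically zero

Ea = Matrix([e2(1)/e2(3), e2(2)/e2(3)])        # affine map E_a of (4.16)
T = Ea.jacobian(Matrix([t1, t2])).det()        # Jacobian determinant
num, den = fraction(cancel(T))                 # the denominator is a power of e_2^3
U2 = Poly(num, t1, t2).coeffs()                # coefficients of the numerator w.r.t. {t1, t2}
print(len(U1), len(U2))                        # 6 10, in accordance with the text
```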

Step 7. Let \({\tilde{G}}_i^{\mathcal {E}}=e_2^i(\Lambda _i, h_1, h_2)e_2^3(\Lambda _3, t_1, t_2)-e_2^3(\Lambda _3, h_1, h_2)e_2^i(\Lambda _i, t_1, t_2),\,i=1,2.\)

Step 8. We get that the normal form \(N_1\) of \({\tilde{G}}_1^{{\mathcal {E}}}\) w.r.t. \(\mathscr {G}^{*}\) is

\(N_1=-h_{1}^{3} h_{2} \lambda _{1,1} \lambda _{3,3}+h_{1}^{3} h_{2} \lambda _{1,3} \lambda _{3,1}-2 h_{1}^{2} h_{2}^{2} \lambda _{1,1} \lambda _{3,2}+2 h_{1}^{2} h_{2}^{2} \lambda _{1,2} \lambda _{3,1}-2 h_{1}^{2} h_{2} \lambda _{1,1} \lambda _{3,3} t_{1}+2 h_{1}^{2} h_{2} \lambda _{1,3} \lambda _{3,1} t_{1}+h_{1} h_{2}^{3} \lambda _{1,2} \lambda _{3,3}-h_{1} h_{2}^{3} \lambda _{1,3} \lambda _{3,2}-h_{1} h_{2}^{2} \lambda _{1,1} \lambda _{3,2} t_{1}+h_{1} h_{2}^{2} \lambda _{1,2} \lambda _{3,1} t_{1}-h_{2}^{3} \lambda _{1,2} \lambda _{3,3} t_{1}+h_{2}^{3} \lambda _{1,3} \lambda _{3,2} t_{1}-h_{1}^{3} \lambda _{1,1} \lambda _{3,4}+h_{1}^{3} \lambda _{1,4} \lambda _{3,1}-2 h_{1}^{2} h_{2} \lambda _{1,1} \lambda _{3,5}+2 h_{1}^{2} h_{2} \lambda _{1,5} \lambda _{3,1}-2 h_{1}^{2} \lambda _{1,1} \lambda _{3,4} t_{1}+2 h_{1}^{2} \lambda _{1,4} \lambda _{3,1} t_{1}+h_{1} h_{2}^{2} \lambda _{1,2} \lambda _{3,4}-h_{1} h_{2}^{2} \lambda _{1,3} \lambda _{3,5}-h_{1} h_{2}^{2} \lambda _{1,4} \lambda _{3,2}+h_{1} h_{2}^{2} \lambda _{1,5} \lambda _{3,3}-h_{1} h_{2} \lambda _{1,1} \lambda _{3,5} t_{1}+h_{1} h_{2} \lambda _{1,5} \lambda _{3,1} t_{1}-h_{2}^{2} \lambda _{1,2} \lambda _{3,4} t_{1}+h_{2}^{2} \lambda _{1,3} \lambda _{3,5} t_{1}+h_{2}^{2} \lambda _{1,4} \lambda _{3,2} t_{1}-h_{2}^{2} \lambda _{1,5} \lambda _{3,3} t_{1}-2 h_{1}^{2} \lambda _{1,1} \lambda _{3,6}+2 h_{1}^{2} \lambda _{1,6} \lambda _{3,1}-h_{1} h_{2} \lambda _{1,3} \lambda _{3,6}-h_{1} h_{2} \lambda _{1,4} \lambda _{3,5}+h_{1} h_{2} \lambda _{1,5} \lambda _{3,4}+h_{1} h_{2} \lambda _{1,6} \lambda _{3,3}-h_{1} \lambda _{1,1} \lambda _{3,6} t_{1}+h_{1} \lambda _{1,6} \lambda _{3,1} t_{1}+h_{2} \lambda _{1,3} \lambda _{3,6} t_{1}+h_{2} \lambda _{1,4} \lambda _{3,5} t_{1}-h_{2} \lambda _{1,5} \lambda _{3,4} t_{1}-h_{2} \lambda _{1,6} \lambda _{3,3} t_{1}-h_{1} \lambda _{1,4} \lambda _{3,6}+h_{1} \lambda _{1,6} \lambda _{3,4}+\lambda _{1,4} \lambda _{3,6} t_{1}-\lambda _{1,6} \lambda _{3,4} t_{1} \)
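Following Lemma 7 (1), this normal form can be reproduced by substituting \(t_2=v(t_1)=h_2\) in \({\tilde{G}}_1^{\mathcal {E}}\) and taking the remainder of the division by \(u^*(t_1)\) w.r.t. \(t_1\); a minimal SymPy sketch (ours; illustrative names) is the following.

```python
# A sketch of Step 8: the normal form N_1 of G~_1^E w.r.t. G* = {u*(t1), t2 - h2},
# computed, as in Lemma 7 (1), as the remainder of dividing G~_1^E(t1, h2) by u*(t1).
from sympy import symbols, rem, expand, Poly

t1, t2, h1, h2 = symbols('t1 t2 h1 h2')
lam = {(i, j): symbols(f'lambda_{i}_{j}') for i in (1, 3) for j in range(1, 7)}

def e2(i):
    # generic quadratic E_2^i dehomogenized with t3 = 1
    return (lam[(i, 1)]*t1**2 + lam[(i, 2)]*t2**2 + lam[(i, 3)]*t1*t2
            + lam[(i, 4)]*t1 + lam[(i, 5)]*t2 + lam[(i, 6)])

G1E = e2(1).subs({t1: h1, t2: h2}) * e2(3) - e2(1) * e2(3).subs({t1: h1, t2: h2})
ustar = t1**2 + h1*t1 + h1**2                  # u*(t1) from Steps 1-2 of Algorithm 2

# remainder of the division w.r.t. t1; it should agree, up to term ordering, with N_1 above
N1 = rem(expand(G1E.subs(t2, h2)), ustar, t1)

# Step 12: the non-zero coefficients w.r.t. {h1, h2} of the coefficients w.r.t. t1 of N_1
C13 = {c for coeff in Poly(N1, t1).coeffs() for c in Poly(coeff, h1, h2).coeffs()}
```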

Step 9–11. Since \(N_1 \not \in {\mathbb {K}}(h_1, h_2)[t_1, t_2]{{\setminus }}\{0\}\), we continue in Step 12.

Step 12. We have that

\(C_{1,3}=\{-2 \lambda _{1,1} \lambda _{3,2}+2 \lambda _{1,2} \lambda _{3,1}, -\lambda _{1,1} \lambda _{3,2}+\lambda _{1,2} \lambda _{3,1}, -2 \lambda _{1,1} \lambda _{3,3}+2 \lambda _{1,3} \lambda _{3,1}, -\lambda _{1,1} \lambda _{3,3}+\lambda _{1,3} \lambda _{3,1}, -2 \lambda _{1,1} \lambda _{3,4}+2 \lambda _{1,4} \lambda _{3,1}, -\lambda _{1,1} \lambda _{3,4}+\lambda _{1,4} \lambda _{3,1}, -2 \lambda _{1,1} \lambda _{3,5}+2 \lambda _{1,5} \lambda _{3,1}, -\lambda _{1,1} \lambda _{3,5}+\lambda _{1,5} \lambda _{3,1}, -2 \lambda _{1,1} \lambda _{3,6}+2 \lambda _{1,6} \lambda _{3,1}, -\lambda _{1,1} \lambda _{3,6}+\lambda _{1,6} \lambda _{3,1}, -\lambda _{1,2} \lambda _{3,3}+\lambda _{1,3} \lambda _{3,2}, \lambda _{1,2} \lambda _{3,3}-\lambda _{1,3} \lambda _{3,2}, -\lambda _{1,4} \lambda _{3,6}+\lambda _{1,6} \lambda _{3,4}, \lambda _{1,4} \lambda _{3,6}-\lambda _{1,6} \lambda _{3,4}, -\lambda _{1,2} \lambda _{3,4}+\lambda _{1,3} \lambda _{3,5}+\lambda _{1,4} \lambda _{3,2}-\lambda _{1,5} \lambda _{3,3}, \lambda _{1,2} \lambda _{3,4}-\lambda _{1,3} \lambda _{3,5}-\lambda _{1,4} \lambda _{3,2}+\lambda _{1,5} \lambda _{3,3}, -\lambda _{1,3} \lambda _{3,6}-\lambda _{1,4} \lambda _{3,5}+\lambda _{1,5} \lambda _{3,4}+\lambda _{1,6} \lambda _{3,3}, \lambda _{1,3} \lambda _{3,6}+\lambda _{1,4} \lambda _{3,5}-\lambda _{1,5} \lambda _{3,4}-\lambda _{1,6} \lambda _{3,3} \}\)

is the set of non-zero coefficients w.r.t. \(\{h_1, h_2\}\) of the coefficients w.r.t. \(\{t_1, t_2\}\) of \(N_1\).

Step 13. We solve the system of bilinear forms \(C_{1,3}\). We get \(\mathcal {V}\) as the set of solutions

\( \{\{\lambda _{1,1} = 0, \lambda _{1,2} = \lambda _{1,2}, \lambda _{1,3} = 0, \lambda _{1,4} = 0, \lambda _{1,5} = \lambda _{1,5}, \lambda _{1,6} = \lambda _{1,6}, \lambda _{3,1} = 0, \lambda _{3,2} = \lambda _{3,2}, \lambda _{3,3} = 0, \lambda _{3,4} = 0, \lambda _{3,5} = \lambda _{3,5}, \lambda _{3,6} = \lambda _{3,6}\}, \{\lambda _{1,1} = 0, \lambda _{1,2} = \lambda _{1,2}, \lambda _{1,3} = \lambda _{1,3}, \lambda _{1,4} = \lambda _{1,4}, \lambda _{1,5} = \lambda _{1,5}, \lambda _{1,6} = - {\lambda _{1,4} \left( \lambda _{1,2} \lambda _{1,4}-\lambda _{1,3} \lambda _{1,5}\right) }/{\lambda _{1,3}^{2}}, \lambda _{3,1} = 0, \lambda _{3,2} = 0, \lambda _{3,3} = 0, \lambda _{3,4} = \lambda _{3,4}, \lambda _{3,5} = {\lambda _{1,2} \lambda _{3,4}}/{\lambda _{1,3}}, \lambda _{3,6} = - {\lambda _{3,4} \left( \lambda _{1,2} \lambda _{1,4}-\lambda _{1,3} \lambda _{1,5}\right) }/{\lambda _{1,3}^{2}} \}, \{\lambda _{1,1} = 0, \lambda _{1,2} = {\lambda _{1,3} \lambda _{3,2}}/{\lambda _{3,3}}, \lambda _{1,3} = \lambda _{1,3}, \lambda _{1,4} = \lambda _{1,4}, \lambda _{1,5} = - {\lambda _{1,3} \lambda _{3,2} \lambda _{3,4}-\lambda _{1,3} \lambda _{3,3}}{ \lambda _{3,5}-\lambda _{1,4} \lambda _{3,2} \lambda _{3,3}}/{\lambda _{3,3}^{2}}, \lambda _{1,6} = - {\lambda _{1,4} \left( \lambda _{3,2} \lambda _{3,4}-\lambda _{3,3} \lambda _{3,5}\right) }/{\lambda _{3,3}^{2}}, \lambda _{3,1} = 0, \lambda _{3,2} = \lambda _{3,2}, \lambda _{3,3} = \lambda _{3,3}, \lambda _{3,4} = \lambda _{3,4}, \lambda _{3,5} = \lambda _{3,5}, \lambda _{3,6} = - {\lambda _{3,4} \left( \lambda _{3,2} \lambda _{3,4}-\lambda _{3,3} \lambda _{3,5}\right) }/{\lambda _{3,3}^{2}} \}, \{\lambda _{1,1} = \lambda _{1,1}, \lambda _{1,2} = \lambda _{1,2}, \lambda _{1,3} = 0, \lambda _{1,4} = 0, \lambda _{1,5} = \lambda _{1,5}, \lambda _{1,6} = \lambda _{1,6}, \lambda _{3,1} = \lambda _{3,1}, \lambda _{3,2} = {\lambda _{1,2} \lambda _{3,1}}/{\lambda _{1,1}}, \lambda _{3,3} = 0, \lambda _{3,4} = 0, \lambda _{3,5} = {\lambda _{1,5} \lambda _{3,1}}/{\lambda _{1,1}}, \lambda _{3,6} = {\lambda _{1,6} \lambda _{3,1}}/{\lambda _{1,1}}\}, \{ \lambda _{1,1} = \lambda _{1,1}, \lambda _{1,2} = \lambda _{1,2}, \lambda _{1,3} = \lambda _{1,3}, \lambda _{1,4} = \lambda _{1,4}, \lambda _{1,5} = \lambda _{1,5}, \lambda _{1,6} = \lambda _{1,6}, \lambda _{3,1} = 0, \lambda _{3,2} = 0, \lambda _{3,3} = 0, \lambda _{3,4} = 0, \lambda _{3,5} = 0, \lambda _{3,6} = 0\}, \{\lambda _{1,1} = {\lambda _{1,3} \lambda _{3,1}}/{\lambda _{3,3}}, \lambda _{1,2} = {\lambda _{1,3} \lambda _{3,2}}/{\lambda _{3,3}}, \lambda _{1,3} = \lambda _{1,3}, \lambda _{1,4} = {\lambda _{1,3} \lambda _{3,4}}/{\lambda _{3,3}}, \lambda _{1,5} = {\lambda _{1,3} \lambda _{3,5}}/{\lambda _{3,3}}, \lambda _{1,6} = {\lambda _{1,3} \lambda _{3,6}}/{\lambda _{3,3}}, \lambda _{3,1} = \lambda _{3,1}, \lambda _{3,2} = \lambda _{3,2}, \lambda _{3,3} = \lambda _{3,3}, \lambda _{3,4} = \lambda _{3,4}, \lambda _{3,5} = \lambda _{3,5}, \lambda _{3,6} = \lambda _{3,6} \}, \{ \lambda _{1,1} = {\lambda _{1,4} \lambda _{3,1}}/{\lambda _{3,4}}, \lambda _{1,2} = {\lambda _{1,4} \lambda _{3,2}}/{\lambda _{3,4}}, \lambda _{1,3} = 0, \lambda _{1,4} = \lambda _{1,4}, \lambda _{1,5} = {\lambda _{1,4} \lambda _{3,5}}/{\lambda _{3,4}}, \lambda _{1,6} = {\lambda _{1,4} \lambda _{3,6}}/{\lambda _{3,4}}, \lambda _{3,1} = \lambda _{3,1}, \lambda _{3,2} = \lambda _{3,2}, \lambda _{3,3} = 0, \lambda _{3,4} = \lambda _{3,4}, \lambda _{3,5} = \lambda _{3,5}, \lambda _{3,6} = \lambda _{3,6} \} \}. \)

\(\mathcal {V}\subset \mathscr {V}_d\times \mathscr {V}_d\) contains 7 generic solutions.

Step 14. Replicating the \(\Lambda _i\)-elements in \(\mathcal {V}\), we construct \(\mathcal {V}_{1,2,3}\).

Steps 15–18. We delete from \(\mathcal {V}_{1,2,3}\) those solutions that annihilate all the polynomials in at least one of the sets \(U_i,\, i = 1,2\). The resulting set is empty, so the output of Algorithm 1 is the empty set. Therefore, we set \(d=3\) and we go back to Step 3 in Algorithm 2 to repeat the above steps.

Algorithm 1 starts.

Step 1. This step was already performed in the first iteration of Algorithm 1.

Step 2. Let \(E_3^i:= \lambda _{i,1} t_{3} t_{1}^{2}+ \lambda _{i,2}t_{2}^{3}+ \lambda _{i,3}t_{2}^{2} t_{3}+\lambda _{i,4}t_{2} t_{3} t_{1}+ \lambda _{i,5} t_{3}^{2}t_{1}+\lambda _{i,6}t_{2} t_{3}^{2} +\lambda _{i,7}t_{3}^{3} +\lambda _{i,8} t_{1}^{3}+t_{2} \lambda _{i,9} t_{1}^{2}+\lambda _{i,10}t_{2}^{2} t_{1} \), \(i\in \{1,2,3\}\).

Steps 3–6. \(U_1=\{\lambda _{3,1},\ldots ,\lambda _{3, 10}\}\) and, reasoning as above, \(U_2\) consists of 28 3-linear polynomials in \(\lambda _{1,i},\lambda _{2,j},\lambda _{3,k}\).

Step 7. Let \({\tilde{G}}_i^\mathcal{E}=e_3^i(\Lambda _i, h_1, h_2)e_3^3(\Lambda _3, t_1, t_2)-e_3^3(\Lambda _3, h_1, h_2)e_3^i(\Lambda _i, t_1, t_2),\,i=1,2,3.\)

Steps 8–11. We compute the normal form \(N_1\) of \({\tilde{G}}_1^{{\mathcal {E}}}\) w.r.t. \({{\mathcal {G}}}^{\star }\) and we check that \(N_1 \not \in {\mathbb {K}}(h_1, h_2)[t_1, t_2]{\setminus }\{0\}\).

Step 12. We have that

\(C_{1,3}=\{-2 \lambda _{1,1} \lambda _{3,5}+2 \lambda _{1,5} \lambda _{3,1}, -\lambda _{1,1} \lambda _{3,5}+\lambda _{1,5} \lambda _{3,1}, -2 \lambda _{1,1} \lambda _{3,7}+2 \lambda _{1,7} \lambda _{3,1}, -\lambda _{1,1} \lambda _{3,7}+\lambda _{1,7} \lambda _{3,1}, -2 \lambda _{1,1} \lambda _{3,8}+2 \lambda _{1,8} \lambda _{3,1}, -\lambda _{1,1} \lambda _{3,8}+\lambda _{1,8} \lambda _{3,1}, \lambda _{1,2} \lambda _{3,9}-\lambda _{1,9} \lambda _{3,2}, 2 \lambda _{1,2} \lambda _{3,9}-2 \lambda _{1,9} \lambda _{3,2}, -\lambda _{1,2} \lambda _{3,10}+\lambda _{1,10} \lambda _{3,2}, \lambda _{1,2} \lambda _{3,10}-\lambda _{1,10} \lambda _{3,2}, -\lambda _{1,4} \lambda _{3,8}+\lambda _{1,8} \lambda _{3,4}, \lambda _{1,4} \lambda _{3,8}-\lambda _{1,8} \lambda _{3,4}, -\lambda _{1,5} \lambda _{3,7}+\lambda _{1,7} \lambda _{3,5}, \lambda _{1,5} \lambda _{3,7}-\lambda _{1,7} \lambda _{3,5}, -\lambda _{1,5} \lambda _{3,8}+\lambda _{1,8} \lambda _{3,5}, \lambda _{1,5} \lambda _{3,8}-\lambda _{1,8} \lambda _{3,5}, \lambda _{1,8} \lambda _{3,9}-\lambda _{1,9} \lambda _{3,8}, 2 \lambda _{1,8} \lambda _{3,9}-2 \lambda _{1,9} \lambda _{3,8}, -\lambda _{1,8} \lambda _{3,10}+\lambda _{1,10} \lambda _{3,8}, \lambda _{1,8} \lambda _{3,10}-\lambda _{1,10} \lambda _{3,8}, -2 \lambda _{1,9} \lambda _{3,10}+2 \lambda _{1,10} \lambda _{3,9}, -\lambda _{1,9} \lambda _{3,10}+\lambda _{1,10} \lambda _{3,9}, -2 \lambda _{1,1} \lambda _{3,2}+2 \lambda _{1,2} \lambda _{3,1}+2 \lambda _{1,3} \lambda _{3,9}-2 \lambda _{1,9} \lambda _{3,3}, -\lambda _{1,1} \lambda _{3,2}+\lambda _{1,2} \lambda _{3,1}+\lambda _{1,3} \lambda _{3,9}-\lambda _{1,9} \lambda _{3,3}, -2 \lambda _{1,1} \lambda _{3,3}+2 \lambda _{1,3} \lambda _{3,1}+2 \lambda _{1,6} \lambda _{3,9}-2 \lambda _{1,9} \lambda _{3,6}, -\lambda _{1,1} \lambda _{3,3}+\lambda _{1,3} \lambda _{3,1}+\lambda _{1,6} \lambda _{3,9}-\lambda _{1,9} \lambda _{3,6}, -2 \lambda _{1,1} \lambda _{3,4}+2 \lambda _{1,4} \lambda _{3,1}+2 \lambda _{1,5} \lambda _{3,9}-2 \lambda _{1,9} \lambda _{3,5}, -\lambda _{1,1} \lambda _{3,4}+\lambda _{1,4} \lambda _{3,1}+\lambda _{1,5} \lambda _{3,9}-\lambda _{1,9} \lambda _{3,5}, -2 \lambda _{1,1} \lambda _{3,10}+2 \lambda _{1,4} \lambda _{3,9}-2 \lambda _{1,9} \lambda _{3,4}+2 \lambda _{1,10} \lambda _{3,1}, -\lambda _{1,1} \lambda _{3,10}+\lambda _{1,4} \lambda _{3,9}-\lambda _{1,9} \lambda _{3,4}+\lambda _{1,10} \lambda _{3,1}, -2 \lambda _{1,1} \lambda _{3,6}+2 \lambda _{1,6} \lambda _{3,1}+2 \lambda _{1,7} \lambda _{3,9}-2 \lambda _{1,9} \lambda _{3,7}, -\lambda _{1,1} \lambda _{3,6}+\lambda _{1,6} \lambda _{3,1}+\lambda _{1,7} \lambda _{3,9}-\lambda _{1,9} \lambda _{3,7}, -\lambda _{1,2} \lambda _{3,4}-\lambda _{1,3} \lambda _{3,10}+\lambda _{1,4} \lambda _{3,2}+\lambda _{1,10} \lambda _{3,3}, \lambda _{1,2} \lambda _{3,4}+\lambda _{1,3} \lambda _{3,10}-\lambda _{1,4} \lambda _{3,2}-\lambda _{1,10} \lambda _{3,3}, -\lambda _{1,4} \lambda _{3,7}-\lambda _{1,5} \lambda _{3,6}+\lambda _{1,6} \lambda _{3,5}+\lambda _{1,7} \lambda _{3,4}, \lambda _{1,4} \lambda _{3,7}+\lambda _{1,5} \lambda _{3,6}-\lambda _{1,6} \lambda _{3,5}-\lambda _{1,7} \lambda _{3,4}, -\lambda _{1,2} \lambda _{3,5}-\lambda _{1,3} \lambda _{3,4}+\lambda _{1,4} \lambda _{3,3}+\lambda _{1,5} \lambda _{3,2}-\lambda _{1,6} \lambda _{3,10}+\lambda _{1,10} \lambda _{3,6}, \lambda _{1,2} \lambda _{3,5}+\lambda _{1,3} \lambda _{3,4}-\lambda _{1,4} \lambda _{3,3}-\lambda _{1,5} \lambda _{3,2}+\lambda _{1,6} \lambda _{3,10}-\lambda _{1,10} \lambda _{3,6}, -\lambda _{1,3} \lambda 
_{3,5}+\lambda _{1,4} \lambda _{3,6}+\lambda _{1,5} \lambda _{3,3}-\lambda _{1,6} \lambda _{3,4}-\lambda _{1,7} \lambda _{3,10}+\lambda _{1,10} \lambda _{3,7}, \lambda _{1,3} \lambda _{3,5}-\lambda _{1,4} \lambda _{3,6}-\lambda _{1,5} \lambda _{3,3}+\lambda _{1,6} \lambda _{3,4}+\lambda _{1,7} \lambda _{3,10}-\lambda _{1,10} \lambda _{3,7} \}. \)

Step 13. We solve the system of bilinear forms \(C_{1,3}\). We get \(\mathcal {V}\) as the set of solutions; it contains 21 generic solutions.
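To illustrate how such a step can be carried out, the following is a minimal sketch, assuming one works in SymPy and keeping only three of the bilinear generators of \(C_{1,3}\) listed above; the variable names and the selection of generators are ours, and a complete computation would take all generators and handle separately the branches where denominators vanish (e.g. via a triangular or primary decomposition).

# Hedged sketch: solving a small bilinear subsystem of C_{1,3} with SymPy.
import sympy as sp

lam = {(i, j): sp.Symbol(f'l{i}_{j}') for i in (1, 3) for j in range(1, 11)}

# Three of the bilinear generators listed above.
eqs = [
    -lam[1, 1]*lam[3, 5] + lam[1, 5]*lam[3, 1],
    -lam[1, 1]*lam[3, 7] + lam[1, 7]*lam[3, 1],
    -lam[1, 1]*lam[3, 8] + lam[1, 8]*lam[3, 1],
]

# Each returned dict expresses the chosen unknowns in terms of the remaining
# lambdas; branches with vanishing denominators must be analyzed separately.
sols = sp.solve(eqs, [lam[1, 5], lam[1, 7], lam[1, 8]], dict=True)
print(sols)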

Step 14. Replicating the \(\Lambda _i\)-elements in \(\mathcal {V}\), we construct \(\mathcal {V}_{1,2,3}\).

Steps 15–18. We delete from \(\mathcal {V}_{1,2,3}\) those solutions that annihilate all the polynomials in at least one of the sets \(U_i,\, i = 1,2\). We get that \(\mathcal {V}_{1,2,3}\) only contains one solution, namely,

\(\mathcal {V}_{1,2,3}=\{\lambda _{1,1} = 0, \lambda _{1,2} = \lambda _{1,2}, \lambda _{1,3} = \lambda _{1,3}, \lambda _{1,4} = 0, \lambda _{1,5} = 0, \lambda _{1,6} = \lambda _{1,6}, \lambda _{1,7} = \lambda _{1,7}, \lambda _{1,8} = \lambda _{1,8}, \lambda _{1,9} = 0, \lambda _{1,10} = 0, \lambda _{3,1} = 0, \lambda _{3,2} = \lambda _{3,2}, \lambda _{3,3} = \lambda _{3,3}, \lambda _{3,4} = 0, \lambda _{3,5} = 0, \lambda _{3,6} = \lambda _{3,6}, \lambda _{3,7} = \lambda _{3,7}, \lambda _{3,8} = \lambda _{3,8}, \lambda _{3,9} = 0, \lambda _{3,10} = 0\} \)

Step 19. We initialize \(C:=\emptyset \) and \(U_3:=\emptyset \).

Step 20. Since \(\#(\mathcal {V}_{1,2,3})=1\), the for-loop consists of a single iteration.

Steps 21–24. We compute the specializations \(\breve{G}_{1}^{\mathcal {E}}, \breve{G}_{2}^{\mathcal {E}}\) as well as the resultant \(\breve{R}_{1}^{\mathcal {E}}\). \(\textrm{deg}_{t_1}(\breve{R}_{1}^{\mathcal {E}})=9\). So we go to Step 26.
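As an illustration of the elimination performed in these steps, the following is a minimal SymPy sketch; the polynomials g1 and g2 are hypothetical stand-ins (not taken from the example) for the specialized \(\breve{G}_{1}^{\mathcal {E}}, \breve{G}_{2}^{\mathcal {E}}\), which are bivariate in \(t_1,t_2\) with coefficients depending on \(h_1,h_2\).

# Hedged sketch: eliminating t2 by a resultant and reading off the degree in t1.
import sympy as sp

t1, t2, h1, h2 = sp.symbols('t1 t2 h1 h2')
g1 = h1*t1**2*t2 - t1*t2**2 + h2   # hypothetical stand-in for the specialized G_1
g2 = t1*t2 - h2*t1 + h1            # hypothetical stand-in for the specialized G_2

R1 = sp.resultant(g1, g2, t2)      # eliminate t2
print(sp.degree(R1, t1))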

Steps 25–29. We compute \(C^*\) (which coincides with \(C\) in this case); it consists of 21 polynomials.

Steps 30–38. \(U_3\) consists of 18 polynomials.

Step 39. We solve \(C^*\) and we get \(\mathcal {V}\) with 12 generic solutions.

Step 40. Filtering the solutions with \(U_1,U_2,U_3\), we get that the new \(\mathcal {V}\) contains 3 solutions, namely

\(\mathcal {V}=\{\{\lambda _{1,1} = \lambda _{1,1}, \lambda _{1,2} = \lambda _{1,2}, \lambda _{1,3} = \frac{\lambda _{1,10} \lambda _{2,3}}{\lambda _{2,10}}, \lambda _{1,4} = \frac{\lambda _{1,10} \lambda _{2,4}}{\lambda _{2,10}}, \lambda _{1,10} = \lambda _{1,10}, \lambda _{2,1} = \lambda _{2,1}, \lambda _{2,2} = \lambda _{2,2}, \lambda _{2,3} = \lambda _{2,3}, \lambda _{2,4} = \lambda _{2,4}, \lambda _{2,10} = \lambda _{2,10}, \lambda _{3,1} = \lambda _{3,1}, \lambda _{3,2} = \lambda _{3,2}, \lambda _{3,3} = 0, \lambda _{3,4} = 0, \lambda _{3,10} = 0\}, \{\lambda _{1,1} = \lambda _{1,1}, \lambda _{1,2} = \lambda _{1,2}, \lambda _{1,3} = \lambda _{1,3}, \lambda _{1,4} = \lambda _{1,4}, \lambda _{1,10} = \lambda _{1,10}, \lambda _{2,1} = \lambda _{2,1}, \lambda _{2,2} = \lambda _{2,2}, \lambda _{2,3} = 0, \lambda _{2,4} = 0, \lambda _{2,10} = 0, \lambda _{3,1} = \lambda _{3,1}, \lambda _{3,2} = \lambda _{3,2}, \lambda _{3,3} = 0, \lambda _{3,4} = 0, \lambda _{3,10} = 0\}, \{ \lambda _{1,1} = \lambda _{1,1}, \lambda _{1,2} = \lambda _{1,2}, \lambda _{1,3} = \frac{\lambda _{1,10} \lambda _{3,3}}{\lambda _{3,10}}, \lambda _{1,4} = \frac{\lambda _{1,10} \lambda _{3,4}}{\lambda _{3,10}}, \lambda _{1,10} = \lambda _{1,10}, \lambda _{2,1} = \lambda _{2,1}, \lambda _{2,2} = \lambda _{2,2}, \lambda _{2,3} = \frac{\lambda _{2,10} \lambda _{3,3}}{\lambda _{3,10}}, \lambda _{2,4} = \frac{\lambda _{2,10} \lambda _{3,4}}{\lambda _{3,10}}, \lambda _{2,10} = \lambda _{2,10}, \lambda _{3,1} = \lambda _{3,1}, \lambda _{3,2} = \lambda _{3,2}, \lambda _{3,3} = \lambda _{3,3}, \lambda _{3,4} = \lambda _{3,4}, \lambda _{3,10} = \lambda _{3,10}\}\}\)

Back to Algorithm 2 at Step 9.

At this step of the algorithm, one takes a particular solution in \(\mathcal {V}\) (see above). In the first generic point of \(\mathcal {V}\) we take, for instance,

\(\{\lambda _{1,1} = 1, \lambda _{1,2} = -1, \lambda _{1,10} = -1, \lambda _{2,1} = -1, \lambda _{2,2} = -3, \lambda _{2,3} = 2, \lambda _{2,4} = -2, \lambda _{2,10} = -2, \lambda _{3,1} = -2, \lambda _{3,2} = 0\},\)

which produces the solution

$$\begin{aligned} {{\mathcal {S}}}_a=\left( \frac{1}{2} t_{1}^{3}+\frac{1}{2} t_2 -\frac{1}{2}+\frac{1}{2} t_{2}^{3}-\frac{1}{2} t_{2}^{2}, t_{1}^{3}+t_{2}^{3}-t_{2}^{2}+\frac{3}{2} t_{2} +\frac{1}{2} \right) \end{aligned}$$

and

$$\begin{aligned} \begin{array}{ll} {{\mathcal {Q}}}_a=&{}( 64 t_{1}^{3}-96 t_{1}^{2} t_{2} +48 t_{2}^{2} t_{1} -8 t_{2}^{3}+160 t_{1}^{2}-160 t_{2} t_{1} +40 t_{2}^{2}+134 t_{1} -66 t_{2} +37, \\ {} &{} 64 t_{1}^{3}-96 t_{1}^{2} t_{2} +48 t_{2}^{2} t_{1} -8 t_{2}^{3}+160 t_{1}^{2}-160 t_{2} t_{1} +40 t_{2}^{2}+138 t_{1} -68 t_{2} +40, \\ {} &{} -3-4 t_{1} +2 t_{2}). \end{array} \end{aligned}$$

5 The computational approach: the case of empty base locus

In this section, we present some improvements to the previous general computational approach. There are two sources of computational complication in Algorithm 2. On the one hand, it is necessary to increase the value of the degree d until \(\textrm{SolSpace}_d({{\mathcal {P}}})\ne \emptyset \) (see Algorithm 1) and, on the other hand, the dimension of the linear space can be high. To deal with these two difficulties we introduce two additional hypotheses. More precisely, in the sequel, we assume that the surface \(\mathscr {S}\), parameterized by \({{\mathcal {P}}}\), admits a birational parameterization \({{\mathcal {Q}}}\) with empty base locus and we also assume that \({{\mathcal {P}}}\) is transversal (see Sect. 2.2). We will see that finding \({{\mathcal {Q}}}\) and its corresponding \({{\mathcal {S}}}\) is easier than in the general case. Indeed, Lemma 6 provides a particular value of d such that \(\textrm{SolSpace}_d({{\mathcal {P}}})\ne \emptyset \), and hence Algorithm 1 is only executed once and, in Algorithm 2, the loop in Step 3 reduces to one execution.

Let us take d as in Lemma 6, and let us see that, under our new hypotheses, the dimension of the solution space can be reduced. In the general case, \(\textrm{SolSpace}_d({{\mathcal {P}}})\) is constructed from \(\mathscr {U}_d\) by imposing the condition \(\mathscr {F}_g({{\mathcal {S}}})=\mathscr {F}_g({{\mathcal {P}}})\). Below we show that \(\mathscr {V}_d\) can be replaced by a lower-dimensional linear subsystem generated from the base points of \({{\mathcal {P}}}\) (see Sect. 2.2).

Theorem 3

Let \({{\mathcal {P}}}\) be transversal and let \({{\mathcal {Q}}}\) be a birational parameterization of \(\mathscr {S}\) such that \(\mathscr {B}({{\mathcal {S}}})=\emptyset \). Let

$$\begin{aligned} D:=\sum _{A\in \mathscr {B}({{\mathcal {P}}})} \sqrt{ \dfrac{\textrm{mult}(A,\mathscr {B}({{\mathcal {P}}}))}{\textrm{deg}(\mathscr {S})}} \, A, \end{aligned}$$
(5.1)

and let \(\mathscr {L}_d(D)\) be the linear system defined by the divisor D, where d is as in Lemma 6. If \(({{\mathcal {Q}}},{{\mathcal {S}}})\) solves the reparameterization problem for \({{\mathcal {P}}}\), where \({{\mathcal {S}}}:=(s_1:s_2:s_3)\) and \(\textrm{deg}(s_i)=d\), then \(s_1,s_2,s_3\in \mathscr {L}_d(D)\).

Proof

By Lemma 12 in [13], it holds that \(\mathscr {B}({{\mathcal {S}}})=\mathscr {B}({{\mathcal {P}}}).\) Furthermore, by the same lemma, for \(A\in \mathscr {B}({{\mathcal {S}}})\) it holds that

$$\begin{aligned} \textrm{mult}(A,\mathscr {B}({{\mathcal {S}}}))= \dfrac{\textrm{mult}(A,\mathscr {B}({{\mathcal {P}}}))}{\textrm{deg}(\mathscr {S})}. \end{aligned}$$
(5.2)

We observe that, by Theorem 1 in [13], since \({{\mathcal {P}}}\) is transversal, \({{\mathcal {S}}}\) is also transversal. Let \(V_1:=x_1 s_1+x_2s_2+x_3 s_3,\, V_2:=y_1 s_1+y_2s_2+y_3 s_3\), where \(x_i,y_j\) are new variables. Then, for \(i\in \{1,2,3\}\) and \(A\in \mathscr {B}({{\mathcal {S}}})\) it holds that

$$\begin{aligned} \begin{array}{rclr} \textrm{mult}(A,\mathscr {C}(s_i))^2 &{} \ge &{} \min \{ \textrm{mult}(A,\mathscr {C}(s_i))\,|\,i\in \{1,2,3\} \}^2 &{} \\ &{} = &{}\textrm{mult}(A,\mathscr {C}(V_1))^2 &{} \text {(see Lemma 2 in [13])}\\ &{}= &{} \textrm{mult}(A,\mathscr {B}({{\mathcal {S}}})) &{} ({{\mathcal {S}}}\text { is transversal)}\\ &{} = &{} \dfrac{\textrm{mult}(A,\mathscr {B}({{\mathcal {P}}}))}{\textrm{deg}(\mathscr {S})} &{} \text {(see } (5.2)) \end{array} \end{aligned}$$

Therefore,

$$\begin{aligned} \textrm{mult}(A,\mathscr {C}(s_i)) \ge \sqrt{ \dfrac{\textrm{mult}(A,\mathscr {B}({{\mathcal {P}}}))}{\textrm{deg}(\mathscr {S})}}. \end{aligned}$$

Thus, since \(\textrm{deg}(s_i)=d\), then \(s_i\in \mathscr {L}_d(D)\). \(\square \)

Let us define \(\textrm{SolSpace}_{d}^{*}({{\mathcal {P}}})\) as the subset of those \({{\mathcal {S}}}\in \textrm{SolSpace}_d({{\mathcal {P}}})\) for which \(\mathscr {B}({{\mathcal {Q}}})=\emptyset \), where \({{\mathcal {Q}}}\) is such that \(({{\mathcal {Q}}},{{\mathcal {S}}})\) solves the reparameterization problem for \({{\mathcal {P}}}\). Then the following result holds.

Corollary 1

With the hypotheses of Theorem 3, it holds that if \({{\mathcal {S}}}:=(s_1:s_2:s_3)\in \textrm{SolSpace}_{d}^{*}({{\mathcal {P}}})\) then \(s_1,s_2,s_3\in \mathscr {L}_d(D)\).

Now, one can proceed as in Sect. 4.4 but, instead of starting with \(\mathscr {V}_d\), we start with \(\mathscr {L}_d(D)\). Therefore, the polynomial \(E_d\) (see (4.14)) is now the defining polynomial of the linear system \(\mathscr {L}_d(D)\).

Let us discuss how to computationally treat (5.1). First, let us deal with \(\mathscr {B}({{\mathcal {P}}})\). We recall that (see Sect. 2.2 and (1.1))

$$\begin{aligned} \mathscr {B}({{\mathcal {P}}})=\mathscr {C}(p_1)\cap \mathscr {C}(p_2)\cap \mathscr {C}(p_3)\cap \mathscr {C}(p_4). \end{aligned}$$

Let us decompose \(\mathscr {B}({{\mathcal {P}}})\) as

$$\begin{aligned} \mathscr {B}({{\mathcal {P}}})=\mathscr {B}({{\mathcal {P}}})^a \cup \mathscr {B}({{\mathcal {P}}})^{\infty }, \end{aligned}$$

where \(\mathscr {B}({{\mathcal {P}}})^a\) and \(\mathscr {B}({{\mathcal {P}}})^{\infty }\) represent, respectively, the sets of affine base points and base points at infinity. Let \(p(t_1,t_2):=\textrm{gcd}(p_1(t_1,t_2,0), \ldots , p_4(t_1,t_2,0))\). Then

$$\begin{aligned} \mathscr {B}({{\mathcal {P}}})^{\infty }=\{ (a:b:0)\in \mathbb {P}^{2}(\mathbb {K}) \,|\, p(a,b)=0\}. \end{aligned}$$

For the affine base points, one may consider a minimal Gröbner basis \(\mathscr {G}_{\mathscr {B}({{\mathcal {P}}})}\), w.r.t. the lexicographic ordering with \(t_1<t_2\), of the ideal \(<p_1(t_1,t_2,1), \ldots ,p_4(t_1,t_2,1) > \subset \mathbb {K}[t_1,t_2]\). Then \(\mathscr {G}_{\mathscr {B}({{\mathcal {P}}})}=\{g_{1,1}, g_{2,1},\ldots ,g_{2,k_2}\} \), where \(g_{1,1}\in \mathbb {K}[t_1]\) and \(g_{2,j}\in \mathbb {K}[t_1,t_2]\) (see e.g. [18], page 194). Hence,

$$\begin{aligned} \mathscr {B}({{\mathcal {P}}})^a=\{(a:b:1)\in \mathbb {P}^{2}(\mathbb {K})\,|\, g(a,b)=0 \,\,\hbox { for all}\ g\in \mathscr {G}_{\mathscr {B}({{\mathcal {P}}})}\}. \end{aligned}$$

Alternatively, since one has to compute the intersection of finitely many plane curves, one can approach the problem by means of resultants; see e.g. [16]. We also observe that, after a suitable linear change of the parameters \(t_1,t_2\), one can proceed similarly to Sect. 4.4 and use the Shape Lemma.
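For instance, the following is a minimal SymPy sketch of this computation for a small hypothetical parameterization (the \(p_i\) below are illustrative placeholders, not taken from the paper): the gcd detects the base points at infinity, and a lexicographic Gröbner basis describes the affine ones.

# Hedged sketch: base points of a hypothetical P = (p1 : p2 : p3 : p4).
import sympy as sp

t1, t2, t3 = sp.symbols('t1 t2 t3')
p = [t1**2*t3, t1*t2*t3, t2**2*t3, t1**3]   # hypothetical input

# Base points at infinity: roots of p(t1, t2) = gcd of the p_i(t1, t2, 0).
p_inf = sp.gcd_list([pi.subs(t3, 0) for pi in p])
print(sp.factor(p_inf))                     # here t1**3, giving the point (0:1:0)

# Affine base points: lex Groebner basis of <p_i(t1, t2, 1)>; in SymPy the first
# generator is the largest, so the ordering (t2, t1) encodes t1 < t2.
G = sp.groebner([pi.subs(t3, 1) for pi in p], t2, t1, order='lex')
print(list(G))                              # here the only common zero is (0:0:1)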

Concerning the coefficients of the base points in the divisor (5.1), we first recall that we have already commented on how to determine \(\textrm{deg}(\mathscr {S})\). Furthermore, the numerators, namely \(\textrm{mult}(A,\mathscr {B}({{\mathcal {P}}}))\), can be computed by using the curves \(\mathscr {C}(W_1),\mathscr {C}(W_2)\) where (see (2.3))

$$\begin{aligned} W_1=x_1 p_1(\,{{\overline{t}}}\,)+\cdots + x_4 p_4(\,{{\overline{t}}}\,),\, \,W_2=y_1 p_1(\,{{\overline{t}}}\,)+\cdots + y_4 p_4(\,{{\overline{t}}}\,) \end{aligned}$$

and taking into account that

$$\begin{aligned} \textrm{mult}(A,\mathscr {B}({{\mathcal {P}}}))=\textrm{mult}_A(\mathscr {C}(W_1),\mathscr {C}(W_2)). \end{aligned}$$

Moreover, since \({{\mathcal {P}}}\) is transversal, then

$$\begin{aligned} \textrm{mult}(A,\mathscr {B}({{\mathcal {P}}}))=\textrm{mult}(A,\mathscr {C}(W_1))^2. \end{aligned}$$

In addition, by Proposition 2.5, in [3], it holds that

$$\begin{aligned} \textrm{mult}(A,\mathscr {C}(W_1))=\textrm{mult}(A,\mathscr {C}(W_2))=\min \{\textrm{mult}(A,\mathscr {C}(p_i))\,|\, i\in \{1,\ldots ,4\}\}. \end{aligned}$$

Therefore, because of the transversality,

$$\begin{aligned} \textrm{mult}(A,\mathscr {B}({{\mathcal {P}}})) =\min \{\textrm{mult}(A,\mathscr {C}(p_i))\,|\, i\in \{1,\ldots ,4\}\}^2. \end{aligned}$$
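Continuing with the same hypothetical toy input, the following sketch computes, for an affine base point A, the quantity \(\min _i \textrm{mult}(A,\mathscr {C}(p_i))^2\) and the corresponding coefficient in (5.1); the value of \(\textrm{deg}(\mathscr {S})\) used below is also hypothetical, since for an actual input it has to be determined as commented above.

# Hedged sketch: mult(A, B(P)) as min_i mult(A, C(p_i))^2 (transversal case)
# and the coefficient sqrt(mult(A, B(P))/deg(S)) of A in the divisor D.
import sympy as sp

t1, t2, t3 = sp.symbols('t1 t2 t3')
p = [t1**2*t3, t1*t2*t3, t2**2*t3, t1**3]   # hypothetical input, as above
deg_S = 1                                   # hypothetical value of deg(S)

def vanishing_order(f, a, b):
    # order of vanishing of f(t1, t2) at the affine point (a, b)
    g = sp.expand(f.subs({t1: t1 + a, t2: t2 + b}))
    return min(sum(m) for m in sp.Poly(g, t1, t2).monoms())

A = (0, 0)                                  # the affine base point (0:0:1) found above
mult_A = min(vanishing_order(pi.subs(t3, 1), *A) for pi in p)**2
coeff_A = sp.sqrt(sp.Rational(mult_A, deg_S))
print(mult_A, coeff_A)                      # prints 4 and 2 for this toy input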

In the following algorithm, we use these ideas to deal with the reparameterization problem when the additional hypotheses of this section are assumed. In order to check the transversality of the input parameterization, we refer to Algorithm 1 presented in Section 5 in [13]. In addition, we assume that the algorithm in [8] has been applied and has not provided an answer to the problem.

[Algorithm 3: pseudocode figure not reproduced here]

We finish this section with an example illustrating how the previous algorithm works. One may check that the algorithm in [8] does not provide an answer to the problem.

Example 2

Let \({{\mathcal {P}}}(\,{{\overline{t}}}\,)= (p_1: p_2: p_3: p_4)\) be the input of Algorithm 3, where

\(p_1(\,{{\overline{t}}}\,)=-t_{1}^{2} \left( t_{1}^{3}-t_{1}^{2} t_{3}-2 t_{1} t_{2} t_{3}-t_{2}^{2} t_{3}\right) t_{2},\)

\(p_2(\,{{\overline{t}}}\,)=t_{1}^{6}-2 t_{1}^{5} t_{3}+t_{1}^{4} t_{2}^{2}-4 t_{1}^{4} t_{2} t_{3}+t_{1}^{4} t_{3}^{2}-t_{1}^{3} t_{2}^{3}+4 t_{1}^{3} t_{2} t_{3}^{2}+t_{1}^{2} t_{2}^{3} t_{3}+6 t_{1}^{2} t_{2}^{2} t_{3}^{2}+4 t_{1} t_{2}^{3} t_{3}^{2}+t_{2}^{4} t_{3}^{2}\)

\(p_3(\,{{\overline{t}}}\,)=t_1^4 t_2^2, \)

\(p_4(\,{{\overline{t}}}\,)=t_{1}^{2} \left( t_{1}^{2}+t_{1} t_{2}-t_{1} t_{3}-t_{2}^{2}\right) ^{2}.\)

\({{\mathcal {P}}}\) is a transversal rational projective parameterization (see Sect. 2.2) of an algebraic surface \(\mathscr {S}\). In Step 1, we obtain \(\ell =3\). In Step 5, we compute D (see (5.1)) as well as the linear system \(\mathscr {L}_d(D)\) associated to \(d=\ell =3\). We get

$$\begin{aligned} D=2\cdot (0:0:1)+1\cdot (1:0:1)+1\cdot (0:1:0). \end{aligned}$$
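Before writing down the defining polynomial, let us sketch in SymPy, only as an illustration of Step 5, how \(\mathscr {L}_d(D)\) can be obtained from D by imposing the corresponding linear conditions on a generic ternary cubic; the resulting 5-parameter family spans the same space as the polynomial (5.3) displayed below.

# Hedged sketch: the linear conditions imposed by D on a generic degree-3 form.
import sympy as sp

t1, t2, t3 = sp.symbols('t1 t2 t3')
monos = [t1**i * t2**j * t3**(3 - i - j) for i in range(4) for j in range(4 - i)]
c = sp.symbols(f'c0:{len(monos)}')
E = sum(ci*m for ci, m in zip(c, monos))   # generic form of degree 3

conds = []
# multiplicity 2 at (0:0:1): E(t1, t2, 1) has no terms of total degree < 2
Ea = sp.Poly(E.subs(t3, 1), t1, t2)
conds += [cf for m, cf in zip(Ea.monoms(), Ea.coeffs()) if sum(m) < 2]
# multiplicity 1 at (1:0:1) and at (0:1:0): plain vanishing
conds += [E.subs({t1: 1, t2: 0, t3: 1}), E.subs({t1: 0, t2: 1, t3: 0})]

sol = sp.solve(conds, c, dict=True)[0]     # 5 linear conditions on 10 coefficients
print(sp.expand(E.subs(sol)))              # a 5-parameter family matching (5.3)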

Let \(E_{3}^{i}(\Lambda _i,\,{{\overline{t}}}\,)\) be the defining polynomial of \(\mathscr {L}_d(D)\), for \(i=1,2,3\). We get

$$\begin{aligned} E_{3}^{i}:=E(\Lambda _{i},\,{{\overline{t}}}\,)=t_1^3 \lambda _{i,1} + t_1^2 t_2 \lambda _{i,2} - t_1^2 t_3 \lambda _{i,1} + t_1 t_2^2 \lambda _{i,3} + t_1 t_2 t_3 \lambda _{i,4}+ t_2^2 t_3 \lambda _{i,5}\nonumber \\ \end{aligned}$$
(5.3)

where \(\Lambda _i\) are different tuples of new parameters. In Step 6 we compute \(\mathscr {G}^*:=\{u_{1}^{*}(t_1), t_2v(t_1)\}\). We get

$$\begin{aligned} \begin{array}{lcl} u_{1}^{*}&{}=&{} \left( 2 t_{1}^{2}-2 t_{1}\right) h_{1}^{4}+\left( \left( 2 h_{2}-2\right) t_{1}^{2}+\left( h_{2}+2\right) t_{1}+h_{2}\right) h_{1}^{3}\\ &{}&{} +2 h_{2} \left( \left( h_{2}-\frac{5}{2}\right) t_{1}+h_{2}+\frac{1}{2}\right) t_{1} h_{1}^{2}+h_{2}^{2} t_{1}^{2} \left( h_{2}-4\right) h_{1}-h_{2}^{3} t_{1}^{2}. \\ v(t_1)&{}=&{} -\left( -2 h_{1}^{4}-2 h_{1}^{3} h_{2}-2 h_{1}^{2} h_{2}^{2}-h_{1} h_{2}^{3}+2 h_{1}^{3}+5 h_{1}^{2} h_{2}+4 h_{1} h_{2}^{2}+h_{2}^{3}\right) t_{1}^{2}\\ &{}&{}-\left( 2 h_{1}^{5}+2 h_{1}^{4} h_{2} -2 h_{1}^{4}-4 h_{1}^{3} h_{2}-2 h_{1}^{2} h_{2}^{2}\right) t_{1}+h_{1}^{4} h_{2}. \end{array} \end{aligned}$$

Algorithm 1 starts.

Step 1. We set \(C=U_1=U_2=\emptyset \).

Step 2. \(E_3^i(\Lambda _i, t_1, t_2),\) for \(i\in \{1,2,3\}\), are as in (5.3).

Steps 3–6. \(U_1=\{\lambda _{3,1},\ldots ,\lambda _{3,5}\}\). For determining \(U_2\), let

$$\begin{aligned} \mathcal {E}_a=\left( \frac{e_3^1}{e_3^3},\ \frac{e_3^2}{e_3^3}\right) , \end{aligned}$$

where \(e_3^i=E_3^i(t_1, t_2, 1)\) for \(i=1,2,3\). Let T be the determinant of the Jacobian of \(\mathcal {E}_a\) w.r.t. \(\{t_1,t_2\}\). Then, \(U_2\) collects the coefficients of the numerator of T w.r.t. \(\{t_1,t_2\}\). In this case, \(U_2\) consists of seven 3-linear polynomials in \(\lambda _{1,i}, \lambda _{2,j}, \lambda _{3,k}\).
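The following is a hedged SymPy sketch of this construction (under our reading of it, with our own names e3, lam, and so on, and not the paper's implementation); it builds \(\mathcal {E}_a\) from (5.3), takes the Jacobian determinant, and collects the coefficients of its numerator.

# Hedged sketch: collecting the coefficients forming U_2.
import sympy as sp

t1, t2 = sp.symbols('t1 t2')
lam = sp.symbols('l1_1:6 l2_1:6 l3_1:6')   # the parameters Lambda_1, Lambda_2, Lambda_3
L = [lam[0:5], lam[5:10], lam[10:15]]

def e3(Li):
    # E_3^i(Lambda_i, t1, t2, 1), the dehomogenization of (5.3)
    l1, l2, l3, l4, l5 = Li
    return t1**3*l1 + t1**2*t2*l2 - t1**2*l1 + t1*t2**2*l3 + t1*t2*l4 + t2**2*l5

Ea = sp.Matrix([e3(L[0])/e3(L[2]), e3(L[1])/e3(L[2])])
T = Ea.jacobian([t1, t2]).det()            # Jacobian determinant of E_a
num = sp.numer(sp.cancel(sp.together(T)))  # numerator of T
U2 = sp.Poly(num, t1, t2).coeffs()         # 3-linear polynomials in the lambdas
print(len(U2))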

Step 7. Let \({\tilde{G}}_i^{\mathcal {E}}=e_3^i(\Lambda _i, h_1, h_2)e_3^3(\Lambda _3, t_1, t_2)-e_3^3(\Lambda _3, h_1, h_2)e_3^i(\Lambda _i, t_1, t_2),\,i=1,2,3.\)

Steps 8–11. We compute the normal form \(N_1\) of \({\tilde{G}}_1^{{\mathcal {E}}}\) w.r.t. \({{\mathcal {G}}}^{\star }\), and we observe that \(N_1 \not \in {\mathbb {K}}(h_1, h_2)[t_1, t_2]{\setminus }\{0\}\).

Steps 12–13. We determine \(C_{1,3}\), which is the set of non-zero coefficients w.r.t. \(\{h_1, h_2\}\) of the coefficients w.r.t. \(\{t_1, t_2\}\) of \(N_1\). We solve the system of bilinear forms \(C_{1,3}\). We get \(\mathcal {V}\) as the set of solutions:

\(\{\{\lambda _{1,1} = 0, \lambda _{1,2} = 0, \lambda _{1,3} = 0, \lambda _{1,4} = \lambda _{1,4}, \lambda _{1,5} = 0, \lambda _{3,1} = 0, \lambda _{3,2} = 0, \lambda _{3,3} = 0, \lambda _{3,4} = \lambda _{3,4}, \lambda _{3,5} = 0\}, \{\lambda _{1,1} = 0, \lambda _{1,2} = \lambda _{1,2}, \lambda _{1,3} = 0, \lambda _{1,4} = \lambda _{1,4}, \lambda _{1,5} = 0, \lambda _{3,1} = 0, \lambda _{3,2} = \lambda _{3,2}, \lambda _{3,3} = 0, \lambda _{3,4} = \frac{\lambda _{1,4} \lambda _{3,2}}{\lambda _{1,2}}, \lambda _{3,5} = 0 \}, \{\lambda _{1,1} = \lambda _{1,1}, \lambda _{1,2} = \lambda _{1,2}, \lambda _{1,3} = 0, \lambda _{1,4} = \lambda _{1,4}, \lambda _{1,5} = 0, \lambda _{3,1} = \lambda _{3,1}, \lambda _{3,2} = \frac{\lambda _{1,2} \lambda _{3,1}}{\lambda _{1,1}}, \lambda _{3,3} = 0, \lambda _{3,4} = \frac{\lambda _{1,4} \lambda _{3,1}}{\lambda _{1,1}}, \lambda _{3,5} = 0\}, \{\lambda _{1,1} = \lambda _{1,1}, \lambda _{1,2} = \lambda _{1,2}, \lambda _{1,3} = \lambda _{1,3}, \lambda _{1,4} = \lambda _{1,4}, \lambda _{1,5} = 0, \lambda _{3,1} = \frac{\lambda _{1,1} \lambda _{3,3}}{\lambda _{1,3}}, \lambda _{3,2} = \frac{\lambda _{1,2} \lambda _{3,3}}{\lambda _{1,3}}, \lambda _{3,3} = \lambda _{3,3}, \lambda _{3,4} = \frac{\lambda _{1,4} \lambda _{3,3}}{\lambda _{1,3}}, \lambda _{3,5} = 0 \}, \{\lambda _{1,1} = \lambda _{1,1}, \lambda _{1,2} = \lambda _{1,2}, \lambda _{1,3} = \lambda _{1,3}, \lambda _{1,4} = \lambda _{1,4}, \lambda _{1,5} = \lambda _{1,5}, \lambda _{3,1} = 0, \lambda _{3,2} = 0, \lambda _{3,3} = 0, \lambda _{3,4} = 0, \lambda _{3,5} = 0\}, \{\lambda _{1,1} = \lambda _{1,1}, \lambda _{1,2} = \lambda _{1,2}, \lambda _{1,3} = -\lambda _{1,1}-\lambda _{1,5}, \lambda _{1,4} = 2 \lambda _{1,5}, \lambda _{1,5} = \lambda _{1,5}, \lambda _{3,1} = \lambda _{3,1}, \lambda _{3,2} = \lambda _{3,2}, \lambda _{3,3} = -\lambda _{3,1}-\lambda _{3,5}, \lambda _{3,4} = 2 \lambda _{3,5}, \lambda _{3,5} = \lambda _{3,5}\}, \{\lambda _{1,1} = \frac{\lambda _{1,5} \lambda _{3,1}}{\lambda _{3,5}}, \lambda _{1,2} = \frac{\lambda _{1,5} \lambda _{3,2}}{\lambda _{3,5}}, \lambda _{1,3} = \frac{\lambda _{1,5} \lambda _{3,3}}{\lambda _{3,5}}, \lambda _{1,4} = \frac{\lambda _{1,5} \lambda _{3,4}}{\lambda _{3,5}}, \lambda _{1,5} = \lambda _{1,5}, \lambda _{3,1} = \lambda _{3,1}, \lambda _{3,2} = \lambda _{3,2}, \lambda _{3,3} = \lambda _{3,3}, \lambda _{3,4} = \lambda _{3,4}, \lambda _{3,5} = \lambda _{3,5}\}, \{\lambda _{1,1} = -\lambda _{1,3}, \lambda _{1,2} = \lambda _{1,2}, \lambda _{1,3} = \lambda _{1,3}, \lambda _{1,4} = 0, \lambda _{1,5} = 0, \lambda _{3,1} = -\lambda _{3,3}, \lambda _{3,2} = \lambda _{3,2}, \lambda _{3,3} = \lambda _{3,3}, \lambda _{3,4} = 0, \lambda _{3,5} = 0\}, \{\lambda _{1,1} = -\lambda _{1,3}-\lambda _{1,5}, \lambda _{1,2} = \lambda _{1,2}, \lambda _{1,3} = \lambda _{1,3}, \lambda _{1,4} = 2 \lambda _{1,5}, \lambda _{1,5} = \lambda _{1,5}, \lambda _{3,1} = \lambda _{3,1}, \lambda _{3,2} = \lambda _{3,2}, \lambda _{3,3} = -\lambda _{3,1}, \lambda _{3,4} = 0, \lambda _{3,5} = 0\}\}.\)

\(\mathcal {V}\subset \mathscr {L}_d(D)\times \mathscr {L}_d(D)\) contains 9 generic solutions.

Steps 14–18. We construct \(\mathcal {V}_{1,2,3}\) and delete from it those solutions that annihilate all the polynomials in at least one of the sets \(U_i,\, i = 1,2\). We get that \(\mathcal {V}_{1,2,3}=\mathcal {V}\).

Step 19. We initialize \(C:=\emptyset \) and \(U_3:=\emptyset \).

Steps 20–40. Since \(\#(\mathcal {V}_{1,2,3})=9\), the for-loop consists of nine iterations. We compute the specializations \(\breve{G}_{1}^{\mathcal {E}}, \breve{G}_{2}^{\mathcal {E}}\) as well as the resultant \(\breve{R}_{1}^{\mathcal {E}}\). \(\textrm{deg}_{t_1}(\breve{R}_{1}^{\mathcal {E}})=3\). After some algebraic manipulations, we get that the new \(\mathcal {V}\) is

\(\{\{\lambda _{1,1} = -\lambda _{1,3}-\lambda _{1,5}, \lambda _{1,2} = \lambda _{1,2}, \lambda _{1,3} = \lambda _{1,3}, \lambda _{1,4} = 2 \lambda _{1,5}, \lambda _{1,5} = \lambda _{1,5}, \lambda _{2,1} = -\lambda _{2,3}-\lambda _{2,5}, \lambda _{2,2} = \lambda _{2,2}, \lambda _{2,3} = \lambda _{2,3}, \lambda _{2,4} = 2 \lambda _{2,5}, \lambda _{2,5} = \lambda _{2,5}, \lambda _{3,1} = \lambda _{3,1}, \lambda _{3,2} = \lambda _{3,2}, \lambda _{3,3} = -\lambda _{3,1}, \lambda _{3,4} = 0, \lambda _{3,5} = 0\},\,\,\{\lambda _{1,1} = \lambda _{1,1}, \lambda _{1,2} = \lambda _{1,2}, \lambda _{1,3} = -\lambda _{1,1}-\lambda _{1,5}, \lambda _{1,4} = 2 \lambda _{1,5}, \lambda _{1,5} = \lambda _{1,5}, \lambda _{2,1} = \lambda _{2,1}, \lambda _{2,2} = \lambda _{2,2}, \lambda _{2,3} = -\lambda _{2,1}-\lambda _{2,5}, \lambda _{2,4} = 2 \lambda _{2,5}, \lambda _{2,5} = \lambda _{2,5}, \lambda _{3,1} = \lambda _{3,1}, \lambda _{3,2} = \lambda _{3,2}, \lambda _{3,3} = -\lambda _{3,1}-\lambda _{3,5}, \lambda _{3,4} = 2 \lambda _{3,5}, \lambda _{3,5} = \lambda _{3,5}\}\}. \)

Thus, we have two solutions.

Back to Algorithm 3 at Step 12.

At this step of the algorithm, one takes a particular solution in \(\mathcal {V}\). In the second generic point of \(\mathcal {V}\) we take, for instance,

\(\{\lambda _{1,2} = 1, \lambda _{1,3} = 1, \lambda _{1,5} = -1, \lambda _{2,2} = 0, \lambda _{2,3} = 0, \lambda _{2,5} = 1, \lambda _{3,1} = 0, \lambda _{3,2} = 1\},\)

which produces the solution

$$\begin{aligned} {{\mathcal {S}}}_a=\left( \frac{t_1^2 + t_1 t_2 - t_2^2 - t_1}{t_1t_2},\,-\frac{t_1^3 - t_1^2 - 2 t_1 t_2 - t_2^2}{t_1^2t_2}\right) , \end{aligned}$$

and

$$\begin{aligned} {{\mathcal {Q}}}_a=\left( \frac{t_2}{(t_1 + t_2 - 2)^2}, -\frac{-t_2^2 + t_1 - 2}{(t_1 + t_2 - 2)^2}, \frac{1}{(t_1 + t_2 - 2)^2}\right) . \end{aligned}$$

6 Conclusions and future work

In this paper, given a surface \(\mathscr {S}\), rationally parameterized by \({{\mathcal {P}}}(\,{{\overline{t}}}\,)\), we present a method (see Algorithm 2) that determines a birational parameterization \({{\mathcal {Q}}}(\,{{\overline{t}}}\,)\) of \(\mathscr {S}\) as well as a rational map \({{\mathcal {S}}}:\mathbb {P}^{2}(\mathbb {K})\dasharrow \mathbb {P}^{2}(\mathbb {K})\) such that \({{\mathcal {P}}}(\,{{\overline{t}}}\,)={{\mathcal {Q}}}({{\mathcal {S}}}(\,{{\overline{t}}}\,)).\)

In addition, we present some improvements to the previous general computational approach by introducing two additional hypotheses. More precisely, we assume that the input surface \(\mathscr {S}\) admits a birational parameterization \({{\mathcal {Q}}}\) with empty base locus and that \({{\mathcal {P}}}\) is transversal. As a consequence of these two hypotheses, we see that the general method simplifies considerably.

As topics for further research on the problem, we mention some potential lines of work.

  1. 1.

    In order to simplify further the running performance of the method, one may consider computing probabilistically the fibre of the input parameterization; this could be done working on a suitable open subset as described in [10] (Algorithm-2). In this case, one should compute the fibre of the input parameterization by choosing several random points on the surface.

  2. 2.

    As mentioned in the introduction, the methods presented in this paper could be used as a general strategy to approach the problem of computing birational reparameterizations of both rational surfaces of any codimension and rational varieties of any dimension. We recall that, by definition, every rational variety admits a birational parameterization. For this purpose, one would have to develop further the theory of base points and of the generic fiber. For that, one may use the results in [12] and study generalizations of the results in [3] and [10] to arbitrary rational varieties.

  3. 3.

    In this paper, we have not paid attention to the field required to provide the birational parameterization. An interesting question is to analyze the optimality of the field extension used in the process. Especially interesting, in view of applications, is to analyze the problem when the ground field is \(\mathbb {Q}\) or a real extension of \(\mathbb {Q}\) and the output is required to be expressed over \(\mathbb {R}\). The situation then turns out to be more difficult, in the sense that real properness is not guaranteed, but ideas such as those developed in [1] could be combined with ours. In this sense, and in particular in Step 39 of Algorithm 1, in general we will not get rational coefficients (considered as solutions over the field of complex numbers). Although the algorithm is assumed to work over an algebraically closed field, it is important, from an implementation point of view, to decide how to deal computationally with these solutions. This is not the goal of this paper, but it is another future work that is worth considering.