# New Complexity Trade-Offs for the (Multiple) Number Field Sieve Algorithm in Non-Prime Fields


## Abstract

The selection of polynomials to represent number fields crucially determines the efficiency of the Number Field Sieve (NFS) algorithm for solving the discrete logarithm in a finite field. An important recent work due to Barbulescu et al. builds upon existing works to propose two new methods for polynomial selection when the target field is a non-prime field. These methods are called the generalised Joux-Lercier (GJL) and the Conjugation methods. In this work, we propose a new method (which we denote as \(\mathcal {A}\)) for polynomial selection for the NFS algorithm in fields \(\mathbb {F}_{Q}\), with \(Q=p^n\) and \(n>1\). The new method both subsumes and generalises the GJL and the Conjugation methods and provides new trade-offs for both *n* composite and *n* prime. Let us denote the variant of the (multiple) NFS algorithm using the polynomial selection method “X” by (M)NFS-X. Asymptotic analysis is performed for both the NFS-\(\mathcal {A}\) and the MNFS-\(\mathcal {A}\) algorithms. In particular, when \(p=L_Q(2/3,c_p)\), for \(c_p\in [3.39,20.91]\), the complexity of NFS-\(\mathcal {A}\) is better than the complexities of all previous algorithms whether classical or MNFS. The MNFS-\(\mathcal {A}\) algorithm provides lower complexity compared to the NFS-\(\mathcal {A}\) algorithm; for \(c_p\in (0, 1.12] \cup [1.45,3.15]\), the complexity of MNFS-\(\mathcal {A}\) is the same as that of the MNFS-Conjugation and for \(c_p\notin (0, 1.12] \cup [1.45,3.15]\), the complexity of MNFS-\(\mathcal {A}\) is lower than that of all previous methods.

## Keywords

Number Field Sieve (NFS) · NFS Algorithm · Polynomial Selection · Conjugation Method · Medium Prime Case

## 1 Introduction

Let \(\mathfrak {G}=\langle {\mathfrak {g}} \rangle \) be a finite cyclic group. The discrete log problem (DLP) in \(\mathfrak {G}\) is the following. Given \(({\mathfrak {g}},{\mathfrak {h}})\), compute the minimum non-negative integer \(\mathfrak {e}\) such that \({\mathfrak {h}}={\mathfrak {g}}^{\mathfrak {e}}\). For appropriately chosen groups \(\mathfrak {G}\), the DLP in \(\mathfrak {G}\) is believed to be computationally hard. This forms the basis of security of many important cryptographic protocols.
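
As a toy illustration of the DLP (not part of the paper's algorithms), the following sketch implements the classic baby-step giant-step method, which recovers \(\mathfrak {e}\) with about \(\sqrt{|\mathfrak {G}|}\) group operations; the prime 101 and the generator 2 are illustrative choices.

```python
import math

def bsgs(g, h, p, n):
    """Baby-step giant-step: return e in [0, n) with g^e = h (mod p),
    where n is the order of g modulo p; O(sqrt(n)) time and space."""
    m = math.isqrt(n) + 1
    baby, x = {}, 1
    for j in range(m):            # baby steps: remember g^j for j = 0..m-1
        baby.setdefault(x, j)
        x = x * g % p
    c = pow(pow(g, m, p), -1, p)  # giant-step factor g^{-m} (Python 3.8+)
    y = h % p
    for i in range(m):            # giant steps: compare h * g^{-im} against the table
        if y in baby:
            return i * m + baby[y]
        y = y * c % p
    return None                   # h is not in the subgroup generated by g

# 2 is a primitive root modulo the prime 101, so every e < 100 is recovered
e = bsgs(2, pow(2, 53, 101), 101, 100)
```

Of course, this exhaustive-search-style method is only feasible for tiny groups; the point of index calculus algorithms such as NFS is to do much better for the multiplicative groups of finite fields.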

Studying the hardness of the DLP on subgroups of the multiplicative group of a finite field is an important problem. There are two general algorithms for tackling the DLP on such groups. These are the function field sieve (FFS) [1, 2, 16, 18] algorithm and the number field sieve (NFS) [11, 17, 19] algorithm. Both these algorithms follow the framework of index calculus algorithms which is currently the standard approach for attacking the DLP in various groups.

For small characteristic fields, the FFS algorithm leads to a quasi-polynomial running time [6]. Using the FFS algorithm outlined in [6, 15], Granger et al. [12] reported a record computation of discrete log in the binary extension field \(\mathbb {F}_{2^{9234}}\). FFS also applies to the medium characteristic fields. Some relevant works along this line are reported in [14, 18, 25].

For medium to large characteristic finite fields, the NFS algorithm is the state-of-the-art. In the context of the DLP, the NFS was first proposed by Gordon [11] for prime order fields. The algorithm proceeded via number fields and one of the main difficulties in applying the NFS was in the handling of units in the corresponding ring of algebraic integers. Schirokauer [26, 28] proposed a method to bypass the problems caused by units. Further, Schirokauer [27] showed the application of the NFS algorithm to composite order fields. Joux and Lercier [17] presented important improvements to the NFS algorithm as applicable to prime order fields.

Joux, Lercier, Smart and Vercauteren [19] later showed that the NFS algorithm is applicable to all finite fields. Since then, several works [5, 13, 20, 24] have gradually improved the NFS in the context of medium to large characteristic finite fields.

The efficiency of the NFS algorithm is crucially dependent on the properties of the polynomials used to construct the number fields. Consequently, polynomial selection is an important step in the NFS algorithm and is an active area of research. The recent work [5] by Barbulescu et al. extends a previous method [17] for polynomial selection and also presents a new method. The extension of [17] is called the generalised Joux-Lercier (GJL) method while the new method proposed in [5] is called the Conjugation method. The paper also provides a comprehensive comparison of the trade-offs in the complexity of the NFS algorithm offered by the various polynomial selection methods.

The NFS based algorithm has been extended to multiple number field sieve algorithm (MNFS). The work [8] showed the application of the MNFS to medium to high characteristic finite fields. Pierrot [24] proposed MNFS variants of the GJL and the Conjugation methods. For more recent works on NFS we refer to [4, 7, 22].

Our contributions: In this work, we build on the works of [5, 17] to propose a new method of polynomial selection for NFS over \(\mathbb {F}_{p^n}\). The new method both subsumes and generalises the GJL and the Conjugation methods. There are two parameters to the method, namely a divisor *d* of the extension degree *n* and a parameter \(r\ge k\) where \(k=n/d\).

For \(d=1\), the new method becomes the same as the GJL method. For \(d=n\) and \(r=k=1\), the new method becomes the same as the Conjugation method. For \(d=n\) and \(r>1\); or, for \(1<d<n\), the new method provides polynomials which lead to different trade-offs than what was previously known. Note that the case \(1<d<n\) can arise only when *n* is composite, though the case \(d=n\) and \(r>1\) arises even when *n* is prime. So, the new method provides new trade-offs for both *n* composite and *n* prime.

For \(c_p > 4.1\), the complexity of the new method (MNFS-\(\mathcal {A}\)) is lower than the complexity \(L_Q(1/3,\theta _1)\) of the MNFS-GJL method.

The complexity of MNFS-\(\mathcal {A}\) with \(k=1\) and using linear sieving polynomials can be written as \(L_Q(1/3,\mathbf {C}(c_p,r))\), where \(\mathbf {C}(c_p,r)\) is a function of \(c_p\) and a parameter *r*. For every integer \(r\ge 1\), there is an interval \([\epsilon _0(r),\epsilon _1(r)]\) such that for \(c_p\in [\epsilon _0(r),\epsilon _1(r)]\), \(\mathbf {C}(c_p,r)<\mathbf {C}(c_p,r^{\prime })\) for \(r\ne r^{\prime }\). Further, for a fixed *r*, let *C*(*r*) be the minimum value of \(\mathbf {C}(c_p,r)\) over all \(c_p\). We show that *C*(*r*) is monotone increasing for \(r\ge 1\); \(C(1)=\theta _0\); and that *C*(*r*) is bounded above by \(\theta _1\) which is its limit as *r* goes to infinity. So, for the new method the minimum complexity is the same as MNFS-Conjugation method. On the other hand, as *r* increases, the complexity of MNFS-\(\mathcal {A}\) remains lower than the complexities of all the prior known methods. In particular, the complexity of MNFS-\(\mathcal {A}\) interpolates nicely between the complexity of the MNFS-GJL and the minimum possible complexity of the MNFS-Conjugation method. This is depicted in Fig. 1. In Fig. 4 of Sect. 8.1, we provide a more detailed plot of the complexity of MNFS-\(\mathcal {A}\) in the boundary case.

The complete statement regarding the complexity of MNFS-\(\mathcal {A}\) in the boundary case is the following. For \(c_p\in (0,1.12]\cup [1.45,3.15]\), the complexity of MNFS-\(\mathcal {A}\) is the same as that of MNFS-Conjugation; for \(c_p\notin (0,1.12]\cup [1.45,3.15]\), the complexity of MNFS-\(\mathcal {A}\) is lower than that of all previous methods. In particular, the improvements for \(c_p\) in the range (1.12, 1.45) are obtained using \(k=2\) and 3, while the improvements for \(c_p>3.15\) are obtained using \(k=1\) and \(r>1\). In all cases, the minimum complexity is obtained using linear sieving polynomials.

## 2 Background on NFS for Non-Prime Fields

We provide a brief sketch of the background on the variant of the NFS algorithm that is applicable to the extension fields \(\mathbb {F}_{Q}\), where \(Q=p^n\), *p* is a prime and \(n>1\). More detailed discussions can be found in [5, 17].

Following the structure of index calculus algorithms, NFS has three main phases, namely, relation collection (sieving), linear algebra and descent. Prior to these, is the set-up phase. In the set-up phase, two number fields are constructed and the sieving parameters are determined. The two number fields are set up by choosing two irreducible polynomials *f*(*x*) and *g*(*x*) over the integers such that their reductions modulo *p* have a common irreducible factor \(\varphi (x)\) of degree *n* over \(\mathbb {F}_{p}\). The field \(\mathbb {F}_{p^n}\) will be considered to be represented by \(\varphi (x)\). Let \({\mathfrak {g}}\) be a generator of \(\mathfrak {G}=\mathbb {F}_{p^n}^\star \) and let *q* be the largest prime dividing the order of \(\mathfrak {G}\). We are interested in the discrete log of elements of \(\mathfrak {G}\) to the base \({\mathfrak {g}}\) modulo this largest prime *q*.
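
The defining condition of the set-up phase — that the reductions of *f* and *g* modulo *p* share an irreducible factor \(\varphi (x)\) of degree *n* — can be checked with a few lines of polynomial arithmetic over \(\mathbb {F}_p\). The tiny instance below (with \(p=7\), \(n=2\)) is an illustrative choice, not an example from the paper.

```python
def trim(a, p):
    """Reduce a coefficient list (constant term first) mod p, dropping leading zeros."""
    a = [c % p for c in a]
    while a and a[-1] == 0:
        a.pop()
    return a

def poly_rem(a, b, p):
    """Remainder of a modulo b over F_p."""
    a, b = trim(a, p), trim(b, p)
    inv = pow(b[-1], -1, p)
    while len(a) >= len(b):
        k, c = len(a) - len(b), a[-1] * inv % p
        for i in range(len(b)):
            a[i + k] = (a[i + k] - c * b[i]) % p
        a = trim(a, p)
    return a

def poly_gcd(a, b, p):
    """Monic gcd of a and b over F_p."""
    while b:
        a, b = b, poly_rem(a, b, p)
    inv = pow(a[-1], -1, p)
    return [c * inv % p for c in a]

p = 7
f = [1, 0, 1]    # f(x) = x^2 + 1, irreducible mod 7 (-1 is a quadratic non-residue)
g = [1, 7, 1]    # g(x) = x^2 + 7x + 1, which reduces to f(x) mod 7
phi = poly_gcd(f, g, p)   # the common factor representing F_{7^2}
```

In a real instance, of course, *f* and *g* are produced by a polynomial selection method rather than by hand, and \(\varphi \) is the designated degree-*n* common factor.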

The polynomials *f*(*x*) and *g*(*x*) are crucial to the algorithm and greatly affect its overall runtime. Let \(\alpha ,\beta \in \mathbb {C}\) and \(m \in \mathbb {F}_{p^{n}}\) be roots of the polynomials \(f(x)\), \(g(x)\) and \(\varphi (x)\) respectively. We further let *l*(*f*) and *l*(*g*) denote the leading coefficients of the polynomials *f*(*x*) and *g*(*x*) respectively. The two number fields and the finite field are given as follows.

*B* is the smoothness bound and is to be chosen appropriately. An algebraic integer is said to be *B*-smooth if the principal ideal generated by it factors into prime ideals of norms less than *B*. As mentioned in the paper [5], independently of the choice of *f* and *g*, the size of the factor basis is \(B^{1+o(1)}\). For asymptotic computations, this is simply taken to be *B*. The work flow of NFS can be understood from the diagram in Fig. 2.

A sieving polynomial \(\phi (x)\) (having *t* coefficients) is chosen and the principal ideals generated by its images in the two number fields are checked for smoothness. If both of them are smooth, then a linear relation among virtual logarithms is obtained, in which the **virtual logarithm** of each unit \(u_{i,j}\) appears along with the unknown **virtual logarithm** \(X_{i,j}= h_i^{-1} \log _g \overline{\varepsilon _{i,j}}\) of each prime ideal \(\mathfrak {q}_{i,j}\); here \(\lambda _{i,j}: \mathcal {O}_i \rightarrow \mathbb {Z}/q\mathbb {Z}\) is a Schirokauer map [19, 26, 28]. We skip the details of virtual logarithms and Schirokauer maps, as these details will not affect the polynomial selection problem considered in this work.

Each relation yields a linear equation modulo *q* in the unknown virtual logs. More than \((\#\mathcal {F}_1+\#\mathcal {F}_2+ r_1 + r_2)\) such relations are collected by sieving over suitable \(\phi (x)\). The linear algebra step solves the resulting system of linear equations using either the Lanczos or the block Wiedemann algorithm to obtain the virtual logs of the factor basis elements.

After the linear algebra phase is over, the descent phase is used to compute the discrete logs of the given elements of the field \(\mathbb {F}_{p^{n}}\). For a given element \(\mathfrak {y}\) of \(\mathbb {F}_{p^{n}}\), one looks for an element of the form \(\mathfrak {y}^{i}{\mathfrak {g}}^{j}\), for some \(i,j\in \mathbb {N}\), such that the principal ideal generated by the preimage of \(\left( \mathfrak {y}^{i}{\mathfrak {g}}^{j}\right) \) in \(\mathcal {O}_1\) factors into prime ideals of norms bounded by some bound \(B_1\) and of degree at most \(t-1\). Then the special-\(\mathfrak {q}\) descent technique [19] is used to write the ideal generated by the preimage as a product of prime ideals in \(\mathcal {F}_1\), which is then converted into a linear equation involving virtual logs. Substituting the values of the virtual logs obtained from the linear algebra phase yields the value of \(\log _{\mathfrak {g}}(\mathfrak {y})\). For more details and recent work on the descent phase, we refer to [13, 19].

## 3 Polynomial Selection and Sizes of Norms

It is evident from the description of NFS that the relation collection phase requires polynomials \(\phi (x)\in \mathbb {Z}[x]\) whose images in the two number fields are simultaneously smooth. For ensuring the smoothness of \(\phi (\alpha )\) and \(\phi (\beta )\), it is enough to ensure that their norms viz, \(\mathrm{Res}(f,\phi )\) and \(\mathrm{Res}(g,\phi )\) are *B*-smooth. We refer to [5] for further explanations.

The sieving polynomials \(\phi (x)\) have degree at most \(t-1\), which is less than the degree of *f*. Let *E* be such that the coefficients of \(\phi \) are in \(\left[ -\frac{1}{2}E^{2/t},\frac{1}{2}E^{2/t}\right] \). So, \(||\phi ||_\infty \approx E^{2/t}\) and the number of polynomials \(\phi (x)\) considered for the sieving is \(E^2\). Whenever \(p=L_Q(a,c_p)\) with \(a > \frac{1}{3}\), we have the following bound on \(\mathrm{Res}(f,\phi ) \times \mathrm{Res}(g,\phi )\) (for details we refer to [5]):

$$ \mathrm{Res}(f,\phi ) \times \mathrm{Res}(g,\phi ) = O\left( E^{2(\deg f + \deg g)/t}\, \big ( ||f ||_\infty \, ||g ||_\infty \big )^{t-1} \right) . \qquad (7) $$

When *p* is large relative to *n*, the sieving polynomial \(\phi (x)\) is taken to be linear, i.e., \(t=2\), and then the norm bound becomes approximately \(||f ||_\infty ||g ||_\infty E^{(\deg f +\deg g)}\).
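
As a concrete check of how these norms arise, the resultant can be computed as the determinant of the Sylvester matrix; for a linear sieving polynomial \(\phi (x)=x-m\) this recovers the familiar evaluation \(\mathrm{Res}(f,\phi )=\pm f(m)\). The following is a generic stdlib sketch with illustrative polynomials, not code from the paper.

```python
from fractions import Fraction

def sylvester(f, g):
    """Sylvester matrix of f and g (coefficient lists, highest degree first)."""
    m, n = len(f) - 1, len(g) - 1
    rows = [[0] * i + f + [0] * (n - 1 - i) for i in range(n)]
    rows += [[0] * i + g + [0] * (m - 1 - i) for i in range(m)]
    return rows

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c]), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            t = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= t * M[c][k]
    return d

def resultant(f, g):
    return det(sylvester(f, g))

# Res(f, x - 3) = -f(3) = -26 for the cubic f(x) = x^3 - 2x + 5
r = resultant([1, 0, -2, 5], [1, -3])
```

The sign depends on the degrees; what matters for smoothness testing is the absolute value of the norm.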

The choices of *f* and *g* result in the coefficients of one or both of these polynomials depending on *Q*. So, the right hand side of (7) is determined by *Q* and *E*. All polynomial selection algorithms try to minimise the RHS of (7). From the bound in (7), it is evident that during polynomial selection, the goal should be to keep the degrees and the coefficients of both *f* and *g* small. Ensuring that both degrees and coefficients are small is a nontrivial task and leads to a trade-off. Previous methods for polynomial selection provide different trade-offs between the degrees and the coefficients. Estimates of *Q*-*E* trade-off values have been provided in [5] based on the CADO factoring software [3]. Table 1 reproduces these values, where *Q* (dd) denotes the number of decimal digits in *Q*.

Estimate of *Q*-*E* values [5].

| *Q* (dd) | 100 | 120 | 140 | 160 | 180 | 200 | 220 | 240 | 260 | 280 | 300 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| *Q* (bits) | 333 | 399 | 466 | 532 | 598 | 665 | 731 | 798 | 864 | 931 | 997 |
| \(\log_2 E\) | 20.9 | 22.7 | 24.3 | 25.8 | 27.2 | 28.5 | 29.7 | 30.9 | 31.9 | 33.0 | 34.0 |

As mentioned in [5, 13], presently the following three polynomial selection methods provide competitive trade-offs.

Brief descriptions of these methods are given below.

**JLSV1.** Repeat the following steps until *f* and *g* are obtained to be irreducible over \(\mathbb {Z}\) and \(\varphi \) is irreducible over \(\mathbb {F}_p\).

- 1.
Randomly choose polynomials \(f_0(x)\) and \(f_1(x)\) having small coefficients with \(\deg (f_1) < \deg (f_0) = n\).

- 2.
Randomly choose an integer \(\ell \) to be slightly greater than \(\lceil \sqrt{p}\rceil \).

- 3.
Let (*u*, *v*) be the rational reconstruction of \(\ell \) in \(\mathbb {F}_p\), i.e., \(\ell \equiv u/v \mod p\).

- 4.
Define \(f(x)=f_0(x)+ \ell f_1(x)\), \(g(x)=vf_0(x)+uf_1(x)\) and \(\varphi (x)=f(x)\mod p\).

Note that \(\deg (f)=\deg (g)=n\) and both \(||f||_\infty \) and \(||g ||_\infty \) are \(O\left( p^{1/2}\right) =O\left( Q^{1/(2n)}\right) \) and so (7) becomes \(E^{4n/t}Q^{(t-1)/n}\) which is \(E^{2n}Q^{1/n}\) for \(t=2\).
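
Both JLSV1 and (later) the Conjugation method rely on rational reconstruction: given \(\ell \), find small (*u*, *v*) with \(\ell \equiv u/v \mod p\). A standard way to do this, sketched below with illustrative numbers, is to stop the extended Euclidean algorithm once the remainder drops below \(\sqrt{p}\), which yields \(|u|,|v|=O(p^{1/2})\).

```python
import math

def rational_reconstruction(l, p):
    """Return (u, v) with u ≡ v*l (mod p) and |u|, |v| = O(sqrt(p)),
    by stopping the extended Euclidean algorithm at sqrt(p)."""
    bound = math.isqrt(p)
    r0, r1 = p, l % p
    t0, t1 = 0, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1   # remainder sequence, invariant r_i ≡ t_i * l (mod p)
        t0, t1 = t1, t0 - q * t1
    return r1, t1                  # l ≡ r1 / t1 (mod p)

p = 1000003                        # an illustrative prime
u, v = rational_reconstruction(123456789 % p, p)
```

The size guarantee follows from the classical identity \(|t_{i+1}|\,r_i \le p\) for the extended Euclidean sequence.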

**GJL.** The basic Joux-Lercier method [17] works for prime fields. The generalised Joux-Lercier method extends the basic Joux-Lercier method to work over composite fields \(\mathbb {F}_{p^n}\).

For \(r\ge n\), define an \((r+1)\times (r+1)\) matrix \(M_{\varphi ,r}\) in the following manner: the first *n* rows are *p* times the first *n* rows of the identity matrix, and for \(0\le i\le r-n\), row \(n+1+i\) is the coefficient vector of \(x^i\varphi (x)\), i.e., \((0,\ldots ,0,\varphi _0,\ldots ,\varphi _{n-1},1,0,\ldots ,0)\) with \(\varphi _0\) in column \(i+1\) (8). Since \(\varphi \) is monic, \(M_{\varphi ,r}\) is block triangular and \(\det (M_{\varphi ,r})=p^n\). Let *g*(*x*) be the polynomial whose coefficient vector is the first row of an LLL-reduced basis of \(M_{\varphi ,r}\) (9). By construction, \(\varphi (x)\) is a factor of *g*(*x*) modulo *p*.

The GJL procedure for polynomial selection is the following. Choose an \(r\ge n\) and repeat the following steps until *f* and *g* are irreducible over \(\mathbb {Z}\) and \(\varphi \) is irreducible over \(\mathbb {F}_p\).

- 1.
Randomly choose a polynomial *f*(*x*) of degree \(r+1\) which is irreducible over \(\mathbb {Z}\), has coefficients of size \(O(\ln (p))\), and has a monic irreducible factor \(\varphi (x)\) of degree *n* modulo *p*.

- 2.
Let \(\varphi (x)=x^n+\varphi _{n-1}x^{n-1}+\cdots +\varphi _1x+\varphi _0\) and \(M_{\varphi ,r}\) be the \((r+1)\times (r+1)\) matrix given by (8).

- 3.
Let \(g(x)=\mathrm{LLL}\left( M_{\varphi ,r}\right) \).

The polynomial *f*(*x*) has degree \(r+1\) and *g*(*x*) has degree *r*. The procedure is parameterised by the integer *r*.

The determinant of *M* is \(p^n\) and so from the properties of the LLL-reduced basis, the coefficients of *g*(*x*) are of the order \(O\left( p^{n/(r+1)}\right) = O\left( Q^{1/(r+1)}\right) \). The coefficients of *f*(*x*) are \(O(\ln p)\).

The bound on the norm given by (7) in this case is \(E^{2(2r+1)/t}Q^{(t-1)/(r+1)}\) which becomes \(E^{2r+1}Q^{1/(r+1)}\) for \(t=2\). Increasing *r* reduces the size of the coefficients of *g*(*x*) at the cost of increasing the degrees of *f* and *g*. In the concrete example considered in [5] and also in [24], *r* has been taken to be *n* and so *M* is an \((n+1)\times (n+1)\) matrix.
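
The role of the LLL step can be seen on a toy GJL instance. The following is a minimal textbook LLL (\(\delta =3/4\)) over exact rationals, applied to \(M_{\varphi ,r}\) for the illustrative values \(p=1009\), \(n=2\), \(r=2\) and \(\varphi (x)=x^2+487x+731\) (none of these are from the paper). Every vector of the row lattice corresponds to a polynomial divisible by \(\varphi \) modulo *p*, and LLL guarantees a first vector of length at most \(2^{r/4}\,p^{n/(r+1)}\), here about 142.

```python
from fractions import Fraction

def lll(B, delta=Fraction(3, 4)):
    """Textbook LLL reduction of an integer row basis, using exact rationals."""
    B = [list(map(int, row)) for row in B]
    n = len(B)

    def gram_schmidt():
        Bs = [[Fraction(x) for x in row] for row in B]
        mu = [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            for j in range(i):
                d = sum(x * x for x in Bs[j])
                mu[i][j] = sum(Fraction(B[i][t]) * Bs[j][t]
                               for t in range(len(B[i]))) / d
                for t in range(len(Bs[i])):
                    Bs[i][t] -= mu[i][j] * Bs[j][t]
        return mu, [sum(x * x for x in b) for b in Bs]

    k = 1
    mu, norm = gram_schmidt()
    while k < n:
        for j in range(k - 1, -1, -1):            # size-reduce row k
            q = round(mu[k][j])
            if q:
                for t in range(len(B[k])):
                    B[k][t] -= q * B[j][t]
                mu, norm = gram_schmidt()
        if norm[k] >= (delta - mu[k][k - 1] ** 2) * norm[k - 1]:
            k += 1                                 # Lovász condition holds
        else:
            B[k - 1], B[k] = B[k], B[k - 1]        # swap and step back
            mu, norm = gram_schmidt()
            k = max(k - 1, 1)
    return B

# GJL lattice for p = 1009, n = 2, r = 2, phi(x) = 731 + 487x + x^2
p = 1009
M = [[p, 0, 0],
     [0, p, 0],
     [731, 487, 1]]     # coefficient vector of phi, constant term first
g = lll(M)[0]            # coefficients of g(x): small, and divisible by phi mod p
```

Recomputing the Gram-Schmidt data after every change is quadratically wasteful but keeps the sketch short and obviously correct; production code (e.g., the LLL in Sage or fplll) maintains the data incrementally.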

**Conjugation.** Repeat the following steps until *f* and *g* are irreducible over \(\mathbb {Z}\) and \(\varphi \) is irreducible over \(\mathbb {F}_p\).

- 1.
Choose a quadratic monic polynomial \(\mu (x)\), having coefficients of size \(O(\ln p)\), which is irreducible over \(\mathbb {Z}\) and has a root \(\mathfrak {t}\) in \(\mathbb {F}_p\).

- 2.
Choose two polynomials \(g_0(x)\) and \(g_1(x)\) with small coefficients such that \(\deg g_1 < \deg g_0 = n\).

- 3.
Let (*u*, *v*) be a rational reconstruction of \(\mathfrak {t}\) modulo *p*, i.e., \(\mathfrak {t}\equiv u/v\mod p\).

- 4.
Define \(g(x)=v g_0(x) + u g_1(x)\) and \(f(x)=\mathrm{Res}_y \big (\mu (y),g_0(x)+y\;g_1(x)\big )\).

Note that \(\deg (f)=2n\), \(\deg (g)=n\), \(||f||_\infty = O(\ln p)\) and \(||g||_\infty = O(p^{1/2})=O(Q^{1/(2n)})\). In this case, the bound on the norm given by (7) is \(E^{6n/t}Q^{(t-1)/(2n)}\) which becomes \(E^{3n}Q^{1/(2n)}\) for \(t=2\).
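
For a quadratic monic \(\mu (y)=y^2+ay+b\), the resultant in step 4 expands to \(f=g_0^2-a\,g_0g_1+b\,g_1^2\), the product of \(g_0+yg_1\) over the two roots of \(\mu \). The sketch below works a toy instance with \(p=13\), \(\mu (y)=y^2+1\) and root \(\mathfrak {t}=5\); all numbers are illustrative, not from the paper.

```python
def pmul(a, b):
    """Multiply integer polynomials given as coefficient lists, constant first."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def padd(*polys):
    out = [0] * max(len(q) for q in polys)
    for q in polys:
        for i, c in enumerate(q):
            out[i] += c
    return out

def conjugation_f(a, b, g0, g1):
    """f(x) = Res_y(y^2 + a*y + b, g0(x) + y*g1(x)) = g0^2 - a*g0*g1 + b*g1^2."""
    return padd(pmul(g0, g0),
                [-a * c for c in pmul(g0, g1)],
                [b * c for c in pmul(g1, g1)])

p, t = 13, 5                      # mu(y) = y^2 + 1 has the root 5 mod 13 (5^2 = -1)
g0, g1 = [1, 0, 1], [0, 1]        # g0 = x^2 + 1, g1 = x
f = conjugation_f(0, 1, g0, g1)   # f = (x^2+1)^2 + x^2 = x^4 + 3x^2 + 1
phi = [c % p for c in padd(g0, [t * c for c in g1])]   # phi = g0 + t*g1 mod p
```

Modulo 13, *f* splits as \((g_0+5g_1)(g_0+8g_1)\) since the other root of \(\mu \) is \(-5\equiv 8\), which is exactly why \(\varphi =g_0+\mathfrak {t}g_1\) divides *f* modulo *p*.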

## 4 A Simple Observation

For the GJL method, while constructing the matrix *M*, the coefficients of the polynomial \(\varphi (x)\) are used. If, however, some of these coefficients are zero, then these may be ignored. The idea is given by the following result.

### **Proposition 1**

Let *n* be an integer, *d* a divisor of *n* and \(k=n/d\). Suppose *A*(*x*) is a monic polynomial of degree *k*. Let \(r\ge k\) be an integer and set \(\psi (x)=\mathrm{LLL}(M_{A,r})\). Define \(g(x)=\psi (x^d)\) and \(\varphi (x)=A(x^d)\). Then

- 1.
\(\mathrm{deg}(\varphi )=n\) and \(\mathrm{deg}(g)=rd\);

- 2.
\(\varphi (x)\) is a factor of *g*(*x*) modulo *p*;

- 3.
\(||g||_{\infty } = p^{n/(d(r+1))}\).

### *Proof*

The first point is straightforward. Note that by construction *A*(*x*) is a factor of \(\psi (x)\) modulo *p*. So, \(A(x^d)\) is a factor of \(\psi (x^d)=g(x)\) modulo *p*. This shows the second point. The coefficients of *g*(*x*) are the coefficients of \(\psi (x)\). Following the GJL method, \(||\psi ||_{\infty }=p^{k/(r+1)}=p^{n/(d(r+1))}\) and so the same holds for \(||g||_{\infty }\). This shows the third point. \(\square \)

Note that if we had defined \(g(x)=\mathrm{LLL}(M_{\varphi ,rd})\), then \(||g||_{\infty }\) would have been \(p^{n/(rd+1)}\). For \(d>1\), the value of \(||g||_{\infty }\) given by Proposition 1 is smaller.
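
The key step of Proposition 1 — substituting \(x^d\) for *x* — preserves divisibility by \(A(x^d)\) modulo *p*. A toy check with \(p=7\), \(A(x)=x^2+1\), \(d=2\) and a hand-built \(\psi \) that is a multiple of *A* modulo 7 (all values illustrative, not from the paper):

```python
def trim(a, p):
    a = [c % p for c in a]
    while a and a[-1] == 0:
        a.pop()
    return a

def poly_rem(a, b, p):
    """Remainder of a modulo b over F_p (coefficient lists, constant first)."""
    a, b = trim(a, p), trim(b, p)
    inv = pow(b[-1], -1, p)
    while len(a) >= len(b):
        k, c = len(a) - len(b), a[-1] * inv % p
        for i in range(len(b)):
            a[i + k] = (a[i + k] - c * b[i]) % p
        a = trim(a, p)
    return a

def substitute_power(poly, d):
    """Return the coefficients of poly(x^d) from those of poly(x)."""
    out = [0] * ((len(poly) - 1) * d + 1)
    for i, c in enumerate(poly):
        out[i * d] = c
    return out

p, d = 7, 2
A = [1, 0, 1]                    # A(x) = x^2 + 1, degree k = 2
psi = [2, 8, 2, 1]               # psi = (x+2)*A + 7x, so A | psi mod 7 (r = 3)
g = substitute_power(psi, d)     # g(x) = psi(x^2), degree rd = 6
phi = substitute_power(A, d)     # phi(x) = A(x^2) = x^4 + 1, degree n = 4
```

Since \(A(x)\mid \psi (x) \bmod p\) implies \(A(x^d)\mid \psi (x^d) \bmod p\), the divisibility survives the substitution, as the assertions confirm on this instance.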

**A Variant.** The above idea shows how to avoid the zero coefficients of \(\varphi (x)\). A similar idea can be used to avoid the coefficients of \(\varphi (x)\) which are small. Suppose that \(\varphi (x)\) can be written in a form where, besides the leading coefficient, only the coefficients \(\varphi _{i_1},\ldots ,\varphi _{i_k}\) are large, while the \(\varphi _j\) for \(j\notin \{i_1,\ldots ,i_k\}\) are *O*(1); some or even all of these \(\varphi _j\)’s could be zero. A \((k+1)\times (k+1)\) matrix *M* is constructed from *p* and the large coefficients of \(\varphi (x)\); note that *M* has only one row obtained from \(\varphi (x)\) and it is difficult to use more than one row. Apply the LLL algorithm to *M* and write the first row of the resulting LLL-reduced matrix as \([g_{i_1},\ldots ,g_{i_k},g_n]\); placing these entries at the corresponding positions of \(\varphi (x)\) defines *g*(*x*). The degree of *g*(*x*) is *n* and the bound on the coefficients of *g*(*x*) is determined as follows. The determinant of *M* is \(p^k\) and, by the LLL-reduced property, each of the coefficients \(g_{i_1},\ldots ,g_{i_k},g_n\) is \(O(p^{k/(k+1)})=O(Q^{k/(n(k+1))})\). Since the \(\varphi _j\) for \(j\notin \{i_1,\ldots ,i_k\}\) are all *O*(1), all the coefficients of *g*(*x*) are \(O(Q^{k/(n(k+1))})\) and so \(||g||_{\infty }=O(Q^{k/(n(k+1))})\).

## 5 A New Polynomial Selection Method

In the simple observation made in the earlier section, the non-zero terms of the polynomial *g*(*x*) are powers of \(x^d\). This restriction turns out to be unnecessary for applying the main idea of the previous section. Once the polynomial \(\psi (x)\) is obtained using the LLL method, it is possible to substitute any degree-*d* polynomial with small coefficients for *x*, and the norm bound still holds. In fact, the idea can be expressed more generally in terms of resultants. Algorithm \(\mathcal {A}\) describes the new general method for polynomial selection.

### **Proposition 2**

The outputs *f*(*x*), *g*(*x*) and \(\varphi (x)\) of Algorithm \(\mathcal {A}\) satisfy the following.

- 1.
\(\mathrm{deg}(f)=d(r+1)\); \(\mathrm{deg}(g)=rd\) and \(\mathrm{deg}(\varphi )=n\);

- 2.
both *f*(*x*) and *g*(*x*) have \(\varphi (x)\) as a factor modulo *p*;

- 3.
\(||f||_{\infty }=O(\ln (p))\) and \(||g||_{\infty }=O(Q^{1/(d(r+1))})\).

### *Proof*

By definition \(f(x)=\mathrm{Res}_y\left( A_1(y), C_0(x) + y\,C_1(x) \right) \) where \(A_1(x)\) has degree \(r+1\), \(C_0(x)\) has degree *d* and \(C_1(x)\) has degree \(d-1\), so the degree of *f*(*x*) is \(d(r+1)\). Similarly, one obtains the degree of \(\varphi (x)\) to be *n*. Since \(\psi (x)\) is obtained from \(A_2(x)\) as \(\mathrm{LLL}(M_{A_2,r})\) it follows that the degree of \(\psi (x)\) is *r* and so the degree of *g*(*x*) is *rd*.

Since \(A_2(x)\) divides \(A_1(x)\) modulo *p*, it follows from the definition of *f*(*x*) and \(\varphi (x)\) that modulo *p*, \(\varphi (x)\) divides *f*(*x*). Since \(\psi (x)\) is a linear combination of the rows of \(M_{A_2,r}\), it follows that modulo *p*, \(\psi (x)\) is a multiple of \(A_2(x)\). So, \(g(x)=\mathrm{Res}_y\left( \psi (y), C_0(x) + y\,C_1(x) \right) \) is a multiple of \(\varphi (x)=\mathrm{Res}_y\left( A_2(y), C_0(x) + y\,C_1(x) \right) \) modulo *p*.

Since the coefficients of \(C_0(x)\) and \(C_1(x)\) are *O*(1) and the coefficients of \(A_1(x)\) are \(O(\ln p)\), it follows that \(||f||_{\infty }=O(\ln p)\). The coefficients of *g*(*x*) are *O*(1) multiples of the coefficients of \(\psi (x)\). By the third point of Proposition 1, the coefficients of \(\psi (x)\) are \(O(p^{n/(d(r+1))})=O(Q^{1/(d(r+1))})\), which shows that \(||g||_{\infty }=O(Q^{1/(d(r+1))})\). \(\square \)

Proposition 2 provides the relevant bound on the product of the norms of a sieving polynomial \(\phi (x)\) in the two number fields defined by *f*(*x*) and *g*(*x*). We note the following points.

- 1.
If \(d=1\), then the norm bound is \(E^{2(2r+1)/t}Q^{(t-1)/(r+1)}\) which is the same as that obtained using the GJL method.

- 2.
If \(d=n\), then the norm bound is \(E^{2n(2r+1)/t}Q^{(t-1)/(n(r+1))}\). Further, if \(r=k=1\), then the norm bound is the same as that obtained using the Conjugation method. So, for \(d=n\), Algorithm \(\mathcal {A}\) is a generalisation of the Conjugation method. Later, we show that choosing \(r>1\) provides asymptotic improvements.

- 3.
If *n* is a prime, then the only values of *d* are 1 and *n*. The norm bounds in these two cases are covered by the above two points.

- 4.
If *n* is composite, then there are non-trivial values for *d* and it is possible to obtain new trade-offs in the norm bound. For concrete situations, this can be of interest. Further, for composite *n*, as the value of *d* increases from \(d=1\) to \(d=n\), the norm bound interpolates between the norm bounds of the GJL method and the Conjugation method.

**Existence of \(\mathbb {Q}\)-automorphisms:** The existence of \(\mathbb {Q}\)-automorphisms in the number fields speeds up the NFS algorithm in the non-asymptotic sense [19]. As with the GJL method [5], the first polynomial generated by the new method can have a \(\mathbb {Q}\)-automorphism. In general, it is difficult to obtain an automorphism for the second polynomial, as it is generated by the LLL algorithm. In specific cases, however, a \(\mathbb {Q}\)-automorphism can be obtained for the second polynomial as well; some examples are reported in [10].

## 6 Non-asymptotic Comparisons and Examples

We compare the norm bounds for \(t=2\), i.e., when the sieving polynomial is linear. In this case, Table 2 lists the degrees and norm bounds of polynomials for various methods. Table 3 compares the new method with the JLSV1 and the GJL method for concrete values of *n*, *r* and *d*. This shows that the new method provides different trade-offs which were not known earlier.

Consider *Q* of 300 dd (refer to Table 1). As mentioned in [5], when the differences between the methods are small, it is not possible to decide by looking only at the size of the norm product. Keeping this in view, we see that the new method is competitive for \(n=6\) as well. These observations are clearly visible in the plots given in Fig. 3. From the *Q*-*E* pairs given in Table 1, it is clear that *E* increases more slowly than *Q*. This suggests that the new method will become more competitive as *Q* gets larger.

Parameterised efficiency estimates for NFS obtained from the different polynomial selection methods.

| Methods | \(\deg f\) | \(\deg g\) | \(||f ||_\infty \) | \(||g ||_\infty \) | \(||f ||_\infty ||g ||_\infty E^{(\deg f +\deg g)}\) |
|---|---|---|---|---|---|
| JLSV1 | *n* | *n* | \(Q^{\frac{1}{2n}}\) | \(Q^{\frac{1}{2n}}\) | \(E^{2n}Q^{\frac{1}{n}}\) |
| GJL (\(r\ge n\)) | \(r+1\) | *r* | \(O(\ln p)\) | \(Q^{\frac{1}{r+1}}\) | \(E^{2r+1}Q^{\frac{1}{r+1}}\) |
| Conjugation | 2*n* | *n* | \(O(\ln p)\) | \(Q^{\frac{1}{2n}}\) | \(E^{3n}Q^{\frac{1}{2n}}\) |
| \(\mathcal {A}\) (\(r\ge k=n/d\)) | \(d(r+1)\) | *rd* | \(O(\ln p)\) | \(Q^{\frac{1}{d(r+1)}}\) | \(E^{d(2r+1)}Q^{1/(d(r+1))}\) |

Comparison of efficiency estimates for composite *n* (with \(d=2\) and \(r=n/2\); for \(\mathbb {F}_{p^{9}}\), \(d=3\) and \(r=3\)).

| \(\mathbb {F}_{Q}\) | method | \((\deg f,\deg g)\) | \(||f ||_\infty \) | \(||g ||_\infty \) | \(||f ||_\infty ||g ||_\infty E^{(\deg f +\deg g)}\) |
|---|---|---|---|---|---|
| \(\mathbb {F}_{p^{4}}\) | GJL | (5, 4) | \(O(\ln p)\) | \(Q^{\frac{1}{5}}\) | \(E^9 Q^{\frac{1}{5}}\) |
| | JLSV1 | (4, 4) | \(Q^{\frac{1}{8}}\) | \(Q^{\frac{1}{8}}\) | \(E^8 Q^{\frac{1}{4}}\) |
| | \(\mathcal {A}\) | (6, 4) | \(O(\ln p)\) | \(Q^{\frac{1}{6}}\) | \(E^{10} Q^{\frac{1}{6}}\) |
| \(\mathbb {F}_{p^{6}}\) | GJL | (7, 6) | \(O(\ln p)\) | \(Q^{\frac{1}{7}}\) | \(E^{13} Q^{\frac{1}{7}}\) |
| | JLSV1 | (6, 6) | \(Q^{\frac{1}{12}}\) | \(Q^{\frac{1}{12}}\) | \(E^{12} Q^{\frac{1}{6}}\) |
| | \(\mathcal {A}\) | (8, 6) | \(O(\ln p)\) | \(Q^{\frac{1}{8}}\) | \(E^{14} Q^{\frac{1}{8}}\) |
| \(\mathbb {F}_{p^{8}}\) | GJL | (9, 8) | \(O(\ln p)\) | \(Q^{\frac{1}{9}}\) | \(E^{17} Q^{\frac{1}{9}}\) |
| | JLSV1 | (8, 8) | \(Q^{\frac{1}{16}}\) | \(Q^{\frac{1}{16}}\) | \(E^{16} Q^{\frac{1}{8}}\) |
| | \(\mathcal {A}\) | (10, 8) | \(O(\ln p)\) | \(Q^{\frac{1}{10}}\) | \(E^{18} Q^{\frac{1}{10}}\) |
| \(\mathbb {F}_{p^{9}}\) | GJL | (10, 9) | \(O(\ln p)\) | \(Q^{\frac{1}{10}}\) | \(E^{19} Q^{\frac{1}{10}}\) |
| | JLSV1 | (9, 9) | \(Q^{\frac{1}{18}}\) | \(Q^{\frac{1}{18}}\) | \(E^{18} Q^{\frac{1}{9}}\) |
| | \(\mathcal {A}\) | (12, 9) | \(O(\ln p)\) | \(Q^{\frac{1}{12}}\) | \(E^{21} Q^{\frac{1}{12}}\) |
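
The entries of Table 3 can be compared numerically. Taking *Q* of 300 dd, i.e., \(\log_2 Q \approx 997\) and \(\log_2 E \approx 34.0\) from Table 1, the bit-size of each norm bound \(E^{a}Q^{1/b}\) for \(n=4\) is a two-line computation (a rough back-of-envelope check, not a claim from the paper):

```python
log2_Q, log2_E = 997.0, 34.0     # Q of 300 dd, E from Table 1

def log2_bound(e_exp, q_denom):
    """log2 of the norm bound E^{e_exp} * Q^{1/q_denom}."""
    return e_exp * log2_E + log2_Q / q_denom

# n = 4 rows of Table 3
bounds = {
    "GJL":   log2_bound(9, 5),    # E^9  Q^{1/5}  ~ 505 bits
    "JLSV1": log2_bound(8, 4),    # E^8  Q^{1/4}  ~ 521 bits
    "A":     log2_bound(10, 6),   # E^10 Q^{1/6}  ~ 506 bits
}
```

At this size the new method essentially matches GJL (within about a bit) while clearly beating JLSV1; since *E* grows more slowly than *Q*, the smaller exponent of *Q* for \(\mathcal {A}\) gains importance as *Q* increases.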

Next we provide some concrete examples of polynomials *f*(*x*), *g*(*x*) and \(\varphi (x)\) obtained using the new method. The examples are for \(n=6\) and \(n=4\). For \(n=6\), we have taken \(d=1,2,3\) and 6, and in each case *r* was chosen to be \(r=k=n/d\). For \(n=4\), we consider \(d=2\) with \(r=k=n/d\) and \(r=k+1\); and \(d=4\) with \(r=k\). These examples illustrate that the method works as predicted and returns the desired polynomials very quickly. We have used Sage [29] and the MAGMA computer algebra system [9] for all the computations done in this work.

### *Example 1*

*p* is a 201-bit prime given below.

### *Example 2*

*p* is a 301-bit prime given below.

## 7 Asymptotic Complexity Analysis

The goal of the asymptotic complexity analysis is to express the runtime of the NFS algorithm using the L-notation and at the same time obtain bounds on *p* for which the analysis is valid. Our description of the analysis is based on prior works predominantly those in [5, 17, 19, 24].

Let \(p=L_Q(a,c_p)\) (15), where the value of *a* will be determined later. Also, for each \(c_p\), the runtime of the NFS algorithm is the same for the whole family of finite fields \(\mathbb {F}_{p^n}\) where *p* is given by (15).
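
Throughout, \(L_Q(a,c)=\exp \big (c\,(\ln Q)^a(\ln \ln Q)^{1-a}\big )\); the parameter *a* interpolates between behaviour polynomial in \(\ln Q\) (at \(a=0\)) and exponential (at \(a=1\)). A one-line sketch of this standard notation, for orientation only:

```python
import math

def L(Q, a, c):
    """Subexponential L-notation: L_Q(a, c) = exp(c * (ln Q)^a * (ln ln Q)^(1-a))."""
    lnQ = math.log(Q)
    return math.exp(c * lnQ ** a * math.log(lnQ) ** (1 - a))

Q = 10 ** 30   # an illustrative field size
```

At \(a=0\) this collapses to \((\ln Q)^c\) and at \(a=1\) to \(Q^c\), with the NFS runtimes of interest sitting at \(a=1/3\).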

From Sect. 3, we recall the following.

- 1.
The number of polynomials to be considered for sieving is \(E^2\).

- 2.
The factor base is of size *B*.

Let \(E=B=L_Q(b,c_b)\), where the value of *b* will be determined later (16). Let \(\pi \) be the probability that the product of the norms of a random sieving polynomial is *B*-smooth. Let \(\Gamma =L_Q(z,\zeta )\) be the bound on this product and \(B=L_Q(b,c_b)\). Using the L-notation version of the Canfield-Erdös-Pomerance theorem, \(\pi ^{-1}=L_Q\left( z-b,\,(z-b)\zeta /c_b\right) \). Since about *B* relations are required, obtaining this number of relations requires \(B\pi ^{-1}\) trials. Balancing the cost of sieving and the linear algebra steps requires \(B\pi ^{-1}=B^2\), i.e., \(\pi ^{-1}=B\), which allows solving for \(c_b\). The runtime of the NFS algorithm is then \(B^2=L_Q(b,2c_b)\). So, to determine the runtime, we need to determine *b* and \(c_b\). The value of *b* will turn out to be 1/3 and the only real issue is the value of \(c_b\).

### **Lemma 1**

Fix *k* and *d*. Using the expressions for *p* and \(E(=B)\) given by (15) and (16), we obtain the following.

### *Proof*

### **Theorem 1**

**(Boundary Case).** Let *k* divide *n*, \(r\ge k\), \(t\ge 2\) and \(p=L_Q(2/3,c_p)\) for some \(0<c_p<1\). It is possible to ensure that the runtime of the NFS algorithm with polynomials chosen by Algorithm \(\mathcal {A}\) is \(L_Q(1/3,2c_b)\) where

### *Proof*

The *L*-expressions given by (21) have the same first component, and so the product of the norms is the \(L_Q\)-expression with that first component and the sum of the two second components.

### **Corollary 1**

**(Boundary Case of the Conjugation Method** [5]**).** Let \(r=k=1\). Then for \(p=L_Q(2/3,c_p)\), the runtime of the NFS algorithm is \(L_Q(1/3,2c_b)\) with

Allowing *r* to be greater than *k* leads to improved asymptotic complexity. We do not perform this analysis here. Instead, we perform it for the similar situation that arises in the multiple number field sieve algorithm.

### **Theorem 2**

**(Medium Characteristic Case).** Let \(p=L_Q(a,c_p)\) with \(a>1/3\). It is possible to ensure that the runtime of the NFS algorithm with the polynomials produced by Algorithm \(\mathcal {A}\) is \(L_Q(1/3,(32/3)^{1/3})\).

### *Proof*

The value of *t* is chosen as follows [5]. For \(0<c<1\), let \(t=c_tn((\ln Q)/(\ln \ln Q))^{-c}\). For the asymptotic analysis, \(t-1\) is also assumed to be given by the same expression for *t*. The expressions given by (21) are then rewritten by substituting this expression for *t* and by using the expression for *n* given in (15). Substituting the optimal value of *c* in the equation for \(c_b\), we obtain \(c_b=(2/3)^{2/3}\times ((2(2r+1))/(r+1))^{1/3}\). The value of \(c_b\) is minimised for \(r=1\), giving \(c_b=(4/3)^{1/3}\). \(\square \)
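
The final expression in the proof is easy to probe numerically: \(c_b(r)=(2/3)^{2/3}\big (2(2r+1)/(r+1)\big )^{1/3}\) is minimised at \(r=1\), where \(2c_b=(32/3)^{1/3}\) as claimed in the theorem, and increases towards the limit \((16/9)^{1/3}\) as \(r\rightarrow \infty \).

```python
def c_b(r):
    """c_b(r) = (2/3)^(2/3) * (2*(2r+1)/(r+1))^(1/3), from the proof of Theorem 2."""
    return (2 / 3) ** (2 / 3) * (2 * (2 * r + 1) / (r + 1)) ** (1 / 3)
```

This confirms that nothing is gained from \(r>1\) in the medium characteristic case, consistent with the choice made in the proof.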

Note that the parameter *a* which determines the size of *p* is not involved in any of the computation. The assumption \(a>1/3\) is required to ensure that the bound on the product of the norms can be taken to be the expression given by (7).

### **Theorem 3**

**(Large Characteristic).** It is possible to ensure that the runtime of the NFS algorithm with the polynomials produced by Algorithm \(\mathcal {A}\) is \(L_Q(1/3,(64/9)^{1/3})\) for \(p\ge L_Q(2/3,(8/3)^{1/3})\).

### *Proof*

Substituting the expression for *r* in (21), we obtain

Solving for *B*, we obtain

This is minimised by the smallest admissible value of *t*, which is \(t=2\). This gives \(c_b=(8/9)^{1/3}\).

Using \(2(a-e)=1+b\) and \(b=1/3\) we get \(a-e=2/3\). Note that \(r\ge k\) and so \(p\ge p^{k/r}=L_Q(a,(c_pk)/r)=L_Q(a-e,(2c_pk)/c_r)\). With \(t=2\), the value of \((c_pk)/c_r\) is equal to \((1/3)^{1/3}\) and so \(p\ge L_Q(2/3,(8/3)^{1/3})\). \(\square \)
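The constant in the statement of Theorem 3 is consistent with the values obtained in the proof; a quick numeric sanity check:

```python
# 2*c_b with c_b = (8/9)^(1/3) equals the (64/9)^(1/3) of Theorem 3,
# and 2*(1/3)^(1/3) equals the (8/3)^(1/3) in the bound on p.
assert abs(2*(8/9)**(1/3) - (64/9)**(1/3)) < 1e-12
assert abs(2*(1/3)**(1/3) - (8/3)**(1/3)) < 1e-12
```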

Theorems 2 and 3 show that the generality introduced by *k* and *r* does not affect the overall asymptotic complexity for the medium and large prime cases and the attained complexities in these cases are the same as those obtained for previous methods in [5].

## 8 Multiple Number Field Sieve Variant

As the name indicates, the multiple number field sieve variant uses several number fields. The discussion and the analysis will follow the works [8, 24].

There are two variants of the multiple number field sieve algorithm. In the first variant, the image of \(\phi (x)\) needs to be smooth in at least two of the number fields. In the second variant, the image of \(\phi (x)\) needs to be smooth in the first number field and at least one of the other number fields.

We have analysed both the variants of multiple number field sieve algorithm and found that the second variant turns out to be better than the first one. So we discuss the second variant of MNFS only. In contrast to the number field sieve algorithm, the right number field is replaced by a collection of *V* number fields in the second variant of MNFS. The sieving polynomial \(\phi (x)\) has to satisfy the smoothness condition on the left number field as before. On the right side, it is sufficient for \(\phi (x)\) to satisfy a smoothness condition on at least one of the *V* number fields.

Recall that Algorithm \(\mathcal {A}\) produces two polynomials *f*(*x*) and *g*(*x*) of degrees \(d(r+1)\) and *dr* respectively. The polynomial *g*(*x*) is defined as \(\mathrm{Res}_y(\psi (y),C_0(x)+yC_1(x))\) where \(\psi (x)=\mathrm{LLL}(M_{A_2,r})\), i.e., \(\psi (x)\) is defined from the first row of the matrix obtained after applying the LLL-algorithm to \(M_{A_2,r}\).

Methods for obtaining the collection of number fields on the right have been mentioned in [24]. We adapt one of these methods to our setting. Consider Algorithm \(\mathcal {A}\). Let \(\psi _1(x)\) be \(\psi (x)\) as above and let \(\psi _2(x)\) be the polynomial defined from the second row of the matrix \(M_{A_2,r}\). Define \(g_1(x)=\mathrm{Res}_y(\psi _1(y),C_0(x)+yC_1(x))\) and \(g_2(x)=\mathrm{Res}_y(\psi _2(y),C_0(x)+yC_1(x))\). Then choose \(V-2\) linear combinations \(g_i(x)=s_ig_1(x)+t_ig_2(x)\), for \(i=3,\ldots ,V\). Note that the coefficients \(s_i\) and \(t_i\) are of the size of \(\sqrt{V}\). All the \(g_i\)’s have degree *dr*. Asymptotically, \(||\psi _2||_{\infty }=||\psi _1||_{\infty }=Q^{1/(d(r+1))}\). Since we take \(V=L_Q(1/3)\), all the \(g_i\)’s have their infinity norms to be the same as that of *g*(*x*) given by Proposition 2.
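As an illustration, the linear-combination step can be sketched as follows; polynomials are represented as coefficient lists, the inputs are toy values, and the helper name `right_side_polys` is ours:

```python
import math
import random

def right_side_polys(g1, g2, V):
    """Extend g1, g2 to V polynomials g_i = s_i*g1 + t_i*g2 with
    distinct coefficient pairs (s_i, t_i) of size about sqrt(V)."""
    bound = max(1, math.isqrt(V))
    polys = [g1, g2]
    seen = set()
    while len(polys) < V:
        s, t = random.randint(1, bound), random.randint(1, bound)
        if (s, t) in seen:
            continue
        seen.add((s, t))
        polys.append([s*a + t*b for a, b in zip(g1, g2)])
    return polys

g1 = [3, 1, 4, 1, 5]   # toy coefficient lists of the same degree d*r
g2 = [2, 7, 1, 8, 2]
gs = right_side_polys(g1, g2, V=9)
assert len(gs) == 9 and all(len(g) == len(g1) for g in gs)
```

Since the \(s_i,t_i\) are of size about \(\sqrt{V}\) and \(V=L_Q(1/3)\), the infinity norms of all the \(g_i\)'s remain asymptotically the same as that of *g*(*x*), as stated above.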

Let *B* be the bound on the norms of the ideals which are in the factor basis defined by *f*. For each of the right number fields, let \(B^{\prime }\) be the bound on the norms of the ideals which are in the factor basis defined by each of the \(g_i\)’s. So, the size of the entire factor basis is \(B+VB^{\prime }\). The following condition balances the left portion and the right portion of the factor basis.

Let \(\pi \) be the probability that a sieving polynomial \(\phi (x)\) yields a relation, i.e., that it is smooth over the left factor basis and over *at least* one of the right factor bases. Further, let \(\Gamma _1=\mathrm{Res}_x(f(x),\phi (x))\) be the bound on the norm corresponding to the left number field and \(\Gamma _2=\mathrm{Res}_x(g_i(x),\phi (x))\) be the bound on the norm for any of the right number fields. Note that \(\Gamma _2\) is determined only by the degree and the \(L_{\infty }\)-norm of \(g_i(x)\) and hence is the same for all \(g_i(x)\)’s. Heuristically, we have

So, *B* relations are obtained in about \(B\pi ^{-1}\) trials. Balancing the cost of linear algebra and sieving, we have as before \(B=\pi ^{-1}\).
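Writing the balance out explicitly (assuming, as is standard in NFS analyses, that the sparse linear algebra costs about \(B^{2}\)): the sieving stage costs about \(B\pi ^{-1}\) trials, so

\[ B^{2} \;=\; B\pi ^{-1} \quad\Longrightarrow\quad B \;=\; \pi ^{-1}. \]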

As in [24], the following choices of *B* and *V* are made.

With these choices of *B* and *V*, it is possible to analyse the MNFS variant for Algorithm \(\mathcal {A}\) for three cases, namely, the medium prime case, the boundary case and the large characteristic case. Below we present the details of the boundary case. This presents a new asymptotic result.

### **Theorem 4**

**(MNFS-Boundary Case).** Let *k* divide *n*, \(r\ge k\), \(t \ge 2\) and \(p=L_Q(2/3,c_p)\). It is possible to ensure that the runtime of the MNFS algorithm with polynomials chosen by Algorithm \(\mathcal {A}\) is \(L_Q(1/3,2c_b)\) where

### *Proof*

### 8.1 Further Analysis of the Boundary Case

Theorem 4 expresses \(2c_b\) as a function of \(c_p\), *t*, *k* and *r*. Let us write this as \(2c_b=\mathbf {C}(c_p,t,k,r)\). It turns out that fixing the values of (*t*, *k*, *r*) gives a set *S*(*t*, *k*, *r*) such that for \(c_p\in S(t,k,r)\), \(\mathbf {C}(c_p,t,k,r)\le \mathbf {C}(c_p,t^{\prime },k^{\prime },r^{\prime })\) for any \((t^{\prime },k^{\prime },r^{\prime })\ne (t,k,r)\). In other words, for a choice of (*t*, *k*, *r*), there is a set of values for \(c_p\) where the minimum complexity of MNFS-\(\mathcal {A}\) is attained. The set *S*(*t*, *k*, *r*) could be empty implying that the particular choice of (*t*, *k*, *r*) is sub-optimal.

The interval \((0,1.12]\) is obtained as the union of the sets *S*(*t*, 1, 1) for \(t\ge 3\). Note that the choice \((t,k,r)=(t,1,1)\) specialises MNFS-\(\mathcal {A}\) to MNFS-Conjugation. So, for \(c_p\in (0,1.12]\cup [1.45,3.15]\) the complexity of MNFS-\(\mathcal {A}\) is the same as that of MNFS-Conjugation.

**Table 4.** Choices of (*t*, *k*, *r*) and the corresponding *S*(*t*, *k*, *r*).

| (*t*, *k*, *r*) | *S*(*t*, *k*, *r*) |
| --- | --- |
| (*t*, 1, 1), \(t\ge 3\) | \(\bigcup _{t\ge 3} S(t,1,1)\approx (0,1.12]\) |
| (2, 3, 3) | \([(1/3)(4\sqrt{21} + 20)^{1/3},(\sqrt{78}/9 + 29/36)^{1/3}]\approx [1.12,1.21]\) |
| (2, 2, 2) | \([(\sqrt{78}/9 + 29/36)^{1/3},(1/2)(4\sqrt{11}+ 11)^{1/3}]\approx [1.21,1.45]\) |
| (2, 1, 1) | \([(1/2)(4\sqrt{11}+ 11)^{1/3},(2\sqrt{62} + 31/2)^{1/3}]\approx [1.45,3.15]\) |
| (2, 1, 2) | \([(2\sqrt{62} + 31/2)^{1/3},(8\sqrt{33} + 45)^{1/3}]\approx [3.15,4.5]\) |
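The closed-form endpoints above can be checked against their stated decimal approximations; a quick numeric verification:

```python
# Endpoints of the sets S(t, k, r), paired with the stated
# decimal approximations.
endpoints = {
    1.12: (1/3) * (4*21**0.5 + 20)**(1/3),
    1.21: (78**0.5/9 + 29/36)**(1/3),
    1.45: (1/2) * (4*11**0.5 + 11)**(1/3),
    3.15: (2*62**0.5 + 31/2)**(1/3),
    4.5:  (8*33**0.5 + 45)**(1/3),
}
for approx, exact in endpoints.items():
    assert abs(exact - approx) < 5e-3
```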

In Fig. 4, we have plotted \(2c_b\) given by Theorem 4 against \(c_p\) for some values of *t*, *k* and *r* where the minimum complexity of MNFS-\(\mathcal {A}\) is attained. The plot is labelled MNFS-\(\mathcal {A}\). The sets *S*(*t*, *k*, *r*) are clearly identifiable from the plot. The figure also shows a similar plot for NFS-\(\mathcal {A}\) which shows the complexity in the boundary case given by Theorem 1. For comparison, we have plotted the complexities of the GJL and the Conjugation methods from [5] and the MNFS-GJL and the MNFS-Conjugation methods from [24].

Based on the plots given in Fig. 4, we have the following observations.

- 1.
Complexities of NFS-\(\mathcal {A}\) are never worse than the complexities of NFS-GJL and NFS-Conjugation. Similarly, complexities of MNFS-\(\mathcal {A}\) are never worse than the complexities of MNFS-GJL and MNFS-Conjugation.

- 2.
For both the NFS-\(\mathcal {A}\) and the MNFS-\(\mathcal {A}\) methods, increasing the value of *r* provides new complexity trade-offs.

- 3.
There is a value of \(c_p\) for which the minimum complexity is achieved. This corresponds to the MNFS-Conjugation. Let \(L_Q(1/3,\theta _0)\) be this complexity. The value of \(\theta _0\) is determined later.

- 4.
Let the complexity of the MNFS-GJL be \(L_Q(1/3,\theta _1)\). The value of \(\theta _1\) was determined in [24]. The plot for MNFS-\(\mathcal {A}\) approaches the plot for MNFS-GJL from below.

- 5.
For smaller values of \(c_p\), it is advantageous to choose \(t>2\) or \(k>1\). On the other hand, for larger values of \(c_p\), the minimum complexity is attained for \(t=2\) and \(k=1\).

From the plot, it can be seen that for larger values of \(c_p\), the minimum value of \(c_b\) is attained for \(t=2\) and \(k=1\). So, we decided to perform further analysis using these values of *t* and *k*.

### 8.2 Analysis for \(t=2\) and \(k=1\)

### **Theorem 5**

- 1.
\(C(1)=\theta _0=\left( \frac{146}{261} \, \sqrt{22} + \frac{208}{87}\right) ^{1/3}\).

- 2.
For \(r\ge 1\), *C*(*r*) is monotone increasing and bounded above.

- 3.
The limiting upper bound of *C*(*r*) is \(\theta _1=\left( \frac{2\times (13\sqrt{13}+46) }{ 27} \right) ^{1/3}\).

### *Proof*

First, we write *C* (in terms of *r*) as

The three sequences appearing in *C*(*r*), viz, \(\frac{\rho (r)}{3 \, {\left( r + 1\right) }}\), \( \frac{{\left( 3 \, r + 2\right) } r}{36 \, \rho (r)^{2}}\) and \( \frac{2 \, r + 1}{3 \, \rho (r)}\), are monotonic increasing. This can be verified through computation (with a symbolic algebra package) as follows. Let \(s_r\) be any one of these sequences. Then computing \(s_{r+1}/s_r\) gives a ratio of polynomial expressions from which it is possible to directly argue that \(s_{r+1}/s_r\) is greater than one. We have done these computations but do not present the details since they are uninteresting and quite messy. Since all three sequences are monotonic increasing, so is *C*(*r*).

*C*(*r*) is also bounded above. Being monotone increasing and bounded above, *C*(*r*) is convergent. We claim that the limit of *C*(*r*) as *r* goes to infinity is the value of \(\theta _1\) where \(L_Q(1/3,\theta _1)\) is the complexity of MNFS-GJL as determined in [24]. This shows that as *r* goes to infinity, the complexity of MNFS-\(\mathcal {A}\) approaches the complexity of MNFS-GJL from below.

As shown above, *C*(*r*) is monotone increasing for \(r\ge 1\). So, the minimum value of *C*(*r*) is obtained for \(r=1\). After simplifying *C*(1), we get the minimum complexity of MNFS-\(\mathcal {A}\) to be

### *Note 1*

As mentioned earlier, for \(r=k=1\), the new method of polynomial selection becomes the Conjugation method. So, the minimum complexity of MNFS-\(\mathcal {A}\) is the same as the minimum complexity of MNFS-Conjugation. Here we note that the value of the minimum complexity given by (38) is not the same as the one reported by Pierrot in [24]. This is due to an error in the calculation in [24]^{2}.
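Evaluating the two closed forms from Theorem 5 numerically (the decimal values in the comments are our computation):

```python
# theta_0: minimum complexity constant of MNFS-A (= MNFS-Conjugation).
theta0 = ((146/261) * 22**0.5 + 208/87)**(1/3)   # approx 1.7116
# theta_1: the MNFS-GJL constant which C(r) approaches from below.
theta1 = (2 * (13 * 13**0.5 + 46) / 27)**(1/3)   # approx 1.9019
assert abs(theta0 - 1.7116) < 5e-4
assert abs(theta1 - 1.9019) < 5e-4
assert theta0 < theta1
```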

**Complexity of NFS-\(\mathcal {A}\):** From Fig. 4, it can be seen that there is an interval for \(c_p\) for which the complexity of NFS-\(\mathcal {A}\) is better than both MNFS-Conjugation and MNFS-GJL. An analysis along the lines of the one done above can be carried out to formally show this. We skip the details since these are very similar to (actually a bit simpler than) the analysis done for MNFS-\(\mathcal {A}\). Here we simply mention the following results:

- 1.
For \(c_p\ge {\left( 2 \, \sqrt{89} + 20\right) }^{\frac{1}{3}}\approx 3.39\), the complexity of NFS-\(\mathcal {A}\) is better than that of MNFS-Conjugation.

- 2.
For \(c_p\le \frac{1}{8} \, \sqrt{390} \sqrt{{\left( 5 \, \sqrt{13} - 18\right) } {\left( \frac{26}{27} \, \sqrt{13} + \frac{92}{27}\right) }^{\frac{1}{3}}} + \frac{45}{8} \, {\left( \frac{26}{27} \, \sqrt{13} + \frac{92}{27}\right) }^{\frac{2}{3}} \approx 20.91\), the complexity of NFS-\(\mathcal {A}\) is better than that of MNFS-GJL.

- 3.
So, for \(c_p\in [3.39,20.91]\), the complexity of NFS-\(\mathcal {A}\) is better than the complexity of all previous methods including the MNFS variants.
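The two thresholds can be recomputed from the closed forms in the list above:

```python
# Threshold 1: NFS-A beats MNFS-Conjugation for c_p >= c1.
c1 = (2*89**0.5 + 20)**(1/3)
# Threshold 2: NFS-A beats MNFS-GJL for c_p <= c2.
A = ((26/27)*13**0.5 + 92/27)**(1/3)
c2 = (1/8)*390**0.5*((5*13**0.5 - 18)*A)**0.5 + (45/8)*A**2
assert abs(c1 - 3.39) < 5e-3
assert abs(c2 - 20.91) < 5e-3
assert c1 < c2   # so the interval [3.39, 20.91] is non-empty
```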

**Current state-of-the-art:** The complexity of MNFS-\(\mathcal {A}\) is lower than that of NFS-\(\mathcal {A}\). As mentioned earlier (before Table 4), the interval (0, 1.12] is the union \(\cup _{t\ge 3}S(t,1,1)\). This fact combined with Theorem 5 and Table 4 shows the following. For \(p=L_Q(2/3,c_p)\), when \(c_p\in (0,1.12]\cup [1.45,3.15]\), the complexity of MNFS-\(\mathcal {A}\) is the same as that of MNFS-Conjugation; for \(c_p\notin (0,1.12]\cup [1.45,3.15]\) and \(c_p>0\), the complexity of MNFS-\(\mathcal {A}\) is smaller than that of all previous methods. Hence, MNFS-\(\mathcal {A}\) provides the current state-of-the-art asymptotic complexity in the boundary case.

### 8.3 Medium and Large Characteristic Cases

In a manner similar to that used to prove Theorem 4, it is possible to work out the complexities for the medium and large characteristic cases of the MNFS corresponding to the new polynomial selection method. To tackle the medium prime case, the value of *t* is taken to be \(t=c_tn\left( (\ln Q)/(\ln \ln Q)\right) ^{-1/3}\) and to tackle the large prime case, the value of *r* is taken to be \(r=(c_r/2)\left( (\ln Q)/(\ln \ln Q)\right) ^{1/3}\). This will provide a relation between \(c_b,c_v\) and *r* (for the medium prime case) or *t* (for the large prime case). The method of Lagrange multipliers is then used to find the minimum value of \(c_b\). We have carried out these computations and the complexities turn out to be the same as those obtained in [24] for the MNFS-GJL (for large characteristic) and the MNFS-Conjugation (for medium characteristic) methods. Hence, we do not present these details.

## 9 Conclusion

In this work, we have proposed a new method for polynomial selection for the NFS algorithm for fields \(\mathbb {F}_{p^n}\) with \(n>1\). Asymptotic analysis of the complexity has been carried out both for the classical NFS and the MNFS algorithms for polynomials obtained using the new method. For the boundary case with \(p=L_Q(2/3,c_p)\) for \(c_p\) outside a small set, the new method provides complexity which is lower than all previously known methods.

## Footnotes

- 1.
The value of \(\theta _0\) obtained in [24] is incorrect.

- 2.
The error is the following. The solution for \(c_b\) to the quadratic \((18t^2c_p^2)c_b^2-(36tc_p)c_b+8-3t^2(t-1)c_p^3=0\) is \(c_b=1/(tc_p) + \sqrt{5/(9(c_pt)^2)+(c_p(t-1))/6}\) with the positive sign of the radical. In [24], the solution is erroneously taken to be \(1/(tc_p) + \sqrt{5/((9c_pt)^2)+(c_p(t-1))/6}\).
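Footnote 2 can be verified numerically: for sample values of *t* and \(c_p\) (our choice), the corrected root satisfies the quadratic while the misprinted one does not.

```python
import math

def quadratic(cb, t, cp):
    # (18 t^2 c_p^2) c_b^2 - (36 t c_p) c_b + 8 - 3 t^2 (t-1) c_p^3
    return 18*t**2*cp**2*cb**2 - 36*t*cp*cb + 8 - 3*t**2*(t - 1)*cp**3

for t, cp in [(2, 1.5), (3, 0.8), (2, 3.0)]:
    good = 1/(t*cp) + math.sqrt(5/(9*(cp*t)**2) + cp*(t - 1)/6)
    bad = 1/(t*cp) + math.sqrt(5/((9*cp*t)**2) + cp*(t - 1)/6)
    assert abs(quadratic(good, t, cp)) < 1e-9   # corrected root
    assert abs(quadratic(bad, t, cp)) > 1e-3    # misprinted root fails
```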

## References

- 1. Adleman, L.M.: The function field sieve. In: Adleman, L.M., Huang, M.-D. (eds.) ANTS 1994. LNCS, vol. 877, pp. 108–121. Springer, Heidelberg (1994)
- 2. Adleman, L.M., Huang, M.-D.A.: Function field sieve method for discrete logarithms over finite fields. Inf. Comput. **151**(1–2), 5–16 (1999)
- 3. Bai, S., Bouvier, C., Filbois, A., Gaudry, P., Imbert, L., Kruppa, A., Morain, F., Thomé, E., Zimmermann, P.: CADO-NFS, an implementation of the number field sieve algorithm. Release 2.1.1 (2014). http://cado-nfs.gforge.inria.fr/
- 4. Barbulescu, R.: An appendix for a recent paper of Kim. IACR Cryptology ePrint Archive 2015:1076 (2015)
- 5. Barbulescu, R., Gaudry, P., Guillevic, A., Morain, F.: Improving NFS for the discrete logarithm problem in non-prime finite fields. In: Oswald, E., Fischlin, M. (eds.) EUROCRYPT 2015. LNCS, vol. 9056, pp. 129–155. Springer, Heidelberg (2015)
- 6. Barbulescu, R., Gaudry, P., Joux, A., Thomé, E.: A heuristic quasi-polynomial algorithm for discrete logarithm in finite fields of small characteristic. In: Nguyen, P.Q., Oswald, E. (eds.) EUROCRYPT 2014. LNCS, vol. 8441, pp. 1–16. Springer, Heidelberg (2014)
- 7. Barbulescu, R., Gaudry, P., Kleinjung, T.: The tower number field sieve. In: Iwata, T., et al. (eds.) ASIACRYPT 2015. LNCS, vol. 9453, pp. 31–55. Springer, Heidelberg (2015). doi:10.1007/978-3-662-48800-3_2
- 8. Barbulescu, R., Pierrot, C.: The multiple number field sieve for medium and high characteristic finite fields. LMS J. Comput. Math. **17**, 230–246 (2014)
- 9. Bosma, W., Cannon, J., Playoust, C.: The Magma algebra system. I. The user language. J. Symbolic Comput. **24**(3–4), 235–265 (1997)
- 10. Gaudry, P., Grémy, L., Videau, M.: Collecting relations for the number field sieve in \(\text{GF}(p^6)\). Cryptology ePrint Archive, Report 2016/124 (2016). http://eprint.iacr.org/
- 11. Gordon, D.M.: Discrete logarithms in \(\text{GF}(p)\) using the number field sieve. SIAM J. Discrete Math. **6**, 124–138 (1993)
- 12. Granger, R., Kleinjung, T., Zumbrägel, J.: Discrete logarithms in \(\text{GF}(2^{9234})\). NMBRTHRY list, January 2014
- 13. Guillevic, A.: Computing individual discrete logarithms faster in \(\text{GF}(p^n)\). Cryptology ePrint Archive, Report 2015/513 (2015). http://eprint.iacr.org/
- 14. Joux, A.: Faster index calculus for the medium prime case. Application to 1175-bit and 1425-bit finite fields. In: Johansson, T., Nguyen, P.Q. (eds.) EUROCRYPT 2013. LNCS, vol. 7881, pp. 177–193. Springer, Heidelberg (2013)
- 15. Joux, A.: A new index calculus algorithm with complexity L(1/4 + o(1)) in small characteristic. In: Lange, T., Lauter, K., Lisoněk, P. (eds.) SAC 2013. LNCS, vol. 8282, pp. 355–379. Springer, Heidelberg (2014)
- 16. Joux, A., Lercier, R.: The function field sieve is quite special. In: Fieker, C., Kohel, D.R. (eds.) ANTS 2002. LNCS, vol. 2369, pp. 431–445. Springer, Heidelberg (2002)
- 17. Joux, A., Lercier, R.: Improvements to the general number field sieve for discrete logarithms in prime fields. A comparison with the Gaussian integer method. Math. Comput. **72**(242), 953–967 (2003)
- 18. Joux, A., Lercier, R.: The function field sieve in the medium prime case. In: Vaudenay, S. (ed.) EUROCRYPT 2006. LNCS, vol. 4004, pp. 254–270. Springer, Heidelberg (2006)
- 19. Joux, A., Lercier, R., Smart, N.P., Vercauteren, F.: The number field sieve in the medium prime case. In: Dwork, C. (ed.) CRYPTO 2006. LNCS, vol. 4117, pp. 326–344. Springer, Heidelberg (2006)
- 20. Joux, A., Pierrot, C.: The special number field sieve in \(\mathbb{F}_{p^{n}}\). In: Cao, Z., Zhang, F. (eds.) Pairing 2013. LNCS, vol. 8365, pp. 45–61. Springer, Heidelberg (2014)
- 21. Kalkbrener, M.: An upper bound on the number of monomials in determinants of sparse matrices with symbolic entries. Math. Pannonica **8**(1), 73–82 (1997)
- 22. Kim, T.: Extended tower number field sieve: a new complexity for medium prime case. IACR Cryptology ePrint Archive 2015:1027 (2015)
- 23. Matyukhin, D.: Effective version of the number field sieve for discrete logarithm in a field \(\text{GF}(p^k)\). Trudy po Discretnoi Matematike **9**, 121–151 (2006) (in Russian). http://m.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=tdm&paperid=144&option_lang=eng
- 24. Pierrot, C.: The multiple number field sieve with conjugation and generalized Joux-Lercier methods. In: Oswald, E., Fischlin, M. (eds.) EUROCRYPT 2015. LNCS, vol. 9056, pp. 156–170. Springer, Heidelberg (2015)
- 25. Sarkar, P., Singh, S.: Fine tuning the function field sieve algorithm for the medium prime case. IEEE Transactions on Information Theory (2016)
- 26. Schirokauer, O.: Discrete logarithms and local units. Philosophical Transactions: Physical Sciences and Engineering **345**, 409–423 (1993)
- 27. Schirokauer, O.: Using number fields to compute logarithms in finite fields. Math. Comp. **69**(231), 1267–1283 (2000)
- 28. Schirokauer, O.: Virtual logarithms. J. Algorithms **57**(2), 140–147 (2005)
- 29. Stein, W.A., et al.: Sage Mathematics Software. The Sage Development Team (2013). http://www.sagemath.org