Abstract
Recent progress on NFS has forced a reassessment of the security of pairings. In this work we study the best attacks against some of the most popular pairings and propose new key sizes, using an analysis which is more precise than that of a recent article of Menezes, Sarkar and Singh. We also select pairing-friendly curves for standard security levels.
Introduction
Pairing-based cryptography was introduced at the beginning of the century, allowing new protocols that could not be realized otherwise, such as identity-based cryptography [10], short signature schemes [22], or broadcast encryption [17]. It now has many more practical applications in various fields. A pairing is a non-degenerate bilinear map
$$\begin{aligned} e:{\mathbb {G}}_1\times {\mathbb {G}}_2\rightarrow {\mathbb {G}}_3. \end{aligned}$$
It is usually realized thanks to elliptic curves. More precisely, the groups \({\mathbb {G}}_1\) and \({\mathbb {G}}_2\) are subgroups or quotient groups of an elliptic curve defined over a finite field \({\mathbb {F}}_q\) or one of its extensions, and \({\mathbb {G}}_3\) is a subgroup or a quotient group of \({\mathbb {F}}_{q^k}^*\) where k is called the embedding degree. A suitable pairing for cryptographic applications requires that the discrete logarithm problem is sufficiently difficult on these three groups. The security of pairings defined over \({\mathbb {F}}_q\) having embedding degree k and group order r is determined by:

1.
the cost of the discrete logarithm problem (DLP) on an order r subgroup or quotient group of an elliptic curve defined over \({\mathbb {F}}_q\) (the curve side);

2.
the cost of the DLP in a quotient of the multiplicative group of \({\mathbb {F}}_{q^k}\) (the finite field side).
The security evaluation on the curve side is simple: if s is the desired level of security, we select r such that \(\log _2r\ge 2s\) because of Pollard's rho algorithm (and consequently \(\log _2q\ge 2s\)).
Attacks on the field side, however, are harder to estimate since the best algorithms belong to the Index Calculus family, whose complexity is hard to write down explicitly. The recommended key sizes can be found in the reports of the standardization organizations ISO [57] and IEEE [56]. NIST has not standardized pairing-based cryptography, but it has published reports [81] which do specify key sizes. The recommendations of ECRYPT [93], which were published by ENISA [84], corroborate those of ISO, IEEE and NIST.
Since 2013 there has been a series of attacks on the field side of pairings of small characteristic which completely invalidated the use of these pairings. The classical algorithms, Coppersmith [27] and the function field sieve [2, 5, 59, 61], were replaced by algorithms of smaller complexity [42, 63], followed by a heuristic quasi-polynomial algorithm [15]. ENISA reacted immediately, and in its standard document from October 2013 [84, page 32], the agency forbade the use of small-characteristic pairings. Improvements and records continued [7, 43, 65], and a second quasi-polynomial algorithm was proposed [46, 47]. Two record computations broke 128-bit pairings in characteristic 2 [45] and in characteristic 3 [1].
In the case of non-small characteristic, the best-known algorithm is the number field sieve (NFS) [30, 49, 60, 62, 89, 90, 92]. Since 2013, there has been a series of new variants and improvements [11, 12, 16, 24, 64, 87, 95,96,97] and record computations [11, 13, 36, 48, 69]. The extended tower number field sieve [58, 67] changed the complexity of the attacks considerably, and a precise analysis in the latter article of a popular pairing, Barreto–Naehrig at 128 bits of security, showed that the key sizes must be re-evaluated.
The implementation of SexTNFS requires coding from scratch a subprogram which can be called "sieving in higher dimension." The first implementations of this subprogram [52, 102] and [44] were all accompanied by algorithmic improvements, and the latest was used in a record computation [41]. The CADO-NFS software package [14] has a branch called nfs-hd which corresponds to sieving in higher dimension, but it could take years before its development is finished. (CADO-NFS has more than 200,000 lines of code and its development has already lasted 10 years, with continuous improvements in all the stages of NFS.)
The goal of this paper is to give a precise evaluation of the complexity of these algorithms in the absence of computational records. In the case of pairings, where the characteristic of the base field is parametrized by a polynomial, we obtain parameter sizes for the 128-, 192-, and 256-bit security levels and propose pairing-friendly curves which reach this security. We compare our analysis to that of a contemporaneous article of Menezes, Sarkar and Singh [82]. The cost of an attack with SexTNFS depends on the size of the norms of the algebraic numbers in the sieving domain. The two analyses differ in the way the size of the norms is estimated: in their approach the mathematical upper bound is used, whereas we rely on experimental values.
Roadmap After explaining the necessity of a new and more precise evaluation of key sizes in Sect. 2, we recall the most popular families of pairings in Sect. 3 and identify the best variant of NFS that an attacker can use against these families in Sect. 4. Then we estimate the best parameters by treating their selection as an optimization problem (Sect. 5). At the end of the same section, we explain the difference between the analysis of [82] and that of this work. The proposal of new curves is done in three steps: first we solve the optimization problem in the precise case of the most popular pairing families and find the field sizes which correspond to 128, 192, and 256 bits of security (Sect. 6), then we search for curves of this size (Sect. 7), and finally we carry out an even more precise analysis for each of the curves we propose (Sect. 8). We conclude the article by estimating the complexity of an optimal ate pairing for the new curves proposed for 128 bits of security (Sect. 9).
Outline of NFS and a Simple Estimation of Complexity
Whether the goal is to factor a composite integer N or to compute discrete logarithms in a field of \(p^n\) elements, NFS works in a similar manner. We select a number ring \({\mathbb {Z}}_i\), which is simply \({\mathbb {Z}}\) when factoring and which is such that p is inert in it for discrete logarithms. Then we select two polynomials \(f,g\in {\mathbb {Z}}_i[x]\) having a common factor \(\varphi \) modulo q, where \(q=N\) for factoring and \(q=p\) for discrete logarithms. This allows us to draw a commutative diagram which is the core of NFS:
where \(\alpha _f\) and \(\alpha _g\) are roots of f and g in their number fields and where \(\mathcal {O}_f\) and \(\mathcal {O}_g\) are the rings of integers of these same number fields.
The algorithm starts with a stage in which small polynomials \(\phi (x)\) are enumerated and placed at the top of the diagram. What a small polynomial is changes from variant to variant, but the degree and the coefficients are small, the simplest example being \(\phi (x)=a-bx\) with integers a, b smaller in absolute value than some parameter. If \(\phi (\alpha _f)\) and \(\phi (\alpha _g)\) are B-smooth for a parameter B (i.e., they factor into ideals of norm less than B), then we obtain a multiplicative relation in \({\mathbb {Z}}_i[x]/\langle q,\varphi \rangle \). At this step, the two variants of NFS split: either one transforms the multiplicative relations into linear equations and computes a right kernel to obtain a large number of discrete logarithms, or one writes a matrix of valuations and computes a left kernel to obtain a nontrivial solution to the equation \(x^2\equiv 1\hbox { mod }N\). In both cases, one finishes with a step of negligible cost.
The classical variant of NFS has complexity \(L_Q[64]^{1+o(1)}\), where \(Q=N\) or \(p^n\) and
$$\begin{aligned} L_Q[c]=\exp \left( (c/9)^{\frac{1}{3}}(\log Q)^{\frac{1}{3}}(\log \log Q)^{\frac{2}{3}}\right) . \end{aligned}$$
Each of the variants of NFS requires its own complexity analysis, but the complexity is always of the form \(L_Q[c]^{1+o(1)}\) for some constant c. Joux and Pierrot [64] invented a method of polynomial selection which obtains \(c=32\) for some finite fields where the characteristic p has a special form. Barbulescu et al. [12] proposed new methods of polynomial selection which achieve \(c=48\) in some cases that are intractable with the previous method. Later, Barbulescu et al. [16] proposed to replace \({\mathbb {Z}}\) by a larger number ring \({\mathbb {Z}}_i\) and also obtained \(c=32\) for some finite fields, in particular proving that a popular pairing curve estimated at 128 bits of security can be the target of this variant. Finally, Kim and Barbulescu [67] showed how to use the new methods of polynomial selection together with the new choices of \({\mathbb {Z}}_i\) and obtained \(c=32\) for a very large range of finite fields. It is reassuring to note that one can give arguments that one cannot go below the constant \(c=32\) (cf. Appendix B).
o(1)-less estimation. What is the impact of these new constants in the complexity on the real-life security? To get a first idea, one can start by dropping the o(1) term, so that the cost of each variant of NFS is \(2^\kappa L_Q[c]\), where \(\kappa \) and c are two constants. We use the same convention as in [72, Section 2.4.6] and count a clock cycle as one operation. Thanks to real-life record computations, we have a relatively good estimation of \(\kappa \), as summarized in Table 1, and we conclude on the security estimations in Fig. 1. For those fields where the fastest variant applies, it seems that we have to use 5004-bit fields for 128 bits of security and 12871-bit fields for 192 bits of security.
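The o(1)-less estimate above is easy to reproduce. The following Python sketch evaluates \(\log _2 L_Q[c]\) for the field sizes mentioned; the record-calibrated constant \(\kappa \) of Table 1 is omitted, so the printed values are the raw \(L_Q[c]\) contributions only.

```python
import math

def log2_L(Q_bits: int, c: int) -> float:
    """o(1)-less estimate: log2 of L_Q[c] = exp((c/9)^(1/3) (ln Q)^(1/3) (ln ln Q)^(2/3))."""
    lnQ = Q_bits * math.log(2)
    return (c / 9) ** (1 / 3) * lnQ ** (1 / 3) * math.log(lnQ) ** (2 / 3) / math.log(2)

# c = 64: classical NFS; c = 32: the SNFS/SexTNFS-type variants.
for bits in (3072, 5004, 12871):
    print(bits, round(log2_L(bits, 64)), round(log2_L(bits, 32)))
```

For instance, `log2_L(5004, 32)` is about 135, which the constant \(\kappa \) then shifts toward the 128-bit estimate quoted above.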
Note that the previous analysis is very similar to that of [82, Section 6.2], but they did not consider the curve of Fig. 1 corresponding to SexTNFS, as they argue in their Remark 4.
The goal of this article is to go beyond the o(1)-less estimation and to study in each case the best variant of NFS which applies, concluding on new key sizes. This type of estimation seems to be rare, but we can note the works of Lenstra [73] and of Bos et al. [18], which evaluate the security of RSA, DSA, and DH.
Families of PairingFriendly Curves
Depending on the required embedding degree, some families of curves have been built [38]. We recall here the most popular ones.
BN Curves
A BN curve [23] is an elliptic curve E defined over a finite field \({\mathbb {F}}_p\), \(p\ge 5\), such that its order r and p are prime numbers parametrized by
$$\begin{aligned} p(u)=36u^4+36u^3+24u^2+6u+1, \qquad r(u)=36u^4+36u^3+18u^2+6u+1, \end{aligned}$$
for some well-chosen u in \({\mathbb {Z}}\). It has an equation of the form \(y^2 = x^3 +b\), where \(b\in {\mathbb {F}}_p^*\). BN curves have an embedding degree equal to 12. They were widely used for the 128-bit security level until the recent results on the discrete logarithm problem in \({\mathbb {F}}_{p^{12}}^*\). Indeed, a 256-bit prime p leads to a 256-bit curve and to pairings taking values in \({\mathbb {F}}_{p^{12}}^*\), which is a 3072-bit multiplicative group. Both groups involved were then supposed to match the 128-bit security level according to the NIST recommendations [86] (which are however now invalidated by [67]). Incidentally, BN curves have been the object of numerous recent publications [6, 31, 34, 39, 51, 85, 98].
Finally, BN curves always have twists of degree 6. If \(\xi \) is an element which is neither a square nor a cube in \({\mathbb {F}}_{p^2}\), the twisted curve \(E'\) of E is defined over \({\mathbb {F}}_{p^2}\) by the equation \( y^2 = x^3 +b'\) with \(b' = b/\xi \) or \(b' =b\xi \). In order to simplify the computations, the element \(\xi \) should also be used to represent \({\mathbb {F}}_{p^{12}}\) as a degree-6 extension of \({\mathbb {F}}_{p^2}\) (\({\mathbb {F}}_{p^{12}}={\mathbb {F}}_{p^2}[\gamma ]\) with \(\gamma ^6=\xi \)) [34, 76].
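As a concrete illustration, the BN parametrization can be checked in a few lines of Python (using sympy for primality). The parameter u below is, to our knowledge, one used in the literature for a 254-bit BN curve; it is an assumption of this sketch, not a value from the present article.

```python
from sympy import isprime

def bn_params(u: int):
    """BN family: p(u) = 36u^4+36u^3+24u^2+6u+1 and r(u) = p(u)+1-t(u) with t(u) = 6u^2+1."""
    p = 36*u**4 + 36*u**3 + 24*u**2 + 6*u + 1
    r = 36*u**4 + 36*u**3 + 18*u**2 + 6*u + 1
    return p, r

# A curve of the family is obtained by scanning u until both p(u) and r(u) are prime.
u = -(2**62 + 2**55 + 1)          # assumed BN parameter from the literature (254-bit p)
p, r = bn_params(u)
print(p.bit_length(), r.bit_length(), isprime(p), isprime(r))
```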
BLS Curves
BLS curves were introduced in [21]. They are also defined over a parametrized prime field \({\mathbb {F}}_p\) by an equation of the form \(y^2 = x^3 +b\) and have a twist of degree 6 defined in the same way as for BN curves. Contrary to BN curves, they do not have prime order, but their order is divisible by a large parametrized prime r, and the pairing is defined on the r-torsion points. They are available for different embedding degrees, but we are only interested here in the BLS12 and BLS24 families, having embedding degrees 12 and 24 with respect to r. Until now, they were used for the 192-bit security level [3]. The parametrizations are given by
$$\begin{aligned} \text {BLS12:}\quad p(u)=(u-1)^2(u^4-u^2+1)/3+u, \qquad r(u)=u^4-u^2+1, \end{aligned}$$$$\begin{aligned} \text {BLS24:}\quad p(u)=(u-1)^2(u^8-u^4+1)/3+u, \qquad r(u)=u^8-u^4+1. \end{aligned}$$
KSS Curves
KSS curves are also available for different embedding degrees [70]. If the required embedding degree is 18, the situation is very similar to BLS curves (same defining equation, degree-6 twist, parametrized primes p and \(r\mid \#E({\mathbb {F}}_p)\)). In this case, the parametrization is given by
$$\begin{aligned} p(u)=(u^8+5u^7+7u^6+37u^5+188u^4+259u^3+343u^2+1763u+2401)/21, \end{aligned}$$$$\begin{aligned} r(u)=(u^6+37u^3+343)/343. \end{aligned}$$
If the required embedding degree is 16, the KSS16 curves are defined over a parametrized prime field \({\mathbb {F}}_p\) by an equation of the form \(y^2 = x^3 +ax\) and have a twist of degree only 4. Again, they do not have a prime order, but their order is divisible by a parametrized prime r, and the pairing is defined on the r-torsion points. In this case, the parametrization is
$$\begin{aligned} p(u)=(u^{10}+2u^9+5u^8+48u^6+152u^5+240u^4+625u^2+2398u+3125)/980, \end{aligned}$$$$\begin{aligned} r(u)=(u^8+48u^4+625)/61250. \end{aligned}$$
Whatever the family, a curve is always obtained by finding a parameter u such that both p and r are prime numbers. The curve and its twist are generated by finding suitable coefficients which can usually be chosen small. More details on the generation process are given in Sect. 7.
Optimal Ate Pairing
There are several available pairings (Weil, Tate, ate, R-ate, ...), but the most efficient one is always the so-called optimal ate pairing [99]. Let us recall this pairing in the context of ordinary elliptic curves defined over prime fields, and more precisely in the case of the considered families.
Let E be an elliptic curve defined over the prime field \({\mathbb {F}}_p\). Let r be a prime divisor of \(\#E({\mathbb {F}}_p)\) and k the embedding degree relative to r. We also assume that \(r^2\nmid p^k-1\) to ensure the non-degeneracy of the pairing. Let \(\tilde{E}\) be a degree d twist of E defined over \({\mathbb {F}}_{p^{e}}\), where \(e=k/d\) [54]. The optimal ate pairing is defined over \({\mathbb {G}}_1\times {\mathbb {G}}_2\) and takes its values in \({\mathbb {G}}_3\), where

\({\mathbb {G}}_1\) is the set of rational points on E of order r.

\({\mathbb {G}}_2\) is the image of \(\tilde{E}({\mathbb {F}}_{p^{e}})[r]\) in \(E({\mathbb {F}}_{p^k})\) by the twisting isomorphism.

\({\mathbb {G}}_3\) is the order r subgroup of \({\mathbb {F}}^*_{p^k}\).
For the considered parametrized curves, the optimal ate pairing of P and Q essentially consists of two parts. The first one (usually called the Miller loop) is the computation of \(f_{u,Q}(P)\), where u is (usually) the family parameter, and the second one is an exponentiation to the power \(\frac{p^k-1}{r}\). Assuming \(\ell _{A,B}\) denotes the line through the points A and B, the precise pairings are given in Table 2 [54, 99].
The Spectrum of Possibilities for an Attack on the Field Side
An attacker who uses an algorithm of Index Calculus type can make a series of choices: decide which algorithm and variant to use, make practical improvements, select polynomials, and optimize the main parameters. In this section we explain the reasonable choices for an attacker and give arguments to eliminate the other choices.
Choice of Algorithm
Let us make a list of the algorithms which can be implemented on a classical computer.
We discard the FFS algorithm [2, 5, 59, 61] and its pinpointing variant [63] by estimating the size of the factor base. Indeed, when the target is \({\mathbb {F}}_{p^k}\), the factor base of FFS is formed of all the monic polynomials of \({\mathbb {F}}_{p}[x]\) of degree less than a parameter b. This has been confirmed by implementations of FFS [53, 55, 61] and pinpointing [63, 94]. Hence the factor base has at least p elements, and then the linear algebra step has a cost of at least \(p^2\) operations, which is more than the security on the curve side, evaluated at \(p^\frac{1}{2}\) operations.
We also discard the MNFS variants, i.e., the variants of NFS in which more than two sides are used. Indeed, their asymptotic complexity is close to that of NFS ([67, Table 2]), so the "o(1)-less" extrapolation leads to results which are similar to those of the classical case (see Fig. 1). Detrey [32] and Lenstra et al. [68] made proof-of-concept implementations of FFS and of NFS for factoring, which are similar to NFS for discrete logarithms. Their results seem to show that the crossover point between the classical and MNFS variants of NFS is around 1000 bits, but the gain is small, say less than 2 bits of security, so we can ignore it in this article.
The three variants of NFS, classical [49, 60, 89], TNFS [16, 90] and JLSV [62], can be seen as particular cases of exTNFS [67], which remains the only algorithm to consider.
When p can be written as P(u)/v, for some polynomial \(P\in {\mathbb {Z}}[x]\) and some integers u and v (as is the case for pairing applications), the polynomial selection is done differently and one of f and g has small coefficients. To emphasize this difference we give a different name to the algorithm by adding the letter S: the "special" variant of NFS is called SNFS, the special variant of exTNFS is called SexTNFS, the corresponding variant of TNFS is STNFS, and the special variant of JLSV will be called SJLSV or simply Joux–Pierrot. This case encompasses but is not restricted to low-weight primes p; e.g., in an article [91] discussing the complexity of NFS on numbers which are midway between having a general form (NFS) and a polynomial form (SNFS), these numbers are described as "low weight numbers".
In order to fix the notations, we recall the SexTNFS algorithm [67]:

1.
Polynomial selection Given a parameter \(\eta \), chosen among the divisors of n, one selects a polynomial \(h\in {\mathbb {Z}}[x]\) of degree \(\eta \) which is irreducible modulo p. Then one selects two polynomials f and g in \({\mathbb {Z}}[t,x]\) so that \(f\hbox { mod }\langle h(t),p\rangle \) and \(g\hbox { mod }\langle h(t),p\rangle \), seen as elements of \({\mathbb {F}}_{p^\eta }[x]\), have a common factor \(\varphi (x)\) which is irreducible of degree \(\kappa :=k/\eta \). In the particular case \(\gcd (\eta ,\kappa )=1\) we can take \(f,g\in {\mathbb {Z}}[x]\) which share an irreducible factor of degree \(\kappa \), whereas in the case \(\gcd (\eta ,\kappa )\ne 1\), we have to guarantee that f and g are not defined over a proper subfield of the number field of h.

2.
Sieve Given two parameters A and B, one collects all (up to sign) the degree 1 polynomials in \({\mathbb {F}}_{p^k}[x]\), or equivalently the tuples in the set \(\{(a_0,\ldots ,a_{\eta -1},b_0,\ldots ,b_{\eta -1})\in [-A,A]^{2\eta }\mid a_0\ge 0\}\), called the sieving domain, so that \(N_f\) and \(N_g\) are B-smooth (all prime factors are less than B), where
$$\begin{aligned} N_f={{\text {Res }}}_t\left( {{\text {Res }}}_x\left( \sum _{i=0}^{\eta -1}a_it^i-x\sum _{i=0}^{\eta -1}b_it^i,f(t,x)\right) ,h(t)\right) \end{aligned}$$is the norm on the f side, and similarly for g instead of f. In order to emphasize the analogy with the simpler variants of NFS, we put \(E=A^\eta \), which is a good approximation of the square root of the cardinality of the sieving domain.

3.
Filtering Unknowns which occur in a single relation are called singletons and are deleted together with the corresponding equation. Additionally, using elementary transformations of the matrix one can create new singletons. This leads to a smaller matrix and hence a faster resolution of the linear system.

4.
Linear algebra step One computes the right kernel of the sparse matrix obtained after the filtering using the Wiedemann algorithm [101] or the Lanczos algorithm [71, 77] or their block variants [29] and [28, 80]. The coordinates of the kernel vector are called virtual logarithms.

5.
Individual logarithms Given a generator g of \({\mathbb {F}}_{p^n}^*\) and an element h, compute the discrete logarithm \(\log _gh\) using the virtual logarithms.
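The norms \(N_f\) and \(N_g\) of the sieving step are plain bivariate resultants and can be experimented with directly. The sketch below uses sympy on a toy instance; the polynomials h, f, g and the bound A are hypothetical stand-ins, far below cryptographic size.

```python
import random
from sympy import symbols, resultant

random.seed(0)
t, x = symbols('t x')

eta = 4
h = t**eta - t - 1                 # degree-eta h, as in the polynomial selection step
f = x**3 + 2*x + 1                 # toy stand-ins for the NFS pair (f, g)
g = x - 12345

A = 2**4
a = [random.randint(-A, A) for _ in range(eta)]
b = [random.randint(-A, A) for _ in range(eta)]
phi = sum(ai * t**i for i, ai in enumerate(a)) - x * sum(bi * t**i for i, bi in enumerate(b))

# N_f = Res_t(Res_x(phi, f), h); the tuple contributes a relation if both norms are B-smooth.
Nf = int(resultant(resultant(phi, f, x), h, t))
Ng = int(resultant(resultant(phi, g, x), h, t))
print(abs(Nf).bit_length(), abs(Ng).bit_length())
```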
Practical Improvements
Although the complexity of NFS for DLP in \({\mathbb {F}}_p\) has not changed for almost 30 years, its real-life speed has improved continuously. In the jargon of the NFS community, an improvement which changes only the o(1) term in the complexity is called a practical improvement.
Filtering
If an ideal occurs in a single relation, then we can erase this ideal and its relation from the matrix. Thanks to the excess of relations over the cardinality of the factor base, one can erase rows and do linear operations on the rows in order to create new singletons [26, Ch 3]. Table 3 summarizes how the filtering behaves in practice. It is hard to compare the different rows of the table because the authors of different records made different choices, some of them collecting many more relations than needed (oversieving) and hence helping the filtering step reduce the matrix considerably.
We made an asymptotic estimation of the number of ideals which might be used to reduce the matrix and we obtained the following statement.
Conjecture 1
In the filtering step of NFS one reduces the matrix by a factor \((\log B)^{1+o(1)}\), where B is the smoothness bound.
Justification: Let \({\mathfrak {q}}\) be an ideal in the factor base of NFS lying above a prime q and let N denote the size of the norms product and B the smoothness bound. We shall argue that the following statements are true:

1.
If \(q<B/(\log B)^{1+\epsilon }\) with \(\epsilon >0\), then \({\mathfrak {q}}\) occurs in a number of relations which tends to infinity as B and N go to infinity.

2.
If \(q>B/(\log B)^{1\epsilon }\) with \(\epsilon >0\), then \({\mathfrak {q}}\) will occur in a number of relations which tends to 0 as B and N go to infinity.
The sieving domain has \(B^2\) elements (parameter tuning in NFS implies \(E=B\), where E is the square root of the number of sieved pairs [20]) and a proportion 1/q of them are divisible by \({\mathfrak {q}}\). They produce relations if the cofactor of size N/q is B-smooth, for which we have no proven formula, but which is approximated by the proportion of integers in the interval [1, N/q] which are B-smooth. Due to the theorem of Canfield, Erdős and Pomerance [25], this proportion is \(\rho \left( \frac{\log (N/q)}{\log B}\right) \), where \(\rho \) is Dickman's function, i.e., the function such that \(\rho (v)=1\) for \(v\le 1\) and \(\rho '(v)=-\rho (v-1)/v\) for \(v>1\).
Recall that in NFS we set B so that \( \rho \left( \frac{\log N}{\log B}\right) ^{-1}=B\) (once again, see [20]). We put \(v=\frac{\log N}{\log B}\), so that we have \(\log B=v\log v\), \(\log N=v^2\log v\) and \(q>B/v^{1+2\epsilon }\) (resp. \(q<B/v^{1-2\epsilon }\)). We replace all variables on the right-hand side by their expressions in terms of v and obtain that its logarithm is equivalent to \(v^{1+\epsilon }-v\). It tends to \(\infty \) if \(\epsilon >0\), so the ideals of norm \(q<B/(\log B)\) occur in a very large number of relations and are unlikely to create singletons, so they are not erased during filtering. The right-hand side tends to \(-\infty \) if \(\epsilon <0\), so the ideals of norm \(q>B/\log (B)\) occur in almost no relations and are very likely to be removed during filtering.
Hence the filtering erases most of the ideals of norm larger than \(B/(\log B)^{1+o(1)}\) and keeps all but a negligible fraction of the others, so that the matrix size is reduced by a factor \((\log B)^{1+o(1)}\).\(\square \)
It seems plausible, then, that the filtering gain is a constant times \(\log (B)\); by comparison with Table 3, we model the gain by \(\log _2 B\).
Exploiting Automorphisms
Record computations with FFS [53, 55] and NFS [12] showed that if the target field is of the form \(p^{\kappa \eta }\) for two integers \(\eta \) and \(\kappa \) so that \(\kappa \) is small, then one can gain a factor \(\kappa \) in the sieve and a factor \(\kappa ^2\) in the linear algebra.
Kim and Barbulescu [67] explained that one has a similar gain in SexTNFS, where \(\kappa \) is to be replaced by \(\mathcal {A}\), the number of automorphisms of h which fix g, times the number of automorphisms of g. If \(\kappa =1\) and h has \(\eta \) automorphisms, then the exact number of automorphisms is \(\mathcal {A}=\eta \), e.g., \(\mathcal {A}=\ell -1\) if \(h=\Phi _\ell \), the \(\ell \)th cyclotomic polynomial, for some prime \(\ell \). If \(\kappa =2\), one doubles the number of automorphisms thanks to the automorphisms of g. For example, if \(h=\Phi _7\) and \(g=x^2+\alpha x+\beta +t^4+t^2+t-u\) for some integers \(\alpha \), \(\beta \), then \(\mathcal {A}=6\) because any automorphism in the set \(\{\tau ^i\sigma ^j, 0\le i\le 1,0\le j\le 2\}\) can be used (here \(\sigma : t\mapsto t^2\ \text{ and } \ \tau : x \mapsto -\alpha -x\)). Finally, if \(\kappa =3\) and \(\eta =4\), an attacker might use \(h=\Phi _8\) and find polynomials g which have three automorphisms, so for a worst-case analysis we count \(\mathcal {A}=12\).
Selection of Polynomials
The polynomial selection consists of selecting h, f, and g.
Choice of h
The polynomial \(h\in {\mathbb {Z}}[x]\) has two constraints: its degree is \(\eta \) and it is irreducible modulo p. Among the possible choices, we select those giving small norms \(N_f\) and \(N_g\), which generally corresponds to the case when h has small coefficients. In all examples, we could select h with coefficients in \(\{-1,0,1\}\), and experiments confirmed that the best choice is never much better than \(h=t^\eta -t-1\).
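Checking whether \(h=t^\eta -t-1\) is usable for a given p reduces to an irreducibility test over \({\mathbb {F}}_p\), which sympy's finite-field polynomial arithmetic provides; a small sketch:

```python
from sympy import Poly, symbols

t = symbols('t')

def simple_h(eta: int, p: int):
    """Return h = t^eta - t - 1 if it is irreducible modulo p, else None."""
    h = Poly(t**eta - t - 1, t, modulus=p)
    return h if h.is_irreducible else None

# t^6 - t - 1 reduces to t^6 + t + 1 mod 2, which is irreducible, so eta = 6 works there;
# t^2 - t - 1 has a double root mod 5, so a different h must be chosen in that case.
print(simple_h(6, 2) is not None, simple_h(2, 5) is None)
```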
In Sect. 4.2.2, we saw that in order to use the Galois automorphisms the attacker has to find a polynomial h with nontrivial automorphisms. We ran an exhaustive search on the polynomials in \({\mathbb {Z}}[x]\) of degree less than 19 having coefficients less than 6 in absolute value. In this set, the only polynomials that have automorphisms of order different from 2 are those listed in Table 4.
Construction of f and g
One produces a large number of pairs of polynomials using one of the following methods: base-m [20], base-m-SNFS [75], Joux–Pierrot [64], Conjugation [12], JLSV1 [62, Section 2.3], GJL [12, 78], or algorithms A, B, C or D of Sarkar and Singh [95,96,97].
In this article we focus on families of pairings where p is parametrized, and then one choice of polynomials is by far the most natural. Let \(P(x)\in {\mathbb {Z}}[x]\) and the integers u, v be such that \(p=P(u)/v\). Then one can take \(f=P(x^\kappa +S(t,x))\) and \(g=x^\kappa +S(t,x)-u\) for some \(S\in {\mathbb {Z}}[t,x]\) of degree in x less than \(\kappa \) so that g is irreducible in \(({\mathbb {F}}_p[t]/h)[x]\). In most cases this is the only choice, but for instance in the case of KSS18 one can also take \(f=P(x-2)\) and \(g=x-2-u\), with a non-negligible effect on the complexity estimation.
How can we be sure that the attacker cannot find choices of f that we could not predict? See [36] for a discussion of the consequences of this question on discrete logarithms in \({\mathbb {F}}_p\). The attacker cannot use the fastest versions of NFS (SNFS, STNFS, SexTNFS, Joux–Pierrot) unless he finds three polynomials, \(T(x,y)\in {\mathbb {Z}}[x,y]\) and \(U,V\in {\mathbb {Z}}[x]\), whose coefficients are bounded by an absolute constant, so that \(p=T(U(u),V(u))\) for some integer u, in which case he sets f and g accordingly.
In the case of SexTNFS, the coefficients of f occur at large powers in the norms, and hence we can restrict the search to very small constants. We ran the exhaustive search and obtained that the only alternative choices are \(f=P(x-1)\) for KSS16, \(f=P(x-2)\) for KSS18, and \(f=4x^4-4x^3+12x^2-10x+7\) with \(g=x-(3u+1)\) for BN. In the rest of the security evaluation, we considered the alternative choices together with the natural ones.
Optimization
Murphy [83] introduced a map \(\alpha :{\mathbb {Q}}[x]\rightarrow {\mathbb {R}}\) which allows one to decide which are the best polynomials for NFS. Barbulescu and Lachand [19] proved, when f is quadratic of fundamental negative discriminant, that for a random pair of relatively prime integers the norm \(N={{\text {Res }}}_x(a-bx,f)\) has the same probability of being B-smooth (for a parameter B) as a random integer less than \(e^{\alpha (f)}N\). Because of the uncertainty on \(\alpha \), we cannot predict the exact cost of a DLP computation with NFS. In the previous paragraph, we saw that in the case of parametrized pairings we only have one or two choices of f and g. For each choice, we verify directly that \(\alpha (f)\approx 0\), whereas for linear polynomials the value of \(\alpha \) is constant, equal to 0.56..., which is also the average value of \(\alpha \) over all polynomials [19].
Optimization of Parameters
Given a field \({\mathbb {F}}_{p^n}\) where the characteristic is parametrized by a polynomial P(u)/v of degree d, we decided to use SexTNFS with \(f=P(x^\kappa +S(t,x))\) and \(g=x^\kappa +S(t,x)-u\) for some polynomial S of degree in x less than \(\kappa \). We also decided to use, if possible, h from Table 4, and otherwise \(h=t^{n/\kappa }-t-1\) because it is the simplest one and hence the one providing the smallest norms. This choice is the best possible for the attacker. At this point, we need to decide which value of \(\kappa \) to use and to optimize the parameters A and B.
Choice of \(\kappa \)
According to [67, Section 4.1], the parameter \(\kappa \) is chosen to minimize the norms product \(N_fN_g\approx E^{(d+1)\kappa }Q^\frac{1}{d\kappa }\), where E is the square root of the cardinality of the sieve space and Q is \(p^n\). This corresponds to
$$\begin{aligned} \kappa =\sqrt{\frac{\log Q}{d(d+1)\log E}}. \end{aligned}$$
It was useful for us to guess in this way the value of \(\kappa \) which is the most likely to be optimal, but we nevertheless perform an exhaustive search. Our method was to approximate \(\log _2 Q\) from Fig. 1 and to take \(E^2=2^s\), where s is the security level, which leads to Table 5. We verified that in every case the best value is in this table.
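Minimizing \((d+1)\kappa \log E+\frac{1}{d\kappa }\log Q\) in \(\kappa \) gives \(\kappa \approx \sqrt{\log Q/(d(d+1)\log E)}\), so the guess can be reproduced in a few lines; the inputs below mirror the convention \(E^2=2^s\) and a Fig.-1-sized Q, and are otherwise illustrative.

```python
import math

def kappa_opt(Q_bits: float, d: int, s: int) -> float:
    """kappa minimizing (d+1)*kappa*log(E) + log(Q)/(d*kappa), with log2(E) = s/2."""
    logE = (s / 2) * math.log(2)
    logQ = Q_bits * math.log(2)
    return math.sqrt(logQ / (d * (d + 1) * logE))

# BN curves have d = deg P = 4; the usable kappa is then the nearest admissible divisor of k.
print(kappa_opt(5004, 4, 128))    # close to 2
```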
Optimization of the Bounds A and B
As before, B denotes the smoothness bound and A the bound on the coefficients of the sieved polynomials. A pair of values is valid if the sieve produces enough relations, so we need to estimate the number of relations. The sieving space is formed of the pairs a(t), b(t) in \({\mathbb {Z}}[t]/h\) such that \(\deg a,\deg b\le \eta -1\). If \(\mu (t)\) is a root of unity of the number field of h, then the pairs \((\mu a,\mu b)\) and (a, b) give the same multiplicative relation. In Sect. 4.1 we restricted \(a_0\) to positive values to account for the unit \(-1\); here the sieving space shrinks further by the number of roots of unity divided by two.
where w is the index of \(\{-1,1\}\) in the group of roots of unity. By a Monte Carlo integration (Appendix A) we estimate the bit size of the norms: we considered random tuples \((a_0,\ldots ,a_{\eta -1},b_0,\ldots ,b_{\eta -1})\), each of the components being uniformly chosen in the interval \([-A,A]\). We call the bit size of the norms the arithmetic mean of the bit sizes of the norms over a sample of 25600 tuples (see Appendix A for more details). We emphasize that we average the logarithms \(\log _2 (N_f)\) and \(\log _2 (N_g)\), rather than \(N_f\) and \(N_g\), because the logarithms are used to compute the smoothness probabilities \(p_f=\rho \left( \frac{\log _2 N_f}{\log _2 B}\right) \) and \(p_g=\rho \left( \frac{\log _2 N_g}{\log _2 B}\right) \). This gives us the total number of relations, which is
The factor base is formed of the prime ideals of norm less than B in the number fields of f and g, so the cardinality of the factor base is asymptotically equal to \(2B/\log (B)\). In some record computations, the number of relations is less than the cardinality of the factor base, e.g., \(68\%\) of it in [4], but for simplicity, and without changing the complexity results by more than one bit, we consider that the attacker must collect at least as many relations as there are elements in the factor base. Hence, the validity condition is
Due to Galois automorphisms (see the discussion in Sect. 4.2.2) \(\frac{2B}{\mathcal {A}\log (B)}\) nonconjugate relations can be used to obtain 2B / log(B) relations (where \(\mathcal {A}\) is the number of automorphisms of h times the number of automorphisms of \({\mathbb {F}}_{p^n}/{\mathbb {F}}_{p^\eta }\) which fix f and g). Equivalently, we collect only \(\frac{2B}{\mathcal {A}\log (B)}\) relations and we keep one ideal in each class of conjugacy so that the cardinality of the reduced factor base becomes \(\frac{2B}{\mathcal {A}\log (B)}\). Each relation is obtained on average after testing \(p_f^{1}p_g^{1}\) elements of the sieving space, so the total number of enumerated (or sieved) elements is \(2B/(\mathcal {A}\log (B)p_fp_g)\).
The ratio between the real cost of the sieve and the number of tuples enumerated (or sieved) in the sieve is hard to evaluate so we call it \(c_\text {sieve}\). According to Table 6, \(c_\text {sieve}\) is almost constant in various computations realized with various variants of NFS. We stay on the safe side and model \(c_\text {sieve}\) to be a constant equal to 1.
One might ask if in the case of the new variants, TNFS, exTNFS, and SexTNFS, one can approximate \(c_\text {sieve}\) by its value in the classical variants. Examples abound where new attacks with better asymptotic complexity were actually slower in practice because of hidden constants [82]. In a recent record computation [41, Section 2.1], Grémy, Guillevic and Morain sieved a dimension three lattice with \(A=2^{16}\) in 359 CPU hours, which accounts for \(c_\text {sieve}\approx 27\). This value can decrease with the size of A and thanks to practical improvements. However, it is safe to assume that \(c_\text {sieve}\) will remain \(\ge 1\).
In the rest of the analysis we consider \(c_\text {sieve}=1\), which means that the sieving time equals the number of elements of the sieving domain. Finally we obtain
$$\begin{aligned} \text {cost}_\text {sieve}=c_\text {sieve}\,\frac{2B}{\mathcal {A}\log (B)\,p_f\,p_g}. \end{aligned}$$
The size of the matrix sent to filtering is \(2B/(\mathcal {A}\log (B))\). As explained in Sect. 4.2, it is reduced by a factor \(\log _2 B\). The number of nonzero entries per row in the reduced matrix varies between 100 and 200 in all the records that we consider, and we approximate it by \(128=2^7\). Let then \(c_\text {lin.alg}\) be such that the cost of the linear algebra is \(c_\text {lin.alg} 2^7 B^2/(\mathcal {A}\log (B)\log _2(B))^2\), as expected when using Wiedemann's algorithm. The factor \(c_\text {lin.alg}\) accounts for the cost of a multiplication in \({\mathbb {F}}_r\), where r is the order of the pairing group. Since \(\log _2r\) varies by at most a factor 2 between the various types of pairings and the security levels between 128 and 256, we expect \(c_\text {lin.alg}\) to be a constant. The records summarized in Table 6 confirm that \(c_\text {lin.alg}\) is a constant close to 1.
We conclude this section with a model of the cost:
$$\begin{aligned} \text {cost}=c_\text {sieve}\,\frac{2B}{\mathcal {A}\log (B)\,p_f\,p_g}+c_\text {lin.alg}\,\frac{2^7B^2}{(\mathcal {A}\log (B)\log _2(B))^2}, \end{aligned}$$
(2)
where \(\mathcal {A}\) can be upper bounded by \(\eta \kappa /\gcd (\eta ,\kappa )\).
For each pairing curve and choice of polynomials one has to solve an optimization problem: Find the values of \(\log _2 A\) and \(\log _2 B\) which minimize the cost in Equation 2 under the condition in Eq. 1.
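For concreteness, the model can be written down directly; the sketch below is one possible reading of it in Python, assuming \(c_\text {sieve}=c_\text {lin.alg}=1\) and \(2^7\) entries per matrix row as above. The smoothness probabilities are inputs (they come from the Monte Carlo estimation of the norms); this is not the exact solver used for the tables:

```python
import math

def nfs_cost_model(log2_A, log2_B, eta, w, n_aut, log2_pf, log2_pg):
    """Evaluate the SexTNFS cost model (all quantities in bits).

    log2_pf, log2_pg: log2 of the smoothness probabilities on each
    side (negative numbers), obtained from rho(log2 N / log2 B).
    Returns (valid, log2_cost): valid is the condition of Eq. 1,
    log2_cost the model of Eq. 2 with c_sieve = c_lin.alg = 1.
    """
    A = 2.0 ** log2_A
    B = 2.0 ** log2_B
    # Size of the sieving space, divided by the 2w unit symmetry.
    log2_sieve_space = 2 * eta * math.log2(2 * A + 1) - math.log2(2 * w)
    # Expected number of relations (left-hand side of Eq. 1).
    log2_relations = log2_sieve_space + log2_pf + log2_pg
    # Cardinality of the factor base, 2B/log(B).
    log2_factor_base = math.log2(2 * B / math.log(B))
    valid = log2_relations >= log2_factor_base
    # Sieving: 2B/(A_aut * log B) relations, each needing 1/(pf*pg)
    # tested tuples.
    log2_sieving = log2_factor_base - math.log2(n_aut) - log2_pf - log2_pg
    # Linear algebra: 2^7 * N^2, N = 2B/(A_aut * log(B) * log2(B)).
    log2_matrix = log2_factor_base - math.log2(n_aut) - math.log2(log2_B)
    log2_linalg = 7 + 2 * log2_matrix
    log2_cost = math.log2(2.0 ** log2_sieving + 2.0 ** log2_linalg)
    return valid, log2_cost
```

With the BN parameters of Sect. 6.2 (\(\log _2A\approx 7.37\), \(\log _2B=57\), \(\log _2p_f=-21.41\), \(\log _2p_g=-25.30\)) this returns a cost close to \(2^{99.6}\), matching the worked example below up to rounding.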
Comparison to the Analysis of Menezes, Sarkar and Singh
At this point of the article, we can explain the difference between our analysis and that of [82].
Imprecision in the Estimation of \(\log _2N_f\) and \(\log _2 N_g\)
In order to estimate the bit size of the norms \(N_f\) and \(N_g\), Menezes, Sarkar and Singh used the mathematical upper bounds. In an experiment we computed the distribution of the bit sizes of the norms, and we present our results in Fig. 2. A script to reproduce the experiment on the same sample of 1000 pairs \((\mathbf a ,\mathbf b )\) is available online at [9]. The target is the 3072-bit finite field corresponding to a BN curve in Sect. 6.2. The bit size \(\log _2 N_f\) (on the left) varies between 175 and 244, which is much smaller than 740, the mathematical upper bound. Similarly, \(\log _2 N_g\) varies between 417 and 472, which is much smaller than 853, the mathematical upper bound.
In a similar experiment [82, Table 3], Menezes, Sarkar and Singh observed a similar situation: "It is possible that a nonnegligible fraction of these have norms close to the upper bounds. Our experiments only indicate that this fraction is less than 1/1000." This corroborates our results, but the figure shows how far the upper bound used in [82] is from 99.9% of the values. We remark, however, that their method is adapted for a practitioner who wants to convince himself that an NFS computation is feasible, because it gives an upper bound on the complexity; it cannot be used to obtain security estimations, which would ideally require a lower bound.
This difference is well known to practitioners of factoring and discrete logarithms. Hence, the CADO-NFS software package [14] sacrifices a few percent of the relations by skipping the pairs \((\mathbf a ,\mathbf b )\) having large norms. For example, from the file params.c90 in the parameters directory of CADO-NFS we learn that "lambda0/lambda1 is the early abort sieving parameter, and if [...] the approximation of the log of the remaining cofactor is larger than lambda times lpb, we reject", where lambda0, lambda1 and lpb are parameters which control the percentage of relations we sacrifice. In this light, the size of the largest 0.1% of the norms has no impact on the behavior of NFS implementations.
Other Differences
Moreover, the analysis of Menezes, Sarkar and Singh in [82, Section 6.3] does not mention several aspects which can decrease the running time of SexTNFS.

1.
Filtering On page 13 of [82] one reads "the linear algebra phase will have a cost approximatively \(B^2\)". This ignores that the number of rows and columns of the matrix is reduced during the filtering step. There is no evidence that the reduction factor is a constant, and in Sect. 4.2.1 we give heuristic arguments that it is approximately \(\log _2 B\). Moreover, the size of the factor base is \(2B/\log (B)\) rather than B. The discussion on the arithmetic modulo r (\(\ell \) in their notation) on the same page is not necessary because elements of \({\mathbb {F}}_r\) are already implemented on two or three machine words in the records that we list in Table 6, and elements of \({\mathbb {F}}_r\) are stored on at most 4 machine words even for 256 bits of security.

2.
\((2A+1)\) instead of A We did not find in [82] a discussion on the relation between E and A other than the asymptotic relation \(E\sim A^\eta \). An attacker can use (and is likely to do so) as sieving domain the set of tuples \((\mathbf a ,\mathbf b )\in {\mathbb {Z}}^{2\eta }\) which have the smallest value of \(\max (\Vert \mathbf a \Vert _\infty ,\Vert \mathbf b \Vert _\infty )\le A\), with A as small as possible so that the cardinality of the sieving domain allows one to obtain enough relations. In this case we have the relation \((2A+1)^{2\eta }=E^2\). To our understanding, Menezes, Sarkar and Singh used the formula \(A=E^{1/\eta }\), which is less precise, especially when the quotient \(\eta /\log _2 E\) is not very small.

3.
Automorphisms and roots of unity Menezes, Sarkar and Singh do not use the Galois automorphisms that we discussed in Sect. 4.2.2.
Estimating SexTNFS Complexity on the Most Popular Pairings
In this section, we use the results of the previous section to estimate the security level provided by a given finite field \({\mathbb {F}}_{p^k}\) when p is parametrized by a polynomial P(x).
Summarizing the Process for Computing SexTNFS Cost
Let us first summarize the way to estimate the complexity of the SexTNFS algorithm. It consists of four steps.

Step 1: Parameter selection The first choice to be made is that of \(\kappa \). All divisors of k must be tested, so the following steps are done once for each \(\kappa \); however, the first values to try are the ones in Table 5. Then one has to choose the polynomial h such that \(\mathcal {A}\) is as large as possible and h is as simple as possible (small and few coefficients), and the polynomials f and g defining the commutative diagram given in the introduction. The details on the ways to choose these polynomials are given in Sect. 4.3. In this step, we also determine w, the number of roots of unity divided by two, and the number of automorphisms \(\mathcal {A}\).

Step 2: Choice of the bounds A and B These bounds define the number of enumerated relations and the size of the factor base, so they have a direct impact on the complexity. As already explained, they must be chosen to minimize the cost in Eq. 2 under the condition in Eq. 1. This optimization problem is solved by brute force because we do not need very high accuracy. We first enumerate only integer values of \(\log _2 A\in [1,\frac{100}{\eta }]\) and \(\log _2 B\in [1,100]\), because the cost is lower bounded by \((A^{2\eta }+B^2)/1000\), which is more than \(2^{192}\) for larger values of A and B. We call \(\log _2 A_0\) and \(\log _2 B_0\) the optimum of this integer search. In a second stage, we test all values of \(\log _2 A\) in the set \(\{\log _2 A_0+i/100\mid i\text { integer in }[-100,100] \}\) and all values of \(\log _2 B\) in the set \(\{\log _2 B_0+j/5\mid j\text { integer in }[-25,25]\}\). When the optimal value of A is less than 10, we switch from enumerating values of \(\log _2A\) to enumerating integer values of A.
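The two-stage search just described is straightforward to implement. In the sketch below, `cost_and_valid` is a stand-in for the model of Eqs. 1 and 2 (here an arbitrary smooth test function so that the example is self-contained); the refinement grids are exactly those of the text:

```python
def two_stage_search(cost_and_valid, eta):
    """Brute-force optimization of Step 2.

    cost_and_valid(log2_A, log2_B) -> (cost, valid); we minimize cost
    over valid points, first on an integer grid, then on a fine grid
    around the integer optimum (steps 1/100 for log2 A, 1/5 for log2 B).
    """
    def best_over(points):
        best = None
        for la, lb in points:
            cost, valid = cost_and_valid(la, lb)
            if valid and (best is None or cost < best[0]):
                best = (cost, la, lb)
        return best

    # Stage 1: integer grid, log2 A in [1, 100/eta], log2 B in [1, 100].
    coarse = [(la, lb)
              for la in range(1, max(2, 100 // eta) + 1)
              for lb in range(1, 101)]
    _, la0, lb0 = best_over(coarse)

    # Stage 2: fine grid around the coarse optimum.
    fine = [(la0 + i / 100.0, lb0 + j / 5.0)
            for i in range(-100, 101) for j in range(-25, 26)]
    return best_over(fine)

# Toy stand-in for Eqs. 1 and 2 (NOT the real NFS cost function):
# a convex cost with a feasibility constraint.
toy = lambda la, lb: ((la - 7.3) ** 2 + (lb - 57.2) ** 2, la + lb > 20)
cost, la, lb = two_stage_search(toy, eta=6)
```

On the toy function the search locates the optimum \((7.3, 57.2)\); with the real model the same loop recovers values such as \(\log _2A=7.36\), \(\log _2B=57\) of the BN example below.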

Step 3: Verification At this point, we know the security level. For completeness, we verify once again by hand that the values of A and B, which were found by an unproven program, are indeed valid parameters for SexTNFS, by checking that the number of relations is larger than the cardinality of the factor base.

Step 4: Conclusion We inject A and B in Eq. 2 and verify once again that the cost of SexTNFS is that found by our unproven solver of the optimization problem.
Example: A BN Curve Where the Finite Field Has 3072 Bits
One of the most popular BN curves is the one associated with \(u=-(2^{62}+2^{55}+1)\), which was evaluated at 128 bits of security before the recent developments on NFS. Let us follow Sect. 6.1 to estimate its real security level.

Step 1: Parameter selection We decide to use the SexTNFS algorithm with \(\kappa =2\) and \(\eta =6\) because it gives the best result from the viewpoint of the attacker. The intermediate field will be defined by \(h=t^6-t^3-t-1\), which is irreducible modulo p. Indeed, the cyclotomic polynomials \(\Phi _7,\Phi _9,\Phi _{14}\) and \(\Phi _{18}\) are not irreducible in this case, and h is the "smallest" irreducible polynomial (it has only 4 nonzero coefficients, which moreover equal \(\pm 1\)). We tried several polynomials and found that \(x^2+t-u\) is irreducible in \({\mathbb {F}}_{p^6}={\mathbb {F}}_p[t]/h(t)\), so that \({\mathbb {F}}_{p^{12}}={\mathbb {F}}_{p^6}[x]/(x^2+t-u)\). Hence we can take \(f=P(x^2+t)\) (where P is the polynomial parametrizing p given in Sect. 3.1) and \(g=x^2+t-u\). In this case, we have no nontrivial roots of unity (\(w=1\)) and \(\mathcal {A}=2\) because g has degree two (as explained in Sect. 4.2.2).

Step 2: Choice of the bounds A and B As explained in Sect. 6.1, we applied Steps 3 and 4 for many values of A and B and found that \(\log _2A=7.36\) and \(\log _2(B)=57\) minimize the cost given by Eq. 2.

Step 3: Verification The total number of tuples in the sieving space is \((2A+1)^{2\eta }/(2w)\), where \(w=1\) is the number of roots of unity of the number field of h, divided by 2, so the size of the sieving space is \(2^{99.45}\). By Monte Carlo integration (Appendix A), we estimate the norms on the two sides of the commutative diagram, and then one can approximate the smoothness probability using Dickman’s function
$$\begin{aligned}&\log _2(N_f)\approx 414.7\Rightarrow \rho \left( \frac{\log _2(N_f)}{\log _2(B)}\right) \approx 2^{-21.41}\ \text{ and } \\&\log _2(N_g)\approx 460.8\Rightarrow \rho \left( \frac{\log _2(N_g)}{\log _2(B)}\right) \approx 2^{-25.30} \end{aligned}$$Hence the number of relations is approximately \(2^{99.45-21.41-25.30}\approx 2^{52.74}\). On the other hand, the cardinality of the factor base is approximately \(2B/\log (B)\approx 2^{52.70}\), which is less than the number of relations, so we have enough relations (Eq. 1 is satisfied).

Step 4: Conclusion Equation 2 gives a security level of 99.69 bits. The details are as follows: The number of relations we need to collect is \(2^{51.70}\), and each relation is obtained after testing on average \(2^{21.41+25.30}=2^{46.71}\) pairs (a, b); hence the cost of the sieve is \(c_\text {sieve}2^{51.70+46.71}\approx 2^{98.41}\), assuming \(c_\text {sieve}\approx 1\). On the other hand, the filtering stage reduces the matrix size by a factor around \(\log _2B=57\), its new size being \(N=2^{51.70}/57\approx 2^{45.87}\); the cost of the sparse linear algebra algorithms is given by \(2^7N^2=2^{98.73}\) times the cost of an addition modulo p, which here counts as an elementary operation. Finally, we get the overall cost by adding the cost of the relation collection and that of the linear algebra: \(2^{98.65}+2^{98.73}=2^{99.69}\), which means that the BN curve used in most of the existing implementations ensures no more than the 100-bit security level.
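The bookkeeping of this step is conveniently done on exponents; a small sketch using the \(2^7\) entries-per-row model of the linear algebra, with the numbers from the text (small rounding differences in the intermediate values are expected):

```python
import math

def log2_add(a, b):
    """log2(2^a + 2^b), computed in the exponent domain."""
    m, s = max(a, b), min(a, b)
    return m + math.log2(1.0 + 2.0 ** (s - m))

relations = 51.70                   # reduced factor base, in bits
tests_per_relation = 21.41 + 25.30  # 1/(p_f * p_g), in bits
sieve = relations + tests_per_relation   # ~ 2^98.4
matrix = relations - math.log2(57)       # filtering gain ~ log2 B = 57
linalg = 7 + 2 * matrix                  # 2^7 * N^2, ~ 2^98.7
total = log2_add(sieve, linalg)          # ~ 2^99.6
```

For example, \(\texttt{log2\_add}(98.65, 98.73)\approx 99.69\), the figure quoted above.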
General Results and Recommendations
The goal of this section is to determine the size of the finite field involved in the pairings given in Sect. 3 required to ensure the 128-, 192-, and 256-bit security levels. For this, we follow the strategy given in Sect. 6.1 for each family of curves, making at each step the most favorable choice (for the attacker). For example, we assumed that the number of automorphisms \(\mathcal {A}\) is maximal. If the parameter u (and therefore p) is selected such that the attacker cannot use the best polynomials listed in Table 7, then we observed an increase of up to 3 bits of security (Sects. 8.1.1 and 8.1.2). However, for the purpose of general recommendations, we consider that the attacker can use the best polynomials. The results are given in Tables 8, 9 and 10, which then contain our recommendations for the size of \(p^k\), where k is the embedding degree. Note that in the case of KSS16 and KSS18 curves for 128 bits of security the parameter A is very small (\(A=9\)), and one might want to compute the proportion of elements in the sieving space having each possible value of norm bit size. In every other case in this article, we checked that such a more precise analysis arrives at the same results as ours.
New Parameters for the 128-, 192-, and 256-bit Security Levels
The goal of this section is to propose new parameters for the 128-bit security level for the main families of curves given in Sect. 3 (BN, BLS12, KSS16, and KSS18). This is done in two steps. The first one consists in finding the size of the extension field ensuring this security level in the general case, which means that we assume that all the improvements of the NFS-like algorithms can be used. This was done in Sect. 6.3 and the results are given in Table 8. We must also ensure that the r-torsion subgroup of the elliptic curve involved in the pairing computation provides the 128-bit security level. For example, this is the limiting factor in the KSS cases. Then, for each family, we know the size of the curve parameter u that should be used to ensure the 128-bit security level (Table 11) in the general case. These sizes guarantee the security level as soon as the parameter u has the required bit size.
The second step is to generate the best possible parameter u having the right bit size. Let us start with the generation of a BN curve.
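All the exhaustive searches below follow the same pattern: enumerate candidates u of a given bit length by increasing Hamming weight, each candidate being a signed sum of powers of two with a positive leading term, and keep those passing the family-specific conditions (congruence, primality of p and r, twist-security). A sketch of the generator only; the family-specific filters are omitted:

```python
from itertools import combinations, product

def signed_low_weight(bits, weight):
    """Yield all u = 2^(bits-1) + a sum of (weight-1) signed powers
    of two at lower positions.

    Cancellation between adjacent terms can shorten the actual bit
    length, so a search would re-check u.bit_length() before filtering.
    """
    top = bits - 1
    for positions in combinations(range(top), weight - 1):
        for signs in product((1, -1), repeat=weight - 1):
            yield 2 ** top + sum(s * 2 ** p
                                 for s, p in zip(signs, positions))

# The two BLS12 values of Hamming weight 3 discussed below are among
# the bit-length-78 candidates.
candidates = set(signed_low_weight(78, 3))
```

In a real search one would iterate over increasing weight and stop at the first weight producing a value passing all filters.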
New BN Parameter for Level 128
The way to build the parameter u is detailed in [33]: it should be chosen sparse and congruent to 7 or 11 mod 12, so that \({\mathbb {F}}_{p^{12}}\) can be built via \(Y^6-(1+\mathbf{i})\) over \({\mathbb {F}}_{p^2}={\mathbb {F}}_p[\mathbf{i}]\). We also impose the condition that the curve obtained is twist-secure [100], which means that \(p+1+t\) should have a 256-bit prime factor (where t is the trace of the Frobenius as usual). We then performed an exhaustive search on u having increasing Hamming weight. There are no suitable values of weight 2. We found some values having Hamming weight 3, but none satisfying the congruence. More precisely, the extension tower should then be built using \(\sqrt{5}\), which is much less interesting in terms of \({\mathbb {F}}_{p^{12}}\) arithmetic. Finally, we found the value \(u=2^{114}+2^{101}-2^{14}-1\), which satisfies all the required conditions. The curve E defined over \({\mathbb {F}}_p\) by
is twist-secure (\(p+1+t\) has a 280-bit prime factor) and \(u\equiv 7 \bmod 12\), so that \({\mathbb {F}}_{p^2}\) is defined by \(X^2+1\) and \({\mathbb {F}}_{p^{12}}\) by \(Y^6-(1+\mathbf{i})\). The twisted curve \(E'\) is defined over \({\mathbb {F}}_{p^2}\) by
New BLS12 Parameter for Level 128
Most of the results of [33] can be used for BLS curves because the extension degree is also 12. Again, we performed an exhaustive search on the parameter u having increasing Hamming weight. We did not find any value of weight 2, but we found two having Hamming weight 3: \(2^{77}+2^{50}+2^{33}\) and \(2^{77}-2^{59}+2^9\). In both cases \({\mathbb {F}}_{p^{12}}\) can be built via \(Y^6-(1+\mathbf{i})\) over \({\mathbb {F}}_{p^2}={\mathbb {F}}_p[\mathbf{i}]\), which provides the best possible \({\mathbb {F}}_{p^{12}}\) arithmetic. We recommend using the first one because, if the second one is used, the cyclotomic polynomial \(\Phi _7\) is irreducible and can be used for h, which improves the SexTNFS attack. Then, for \(u=2^{77}+2^{50}+2^{33}\), the elliptic curve E (resp. its twist \(E'\)) is defined over \({\mathbb {F}}_p\) (resp. \({\mathbb {F}}_{p^2}\)) by
E is of course twist-secure (thanks to a 273-bit prime factor).
New KSS16 Parameter for Level 128
In this case, the parameter u should have at least 34 bits to ensure the 128-bit security level on the elliptic curve side. Unfortunately, an exhaustive search does not provide any suitable value of the parameter having Hamming weight less than or equal to 5. The sparsest parameter we found is \(2^{34} + 2^{27} - 2^{23} + 2^{20} - 2^{11} + 1\). In this case, the extension field is defined by \(X^{16}-2\), which provides the best possible \({\mathbb {F}}_{p^{16}}\) arithmetic. The elliptic curve E (resp. its twist \(E'\)) is defined over \({\mathbb {F}}_p\) (resp. \({\mathbb {F}}_{p^4}\)) by
And again, E is twist-secure (thanks to a 318-bit prime factor). However, we found a suitable 35-bit parameter having Hamming weight 5. Such a parameter will of course involve an additional doubling/squaring step in the exponentiation algorithms, but it will also involve one addition/multiplication step less. The impact on the Miller loop is negligible, but in the final exponentiation this means that an \({\mathbb {F}}_{p^{16}}\) multiplication is replaced by a cyclotomic squaring, and this happens 9 times since 9 exponentiations by u are performed (see Sect. 9 for details). Since a cyclotomic squaring is more than twice as fast as an \({\mathbb {F}}_{p^{16}}\) multiplication, it is better to use the 35-bit parameter as long as the \({\mathbb {F}}_p\) arithmetic is not impacted. For example, p has 330 bits for the 34-bit value of u and 340 bits for the 35-bit value. Hence, if a 32-bit device is used, both values of p require 11 words, so the \({\mathbb {F}}_p\) arithmetic is not impacted. On the contrary, if a 16-bit device is used, choosing the 35-bit value of u implies that p requires 22 words instead of 21, and the 34-bit value may then be preferred. This parameter is \(u=2^{35}-2^{32}-2^{18}+2^8+1\), \({\mathbb {F}}_{p^{16}}\) is also defined by \(X^{16}-2\), and the elliptic curve E (resp. its twist \(E'\)) is defined over \({\mathbb {F}}_p\) (resp. \({\mathbb {F}}_{p^4}\)) by
E is of course twist-secure (thanks to a 281-bit prime factor).
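The word-count comparison used in the KSS16 discussion above is simple ceiling arithmetic (a sketch; 330 and 340 are the bit sizes of p for the 34- and 35-bit values of u):

```python
import math

def words(p_bits, word_bits):
    """Machine words needed to store an element of F_p."""
    return math.ceil(p_bits / word_bits)

# On a 32-bit device both primes fit in 11 words; on a 16-bit device
# the 35-bit u costs one extra word.
w32 = (words(330, 32), words(340, 32))   # (11, 11)
w16 = (words(330, 16), words(340, 16))   # (21, 22)
```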
New KSS18 Parameter for Level 128
Again, the limiting factor for the security level is the elliptic curve size, so that u should have at least 44 bits. Our exhaustive search provides no values having weight 2 or 3 and only one having weight 4: \(u= 2^{44} + 2^{22} - 2^9 + 2\). In this case, \({\mathbb {F}}_{p^{18}}\) cannot be defined by \(X^{18}-2\) but by \(X^{18}-3\). The elliptic curves are defined by
The curve E is twist-secure (thanks to a 333-bit prime factor).
Discussion on Subgroup-Secure Curves for Level 128
None of the curves provided above is protected against the so-called subgroup attacks. These attacks use the fact that the three groups involved in the pairing may have small cofactors [74]. They can be prevented by choosing resistant parameters; however, they do not occur for all protocols. They can also be prevented by the use of some (potentially expensive) subgroup membership tests. As a consequence, subgroup-secure parameters are not always used in the literature and in real-life implementations. That is the reason why we provided non-subgroup-secure parameters in the general case (better efficiency) and provide subgroup-secure ones in this section (they should be preferred in some situations).
The definition of subgroup security for pairings is given in [8] and implies that one should be able to find the factors of the orders of \({\mathbb {G}}_1,{\mathbb {G}}_2\), and \({\mathbb {G}}_3\). This can be done using the ECM method, but it is very costly, so one cannot perform an exhaustive search checking subgroup security at each step. As explained in [8], the most reasonable way to find a subgroup-secure curve for pairing applications is to find a parameter u such that \(\# {\mathbb {G}}_2/r\) and \(\# {\mathbb {G}}_3/r\) are primes. This is of course much easier to check, but on the other hand there are much fewer candidates.
According to Sect. 9, we are only interested in BLS12 and KSS16 curves in the case of security level 128. We then made an exhaustive search over values of u of increasing Hamming weight satisfying this condition. For BLS12 curves, we found some parameters of weight 7. We give only one here, but the other ones are not so difficult to find: \(u=2^{77}-2^{71}-2^{64}+2^{37}+2^{35}+2^{22}-2^5\). In this case \({\mathbb {F}}_{p^{12}}\) can be built via \(Y^6-(1+\mathbf{i})\) over \({\mathbb {F}}_{p^2}={\mathbb {F}}_p[\mathbf{i}]\), which provides the best possible \({\mathbb {F}}_{p^{12}}\) arithmetic. The elliptic curve E (resp. its twist \(E'\)) is defined over \({\mathbb {F}}_p\) (resp. \({\mathbb {F}}_{p^2}\)) by
E is of course twist-secure (thanks to a 433-bit prime factor).
The case of KSS16 curves is more complicated. We first remark that \(\# {\mathbb {G}}_2/r\) and \(\# {\mathbb {G}}_3/r\) are always even and often divisible by 17 [40], so it is in our interest to relax the condition accordingly. Unfortunately, even then we did not find a parameter of Hamming weight less than or equal to 10. This is due to the fact that \(\log _2(u)=34\) implies that there are not enough possibilities for u to have a reasonable probability that all the numbers involved (p, r, \(\# {\mathbb {G}}_2/2r\), \(\# {\mathbb {G}}_3/2r\)) are simultaneously prime (up to some \(17^n\) factor). As a consequence, it is probably more interesting to choose the previous subgroup-secure BLS curve, or the non-subgroup-secure KSS16 curve given in Sect. 8.1.3 together with the necessary subgroup membership tests (depending on the protocol).
New Parameters for Level 192
In the case of higher levels of security, we prefer to be more cautious. Instead of a comparison of the best curves, we simply give our own proposals. In terms of security, we are once again cautious, our curves having more than 192 bits of security. This is due to the nature of our approach (the targeted extension field size is first determined in the worst case). We give only a KSS18 and a BLS24 curve, since there is no doubt that BN, BLS12, and KSS16 will be less efficient.
New KSS18 Parameter for Level 192
We saw in Table 9 that the parameter u should be chosen such that \(\log _2(u)\ge 85\). As in the 128-bit case, we perform an exhaustive search over low Hamming weight values of u. The best value we found is \(u=2^{85}-2^{31}-2^{26}+2^6\). In this case, \({\mathbb {F}}_{p^{18}}\) can be defined by \(X^{18}-2\). The elliptic curves are defined by
The curve E is twist-secure (thanks to a 652-bit prime factor).
New BLS24 Parameter for Level 192
We saw in Table 9 that the parameter u should be chosen such that \(\log _2(u)\ge 56\). As in the 128-bit case, we perform an exhaustive search over low Hamming weight values of u. The best value we found is \(u=2^{56}-2^{43}+2^9-2^6\). In this case, \({\mathbb {F}}_{p^{24}}\) can be built via \(Y^{12}-(1+\mathbf{i})\) over \({\mathbb {F}}_{p^2}={\mathbb {F}}_p[\mathbf{i}]\), which provides the best possible \({\mathbb {F}}_{p^{24}}\) arithmetic. The elliptic curves are defined by
E is of course twist-secure (thanks to a 427-bit prime factor).
New Parameters for Level 256
As in the case of level 192, the fastest pairings correspond to KSS18 and BLS24, thanks to their embedding degrees, which are higher than those of BN, BLS12, and KSS16.
In order to keep the complexity low, we use only values of u which can be written as a small number of terms of the form \(2^a\) for some integers a. A side effect is that \(u\approx 2^a\) for some a, and therefore \(p\approx u^{8}\approx 2^{8a}\) in the case of KSS18 and \(p\approx u^{10}\approx 2^{10a}\) in the case of BLS24 (p being a polynomial of degree 8, resp. 10, in u). Each increment of a thus changes \(\log _2 p^k\) by 144, resp. 240, bits, which makes it difficult to tune \(\log _2 p^k\) precisely.
New KSS18 Parameter for Level 256
The parameter
yields a finite field bit size of 26,700. In this case, \({\mathbb {F}}_{p^{18}}\) can be defined by \(X^{18}-2\). The elliptic curves are defined by
The curve E is twist-secure (thanks to a 1205-bit prime factor).
New BLS24 Parameter for Level 256
We propose parameter
which yields a finite field of bit size 24,760. In this case, \({\mathbb {F}}_{p^{24}}\) can also be built via \(Y^{12}-(1+\mathbf{i})\) over \({\mathbb {F}}_{p^2}={\mathbb {F}}_p[\mathbf{i}]\). The elliptic curves are defined by
E is of course twist-secure (thanks to a 581-bit prime factor).
Effective Security of the Selected Curves
Let us now apply the strategy given in Sect. 6.1 to evaluate the real security of the proposed curves.
Level 128
BN
We study the BN curve proposed in the previous section, which has parameter \(u= 2^{114}+ 2^{101}- 2^{14} - 1\).

Step 1 The best results are obtained with \(\kappa =2\) and \(\eta =6\). The best choices for the polynomials are \(h=t^6 - t^4 + t^2 + 1\), \(g=x^2-t-u\) and \(f=P(x^2-t)\). In this case, we have \(w=1\) and \(\mathcal {A}=2\) as in Sect. 6.2. As a consequence, we will find a higher security level here than in the general case.

Step 2 \(A=1098\approx 2^{10.10}\) and \(B=2^{74.2}\) minimize Eq. 2 and satisfy Eq. 1.

Step 3 The size of the sieving space is \((2A+1)^{12}/2\approx 2^{132.21}\). The Monte Carlo integration (Appendix A) gives \(\log _2(N_f)\approx 557.0\) and \(\log _2(N_g)\approx 808.9\). Then the smoothness probabilities are approximately equal to \(\rho \left( \frac{\log _2(N_f)}{\log _2(B)}\right) \approx 2^{-22.87}\) and \(\rho \left( \frac{\log _2(N_g)}{\log _2(B)}\right) \approx 2^{-40.52}\). Hence we expect \(2^{132.21-22.87-40.52}\approx 2^{68.82}\) relations, which is larger than the cardinality of the factor base, which is around \(2^{68.78}\).

Step 4 Evaluating Eq. 2 with these data finally gives an overall complexity of \(2^{131.3}\).
Remark 1
The sizes of the parameters for this pairing are such that they guarantee 128 bits of security for arbitrary parameters in the BN family. However, this particular choice offers 131 bits of security.
BLS 12
The recommended parameter is \(u=2^{77}+2^{50}+2^{33}\).

Step 1 We chose \(\kappa =2\) and \(\eta =6\). The best polynomials are \(h=t^6-t-1\), \(f=P(x^2+t+t^2+t^4+1)\), where \(P(x)=(x-1)^2(x^4-x^2+1)+3x\), and \(g=x^2+t+t^2+t^4+1-u\). In this case, we have \(w=7\) and \(\mathcal {A}=2\) (because g is quadratic).

Step 2 \(A=1169\) and \(\log _2B=73.50\)

Step 3

\(\log _2(\text {sieve space})=133.30\)

\(\log _2 (N_f)=791.2\Rightarrow \log _2(\text {smoothness probability on the }f\text { side})=-39.17\)

\(\log _2(N_g)=584.8\Rightarrow \log _2(\text {smoothness probability on the }g\text { side})=-24.67\)

\(\log _2(\text {relations})= 69.46\)

\(\log _2(\text {reduced factor base})=67.83\) (enough relations)


Step 4 security=131.8
Remark 2
Zhaohui Cheng communicated to us two choices of BLS12 curves which have 127 bits of security:

equation \(y^2=x^3+9\) and parameter \(u=-(2^{73}+2^{72}+2^{50}+2^{24})\);

equation \(y^2=x^3+7\) and parameter \(u=-( 2^{12}+2^{48}+2^{49}+2^{50}+2^{51}+2^{52}+2^{53}+2^{54}+2^{55}+2^{56}+2^{57}+2^{58}+2^{59}+2^{60}+2^{61}+2^{62}+2^{63}+2^{64}+2^{65}+2^{66}+2^{67}+2^{68}+2^{69}+2^{70}+2^{72}+2^{73})\).
The field \({\mathbb {F}}_{p^k}\) is 5280 bits long, instead of the 5530 bits required by the general estimations in Table 7. Hence, our approach of first finding general recommendations for each family (assuming the attacker can apply all improvements), then checking specific values of u, only loses \(5\%\) in length.
KSS 16
The recommended parameter is \(u=2^{35} - 2^{32} - 2^{18} + 2^8 + 1\).

Step 1 We chose \(\kappa =1\) and \(\eta =16\). The best polynomials are \(h=\Phi _{17}\), \(f=P(x-1)\) and \(g=x-u-1\). In this case, we have \(w=17\) and \(\mathcal {A}=16\).

Step 2 \(A=12\) and \(\log _2B=80\)

Step 3

\(\log _2(\text {sieve space})=143.52\)

\(\log _2 (N_f)=920.4\Rightarrow \log _2(\text {smoothness probability on the }f\text { side})=-43.23\)

\(\log _2(N_g)=628.9\Rightarrow \log _2(\text {smoothness probability on the }g\text { side})=-24.21\)

\(\log _2(\text {relations})= 76.08\)

\(\log _2(\text {reduced factor base})=71.20\) (enough relations)


Step 4 security=139.0. Note that this is the security only on the finite field side. The security on the elliptic curve side is 128 as required.
KSS 18
The recommended parameter is \(u=2^{44}+2^{22}-2^{9}+2\).

Step 1 We chose \(\kappa =1\) and \(\eta =18\). The best polynomials are \(h=t^{18}-t^4-t^2-t-1\), \(f=P(x-2)\) and \(g=x-u-2\). In this case, we have \(w=1\) and \(\mathcal {A}=1\).

Step 2 \(A=11\) and \(\log _2B=82.5\)

Step 3

\(\log _2(\text {sieve space})=161.85\)

\(\log _2 (N_f)=920.4\Rightarrow \log _2(\text {smoothness probability on the }f\text { side})=-36.21\)

\(\log _2(N_g)=628.9\Rightarrow \log _2(\text {smoothness probability on the }g\text { side})=-38.33\)

\(\log _2(\text {relations})= 87.31\)

\(\log _2(\text {reduced factor base})=77.66\) (enough relations)


Step 4 security=152.4. Note that this is the security only on the finite field side. The security on the elliptic curve side is 128 as required.
Level 192
KSS18 for Level 192
To evaluate its real security, we use the method described in Sect. 6.1, and we get

Step 1 We chose \(\kappa =1\) and \(\eta =18\). The best polynomials are \(h=t^{18}-t^4-t^2-t-1\), \(f=P(x-2)\) and \(g=x-u-2\). In this case, we have \(w=1\) and \(\mathcal {A}=1\).

Step 2 \(A=34\) and \(\log _2B=108.9\)

Step 3

\(\log _2(\text {sieve space})=218.90\)

\(\log _2 (N_f)=1114\Rightarrow \log _2(\text {smoothness probability on the }f\text { side})=-36.29\)

\(\log _2(N_g)=1642\Rightarrow \log _2(\text {smoothness probability on the }g\text { side})=-63.99\)

\(\log _2(\text {relations})= 118.62\)

\(\log _2(\text {reduced factor base})=103.66\) (enough relations)


Step 4 security \(=204.09\).
BLS24 for Level 192
To evaluate its real security, we use the method described in Sect. 6.1 and we get

Step 1 We chose \(\kappa =1\) and \(\eta =24\). The best polynomials are \(h=t^{24} + t^4 - t^3 - t - 1\), \(f=P(x)\) and \(g=x-u\). In this case, we have \(w=1\) and \(\mathcal {A}=1\).

Step 2 \(A=9\) and \(\log _2B=109.8\)

Step 3

\(\log _2(\text {sieve space})=202.90\)

\(\log _2 (N_f)=1295\Rightarrow \log _2(\text {smoothness probability on the }f\text { side})=-44.85\)

\(\log _2(N_g)=1460\Rightarrow \log _2(\text {smoothness probability on the }g\text { side})=-53.42\)

\(\log _2(\text {relations})= 104.63\)

\(\log _2(\text {reduced factor base})=104.55\) (enough relations)


Step 4 security \(=203.72\).
Level 256
KSS18

Step 1 We chose \(\kappa =2\) and \(\eta =9\). The best polynomials are \(h= t^9 +t^8+t^7-t^6-1\), \(f=P(x^2-2)\), where \(P= x^8 + 5x^7+ 7x^6 + 37x^5 + 188x^4 + 259x^3 + 343x^2 + 1763x + 240\), and \(g=x^2-2-u\).

Step 2 \(A=11747\) and \(\log _2B=137.7\)

Step 3

\(\log _2(\text {sieve space})=260.36\)

\(\log _2(N_f)=2185\) and \(\log _2(N_g)=1928\)

\(\log _2(\text {relations})=134.35\) > \(\log _2(\text {factor base})=131.12\) (enough relations)


Step 4 security = 257.13
BLS24

Step 1 We choose \(\kappa =1\) and \(\eta =24\). The best polynomials are \(h=t^{24}-t^{23}-t^{21}+t^{20}-1\), \(f=P=(x-1)^2(x^8-x^4+1)+3x\), \(g=x-u\).

Step 2 \(A=23\) and \(\log _2B=138.5\)

Step 3

\(\log _2(\text {sieving domain})=262.62\)

\(\log _2(N_f)=1522\) and \(\log _2(N_g)=2619\)

\(\log _2(\text {relations})=137.28>131.92=\log _2(\text {factor base})\) (enough relations)


Step 4 security = 260.9
Complexity Estimations and Comparisons for the 128-Bit Security Level
The goal of this section is to compare the pairing computation cost for the curves given in Sect. 7 at the 128-bit security level. For this, we evaluate the cost of computing an optimal pairing [99] (because it is by far the most efficient at this security level) in each case (BN, BLS12, KSS16 and KSS18). Let us first recall the steps of the computation.
Optimal ate Pairing Computation
We do not give here the detailed algorithm to compute pairings but only what is necessary to analyze its complexity. More details can be found for example in [35].
The Miller Loop
Miller explains how to compute \(f_{u,Q}\) in [79]. The algorithm is based on the computation of [u]Q using the double-and-add algorithm. At each step of this algorithm, f is updated with the line function involved in the elliptic curve operation. This algorithm has been improved by many authors, in particular by using the twisted curve to eliminate denominators and to replace \({\mathbb {F}}_{p^k}\) multiplications by sparse ones. The best-known complexities for each step are obtained using projective coordinates [50]. They are given below.

If \(d=6\), the doubling step requires one squaring in \({\mathbb {F}}_{p^k}\), denoted \(S_k\), one sparse multiplication in \({\mathbb {F}}_{p^k}\), denoted \(sM_k\) (for updating f) together with 2 multiplications in \({\mathbb {F}}_{p^e}\), denoted \(M_e\), 7 squarings in \({\mathbb {F}}_{p^e}\) and 2e multiplications in \({\mathbb {F}}_p\), denoted M (for doubling on the curve and computing the line involved in this doubling). If \(d=4\), the curve side requires one additional \(S_e\).

If \(d=6\), the mixed addition step requires one \(sM_k\) for updating f together with \(11M_e\), \(2S_e\) and \(2e\,M\) (or \(9M_e\), \(5S_e\) and \(2e\,M\) if \(d=4\)).

Additional lines in the pairing given in Table 2 are nothing but extra addition steps. In terms of complexity, the last one is usually less expensive (\(4M_e\) and \(2e\,M\) for the curve side) because the resulting point on the curve is not needed.

The computation of points of the form [p]Q is very easy because Q is in the p-eigenspace of the Frobenius map. It then requires no more than 2 Frobenius mappings in \({\mathbb {F}}_{p^k}\), denoted \(F_k\). In practice, it requires even less, but there is no point in getting into this kind of detail for this comparison work.
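The doubling and addition step counts used below all come from a signed-digit expansion of the Miller loop scalar (u for the BLS and KSS curves, \(6u+2\) for BN): one doubling per digit after the leading one and one addition per extra nonzero digit. The following sketch, with a `naf` helper of our own (not from the paper), recovers the counts for the BLS12 and KSS16 seeds used in this section:

```python
def naf(n):
    """Non-adjacent form of n > 0, least-significant digit first, digits in {-1, 0, 1}."""
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)   # +1 or -1, chosen so that (n - d) is divisible by 4
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

def miller_steps(u):
    d = naf(u)
    doublings = len(d) - 1                   # one doubling per digit after the top one
    additions = sum(1 for x in d if x) - 1   # one addition per extra nonzero digit
    return doublings, additions

# BLS12: u = 2^77 + 2^50 + 2^33 -> 77 doubling steps, 2 addition steps
print(miller_steps(2**77 + 2**50 + 2**33))
# KSS16: u = 2^35 - 2^32 - 2^18 + 2^8 + 1 -> 35 doubling steps, 4 addition steps
print(miller_steps(2**35 - 2**32 - 2**18 + 2**8 + 1))
```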
The Final Exponentiation
It is usually split into two parts, an easy one with the exponent \(\frac{p^k-1}{\phi _k(p)}\) (where \(\phi _k\) is the kth cyclotomic polynomial) and a hard one with the exponent \(\frac{\phi _k(p)}{r}\). The easy part is made of an inversion, denoted \(I_k\), and a few multiplications and Frobenius mappings in \({\mathbb {F}}_{p^k}\). The hard part is much more expensive, but Scott et al. [88] reduce this cost by writing the exponent in base p (because pth powering is only a Frobenius mapping). As p is polynomially parametrized by u, the result is obtained thanks to \(\deg _u(p)-1\) exponentiations by u and some additional \({\mathbb {F}}_{p^k}\) operations. The number of these additional operations can be reduced by considering powers of the pairing [37]. Note also that, thanks to the easy part of the final exponentiation, the squaring operations (which are widely used during the hard part) can be simplified. We can either use cyclotomic squarings [50], denoted \(cS_k\), or compressed squarings [6, 66], denoted \(s_k\). Compressed squarings are usually more efficient. However, this method has been developed in the case of degree 6 twists [6, 66]. There is no doubt that it can be adapted to the case of degree 4 twists (and then to KSS16 curves), but we did not find explicit formulas in the literature. So, for a fairer comparison between the curves, we chose to consider both squaring methods in the following.
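The hard part is well defined because r divides \(\phi _k(p)\), by the very definition of the embedding degree. For instance, for the BN family (\(k=12\), with the standard parametrizations \(p(u)=36u^4+36u^3+24u^2+6u+1\) and \(r(u)=36u^4+36u^3+18u^2+6u+1\) from [23]), this divisibility can be checked numerically; the snippet below is an illustration of ours, not part of the paper:

```python
def p_bn(u):
    return 36*u**4 + 36*u**3 + 24*u**2 + 6*u + 1   # BN field characteristic

def r_bn(u):
    return 36*u**4 + 36*u**3 + 18*u**2 + 6*u + 1   # BN group order

def phi12(x):
    return x**4 - x**2 + 1                          # 12th cyclotomic polynomial

# r(u) divides phi_12(p(u)) identically for the BN family, so the hard
# exponent phi_12(p)/r is an integer; check it on a few sample seeds
for seed in (7, 10**6 + 3, 2**62 + 1):
    assert phi12(p_bn(seed)) % r_bn(seed) == 0
print("ok")
```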
Finite Field Arithmetic
In order to compare the different candidates, we need a common base. It cannot be the field \({\mathbb {F}}_p\) because p does not have the same size in all cases, so we have to go down to the dataword level. We will only give global estimates, so we need to make some assumptions that are close to an average environment. We assume that we work on a 32-bit device because it is a good average between software, FPGA and embedded devices. We also assume that \({\mathbb {F}}_p\) arithmetic is quadratic (even if the multiplication complexity can be subquadratic, the reduction usually stays quadratic). Finally, for simplicity, we assume that \({\mathbb {F}}_p\) multiplications and squarings have almost the same cost, and we neglect additions. Of course, these assumptions are very dependent on the device, so we do not pretend that our result is valid in every case. Anyway, our goal here is not to get a universal comparison (which is not possible) but to have an idea of which curve should be chosen to get the best efficiency. For a given device or context, such general estimates cannot replace a real implementation for a fair comparison.
Pairing computation makes extensive use of \({\mathbb {F}}_{p^e}\) arithmetic. Let us first recall its costs in Table 12 for the considered values of e.
Concerning the \({\mathbb {F}}_{p^k}\) arithmetic, the complexities are given in the literature in the pairing context for extensions of degree 12 [3, 33], 16 [103] and 18 [3]. They are summarized in Table 13.
We made the simplifying assumption that the cost of a Frobenius mapping in \({\mathbb {F}}_{p^k}\) is always \((k-1)M\), which is not always the case (for example for \(p^2\)- or \(p^3\)-powering), but this has negligible impact on our comparison (there are few such mappings and this remark holds for all the considered cases).
\({\mathbb {F}}_p\) Complexity Estimations
BN Curve
In this case, the optimal ate pairing is given by
It is explained in Sect. 7 that \(u=2^{114}+2^{101}-2^{14}-1\) should be chosen to ensure the 128-bit security level and the best possible extension field arithmetic. Then \(6u+2\) has length 116 and Hamming weight 7. As a consequence, the Miller loop requires 116 doubling steps and 6 addition steps. Extra line computations require four Frobenius mappings (to compute [p]Q and \([p^2]Q\)), one addition step and one incomplete addition step. Then the overall cost is
Using Tables 12 and 13, this step requires 12068 multiplications in \({\mathbb {F}}_p\).
There are many ways to compute the final exponentiation for BN curves. The most efficient one is given in [37] and requires \(I_{12}+12M_{12}+3cS_{12}+4F_{12}\) in addition to the 3 exponentiations by u (because p has degree 4 in u). As u has length 114 and Hamming weight 4, each of these exponentiations requires 114 squarings and 3 multiplications. If cyclotomic squarings are used, we need \(114cS_{12}+3M_{12}=2214M\) according to Table 13. If the compressed squaring technique is used, we additionally need the simultaneous decompression of 4 elements. Then, according to Table 13, each exponentiation by u requires \(1621M+I\).
The final exponentiation then requires \(7485M+I\) or \(5706M+4I\) depending on the way the squarings are performed. Finally, computing the optimal ate pairing for a BN curve ensuring the 128-bit security level requires \(19553M+I\) or \(17774M+4I\) depending on the way the squarings are performed during the final exponentiation.
BLS12 Curve
The optimal ate pairing is simpler in this case since it is given by
We have seen that the best choice of u is \(2^{77}+2^{50}+2^{33}\), so that the Miller loop is made of 77 doubling steps and 2 addition steps. Then, its cost is
According to [3], the final exponentiation requires \(I_{12}+12M_{12}+2cS_{12}+4F_{12}=825M+I\) in addition to the 5 exponentiations by u (because p has degree 6 in u). As u has length 77 and Hamming weight 3, each of these exponentiations requires 77 squarings and 2 multiplications. If cyclotomic squarings are used, we need \(77cS_{12}+2M_{12}=1494M\). If the compressed squaring technique is used, we additionally need the simultaneous decompression of 3 elements, so that each exponentiation by u requires \(1099M+I\).
The final exponentiation then requires \(8295M+I\) or \(6320M+6I\) depending on the way the squarings are performed. Finally, computing the optimal ate pairing for a BLS12 curve ensuring the 128-bit security level requires \(16003M+I\) or \(14028M+6I\).
KSS16 Curve
For KSS16 curves, the optimal ate pairing is given by
and u has been chosen to be \(2^{35}-2^{32}-2^{18}+2^8+1\) in Sect. 7. Then the Miller loop requires 35 doubling steps and four addition steps. According to [103], extra line computations require three Frobenius mappings (2 to compute [p]Q and one to raise to the power \(p^3\)) and two incomplete addition steps. The overall cost is then
According to [40], the final exponentiation requires \(I_{16}+32M_{16}+34cS_{16}+24M_4+8F_{16}\) in addition to the 9 exponentiations by u (because p has degree 10 in u). As u has length 35 and Hamming weight 5, each of these exponentiations requires 35 cyclotomic squarings and 4 multiplications. According to Table 13, each exponentiation by u then requires 1584M. Note that we did not find formulas for compressed squaring in the KSS16 case in the literature.
The final exponentiation then requires \(18542M+I\). Finally, computing the optimal ate pairing for a KSS16 curve ensuring the 128-bit security level requires \(26076M+I\).
KSS18 Curve
In this case, the optimal ate pairing is given by
The best choice of u to ensure the 128-bit security level is \(2^{44}+2^{22}-2^9+2\), so that the Miller loop is made of 44 doubling steps and 3 addition steps. Extra line computations require one addition step and one Frobenius mapping (to compute \(f_{3,Q}(P)^p\)) together with one \({\mathbb {F}}_{p^{18}}\) multiplication (to multiply the result by \(f_{u,Q}(P)\)), 2 Frobenius mappings and one incomplete addition step [3]. Its cost is then
According to [3, 37], the final exponentiation requires \(I_{18}+54M_{18}+8cS_{18}+29F_{18}=6785M+I\) in addition to the 7 exponentiations by u (because p has degree 8 in u). As u has length 44 and Hamming weight 4, each of these exponentiations requires 44 squarings and 3 multiplications. If cyclotomic squarings are used, we need \(44cS_{18}+3M_{18}=1908M\). If the compressed squaring technique is used, we additionally need the simultaneous decompression of 4 elements, so that each exponentiation by u requires \(1578M+I\).
The final exponentiation then requires \(20141M+I\) or \(17831M+8I\) depending on the way the squarings are performed. Finally, computing the optimal ate pairing for a KSS18 curve ensuring the 128-bit security level requires \(29572M+I\) or \(27262M+8I\).
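All four exponentiations by u computed in this section follow the same pattern: with cyclotomic squarings, an exponentiation by a sparse u costs one \(cS_k\) per squaring and one \(M_k\) per extra nonzero digit. The sketch below uses the per-operation costs implied by the totals quoted above (\(cS_{12}=18M\), \(M_{12}=54M\), \(cS_{16}=cS_{18}=36M\), \(M_{16}=81M\), \(M_{18}=108M\)); Table 13 itself is not reproduced here, so these constants are inferred from the text's arithmetic rather than quoted:

```python
def exp_by_u_cost(squarings, hamming_weight, cS_k, M_k):
    """Cost (in Fp multiplications M) of one exponentiation by a sparse u,
    using cyclotomic squarings: `squarings` squarings and (weight - 1)
    full multiplications in Fp^k."""
    return squarings * cS_k + (hamming_weight - 1) * M_k

# Per-operation costs below are inferred from the totals in the text
print(exp_by_u_cost(114, 4, 18, 54))   # BN:    2214
print(exp_by_u_cost(77, 3, 18, 54))    # BLS12: 1494
print(exp_by_u_cost(35, 5, 36, 81))    # KSS16: 1584
print(exp_by_u_cost(44, 4, 36, 108))   # KSS18: 1908
```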
Comparison
Let us first summarize the complexities obtained in the previous subsections.
We can obviously conclude that the BLS12 curve is more efficient than the BN one and that KSS16 is better than KSS18. It is more complicated to compare BLS12 and KSS16 because the base fields are not the same. For this, let us first compare the costs of M, which depend on p. As explained in Sect. 9.2, we assumed that we are working on a 32-bit architecture, but the theoretical results would be similar with another choice and, in any case, this does not replace real-life implementations. For BN and BLS12 curves, p has 461 bits, so that 15 32-bit words are necessary. For the KSS curves, 11 32-bit words are necessary. As a consequence, we can assume that \(M=15^2=225\) for BN and BLS12 curves while \(M=11^2=121\) for KSS ones. Reporting these values in Table 14, we get the comparative Table 15.
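Under the quadratic-arithmetic assumption of Sect. 9.2, the comparison reduces to word counts: one M costs \(w^2\) word multiplications, where \(w=\lceil \log _2 p/32\rceil \). A minimal sketch reproducing the word-level figures (the \(\approx \)340-bit size used for the KSS primes is our estimate, chosen to match the 11-word count above):

```python
import math

def word_mult_cost(p_bits, word_size=32):
    # Quadratic model: one Fp multiplication costs w^2 word multiplications
    w = math.ceil(p_bits / word_size)
    return w * w

M_bls12 = word_mult_cost(461)  # 15 words -> 225
M_kss16 = word_mult_cost(340)  # 11 words -> 121 (~340-bit KSS prime, our estimate)

# Fp-multiplication totals derived above, in word multiplications (inversions neglected):
print(16003 * M_bls12)  # BLS12, cyclotomic squarings: 3600675
print(14028 * M_bls12)  # BLS12, compressed squarings: 3156300
print(26076 * M_kss16)  # KSS16, cyclotomic squarings: 3155196
```

Note how close the last two figures are: this is exactly the observation made below about BLS12 with compressed squarings versus KSS16 with cyclotomic squarings.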
In any case, the KSS16 curve gives the best result, which was not expected at the beginning of this work. Of course, the complexity for the BLS12 curve using compressed squarings is very close to the complexity of the KSS16 curve with cyclotomic squarings, and a practical implementation should be done to confirm the estimates obtained here. But KSS16 curves have not been studied as much as the BN curves and, more generally, as curves having a degree 6 twist. We are therefore quite confident that the optimal pairing on the KSS16 curve given in Sect. 8.1.3 can be improved, for example by deriving the formulas for compressed squaring in this case.
Conclusion
It was already known that the BN curves, widely used in the literature for the 128-bit security level, do not ensure this security level because of the SexTNFS algorithm. In a recent paper, Menezes, Sarkar, and Singh [82] proposed new key sizes, but their analysis is not precise enough. In this paper, we carefully estimated the complexity of this algorithm in the context of the most common pairing families. We also explained why our estimate is much more realistic than the one given in [82] for any real-life SexTNFS implementation. As a consequence, we give the updated security level of the BN curve, which is in fact 100 bits. We also used this complexity estimation to determine the sizes of the finite field extensions that have to be used to ensure the 128-, 192- and 256-bit security levels, and obtained values which are more than \(66\%\) larger than the formerly used ones. According to these recommendations, we generated new pairing parameters, especially at the 128-, 192-, and 256-bit security levels, that are twist-secure (but also some that are both twist- and subgroup-secure). Finally, for the curves ensuring 128 bits of security, we estimated the complexity of the optimal ate pairing for each proposed curve and concluded that BLS12 and, more surprisingly, KSS16 are the most efficient choices. We therefore encourage the community to study these curves more precisely and to propose software or hardware implementations to confirm our conclusions. This study, which focuses on the most popular families, is probably not complete since other families and/or embedding degrees could be more interesting.
References
 1.
G. Adj, I. Canales-Martínez, N. C. Cortés, A. Menezes, T. Oliveira, L. Rivera-Zamarripa, F. Rodríguez-Henríquez, Computing discrete logarithms in cryptographically interesting characteristic-three finite fields. Cryptology ePrint Archive, Report 2016/914 (2016)
 2.
L.M. Adleman, The function field sieve, in Algorithmic Number Theory Symposium—ANTS I. Lecture Notes in Computer Science, vol. 877 (1994), pp. 108–121
 3.
D.F. Aranha, L. FuentesCastañeda, E. Knapp, A. Menezes, F. RodríguezHenríquez, Implementing pairings at the 192bit security level, in Pairingbased cryptography—PAIRING 2012. Lecture Notes in Computer Science, vol. 7708 (2013)
 4.
K. Aoki, J. Franke, T. Kleinjung, A. Lenstra, D.A. Osvik, A kilobit special number field sieve factorization. in Advances in Cryptology—ASIACRYPT 2007. Lecture notes in computer science, vol. 4833 (2007), pp. 1–12
 5.
L.M. Adleman, M.D.A. Huang, Function field sieve method for discrete logarithms over finite fields. Inf. Comput. 151(1), 5–16 (1999)
 6.
D. Aranha, K. Karabina, P. Longa, C. H. Gebotys, J López, Faster explicit formulas for computing pairings over ordinary curves, in Advances in Cryptology EUROCRYPT 2011. Lecture Notes in Computer Science, vol. 6632 (2011), pp. 48–68
 7.
G. Adj, A. Menezes, T. Oliveira, F. RodriguezHenriquez, Weakness of \({\mathbb{F}} _{3^{6\cdot 1429}}\) and \({\mathbb{F}} _{2^{4\cdot 3041}}\) for discrete logarithm cryptography. Finite Fields Their Appl. 32, 148–170 (2015)
 8.
P.S.L.M. Barreto, C. Costello, R. Misoczki, M. Naehrig, G.C.C.F. Pereira, G. Zanon, Subgroup security in pairingbased cryptography, in K. Lauter, F. RodríguezHenríquez, editors, Progress in Cryptology – LATINCRYPT 2015: 4th International Conference on Cryptology and Information Security in Latin America, Guadalajara, Mexico, August 23–26, 2015, Proceedings (Springer International Publishing, Cham, 2015), pp. 245–265
 9.
R. Barbulescu, S. Duquesne, Online supplement for “updating keysizes of pairings” (2017). Downloadable from https://webusers.imjprg.fr/~razvan.barbaud/Pairings/Pairings.html.
 10.
D. Boneh, M. Franklin, Identitybased encryption from the Weil pairing, in Advances in Cryptology—CRYPTO 2001. Lecture notes in computer science, vol. 2139 (2001), pp. 213–229
 11.
R. Barbulescu, P. Gaudry, A. Guillevic, F. Morain, Discrete logarithms in GF(\(p^2\))—160 digits (2014). Announcement available at the NMBRTHRY archives, item 004706
 12.
R. Barbulescu, P. Gaudry, A. Guillevic, F. Morain, Improving NFS for the discrete logarithm problem in nonprime finite fields, in Advances in Cryptology—EUROCRYPT 2015. Lecture Notes in Computer Science, vol. 9056 (2015), pp. 129–155
 13.
R. Barbulescu, P. Gaudry, A. Guillevic, F. Morain, New record in \({\mathbb{F}}_{p^3}\), (2015). Available online at https://webusers.imjprg.fr/~razvan.barbaud/p3dd52.pdf
 14.
C. Bouvier, P. Gaudry, L. Imbert, H. Jeljeli, E. Thomé, Discrete logarithms in GF(p)—180 digits, (2014). Announcement available at the NMBRTHRY archives, item 004703
 15.
R. Barbulescu, P. Gaudry, A. Joux, E. Thomé, A heuristic quasipolynomial algorithm for discrete logarithm in finite fields of small characteristic, in Advances in Cryptology—EUROCRYPT 2014. Lecture Notes in Computer Science, vol. 8441 (2014), pp. 1–16
 16.
R. Barbulescu, P. Gaudry, T. Kleinjung, The tower number field sieve, in Advances in Cryptology—ASIACRYPT 2015. Lecture Notes in Computer Science, vol. 9453 (2015), pp. 31–55
 17.
D. Boneh, C. Gentry, B. Waters, Collusion resistant broadcast encryption with short ciphertexts and private keys, in Advances in Cryptology—CRYPTO 2005. Lecture Notes in Computer Science, vol. 3621 (2005), pp. 258–275
 18.
J. Bos, M. Kaihara, T. Kleinjung, A. Lenstra, P. Montgomery, On the security of 1024bit RSA and 160bit elliptic curve cryptography. Cryptology ePrint Archive, Report 2009/389
 19.
R. Barbulescu, A. Lachand, Some mathematical remarks on the polynomial selection in NFS. Math. Comput. 86(303), 397–418 (2017)
 20.
J. P. Buhler, H. Lenstra Jr., C. Pomerance, Factoring integers with the number field sieve, in The Development of the Number Field Sieve. Lecture Notes in Mathematics, vol. 1554 (Springer, 1993), pp. 50–94
 21.
P. Barreto, B. Lynn, M. Scott, Constructing elliptic curves with prescribed embedding degrees, in Security in Communication Networks. Lecture Notes in Computer Science, vol. 2576 (2003), pp. 257–267
 22.
D. Boneh, B. Lynn, H. Shacham, Short signatures from the Weil pairing. J. Cryptol. 17(4), 297–319 (2004)
 23.
P. Barreto, M. Naehrig, Pairingfriendly elliptic curves of prime order. in Selected Areas in Cryptography–SAC 2005. Lecture Notes in Computer Science, vol. 3006 (2005), pp. 319–331
 24.
R. Barbulescu, C. Pierrot, The multiple number field sieve for medium and highcharacteristic finite fields. LMS J. Comput. Math. 17, 230–246 (2014). The published version contains an error which is corrected in version 2 available at https://hal.inria.fr/hal00952610.
 25.
E. R. Canfield, P. Erdös, C. Pomerance, On a problem of Oppenheim concerning factorisatio numerorum. J. Number Theory 17(1), 1–28 (1983)
 26.
S. Cavallar Hedwig, On the number field sieve integer factorisation algorithm. PhD thesis, Universiteit Leiden (2002)
 27.
D. Coppersmith, Fast evaluation of logarithms in fields of characteristic two. IEEE Trans. Inf. Theory 30(4), 587–594 (1984)
 28.
D. Coppersmith, Solving linear equations over GF(2): block Lanczos algorithm. Linear Algebra Appl. 192, 33–60 (1993)
 29.
D. Coppersmith, Solving homogeneous linear equations over GF(2) via block Wiedemann algorithm. Math. Comput. 62(205), 333–350 (1994)
 30.
A. Commeine, I. Semaev, An algorithm to solve the discrete logarithm problem with the number field sieve, in Public Key Cryptography—PKC 2006. Lecture Notes in Computer Science, vol. 3958 (2006), pp. 174–190
 31.
R. Cheung, S.Duquesne, J. Fan, N. Guillermin, I. Verbauwhede, G. X. Yao, FPGA implementation of pairings using residue number system and lazy reduction, in Cryptographic Hardware and Embedded Systems—CHES 2011. Lecture Notes in Computer Science, vol. 6917 (2011), pp. 421–441
 32.
J. Detrey, FFS factory: Adapting Coppersmith’s “factorization factory” to the function field sieve. Cryptology ePrint Archive, Report 2014/419 (2014)
 33.
S. Duquesne, N. El Mrabet, S. Haloui, F. Rondepierre, Choosing and generating parameters for low level pairing implementation on BN curves. Cryptology ePrint Archive, Report 2015/1212 (2015)
 34.
A. J. Devegili, M. Scott, R. Dahab, Implementing cryptographic pairings over Barreto–Naehrig curve, in PairingBased Cryptography—Pairing 2007. Lecture Notes in Computer Science, vol. 4575 (2007), pp. 197–207
 35.
N. El Mrabet, M. Joye, Guide to PairingBased Cryptography. Chapman & Hall/CRC Cryptography and Network Security Series (CRC Press, 2017)
 36.
J. Fried, P. Gaudry, N. Heninger, E. Thomé, A kilobit hidden SNFS discrete logarithm computation, in Annual International Conference on the Theory and Applications of Cryptographic Techniques. Lecture Notes in Computer Science, vol. 10210 (2017), pp. 202–231
 37.
L. Fuentes-Castañeda, E. Knapp, F. Rodríguez-Henríquez, Faster hashing to \({\mathbb{G}}_{2}\), in Selected Areas in Cryptography—SAC 2011. Lecture Notes in Computer Science, vol. 7118 (2011), pp. 412–430
 38.
D. Freeman, M. Scott, E. Teske, A taxonomy of pairingfriendly elliptic curves. J. Cryptol. 23(2), 224–280 (2010)
 39.
G. Grewal, R. Azarderakhsh, P. Longa, S. Hu, D. Jao, Efficient implementation of bilinear pairings on ARM processors, in Selected Areas in Cryptography. Lecture Notes in Computer Science, vol. 7707 (2013), pp. 149–165
 40.
L. Ghammam, E. Fouotsa, Adequate elliptic curves for computing the product of n pairings, in Arithmetic of Finite Fields—WAIFI 2016. Lecture Notes in Computer Science, vol. 10064 (2016), pp. 36–352
 41.
L. Grémy, A. Guillevic, F. Morain, E. Thomé, Computing discrete logarithms in GF(\(p^6\)), in Selected Areas in Cryptography—SAC 2017. Lecture notes in computer science (2017)
 42.
F. Göloğlu, R. Granger, G. McGuire, J. Zumbrägel, On the function field sieve and the impact of higher splitting probabilities: application to discrete logarithms in \({\mathbb{F}}_{2^{1971}}\) (2013), Cryptology ePrint Archive, Report 2013/074
 43.
F. Göloğlu, R. Granger, G. McGuire, J. Zumbrägel, Solving a 6120bit DLP on a desktop computer, in Selected Areas in Cryptography—SAC. Lecture Notes in Computer Science, vol. 8282 (2013), pp. 136–152
 44.
P. Gaudry, L. Grémy, M. Videau, Collecting relations in the number field sieve in GF(\(p^6\)). LMS J. Comput. Math. 19(A), 332–350 (2016)
 45.
R. Granger, T. Kleinjung, J. Zumbrägel, Breaking 128bit secure supersingular binary curves, in Advances in Cryptology—CRYPTO 2014. Lecture Notes in Computer Science, vol. 8617 (2014), pp. 126–145
 46.
R. Granger, T. Kleinjung, J. Zumbrägel, On the powers of 2. Cryptology ePrint Archive, Report 2014/300 (2014)
 47.
R. Granger, T. Kleinjung, and J. Zumbrägel, On the discrete logarithm problem in finite fields of fixed characteristic. Trans. Am. Math. Soc. (2017)
 48.
A. Guillevic, F. Morain, E. Thomé, Solving discrete logarithms on a 170bit MNT curve by pairing reduction, in Selected Areas in Cryptography—SAC 2016. Lecture Notes of Computer Science, vol. 10532 (2016)
 49.
D. Gordon, Discrete logarithms in GF(\(p\)) using the number field sieve. SIAM J. Discret. Math. 6(1), 124–138 (1993)
 50.
R. Granger, M. Scott, Faster squaring in the cyclotomic subgroup of sixth degree extensions. in Public Key Cryptography—PKC 2010. Lecture Notes in Computer Science, vol. 6056 (2010), pp. 209–223
 51.
Geovandro C.C.F. Pereira, M.A. Simplício Jr., M. Naehrig, P.S.L.M. Barreto, A family of implementation-friendly BN elliptic curves. J. Syst. Softw. 84(8), 1319–1326 (2011)
 52.
K. Hayasaka, K. Aoki, T. Kobayashi, T. Takagi, A construction of 3-dimensional lattice sieve for number field sieve over GF(\(p^n\)). Cryptology ePrint Archive, Report 2015/1179 (2015)
 53.
T. Hayashi, T. Shimoyama, N. Shinohara, T. Takagi, Breaking pairingbased cryptosystems using \(\eta _t\) pairing over GF(\(3^{97}\)), in Advances in cryptology—ASIACRYPT 2012. Lecture Notes in Computer Science, vol. 7658 (2012), pp. 43–60
 54.
F. Hess, N. Smart, F. Vercauteren, The Eta pairing revisited. IEEE Trans. Inf. Theory 52(10), 4595–4602 (2006)
 55.
T. Hayashi, N. Shinohara, L. Wang, S. Matsuo, M. Shirase, T. Takagi, Solving a 676bit discrete logarithm problem in GF(\(3^{6n}\)), in Public Key Cryptography—PKC 2010. Lecture Notes in Computer Science, vol. 6056 (2010), pp. 351–367
 56.
IEEE, 1363.3-2013—IEEE standard for identity-based cryptographic techniques using pairings (2017). Can be purchased online at http://ieeexplore.ieee.org/document/6662370/
 57.
ISO, ISO/IEC 18033-5:2015 (2015). Can be purchased online at https://www.iso.org/obp/ui/#iso:std:59948:en
 58.
J. Jeong, T. Kim, Extended tower number field sieve with application to finite fields of arbitrary composite extension degree. Cryptology ePrint Archive, Report 2016/526 (2016). http://eprint.iacr.org/2016/526
 59.
A. Joux, R. Lercier, The function field sieve is quite special, in Algorithmic Number Theory Symposium—ANTS V. Lecture notes in computer science, vol. 2369 (2002), pp. 431–445
 60.
A. Joux, R. Lercier, Improvements to the general number field for discrete logarithms in prime fields. Math. Comput. 72(242), 953–967 (2003)
 61.
A. Joux, R. Lercier, The function field sieve in the medium prime case, in Advances in Cryptology—EUROCRYPT 2006. Lecture Notes in Computer Science, vol. 4005 (2006), pp. 254–270
 62.
A. Joux, R. Lercier, N. Smart, F. Vercauteren, The number field sieve in the medium prime case, in Advances in Cryptology—CRYPTO 2006. Lecture Notes in Computer Science, vol. 4117 (2006), pp. 326–344
 63.
A. Joux, Faster index calculus for the medium prime case: application to 1175-bit and 1425-bit finite fields, in Advances in Cryptology—EUROCRYPT 2013. Lecture Notes in Computer Science, vol. 7881 (2013), pp. 177–193
 64.
A. Joux, C. Pierrot, The special number field sieve in \({\mathbb{F}}_{p^n}\)—application to pairingfriendly constructions, in PairingBased Cryptography—Pairing 2013. Lecture Notes in Computer Science, vol. 8365 (2013), pp. 45–61
 65.
A. Joux, C. Pierrot, Improving the polynomial time precomputation of Frobenius representation discrete logarithm algorithms, in Advances in Cryptology—ASIACRYPT 2014. Lecture Notes in Computer Science, vol. 8873 (2014), pp. 378–397
 66.
K. Karabina, Squaring in cyclotomic subgroups. Math. Comput. 82(281) (2013)
 67.
T. Kim, R. Barbulescu, The extended tower number field sieve: A new complexity for the medium prime case, in Advances in Cryptology—CRYPTO 2016. Lecture Notes in Computer Science, vol. 9814 (2016), pp. 543–571
 68.
T. Kleinjung, J. Bos, A. Lenstra, Mersenne factorization factory, in International Conference on the Theory and Application of Cryptology and Information Security. Lecture Notes in Computer Science, vol. 8873 (2014), pp. 358–377
 69.
T. Kleinjung, C. Diem, A. Lenstra, C. Priplata, C. Stahlke, Discrete logarithms in GF(p)—768 bits (2016). Announcement available at the NMBRTHRY archives, item 004917
 70.
E.J. Kachisa, E.F. Schaefer, M. Scott, Constructing BrezingWeng pairingfriendly elliptic curves using elements in the cyclotomic field, in PairingBased Cryptography—Pairing 2008. Lecture Notes in Computer Science, vol. 5209 (2008), pp. 126–135
 71.
C. Lanczos, Solution of systems of linear equations by minimized iterations. J. Res. Nat. Bur. Standards 49(1), 33–53 (1952)
 72.
A. Lenstra, Unbelievable security matching AES security using public key systems, in International Conference on the Theory and Application of Cryptology and Information Security. Lecture Notes in Computer Science, vol. 2188 (2001), pp. 67–86
 73.
A. Lenstra, Unbelievable security: Matching AES security using public key systems, in Advances in cryptology—ASIACRYPT 2001. Lecture Notes in Computer Science, vol. 2248 (2001), pp. 67–86
 74.
C.H. Lim, P.J. Lee, A key recovery attack on discrete logbased schemes using a prime order subgroup, in Advances in Cryptology—CRYPTO ’97. Lecture Notes in Computer Science, vol. 1294 (1997), pp. 249–263
 75.
A. Lenstra, H. Lenstra Jr., M. Manasse, J. Pollard, The number field sieve, in Proceedings of the TwentySecond Annual ACM Symposium on Theory of Computing. ACM (1990), pp. 564–572
 76.
R. Lidl, H. Niederreiter, Finite Fields. (Cambridge University Press, 1997)
 77.
B. LaMacchia, A. Odlyzko, Solving large sparse linear systems over finite fields, in Advances in Cryptology—CRYPTO 1990. Lecture Notes in Computer Science, vol. 537 (1990), pp. 109–133
 78.
D. Matyukhin, Effective version of the number field sieve for discrete logarithms in the field GF\((p^k)\) (in Russian). Trudy po Discretnoi Matematike 9, 121–151 (2006)
 79.
V. Miller, The Weil pairing and its efficient calculation. J. Cryptol. 17(4), 235–261 (2004)
 80.
P. Montgomery, A block Lanczos algorithm for finding dependencies over GF(2), in Advances in Cryptology—EUROCRYPT 1995. vol. 921 (Springer, 1995), pp. 106–120
 81.
D. Moody, R.C. Peralta, R.A. Perlner, A.R. Regenscheid, A.L. Roginsky, L. Chen, Report on pairing-based cryptography (2015). Can be freely downloaded from http://nvlpubs.nist.gov/nistpubs/jres/120/jres.120.002.pdf
 82.
A. Menezes, P. Sarkar, S. Singh, Challenges with assessing the impact of NFS advances on the security of pairingbased cryptography, in Paradigms in Cryptology—Mycrypt 2016. Lecture Notes in Computer Science, vol. 10311 (2016)
 83.
B. Murphy, Modelling the yield of number field sieve polynomials, in Algorithmic Number Theory Symposium—ANTS III. Lecture Notes in Computer Science, vol. 1423 (1998), pp. 137–150
 84.
European Network and Information Security Agency, Algorithms, key sizes and parameters report—2013 (2013)
 85.
M. Naehrig, R. Niederhagen, P. Schwabe, New software speed records for cryptographic pairings, in Progress in Cryptology—LATINCRYPT 2010. Lecture Notes in Computer Science, vol. 6212 (2010), pp. 109–123
 86.
National Institute of Standards and Technology (NIST), NIST special publication 80057 part 1 (revised): recommendation for key management, part 1: General (revised), (July 2012). Publication available online at http://csrc.nist.gov/publications/nistpubs/80057/sp80057Part1revised2_Mar082007.pdf
 87.
C. Pierrot, The multiple number field sieve with conjugation and generalized JouxLercier methods, in Advances in Cryptology—EUROCRYPT 2015. Lecture Notes in Computer Science, vol. 9056 (2015), pp. 156–170
 88.
M. Scott, N. Benger, M. Charlemagne, L.J. Dominguez Perez, E.J. Kachisa, On the final exponentiation for calculating pairings on ordinary elliptic curves, in Pairing-Based Cryptography—PAIRING 2009. Lecture Notes in Computer Science, vol. 5671 (2009), pp. 78–88
 89.
O. Schirokauer, Discrete logarithms and local units. Philos. Trans. R. Soc. Lond. A Math. Phys. Eng. Sci. 345(1676), 409–423 (1993)
 90.
O. Schirokauer, Using number fields to compute logarithms in finite fields. Math. Comput. 69(231), 1267–1283 (2000)
 91.
O. Schirokauer, The number field sieve for integers of low weight. Math. Comput. 79(269), 583–602 (2010)
 92.
I. Semaev, Special prime numbers and discrete logs in finite prime fields. Math. Comput. 71(237), 363–377 (2002)
 93.
N. Smart, ECRYPT II yearly report on algorithms and key sizes (20112012). (2012)
 94.
P. Sarkar, S. Singh, Fine tuning the function field sieve algorithm for the medium prime case. IEEE Trans. Inf. Theory 62(4), 2233–2253 (2016)
 95.
P. Sarkar, S. Singh, A generalisation of the conjugation method for polynomial selection for the extended tower number field sieve algorithm. Cryptology ePrint Archive, Report 2016/537 (2016)
 96.
P. Sarkar, S. Singh, New complexity tradeoffs for the (multiple) number field sieve algorithm in nonprime fields, in Annual International Conference on the Theory and Applications of Cryptographic Techniques (2016), pp. 429–458
 97.
P. Sarkar, S. Singh, Tower number field sieve variant of a recent polynomial selection method. Cryptology ePrint Archive, Report 2016/401 (2016)
 98.
T. Unterluggauer, E. Wenger, Efficient pairings and ECC for embedded systems, in Cryptographic Hardware and Embedded Systems—CHES 2014. Lecture Notes in Computer Science, vol. 8731 (2014), pp. 298–315
 99.
F. Vercauteren, Optimal pairings. IEEE Trans. Inf. Theory 56, 455–461 (2009)
 100.
F. Valette, R. Lercier, P.A. Fouque, D. Réal, Fault attack on elliptic curve Montgomery ladder implementation, in 5th Workshop on Fault Diagnosis and Tolerance in Cryptography. IEEE (2008), pp. 92–98
 101.
D. Wiedemann, Solving sparse linear equations over finite fields. IEEE Trans. Inf. Theory 32(1), 54–62 (1986)
 102.
P. Zajac, On the use of the lattice sieve in the 3D NFS. Tatra Mountains Mathematical Publications 45(1), 161–172 (2010)
 103.
X. Zhang, D. Lin, Analysis of optimum pairing products at high security levels, in Progress in Cryptology—INDOCRYPT 2012. Lecture Notes in Computer Science, vol. 7668 (2012), pp. 412–430
Communicated by Nigel Smart.
Appendices
Numerical Integration
The size of the norms can be computed via numerical methods. Due to the known upper bounds, we can certify that our results are correct up to an error probability of \(2^{-128}\), so that our chances of being wrong are equal to the chances of an attacker breaking the system by pure luck.
Given a polynomial f and a sieve parameter A, let c(f, A) be the average value of \(\{\log _2 N_f(e) \mid e\) a tuple in the sieving domain\(\}\) and U(f, A) an upper bound on the norms on the f side for tuples in the sieving domain. Let \(e_1\), \(\ldots \), \(e_T\) be random tuples in the sieving domain, chosen uniformly and independently. Then the Chernoff bound applied to the random variables \(X_i=\frac{\log _2 N_f(e_i)}{\log _2 U(f,A)}\), \(1\le i\le T\), states that for any constant \(\varepsilon >0\), \(\Pr \left[\left|\frac{1}{T}\sum _{i=1}^{T} X_i-{\mathbb {E}}[X_1]\right|\ge \varepsilon \right]\le e^{-2\varepsilon ^2T}\).
For \(\varepsilon =0.05\), since \(e^{-2\varepsilon ^2T}\le 2^{-2\varepsilon ^2T}\), it suffices to solve \(2^{-2\varepsilon ^2T}=2^{-128}\), and we obtain \(T=25{,}600\).
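A minimal sketch of this sample-size computation (assuming the one-sided Hoeffding/Chernoff form of the bound stated above; base-2 values are used so the arithmetic matches \(T=25{,}600\)):

```python
import math

# One-sided Hoeffding/Chernoff bound for [0,1]-valued variables:
#   Pr[|mean - expectation| >= eps] <= e^(-2 eps^2 T).
# We want failure probability at most 2^-128; since e^-x <= 2^-x,
# it is enough to require 2^(-2 eps^2 T) <= 2^-128, i.e. 2 eps^2 T >= 128.
eps = 0.05
security = 128
T = math.ceil(security / (2 * eps**2))
print(T)  # 25600

# Sanity check: the e-based bound at T = 25600 is indeed below 2^-128.
log2_failure = -2 * eps**2 * T * math.log2(math.e)
print(log2_failure <= -128)  # True
```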
One Cannot Change the Complexity Inside the Frame of NFS
Variants of NFS where p is parametrized (SNFS, STNFS, SexTNFS, Joux-Pierrot) are considered to be the dream situation for an attacker. Fried et al. [36, Sec 4.1] made a series of arguments very similar to those we use below.
Fact 1
Let \(p^n\) be a prime power and let \(f,g\in {\mathbb {Z}}[x]\) be two polynomials which have a common factor \(\varphi \) modulo p that is irreducible of degree n. A variant of NFS which uses these polynomials and finds all relations in a time proportional to, or larger than, the size of the sieving space (enumeration or sieving) has complexity at least \(L[32/9]^{1+o(1)}\), where \(L[c]=\exp \left(c^{1/3}(\log p^n)^{1/3}(\log \log p^n)^{2/3}\right)\).
Argument
Step 1 We start by proving that \(p^n\) divides the resultant of f and g: \(p^n\mid {{\text {Res}}}(f,g)\). Indeed, the resultant is the determinant of the Sylvester matrix, and hence the volume of the lattice \(L=f {\mathbb {Z}}[x]+g{\mathbb {Z}}[x]\) inside \({\mathbb {Z}}[x]\). Since \(L'=p{\mathbb {Z}}[x]+\varphi {\mathbb {Z}}[x]\) is a lattice which contains L, the volume of the former, which is \(p^n\), divides the volume of the latter: \(p^n\) divides \({{\text {Res}}}(f,g)\).
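Step 1 can be checked on a toy example. The sketch below (with an ad hoc Sylvester-determinant resultant; the polynomials and the prime \(p=5\), \(n=2\) are illustrative choices, not taken from the paper) verifies that \(p^n\) divides \({{\text {Res}}}(f,g)\) when f and g share a common factor modulo p that is irreducible of degree n:

```python
from fractions import Fraction

def resultant(f, g):
    """Resultant of two integer polynomials via the determinant of the
    Sylvester matrix (coefficients listed from the leading term down)."""
    df, dg = len(f) - 1, len(g) - 1
    n = df + dg
    m = [[Fraction(0)] * n for _ in range(n)]
    for i in range(dg):            # dg shifted copies of f
        for j, c in enumerate(f):
            m[i][i + j] = Fraction(c)
    for i in range(df):            # df shifted copies of g
        for j, c in enumerate(g):
            m[dg + i][i + j] = Fraction(c)
    # Exact Gaussian elimination; det = signed product of the pivots.
    det = Fraction(1)
    for col in range(n):
        piv = next((r for r in range(col, n) if m[r][col] != 0), None)
        if piv is None:
            return 0
        if piv != col:
            m[col], m[piv] = m[piv], m[col]
            det = -det
        det *= m[col][col]
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n):
                m[r][c] -= factor * m[col][c]
    return int(det)

# f and g both reduce to x^2 + x + 1 modulo p = 5, which is irreducible
# there (n = 2), so p^n = 25 must divide Res(f, g).
f = [1, 6, 1]   # x^2 + 6x + 1
g = [1, 1, 6]   # x^2 + x + 6
r = resultant(f, g)
print(r, r % 25)  # 200 0
```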
Step 2 Let \(d_f\) and \(d_g\) be the degrees of f and g, and let E be the sieve parameter (\(E=A^\eta \) in STNFS and SexTNFS). We have \(\log _2{{\text {Res}}}(f,g)\le d_f\log _2\Vert g\Vert +d_g\log _2\Vert f\Vert \) up to lower-order terms, which creates the constraint \(d_f\log _2\Vert g\Vert +d_g\log _2\Vert f\Vert \ge \log _2(p^n)\).
We easily compute the size of the norms as \(\log _2\Vert f\Vert +\log _2\Vert g\Vert +(d_f+d_g)\log _2E\). At this point we have to solve an optimization problem, for which we set \(K:=\log _2(p^n)\) and \(\log _2E\), which are constants, and introduce the variables \(x_1=d_f \log _2 E\), \(x_2=d_g\log _2 E\), \(y_1=\max (1,\log _2\Vert g\Vert )\) and \(y_2=\max (1,\log _2\Vert f\Vert )\). Hence the problem becomes: minimize \(x_1+y_1+x_2+y_2\) subject to \(x_1y_1+x_2y_2\ge K\).
Step 3 Put \(F(x_1,y_1,x_2,y_2)=x_1y_1+x_2y_2\) and \(G(x_1,y_1,x_2,y_2)=x_1+y_1+x_2+y_2\). The local extrema of G on a set where F is constant are obtained in one of three situations:

1.
\(\nabla F\parallel \nabla G\)

2.
One of the variables is on the boundary (\(y_1=1\), \(y_2=1\), \(x_1=\log _2E\) or \(x_2=\log _2E\)), say \(y_1=1\), and \(\nabla F\mid _{\{y_1=1\}}\parallel \nabla G\mid _{\{y_1=1\}}\)

3.
Two or more variables are on the boundary.
We check that all the extrema in situations (1) and (2) are maxima, so we are left with situation (3), which further divides into four cases:

1.
\(y_1=y_2=1\), i.e., \(\log _2\Vert f\Vert \le 1\) and \(\log _2\Vert g\Vert \le 1\);

2.
\(x_1=x_2=\log _2E\), i.e., \(d_f=d_g=1\);

3.
\(y_2=1\) and \(x_1=\log _2E\), i.e., \(\log _2\Vert f\Vert \le 1\) and \(d_f=1\), or vice versa.

4.
\(x_1=\log _2 E\) and \(y_1=1\), or the same for \(x_2\) and \(y_2\). This is the NFS case, with \(\log _2\Vert f\Vert =d_g=1\).
In case (1), the minimum is \(x_1+x_2+y_1+y_2=K+2\), while in case (2) the minimum is \(2\log _2 E+K/\log _2E\). In case (3) we find that the optimized expression becomes \(x_2+(K-x_2)/\log _2E+1+\log _2E\), whose minimum is obtained when \(x_2=\log _2E\), and we are again in case (2). Finally, in case (4), we have \(x_2y_2=K-1\) and we have to minimize \(2+x_2+y_2\). This happens when \(x_2=y_2=\sqrt{K-1}\) and \(x_1+y_1+x_2+y_2=2+2\sqrt{K-1}\).
We can now compare the local minima and conclude that the global minimum is \(2+2\sqrt{K-1}\), attained in case (4). Note that this minimum is independent of the value of E.
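The case (4) minimum can be sanity-checked numerically. The sketch below (the value \(K=3072\) is an illustrative choice) minimizes \(2+x_2+y_2\) under \(x_2y_2=K-1\) by brute force and compares the result with \(\sqrt{K-1}\):

```python
import math

K = 3072  # K = log2(p^n) for a 3072-bit field (illustrative choice)

# In case (4) we minimize 2 + x2 + y2 subject to x2*y2 = K - 1,
# i.e. the single-variable convex function 2 + x2 + (K - 1)/x2.
def G(x2):
    return 2 + x2 + (K - 1) / x2

# Brute-force search over a fine grid of step 0.01.
best_x = min((1 + i / 100 for i in range(100 * (K - 1))), key=G)
expected_x = math.sqrt(K - 1)           # AM-GM: minimum at x2 = y2 = sqrt(K-1)
print(abs(best_x - expected_x) < 0.01)            # True
print(abs(G(best_x) - (2 + 2 * expected_x)) < 1e-5)  # True
```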
Step 4 It is classical to estimate the cost of NFS as \(B/\rho \left(\frac{\log _2N}{\log _2 B}\right)+B^2\), where N is the product of the norms and \(\rho \) is Dickman's function; at the minimum computed above, \(\log _2 N=2+2\sqrt{K-1}\approx 2\sqrt{\log _2(p^n)}\). Then the classical analysis of NFS leads to the complexity of SexTNFS: \(L[32/9]^{1+o(1)}\).
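To make the asymptotic statement concrete, here is a rough evaluation of the L-function above with \(c=32/9\) (the SNFS/SexTNFS constant) for a 3072-bit field; the o(1) term is ignored, so the figure is only indicative:

```python
import math

def log2_L(c, bits):
    """log2 of L[c] = exp(c^(1/3) (log p^n)^(1/3) (log log p^n)^(2/3))
    for a field of the given bit size; the o(1) term is ignored."""
    lnq = bits * math.log(2)        # natural log of p^n
    val = (c ** (1 / 3)) * (lnq ** (1 / 3)) * (math.log(lnq) ** (2 / 3))
    return val / math.log(2)

# Indicative bit security of a 3072-bit field against a SexTNFS-like attack.
bits = log2_L(32 / 9, 3072)
print(100 < bits < 120)  # True
```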
Fried et al. [36] noted that a multiple variant of SexTNFS is impossible, so it is safe to say that Fig. 1 cannot contain a curve below the one used in this article to approximate the security of parametrized pairings.
Practical improvements will continue to come, but they will modify only the o(1) term. A hypothetical algorithm beating SexTNFS would need to produce relations faster than by enumerating all the elements of a sieving space, as happened in small characteristic with pinpointing, or it would have to abandon the NFS diagram completely. Such an algorithm would be a great discontinuity, comparable to a possible subexponential algorithm for the DLP on elliptic curves.
Barbulescu, R., Duquesne, S. Updating Key Size Estimations for Pairings. J Cryptol 32, 1298–1336 (2019). https://doi.org/10.1007/s00145-018-9280-5
Keywords
 Pairing-based cryptology
 Number field sieve
 Discrete logarithm