1 Introduction

The elliptic curve discrete logarithm problem (ECDLP) is widely believed to be one of the hardest computational number theory problems used in cryptography. While integer factorization and discrete logarithms over finite fields are subject to index calculus attacks of subexponential or even quasipolynomial complexity, recommended key sizes for elliptic curve cryptography are derived from the birthday-bound complexity of generic discrete logarithm algorithms.

Over the last ten years, starting with the seminal work of Semaev [19], index calculus algorithms have progressively been adapted to the elliptic curve discrete logarithm problem. However, the most efficient attacks target parameters that are not used in standards; attacks against binary curves rely on poorly understood Gröbner basis assumptions; and almost no attacks at all have been proposed against the most important family of curves, namely elliptic curves defined over prime fields.

Contributions. In this paper, we provide new index calculus algorithms to solve elliptic curve discrete logarithm problems over prime fields of cardinality p.

The factor bases in our algorithms are of the form \(\mathcal {F}:=\{(x,y)\in E(K) | L(x)=0\}\), where L is a large-degree rational map. We additionally require that L is a composition of small-degree rational maps \(L_j\), \(j=1,\ldots ,n'\), such that the large-degree constraint \(L(x)=0\) can be replaced by a system of low degree constraints \(x_2=L_1(x)\), \(x_3=L_2(x_2)\), \(x_4=L_3(x_3)\), ..., \(x_{n'}=L_{n'-1}(x_{n'-1})\), \(L_{n'}(x_{n'})=0\). Relations are computed by solving a polynomial system constructed from Semaev’s summation polynomials and the above decomposition of the map L.

Our factor bases generalize the factor bases used in previous works: Diem and Gaudry’s attacks [6, 10] implicitly use \(L(x)=x^q-x\) where q is the size of a subfield; small characteristic, prime degree extension attacks [7-9, 12, 17, 21] implicitly use the linearized polynomial corresponding to a vector space; and Semaev’s original factor basis [19] implicitly uses \(L(x)=\prod _{\alpha <B}(x-\alpha )\) for some B of appropriate size. The potential advantage of our polynomials L compared to the one implicitly used by Semaev is that they can be rewritten as a system of low degree polynomial equations, similar to the systems occurring in the characteristic 2 case, which we then solve using Gröbner basis algorithms.

We specify two concrete instances of the above algorithm. In the first instance, we assume that \(p-1\) has a large divisor which is smooth, and we define L such that its roots form precisely a coset of a subgroup of smooth order. In the second instance, we assume the knowledge of an auxiliary curve over the same field with a large enough smooth subgroup, and we define L using the isogeny corresponding to that subgroup. We complete the second instance with two different algorithms to compute an auxiliary curve over a finite field, and we compare both methods.

Interestingly, the standardized curve NIST P-224 falls into the framework of our first algorithm. We also show that computing a finite field together with an auxiliary curve for this field is, as far as we know, much easier than computing an auxiliary curve for a given finite field.

The complexity of our algorithms remains an open problem. We implemented both of them in Magma, and compared their performance to previous attacks on binary curves of comparable sizes. The experimental results suggest that, despite a common structure, the systems are somewhat easier to solve in the binary case than in the prime case. They also suggest that all the systems we studied are easier to solve than generic systems of “comparable parameters”. This may look encouraging from a cryptanalytic point of view, but we stress that the set of experiments is too limited to draw any conclusion at this stage (see also [11] for a criticism of the analysis of [17]). At the moment all attacks are outperformed by generic discrete logarithm algorithms for practically relevant parameters.

Perspectives. Our paper introduces a new algorithmic framework to solve ECDLP over prime fields. We hope that these ideas revive research in this area and lead to a better understanding of the elliptic curve discrete logarithm problem.

Proving meaningful complexity bounds for our algorithms appears very challenging today, as they use Gröbner basis algorithms on poorly understood families of polynomial systems with a special structure. At the time of writing it is not clear yet whether the special structure introduced in this paper leads to asymptotic improvements with respect to generic discrete logarithm algorithms. Of course, Gröbner basis algorithms may also not be the best tools to solve these systems. At the end of the paper we suggest that better, dedicated algorithms for these systems, inspired by existing root-finding algorithms, could perhaps lead to substantial efficiency and analysis improvements of our algorithms.

Related work. In recent years many index calculus algorithms have been proposed for elliptic curves [6-10, 12, 17, 19, 21]. All these papers except Semaev’s [19] focus on elliptic curves defined over extension fields, and Semaev did not provide an algorithm to compute relations. Moreover, our work offers a natural large prime counterpart to recent characteristic 2 approaches, and an avenue to generalize any future result on these approaches to the even more interesting large prime case.

We are aware of two other types of attacks that first exploited smoothness properties of \(p-1\) and were later generalized using elliptic curves. The first one is Pollard’s \(p-1\) factorization method, generalized to the celebrated elliptic curve factorization method [13]. The second one is den Boer’s reduction of the computational Diffie-Hellman problem to the discrete logarithm problem, which was generalized by Maurer [5, 14]. We point out that the smoothness requirements on the auxiliary curve order are much weaker in our attacks than in these contexts.

Because of these attacks, there may also be a folklore suspicion in the community that using primes with special properties could lead to improved attacks on elliptic curves, but to the best of our knowledge this was not supported by any concrete attack so far, and in fact all NIST curves use generalized Mersenne primes.

Outline. The remainder of the paper is organized as follows. In Sect. 2 we describe related work, particularly on binary curves. In Sect. 3 we describe our main results. We first sketch our main idea and provide a partial analysis of our general algorithm, leaving aside precomputation details and the complexity of the Gröbner basis step. We then describe the \(p-1\) smooth and isogeny versions of our algorithms, and we analyze the complexity of computing an auxiliary curve in the second case. In Sect. 4 we describe our experimental evaluation of the attack. Finally, Sect. 5 summarizes our results and provides routes towards improvements.

2 Previous Work on (Binary) Curves

Let K be a finite field; let E be an elliptic curve defined over K; and let \(P,Q\in E(K)\) such that Q is in the subgroup \(G\subset E(K)\) generated by P. The discrete logarithm problem is the problem of finding an integer k such that \(Q=kP\). In the following we assume that the order r of G is prime, as is usually the case in cryptographic applications.

2.1 Index Calculus for Elliptic Curves

Index calculus algorithms use a subset \(\mathcal {F}\subset G\) often called a factor basis. The simplest algorithms run in two stages. The first stage consists in collecting relations of the form

$$\begin{aligned} a_iP+b_iQ+\sum _{P_{j}\in \mathcal {F}}e_{ij}P_j=0. \end{aligned}$$

The second stage consists in performing linear algebra on these relations to deduce a relation of the form

$$\begin{aligned} aP+bQ=0, \end{aligned}$$

from which the discrete logarithm \(k=-a/b\mod r\) is easily deduced.
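To make the final step concrete, here is a minimal Python sketch with made-up toy numbers (the group order r and the coefficients a, b, k are all illustrative):

```python
# Toy illustration: once linear algebra yields a relation a*P + b*Q = 0 with
# Q = k*P in a group of prime order r, then a + b*k = 0 (mod r), so the
# logarithm is k = -a/b mod r.
r = 101                       # toy prime group order
k = 57                        # secret logarithm, so Q = k*P
b = 40
a = (-b * k) % r              # pretend the linear algebra stage produced (a, b)
recovered = (-a * pow(b, -1, r)) % r   # k = -a/b mod r
assert recovered == k
```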

Since the seminal work of Semaev [19], index calculus algorithms for elliptic curves have used a basis of the form

$$\begin{aligned} \mathcal {F}:=\{(x,y)\in E(K) | x\in V\} \end{aligned}$$

where V is some subset of K. Relations are obtained by computing \(R=(X,Y)=aP+bQ\) for random a and b, then solving a polynomial equation

$$\begin{aligned} S_{m+1}(x_1,\ldots ,x_m,X)=0 \end{aligned}$$

with the additional constraints that \(x_i\in V\) for all i. Here \(S_\ell \) is such that for \(X_1,\ldots , X_\ell \in \overline{K}\) one has \(S_\ell (X_1,\ldots ,X_\ell )=0\) if and only if there exist \(P_i=(X_i,Y_i) \in E(\overline{K})\) with \(P_1+\ldots +P_\ell =0\). The polynomials \(S_\ell \) are called summation polynomials.
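As a concrete sanity check, the Python sketch below verifies the defining property of the third summation polynomial, whose explicit form for \(y^2=x^3+Ax+B\) is \(S_3(x_1,x_2,x_3)=(x_1-x_2)^2x_3^2-2\left((x_1+x_2)(x_1x_2+A)+2B\right)x_3+(x_1x_2-A)^2-4B(x_1+x_2)\), on a toy curve (all parameters illustrative):

```python
# Numerical check of the third summation polynomial on a toy curve
# y^2 = x^3 + A*x + B over F_p: S_3 must vanish on the x-coordinates of
# P1, P2 and -(P1 + P2).
p, A, B = 101, 3, 7

def add(P, Q):
    """Affine point addition on y^2 = x^3 + A*x + B over F_p (None = infinity)."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    lam = ((3 * x1 * x1 + A) * pow(2 * y1, -1, p) if P == Q
           else (y2 - y1) * pow(x2 - x1, -1, p)) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def S3(x1, x2, x3):
    """Semaev's third summation polynomial for y^2 = x^3 + A*x + B."""
    return ((x1 - x2) ** 2 * x3 ** 2
            - 2 * ((x1 + x2) * (x1 * x2 + A) + 2 * B) * x3
            + (x1 * x2 - A) ** 2 - 4 * B * (x1 + x2)) % p

pts = [(x, y) for x in range(p) for y in range(p)
       if (y * y - x ** 3 - A * x - B) % p == 0]
P1, P2 = pts[0], pts[10]
P3 = add(P1, P2)                # -(P1 + P2) has the same x-coordinate as P1 + P2
assert P3 is None or S3(P1[0], P2[0], P3[0]) == 0
```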

When \(K=\mathbb {F}_p\), Semaev originally proposed to use \(V=\{x\in \mathbb {Z}_{\ge 0}| x<p^{1/m}\}\). This was inspired by the factor bases used for discrete logarithms over finite fields. However, Semaev did not suggest any algorithm to compute relations with this factor basis.

2.2 Weil Restriction on Vector Spaces

In the case of an extension field \(K=\mathbb {F}_{q^n}\), developments of Semaev’s ideas by Gaudry, Diem and Faugère-Perret-Petit-Renault [6-8, 10] led to choosing V as a linear subspace of \(\mathbb {F}_{q^n}/\mathbb {F}_{q}\) of dimension \(n'\approx \lceil n/m\rceil \). To compute relations, one then performs a Weil descent, or Weil restriction, of the summation polynomial onto this vector space.

Concretely we fix a basis \(\{v_1,\ldots ,v_{n'}\}\) for V, we define \(mn'\) variables \(x_{ij}\) over \(\mathbb {F}_{q}\), we substitute \(x_i\) by \(\sum _j x_{ij}v_j\) in \(S_{m+1}\), and by fixing a basis \(\{\theta _1,\ldots ,\theta _n\}\) of \(\mathbb {F}_{q^n}/\mathbb {F}_{q}\) we see the resulting equation over \(\mathbb {F}_{q^n}\) as a system of n polynomial equations over \(\mathbb {F}_{q}\). Namely, we write

$$\begin{aligned} S_{m+1}\left( \sum _j x_{1j}v_j,\ldots ,\sum _j x_{mj}v_j,X\right) =0 \end{aligned}$$

in the form

$$\begin{aligned} \sum _k\theta _kf_k(x_{ij})=0 \end{aligned}$$

which implies that for all k we have

$$\begin{aligned} f_k(x_{ij})=0. \end{aligned}$$

This polynomial system is then solved using generic methods such as resultants or Gröbner basis algorithms.

A particular case of this approach consists in taking \(V:=\mathbb {F}_q\). The resulting index calculus algorithm is more efficient than generic algorithms for fixed \(n>3\) and large enough q, and has subexponential time when q and n increase simultaneously in an appropriate manner [6, 10].

Another particular case occurs when q is a very small constant (typically \(q=2\)). In this case the efficiency of Gröbner basis algorithms is increased by adding the so-called field equations \(x_{ij}^q-x_{ij}=0\) to the system. Experimental results and a heuristic analysis led Petit and Quisquater to conjecture that the algorithm could also have subexponential time in that case [17].

2.3 Limits of Previous Works

From a practical point of view, the subexponential result in [6] is of little interest as elliptic curves that appear in leading cryptographic standards are defined either over prime fields or binary fields with a prime extension degree. Semaev’s seminal paper [19] proposes one factor basis for the prime case, but as mentioned above, it does not provide any corresponding algorithm to compute relations.

Binary curves may be vulnerable to index calculus algorithms for large enough parameters, according to Petit and Quisquater’s analysis and follow-up works [9, 12, 17, 21]. However, generic algorithms currently outperform these algorithms for the parameters used in practice, and the complexity estimates for larger parameters depend on the so-called first fall degree assumption. This assumption on Gröbner basis algorithms holds in some cases, including for HFE systems [11, 15], but it is also known to be false in general. The systems occurring in binary ECDLP attacks are related to HFE systems, but at the time of writing it is not clear whether, or to what extent, the assumption holds in their case. On the other hand, as the systems in play are clearly not generic, one should a priori be able to replace Gröbner basis algorithms with other, more dedicated tools.

2.4 Alternative Systems

One idea in that direction is to completely avoid the Weil descent methodology. The vector space constraints \(x_i\in V\) are equivalent to the constraints \(L(x_i)=0\) where

$$\begin{aligned} L(x):=\prod _{v\in V}(x-v). \end{aligned}$$

It is easy to prove (see [2, Chap. 11]) that L is a linearized polynomial, in other words L can be written as

$$\begin{aligned} L(x)=\sum _{j=0}^{n'}c_jx^{q^j} \end{aligned}$$

where \(c_j\in \mathbb {F}_{q^n}\). Moreover (see also [16]) L can be written as a composition of degree q maps

$$\begin{aligned} L(x)=(x^q-\alpha _{n'}x)\circ \ldots \circ (x^q-\alpha _1x) \end{aligned}$$

for well-chosen \(\alpha _i\in \mathbb {F}_{q^n}\). Abusing the notation \(x_{ij}\), the problem of finding \(x_i\in V\) with \(S_{m+1}(x_1,\ldots ,x_m,X)=0\) can now be reduced to solving either

$$\begin{aligned} {\left\{ \begin{array}{ll} S_{m+1}(x_{11},\ldots ,x_{m1},X)=0 &{}\\ x_{ij}=x_{i,j-1}^q &{}i=1,\ldots ,m;\ j=2,\ldots ,n'+1\\ \sum _{j=1}^{n'+1}c_{j-1}x_{ij}=0 &{}i=1,\ldots ,m \end{array}\right. } \end{aligned}$$

or
$$\begin{aligned} {\left\{ \begin{array}{ll} S_{m+1}(x_{11},\ldots ,x_{m1},X)=0 &{}\\ x_{ij}=x_{i,j-1}^q -\alpha _{j-1} x_{i,j-1} &{}i=1,\ldots ,m; j=2,\ldots ,n'\\ 0=x_{i,n'}^q -\alpha _{n'} x_{i,n'} &{}i=1,\ldots ,m. \end{array}\right. } \end{aligned}$$

The two systems have been suggested in [11, 15, 16]. Compared to polynomial systems arising from a Weil descent, both systems have the disadvantage of being defined over the field \(\mathbb {F}_{q^n}\), but on the other hand they are much sparser and a priori easier to study. In fact, these systems are equivalent to polynomial systems arising from a Weil descent under linear changes of equations and variables, and in the univariate case (\(m=1\)) their study has made it possible to derive bounds on the corresponding Weil descent systems [11, 15].
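The linearized structure of L is easy to verify numerically. The following Python sketch checks, in a toy field GF(2^4) with elements encoded as 4-bit integers, that \(L(x)=\prod _{v\in V}(x-v)\) is additive when V is an \(\mathbb {F}_2\)-subspace (the field, reduction polynomial and subspace are all illustrative choices):

```python
# Sanity check that L(x) = prod_{v in V}(x - v) is F_2-linear when V is an
# F_2-subspace. We work in GF(2^4) = F_2[t]/(t^4 + t + 1); addition and
# subtraction are both XOR in characteristic 2.
def gf16_mul(a, b):
    """Carry-less multiplication followed by reduction modulo t^4 + t + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:
            a ^= 0b10011          # subtract (XOR) the reduction polynomial
    return r

def L(x, V):
    """Evaluate prod_{v in V} (x - v) in GF(2^4)."""
    r = 1
    for v in V:
        r = gf16_mul(r, x ^ v)
    return r

V = [0, 1, 2, 3]                  # F_2-span of {1, t}: a dimension-2 subspace
assert all(L(v, V) == 0 for v in V)            # V is exactly the root set
assert all(L(a ^ b, V) == L(a, V) ^ L(b, V)    # additivity: L is linearized
           for a in range(16) for b in range(16))
```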

While Systems (2) and (3) can be solved with generic Gröbner basis algorithms, their simple structures might lead to better algorithms in the future. Most importantly for this article, they open the way to a generalization of previous algorithms to elliptic curves over prime fields.

3 Algebraic Attacks on Prime Curves

3.1 Main Idea

We replace the map L in Eq. (1) by another algebraic or rational map over \(\mathbb {F}_p\) which for a given m similarly satisfies the following two conditions:

  1. \(\left| \left\{ x\in \mathbb {F}_p|L(x)=0\right\} \right| \approx \left| \left\{ x\in \overline{\mathbb {F}_p}|L(x)=0\right\} \right| \approx p^{1/m}\);

  2. L can be written as a composition of low degree maps \(L_j\).

[Algorithm 1]

The resulting index calculus algorithm is summarized as Algorithm 1. For optimal efficiency, the parameter m will have to be fixed depending on the cost of the relation search. At the moment, we have not investigated the existence of any algorithm better than Gröbner basis algorithms to solve System (4). The parameter \(\varDelta \) can a priori be fixed to 10; its aim is to account for linear dependencies that may occur with low probability between the relations.

The above conditions on L ensure that: (1) most solutions of the system are defined over \(\mathbb {F}_p\); (2) heuristically, we expect the system to have a solution with constant probability; (3) all the equations in the system have low degree. Note that System (4) is very similar to System (2) and System (3). We now show how these conditions can be satisfied, first for primes p such that \(p-1\) has a large smooth factor, and then for arbitrary primes.
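Condition (2) can be illustrated by brute force on toy parameters, without any Gröbner basis computation. The sketch below borrows the subgroup choice detailed in Sect. 3.3: it takes V to be the subgroup of order 16 of \(\mathbb {F}_{1009}^*\) (so that \(\deg L\) is of the order of \(p^{1/2}\) and \(m=2\)) and measures which fraction of curve points decomposes as a sum of two factor basis points (curve and parameters are illustrative):

```python
# Brute-force toy check of condition (2) for m = 2: a noticeable constant
# fraction of points should decompose as P1 + P2 with P1, P2 in the factor
# basis F = {(x, y) in E(F_p) : x in V}.
p, A, B, d = 1009, 2, 3, 16         # d divides p - 1 = 1008

def add(P, Q):
    """Affine point addition on y^2 = x^3 + A*x + B over F_p (None = infinity)."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    lam = ((3 * x1 * x1 + A) * pow(2 * y1, -1, p) if P == Q
           else (y2 - y1) * pow(x2 - x1, -1, p)) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

V = [x for x in range(1, p) if pow(x, d, p) == 1]   # subgroup of order 16
squares = {y * y % p: y for y in range(p // 2 + 1)}
F = []
for x in V:
    rhs = (x ** 3 + A * x + B) % p
    if rhs in squares:              # x lifts to two curve points (x, +/-y)
        y = squares[rhs]
        F += [(x, y), (x, (p - y) % p)]

sums = {add(P1, P2) for P1 in F for P2 in F} - {None}
frac = len(sums) / p                # roughly |F|^2 / (2p), as in the analysis
assert len(F) >= 4 and 0.01 < frac < 0.5
```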

3.2 Partial Analysis

We consider a computation model where both arithmetic operations in \(\mathbb {F}_p\) and elliptic curve scalar multiplications have unit cost. This is of course a very rough approximation, as a scalar multiplication requires a polynomial number of field operations, but it will be sufficient for our purposes.

Let T(E, m, L) be the time needed to solve System (4) for X chosen as in the algorithm, and let P(p, m) be the precomputation time required to perform Step 2. The expected number of distinct solutions of the system in Steps 4b and 4c is about

$$\begin{aligned} \frac{(\deg L)^m}{m! \cdot p}. \end{aligned}$$

Indeed, \(|\mathcal {F}|\) is about \(\deg (L)\) and |E(K)| is about p. A given point (X, Y) is in the image of the map \(\mathcal {F}^m \rightarrow E(K)\), \((P_r)_{r=1}^m \mapsto \sum _{r=1}^m P_r\), about \(\frac{(\deg L)^m}{m! \cdot p}\) times on average. The cost of Step 5 is

$$\begin{aligned} (\deg L)^\omega \end{aligned}$$

where \(2<\omega \le 3\) depends on the algorithm used for linear algebra. The total cost of the attack is therefore

$$\begin{aligned} P(p,m) + \frac{m! \cdot p }{(\deg L)^{m-1}} T(E,m,L) + (\deg L)^\omega . \end{aligned}$$

Our algorithm will outperform generic discrete logarithm algorithms when this complexity is smaller than \(p^{1/2}\). When \((\deg L)^m \approx m! \cdot p\), this happens when T(E, m, L) can be made smaller than \(p^{1/2-1/m}\).

3.3 Attack When \(p-1\) Has a Large Smooth Factor

Let us first assume that \(p-1=r\prod _{i=1}^{n'}p_i\) where the \(p_i\) are not necessarily distinct primes, all smaller than B, and \(\prod _{i=1}^{n'}p_i\approx p^{1/m}\). We do not impose any particular condition on r. We define V as the subgroup G of order \(\prod _{i=1}^{n'}p_i\) in \(\mathbb {F}_p^*\). We then set \(L_j(x)=x^{p_j}\) for \(j=1,\ldots ,n'-1\), and \(L_{n'}(x)=1-x^{p_{n'}}\). The composition \(L:=L_{n'}\circ \cdots \circ L_1\) satisfies all the required properties.

Alternatively, we could also choose V as a coset aG of G, and adapt the maps accordingly.
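A toy instance of this construction (with illustrative parameters, far too small for cryptography): for \(p=31\) we have \(p-1=2\cdot 3\cdot 5\); taking \(p_1=2\) and \(p_2=3\) gives a subgroup of order 6 and \(L=L_2\circ L_1\) with \(L_1(x)=x^2\) and \(L_2(x)=1-x^3\):

```python
# Toy instance of the p-1 construction: p = 31, p - 1 = 2 * 3 * 5,
# subgroup order 6 = 2 * 3, L(x) = L2(L1(x)) = 1 - x^6.
p = 31
L1 = lambda x: pow(x, 2, p)
L2 = lambda x: (1 - pow(x, 3, p)) % p
roots = [x for x in range(p) if L2(L1(x)) == 0]
# the roots of L are exactly the subgroup of order 6 in F_p^*
assert roots == sorted(x for x in range(1, p) if pow(x, 6, p) == 1)
assert len(roots) == 6
```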

Due to the Pohlig-Hellman attack [18], prime fields \(\mathbb {F}_p\) such that \(p-1\) is smooth, or has a large smooth factor, have long been discarded for the discrete logarithm problem over finite fields, but to the best of our knowledge there has been no similar result, nor even a warning, for elliptic curves. In fact, NIST curves use generalized Mersenne primes and are therefore potentially more vulnerable to our approach than other curves. In particular, the prime used to define the NIST P-224 curve satisfies

$$\begin{aligned} p-1=2^{96}\cdot 3\cdot 5\cdot 17\cdot 257\cdot 641\cdot 65537\cdot 274177\cdot 6700417\cdot 67280421310721 \end{aligned}$$

hence it satisfies the prerequisites of our attack already for \(m \ge 3\) and \(B=2\).
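This factorization, and the fact that the 2-part alone already exceeds \(p^{1/3}\), are easily checked:

```python
# Check the stated factorization of p - 1 for NIST P-224, p = 2^224 - 2^96 + 1.
p = 2 ** 224 - 2 ** 96 + 1
odd_factors = [3, 5, 17, 257, 641, 65537, 274177, 6700417, 67280421310721]
prod = 2 ** 96
for q in odd_factors:
    prod *= q
assert prod == p - 1
# the 2-part alone is 2^96, about p^0.43, already larger than p^(1/3),
# so m >= 3 works even with smoothness bound B = 2
assert 2 ** 96 > p ** (1 / 3)
```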

3.4 Generalization to Arbitrary p

Let now p be an arbitrary prime number, in particular not necessarily of the previous form. Our second attack assumes the knowledge of an auxiliary elliptic curve \(E'/\mathbb {F}_p\) of order \(N=r\prod _{i=1}^{n'}p_i\), where the \(p_i\) are not necessarily distinct primes, all smaller than B, and \(\prod _{i=1}^{n'}p_i\approx p^{1/m}\). Note that the auxiliary curve is a priori unrelated to the curve specified by the elliptic curve discrete logarithm problem, except that it is defined over the same field. Let H be a subgroup of \(E'(\mathbb {F}_p)\) of cardinality \(\prod _{i=1}^{n'}p_i\). The set V will consist of the x-coordinates of all points \((x,y)\in E'(\mathbb {F}_p)\) in a coset of H. Let \(\varphi : E' \rightarrow E'/H\) be the isogeny with kernel H. This isogeny can be efficiently written as a composition

$$\begin{aligned} \varphi =\varphi _{n'}\circ \ldots \circ \varphi _1 \end{aligned}$$

where \(\deg \varphi _i=p_i\) and moreover all these isogenies can be efficiently computed using Vélu’s formulae [23]. There exist polynomials \(\xi _j,\omega _j,\psi _j\) such that

$$\begin{aligned} \varphi _j=\left( \frac{\xi _j(x)}{\psi _j^2(x)},y\frac{\omega _j(x)}{\psi _j^3(x)}\right) . \end{aligned}$$

We then choose \(L_j=\frac{\xi _j(x)}{\psi _j^2(x)}\) for \(j=1,\ldots ,n'-1\) and \(L_{n'}=\frac{\xi _{n'}(x)}{\psi _{n'}^2(x)}-\chi \), where \(\chi \) is the x-coordinate of a point in the image of \(\varphi \) which is not 2-torsion. It is easy to check that the map \(L=\circ _{j=1}^{n'}L_j\) then satisfies all properties required:

Lemma 1

In the above construction, \(\{x\in \mathbb {F}_p|L(x)=0\}\) has size \(\prod _{i=1}^{n'}p_i\).


Proof. By construction, the isogeny \(\varphi \) has a kernel of size N/r, and so does any kernel coset. We claim that all the points in a coset have distinct x-coordinates if \(\chi \) is not the x-coordinate of a point of order 2. Indeed, let \(P_1\ne P_2\) with \(\varphi (P_1)=\varphi (P_2)\). If \(P_1\) and \(P_2\) have the same x-coordinate, then \(P_2=-P_1\), hence \(\varphi (P_2)=\varphi (-P_1)=-\varphi (P_1)\). Therefore \(\varphi (P_1)=-\varphi (P_1)\) has order 2.    \(\square \)
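The isogeny machinery can be checked on the smallest possible example: for a curve of the form \(y^2=x^3+Ax\), the point (0, 0) has order 2, and Vélu's formulae specialize to the classical 2-isogeny onto \(y^2=x^3-4Ax\). The Python sketch below (with illustrative parameters) verifies that every affine point outside the kernel maps to a point on the image curve:

```python
# Toy degree-2 instance of Velu's formulae: on E: y^2 = x^3 + A*x the point
# (0, 0) has order 2, and the quotient isogeny is
# phi(x, y) = (x + A/x, y*(1 - A/x^2)), landing on E': y^2 = x^3 - 4*A*x.
p, A = 103, 5

def on_curve(x, y, a):
    return (y * y - x ** 3 - a * x) % p == 0

checked = 0
for x in range(1, p):               # x = 0 is the kernel point, excluded
    for y in range(p):
        if on_curve(x, y, A):
            xi = (x + A * pow(x, -1, p)) % p
            yi = y * (1 - A * pow(x, -2, p)) % p
            assert on_curve(xi, yi, (-4 * A) % p)   # image lies on E'
            checked += 1
assert checked > 0
```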

In Sect. 3.5 we discuss how the auxiliary curve \(E'\) can be found, first assuming that p has been fixed and cannot be changed, second assuming that we have some flexibility in choosing p as well.

3.5 Finding an Auxiliary Curve

We now consider the cost of Step 2 of our algorithm for general prime numbers. We propose two algorithms to perform this task: the first one simply picks curves at random until one with the required properties is found; the second one uses the theory of complex multiplication. As many applications use standardized curves such as NIST curves, these costs can be considered as precomputation costs in many applications. Finally, we show that they can be greatly reduced for an attacker who can choose the prime p.

Random Curve Selection. The simplest method to perform the precomputation is to pick curves over \(\mathbb {F}_p\) at random until one is found with a smooth enough order. To simplify the analysis, let us first consider a smoothness bound \(B=2\). The probability that the order of a random curve over \(\mathbb {F}_p\) can be written as \(N=2^s \cdot r\) with \(2^s\approx p^{1/m}\) is approximately \(1/2^s \approx p^{-1/m}\), hence we expect to try about \(p^{1/m}\) curves before finding a good one. Note that \(p^{1/m}\) is essentially the size of the factor basis, hence the precomputation costs will always be dominated by at least the linear algebra costs in the whole index calculus algorithm. In practice we might be able to choose B bigger than 2, and this will make the precomputation cost even smaller, as shown by Table 1.
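The following Python sketch runs this precomputation on toy sizes (p, m, and the deterministic curve walk are illustrative choices; the paper's experiments use Magma), counting points naively with Legendre symbols:

```python
# Random-curve precomputation sketch for toy sizes (B = 2, m = 3): keep trying
# curves until the order's 2-part reaches p^(1/m).
p, m = 1009, 3
target = p ** (1 / m)               # about 10, so we need a 2-part >= 16

def order(a, b):
    """|E(F_p)| for y^2 = x^3 + a*x + b, via N = p + 1 + sum_x chi(x^3+ax+b)."""
    n = p + 1
    for x in range(p):
        chi = pow(x ** 3 + a * x + b, (p - 1) // 2, p)   # Legendre symbol
        n += 1 if chi == 1 else (-1 if chi == p - 1 else 0)
    return n

trials, a = 0, 0
while True:
    a += 1
    b = a + 1                       # deterministic walk over curves, reproducible
    if (4 * a ** 3 + 27 * b ** 2) % p == 0:
        continue                    # singular curve, skip
    trials += 1
    N = order(a, b)
    if (N & -N) >= target:          # N & -N = largest power of 2 dividing N
        break
assert (N - p - 1) ** 2 <= 4 * p    # Hasse bound sanity check
```

As predicted by the analysis, the number of trials observed here is on the order of \(2^s\approx p^{1/m}\).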

Table 1. Expected number of trials before finding a good curve, such that a factor at least \(p^{1/m}\) is B-smooth. A number k in the table means that \(2^k\) trials are needed on average. The numbers provided are for \(|p|=160\) and \(|p|=256\).

Complex Multiplication. The existence of a curve with N points over \(\mathbb {F}_p\) within the Hasse-Weil bound is equivalent to the existence of an integer solution to the equation

$$\begin{aligned} (N+1-p)^2-Df^2=4N \end{aligned}$$

with \(D<0\) (see [3, Eq. 4.3]). Once this solution is known, the curve can be constructed using the complex multiplication algorithm [3, p.30], provided however that the reduced discriminant D is not too large to compute the Hilbert class polynomial \(H_D\mod p\). To the best of our knowledge, the best algorithm for this task is due to Sutherland [22] and runs in quasi-linear time in |D|. Sutherland reports computations up to \(|D|\approx 10^{13}\).

We can rewrite the above equation as \((p+1-N)^2-Df^2=4p\) and try to solve it for some small D using Cornacchia’s algorithm [4]. More precisely, we can solve the equation \(x^2-Dy^2=4p\) and check whether the solution produces a number N divisible by a large enough smooth factor. This approach is rather slow since the number of such N is relatively small. With \(B=2\), one needs to try about \(p^{1/m}\) different values of D.

Faster Precomputation for Chosen p. We now consider a different attack scenario, where p is not fixed but can be chosen by the attacker. In this setting, we first construct a number N in such a way that we know its factorization and that N has a large enough smooth factor. We then solve \(x^2-Dy^2=4N\) for some small |D| (using the factorization of N). We check whether the corresponding value of p is indeed prime, and if not we try a different small |D|. The probability that p is prime is about \(1/\log (p) \approx 1/\log (N)\). This method allows the use of much smaller |D| and will in general outperform the previous methods.
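A toy run of this method (with illustrative sizes, and a naive exhaustive search in place of Cornacchia's algorithm):

```python
# Chosen-p toy run: fix N = 2^6 * 13 with known factorization and smooth part
# 2^6, solve u^2 - D*y^2 = 4*N by brute force for small |D|, and keep the
# first discriminant giving a prime p = N + 1 - u or p = N + 1 + u.
N = 2 ** 6 * 13                     # 832

def is_prime(n):
    return n > 1 and all(n % i for i in range(2, int(n ** 0.5) + 1))

found = None
for D in (-3, -4, -7, -8, -11):     # small discriminants to try
    for y in range(1, 200):
        u2 = 4 * N + D * y * y      # u^2 = 4N + D*y^2 (D < 0, so decreasing)
        if u2 <= 0:
            break
        u = int(u2 ** 0.5)
        if u * u != u2:
            continue                # not a perfect square
        for pcand in (N + 1 - u, N + 1 + u):
            if is_prime(pcand) and found is None:
                found = (D, pcand)
        if found:
            break
    if found:
        break
assert found is not None
D, pfound = found                   # a prime p admitting a curve of order N
```

Recall from Sect. 3.5 that \((N+1-p)^2-Df^2=4N\) and \((p+1-N)^2-Df^2=4p\) are equivalent, so a solution found this way indeed indicates a curve of order N over \(\mathbb {F}_p\) within the Hasse-Weil bound.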

We remark that this approach could potentially be used to plant a sort of “back door” when choosing primes for elliptic curve cryptography standards. However, this seems unlikely for the following two reasons. First, as soon as users are aware of the potential existence of such a back door, they can easily detect it by solving the above equation for the given p and all small values of D. Second, other equally useful auxiliary curves can be constructed in a time dominated by other steps of the index calculus algorithm.

4 Gröbner Basis Experiments

In this section we describe preliminary computer experiments to evaluate the complexity of relation search in our approach, and compare it to the binary case.

4.1 Experimental Set-Up

In the binary case, we selected random curves over \(\mathbb {F}_{2^{2n_1}}\) and a fixed vector space \(V=\langle 1,x,x^2,\ldots ,x^{n_1}\rangle \), for \(1\le n_1\le 11\).

For the attack of Sect. 3.3 we chose the smallest prime p such that \(2^{n_1}\) divides \(p-1\) with \(p \ge 2^{2n_1}\), and V equal to the subgroup of order \(2^{n_1}\) in \(\mathbb {F}_p^*\).

For the attack of Sect. 3.4 we fixed \(D=7\). We selected parameters N and p such that there exists a curve of order N over \(\mathbb {F}_p\), \(2^{n_1}\) divides N, \(N\in [2^{2n_1}-2^{2n_1-2}; 2^{2n_1}+2^{2n_1-1}]\) and N is the closest to \(2^{2n_1}\) among those parameters. Using complex multiplication, we generated an elliptic curve \(E'\) over \(\mathbb {F}_p\) with N rational points, and we computed a reduced Weierstrass model for this curve. We finally chose V as the projection on the x-coordinate of a coset of a subgroup of order \(2^{n_1}\) of \(E'\), such that V had cardinality \(2^{n_1}\).

In all cases we selected a random (reduced Weierstrass model) curve over the field of consideration and a random point P on the curve. We then attempted to write \(P=P_1+P_2\) with \(P_i\) in the factor basis by reduction to polynomial systems and resolution of these systems with the Gröbner Basis routine of Magma. In the binary case we experimented on systems of the forms (2) and (3). In the other two cases we generated the systems as described in Sect. 3. We repeated all sets of experiments 100 times.

All experiments were performed on a 16-core Intel Xeon 5550 processor running at 2.67 GHz, with an 18 MB L3 cache and 24 GB of memory. The operating system was Ubuntu 12.04.5 LTS with kernel version GNU/Linux 3.5.0-17-generic x86_64. The programming platform was Magma V2.18-5 in its 64-bit version.

4.2 Experimental Results

In the tables below (Tables 2, 3, 4 and 5) nbsols is the average number of solutions of the system, Av. time is the average time in seconds and Max. mem is the maximum amount of memory used. The values \(D_{av}\) and \(D_{av}^{corr}\) are the average values of two measures of the degree of regularity from Magma’s verbose output. For \(D_{av}\) we take the largest “step degree” occurring during a Gröbner Basis computation. This corresponds to the degrees reported in [17]. For \(D_{av}^{corr}\) we correct that by removing any step in which no pair was reduced, as these steps should arguably not significantly impact the overall complexity of the algorithm. This corresponds to the degrees reported in [20] and [11].

Table 2. Binary case, SRA system
Table 3. Binary case, System (2)
Table 4. Prime case, \(p-1\) subgroups
Table 5. Prime case, isogeny kernel

Based on this (limited) set of experiments we make the following observations:

  1. The corrected version of the degree of regularity is a very stable measure: except for very small parameters, no variation was observed within any set of 100 experiments.

  2. In our experiments, systems in the form (2) require much less memory than the corresponding SRA systems.

  3. Timing comparison is less clear: while systems in the form (2) are more efficient up to \(n_1=10\), SRA systems are much better at \(n_1=11\).

  4. The degrees of regularity, time and memory requirements are similar in the subgroup and isogeny versions of our attack.

  5. The degrees of regularity, time and memory requirements seem to increase somewhat faster in the prime case than in the binary case.

According to Bardet [1, Prop 4.1.2], homogeneous semi-generic systems with n equations of degree 2 and 1 equation of degree 4 in n variables have a degree of regularity equal to \((3+n)/2\). In all our experiments we observed a much weaker dependence of the degree of regularity on n, suggesting that the systems occurring in our attacks are easier to solve than semi-generic systems with comparable parameters.

5 Conclusion, Further Work and Perspectives

Our algorithms generalize previous index calculus attacks from binary curves to prime curves, and therefore considerably increase their potential impact. All these attacks, including ours, (implicitly) reduce the relation search in index calculus algorithms to an instance of the following problem:

Problem 1

(Generalized Root-Finding Problem). Given a finite field K, given \(f\in K[X_1,\ldots ,X_m]\), and given \(L\in K(X)\), find \(X_i\in K\) such that \(f(X_1,\ldots ,X_m)=0\) and \(L(X_i)=0\) for all i.

We have suggested to focus on special polynomials L, which can be written as compositions of low degree maps, so that the generalized root-finding problem can be reduced to a polynomial system similar to “SRA systems” [16, Sect. 6], and then solved using Gröbner basis algorithms. Our computer experiments suggest that the resulting systems are a bit harder to solve than the corresponding systems in binary cases, but easier to solve than generic systems of comparable parameters.

The attacks are not practical at the moment and we do not know their asymptotic complexity. Still, we believe that they do unveil potential vulnerabilities that cryptanalysts need to study further. In particular, we showed that the standardized curve NIST P-224 satisfies the requirements of our first attack.

Following a suggestion by the PKC 2016 committee, we have also compared our approach with a variant of Semaev’s original attack that uses Gröbner basis algorithms to solve the system \(S(x_1,x_2,X)=0\), \(L(x_1)=0\), \(L(x_2)=0\) with \(L(x)=\prod _{\alpha <B}(x-\alpha )\). Intriguingly, our preliminary results show that on similar parameters these systems are easier to solve with Gröbner basis algorithms than ours. This can perhaps be explained by the much smaller number of variables, and may suggest either that our approach is unlikely to be efficient asymptotically, or that Semaev’s original attack should be revisited from an algebraic perspective.

Important open problems include providing a satisfactory theoretical explanation for our experiments, and predicting the complexity of all algorithms for large parameters.

An even more important problem is to design a dedicated algorithm for the generalized root-finding problem which does not rely on Gröbner basis algorithms at all. It is worth noticing that the Weil descent and Gröbner basis approach, when applied to classical root-finding problems (where f is univariate and \(L(x)=x^{|K|}-x\)), provides an algorithm with complexity exponential in \(O(\log n\cdot \deg f)\) under a somewhat controversial heuristic assumption, whereas the best algorithms for this problem have a provable complexity exponential in \(O(\log n+ \deg f)\). A similar improvement for the generalized version of the root-finding problem would greatly impact elliptic curve cryptography.