A Three-Level Sieve Algorithm for the Shortest Vector Problem
Abstract
In AsiaCCS 2011, Wang et al. proposed a two-level heuristic sieve algorithm for the shortest vector problem in lattices, which improves the Nguyen-Vidick sieve algorithm. Inspired by their idea, we present a three-level sieve algorithm in this paper, which is shown to have better time complexity. More precisely, the time complexity of our algorithm is \(2^{0.3778n+o(n)}\) polynomial-time operations and the corresponding space complexity is \(2^{0.2833n+o(n)}\) polynomially many bits.
Keywords
Lattice · Shortest vector problem · Sieve algorithm · Sphere covering
1 Introduction
Lattices are discrete subgroups of \(\mathbb {R}^n\) and have been widely used in cryptology. The shortest vector problem (SVP) refers to the problem of finding a shortest nonzero vector in a given lattice, and it is one of the most famous and widely studied computational problems on lattices.
It is well known that SVP is NP-hard under randomized reductions [2], so no polynomial-time exact algorithm for it is expected to exist. Up to now, only approximation algorithms, such as [7, 8, 13, 25], are efficient, and all known exact algorithms provably cost exponential time. However, almost all known approximation algorithms (such as [8, 25]) invoke an exact algorithm for SVP on low-dimensional lattices to improve the quality of their outputs. Therefore, it is important to know how fast the best exact algorithm can be. Moreover, algorithms for SVP play a very important role in cryptanalysis (see [19] for a survey). For example, nearly all knapsack-based public-key cryptosystems have been broken with lattice algorithms (see [1, 14, 27]), and many lattice-based public-key cryptosystems, including the famous NTRU [10], can be broken by solving some SVP instance. Hence, better exact algorithms for SVP also help us to understand the security of these lattice-based public-key cryptosystems and to choose more appropriate parameters for them.
The exact algorithms for SVP can be classified into two classes by now: deterministic algorithms and randomized sieve algorithms.
The first deterministic algorithm to find a shortest vector in a given lattice was proposed by Fincke, Pohst [5, 6] and Kannan [11]; it enumerates all lattice vectors shorter than a prescribed bound. If the input is an LLL-reduced basis, the running time is \(2^{O(n^2)}\) polynomial-time operations. Kannan [11] also showed that the running time can be reduced to \(2^{O(n\log n)}\) polynomial-time operations by choosing a suitable preprocessing algorithm. Schnorr and Euchner [26] presented a zigzag strategy for enumerating the lattice vectors that improves the algorithm's performance in practice. In 2010, Gama, Nguyen and Regev [9] introduced an extreme pruning technique and improved the running time both in theory and in practice. All the enumeration algorithms above require only polynomial space. Another deterministic algorithm for SVP was proposed by Micciancio and Voulgaris [15] in 2010. Different from the previous algorithms, it is based on Voronoi cell computation and is the first deterministic single exponential time exact algorithm for SVP. Its time complexity is \(2^{2n+o(n)}\) polynomial-time operations. One drawback of the algorithm is that its space requirement is not polynomial but \(2^{O(n)}\).
The randomized sieve algorithm was discovered by Ajtai, Kumar and Sivakumar (AKS) [3] in 2001. Its running time and space requirement were proven to be \(2^{O(n)}\). Regev's alternative analysis [22] showed that the hidden constant in \(O(n)\) is at most 16, and it was further decreased to 5.9 by Nguyen and Vidick [20]. Blömer and Naewe [4] generalized the results of AKS to \(l_p\) norms. Micciancio and Voulgaris [16] presented a provable sieving variant called the ListSieve algorithm, whose running time is \(2^{3.199n+o(n)}\) polynomial-time operations and whose space requirement is \(2^{1.325n+o(n)}\) polynomially many bits. Subsequently, Pujol and Stehlé [21] improved the theoretical bounds of the ListSieve algorithm to running time \(2^{2.465n+o(n)}\) and space \(2^{1.233n+o(n)}\) by introducing a birthday attack strategy. In the same work [16], Micciancio and Voulgaris also presented a heuristic variant of the ListSieve algorithm, called the GaussSieve algorithm. However, no upper bound on the running time of the GaussSieve algorithm is currently known, while its space requirement is provably bounded by \(2^{0.41n}\). In [23], Schneider analyzed the GaussSieve algorithm and showed its strengths and weaknesses. Moreover, a parallel implementation of the GaussSieve algorithm was presented by Milde and Schneider [17]. Recently, Schneider [24] presented an IdealListSieve algorithm that improves the ListSieve algorithm for the shortest vector problem in ideal lattices, with a practical speed-up linear in the degree of the field polynomial. He also proposed a variant of the heuristic GaussSieve algorithm for ideal lattices with the same speed-up.
The AKS algorithm introduces perturbations to make a rigorous analysis of its complexity possible. By getting rid of the perturbations, Nguyen and Vidick [20] proposed the first heuristic variant of the AKS algorithm, which performs better in practice and can solve SVP up to dimension 50. Its running time was proven to be \(2^{0.415n+o(n)}\) polynomial-time operations under a natural heuristic assumption on the uniform distribution of the sieved lattice vectors. By introducing a two-level technique, Wang et al. [30] gave an algorithm (WLTB) that improves the Nguyen-Vidick algorithm. Under a similar assumption on the distribution of the sieved lattice vectors, the WLTB algorithm has the best theoretical time complexity so far, namely \(2^{0.3836n+o(n)}\). Both heuristic assumptions are supported by experimental results on low-dimensional lattices.
Our Contribution. Observing that the WLTB algorithm involves a data structure similar to a skip list to reduce the time complexity, we present a three-level sieve algorithm in this paper. Estimating the complexity of the algorithm requires computing the volume of some irregular spherical cap, which is a complicated and tough task. By introducing a suitable technique, we simplify this computation and prove that the optimal time complexity is \(2^{0.3778n+o(n)}\) polynomial-time operations, with corresponding space complexity \(2^{0.2833n+o(n)}\) polynomially many bits, under a similar natural heuristic assumption.
Complexities of some heuristic algorithms for SVP

Algorithm | Time complexity | Space complexity
GaussSieve algorithm | — | \(2^{0.41n+o(n)}\)
Nguyen-Vidick algorithm | \(2^{0.415n+o(n)}\) | \(2^{0.2075n+o(n)}\)
WLTB algorithm | \(2^{0.3836n+o(n)}\) | \(2^{0.2557n+o(n)}\)
Our three-level algorithm | \(2^{0.3778n+o(n)}\) | \(2^{0.2833n+o(n)}\)
A natural question is whether we can improve the time complexity with a four-level or even higher-level algorithm. The answer may well be positive. However, our work suggests that the improvements become smaller and smaller, whereas the analysis of the complexity becomes more and more difficult, as the number of levels increases.
Road Map. The rest of the paper is organized as follows. In Sect. 2 we provide some notations and preliminaries. We present our three-level sieve algorithm and a detailed analysis of its complexity in Sect. 3. Some experimental results are described in Sect. 4. Finally, Sect. 5 gives a short conclusion.
2 Notations and Preliminaries
Notations. Bold lowercase letters are used to denote vectors in \(\mathbb {R}^n\). Denote by \(v_i\) the \(i\)th entry of a vector \(\varvec{v}\). Let \(\Vert \cdot \Vert \) and \(\langle \cdot ,\cdot \rangle \) be the Euclidean norm and inner product of \(\mathbb {R}^n\). Matrices are written as bold capital letters and the \(i\)th column vector of a matrix \(\varvec{B}\) is denoted by \(\varvec{b}_i\).
Let \(B_n(\varvec{x},R)=\{\varvec{y}\in \mathbb {R}^n:\Vert \varvec{y}-\varvec{x}\Vert \le R\}\) be the \(n\)-dimensional ball centered at \(\varvec{x}\) with radius \(R\). Let \(B_n(R)=B_n(\mathbf {O},R)\). Let \(C_n(\gamma , R)=\{\varvec{x}\in \mathbb {R}^n:\gamma R\le \Vert \varvec{x}\Vert \le R\}\) be a spherical shell in \(B_n(R)\), and \(S^n=\{\varvec{x}\in \mathbb {R}^n:\Vert \varvec{x}\Vert =1\}\) be the unit sphere in \(\mathbb {R}^n\). Denote by \(|S^n|\) the area of \(S^n\).
2.1 Lattices
Let \(\varvec{B}=\{{\varvec{b}_1,\varvec{b}_2,\ldots ,\varvec{b}_n}\}\subset \mathbb {R}^{m}\) be a set of \(n\) linearly independent vectors. The lattice \(\mathcal {L}\) generated by the basis \(\varvec{B}\) is defined as \( \mathcal {L}(\varvec{B})=\left\{ \sum _{i=1}^{n}x_i\varvec{b}_i:x_i\in \mathbb {Z}\right\},\) and \(n\) is called the rank of the lattice. Denote by \(\lambda _1(\mathcal {L})\) the norm of a shortest nonzero vector of \(\mathcal {L}\).
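To make the definition concrete, the following Python sketch finds a shortest nonzero vector of a toy two-dimensional lattice by brute-force enumeration of small coefficient vectors (the function name and the search bound are ours, and the bound-based search is only illustrative for tiny dimensions, which is exactly why the sieve algorithms below are needed):

```python
import itertools
import numpy as np

def shortest_vector_bruteforce(B, bound=3):
    """Search x in {-bound, ..., bound}^n for the shortest nonzero
    lattice vector sum_i x_i * b_i, where the b_i are the columns of B.
    Only feasible for toy dimensions: the search space grows as (2*bound+1)^n."""
    n = B.shape[1]
    best, best_norm = None, float("inf")
    for x in itertools.product(range(-bound, bound + 1), repeat=n):
        if all(c == 0 for c in x):
            continue  # skip the zero vector: lambda_1 is over nonzero vectors
        v = B @ np.array(x)
        nv = np.linalg.norm(v)
        if nv < best_norm:
            best, best_norm = v, nv
    return best, best_norm

# Example basis: columns (2, 0) and (1, 2); here lambda_1 = 2 (vector (+-2, 0)).
B = np.array([[2, 1],
              [0, 2]])
v, lam1 = shortest_vector_bruteforce(B)
```

Note that for a generic basis a small coefficient bound does not guarantee finding a shortest vector; the bound used here is sufficient only for this toy example.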
2.2 The Basic Framework of Some Heuristic Sieve Algorithms
In general, the sieve algorithm invoked in Line 8 of Algorithm 1 outputs a set \(S'\) of lattice vectors shorter than those in \(S\). When we repeat the sieve process enough times, a shortest vector is expected to be found.
Denote by \(R'\) (resp. \(R\)) the maximum length of those vectors in \(S'\) (resp. \(S\)). To find \(S'\), the sieve algorithm usually tries to find a set \(C\) of lattice vectors in \(S\) such that the balls centered at these vectors with radius \(R'\) can cover all the lattice points in some spherical shell \(C_n(\gamma , R)\). By subtracting the corresponding center from every lattice point in every ball, shorter lattice vectors will be obtained, which form the set \(S'\).
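The reduction step described above can be sketched in a few lines of Python (a simplified sketch assuming \(R'=\gamma R\), as in the Nguyen-Vidick setting; `sieve_step` is our name, and real implementations also control how the centers are chosen):

```python
import numpy as np

def sieve_step(vectors, gamma):
    """One pass of the basic heuristic sieve: every vector of norm at most
    gamma*R is kept; every longer vector v (it lies in the shell C_n(gamma, R))
    is either reduced by an existing center within distance gamma*R of it,
    or becomes a new center in C.  All returned vectors have norm <= gamma*R."""
    R = max(np.linalg.norm(v) for v in vectors)
    centers, shorter = [], []
    for v in vectors:
        if np.linalg.norm(v) <= gamma * R:
            shorter.append(v)          # already inside B_n(gamma * R)
            continue
        for c in centers:
            if np.linalg.norm(v - c) <= gamma * R:
                shorter.append(v - c)  # subtract the center: a shorter lattice vector
                break
        else:
            centers.append(v)          # covered by no existing ball: new center
    return shorter
```

The centers themselves are discarded at each pass, which is why enough vectors must be sampled at the start to survive polynomially many iterations.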

The Nguyen-Vidick algorithm checks every lattice vector sequentially to decide whether it lies in some existing ball or becomes a new center in \(C\) (see Fig. 1 for a geometric description).

The WLTB algorithm involves a two-level strategy, that is, a big-ball level and a small-ball level. It first covers the spherical shell with big balls centered at some lattice vectors, and then covers the intersection of every big ball with \(C_n(\gamma , R)\) by small balls centered at some lattice points in the intersection. The centers of the small balls form \(C\). Deciding whether a lattice vector is in \(C\) then becomes faster: we first check whether it is in some big ball. If not, it must be a new point of \(C\). If so, we only check whether it is in some small ball of the big ball it belongs to, regardless of the small balls of the other big balls (see Fig. 2 for a geometric description).
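A minimal Python sketch of this two-level membership test (the names are ours; `r_big` and `r_small` play the roles of the big-ball and small-ball radii):

```python
import numpy as np

def classify(v, big_balls, r_big, r_small):
    """Two-level membership test (sketch of the WLTB strategy): big_balls
    maps each big-ball center (a tuple) to the small-ball centers inside it.
    A point lying in some big ball is compared only against that ball's own
    small balls, never against the small balls of the other big balls."""
    v = np.asarray(v, dtype=float)
    for c1, smalls in big_balls.items():
        if np.linalg.norm(v - np.array(c1)) <= r_big:
            for c2 in smalls:
                if np.linalg.norm(v - np.array(c2)) <= r_small:
                    return ("covered", c2)   # v can be reduced by c2
            return ("new_small", c1)         # new small-ball center inside c1
    return ("new_big", None)                 # new big-ball (and small-ball) center
```

The point of the hierarchy is that a vector inside a big ball is compared against only that ball's small balls, so the number of comparisons drops from the total size of \(C\) to roughly the number of big balls plus the number of small balls per big ball.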
For both the Nguyen-Vidick algorithm and the WLTB algorithm, the analysis of the complexity relies on the natural assumption below.
Heuristic Assumption 1: At any stage in Algorithm 1, the lattice vectors in \(S'\cap C_n(\gamma , R)\) are uniformly distributed in \(C_n(\gamma , R)\).
3 A Three-Level Sieve Algorithm
3.1 Description of the Three-Level Sieve Algorithm
3.2 Complexity of the Algorithm
Denote by \(N_1\), \(N_2\) and \(N_3\) the corresponding upper bounds on the expected numbers of lattice points in \(C_1\), \(C_2^{\varvec{c}_1}\) (for any \(\varvec{c}_1\in C_1\)) and \(C_3^{\varvec{c}_1,\varvec{c}_2}\) (for any \(\varvec{c}_1\in C_1,\varvec{c}_2\in C_2^{\varvec{c}_1}\)), respectively.
The Space Complexity. Notice that the total numbers of big, medium and small balls can be bounded by \(N_1\), \(N_1N_2\) and \(N_1N_2N_3\), respectively. As in [20] and [30], if we sample \(\mathrm {poly}(n)N_1N_2N_3\) vectors, then after a polynomial number of iterations of Algorithm 1, a shortest nonzero lattice vector is expected to be found among the remaining vectors. So the space complexity is bounded by \(O(N_1N_2N_3)\).
The Time Complexity. The initial size of \(S\) is \(\mathrm {poly}(n)N_1N_2N_3\). In each iteration of Algorithm 1, steps 3–19 of Algorithm 2 are repeated \(\mathrm {poly}(n)N_1N_2N_3\) times, and each repetition requires at most \(N_1+N_2+N_3\) comparisons. Therefore, the total time complexity can be bounded by \(O(N_1N_2N_3(N_1+N_2+N_3))\) polynomial-time operations.
We next estimate \(N_1\), \(N_2\) and \(N_3\). Without loss of generality, we set \(R=1\) and write \(C_n(\gamma )=C_n(\gamma ,1)=\{\varvec{x}\in \mathbb {R}^n:\gamma \le \Vert \varvec{x}\Vert \le 1\}\) throughout our proofs for simplicity.
The Upper Bound on \(N_1.\) Nguyen and Vidick [20] first gave a proof of the upper bound on \(N_1\), and a more refined proof was given by Wang et al. [30].
Theorem 1

Let
\(\varOmega _n(\gamma _1)\) be the fraction of \(C_n(\gamma _3)\) that is covered by a ball of radius \(\gamma _1\) centered at a point of \(C_n(\gamma _3)\),
\(\varGamma _n(\gamma _1,\gamma _2)\) be the fraction of \(C_n(\gamma _3)\) covered by a big spherical cap \(B_n(\varvec{c}_2,\gamma _2)\cap B_n(\varvec{c}_1,\gamma _1)\cap C_n(\gamma _3)\), and
\(\varOmega _n(\gamma _1,\gamma _2)\) be the fraction of \(B_n(\varvec{c}_1,\gamma _1)\cap C_n(\gamma _3)\) covered by \(B_n(\varvec{c}_2,\gamma _2)\cap B_n(\varvec{c}_1,\gamma _1)\cap C_n(\gamma _3)\), where \(\varvec{c}_2\in C_2^{\varvec{c}_1}\), \(\varvec{c}_1\in C_1\).
Lemma 1
Note that the proportion \(\varGamma _n(\gamma _1,\gamma _2)\) is different from that of Lemma 4 in [30], as the radius of \(B_n(\varvec{c}_2,\gamma _2)\) is larger than the inner radius of the shell \(C_n(\gamma _3)\). This leads to bounds on \(\varGamma _n(\gamma _1,\gamma _2)\) slightly different from those of Lemma 4 in [30]. The fraction \(\varGamma _n(\gamma _1,\gamma _2)\) is minimal when \(\varvec{c}_2\) lies on the boundary sphere of the big ball \(B_n(\varvec{c}_1,\gamma _1)\), and Lemma 2 gives the minimal and maximal values of \(\varGamma _n(\gamma _1,\gamma _2)\) for this case.
Lemma 2
Proof
Note that \(\gamma _3\) is very close to \(1\), so we only consider the proportion with respect to the sphere covering, as in [30].
Theorem 2
Proof

Let
\(\varGamma _n(\gamma _1,\gamma _2,\gamma _3)\) be the fraction of \(C_n(\gamma _3)\) that is covered by a small spherical cap \(B_n(\varvec{c}_3,\gamma _3)\cap B_n(\varvec{c}_2,\gamma _2)\cap B_n(\varvec{c}_1,\gamma _1)\cap C_n(\gamma _3)\), and
\(\varOmega _n(\gamma _1,\gamma _2,\gamma _3)\) be the fraction of \(B_n(\varvec{c}_2,\gamma _2)\cap B_n(\varvec{c}_1,\gamma _1)\cap C_n(\gamma _3)\) covered by \(B_n(\varvec{c}_3,\gamma _3)\cap B_n(\varvec{c}_2,\gamma _2)\cap B_n(\varvec{c}_1,\gamma _1)\cap C_n(\gamma _3)\), where \(\varvec{c}_3\in C_3^{\varvec{c}_1,\varvec{c}_2}\), \(\varvec{c}_2\in C_2^{\varvec{c}_1}\), \(\varvec{c}_1\in C_1.\)
Lemma 3
Proof
The region of \((x_2,x_3)\) for \(D\) is the shaded area of Fig. 4, and that of \((x_4,\ldots ,x_n)\) is an \((n-3)\)-dimensional ball with radius \(r=\sqrt{1-a^2-\left( \frac{b-a\beta _1}{\beta _2}\right) ^2-\left( \frac{f-\delta _1a-\delta _2\frac{b-a\beta _1}{\beta _2}}{\delta _3}\right) ^2}.\)
Theorem 3
Proof

\(N_1=\left(\frac{1}{\gamma _1\sqrt{1-\gamma _1^2/4}}\right)^n\lceil 3\sqrt{2\pi }n^{3/2}\rceil \) does not depend on \(\gamma _3\).

\(N_{2}=c_2\left(\frac{c_{\mathcal {H}_2}}{d_{\min }}\right)^n\lceil n^{\frac{3}{2}}\rceil \). Only \(c_{\mathcal {H}_2}=\frac{\gamma _{1}}{\gamma _3}\sqrt{1-\frac{\gamma _{1}^2}{4\gamma _3^2}}=\sqrt{1-\left(1-\frac{\gamma _1^2}{2\gamma _3^2}\right)^2}\) depends on \(\gamma _3\), and it is easy to see that \(c_{\mathcal {H}_2}\) decreases with respect to \(\gamma _3\), which implies that \(N_2\) is a monotonically decreasing function of \(\gamma _3\).

\(N_{3}=c_3n^{3}\left( \frac{\sqrt{1-\left( \frac{\gamma _3^2-\gamma _1^2+1}{2\gamma _3}\right) ^2-\left( \frac{1}{c_{\mathcal {H}_2}}\left( \frac{\gamma _3^2+1-\gamma _2^2}{2}-\frac{2\gamma _3^2-\gamma _1^2}{2\gamma _3}\cdot \frac{\gamma _3^2-\gamma _1^2+1}{2\gamma _3}\right) \right) ^2}}{\sqrt{c_{\mathcal {H}_3}-\left( 1-\frac{\gamma _3^2}{2c_{\mathcal {H}_3}}\right) ^2}}\right) ^n\). First, the denominator of \(N_3\) increases with \(\gamma _3\), since \(c_{\mathcal {H}_3}\) does not depend on \(\gamma _3\). By \(\gamma _1>1\), we have \(\left( \frac{\gamma _3^2-\gamma _1^2+1}{2\gamma _3}\right) ^{'}=\frac{\gamma _3^2+\gamma _1^2-1}{2\gamma _3^2}>0\), and \(\left( \frac{\gamma _3^2+1-\gamma _2^2}{2}-\frac{2\gamma _3^2-\gamma _1^2}{2\gamma _3}\cdot \frac{\gamma _3^2-\gamma _1^2+1}{2\gamma _3}\right) ^{'} =\gamma _3-\frac{2\gamma _3^2+\gamma _1^2}{2\gamma _3^2}\cdot \frac{\gamma _3^2-\gamma _1^2+1}{2\gamma _3}-\frac{2\gamma _3^2-\gamma _1^2}{2\gamma _3}\cdot \frac{\gamma _3^2+\gamma _1^2-1}{2\gamma _3^2} =\frac{\gamma _1^2(\gamma _1^2-1)}{2\gamma _3^3}>0\). Since \(\frac{1}{c_{\mathcal {H}_2}}\) also increases with \(\gamma _3\), the numerator of \(N_3\) decreases with \(\gamma _3\). Thus, \(N_3\) decreases with respect to \(\gamma _3\).
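The monotonicity claims above are easy to spot-check numerically. The following Python snippet (a sanity check, not part of the proof) verifies both the closed-form identity for \(c_{\mathcal{H}_2}\) given in the \(N_2\) item and the fact that it decreases in \(\gamma_3\), here for \(\gamma_1=1.1399\):

```python
import math

def c_H2(g1, g3):
    """c_{H_2} = (g1/g3) * sqrt(1 - g1^2/(4 g3^2)), from the N_2 item above."""
    return (g1 / g3) * math.sqrt(1 - g1**2 / (4 * g3**2))

g1 = 1.1399
for g3 in (1.0, 1.05, 1.1):
    # identity: c_{H_2} = sqrt(1 - (1 - g1^2/(2 g3^2))^2)
    assert abs(c_H2(g1, g3) - math.sqrt(1 - (1 - g1**2 / (2 * g3**2))**2)) < 1e-12
# monotonicity: c_{H_2} decreases as g3 grows (hence so does N_2)
vals = [c_H2(g1, g3) for g3 in (1.0, 1.01, 1.05, 1.1)]
assert all(a > b for a, b in zip(vals, vals[1:]))
```

The identity follows by squaring: \(1-(1-\frac{\gamma_1^2}{2\gamma_3^2})^2=\frac{\gamma_1^2}{\gamma_3^2}-\frac{\gamma_1^4}{4\gamma_3^4}\).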
Since the expression of the time complexity is complicated, we find the optimum numerically. Take \(\gamma _3=1\). Let \(\gamma _1\) range from 1 to 1.414 in steps of 0.0001 and, for each fixed \(\gamma _1\), let \(\gamma _2\) range from 1 to \(\gamma _1\) in steps of 0.0001; then we can easily find the minimal value of the exponential constant of the running time. Thus, we obtain the numerically optimal time complexity of our three-level sieve algorithm.
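The grid search itself is straightforward. Since the full three-level expressions are lengthy, the sketch below applies the same pattern to the one-level case, whose single base \(\frac{1}{\gamma_1\sqrt{1-\gamma_1^2/4}}\) is exactly the \(N_1\) bound above; with time \(\sim N_1^2\) it recovers the Nguyen-Vidick constant \(0.415\) (the step size \(10^{-4}\) matches the search described above, and the function name is ours):

```python
import math

def n1_exponent(g):
    """log2 of the per-dimension base of N_1 = (1/(g*sqrt(1 - g^2/4)))^n."""
    return -math.log2(g * math.sqrt(1 - g**2 / 4))

# One-level sieve: time ~ N_1^2, so minimize 2 * n1_exponent(g) over the grid.
best_g, best_e = None, float("inf")
for i in range(5001):
    g = 0.5 + i * 1e-4          # scan g over [0.5, 1] in steps of 0.0001
    e = 2 * n1_exponent(g)
    if e < best_e:
        best_g, best_e = g, e
# The minimum sits at g -> 1, where the exponent is log2(4/3), about 0.415.
```

For the three-level bound one would replace `n1_exponent` by the exponent of \(N_1N_2N_3(N_1+N_2+N_3)\) and loop over \((\gamma_1,\gamma_2)\) as described above.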
Theorem 4
The optimal time complexity of the algorithm is \(2^{0.3778n+o(n)}\) polynomialtime operations with \(\gamma _3\rightarrow 1,\gamma _1=1.1399,\gamma _2=1.0677\), and the corresponding space complexity is \(2^{0.2833n+o(n)}\) polynomially many bits under Heuristic Assumption 1.
Remark 1
As in [20], the number of iterations is usually linear in the dimension of the lattice. Ignoring the number of iterations, the polynomial factors hidden in the time complexities of the Nguyen-Vidick and WLTB algorithms are \(n^{3}\) and \(n^{4.5}\), respectively. In our three-level sieve algorithm, the polynomial parts of \(N_1\), \(N_2\) and \(N_3\) given by Theorems 1, 2 and 3 are \(n^{3/2}\), \(n^{3/2}\) and \(n^{3}\), respectively. So the hidden polynomial factor in our algorithm is \(n^{9}\), again ignoring the number of iterations.
Remark 2
It is natural to extend the three-level sieve algorithm to more levels, such as a four-level algorithm. However, the number of small balls increases as the number of levels increases. Therefore, we conjecture that the time complexity may decrease for a small number of levels, but will increase once the number of levels exceeds some positive integer.
4 Experimental Results
4.1 Comparison with the Other Heuristic Sieve Algorithms
We implemented the NV algorithm, the WLTB algorithm and our threelevel sieve algorithm on a PC with Windows 7 system, 3.00 GHz Intel 4 processor and 2 GByte RAM using Shoup’s NTL library version 5.4.1 [28]. Instead of implementing the GaussSieve algorithm, we directly applied the GaussSieve Alpha V.01 published by Voulgaris [29] on a PC with Fedora 15 system, 3.00 GHz Intel 4 processor and 2 GByte RAM.
We performed experiments to compare our three-level sieve algorithm with the other three algorithms. For every dimension \(n\), we first used the method in [18] to pick a random \(n\)-dimensional lattice and computed an LLL-reduced basis, then sampled the same number of lattice vectors and ran the NV algorithm with \(\gamma =0.97\), the WLTB algorithm with \(\gamma _1=1.0927,\gamma _2=0.97\), and our three-level sieve algorithm with \(\gamma _1=1.1399,\gamma _2=1.0667,\gamma _3=0.97\) on these samples. We performed one experiment with more than 100000 samples on lattices of dimensions 10 and 20, but about fifty experiments with fewer samples, and two experiments on dimensions 25, 30, 40 and 50. Instead of using our samples, we ran the GaussSieve Alpha V.01 directly with the selected lattices as its inputs. The experimental results of the four algorithms are shown in Table 2, where \(\varvec{v}\) is the output vector of the corresponding algorithm.
Experimental results.

Dimension | 10 | 20 | 25 | 30 | 40 | 50 | 60
Number of samples | 150000 | 100000 | 8000 | 5000 | 5000 | 3000 | 2000
Time of sampling (s) | 301 | 810 | 87833 | 73375 | 147445 | 120607 | 167916
Time (s), NV algorithm | 25005 | 64351 | 120 | 220 | 625 | 254 | 187
Time (s), WLTB algorithm | 23760 | 18034 | 35 | 42 | 93 | 46 | 47
Time (s), our algorithm | 20942 | 13947 | 27 | 27 | 57 | 29 | 30
Time (s), GaussSieve algorithm | 0.003 | 0.013 | 0.068 | 0.098 | 0.421 | 3.181 | 42.696
\(\frac{\Vert \varvec{v}\Vert }{\lambda _1}\), NV algorithm | 1 | 1 | 23.8 | 38.3 | 170.1 | 323 | 347.7
\(\frac{\Vert \varvec{v}\Vert }{\lambda _1}\), WLTB algorithm | 1 | 1 | 25.9 | 35.1 | 170.1 | 323 | 347.7
\(\frac{\Vert \varvec{v}\Vert }{\lambda _1}\), our three-level algorithm | 1 | 1 | 21.2 | 38.3 | 170.1 | 323 | 347.7
\(\frac{\Vert \varvec{v}\Vert }{\lambda _1}\), GaussSieve algorithm | 1 | 1 | 1 | 1 | 1 | 1 | 1
Compared with the NV and WLTB algorithms, our algorithm may seem slower for low-dimensional lattices due to the larger hidden polynomial factor. However, on one hand, the number of sieved vectors in each iteration of our algorithm decreases faster because the number of small balls is larger, which implies that the number of iterations is smaller and that fewer vectors have to be sieved in the next iteration. On the other hand, the time complexity is a worst-case bound. In practice, we need not check all the big balls, medium balls and small balls to decide which small ball a sieved vector belongs to. Thus, with the same number of samples in our experiments, our algorithm runs faster than the NV and WLTB algorithms. Since the sampling procedure is very fast when the dimension \(n\) is not greater than twenty, we can sample enough lattice vectors to ensure that the three algorithms find a shortest nonzero lattice vector. In this case, the time of sieving overwhelms the time of sampling, so our algorithm usually costs the least total time.
4.2 On Heuristic Assumption 1
5 Conclusion
In this paper, we propose a three-level heuristic sieve algorithm to solve SVP and prove that its optimal running time is \(2^{0.3778n+o(n)}\) polynomial-time operations and its space requirement is \(2^{0.2833n+o(n)}\) polynomially many bits under Heuristic Assumption 1.
Acknowledgement
We would like to thank Michael Schneider very much for his valuable suggestions on how to improve this paper. We also thank the anonymous referees for their helpful comments. We are grateful to Panagiotis Voulgaris for publishing his implementation of the GaussSieve algorithm. Pan would like to thank Hai Long for his help with the programming.
References
1. Adleman, L.M.: On breaking generalized knapsack public key cryptosystems. In: Proceedings of the 15th Annual ACM Symposium on Theory of Computing, pp. 402–412. ACM (1983)
2. Ajtai, M.: The shortest vector problem in \(l_2\) is NP-hard for randomized reductions. In: Proceedings of the 30th STOC. ACM (1998)
3. Ajtai, M., Kumar, R., Sivakumar, D.: A sieve algorithm for the shortest lattice vector problem. In: Proceedings of the 33rd STOC, pp. 601–610. ACM (2001)
4. Blömer, J., Naewe, S.: Sampling methods for shortest vectors, closest vectors and successive minima. Theor. Comput. Sci. 410(18), 1648–1665 (2009)
5. Fincke, U., Pohst, M.: A procedure for determining algebraic integers of given norm. In: van Hulzen, J.A. (ed.) EUROCAL 1983. LNCS, vol. 162, pp. 194–202. Springer, Heidelberg (1983)
6. Fincke, U., Pohst, M.: Improved methods for calculating vectors of short length in a lattice, including a complexity analysis. Math. Comp. 44(170), 463–471 (1985)
7. Gama, N., Howgrave-Graham, N., Koy, H., Nguyên, P.Q.: Rankin's constant and blockwise lattice reduction. In: Dwork, C. (ed.) CRYPTO 2006. LNCS, vol. 4117, pp. 112–130. Springer, Heidelberg (2006)
8. Gama, N., Nguyen, P.Q.: Finding short lattice vectors within Mordell's inequality. In: Proceedings of the 40th ACM Symposium on Theory of Computing (STOC '08). ACM (2008)
9. Gama, N., Nguyen, P.Q., Regev, O.: Lattice enumeration using extreme pruning. In: Gilbert, H. (ed.) EUROCRYPT 2010. LNCS, vol. 6110, pp. 257–278. Springer, Heidelberg (2010)
10. Hoffstein, J., Pipher, J., Silverman, J.H.: NTRU: a ring-based public key cryptosystem. In: Buhler, J.P. (ed.) ANTS 1998. LNCS, vol. 1423, pp. 267–288. Springer, Heidelberg (1998)
11. Kannan, R.: Improved algorithms for integer programming and related lattice problems. In: Proceedings of the 15th STOC, pp. 193–206. ACM (1983)
12. Klein, P.N.: Finding the closest lattice vector when it's unusually close. In: Proceedings of SODA, pp. 937–941. ACM (2000)
13. Lenstra, A.K., Lenstra Jr., H.W., Lovász, L.: Factoring polynomials with rational coefficients. Math. Ann. 261, 513–534 (1982)
14. Lagarias, J.C., Odlyzko, A.M.: Solving low-density subset sum problems. J. ACM 32(1), 229–246 (1985)
15. Micciancio, D., Voulgaris, P.: A deterministic single exponential time algorithm for most lattice problems based on Voronoi cell computations. In: Proceedings of STOC, pp. 351–358. ACM (2010)
16. Micciancio, D., Voulgaris, P.: Faster exponential time algorithms for the shortest vector problem. In: Proceedings of the 21st Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1468–1480. SIAM (2010)
17. Milde, B., Schneider, M.: A parallel implementation of GaussSieve for the shortest vector problem in lattices. In: Malyshkin, V. (ed.) PaCT 2011. LNCS, vol. 6873, pp. 452–458. Springer, Heidelberg (2011)
18. Nguyên, P.Q., Stehlé, D.: LLL on the average. In: Hess, F., Pauli, S., Pohst, M. (eds.) ANTS 2006. LNCS, vol. 4076, pp. 238–256. Springer, Heidelberg (2006)
19. Nguyên, P.Q., Stern, J.: The two faces of lattices in cryptology. In: Silverman, J.H. (ed.) CaLC 2001. LNCS, vol. 2146, pp. 146–180. Springer, Heidelberg (2001)
20. Nguyen, P.Q., Vidick, T.: Sieve algorithms for the shortest vector problem are practical. J. Math. Cryptology 2(2), 181–207 (2008)
21. Pujol, X., Stehlé, D.: Solving the shortest lattice vector problem in time \(2^{2.465n}\). Cryptology ePrint Archive, Report 2009/605 (2009)
22. Regev, O.: Lecture notes on lattices in computer science. http://www.cs.tau.ac.il/odedr/teaching/latticesfall2004/index.html (2004)
23. Schneider, M.: Analysis of Gauss-Sieve for solving the shortest vector problem in lattices. In: Katoh, N., Kumar, A. (eds.) WALCOM 2011. LNCS, vol. 6552, pp. 89–97. Springer, Heidelberg (2011)
24. Schneider, M.: Sieving for shortest vectors in ideal lattices. In: Youssef, A., Nitaj, A., Hassanien, A.E. (eds.) AFRICACRYPT 2013. LNCS, vol. 7918, pp. 375–391. Springer, Heidelberg (2013)
25. Schnorr, C.P.: A hierarchy of polynomial time lattice basis reduction algorithms. Theoret. Comput. Sci. 53, 201–224 (1987)
26. Schnorr, C.P., Euchner, M.: Lattice basis reduction: improved practical algorithms and solving subset sum problems. Math. Program. 66, 181–199 (1994)
27. Shamir, A.: A polynomial time algorithm for breaking the basic Merkle-Hellman cryptosystem. In: Proceedings of the 23rd IEEE Symposium on Foundations of Computer Science, pp. 145–152. IEEE (1982)
28. Shoup, V.: NTL: a library for doing number theory. http://www.shoup.net/ntl/
29. Voulgaris, P.: Gauss Sieve alpha V. 0.1 (2010). http://cseweb.ucsd.edu/pvoulgar/impl.html
30. Wang, X., Liu, M., Tian, C., Bi, J.: Improved Nguyen-Vidick heuristic sieve algorithm for shortest vector problem. In: Proceedings of the 6th ACM Symposium on Information, Computer and Communications Security, pp. 1–9. ACM (2011)