Abstract
In this paper, we investigate a variant of the BKZ algorithm, called progressive BKZ, which performs BKZ reductions by starting with a small blocksize and gradually switching to larger blocks as the process continues. We discuss techniques to accelerate the progressive BKZ algorithm by optimizing the following parameters: blocksize, searching radius and probability for pruning of the local enumeration algorithm, and the constant in the geometric series assumption (GSA). We then propose a simulator for predicting the lengths of the Gram-Schmidt basis obtained from the BKZ reduction. We also present a model for estimating the computational cost of the proposed progressive BKZ by considering an efficient implementation of the local enumeration algorithm and the LLL algorithm. Finally, we compare the cost of the proposed progressive BKZ with that of other algorithms using instances from the Darmstadt SVP Challenge. The proposed algorithm is approximately 50 times faster than BKZ 2.0 (proposed by Chen-Nguyen) for solving the SVP Challenge up to 160 dimensions.
Y. Aono—This work was supported by JSPS KAKENHI Grant Number 26730069.
1 Introduction
Lattices in cryptography have been actively used as the foundation for constructing efficient or highly functional cryptosystems such as public-key encryption [17, 26, 41], fully homomorphic encryption [10, 22], and multilinear maps [21]. The security of lattice-based cryptography is based on the hardness of solving the (approximate) shortest vector problem (SVP) in the underlying lattice [15, 32, 35, 36]. In order to put lattice-based cryptography into practical use, we must precisely estimate the secure parameters, in theory and in practice, by analyzing the previously known efficient algorithms for solving the SVP.
Currently, the most efficient algorithms for solving the SVP are arguably the series of BKZ algorithms [13, 14, 46, 47]. Numerous efforts have been made to estimate the security of lattice-based cryptography by analyzing the BKZ algorithms. Lindner and Peikert [32] gave an estimation of secure key sizes by connecting the computational cost of the BKZ algorithm with the root Hermite factor, based on their experiments using the BKZ implementation in NTL [49]. Furthermore, van de Pol and Smart [51] estimated the key sizes of fully homomorphic encryption schemes using a simulator based on Chen-Nguyen's BKZ 2.0 [13]. Lepoint and Naehrig [31] gave a more precise estimation using the parameters of the full version of the BKZ 2.0 paper [14]. On the other hand, Liu and Nguyen [33] estimated the secure key sizes of some LWE-based cryptosystems by considering the BDD in the associated q-ary lattice. Aono et al. [7] gave another security estimation for LWE-based cryptosystems by considering the challenge data from the Darmstadt Lattice Challenge [50]. Recently, Albrecht et al. presented a comprehensive survey on the state of the art of hardness estimation for the LWE problem [5].
The above algorithms are used in attacks usually called "lattice-based attacks", which share a generic framework consisting of two parts:
(1) Lattice reduction: This step aims to decrease the norm of vectors in the basis by performing a lattice reduction algorithm such as the LLL or BKZ algorithm.
(2) Point search: This step finds a short vector in the lattice with the reduced basis by performing the enumeration algorithm.
In order to obtain concrete and practical security parameters for latticebased cryptosystems, it is necessary to investigate the tradeoffs between the computational cost of a lattice reduction and that of a lattice point search.
For our total cost estimation, we further limit the lattice-based attack model by (1) using our improved progressive BKZ algorithm for lattice reduction, and (2) using the standard (sometimes randomized) lattice vector enumeration algorithm with sound pruning [20]. To predict the computational cost under this model, we propose a simulation method that generates the computing time of lattice reduction and the lengths of the Gram-Schmidt vectors of the resulting basis.
BKZ Algorithms: Let \(B = (\mathbf{b}_1,\ldots ,\mathbf{b}_n)\) be a basis of the lattice. The BKZ algorithms perform the following local point search and update process from index \(i=1\) to \(n-1\). The local point search algorithm, which is essentially the same as the algorithm used in the second part of the lattice-based attacks, finds a short vector in the local block \(B_i = \pi _i (\mathbf{b}_i,\ldots ,\mathbf{b}_{i+\beta -1})\) of the fixed blocksize \(\beta \) (the blocksize shrinks to \(n-i+1\) for large \(i \ge n-\beta +1\)). Here, the lengths of vectors are measured under the projection \(\pi _i\), which is defined in Sect. 2.1. Then, the update process applies lattice reduction to the degenerated basis \((\mathbf{b}_1,\ldots ,\mathbf{b}_{i-1},\mathbf{v},\mathbf{b}_i,\ldots ,\mathbf{b}_n)\) obtained by inserting the vector \(\mathbf{v}\) at the i-th index.
The point search subroutine finds a short vector within some search radius \(\alpha \cdot \mathrm {GH}(B_i)\), with some probability defined over random local blocks of the fixed dimension. Here, \(\mathrm {GH}(B_i)\) is an approximation of the length of the shortest vector in the sublattice generated by \(B_i\).
In the classical BKZ algorithms [46, 47], the local point search performs a single execution of a lattice vector enumeration algorithm with reasonable pruning of the search tree. The BKZ 2.0 algorithm proposed by Chen and Nguyen [13] uses the extreme pruning technique [20], which performs the lattice enumeration with success probability p on \(\lfloor 1/p \rceil \) different bases \(G_1,\ldots ,G_{\lfloor 1/p \rceil }\) obtained by randomizing the local basis \(B_i\). They use the fixed search radius \(\sqrt{1.1} \cdot \mathrm {GH}(B_i)\). We stress that BKZ 2.0 is, in practice, the fastest algorithm for solving the approximate SVP in large dimensions. Indeed, many top records in the Darmstadt Lattice Challenge [50] have been achieved with BKZ 2.0 (Table 1).
Our Contributions: In this paper we revisit progressive BKZ algorithms, which have been mentioned in several studies; these include [13, 19, 25, 45, 48]. The main idea of progressive BKZ is that performing BKZ iteratively starting with a small blocksize is practically faster than the direct execution of BKZ with a larger blocksize. The method used to increase the blocksize \(\beta \) strongly affects the overall computational cost of progressive BKZ. The research goal here is to find an optimal method of increasing the blocksize \(\beta \) according to the other parameters in the BKZ algorithms.
One major difference between BKZ 2.0 and our algorithm is the usage of randomized enumeration in the local blocks. To find a very short vector in each local block efficiently, BKZ 2.0 uses the randomizing technique in [20]. It then reduces each randomized block to decrease the cost of lattice enumeration. Although this is significantly faster than enumeration without pruning, it introduces overhead because the randomized bases are not good in practice. To avoid this overhead, we adopt a single enumeration with a low success probability.
Moreover, as a rule of thumb, BKZ with a large blocksize and strong pruning (i.e., a low probability) is generally better, in both speed and quality of the basis, than BKZ with a small blocksize and little pruning (i.e., a high probability). We pursue this idea and add the freedom to choose the radius \(\alpha \cdot \mathrm {GH}(L)\) of the enumeration in the local block; this value is fixed in BKZ 2.0 as \(\sqrt{1.1} \cdot \mathrm {GH}(L)\).
To optimize the algorithm, we first discuss techniques for optimizing the BKZ parameters of the enumeration subroutine, including the blocksize \(\beta \), the success probability p of enumeration, and \(\alpha \), which sets the search radius of enumeration as \(\alpha \cdot \mathrm {GH}(B_i)\). We then show the parameter relationship that minimizes the computational cost of enumeration on a BKZ-\(\beta \)-reduced basis. Next, we introduce a new usage of the full enumeration cost (FEC), derived from Gama-Nguyen-Regev's cost estimation [20] with a Gaussian heuristic radius and without pruning, to define the quality of a basis and to predict the cost after BKZ-\(\beta \) is performed. Using this metric, we can determine the timing for increasing the blocksize \(\beta \), which provides an optimized strategy; in previous works, the timing was often chosen heuristically.
Furthermore, we propose a new BKZ simulator to predict the Gram-Schmidt lengths \(\Vert \mathbf{b}^*_i\Vert \) after BKZ-\(\beta \). Some previous works aimed to find a short vector as quickly as possible and did not consider other quantities. However, additional information is needed to analyze the security of lattice-based cryptosystems. In the literature, a series of works on lattice basis reduction [13, 14, 19, 44] have attempted to predict the Gram-Schmidt lengths \(\Vert \mathbf{b}^*_i\Vert \) after lattice reduction. In particular, Schnorr's GSA is the first simulator of Gram-Schmidt lengths, and the information it provides is used to analyze the random sampling algorithm. We follow this idea, i.e., predicting Gram-Schmidt lengths to analyze other algorithms.
Our simulator is based on the Gaussian heuristic with some modifications, and is computable directly from the lattice dimension and the blocksize. In contrast, Chen-Nguyen's simulator must compute the values sequentially; it has an inherent problem of accumulating error if we use a strategy that changes the blocksize many times. We also investigate the computational cost of our implementation of the new progressive BKZ, and show our estimates for solving challenge problems in the Darmstadt SVP Challenge and Ideal Lattice Challenge [50]. Our cost estimates are derived by fixing the computation model and by curve fitting based on results from computer experiments. Using our improved progressive BKZ, we solved the Ideal Lattice Challenge of 600 and 652 dimensions in the expected times of \(2^{20.7}\) and \(2^{24.0}\) seconds, respectively, on a standard PC.
Finally, we compare our algorithm with several previous algorithms. In particular, compared with Chen-Nguyen's BKZ 2.0 algorithm [13, 14] and Schnorr's blocksize doubling strategy [48], our algorithm is significantly faster. For example, to find a vector shorter than \(1.05 \cdot \mathrm {GH}(L)\), as required by the SVP Challenge [50], our algorithm is approximately 50 times faster than BKZ 2.0 in a simulator-based comparison up to 160 dimensions.
Roadmap: In Sect. 2 we introduce basic facts on lattices. In Sect. 3 we give an overview of BKZ algorithms, including Chen-Nguyen's BKZ 2.0 [13] and its cost estimation; we also state some heuristic assumptions. In Sect. 4, we propose optimized BKZ parameters under Schnorr's geometric series assumption (GSA). In Sect. 5, we explain the basic variant of the proposed progressive BKZ algorithm and its simulator for cost estimation. In Sect. 6, we discuss the optimized blocksize strategy that improves the speed of the proposed progressive BKZ algorithm. In Sect. 7, we show the cost estimation for processing local blocks based on our implementation. Due to space limitations, we omit the details of our implementation (see the full version [8] for details). We then discuss an extended strategy using many random reduced bases [20] on top of our progressive BKZ in Sect. 8. Finally, Sect. 9 gives the results of our simulation for solving the SVP Challenge problems and compares these results with previous works (Fig. 1).
2 Lattice and Shortest Vector
A lattice L is generated by a basis B, which is a set of linearly independent vectors \(\mathbf{b}_1,\dots ,\mathbf{b}_n\) in \(\mathbb {R}^m\). We write \(L(\mathbf{b}_1,\dots ,\mathbf{b}_n)=\{\sum _{i=1}^{n} x_i \mathbf{b}_i : x_i\in \mathbb {Z}\}\). Throughout this paper, we assume \(m=O(n)\) when analyzing the computational cost, though this is not essential. The length of \(\mathbf{v}\in \mathbb {R}^m\) is the standard Euclidean norm \(\Vert \mathbf{v}\Vert := \sqrt{\mathbf{v}\cdot \mathbf{v}}\), where the dot product of any two lattice vectors \(\mathbf{v}=(v_1,\ldots ,v_m)\) and \(\mathbf{w}=(w_1,\ldots ,w_m)\) is defined as \(\mathbf{v}\cdot \mathbf{w}=\sum _{i=1}^{m} v_i w_i\). For natural numbers i and j with \(i<j\), [i : j] denotes the set of integers \(\{ i, i+1,\ldots , j\}\). In particular, [1 : j] is abbreviated as [j].
The gamma function \(\varGamma (s)\) is defined for \(s > 0\) by \(\varGamma (s)=\int _0^\infty t^{s-1} e^{-t}\, dt\). The beta function is \(\mathrm{B}(x,y)=\int ^1_0 t^{x-1} (1-t)^{y-1}\, dt\). We denote by \(\mathrm{Ball}_n(R)\) the n-dimensional Euclidean ball of radius R; its volume is \(V_n(R)=R^n\cdot \frac{\pi ^{n/2}}{\varGamma (n/2+1)}\). Stirling's approximation yields \(\varGamma (n/2+1)\approx \sqrt{\pi n}\, (n/2)^{n/2} e^{-n/2}\) and \(V_n(1)^{-1/n} \approx \sqrt{n/(2\pi e)} \approx \sqrt{n/17}\).
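The quantities above are easy to check numerically; the following minimal sketch (our own illustration, using only the standard library) computes \(V_n(R)\) and compares \(V_n(1)^{-1/n}\) against the Stirling-based approximation \(\sqrt{n/(2\pi e)}\):

```python
import math

def ball_volume(n, R=1.0):
    # V_n(R) = R^n * pi^(n/2) / Gamma(n/2 + 1)
    return R**n * math.pi ** (n / 2) / math.gamma(n / 2 + 1)

n = 100
# V_n(1)^(-1/n) should be close to sqrt(n / (2*pi*e)), i.e. roughly sqrt(n/17)
lhs = ball_volume(n) ** (-1.0 / n)
rhs = math.sqrt(n / (2 * math.pi * math.e))
print(lhs, rhs)
```

For n = 100 the two values agree to within a few percent, as expected from Stirling's approximation.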
2.1 GramSchmidt Basis and Projective Sublattice
For a given lattice basis \(B=(\mathbf{b}_1,\dots ,\mathbf{b}_n)\), we define its Gram-Schmidt orthogonal basis \(B^*=(\mathbf{b}_1^*,\dots ,\mathbf{b}_n^*)\) by \(\mathbf{b}^*_i=\mathbf{b}_i - \sum _{j=1}^{i-1}\mu _{ij}\mathbf{b}_j^*\), where \(\mu _{ij}=(\mathbf{b}_i\cdot \mathbf{b}_j^*)/\Vert \mathbf{b}_j^*\Vert ^2\) for \(1\le j < i \le n\) are the Gram-Schmidt coefficients (abbreviated as GS-coefficients). We sometimes refer to the \(\Vert \mathbf{b}^*_i\Vert \) as the Gram-Schmidt lengths (abbreviated as GS-lengths). We also use the term Gram-Schmidt variables (abbreviated as GS-variables) for the set of GS-coefficients \(\mu _{ij}\) and lengths \(\Vert \mathbf{b}^*_i\Vert \). The lattice determinant is defined as \(\det (L) := \prod _{i=1}^n \Vert \mathbf{b}^*_i\Vert \), which equals the volume \(\mathrm{vol}(L)\) of the fundamental parallelepiped. We denote the orthogonal projection by \(\pi _i:\mathbb {R}^m \rightarrow \mathrm{span}(\mathbf{b}_1,\dots ,\mathbf{b}_{i-1})^\perp \) for \(i\in \{1,\dots ,n\}\). In particular, \(\pi _1(\cdot )\) is the identity map.
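The Gram-Schmidt computation above can be sketched directly; the following is our own minimal illustration (the basis values are arbitrary examples), verifying that \(\det(L)=\prod_i\Vert\mathbf{b}^*_i\Vert\):

```python
import numpy as np

def gram_schmidt(B):
    """Gram-Schmidt basis B* and coefficients mu for a row basis B:
    b*_i = b_i - sum_{j<i} mu_ij b*_j,  mu_ij = (b_i . b*_j) / ||b*_j||^2."""
    n = B.shape[0]
    Bs = B.astype(float).copy()
    mu = np.eye(n)
    for i in range(n):
        for j in range(i):
            mu[i, j] = np.dot(B[i], Bs[j]) / np.dot(Bs[j], Bs[j])
            Bs[i] -= mu[i, j] * Bs[j]
    return Bs, mu

# example basis (illustrative values only)
B = np.array([[3, 1, 0], [1, 2, 1], [0, 1, 4]])
Bs, mu = gram_schmidt(B)
# prod ||b*_i|| equals |det(B)| = vol(L)
print(np.prod(np.linalg.norm(Bs, axis=1)), abs(np.linalg.det(B)))
```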
We denote the local block by the projective sublattice
for \(j\in \{i,i+1,\dots ,n\}\). We sometimes use \(B_i\) to denote the projective sublattice \(L_{[i:j]}\), i.e., the lattice whose basis is \((\pi _i(\mathbf{b}_i),\ldots ,\pi _i(\mathbf{b}_j))\); that is, we omit the index j when the blocksize \(\beta =j-i+1\) is clear from context.
2.2 Shortest Vector and Gaussian Heuristic
A nonzero vector in a lattice L that has the minimum norm is called the shortest vector. We use \(\lambda _1(L)\) to denote the norm of the shortest vector. The notion is also defined for a projective sublattice as \(\lambda _1(L_{[i:j]})\) (we occasionally refer to this as \(\lambda _1(B_i)\) in this paper).
The shortest vector problem (SVP) is the problem of finding a vector of length \(\lambda _1(L)\). For a function \(\gamma (n)\) of a lattice dimension n, the standard definition of \(\gamma \)approximate SVP is the problem of finding a vector shorter than \(\gamma (n) \cdot \lambda _1(L)\).
An ndimensional lattice L and a continuous (usually convex and symmetric) set \(S \subset \mathbb {R}^m\) are given. Then the Gaussian heuristic says that the number of points in \(S\cap L\) is approximately vol(S)/vol(L).
In particular, taking S to be the origin-centered ball of radius R, the number of lattice points is approximately \(V_n(R) / \mathrm{vol}(L)\). This yields an estimate for the length \(\lambda _1\) of the shortest vector as the radius R for which the volume of the ball equals that of the lattice:
This is usually called the Gaussian heuristic of a lattice, and we denote it by \(\mathrm {GH}(L) = \det (L)^{1/n}/V_n(1)^{1/n}\).
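Given the GS-lengths of a basis, \(\mathrm{GH}(L)\) is straightforward to compute; a minimal sketch of our own (working in logarithms to avoid overflow in high dimensions):

```python
import math

def gh(gs_lengths):
    """Gaussian heuristic GH(L) = det(L)^(1/n) / V_n(1)^(1/n), from GS-lengths."""
    n = len(gs_lengths)
    log_det = sum(math.log(l) for l in gs_lengths)
    # log V_n(1) = (n/2) log pi - log Gamma(n/2 + 1)
    log_vn = (n / 2) * math.log(math.pi) - math.lgamma(n / 2 + 1)
    return math.exp((log_det - log_vn) / n)

# for Z^100 (all GS-lengths 1), GH(L) = V_n(1)^(-1/n), roughly sqrt(100/17)
print(gh([1.0] * 100))
```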
For our analysis, we use the following lemma on the randomly generated points.
Lemma 1
Let \(x_1,\ldots ,x_K\) be K points uniformly sampled from the ndimensional unit ball. Then, the expected value of the shortest length of vectors from origin to these points is
In particular, letting \(K=1\), the expected value is \(n/(n+1)\).
Proof
Since the cumulative distribution function of each \(\Vert x_i\Vert \) is \(F_i(r) = r^n\), the cumulative distribution function of the shortest length of the vectors is \(F_{\min }(r) = 1-(1-F_i(r))^K = 1-(1-r^n)^K.\) Its probability density function is \(P_{\min }(r)= \frac{dF_{\min }}{dr} =Kn\cdot r^{n-1} (1-r^n)^{K-1}.\) Therefore, the expected value of the shortest length of the vectors is
\(\Box \)
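Since the expected value can also be written as \(\int_0^1 (1-F_{\min}(r))\,dr = \int_0^1 (1-r^n)^K\,dr\), Lemma 1 is easy to check numerically; the following is our own sketch using a simple midpoint rule (not part of the paper):

```python
def expected_min_length(n, K, steps=200000):
    # E = \int_0^1 (1 - F_min(r)) dr = \int_0^1 (1 - r^n)^K dr, midpoint rule
    h = 1.0 / steps
    return sum((1 - ((i + 0.5) * h) ** n) ** K * h for i in range(steps))

# K = 1 recovers the value n/(n+1) stated in the lemma
n = 10
print(expected_min_length(n, 1), n / (n + 1))
```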
2.3 Enumeration Algorithm [20, 28, 46]
We explain the enumeration algorithm for finding a short vector in a lattice. Pseudocode for the enumeration algorithm is given in [20, 46]. For a given lattice basis \((\mathbf{b}_1,\ldots ,\mathbf{b}_n)\) and its Gram-Schmidt basis \((\mathbf{b}^*_1,\ldots ,\mathbf{b}^*_n)\), the enumeration algorithm considers a search tree whose nodes are labeled by vectors. The root of the search tree is the zero vector; for each node labeled by \(\mathbf{v} \in L\) at depth \(k\in [n]\), its children have labels \(\mathbf{v} + a_{n-k}\cdot \mathbf{b}_{n-k}\) \((a_{n-k}\in \mathbb {Z})\) whose projective length \(\Vert \pi _{n-k}(\sum _{i=n-k}^n a_i \cdot \mathbf{b}_i )\Vert \) is smaller than a bounding value \(R_{k+1} \in (0,\Vert \mathbf{b}_1\Vert ]\). After searching all possible nodes, the enumeration algorithm finds a lattice vector shorter than \(R_n\) at a leaf of depth n, or a vector of short projective length at a node of depth \(k<n\). Clearly, by taking \(R_k=\Vert \mathbf{b}_1\Vert \) for all \(k\in [n]\), the enumeration algorithm always finds a shortest vector \(\mathbf{v}_1\) in the lattice, namely \(\Vert \mathbf{v}_1\Vert =\lambda _1(L)\).
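The tree search above can be sketched in code. The following is a naive depth-first enumeration without pruning, written by us purely as an illustration of the projective-length recursion (it is not the optimized implementation of [20, 46]); `gram_schmidt` and `shortest_norm` are our own illustrative helpers:

```python
import numpy as np

def gram_schmidt(B):
    n = B.shape[0]
    Bs = B.astype(float).copy()
    mu = np.eye(n)
    for i in range(n):
        for j in range(i):
            mu[i, j] = np.dot(B[i], Bs[j]) / np.dot(Bs[j], Bs[j])
            Bs[i] -= mu[i, j] * Bs[j]
    return Bs, mu

def shortest_norm(B, R):
    """Norm of a shortest nonzero lattice vector of norm <= R, or None.

    Depth-first search over coefficients x_n, ..., x_1; at level k the
    squared projective length grows by (x_k + sum_{j>k} mu_jk x_j)^2 ||b*_k||^2.
    """
    B = np.array(B)
    n = B.shape[0]
    Bs, mu = gram_schmidt(B)
    c = [float(np.dot(Bs[k], Bs[k])) for k in range(n)]
    limit = R * R * (1 + 1e-9)
    best = [limit]          # squared bound; shrinks as shorter vectors are found
    x = [0] * n
    def search(k, rho):
        if k < 0:
            if rho > 1e-12:          # exclude the zero vector
                best[0] = min(best[0], rho)
            return
        if rho >= best[0]:
            return
        center = -sum(mu[j, k] * x[j] for j in range(k + 1, n))
        t = ((best[0] - rho) / c[k]) ** 0.5
        for a in range(int(np.ceil(center - t)), int(np.floor(center + t)) + 1):
            new_rho = rho + (a - center) ** 2 * c[k]
            if new_rho < best[0]:
                x[k] = a
                search(k - 1, new_rho)
        x[k] = 0
    search(n - 1, 0.0)
    return best[0] ** 0.5 if best[0] < limit else None

print(shortest_norm(np.eye(3, dtype=int), 1.5))
```

This brute-force search is only feasible in very small dimensions; the pruning techniques discussed next exist precisely because the tree size explodes with n.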
Because \(\Vert \mathbf{b}_1\Vert \) is often larger than \(\lambda _1(L)\), we can set the better search radius \(R_n=\mathrm {GH}(L)\) to decrease the computational cost. We call this the full enumeration algorithm, and define the full enumeration cost \(\mathrm {FEC}(B)\) as the cost of this algorithm for the basis B. By the same argument as in [20], we can evaluate \(\mathrm {FEC}(B)\) using the following equation.
Because full enumeration is a cost-intensive algorithm, several improvements have been proposed that consider the trade-offs among running time, search radius, and success probability [20, 47]. Gama-Nguyen-Regev [20] proposed a cost estimation model for the lattice enumeration algorithm to optimize the bounding functions \(R_1, \ldots , R_n\) mentioned above. The success probability p of finding a single vector within a radius c is given by
where \(S_n\) is the surface area of the n-dimensional unit ball. Then, the cost of the enumeration algorithm can be estimated by the number of processed nodes, i.e.,
Note that the factor 1/2 is due to the symmetry of the search tree. Using the methodology in [20], Chen and Nguyen proposed a method to find the optimal bounding functions \(R_1,\ldots ,R_n\) that minimize N subject to a given success probability p.
In this paper, we use the lattice enumeration cost, abbreviated as ENUM cost, to denote the number N in Eq. (1). For a lattice L defined by a basis B and parameters \(\alpha >0\) and \(p\in [0,1]\), we use \(\mathrm {ENUMCost}(B;\alpha ,p)\) to denote the minimized cost N of lattice enumeration with radius \(c=\alpha \cdot \mathrm {GH}(L)\) subject to the success probability p. This notion is also defined for a projective sublattice.
3 Lattice Reduction Algorithms
Lattice reduction algorithms transform a given lattice basis \((\mathbf{b}_1,\ldots ,\mathbf{b}_n)\) into another basis whose Gram-Schmidt lengths are relatively short.
LLL Algorithm [30]: The LLL algorithm transforms the basis \((\mathbf{b}_1,\ldots ,\mathbf{b}_n)\) using the following two operations until no update occurs: size reduction \(\mathbf{b}_i \leftarrow \mathbf{b}_i - \lfloor \mu _{ij} \rceil \mathbf{b}_j\) for \(j\in [i-1]\), and neighborhood swaps between \(\mathbf{b}_i\) and \(\mathbf{b}_{i+1}\) if \(\Vert \mathbf{b}^*_{i+1} \Vert ^2 \le \frac{1}{2}\Vert \mathbf{b}^*_{i} \Vert ^2\).
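The two operations can be sketched as follows; this is our own deliberately simple illustration using exactly the (Siegel-type) swap condition stated above, not the optimized LLL implementations used in practice:

```python
import numpy as np

def gso(B):
    n = B.shape[0]
    Bs = B.astype(float).copy()
    mu = np.eye(n)
    for i in range(n):
        for j in range(i):
            mu[i, j] = np.dot(B[i], Bs[j]) / np.dot(Bs[j], Bs[j])
            Bs[i] -= mu[i, j] * Bs[j]
    return Bs, mu

def lll_simplified(B):
    """Alternate size reduction and neighborhood swaps until no swap occurs."""
    B = np.array(B, dtype=np.int64).copy()
    n = B.shape[0]
    while True:
        Bs, mu = gso(B)
        # size reduction: b_i <- b_i - round(mu_ij) * b_j
        for i in range(n):
            for j in range(i - 1, -1, -1):
                B[i] -= int(round(mu[i, j])) * B[j]
                Bs, mu = gso(B)   # recompute (inefficient but simple)
        swapped = False
        for i in range(n - 1):
            if np.dot(Bs[i + 1], Bs[i + 1]) <= 0.5 * np.dot(Bs[i], Bs[i]):
                B[[i, i + 1]] = B[[i + 1, i]]
                swapped = True
                break
        if not swapped:
            return B

print(lll_simplified([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
```

The swaps preserve the lattice (and hence |det|), and the loop terminates because each swap shrinks the standard LLL potential by a constant factor.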
BKZ Algorithms [46, 47]: For a given lattice basis and a fixed blocksize \(\beta \), the BKZ algorithm processes the following operation in the local block \(B_i\), i.e., the projective sublattice \(L_{[i:i+\beta -1]}\) of blocksize \(\beta \), starting from the first index \(i=1\) up to \(i=n-1\). Note that the blocksize shrinks to \(n-i+1\) for large \(i > n-\beta +1\), and thus we sometimes use \(\beta '\) to denote the dimension of \(B_i\), i.e., \(\beta '=\min (\beta ,n-i+1)\).
At index i, the standard implementation of the BKZ algorithm calls the enumeration algorithm for the local block \(B_i\). Let \(\mathbf{v}\) be a shorter vector found by the enumeration algorithm. The BKZ algorithm then inserts \(\mathbf{v}\) between \(\mathbf{b}_{i-1}\) and \(\mathbf{b}_i\), which yields the degenerated (linearly dependent) basis \((\mathbf{b}_1,\ldots ,\mathbf{b}_{i-1}, \mathbf{v},\mathbf{b}_i,\ldots ,\mathbf{b}_{\min (i+\beta -1,n)})\). To this basis we apply the LLL algorithm (or BKZ with a smaller blocksize) to remove the dependency and obtain a basis of shorter independent vectors. One pass of these procedures from \(i=1\) to \(n-1\) is usually called a tour. The original version of the BKZ algorithm stops when no update occurs during \(n-1\) iterations. In this paper, we refer to the BKZ algorithm with blocksize \(\beta \) as BKZ-\(\beta \).
HKZ Reduced Basis: A lattice basis \((\mathbf{b}_1,\ldots ,\mathbf{b}_n)\) is called Hermite-Korkine-Zolotarev (HKZ) reduced [38, Chapter 2] if it is size-reduced (\(|\mu _{ij}|\le 1/2\) for all \(1\le j<i\le n\)) and \(\pi _i(\mathbf{b}_i)\) is the shortest vector in the projective sublattice \(L_{[i:n]}\) for all i. We can estimate the Gram-Schmidt lengths of an HKZ-reduced basis using the Gaussian heuristic as \(\Vert \mathbf{b}^*_i\Vert =\mathrm {GH}(L_{[i:n]})\). Since an HKZ-reduced basis is completely reduced in this sense, we will use it to discuss the lower bound of computing time in Sect. 8.2.
3.1 Some Heuristic Assumptions in BKZ
Gaussian Heuristic in Small Dimensions: Chen and Nguyen observed that the length \(\lambda _1(B_i)\) of the shortest vector in the local block \(B_i\) is usually larger than \(\mathrm {GH}(B_i)\) in small dimensions, i.e., for small \(\beta '\) [13]. They gave the averaged values of \(\Vert \mathbf{b}^*_i\Vert /\det (L)^{1/n}\) for the last indices of highly reduced bases to modify their BKZ simulator; see [13, Appendix C]. For their 50 simulated values for \(\Vert \mathbf{b}^*_{n-49}\Vert ,\ldots ,\Vert \mathbf{b}^*_{n}\Vert \), we define the modified Gaussian heuristic constant by
We will use \(\tau _i\) for \(i\le 50\) to denote these modifying constants; for \(i > 50\) we define \(\tau _i=1\), following Chen-Nguyen's simulator [13].
In the rest of this paper, we assume that the shortest vector length of a \(\beta \)-dimensional local block \(B_i\) of a reduced basis satisfies \(\lambda _1(B_i) = \tau _{\beta } \cdot \mathrm {GH}(B_i)\) on average.
We note that there exists a mathematical theory guaranteeing \(\tau _i \rightarrow 1\) for random lattices as the dimension goes to infinity [42]. Although this does not theoretically guarantee \(\tau _i=1\) for BKZ local blocks, the values are very close to 1 in our preliminary experiments.
Geometric Series Assumption (GSA): Schnorr [44] introduced the geometric series assumption (GSA), which says that the Gram-Schmidt lengths \(\Vert \mathbf{b}_i^*\Vert \) of a BKZ-reduced basis decay geometrically with quotient r, namely \(\Vert \mathbf{b}_i^*\Vert ^2/\Vert \mathbf{b}_1\Vert ^2=r^{i-1}\) for \(i=1,\dots ,n\) and some \(r \in [3/4,1)\). Here r is called the GSA constant. Figure 2 shows the Gram-Schmidt lengths of a 240-dimensional reduced basis after processing BKZ-100 using our algorithm and parameters.
It is known that GSA does not hold exactly in the first and last indices [11]. Several previous works [3, 11, 44] aimed to modify the reduction algorithm so that it outputs a reduced basis satisfying GSA. However, it seems difficult to obtain such a reduced basis in practice. In this paper, we instead adjust the parameters at the first and last indices so that the proposed simulator performs with optimal efficiency (see Sect. 5.1).
3.2 Chen-Nguyen's BKZ 2.0 Algorithm [13]
We recall Chen-Nguyen's BKZ 2.0 algorithm in this section. The outline of the BKZ 2.0 algorithm is described in Fig. 3.
Speed-Up Techniques for BKZ 2.0: BKZ 2.0 employs four major speed-up techniques that differentiate it from the original BKZ:
1. BKZ 2.0 employs the extreme pruning technique [20], which attempts to find shorter vectors in the local blocks \(B_i\) with a low probability p by randomizing the basis \(B_i\) into \(M = \lfloor 1/p \rceil \) bases \(G_1,\ldots ,G_M\).

2. For the search radius \(\min \{ \Vert \mathbf{b}^*_i\Vert , \alpha \cdot \mathrm {GH}(B_i) \}\) in the enumeration algorithm of the local block \(B_i\), Chen and Nguyen set \(\alpha = \sqrt{1.1}\) based on their experiments, while previous works set the radius as \(\Vert \mathbf{b}_i^*\Vert \).

3. In order to reduce the cost of the enumeration algorithm, BKZ 2.0 preprocesses the local blocks by executing a sequence of BKZ reductions, e.g., 3 tours of BKZ-50 and then 5 tours of BKZ-60, and so on. The parameters (blocksize, number of tours, and number of randomized bases) are precomputed to minimize the total enumeration cost.

4. BKZ 2.0 uses the terminating condition introduced in [23], which aborts BKZ within a small number of tours. This finds a short vector faster than the full execution of BKZ.
Chen-Nguyen's BKZ 2.0 Simulator: In order to predict the computational cost and the quality of the output basis, Chen and Nguyen also proposed a procedure for simulating the BKZ 2.0 algorithm. Let \((\ell _1,\ldots ,\ell _n)\) be the simulated values of the GS-lengths \(\Vert \mathbf{b}^*_i\Vert \) for \(i=1,\ldots ,n\). Then, the simulated values of the determinant and the Gaussian heuristic are given by \(\prod _{j=1}^n \ell _j\) and \(\mathrm {GH}(B_i) = V_{\beta '}(1)^{-1/\beta '} \bigl(\prod _{j=i}^{i+\beta '-1} \ell _j\bigr)^{1/\beta '}\) where \(\beta '=\min \{\beta ,n-i+1\}\), respectively.
They simulate a BKZ tour of blocksize \(\beta \) assuming that each enumeration procedure finds a vector of projective length \(\mathrm {GH}(B_i)\). Roughly speaking, their simulator updates \((\ell _i,\ell _{i+1})\) to \((\ell '_i,\ell '_{i+1})\) for \(i=1,\ldots ,n-1\), where \(\ell '_i = \mathrm {GH}(B_i)\) and \(\ell '_{i+1} = \ell _{i+1}\cdot (\ell _i/\ell '_i)\). Here, the last 50 GS-lengths are modified using an HKZ-reduced basis. The details of their simulator are given in [13, Algorithm 3].
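The tour update just described can be sketched as follows; this is our own simplified rendering that omits the HKZ tail modification for the last indices (so it is not the full simulator of [13, Algorithm 3]), working with logarithms of the GS-lengths:

```python
import math

def log_ball_vol(n):
    # log V_n(1) = (n/2) log pi - log Gamma(n/2 + 1)
    return (n / 2) * math.log(math.pi) - math.lgamma(n / 2 + 1)

def simulate_tour(log_l, beta):
    """One simulated BKZ-beta tour on the log GS-lengths (tail fix omitted)."""
    l = list(log_l)
    n = len(l)
    for i in range(n - 1):
        bp = min(beta, n - i)                       # block dimension beta'
        log_gh = (sum(l[i:i + bp]) - log_ball_vol(bp)) / bp
        if log_gh < l[i]:                           # enumeration "finds" GH(B_i)
            diff = l[i] - log_gh
            l[i] = log_gh
            l[i + 1] += diff                        # keep the determinant fixed
    return l
```

Note that the compensation on \(\ell_{i+1}\) keeps \(\sum_i \log\ell_i\), i.e. the lattice determinant, invariant across the tour.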
They also present the upper and lower bounds for the number of processed nodes during the lattice enumeration of blocksize \(\beta \). From [14, Table 4], we extrapolate the costs as
Then, the total enumeration cost of performing the BKZ 2.0 algorithm using blocksize \(\beta \) and t tours is given by
To convert the number of nodes into single-threaded time in seconds, we use the conversion constant of \(4\cdot 10^9 / 200 = 2\cdot 10^7\) nodes per second, because they assume that processing one node requires 200 clock cycles on a standard CPU, which we assume runs at 4.0 GHz.
We note that there are several models for extrapolating \(\log _2(\mathrm{Cost}_{\beta })\). Indeed, Lepoint and Naehrig [31] consider two models, a quadratic interpolation and a linear interpolation from the table. Albrecht et al. [5] showed another BKZ 2.0 cost estimation that uses the cost model \(\log _2(\mathrm{Cost}_{\beta }) = O(n\log n)\). It is a highly nontrivial task to find a proper interpolation that precisely estimates the cost of the BKZ 2.0 algorithm.
We further note that the upper bound of the simulator is somewhat debatable, because they use the enumeration radius \(c=\min \{ \sqrt{1.1} \cdot \mathrm {GH}(B_i),\Vert \mathbf{b}_i^*\Vert \} \) for \(i < n-30\) in their experiments, whereas they assume \(c=\mathrm {GH}(B_i)\) for the cost estimation in their upper bound simulation. Thus, the actual cost of BKZ 2.0 could differ by a factor of \(1.1^{O(\beta )}\).
4 Optimizing Parameters in Plain BKZ
In this section we consider the plain BKZ algorithm described in Fig. 4, and roughly predict the GS-lengths of the output basis, characterized by the GSA constant r. Using this analysis, we obtain optimal settings for the parameters \((\alpha ,p)\) in Step 4 of the plain BKZ algorithm of blocksize \(\beta \).
4.1 Relationship of Parameters \(\alpha , p, \beta , r\)
We fix the values of parameters \((\beta ,\alpha )\) and assume that the lattice dimension n is sufficiently large.
Suppose that we have found a vector \(\mathbf{v}\) with \(\Vert \mathbf{v}\Vert < \alpha \cdot \mathrm {GH}(B_i)\) in the local block \(B_i\). We update the basis by inserting \(\mathbf{v}\) at the i-th index, and perform LLL or a small-blocksize BKZ on the updated basis.
When the lattice dimension is large, Rogers' theorem [42] says that approximately \(\alpha ^n/2\) vector pairs \((\mathbf{v},-\mathbf{v})\) exist within the ball of radius \(c=\alpha \cdot \mathrm {GH}(L)\). Since the pruning probability is defined for a single vector pair, we expect that the actual probability that the enumeration algorithm finds at least one vector shorter than c is roughly
From relation (5), we can expect that one lattice vector pair exists in the search space by setting the parameter p as
Remark 1
The probability setting of Eq. (6) is an optimal choice under our assumption. If p is smaller, the enumeration algorithm finds no short vector with high probability and basis updating at ith index does not occur, which is a waste of time. On the other hand, if we take a larger p so that there exist \(p\cdot \alpha ^\beta /2>1\) vector pairs, the computational time of the enumeration algorithm increases more than \(p\cdot \alpha ^\beta /2\) times [20]. Although it can find shorter vectors, this is also a waste of time from the viewpoint of basis updating.
Assume that one vector is found by the enumeration, and that it is distributed like a uniformly random point in the \(\beta \)-dimensional ball of radius \(\alpha \cdot \mathrm {GH}(B_i)\). Then, the expected value of \(\Vert \mathbf{v}\Vert \) is \(\frac{\beta }{\beta +1} \alpha \cdot \mathrm {GH}(B_i)\), by letting \(K=1\) in Lemma 1. Thus, we can expect this to be the value of \(\Vert \mathbf{b}^*_i\Vert \) after the update.
Therefore, after executing a sufficient number of BKZ tours, we can expect that all the Gram-Schmidt lengths \(\Vert \mathbf{b}^*_i\Vert \) satisfy
on average. Hence, under Schnorr’s GSA, we have the relation
and the GSA constant is
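A sketch of how this constant can be recovered (our own derivation under GSA, writing \(\Vert\mathbf{b}^*_j\Vert = r^{(j-1)/2}\Vert\mathbf{b}_1\Vert\) and using the expected length from Eq. (7); the paper's original display is omitted in this version):

```latex
% GSA gives, for the local block B_i of blocksize \beta,
\mathrm{GH}(B_i)
  = V_\beta(1)^{-1/\beta}\Bigl(\textstyle\prod_{j=i}^{i+\beta-1}\Vert\mathbf{b}^*_j\Vert\Bigr)^{1/\beta}
  = V_\beta(1)^{-1/\beta}\, r^{(i-1)/2}\, r^{(\beta-1)/4}\,\Vert\mathbf{b}_1\Vert .
% Substituting into \Vert\mathbf{b}^*_i\Vert = \tfrac{\beta}{\beta+1}\,\alpha\,\mathrm{GH}(B_i)
% and cancelling the common factor r^{(i-1)/2}\Vert\mathbf{b}_1\Vert yields
1 = \frac{\beta}{\beta+1}\,\alpha\, V_\beta(1)^{-1/\beta}\, r^{(\beta-1)/4},
\qquad\text{i.e.}\qquad
r = \Bigl(\frac{\beta+1}{\beta}\cdot\frac{V_\beta(1)^{1/\beta}}{\alpha}\Bigr)^{4/(\beta-1)} .
```

Since \(V_\beta(1)^{1/\beta}\approx\sqrt{2\pi e/\beta} < 1\) for moderate \(\beta\) and \(\alpha\ge 1\), this value satisfies \(r<1\), as required.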
Therefore, by fixing \((\alpha ,\beta )\), we can set the probability p and obtain r as a rough prediction of the quality of the output lattice basis. We will use the relations (6) and (9) to set our parameters. Note that any two of \(\beta ,\alpha ,p\) and r are determined from the other two values.
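Numerically, the parameter choice can be sketched as follows. This is our own illustration: \(p = 2\alpha^{-\beta}\) encodes relation (6), and the closed form for r is our reconstruction of relation (9) from the GSA-based condition \(\Vert\mathbf{b}^*_i\Vert = \frac{\beta}{\beta+1}\alpha\cdot\mathrm{GH}(B_i)\) (the display equations are omitted in this version):

```python
import math

def log_ball_vol(n):
    # log V_n(1)
    return (n / 2) * math.log(math.pi) - math.lgamma(n / 2 + 1)

def bkz_params(beta, alpha):
    """Given (beta, alpha), return (p, r): p = 2 * alpha^(-beta), and the GSA
    constant r = ((beta+1)/beta * V_beta(1)^(1/beta) / alpha)^(4/(beta-1))."""
    p = 2.0 * alpha ** (-beta)
    log_v = log_ball_vol(beta) / beta
    log_r = (4.0 / (beta - 1)) * (math.log((beta + 1) / beta)
                                  + log_v - math.log(alpha))
    return p, math.exp(log_r)

p, r = bkz_params(60, 1.1)
print(p, r)   # r should lie in (0, 1) for a reduced basis
```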
Remark 2
Our estimate is in fact an underestimate: in our experiments, the vectors found during the BKZ algorithm are often shorter than the estimate in Eq. (7). This gap mainly comes from the estimate in (5), and can be explained as follows. Let \((R_1,\ldots ,R_\beta )\) be a bounding function of probability p for a vector of length \(\Vert v\Vert \). Then, the probability \(p'\) of finding a shorter vector of length \(\Vert v'\Vert < \Vert v\Vert \) corresponds to the scaled bounding function \((R'_1,\ldots ,R'_\beta )\), where \(R'_i = \min \{ 1.0,R_i \cdot \Vert v\Vert /\Vert v'\Vert \}\). Here, \(p'\) is clearly larger than p because \(R'_i \ge R_i\) for \(i\in [\beta ]\). Therefore, when the above parameters are used and a sufficient number of tours is performed, the quality of the output basis is better than that derived from Eq. (9). Hence, within a few tours, our algorithm can output a basis of the quality predicted by our estimates in this section.
4.2 Optimizing Parameters
Now, for a fixed parameter pair \((\beta ,r)\), the cost \(\mathrm {ENUMCost}(B_i;\alpha ,p)\) of the enumeration algorithm in a local block \(B_i\) satisfying GSA is computable. Concretely, we compute \(\alpha \) using relation (9), fix p by (6), and simulate the Gram-Schmidt lengths of \(B_i\) using \(\Vert \mathbf{b}^*_i\Vert =r^{(i-1)/2}\). Using the computation technique in [20], for several GSA constants r, we search for the optimal blocksize \(\beta \) that minimizes the enumeration cost \(\mathrm {ENUMCost}(B_i;\alpha ,p)\). The small squares in Fig. 5 show the results. From these points, we find the functions \(f_1(\beta )\) and \(f_2(\beta )\), whose graphs are also shown in the figure.
We explain how to find these functions \(f_1(\beta )\) and \(f_2(\beta )\). Suppose lattice dimension n is sufficiently large, and suppose the cost of the enumeration algorithm is roughly dominated by the probability p times the factor at \(k=n/2\) in the summation (1). Then \(\mathrm {ENUMCost}(B_i;\alpha ,p)\) is approximately
where from Eq. (9) we have obtained
In order to minimize D, we roughly need the above derivative to be zero; thus, we use the following function of \(\beta \) for our cost estimation with constants \(c_i\)
From this observation, we fix the fitting function model as \(f(\beta ) = \frac{ \log (c_1 \beta + c_2) }{c_3\beta + c_4}.\)
By using the least squares method implemented in gnuplot, we find the coefficients \(c_i\) so that \(f(\beta )\) is a good approximation of the pairs \((\beta _i,\log (r_i))\). In our curve fitting, we separate the range of \(\beta \) into the interval [40, 100] and the larger range. This separation is needed so that \(\log (r)\) converges to 0 when \(\beta \) is sufficiently large; our curve fitting using a single natural function did not achieve this. Curves \(f_1(\beta )\) and \(f_2(\beta )\) in Fig. 5 are the results of our curve fitting for the range [40, 100] and the larger range, respectively.
For the range of \(\beta \in [40,100]\), we have obtained
and for the larger blocksize \(\beta >100\),
Note that we will use the relation (10) when the blocksize is smaller than 40.
Moreover, we obtain pairs of \(\beta \) and the minimized \(\mathrm {ENUMCost}(B_i;\alpha ,p)\), in accordance with the above experiments. Using a curve fitting that minimizes \(\sum _{\beta } \left( f(\beta ) - \log _2 \mathrm {ENUMCost}(B_i;\alpha ,p) \right)^2\) in gnuplot, we find the extrapolating formula
for \(\log _2 \mathrm {ENUMCost}(B_i;\alpha ,p)\). We will use this as the standard enumeration cost for blocksize \(\beta \).
Remark 3
Our estimation from the real experiments is \(0.25\beta \cdot \mathrm {ENUMCost}(B_i;\alpha ,p)\) (see Sect. 7.1 of the full version [8]), which crosses over the estimation of the BKZ 2.0 simulator (3) at \(\beta =873\). Thus, the performance of BKZ 2.0 might be better at some extremely large blocksizes, while our algorithm performs better at realizable blocksizes \(<200\).
4.3 Parameter Settings in Step 4 in Fig. 4
Using the above arguments, we can fix the optimized pair \((\alpha ,p)\) for each blocksize \(\beta \). That is, to process a local block of blocksize \(\beta \) in Step 4 of the plain BKZ algorithm in Fig. 4, we compute the corresponding r by Eqs. (10) and (11), and additionally obtain the parameters \(\alpha \) by Eq. (9) and p by Eq. (6). These are our basic parameter settings.
Modifying Blocksize at First Indexes: We sometimes encounter the phenomenon that the actual \(\mathrm {ENUMCost}(B_i;\alpha ,p)\) at small indexes is much smaller than that at middle indexes. This is because \(\Vert \mathbf{b}^*_i\Vert \) is smaller than \(\mathrm {GH}(B_i)\) at small indexes. In other words, \(\mathbf{b}_i\) is hard to update using the enumeration of blocksize \(\beta \). To speed up the lattice reduction, we use a heuristic method that enlarges the blocksizes as follows.
From the discussion in the above subsection, we know the theoretical value of the enumeration cost at blocksize \(\beta \). On the other hand, in actual executions of BKZ algorithms, the enumeration cost increases because the sequence \((\mathbf{b}^*_i,\ldots ,\mathbf{b}^*_{i+\beta -1})\), which mainly determines the computing cost, does not follow the GSA of slope r exactly. Thus, we define the expected enumeration cost at blocksize \(\beta \) as \(\beta \cdot \mathrm {MINCost}(\beta )\). With this expectation, we reset the blocksize as the minimum \(\beta \) satisfying \(\mathrm {ENUMCost}(B_{[i:i+\beta -1]};\alpha ,p) > \beta \cdot \mathrm {MINCost}(\beta )\).
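The blocksize-enlargement rule above can be sketched as follows, where `enum_cost` and `min_cost` are hypothetical callables standing in for \(\mathrm {ENUMCost}\) and \(\mathrm {MINCost}\):

```python
def enlarged_blocksize(enum_cost, min_cost, beta0, beta_max):
    """Return the smallest blocksize beta >= beta0 whose local enumeration cost
    exceeds the expected cost beta * MINCost(beta); beta_max caps the search."""
    for beta in range(beta0, beta_max + 1):
        if enum_cost(beta) > beta * min_cost(beta):
            return beta
    return beta_max
```

For instance, with toy cost functions enum_cost(b) = 2**b and min_cost(b) = 2**(b-4), the condition holds exactly when b < 16, so the search started at 10 stops immediately, while a search started at 20 falls through to the cap.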
Modifying \((\alpha ,p)\) at Last Indexes: For large indexes such as \(i>n-\beta \), the blocksize of a local block shrinks to \(\beta '=\min (\beta ,n-i+1)\). In our implementation, we increase the success probability to a new \(p'\) while \(\mathrm {ENUMCost}(B_i;\alpha ',p')\) is smaller than \(\beta \cdot \mathrm {MINCost}(\beta )\). We also reset the radius as \(\alpha '=(2/p')^{1/\beta '}\) from Eq. (6).
5 Our Proposed Progressive BKZ: Basic Variant
In this section, we explain the basic variant of our proposed progressive BKZ algorithm.
In general, if the blocksize of the BKZ algorithm increases, a shorter vector \(\mathbf{b}_1\) can be computed; however, the running cost also increases. The progressive BKZ algorithm starts with a relatively small blocksize \(\beta _{start}\) and increases the blocksize to \(\beta _{end}\) according to some criteria. The idea of the progressive BKZ algorithm has been mentioned in several works, for example, [13, 25, 45, 48]. The research challenge in the progressive BKZ algorithm is to find effective criteria for increasing the blocksize that minimize the total running time.
In this paper we employ the full enumeration cost (FEC) of Sect. 2.3 in order to evaluate the quality of the basis and to derive the increasing criteria. Recall that the FEC of a basis \(B=(\mathbf{b}_1,\ldots ,\mathbf{b}_n)\) of an n-dimensional lattice L is defined by \(\mathrm {FEC}(B)=\sum _{k=1}^{n}\frac{V_k(\mathrm {GH}(L))}{\prod _{i=n-k+1}^n \Vert \mathbf{b}_i^* \Vert }\), where \(\Vert \mathbf{b}_i^*\Vert \) denotes the GS-lengths. Note that \(\mathrm {FEC}(B)\) eventually decreases after performing several tours of the BKZ algorithm with the fixed blocksize \(\beta \).
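A direct transcription of this definition, using \(V_k(R)=\pi ^{k/2}R^k/\varGamma (k/2+1)\) and \(\mathrm {GH}(L)=V_n(1)^{-1/n}\det (L)^{1/n}\), might look as follows (a sketch suitable for moderate dimensions, since the products are taken in plain floating point):

```python
import math

def ball_vol(k, radius):
    """Volume V_k(R) of the k-dimensional Euclidean ball of radius R."""
    return radius ** k * math.pi ** (k / 2) / math.gamma(k / 2 + 1)

def fec(gs_lengths):
    """FEC(B) = sum_{k=1}^n V_k(GH(L)) / prod_{i=n-k+1}^n ||b_i^*||."""
    n = len(gs_lengths)
    det = math.prod(gs_lengths)                            # det(L) = prod ||b_i^*||
    gh = ball_vol(n, 1.0) ** (-1.0 / n) * det ** (1.0 / n)  # Gaussian heuristic
    total, tail = 0.0, 1.0
    for k in range(1, n + 1):
        tail *= gs_lengths[n - k]                          # product of the last k lengths
        total += ball_vol(k, gh) / tail
    return total
```

Note that FEC is invariant under scaling of the basis, since \(\mathrm {GH}(L)\) scales with \(\det (L)^{1/n}\), and a steeper GSA slope (a less reduced basis) yields a larger FEC.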
Moreover, we construct a simulator that evaluates the GS-lengths from the optimized parameters \(\alpha , p, \beta , r\) for the BKZ algorithm, as described in the local block discussion in Sect. 4.3. The simulator for an n-dimensional lattice depends only on the blocksize \(\beta \) of the local block; we denote by \(\mathrm{Sim}\text{ }\mathrm{GS}\text{ }\mathrm{lengths}(n,\beta )\) the simulated GS-lengths \((\ell _1,\ldots ,\ell _n)\). The construction of the simulator will be presented in Sect. 5.1.
For this purpose, we define some functions on the simulated GS-lengths \((\ell _1,\ldots ,\ell _n)\). \(\mathrm{Sim}\text{ }\mathrm{GH}(\ell _1,\ldots ,\ell _n) = V_{n}(1)^{-1/n} \prod _{j=1}^{n} \ell _j^{1/n}\) is the simulated Gaussian heuristic. The simulated value of the full enumeration cost is
Further, for \((\ell _1,\ldots ,\ell _n)=\mathrm{Sim}\text{ }\mathrm{GS}\text{ }\mathrm{lengths}(n,\beta )\), we use the notation
\(\mathrm{Sim}\text{ }\mathrm{FEC}(n,\beta ) := \mathrm{Sim}\text{ }\mathrm{FEC}(\ell _1,\ldots ,\ell _n)\) in particular. The simulated enumeration cost \(\mathrm{Sim}\text{ }\mathrm{ENUMCost}(\ell _1,\ldots ,\ell _{\beta };\alpha ,p)\) is defined by \(\mathrm {ENUMCost}(B;\alpha ,p)\) for a lattice basis B that has GSlengths \(\Vert \mathbf{b}^*_i \Vert =\ell _i\) for \(i\in [\beta ]\).
The key point of our proposed progressive BKZ algorithm is to increase the blocksize \(\beta \) if \(\mathrm {FEC}(B)\) becomes smaller than \(\mathrm{Sim}\text{ }\mathrm{FEC}(n,\beta )\). In other words, we perform the BKZ tours of blocksize \(\beta \) while \(\mathrm {FEC}(B) > \mathrm{Sim}\text{ }\mathrm{FEC}(n,\beta )\). We describe the proposed progressive BKZ in Fig. 6.
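The resulting control flow can be sketched as below; here `fec`, `bkz_tour`, and `sim_fec` are stand-ins for the real FEC computation, one BKZ tour, and the simulated targets \(\mathrm{Sim}\text{ }\mathrm{FEC}(n,\beta )\), so the toy usage drives the loop with plain numbers rather than actual bases:

```python
def progressive_bkz(basis, fec, bkz_tour, sim_fec, beta_start, beta_end):
    """Basic variant: run BKZ tours at blocksize beta while FEC(B) > Sim-FEC(n, beta),
    then increase beta by one, up to beta_end."""
    for beta in range(beta_start, beta_end + 1):
        while fec(basis) > sim_fec[beta]:
            basis = bkz_tour(basis, beta)  # one tour at the current blocksize
    return basis
```

In a toy run where the "basis" is just its FEC value and each tour halves it, the loop terminates once the value falls below the final target.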
Remark 4
In the basic variant of our progressive BKZ described in this section, we increase the blocksize \(\beta \) in increments of one in Step 2. However, we will present an optimal strategy for increasing the blocksize in Sect. 6.
5.1 \(\mathrm{Sim}\text{ }\mathrm{GS}\text{ }\mathrm{lengths}(n,\beta )\): Predicting Gram–Schmidt Lengths
In the following, we construct a simulator for predicting the Gram–Schmidt lengths \(\Vert \mathbf{b}^*_i\Vert \) obtained from the plain BKZ algorithm of blocksize \(\beta \).
Our simulator consists of two phases. First, we generate approximate GS-lengths using the Gaussian heuristic; we then modify them at the first and last indexes, following the GSA discussion in Sect. 3.1. We explain below how to compute \((\ell _1,\ldots ,\ell _n)\) as the output of \(\mathrm{Sim}\text{ }\mathrm{GS}\text{ }\mathrm{lengths}(n,\beta )\).
First Phase: Our simulator computes the initial value of \((\ell _1,\ldots ,\ell _n)\).
We start from the last index by setting \(\ell _n=1\), and compute \(\ell _i\) backwards. From Eqs. (2) and (7) we are able to simulate the GSlengths \(\ell _i\) by solving the following equation of \(\ell _i\):
Here, \(\alpha \) is the optimized radius parameter in Sect. 4.3 and \(\tau _{\beta '}\) is the coefficient of the modified Gaussian heuristic.
This simple simulation in the first phase is sufficient for smaller blocksizes (\(\beta <30\)). However, for simulating larger blocksizes, we must modify the GSlengths of the first and last indexes in Sect. 3.1.
Second Phase: To modify the results of the simple simulation, we consider our two modifying methods described in Sect. 4.3. We recall that \(\mathrm {MINCost}(\beta )\) is the standard value of the enumeration cost of blocksize \(\beta \).
We first consider the modification for the last indexes \(i>n-\beta +1\), i.e., the situation in which the blocksize is smaller than \(\beta \). We select the modified probability \(p_i\) at index i so that \(\mathrm{Sim}\text{ }\mathrm{ENUMCost}(\ell _i,\ldots ,\ell _n;\alpha _i,p_i) = \mathrm {MINCost}(\beta )\), where \(\ell _i,\ldots ,\ell _n\) are the result of the first simulation, and we use \(\alpha _i = (2/p_i)^{1/(n-i+1)}\). After all \((\alpha _i,p_i)\) for \(n-\beta +1\le i \le n\) are fixed, we modify the GS-lengths by solving the following equation in \(\ell _i\) again:
Next, using the modified \((\ell _1,\ldots ,\ell _n)\), we again modify the first indexes as follows. We determine an integer parameter \(b>0\) for the size of the enlargement. For \(b=1,2,\ldots \), we reset the blocksize at index i as \(\beta _i := \beta + \max \{ (b-i+1)/2,\; b-2(i-1) \} \) for \(i \in \{1,\ldots ,b\}\). Using these blocksizes, we recompute the GS-lengths by solving Eq. (13) from \(i=\beta _i\) down to 1. Then, we compute \(\mathrm{Sim}\text{ }\mathrm{ENUMCost}(\ell _1,\ldots ,\ell _{\beta +b};\alpha ,p)\). We select the maximum b such that this simulated enumeration cost is smaller than \(2\cdot \mathrm {MINCost}(\beta )\).
Experimental Result of Our GS-lengths Simulator: We performed experiments on the GS-lengths of random lattices from the Darmstadt SVP Challenge [50]. We computed the GS-lengths for dimensions 120, 150, and 200 using the proposed progressive BKZ algorithm, with ending blocksizes of 40, 60, and 100, respectively (note that the starting blocksize is irrelevant to the quality of the GS-lengths). The simulated results are shown in Fig. 7. Almost all small squares of the computed GS-lengths lie on the bold line obtained by our simulation; thus our simulator can precisely predict the GS-lengths of these lattices. The progress of the first vector, for 300-dimensional lattices, is also shown in the figure.
5.2 Expected Number of BKZ Tours at Step 3
At Step 3 in the proposed algorithm (Fig. 6) we iterate the BKZ tour with blocksize \(\beta \) as long as the full enumeration cost \(\mathrm {FEC}(B)\) is larger than the simulated cost \(\mathrm{Sim}\text{ }\mathrm{FEC}(n,\beta )\). In the following we estimate the expected number of BKZ tours (we denote it as \(\sharp tours\)) at blocksize \(\beta \).
In order to estimate \(\sharp tours\), we first compute \((\ell _1,\ldots ,\ell _n)\) as the output of \(\mathrm{Sim}\text{ }\mathrm{GS}\text{ }\mathrm{lengths}(n,\beta -1)\), and update it using the modified Chen–Nguyen BKZ 2.0 simulator described in Sect. 3.2 until \(\mathrm{Sim}\text{ }\mathrm{FEC}(\ell _1,\ldots ,\ell _n)\) becomes smaller than \(\mathrm{Sim}\text{ }\mathrm{FEC}(n,\beta )\). We simulate a BKZ tour by updating the pairs \((\ell _i,\ell _{i+1})\) to \((\ell '_i,\ell '_{i+1})\) for \(i=1,\ldots ,n-1\) according to the following rule:
In the simulation of the t-th BKZ tour, write the input GS-lengths as \((\ell '_1,\ldots ,\ell '_n)\), i.e., the output of the \((t-1)\)-th BKZ tour. We further denote the output of the t-th BKZ tour by \((\ell _1,\ldots ,\ell _n)\). Suppose they satisfy
Then, our estimation of \(\sharp tours\) is the interpolated value:
Note that we can use this estimation for other BKZ strategies, although above we estimate the number of BKZ tours from a BKZ\((\beta -1)\) basis to a BKZ\(\beta \) basis using the BKZ\(\beta \) algorithm. We will estimate the tours for other combinations of starting and ending blocksizes and use them in the algorithm.
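Since the interpolation formula itself is displayed as an equation above, the following sketch records one plausible reading of it: the fractional tour count at which the simulated FEC first crosses the target, interpolated linearly in log scale between consecutive tours. This is an assumption for illustration, not a verbatim transcription of the paper's formula.

```python
import math

def estimate_tours(sim_fec_per_tour, target_fec):
    """Fractional number of tours at which the simulated FEC drops below the
    target; linear interpolation of log(FEC) between consecutive tours."""
    for t in range(1, len(sim_fec_per_tour)):
        prev, cur = sim_fec_per_tour[t - 1], sim_fec_per_tour[t]
        if cur <= target_fec < prev:
            frac = (math.log(prev) - math.log(target_fec)) / (math.log(prev) - math.log(cur))
            return (t - 1) + frac
    return float(len(sim_fec_per_tour) - 1)  # target not reached within the simulation
```

For a FEC sequence falling by two orders of magnitude per tour, a target halfway between two tours (in log scale) yields a half-integer tour count.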
6 Our Progressive BKZ: Optimizing Blocksize Strategy
In this section, we propose a method for optimally increasing the blocksize \(\beta \) in the proposed progressive BKZ algorithm. Several heuristic strategies for increasing the blocksize have been proposed; the following sequences of blocksizes after LLL reduction have been used in the previous literature:
The timings for changing to the next blocksize were not explicitly given; these strategies sometimes continue the BKZ tour until no update occurs, as in the original BKZ. In this section we try to find the sequence of blocksizes that minimizes the total cost of the progressive BKZ to find a BKZ\(\beta \) reduced basis. To find this strategy, we consider all possible combinations of blocksizes used in our BKZ algorithm and the timings at which to increase them.
Notations on Blocksize Strategy: We say a lattice basis B of dimension n is \(\beta \)reduced when \(\mathrm {FEC}(B)\) is smaller than \(\mathrm{Sim}\text{ }\mathrm{FEC}(n,\beta )\). For a tuple of blocksizes \((\beta ^{alg},\beta ^{start},\beta ^{goal})\) satisfying \(2\le \beta ^{start} < \beta ^{goal} \le \beta ^{alg}\), the notation
denotes the following BKZ process. The input is a \(\beta ^{start}\)-reduced basis B, and the algorithm updates B using tours of the BKZ\(\beta ^{alg}\) algorithm with the parameters in Sect. 4.3. It stops when \(\mathrm {FEC}(B) < \mathrm{Sim}\text{ }\mathrm{FEC}(n,\beta ^{goal})\).
\(\mathrm {TimeBKZ}(n,\beta ^{start} {\,\mathop {\rightarrow }\limits ^{{\beta ^{alg}}}\,}\beta ^{goal})\) is the computing time in seconds of this algorithm. We provide a concrete simulating procedure in this section and the next. We assume that \(\mathrm {TimeBKZ}\) is a function of \(n,\beta ^{alg},\beta ^{start}\) and \(\beta ^{goal}\).
To obtain a BKZ\(\beta \) reduced basis from an LLL-reduced basis, many blocksize strategies can be considered, as follows:
We denote this sequence as \(\{ (\beta _j^{alg},\beta ^{goal}_j) \}_{j=1,\ldots ,D}\), and regard it as the progressive BKZ given in Fig. 8.
6.1 Optimizing Blocksize Strategies
Our goal in this section is to find the optimal sequence that minimizes the total computing time
of the progressive BKZ algorithm to find a BKZ\(\beta ^{goal}_D\) basis.
Based on our experimental results, which are given in Sect. 7, we can estimate the computing time of the BKZ algorithm:
when dimension n is small (\(n < 400\)), and
when dimension n is large (\(n \ge 400\)). The difference is caused by the floating-point types used to compute the Gram–Schmidt variables in the implementation: the former and latter implementations employ \(\mathtt{quad\_float}\) and \(\mathtt{RR}\) (320 bits), respectively, where \(\mathtt{RR}\) is the arbitrary-precision floating-point type in the NTL library [49]. To compute \(\sharp tours\) we use the procedure in Sect. 5.2. The input of the \(\mathrm {ENUMCost}\) function is from \(\mathrm{Sim}\text{ }\mathrm{GS}\text{ }\mathrm{lengths}(n,\beta ^{start})\) at the first tour. From the second tour onwards, we use the GS-lengths updated by Chen–Nguyen's simulator with blocksize \(\beta ^{alg}\).
Using these computing time estimations, we discuss how to find the optimal blocksize strategy (15) that minimizes the total computing time. In this optimizing procedure, the input consists of n and \(\beta \), the lattice dimension and the goal blocksize. We denote by \(\mathrm {TimeBKZ}(n,\beta ^{goal})\) the minimized time in seconds to find a \(\beta \)-reduced basis from an LLL-reduced basis, that is, the minimum of (16) over all possible blocksize strategies. By definition, we have
where we take the minimum over the pair of blocksizes \((\beta ',\beta ^{alg})\) satisfying \(\beta ' < \beta ^{goal} \le \beta ^{alg}\).
For a given \((n,\beta )\), our optimizing algorithm computes \(\mathrm {TimeBKZ}(n,\bar{\beta })\) from small \(\bar{\beta }\) up to the target \(\bar{\beta }= \beta \). As the base case, we define \(\mathrm {TimeBKZ}(n,20)\) as the time to compute a BKZ20 reduced basis from an LLL-reduced basis using a fixed blocksize:
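The recurrence above is a shortest-path style dynamic program over goal blocksizes. The following sketch assumes a hypothetical callable `step_time(n, beta_start, beta_alg, beta_goal)` standing in for the simulated \(\mathrm {TimeBKZ}(n,\beta ^{start}\rightarrow \beta ^{goal})\), and for simplicity caps \(\beta ^{alg}\) at the target blocksize:

```python
def time_bkz(n, beta_target, step_time, base_time, beta_min=20):
    """DP for TimeBKZ(n, beta): minimize, over the last step
    (beta' -> beta^goal using BKZ-beta^alg), the optimal time to beta'
    plus the step cost. Returns (total seconds, list of steps)."""
    time = {beta_min: base_time}
    plan = {beta_min: []}
    for goal in range(beta_min + 1, beta_target + 1):
        best, best_step = float("inf"), None
        for prev in range(beta_min, goal):
            for alg in range(goal, beta_target + 1):  # cap beta^alg for the sketch
                t = time[prev] + step_time(n, prev, alg, goal)
                if t < best:
                    best, best_step = t, (prev, alg, goal)
        time[goal], plan[goal] = best, plan[best_step[0]] + [best_step]
    return time[beta_target], plan[beta_target]
```

With a toy cost model in which each step costs (goal - prev) * 1.1**alg, the optimum is the step-by-step strategy, matching the observation in Sect. 6.3 that simple step-by-step increases are often near-optimal.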
6.2 Simulating Time to Find Short Vectors in Random Lattices
In this section, we give our simulation results for finding short vectors in random lattices. For a given lattice dimension n and target length, we simulate the necessary BKZ blocksize \(\beta \) so that \(\ell _1\) of \(\mathrm{Sim}\text{ }\mathrm{GS}\text{ }\mathrm{lengths}(n,\beta )\) is smaller than the target length. Then, we simulate \(\mathrm {TimeBKZ}(n,\beta )\) using the method in Sect. 6.1.
As an example, Table 2 shows the optimized blocksize strategy and computing time to find a 102-reduced basis in dimension \(n=600\). We estimate that blocksize 102 is necessary to find a vector shorter than \(n\cdot \det (L)^{1/n}\), which is the condition for entering the Hall of Fame in the Approximate Ideal Lattice Challenge [50].
Table 3 shows the blocksize and predicted total computing time in seconds to find a vector shorter than \(n\cdot \mathrm {GH}(L)\) (this corresponds to the n-approximate SVP from the learning with errors problem [41]), \(n\cdot \det (L)^{1/n}\) (from the Approximate Ideal Lattice Challenge published in Darmstadt [50]), and \(\sqrt{n}\cdot \mathrm {GH}(L)\). For comparison, the simulated result of BKZ 2.0 for finding \(n\cdot \det (L)^{1/n}\) is also given. Recall that their estimated cost in seconds is given by \(\sharp \mathrm{ENUM} / (2\cdot 10^7)\). From Table 3, our algorithm is asymptotically faster than BKZ 2.0.
6.3 Comparing with Other Heuristic Blocksize Strategies
In this section, we compare blocksize strategies for our progressive BKZ in Fig. 8. Using a random 256-dimensional basis, we experimented with and simulated the progressive BKZ to find a BKZ128 reduced basis with the three following strategies:
(SchnorrShevchenko’s doubling strategy [48])
(Simplest stepbystep in Fig. 6)
(Optimized blocksize strategy in Fig. 8)
In our experiments, the simple and optimized strategies take about 27.1 min and about 11.5 min, respectively, to achieve a BKZ64 basis after the LLL reduction. On the other hand, Schnorr–Shevchenko's doubling strategy takes about 21 min.
After that, the doubling strategy switches to BKZ128 and takes about 14 single-core days to process the first index alone, while our strategies comfortably continue the execution of progressive BKZ.
Our simulator predicts that it takes about \(2^{25.3}\), \(2^{25.1}\) and \(2^{37.3}\) s to finish BKZ128 by the simple, optimized, and Schnorr–Shevchenko doubling strategies, respectively. Our strategy is about 5000 times faster than the doubling strategy.
Interestingly, we find in many simulations that the computing time of the simple blocksize strategy is close to that of the optimized strategy when the blocksize is larger than about 100. Hence, the simple blocksize strategy may be preferable to the optimized blocksize strategy in practice, because the latter requires heavy precomputation as in Sect. 6.1.
7 Our Implementation and Cost Estimation for Processing Local Blocks
In this section we describe how to derive the estimates of the computing times in Eqs. (17) and (18) for Steps 3–10 in Fig. 6. Note that due to the page limitation, we omit most of the detailed description; see the full version [8].
The total computing time is the sum of the times to process the local blocks (corresponding to Steps 5–8 in Fig. 6):
We constructed our model of computing time for small dimensional lattices (\(dim<400\)) as follows.
And for the large dimensions as
In this section, we conduct the computer experiments with the simple blocksize strategy:
using a lattice generated by the SVP Challenge problem generator, and then estimate the unknown variables \(W_1\), \(W_2\), \(A_1\) and \(A_2\) from the experimental computing time after BKZ55, i.e., \(\beta ^{start}=55\).
We find the suitable coefficients \((A_1, W_1)\) by using the standard curve fitting method in semilog scale, which minimize
where \(T(dim, \beta , A_1, W_1) = Time_{Sim{\text{ }}small}(dim, \beta , A_1, W_1)\) in the small-dimensional situation. For the large-dimensional situation, we use the results for \(dim \in \{ 600,800\}\) to fix \(A_2\) and \(W_2\).
We find suitable coefficients
The fitting results are given in Fig. 9. Using Eqs. (20) and (21) with the above coefficients (22), we can estimate the computing time of our progressive BKZ algorithm.
8 Pre/PostProcessing the Entire Basis
In this section, we consider an extended strategy that enhances the speed of our progressive BKZ by pre/post-processing the entire basis.
In the preprocessing, we first generate a number of randomized bases from the input basis. Each basis is then reduced using the proposed progressive BKZ algorithm. Finally, in the postprocessing, we perform the enumeration algorithm on each reduced basis with some low probability. This strategy is essentially the same as the extreme pruning technique [20]. However, it is important to note that we do not generate randomized bases inside the progressive BKZ. Our simulator for the proposed progressive BKZ is so precise that we can also estimate the speedup from this pre/post-processing.
8.1 Algorithm for Finding Nearly Shortest Vectors
In the following, we construct an algorithm for finding a vector shorter than \(\gamma \cdot \mathrm {GH}(L)\) with a reasonable probability using the strategy above, and we analyze the total computing time using our simulator for the BKZ algorithm.
Concretely, for a given lattice basis B of dimension n, the preprocessing part generates M randomized bases \(B_i = U_i B\) by multiplying by unimodular matrices \(U_i\) for \(i = 1,\ldots ,M\). Next, we apply our progressive BKZ to find the BKZ\(\beta \) reduced bases. The cost to obtain the randomized reduced bases is estimated by \(M\cdot (\mathrm{TimeRandomize}(n) + \mathrm {TimeBKZ}(n,\beta ))\). Here, \(\mathrm{TimeRandomize}\) includes the cost of generating a random unimodular matrix and the matrix multiplication, which is in general negligibly smaller than \(\mathrm {TimeBKZ}\). Thus we assume the computational cost of the lattice reduction is \(M\cdot \mathrm {TimeBKZ}(n,\beta )\).
Finally, in the postprocessing part, we execute the standard enumeration algorithm with the searching radius parameter \(\alpha =\gamma \) and probability parameter \(p=2\cdot \gamma ^{-n}/M\). By an argument similar to that in Sect. 4.1, there exist about \(\gamma ^n /2\) short vector pairs in \(\mathrm{Ball}_n(\gamma \cdot \mathrm {GH}(L))\). Therefore, the probability that one enumeration finds the desired vector is about \((\gamma ^n /2) \cdot (2\cdot \gamma ^{-n}/M) = 1/M\), and the total probability of success is \(1-(1-1/M)^M \approx 0.632\).
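A tiny numeric check of this probability argument, under the same \(p=2\gamma ^{-n}/M\) setting:

```python
def post_enum_parameters(n, gamma, M):
    """Per-basis pruning probability p = 2 * gamma^{-n} / M and the overall
    success probability 1 - (1 - 1/M)^M over M randomized bases."""
    p = 2.0 * gamma ** (-n) / M
    success = 1.0 - (1.0 - 1.0 / M) ** M
    return p, success
```

As M grows, the success probability tends to \(1-1/e \approx 0.632\), independently of n and \(\gamma \).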
Consequently, the total computing cost in our model is
where \(\mathrm {TimeBKZ}(n, \beta )\) and \(\mathrm {ENUMCost}(B;\gamma ,p)\) are defined in Sect. 6.1 and Sect. 2.3, respectively. We can optimize the total cost by finding the minimum of formula (23) over the parameters \((\beta ,M)\). Here, note that the constant \(6\cdot 10^7\) comes from our best benchmarking record of lattice enumeration. In Table 4, we provide the detailed simulation results with \(\gamma =1.05\) to analyze the hardness of the Darmstadt SVP Challenge in several dimensions. A comparison with previous works is given in Sect. 9 (see line C in Fig. 10).
8.2 Lower Bound of the Cost by an Idealized Algorithm
Here we discuss the lower bound of the total computing cost of the proposed progressive BKZ algorithm (or other reduction algorithm) with the pre/postprocessing.
The total cost is estimated as the sum of the computational times for the randomization, the progressive BKZ algorithm, and the enumeration algorithm under the following extremely idealized assumptions. We believe they are beyond the most powerful cryptanalysis achievable in the future, and thus we call the result the lower bound in our model.
(a) The cost of the randomization becomes negligibly small. The algorithm for randomizing the basis need not be restricted to multiplying by random unimodular matrices, and we assume an ideal randomization with negligibly small cost. Thus, \(\mathrm{TimeRandomize}(n)=0\).
(b) The cost of the progressive BKZ algorithm does not become lower than that of computing the Gram–Schmidt lengths. Even if the progressive BKZ algorithm were ideally improved, we would always need the Gram–Schmidt basis computation used in the enumeration algorithm or the LLL algorithm. The computation of the Gram–Schmidt basis (even when performed approximately using floating-point operations with sufficient precision) requires \(\varTheta (n^3)\) floating-point arithmetic operations via the Cholesky factorization algorithm (see, for example, [38, Chapter 5]). A modern CPU can perform a floating-point operation in one clock cycle and can run at about 4.0 GHz. Thus, we assume that the lower bound of the time in seconds is \((4.0\cdot 10^9)^{-1} \cdot n^3\).
(c) The reduced basis obtained by the progressive BKZ (or another reduction algorithm) becomes ideally reduced. We define the simulated \(\gamma \)-approximate HKZ basis \(B_{\gamma \text{ }HKZ}\) as a basis satisfying
For any fixed \(\gamma \) and p, we assume this basis minimizes the cost of enumeration over all bases satisfying \(\Vert \mathbf{b}_1 \Vert \ge \gamma \cdot \mathrm {GH}(L)\).
Therefore, the lower bound of the total cost of the idealized algorithm in seconds is given by
Setting \(\gamma =1.05\), we analyze the lower bound of the cost to enter the SVP Challenge (see line D in Fig. 10).
9 Simulation Results for SVP Challenge and Comparison
In this section, we give our simulation results using our proposed progressive BKZ algorithm together with the pre/postprocessing strategy in Sect. 8.1 for solving the Darmstadt SVP Challenge [50], which tries to find a vector shorter than \(1.05 \cdot \mathrm {GH}(L)\) in the random lattice L of dimension n.
We also simulate the cost estimations of Lindner and Peikert [32] and of Chen and Nguyen [13] in the same model. The summary of our simulation results and the latest records published in the SVP Challenge are given in Fig. 10. Outlines of our estimations A to D in Fig. 10 are given below.
From our simulation, the proposed progressive BKZ algorithm is about 50 times faster than BKZ 2.0 and about 100 times slower than the idealized algorithm that achieves the lower bound in our model of Sect. 8.2.
A: Lindner–Peikert's Estimation [32]: From experiments using the BKZ implementation in the NTL library [49], they estimated that the BKZ algorithm can find a short vector of length \(\delta ^n \det (L)^{1/n}\) in \(2^{1.8/\log _2(\delta ) - 110}\) seconds in an n-dimensional lattice. The computing time in Lindner–Peikert's model becomes
because this \(\delta \) attains \(1.05 \cdot \mathrm {GH}(L)=\delta ^n \det (L)^{1/n}\).
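For illustration, this estimate can be evaluated as follows; the expression for \(\delta \) uses \(\mathrm {GH}(L)=V_n(1)^{-1/n}\det (L)^{1/n}\), and the code is a sketch of our reading of the model rather than the original authors' implementation.

```python
import math

def lindner_peikert_seconds(n):
    """Seconds 2^{1.8/log2(delta) - 110}, with delta solving
    1.05 * GH(L) = delta^n * det(L)^{1/n}, i.e. delta^n = 1.05 * V_n(1)^{-1/n}."""
    log_vn1 = (n / 2) * math.log(math.pi) - math.lgamma(n / 2 + 1)  # ln V_n(1)
    log2_delta = (math.log(1.05) - log_vn1 / n) / (n * math.log(2))
    return 2.0 ** (1.8 / log2_delta - 110)
```

Note that \(\delta \) shrinks toward 1 as n grows, so the predicted time increases rapidly with the dimension.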
B: Chen–Nguyen's BKZ 2.0 [13, 14]: We estimated the cost of BKZ 2.0 using the simulator in Sect. 3.2. Following the original paper [13], we assume that the blocksize is fixed, and the estimation is the minimum of (4) over all possible pairs of the blocksize \(\beta \) and the number t of tours. Again, to convert the number of nodes into single-threaded time, we divide it by \(2\cdot 10^7\).
C: Our Estimation: We searched for the minimum cost using the estimation (23) over M and \(\beta \), setting \(\gamma =1.05\).
D: Lower Bound in Our Model: We searched for the minimum cost using the estimation (24) over M, setting \(\gamma =1.05\).
Records of the SVP Challenge: From the Hall of Fame of the SVP Challenge [50] and the reporting paper [18], we list in Fig. 10 the records that include the computing time with a single thread, as black circles "\(\bullet \)". Moreover, we performed experiments with our proposed progressive BKZ algorithm using the pre/post-processing strategy in Sect. 8.1 up to 123 dimensions; these are indicated by the white circles "\(\circ \)" in Fig. 10.
10 Conclusions and Future Work
We proposed an improved progressive BKZ algorithm with optimized parameters and a blocksize-increasing strategy. We also gave a simulator that can precisely predict the Gram–Schmidt lengths computed by the proposed progressive BKZ. We further presented an efficient implementation of the enumeration algorithm and the LLL algorithm, and the total cost of the proposed progressive BKZ algorithm was precisely evaluated by our sharp simulator.
Moreover, we showed a comparison with other algorithms by simulating the cost of solving the instances from the Darmstadt SVP Challenge. Our progressive BKZ algorithm is about 50 times faster than the BKZ 2.0 proposed by Chen and Nguyen for solving the SVP Challenge up to 160 dimensions. Finally, we discussed a computational lower bound of the proposed progressive BKZ algorithm under certain idealized assumptions. These simulation results contribute to estimating the secure parameter sizes used in lattice-based cryptography.
We outline some future work: (1) constructing a BKZ simulator without using our \(\mathrm {ENUMCost}\); (2) adapting our simulator to other strategies, such as a BKZ-then-sieve strategy, for computing a short vector more efficiently; and (3) estimating the secure key lengths of lattice-based cryptosystems using the lower bound of the proposed progressive BKZ.
References
Aggarwal, D., Dadush, D., Regev, O., StephensDavidowitz, N.: Solving the shortest vector problem in \(2^n\) time using discrete Gaussian sampling: extended abstract. In: STOC 2015, pp. 733–742 (2015)
Ajtai, M.: The shortest vector problem in \(L_2\) is NPhard for randomized reductions. In: STOC, pp. 10–19 (1998)
Ajtai, M.: The worstcase behavior of Schnorr’s algorithm approximating the shortest nonzero vector in a lattice. In: STOC, pp. 396–406 (2003)
Albrecht, M.R., Fitzpatrick, R., Göpfert, F.: On the efficacy of solving LWE by reduction to UniqueSVP. In: Lee, H.S., Han, D.G. (eds.) ICISC 2013. LNCS, vol. 8565, pp. 293–310. Springer, Heidelberg (2014)
Albrecht, M.R., Player, R., Scott, S.: On the concrete hardness of learning with errors. J. Math. Cryptology 9(3), 169–203 (2015)
Aono, Y.: A faster method for computing GamaNguyenRegev’s extreme pruning coefficients (2014). arXiv:1406.0342
Aono, Y., Boyen, X., Phong, L.T., Wang, L.: Keyprivate proxy reencryption under LWE. In: Paul, G., Vaudenay, S. (eds.) INDOCRYPT 2013. LNCS, vol. 8250, pp. 1–18. Springer, Heidelberg (2013)
Aono, Y., Wang, Y., Hayashi, T., Takagi, T.: Improved progressive BKZ algorithms and their precise cost estimation by sharp simulator. In: IACR Cryptology ePrint Archive 2016: 146 (2016)
Bai, S., Galbraith, S.D.: Lattice decoding attacks on binary LWE. In: Susilo, W., Mu, Y. (eds.) ACISP 2014. LNCS, vol. 8544, pp. 322–337. Springer, Heidelberg (2014)
Brakerski, Z., Vaikuntanathan, V.: Fully homomorphic encryption from RingLWE and security for key dependent messages. In: Rogaway, P. (ed.) CRYPTO 2011. LNCS, vol. 6841, pp. 505–524. Springer, Heidelberg (2011)
Buchmann, J., Ludwig, C.: Practical lattice basis sampling reduction. In: Hess, F., Pauli, S., Pohst, M. (eds.) ANTS 2006. LNCS, vol. 4076, pp. 222–237. Springer, Heidelberg (2006)
Chen, Y.: Réduction de réseau et sécurité concrète du chiffrement complètement homomorphe, Doctoral dissertation (2013)
Chen, Y., Nguyen, P.Q.: BKZ 2.0: better lattice security estimates. In: Lee, D.H., Wang, X. (eds.) ASIACRYPT 2011. LNCS, vol. 7073, pp. 1–20. Springer, Heidelberg (2011)
Chen, Y., Nguyen, P.Q.: BKZ 2.0: better lattice security estimates, the full version. http://www.di.ens.fr/~ychen/research/Full_BKZ.pdf
Coppersmith, D., Shamir, A.: Lattice attacks on NTRU. In: Fumy, W. (ed.) EUROCRYPT 1997. LNCS, vol. 1233, pp. 52–61. Springer, Heidelberg (1997)
Fincke, U., Pohst, M.: Improved methods for calculating vectors of short length in a lattice, including a complexity analysis. Math. Comp. 44, 463–471 (1985)
Fischlin, R., Seifert, J.P.: Tensor-based trapdoors for CVP and their application to public key cryptography. In: Walker, M. (ed.) Cryptography and Coding 1999. LNCS, vol. 1746, pp. 244–257. Springer, Heidelberg (1999)
Fukase, M., Kashiwabara, K.: An accelerated algorithm for solving SVP based on statistical analysis. J. Inf. Process. 23(1), 67–80 (2015)
Gama, N., Nguyen, P.Q.: Predicting lattice reduction. In: Smart, N.P. (ed.) EUROCRYPT 2008. LNCS, vol. 4965, pp. 31–51. Springer, Heidelberg (2008)
Gama, N., Nguyen, P.Q., Regev, O.: Lattice enumeration using extreme pruning. In: Gilbert, H. (ed.) EUROCRYPT 2010. LNCS, vol. 6110, pp. 257–278. Springer, Heidelberg (2010)
Garg, S., Gentry, C., Halevi, S.: Candidate multilinear maps from ideal lattices. In: Johansson, T., Nguyen, P.Q. (eds.) EUROCRYPT 2013. LNCS, vol. 7881, pp. 1–17. Springer, Heidelberg (2013)
Gentry, C.: Fully homomorphic encryption using ideal lattices. In: STOC 2009, pp. 169–178 (2009)
Hanrot, G., Pujol, X., Stehlé, D.: Analyzing blockwise lattice algorithms using dynamical systems. In: Rogaway, P. (ed.) CRYPTO 2011. LNCS, vol. 6841, pp. 447–464. Springer, Heidelberg (2011)
Hanrot, G., Stehlé, D.: Improved Analysis of Kannan’s shortest lattice vector algorithm. In: Menezes, A. (ed.) CRYPTO 2007. LNCS, vol. 4622, pp. 170–186. Springer, Heidelberg (2007)
Haque, M., Rahman, M.O., Pieprzyk, J.: Analysing progressive-BKZ lattice reduction algorithm. In: NCICIT 2013, pp. 73–80 (2013)
Hoffstein, J., Pipher, J., Silverman, J.H.: An Introduction to Mathematical Cryptography. Springer-Verlag, New York (2008)
Ishiguro, T., Kiyomoto, S., Miyake, Y., Takagi, T.: Parallel Gauss Sieve algorithm: solving the SVP challenge over a 128-dimensional ideal lattice. In: Krawczyk, H. (ed.) PKC 2014. LNCS, vol. 8383, pp. 411–428. Springer, Heidelberg (2014)
Kannan, R.: Improved algorithms for integer programming and related lattice problems. In: STOC, pp. 193–206 (1983)
Korkine, A., Zolotareff, G.: Sur les formes quadratiques. Math. Ann. 6(3), 366–389 (1873)
Lenstra, A.K., Lenstra Jr., H.W., Lovász, L.: Factoring polynomials with rational coefficients. Math. Ann. 261(4), 515–534 (1982)
Lepoint, T., Naehrig, M.: A comparison of the homomorphic encryption schemes FV and YASHE. In: Pointcheval, D., Vergnaud, D. (eds.) AFRICACRYPT 2014. LNCS, vol. 8469, pp. 318–335. Springer, Heidelberg (2014)
Lindner, R., Peikert, C.: Better key sizes (and attacks) for LWE-based encryption. In: Kiayias, A. (ed.) CT-RSA 2011. LNCS, vol. 6558, pp. 319–339. Springer, Heidelberg (2011)
Liu, M., Nguyen, P.Q.: Solving BDD by enumeration: an update. In: Dawson, E. (ed.) CT-RSA 2013. LNCS, vol. 7779, pp. 293–309. Springer, Heidelberg (2013)
Micciancio, D.: The shortest vector problem is NP-hard to approximate to within some constant. In: FOCS, pp. 92–98 (1998)
Nguyên, P.Q.: Cryptanalysis of the Goldreich-Goldwasser-Halevi cryptosystem from Crypto ’97. In: Wiener, M. (ed.) CRYPTO 1999. LNCS, vol. 1666, pp. 288–304. Springer, Heidelberg (1999)
Nguyên, P.Q., Regev, O.: Learning a parallelepiped: cryptanalysis of GGH and NTRU signatures. In: Vaudenay, S. (ed.) EUROCRYPT 2006. LNCS, vol. 4004, pp. 271–288. Springer, Heidelberg (2006)
Nguyên, P.Q., Stehlé, D.: LLL on the average. In: Hess, F., Pauli, S., Pohst, M. (eds.) ANTS 2006. LNCS, vol. 4076, pp. 238–256. Springer, Heidelberg (2006)
Nguyen, P.Q., Vallée, B.: The LLL Algorithm: Survey and Applications. Springer-Verlag, Heidelberg (2009)
Plantard, T., Schneider, M.: Creating a challenge for ideal lattices. In: IACR Cryptology ePrint Archive 2013: 039 (2013)
Plantard, T., Susilo, W., Zhang, Z.: Adaptive precision floating point LLL. In: Boyd, C., Simpson, L. (eds.) ACISP 2013. LNCS, vol. 7959, pp. 104–117. Springer, Heidelberg (2013)
Regev, O.: On lattices, learning with errors, random linear codes, and cryptography. J. ACM, 56(6), Article no. 34 (2009)
Rogers, C.A.: The number of lattice points in a set. Proc. London Math. Soc. 3(6), 305–320 (1956)
Schnorr, C.P.: A hierarchy of polynomial time lattice basis reduction algorithms. Theor. Comput. Sci. 53(2–3), 201–224 (1987)
Schnorr, C.P.: Lattice reduction by random sampling and birthday methods. In: Alt, H., Habib, M. (eds.) STACS 2003. LNCS, vol. 2607, pp. 145–156. Springer, Heidelberg (2003)
Schnorr, C.P.: Accelerated and Improved Slide and LLL-Reduction. ECCC TR11-050 (2011)
Schnorr, C.P., Euchner, M.: Lattice basis reduction: improved practical algorithms and solving subset sum problems. Math. Program. 66(1–3), 181–199 (1994)
Schnorr, C.P., Hörner, H.H.: Attacking the Chor-Rivest cryptosystem by improved lattice reduction. In: Guillou, L.C., Quisquater, J.J. (eds.) EUROCRYPT 1995. LNCS, vol. 921, pp. 1–12. Springer, Heidelberg (1995)
Schnorr, C.P., Shevchenko, T.: Solving subset sum problems of density close to 1 by “randomized” BKZ-reduction. In: IACR Cryptology ePrint Archive 2012: 620 (2012)
Shoup, V.: NTL: a library for doing number theory. http://www.shoup.net/ntl/
TU Darmstadt Lattice Challenge. http://www.latticechallenge.org/
van de Pol, J., Smart, N.P.: Estimating key sizes for high dimensional lattice-based systems. In: Stam, M. (ed.) IMACC 2013. LNCS, vol. 8308, pp. 290–303. Springer, Heidelberg (2013)
© 2016 International Association for Cryptologic Research
Aono, Y., Wang, Y., Hayashi, T., Takagi, T. (2016). Improved Progressive BKZ Algorithms and Their Precise Cost Estimation by Sharp Simulator. In: Fischlin, M., Coron, J.-S. (eds.) Advances in Cryptology – EUROCRYPT 2016. Lecture Notes in Computer Science, vol. 9665. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-49890-3_30
Print ISBN: 978-3-662-49889-7
Online ISBN: 978-3-662-49890-3