A Survey of Solving SVP Algorithms and Recent Strategies for Solving the SVP Challenge

Recently, lattice-based cryptography has received attention as a candidate for post-quantum cryptography (PQC). The essential security of lattice-based cryptography rests on the hardness of classical lattice problems such as the shortest vector problem (SVP) and the closest vector problem (CVP). A number of algorithms have been proposed for solving SVP exactly or approximately, and most of them are also useful for solving CVP. In this paper, we give a survey of typical algorithms for solving SVP from a mathematical point of view. We also present recent strategies for solving the Darmstadt SVP challenge in dimensions higher than 150.


Introduction
There has recently been a substantial amount of research on large-scale quantum computers. If such computers were built, they could break currently used public-key cryptosystems such as the RSA cryptosystem and elliptic curve cryptography. (See Shor 1994 for Shor's quantum algorithms.) In order to prepare information security systems that can resist quantum computing, the US National Institute of Standards and Technology (NIST) began a process to develop new standards for PQC in 2015 and called for proposals in 2016. Research on lattice-based cryptography as a candidate for PQC has accelerated rapidly. Specifically, by the submission deadline at the end of November 2017, NIST had received more than 20 proposals of lattice-based cryptosystems. Among them, more than 10 proposals advanced to Round 2 around the end of January 2019. (See the web page of NIST 2016.) The security of such proposals relies on the hardness of cryptographic lattice problems such as learning with errors (LWE) and NTRU. Such problems reduce to approximate-SVP or approximate-CVP. (For example, see Albrecht et al. 2018 for details.) Therefore, it is becoming increasingly important to understand classical lattice problems in order to evaluate the security of lattice-based PQC candidates.
For a positive integer n, a (full-rank) lattice L in R^n is the set of all integral linear combinations of linearly independent vectors b_1, ..., b_n in R^n. (The set of the b_i's is called a basis of L.) Given a basis of a lattice L, SVP asks to find the shortest non-zero vector in L. In this paper, we give a survey of typical algorithms for solving SVP from a mathematical point of view. These algorithms can be classified into two categories, depending on whether they solve SVP exactly or approximately. Exact-SVP algorithms perform an exhaustive search for an integer combination of the basis vectors b_i giving the shortest non-zero lattice vector v = Σ_{i=1}^n v_i b_i ∈ L, and their cost is expensive. In contrast, approximate-SVP algorithms are much faster than exact algorithms, but they find short lattice vectors, not necessarily the shortest ones. However, exact- and approximate-SVP algorithms are complementary. For example, exact algorithms apply an approximation algorithm as preprocessing to reduce their expensive cost, while several approximate-SVP algorithms call an exact algorithm in low dimension many times as a subroutine to find a very short lattice vector. In this paper, we also introduce recent strategies for solving the Darmstadt SVP challenge Darmstadt (2010), in which sample lattice bases are presented in order to test algorithms solving SVP. In particular, these strategies combine approximate- and exact-SVP algorithms to efficiently solve SVP in high dimensions such as n ≥ 150.
Notation. The symbols Z, Q, and R denote the ring of integers, the field of rational numbers, and the field of real numbers, respectively. Let ⌈z⌋ denote the integer closest to z ∈ R. We represent all vectors in column format. For a = (a_1, ..., a_n)^⊤ ∈ R^n, let ‖a‖ denote its Euclidean norm. For a = (a_1, ..., a_n)^⊤ and b = (b_1, ..., b_n)^⊤, let ⟨a, b⟩ denote the inner product Σ_{i=1}^n a_i b_i. Denote by V_n(R) the volume of the n-dimensional ball of radius R > 0 centered at the origin. In particular, we let ν_n = V_n(1) denote the volume of the unit ball. Then V_n(R) = ν_n R^n and ν_n = π^{n/2}/Γ(1 + n/2) ∼ (1/√(πn)) (2πe/n)^{n/2} using Stirling's formula, where Γ(s) = ∫_0^∞ t^{s−1} e^{−t} dt denotes the Gamma function.

Mathematical Background
In this section, we introduce basic definitions and properties of lattices, and present famous lattice problems whose hardness ensures the essential security of lattice-based cryptography. (For example, see Galbraith 2012, Part IV or Nguyen 2009 for more details.)

Lattices and Their Bases
For a positive integer n, let b_1, ..., b_n be n linearly independent (column) vectors in R^n. The set of all integral linear combinations of the b_i's, L = {Σ_{i=1}^n v_i b_i : v_i ∈ Z}, is a (full-rank) lattice, and B = (b_1, ..., b_n) is called a basis of L. (A basis is regarded not only as a set of vectors, but also as a matrix whose column vectors span a lattice.) Every lattice has infinitely many bases if n ≥ 2; if two bases B_1 and B_2 span the same lattice, then there exists an n × n unimodular matrix U ∈ GL_n(Z) with B_1 = B_2 U.
The volume of L is defined as vol(L) = |det(B)|, independent of the choice of basis. The Gram-Schmidt orthogonalization for an (ordered) basis B = (b_1, ..., b_n) is the orthogonal family B* = (b*_1, ..., b*_n) defined recursively by b*_1 = b_1 and b*_i = b_i − Σ_{j=1}^{i−1} μ_{i,j} b*_j, where μ_{i,j} = ⟨b_i, b*_j⟩/‖b*_j‖² for 1 ≤ j < i ≤ n. Notice that the Gram-Schmidt vectors b*_i depend on the order of the basis vectors in B. For convenience, set μ = (μ_{i,j}) ∈ R^{n×n}, where we let μ_{i,j} = 0 for all i < j and μ_{k,k} = 1 for all k. Then B = B* μ^⊤, and thus vol(L) = Π_{i=1}^n ‖b*_i‖ from the orthogonality of the Gram-Schmidt vectors. For 2 ≤ ℓ ≤ n, let π_ℓ denote the orthogonal projection onto the orthogonal complement of the R-vector space ⟨b_1, ..., b_{ℓ−1}⟩_R, given explicitly by π_ℓ(x) = Σ_{j=ℓ}^n (⟨x, b*_j⟩/‖b*_j‖²) b*_j. Every projection map depends on the basis. We also set π_1 = id for convenience.
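The Gram-Schmidt recursion above can be sketched in a few lines of exact rational arithmetic; this is our own illustrative implementation (names such as `gram_schmidt` are ours, and basis vectors are stored as rows for convenience):

```python
from fractions import Fraction

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(B):
    """Exact GSO of the rows of B: b*_i = b_i - sum_{j<i} mu_{i,j} b*_j."""
    n = len(B)
    Bstar, mu = [], [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        b = [Fraction(x) for x in B[i]]
        for j in range(i):
            # mu_{i,j} = <b_i, b*_j> / ||b*_j||^2
            mu[i][j] = dot(B[i], Bstar[j]) / dot(Bstar[j], Bstar[j])
            b = [x - mu[i][j] * y for x, y in zip(b, Bstar[j])]
        mu[i][i] = Fraction(1)
        Bstar.append(b)
    return Bstar, mu
```

For B = ((3,1), (2,2)) one gets b*_1 = (3,1), μ_{2,1} = 4/5, b*_2 = (−2/5, 6/5), and ‖b*_1‖²·‖b*_2‖² = 16 = det(B)², illustrating vol(L) = Π ‖b*_i‖.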

Successive Minima, Hermite's Constants, and Gaussian Heuristic
For every 1 ≤ i ≤ n, the ith successive minimum of an n-dimensional lattice L, denoted by λ_i(L), is defined as the minimum of max_{1≤j≤i} ‖v_j‖ over all i linearly independent vectors v_1, ..., v_i ∈ L. In particular, the first minimum λ_1(L) is the norm of the shortest non-zero vector in L. We clearly have λ_1(L) ≤ λ_2(L) ≤ ··· ≤ λ_n(L) by definition. Moreover, for any basis B = (b_1, ..., b_n) of L, its Gram-Schmidt vectors satisfy λ_i(L) ≥ min_{i≤j≤n} ‖b*_j‖ for every 1 ≤ i ≤ n. (See Bremner 2011, Proposition 3.14 for a proof.)

Hermite (1850) first proved that the quantity λ_1(L)²/vol(L)^{2/n} is upper bounded over all lattices L of dimension n. Its supremum over all lattices of dimension n is called Hermite's constant of dimension n, denoted by γ_n. This implies

λ_1(L) ≤ √γ_n · vol(L)^{1/n}    (1)

for any lattice L of dimension n. As its extension (Minkowski's second theorem), (Π_{i=1}^r λ_i(L))^{1/r} ≤ √γ_n · vol(L)^{1/n} holds for all 1 ≤ r ≤ n and any lattice L of dimension n. Moreover, γ_n ≤ 1 + n/4 follows from well-known formulas for ν_n. It is very difficult to find the exact value of γ_n, and such values are known only for a few integers n. However, γ_n is known to grow essentially linearly in n. It also satisfies Mordell's inequality γ_n ≤ γ_k^{(n−1)/(k−1)} for any n ≥ k ≥ 2. (See Nguyen 2009 for more details on Hermite's constants.)

Given a lattice L of dimension n and a measurable set S in R^n, the Gaussian Heuristic predicts that the number of vectors in L ∩ S is roughly equal to vol(S)/vol(L). Applying this to the ball of radius λ_1(L) centered at the origin in R^n leads to a prediction of the norm of the shortest non-zero vector in L. Specifically, the expectation of λ_1(L) according to the Gaussian Heuristic is given by GH(L) = ν_n^{−1/n} vol(L)^{1/n} ∼ √(n/(2πe)) · vol(L)^{1/n}. This is tight compared to Eq. (1). Note that this is only a heuristic, but for "random" lattices, λ_1(L) is asymptotically equal to GH(L) with overwhelming probability Ajtai (1996).
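The Gaussian Heuristic estimate GH(L) = ν_n^{−1/n} vol(L)^{1/n} is easy to evaluate numerically from the formula for ν_n; this short sketch (function names are ours) also checks it against the asymptotic √(n/(2πe)) vol(L)^{1/n}:

```python
import math

def unit_ball_volume(n):
    """nu_n = pi^(n/2) / Gamma(n/2 + 1), the volume of the n-dim unit ball."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def gaussian_heuristic(n, volume):
    """GH(L) = nu_n^(-1/n) * vol(L)^(1/n)."""
    return unit_ball_volume(n) ** (-1.0 / n) * volume ** (1.0 / n)
```

For n = 100 and vol(L) = 1, the exact value ≈ 2.49 already sits close to the asymptotic √(100/(2πe)) ≈ 2.42, in line with the Stirling approximation in the notation paragraph.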

Introduction to Lattice Problems
The most famous lattice problem is given below.

The Shortest Vector Problem (SVP)
Given a basis B = (b 1 , . . . , b n ) of a lattice L, find the shortest non-zero vector in L, that is, a vector s ∈ L such that s = λ 1 (L).
It was proven by Ajtai (1996) that SVP is NP-hard under randomized reductions. SVP can be relaxed by an approximation factor: given a basis of a lattice L and an approximation factor f ≥ 1, find a non-zero vector v in L such that ‖v‖ ≤ f · λ_1(L). Approximate-SVP is exactly SVP when f = 1. It is unlikely that one can efficiently solve approximate-SVP within quasi-polynomial factors in n, while approximate-SVP within a factor √(n/log n) is unlikely to be NP-hard. (See Nguyen 2009 for more details.) Another famous lattice problem is given below.

The Closest Vector Problem (CVP)
Given a basis B = (b_1, ..., b_n) of a lattice L and a target vector t, find a vector in L closest to t, that is, a vector v ∈ L such that the distance ‖t − v‖ is minimized.
CVP is at least as hard as SVP. As in the case of SVP, we can define an approximate variant of CVP with an approximation factor. Approximate-CVP is also at least as hard as approximate-SVP with the same factor. From a practical point of view, both are considered equally hard, due to Kannan's embedding technique Kannan (1987), which can transform approximate-CVP into approximate-SVP. (See also Galbraith 2012 for the embedding.) The security of modern lattice-based cryptosystems is based on the hardness of cryptographic lattice problems, such as the LWE and NTRU problems.
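Kannan's embedding can be illustrated on a toy instance: the CVP input (B, t) is turned into an (n+1)-dimensional SVP input whose short vectors of the form (t − w, ±M) reveal the lattice vector w closest to t. The 2-dimensional basis, the embedding factor M = 1, and the brute-force search below are our own simplifications for illustration, not part of the survey:

```python
from itertools import product

B = [[5, 0], [0, 3]]   # basis vectors b_1, b_2 of a toy 2-dim lattice
t = [6, 1]             # CVP target
M = 1                  # embedding factor

# Columns of the embedded (n+1)-dim basis: (b_1, 0), (b_2, 0), (t, M).
E_cols = [[5, 0, 0], [0, 3, 0], [6, 1, 1]]

def norm2(v):
    return sum(x * x for x in v)

best = None
for x in product(range(-4, 5), repeat=3):
    v = [sum(col[i] * xi for col, xi in zip(E_cols, x)) for i in range(3)]
    if abs(v[2]) != M:          # keep only vectors of shape (t - w, +-M)
        continue
    if best is None or norm2(v) < norm2(best):
        best = v
if best[2] == -M:               # normalize the sign
    best = [-y for y in best]
error = best[:2]                # e = t - w, the short CVP error
closest = [t[i] - error[i] for i in range(2)]
```

Here the shortest embedded vector with last coordinate ±1 is (1, 1, 1), so the recovered closest lattice vector is (5, 0) at squared distance 2 from t.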

Solving SVP Algorithms
In this section, we present typical algorithms for solving SVP. These algorithms can be classified into two categories, depending on whether they solve SVP exactly or approximately. The two categories are complementary: exact algorithms first apply an approximation algorithm as preprocessing to reduce their cost, while blockwise approximation algorithms (e.g., the BKZ algorithm presented below) call an exact algorithm in low dimension many times as a subroutine to find a very short lattice vector.

Exact-SVP Algorithms
Exact-SVP algorithms find the shortest non-zero lattice vector, but they are expensive. These algorithms perform an exhaustive search over all short vectors, whose number is exponential in the dimension (in the worst case). They can be split into two categories: polynomial-space algorithms and exponential-space algorithms.

Polynomial-Space Exact Algorithms: Enumeration
These algorithms are based on enumeration, which dates back to the early 1980s with work by Pohst (1981), Kannan (1983), and Fincke-Pohst (1985). Enumeration is simply an exhaustive search for an integer combination of the basis vectors such that the resulting lattice vector is shortest. An enumeration algorithm takes as input an enumeration radius R > 0 and a basis B = (b_1, ..., b_n) of a lattice L, and outputs all non-zero vectors s ∈ L with ‖s‖ ≤ R (if any exist). The radius R is taken as an upper bound of λ_1(L), like √γ_n vol(L)^{1/n}, to find the shortest non-zero lattice vector. The algorithm goes through the enumeration tree formed by all vectors in the projected lattices π_n(L), π_{n−1}(L), ..., π_1(L) = L with norm at most R. More precisely, the enumeration tree is a tree of depth n, and for each 1 ≤ k ≤ n + 1, the nodes at depth n + 1 − k are all the vectors in the projected lattice π_k(L) with norm at most R. In particular, the root of the tree is the zero vector because π_{n+1}(L) = {0}. The parent of a node u ∈ π_k(L) at depth n + 1 − k is the node π_{k+1}(u) at depth n − k. The child nodes are arranged in order of their norms.
Here we introduce the basic idea of the Schnorr-Euchner algorithm Schnorr and Euchner (1994), which is a depth-first search of the enumeration tree used to find all leaves in practice. (cf. Kannan's algorithm 1983 is asymptotically superior in running time, but it is not competitive in practice due to a substantial overhead of recursive procedures. See also Micciancio and Walter 2014 for such a discussion.) We represent a candidate shortest non-zero vector as s = Σ_{i=1}^n v_i b_i with integral coordinates v_i ∈ Z. Due to the orthogonality of the Gram-Schmidt vectors b*_j, the squared norms of the projections of s are given by

‖π_k(s)‖² = Σ_{j=k}^n (v_j + Σ_{i=j+1}^n μ_{i,j} v_i)² ‖b*_j‖²

for every 1 ≤ k ≤ n. If s is a leaf of the enumeration tree, then its projections all satisfy ‖π_k(s)‖² ≤ R² for 1 ≤ k ≤ n. These n inequalities together with the above equations enable an exhaustive search for the integral coordinates v_n, v_{n−1}, ..., v_1 of s:

(v_k + Σ_{i=k+1}^n μ_{i,k} v_i)² ‖b*_k‖² ≤ R² − Σ_{j=k+1}^n (v_j + Σ_{i=j+1}^n μ_{i,j} v_i)² ‖b*_j‖²    (2)

for every 1 ≤ k ≤ n. We start with k = n in Eq. (2), that is, 0 ≤ v_n ≤ R/‖b*_n‖, because we can restrict to "positive" nodes due to the symmetry of the enumeration tree. Choosing a candidate for v_n, we move to the next index k = n − 1 in Eq. (2) to find a candidate for v_{n−1}. Repeating this procedure, assume that the integers v_n, ..., v_{k+1} have been found for some 1 < k < n. Then Eq. (2) yields an interval I_k with v_k ∈ I_k, and thus enables an exhaustive search for the integer v_k. A depth-first search of the tree corresponds to enumerating each interval from its middle, namely, a zig-zag search like v_k = ⌈c_k⌋, ⌈c_k⌋ ± 1, ⌈c_k⌋ ± 2, ... around the center c_k = −Σ_{i=k+1}^n μ_{i,k} v_i of I_k. The basic Schnorr-Euchner enumeration algorithm Schnorr and Euchner (1994) is as below (see Gama et al. 2010, Algorithm 2 for the algorithm with some improvements).

Algorithm: The basic Schnorr-Euchner enumeration Schnorr and Euchner (1994)
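As a reading aid, the following is our own compact Python sketch of the depth-first interval search described above (it enumerates each interval I_k in increasing order rather than in the zig-zag order, applies no pruning, and all names are ours):

```python
import math

def gso(B):
    """Floating-point Gram-Schmidt data of the rows of B: (mu, ||b*_i||^2)."""
    n = len(B)
    mu, Bstar = [[0.0] * n for _ in range(n)], []
    for i in range(n):
        b = [float(x) for x in B[i]]
        for j in range(i):
            mu[i][j] = sum(p * q for p, q in zip(B[i], Bstar[j])) / \
                       sum(q * q for q in Bstar[j])
            b = [x - mu[i][j] * y for x, y in zip(b, Bstar[j])]
        Bstar.append(b)
    return mu, [sum(q * q for q in b) for b in Bstar]

def enumerate_shortest(B, R2):
    """Return [coeffs, norm^2] of a shortest non-zero vector with norm^2 <= R2."""
    n = len(B)
    mu, c = gso(B)
    best = [None, R2]

    def search(k, partial, coeffs):
        if k < 0:                        # leaf: a full lattice vector
            if any(coeffs) and partial < best[1]:
                best[0], best[1] = coeffs[:], partial
            return
        # center of the interval I_k given by Eq. (2)
        center = -sum(mu[i][k] * coeffs[i] for i in range(k + 1, n))
        rem = best[1] - partial
        if rem < 0:
            return
        half = math.sqrt(rem / c[k])
        for v in range(math.ceil(center - half), math.floor(center + half) + 1):
            coeffs[k] = v
            search(k - 1, partial + (v - center) ** 2 * c[k], coeffs)
        coeffs[k] = 0

    search(n - 1, 0.0, [0] * n)
    return best
```

On the toy basis ((2,0), (1,2)) the search returns coefficients ±(1, 0), i.e., the shortest vector ±(2, 0) of squared norm 4.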
The running time of the enumeration algorithm depends entirely on the total number N of tree nodes. An estimate of N can be derived from the Gaussian Heuristic. More precisely, the number of nodes at level ℓ is exactly half the number of vectors in the projected lattice π_{n+1−ℓ}(L) with norm at most R. Since vol(π_{n+1−ℓ}(L)) = Π_{i=n+1−ℓ}^n ‖b*_i‖, the Gaussian Heuristic predicts the number of nodes at level ℓ scanned by the Schnorr-Euchner algorithm to be close to

H_ℓ = (1/2) · V_ℓ(R) / Π_{i=n+1−ℓ}^n ‖b*_i‖.

Then N ≈ Σ_{ℓ=1}^n H_ℓ. For a "good" basis (reduced by LLL or BKZ, introduced in the next subsection), we have ‖b*_i‖/‖b*_{i+1}‖ ≈ q for some constant q. This is called the geometric series assumption (GSA), first introduced by Schnorr (2003). The constant q depends on the reduction algorithm; for example, we experimentally have q ≈ 1.04 for LLL and q ≈ 1.025 for BKZ with blocksize 20 on high-dimensional lattices. Now we take the enumeration radius R = √γ_n vol(L)^{1/n}, which is optimal in the worst case. With the constant q, Gama et al. (2010) estimate H_ℓ ≈ q^{ℓ(n−ℓ)/2} 2^{O(n)}. The right-hand term is maximized at ℓ = n/2, where it is bounded by q^{n²/8} 2^{O(n)}. Thus the maximum of H_ℓ is super-exponential in n and is reached around ℓ ≈ n/2. (See Gama et al. 2010, Fig. 1 for the actual number of nodes, which is very close to this prediction.) Since a smaller q is obtained from a more reduced basis, the more reduced the input basis is, the fewer nodes there are in the enumeration tree, and the cheaper the enumeration cost.
It is possible to obtain substantial speedups using the pruning techniques of Gama et al. (2010). Their idea is not to enumerate all the tree nodes, but to discard certain branches. (See Aono et al. 2018 for a lower bound on the time complexity of pruned enumeration.) However, pruning decreases the success probability of finding the shortest non-zero lattice vector s. For instance, one might intuitively hope that ‖π_{n/2}(s)‖² ≈ ‖s‖²/2, which is more restrictive than the inequality ‖π_{n/2}(s)‖² ≤ ‖s‖². Formally, pruning replaces each of the n inequalities ‖π_k(s)‖² ≤ R² by ‖π_k(s)‖² ≤ R²_{n+1−k}, where R_1 ≤ ··· ≤ R_n = R are n real numbers defined by a pruning strategy. A pruning parameter is available in the fplll library (The FPLLL development team 2016), and a pruning function for setting the R_i's is implemented in the progressive BKZ library Aono et al. (2016).
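A simple concrete choice of such a bounding sequence is "linear pruning", R_k = R·√(k/n); this is our own illustrative sketch of one pruning profile, not the optimized profiles computed by Gama et al. or the libraries mentioned above:

```python
def linear_pruning_radii(R, n):
    """Linear pruning profile: R_k = R * sqrt(k/n) for k = 1, ..., n.

    R_k replaces R in the k-th inequality ||pi_{n+1-k}(s)||^2 <= R_k^2,
    so low levels of the tree are bounded much more aggressively.
    """
    return [R * (k / n) ** 0.5 for k in range(1, n + 1)]
```

The sequence is non-decreasing and ends at R_n = R, as required of a pruning strategy; intermediate levels are cut roughly in line with the heuristic ‖π_{n/2}(s)‖² ≈ ‖s‖²/2.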

Exponential-Space Exact Algorithms: Sieve
These algorithms have a better asymptotic running time than enumeration, but they all require exponential space 2^{Θ(n)}. The first algorithm of this kind is the randomized sieve algorithm proposed by Ajtai, Kumar, and Sivakumar (AKS) Ajtai et al. (2001). The AKS algorithm outputs the shortest lattice vector with overwhelming probability, and its asymptotic complexity is much better than that of deterministic enumeration algorithms with 2^{O(n²)} time complexity. The main idea is as follows (see also Nguyen 2009): given a lattice L of dimension n, consider a ball S centered at the origin with radius r satisfying λ_1(L) ≤ r ≤ O(λ_1(L)). Then #(L ∩ S) = 2^{O(n)} by the Gaussian Heuristic. If we could perform an exhaustive search over all vectors in L ∩ S, we could find the shortest lattice vector within 2^{O(n)} polynomial-time operations. Enumeration performs such an exhaustive search of L ∩ S, but it requires going through all the vectors in the union set S' = ∪_{k=1}^n (π_k(L) ∩ S), whose total number is much larger than #(L ∩ S). In contrast, the AKS algorithm performs a randomized sampling of L ∩ S, without going through the set S'. If the sampling were uniform over L ∩ S, a shortest lattice vector would be included in N samples with probability close to 1 for N ≫ #(L ∩ S). Unfortunately, it is unclear whether this uniformity is satisfied by the AKS sampling. However, it can be shown that there exists a vector w ∈ L ∩ S such that both w and w + s can be sampled with non-zero probability for some shortest lattice vector s. Thus the shortest lattice vector is obtained by computing the shortest difference over all pairs of the N sampled vectors in L ∩ S.
There are several heuristic variants of the AKS algorithm with time complexity 2^{O(n)} and space complexity exponential in n for an n-dimensional lattice L: Bai et al. (2016), Herold and Kirshanova (2017), Micciancio and Voulgaris (2010), Nguyen and Vidick (2008). Given a basis of L, these algorithms build databases of lattice vectors with norms at most R · GH(L) for a small constant R > 0 such as R² = 4/3. In generic sieves, it is checked whether the sum or the difference of any pair of vectors in the database becomes shorter. The basic sieve algorithm is as below.

Algorithm: The basic sieve
Input: A basis B of a lattice L and a database size N
Output: A database D of N short vectors in L
1: Take a set D of N random vectors in L (with norm at most 2^n vol(L)^{1/n})
2: while ∃ (v, w) ∈ D² such that ‖v + w‖ < ‖v‖ (resp., ‖v − w‖ < ‖v‖) do
3:   v ← v + w (resp., v ← v − w) // update vectors in the database D
4: end while
5: return D
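The while-loop of the basic sieve can be transcribed directly; the toy 2-dimensional basis, database size, and sampling ranges below are our own choices for illustration (a real sieve would work in high dimension with an exponentially large database):

```python
import random

random.seed(1)
B = [[7, 2], [3, 9]]      # rows are the basis vectors b_1, b_2
N = 40                    # database size

def norm2(v):
    return v[0] * v[0] + v[1] * v[1]

def sample():
    """Step 1: a random small integral combination of the basis vectors."""
    x, y = random.randint(-5, 5), random.randint(-5, 5)
    return [x * B[0][0] + y * B[1][0], x * B[0][1] + y * B[1][1]]

D = [sample() for _ in range(N)]

improved = True
while improved:           # Steps 2-4: reduce pairs until no pair improves
    improved = False
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            for s in (1, -1):     # try both v + w and v - w
                w = [D[i][0] + s * D[j][0], D[i][1] + s * D[j][1]]
                if 0 < norm2(w) < norm2(D[i]):
                    D[i] = w      # update the database entry
                    improved = True
```

Each replacement strictly decreases a positive integer norm, so the loop terminates with a pairwise-reduced database of lattice vectors; no vector can drop below λ_1(L)² = 53 for this particular lattice.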

In Step 1 of the above algorithm, the database D can be initialized by first computing an LLL-reduced basis (see the next subsection for the LLL reduction) and then taking random small integral combinations of the basis vectors. (A natural idea is to use a stronger reduction algorithm such as BKZ in order to generate shorter initial vectors.) The Nguyen-Vidick sieve (2008) finds pairs of vectors (v_1, v_2) in D whose sum or difference gives a shorter vector, that is, ‖v_1 ± v_2‖ < max_{v∈D} ‖v‖. Once such a pair is found, the longest vector in the database is replaced by v_1 ± v_2. The database size is fixed a priori to the asymptotic heuristic minimum 2^{0.2075n+o(n)} in order to find enough such pairs. The running time is quadratic in the database size. The Gauss sieve (2010) is a variant of the Nguyen-Vidick sieve with substantial improvements; the main one is to divide the database into two parts, the so-called "list" part and the "queue" part. Both parts are separately sorted by Euclidean norm in order to make early reductions likely. When updating vectors, the queue part makes it possible to avoid considering the same pair several times. The running time and the database size of the Gauss sieve are asymptotically the same as for the Nguyen-Vidick sieve, but its performance is better in practice.

Approximate-SVP Algorithms
These algorithms are much faster than exact algorithms, but they output short lattice vectors, not necessarily the shortest ones.

LLL Reduction
The first efficient approximate-SVP algorithm is the celebrated algorithm by Lenstra, Lenstra, and Lovász (LLL) Lenstra et al. (1982). Nowadays it is the most famous lattice basis reduction algorithm, which finds a lattice basis with short and nearly orthogonal basis vectors. Such a basis is called reduced or good. We introduce the notion of LLL reduction. Let B = (b_1, ..., b_n) be a basis of a lattice L, and B* = (b*_1, ..., b*_n) its Gram-Schmidt vectors with coefficients μ_{i,j}. For a parameter 1/4 < δ < 1, the basis B is called δ-LLL-reduced if it satisfies the following two conditions: (i) |μ_{i,j}| ≤ 1/2 for all 1 ≤ j < i ≤ n (size-reduction); and (ii) Lovász' condition δ‖b*_{k−1}‖² ≤ ‖b*_k‖² + μ²_{k,k−1}‖b*_{k−1}‖² for all 2 ≤ k ≤ n. Any δ-LLL-reduced basis satisfies ‖b_1‖ ≤ α^{(n−1)/4} vol(L)^{1/n}, where α = 4/(4δ − 1).

Algorithm: The basic LLL Lenstra et al. (1982)
Input: A basis B = (b_1, ..., b_n) of a lattice L, and a reduction parameter 1/4 < δ < 1
Output: A δ-LLL-reduced basis B of L
1: Compute the Gram-Schmidt information μ_{i,j} and ‖b*_i‖² of the input basis B
2: k ← 2
3: while k ≤ n do
4:   Size-reduce B = (b_1, ..., b_n)
5:   if Lovász' condition δ‖b*_{k−1}‖² ≤ ‖b*_k‖² + μ²_{k,k−1}‖b*_{k−1}‖² holds then k ← k + 1
6:   else swap b_k and b_{k−1}, update the Gram-Schmidt information, and set k ← max(k − 1, 2)
7: end while
8: return B

The quantity Pot(B) = Π_{i=1}^n ‖b*_i‖^{2(n−i+1)} is called the potential of a basis B. Every swap in the LLL algorithm decreases the potential of the basis by a factor of at least δ < 1. (cf. the size-reduction procedure does not change the potential.) This guarantees the termination of the LLL algorithm in time polynomial in n. Furthermore, the LLL algorithm is also applicable to linearly dependent vectors to remove their linear dependency. (See Bremner 2011, Chap. 6, Cohen 2013, Sect. 2.6.4, Pohst 1987, or Sims 1994 for details.)
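The loop above can be realized in a short exact-arithmetic implementation; this is our own textbook-style sketch (it recomputes the Gram-Schmidt data from scratch after every change, which is simple but far less efficient than the incremental updates used in real libraries such as fplll):

```python
from fractions import Fraction

def lll(B, delta=Fraction(3, 4)):
    """Textbook LLL on integer row vectors; returns a delta-LLL-reduced basis."""
    B = [list(map(Fraction, b)) for b in B]
    n = len(B)

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gso():
        Bstar, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            b = B[i][:]
            for j in range(i):
                mu[i][j] = dot(B[i], Bstar[j]) / dot(Bstar[j], Bstar[j])
                b = [x - mu[i][j] * y for x, y in zip(b, Bstar[j])]
            Bstar.append(b)
        return Bstar, mu

    k = 1
    while k < n:
        Bstar, mu = gso()
        for j in range(k - 1, -1, -1):        # size-reduce b_k
            q = round(mu[k][j])
            if q:
                B[k] = [x - q * y for x, y in zip(B[k], B[j])]
                Bstar, mu = gso()
        # Lovász condition: delta*||b*_{k-1}||^2 <= ||b*_k||^2 + mu^2*||b*_{k-1}||^2
        if dot(Bstar[k], Bstar[k]) >= \
                (delta - mu[k][k - 1] ** 2) * dot(Bstar[k - 1], Bstar[k - 1]):
            k += 1
        else:
            B[k], B[k - 1] = B[k - 1], B[k]   # swap and step back
            k = max(k - 1, 1)
    return [[int(x) for x in b] for b in B]
```

For δ = 3/4 the output satisfies ‖b_1‖ ≤ 2^{(n−1)/2} λ_1(L), and the reduction preserves |det(B)|.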

LLL with Deep Insertions (DeepLLL)
This variant is a straightforward generalization of LLL in which non-adjacent basis vectors can be exchanged. Specifically, a basis vector b_k is inserted between b_{i−1} and b_i as σ_{i,k}(B) = (..., b_{i−1}, b_k, b_i, ..., b_{k−1}, b_{k+1}, ...), called a deep insertion, if the so-called deep exchange condition ‖π_i(b_k)‖² < δ‖b*_i‖² is satisfied for 1/4 < δ < 1. In this case, the new Gram-Schmidt vector at the ith position is given by π_i(b_k), which is strictly shorter than the old Gram-Schmidt vector b*_i. (The case i = k − 1 is just Lovász' condition.) Any δ-DeepLLL-reduced basis satisfies properties strictly stronger than those of a δ-LLL-reduced basis, stated with the same constant α as in LLL (Yasuda and Yamaguchi 2019, Theorem 1). The basic DeepLLL algorithm Schnorr and Euchner (1994) is given below (see also Bremner 2011, Fig. 5.1 or Cohen 2013, Algorithm 2.6.4).

Algorithm: The basic DeepLLL Schnorr and Euchner (1994)
Input: A basis B = (b_1, ..., b_n) of a lattice L, and a reduction parameter 1/4 < δ < 1
Output: A δ-DeepLLL-reduced basis B of L
1: Compute the Gram-Schmidt information μ_{i,j} and ‖b*_i‖² of the input basis B
2: k ← 2
3: while k ≤ n do
4:   Size-reduce B as in LLL
5:   For i = 1, ..., k − 1, test the deep exchange condition ‖π_i(b_k)‖² < δ‖b*_i‖²; at the first index i where it holds, perform the deep insertion B ← σ_{i,k}(B), update the Gram-Schmidt information, and set k ← max(i, 2); if it holds for no i, set k ← k + 1
6: end while
7: return B

A deep insertion does not always decrease the potential of the basis, and thus the complexity of DeepLLL is no longer polynomial-time but potentially super-exponential in the lattice dimension. However, DeepLLL often finds much shorter lattice vectors than LLL in practice.

Algorithm: The basic BKZ Schnorr and Euchner (1994)
Input: A basis B = (b_1, ..., b_n) of a lattice L, a blocksize 2 ≤ β ≤ n, and a reduction parameter 1/4 < δ < 1 of LLL
Output: A β-BKZ-reduced basis B of L
1: B ← LLL(B, δ) // compute μ_{i,j} and ‖b*_j‖² of the new basis B together
2: z ← 0, j ← 0
3: while z < n − 1 do
4:   j ← (j mod (n − 1)) + 1, k ← min(j + β − 1, n) // the indices of the local block
5:   Find a shortest non-zero vector v of the block lattice spanned by π_j(b_j), ..., π_j(b_k) by enumeration
6:   if ‖v‖ < ‖b*_j‖ then insert v into the basis at position j, remove the linear dependency by LLL, and set z ← 0 else set z ← z + 1
7: end while
8: return B

The output quality of BKZ can be predicted by applying the Gaussian Heuristic to the local blocks, and there is experimental evidence supporting this prediction for high blocksizes such as β > 50. (Note that the Gaussian Heuristic holds in practice for random lattices in high dimensions, but unfortunately it is violated in low dimensions.) In a simple form based on the Gaussian Heuristic, the GSA shape of a β-BKZ-reduced basis of volume 1 is predicted as the geometric series ‖b*_i‖ ≈ q^{(n+1−2i)/2}, whose ratio q is determined by the blocksize β. This is reasonably accurate in practice for β > 50 and β ≪ n. (See Chen 2013, Chen and Nguyen 2011, Yu and Ducas 2017.) Other variants of BKZ have been proposed, such as slide reduction, self-dual BKZ Micciancio and Walter (2016), and progressive BKZ Aono et al. (2016). As a mathematical improvement of BKZ, DeepBKZ was recently proposed in Yamaguchi and Yasuda (2017), in which DeepLLL is called as a subroutine alternative to LLL. In particular, DeepBKZ finds a short lattice vector with smaller blocksizes than BKZ in practice. (Dual and self-dual variants of DeepBKZ were also proposed in Yasuda (2018), Yasuda et al. (2018).)
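As a numerical companion to this prediction, the following sketch computes the widely used Gaussian-Heuristic-based estimate of the root Hermite factor δ_β achieved by BKZ with blocksize β (the closed form popularized by Chen's simulator analysis; the function name is ours and the formula is this standard estimate, not one stated explicitly in the survey):

```python
import math

def root_hermite_factor(beta):
    """Predicted root Hermite factor delta_beta of BKZ with blocksize beta:
    delta_beta ~ ((beta/(2*pi*e)) * (pi*beta)^(1/beta))^(1/(2*(beta-1)))."""
    return ((beta / (2 * math.pi * math.e)) *
            (math.pi * beta) ** (1 / beta)) ** (1 / (2 * (beta - 1)))
```

Under this estimate, the first vector of a β-BKZ-reduced basis of an n-dimensional lattice is predicted to have norm about δ_β^n vol(L)^{1/n}; for example, δ_50 ≈ 1.012, and δ_β decreases as β grows, matching the fact that larger blocksizes yield shorter vectors.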

The SVP Challenge and Recent Strategies
To test algorithms for solving SVP, sample lattice bases are presented in Darmstadt (2010) for dimensions from 40 up to 200. (The lattices are random in the sense of Goldstein and Mayer (2003).) For every such lattice L, any non-zero lattice vector with (Euclidean) norm less than 1.05·GH(L) can be submitted to the hall of fame of the SVP challenge. To enter the hall of fame, the lattice vector is required to be shorter than any previous one in the same dimension (with a possibly different seed). Note that not all lattice vectors in the hall of fame are necessarily the shortest. In this section, we introduce two recent strategies for solving the SVP challenge in high dimensions such as n ≥ 150.

The Random Sampling Strategy
Early in 2017, a non-zero vector in a lattice L of dimension n = 150 with norm less than 1.05·GH(L) was first found by Teruya and Kashiwabara using many high-performance servers. (See Teruya et al. 2018 for their large-scale experiments.) Their strategy is based on the work of Fukase and Kashiwabara (2015), which is an extension of Schnorr's random sampling reduction (RSR) Schnorr (2003). Here we review random sampling (SA) and RSR. For a lattice L of dimension n, fix 1 ≤ u < n to be a constant bounding the search space. Given a basis B = (b_1, ..., b_n) of L, SA samples a vector v = Σ_{i=1}^n ν_i b*_i in L satisfying ν_i ∈ (−1/2, 1/2] for 1 ≤ i < n − u, ν_i ∈ (−1, 1] for n − u ≤ i < n, and ν_n = 1. Let S_{u,B} denote the set of such lattice vectors. Since the number of candidates for each ν_i with ν_i ∈ (−1/2, 1/2] (resp. ν_i ∈ (−1, 1]) is 1 (resp. 2), there are 2^u lattice vectors in S_{u,B}. By calling SA up to 2^u times, RSR generates v satisfying ‖v‖² < 0.99‖b_1‖² (Schnorr 2003, Theorem 1). Two extensions are proposed in Fukase and Kashiwabara (2015) for solving the SVP challenge; the first is to represent a lattice vector by a sequence of natural numbers via the Gram-Schmidt orthogonalization, and to sample lattice vectors according to an appropriate distribution of this representation. The second is to decrease the sum of the squared Gram-Schmidt lengths SS(B) := Σ_{i=1}^n ‖b*_i‖² to make it easier to sample very short lattice vectors. The effectiveness of their extensions is supported by their statistical analysis on lattices. Specifically, under the randomness assumption (RA), they roughly estimate that the squared length of a sampled vector v ∈ S_{u,B} is normally distributed with mean close to (1/12) Σ_{i=1}^{n−u−1} ‖b*_i‖² + (1/3) Σ_{i=n−u}^{n−1} ‖b*_i‖² + ‖b*_n‖². This implies that shorter lattice vectors are sampled as the squared sum SS(B) becomes smaller. The basic strategy in Fukase and Kashiwabara (2015), Teruya et al. (2018) then consists of the following two steps: (i) reduce an input basis so as to decrease the sum of its squared Gram-Schmidt lengths as much as possible, using LLL and the insertion of sampled lattice vectors as in BKZ; (ii) with such a reduced basis B, find a short lattice vector by randomly sampling v = Σ_{i=1}^n ν_i b*_i. As a sequel to this work, Aono and Nguyen (2017) introduced lattice enumeration with discrete pruning, which generalizes random sampling, and also provided a deep analysis of discrete pruning using the volume of the intersection of a ball with a box. In particular, under RA, the expectation of the squared length of a short vector generated by lattice enumeration with discrete pruning from the so-called tag t = (t_1, ..., t_n) ∈ Z^n is roughly given by E(t) = Σ_{i=1}^n ((t_i² + t_i)/4 + 1/12) ‖b*_i‖², which generalizes the above mean. It is also shown in Aono and Nguyen (2017) that the empirical correlation between E(t) and the volume of the ball-box intersection is negative. This is statistical evidence for why decreasing SS(B) is important, instead of increasing the volume of the ball-box intersection. Furthermore, the calculation of the volume presented in Aono and Nguyen (2017) is much less efficient than the computation of SS(B). In 2018, Matsuda et al. (2018) investigated the strategy of Fukase and Kashiwabara (2015) using the Gram-Charlier approximation in order to precisely estimate the success probability of sampling short lattice vectors, and also discussed the effectiveness of decreasing SS(B) for sampling short lattice vectors.
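The sampling procedure SA and the statistic SS(B) can be sketched as follows; this is our own floating-point transcription (function names are ours), where the coefficient ranges of S_{u,B} are enforced by Babai-style rounding with one optional extra shift for the widened indices:

```python
import math, random

def gso(B):
    """Floating-point Gram-Schmidt vectors of the rows of B."""
    Bstar = []
    for i in range(len(B)):
        b = [float(x) for x in B[i]]
        for bs in Bstar:
            m = sum(p * q for p, q in zip(b, bs)) / sum(q * q for q in bs)
            b = [x - m * y for x, y in zip(b, bs)]
        Bstar.append(b)
    return Bstar

def squared_sum(B):
    """SS(B): the sum of the squared Gram-Schmidt lengths."""
    return sum(sum(x * x for x in b) for b in gso(B))

def sample(B, u, rng):
    """Sample v in S_{u,B}: nu_i in (-1/2,1/2] for the first indices,
    nu_i in (-1,1] for the u indices before the last, and nu_n = 1."""
    n = len(B)
    Bstar = gso(B)
    v = [float(x) for x in B[n - 1]]          # start from b_n, so nu_n = 1
    for i in range(n - 2, -1, -1):
        nu = sum(p * q for p, q in zip(v, Bstar[i])) / \
             sum(q * q for q in Bstar[i])
        c = math.ceil(nu - 0.5)               # brings nu - c into (-1/2, 1/2]
        if i >= n - 1 - u and rng.randint(0, 1):
            c += 1 if nu > c else -1          # second candidate within (-1, 1]
        v = [x - c * y for x, y in zip(v, B[i])]
    return [round(x) for x in v]
```

Subtracting multiples of b_i only changes the coefficients ν_j with j ≤ i, so processing the indices from high to low leaves the already-fixed coefficients intact.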

The Sub-Sieving Strategy
Around the end of August 2018, many records of the SVP challenge in dimensions up to 155 had been found by the sub-sieving strategy of Ducas (2018). The key observation is that a sieve outputs a database of many short lattice vectors, rather than a single one, and that it can therefore be run in a projected lattice instead of the full lattice. The specific strategy is as follows (Ducas 2018, Section 3). Given a basis B = (b_1, ..., b_n) of a lattice L of high dimension n, we fix an integer d with 1 ≤ d ≤ n, and perform the sieve in the projected lattice π_d(L) to obtain a list D of short vectors in π_d(L). We hope that the desired shortest non-zero vector s in the full lattice L projects to a vector in the list D, that is, that

π_d(s) ≠ 0 and ‖π_d(s)‖ ≤ √(4/3) · GH(π_d(L))    (3)

is satisfied, which is sufficient for our hope. This condition is not tight, since the projected vector π_d(s) becomes shorter than the full vector s as the index d increases. By exhaustive search over the list D, we may assume that the projected vector s_d := π_d(s) ∈ D is known, and we need to recover the full vector s from s_d. Write s = Bx for some x ∈ Z^n, and split x as (x_1 | x_2) with x_1 ∈ Z^{d−1} and x_2 ∈ Z^{n−d+1}. Then s_d = π_d(Bx) = B'_d x_2, where B'_d = (π_d(b_d), ..., π_d(b_n)), and hence x_2 is known. Now we need to recover x_1 so that s = B_1 x_1 + B_2 x_2 is small (or shortest), where B = (B_1 | B_2). This is an easy bounded distance decoding (BDD) instance over the (d − 1)-dimensional lattice spanned by B_1 for the target vector B_2 x_2. A sufficient condition for solving it with Babai's nearest plane algorithm Babai (1986) is that |⟨b*_i, s⟩| ≤ (1/2)‖b*_i‖² for all 1 ≤ i < d. (See also Galbraith 2012, Chap. 18 for Babai's algorithms.) Since |⟨b*_i, s⟩| ≤ ‖b*_i‖·‖s‖, a further sufficient condition is that GH(L) ≤ (1/2) min_{i<d} ‖b*_i‖. This condition is far from tight, and it should not be a serious issue in practice; indeed, even for a strongly reduced basis, the first d Gram-Schmidt lengths will not be much smaller than GH(L), say by more than a factor of 2.
(BKZ preprocessing with blocksize β = n/2 is assumed in Ducas (2018).) A concrete maximal value of d satisfying condition (3) depends on the shape of the basis B. It is estimated in Ducas (2018) that d = Θ(n/log n) is attainable for a quasi-HKZ-reduced basis.
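The recovery step via Babai's nearest plane algorithm can be sketched as follows; this is our own implementation on toy row bases, with floating-point Gram-Schmidt data:

```python
def gso(B):
    """Floating-point Gram-Schmidt vectors of the rows of B."""
    Bstar = []
    for i in range(len(B)):
        b = [float(x) for x in B[i]]
        for bs in Bstar:
            m = sum(p * q for p, q in zip(b, bs)) / sum(q * q for q in bs)
            b = [x - m * y for x, y in zip(b, bs)]
        Bstar.append(b)
    return Bstar

def babai_nearest_plane(B, t):
    """Babai's nearest plane: a lattice vector close to the target t."""
    Bstar = gso(B)
    w = [float(x) for x in t]
    for i in range(len(B) - 1, -1, -1):
        # round the i-th Gram-Schmidt coefficient of the residual
        c = round(sum(p * q for p, q in zip(w, Bstar[i])) /
                  sum(q * q for q in Bstar[i]))
        w = [x - c * y for x, y in zip(w, B[i])]
    # t - w is the integral combination sum_i c_i b_i
    return [round(ti - wi) for ti, wi in zip(t, w)]
```

The residual w ends with all Gram-Schmidt coefficients in [−1/2, 1/2], which is exactly the sufficient condition |⟨b*_i, s⟩| ≤ (1/2)‖b*_i‖² discussed above.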
In 2019, Albrecht et al. (2019) proposed the General Sieve Kernel (G6K), an abstract stateful machine supporting a variety of advanced lattice reduction strategies based on sieving algorithms. They provide a highly optimized, multi-threaded, and tweakable implementation of G6K as an open-source C++ and Python library. A number of records in the hall of fame of the SVP challenge have been found with the sub-sieving strategy on G6K. (As of June 2019, the highest dimension solved in the SVP challenge is 157, using G6K.) Specifically, their experiments imply that on average d = 11.46 + 0.0757n is a suitable number of free dimensions for the sub-sieving strategy in the SVP challenge in high dimensions n. Furthermore, their solution of the SVP challenge in dimension 151 was found 400 times faster than the time reported for dimension 150, which was solved early in 2017 by the random sampling strategy.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.