1 Introduction

There has recently been a substantial amount of research on large-scale quantum computers. If such computers were built, they could break currently used public-key cryptosystems such as the RSA cryptosystem and elliptic curve cryptography. (See Shor 1994 for Shor’s quantum algorithms.) In order to prepare information security systems that can resist quantum computing, the US National Institute of Standards and Technology (NIST) began a process to develop new standards for post-quantum cryptography (PQC) in 2015 and called for proposals in 2016. This process has rapidly accelerated research on lattice-based cryptography as a candidate for PQC. Specifically, by the submission deadline at the end of November 2017, NIST had received more than 20 proposals of lattice-based cryptosystems. Among them, more than 10 proposals advanced to Round 2 around the end of January 2019. (See the web page of NIST 2016.) The security of such proposals relies on the hardness of cryptographic lattice problems such as learning with errors (LWE) and NTRU. Such problems reduce to approximate-SVP or approximate-CVP. (For example, see Albrecht et al. 2018 for details.) Therefore, it is becoming more important to understand classical lattice problems in order to evaluate the security of lattice-based PQC candidates.

For a positive integer n, a (full-rank) lattice L in \(\mathbb {R}^n\) is the set of all integral linear combinations of linearly independent vectors \(\mathbf {b}_1, \dots , \mathbf {b}_n\) in \(\mathbb {R}^n\). (The set of the \(\mathbf {b}_i\)’s is called a basis of L.) Given a basis of a lattice L, the shortest vector problem (SVP) asks to find the shortest non-zero vector in L. In this paper, we give a survey of typical algorithms for solving SVP from a mathematical point of view. These algorithms can be classified into two categories, depending on whether they solve SVP exactly or approximately. Exact-SVP algorithms perform an exhaustive search for an integer combination of the basis vectors \(\mathbf {b}_i\)’s giving the shortest non-zero lattice vector \(\mathbf {v} = \sum _{i=1}^n v_i \mathbf {b}_i \in L\), and they are computationally expensive. In contrast, approximate-SVP algorithms are much faster than exact algorithms, but they find short lattice vectors, not necessarily the shortest ones. The two kinds of algorithms are complementary: exact algorithms apply an approximation algorithm as preprocessing to reduce their cost, while several approximate-SVP algorithms call an exact algorithm in low dimensions many times as a subroutine to find very short lattice vectors. In this paper, we also introduce recent strategies for solving the Darmstadt SVP challenge Darmstadt (2010), which presents sample lattice bases for testing algorithms that solve SVP. In particular, these strategies combine approximate- and exact-SVP algorithms to efficiently solve SVP in high dimensions such as \(n \ge 150\).

Notation. The symbols \(\mathbb {Z}\), \(\mathbb {Q}\), and \(\mathbb {R}\) denote the ring of integers, the field of rational numbers, and the field of real numbers, respectively. Let \(\lfloor z \rceil \) denote the integer closest to a real number z. We represent all vectors in column format. For \(\mathbf {a} = (a_1, \dots , a_n)^\top \in \mathbb {R}^n\), let \(\Vert \mathbf {a}\Vert \) denote its Euclidean norm. For \(\mathbf {a} = (a_1, \dots , a_n)^\top \) and \(\mathbf {b} = (b_1, \dots , b_n)^\top \), let \(\langle \mathbf {a}, \mathbf {b} \rangle \) denote the inner product \(\sum _{i = 1}^n a_i b_i\). Denote by \(V_n(R)\) the volume of the n-dimensional ball of radius \(R > 0\) centered at the origin. In particular, we let \(\nu _n = V_n(1)\) denote the volume of the unit ball. Then \(V_n(R) = \nu _n R^n\) and

$$ \nu _n = \dfrac{\pi ^{n/2}}{\Gamma (1 + n/2)} \sim \dfrac{1}{\sqrt{\pi n}} \left( \dfrac{2 \pi e}{n} \right) ^{n/2} $$

using Stirling’s formula, where \(\Gamma (s) = \int _0^\infty t^{s-1} e^{-t} dt\) denotes the Gamma function.

2 Mathematical Background

In this section, we introduce basic definitions and properties of lattices, and present famous lattice problems whose hardness underpins the security of lattice-based cryptography. (For example, see Galbraith 2012, Part IV or Nguyen 2009 for more details.)

2.1 Lattices and Their Bases

For a positive integer n, let \(\mathbf {b}_1, \dots , \mathbf {b}_n\) be n linearly independent (column) vectors in \(\mathbb {R}^n\). The set of all integral linear combinations of the \(\mathbf {b}_i\)’s is a (full-rank) lattice

$$ L = \mathcal {L}(\mathbf {b}_1, \dots , \mathbf {b}_n)= \left\{ \sum _{i = 1}^n v_i \mathbf {b}_i : v_i \in \mathbb {Z} \text{ for } \text{ all } 1 \le i \le n \right\} $$

of dimension n with basis \(\mathbf {B} = (\mathbf {b}_1, \dots , \mathbf {b}_n) \in \mathbb {R}^{n \times n}\). (A basis is regarded not only as a set of vectors, but also as a matrix whose column vectors span a lattice.) Every lattice has infinitely many bases if \(n \ge 2\); if two bases \(\mathbf {B}_1\) and \(\mathbf {B}_2\) span the same lattice, then there exists an \(n \times n\) unimodular matrix \(\mathbf {U} \in \mathrm {GL}_n(\mathbb {Z})\) with \(\mathbf {B}_1 = \mathbf {B}_2 \mathbf {U}\). The volume of L is defined as \(\mathrm {vol}(L) = |\det ( \mathbf {B})|\), which is independent of the choice of basis.

The Gram–Schmidt orthogonalization for an (ordered) basis \(\mathbf {B}\) is the orthogonal family \(\mathbf {B}^* =(\mathbf {b}_1^*, \dots , \mathbf {b}^*_n) \in \mathbb {R}^{n \times n}\), recursively defined by \(\mathbf {b}_1^* = \mathbf {b}_1\) and for \(2 \le i \le n\)

$$ \mathbf {b}_i^* = \mathbf {b}_i - \sum _{j = 1}^{i-1} \mu _{i, j} \mathbf {b}_j^*, \text{ where } \mu _{i, j} = \frac{\langle \mathbf {b}_i, \mathbf {b}_j^*\rangle }{\Vert \mathbf {b}_j^* \Vert ^2} \text{ for } 1 \le j<i \le n. $$

Notice that the Gram–Schmidt vectors \(\mathbf {b}_i^*\)’s depend on the order of basis vectors in \(\mathbf {B}\). For convenience, set \(\mathbf {\mu } = (\mu _{i, j}) \in \mathbb {R}^{n \times n}\), where \(\mu _{i, j} = 0\) for all \(i<j\) and \(\mu _{k, k} = 1\) for all k. Then \(\mathbf {B} = \mathbf {B}^* \mathbf {\mu }^\top \), and thus \(\mathrm {vol}(L) = \prod _{i = 1}^n \Vert \mathbf {b}_i^* \Vert \) from the orthogonality of the Gram–Schmidt vectors. For \(2 \le \ell \le n\), let \(\pi _\ell \) denote the orthogonal projection onto the orthogonal complement of the \(\mathbb {R}\)-vector space \(\langle \mathbf {b}_1, \dots , \mathbf {b}_{\ell -1} \rangle _\mathbb {R}\), namely

$$ \pi _\ell : \mathbb {R}^n \longrightarrow \langle \mathbf {b}_1, \dots , \mathbf {b}_{\ell -1} \rangle _\mathbb {R}^\perp = \langle \mathbf {b}_\ell ^*, \dots , \mathbf {b}_n^* \rangle _\mathbb {R},~ \pi _\ell (\mathbf {x}) = \sum _{i = \ell }^n \dfrac{\langle \mathbf {x}, \mathbf {b}_i^* \rangle }{\Vert \mathbf {b}_i^* \Vert ^2} \mathbf {b}_i^*. $$

Every projection map depends on a basis. We also set \(\pi _1 = \mathrm {id}\) for convenience.
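To make the recursion concrete, the following minimal Python sketch computes the Gram–Schmidt vectors \(\mathbf {b}_i^*\) and the coefficients \(\mu _{i, j}\) exactly as defined above (plain floating-point arithmetic; careful implementations use exact rationals or controlled floating-point precision). The helper names dot and gram_schmidt are ours, and are reused in the later sketches:

```python
def dot(x, y):
    """Inner product <x, y> of two vectors given as lists."""
    return sum(a * b for a, b in zip(x, y))

def gram_schmidt(B):
    """Gram-Schmidt orthogonalization of a basis B = [b_1, ..., b_n].
    Returns (Bstar, mu): the orthogonal vectors b_i^* and the lower-triangular
    coefficients mu[i][j] = <b_i, b_j^*> / ||b_j^*||^2 (0-indexed)."""
    n = len(B)
    Bstar = []
    mu = [[0.0] * n for _ in range(n)]
    for i in range(n):
        mu[i][i] = 1.0
        bi = [float(x) for x in B[i]]
        for j in range(i):
            mu[i][j] = dot(B[i], Bstar[j]) / dot(Bstar[j], Bstar[j])
            bi = [x - mu[i][j] * y for x, y in zip(bi, Bstar[j])]
        Bstar.append(bi)
    # By orthogonality, vol(L) equals the product of the norms of Bstar.
    return Bstar, mu
```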

2.2 Successive Minima, Hermite’s Constants, and Gaussian Heuristic

For every \(1 \le i \le n\), the ith successive minimum of an n-dimensional lattice L, denoted by \(\lambda _i(L)\), is defined as the minimum of \(\max _{1 \le j \le i} \Vert \mathbf {v}_j \Vert \) over all i linearly independent vectors \(\mathbf {v}_1, \dots , \mathbf {v}_i \in L\). In particular, the first minimum \(\lambda _1(L)\) is the norm of the shortest non-zero vector in L. We clearly have \(\lambda _1(L) \le \lambda _2(L) \le \dots \le \lambda _n(L)\) by definition. Moreover, for any basis \(\mathbf {B} = (\mathbf {b}_1, \dots , \mathbf {b}_n)\) of L, its Gram–Schmidt vectors satisfy \(\lambda _i(L) \ge \min _{i \le j \le n} \Vert \mathbf {b}_j^* \Vert \) for every \(1 \le i \le n\). (See Bremner 2011, Proposition 3.14 for proof.)

Hermite (1850) first proved that the quantity \(\frac{\lambda _1(L)^2}{\mathrm {vol}(L)^{2/n}}\) is upper bounded over all lattices L of dimension n. Its supremum over all lattices of dimension n is called Hermite’s constant of dimension n, denoted by \(\gamma _n\). This implies \(\lambda _1(L) \le \sqrt{\gamma _n} \mathrm {vol}(L)^{1/n}\) for any lattice L of dimension n. More generally, we have

$$ \left( \prod _{i=1}^r \lambda _i(L) \right) ^{1/r} \le \sqrt{\gamma _n} \mathrm {vol}(L)^{1/n} \text{ for } 1 \le r \le n. $$

This is known as Minkowski’s second theorem. (See Martinet 2013, Chap. 2 for proof.) It is important to know the value of \(\gamma _n\) in order to obtain an upper bound of \(\lambda _1(L)\); Minkowski’s convex body theorem implies \(\gamma _n \le 4 \nu _n^{-2/n}\). (See Martinet 2013, Chap. 2 for proof.) This shows that

$$\begin{aligned} \lambda _1(L) \le 2 \nu _n^{-1/n} \mathrm {vol}(L)^{1/n} \end{aligned}$$
(1)

for any lattice L of dimension n. Moreover, \(\gamma _n \le 1 + \frac{n}{4}\) follows from well-known formulas for \(\nu _n\). It is very difficult to find the exact value of \(\gamma _n\), and such values are known only for a few integers n. However, \(\gamma _n\) is known to grow essentially linearly in n. Hermite’s constants also satisfy Mordell’s inequality \(\gamma _n \le \gamma _k^{(n-1)/(k-1)}\) for any \(n \ge k \ge 2\). (See Nguyen 2009 for more details on Hermite’s constants.)

Given a lattice L of dimension n and a measurable set S in \(\mathbb {R}^n\), the Gaussian Heuristic predicts that the number of vectors in \(L \cap S\) is roughly equal to \(\mathrm {vol}(S)/\mathrm {vol}(L)\). Applying this to the ball of radius \(\lambda _1(L)\) centered at the origin in \(\mathbb {R}^n\) leads to a prediction of the norm of the shortest non-zero vector in L. Specifically, the expectation of \(\lambda _1(L)\) according to the Gaussian Heuristic is given by

$$ \mathrm {GH}(L) = \nu _n^{-1/n} \mathrm {vol}(L)^{1/n} \sim \sqrt{\frac{n}{2 \pi e}} \mathrm {vol}(L)^{1/n}. $$

This is sharper than the bound of Eq. (1) by a factor of 2. Note that this is only a heuristic; however, for “random” lattices, \(\lambda _1(L)\) is asymptotically equal to \(\mathrm {GH}(L)\) with overwhelming probability Ajtai (1996).
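For concreteness, a small Python sketch evaluating \(\mathrm {GH}(L)\) both via the exact formula for \(\nu _n\) and via the Stirling-type approximation (the function names are ours):

```python
import math

def unit_ball_volume(n):
    """nu_n = pi^(n/2) / Gamma(1 + n/2), the volume of the n-dimensional unit ball."""
    return math.pi ** (n / 2) / math.gamma(1 + n / 2)

def gaussian_heuristic(n, vol):
    """GH(L) = nu_n^(-1/n) * vol(L)^(1/n), computed via log-Gamma for stability."""
    log_nu = (n / 2) * math.log(math.pi) - math.lgamma(1 + n / 2)
    return math.exp(-log_nu / n) * vol ** (1.0 / n)

# Asymptotically GH(L) ~ sqrt(n / (2*pi*e)) * vol(L)^(1/n); for n = 150 and
# vol = 1, both expressions below are close to 3.0.
print(gaussian_heuristic(150, 1.0))
print(math.sqrt(150 / (2 * math.pi * math.e)))
```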

2.3 Introduction to Lattice Problems

The most famous lattice problem is given below.

The Shortest Vector Problem (SVP)

   Given a basis \(\mathbf {B} = (\mathbf {b}_1, \dots , \mathbf {b}_n)\) of a lattice L, find the shortest non-zero vector in L, that is, a vector \(\mathbf {s} \in L\) such that \(\Vert \mathbf {s} \Vert = \lambda _1(L)\).

It was proven by Ajtai (1996) that SVP is NP-hard under randomized reductions. SVP can be relaxed by an approximation factor: given a basis of a lattice L and an approximation factor \(f \ge 1\), find a non-zero vector \(\mathbf {v}\) in L such that \(\Vert \mathbf {v} \Vert \le f \lambda _1(L)\). Approximate-SVP is exactly SVP when \(f = 1\). It is unlikely that one can efficiently solve approximate-SVP within quasi-polynomial factors in n, while approximate-SVP within a factor \(\sqrt{n/\log (n)}\) is unlikely to be NP-hard. (See Nguyen 2009 for more details.)

Another famous lattice problem is given below.

The Closest Vector Problem (CVP)

Given a basis \(\mathbf {B} = (\mathbf {b}_1, \dots , \mathbf {b}_n)\) of a lattice L and a target vector \(\mathbf {t}\), find a vector in L closest to \(\mathbf {t}\), that is, a vector \(\mathbf {v} \in L\) such that the distance \(\Vert \mathbf {t} - \mathbf {v} \Vert \) is minimized.

CVP is at least as hard as SVP. As in the case of SVP, we can define an approximate variant of CVP with an approximation factor. Approximate-CVP is also at least as hard as approximate-SVP with the same factor. From a practical point of view, both are considered equally hard, due to Kannan’s embedding technique Kannan (1987), which transforms approximate-CVP into approximate-SVP. (See also Galbraith 2012 for the embedding.)
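To sketch one common form of the embedding (the embedding factor \(M > 0\) below is a parameter; a frequent choice is to take M close to the expected error norm): given a basis \(\mathbf {B}\) of L and a target \(\mathbf {t}\), consider the \((n+1)\)-dimensional lattice spanned by the columns of

$$ \mathbf {B}' = \begin{pmatrix} \mathbf {B} & \mathbf {t} \\ \mathbf {0} & M \end{pmatrix}. $$

If \(\mathbf {v} = \mathbf {B}\mathbf {x} \in L\) is close to \(\mathbf {t}\) with error \(\mathbf {e} = \mathbf {t} - \mathbf {v}\), then \(\mathbf {B}' (-\mathbf {x}, 1)^\top = (\mathbf {e}, M)^\top \) is a short vector in the new lattice, so running an (approximate-)SVP solver on \(\mathbf {B}'\) recovers the (approximate-)CVP solution \(\mathbf {v}\).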

The security of modern lattice-based cryptosystems is based on the hardness of cryptographic lattice problems, such as the LWE and the NTRU problems. (For example, see NIST 2016 for NIST post-quantum candidates.) Such lattice problems reduce to approximate-SVP or approximate-CVP. (For example, see Albrecht et al. 2018 for details.)

3 Algorithms for Solving SVP

In this section, we present typical algorithms for solving SVP. These algorithms can be classified into two categories, depending on whether they solve SVP exactly or approximately. The two categories are complementary: exact algorithms first apply an approximation algorithm as preprocessing to reduce their cost, while blockwise algorithms (e.g., the BKZ algorithm presented below) call an exact algorithm in low dimensions many times as a subroutine to find very short lattice vectors.

3.1 Exact-SVP Algorithms

Exact-SVP algorithms find the shortest non-zero lattice vector, but they are expensive. These algorithms perform an exhaustive search over all short vectors, whose number is exponential in the dimension (in the worst case). They can be split into two categories: polynomial-space algorithms and exponential-space algorithms.

3.1.1 Polynomial-Space Exact Algorithms: Enumeration

Polynomial-space exact algorithms are based on enumeration, which dates back to the early 1980s with work by Pohst (1981), Kannan (1983), and Fincke–Pohst (1985). Enumeration is simply an exhaustive search for an integer combination of the basis vectors such that the lattice vector is the shortest. An enumeration algorithm takes as input an enumeration radius \(R > 0\) and a basis \(\mathbf {B} = (\mathbf {b}_1, \dots , \mathbf {b}_n)\) of a lattice L, and outputs all non-zero vectors \(\mathbf {s}\) in L such that \(\Vert \mathbf {s} \Vert \le R\) (if any exist). The radius R is taken as an upper bound of \(\lambda _1(L)\), such as \(\sqrt{\gamma _n} \mathrm {vol}(L)^{1/n}\), in order to find the shortest non-zero lattice vector. Enumeration goes through the enumeration tree formed by all vectors in the projected lattices \(\pi _n(L)\), \(\pi _{n-1}(L), \cdots , \pi _1(L) = L\) with norm at most R. More precisely, the enumeration tree is a tree of depth n, and for each \(1 \le k \le n+1\), the nodes at depth \(n+1-k\) are all the vectors in the projected lattice \(\pi _k(L)\) with norm at most R. In particular, the root of the tree is the zero vector because \(\pi _{n+1}(L) = \{ \mathbf {0} \}\). The parent of a node \(\mathbf {u} \in \pi _k(L)\) at depth \(n+1-k\) is the node \(\pi _{k+1}(\mathbf {u})\) at depth \(n-k\). The child nodes are arranged in order of their norms.

Here we introduce the basic idea of the Schnorr–Euchner algorithm Schnorr and Euchner (1994), which performs a depth-first search of the enumeration tree to find all leaves. (cf. Kannan’s algorithm 1983 is asymptotically superior in running time, but it is not competitive in practice due to the substantial overhead of its recursive procedures. See also Micciancio and Walter 2014 for such a discussion.) We represent the shortest non-zero vector as \(\mathbf {s} = v_1 \mathbf {b}_1 + \cdots + v_n \mathbf {b}_n \in L\) for some unknown integers \(v_i\)’s. With the Gram–Schmidt information of \(\mathbf {B}\), it is rewritten as

$$ \mathbf {s} = \sum _{i=1}^n v_i \left( \mathbf {b}_i^* + \sum _{j=1}^{i-1} \mu _{i, j} \mathbf {b}_j^* \right) = \sum _{j=1}^n \left( v_j + \sum _{i=j+1}^n \mu _{i, j} v_i \right) \mathbf {b}_j^*. $$

Due to the orthogonality of the Gram–Schmidt vectors \(\mathbf {b}_j^*\)’s, the squared norms of the projections of the vector \(\mathbf {s}\) are given, for every \(1 \le k \le n\), by

$$\begin{aligned} \Vert \pi _k(\mathbf {s}) \Vert ^2 = \sum _{j=k}^n \left( v_j + \sum _{i=j+1}^n \mu _{i, j} v_i \right) ^2 \Vert \mathbf {b}_j^* \Vert ^2. \end{aligned}$$

If \(\mathbf {s}\) is a leaf of the enumeration tree, then its projections satisfy \(\Vert \pi _{k}(\mathbf {s}) \Vert ^2 \le R^2\) for all \(1 \le k \le n\). These n inequalities, together with the above equations, enable us to perform an exhaustive search for the integral coordinates \(v_n, v_{n-1}, \dots , v_1\) of \(\mathbf {s}\):

$$\begin{aligned} \left( v_{k} + \sum _{i = k+1}^n \mu _{i, k} v_i \right) ^2 \le \dfrac{R^2 - \sum _{j = k+1}^n \left( v_j + \sum _{i = j+1}^n \mu _{i, j} v_i \right) ^2 \Vert \mathbf {b}_j^* \Vert ^2}{\Vert \mathbf {b}_{k}^* \Vert ^2} \end{aligned}$$
(2)

for every \(1 \le k \le n\). We start with \(k=n\) in Eq. (2), that is, \(0 \le v_n \le \frac{R}{\Vert \mathbf {b}_n^* \Vert }\), because we can restrict the search to “positive” nodes due to the symmetry of the enumeration tree. Choosing a candidate for \(v_n\), we move to the next index \(k=n-1\) in Eq. (2), that is, \((v_{n-1} + \mu _{n, n-1} v_n)^2 \le \frac{R^2 - v_n^2 \Vert \mathbf {b}_n^* \Vert ^2}{\Vert \mathbf {b}_{n-1}^* \Vert ^2}\), to find a candidate for \(v_{n-1}\). Repeating this procedure, assume that the integers \(v_n, \dots , v_{k+1}\) have been found for some \(1< k < n\). Then Eq. (2) enables us to compute an interval \(I_k\) such that \(v_k \in I_k\), and thus to perform an exhaustive search for the integer \(v_k\). A depth-first search of the tree corresponds to enumerating the interval from its middle, namely, a zig-zag search like

$$ v_k = \left\lfloor c_k \right\rceil ,~ \left\lfloor c_k \right\rceil \pm 1,~ \left\lfloor c_k \right\rceil \pm 2,~ \cdots , $$

where \(c_k = -\sum _{i=k+1}^n \mu _{i, k} v_i\). The basic Schnorr–Euchner enumeration algorithm Schnorr and Euchner (1994) is as below (see Gama et al. 2010, Algorithm 2 for the algorithm with some improvements).

[Algorithm: the basic Schnorr–Euchner enumeration (figure omitted)]
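As a substitute for the omitted figure, the following Python sketch implements the same depth-first search, reusing the dot and gram_schmidt helpers from the sketch in Sect. 2.1. For simplicity it scans each interval \(I_k\) in increasing order rather than in the zig-zag order and does not exploit the \(\pm \) symmetry; both choices affect only the search order (the zig-zag order typically prunes earlier), not correctness:

```python
import math

def enumerate_shortest(B, R2):
    """Sketch of Schnorr-Euchner enumeration: depth-first search for integer
    coordinates v minimizing ||v_1 b_1 + ... + v_n b_n||^2 within squared
    radius R2. Returns (squared norm, coordinates) of the best vector found,
    or (R2, None) if no non-zero vector of squared norm below R2 exists."""
    n = len(B)
    Bstar, mu = gram_schmidt(B)           # helpers from the earlier sketch
    gs2 = [dot(b, b) for b in Bstar]      # squared Gram-Schmidt norms
    v = [0] * n
    best = [R2, None]

    def search(k, norm2):
        # v[k+1 .. n-1] are fixed; norm2 is their projected squared norm
        if k < 0:                          # reached a leaf: a full lattice vector
            if 1e-9 < norm2 < best[0]:     # skip the zero vector
                best[0], best[1] = norm2, v[:]
            return
        c = -sum(mu[i][k] * v[i] for i in range(k + 1, n))  # center c_k
        r = math.sqrt(max(best[0] - norm2, 0.0) / gs2[k])   # interval radius, Eq. (2)
        for vk in range(math.ceil(c - r), math.floor(c + r) + 1):
            d2 = (vk - c) ** 2 * gs2[k]    # contribution of level k
            if norm2 + d2 <= best[0]:
                v[k] = vk
                search(k - 1, norm2 + d2)
        v[k] = 0

    search(n - 1, 0.0)
    return best[0], best[1]
```

Calling enumerate_shortest(B, dot(B[0], B[0]) * (1 + 1e-9)) on an LLL-reduced basis guarantees a non-empty search, since \(\mathbf {b}_1\) itself is then a leaf within the radius.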

The running time of the enumeration algorithm is essentially determined by the total number of tree nodes N. An estimate of N can be derived from the Gaussian Heuristic. More precisely, the number of nodes at level \(\ell \) is exactly half the number of vectors in the projected lattice \(\pi _{n+1-\ell }(L)\) with norm at most R. Since \(\mathrm {vol}(\pi _{n+1-\ell }(L)) = \prod _{i = n+1-\ell }^n \Vert \mathbf {b}_i^* \Vert \), the Gaussian Heuristic predicts the number of nodes at level \(\ell \) scanned by the Schnorr–Euchner algorithm to be close to

$$\begin{aligned} H_\ell \approx \frac{1}{2} \cdot \dfrac{V_\ell (R)}{\prod _{i=n+1-\ell }^n \Vert \mathbf {b}_i^* \Vert }. \end{aligned}$$

Then \(N \approx \sum _{\ell = 1}^n H_\ell \). For a “good” basis (reduced by LLL or BKZ, introduced in the next subsection), we have \(\Vert \mathbf {b}_i^* \Vert / \Vert \mathbf {b}_{i+1}^* \Vert \approx q\) for some constant q. This is called the geometric series assumption (GSA), first introduced by Schnorr (2003). The constant q depends on the reduction algorithm. For example, we experimentally have \(q \approx 1.04\) for LLL and \(q \approx 1.025\) for BKZ with blocksize 20 on high-dimensional lattices. (See Gama and Nguyen 2008 for details.) Now we take the enumeration radius \(R = \sqrt{\gamma _n} \mathrm {vol}(L)^{1/n}\), which is optimal in the worst case. With the constant q, we estimate

$$ H_\ell \approx \dfrac{q^{(n-\ell )(n-1)/2}V_\ell (\sqrt{\gamma _n})}{2q^{(n-\ell -1)(n-\ell )/2}} = q^{\ell (n-\ell )/2} 2^{O(n)} $$

since we can roughly estimate \(V_\ell (\sqrt{\gamma _n}) = 2^{O(n)}\) from \(\sqrt{\gamma _n} = \Theta \left( \sqrt{n} \right) \) Gama et al. (2010). The right-hand side is maximized at \(\ell = \frac{n}{2}\), and it is less than \(q^{n^2/8} 2^{O(n)}\). Thus the maximum of \(H_\ell \) is super-exponential in n and is reached around \(\ell \approx \frac{n}{2}\). (See Gama et al. 2010, Fig. 1 for the actual number of nodes, which is very close to this prediction.) Since a smaller q is obtained for a more reduced basis, the more reduced the input basis is, the fewer the nodes in the enumeration tree, and the cheaper the enumeration cost.
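Under the GSA with ratio q and \(\mathrm {vol}(L) = 1\), we have \(\Vert \mathbf {b}_i^* \Vert = q^{(n+1)/2 - i}\), so the denominator above equals \(q^{-\ell (n-\ell )/2}\) and the prediction simplifies to \(H_\ell \approx \frac{1}{2} V_\ell (R)\, q^{\ell (n-\ell )/2}\). A small Python sketch of this rough model (a prediction, not a measured node count; the function name is ours):

```python
import math

def predicted_enum_nodes(n, q, R):
    """Predicted nodes per level of the enumeration tree under the GSA
    (ratio q, vol(L) = 1) and the Gaussian Heuristic:
    H_l ~ 0.5 * V_l(R) * q^(l*(n-l)/2) for l = 1, ..., n."""
    H = []
    for l in range(1, n + 1):
        # log of V_l(R) = nu_l * R^l, computed via log-Gamma to avoid overflow
        log_V = (l / 2) * math.log(math.pi) - math.lgamma(1 + l / 2) + l * math.log(R)
        H.append(0.5 * math.exp(log_V + (l * (n - l) / 2) * math.log(q)))
    return H

# Example: an LLL-quality basis (q ~ 1.04) in dimension n = 100 with a
# Gaussian-Heuristic-like radius; the maximum is reached near l = n/2.
H = predicted_enum_nodes(100, 1.04, math.sqrt(100 / (2 * math.pi * math.e)))
print(1 + max(range(100), key=lambda i: H[i]), max(H))
```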

It is possible to obtain substantial speedups using the pruning techniques of Gama et al. (2010). The idea is not to enumerate all the tree nodes, but to discard certain branches. (See Aono et al. 2018 for a lower bound on the time complexity of pruned enumeration.) However, pruning decreases the success probability of finding the shortest non-zero lattice vector \(\mathbf {s}\). For instance, one might intuitively hope that \(\Vert \pi _{n/2}(\mathbf {s}) \Vert ^2 \lessapprox \Vert \mathbf {s} \Vert ^2/2\), which is more restrictive than the inequality \(\Vert \pi _{n/2}(\mathbf {s}) \Vert ^2 \le \Vert \mathbf {s} \Vert ^2\). Formally, pruning replaces each of the n inequalities \(\Vert \pi _k(\mathbf {s})\Vert ^2 \le R^2\) by \(\Vert \pi _{k}(\mathbf {s}) \Vert ^2 \le R_{n+1-k}^2\), where \(R_1 \le \dots \le R_n = R\) are n real numbers defined by a pruning strategy. A pruning parameter can be set in the fplll library The FPLLL development team (2016), and a pruning function for setting the \(R_i\)’s is implemented in the progressive BKZ library Aono et al. (2016).
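As a concrete example of a pruning strategy, the simple “linear pruning” analyzed by Gama et al. (2010) interpolates linearly in the squared radius (the optimized numerical pruning functions used in fplll and the progressive BKZ library are more elaborate); a one-function sketch:

```python
import math

def linear_pruning_radii(n, R):
    """Linear pruning: R_k = sqrt(k / n) * R for k = 1, ..., n,
    so that R_1 <= ... <= R_n = R as required above."""
    return [math.sqrt(k / n) * R for k in range(1, n + 1)]
```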

3.1.2 Exponential-Space Exact Algorithms: Sieve

These algorithms have a better asymptotic running time, but they all require exponential space \(2^{\Theta (n)}\). The first algorithm of this kind is the randomized sieve algorithm proposed by Ajtai, Kumar, and Sivakumar (AKS) Ajtai et al. (2001). The AKS algorithm outputs the shortest lattice vector with overwhelming probability, and its asymptotic complexity is much better than that of deterministic enumeration algorithms with \(2^{O(n^2)}\) time complexity. The main idea is as follows (see also Nguyen 2008, Sect. 3 or Nguyen 2009): Given a lattice L of dimension n, consider a ball S centered at the origin and of radius r with \(\lambda _1(L) \le r \le O(\lambda _1(L))\). Then \(\#(L \cap S) = 2^{O(n)}\) by the Gaussian Heuristic. If we could perform an exhaustive search over all vectors in \(L \cap S\), we could find the shortest lattice vector within \(2^{O(n)}\) polynomial-time operations. Enumeration enables such an exhaustive search of \(L \cap S\), but it requires going through all the vectors in the union set \(\widetilde{S} = \bigcup _{k=1}^n \left( \pi _k(L) \cap S \right) \), whose total number is much larger than \(\# (L \cap S)\). In contrast, the AKS algorithm performs a randomized sampling of \(L \cap S\), without going through the set \(\widetilde{S}\). If the sampling were uniform over \(L \cap S\), a short lattice vector would be included in N samples with probability close to 1 for \(N \gg \#(L \cap S)\). Unfortunately, it is unclear whether this uniformity is satisfied by the AKS sampling. However, it can be shown that there exists a vector \(\mathbf {w} \in L \cap S\) such that both \(\mathbf {w}\) and \(\mathbf {w} + \mathbf {s}\) are sampled with non-zero probability for some shortest lattice vector \(\mathbf {s}\). Thus the shortest lattice vector is obtained by computing the shortest difference over all pairs of the N sampled vectors in \(L \cap S\).

There are several heuristic variants of the AKS algorithm with time complexity \(2^{O(n)}\) and space complexity exponential in n for an n-dimensional lattice L Bai et al. (2016), Herold and Kirshanova (2017), Micciancio and Voulgaris (2010), Nguyen (2008). Given a basis of L, these algorithms build databases of lattice vectors with norms at most \(R \cdot \mathrm {GH}(L)\) for a small constant \(R\) such as \(R^2 = \frac{4}{3}\). In generic sieves, one checks whether the sum or the difference of any pair of vectors in the database becomes shorter. The basic sieve algorithm is as below.

[Algorithm: the basic sieve (figure omitted)]
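As a substitute for the omitted figure, a toy Python sketch of the generic pair-sieve loop follows; the database D is a list of integer lattice vectors (e.g., random small combinations of a reduced basis), and real implementations add the data structures and tricks discussed below:

```python
def norm2(v):
    """Squared Euclidean norm."""
    return sum(x * x for x in v)

def pair_sieve(D, goal2):
    """Toy 2-sieve: while possible, replace the longest vector in the
    database by a strictly shorter sum or difference of a pair.
    Stops when the goal squared norm is reached or no pair reduces."""
    D = sorted((tuple(v) for v in D), key=norm2)
    while norm2(D[0]) > goal2:
        reduced = False
        for i in range(len(D)):
            for j in range(i + 1, len(D)):
                for w in (tuple(a + b for a, b in zip(D[i], D[j])),
                          tuple(a - b for a, b in zip(D[i], D[j]))):
                    if 0 < norm2(w) < norm2(D[-1]):
                        D[-1] = w          # replace the current longest vector
                        D.sort(key=norm2)
                        reduced = True
                        break
                if reduced:
                    break
            if reduced:
                break
        if not reduced:
            break                          # no reducing pair: the sieve is saturated
    return D[0]                            # shortest vector found
```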

In Step 1 of the basic sieve, the database D is initialized by first computing an LLL-reduced basis (see the next subsection for the LLL reduction) and then taking random small integral combinations of the basis vectors. (A natural idea is to use a stronger reduction algorithm such as BKZ in order to generate shorter initial vectors.) The Nguyen–Vidick sieve (2008) finds pairs of vectors \((\mathbf {v}_1, \mathbf {v}_2)\) from D whose sum or difference gives a shorter vector, that is, \(\Vert \mathbf {v}_1 \pm \mathbf {v}_2 \Vert < \max _{\mathbf {v} \in D} \Vert \mathbf {v} \Vert \). Once such a pair is found, the longest vector in the database gets replaced by \(\mathbf {v}_1 \pm \mathbf {v}_2\). The database size is fixed a priori to the asymptotic heuristic minimum \(2^{0.2075 n + O(n)}\) in order to find enough such pairs. The running time is quadratic in the database size. The Gauss sieve (2010) is a variant of the Nguyen–Vidick sieve with substantial improvements; the main one is to divide the database into two parts, the so-called “list” part and the “queue” part. Both parts are separately sorted by Euclidean norm in order to make early reduction likely. When updating vectors, the queue part makes it possible to avoid considering the same pair several times. The running time and the database size of the Gauss sieve are asymptotically the same as those of the Nguyen–Vidick sieve, but its performance is better in practice. The 3-sieve Bai et al. (2016), Herold and Kirshanova (2017) searches for triples of lattice vectors whose sum gives a shorter vector. (cf. the Nguyen–Vidick and the Gauss algorithms are kinds of 2-sieve.) There are more possible triples than pairs to shorten vectors in the database, but searching for such triples is more costly. (Filtering techniques Herold and Kirshanova 2017 are required to speed up such a search.) Several tricks and techniques have been proposed to improve sieve algorithms, such as the SimHash technique Charikar (2002), Ducas (2018), Fitzpatrick et al. (2014). Several practical sieve algorithms have also been implemented in the fplll library The FPLLL development team (2016).

3.2 Approximate-SVP Algorithms

These algorithms are much faster than exact algorithms, but they output short lattice vectors, not necessarily the shortest ones.

3.2.1 LLL Reduction

The first efficient approximate-SVP algorithm is the celebrated algorithm by Lenstra, Lenstra, and Lovász (LLL) Lenstra et al. (1982). It is now the most famous lattice basis reduction algorithm; it finds a lattice basis with short and nearly orthogonal basis vectors. Such a basis is called reduced or good. We introduce the notion of LLL reduction. Let \(\mathbf {B} = (\mathbf {b}_1, \dots , \mathbf {b}_n)\) be a basis of a lattice L, and \(\mathbf {B}^* = (\mathbf {b}_1^*, \dots , \mathbf {b}_n^*)\) its Gram–Schmidt vectors with coefficients \(\mu _{i, j}\). For a parameter \(\frac{1}{4}< \delta < 1\), the basis \(\mathbf {B}\) is called \(\delta \)-LLL-reduced if it satisfies two conditions: (i) (Size-reduction condition) \(|\mu _{i, j}| \le \frac{1}{2}\) for all \(1 \le j < i \le n\). (ii) (Lovász’ condition) \(\delta \Vert \mathbf {b}_{k-1}^* \Vert ^2 \le \Vert \pi _{k-1} (\mathbf {b}_k) \Vert ^2\) for all \(2 \le k \le n\), which can be rewritten as \(\Vert \mathbf {b}_{k}^* \Vert ^2 \ge (\delta - \mu _{k, k-1}^2) \Vert \mathbf {b}_{k-1}^* \Vert ^2\). Any \(\delta \)-LLL-reduced basis satisfies the following properties (see Bremner 2011 for a proof):

  • \(\Vert \mathbf {b}_1 \Vert \le \alpha ^{(n-1)/4} \mathrm {vol}(L)^{1/n}\), where \(\alpha = \frac{4}{4 \delta -1} > \frac{4}{3}\).

  • \(\Vert \mathbf {b}_i \Vert \le \alpha ^{(n-1)/2} \lambda _i(L)\) for \(1 \le i \le n\), and \(\prod _{i=1}^n \Vert \mathbf {b}_i \Vert \le \alpha ^{n(n-1)/4} \mathrm {vol}(L)\).

Given any basis of L, the LLL algorithm finds a \(\delta \)-LLL-reduced basis of L. As seen from the second property above, it solves approximate-SVP with factor \(\alpha ^{(n-1)/2}\). The basic LLL algorithm is given below (see also Galbraith 2012, Chap. 17 or Nguyen 2009).

[Algorithm: the basic LLL algorithm (figure omitted)]
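As a substitute for the omitted figure, a compact textbook-style Python sketch follows (reusing the dot and gram_schmidt helpers from the sketch in Sect. 2.1). For clarity it recomputes the full Gram–Schmidt data after every change, whereas real implementations update it incrementally and manage floating-point precision carefully:

```python
def lll_reduce(B, delta=0.99):
    """Minimal LLL sketch (0-indexed, floating-point). B is a list of
    integer basis vectors; it is modified in place and returned
    delta-LLL-reduced."""
    n = len(B)
    Bstar, mu = gram_schmidt(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):     # size-reduce b_k by b_j
            q = round(mu[k][j])
            if q != 0:
                B[k] = [x - q * y for x, y in zip(B[k], B[j])]
                Bstar, mu = gram_schmidt(B)  # lazy full recomputation
        # Lovasz condition: ||b_k^*||^2 >= (delta - mu_{k,k-1}^2) ||b_{k-1}^*||^2
        if dot(Bstar[k], Bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bstar[k - 1], Bstar[k - 1]):
            k += 1
        else:
            B[k - 1], B[k] = B[k], B[k - 1]  # swap the adjacent pair and go back
            Bstar, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B

# Example: lll_reduce([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]) yields a basis of
# short, nearly orthogonal vectors spanning the same lattice.
```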

In the LLL algorithm, a pair of adjacent basis vectors \((\mathbf {b}_{k-1}, \mathbf {b}_k)\) is swapped if it does not satisfy Lovász’ condition. Thus the output basis is \(\delta \)-LLL-reduced if the algorithm terminates. The quantity \(\mathrm {Pot}(\mathbf {B}) = \prod _{i=1}^{n-1} \Vert \mathbf {b}_i^* \Vert ^{2(n-i)}\) is called the potential of a basis \(\mathbf {B}\). Every swap in the LLL algorithm decreases the potential of the basis by a factor of at least \(\delta < 1\). (cf. the size-reduction procedure does not change the potential.) This guarantees the termination of the LLL algorithm in time polynomial in n. Furthermore, the LLL algorithm can also be applied to linearly dependent vectors in order to remove their linear dependency. (See Bremner 2011, Chap. 6, Cohen 2013, Sect. 2.6.4, Pohst 1987 or Sims 1994, Sect. 8.7 for details.)

3.2.2 Variants of LLL

3.2.2.1 LLL with Deep Insertions (DeepLLL)

This variant is a straightforward generalization of LLL in which non-adjacent basis vectors can be reordered. Specifically, a basis vector \(\mathbf {b}_k\) is inserted between \(\mathbf {b}_{i-1}\) and \(\mathbf {b}_i\) as \(\sigma _{i, k}(\mathbf {B}) = (\dots , \mathbf {b}_{i-1}, \mathbf {b}_k, \mathbf {b}_i, \dots , \mathbf {b}_{k-1}, \mathbf {b}_{k + 1}, \dots )\), called a deep insertion, if the so-called deep exchange condition \(\Vert \pi _i(\mathbf {b}_k) \Vert ^2 < \delta \Vert \mathbf {b}_i^* \Vert ^2\) is satisfied for \(\frac{1}{4}< \delta < 1\). In this case, the new Gram–Schmidt vector at the ith position is \(\pi _i(\mathbf {b}_k)\), which is strictly shorter than the old Gram–Schmidt vector \(\mathbf {b}_i^*\). A basis \(\mathbf {B} = (\mathbf {b}_1, \dots , \mathbf {b}_n)\) is called \(\delta \)-DeepLLL-reduced if it satisfies two conditions: (i) it is size-reduced, and (ii) \(\Vert \pi _i(\mathbf {b}_k) \Vert ^2 \ge \delta \Vert \mathbf {b}_i^* \Vert ^2\) for all \(1 \le i < k \le n\). (The case \(i = k-1\) is just Lovász’ condition.) Any \(\delta \)-DeepLLL-reduced basis satisfies the following properties Yasuda and Yamaguchi (2019), Theorem 1:

  • \(\Vert \mathbf {b}_1 \Vert \le \alpha ^\frac{n-1}{2n} \left( 1 + \frac{\alpha }{4} \right) ^\frac{(n-1)(n-2)}{4n} \mathrm {vol}(L)^\frac{1}{n}\), where \(\alpha \) is the same as in LLL.

  • \(\Vert \mathbf {b}_i \Vert \le \sqrt{\alpha } \left( 1 {+} \frac{\alpha }{4} \right) ^\frac{n-2}{2} \lambda _i(L)\) for \(1 \le i \le n\), and \(\prod _{i{=}1}^n \Vert \mathbf {b}_i \Vert \le \left( 1 + \frac{\alpha }{4} \right) ^\frac{n(n-1)}{4} \mathrm {vol}(L)\).

These properties are strictly stronger than the case of LLL. The basic DeepLLL algorithm Schnorr and Euchner (1994) is given below (see also Bremner 2011, Fig. 5.1 or Cohen 2013, Algorithm 2.6.4).

[Algorithm: the basic DeepLLL algorithm (figure omitted)]
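As a substitute for the omitted figure, the following Python sketch shows the core scan of DeepLLL: it finds the first pair (i, k) violating the deep exchange condition, computing \(\Vert \pi _i(\mathbf {b}_k) \Vert ^2\) incrementally starting from \(\Vert \mathbf {b}_k \Vert ^2\) (helpers from the earlier sketches; indices 0-based). The full algorithm repeatedly applies the insertion \(\sigma _{i, k}\) and re-size-reduces until no violation remains:

```python
def find_deep_insertion(B, delta=0.99):
    """Return the first (i, k) with i < k violating the deep exchange
    condition ||pi_i(b_k)||^2 >= delta * ||b_i^*||^2, or None if B is
    already delta-DeepLLL-reduced (size-reduction assumed)."""
    Bstar, mu = gram_schmidt(B)
    gs2 = [dot(b, b) for b in Bstar]
    for k in range(1, len(B)):
        C = dot(B[k], B[k])                # C = ||pi_0(b_k)||^2 = ||b_k||^2
        for i in range(k):
            if C < delta * gs2[i]:
                return (i, k)              # deep-inserting b_k before b_i shortens b_i^*
            C -= mu[k][i] ** 2 * gs2[i]    # now C = ||pi_{i+1}(b_k)||^2
    return None
```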

Compared with LLL, it is complicated to update the Gram–Schmidt information of \(\mathbf {B}\) after every deep insertion. (See Yamaguchi and Yasuda 2017.) Moreover, a deep insertion does not always decrease the potential of the basis, and thus the complexity of DeepLLL is no longer polynomial-time but potentially super-exponential in the lattice dimension. However, DeepLLL often finds much shorter lattice vectors than LLL in practice Gama and Nguyen (2008).

3.2.2.2 Block Korkine–Zolotarev (BKZ) Algorithm

Let us first introduce a strong notion of reduction: A basis \(\mathbf {B} = (\mathbf {b}_1, \dots , \mathbf {b}_n)\) of a lattice L is called HKZ-reduced if it is size-reduced and it satisfies \(\Vert \mathbf {b}_i^* \Vert = \lambda _1(\pi _i(L))\) for all \(1 \le i \le n\). For \(1 \le i \le j \le n\), denote by \(\mathbf {B}_{[i, j]}\) the local projected block \((\pi _i(\mathbf {b}_i), \pi _i(\mathbf {b}_{i+1}), \dots , \pi _i(\mathbf {b}_j))\), and by \(L_{[i, j]}\) the lattice spanned by \(\mathbf {B}_{[i, j]}\). The notion of BKZ-reduction is a local block version of HKZ-reduction Schnorr (1987), Schnorr (1992), Schnorr and Euchner (1994). For a blocksize \(2 \le \beta \le n\), a basis \(\mathbf {B} = (\mathbf {b}_1, \dots , \mathbf {b}_n)\) of a lattice L is called \(\beta \)-BKZ-reduced if it is size-reduced and every local block \(\mathbf {B}_{[j, j+\beta -1]}\) is HKZ-reduced for \(1 \le j \le n-\beta +1\). The second condition means \(\Vert \mathbf {b}_j^* \Vert = \lambda _1 ( L_{[j, k]} )\) for \(1 \le j \le n-1\) with \(k = \min (j + \beta -1, n)\). Every \(\beta \)-BKZ-reduced basis satisfies \(\Vert \mathbf {b}_1 \Vert \le \gamma _\beta ^{(n-1)/(\beta -1)}\lambda _1(L)\) Schnorr (1992). The BKZ algorithm Schnorr and Euchner (1994) finds a \(\beta \)-BKZ-reduced basis, and it calls LLL to reduce every local block before finding the shortest vector over the block lattice. (As \(\beta \) increases, a shorter lattice vector can be found, but the running time is more costly.)

[Algorithm: the basic BKZ algorithm (figure omitted)]

It is customary to terminate the BKZ algorithm after a selected number of calls to an exact-SVP algorithm over block lattices. (See Hanrot et al. 2011 for an analysis.) Efficient variants such as BKZ 2.0 Chen (2011) have been proposed, and some of them have been implemented in the fplll library The FPLLL development team (2016). The Hermite factor is a good index for measuring the practical output quality of a reduction algorithm. (See Gama and Nguyen 2008 for their experiments.) It is defined as \(\gamma = \frac{\Vert \mathbf {v} \Vert }{\mathrm {vol}(L)^{1/n}}\), where \(\mathbf {v}\) is the shortest basis vector output by a reduction algorithm for a basis of a lattice L of dimension n. Under the Gaussian Heuristic and the GSA, the limiting value of the root Hermite factor of BKZ with blocksize \(\beta \) is predicted in Chen (2013) as

$$ \lim _{n \rightarrow \infty } \gamma ^\frac{1}{n} = \left( \nu _\beta ^{-\frac{1}{\beta }} \right) ^\frac{1}{\beta -1} \sim \left( \frac{\beta }{2 \pi e} (\pi \beta )^\frac{1}{\beta } \right) ^\frac{1}{2(\beta -1)}. $$

There is experimental evidence supporting this prediction for large blocksizes such as \(\beta > 50\). (Note that the Gaussian Heuristic holds in practice for random lattices in high dimensions, but unfortunately it is violated in low dimensions.) In a simple form based on the Gaussian Heuristic, the GSA shape of a \(\beta \)-BKZ-reduced basis of volume 1 is predicted as \(\Vert \mathbf {b}_i^* \Vert \approx \alpha _\beta ^{\frac{n-1}{2}-i}, \text{ where } \alpha _\beta = \left( \frac{\beta }{2 \pi e} \right) ^{1/\beta }.\) This is reasonably accurate in practice for \(\beta > 50\) and \(\beta \ll n\). (See Chen 2013, 2011; Yu and Ducas 2017.) Other variants of BKZ have been proposed, such as slide reduction Gama and Nguyen (2008), self-dual BKZ Micciancio and Walter (2016), and progressive-BKZ Aono et al. (2016). As a mathematical improvement of BKZ, DeepBKZ was recently proposed in Yamaguchi and Yasuda (2017), in which DeepLLL is called as a subroutine in place of LLL. In particular, DeepBKZ finds short lattice vectors with smaller blocksizes than BKZ in practice. (Dual and self-dual variants of DeepBKZ were also proposed in Yasuda (2018), Yasuda et al. (2018).)
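Both predictions above are straightforward to evaluate numerically; a small Python sketch (the function names are ours; the GSA shape uses 0-based indices so that the product of the predicted lengths is 1):

```python
import math

def predicted_root_hermite(beta):
    """Limiting root Hermite factor of BKZ-beta: (nu_beta^(-1/beta))^(1/(beta-1)),
    with nu_beta computed via log-Gamma."""
    log_nu = (beta / 2) * math.log(math.pi) - math.lgamma(1 + beta / 2)
    return math.exp(-log_nu / (beta * (beta - 1)))

def predicted_gsa_shape(n, beta):
    """Predicted ||b_i^*|| for a beta-BKZ-reduced basis of volume 1:
    alpha_beta^((n-1)/2 - i) with alpha_beta = (beta/(2*pi*e))^(1/beta)."""
    alpha = (beta / (2 * math.pi * math.e)) ** (1.0 / beta)
    return [alpha ** ((n - 1) / 2 - i) for i in range(n)]

# Example: predicted_root_hermite(50) is about 1.012.
```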

4 The SVP Challenge and Recent Strategies

To test algorithms for solving SVP, sample lattice bases are presented in Darmstadt (2010) for dimensions from 40 up to 200. (The lattices are random in the sense of Goldstein and Mayer Goldstein and Mayer (2003).) For every lattice L, any non-zero lattice vector with (Euclidean) norm less than \(1.05\mathrm {GH}(L)\) can be submitted to the hall of fame of the SVP challenge. To enter the hall of fame, the lattice vector is required to be shorter than a previous one in the same dimension (with a possibly different seed). Note that not all lattice vectors in the hall of fame are necessarily the shortest. In this section, we introduce two recent strategies for solving the SVP challenge in high dimensions such as \(n \ge 150\).

4.1 The Random Sampling Strategy

Early in 2017, a non-zero vector in a lattice L of dimension \(n=150\) with norm less than \(1.05 \mathrm {GH}(L)\) was first found by Teruya and Kashiwabara using many high-performance servers. (See Teruya et al. 2018 for their large-scale experiments.) Their strategy is based on the work of Fukase and Kashiwabara (2015), which is an extension of Schnorr’s random sampling reduction (RSR) Schnorr (2003). Here we review random sampling (SA) and RSR. For a lattice L of dimension n, fix a constant \(1 \le u < n\) bounding the search space. Given a basis \(\mathbf {B} = (\mathbf {b}_1, \ldots , \mathbf {b}_n)\) of L, SA samples a vector \(\mathbf {v} = \sum _{i = 1}^{n} \nu _i \mathbf {b}_i^*\) in L satisfying \(\nu _i \in (-1/2, 1/2]\) for \(1 \le i < n-u\), \(\nu _i \in (-1, 1]\) for \(n-u \le i < n\), and \(\nu _n = 1\). Let \(S_{u, \mathbf {B}}\) denote the set of such lattice vectors. Since the number of candidates for \(\nu _i\) with \(| \nu _i | \le 1/2\) (resp. \(| \nu _i |\le 1\)) is 1 (resp. 2), there are \(2^u\) lattice vectors in \(S_{u, \mathbf {B}}\). By calling SA up to \(2^u\) times, RSR generates \(\mathbf {v}\) satisfying \(\Vert \mathbf {v} \Vert ^2 < 0.99 \Vert \mathbf {b}_1 \Vert ^2\) Schnorr (2003), Theorem 1. Two extensions are proposed in Fukase and Kashiwabara (2015) for solving the SVP challenge; the first is to represent a lattice vector by a sequence of natural numbers via the Gram–Schmidt orthogonalization, and to sample lattice vectors according to an appropriate distribution over such representations. The second is to decrease the sum of the squared Gram–Schmidt lengths \(\mathrm {SS}(\mathbf {B}) := \sum _{i=1}^n \Vert \mathbf {b}_i^* \Vert ^2\) in order to make it easier to sample very short lattice vectors. The effectiveness of these extensions is supported by their statistical analysis on lattices. Specifically, under the randomness assumption (RA), they roughly estimate that the distribution of the squared length of a sampled vector \(\Vert \mathbf {v} \Vert ^2 = \sum _{i = 1}^n \nu _i^2 \Vert \mathbf {b}_i^* \Vert ^2\) follows the normal distribution \(\mathcal {N}(\mu , \sigma ^2)\) with

$$ \mu = \frac{\sum _{i = 1}^n \Vert \mathbf {b}_i^* \Vert ^2}{12} \quad \text{ and } \quad \sigma = \left( \frac{\sum _{i = 1}^n \Vert \mathbf {b}_i^* \Vert ^4 }{180} \right) ^{1/2}. $$

This implies that shorter lattice vectors are sampled as the squared sum \(\mathrm {SS} (\mathbf {B})\) becomes smaller. The basic strategy in Fukase and Kashiwabara (2015); Teruya et al. (2018) then consists of the following two steps: (i) reduce an input basis so as to make the sum of its squared Gram–Schmidt lengths as small as possible, using LLL and insertions of sampled lattice vectors as in BKZ (see also Yasuda et al. 2017 for such a procedure); (ii) with such a reduced basis \(\mathbf {B}\), find a short lattice vector by randomly sampling \(\mathbf {v} = \sum _{i = 1}^n \nu _i \mathbf {b}_i^*\).
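Under RA, these statistics are immediate to compute from the squared Gram–Schmidt lengths; a minimal Python sketch (the function name is ours):

```python
def sampling_statistics(gs2):
    """Mean and standard deviation of ||v||^2 for an SA-sampled vector under
    the randomness assumption, where gs2[i] = ||b_i^*||^2:
    mu = sum(gs2) / 12 and sigma = sqrt(sum of squared entries / 180)."""
    mean = sum(gs2) / 12.0
    sigma = (sum(x * x for x in gs2) / 180.0) ** 0.5
    return mean, sigma
```

In particular, the mean equals \(\mathrm {SS}(\mathbf {B})/12\), which is exactly why decreasing \(\mathrm {SS}(\mathbf {B})\) helps.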

As a subsequent work, Aono and Nguyen (2017) introduced lattice enumeration with discrete pruning, which generalizes random sampling, and also provided a deep analysis of discrete pruning using the volume of the intersection of a ball with a box. In particular, under RA, the expectation of the squared length of a short vector generated by lattice enumeration with discrete pruning from the so-called tag \(\mathbf {t} = (t_1, \dots , t_n) \in \mathbb {Z}^n\) is roughly given by \( E(\mathbf {t}) = \sum _{i=1}^n \left( \frac{t_i^2}{4} + \frac{t_i}{4} + \frac{1}{12} \right) \Vert \mathbf {b}_i^* \Vert ^2, \) which generalizes the above mean \(\mu \). Moreover, it is shown in Aono and Nguyen (2017) that the empirical correlation between \(E(\mathbf {t})\) and the volume of the ball-box intersection is negative. This gives statistical evidence that decreasing \(\mathrm {SS}(\mathbf {B})\) serves as a cheap surrogate for increasing the volume of the ball-box intersection; indeed, the volume calculation presented in Aono and Nguyen (2017) is much less efficient than the computation of \(\mathrm {SS}(\mathbf {B})\). In 2018, Matsuda et al. (2018) investigated the strategy of Fukase and Kashiwabara (2015) via the Gram–Charlier approximation in order to precisely estimate the success probability of sampling short lattice vectors, and also discussed the effectiveness of decreasing \(\mathrm {SS}(\mathbf {B})\) for sampling short lattice vectors.
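For reference, \(E(\mathbf {t})\) is a one-line computation (the all-zero tag recovers the mean \(\mu \) above; the function name is ours):

```python
def expected_sq_length(tag, gs2):
    """E(t) = sum_i (t_i^2/4 + t_i/4 + 1/12) * ||b_i^*||^2 for a tag t,
    where gs2[i] = ||b_i^*||^2."""
    return sum((t * t / 4 + t / 4 + 1 / 12) * g for t, g in zip(tag, gs2))
```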

4.2 The Sub-Sieving Strategy

Around the end of August 2018, many records for the SVP challenge in dimensions up to 155 were set using the sub-sieving strategy of Ducas (2018). (See Albrecht et al. 2019 for their experimental report.) The basic idea is to reduce SVP in a high dimension to the bounded distance decoding (BDD) problem in a lower dimension, a particular case of CVP in which the target vector is known to be somewhat close to the lattice. This requires finding an enormous number of short vectors in projected lattices, and sieving is well suited to collecting such vectors. In particular, the sieve is performed in projected lattices instead of the full lattice.

The specific strategy is as follows Ducas (2018), Section 3. Given a basis \(\mathbf {B} = (\mathbf {b}_1, \dots , \mathbf {b}_n)\) of a lattice L of high dimension n, we fix an integer d with \(1 \le d \le n\), and perform the sieve in the projected lattice \(\pi _d(L)\) to obtain a list of short lattice vectors

$$ D := \left\{ \mathbf {v} \in \pi _d(L) \mid \mathbf {v} \ne \mathbf {0} \text{ and } \Vert \mathbf {v} \Vert \le \sqrt{\dfrac{4}{3}} \mathrm {GH}\left( \pi _d(L) \right) \right\} . $$

We hope that the desired shortest non-zero vector \(\mathbf {s}\) in the full lattice L projects to a vector in the above list D, that is, that it satisfies \(\pi _d(\mathbf {s}) \ne \mathbf {0}\) and \(\Vert \pi _d(\mathbf {s}) \Vert \le \sqrt{\frac{4}{3}} \mathrm {GH}(\pi _d(L))\). (Note that \(\pi _d(\mathbf {s}) = \mathbf {0}\) means that the vector \(\mathbf {s}\) lies in the sub-lattice \(\mathcal {L}(\mathbf {b}_1, \dots , \mathbf {b}_{d-1})\) of L; we ignore this case here.) Since \(\Vert \pi _d(\mathbf {s}) \Vert \le \Vert \mathbf {s} \Vert \approx \mathrm {GH}(L)\), the condition

$$\begin{aligned} \mathrm {GH}(L) \le \sqrt{\frac{4}{3}} \mathrm {GH}\left( \pi _d(L) \right) \end{aligned}$$
(3)

is sufficient for our hope to be satisfied. This condition is not tight, since the projected vector \(\pi _d(\mathbf {s})\) becomes shorter than the full vector \(\mathbf {s}\) as the index d increases. After an exhaustive search over the list D, we may assume that the projected vector \(\mathbf {s}_d := \pi _d(\mathbf {s}) \in D\) is known. We need to recover the full vector \(\mathbf {s}\) from \(\mathbf {s}_d\). Write \(\mathbf {s} = \mathbf {B} \mathbf {x}\) for some \(\mathbf {x} \in \mathbb {Z}^n\), and split \(\mathbf {x}\) as \((\mathbf {x}_1 \mid \mathbf {x}_2)\) with \(\mathbf {x}_1 \in \mathbb {Z}^{d-1}\) and \(\mathbf {x}_2 \in \mathbb {Z}^{n-d+1}\). Then \(\mathbf {s}_d = \pi _d(\mathbf {B} \mathbf {x}) = \mathbf {B}_d \mathbf {x}_2\), where \(\mathbf {B}_d = \left( \pi _d(\mathbf {b}_d), \dots , \pi _d(\mathbf {b}_n)\right) \), and hence \(\mathbf {x}_2\) is known. Now we need to recover \(\mathbf {x}_1\) so that \(\mathbf {s} = \mathbf {B}_1 \mathbf {x}_1 + \mathbf {B}_2 \mathbf {x}_2\) is small (or the shortest), where \(\mathbf {B} = (\mathbf {B}_1 \mid \mathbf {B}_2)\). This is an easy BDD instance over the \((d-1)\)-dimensional lattice spanned by \(\mathbf {B}_1\) for the target vector \(\mathbf {B}_2 \mathbf {x}_2\). A sufficient condition for solving this instance with Babai’s nearest plane algorithm Babai (1986) is that \(| \langle \mathbf {b}_i^*, \mathbf {s} \rangle | \le \frac{1}{2} \Vert \mathbf {b}_i^* \Vert ^2\) for all \(1 \le i < d\). (See also Galbraith 2012, Chap. 18 for Babai’s algorithms.) Since \(| \langle \mathbf {b}_i^*, \mathbf {s} \rangle | \le \Vert \mathbf {b}_i^* \Vert \Vert \mathbf {s} \Vert \), a further sufficient condition is that \(\mathrm {GH}(L) \le \frac{1}{2} \min _{i < d} \Vert \mathbf {b}_i^* \Vert .\) This condition is far from tight, and it should not be a serious issue in practice. Indeed, even for a strongly reduced basis, the first d Gram–Schmidt lengths will not be smaller than \(\mathrm {GH}(L)\) by much more than a factor of 2. (BKZ-preprocessing with blocksize \(\beta = \frac{n}{2}\) is assumed in Ducas (2018).) The concrete maximal value of d satisfying condition (3) depends on the shape of the basis \(\mathbf {B}\). It is estimated in Ducas (2018) that \(d = \Theta (n / \log n)\) is suitable over a quasi-HKZ-reduced basis.
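The lifting step is Babai’s nearest plane algorithm, sketched below in Python (helpers from the earlier sketches). In the notation above, one may call it with the basis \(\mathbf {B}_1 = (\mathbf {b}_1, \dots , \mathbf {b}_{d-1})\) and the target \(-\mathbf {B}_2 \mathbf {x}_2\) to recover \(\mathbf {B}_1 \mathbf {x}_1\):

```python
def babai_nearest_plane(B, t):
    """Babai's nearest plane algorithm: return a lattice vector of L(B)
    close to the target t. It recovers the exact closest vector on easy
    BDD instances, e.g. when |<b_j^*, t - v>| <= ||b_j^*||^2 / 2 holds at
    each step of the descent."""
    Bstar, mu = gram_schmidt(B)
    e = [float(x) for x in t]              # residual; equals t - v at the end
    for j in range(len(B) - 1, -1, -1):
        c = round(dot(e, Bstar[j]) / dot(Bstar[j], Bstar[j]))
        e = [x - c * y for x, y in zip(e, B[j])]
    return [x - y for x, y in zip(t, e)]   # the close lattice vector v = t - e
```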

In 2019, Albrecht et al. (2019) proposed the General Sieve Kernel (G6K), an abstract stateful machine supporting a wide variety of lattice reduction strategies based on sieving algorithms. They provide a highly optimized, multi-threaded, and tweakable implementation of G6K as an open-source C++ and Python library. A number of records in the hall of fame for the SVP challenge were obtained by the sub-sieving strategy on G6K. (As of June 2019, the highest dimension solved in the SVP challenge was 157, using G6K.) Specifically, their experiments suggest that on average \(d = 11.46 + 0.0757 n\) is a suitable number of free dimensions for the sub-sieving strategy in the SVP challenge in high dimensions n. Furthermore, their solution of the SVP challenge in dimension 151 was found about 400 times faster than the time reported for the SVP challenge in dimension 150, which was solved early in 2017 by the random sampling strategy.