
1 Introduction

Hard lattice problems. Lattices are discrete subgroups of \(\mathbb {R}^d\). More concretely, given a basis \(B = \{\varvec{b}_1, \dots , \varvec{b}_d\} \subset \mathbb {R}^d\), the lattice \(\mathcal {L}= \mathcal {L}(B)\) generated by B is defined as \(\mathcal {L}(B) = \left\{ \sum _{i=1}^d \lambda _i \varvec{b}_i: \lambda _i \in \mathbb {Z}\right\} \). Given a basis of a lattice \(\mathcal {L}\), the Shortest Vector Problem (SVP) asks to find a shortest non-zero vector in \(\mathcal {L}\) under the Euclidean norm, i.e., a non-zero lattice vector \(\varvec{s}\) of norm \(\Vert \varvec{s}\Vert = \lambda _1(\mathcal {L}) := \min _{\varvec{v} \in \mathcal {L}\setminus \{\varvec{0}\}} \Vert \varvec{v}\Vert \). Given a basis of a lattice and a target vector \(\varvec{t} \in \mathbb {R}^d\), the Closest Vector Problem (CVP) asks to find a vector \(\varvec{s} \in \mathcal {L}\) closest to \(\varvec{t}\) under the Euclidean distance, i.e. such that \(\Vert \varvec{s} - \varvec{t}\Vert = \min _{\varvec{v} \in \mathcal {L}} \Vert \varvec{v} - \varvec{t}\Vert \).

These two hard problems are fundamental in the study of lattice-based cryptography, as the security of these schemes is directly related to the hardness of SVP and CVP in high dimensions. Various other hard lattice problems, such as Learning With Errors (LWE) and the Shortest Integer Solution (SIS) problem are closely related to SVP and CVP, and many reductions between these and other hard lattice problems are known; see e.g. [LvdPdW12, Fig. 3.1] or [Ste16] for an overview. These reductions show that being able to solve CVP efficiently implies that almost all other lattice problems can also be solved efficiently in the same dimension, which makes the study of the hardness of CVP even more important for choosing parameters in lattice-based cryptography.

Algorithms for SVP and CVP. Although SVP and CVP are both central in the study of lattice-based cryptography, algorithms for SVP have received somewhat more attention, including a benchmarking website to compare different algorithms [SG15]. Various SVP methods have been studied which can solve CVP as well, such as enumeration (see e.g. [Kan83, FP85, GNR10, MW15]), discrete Gaussian sampling [ADRS15, ADS15], constructing the Voronoi cell of the lattice [AEVZ02, MV10a], and using a tower of sublattices [BGJ14]. On the other hand, for lattice sieving, the asymptotically fastest heuristic method for SVP in high dimensions, it is not known how to solve CVP at a cost similar to that of SVP.

After a series of theoretical works on constructing efficient heuristic sieving algorithms [NV08, MV10b, WLTB11, ZPH13, Laa15a, LdW15, BGJ15, BL16, BDGL16] as well as practical papers studying how to speed up these algorithms even further [MS11, Sch11, Sch13, BNvdP14, FBB+14, IKMT14, MTB14, MODB14, MLB15, MB16, MLB16], the best time complexity for solving SVP currently stands at \(2^{0.292d + o(d)}\) [BDGL16, MLB16]. Although for various other methods the complexities for solving SVP and CVP are similar [GNR10, MV10a, ADS15], one can only guess whether the same holds for lattice sieving methods. To date, the best heuristic time complexity for solving CVP in high dimensions stands at \(2^{0.377d + o(d)}\), due to Becker–Gama–Joux [BGJ14].

1.1 Contributions

In this paper we revisit heuristic lattice sieving algorithms, as well as the recent trend to speed up these algorithms using nearest neighbor searching, and we investigate how these algorithms can be modified to solve CVP and its generalizations. We present two different approaches for solving CVP with sieving, each of which we argue has its own merits.

Adaptive sieving. In adaptive sieving, we adapt the entire sieving algorithm to the problem instance, including the target vector. As the resulting algorithm is tailored specifically to the given CVP instance, this leads to the best asymptotic complexity for solving a single CVP instance out of our two proposed methods: \(2^{0.292d + o(d)}\) time and space. This method is very similar to solving SVP with lattice sieving, and leads to equivalent asymptotics on the space and time complexities as for SVP. The corresponding space-time tradeoff is illustrated in Fig. 1, and equals that of [BDGL16] for solving SVP.

Non-adaptive sieving. Our main contribution, non-adaptive sieving, takes a different approach, focusing on cases where several CVP instances are to be solved on the same lattice. The goal here is to minimize the costs of computations depending on the target vector, and spend more time on preprocessing the lattice, so that the amortized time complexity per instance is smaller when solving many CVP instances on the same lattice. This is very closely related to the Closest Vector Problem with Preprocessing (CVPP), where the difference is that we allow for exponential-size preprocessed space. Using nearest neighbor techniques with a balanced space-time tradeoff, we show how to solve CVPP with \(2^{0.636d + o(d)}\) space and preprocessing, in \(2^{0.136d + o(d)}\) time. A continuous tradeoff between the two complexities can be obtained, where in the limit we can solve CVPP with \((1/\varepsilon )^{O(d)}\) space and preprocessing, in \(2^{\varepsilon d + o(d)}\) time. This tradeoff is depicted in Fig. 1.

A potential application of non-adaptive sieving is as a subroutine within enumeration methods. As described in e.g. [GNR10], at any given level in the enumeration tree, one is attempting to solve a CVP instance in a lower-dimensional sublattice of \(\mathcal {L}\), where the target vector is determined by the path chosen from the root to the current node in the tree. That means that if we can preprocess this sublattice such that the amortized time complexity of solving CVPP is small, then this could speed up processing the bottom part of the enumeration tree. This in turn might help speed up the lattice basis reduction algorithm BKZ [Sch87, SE94, CN11], which commonly uses enumeration as its SVP subroutine, and is key in assessing the security of lattice-based schemes. As the preprocessing needs to be performed once, CVPP algorithms with impractically large preprocessing costs may not be useful, but we show that with sieving the preprocessing costs can be quite small.

Fig. 1. Heuristic complexities for solving the Closest Vector Problem (CVP), the Closest Vector Problem with Preprocessing (CVPP), Bounded Distance Decoding with Preprocessing (\(\delta \)-BDDP), and the Approximate Closest Vector Problem with Preprocessing (\(\kappa \)-CVPP). The red curve shows CVP complexities of Becker–Gama–Joux [BGJ14]. The left blue curve denotes CVP complexities of adaptive sieving. The right blue curve shows exact CVPP complexities using non-adaptive sieving. Purple curves denote relaxations of CVPP corresponding to different parameters \(\delta \) (BDD radius) and \(\kappa \) (approximation factor). Note that exact CVPP corresponds to \(\delta \)-BDDP with \(\delta = 1\) and to \(\kappa \)-CVPP with \(\kappa = 1\). (Color figure online)

Outline. The remainder of the paper is organized as follows. In Sect. 2 we describe some preliminaries, such as sieving algorithms and a useful result on nearest neighbor searching. Section 3 describes adaptive sieving and its analysis for solving CVP without preprocessing. Section 4 describes the preprocessing approach to solving CVP, with complexity analyses for exact CVP and some of its relaxations.

2 Preliminaries

2.1 Lattice Sieving for Solving SVP

Heuristic lattice sieving algorithms for solving the shortest vector problem all use the following basic property of lattices: if \(\varvec{v}, \varvec{w} \in \mathcal {L}\), then their sum/difference \(\varvec{v} \pm \varvec{w} \in \mathcal {L}\) is a lattice vector as well. Therefore, if we have a long list L of lattice vectors stored in memory, we can consider combinations of these vectors to obtain new, shorter lattice vectors. To make sure the algorithm makes progress in finding shorter lattice vectors, L needs to contain a lot of lattice vectors; for vectors \(\varvec{v}, \varvec{w} \in \mathcal {L}\) of similar norm, the vector \(\varvec{v} - \varvec{w}\) is shorter than \(\varvec{v}, \varvec{w}\) iff the angle between \(\varvec{v}, \varvec{w}\) is smaller than \(\pi /3\), which for random vectors \(\varvec{v}, \varvec{w}\) occurs with probability \((3/4)^{d/2 + o(d)}\). The expected space complexity of heuristic sieving algorithms follows directly from this observation: if we draw \((4/3)^{d/2 + o(d)}\) random vectors from the unit sphere, we expect a large number of pairs of vectors to have angle less than \(\pi /3\), leading to many short difference vectors. This is exactly the heuristic assumption used in analyzing these sieving algorithms: when normalized, vectors in L follow the same distribution as vectors sampled uniformly at random from the unit sphere.

Heuristic 1

When normalized, the list vectors \(\varvec{w} \in L\) behave as i.i.d. uniformly distributed random vectors from the unit sphere \(\mathcal {S}^{d-1} := \{\varvec{x} \in \mathbb {R}^d: \Vert \varvec{x}\Vert = 1\}\).

Therefore, if we start by sampling a list L of \((4/3)^{d/2 + o(d)}\) long lattice vectors, and iteratively consider combinations of vectors in L to find shorter vectors, we expect to keep making progress. Note that naively, combining pairs of vectors in a list of size \((4/3)^{d/2 + o(d)} \approx 2^{0.208d + o(d)}\) takes time \((4/3)^{d + o(d)} \approx 2^{0.415d + o(d)}\).
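
The pairwise reduction criterion and the exponents above are simple to state in code; the following fragment is purely illustrative (it assumes numpy vectors and ignores all o(d) terms).

```python
import numpy as np

def can_reduce(v, w):
    """Test whether the difference v - w is shorter than both v and w.
    For vectors of equal norm this happens exactly when their angle is below pi/3."""
    return np.linalg.norm(v - w) < min(np.linalg.norm(v), np.linalg.norm(w))

# The exponents quoted above: list size (4/3)^(d/2) ~ 2^(0.208d), and naive
# quadratic pairwise-search time (4/3)^d ~ 2^(0.415d).
print(0.5 * np.log2(4 / 3), np.log2(4 / 3))
```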

The Nguyen-Vidick sieve. The heuristic sieve algorithm of Nguyen and Vidick [NV08] starts by sampling a list L of \((4/3)^{d/2 + o(d)}\) long lattice vectors, and uses a sieve to map L, with maximum norm \(R := \max _{\varvec{v} \in L} \Vert \varvec{v}\Vert \), to a new list \(L'\), with maximum norm at most \(\gamma R\) for \(\gamma < 1\) close to 1. By repeatedly applying this sieve, after \({\text {poly}}(d)\) iterations we expect to find a long list of lattice vectors of norm at most \(\gamma ^{{\text {poly}}(d)} R = O(\lambda _1(\mathcal {L}))\). The final list is then expected to contain a shortest vector of the lattice. Algorithm 3 in Appendix A describes a sieve equivalent to Nguyen-Vidick’s original sieve, to map L to \(L'\) in \(|L|^2\) time.
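
The following Python fragment sketches one such sieve step; it is a simplification in the spirit of the Nguyen-Vidick sieve (it assumes the input list is already sampled and simply discards the centers), not a faithful transcription of Algorithm 3.

```python
import numpy as np

def nv_sieve_step(L, gamma):
    """One (simplified) Nguyen-Vidick sieve step: map a list L of lattice vectors
    with maximum norm R to a list of lattice vectors of norm at most gamma * R."""
    R = max(np.linalg.norm(v) for v in L)
    centers, L_next = [], []
    for v in L:
        if np.linalg.norm(v) <= gamma * R:
            L_next.append(v)                     # already short enough
            continue
        for c in centers:
            if np.linalg.norm(v - c) <= gamma * R:
                L_next.append(v - c)             # reduce v with a nearby center
                break
        else:
            centers.append(v)                    # no center nearby: v becomes a center
    return L_next
```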

Micciancio and Voulgaris’ GaussSieve. Micciancio and Voulgaris used a slightly different approach in the GaussSieve [MV10b]. This algorithm reduces the memory usage by immediately reducing all pairs of lattice vectors that are sampled. The algorithm uses a single list L, which is always kept in a state where for all \(\varvec{w}_1, \varvec{w}_2 \in L\), \(\Vert \varvec{w}_1 \pm \varvec{w}_2\Vert \ge \Vert \varvec{w}_1\Vert , \Vert \varvec{w}_2\Vert \); each time a new vector \(\varvec{v} \in \mathcal {L}\) is sampled, it is first reduced with the vectors in L, and once \(\varvec{v}\) can no longer be reduced, the vectors in L are in turn reduced with \(\varvec{v}\). Modified list vectors are added to a stack to be processed later (to maintain the pairwise-reduction property of L), and new vectors which are pairwise reduced with L are added to L. Immediately reducing all pairs of vectors means that the algorithm uses less time and memory in practice, but at the same time Nguyen and Vidick’s heuristic proof technique does not apply here. However, it is commonly believed that the same bounds \((4/3)^{d/2 + o(d)}\) and \((4/3)^{d + o(d)}\) on the space and time complexities hold for the GaussSieve. Pseudocode of the GaussSieve is given in Algorithm 4 in Appendix A.
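
The following Python sketch of the main loop is only meant to illustrate the bookkeeping described above; the sampler `sample_klein` is a placeholder for any lattice vector sampling routine, and termination and collision handling are omitted.

```python
import numpy as np

def gauss_reduce(v, w):
    """Return (v', changed): v' is v reduced by +/- w if that makes it strictly shorter."""
    for cand in (v - w, v + w):
        if np.linalg.norm(cand) < np.linalg.norm(v):
            return cand, True
    return v, False

def gauss_sieve(sample_klein, iterations):
    """Sketch of the GaussSieve main loop: the list L stays pairwise-reduced."""
    L, stack = [], []
    for _ in range(iterations):
        v = stack.pop() if stack else sample_klein()
        changed = True
        while changed:                          # reduce v against the whole list
            changed = False
            for w in L:
                v, red = gauss_reduce(v, w)
                changed = changed or red
        kept = []
        for w in L:                             # reduce list vectors against v
            w_new, red = gauss_reduce(w, v)
            if red:
                stack.append(w_new)             # re-process modified vectors later
            else:
                kept.append(w)
        L = kept
        if np.linalg.norm(v) > 0:
            L.append(v)                         # v is now pairwise-reduced with L
    return min(L, key=np.linalg.norm)
```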

2.2 Nearest Neighbor Searching

Given a data set \(L \subset \mathbb {R}^d\), the nearest neighbor problem asks to preprocess L such that, when given a query \(\varvec{t} \in \mathbb {R}^d\), one can quickly return a nearest neighbor \(\varvec{s} \in L\) with distance \(\Vert \varvec{s} - \varvec{t}\Vert = \min _{\varvec{w} \in L} \Vert \varvec{w} - \varvec{t}\Vert \). This problem is essentially identical to CVP, except that L is a finite set of unstructured points, rather than the infinite set of all points in a lattice \(\mathcal {L}\).

Locality-Sensitive Hashing/Filtering (LSH/LSF). A celebrated technique for finding nearest neighbors in high dimensions is Locality-Sensitive Hashing (LSH) [IM98, WSSJ14], where the idea is to construct many random partitions of the space, and store the list L in hash tables with buckets corresponding to regions. Preprocessing then consists of constructing these hash tables, while a query \(\varvec{t}\) is answered by doing a lookup in each of the hash tables, and searching for a nearest neighbor in these buckets. More details on LSH in combination with sieving can be found in e.g. [Laa15a, LdW15, BGJ15, BL16].

Similar to LSH, Locality-Sensitive Filtering (LSF) [BDGL16, Laa15b] divides the space into regions, with the added relaxation that these regions do not have to form a partition; regions may overlap, and part of the space may not be covered by any region. This leads to improved results compared to LSH when L has size exponential in d [BDGL16, Laa15b]. Below we restate one of the main results of [Laa15b] for our applications. The specific problem considered here is: given a data set \(L \subset \mathcal {S}^{d-1}\) sampled uniformly at random, and a random query \(\varvec{t} \in \mathcal {S}^{d-1}\), return a vector \(\varvec{w} \in L\) such that the angle between \(\varvec{w}\) and \(\varvec{t}\) is at most \(\theta \). The following result further assumes that the list L contains \(n = (1 / \sin \theta )^{d + o(d)}\) vectors.

Lemma 1

[Laa15b, Corollary 1] Let \(\theta \in (0, \frac{1}{2} \pi )\), and let \(u \in [\cos \theta , 1/\cos \theta ]\). Let \(L \subset \mathcal {S}^{d-1}\) be a list of \(n = (1 / \sin \theta )^{d + o(d)}\) vectors sampled uniformly at random from \(\mathcal {S}^{d-1}\). Then, using spherical LSF with parameters \(\alpha _{\mathrm {q}}= u \cos \theta \) and \(\alpha _{\mathrm {u}}= \cos \theta \), one can preprocess L in time \(n^{1 + \rho _{\mathrm {u}}+ o(1)}\), using \(n^{1 + \rho _{\mathrm {u}}+ o(1)}\) space, and with high probability answer a random query \(\varvec{t} \in \mathcal {S}^{d-1}\) correctly in time \(n^{\rho _{\mathrm {q}}+ o(1)}\), where:

$$\begin{aligned} n^{\rho _{\mathrm {q}}}&= \left( \frac{\sin ^2 \theta \, (u \cos \theta + 1)}{u \cos \theta - \cos 2 \theta }\right) ^{d/2}, \quad n^{\rho _{\mathrm {u}}} = \left( \frac{\sin ^2 \theta }{1 - \cot ^2 \theta \left( u^2 - 2 u \cos \theta + 1\right) }\right) ^{d/2}. \end{aligned}$$
(1)

Applying this result to sieving for solving SVP, where \(n = (\sin \frac{\pi }{3})^{-d + o(d)} = (4/3)^{d/2 + o(d)}\) and we are looking for pairs of vectors at angle at most \(\frac{\pi }{3}\) to perform reductions, this leads to a space and preprocessing complexity of \(2^{0.292d + o(d)}\), and a query complexity of \(2^{0.084d + o(d)}\) per vector. As the preprocessing in sieving is only performed once, and queries are performed \(n \approx 2^{0.208d + o(d)}\) times, this leads to a reduction of the complexities of sieving (for SVP) from \(2^{0.208d + o(d)}\) space and \(2^{0.415d + o(d)}\) time, to \(2^{0.292d + o(d)}\) space and time [BDGL16].
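
For concreteness, these exponents can be recomputed by evaluating (1) directly; the helper below is an illustration only, with the list size \(n = (1/\sin \theta )^{d + o(d)}\) of Lemma 1 hard-coded.

```python
import math

def lsf_exponents(theta, u):
    """Per-dimension base-2 exponents (space/preprocessing, query) from Lemma 1,
    for a list of size n = (1/sin(theta))^d."""
    s, c = math.sin(theta), math.cos(theta)
    log2_n = -math.log2(s)                                               # log2(n)/d
    query = 0.5 * math.log2(s**2 * (u*c + 1) / (u*c - math.cos(2*theta)))
    update = 0.5 * math.log2(s**2 / (1 - (c/s)**2 * (u**2 - 2*u*c + 1)))
    return log2_n + update, query

# theta = pi/3, u = 1 prints approximately (0.2925, 0.0850): the balanced
# space and per-query exponents for SVP sieving quoted above (up to rounding).
print(lsf_exponents(math.pi / 3, 1.0))
```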

3 Adaptive Sieving for CVP

We present two methods for solving CVP using sieving, the first of which we call adaptive sieving – we adapt the entire sieving algorithm to the particular CVP instance, to obtain the best overall time complexity for solving one instance. When solving several CVP instances, the costs roughly scale linearly with the number of instances.

Using one list. The main idea behind this method is to translate the SVP algorithm by the target vector \(\varvec{t}\); instead of generating a long list of lattice vectors reasonably close to \(\varvec{0}\), we generate a list of lattice vectors close to \(\varvec{t}\), and combine lattice vectors to find lattice vectors even closer to \(\varvec{t}\). The final list then hopefully contains a closest vector to \(\varvec{t}\).

One quickly sees that this does not work, as the fundamental property of lattices does not hold for the lattice coset \(\varvec{t} + \mathcal {L}\): if \(\varvec{w}_1, \varvec{w}_2 \in \varvec{t} + \mathcal {L}\), then \(\varvec{w}_1 \pm \varvec{w}_2 \notin \varvec{t} + \mathcal {L}\). In other words, two lattice vectors close to \(\varvec{t}\) can only be combined to form lattice vectors close to \(\varvec{0}\) or \(2 \varvec{t}\). So if we start with a list of vectors close to \(\varvec{t}\), and combine vectors in this list as in the Nguyen-Vidick sieve, then after one iteration we will end up with a list \(L'\) of lattice vectors close to \(\varvec{0}\).

Using two lists. To make the idea of translating the whole problem by \(\varvec{t}\) work for the Nguyen-Vidick sieve, we make the following modification: we keep track of two lists \(L = L_{\varvec{0}}\) and \(L_{\varvec{t}}\) of lattice vectors close to \(\varvec{0}\) and \(\varvec{t}\), and construct a sieve which maps two input lists \(L_{\varvec{0}}, L_{\varvec{t}}\) to two output lists \(L_{\varvec{0}}', L_{\varvec{t}}'\) of lattice vectors slightly closer to \(\varvec{0}\) and \(\varvec{t}\). Similar to the original Nguyen-Vidick sieve, we then apply this sieve several times to two initial lists \((L_{\varvec{0}}, L_{\varvec{t}})\) with a large radius R, to end up with two lists \(L_{\varvec{0}}\) and \(L_{\varvec{t}}\) of lattice vectors at distance at most approximately \(\sqrt{4/3} \cdot \lambda _1(\mathcal {L})\) from \(\varvec{0}\) and \(\varvec{t}\), respectively. The argument that this algorithm works is almost identical to that for solving SVP, where we now make the following slightly different heuristic assumption.

Heuristic 2

When normalized, the list vectors \(L_{\varvec{0}}\) and \(L_{\varvec{t}}\) in the modified Nguyen-Vidick sieve both behave as i.i.d. uniformly distributed random vectors from the unit sphere.

The resulting algorithm, based on the Nguyen-Vidick sieve, is presented in Algorithm 1.

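To make the two-list idea concrete, the fragment below gives a minimal Python sketch of one iteration in the spirit of Algorithm 1; the initial sampling, the choice of list sizes and the nearest neighbor speedups are all omitted, and the quadratic pairwise search is purely illustrative (inputs are assumed to be numpy arrays).

```python
import numpy as np

def adaptive_sieve_step(L0, Lt, t, gamma):
    """Map (L0, Lt) to lists of lattice vectors slightly closer to 0 and to t.
    L0 contains lattice vectors close to 0, Lt contains lattice vectors close to t."""
    R0 = max(np.linalg.norm(w) for w in L0)
    Rt = max(np.linalg.norm(w - t) for w in Lt)
    # Differences of two vectors from L0 are lattice vectors that may be closer to 0.
    L0_next = [w1 - w2 for w1 in L0 for w2 in L0
               if w1 is not w2 and np.linalg.norm(w1 - w2) <= gamma * R0]
    # A vector near t minus a vector near 0 is again a lattice vector near t.
    Lt_next = [w1 - w2 for w1 in Lt for w2 in L0
               if np.linalg.norm((w1 - w2) - t) <= gamma * Rt]
    return L0_next, Lt_next
```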

Main result. As the (heuristic) correctness of this algorithm follows directly from the correctness of the original NV-sieve, and nearest neighbor techniques can be applied to this algorithm in similar fashion as well, we immediately obtain the following result. Note that space-time tradeoffs for SVP, such as the one illustrated in [BDGL16, Fig. 1], similarly carry over to solving CVP, and the best tradeoff for SVP (and therefore CVP) is depicted in Fig. 1.

Theorem 1

Assuming Heuristic 2 holds, the adaptive Nguyen-Vidick sieve with spherical LSF solves CVP in time \(\mathrm {T}\) and space \(\mathrm {S}\), with

$$\begin{aligned} \mathrm {S}= (4/3)^{d/2 + o(d)} \approx 2^{0.208 d + o(d)}, \quad \mathrm {T}= (3/2)^{d/2 + o(d)} \approx 2^{0.292 d + o(d)}. \end{aligned}$$
(2)

An important open question is whether these techniques can also be applied to the faster GaussSieve algorithm to solve CVP. The GaussSieve seems to make even more use of the property that the sum/difference of two lattice vectors is also in the lattice, and operations in the GaussSieve in \(\mathcal {L}\) cannot as easily be mimicked for the coset \(\varvec{t} + \mathcal {L}\). Solving CVP with the GaussSieve with similar complexities is left as an open problem.

4 Non-adaptive Sieving for CVPP

Our second method for finding closest vectors with heuristic lattice sieving follows a slightly different approach. Instead of focusing only on the total time complexity for one problem instance, we split the algorithm into two phases:

  • Phase 1: Preprocess the lattice \(\mathcal {L}\), without knowledge of the target \(\varvec{t}\);

  • Phase 2: Process the query \(\varvec{t}\) and output a closest lattice vector \(\varvec{s} \in \mathcal {L}\) to \(\varvec{t}\).

Intuitively it may be more important to keep the costs of Phase 2 small, as the preprocessed data can potentially be reused later for other instances on the same lattice. This approach is essentially equivalent to the Closest Vector Problem with Preprocessing (CVPP): preprocess \(\mathcal {L}\) such that when given a target vector \(\varvec{t}\) later, one can quickly return a closest vector \(\varvec{s} \in \mathcal {L}\) to \(\varvec{t}\). For CVPP however the preprocessed space is usually restricted to be of polynomial size, and the time used for preprocessing the lattice is often not taken into account. Here we will keep track of the preprocessing costs as well, and we do not restrict the output from the preprocessing phase to be of size \({\text {poly}}(d)\).

Algorithm description. To minimize the costs of answering a query, and to do the preprocessing independently of the target vector, we first run a standard SVP sieve, resulting in a large list L of almost all short lattice vectors. Then, after we are given the target vector \(\varvec{t}\), we use L to reduce the target. Finally, once the resulting vector \(\varvec{t}' \in \varvec{t} + \mathcal {L}\) can no longer be reduced with our list, we hope that this reduced vector \(\varvec{t}'\) is the shortest vector in the coset \(\varvec{t} + \mathcal {L}\), so that \(\varvec{0}\) is the closest lattice vector to \(\varvec{t}'\) and \(\varvec{s} = \varvec{t} - \varvec{t}'\) is the closest lattice vector to \(\varvec{t}\).

The first phase of this algorithm consists in running a sieve and storing the resulting list in memory (potentially in a nearest neighbor data structure for faster lookups). For this phase either the Nguyen-Vidick sieve or the GaussSieve can be used. The second phase is the same for either method, and is described in Algorithm 2 for the general case of an input list essentially consisting of the \(\alpha ^{d + o(d)}\) shortest vectors in the lattice. Note that a standard SVP sieve would produce a list of size \((4/3)^{d/2 + o(d)}\) corresponding to \(\alpha = \sqrt{4/3}\).

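The query phase itself is simple enough to sketch in a few lines; the fragment below is a plain-list version in the spirit of Algorithm 2 (without a nearest neighbor data structure), where L is assumed to contain the short lattice vectors produced in Phase 1.

```python
import numpy as np

def reduce_target(L, t):
    """Sketch of Phase 2: reduce t with the preprocessed list L of short lattice
    vectors, and return the candidate closest lattice vector t - t'."""
    t_prime = np.array(t, dtype=float)
    progress = True
    while progress:
        progress = False
        for w in L:
            for cand in (t_prime - w, t_prime + w):
                if np.linalg.norm(cand) < np.linalg.norm(t_prime):
                    t_prime, progress = cand, True
    # t - t' is a lattice vector; it is the closest one whenever ||t'|| <= lambda_1.
    return np.asarray(t, dtype=float) - t_prime
```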

List size. We first study how large L must be to guarantee that the algorithm succeeds. One might wonder why we do not fix \(\alpha = \sqrt{4/3}\) immediately in Algorithm 2. To see why this choice of \(\alpha \) does not suffice, suppose we have a vector \(\varvec{t}' \in \varvec{t} + \mathcal {L}\) which is no longer reducible with L. This implies that \(\varvec{t}'\) has norm approximately \(\sqrt{4/3} \cdot \lambda _1(\mathcal {L})\), similar to what happens in the GaussSieve. Now, unfortunately the fact that \(\varvec{t}'\) cannot be reduced with L anymore does not imply that the closest lattice point to \(\varvec{t}'\) is \(\varvec{0}\). In fact, it is more likely that there exists an \(\varvec{s} \in \mathcal {L}\) of norm slightly more than \(\sqrt{4/3} \cdot \lambda _1(\mathcal {L})\) which is closer to \(\varvec{t}'\), but which is not used for reductions.

By the Gaussian heuristic, we expect the distance from \(\varvec{t}\) and \(\varvec{t}'\) to the lattice to be \(\lambda _1(\mathcal {L})\). So to guarantee that \(\varvec{0}\) is the closest lattice vector to the reduced vector \(\varvec{t}'\), we need \(\varvec{t}'\) to have norm at most \(\lambda _1(\mathcal {L})\). To analyze and prove correctness of this algorithm, we will therefore prove that, under the assumption that the input is a list of the \(\alpha ^{d + o(d)}\) shortest lattice vectors of norm at most \(\alpha \cdot \lambda _1(\mathcal {L})\) for a particular choice of \(\alpha \), w.h.p. the algorithm reduces \(\varvec{t}\) to a vector \(\varvec{t}' \in \varvec{t} + \mathcal {L}\) of norm at most \(\lambda _1(\mathcal {L})\).

To study how to set \(\alpha \), we start with the following elementary lemma regarding the probability of reduction between two uniformly random vectors with given norms.

Lemma 2

Let \(v, w > 0\) and let \(\varvec{v} = v \cdot \varvec{e}_v\) and \(\varvec{w} = w \cdot \varvec{e}_w\). Then:

$$\begin{aligned} \mathbb {P}_{\varvec{e}_v, \varvec{e}_w \sim \mathcal {S}^{d-1}}\Big (\Vert \varvec{v} - \varvec{w}\Vert ^2 < \Vert \varvec{v}\Vert ^2\Big ) \sim \left[ 1 - \left( \tfrac{w}{2v}\right) ^2\right] ^{d/2 + o(d)}. \end{aligned}$$
(3)

Proof

Expanding \(\Vert \varvec{v} - \varvec{w}\Vert ^2 = v^2 + w^2 - 2 v w \left\langle {\varvec{e}_v},{\varvec{e}_w}\right\rangle \) and \(\Vert \varvec{v}\Vert ^2 = v^2\), the condition \(\Vert \varvec{v} - \varvec{w}\Vert ^2 < \Vert \varvec{v}\Vert ^2\) equals \(\frac{w}{2v} < \left\langle {\varvec{e}_v},{\varvec{e}_w}\right\rangle \). The result follows from [BDGL16, Lemma 2.1].

Under Heuristic 1, we then obtain a relation between the choice of \(\alpha \) for the input list and the expected norm of the reduced vector \(\varvec{t}'\) as follows.

Lemma 3

Let \(L \subset \alpha \cdot \mathcal {S}^{d-1}\) be a list of \(\alpha ^{d + o(d)}\) uniformly random vectors of norm \(\alpha > 1\), and let \(\varvec{v} \in \beta \cdot \mathcal {S}^{d-1}\) be sampled uniformly at random. Then, for high dimensions d, there exists a \(\varvec{w} \in L\) such that \(\Vert \varvec{v} - \varvec{w}\Vert < \Vert \varvec{v}\Vert \) if and only if

$$\begin{aligned} \alpha ^4 - 4 \beta ^2 \alpha ^2 + 4\beta ^2 < 0. \end{aligned}$$
(4)

Proof

By Lemma 2 we can reduce \(\varvec{v}\) with \(\varvec{w} \in L\) with probability similar to \(p = [1 - \frac{\alpha ^2}{4\beta ^2}]^{d/2 + o(d)}\). Since we have \(n = \alpha ^{d + o(d)}\) such vectors \(\varvec{w}\), the probability that none of them can reduce \(\varvec{v}\) is \((1 - p)^n\), which is o(1) if \(n \gg 1/p\) and \(1 - o(1)\) if \(n \ll 1/p\). Expanding \(n \cdot p\), we obtain the given Eq. (4), where \(\alpha ^4 - 4 \beta ^2 \alpha ^2 + 4 \beta ^2 < 0\) implies \(n \gg 1/p\).

Note that in our applications, we do not just have a list of \(\alpha ^{d + o(d)}\) lattice vectors of norm \(\alpha \cdot \lambda _1(\mathcal {L})\); for any \(\alpha _0 \in [1, \alpha ]\) we expect L to contain \(\alpha _0^{d + o(d)}\) lattice vectors of norm at most \(\alpha _0 \cdot \lambda _1(\mathcal {L})\). To obtain a reduced vector \(\varvec{t}'\) of norm \(\beta \cdot \lambda _1(\mathcal {L})\), we therefore obtain the condition that for some value \(\alpha _0 \in [1, \alpha ]\), it must hold that \(\alpha _0^4 - 4 \beta ^2 \alpha _0^2 + 4\beta ^2 < 0\).

From (4) it follows that \(p(\alpha ^2) = \alpha ^4 - 4 \beta ^2 \alpha ^2 + 4\beta ^2\) has two roots \(r_1< 2 < r_2\) for \(\alpha ^2\), which lie close to 2 for \(\beta \approx 1\). The condition that \(p(\alpha _0^2) < 0\) for some \(\alpha _0 \le \alpha \) is equivalent to \(\alpha ^2 > r_1\), which for \(\beta = 1 + o(1)\) implies that \(\alpha ^2 \ge 2 + o(1)\). This means that asymptotically we must set \(\alpha = \sqrt{2}\), and use \(n = 2^{d/2 + o(d)}\) input vectors, to guarantee that w.h.p. the algorithm succeeds. A sketch of the situation is also given in Fig. 2a.
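
The behavior of these roots is easy to verify numerically; the following fragment (illustrative only) evaluates the boundary of condition (4).

```python
import math

def alpha_sq_roots(beta):
    """Roots (in alpha^2) of alpha^4 - 4 beta^2 alpha^2 + 4 beta^2 = 0,
    i.e. the boundary of condition (4)."""
    disc = 2 * beta * math.sqrt(beta**2 - 1)
    return 2 * beta**2 - disc, 2 * beta**2 + disc

print(alpha_sq_roots(1.0))    # (2.0, 2.0): for beta = 1 we need alpha^2 >= 2
print(alpha_sq_roots(1.01))   # roots straddling 2 for beta slightly above 1
```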

Fig. 2. Comparison of the list size complexity analysis for CVP (left) and BDD/approximate CVP (right). The point \(\varvec{t}\) represents the target vector, and after a series of reductions with Algorithm 2, we obtain \(\varvec{t}' \in \varvec{t} + \mathcal {L}\). Blue balls around \(\varvec{t}'\) depict regions in which we expect the closest lattice point to \(\varvec{t}'\) to lie, where the blue shaded area indicates a negligible fraction of this ball [BDGL16, Lemma 2]. (Color figure online)

Modifying the first phase. As we will need a larger list of size \(2^{d/2 + o(d)}\) to make sure we can solve CVP exactly, we need to adjust Phase 1 of the algorithm as well. Recall that with standard sieving, we reduce vectors iff their angle is at most \(\theta = \frac{\pi }{3}\), resulting in a list of size \((\sin \theta )^{-d + o(d)}\). As we now need the output list of the first phase to consist of \(2^{d/2 + o(d)} = (\sin \theta ')^{-d + o(d)}\) vectors for \(\theta ' = \frac{\pi }{4}\), we make the following adjustment: only reduce \(\varvec{v}\) and \(\varvec{w}\) if their common angle is less than \(\frac{\pi }{4}\). For unit length vectors, this condition is equivalent to reducing \(\varvec{v}\) with \(\varvec{w}\) iff \(\Vert \varvec{v} - \varvec{w}\Vert ^2 \le (2 - \sqrt{2}) \cdot \Vert \varvec{v}\Vert ^2\). This further accelerates nearest neighbor techniques due to the smaller angle \(\theta \). Pseudocode for the modified first phase is given in Appendix B.

Main result. With the algorithm in place, let us now analyze its complexity for solving CVP. The first phase of the algorithm generates a list of size \(2^{d/2 + o(d)}\) by combining pairs of vectors, and naively this can be done in time \(\mathrm {T}_1 = 2^{d + o(d)}\) and space \(\mathrm {S}= 2^{d/2 + o(d)}\), with query complexity \(\mathrm {T}_2 = 2^{d/2 + o(d)}\). Using nearest neighbor searching (Lemma 1), the query and preprocessing complexities can be further reduced, leading to the following result.

Theorem 2

Let \(u \in (\frac{1}{2} \sqrt{2}, \sqrt{2})\). Using non-adaptive sieving, we can solve CVP with preprocessing time \(\mathrm {T}_1\), space complexity \(\mathrm {S}\), and query time complexity \(\mathrm {T}_2\) as follows:

$$\begin{aligned} \mathrm {S}= \mathrm {T}_1&= \left( \frac{1}{u (\sqrt{2} - u)}\right) ^{d/2 + o(d)}, \qquad \mathrm {T}_2 = \left( \frac{\sqrt{2} + u}{2 u}\right) ^{d/2 + o(d)}. \end{aligned}$$
(5)

Proof

These complexities follow from Lemma 1 with \(\theta = \frac{\pi }{4}\), noting that the first phase can be performed in time and space \(\mathrm {T}_1 = \mathrm {S}= n^{1 + \rho _{\mathrm {u}}}\), and the second phase in time \(\mathrm {T}_2 = n^{\rho _{\mathrm {q}}}\).

To illustrate the time and space complexities of Theorem 2, we highlight three special cases u as follows. The full tradeoff curve for \(u \in (\frac{1}{2} \sqrt{2}, \sqrt{2})\) is depicted in Fig. 1.

  • Setting \(u = \frac{1}{2} \sqrt{2}\), we obtain \(\mathrm {S}= \mathrm {T}_1 = 2^{d/2 + o(d)}\) and \(\mathrm {T}_2 \approx 2^{0.2925d + o(d)}\).

  • Setting \(u = 1\), we obtain \(\mathrm {S}= \mathrm {T}_1 \approx 2^{0.6358 d + o(d)}\) and \(\mathrm {T}_2 \approx 2^{0.1358 d + o(d)}\).

  • Setting \(u = \frac{1}{2}(\sqrt{2} + 1)\), we get \(\mathrm {S}= \mathrm {T}_1 = 2^{d + o(d)}\) and \(\mathrm {T}_2 \approx 2^{0.0594 d + o(d)}\).

The first result shows that the query complexity of non-adaptive sieving is never worse than for adaptive sieving; only the space and preprocessing complexities are worse. The second and third results show that CVP can be solved in significantly less time, even with preprocessing and space complexities bounded by \(2^{d + o(d)}\).
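
For reference, the three exponent pairs above (and the full curve in Fig. 1) can be recomputed directly from (5); the following small sketch is purely illustrative.

```python
import math

def cvpp_exponents(u):
    """Per-dimension base-2 exponents (space = preprocessing time, query time)
    of Theorem 2, Eq. (5), for u in (sqrt(2)/2, sqrt(2))."""
    space = 0.5 * math.log2(1 / (u * (math.sqrt(2) - u)))
    query = 0.5 * math.log2((math.sqrt(2) + u) / (2 * u))
    return space, query

for u in (math.sqrt(2) / 2, 1.0, (math.sqrt(2) + 1) / 2):
    print(round(u, 4), cvpp_exponents(u))
# ~ (0.5, 0.2925), (0.6358, 0.1358), (1.0, 0.0594), matching the three cases above
```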

Minimizing the query complexity. As \(u \rightarrow \sqrt{2}\), the query complexity keeps decreasing while the memory and preprocessing costs increase. For arbitrary \(\varepsilon > 0\), we can set \(u = u_\varepsilon \approx \sqrt{2}\) as a function of \(\varepsilon \), resulting in asymptotic complexities \(\mathrm {S}= \mathrm {T}_1 = (1/\varepsilon )^{O(d)}\) and \(\mathrm {T}_2 = 2^{\varepsilon d + o(d)}\). This shows that it is possible to obtain a slightly subexponential query complexity, at the cost of superexponential space, by taking \(\varepsilon = o(1)\) as a function of d.

Corollary 1

For arbitrary \(\varepsilon > 0\), using non-adaptive sieving we can solve CVPP with preprocessing time and space complexities \((1/\varepsilon )^{O(d)}\), in time \(2^{\varepsilon d + o(d)}\). In particular, we can solve CVPP in \(2^{o(d)}\) time, using \(2^{\omega (d)}\) space and preprocessing.

Being able to solve CVPP in subexponential time with superexponential preprocessing and memory is neither trivial nor quite surprising. A naive approach to the problem, with this much memory, could for instance be to index the entire fundamental domain of \(\mathcal {L}\) in a hash table. One could partition this domain into small regions, solve CVP for the centers of each of these regions, and store all the solutions in memory. Then, given a query, one looks up which region \(\varvec{t}\) is in, and returns the answer corresponding to that vector. With a sufficiently fine-grained partitioning of the fundamental domain, the answers given by the look-ups are accurate, and this algorithm probably also runs in subexponential time.

Although it may not be surprising that it is possible to solve CVPP in subexponential time with (super)exponential space, it is not clear what the complexities of other methods would be. Our method presents a clear tradeoff between the complexities, where the constants in the preprocessing exponent are quite small; for instance, we can solve CVPP in time \(2^{0.06d + o(d)}\) with less than \(2^{d + o(d)}\) memory, which is the same amount of memory/preprocessing of the best provable SVP and CVP algorithms [ADRS15, ADS15]. Indexing the fundamental domain may well require much more memory than this.

4.1 Bounded Distance Decoding with Preprocessing

We finally take a look at specific instances of CVP which are easier than the general problem, such as when the target \(\varvec{t}\) lies unusually close to the lattice. This problem naturally appears in practice, when a private key consists of a good basis of a lattice with short basis vectors, and the public key is a bad basis of the same lattice. An encryption of a message could then consist of the message being mapped to a lattice point \(\varvec{v} \in \mathcal {L}\), and a small error vector \(\varvec{e}\) being added to \(\varvec{v}\) (\(\varvec{t} = \varvec{v} + \varvec{e}\)) to hide \(\varvec{v}\). If the noise \(\varvec{e}\) is small enough, then with a good basis one can decode \(\varvec{t}\) to the closest lattice vector \(\varvec{v}\), while someone with the bad basis cannot decode correctly. As decoding for arbitrary \(\varvec{t}\) (solving CVP) is known to be hard even with knowledge of a good basis [Mic01, FM02, Reg04, AKKV05], \(\varvec{e}\) needs to be very short, and \(\varvec{t}\) must lie unusually close to the lattice.

So instead of assuming target vectors \(\varvec{t} \in \mathbb {R}^d\) are sampled at random, suppose that \(\varvec{t}\) lies at distance at most \(\delta \cdot \lambda _1(\mathcal {L})\) from \(\mathcal {L}\), for \(\delta \in (0,1)\). For adaptive sieving, recall that the list size \((4/3)^{d/2 + o(d)}\) is the minimum initial list size one can hope to use to obtain a list of short lattice vectors; with fewer vectors, one would not be able to solve SVP. For non-adaptive sieving however, it may be possible to reduce the list size below \(2^{d/2 + o(d)}\).

List size. Let us again assume that the preprocessed list L contains almost all \(\alpha ^{d + o(d)}\) lattice vectors of norm at most \(\alpha \cdot \lambda _1(\mathcal {L})\). The choice of \(\alpha \) implies a maximum norm \(\beta _{\alpha } \cdot \lambda _1(\mathcal {L})\) of the reduced vector \(\varvec{t}'\), as described in Lemma 3. The nearest lattice vector \(\varvec{s} \in \mathcal {L}\) to \(\varvec{t}'\) lies within radius \(\delta \cdot \lambda _1(\mathcal {L})\) of \(\varvec{t}'\), and w.h.p. \(\varvec{s} - \varvec{t}'\) is approximately orthogonal to \(\varvec{t}'\); see Fig. 2b, where the shaded area is asymptotically negligible. Therefore w.h.p. \(\varvec{s}\) has norm at most \((\sqrt{\beta _{\alpha }^2 + \delta ^2}) \cdot \lambda _1(\mathcal {L})\). Now if \(\sqrt{\beta _{\alpha }^2 + \delta ^2} \le \alpha \), then we expect the nearest vector to be contained in L, so that ultimately \(\varvec{0}\) is nearest to \(\varvec{t}'\). Substituting \(\alpha ^4 - 4 \beta ^2 \alpha ^2 + 4 \beta ^2 = 0\) and \(\beta ^2 + \delta ^2 \le \alpha ^2\), and solving for \(\alpha \), this leads to the following condition on \(\alpha \).

$$\begin{aligned} \alpha ^2 \ge \tfrac{2}{3} (1 + \delta ^2) + \tfrac{2}{3} \sqrt{(1 + \delta ^2)^2 - 3 \delta ^2} \, . \end{aligned}$$
(6)

Taking \(\delta = 1\), corresponding to exact CVP, leads to the condition \(\alpha \ge \sqrt{2}\) as expected, while in the limiting case of \(\delta \rightarrow 0\) we obtain the condition \(\alpha \ge \sqrt{4/3}\). This matches experimental observations using the GaussSieve, where after finding the shortest vector, newly sampled vectors often cause collisions (i.e. being reduced to the \(\varvec{0}\)-vector). In other words, Algorithm 2 often reduces target vectors \(\varvec{t}\) which essentially lie on the lattice (\(\delta \rightarrow 0\)) to the \(\varvec{0}\)-vector when the list has size \((4/3)^{d/2 + o(d)}\). This explains why collisions in the GaussSieve are common when the list size grows to size \((4/3)^{d/2 + o(d)}\).
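
For concreteness, condition (6) can be evaluated directly; the snippet below (illustrative only) reproduces the two limiting cases just discussed, as well as the value \(\alpha \approx 1.1976\) for \(\delta = \frac{1}{2}\) used below.

```python
import math

def bdd_alpha(delta):
    """Smallest admissible alpha from condition (6) for BDD radius delta."""
    a2 = (2/3) * (1 + delta**2) + (2/3) * math.sqrt((1 + delta**2)**2 - 3 * delta**2)
    return math.sqrt(a2)

print(bdd_alpha(1.0))    # ~1.4142 = sqrt(2): exact CVP
print(bdd_alpha(0.5))    # ~1.1976
print(bdd_alpha(1e-6))   # ~1.1547 = sqrt(4/3): targets essentially on the lattice
```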

Main result. To solve BDD with a target \(\varvec{t}\) at distance \(\delta \cdot \lambda _1(\mathcal {L})\) from the lattice, we need the preprocessing to produce a list of almost all \(\alpha ^{d + o(d)}\) vectors of norm at most \(\alpha \cdot \lambda _1(\mathcal {L})\), with \(\alpha \) satisfying (6). Similar to the analysis for CVP, we can produce such a list by only doing reductions between two vectors if their angle is less than \(\theta \), where now \(\theta = \arcsin (1 / \alpha )\). Combining this with Lemma 1, we obtain the following result.

Theorem 3

Let \(\alpha \) satisfy (6) and let \(u \!\in \! (\sqrt{\frac{\alpha ^2 - 1}{\alpha ^2}}, \sqrt{\frac{\alpha ^2}{\alpha ^2 - 1}})\). Using non-adaptive sieving, we can heuristically solve BDD for targets \(\varvec{t}\) at distance \(\delta \cdot \lambda _1(\mathcal {L})\) from the lattice, with preprocessing time \(\mathrm {T}_1\), space complexity \(\mathrm {S}\), and query time complexity \(\mathrm {T}_2\) as follows:

$$\begin{aligned}&\qquad \mathrm {S}= \left( \frac{1}{1 - (\alpha ^2 - 1) (u^2 - \frac{2 u}{\alpha } \sqrt{\alpha ^2 - 1} + 1)}\right) ^{d/2 + o(d)}, \end{aligned}$$
(7)
$$\begin{aligned} \mathrm {T}_1&= \max \left\{ \mathrm {S}, \ (3/2)^{d/2 + o(d)}\right\} , \qquad \mathrm {T}_2 = \left( \frac{\alpha + u \sqrt{\alpha ^2 - 1}}{2 \alpha - \alpha ^3 + \alpha ^2 u \sqrt{\alpha ^2 - 1}}\right) ^{d/2 + o(d)}. \end{aligned}$$
(8)

Proof

These complexities directly follow from applying Lemma 1 with \(\theta = \arcsin (1/\alpha )\), and again observing that Phase 1 can be performed in time \(\mathrm {T}_1 = n^{1 + \rho _{\mathrm {u}}}\) and space \(\mathrm {S}= n^{1 + \rho _{\mathrm {u}}}\), while Phase 2 takes time \(\mathrm {T}_2 = n^{\rho _{\mathrm {q}}}\). Note that we cannot combine vectors whose angles are larger than \(\frac{\pi }{3}\) in Phase 1, which leads to a lower bound on the preprocessing time complexity \(\mathrm {T}_1\) based on the costs of solving SVP.

Theorem 3 is a generalization of Theorem 2, as the latter can be derived from the former by substituting \(\delta = 1\) above. To illustrate the results, Fig. 1 considers two special cases:

  • For \(\delta = \frac{1}{2}\), we find \(\alpha \approx 1.1976\), leading to \(\mathrm {S}\approx 2^{0.2602d + o(d)}\) and \(\mathrm {T}_2 = 2^{0.1908d + o(d)}\) when minimizing the space complexity.

  • For \(\delta \rightarrow 0\), we have \(\alpha \rightarrow \sqrt{4/3} \approx 1.1547\). The minimum space complexity is therefore \(\mathrm {S}= (4/3)^{d/2 + o(d)}\), with query complexity \(\mathrm {T}_2 = 2^{0.1610d + o(d)}\).

In the limit of \(u \rightarrow \sqrt{\frac{\alpha ^2}{\alpha ^2 - 1}}\) we need superexponential space/preprocessing \(\mathrm {S}, \mathrm {T}_1 \rightarrow 2^{\omega (d)}\) and a subexponential query time \(\mathrm {T}_2 \rightarrow 2^{o(d)}\) for all \(\delta > 0\).
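
The exponents in the \(\delta = \frac{1}{2}\) example above follow directly from (6)–(8); the short numerical sketch below (illustrative only) recomputes them at the space-minimizing choice \(u = \sqrt{(\alpha ^2 - 1)/\alpha ^2}\).

```python
import math

def bdd_cvpp_exponents(alpha, u):
    """Per-dimension base-2 exponents (S, T1, T2) of Theorem 3, Eqs. (7)-(8)."""
    r = math.sqrt(alpha**2 - 1)
    S = 0.5 * math.log2(1 / (1 - (alpha**2 - 1) * (u**2 - 2*u*r/alpha + 1)))
    T1 = max(S, 0.5 * math.log2(1.5))          # Phase 1 costs at least an SVP sieve
    T2 = 0.5 * math.log2((alpha + u*r) / (2*alpha - alpha**3 + alpha**2 * u * r))
    return S, T1, T2

delta = 0.5
alpha = math.sqrt((2/3)*(1 + delta**2) + (2/3)*math.sqrt((1 + delta**2)**2 - 3*delta**2))
u_min = math.sqrt((alpha**2 - 1) / alpha**2)   # smallest admissible u: minimal space
print(bdd_cvpp_exponents(alpha, u_min))        # ~ (0.2602, 0.2925, 0.1908)
```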

4.2 Approximate Closest Vector Problem with Preprocessing

Given a lattice \(\mathcal {L}\) and a target vector \(\varvec{t} \in \mathbb {R}^d\), approximate CVP with approximation factor \(\kappa \) asks to find a vector \(\varvec{s} \in \mathcal {L}\) such that \(\Vert \varvec{s} - \varvec{t}\Vert \) is at most a factor \(\kappa \) larger than the real distance from \(\varvec{t}\) to \(\mathcal {L}\). For random instances \(\varvec{t}\), by the Gaussian heuristic this means that a lattice vector counts as a solution iff it lies at distance at most \(\kappa \cdot \lambda _1(\mathcal {L})\) from \(\varvec{t}\).

List size. Instead of reducing \(\varvec{t}\) to a vector \(\varvec{t}'\) of norm at most \(\lambda _1(\mathcal {L})\) as is needed for solving exact CVP, we now want to make sure that the reduced vector \(\varvec{t}'\) has norm at most \(\kappa \cdot \lambda _1(\mathcal {L})\). If this is the case, then the vector \(\varvec{t} - \varvec{t}'\) is a lattice vector lying at distance at most \(\kappa \cdot \lambda _1(\mathcal {L})\) from \(\varvec{t}\), which w.h.p. qualifies as a solution. This means that instead of substituting \(\beta = 1\) in Lemma 3, we now substitute \(\beta = \kappa \). This leads to the condition that \(\alpha _0^4 - 4\kappa ^2 \alpha _0^2 + 4 \kappa ^2 < 0\) for some \(\alpha _0 \le \alpha \). By a similar analysis \(\alpha ^2\) must therefore be larger than the smallest root \(r_1 = 2\kappa (\kappa - \sqrt{\kappa ^2 - 1})\) of this quadratic polynomial in \(\alpha ^2\). This immediately leads to the following condition on \(\alpha \):

$$\begin{aligned} \alpha ^2 \ge 2 \kappa \left( \kappa - \sqrt{\kappa ^2 - 1}\right) . \end{aligned}$$
(9)

A sanity check shows that \(\kappa = 1\), corresponding to exact CVP, indeed results in \(\alpha \ge \sqrt{2}\), while in the limit of \(\kappa \rightarrow \infty \) a value \(\alpha \approx 1\) suffices to obtain a vector \(\varvec{t}'\) of norm at most \(\kappa \cdot \lambda _1(\mathcal {L})\). In other words, to solve approximate CVP with very large (constant) approximation factors, a preprocessed list of size \((1 + \varepsilon )^{d + o(d)}\) suffices.
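
Condition (9) is equally easy to evaluate; the fragment below (illustrative only) reproduces this sanity check, as well as the values of \(\kappa \) appearing in the comparison with BDD below.

```python
import math

def approx_cvpp_alpha(kappa):
    """Smallest admissible alpha from condition (9) for approximation factor kappa."""
    return math.sqrt(2 * kappa * (kappa - math.sqrt(kappa**2 - 1)))

print(approx_cvpp_alpha(1.0))               # sqrt(2): exact CVPP
print(approx_cvpp_alpha(1.0882))            # ~1.1976: same list size as 1/2-BDD
print(approx_cvpp_alpha(2 / math.sqrt(3)))  # ~1.1547 = sqrt(4/3): same as delta -> 0
print(approx_cvpp_alpha(1000.0))            # ~1.0000: alpha -> 1 as kappa grows
```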

Main result. Similar to the analysis of CVPP, we now take \(\theta = \arcsin (1/\alpha )\) as the angle with which to reduce vectors in Phase 1, so that the output of Phase 1 is a list of almost all \(\alpha ^{d + o(d)}\) shortest lattice vectors of norm at most \(\alpha \cdot \lambda _1(\mathcal {L})\). Using a smaller angle \(\theta \) for reductions again means that nearest neighbor searching can speed up the reductions in both Phase 1 and Phase 2 even further. The exact complexities follow from Lemma 1.

Theorem 4

Using non-adaptive sieving with spherical LSF, we can heuristically solve \(\kappa \)-CVP with similar complexities as in Theorem 3, where now \(\alpha \) must satisfy (9).

Note that only the dependence of \(\alpha \) on \(\kappa \) is different, compared to the dependence of \(\alpha \) on \(\delta \) for bounded distance decoding. The complexities for \(\kappa \)-CVP arguably decrease faster than for \(\delta \)-BDD: for instance, for \(\kappa \approx 1.0882\) we obtain the same complexities as for BDD with \(\delta = \frac{1}{2}\), while \(\kappa = \sqrt{4/3} \approx 1.1547\) leads to the same complexities as for BDD with \(\delta \rightarrow 0\). Two further examples are illustrated in Fig. 1:

  • For \(\kappa = 2\), we have \(\alpha \approx 1.0353\), which for \(u \approx 0.2588\) (minimizing the space complexity) leads to \(\mathrm {S}= 2^{0.0500 d + o(d)}\) and \(\mathrm {T}_2 = 2^{0.0468 d + o(d)}\), and for \(u = 1\) leads to \(\mathrm {S}= 2^{0.0812 d + o(d)}\) and \(\mathrm {T}_2 = 2^{0.0312 d + o(d)}\); in both cases the preprocessing time \(\mathrm {T}_1 = 2^{0.2925 d + o(d)}\) is dominated by the SVP sieve of Phase 1.

  • For \(\kappa \rightarrow \infty \), we have \(\alpha \rightarrow 1\), i.e. the required preprocessed list size approaches \(2^{o(d)}\) as \(\kappa \) grows. For sufficiently large \(\kappa \), we can solve \(\kappa \)-CVP with a preprocessed list of size \(2^{\varepsilon d + o(d)}\) in at most \(2^{\varepsilon d + o(d)}\) time. The preprocessing time is given by \(2^{0.2925 d + o(d)}\).

The latter result shows that for any superconstant approximation factor \(\kappa = \omega (1)\), we can solve the corresponding approximate closest vector problem with preprocessing in subexponential time, with an exponential preprocessing time complexity \(2^{0.292d + o(d)}\) for solving SVP and generating a list of short lattice vectors, and a subexponential space complexity required for Phase 2. In other words, even without superexponential preprocessing/memory we can solve CVPP with large approximation factors in subexponential time.

To compare this result with previous work, note that the lower bound on \(\alpha \) from (9) tends to \(1 + 1/(8 \kappa ^2) + O(\kappa ^{-4})\) as \(\kappa \) grows. The query space and time complexities are further both proportional to \(\alpha ^{\varTheta (d)}\). To obtain a polynomial query complexity and polynomial storage after the preprocessing phase, we can solve for \(\kappa \), leading to the following result.

Corollary 2

With non-adaptive sieving we can heuristically solve approximate CVPP with approximation factor \(\kappa \) in polynomial time with polynomial-sized advice iff \(\kappa = \varOmega (\sqrt{d / \log d})\).

Proof

The query time and space complexities are given by \(\alpha ^{\varTheta (d)}\), where \(\alpha = 1 + \varTheta (1 / \kappa ^2)\). To obtain polynomial complexities in d, we must have \(\alpha ^{\varTheta (d)} = d^{O(1)}\), or equivalently:

$$\begin{aligned} 1 + \varTheta \left( \frac{1}{\kappa ^2}\right) = \alpha = d^{O(1/d)} = \exp \, O\left( \frac{\log d}{d}\right) = 1 + O\left( \frac{\log d}{d}\right) . \end{aligned}$$
(10)

Solving for \(\kappa \) leads to the given relation between \(\kappa \) and d.

Apart from the heuristic assumptions we made, this is equivalent to a result of Aharonov and Regev [AR04], who previously showed that the decision version of CVPP with approximation factor \(\kappa = \varOmega (\sqrt{d / \log d})\) can provably be solved in polynomial time. This further improves upon results of [LLS90, DRS14], who are able to solve search-CVPP with polynomial time and space complexities for \(\kappa = O(d^{3/2})\) and \(\kappa = \varOmega (d / \sqrt{\log d})\) respectively. Assuming the heuristic assumptions are valid, Corollary 2 closes the gap between these previous results for decision-CVPP and search-CVPP with a rather simple algorithm: (1) preprocess the lattice by storing all \(d^{O(1)}\) shortest vectors of the lattice in a list; and (2) apply Algorithm 2 to this list and the target vector to find an approximate closest vector. Note that nearest neighbor techniques only affect leading constants; even without nearest neighbor searching this would heuristically result in a polynomial time and space algorithm for \(\kappa \)-CVPP with \(\kappa = \varOmega (\sqrt{d / \log d})\).