Chapter

Database Theory — ICDT’99

Volume 1540 of the series Lecture Notes in Computer Science, pp. 217–235

When Is “Nearest Neighbor” Meaningful?

  • Kevin Beyer, CS Dept., University of Wisconsin-Madison
  • Jonathan Goldstein, CS Dept., University of Wisconsin-Madison
  • Raghu Ramakrishnan, CS Dept., University of Wisconsin-Madison
  • Uri Shaft, CS Dept., University of Wisconsin-Madison

Abstract

We explore the effect of dimensionality on the “nearest neighbor” problem. We show that under a broad set of conditions (much broader than independent and identically distributed dimensions), as dimensionality increases, the distance to the nearest data point approaches the distance to the farthest data point. To provide a practical perspective, we present empirical results on both real and synthetic data sets that demonstrate that this effect can occur for as few as 10–15 dimensions.
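To make the concentration effect concrete, here is a minimal sketch (not taken from the paper; it assumes i.i.d. uniform dimensions and Euclidean distance, a much narrower setting than the conditions the paper covers) that estimates the ratio of the farthest to the nearest distance from a random query point as dimensionality grows:

```python
# Illustrative sketch only: demonstrates distance concentration under
# i.i.d. uniform data, a special case of the paper's broad conditions.
import numpy as np

rng = np.random.default_rng(0)

def distance_contrast(dim, n_points=1000):
    """Ratio Dmax/Dmin of distances from a random query to the data set."""
    data = rng.random((n_points, dim))   # n_points uniform points in [0,1]^dim
    query = rng.random(dim)              # a random query point
    dists = np.linalg.norm(data - query, axis=1)
    return dists.max() / dists.min()

for d in (2, 10, 100, 1000):
    print(f"dim={d:4d}  Dmax/Dmin ~ {distance_contrast(d):.2f}")
```

As the dimensionality grows, the printed ratio approaches 1: the nearest and farthest points become nearly equidistant from the query, which is the sense in which "nearest neighbor" loses meaning.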

These results should not be interpreted to mean that high-dimensional indexing is never meaningful; we illustrate this point by identifying some high-dimensional workloads for which this effect does not occur. However, our results do emphasize that the methodology used almost universally in the database literature to evaluate high-dimensional indexing techniques is flawed and should be modified. In particular, most such techniques proposed in the literature are not evaluated against a simple linear scan, and are evaluated over workloads for which nearest neighbor is not meaningful. Often, even the reported experiments, when analyzed carefully, show that a linear scan would outperform the proposed techniques on the workloads studied at high (10–15) dimensionality!
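The baseline the abstract refers to is simple to state; a hypothetical brute-force implementation (the names here are illustrative, not the paper's) looks like this:

```python
# Hypothetical linear-scan baseline: one pass over the data, no index.
# Any high-dimensional indexing technique should be benchmarked against it.
import numpy as np

def linear_scan_nn(data: np.ndarray, query: np.ndarray):
    """Return (index, distance) of the point in `data` nearest to `query`."""
    dists = np.linalg.norm(data - query, axis=1)  # one distance per row
    i = int(np.argmin(dists))
    return i, float(dists[i])

# Usage: data is an (n, d) array of points, query a length-d vector.
data = np.random.default_rng(1).random((1000, 20))
idx, dist = linear_scan_nn(data, data.mean(axis=0))
print(idx, dist)
```

Because this scan reads the data sequentially, it is often faster in practice than index traversals that touch most pages anyway, which is why omitting it from an evaluation can hide the behavior the paper criticizes.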