Encyclopedia of Database Systems

2009 Edition

Curse of Dimensionality

  • Lei Chen
Reference work entry
DOI: https://doi.org/10.1007/978-0-387-39940-9_133



The curse of dimensionality, first introduced by Bellman [1], indicates that the number of samples needed to estimate an arbitrary function with a given level of accuracy grows exponentially with respect to the number of input variables (i.e., dimensionality) of the function.
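This exponential growth can be made concrete with a small illustration (not part of the original entry): sampling a function on the unit hypercube at a fixed per-axis resolution requires a number of grid points that is exponential in the dimensionality.

```python
def grid_points(d: int, points_per_axis: int = 10) -> int:
    """Number of samples in a regular grid over [0, 1]^d
    with a fixed number of points along each axis."""
    return points_per_axis ** d

# The sample count explodes as d grows: 10 points suffice for d = 1,
# but d = 10 already demands ten billion samples at the same resolution.
for d in (1, 2, 3, 10):
    print(d, grid_points(d))
```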

For similarity search (e.g., nearest neighbor query or range query), the curse of dimensionality means that the number of objects in the data set that need to be accessed grows exponentially with the underlying dimensionality.
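One way to see why indexes lose their pruning power, following the phenomenon formalized by Beyer et al. [2], is that distances between a query and uniformly random points concentrate as the dimensionality grows, so the nearest and farthest neighbors become nearly indistinguishable. The sketch below (an illustration assumed here, not code from the entry) measures this "relative contrast" empirically:

```python
import numpy as np

def relative_contrast(d: int, n: int = 1000, seed: int = 0) -> float:
    """(d_max - d_min) / d_min over Euclidean distances from a random
    query to n points drawn uniformly from [0, 1]^d."""
    rng = np.random.default_rng(seed)
    points = rng.random((n, d))
    query = rng.random(d)
    dists = np.linalg.norm(points - query, axis=1)
    return (dists.max() - dists.min()) / dists.min()

# Contrast shrinks as d grows: in low dimensions the nearest neighbor is
# much closer than the farthest point; in high dimensions it is not.
for d in (2, 10, 100, 1000):
    print(d, relative_contrast(d))
```

When the contrast approaches zero, no index structure can exclude many candidates, and similarity queries degenerate toward a sequential scan.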

Key Points

The curse of dimensionality is an obstacle to solving dynamic optimization problems by backwards induction. Moreover, it complicates machine learning problems that require learning a state of nature from a finite number of data samples in a high-dimensional feature space. Finally, the curse of dimensionality seriously affects the query performance of similarity search over multidimensional indexes because, in high...


Recommended Reading

  1. Bellman R.E. Adaptive Control Processes. Princeton University Press, Princeton, NJ, 1961.
  2. Beyer K.S., Goldstein J., Ramakrishnan R., Shaft U. When is “Nearest Neighbor” Meaningful? In Proc. 7th Int. Conf. on Database Theory, 1999, pp. 217–235.

Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  • Lei Chen
  1. Hong Kong University of Science and Technology, Hong Kong, China