The curse of dimensionality, a term first introduced by Bellman, refers to the fact that the number of samples needed to estimate an arbitrary function to a given level of accuracy grows exponentially with the number of input variables (i.e., the dimensionality) of the function.
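The exponential growth can be made concrete with a small illustration (a hypothetical helper, not from the original entry): if each axis of the unit hypercube is divided into a fixed number of bins, placing at least one sample in every cell requires a number of samples exponential in the dimensionality.

```python
def grid_samples(dim, bins_per_axis=10):
    """Number of cells in a regular grid over [0,1]^dim with bins_per_axis
    bins per axis. Covering every cell with at least one sample requires
    bins_per_axis ** dim points, i.e., exponentially many in dim."""
    return bins_per_axis ** dim

# With 10 bins per axis: 10 samples suffice in 1-D, but 10 dimensions
# already demand ten billion samples for the same per-axis resolution.
print(grid_samples(1))   # 10
print(grid_samples(3))   # 1000
print(grid_samples(10))  # 10000000000
```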
For similarity search (e.g., nearest neighbor query or range query), the curse of dimensionality means that the number of objects in the data set that need to be accessed grows exponentially with the underlying dimensionality.
The curse of dimensionality is an obstacle to solving dynamic optimization problems by backward induction. Moreover, it complicates machine learning problems in which a state of nature must be learned from a finite number of data samples in a high-dimensional feature space. Finally, the curse of dimensionality seriously degrades query performance for similarity search over multidimensional indexes because, in high dimensions,...
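The effect on similarity search can be sketched empirically, in the spirit of Beyer et al.: as dimensionality grows, the distances from a query point to random data points concentrate around a common value, so the gap between the nearest and farthest neighbor shrinks. The helper below is an illustrative sketch (its name and parameters are not from the original entry), measuring the relative contrast (Dmax − Dmin) / Dmin of distances from the origin to uniform random points.

```python
import math
import random

def relative_contrast(dim, n_points=200, seed=0):
    """Relative spread (Dmax - Dmin) / Dmin of Euclidean distances from the
    origin to n_points uniform random points in [0,1]^dim. This contrast
    shrinks toward 0 as dim grows, so "nearest" loses discriminative power."""
    rng = random.Random(seed)
    dists = []
    for _ in range(n_points):
        point = [rng.random() for _ in range(dim)]
        dists.append(math.sqrt(sum(x * x for x in point)))
    return (max(dists) - min(dists)) / min(dists)

# In 2-D the nearest and farthest points differ greatly; in 1000-D all
# points sit at nearly the same distance from the query.
print(relative_contrast(2))
print(relative_contrast(1000))
```

This distance concentration is why tree-based multidimensional indexes tend to degenerate toward a sequential scan at high dimensionality.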
Bellman R.E. Adaptive Control Processes. Princeton University Press, Princeton, NJ, 1961.
Beyer K.S., Goldstein J., Ramakrishnan R., Shaft U. When is “Nearest Neighbor” Meaningful? In Proc. 7th Int. Conf. on Database Theory, 1999, pp. 217–235.
© 2009 Springer Science+Business Media, LLC
Chen, L. (2009). Curse of Dimensionality. In: LIU, L., ÖZSU, M.T. (eds) Encyclopedia of Database Systems. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-39940-9_133
Print ISBN: 978-0-387-35544-3
Online ISBN: 978-0-387-39940-9