Curse of Dimensionality
The curse of dimensionality, a term first introduced by Bellman, indicates that the number of samples needed to estimate an arbitrary function with a given level of accuracy grows exponentially with the number of input variables (i.e., the dimensionality) of the function.
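The exponential growth can be illustrated with a simple sketch (the fixed resolution of 10 points per axis is an illustrative assumption, not from the source): covering the unit hypercube with a regular grid at a given per-axis resolution requires a number of samples that is exponential in the dimension.

```python
def samples_for_grid(dim, points_per_axis=10):
    # Number of grid samples needed to cover the unit hypercube
    # [0, 1]^dim at a fixed resolution along each axis.
    # Grows exponentially with dim: points_per_axis ** dim.
    return points_per_axis ** dim

if __name__ == "__main__":
    for d in (1, 2, 5, 10):
        print(f"dim={d}: {samples_for_grid(d):,} samples")
```

With 10 points per axis, one dimension needs 10 samples, two need 100, and ten already need 10,000,000,000.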
For similarity search (e.g., nearest neighbor query or range query), the curse of dimensionality means that the number of objects in the data set that need to be accessed grows exponentially with the underlying dimensionality.
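One way this effect shows up is distance concentration: for uniformly distributed data, the gap between the nearest and farthest neighbor of a query point shrinks relative to the nearest distance as dimensionality grows, which is why indexes can no longer prune effectively. A minimal sketch (the uniform data, sample sizes, and the `relative_contrast` helper are illustrative assumptions):

```python
import math
import random

def relative_contrast(dim, n_points=500, seed=0):
    # (d_max - d_min) / d_min for Euclidean distances from a random
    # query point to uniform points in [0, 1]^dim.
    # This ratio shrinks toward 0 as dim grows, so "nearest" and
    # "farthest" neighbors become nearly indistinguishable.
    rng = random.Random(seed)
    query = [rng.random() for _ in range(dim)]
    dists = [
        math.dist(query, [rng.random() for _ in range(dim)])
        for _ in range(n_points)
    ]
    d_min, d_max = min(dists), max(dists)
    return (d_max - d_min) / d_min

if __name__ == "__main__":
    for d in (2, 10, 100, 1000):
        print(f"dim={d}: contrast={relative_contrast(d):.3f}")
```

Running this shows the contrast dropping by orders of magnitude between 2 and 1000 dimensions, consistent with the exponential access cost described above.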
The curse of dimensionality is an obstacle to solving dynamic optimization problems by backward induction. It also complicates machine learning problems that require learning a state of nature from a finite number of data samples in a high-dimensional feature space. Finally, the curse of dimensionality seriously degrades the query performance of similarity search over multidimensional indexes because, in high-dimensional spaces, a large fraction of the indexed objects must be accessed to answer a query.