Encyclopedia of Database Systems

Living Edition
| Editors: Ling Liu, M. Tamer Özsu

Learning Distance Measures

Living reference work entry
DOI: https://doi.org/10.1007/978-1-4899-7993-3_614-2



Many problems in data mining (e.g., classification, clustering, and information retrieval) are concerned with the discovery of homogeneous groups of data according to a certain similarity (or distance) measure. The distance measure in use strongly affects the nature of the patterns (clusters, classes, or retrieved images) that emerge from the given data. A fixed distance measure chosen a priori, such as the Euclidean or Manhattan distance, typically fails to capture the underlying structure of the data and to find meaningful patterns that correspond to the user's preferences. To address this issue, techniques have been developed that learn from the data how to compute dissimilarities between pairs of objects. Since objects are commonly represented as vectors of measurements in a given feature space, the distance between two objects is computed in terms of the dissimilarity between their corresponding feature components....
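As a minimal illustration of learning a distance measure from data, the sketch below estimates per-feature relevance weights from labeled examples and plugs them into a weighted Euclidean (diagonal Mahalanobis-style) distance. The weighting scheme used here (inverse average within-class variance) and all function names are illustrative assumptions, not a specific method described in this entry.

```python
import numpy as np

def learn_feature_weights(X, y):
    """Learn per-feature relevance weights from labeled data.

    Illustrative scheme: weights are inversely proportional to the
    average within-class variance of each feature, so features that
    vary little inside a class (and thus discriminate well between
    classes) receive larger weights.
    """
    classes = np.unique(y)
    # Average within-class variance, computed per feature.
    within_var = np.mean([X[y == c].var(axis=0) for c in classes], axis=0)
    w = 1.0 / (within_var + 1e-8)   # small constant avoids division by zero
    return w / w.sum()              # normalize weights to sum to 1

def weighted_distance(a, b, w):
    """Weighted Euclidean (diagonal Mahalanobis-style) distance."""
    return np.sqrt(np.sum(w * (a - b) ** 2))

# Toy data: feature 0 separates the two classes, feature 1 is noise.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0.0, 0.0], [0.1, 1.0], (50, 2)),
               rng.normal([5.0, 0.0], [0.1, 1.0], (50, 2))])
y = np.array([0] * 50 + [1] * 50)

w = learn_feature_weights(X, y)
# Feature 0 (low within-class variance) dominates the learned metric,
# so distances computed with w emphasize the discriminative feature.
```

Learning a full (non-diagonal) metric, as in Xing et al.'s side-information formulation, additionally captures correlations between features; the diagonal case above corresponds to pure feature weighting.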

Recommended Reading

  1. Bellman R. Adaptive control processes: a guided tour. Princeton: Princeton University Press; 1961.
  2. Blansché A, Gançarski P, Korczak J. MACLAW: a modular approach for clustering with local attribute weighting. Pattern Recognit Lett. 2006;27(11):1299–306.
  3. Domeniconi C, Gunopulos D, Peng J. Large margin nearest neighbor classifiers. IEEE Trans Neural Netw. 2005;16:899–909.
  4. Domeniconi C, Gunopulos D, Yan S, Ma B, Al-Razgan M, Papadopoulos D. Locally adaptive metrics for clustering high dimensional data. Data Min Knowl Discov. 2007;14:63–97.
  5. Domeniconi C, Peng J, Gunopulos D. Locally adaptive metric nearest neighbor classification. IEEE Trans Pattern Anal Mach Intell. 2002;24:1281–85.
  6. Friedman J. Flexible metric nearest neighbor classification. Technical Report, Department of Statistics, Stanford University; 1994.
  7. Friedman J, Meulman J. Clustering objects on subsets of attributes. Technical Report, Stanford University; 2002.
  8. Frigui H, Nasraoui O. Unsupervised learning of prototypes and attribute weights. Pattern Recognit. 2004;37(3):943–52.
  9. Hartigan JA. Direct clustering of a data matrix. J Am Stat Assoc. 1972;67(337):123–9.
  10. Hastie T, Tibshirani R. Discriminant adaptive nearest neighbor classification. IEEE Trans Pattern Anal Mach Intell. 1996;18:607–15.
  11. Jain A, Murty M, Flynn P. Data clustering: a review. ACM Comput Surv. 1999;31(3).
  12. Modha D, Spangler S. Feature weighting in k-means clustering. Mach Learn. 2003;52(3):217–37.
  13. Shawe-Taylor J, Cristianini N. Kernel methods for pattern analysis. Cambridge: Cambridge University Press; 2004.
  14. Xing E, Ng A, Jordan M, Russell S. Distance metric learning, with application to clustering with side-information. In: Advances in Neural Information Processing Systems, vol. 15; 2003.

Copyright information

© Springer Science+Business Media LLC 2016

Authors and Affiliations

  1. George Mason University, Fairfax, USA