Two Partitional Methods for Interval-Valued Data Using Mahalanobis Distances

  • Renata M. C. R. de Souza
  • Francisco A. T. de Carvalho
  • Camilo P. Tenorio
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3315)


Two dynamic cluster methods for interval data are presented. The first method furnishes a partition of the input data and a corresponding prototype (a vector of intervals) for each class by optimizing an adequacy criterion based on Mahalanobis distances between vectors of intervals; the second is an adaptive version of the first. To show the usefulness of these methods, synthetic and real interval data sets are considered. The synthetic interval data sets are obtained from quantitative data sets drawn according to bivariate normal distributions. The adaptive method outperforms the non-adaptive one with respect to the average behaviour of a cluster quality measure.
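The dynamic clustering scheme described in the abstract alternates an allocation step (assign each object to its closest prototype) and a representation step (recompute each class prototype), here extended with a metric step that re-estimates a Mahalanobis distance from the within-cluster scatter. The following is a minimal illustrative sketch only, not the authors' exact adequacy criterion: it treats each vector of p intervals as a 2p-dimensional point of stacked [lower, upper] bounds, and renormalizes the metric to unit determinant in the spirit of Diday and Govaert's adaptive distances. The function and parameter names (`interval_dynamic_cluster`, `n_clusters`, `n_iter`) are my own.

```python
# Hedged sketch: a k-means-style dynamic clustering loop for interval data
# with an adaptively re-estimated Mahalanobis metric. Illustrative only.
import numpy as np

def interval_dynamic_cluster(X, n_clusters, n_iter=20):
    """X: array of shape (n, p, 2) holding [lower, upper] interval bounds."""
    n = X.shape[0]
    flat = X.reshape(n, -1).astype(float)        # stack bounds: (n, 2p)
    d = flat.shape[1]
    M = np.eye(d)                                # start with a Euclidean metric
    # Deterministic farthest-point initialization of the prototypes.
    chosen = [0]
    for _ in range(1, n_clusters):
        dist = np.min(
            [np.einsum('nj,ji,ni->n', flat - flat[c], M, flat - flat[c])
             for c in chosen], axis=0)
        chosen.append(int(dist.argmax()))
    prototypes = flat[chosen].copy()
    labels = np.full(n, -1)
    for _ in range(n_iter):
        # Allocation step: assign each object to its closest prototype
        # under the current Mahalanobis metric M.
        diff = flat[:, None, :] - prototypes[None, :, :]
        d2 = np.einsum('nkj,ji,nki->nk', diff, M, diff)
        new_labels = d2.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break                                # partition is stable
        labels = new_labels
        # Representation step: mean interval vector of each cluster.
        for k in range(n_clusters):
            if np.any(labels == k):
                prototypes[k] = flat[labels == k].mean(axis=0)
        # Metric step: pooled within-cluster covariance, normalized so the
        # resulting metric has unit determinant (adaptive-distance style).
        W = sum(np.cov(flat[labels == k], rowvar=False, bias=True)
                * np.sum(labels == k)
                for k in range(n_clusters) if np.sum(labels == k) > 1)
        W = W / n + 1e-9 * np.eye(d)             # regularize for invertibility
        M = np.linalg.det(W) ** (1.0 / d) * np.linalg.inv(W)
    return labels, prototypes.reshape(n_clusters, -1, 2)
```

On well-separated synthetic interval data (e.g. two groups of intervals drawn around distant centres, as in the paper's bivariate-normal setup), the loop typically stabilizes after a couple of iterations; the per-cluster version of the metric step would yield the adaptive variant the abstract refers to.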


Mahalanobis Distance · Interval Data · Partitional Method · Monte Carlo Experience · Adequacy Criterion
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.





Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Renata M. C. R. de Souza (1)
  • Francisco A. T. de Carvalho (1)
  • Camilo P. Tenorio (1)

  1. Centro de Informática (CIn / UFPE), Recife, Brasil
