
Multivariate Saddle Point Detection for Statistical Clustering

  • Dorin Comaniciu
  • Visvanathan Ramesh
  • Alessio Del Bue
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2352)

Abstract

Decomposition methods based on nonparametric density estimation define a cluster as the basin of attraction of a local maximum (mode) of the density function, with the cluster borders represented by the valleys surrounding the mode. To measure the significance of each delineated cluster we propose a test statistic that compares the estimated density of the mode with the estimated maximum density on the cluster boundary. While for a given kernel bandwidth the modes can be safely obtained by using the mean shift procedure, the detection of the maximum-density points on the cluster boundary (i.e., the saddle points) is not straightforward for multivariate data. We therefore develop a gradient-based iterative algorithm for saddle point detection and show its effectiveness in various data decomposition tasks. After finding the highest-density saddle point associated with each cluster, we compute significance measures that allow formal hypothesis testing of cluster existence. The new statistical framework is extended and tested for the task of image segmentation.
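
As a rough illustration of the mechanics described in the abstract, the sketch below shows a fixed-bandwidth mean shift iteration converging to a density mode, together with a simple ratio of kernel density estimates at a candidate boundary point and at the mode. This is a minimal sketch only: the Gaussian kernel, the bandwidth parameter h, the function names, and the plain density ratio are assumptions made for illustration, not the paper's saddle-point detection algorithm or its exact test statistic.

```python
import numpy as np

def gaussian_kde(x, data, h):
    """Fixed-bandwidth Gaussian kernel density estimate at point x."""
    n, d = data.shape
    sq = np.sum(((data - x) / h) ** 2, axis=1)
    return np.exp(-0.5 * sq).sum() / (n * (h * np.sqrt(2.0 * np.pi)) ** d)

def mean_shift_mode(x0, data, h, tol=1e-6, max_iter=500):
    """Iterate the mean shift procedure from x0 until it converges to a local mode."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        w = np.exp(-0.5 * np.sum(((data - x) / h) ** 2, axis=1))  # kernel weights
        x_new = (w[:, None] * data).sum(axis=0) / w.sum()         # weighted mean
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def mode_to_saddle_ratio(mode, saddle, data, h):
    """Ratio of the estimated density at a candidate boundary (saddle) point to the
    estimated density at the mode; a ratio near 1 suggests a weakly separated cluster."""
    return gaussian_kde(saddle, data, h) / gaussian_kde(mode, data, h)

# Toy usage: two well-separated Gaussian blobs; the midpoint between them stands
# in for a boundary saddle point (illustration only, not a detected saddle).
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(-2, 0.5, (200, 2)), rng.normal(2, 0.5, (200, 2))])
mode = mean_shift_mode(data[0], data, h=0.5)
print(mode_to_saddle_ratio(mode, np.zeros(2), data, h=0.5))
```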

Keywords

grouping and segmentation, image features, nonparametric clustering, cluster significance



Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Dorin Comaniciu (1)
  • Visvanathan Ramesh (1)
  • Alessio Del Bue (2)

  1. Imaging and Visualization Department, Siemens Corporate Research, Princeton, USA
  2. Department of Biophysical and Electronic Engineering, University of Genova, Genova, Italy
