Machine Learning, Volume 42, Issue 1–2, pp. 9–29

An Experimental Comparison of Model-Based Clustering Methods

  • Marina Meilă
  • David Heckerman

Abstract

We compare the three basic algorithms for model-based clustering on high-dimensional discrete-variable datasets. All three algorithms use the same underlying model: a naive-Bayes model with a hidden root node, also known as a multinomial-mixture model. In the first part of the paper, we perform an experimental comparison between three batch algorithms that learn the parameters of this model: the Expectation–Maximization (EM) algorithm, a “winner take all” version of the EM algorithm reminiscent of the K-means algorithm, and model-based agglomerative clustering. We find that the EM algorithm significantly outperforms the other methods, and proceed to investigate the effect of various initialization methods on the final solution produced by the EM algorithm. The initializations that we consider are (1) parameters sampled from an uninformative prior, (2) random perturbations of the marginal distribution of the data, and (3) the output of agglomerative clustering. Although the methods are substantially different, they lead to learned models that are similar in quality.
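
To make the setting concrete, below is a minimal sketch of batch EM for the naive-Bayes mixture described in the abstract, restricted to binary observed variables for brevity (the multinomial case generalizes each Bernoulli parameter to a distribution over more than two values). The function em_mixture, its parameter names, and the perturbation scale 0.1 are illustrative assumptions, not the authors' implementation; initialization (3), seeding from agglomerative clustering, is omitted.

    # A minimal sketch of batch EM for a naive-Bayes mixture with a hidden
    # root node, assuming binary observations; all names are illustrative.
    import numpy as np

    def em_mixture(X, n_clusters, n_iters=100, init="marginal", seed=0):
        """Fit a mixture of independent Bernoullis to binary data X (n x d)."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        pi = np.full(n_clusters, 1.0 / n_clusters)   # mixing weights P(Z = k)
        if init == "marginal":
            # Initialization (2): random perturbations of the data marginals.
            marginal = X.mean(axis=0)                # empirical P(X_j = 1)
            theta = marginal + 0.1 * rng.standard_normal((n_clusters, d))
        else:
            # Initialization (1): parameters sampled from a flat prior.
            theta = rng.uniform(size=(n_clusters, d))
        theta = np.clip(theta, 0.01, 0.99)           # P(X_j = 1 | Z = k)

        for _ in range(n_iters):
            # E step: responsibilities P(Z = k | x_i), computed in log space.
            log_p = (np.log(pi)
                     + X @ np.log(theta).T
                     + (1 - X) @ np.log(1 - theta).T)  # shape (n, K)
            log_p -= log_p.max(axis=1, keepdims=True)  # numerical stability
            resp = np.exp(log_p)
            resp /= resp.sum(axis=1, keepdims=True)

            # M step: re-estimate parameters from the soft assignments.
            nk = resp.sum(axis=0)                    # expected cluster sizes
            pi = nk / n
            theta = np.clip(resp.T @ X / nk[:, None], 1e-6, 1 - 1e-6)
        return pi, theta, resp

The "winner take all" variant compared in the paper (classification EM) would replace the soft responsibilities in the E step with a hard argmax assignment, and initialization (3) would instead set theta from the clusters produced by model-based agglomerative clustering.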

Keywords: clustering, model-based clustering, naive-Bayes model, multinomial-mixture model, EM algorithm, agglomerative clustering, initialization

Copyright information

© Kluwer Academic Publishers 2001

Authors and Affiliations

  • Marina Meilă¹
  • David Heckerman¹

  1. Microsoft Research, Redmond, WA, USA
