Genetic Algorithms for Subset Selection in Model-Based Clustering

  • Chapter in: Unsupervised Learning Algorithms

Abstract

Model-based clustering assumes that the observed data can be represented by a finite mixture model, in which each cluster is described by a parametric distribution; in the multivariate continuous case, the Gaussian distribution is often employed. Identifying the subset of relevant clustering variables yields a parsimonious number of unknown parameters, and hence more efficient estimates, a clearer interpretation and, often, improved clustering partitions. This chapter discusses variable or feature selection for model-based clustering. Following the approach of Raftery and Dean (J Am Stat Assoc 101(473):168–178, 2006), the problem of subset selection is recast as a model comparison problem, and BIC is used to approximate Bayes factors. The proposed criterion is the BIC difference between a candidate clustering model for a given subset and a model which assumes no clustering for the same subset; the problem thus amounts to finding the feature subset which maximises this criterion. The search over the potentially vast solution space is performed using genetic algorithms, stochastic search algorithms that use techniques and concepts inspired by evolutionary biology and natural selection. Numerical experiments on real data applications are presented and discussed.
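The approach summarised above can be sketched in a few lines of code: encode each candidate subset as a binary string, score it by the BIC difference between a clustering model and a single-component (no-clustering) model fitted to those variables, and let a genetic algorithm evolve the population of subsets. This is only an illustrative sketch, not the chapter's implementation: the chapter works in R with the parsimonious Gaussian models of mclust and the GA package, whereas here scikit-learn's full-covariance GaussianMixture stands in for the mixture fits, and the population size, operator rates and iris example are arbitrary choices.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def bic_criterion(X, mask, n_clusters=3):
    """BIC difference between a clustering model (n_clusters components) and
    a no-clustering model (1 component) on the selected variables.
    Larger is better; empty subsets are disallowed."""
    if not mask.any():
        return -np.inf
    Xs = X[:, mask]
    clust = GaussianMixture(n_components=n_clusters, random_state=0).fit(Xs)
    none = GaussianMixture(n_components=1, random_state=0).fit(Xs)
    # sklearn's bic() is "smaller is better", so negate both terms
    return -clust.bic(Xs) - (-none.bic(Xs))

def ga_subset_search(X, n_clusters=3, pop_size=20, n_gen=15,
                     p_cross=0.8, p_mut=0.1):
    d = X.shape[1]
    pop = rng.random((pop_size, d)) < 0.5            # random binary masks
    fit = np.array([bic_criterion(X, m, n_clusters) for m in pop])
    for _ in range(n_gen):
        # binary tournament selection of parents
        idx = rng.integers(pop_size, size=(pop_size, 2))
        parents = pop[np.where(fit[idx[:, 0]] >= fit[idx[:, 1]],
                               idx[:, 0], idx[:, 1])]
        # one-point crossover on consecutive pairs
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < p_cross:
                cut = rng.integers(1, d)
                children[i, cut:] = parents[i + 1, cut:]
                children[i + 1, cut:] = parents[i, cut:]
        # bit-flip mutation
        children ^= rng.random((pop_size, d)) < p_mut
        child_fit = np.array([bic_criterion(X, m, n_clusters)
                              for m in children])
        # elitism: carry the best individual seen so far into the new population
        children[child_fit.argmin()] = pop[fit.argmax()]
        child_fit[child_fit.argmin()] = fit.max()
        pop, fit = children, child_fit
    best = fit.argmax()
    return pop[best], fit[best]

X = load_iris().data
best_mask, best_fit = ga_subset_search(X)
print("selected variables:", np.flatnonzero(best_mask), "criterion:", best_fit)
```

On data with genuine group structure, such as iris, the criterion is strongly positive for informative subsets, and elitism makes the best fitness non-decreasing across generations, so the search settles on a small informative subset within a handful of generations.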

References

  1. Back, T., Fogel, D.B., Michalewicz, Z.: Evolutionary Computation 1: Basic Algorithms and Operators. IOP Publishing, Bristol and Philadelphia (2000)

  2. Banfield, J., Raftery, A.E.: Model-based Gaussian and non-Gaussian clustering. Biometrics 49, 803–821 (1993)

  3. Bean, J.C.: Genetic algorithms and random keys for sequencing and optimization. ORSA J. Comput. 6(2), 154–160 (1994)

  4. Biernacki, C., Celeux, G., Govaert, G.: Assessing a mixture model for clustering with the integrated completed likelihood. IEEE Trans. Pattern Anal. Mach. Intell. 22(7), 719–725 (2000)

  5. Celeux, G., Govaert, G.: Gaussian parsimonious clustering models. Pattern Recogn. 28, 781–793 (1995)

  6. Chang, W.C.: On using principal components before separating a mixture of two multivariate normal distributions. Appl. Stat. 32(3), 267–275 (1983)

  7. Chatterjee, S., Laudato, M., Lynch, L.A.: Genetic algorithms and their statistical applications: an introduction. Comput. Stat. Data Anal. 22, 633–651 (1996)

  8. Cook, D.R., Forzani, L.: Likelihood-based sufficient dimension reduction. J. Am. Stat. Assoc. 104(485), 197–208 (2009)

  9. Dean, N., Raftery, A.E.: clustvarsel: variable selection for model-based clustering. R package version 1.3 (2009). http://CRAN.R-project.org/package=clustvarsel

  10. Dean, N., Raftery, A.E., Scrucca, L.: clustvarsel: variable selection for model-based clustering. R package version 2.1 (2014). http://CRAN.R-project.org/package=clustvarsel

  11. Dempster, A.P., Laird, N.M., Rubin, D.B.: Maximum likelihood from incomplete data via the EM algorithm (with discussion). J. R. Stat. Soc. Ser. B Stat. Methodol. 39, 1–38 (1977)

  12. Forina, M., Armanino, C., Castino, M., Ubigli, M.: Multivariate data analysis as a discriminating method of the origin of wines. Vitis 25, 189–201 (1986). Wine Recognition Database, ftp://ftp.ics.uci.edu/pub/machine-learning-databases/wine

  13. Fraley, C., Raftery, A.E.: How many clusters? Which clustering method? Answers via model-based cluster analysis. Comput. J. 41, 578–588 (1998)

  14. Fraley, C., Raftery, A.E.: Model-based clustering, discriminant analysis, and density estimation. J. Am. Stat. Assoc. 97(458), 611–631 (2002)

  15. Fraley, C., Raftery, A.E., Murphy, T.B., Scrucca, L.: mclust version 4 for R: normal mixture modeling for model-based clustering, classification, and density estimation. Technical Report 597, Department of Statistics, University of Washington (2012)

  16. Goldberg, D.: Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley Professional, Boston, MA (1989)

  17. Haupt, R.L., Haupt, S.E.: Practical Genetic Algorithms, 2nd edn. Wiley, New York (2004)

  18. Holland, J.H.: Genetic algorithms. Sci. Am. 267(1), 66–72 (1992)

  19. Hubert, L., Arabie, P.: Comparing partitions. J. Classif. 2, 193–218 (1985)

  20. Kass, R.E., Raftery, A.E.: Bayes factors. J. Am. Stat. Assoc. 90, 773–795 (1995)

  21. Keribin, C.: Consistent estimation of the order of mixture models. Sankhya Ser. A 62(1), 49–66 (2000)

  22. Law, M.H.C., Figueiredo, M.A.T., Jain, A.K.: Simultaneous feature selection and clustering using mixture models. IEEE Trans. Pattern Anal. Mach. Intell. 26(9), 1154–1166 (2004)

  23. Maugis, C., Celeux, G., Martin-Magniette, M.L.: Variable selection for clustering with Gaussian mixture models. Biometrics 65(3), 701–709 (2009)

  24. Maugis, C., Celeux, G., Martin-Magniette, M.L.: Variable selection in model-based clustering: a general variable role modeling. Comput. Stat. Data Anal. 53(11), 3872–3882 (2009)

  25. McLachlan, G.J., Krishnan, T.: The EM Algorithm and Extensions, 2nd edn. Wiley, Hoboken, NJ (2008)

  26. McLachlan, G.J., Peel, D.: Finite Mixture Models. Wiley, New York (2000)

  27. Melnykov, V., Maitra, R.: Finite mixture models and model-based clustering. Stat. Surv. 4, 80–116 (2010)

  28. Neath, A.A., Cavanaugh, J.E.: The Bayesian information criterion: background, derivation, and applications. Wiley Interdiscip. Rev. Comput. Stat. 4(2), 199–203 (2012). doi:10.1002/wics.199

  29. R Core Team: R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria (2014). http://www.R-project.org/

  30. Raftery, A.E., Dean, N.: Variable selection for model-based clustering. J. Am. Stat. Assoc. 101(473), 168–178 (2006)

  31. Roeder, K., Wasserman, L.: Practical Bayesian density estimation using mixtures of normals. J. Am. Stat. Assoc. 92(439), 894–902 (1997)

  32. Schwarz, G.: Estimating the dimension of a model. Ann. Stat. 6(2), 461–464 (1978)

  33. Scrucca, L.: Dimension reduction for model-based clustering. Stat. Comput. 20(4), 471–484 (2010). doi:10.1007/s11222-009-9138-7

  34. Scrucca, L.: GA: a package for genetic algorithms in R. J. Stat. Softw. 53(4), 1–37 (2013). http://www.jstatsoft.org/v53/i04/

  35. Scrucca, L.: Graphical tools for model-based mixture discriminant analysis. Adv. Data Anal. Classif. 8(2), 147–165 (2014)

  36. Scrucca, L., Raftery, A.E.: clustvarsel: a package implementing variable selection for model-based clustering in R. J. Stat. Softw. (2014, submitted). Available at http://arxiv.org/abs/1411.0606

  37. Ševčíková, H.: Statistical simulations on parallel computers. J. Comput. Graph. Stat. 13(4), 886–906 (2004)

  38. Winker, P., Gilli, M.: Applications of optimization heuristics to estimation and modelling problems. Comput. Stat. Data Anal. 47(2), 211–223 (2004)

Corresponding author

Correspondence to Luca Scrucca.

Copyright information

© 2016 Springer International Publishing Switzerland

Cite this chapter

Scrucca, L. (2016). Genetic Algorithms for Subset Selection in Model-Based Clustering. In: Celebi, M., Aydin, K. (eds) Unsupervised Learning Algorithms. Springer, Cham. https://doi.org/10.1007/978-3-319-24211-8_3

  • DOI: https://doi.org/10.1007/978-3-319-24211-8_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-24209-5

  • Online ISBN: 978-3-319-24211-8

  • eBook Packages: Engineering, Engineering (R0)
