A review of feature selection methods based on mutual information

  • Invited Review
  • Published in: Neural Computing and Applications

Abstract

In this work, we present a review of the state of the art in information-theoretic feature selection methods. The concepts of feature relevance, redundancy, and complementarity (synergy) are clearly defined, as is the concept of a Markov blanket. The problem of optimal feature selection is defined. A unifying theoretical framework is described, which can retrofit successful heuristic criteria, indicating the approximations made by each method. A number of open problems in the field are presented.
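
As a concrete illustration of the criteria this review covers, the following is a minimal sketch (not from the paper) of greedy forward feature selection with Battiti's MIFS heuristic, J(X_k) = I(X_k; Y) − β Σ_{j∈S} I(X_k; X_j), one of the heuristic criteria that a unifying framework of this kind can retrofit. The discrete-data assumption, the helper names `mutual_information` and `mifs`, and the default β = 0.5 are illustrative choices, not prescribed by the review.

```python
# Minimal sketch: greedy forward selection with the MIFS-style criterion
#   J(X_k) = I(X_k; Y) - beta * sum_{X_j in S} I(X_k; X_j),
# i.e., reward relevance to the class Y, penalize redundancy with the
# already-selected set S. Assumes discrete-valued features.
import numpy as np

def mutual_information(x, y):
    """Empirical mutual information I(X; Y) in nats for discrete arrays."""
    x_vals, x_idx = np.unique(x, return_inverse=True)
    y_vals, y_idx = np.unique(y, return_inverse=True)
    joint = np.zeros((len(x_vals), len(y_vals)))
    np.add.at(joint, (x_idx, y_idx), 1)      # joint histogram of (x, y)
    joint /= joint.sum()                     # joint distribution p(x, y)
    px = joint.sum(axis=1, keepdims=True)    # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)    # marginal p(y)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / (px @ py)[nz])))

def mifs(X, y, k, beta=0.5):
    """Greedily select k column indices of X by the MIFS criterion."""
    relevance = [mutual_information(X[:, i], y) for i in range(X.shape[1])]
    selected, remaining = [], set(range(X.shape[1]))
    while len(selected) < k:
        def score(i):
            redundancy = sum(mutual_information(X[:, i], X[:, j])
                             for j in selected)
            return relevance[i] - beta * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

Because MIFS only penalizes redundancy, it can miss complementary (synergistic) pairs, e.g., a target y = x_1 XOR x_2, where each feature alone carries zero information about y; criteria that add a conditional term of the form +γ Σ_{j∈S} I(X_k; X_j | Y) are designed to capture exactly this effect.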

Acknowledgments

This work was funded by CONICYT-CHILE under grant FONDECYT 1110701.

Author information

Corresponding author

Correspondence to Pablo A. Estévez.

Cite this article

Vergara, J.R., Estévez, P.A. A review of feature selection methods based on mutual information. Neural Comput & Applic 24, 175–186 (2014). https://doi.org/10.1007/s00521-013-1368-0
