Optimal Feature Selection for Decision Trees Induction Using a Genetic Algorithm Wrapper - A Model Approach

  • Prokopis K. Theodoridis
  • Dimitris C. Gkikas
Conference paper
Part of the Springer Proceedings in Business and Economics book series (SPBE)


This paper describes an approach to a model that classifies optimised subsets of data. Two algorithms run in a seemingly parallel fashion to select features through an optimisation process, using a wrapper method to reduce overfitting while maintaining accuracy. A wrapper method measures how useful features are by optimising the classifier's performance. When large datasets are classified, the risk of overfitting is high. Instead of classifying an entire large dataset, a "smarter" approach classifies subsets of its attributes, encoded as chromosomes, using a genetic algorithm. The genetic algorithm searches for the best combinations of chromosomes across a series of populations called generations. It produces a large number of chromosomes, each consisting of a certain number of attributes, also called genes; each chromosome is evaluated by the decision tree and assigned a fitness value equal to the classification accuracy it achieves. Only the fittest chromosomes pass to the next generation. This method reduces the number of genes being classified, lowering the risk of overfitting at the same time. At the end, the fittest chromosomes, that is, the best subsets of attributes, are returned. The method supports faster and more accurate decision making. Applications of this wrapper include digital marketing campaign metrics, analytics metrics, website ranking factors, content curation, keyword research, consumer/visitor behaviour analysis, and other areas of marketing and business interest.


Keywords: Decision trees · Genetic algorithm · Data classification · Data optimisation · Overfitting · Classification accuracy · Chromosomes · Genes
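The wrapper loop described in the abstract can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the dataset, the penalty weight, and the use of a one-feature decision stump in place of a full decision tree are all assumptions made to keep the sketch self-contained. Chromosomes are bit vectors over the attributes, fitness is the classifier's accuracy on the selected subset (with a small penalty favouring fewer genes), and only the fittest chromosomes survive into the next generation.

```python
import random

random.seed(0)

# Toy data: 5 binary features, 40 samples; only feature 2 determines the
# label (a stand-in for a real dataset with informative and noise features).
N_FEATURES = 5
data = []
for _ in range(40):
    x = [random.randint(0, 1) for _ in range(N_FEATURES)]
    data.append((x, x[2]))  # label copies the informative gene

def stump_accuracy(feature, rows):
    """Accuracy of the best one-node decision stump on a single feature
    (an illustrative stand-in for a full decision-tree learner)."""
    hits = sum(1 for x, y in rows if x[feature] == y)
    return max(hits, len(rows) - hits) / len(rows)

def fitness(chrom):
    """Classification accuracy on the selected genes, minus a small
    size penalty so smaller subsets win ties (assumed weight 0.01)."""
    selected = [i for i, bit in enumerate(chrom) if bit]
    if not selected:
        return 0.0
    acc = max(stump_accuracy(i, data) for i in selected)
    return acc - 0.01 * len(selected)

def evolve(pop_size=20, generations=15, p_mut=0.1):
    # Initial generation: random chromosomes (bit vectors over attributes).
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        next_pop = [row[:] for row in scored[:2]]   # elitism: fittest survive
        while len(next_pop) < pop_size:
            a, b = random.sample(scored[:10], 2)    # select from the fittest half
            cut = random.randrange(1, N_FEATURES)   # single-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut) for bit in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

best = evolve()
print("fittest chromosome:", best, "fitness:", round(fitness(best), 3))
```

With the seeded toy data the search reliably selects the informative gene (feature 2) while pruning the noise features, which is the behaviour the wrapper is designed to produce on larger attribute sets.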



Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. University of Patras, Agrinio, Greece
