The induction of decision trees is one of the oldest and most popular techniques for learning discriminative models; it was developed independently in the statistics (Breiman et al. 1984; Kass 1980) and machine learning (Hunt et al. 1966; Quinlan 1983, 1986) communities. A decision tree is a tree-structured classification model that is easy to interpret, even by non-expert users, and can be induced efficiently from data. An extensive survey of decision-tree learning can be found in Murthy (1998).
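The core of efficient induction is a greedy search: at each node, the learner picks the attribute whose split most reduces class impurity. A minimal sketch of that step, using the information-gain criterion of ID3-style induction (Quinlan 1986) on a hypothetical toy dataset (attribute names and values are illustrative, not from any reference above):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Entropy reduction from partitioning the rows on one attribute index."""
    n = len(labels)
    partitions = {}
    for row, y in zip(rows, labels):
        partitions.setdefault(row[attr], []).append(y)
    return entropy(labels) - sum(
        len(part) / n * entropy(part) for part in partitions.values()
    )

# Hypothetical toy data: attribute 0 = outlook, attribute 1 = windy.
rows = [("sunny", "no"), ("sunny", "yes"), ("rain", "no"), ("rain", "yes")]
labels = ["play", "stay", "play", "stay"]

# Greedily choose the root split: the attribute with the highest gain.
# Here "windy" separates the classes perfectly, so it is chosen.
best = max(range(2), key=lambda a: information_gain(rows, labels, a))
```

Recursing on each resulting partition, and stopping when a node is pure or no attributes remain, yields the full tree; production learners such as C4.5 (Quinlan 1993) refine this scheme with gain ratio, pruning, and handling of continuous and missing values.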
- Buntine W, Niblett T (1992) A further comparison of splitting rules for decision-tree induction. Mach Learn 8:75–85
- Freund Y, Schapire RE (1996) Experiments with a new boosting algorithm. In: Saitta L (ed) Proceedings of the 13th international conference on machine learning, Bari. Morgan Kaufmann, pp 148–156
- Hunt EB, Marin J, Stone PJ (1966) Experiments in induction. Academic, New York
- Mingers J (1989a) An empirical comparison of selection measures for decision-tree induction. Mach Learn 3:319–342
- Quinlan JR (1983) Learning efficient classification procedures and their application to chess end games. In: Michalski RS, Carbonell JG, Mitchell TM (eds) Machine learning: an artificial intelligence approach. Tioga, Palo Alto, pp 463–482
- Quinlan JR (1986) Induction of decision trees. Mach Learn 1:81–106
- Quinlan JR (1993) C4.5: programs for machine learning. Morgan Kaufmann, San Mateo