Partitioning Nominal Attributes in Decision Trees
To find the optimal branching of a nominal attribute at a node in an L-ary decision tree, one is often forced to search over all possible L-ary partitions for the one that yields the minimum impurity measure. For binary trees (L = 2), when there are just two classes a short-cut search is possible that is linear in n, the number of distinct values of the attribute. For the general case in which the number of classes, k, may be greater than two, Burshtein et al. have shown that the optimal partition satisfies a condition that involves the existence of L(L-1)/2 hyperplanes in the class probability space, one separating each pair of subsets. We derive a property of the optimal partition for concave impurity measures (including in particular the Gini and entropy impurity measures) in terms of the existence of L vectors in the dual of the class probability space, which implies the earlier condition.
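The two-class shortcut mentioned above (due to Breiman et al. 1984) can be illustrated with a minimal sketch: sort the attribute's values by their class-1 probability, and only the n-1 "contiguous" splits in that order need be evaluated. The function names and the use of the Gini criterion here are illustrative choices, not the paper's own code.

```python
# Sketch of the two-class, binary-tree shortcut: instead of trying all
# 2^(n-1) binary partitions of the n attribute values, sort values by
# p(class 1 | value) and scan only the n-1 contiguous splits.

def gini(c1, c2):
    """Gini impurity of a node holding c1 class-1 and c2 class-2 examples."""
    n = c1 + c2
    if n == 0:
        return 0.0
    p = c1 / n
    return 2.0 * p * (1.0 - p)

def best_two_class_split(counts):
    """counts: dict mapping attribute value -> (class1_count, class2_count).
    Returns (left_values, weighted_impurity) for the best binary split."""
    total = sum(a + b for a, b in counts.values())
    t1 = sum(a for a, _ in counts.values())
    t2 = sum(b for _, b in counts.values())
    # Order values by class-1 probability; the optimal split is contiguous.
    order = sorted(counts, key=lambda v: counts[v][0] / sum(counts[v]))
    best = (None, float("inf"))
    l1 = l2 = 0
    for i in range(len(order) - 1):  # only n-1 candidate splits
        a, b = counts[order[i]]
        l1 += a
        l2 += b
        r1, r2 = t1 - l1, t2 - l2
        imp = ((l1 + l2) * gini(l1, l2) + (r1 + r2) * gini(r1, r2)) / total
        if imp < best[1]:
            best = (set(order[:i + 1]), imp)
    return best
```

For example, with values whose class-1 probabilities cluster into two groups, the scan recovers the natural two-group partition in O(n log n) time (linear after sorting).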
Unfortunately, these insights still do not offer a practical search method when n and k are large, even for binary trees. We therefore present a new heuristic search algorithm to find a good partition. It is based on ordering the attribute's values according to their principal component scores in the class probability space, and is linear in n. We demonstrate the effectiveness of the new method through Monte Carlo simulation experiments and compare its performance against other heuristic methods.
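The heuristic described above can be sketched as follows. This is my own minimal reconstruction from the abstract, not the authors' implementation: each attribute value's class-probability vector is projected onto the first principal component, values are sorted by that score, and only the n-1 contiguous splits in the resulting order are evaluated (here with the Gini criterion).

```python
# Sketch of the PCA-ordering heuristic for k > 2 classes: order the n
# attribute values by their scores on the first principal component of the
# class-probability vectors, then scan the n-1 contiguous binary splits.

import numpy as np

def gini_impurity(counts):
    """Gini impurity of a node with per-class counts `counts`."""
    n = counts.sum()
    if n == 0:
        return 0.0
    p = counts / n
    return 1.0 - np.sum(p * p)

def pca_ordered_split(count_matrix):
    """count_matrix: (n_values, k_classes) array of class counts per value.
    Returns (left_value_indices, weighted_gini) for the best contiguous split."""
    counts = np.asarray(count_matrix, dtype=float)
    weights = counts.sum(axis=1)
    probs = counts / weights[:, None]          # class-probability vectors
    mean = np.average(probs, axis=0, weights=weights)
    centered = probs - mean
    cov = (centered * weights[:, None]).T @ centered / weights.sum()
    _, vecs = np.linalg.eigh(cov)              # eigenvalues ascending
    pc1 = vecs[:, -1]                          # leading principal direction
    order = np.argsort(probs @ pc1)            # sort values by PC score
    total, tcounts = weights.sum(), counts.sum(axis=0)
    left = np.zeros_like(tcounts)
    best = (None, float("inf"))
    for i in range(len(order) - 1):            # n-1 contiguous splits
        left += counts[order[i]]
        right = tcounts - left
        imp = (left.sum() * gini_impurity(left)
               + right.sum() * gini_impurity(right)) / total
        if imp < best[1]:
            best = (list(order[:i + 1]), imp)
    return best
```

The search is linear in n after the sort, which is what makes the heuristic practical when n is large; it is not guaranteed to find the globally optimal partition.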
- Breiman, L., Friedman, J.H., Olshen, R.A., and Stone, C.J. 1984. Classification and Regression Trees. Monterey, CA: Wadsworth International Group.
- Burshtein, D., Pietra, V.D., Kanevsky, D., and Nadas, A. 1989. A splitting theorem for tree construction. Research Report RC 14754, IBM Research Division, Yorktown Heights, NY.
- Burshtein, D., Pietra, V.D., Kanevsky, D., and Nadas, A. 1992. Minimum impurity partitions. Ann. Statist., 20:1637-1646.
- Chou, P.A. 1988. Application of information theory to pattern recognition and the design of decision trees and trellises. Ph.D. Dissertation, Stanford Univ., Stanford, CA.
- Chou, P.A. 1991. Optimal partitioning for classification and regression trees. IEEE Trans. PAMI, 13:340-354.
- Cover, T.M. 1965. Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. IEEE Trans. Elec. Comput., EC-14:326-334.
- Mehta, M., Agrawal, R., and Rissanen, J. 1996. SLIQ: A fast classifier for data mining. Proc. 5th Int. Conf. on Extending Database Technology, Avignon, France.
- Nadas, A., Nahamoo, D., Picheny, M.A., and Powell, J. 1991. An iterative flip-flop approximation of the most informative split in the construction of decision trees. Proc. ICASSP-91, pp. 565-568.
- NASA. 1992. Introduction to IND Version 2.1, GA23-2475-02 edition. NASA Ames Research Center.
- Palmer, W.C. 1965. Meteorological drought. Research Paper 45, Weather Bureau, Washington, DC.
- Quinlan, J.R. 1993. C4.5: Programs for Machine Learning. San Francisco, CA: Morgan Kaufmann.
- Teqnical Services Inc. 1996. National Electronic Drought Atlas (CD-ROM). Teqnical Services Inc., New London, CT.
- Willeke, G.E., Hosking, J.R.M., Wallis, J.R., and Guttman, N.B. 1995. The National Drought Atlas (draft). IWR Report 94-NDS-4, US Army Corps of Engineers, Fort Belvoir, VA.