An Extended Study of the Discriminant Random Forest
Classification technologies have become increasingly vital to information analysis systems that rely upon collected data to make predictions or informed decisions. Many approaches have been developed, but one of the most successful in recent times is the random forest. The discriminant random forest is a novel extension of the random forest classification methodology that leverages linear discriminant analysis to perform multivariate node splitting during tree construction. An extended study of the discriminant random forest is presented which shows that its individual classifiers are stronger and more diverse than their random forest counterparts, yielding statistically significant reductions in classification error of up to 79.5%. Moreover, empirical tests suggest that this approach is less costly in both memory and computation. Further enhancements of the methodology are investigated that exhibit significant performance improvements and greater stability at low false alarm rates.
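The multivariate splitting idea described above can be illustrated with a minimal sketch of a Fisher LDA node split: samples at a node are projected onto the discriminant direction w = Sw⁻¹(μ₁ − μ₀) and partitioned by thresholding at the midpoint of the projected class means. The `lda_split` function, the regularization term, and the toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lda_split(X, y):
    """Fisher LDA split for a two-class node: project samples onto the
    discriminant direction w = Sw^{-1}(mu1 - mu0), then threshold at the
    midpoint of the projected class means. Returns a boolean mask
    (True -> right child)."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter, lightly regularized for invertibility
    # (the ridge term is an assumption for this sketch)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    Sw += 1e-6 * np.eye(X.shape[1])
    w = np.linalg.solve(Sw, mu1 - mu0)
    threshold = 0.5 * ((X0 @ w).mean() + (X1 @ w).mean())
    return (X @ w) > threshold

# Toy data: two well-separated Gaussian clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.repeat([0, 1], 50)
mask = lda_split(X, y)
```

Unlike the single-feature threshold splits of a standard random forest, such a split uses all candidate features at once, which is the source of the stronger individual trees reported in the study.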
Keywords: Random Forest · Linear Discriminant Analysis · False Alarm Rate · Node Splitting · Split Dimension
This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.