
Data Mining, pp. 123–146

An Extended Study of the Discriminant Random Forest

  • Tracy D. Lemmond
  • Barry Y. Chen
  • Andrew O. Hatch
  • William G. Hanley
Chapter
Part of the Annals of Information Systems book series (AOIS, volume 8)

Abstract

Classification technologies have become increasingly vital to information analysis systems that rely upon collected data to make predictions or informed decisions. Many approaches have been developed, but one of the most successful in recent times is the random forest. The discriminant random forest is a novel extension of the random forest classification methodology that leverages linear discriminant analysis to perform multivariate node splitting during tree construction. An extended study of the discriminant random forest is presented, showing that its individual classifiers are stronger and more diverse than their random forest counterparts, yielding statistically significant reductions in classification error of up to 79.5%. Moreover, empirical tests suggest that this approach is less costly in both memory and computation time. Further enhancements of the methodology are investigated that exhibit significant performance improvements and greater stability at low false alarm rates.
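The splitting mechanism described in the abstract, fitting a linear discriminant analysis (LDA) projection over a random subset of features at each node instead of thresholding a single variable, can be illustrated with a minimal Python sketch. The names lda_direction and lda_split, the ridge term added to the within-class scatter, and the midpoint threshold are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def lda_direction(X, y):
        """Fisher discriminant direction for a two-class sample: the
        direction w maximizing between-class mean separation relative
        to the pooled within-class scatter."""
        X0, X1 = X[y == 0], X[y == 1]
        m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
        Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
              + np.cov(X1, rowvar=False) * (len(X1) - 1))
        Sw += 1e-6 * np.eye(X.shape[1])  # ridge term for invertibility
        return np.linalg.solve(Sw, m1 - m0)

    def lda_split(X, y, n_features, rng):
        """One multivariate node split: draw a random feature subset,
        fit an LDA direction on it, and threshold the projections at
        the midpoint of the projected class means."""
        dims = rng.choice(X.shape[1], size=n_features, replace=False)
        w = lda_direction(X[:, dims], y)
        z = X[:, dims] @ w
        t = 0.5 * (z[y == 0].mean() + z[y == 1].mean())
        return dims, w, t, z <= t  # boolean mask selecting the left child

    # Toy usage: two overlapping classes in ten dimensions.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    y = (X[:, 0] + X[:, 3] > 0).astype(int)
    dims, w, t, left = lda_split(X, y, n_features=3, rng=rng)
    print(dims, round(t, 3), left.sum(), (~left).sum())

A full discriminant random forest would apply such a split recursively to bootstrap samples and aggregate votes across many trees, in the manner of Breiman's random forest.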

Keywords

Random Forest · Linear Discriminant Analysis · False Alarm Rate · Node Splitting · Split Dimension


Notes

Acknowledgments

This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.


Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  1. Lawrence Livermore National Laboratory, Systems and Decision Sciences, Livermore, USA
