
Mining Big Data with Random Forests

  • Alessandro Lulli
  • Luca Oneto
  • Davide Anguita

Abstract

In the current big data era, naive implementations of well-known learning algorithms cannot deal with large datasets efficiently and effectively. Random forests (RFs) are a popular ensemble-based method for classification. RFs have proven effective in many real-world classification problems and are commonly regarded as one of the best learning algorithms in this context. In this paper, we develop an RF implementation called ReForeSt which, unlike currently available solutions, can distribute the data across the available machines in two different ways, optimizing the computational and memory requirements of RF training on arbitrarily large datasets ranging from millions of samples to millions of features. ReForeSt also supports a recently proposed improved RF formulation, random rotation ensembles, which can be used in conjunction with model selection to automatically tune the RF hyperparameters. We perform an extensive experimental evaluation on a wide range of large datasets and in several environments with different numbers of machines and cores per machine. The results show that ReForeSt is less computationally intensive, more memory efficient, and more effective than state-of-the-art alternatives such as MLlib.
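For context on the distributed RF training that the abstract compares against, the following minimal Scala sketch shows how the MLlib baseline mentioned above trains and evaluates an RF classifier on Apache Spark; it is not ReForeSt's own API. The input path, split ratio, and hyperparameter values (number of trees, maximum depth, feature-subset strategy) are illustrative assumptions, and those hyperparameters are exactly the quantities that a model-selection phase would tune.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.ml.classification.RandomForestClassifier
import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator

object MLlibRandomForestBaseline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("rf-baseline").getOrCreate()

    // Hypothetical dataset in LIBSVM format, loaded as "label"/"features" columns.
    val data = spark.read.format("libsvm").load("hdfs:///data/training_set.libsvm")
    val Array(train, test) = data.randomSplit(Array(0.8, 0.2), seed = 42L)

    // Illustrative hyperparameters: number of trees, tree depth, and the
    // per-split feature subsampling strategy are the values model selection would tune.
    val rf = new RandomForestClassifier()
      .setLabelCol("label")
      .setFeaturesCol("features")
      .setNumTrees(100)
      .setMaxDepth(10)
      .setFeatureSubsetStrategy("sqrt")

    val model = rf.fit(train)

    // Evaluate classification accuracy on the held-out split.
    val accuracy = new MulticlassClassificationEvaluator()
      .setLabelCol("label")
      .setPredictionCol("prediction")
      .setMetricName("accuracy")
      .evaluate(model.transform(test))

    println(s"Test accuracy: $accuracy")
    spark.stop()
  }
}
```

In this baseline, the training data are partitioned by samples only; the point of the paper is that ReForeSt can additionally partition by features, which changes the computational and memory profile for very wide datasets.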

Keywords

Random forest · Random rotation ensembles · Model selection · Big data · Apache Spark · Memory efficiency · Computational efficiency · Open source software

Notes

Compliance with Ethical Standards

Conflict of Interest

The authors declare that they have no conflict of interest.

Ethical Approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Informed Consent

Not applicable.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. DIBRIS Department, University of Genoa, Genoa, Italy