
ForestNet – Automatic Design of Sparse Multilayer Perceptron Network Architectures Using Ensembles of Randomized Trees

  • Dalia Rodríguez-Salas
  • Nishant Ravikumar
  • Mathias Seuret
  • Andreas Maier
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12046)

Abstract

In this paper, we introduce ForestNet, a mechanism for designing the architecture of a sparse multilayer perceptron network for classification. Networks built with our approach can handle high-dimensional data and learn representations of both visual and non-visual data. The approach first builds an ensemble of randomized trees to gather information on the hierarchy of features and their separability among the classes. This information is then used to design the architecture of a sparse network tailored to a specific dataset and application; the number of neurons is adapted to the dataset automatically. We evaluated the approach on two non-visual and two visual datasets. For each dataset, four ensembles of randomized trees of different sizes were built; for each ensemble, a sparse network architecture was designed with our approach, and a fully connected network with the same architecture was also constructed. The sparse networks designed with our approach consistently outperformed their respective tree ensembles, achieving statistically significant improvements in classification accuracy. While our networks do not reach state-of-the-art results, owing to their modest size and the absence of data augmentation, the method shows very promising results: the sparse networks performed on par with their fully connected counterparts while using over 98% fewer connections on the visual tasks.
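
To make the idea concrete, below is a minimal sketch of the general tree-to-sparse-network mapping that ForestNet builds on: each internal node of each tree in the ensemble becomes a hidden neuron connected only to the feature that node splits on. This is an illustrative assumption, not the paper's exact construction; the choice of scikit-learn's ExtraTreesClassifier, the ensemble size, the masking scheme, and the initialization are all placeholders.

# Hedged sketch: one hidden neuron per internal tree node, wired only to
# that node's split feature. Not the exact ForestNet layer construction.
import torch
from sklearn.datasets import load_digits
from sklearn.ensemble import ExtraTreesClassifier

X, y = load_digits(return_X_y=True)
forest = ExtraTreesClassifier(n_estimators=10, random_state=0).fit(X, y)

# Collect one (hidden neuron, input feature) pair per internal tree node.
rows, cols = [], []
for estimator in forest.estimators_:
    tree = estimator.tree_
    for node in range(tree.node_count):
        if tree.feature[node] >= 0:          # leaf nodes are marked with -2
            rows.append(len(rows))           # next free hidden-neuron index
            cols.append(tree.feature[node])

# Binary connectivity mask: hidden neuron i sees only its split feature.
mask = torch.zeros(len(rows), X.shape[1])
mask[rows, cols] = 1.0

class SparseLinear(torch.nn.Module):
    """Linear layer whose weight matrix is element-wise masked to stay sparse."""
    def __init__(self, mask):
        super().__init__()
        self.register_buffer("mask", mask)
        self.weight = torch.nn.Parameter(0.01 * torch.randn_like(mask))
        self.bias = torch.nn.Parameter(torch.zeros(mask.shape[0]))

    def forward(self, x):
        return torch.nn.functional.linear(x, self.weight * self.mask, self.bias)

model = torch.nn.Sequential(
    SparseLinear(mask),                      # sparse hidden layer
    torch.nn.ReLU(),
    torch.nn.Linear(mask.shape[0], 10),      # dense output layer, 10 classes
)
logits = model(torch.tensor(X[:4], dtype=torch.float32))  # shape (4, 10)

Because each hidden neuron in this sketch keeps a single incoming connection out of the 64 input features, the masked layer stores under 2% of the weights of an equally sized dense layer, which is the order of connection reduction the abstract reports for the visual tasks.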

Keywords

Multilayer perceptron · Random forest · Randomized trees · Sparse neural networks · Network architecture


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
  2. School of Computing, University of Leeds, Leeds, UK
