Abstract
Deep forest is an alternative to deep neural networks that uses multiple layers of random forests, trained without back-propagation, to solve various problems. In this study, we propose a genetic programming (GP)-based approach that automatically and simultaneously evolves effective deep forest connection structures and extracts informative features for image classification. First, the new approach defines two types of modules: forest modules and feature extraction modules. Second, an encoding strategy is developed to integrate forest modules and feature extraction modules into a single tree, and a search strategy is introduced to find the best solution. With these designs, the proposed approach can automatically extract image features and, at the same time, discover forests with effective structures for image classification. The forest parameters are determined dynamically during the learning process. The results show that the new approach achieves better performance on datasets with a small number of training instances and competitive performance on datasets with a large number of training instances. Analysis of the evolved solutions shows that the proposed approach uses fewer random forests than the deep forest method.
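The core ideas above can be sketched in a few dozen lines. The following is a minimal, illustrative toy, not the authors' implementation: the "feature extraction modules" are replaced by trivial intensity statistics (rather than the learned GP extractors or descriptors such as HOG/LBP), and each "forest module" is stood in for by a small ensemble of one-feature threshold stumps that, like a forest in deep forest, outputs a class-probability vector that is appended to the input of the next layer. All names (`StumpForest`, `evaluate_individual`, etc.) are hypothetical.

```python
import random
from collections import Counter

def _mean(v):
    return sum(v) / len(v)

# Toy "images": 4x4 grids of floats; two classes separated by brightness.
def make_data(n, seed=0):
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        label = rng.randint(0, 1)
        base = 0.2 if label == 0 else 0.8
        img = [[base + rng.uniform(-0.15, 0.15) for _ in range(4)]
               for _ in range(4)]
        data.append((img, label))
    return data

# Feature extraction modules: each maps an image to a feature vector.
def mean_feature(img):   # global mean intensity
    return [_mean([p for row in img for p in row])]

def row_means(img):      # per-row means (crude "region" features)
    return [_mean(row) for row in img]

# Forest module stand-in: an ensemble of one-feature threshold stumps.
# Like a forest in deep forest, it emits a class-probability vector.
class StumpForest:
    def __init__(self, n_stumps=11, seed=0):
        self.n_stumps = n_stumps
        self.rng = random.Random(seed)
        self.stumps = []

    def fit(self, X, y):
        for _ in range(self.n_stumps):
            i = self.rng.randrange(len(X[0]))
            m0 = _mean([x[i] for x, lab in zip(X, y) if lab == 0])
            m1 = _mean([x[i] for x, lab in zip(X, y) if lab == 1])
            # threshold at the midpoint of the two class means
            self.stumps.append((i, (m0 + m1) / 2.0, 1 if m1 >= m0 else 0))
        return self

    def predict_proba(self, x):
        votes = Counter(pos if x[i] >= t else 1 - pos
                        for i, t, pos in self.stumps)
        return [votes[0] / self.n_stumps, votes[1] / self.n_stumps]

# One encoded individual: feature modules at the leaves feed a cascade of
# forest modules; each layer appends its class vector to its own input.
def evaluate_individual(feature_fns, n_layers, train, test):
    extract = lambda img: [v for f in feature_fns for v in f(img)]
    Xtr = [extract(img) for img, _ in train]
    ytr = [lab for _, lab in train]
    Xte = [extract(img) for img, _ in test]
    proba_te = None
    for layer in range(n_layers):
        forest = StumpForest(seed=layer).fit(Xtr, ytr)
        proba_tr = [forest.predict_proba(x) for x in Xtr]
        proba_te = [forest.predict_proba(x) for x in Xte]
        Xtr = [x + p for x, p in zip(Xtr, proba_tr)]
        Xte = [x + p for x, p in zip(Xte, proba_te)]
    preds = [0 if p[0] >= p[1] else 1 for p in proba_te]
    return sum(p == lab for p, (_, lab) in zip(preds, test)) / len(test)

train, test = make_data(80, seed=1), make_data(40, seed=2)
acc = evaluate_individual([mean_feature, row_means], 2, train, test)
print(f"toy cascade accuracy: {acc:.2f}")
```

In the paper's approach, a GP search would evolve which feature modules appear at the leaves and how the forest modules are connected, scoring each candidate tree by a fitness function such as the classification accuracy that `evaluate_individual` returns here.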
© 2020 Springer Nature Switzerland AG
Cite this paper
Bi, Y., Xue, B., Zhang, M. (2020). Evolving Deep Forest with Automatic Feature Extraction for Image Classification Using Genetic Programming. In: Bäck, T., et al. (eds.) Parallel Problem Solving from Nature – PPSN XVI. PPSN 2020. Lecture Notes in Computer Science, vol. 12269. Springer, Cham. https://doi.org/10.1007/978-3-030-58112-1_1
Print ISBN: 978-3-030-58111-4
Online ISBN: 978-3-030-58112-1