Abstract
Deep neural networks (DNNs) are prone to overfitting and hence exhibit high variance; an overfitted network generalizes poorly to new data instances. Instead of using a single DNN as the classifier, we propose an ensemble of seven independent DNN learners created by varying only the input to the DNNs while keeping their architecture and intrinsic properties the same. To induce variety in the training input, for each of the seven DNNs, one-seventh of the data is deleted and replenished by bootstrap sampling from the remaining samples. We propose a novel technique for combining the predictions of the DNN learners in the ensemble, called pre-filtering by majority voting coupled with a stacked meta-learner, which performs a two-step confidence check on the predictions before assigning the final class labels. All the algorithms in this paper are tested on five benchmark datasets, namely human activity recognition (HAR), gas sensor array drift, Isolet, Spambase, and Internet advertisements. Our ensemble approach achieves higher accuracy than a single DNN, than the average individual accuracy of the DNNs in the ensemble, and than the baseline approaches of plurality voting and meta-learning.
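The two ideas in the abstract can be illustrated with a minimal sketch. The paper does not publish code, so the function names (`make_ensemble_inputs`, `combine_predictions`), the agreement threshold `min_agreement`, and the exact fold-partitioning scheme are illustrative assumptions, not the authors' implementation: each of the seven learners drops one-seventh of the data and refills it by bootstrap sampling from the remainder, and at prediction time a plurality label is accepted only when enough learners agree, otherwise a stacked meta-learner decides.

```python
import random
from collections import Counter

def make_ensemble_inputs(data, n_learners=7, seed=0):
    """For each learner, delete one n-th contiguous fold of the data and
    replenish it by bootstrap sampling (with replacement) from the rest.
    Fold boundaries here are an illustrative choice."""
    rng = random.Random(seed)
    n = len(data)
    fold = n // n_learners
    inputs = []
    for i in range(n_learners):
        start = i * fold
        end = (i + 1) * fold if i < n_learners - 1 else n
        remaining = data[:start] + data[end:]          # delete one fold
        replenished = [rng.choice(remaining)           # bootstrap refill
                       for _ in range(n - len(remaining))]
        inputs.append(remaining + replenished)
    return inputs

def combine_predictions(base_preds, meta_predict, min_agreement=5):
    """Two-step confidence check (sketch): accept the plurality label when
    at least `min_agreement` of the learners agree; otherwise defer to a
    stacked meta-learner that maps the base predictions to a final label.
    The threshold value is a hypothetical illustration."""
    label, count = Counter(base_preds).most_common(1)[0]
    if count >= min_agreement:
        return label
    return meta_predict(base_preds)
```

For example, with fourteen distinct samples and seven learners, each learner's training set keeps the full size of fourteen but never contains the two samples of its deleted fold, and a vote of five identical labels out of seven is accepted without consulting the meta-learner.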
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Jain, A., Kumar, A., Susan, S. (2022). Evaluating Deep Neural Network Ensembles by Majority Voting Cum Meta-Learning Scheme. In: Reddy, V.S., Prasad, V.K., Wang, J., Reddy, K.T.V. (eds) Soft Computing and Signal Processing. Advances in Intelligent Systems and Computing, vol 1340. Springer, Singapore. https://doi.org/10.1007/978-981-16-1249-7_4
Publisher Name: Springer, Singapore
Print ISBN: 978-981-16-1248-0
Online ISBN: 978-981-16-1249-7
eBook Packages: Intelligent Technologies and Robotics (R0)