Evaluating Deep Neural Network Ensembles by Majority Voting Cum Meta-Learning Scheme

Conference paper in Soft Computing and Signal Processing

Abstract

Deep neural networks (DNNs) are prone to overfitting and hence exhibit high variance; an overfitted network generalizes poorly to new data instances. Instead of using a single DNN as the classifier, we propose an ensemble of seven independent DNN learners formed by varying only the training input while keeping the architecture and intrinsic properties of the DNNs the same. To induce variety in the training input, for each of the seven DNNs one-seventh of the data is deleted and replenished by bootstrap sampling from the remaining samples. We also propose a novel technique for combining the predictions of the DNN learners in the ensemble: pre-filtering by majority voting coupled with a stacked meta-learner, which performs a two-step confidence check on the predictions before assigning the final class labels. All the algorithms in this paper have been tested on five benchmark datasets, namely human activity recognition (HAR), gas sensor array drift, Isolet, Spambase, and Internet advertisements. Our ensemble approach achieves higher accuracy than a single DNN, the average individual accuracy of the DNNs in the ensemble, and the baseline approaches of plurality voting and meta-learning.
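The input-variation and pre-filtering steps described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the exact partitioning of the deleted one-seventh and the agreement threshold used in the confidence check are assumptions, and the unresolved samples (marked `-1`) would be passed to the stacked meta-learner, which is omitted here.

```python
import numpy as np

def make_ensemble_inputs(X, y, n_learners=7, seed=0):
    """For learner k, delete the k-th 1/n_learners slice of the training
    data and replenish it by bootstrap sampling from the remaining rows.
    The slicing scheme is an assumption for illustration."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), n_learners)
    datasets = []
    for k in range(n_learners):
        kept = np.concatenate([folds[j] for j in range(n_learners) if j != k])
        boot = rng.choice(kept, size=len(folds[k]), replace=True)  # replenish
        rows = np.concatenate([kept, boot])
        datasets.append((X[rows], y[rows]))
    return datasets

def prefilter_majority_vote(preds, threshold=4):
    """First confidence check: accept a label only when at least
    `threshold` learners agree; otherwise flag the sample (-1) for the
    stacked meta-learner. The threshold value is illustrative."""
    preds = np.asarray(preds)              # shape (n_learners, n_samples)
    out = np.full(preds.shape[1], -1)
    for i in range(preds.shape[1]):
        labels, counts = np.unique(preds[:, i], return_counts=True)
        if counts.max() >= threshold:
            out[i] = labels[counts.argmax()]
    return out
```

Each resampled dataset keeps the original training-set size, so every learner sees roughly six-sevenths of the original rows plus duplicates of some of them, which is what decorrelates the ensemble members.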




Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Jain, A., Kumar, A., Susan, S. (2022). Evaluating Deep Neural Network Ensembles by Majority Voting Cum Meta-Learning Scheme. In: Reddy, V.S., Prasad, V.K., Wang, J., Reddy, K.T.V. (eds) Soft Computing and Signal Processing. Advances in Intelligent Systems and Computing, vol 1340. Springer, Singapore. https://doi.org/10.1007/978-981-16-1249-7_4
