
Design & Implementation of the Parallel-distributed Neural Network Ensemble

  • Conference paper
Current Trends in High Performance Computing and Its Applications

Abstract

The neural network ensemble is a recently developed technique that trains several neural networks and then combines their predictions. It significantly improves the generalization ability of a neural network system and eases the trial-and-error process of tuning network architectures. However, it is time-consuming. To overcome this disadvantage, a parallel-distributed neural network ensemble named PDNNE is proposed in this paper. The design and implementation of the PDNNE are presented through a discussion of the main issues: partitioning, communication, and the component neural networks. Experiments show that both generalization ability and time efficiency are significantly improved.
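As a rough illustration of the idea only (the paper itself targets an MPI cluster; this sketch uses Python threads, single perceptrons as stand-in "component networks", and invented names throughout, none of which come from the paper), the three steps the abstract names can be sketched as: partition the data by bootstrap resampling, train the components independently in parallel, and combine their outputs by majority vote.

```python
# Illustrative sketch only: a bagging-style ensemble whose components
# (single perceptrons, for brevity) are trained concurrently and combined
# by majority vote. The real PDNNE uses MPI and full neural networks;
# every name below is invented for this toy example.
import random
from concurrent.futures import ThreadPoolExecutor

def train_perceptron(data, epochs=50, lr=0.1, seed=0):
    """Train one component on a bootstrap resample (the partitioning step)."""
    rng = random.Random(seed)
    sample = [rng.choice(data) for _ in data]  # bootstrap resample
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in sample:
            if (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1) != y:
                w[0] += lr * y * x[0]          # standard perceptron update
                w[1] += lr * y * x[1]
                b += lr * y
    return w, b

def ensemble_predict(models, x):
    """Combine component outputs by majority vote (the combination step)."""
    votes = sum(1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1
                for w, b in models)
    return 1 if votes > 0 else -1

# Toy linearly separable data: label = sign(x0 + x1 - 1.5).
data = [((0, 0), -1), ((1, 0), -1), ((0, 1), -1),
        ((1, 1), 1), ((2, 1), 1), ((1, 2), 1)]

# Train the component networks concurrently, one worker per component.
with ThreadPoolExecutor(max_workers=4) as pool:
    models = list(pool.map(lambda s: train_perceptron(data, seed=s), range(4)))

acc = sum(ensemble_predict(models, x) == y for x, y in data) / len(data)
print(f"ensemble training accuracy: {acc:.2f}")
```

In the paper's setting the workers would be MPI processes on separate nodes and the components full neural networks, so training cost is divided across machines while the combined vote preserves (or improves) generalization, which is the trade-off the abstract claims.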




Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Liu, Y., Li, Y., Zhang, B., Wu, G. (2005). Design & Implementation of the Parallel-distributed Neural Network Ensemble. In: Zhang, W., Tong, W., Chen, Z., Glowinski, R. (eds) Current Trends in High Performance Computing and Its Applications. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-27912-1_10
