Evolving an Ensemble of Neural Networks Using Artificial Immune Systems

  • Conference paper
Simulated Evolution and Learning (SEAL 2008)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 5361)

Abstract

This paper presents a novel ensemble construction approach based on Artificial Immune Systems (AIS) to solve regression problems. Over the last few years, AIS have attracted increasing interest from researchers owing to their ability to balance exploration and exploitation of the search space. Nevertheless, there have been only a few applications of these algorithms to the construction of committee machines. In this paper, a population of feed-forward neural networks is evolved using the Clonal Selection Algorithm, and ensembles are then automatically composed from subsets of this population. Results show that the proposed algorithm can achieve good generalization performance on some hard benchmark regression problems.
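To make the approach concrete, the sketch below shows one way a CLONALG-style clonal selection loop can evolve a population of small feed-forward networks for a regression task and then form an ensemble by averaging a subset of the evolved population. It is an illustrative reading of the abstract, not the authors' exact algorithm: the one-hidden-layer architecture, the hypermutation schedule, the toy data set, and the rule of picking ensemble members by training error are all assumptions made here for brevity.

```python
# Illustrative sketch only: a CLONALG-style loop evolving feed-forward
# regressors and averaging a subset of them as an ensemble. Architecture,
# hyperparameters and the selection rule are assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

def init_net(n_in, n_hidden):
    """Random weights for a one-hidden-layer tanh network."""
    return {"W1": rng.normal(0, 0.5, (n_in, n_hidden)),
            "b1": np.zeros(n_hidden),
            "W2": rng.normal(0, 0.5, n_hidden),
            "b2": 0.0}

def predict(net, X):
    return np.tanh(X @ net["W1"] + net["b1"]) @ net["W2"] + net["b2"]

def mse(net, X, y):
    return float(np.mean((predict(net, X) - y) ** 2))

def mutate(net, scale):
    """Gaussian hypermutation applied to every weight."""
    return {k: v + rng.normal(0.0, scale, np.shape(v)) for k, v in net.items()}

def clonal_selection(X, y, pop_size=20, n_gen=50, n_clones=5):
    """Evolve a network population; clones of fitter antibodies mutate less."""
    pop = [init_net(X.shape[1], 5) for _ in range(pop_size)]
    for _ in range(n_gen):
        order = np.argsort([mse(net, X, y) for net in pop])  # best first
        new_pop = []
        for rank, idx in enumerate(order):
            parent = pop[idx]
            scale = 0.05 * (rank + 1) / pop_size  # affinity-proportional mutation
            clones = [mutate(parent, scale) for _ in range(n_clones)]
            new_pop.append(min([parent] + clones, key=lambda n: mse(n, X, y)))
        pop = new_pop
    return pop

def ensemble_predict(members, X):
    """Simple ensemble: average the members' outputs."""
    return np.mean([predict(net, X) for net in members], axis=0)

# Toy regression problem (a stand-in for the benchmark problems in the paper).
X = rng.uniform(-1, 1, (200, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.05, 200)

pop = clonal_selection(X, y)
members = sorted(pop, key=lambda n: mse(n, X, y))[:5]  # subset of the population
print("ensemble MSE:", float(np.mean((ensemble_predict(members, X) - y) ** 2)))
```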

Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Barbosa, B.H.G., Bui, L.T., Abbass, H.A., Aguirre, L.A., Braga, A.P. (2008). Evolving an Ensemble of Neural Networks Using Artificial Immune Systems. In: Li, X., et al. Simulated Evolution and Learning. SEAL 2008. Lecture Notes in Computer Science, vol 5361. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-89694-4_13

  • DOI: https://doi.org/10.1007/978-3-540-89694-4_13

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-89693-7

  • Online ISBN: 978-3-540-89694-4

  • eBook Packages: Computer Science, Computer Science (R0)
