Natural Computing, Volume 9, Issue 3, pp 625–653

Neural network ensembles: immune-inspired approaches to the diversity of components

  • Rodrigo Pasti
  • Leandro Nunes de Castro
  • Guilherme Palermo Coelho
  • Fernando José Von Zuben


This work applies two immune-inspired algorithms, namely opt-aiNet and omni-aiNet, to train multi-layer perceptrons (MLPs) to be used in the construction of ensembles of classifiers. The main goal is to investigate the influence of the diversity of the set of solutions generated by each of these algorithms, and whether these solutions lead to improvements in performance when combined in ensembles. omni-aiNet is a multi-objective optimization algorithm and thus explicitly maximizes the components’ diversity while minimizing their output errors. The opt-aiNet algorithm, by contrast, was originally designed to solve single-objective optimization problems, focusing on the minimization of the output error of the classifiers. However, its implicit diversity maintenance mechanism stimulates the generation of MLPs with different weights, which may result in diverse classifiers. The performances of opt-aiNet and omni-aiNet are compared with each other and with that of a second-order gradient-based algorithm, named MSCG. The results show how the different diversity maintenance mechanisms of the algorithms influence the gain in performance obtained with the use of ensembles.
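The intuition behind combining diverse components can be illustrated with a minimal sketch (not the paper's implementation, and independent of how the components are trained): three hypothetical classifiers that each err on a different subset of the samples are combined by plurality voting, and because their errors are uncorrelated, the vote corrects all of them.

```python
def majority_vote(predictions):
    """Combine component outputs sample by sample via plurality voting."""
    combined = []
    for outputs in zip(*predictions):
        combined.append(max(set(outputs), key=outputs.count))
    return combined

def error_rate(predicted, true):
    """Fraction of samples on which the prediction disagrees with the label."""
    return sum(p != t for p, t in zip(predicted, true)) / len(true)

# Toy labels and three components whose errors fall on disjoint samples,
# mimicking the kind of diversity the immune-inspired search aims to produce.
true_labels = [0, 1, 1, 0, 1, 0, 0, 1, 1]
c1 = [1, 0, 0, 0, 1, 0, 0, 1, 1]   # errs on samples 0-2 (error 1/3)
c2 = [0, 1, 1, 1, 0, 1, 0, 1, 1]   # errs on samples 3-5 (error 1/3)
c3 = [0, 1, 1, 0, 1, 0, 1, 0, 0]   # errs on samples 6-8 (error 1/3)

ensemble = majority_vote([c1, c2, c3])
print(error_rate(c1, true_labels))        # each component: ~0.33
print(error_rate(ensemble, true_labels))  # ensemble: 0.0
```

If the components instead made identical mistakes, the vote would reproduce those mistakes, which is why diversity of components, not only low individual error, matters for the ensemble.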


Keywords: Ensembles of classifiers · Diversity of components · Artificial immune systems · Multi-layer perceptrons · Multi-objective optimization



Acknowledgements

The authors thank CAPES, Fapesp and CNPq for the financial support.


References

  1. Abbass HA (2003a) Speeding up backpropagation using multiobjective evolutionary algorithms. Neural Comput 15(11):2705–2726
  2. Abbass HA (2003b) Pareto neuro-evolution: constructing ensemble of neural networks using multi-objective optimization. In: Proceedings of the IEEE conference on evolutionary computation, Los Alamitos, 2003, pp 2074–2080
  3. Abbass HA (2003c) Pareto neuro-ensemble. In: Proceedings of the 16th Australian joint conference on artificial intelligence, Perth, 2003, pp 554–566
  4. Abbass HA, Sarker R, Newton C (2001) PDE: a Pareto-frontier differential evolution approach for multi-objective optimization problems. In: Proceedings of the IEEE congress on evolutionary computation, Seoul, 2001, pp 971–978
  5. Breiman L (1996) Bagging predictors. Mach Learn 24(2):123–140
  6. Brown G, Wyatt J, Harris R, Yao X (2005) Diversity creation methods: a survey and categorisation. J Inf Fusion 6(1):5–20
  7. Burnet FM (1978) Clonal selection and after. In: Bell GI, Perelson AS, Pimbley GH Jr (eds) Theoretical immunology. Marcel Dekker, New York, pp 63–85
  8. Chandra A, Yao X (2006) Ensemble learning using multi-objective evolutionary algorithms. J Math Model Algorithms 5(4):417–445
  9. Coelho GP, Von Zuben FJ (2006a) omni-aiNet: an immune-inspired approach for omni optimization. In: Proceedings of the international conference on artificial immune systems, Banff, 2006
  10. Coelho GP, Von Zuben FJ (2006b) The influence of the pool of candidates on the performance of selection and combination techniques in ensembles. In: Proceedings of the IEEE international joint conference on neural networks, Vancouver, 2006, pp 10588–10595
  11. de Castro LN, Timmis J (2002a) An introduction to artificial immune systems: a new computational intelligence paradigm. Springer-Verlag, London
  12. de Castro LN, Timmis J (2002b) An artificial immune network for multimodal function optimization. In: Proceedings of the IEEE congress on evolutionary computation, Honolulu, 2002, vol 1, pp 699–704
  13. de França FO, Von Zuben FJ, de Castro LN (2005) An artificial immune network for multimodal function optimization on dynamic environments. In: Proceedings of the genetic and evolutionary computation conference (GECCO), Washington DC, 2005, pp 289–296
  14. Deb K (2001) Multi-objective optimization using evolutionary algorithms. Wiley, Sussex
  15. Deb K, Tiwari S (2005) Omni-optimizer: a procedure for single and multi-objective optimization. In: Proceedings of the 3rd international conference on evolutionary multi-criterion optimization (EMO), Guanajuato, 2005
  16. Deb K, Pratap A, Agarwal S, Meyarivan T (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 6(2):182–197
  17. Eiben AE, Smith JE (2003) Introduction to evolutionary computing. Springer, Berlin
  18. Freund Y, Schapire RE (1996) Experiments with a new boosting algorithm. In: Proceedings of the 13th international conference on machine learning, Bari, 1996, pp 149–156
  19. Hansen L, Salamon P (1990) Neural network ensembles. IEEE Trans Pattern Anal Mach Intell 12:993–1005
  20. Hashem S (1997) Optimal linear combinations of neural networks. Neural Netw 10(4):599–614
  21. Hashem S, Schmeiser B (1995) Improving model accuracy using optimal linear combinations of trained neural networks. IEEE Trans Neural Netw 6(3):792–794
  22. Hashem S, Schmeiser B, Yih Y (1994) Optimal linear combinations of neural networks: an overview. In: Proceedings of the IEEE international conference on neural networks, Orlando, 1994
  23. Haykin S (1999) Neural networks: a comprehensive foundation, 2nd edn. Prentice Hall, New Jersey
  24. Holland PW, Garcia-Fernandez J, Williams NA, Sidow A (1994) Gene duplications and the origins of vertebrate development. Dev Suppl 125–133
  25. Jerne NK (1974) Towards a network theory of the immune system. Ann Immunol (Paris) 125C:373–389
  26. Jin Y, Sendhoff B, Korner E (2004) Neural network regularization and ensembling using multi-objective evolutionary algorithms. In: Proceedings of the IEEE congress on evolutionary computation, Portland, 2004, pp 1–8
  27. Liu Y (1998) Negative correlation learning and evolutionary neural network ensembles. PhD thesis, University College, The University of New South Wales, Australian Defence Force Academy
  28. Møller MF (1993) A scaled conjugate gradient algorithm for fast supervised learning. Neural Netw 6:525–533
  29. Newman DJ, Hettich S, Blake CL, Merz CJ (1998) UCI repository of machine learning databases. University of California, Department of Information and Computer Science, Irvine, CA
  30. Ohno S (1970) Evolution by gene duplication. Allen and Unwin, London
  31. Pasti R, de Castro LN (2006) An immune and a gradient-based method to train multi-layer perceptron neural networks. In: Proceedings of the international joint conference on neural networks (World Congress on Computational Intelligence), pp 4077–4084
  32. Pasti R, de Castro LN (2007) The influence of diversity in an immune-based algorithm to train MLP networks. In: Proceedings of the international conference on artificial immune systems, Santos, 2007
  33. Rudolph G, Agapie A (2000) Convergence properties of some multi-objective evolutionary algorithms. In: Proceedings of the IEEE conference on evolutionary computation, Piscataway, 2000, pp 1010–1016
  34. Skalak D (1996) The sources of increased accuracy for two proposed boosting algorithms. In: Proceedings of the AAAI-96 integrating multiple learned models workshop, Portland, 1996
  35. Srinivas N, Deb K (1994) Multi-objective function optimization using non-dominated sorting genetic algorithms. Evol Comput 2(3):221–248
  36. Tumer K, Ghosh J (1996) Error correlation and error reduction in ensemble classifiers. Connect Sci 8(3–4):385–404
  37. Witten IH, Frank E (2005) Data mining: practical machine learning tools and techniques, 2nd edn. Morgan Kaufmann, San Francisco
  38. Zhou Z, Wu J, Tang W (2002) Ensembling neural networks: many could be better than all. Artif Intell 137(1–2):239–263
  39. Zitzler E, Thiele L (1999) Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach. IEEE Trans Evol Comput 3(4):257–271

Copyright information

© Springer Science+Business Media B.V. 2009

Authors and Affiliations

  • Rodrigo Pasti (1)
  • Leandro Nunes de Castro (2)
  • Guilherme Palermo Coelho (1)
  • Fernando José Von Zuben (1)

  1. Laboratory of Bioinformatics and Bio-Inspired Computing (LBiC), Department of Computer Engineering and Industrial Automation (DCA), School of Electrical and Computer Engineering (FEEC), University of Campinas (Unicamp), Campinas, Brazil
  2. Mackenzie University, São Paulo, Brazil
