Towards designing neural network ensembles by evolution

  • Yong Liu
  • Xin Yao
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1498)


This paper proposes a co-evolutionary learning system (CELS) to design neural network (NN) ensembles. CELS addresses two issues: automatically determining the number of individual NNs in an ensemble, and exploiting the interaction between individual NN design and combination. The idea of CELS is to encourage different individual NNs in the ensemble to learn different parts or aspects of the training data so that the ensemble can learn the whole training data better. Cooperation and specialisation among different individual NNs are considered during individual NN design, giving different NNs the opportunity to interact with each other and to specialise. Experiments on two real-world problems demonstrate that CELS can produce NN ensembles with good generalisation ability.
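The specialisation idea described above builds on the authors' earlier work on negatively correlated ensembles (reference 2 below): each network is trained on its own error plus a penalty that rewards disagreeing with the ensemble mean, so members cover different parts of the data. The following is a minimal illustrative sketch of that penalty gradient only, not of the full CELS system (which also evolves the ensemble size); the toy data, random-feature "networks", and hyperparameters (`lam`, `lr`, ensemble size `M`) are assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression problem (illustrative, not from the paper).
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=200)

M = 4      # ensemble size (fixed here; CELS determines this automatically)
lam = 0.5  # strength of the negative-correlation penalty
lr = 0.05

# Each "network" is a tiny fixed random-feature model: f_i(x) = tanh(x W_i + b_i) @ w_i,
# with only the readout weights w_i trained.
W_in = [2.0 * rng.normal(size=(1, 16)) for _ in range(M)]
b_in = [rng.normal(size=16) for _ in range(M)]
w_out = [np.zeros(16) for _ in range(M)]

def features(i, X):
    return np.tanh(X @ W_in[i] + b_in[i])

for step in range(500):
    F = np.stack([features(i, X) @ w_out[i] for i in range(M)])  # (M, N) member outputs
    f_bar = F.mean(axis=0)                                       # ensemble output
    for i in range(M):
        # Error of member i: 1/2 (f_i - y)^2 + lam * p_i, where the penalty
        # p_i = -(f_i - f_bar)^2 pushes members away from the ensemble mean.
        # Its simplified gradient w.r.t. f_i is (f_i - y) - lam * (f_i - f_bar).
        g = (F[i] - y) - lam * (F[i] - f_bar)
        w_out[i] -= lr * features(i, X).T @ g / len(y)

mse = np.mean((f_bar - y) ** 2)
```

With the penalty active, individual members fit the data worse on their own but make complementary errors, so their average generalises better; setting `lam = 0` recovers independent training of each member.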




References

  1. X. Yao and Y. Liu, “Making use of population information in evolutionary artificial neural networks,” IEEE Transactions on Systems, Man and Cybernetics, 28B(3), pp. 417–425, June 1998.
  2. Y. Liu and X. Yao, “Negatively correlated neural networks can produce best ensembles,” Australian Journal of Intelligent Information Processing Systems, 4(3/4), pp. 176–185, 1997.
  3. Y. Liu and X. Yao, “A cooperative ensemble learning system,” Proc. of the 1998 IEEE International Joint Conference on Neural Networks (IJCNN'98), Anchorage, USA, 4–9 May 1998, pp. 2202–2207.
  4. X. Yao, Y. Liu and P. Darwen, “How to make best use of evolutionary learning,” Complex Systems — From Local Interactions to Global Phenomena, IOS Press, Amsterdam, pp. 229–242, 1996.
  5. X. Yao and Y. Liu, “A new evolutionary system for evolving artificial neural networks,” IEEE Transactions on Neural Networks, 8(3), pp. 694–713, May 1997.
  6. D. B. Fogel, Evolutionary Computation: Towards a New Philosophy of Machine Intelligence, IEEE Press, New York, 1995.
  7. R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton, “Adaptive mixtures of local experts,” Neural Computation, 3, pp. 79–87, 1991.
  8. S. Haykin, Neural Networks: A Comprehensive Foundation, Macmillan College Publishing Company, Inc., pp. 151–152, 1994.
  9. P. Darwen and X. Yao, “Every niching method has its niche: fitness sharing and implicit sharing compared,” Parallel Problem Solving from Nature (PPSN) IV (H.-M. Voigt, W. Ebeling, I. Rechenberg, and H.-P. Schwefel, eds.), Vol. 1141 of Lecture Notes in Computer Science, Berlin, pp. 398–407, Springer-Verlag, 1996.
  10. J. MacQueen, “Some methods for classification and analysis of multivariate observations,” Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, Berkeley: University of California Press, Vol. 1, pp. 281–297, 1967.
  11. D. Michie, D. J. Spiegelhalter and C. C. Taylor, Machine Learning, Neural and Statistical Classification, Ellis Horwood Limited, London, 1994.

Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • Yong Liu (1)
  • Xin Yao (1)

  1. Computational Intelligence Group, School of Computer Science, University College, The University of New South Wales, Australian Defence Force Academy, Canberra, Australia