How to Stop the Evolutionary Process in Evolving Neural Network Ensembles

  • Yong Liu
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4221)


In practice, two criteria have often been used to stop the evolutionary process in evolving neural network (NN) ensembles. One criterion is to stop the evolution when the maximal generation is reached. The other criterion is to stop the evolution when the evolved NN ensemble, i.e., the whole population, is satisfactory according to a certain evaluation. This paper points out that NN ensembles evolved under these two criteria might not be robust, in the sense that ensembles evolved in different runs can show rather different performance. In order to make the evolved NN ensemble more stable, an alternative solution is to combine a number of evolved NN ensembles. Experimental analyses based on n-fold cross-validation are given to explain why the evolved NN ensembles can be very different and how such differences can be reduced or made to disappear in the combination.


Keywords: Mutual Information, Correct Rate, Constructive Method, Individual Network, Pruning Method
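
The two stopping criteria and the combination of several evolved ensembles described in the abstract can be sketched in a few lines of Python. The following is a minimal illustrative sketch only, not the paper's implementation: individuals are toy linear networks, evolution is a simple mutate-and-select loop, and all names and parameters (evolve_ensemble, pop_size, target, the synthetic data) are assumptions introduced for this example.

    # Illustrative sketch (assumed toy setup, not the paper's method):
    # individuals are linear "networks"; the whole population is the ensemble.
    import numpy as np

    rng = np.random.default_rng(0)

    def predict(weights, x):
        # One-layer toy network: thresholded linear response in {0, 1}.
        return (x @ weights > 0).astype(int)

    def ensemble_predict(population, x):
        # The whole population acts as the ensemble: simple majority vote.
        votes = np.mean([predict(w, x) for w in population], axis=0)
        return (votes >= 0.5).astype(int)

    def accuracy(y_pred, y):
        return float(np.mean(y_pred == y))

    def evolve_ensemble(x, y, pop_size=20, max_generations=200, target=0.95):
        population = [rng.normal(size=x.shape[1]) for _ in range(pop_size)]
        for _ in range(max_generations):          # criterion 1: maximal generation reached
            # Mutate-and-select step: add Gaussian noise, keep the fitter half.
            offspring = [w + rng.normal(scale=0.1, size=w.shape) for w in population]
            pool = population + offspring
            pool.sort(key=lambda w: -accuracy(predict(w, x), y))
            population = pool[:pop_size]
            if accuracy(ensemble_predict(population, x), y) >= target:
                break                              # criterion 2: ensemble satisfactory
        return population

    # Toy linearly separable data (purely for illustration).
    X = rng.normal(size=(200, 3))
    Y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(int)

    # Evolve several ensembles from different random starts; individually they
    # may differ, but combining their votes makes the final decision more stable.
    ensembles = [evolve_ensemble(X, Y) for _ in range(5)]
    combined_votes = np.mean([ensemble_predict(p, X) for p in ensembles], axis=0)
    combined = (combined_votes >= 0.5).astype(int)
    print("combined accuracy:", accuracy(combined, Y))

Because each run stops at a possibly different point under either criterion, the individual ensembles can disagree; averaging their votes, as in the last lines above, is one simple way to realize the combination the abstract proposes.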





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Yong Liu
    1. School of Computer Science, China University of Geosciences, Wuhan, P.R. China
