Evolutionary Bi-objective Learning with Lowest Complexity in Neural Networks: Empirical Comparisons

  • Yamina Mohamed Ben Ali
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4431)

Abstract

This paper introduces a new evolutionary computation study for learning the optimal configuration of a multilayer neural network. Inspired by thermodynamic principles, the proposed evolutionary framework treats the configuration task as a bi-objective optimization problem. The first objective learns the optimal layer topology, selecting the optimal number of nodes and the optimal connections per node; the second learns the optimal weight settings. The evaluation function shared by both concurrent objectives is founded on an entropy function that drives the overall system toward an optimal generalization point. The evolutionary framework thus yields salient improvements in both modeling and results. The performance of the proposed algorithms was compared against estimation of distribution algorithms as well as the Backpropagation training algorithm.
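As a rough illustration of the scheme the abstract outlines, the sketch below evolves both a hidden-layer connection mask (topology) and real-valued weights for a small network, scoring each candidate by prediction error plus an entropy-based complexity term. Everything specific here is an assumption made for illustration, not taken from the paper: the single-hidden-layer shape, the Shannon-entropy penalty on the connection ratio, the weighted scalarization of the two objectives, and the (mu+lambda)-style mutation-only loop.

# Minimal sketch of the bi-objective idea: jointly evolve topology (a binary
# connection mask) and weights, penalizing complexity with an entropy term.
# All operator choices and constants below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def forward(x, w1, w2, mask):
    """One hidden layer; `mask` switches individual input->hidden links off."""
    h = np.tanh(x @ (w1 * mask))
    return np.tanh(h @ w2)

def entropy_penalty(mask):
    """Shannon entropy of the on/off connection ratio (illustrative choice)."""
    p = mask.mean()
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def fitness(ind, x, y):
    # Weighted scalarization of the two objectives: accuracy and complexity.
    err = np.mean((forward(x, ind["w1"], ind["w2"], ind["mask"]) - y) ** 2)
    return err + 0.05 * entropy_penalty(ind["mask"])

def mutate(ind, sigma=0.1, flip=0.02):
    """Gaussian perturbation of weights; rare bit-flips on the topology mask."""
    return {
        "w1": ind["w1"] + sigma * rng.standard_normal(ind["w1"].shape),
        "w2": ind["w2"] + sigma * rng.standard_normal(ind["w2"].shape),
        "mask": np.where(rng.random(ind["mask"].shape) < flip,
                         1 - ind["mask"], ind["mask"]),
    }

# Toy data: learn y = sin(x) on [-pi, pi].
x = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
y = np.sin(x)

n_in, n_hidden = 1, 8
pop = [{"w1": rng.standard_normal((n_in, n_hidden)),
        "w2": rng.standard_normal((n_hidden, 1)),
        "mask": rng.integers(0, 2, (n_in, n_hidden)).astype(float)}
       for _ in range(30)]

for gen in range(200):  # truncation selection plus mutated offspring
    pop.sort(key=lambda ind: fitness(ind, x, y))
    pop = pop[:15] + [mutate(ind) for ind in pop[:15]]

best = min(pop, key=lambda ind: fitness(ind, x, y))
print("best fitness:", fitness(best, x, y),
      "active connections:", int(best["mask"].sum()))

The entropy penalty is minimized at fully dense or fully sparse masks, so in this toy form it pushes candidates away from undecided, half-connected topologies; the paper's actual entropy function and its role in reaching the "optimal generalization point" may differ.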

Copyright information

© Springer Berlin Heidelberg 2007

Authors and Affiliations

  • Yamina Mohamed Ben Ali
  1. Research Group on Artificial Intelligence, Computer Science Department, Badji Mokhtar University, BP 12, Annaba, Algeria
