Evolutionary Design and Training of Artificial Neural Networks

  • Lumír Kojecký
  • Ivan Zelinka
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10841)


The dynamics of neural networks and evolutionary algorithms share common attributes, and many research papers suggest that, from a dynamical point of view, the two systems are indistinguishable. To compare them from this perspective, artificial neural networks that resemble natural ones as closely as possible are needed. This paper describes the part of our research focused on the synthesis of artificial neural networks. Since most current ANN structures are not found in nature, we introduce a method of complex network synthesis using a network growth model, with the resulting network interpreted as a neural network. The synaptic weights of the synthesized ANN are then trained by an evolutionary algorithm to respond successfully to an input training set.
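The pipeline outlined above (grow a complex network, interpret it as an ANN, then evolve its synaptic weights against a training set) can be sketched in a few lines. This is only an illustrative toy, not the authors' method: it assumes preferential-attachment growth as a stand-in for the paper's growth model, and a simple mutation-based hill-climbing loop as a stand-in for the evolutionary algorithm actually used. All function names are hypothetical.

```python
import math
import random

def grow_network(n_nodes, seed=0):
    """Grow a directed network by preferential attachment (illustrative
    growth model; the paper's actual model may differ). Edges always
    point from a lower-indexed to a higher-indexed node."""
    rng = random.Random(seed)
    edges = {(0, 1), (0, 2), (1, 2)}          # seed triangle
    degree = {0: 2, 1: 2, 2: 2}
    for new in range(3, n_nodes):
        # attach the new node to existing nodes with probability ~ degree
        targets = rng.choices(list(degree), weights=degree.values(), k=2)
        for t in set(targets):
            edges.add((t, new))
            degree[t] += 1
            degree[new] = degree.get(new, 0) + 1
    return sorted(edges)

def forward(edges, weights, x):
    """Treat the grown network as an ANN: nodes 0 and 1 are inputs,
    the highest-indexed node is the output, tanh activations."""
    act = {0: x[0], 1: x[1]}
    n = max(max(e) for e in edges) + 1
    for node in range(2, n):                   # index order is topological here
        s = sum(weights[i] * act.get(a, 0.0)
                for i, (a, b) in enumerate(edges) if b == node)
        act[node] = math.tanh(s)
    return act[n - 1]

def evolve(edges, data, gens=100, pop=20, seed=1):
    """Evolve one synaptic weight per edge by Gaussian mutation,
    keeping the best candidate (a crude evolutionary-training sketch)."""
    rng = random.Random(seed)
    best = [rng.uniform(-1.0, 1.0) for _ in edges]
    err = lambda w: sum((forward(edges, w, x) - y) ** 2 for x, y in data)
    best_e = err(best)
    for _ in range(gens):
        for _ in range(pop):
            cand = [w + rng.gauss(0.0, 0.3) for w in best]
            e = err(cand)
            if e < best_e:
                best, best_e = cand, e
    return best, best_e
```

Because each edge points from a lower to a higher node index, a single pass in index order suffices for the forward propagation; a growth model producing cycles would instead require recurrent evaluation.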


Keywords: Neural network synthesis · Network growth model · Complex network · Evolutionary algorithms



The following grants are acknowledged for the financial support provided for this research: SGS Grant No. 2018/177 of the VSB - Technical University of Ostrava, and the European Union's Horizon 2020 research and innovation programme under grant agreement No. 710577.



Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Computer Science, FEECS, VSB - Technical University of Ostrava, Ostrava-Poruba, Czech Republic
