Evolving Node Transfer Functions in Artificial Neural Networks for Handwritten Digits Recognition

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9972)


Feed-forward Artificial Neural Networks are popular choices among scientists and engineers for modeling complex real-world problems. One of the most active recent research areas in this field is the evolution of Artificial Neural Networks: NeuroEvolution. In this paper we investigate the effect of evolving each node's transfer function and its parameters, together with the connection weights, in Evolutionary Artificial Neural Networks for handwritten digit recognition. The results are promising when compared with the traditional approach of a homogeneous Artificial Neural Network with a predefined transfer function.
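The abstract does not spell out the representation, so the following is only a minimal illustrative sketch of the idea: a genome that carries connection weights plus a per-node (transfer function, shape parameter) pair, mutated under a simple (1+4) evolution strategy. The function pool (sigmoid/tanh/Gaussian), the strategy, and the XOR stand-in task are all assumptions for illustration; the paper itself targets handwritten digits.

```python
import copy
import math
import random

def _squash(z):
    """Numerically safe logistic function."""
    z = max(-60.0, min(60.0, z))
    return 1.0 / (1.0 + math.exp(-z))

# Pool of candidate node transfer functions, each with one evolvable
# shape parameter `a`.  This trio is an illustrative assumption; the
# abstract does not list the pool used in the paper.
TRANSFER = {
    "sigmoid": lambda x, a: _squash(a * x),
    "tanh":    lambda x, a: math.tanh(a * x),
    "gauss":   lambda x, a: math.exp(-min(60.0, (a * x) ** 2)),
}

def make_genome(n_in, n_hidden, rng):
    """Genome = connection weights plus a (function, parameter) pair per hidden node."""
    return {
        "w_hid": [[rng.uniform(-1, 1) for _ in range(n_in + 1)]  # +1 for bias
                  for _ in range(n_hidden)],
        "w_out": [rng.uniform(-1, 1) for _ in range(n_hidden + 1)],
        "funcs": [rng.choice(sorted(TRANSFER)) for _ in range(n_hidden)],
        "params": [rng.uniform(0.5, 2.0) for _ in range(n_hidden)],
    }

def forward(g, x):
    """One hidden layer; each hidden node applies its own evolved function."""
    hidden = [TRANSFER[f](w[-1] + sum(wi * xi for wi, xi in zip(w, x)), a)
              for w, f, a in zip(g["w_hid"], g["funcs"], g["params"])]
    return _squash(g["w_out"][-1] +
                   sum(wi * hi for wi, hi in zip(g["w_out"], hidden)))

def fitness(g, data):
    """Negative mean squared error: higher is better, 0 is perfect."""
    return -sum((forward(g, x) - t) ** 2 for x, t in data) / len(data)

def mutate(g, rng, sigma=0.3, p_func=0.1):
    """Perturb weights and parameters; occasionally swap a node's function."""
    c = copy.deepcopy(g)
    for row in c["w_hid"]:
        for i in range(len(row)):
            row[i] += rng.gauss(0, sigma)
    for i in range(len(c["w_out"])):
        c["w_out"][i] += rng.gauss(0, sigma)
    for i in range(len(c["funcs"])):
        if rng.random() < p_func:
            c["funcs"][i] = rng.choice(sorted(TRANSFER))
        c["params"][i] = max(0.1, c["params"][i] + rng.gauss(0, 0.1))
    return c

def evolve(data, n_in=2, n_hidden=4, gens=150, seed=1):
    """(1+4) evolution strategy with elitism (assumed strategy, not the paper's)."""
    rng = random.Random(seed)
    best = make_genome(n_in, n_hidden, rng)
    best_fit = fitness(best, data)
    history = [best_fit]
    for _ in range(gens):
        for _ in range(4):
            cand = mutate(best, rng)
            f = fitness(cand, data)
            if f >= best_fit:
                best, best_fit = cand, f
        history.append(best_fit)
    return best, history

# Toy stand-in task (the paper uses handwritten digits): XOR.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
best, history = evolve(XOR)
print("fitness: %.4f -> %.4f" % (history[0], history[-1]))
print("evolved node functions:", best["funcs"])
```

Because the parent always survives, fitness is non-decreasing over generations; after evolution the hidden layer is typically heterogeneous, i.e. different nodes end up with different transfer functions, which is the effect the paper studies.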



Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  1. Institute of Computer Science, Warsaw University of Technology, Warsaw, Poland
