
Optimal Neuron Selection and Generalization: NK Ensemble Neural Networks

  • Darrell Whitley
  • Renato Tinós
  • Francisco Chicano
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11102)

Abstract

This paper explores how learning can be achieved by turning neurons on and off in a special hidden layer of a neural network. By posing neuron selection as a pseudo-Boolean optimization problem with bounded tree width, an exact global optimum can be obtained in O(N) time. To illustrate the effectiveness of neuron selection, the method is applied to optimizing a modified Echo State Network for two learning problems: (1) Mackey-Glass time series prediction and (2) a reinforcement learning problem using a recurrent neural network. Empirical tests indicate that neuron selection results in rapid learning and, more importantly, improved generalization.
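
To make the abstract's claim concrete, the sketch below (not the paper's implementation) shows how a k-bounded pseudo-Boolean neuron-selection objective with bounded tree width can be minimized exactly in time linear in N. It assumes the training error of the masked hidden layer decomposes into subfunctions that each depend on at most k+1 adjacent mask bits (an adjacent NK-style structure); the random subfunction tables, and the values of N and k, are illustrative stand-ins for per-neuron error terms.

```python
# Minimal sketch: exact minimisation of an adjacent, k-bounded
# pseudo-Boolean function (NK-style) by dynamic programming.
# The mask bit x_i decides whether hidden neuron i is on or off;
# tables[i] is a stand-in for the error contribution that depends
# on the k+1 adjacent bits x_i .. x_{i+k}.
import itertools
import numpy as np

rng = np.random.default_rng(0)
N, k = 12, 2                             # N mask bits; each subfunction sees k+1 bits
M = N - k                                # number of adjacent subfunctions
tables = rng.random((M, 2 ** (k + 1)))   # tables[i][pattern] = cost contribution

def cost(x):
    """Total cost of a full mask x (tuple of 0/1 bits of length N)."""
    total = 0.0
    for i in range(M):
        pattern = int("".join(map(str, x[i:i + k + 1])), 2)
        total += tables[i][pattern]
    return total

def solve_dp():
    """Exact minimum via DP over the k-bit window shared by consecutive
    subfunctions: O(M * 2^(k+1)) work, i.e. linear in N for fixed k."""
    # best[s] = minimal cost so far, given that the last k assigned bits equal s
    best = {s: 0.0 for s in itertools.product((0, 1), repeat=k)}
    for i in range(M):
        new_best = {}
        for s, c in best.items():
            for b in (0, 1):                       # choose the next bit x_{i+k}
                pattern = int("".join(map(str, s + (b,))), 2)
                t = s[1:] + (b,)                   # new trailing k-bit window
                cand = c + tables[i][pattern]
                if t not in new_best or cand < new_best[t]:
                    new_best[t] = cand
        best = new_best
    return min(best.values())

# Check against brute force on this small instance.
brute = min(cost(x) for x in itertools.product((0, 1), repeat=N))
assert abs(solve_dp() - brute) < 1e-9
print("exact optimum:", brute)
```

For fixed k the dynamic program visits O(N * 2^k) states, which is the source of the O(N) scaling; more general interaction graphs admit the same guarantee only when their tree width stays bounded.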


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Darrell Whitley (1)
  • Renato Tinós (2)
  • Francisco Chicano (3)
  1. Colorado State University, Fort Collins, USA
  2. University of São Paulo, Ribeirão Preto, Brazil
  3. University of Málaga, Málaga, Spain