Q-Learning Algorithm Module in Hybrid Artificial Neural Network Systems

Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 285)

Abstract

The presented topic comes from the research field of Artificial Life, but it also contributes to Artificial Intelligence (AI), Robotics, and potentially many other areas of research. This paper reviews and tests a new approach to the autonomous design of agent architectures, inspired by the inherited modularity of biological brains. When designing new brains, evolution does not wire up individual neurons directly; instead, it composes new brains from larger, widely reused areas (modules). In this approach, agent architectures are represented as hybrid artificial neural networks composed of heterogeneous modules, where each module can implement a different selected algorithm. Rather than describing the framework as a whole, this paper focuses on the design of a single module. Such a module represents one component of the hybrid neural network and seamlessly integrates a selected algorithm into a network node. The design process is illustrated on the example of a discrete reinforcement learning algorithm. The requirements posed by the framework are presented, the modifications to the classical version of the algorithm are described, and the resulting performance of the module is evaluated against expectations. Finally, future use cases of the module are outlined.
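For context, the classical discrete reinforcement learning algorithm the module wraps is tabular Q-learning. The following is a minimal, self-contained sketch of that baseline on a toy corridor environment; the environment, function name, and parameter values are illustrative assumptions and do not reproduce the paper's module or its framework-specific modifications.

```python
import random

def q_learning(n_states=4, n_actions=2, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Classical tabular Q-learning on a toy corridor (illustrative
    environment, not taken from the paper): states 0..n_states-1,
    action 0 moves left, action 1 moves right; reaching the rightmost
    state yields reward 1 and ends the episode."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    goal = n_states - 1
    for _ in range(episodes):
        s = 0
        while s != goal:
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda a_: Q[s][a_])
            s_next = max(0, s - 1) if a == 0 else min(goal, s + 1)
            r = 1.0 if s_next == goal else 0.0
            # classical Q-learning update:
            # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
    return Q

if __name__ == "__main__":
    Q = q_learning()
    # greedy policy in the non-terminal states (1 = move right)
    print([max(range(2), key=lambda a: Q[s][a]) for s in range(3)])
```

After a few hundred episodes the learned greedy policy moves right in every non-terminal state, with Q-values approaching the discounted optimum (gamma to the power of the remaining distance to the goal).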

Keywords

Agent architecture · Artificial life · Creature behaviour · Hybrid neural networks · Evolution


Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  1. Faculty of Electrical Engineering, Department of Cybernetics, Czech Technical University in Prague, Prague 6, Czech Republic