
GAMER: A Genetic Algorithm with Motion Encoding Reuse for Action-Adventure Video Games

  • Tasos Papagiannis
  • Georgios Alexandridis
  • Andreas Stafylopatis
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11454)

Abstract

Genetic Algorithms (GAs) have predominantly been used in video games to find the best possible sequence of actions leading to a win condition. This work investigates an alternative application of GAs to action-adventure video games. The main intuition is to encode actions as a function of the state of the game world, rather than as a sequence of actions, as most other GA approaches do. Additionally, a methodology is introduced that modifies part of the agent's logic and reuses it in another game. The proposed algorithm has been implemented in the GVG-AI competition framework, specifically for the Zelda and Portals games. The obtained results, in terms of average score and win percentage, are quite satisfactory and highlight two advantages of the suggested technique over the rolling horizon GA implementation of the same framework: firstly, the agent performs well across levels with different world topologies after being trained on only one of them and, secondly, the agent can be generalized to play other games of the same category.
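The abstract does not specify the encoding in detail, so the Java sketch below is only a hypothetical illustration of the contrast it draws: a rolling horizon GA evolves a fixed-length action sequence that must be replanned every step, whereas a state-conditioned encoding maps a coarse descriptor of the game world to an action and can therefore be reused in any level where that descriptor applies. Java is used because the GVG-AI framework is Java-based; the toy state descriptor, class names and genetic operators are assumptions, not the authors' implementation.

```java
import java.util.Random;

// Illustrative sketch only: the genome is a lookup table from a coarse,
// hand-crafted world-state descriptor to an action, instead of a timed
// sequence of actions. All names and sizes here are assumptions.
public class StateEncodedChromosome {
    // Hypothetical descriptor: relative direction of the nearest enemy
    // (4 values) x whether the key has been collected (2 values) = 8 states.
    static final int NUM_STATES = 8;
    static final int NUM_ACTIONS = 5; // e.g. UP, DOWN, LEFT, RIGHT, USE

    final int[] actionForState = new int[NUM_STATES]; // genome: state -> action

    StateEncodedChromosome(Random rng) {
        for (int s = 0; s < NUM_STATES; s++) {
            actionForState[s] = rng.nextInt(NUM_ACTIONS);
        }
    }

    // At play time the same genome is queried on every frame, so it can
    // transfer to other levels (different topologies) without re-evolution,
    // unlike an action-sequence chromosome tied to one specific layout.
    int act(int observedState) {
        return actionForState[observedState];
    }

    // Standard one-point crossover over the state -> action table.
    StateEncodedChromosome crossover(StateEncodedChromosome other, Random rng) {
        StateEncodedChromosome child = new StateEncodedChromosome(rng);
        int cut = rng.nextInt(NUM_STATES);
        for (int s = 0; s < NUM_STATES; s++) {
            child.actionForState[s] = (s < cut ? this : other).actionForState[s];
        }
        return child;
    }

    // Per-gene mutation: reassign a random action with the given probability.
    void mutate(Random rng, double rate) {
        for (int s = 0; s < NUM_STATES; s++) {
            if (rng.nextDouble() < rate) {
                actionForState[s] = rng.nextInt(NUM_ACTIONS);
            }
        }
    }
}
```

Because the fitness of such a chromosome would be evaluated by playing out full episodes rather than scoring a short look-ahead, the same evolved table could, in principle, be partially reused for another game of the same category by redefining only the parts of the state descriptor that differ.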

Keywords

Intelligent agent · Video games · Genetic algorithms · Action encoding · GVG-AI


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. School of Electrical and Computer Engineering, National Technical University of Athens, Zografou, Greece
