Game Adaptation by Using Reinforcement Learning Over Meta Games

Abstract

In this work, we propose a Dynamic Difficulty Adjustment methodology to achieve automatic video game balancing. The balance task is modeled as a meta game: a game whose actions change the rules of another, base game. Using Reinforcement Learning (RL), an agent assumes the role of a game master and learns its optimal policy by playing the meta game. This methodology extends traditional RL with a meta environment whose state transitions depend on the evolution of a base environment. In addition, we propose a Multi-Agent System training model for the game master agent, in which it plays against multiple opponent agents, each with a distinct behavior and proficiency level in the base game. Our experiments are conducted on an adaptive grid-world environment in single-player and multiplayer scenarios. Our results are twofold: (i) the decision making exhibited by the game master during gameplay, which must comply with a balance objective established by the game designer; and (ii) the initial conception of a framework for automatic game balancing, where designing the balance task is reduced to defining a reward function (balance reward), an action space (balance strategies), and a balance state space.
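To make the meta-game formulation concrete, the sketch below shows a meta environment wrapping a toy base game with a Gym-style step/reset interface. It is not the authors' implementation: the grid world is reduced to a stochastic stand-in, and all names and parameters (BaseGridWorld, hazard_rate, opponent_skill, target_win_rate) are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of the meta-game idea:
# a game-master agent acts on a meta environment whose transitions
# depend on how a base environment then evolves.
import random


class BaseGridWorld:
    """Toy stand-in for the base game played by an opponent (player) agent."""

    def __init__(self, hazard_rate=0.5):
        self.hazard_rate = hazard_rate  # a rule the game master may modulate

    def play_episode(self, skill=0.5, length=20):
        """Simulate one episode and return the player's win rate."""
        wins = sum(random.random() < skill * (1.0 - self.hazard_rate)
                   for _ in range(length))
        return wins / length


class MetaEnv:
    """Meta game: each game-master action changes a rule of the base game."""

    BALANCE_ACTIONS = (-0.1, 0.0, +0.1)  # balance strategies: nudge the hazard rate

    def __init__(self, target_win_rate=0.5):
        self.base = BaseGridWorld()
        self.target = target_win_rate  # balance objective set by the game designer

    def reset(self):
        self.base.hazard_rate = 0.5
        return (self.base.hazard_rate, 0.0)  # balance state: current rule + last performance

    def step(self, gm_action, opponent_skill=0.7):
        # 1) the game master modulates a rule of the base game
        delta = self.BALANCE_ACTIONS[gm_action]
        self.base.hazard_rate = min(0.9, max(0.1, self.base.hazard_rate + delta))
        # 2) an opponent agent plays the (re-ruled) base game
        win_rate = self.base.play_episode(skill=opponent_skill)
        # 3) balance reward: higher when performance is close to the target
        reward = -abs(win_rate - self.target)
        return (self.base.hazard_rate, win_rate), reward
```

Under these assumptions, a standard RL algorithm can train the game master directly on MetaEnv as on any other environment, and varying opponent_skill across training episodes plays the role of the multi-agent opponent population described above.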

Author information

Correspondence to Simão Reis.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work is supported by the Portuguese Foundation for Science and Technology (FCT) under Grant SFRH/BD/129445/2017, by LIACC (UID/CEC/00027/2019), and by IEETA (UID/CEC/00127/2019).

About this article

Cite this article

Reis, S., Reis, L.P. & Lau, N. Game Adaptation by Using Reinforcement Learning Over Meta Games. Group Decis Negot (2020). https://doi.org/10.1007/s10726-020-09652-8

Keywords

  • Computer games
  • Dynamic difficulty adjustment
  • Reinforcement learning
  • Multi-agent systems