Combining Case-Based Reasoning and Reinforcement Learning for Tactical Unit Selection in Real-Time Strategy Game AI

  • Stefan Wender
  • Ian Watson
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9969)

Abstract

This paper presents a hierarchical approach to the decision-making problems inherent in real-time strategy (RTS) games. The overall game is decomposed into a hierarchy of sub-problems, and an architecture is created that addresses a significant number of these through interconnected machine-learning (ML) techniques. Specifically, individual modules that use a combination of case-based reasoning (CBR) and reinforcement learning (RL) are organised into three distinct yet interconnected layers of reasoning. An agent is created for the RTS game StarCraft, and individual modules are devised for the separate tasks described by the architecture. The modules are trained individually and subsequently integrated into a micromanagement agent that is evaluated in a range of test scenarios. The experimental evaluation shows that the agent is able to learn how to manage groups of units and successfully solve a number of different micromanagement scenarios.
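The module design sketched in the abstract, with CBR supplying situation retrieval and RL supplying value estimation, can be illustrated with a minimal sketch. The Python code below is not the paper's implementation: the action set, feature vector, and retrieval threshold are hypothetical, and it simply attaches a tabular Q-learning value table to each retrieved case, following the generic retrieve-reuse-revise-retain CBR cycle.

    import math
    import random

    # Illustrative hybrid CBR/RL learner: cases abstract the game state,
    # and Q-learning estimates action values per retrieved case.
    # All names and the feature layout are assumptions, not taken
    # from the paper.

    ACTIONS = ["attack", "retreat", "regroup"]  # hypothetical unit orders

    class Case:
        """A stored situation: a feature vector plus learned Q-values."""
        def __init__(self, features):
            self.features = features              # e.g. (own_hp, enemy_hp, dist)
            self.q = {a: 0.0 for a in ACTIONS}    # per-case action-value table

    def distance(f1, f2):
        """Euclidean distance between two feature vectors."""
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))

    class CBRQLearner:
        def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1, threshold=0.5):
            self.cases = []
            self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
            self.threshold = threshold            # max distance to reuse a case

        def retrieve(self, features):
            """Retrieve the nearest case; retain a new one if none is close."""
            best = min(self.cases,
                       key=lambda c: distance(c.features, features),
                       default=None)
            if best is None or distance(best.features, features) > self.threshold:
                best = Case(features)
                self.cases.append(best)           # retain step
            return best

        def act(self, case):
            """Epsilon-greedy action selection over the case's Q-values."""
            if random.random() < self.epsilon:
                return random.choice(ACTIONS)
            return max(case.q, key=case.q.get)

        def update(self, case, action, reward, next_case):
            """Standard Q-learning backup applied to the case's value table."""
            target = reward + self.gamma * max(next_case.q.values())
            case.q[action] += self.alpha * (target - case.q[action])

In a StarCraft-like control loop, one would call retrieve on each unit's observed features, act to choose an order, and update once the resulting reward is observed; the paper's agent additionally organises such modules into the three interconnected reasoning layers described above.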

Keywords

CBR · Reinforcement learning · Game AI · Layered learning

Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  1. The University of Auckland, Auckland, New Zealand