Encyclopedia of Computer Graphics and Games

Living Edition
Editors: Newton Lee

RTS AI Problems and Techniques

  • Santiago Ontañón
  • Gabriel Synnaeve
  • Alberto Uriarte
  • Florian Richoux
  • David Churchill
  • Mike Preuss
Living reference work entry
DOI: https://doi.org/10.1007/978-3-319-08234-9_17-1

Definition

Real-time strategy (RTS) is a subgenre of strategy games in which players must build an economy (gathering resources and constructing a base) and military power (training units and researching technologies) in order to defeat their opponents (destroying their army and base). Artificial intelligence problems related to RTS games concern the behavior of an artificial player. These include, among others, learning how to play, building an understanding of the game and its environment, and predicting and inferring game situations from context and sparse information.
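The economy-versus-military trade-off described above can be illustrated with a minimal sketch. All names, costs, and thresholds below are hypothetical and chosen only for illustration; a real RTS bot would replace the fixed-threshold policy with planning or learning techniques such as those surveyed in the references.

```python
from dataclasses import dataclass

@dataclass
class GameState:
    """Toy RTS game state (hypothetical model): stockpiled resources,
    economy size (workers), and military size (soldiers)."""
    minerals: int = 50
    workers: int = 4
    soldiers: int = 0

WORKER_COST = 50    # arbitrary illustrative costs
SOLDIER_COST = 100

def choose_action(state: GameState, enemy_soldiers_seen: int) -> str:
    """Naive fixed-threshold policy: match the opponent's visible army
    first, otherwise grow the economy, otherwise save resources."""
    if state.soldiers < enemy_soldiers_seen and state.minerals >= SOLDIER_COST:
        return "train_soldier"   # behind militarily -> catch up
    if state.workers < 12 and state.minerals >= WORKER_COST:
        return "train_worker"    # otherwise keep expanding the economy
    return "gather"              # save up resources

def step(state: GameState, action: str) -> GameState:
    """Apply one decision to the toy state."""
    if action == "train_worker":
        return GameState(state.minerals - WORKER_COST,
                         state.workers + 1, state.soldiers)
    if action == "train_soldier":
        return GameState(state.minerals - SOLDIER_COST,
                         state.workers, state.soldiers + 1)
    # each worker gathers 8 minerals per step (arbitrary rate)
    return GameState(state.minerals + 8 * state.workers,
                     state.workers, state.soldiers)
```

Even this toy policy exhibits the core tension of the domain: every resource spent on the army is a resource not invested in future income, and decisions must be made from partial observations (here, only the soldiers the opponent has revealed).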

Introduction

The field of real-time strategy (RTS) game artificial intelligence (AI) has advanced significantly in the past few years, partially thanks to competitions such as the “ORTS RTS Game AI Competition” (held from 2006 to 2009), the “AIIDE StarCraft AI Competition” (held since 2010), and the “CIG StarCraft RTS AI Competition”...


References and Further Reading

  1. Aamodt, A., Plaza, E.: Case-based reasoning: foundational issues, methodological variations, and system approaches. Artif. Intell. Commun. 7(1), 39–59 (1994)
  2. Aha, D.W., Molineaux, M., Ponsen, M.J.V.: Learning to win: case-based plan selection in a real-time strategy game. In: ICCBR, pp. 5–20. Chicago, USA (2005)
  3. Avery, P., Louis, S., Avery, B.: Evolving coordinated spatial tactics for autonomous entities using influence maps. In: Proceedings of the 5th International Conference on Computational Intelligence and Games (CIG '09), pp. 341–348. IEEE Press, Piscataway. http://dl.acm.org/citation.cfm?id=1719293.1719350 (2009)
  4. Balla, R.K., Fern, A.: UCT for tactical assault planning in real-time strategy games. In: International Joint Conference on Artificial Intelligence (IJCAI), pp. 40–45. Morgan Kaufmann Publishers, San Francisco (2009)
  5. Buro, M.: Real-time strategy games: a new AI research challenge. In: IJCAI 2003, International Joint Conferences on Artificial Intelligence, pp. 1534–1535. Acapulco, Mexico (2003)
  6. Buro, M., Churchill, D.: Real-time strategy game competitions. AI Mag. 33(3), 106–108 (2012)
  7. Cadena, P., Garrido, L.: Fuzzy case-based reasoning for managing strategic and tactical reasoning in StarCraft. In: Batyrshin, I.Z., Sidorov, G. (eds.) MICAI (1). Lecture Notes in Computer Science, vol. 7094, pp. 113–124. Springer, Puebla (2011)
  8. Čertický, M.: Implementing a wall-in building placement in StarCraft with declarative programming. CoRR abs/1306.4460 (2013). http://arxiv.org/abs/1306.4460
  9. Čertický, M., Čertický, M.: Case-based reasoning for army compositions in real-time strategy games. In: Proceedings of the Scientific Conference of Young Researchers, pp. 70–73. Baku, Azerbaijan (2013)
  10. Chung, M., Buro, M., Schaeffer, J.: Monte Carlo planning in RTS games. In: IEEE Symposium on Computational Intelligence and Games (CIG), Colchester, UK (2005)
  11. Churchill, D., Buro, M.: Build order optimization in StarCraft. In: Proceedings of AIIDE, pp. 14–19. Palo Alto, USA (2011)
  12. Churchill, D., Saffidine, A., Buro, M.: Fast heuristic search for RTS game combat scenarios. In: AIIDE, Palo Alto, USA (2012a)
  13. Churchill, D., Saffidine, A., Buro, M.: Fast heuristic search for RTS game combat scenarios. In: Artificial Intelligence and Interactive Digital Entertainment Conference (AIIDE 2012) (2012b)
  14. Danielsiek, H., Stuer, R., Thom, A., Beume, N., Naujoks, B., Preuss, M.: Intelligent moving of groups in real-time strategy games. In: 2008 IEEE Symposium on Computational Intelligence and Games, pp. 71–78. Perth, Australia (2008)
  15. Demyen, D., Buro, M.: Efficient triangulation-based pathfinding. In: Proceedings of the 21st National Conference on Artificial Intelligence, vol. 1, pp. 942–947. Boston, USA (2006)
  16. Dereszynski, E., Hostetler, J., Fern, A., Hoang, T.D.T.T., Udarbe, M.: Learning probabilistic behavior models in real-time strategy games. In: Artificial Intelligence and Interactive Digital Entertainment (AIIDE), Palo Alto, USA (2011)
  17. Forbus, K.D., Mahoney, J.V., Dill, K.: How qualitative spatial reasoning can improve strategy game AIs. IEEE Intell. Syst. 17, 25–30 (2002). doi:10.1109/MIS.2002.1024748
  18. Geib, C.W., Goldman, R.P.: A probabilistic plan recognition algorithm based on plan tree grammars. Artif. Intell. 173, 1101–1132 (2009)
  19. Hagelbäck, J.: Potential-field based navigation in StarCraft. In: CIG (IEEE), Granada, Spain (2012)
  20. Hagelbäck, J., Johansson, S.J.: Dealing with fog of war in a real time strategy game environment. In: CIG (IEEE), pp. 55–62. Perth, Australia (2008)
  21. Hagelbäck, J., Johansson, S.J.: A multiagent potential field-based bot for real-time strategy games. Int. J. Comput. Games Technol. 2009, 4:1–4:10 (2009)
  22. Hale, D.H., Youngblood, G.M., Dixit, P.N.: Automatically-generated convex region decomposition for real-time spatial agent navigation in virtual worlds. In: Artificial Intelligence and Interactive Digital Entertainment (AIIDE), pp. 173–178. http://www.aaai.org/Papers/AIIDE/2008/AIIDE08-029.pdf (2008)
  23. Hladky, S., Bulitko, V.: An evaluation of models for predicting opponent positions in first-person shooter video games. In: CIG (IEEE), Perth, Australia (2008)
  24. Hoang, H., Lee-Urban, S., Muñoz-Avila, H.: Hierarchical plan representations for encoding strategic game AI. In: AIIDE, pp. 63–68. Marina del Rey, USA (2005)
  25. Houlette, R., Fu, D.: The ultimate guide to FSMs in games. In: AI Game Programming Wisdom 2. Charles River Media, Hingham, MA, USA (2003)
  26. Hsieh, J.L., Sun, C.T.: Building a player strategy model by analyzing replays of real-time strategy games. In: IJCNN, pp. 3106–3111. Hong Kong (2008)
  27. Jaidee, U., Muñoz-Avila, H., Aha, D.W.: Case-based learning in goal-driven autonomy agents for real-time strategy combat tasks. In: Proceedings of the ICCBR Workshop on Computer Games, pp. 43–52. Greenwich, UK (2011)
  28. Jaidee, U., Muñoz-Avila, H.: CLASSQ-L: a Q-learning algorithm for adversarial real-time strategy games. In: Eighth Artificial Intelligence and Interactive Digital Entertainment Conference, Palo Alto, USA (2012)
  29. Kabanza, F., Bellefeuille, P., Bisson, F., Benaskeur, A.R., Irandoust, H.: Opponent behaviour recognition for real-time strategy games. In: AAAI Workshops, Atlanta, USA (2010)
  30. Koenig, S., Likhachev, M.: D* Lite. In: AAAI/IAAI, pp. 476–483. Edmonton, Canada (2002)
  31. Liu, L., Li, L.: Regional cooperative multi-agent Q-learning based on potential field. In: Fourth International Conference on Natural Computation (ICNC '08), vol. 6, pp. 535–539. IEEE (2008)
  32. Madeira, C., Corruble, V., Ramalho, G.: Designing a reinforcement learning-based adaptive AI for large-scale strategy games. In: AI and Interactive Digital Entertainment Conference, AIIDE (AAAI), Marina del Rey, USA (2006)
  33. Marthi, B., Russell, S., Latham, D., Guestrin, C.: Concurrent hierarchical reinforcement learning. In: International Joint Conference on Artificial Intelligence (IJCAI), pp. 779–785. Edinburgh, UK (2005)
  34. Miles, C.E.: Co-evolving real-time strategy game players. ProQuest (2007)
  35. Miles, C., Louis, S.J.: Co-evolving real-time strategy game playing influence map trees with genetic algorithms. In: Proceedings of the International Congress on Evolutionary Computation, Portland (2006)
  36. Mishra, K., Ontañón, S., Ram, A.: Situation assessment for plan retrieval in real-time strategy games. In: ECCBR, pp. 355–369. Trier, Germany (2008)
  37. Molineaux, M., Aha, D.W., Moore, P.: Learning continuous action models in a real-time strategy environment. In: FLAIRS Conference, pp. 257–262. Coconut Grove, USA (2008)
  38. Ontañón, S.: The combinatorial multi-armed bandit problem and its application to real-time strategy games. In: AIIDE, Boston, USA (2013)
  39. Ontañón, S., Mishra, K., Sugandh, N., Ram, A.: Learning from demonstration and case-based planning for real-time strategy games. In: Prasad, B. (ed.) Soft Computing Applications in Industry, Studies in Fuzziness and Soft Computing, vol. 226, pp. 293–310. Springer, Berlin (2008)
  40. Ontañón, S., Mishra, K., Sugandh, N., Ram, A.: On-line case-based planning. Comput. Intell. 26(1), 84–119 (2010)
  41. Othman, N., Decraene, J., Cai, W., Hu, N., Gouaillard, A.: Simulation-based optimization of StarCraft tactical AI through evolutionary computation. In: CIG (IEEE), Granada, Spain (2012)
  42. Perkins, L.: Terrain analysis in real-time strategy games: an integrated approach to choke point detection and region decomposition. In: Artificial Intelligence and Interactive Digital Entertainment Conference (AIIDE 2010), vol. 10, pp. 168–173 (2010)
  43. Ponsen, M., Spronck, P.: Improving adaptive game AI with evolutionary learning. In: University of Wolverhampton, pp. 389–396 (2004)
  44. Pottinger, D.C.: Terrain analysis for real-time strategy games. In: Proceedings of Game Developers Conference 2000, San Francisco, USA (2000)
  45. Preuss, M., Beume, N., Danielsiek, H., Hein, T., Naujoks, B., Piatkowski, N., Ster, R., Thom, A., Wessing, S.: Towards intelligent team composition and maneuvering in real-time strategy games. Trans. Comput. Intell. AI Games (TCIAIG) 2(2), 82–98 (2010)
  46. Reynolds, C.W.: Steering behaviors for autonomous characters. In: Proceedings of the Game Developers Conference 1999, pp. 763–782 (1999)
  47. Richoux, F., Uriarte, A., Ontañón, S.: Walling in strategy games via constraint optimization. In: Artificial Intelligence and Interactive Digital Entertainment Conference (AIIDE 2014) (2014)
  48. Schadd, F., Bakkes, S., Spronck, P.: Opponent modeling in real-time strategy games. In: GAMEON, pp. 61–70. Bologna, Italy (2007)
  49. Sharma, M., Holmes, M., Santamaria, J., Irani, A., Isbell, C.L., Ram, A.: Transfer learning in real-time strategy games using hybrid CBR/RL. In: International Joint Conference on Artificial Intelligence (IJCAI), Hyderabad, India (2007)
  50. Smith, G., Avery, P., Houmanfar, R., Louis, S.: Using co-evolved RTS opponents to teach spatial tactics. In: CIG (IEEE), Copenhagen, Denmark (2010)
  51. Sturtevant, N.: Benchmarks for grid-based pathfinding. IEEE Trans. Comput. Intell. AI Games (2012). http://web.cs.du.edu/sturtevant/papers/benchmarks.pdf
  52. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction (Adaptive Computation and Machine Learning). The MIT Press, Cambridge, MA (1998)
  53. Synnaeve, G., Bessière, P.: A Bayesian model for opening prediction in RTS games with application to StarCraft. In: Proceedings of 2011 IEEE CIG, Seoul, South Korea (2011a)
  54. Synnaeve, G., Bessière, P.: A Bayesian model for plan recognition in RTS games applied to StarCraft. In: Proceedings of the Seventh Artificial Intelligence and Interactive Digital Entertainment Conference (AIIDE 2011), pp. 79–84. Palo Alto, USA (2011b)
  55. Synnaeve, G., Bessière, P.: A Bayesian model for RTS units control applied to StarCraft. In: Proceedings of IEEE CIG 2011, Seoul, South Korea (2011c)
  56. Synnaeve, G., Bessière, P.: A dataset for StarCraft AI & an example of armies clustering. In: AIIDE Workshop on AI in Adversarial Real-Time Games 2012, Seoul, South Korea (2012a)
  57. Synnaeve, G., Bessière, P.: Special tactics: a Bayesian approach to tactical decision-making. In: CIG (IEEE), Granada, Spain (2012b)
  58. Treuille, A., Cooper, S., Popović, Z.: Continuum crowds. ACM Trans. Graph. 25(3), 1160–1168 (2006)
  59. Uriarte, A., Ontañón, S.: Kiting in RTS games using influence maps. In: Eighth Artificial Intelligence and Interactive Digital Entertainment Conference, Palo Alto, USA (2012)
  60. Uriarte, A., Ontañón, S.: Game-tree search over high-level game states in RTS games. In: Artificial Intelligence and Interactive Digital Entertainment Conference (AIIDE 2014). AAAI Press (2014)
  61. Weber, B.G., Mateas, M.: A data mining approach to strategy prediction. In: IEEE Symposium on Computational Intelligence and Games (CIG), Milan, Italy (2009)
  62. Weber, B.G., Mateas, M., Jhala, A.: Applying goal-driven autonomy to StarCraft. In: Artificial Intelligence and Interactive Digital Entertainment (AIIDE), Palo Alto, USA (2010a)
  63. Weber, B.G., Mawhorter, P., Mateas, M., Jhala, A.: Reactive planning idioms for multi-scale game AI. In: IEEE Symposium on Computational Intelligence and Games (CIG), Copenhagen, Denmark (2010b)
  64. Weber, B.G., Mateas, M., Jhala, A.: Building human-level AI for real-time strategy games. In: Proceedings of the AAAI Fall Symposium on Advances in Cognitive Systems. AAAI Press, Stanford (2011a)
  65. Weber, B.G., Mateas, M., Jhala, A.: A particle model for state estimation in real-time strategy games. In: Proceedings of AIIDE, pp. 103–108. AAAI Press, Stanford (2011b)
  66. Wender, S., Watson, I.: Applying reinforcement learning to small scale combat in the real-time strategy game StarCraft: Brood War. In: CIG (IEEE), Granada, Spain (2012)
  67. Wintermute, S., Joseph Xu, J.Z., Laird, J.E.: SORTS: a human-level approach to real-time strategy AI. In: AI and Interactive Digital Entertainment Conference, AIIDE (AAAI), pp. 55–60. Palo Alto, USA (2007)
  68. Young, J., Hawes, N.: Evolutionary learning of goal priorities in a real-time strategy game. In: Artificial Intelligence and Interactive Digital Entertainment Conference (AIIDE 2012) (2012)

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Santiago Ontañón (1)
  • Gabriel Synnaeve (2)
  • Alberto Uriarte (1)
  • Florian Richoux (3)
  • David Churchill (4)
  • Mike Preuss (5)
  1. Computer Science Department, Drexel University, Philadelphia, USA
  2. Cognitive Science and Psycholinguistics (LSCP) of ENS Ulm, Paris, France
  3. Nantes Atlantic Computer Science Laboratory (LINA), University of Nantes, Nantes, France
  4. Computing Science Department, University of Alberta, Edmonton, Canada
  5. Information Systems and Statistics, Westfälische Wilhelms-Universität Münster, Münster, Germany