Learning from Demonstration and Case-Based Planning for Real-Time Strategy Games

  • Santiago Ontañón
  • Kinshuk Mishra
  • Neha Sugandh
  • Ashwin Ram
Part of the Studies in Fuzziness and Soft Computing book series (STUDFUZZ, volume 226)

Introduction

Artificial Intelligence (AI) techniques have been successfully applied to several computer games. In the vast majority of commercial games, however, traditional AI techniques fail to play at a human level because of the characteristics of the game: most current commercial games have vast search spaces in which the AI must make decisions in real time, rendering traditional search-based techniques inapplicable. As a result, game developers must invest considerable effort in hand-coding specific strategies that play at a reasonable level for each new game. One of the long-term goals of our research is to develop AI techniques that can be applied directly to such domains, reducing the effort required from game developers to include advanced AI in their games.
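
A rough back-of-the-envelope sketch (the unit and action counts below are illustrative assumptions, not figures from the chapter) shows why exhaustive search is infeasible in this setting: the joint branching factor of a real-time strategy decision grows exponentially with the number of units under the player's control.

    # Illustrative estimate, assuming ~50 controllable units with ~10 actions each.
    def joint_branching_factor(units: int, actions_per_unit: int) -> int:
        """Return a rough joint branching factor for one decision cycle."""
        return actions_per_unit ** units

    if __name__ == "__main__":
        b = joint_branching_factor(units=50, actions_per_unit=10)
        # Prints "approx. 10^50 joint moves per decision cycle": even one ply of
        # lookahead is far beyond what can be examined in the milliseconds
        # available per game frame.
        print(f"approx. 10^{len(str(b)) - 1} joint moves per decision cycle")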

Keywords

Open Goal, Game State, Strategy Game, Execution Module, Game Developer

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Santiago Ontañón¹
  • Kinshuk Mishra¹
  • Neha Sugandh¹
  • Ashwin Ram¹

  1. CCL, Cognitive Computing Lab, Georgia Institute of Technology, Atlanta, USA
