A Review of Artificial Intelligence for Games

  • Conference paper
  • First Online:
Artificial Intelligence in China

Part of the book series: Lecture Notes in Electrical Engineering (LNEE, volume 572)

Abstract

Artificial Intelligence (AI) has made great progress in recent years, and it is unlikely to become less important in the future. It would also be an understatement to say that games have greatly promoted the development of AI: game AI has improved remarkably over roughly the past fifteen years. In this paper, we present an academic perspective on AI for games. A number of basic AI methods commonly used in games are summarized and discussed, including ad hoc authoring, tree search, evolutionary computation, and machine learning. The analysis concludes that current game AI is not yet intelligent enough, which strongly calls for support from new methods and techniques.
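The abstract lists tree search among the basic game-AI methods the paper surveys. As a concrete illustration only, and not code from the paper itself, the minimal Python sketch below shows one widely used tree-search method, Monte-Carlo tree search with UCT selection, applied to a toy Nim game. The NimState and Node classes, the mcts function, and all parameter values are hypothetical names chosen for this example.

# Minimal Monte-Carlo tree search (UCT) sketch on a toy Nim game.
# Illustrative only; not the paper's own implementation.
import math
import random

class NimState:
    """Nim: players alternately take 1-3 stones; taking the last stone wins."""
    def __init__(self, stones=15, player=1):
        self.stones = stones
        self.player = player                      # player to move: 1 or 2

    def legal_moves(self):
        return [n for n in (1, 2, 3) if n <= self.stones]

    def play(self, move):
        return NimState(self.stones - move, 3 - self.player)

    def winner(self):
        # When no stones remain, the previous mover took the last stone and wins.
        return 3 - self.player if self.stones == 0 else None

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children, self.visits, self.wins = [], 0, 0.0
        self.untried = state.legal_moves()

    def uct_child(self, c=1.4):
        # Upper-confidence bound balances exploitation (win rate) and exploration.
        return max(self.children,
                   key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def mcts(root_state, iterations=2000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while the node is fully expanded and has children.
        while not node.untried and node.children:
            node = node.uct_child()
        # 2. Expansion: add one child for a previously untried move.
        if node.untried:
            move = node.untried.pop(random.randrange(len(node.untried)))
            child = Node(node.state.play(move), parent=node, move=move)
            node.children.append(child)
            node = child
        # 3. Simulation: play random moves until the game ends.
        state = node.state
        while state.winner() is None:
            state = state.play(random.choice(state.legal_moves()))
        # 4. Backpropagation: credit wins from the viewpoint of each node's parent.
        winner = state.winner()
        while node is not None:
            node.visits += 1
            if node.parent is not None and winner == node.parent.state.player:
                node.wins += 1
            node = node.parent
    # Return the most-visited move at the root.
    return max(root.children, key=lambda ch: ch.visits).move

if __name__ == "__main__":
    print("MCTS takes", mcts(NimState(stones=15)), "stones")

The four steps inside the loop (selection, expansion, simulation, backpropagation) are the standard UCT phases; running the script picks a move for the side to play from a 15-stone pile.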


Author information

Corresponding author

Correspondence to Xueman Fan.

Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Fan, X., Wu, J., Tian, L. (2020). A Review of Artificial Intelligence for Games. In: Liang, Q., Wang, W., Mu, J., Liu, X., Na, Z., Chen, B. (eds) Artificial Intelligence in China. Lecture Notes in Electrical Engineering, vol 572. Springer, Singapore. https://doi.org/10.1007/978-981-15-0187-6_34

Download citation

  • DOI: https://doi.org/10.1007/978-981-15-0187-6_34

  • Published:

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-0186-9

  • Online ISBN: 978-981-15-0187-6

  • eBook Packages: Engineering, Engineering (R0)
