Artificial Intelligence Review, Volume 29, Issue 2, pp 123–161

Machine learning in digital games: a survey

Abstract

Artificial intelligence for digital games applies algorithms and techniques from both traditional and modern artificial intelligence to a range of game-dependent problems. However, the majority of current approaches lead to predefined, static and predictable game agent responses, with no ability to adjust during game-play to the behaviour or playing style of the player. Machine learning techniques provide a way to improve the behavioural dynamics of computer-controlled game agents by facilitating the automated generation and selection of behaviours, thus enhancing the capabilities of digital game artificial intelligence and providing the opportunity to create more engaging and entertaining game-play experiences. This paper provides a survey of the current state of academic machine learning research for digital game environments, with respect to the use of techniques from neural networks, evolutionary computation and reinforcement learning for game agent control.
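To make the reinforcement learning approach mentioned above concrete, the following is a minimal sketch of tabular Q-learning for a toy game agent; the one-dimensional "chase the player" environment, reward values and all names here are illustrative assumptions, not taken from the surveyed work.

```python
import random

def q_learning(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy 1-D chase task: the agent starts at
    position 0 and must reach the player at position 4 by moving
    left (-1) or right (+1). Illustrative example only."""
    rng = random.Random(seed)
    n_states, goal = 5, 4
    actions = (-1, +1)
    Q = {(s, a): 0.0 for s in range(n_states) for a in actions}

    for _ in range(episodes):
        s = 0
        while s != goal:
            # epsilon-greedy action selection balances exploration/exploitation
            if rng.random() < epsilon:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda act: Q[(s, act)])
            s2 = min(max(s + a, 0), n_states - 1)   # clamp to the board
            r = 1.0 if s2 == goal else -0.01        # small step penalty
            best_next = 0.0 if s2 == goal else max(Q[(s2, b)] for b in actions)
            # standard Q-learning temporal-difference update
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q

Q = q_learning()
# Greedy policy derived from the learned Q-values for each non-goal state.
policy = {s: max((-1, +1), key=lambda a: Q[(s, a)]) for s in range(4)}
```

Because the environment rewards only reaching the player, the learned greedy policy moves the agent toward the player from every state; the same update rule underlies the temporal-difference methods surveyed in the paper, though real game applications require function approximation over far larger state spaces.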

Keywords

Machine learning · Computational intelligence · Digital games · Game AI


Copyright information

© Springer Science+Business Media B.V. 2009

Authors and Affiliations

School of Computing and Information Engineering, Faculty of Engineering, University of Ulster, Coleraine, UK