Encyclopedia of Computer Graphics and Games

Living Edition
| Editor: Newton Lee

Computer Games and Artificial Intelligence

  • Hanno Hildmann
Living reference work entry
DOI: https://doi.org/10.1007/978-3-319-08234-9_234-1

Synonyms

Computer Games: Video Games

Artificial Intelligence: Machine Intelligence; Artificial Cognitive Intelligence

Definition

Computer Games are games that are realized in one form or another using digital hardware. Artificial Intelligence is commonly considered a branch of Computing Science, though it has matured into a field in its own right. It is concerned with the creation or simulation of intelligent behaviour in, or by, machines.

Background

Games have become an essential part of the software and entertainment industry (Saarikoski and Suominen 2009). Recently there has been increased interest in games (Lowood 2009) from a sociological, cultural, and educational point of view (Hildmann and Hildmann 2012a, b). For decades now, it has been argued that games and the playing of games are culturally significant (Mechling 1987).

Games and Game Theory

Games have traditionally existed in versions involving more than one player (Books 2010), and those that do constitute a formalized form of interaction that is more than mere physical activity or information exchange. The literature on games in general – as well as on specific games and their individual rules – is extensive (Tylor 1879). In the field of game theory, which is concerned with the formal analysis of games and their properties, one traditionally (Nash 2001; Osborne and Rubinstein 1994; von Neumann and Morgenstern 1974) distinguishes a variety of game types:
  • Noncooperative vs. cooperative games: With regard to the actions available to a player during a game, we call an action primitive if it cannot be decomposed further into a series of other, smaller actions. The distinction is then made on the basis of whether such primitive actions are performed by individual players (noncooperative) or by groups of players (cooperative).

  • Strategic vs. extensive games: A game is strategic if all decisions of all players are made a priori (i.e., before the start of the game) and independently of the opponent’s decisions (e.g., Paper-Rock-Scissors). Extensive games allow for decisions to be made during game play and with respect to the opponents’ moves (e.g., Chess).

  • Perfect information vs. imperfect information: In, e.g., Chess both players have perfect insight into the state of the game; thus Chess is a game of perfect information. However, in most card games, there is an element of uncertainty as one normally does not know the distribution of cards for the opponents. The latter games are imperfect information games.
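The first of these distinctions can be made concrete in a few lines. The sketch below represents Paper-Rock-Scissors as a strategic (normal-form) zero-sum game; the payoff table and the `best_responses` helper are our own illustrative construction, not taken from the cited literature.

```python
# Paper-Rock-Scissors as a strategic (normal-form) game: both players commit
# to an action before seeing the opponent's choice. Payoffs are from player
# 1's perspective (+1 win, 0 draw, -1 loss); the game is zero-sum.
ACTIONS = ["rock", "paper", "scissors"]
PAYOFF = {
    "rock":     {"rock": 0,  "paper": -1, "scissors": 1},
    "paper":    {"rock": 1,  "paper": 0,  "scissors": -1},
    "scissors": {"rock": -1, "paper": 1,  "scissors": 0},
}

def best_responses(opponent_action):
    """All actions maximizing player 1's payoff against a known opponent move."""
    best = max(PAYOFF[a][opponent_action] for a in ACTIONS)
    return [a for a in ACTIONS if PAYOFF[a][opponent_action] == best]

print(best_responses("rock"))  # ['paper']
```

Because both players must choose without observing the other, no single pure action is safe here, which is why the game-theoretic solution of this game is a randomized (mixed) strategy.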

Today, games are mainly of interest to the AI community if they are too complex for all possible variations to be calculated. For example, the game Nim, though having a rich mathematical theory behind it, is small enough to calculate a winning strategy (Conway 2001). Due to its mathematical foundation, it has left its mark on combinatorial game theory through terminology (e.g., nim-addition and nim-sum) (Jørgensen 2009). It is possible to precalculate the moves and play a perfect game, as there are known algorithms for doing so (Spencer 1975); thus the theoretical winner is known in advance, and there exist straightforward algorithms to determine which side has this advantage (Bouton 1902). Due to this, the game is uninteresting today for both game theoreticians and researchers in AI. This was not always the case though, and in the early years, a number of devices were built to play the game: e.g., “Nimatron,” built by Westinghouse for the New York World’s Fair in 1940 (http://www.goodeveca.net/nimrod/nimatron.html), or “Nimrod,” developed for the Festival of Britain in London in 1951. These machines were highlights wherever they were shown, with Nimatron playing 100,000 games (and winning 90,000) and Nimrod “taking on all comers” at the Festival in London (Jørgensen 2009).
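Bouton’s result can be stated compactly: a Nim position is lost for the player to move exactly when the nim-sum (the bitwise XOR of the heap sizes) is zero, and otherwise there is always a move that restores a zero nim-sum. A minimal sketch of this strategy (function names are ours):

```python
from functools import reduce
from operator import xor

def nim_sum(heaps):
    """Bouton's nim-sum: the bitwise XOR of all heap sizes."""
    return reduce(xor, heaps, 0)

def winning_move(heaps):
    """Return (heap_index, new_size) that makes the nim-sum zero, or None if
    the position is already lost for the player to move."""
    s = nim_sum(heaps)
    if s == 0:
        return None
    for i, h in enumerate(heaps):
        if h ^ s < h:          # reducing this heap to h ^ s zeroes the nim-sum
            return (i, h ^ s)

print(winning_move([3, 4, 5]))  # (0, 1): take two from the first heap
```

After the move, the opponent faces heaps [1, 4, 5] with nim-sum 0, from which every reply hands the advantage straight back; this is why the winner of a Nim game is determined by the starting position alone.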

Games and Artificial Intelligence

Yan (2003) lists artificial intelligence as an inherent design issue for online games, and indeed, “appropriately challenging” AI is considered to be crucial to the commercial success of a game (Aiolli and Palazzi 2009). Generally it can be said that “non-repetitive, adaptive, interesting, and in summary intelligent behavior offers a competitive edge for commercial games” (Naddaf 2010).

As the combinatorial challenge of complex games such as Chess and Go (discussed below) is successively mastered, some emphasis (e.g., Hildmann and Crowe 2011) has shifted to designing AI players that, instead of playing very well, play realistically, i.e., non-repetitively and in an interesting way. Emphasis is placed on intelligent behavior (Hildmann 2013) in the way a human player would play, including the handicaps, shortcomings, and common mistakes made by humans (Hildmann 2011).

Besides the commercial interest of the gaming industry, the research field itself has always had a large interest in games. The prime example is the game of Chess – famously called the “Drosophila of AI” (McCarthy 1990). It was used by key figures in the field of artificial intelligence (e.g., Claude Shannon 1950; Allen Newell et al. 1988; Herbert Simon 1970) in the context of their research on machine and human intelligence. The interest in this game predates the computer age: Charles Babbage already considered using his “Analytical Engine” to play Chess in the 1840s, and by 1949 researchers on both sides of the Atlantic were advancing theoretical approaches to automate the playing of this game (Hsu et al. 1995). Similarly, Checkers was used by, e.g., Arthur Samuel to study learning (Jørgensen 2009).

In the past, the Atari 2600 game console in particular was used extensively as a platform for developing and demonstrating AI algorithms. The reasons for this are that there are over 900 game titles available for the Atari 2600 console, the games are simple and concentrate on the challenging aspects of game play (while newer games showcase the latest in video and graphics performance), they have a small and discrete action space, and many emulators are available (Naddaf 2010). Games will continue to be a focal point of AI research. For example, Octi (Schaeffer and van den Herik 2002) is a game specifically invented to be resistant to computer algorithms.

Ethical Considerations

Many things, including far-reaching business and political decisions, can be modeled as a game, and the application of artificial intelligence to this consequently broad area has stirred many controversies over the years. Among the relevant discussion topics are whether a computer can be said to play at all and whether computers should be allowed to play with (meddle in) some of the more serious matters that can be cast into a game-theoretic model.

For example, Weizenbaum (1972) and Wiener (1960) wrote on the moral and ethical issues of AI and of allowing computers to make decisions, the latter being criticized by Samuel (1960) and the former by Coles (1972). Taube (1960) wrote in response to Wiener (1960) that machines cannot play at all. He supposes that the act of playing is tied to enjoyment and distinguishes the notion of game as it is “used in the usual sense” from game as it is “redefined by computer enthusiasts with nothing more serious to do.”

Johnson and Kobler (1962) agree with Wiener (1960) that machines can be original but warn against allowing them to play war-games and simulations of nuclear war, as their predictions might not be entirely accurate and they would lack the understanding of all the values that need to be considered (i.e., the insight ultimately exhibited by the fictional computer system “Joshua”). While the moral issues of artificial intelligence and political decisions have recently become a mainstream matter of discussion, this was already discussed in the context of game theory and its application to the nuclear “first strike” doctrine in the Cold War. It was a well-known matter of disagreement between Wiener and von Neumann; the latter advocated the idea of a preemptive nuclear strike on Russia, arguing that the other side’s reasoning would lead them to the same conclusion, making it simply a matter of time before one side struck first. Since the mid-1990s, computers have successively dismantled the reign of human players in the realm of perfect information games. More recently, this has also happened in games with imperfect information, where intuition or the understanding of complex real-world relationships is required. Machines are increasingly mastering decision-making in situations that go beyond those found in Chess and Go. Whether mastery of these situations includes the understanding that, e.g., “mutually ensured annihilation” is undesirable even if it constituted a victory by points is a question worth asking (but not the subject of this entry).

Landmark Artificial Intelligence Victories

Games are culturally important (Sutton-Smith 1986) and have been used for millennia to allow humans to exercise the brain (Smith 1956) and to compete against each other on the basis of mastery of skill and intelligence.

Therefore, games have been of large interest to the AI community, and the design of programs that can match the performance of top human players has long been considered the ultimate achievement. Once a program can hold its own against a human, the next step is to try and outperform humans, and since the mid-1990s, one game after the other has been mastered by computers: in 1995 a program called “Chinook” played the game of Checkers at world-class level and succeeded in beating one of the world’s best players (Kroeker 2011). In 1996 “Deep Blue” beat the reigning world champion in the game of Chess for the first time in history (Kasparov 1996), and 1 year later, it won an entire chess tournament, ending the supremacy of humans in this game (Schaeffer and van den Herik 2002). The TV game show Jeopardy! was conquered decisively by “Watson” in 2011. Poker, a game of bluffing and intuition, was mastered in 2015 (Bowling et al. 2015), and within months of each other (December 2016 and January 2017), two programs, “Libratus” (Riley 2017) and “DeepStack” (Moravčík et al. 2017), significantly outperformed dozens of professional and top human players. Finally, the best human players of Go – the board game considered by many to be the last great stronghold of human superiority in games – recently succumbed to “AlphaGo.”

Since 2005 the Stanford Logic Group of Stanford University, California, has been running the General Game Playing Competition (http://games.stanford.edu/) (GGPC) (Genesereth and Love 2005), which invites programs to compete in playing games. The games to be played are, however, unknown in advance, and the programs have to be able to compete in a number of different types of games, expressed in a Game Description Language (Love et al. 2008). “AlphaGo Zero” recently not only learned the games Shogi, Chess, and Go – all of which have had dedicated computer programs for decades – from scratch (at a significant computational cost, Silver et al. (2017)), it also proceeded to beat the best programs currently available for each of these games. This might be an indication that the GGPC, too, will soon come to an end. Progress is occurring at a nonlinear speed, and certainly in the area of artificial intelligence, recent years have seen milestone events happen with increasing frequency.

Before we briefly elaborate on some of the most prominent examples of artificial intelligence systems winning against top human players, it should be pointed out that there does not seem to be a silver bullet, as the discussed games were conquered with different techniques and approaches. The game of Chess was essentially conquered by throwing sufficiently large amounts of computational resources at it (as well as by training the program against a specific human opponent), while the original program that mastered Go used advanced machine learning techniques (combined with Monte Carlo Tree Search (MCTS)) and had access to a large database of previously played games. Its revised version (which learned to play Go as well as Chess from scratch) relied on learning from massive numbers of games it played against itself. In contrast, the program that won Jeopardy! applied probabilistic methods before selecting the most likely answer. A full technical discussion of these programs is beyond the scope of this entry; the interested reader is referred to the provided references for an in-depth discussion of the matter.

Chess: Deep Blue

In his recent book (Kasparov and Greengard 2017), Garry Kasparov writes that over his career, starting at the age of 12, he played about 2400 serious games of Chess and lost only around 170 of them. In 1989 he had played, and beaten, the computer program “Deep Thought.” On February 10, 1996, playing as the reigning World Chess Champion, he lost a game of Chess to a computer, Deep Blue (Kasparov 1996), but still won the match. This date is significant for a number of reasons: firstly, and most widely known, for the first time (under normal tournament conditions) a computer program beat a top human player in a game which – until that day – was considered the ultimate benchmark for human versus machine intelligence (despite the game of Go being more complex). Far more important, however, is that Kasparov later claimed that during the game he “could feel” and “smell” a “new kind of intelligence across the table” (Kasparov 1996). What he meant was that the moves of his opponent bore witness to an intelligence that he, at the time, did not believe a computer could exhibit. It is not the superior play and triumphant victory at its end but the outstanding demonstration of insight and intelligence that makes Game 1 of the 1996 “Deep Blue versus Garry Kasparov” match a turning point in the history of artificial intelligence. While Kasparov ended up winning the match (4–2), he had, arguably, conceded a win of the Turing test to Deep Blue. This test, famously proposed by Turing (1950), elegantly sidesteps the need to formally define the concept of intelligence before being able to assess it. It does so by suggesting that if the behavior of an opponent (as opposed to its physical appearance) could fool a human judge into believing that this opponent was human, then that opponent should be considered intelligent, independent of the embodiment of the intelligence. Arguably, in February of 1996, Deep Blue did just that.

In contrast, and despite being considered a “watershed for AI” (Hassabis 2017), when Deep Blue finally defeated Kasparov 1 year later in 1997 \( \left(3\frac{1}{2}\ \mathrm{to}\ 2\frac{1}{2}\right) \), this constituted much less of a victory in the sense of the Turing test. The date is considered a “major success for artificial intelligence and a milestone in computing history” (Schaeffer and van den Herik 2002), but as Wiener had written 37 years earlier, pure calculation is only part of the process: one needs to consider the playing history of the opponent as well and be able to adapt to it accordingly during the game. In the case of Deep Blue, the machine knew every major game Kasparov had ever played, while Kasparov was completely in the dark about Deep Blue’s capabilities.

Of course one can argue that in 1996 this was true in reverse, as Kasparov (1996) himself acknowledged. He admitted that, after losing the first game, his defense “in the last five games was to avoid giving the computer any concrete goal to calculate toward.” He stated that he knew the machine’s priorities and that he played accordingly; he closed by conjecturing that he had “a few years left.” In fact, he had little more than a year before, on May 11th, 1997, Deep Blue won the deciding sixth game, thereby defeating the human world champion \( 3\frac{1}{2} \) to \( 2\frac{1}{2} \). The stronghold of human superiority and intelligence had finally fallen, and other landmark victories of AI were soon to follow.

Go: AlphaGo

Go has been described as the Mount Everest of AI (Lee et al. 2016b). This is fitting in the sense that it represents the highest peak we can climb but not necessarily the most difficult thing conceivable: Go is perhaps the most complex game that is actually played by humans, with the number of theoretically possible games being on the order of 10^700 (Wang et al. 2016) – a quantity far larger than the number of atoms in the universe (Lee et al. 2010). But if it were merely complexity we were after, harder challenges could easily be designed. It is also the fact that humans have engaged in playing Go for millennia, and through this have reached high levels of mastery, that makes it of interest to the AI community. In the context of squaring off humans against computers, Go is the likely candidate for the ultimate turn-based board game of perfect information.

From 1998 to 2016, competitions pitting computer programs against human players were held every year at major IEEE conferences, with the handicap granted to the computer dropping from 29 in 1998 to 0 in 2016 (Lee et al. 2016a). It is important to understand that the advantage gained from the handicap is not linear in its size. The handicap is implemented as stones the computer may place before the game starts; therefore, e.g., an advantage of 4 allows the computer to claim or fortify all four corners, while just one handicap stone less allows the human player to do the same for one corner, arguably allowing for entirely different game play (Lee et al. 2012), especially on a smaller (9 × 9) board. On a full (19 × 19) board, the step from a handicap of 4 to one of 3 was a massive one, and it was taken by AlphaGo in March 2016. Until then the reigning top computer program, “Zen,” was only able to prevail against top professional players with a handicap of 4 (Lee et al. 2016b).

AlphaGo, created by DeepMind, entered the circuit around October 2015. It did not just capitalize on improved hardware and increased computational power; it was built differently and combined at least two successful techniques: Monte Carlo Tree Search (Cazenave 2017) and Deep Learning (DL) (LeCun et al. 2015). Technical details of DL are discussed in Clark and Storkey (2014), Gokmen et al. (2017), and Jiang et al. (2017). As the program played the top contemporary computer programs “Crazy Stone,” “Zen,” “Pachi,” “Fuego,” and “GnuGo,” it proceeded to beat them all (winning 494 of 495 games with the single-machine version and triumphing in every single game with the distributed version) (Silver et al. 2016).
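The tree-search half of this combination rests on a simple selection rule: at each node, MCTS descends into the child whose statistics best balance exploitation against exploration. The sketch below shows the plain UCB1 rule with invented toy statistics; AlphaGo’s actual variant (PUCT) additionally weights the exploration term by the neural network’s move priors.

```python
import math

def uct_score(wins, visits, parent_visits, c=1.4):
    """UCB1 score used by MCTS to choose which child move to descend into:
    average result (exploitation) plus a bonus for rarely tried moves."""
    if visits == 0:
        return float("inf")       # unvisited moves are always tried first
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

# Toy statistics at one tree node: move -> (wins, visits); numbers invented.
stats = {"A": (6, 10), "B": (3, 4), "C": (0, 0)}
parent_visits = sum(v for _, v in stats.values())
chosen = max(stats, key=lambda m: uct_score(*stats[m], parent_visits))
print(chosen)  # 'C': the unvisited move gets infinite priority
```

Repeating this selection down the tree, playing out the resulting position (in AlphaGo’s case, evaluating it with a value network), and propagating the result back up is what lets MCTS focus its limited simulations on the most promising lines of play.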

AlphaGo first beat the European champion Fan Hui 5 to 0 in September 2015 (becoming the first program ever to defeat a professional player) and within half a year proceeded to defeat the reigning human world champion Lee Sedol (hailed by some as “the greatest player of the past decade” (Hassabis 2017)) 4 to 1 in March 2016 (Fu 2016). It won the first three games, clinching the best-of-5 series, but maybe more importantly it awed top human players, not unlike Deep Blue had awed Kasparov in 1996: commenting on the legendary move 37 in AlphaGo’s second game against Lee Sedol, Fan Hui is quoted as saying: “[i]t’s not a human move. I’ve never seen a human play this move, […] So beautiful … beautiful” (Metz 2016).

Toward the end of 2017, AlphaGo Zero was introduced as the next incarnation of the system. It not only learned three games (Shogi, Chess, and Go) autonomously from scratch (Silver et al. 2017) but then proceeded to beat the top programs currently playing these games. In a way, AlphaGo Zero is not a program designed to play Go but a program designed to play according to a set of rules. More specifically, AlphaGo Zero can at least learn and play turn-based games of perfect information without chance and has demonstrated its ability to play these games at the expert level.

As such, AlphaGo Zero might mark the end of this sub-area in computational challenges. We can (and will) surely continue to build better programs and let them play each other, but as far as the question of superior game play is concerned, humans have met (and created) their masters.

Poker: DeepStack and Libratus

Both Go and Chess are games of perfect information: all information about the game is known by both players at all times. Zermelo’s Theorem (Zermelo 1913; for an English version of the paper, see Schwalbe and Walker 2001) states that deterministic (i.e., not based on chance) finite two-player games with perfect information have a non-losing strategy for one player (i.e., either one player can, theoretically, force a win, or both players can force a draw). This means that – theoretically – one could calculate all possible ways the games of Chess and Go can be played and then simply never choose a move that results in a defeat. While this is practically impossible, due to the exceedingly high number of possible games, it means that for these two games – in theory – a machine could be built that would never lose a single match. Therefore it is only practical limitations that prevent computers from outperforming humans, and all that it takes is clever algorithms that get around these limitations. That being said, actually achieving this has been considered a major achievement in the field (Jørgensen 2009), and by no means should the triumphs of Deep Blue and AlphaGo be belittled. As stated above, the defeats of the top human players in these games are landmarks in the history of artificial intelligence.
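Zermelo’s argument can be made concrete on a game small enough to solve exhaustively. The sketch below (representation and function names are ours) backward-inducts the full tic-tac-toe game tree and confirms that neither player can force a win when both play perfectly – exactly the “non-losing strategy” the theorem guarantees.

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if a line is completed, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value for X with `player` to move: +1 if X can force a win,
    -1 if O can, 0 if both sides can force at least a draw (Zermelo)."""
    w = winner(board)
    if w is not None:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0
    nxt = "O" if player == "X" else "X"
    vals = [value(board[:i] + player + board[i + 1:], nxt)
            for i, c in enumerate(board) if c == "."]
    return max(vals) if player == "X" else min(vals)

print(value("." * 9, "X"))  # 0: perfect play always ends in a draw
```

For Chess or Go the same recursion is well defined but the tree is astronomically larger, which is precisely the practical limitation the text describes.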

Heads-up no-limit Texas Hold’em, a popular version of the game of Poker, is considered the main benchmark challenge for AI in imperfect-information games (Brown and Sandholm 2017). In 2017, a Poker-playing AI designed at Carnegie Mellon University (CMU), going by the name Libratus, prevailed against four top human players in a tournament over the course of an aggregated 120,000 hands. Libratus became the first program to defeat top human players (Metz 2017). This victory did not come easy, as “Claudico,” an earlier program by the same team, failed in 2015. But while Claudico was defeated, it was so only by a margin: after a combined $170 million had been bet, the humans were up by a mere 3/4 of a million (Tung 2015). Claudico’s performance, despite falling short of a decisive victory, indicated that Poker was not outside of what is possible. Within 2 years, its creators returned with Libratus, and another long-standing challenge in the field of artificial intelligence was met (Moravčík et al. 2017).

At the same time as Libratus was developed at Carnegie Mellon University, another group, at the University of Alberta, designed DeepStack. By the time of Libratus’ victory in early 2017, DeepStack had already competed in a tournament and won (in December 2016), with the resulting publication undergoing peer review. While DeepStack played 33 professional poker players from 17 countries, Libratus competed against four of the best human players in the world. Both programs showed a statistically significant superior performance over their human opponents (Riley 2017). DeepStack’s performance exceeded the threshold of what professional players consider a sizable margin by a factor of 10 (Moravčík et al. 2017). The two programs approached the problem differently, which goes to show that the field of AI has not just chipped away at the (next) “last” stronghold of human superiority, but has done so in multiple ways, indicating that these victories were not achieved by machines capitalizing on human shortcomings but by progressively refined techniques which enable computers to improve their play of the game.

Libratus uses a supercomputer at the Pittsburgh Supercomputing Center to build an extensive “game tree” with which to evaluate the expected outcome of a particular play. DeepStack instead uses a neural network to “guess” the outcome of a play, not entirely unlike how humans use “intuition” (Metz 2017).

Jeopardy!: Watson

In early 2011, a natural language question-answering program called Watson (named after IBM’s founder Thomas J. Watson (Brown 2013)) became world famous after winning the US TV show Jeopardy! (Kollia and Siolas 2016). This game show has been on TV since 1984 and revolves around contestants correctly identifying questions on the basis of being given the resulting answers. The maximum time allowed is 5 seconds, and three contestants compete to be the first to correctly identify the question (Ferrucci 2010). Winning the game requires the ability to identify clues involving subtleties, irony, and riddles, and as such, the competition is firmly within areas where humans excel (and machines traditionally fail) (Brown 2013).

Over the course of 3 days, from February 14 to February 16, 2011, IBM’s Watson proceeded to beat the two highest-performing humans in the history of the game: Brad Rutter, who had been the show’s largest money winner ever, and Ken Jennings, the record holder for the longest winning streak. Not only did Watson beat both humans, it utterly defeated them: in the end Watson had won in excess of $77,000, while its opponents combined won less than $46,000 (Ken Jennings $24,000 and Brad Rutter $21,600). The match was watched by close to 35 million people on TV, and an estimated 70% of all Americans knew of the program (Baughman et al. 2014), making it a bona fide celebrity.

Watson was the result of a 7-year project (Frenkel 2011), which resulted in a program that could interpret and parse statements made in often messy and colloquial English and search through up to 200 million pages of text to identify and generate the appropriate question (Strickland and Guy 2013) for the answers provided. For years to come, it was considered one of the leading intelligent systems in existence (Abrar and Arumugam 2013). IBM itself has claimed it to be the first software capable of cognitive computing (Holtel 2014) and considered its creation and victory at the game show the beginning of an “Era of Cognitive Computing” (Kelly and Hamm 2013).

The expert systems of the early years of artificial intelligence were designed to reason using learned (hard-coded) steps, a process that had to be painstakingly constructed on the basis of the testimony of human experts. In contrast, Watson uses probabilistic methods that result in confidence scores for answers (Ahmed et al. 2017). Due to this, the program is able to attempt answers to problems it has never seen before, i.e., it can operate under incomplete or missing information (Gantenbein 2014).
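As an illustration of the idea (not IBM’s actual pipeline, and with invented candidate answers and scores), raw evidence scores for competing answers can be normalized into confidence values, with the system only “buzzing in” when the top confidence clears a threshold:

```python
import math

def confidences(raw_scores):
    """Softmax-normalize raw evidence scores into confidences summing to 1."""
    exps = {ans: math.exp(s) for ans, s in raw_scores.items()}
    total = sum(exps.values())
    return {ans: e / total for ans, e in exps.items()}

# Hypothetical candidate answers with merged evidence scores:
candidates = {"Chicago": 2.0, "Toronto": 0.5, "Boston": 0.1}
conf = confidences(candidates)
best, p = max(conf.items(), key=lambda kv: kv[1])
print(best if p > 0.5 else "pass")  # Chicago
```

Working with graded confidences rather than hard-coded rules is what lets such a system answer questions it has never seen, and abstain when the evidence is too thin.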

One more thing sets Watson apart from the computer programs mentioned before and their landmark victories: while it was built to compete and win in the TV show Jeopardy!, the 3-day competition that made it famous was only a first step in its (intended) career. IBM considered the game show a real-world challenge in the area of Open Domain Question Answering (Ahmed et al. 2017), but winning against a human was a means to an end, not the goal. The ability to perform on a human level when subjected to open domain questions demonstrated for the first time that a program could engage in such an activity, independent of the setting or context. From the start Watson’s creators had more in mind than merely winning a game show (Baker 2011). In what IBM has called cognitive computing (Kelly and Hamm 2013), the ability of programs to learn from experience and understand problems that were so far firmly in the domain of humans (Asakiewicz et al. 2017) has the potential to disrupt virtually all aspects of our lives, with all the commercial implications and opportunities that come with it.

The nonlinear increase in data generation, storage, and processing in recent years has arguably ushered in a new age, one where intelligent data analytics and text analysis (Cvetković et al. 2017) are rapidly increasing in relevance. The technology behind Watson has been applied to the domains of, e.g., legal services, health care, banking, and tourism (Gantenbein 2014; Murtaza et al. 2016). Of those, applications in the domain of health care constitute the largest benefit to society due to their societal importance (Kelly and Hamm 2013) and reliance on (massive amounts of) data (Ahmed et al. 2017). The exceptional performance exhibited during the game show, especially with regard to natural language processing and under the added challenge of colloquial speech, ambiguity, and reading between the lines, has made it clear that the true calling for systems like Watson may be found outside the studio and in our daily lives (Kelly and Hamm 2013).

Philosophical Thoughts

Intelligence is a concept humans are very familiar with yet to date have failed to define well. Part of the reason why the Turing test is still relevant is that it sidesteps this dilemma by using the one definition humans can agree on: human intelligence. If a human can be fooled into believing an opponent is human, then that opponent must exhibit human-like behavior and, in this case, intelligence.

All games discussed above are landmark games in the area of artificial intelligence, but they neither paint a complete picture nor are they the only ones in which machines have competed against humans. One obvious example is the game of Soccer, where entire leagues of various machines compete for titles. In this entry, we entirely ignore the physical aspect of games and therefore all games that require physical behavior. The motivation for this is twofold: on the one hand, the progress made in this area is equally stunning and pervasive, with new results and achievements being showcased in videos regularly, and providing a fair overview of these achievements is firmly outside the scope of this entry (the interested reader is invited to search for YouTube videos of, e.g., Boston Dynamics). On the other hand, this entry considers the term intelligence only in the intellectual sense and not in the context of mastery of the physical domain.

For millennia the mastery of certain games, often based on exceedingly simple rules, was seen as the pinnacle of human intelligence. Chess was considered an object of intellectual skill (Jørgensen 2009) and diversion (Spencer 1975). Professional Go players are known to describe promising board configurations as esthetically pleasing. In 200 BC, poetry and Go went hand in hand in Japan (Smith 1956). The complexity of games can extend far beyond what humans can consciously grasp and explain; many top players play games partly intuitively, that is, with a clear understanding of which moves they prefer but without the ability to justify this preference.

In the final years of the last millennium, this stronghold of human intelligence came under attack. While unfulfilled promises made by the most prominent experts in the field of artificial intelligence (e.g., Herbert Simon’s prediction that “a computer [will] be the World Chess Champion before 1967” (Hsu et al. 1995)) have demonstrated time and time again that the advent of AI is not, by far, as sweeping and complete as they hoped, it seems inevitable and unstoppable.

In this context, it is important to remember that these are just games: controlled environments with often very clearly stated rules (Naddaf 2010). In addition, and maybe far more importantly, these are interactions in which the evaluation of an outcome is clearly defined and often one-dimensional (i.e., win vs. loss). Real life is of a different complexity, and humans do not share the same views on how outcomes are evaluated. Artificial intelligence may be able to outperform humans in any one subject, but the true benchmark for intelligence may be a matter of defining the meaning of the concept for us. Machines are bound to be faster and more precise than humans; there should be no surprise about that. Whether they can be better at something than humans is a question that cannot be answered until we know what better means in the respective context.

One thing is becoming painfully obvious: we are running out of games to have computers beat us at. Maybe the age of game-playing AI is coming to an end. Or maybe the games need to change, and winning by points is no longer the victory we prize the most. Cooperative game play, with teams of humans and AI players working together, might be the new challenge in the years to come. In the end, when one has mastered a game to perfection and is guaranteed never to lose a match, the only winning move is not to play.

Cross-References

References

  1. Abrar, S.S., Arumugam, R.K.: Simulation model of IBM-Watson intelligent system for early software development. In: 2013 UKSim 15th International Conference on Computer Modelling and Simulation, pp. 325–329 (2013)
  2. Ahmed, M.N., Toor, A.S., O’Neil, K., Friedland, D.: Cognitive computing and the future of health care: the cognitive power of IBM Watson has the potential to transform global personalized medicine. IEEE Pulse 8(3), 4–9 (2017)
  3. Aiolli, F., Palazzi, C.E.: Enhancing artificial intelligence on a real mobile game. Int. J. Comput. Games Technol. 2009, 1:1–1:9 (2009)
  4. Asakiewicz, C., Stohr, E.A., Mahajan, S., Pandey, L.: Building a cognitive application using Watson DeepQA. IT Prof. 19(4), 36–44 (2017)
  5. Baker, S.: Final Jeopardy: The Story of Watson, the Computer That Will Transform Our World. Houghton Mifflin Harcourt, Boston (2011)
  6. Baughman, A.K., Chuang, W., Dixon, K.R., Benz, Z., Basilico, J.: DeepQA Jeopardy! gamification: a machine-learning perspective. IEEE Trans. Comput. Intell. AI Games 6(1), 55–66 (2014)
  7. Books, L.: History of Games: History of Board Games, History of Card Decks, History of Role-Playing Games, Playing Card, Alquerque, Senet. General Books LLC, Memphis (2010)
  8. Bouton, C.L.: Nim, a game with a complete mathematical theory. Ann. Math. 3, 35–39 (1902)
  9. Bowling, M., Burch, N., Johanson, M., Tammelin, O.: Heads-up limit hold’em poker is solved. Science 347(6218), 145–149 (2015)
  10. Brown, E.: Watson: the Jeopardy! challenge and beyond. In: 2013 IEEE 12th International Conference on Cognitive Informatics and Cognitive Computing, p. 2 (2013)
  11. Brown, N., Sandholm, T.: Libratus: the superhuman AI for no-limit poker. In: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, pp. 5226–5228 (2017)
  12. Cazenave, T.: Residual networks for computer Go. IEEE Trans. Comput. Intell. AI Games PP(99), 1–1 (2017)
  13. Clark, C., Storkey, A.J.: Teaching deep convolutional neural networks to play Go. CoRR abs/1412.3409 (2014)
  14. Coles, L.S.: Computers and society (reply to Weizenbaum (1972)). Science 178(4061), 561–562 (1972)
  15. Conway, J.: On Numbers and Games. AK Peters Series. A.K. Peters (2001)
  16. Cvetković, L., Milašinović, B., Fertalj, K.: A tool for simplifying automatic categorization of scientific paper using Watson API. In: 2017 40th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), pp. 1501–1505 (2017)
  17. Ferrucci, D.: Build Watson: an overview of DeepQA for the Jeopardy! challenge. In: 2010 19th International Conference on Parallel Architectures and Compilation Techniques (PACT), p. 1 (2010)
  18. Frenkel, K.A.: Schooling the Jeopardy! champ: far from elementary. Science 331(6020), 999 (2011)
  19. Fu, M.C.: AlphaGo and Monte Carlo Tree Search: the simulation optimization perspective. In: 2016 Winter Simulation Conference (WSC), pp. 659–670 (2016)
  20. Gantenbein, R.E.: Watson, come here! The role of intelligent systems in health care. In: 2014 World Automation Congress (WAC), pp. 165–168 (2014)
  21. Genesereth, M., Love, N.: General game playing: overview of the AAAI competition. AI Mag. 26, 62–72 (2005)
  22. Gokmen, T., Onen, O.M., Haensch, W.: Training deep convolutional neural networks with resistive cross-point devices. CoRR abs/1705.08014 (2017)
  23. Hassabis, D.: Artificial intelligence: chess match of the century. Nature 544, 413 (2017)
  24. Hildmann, H.: Behavioural game AI – a theoretical approach. In: 2011 International Conference for Internet Technology and Secured Transactions (ICITST), Abu Dhabi, pp. 550–555 (2011)
  25. Hildmann, H.: The players of games: evaluating and recording human and AI game playing behaviour. In: 2013 International Conference on Advances in Computing, Communications and Informatics (ICACCI), pp. 724–729 (2013)
  26. Hildmann, H., Crowe, M.: Designing game playing behaviour for AI players. In: 2011 IEEE Jordan Conference on Applied Electrical Engineering and Computing Technologies (AEECT 2011), Amman, pp. 1–7 (2011)
  27. Hildmann, H., Hildmann, J.: A formalism to define, assess and evaluate player behaviour in mobile device based serious games. In: Ma et al. (2012), Chapter 6 (2012a)
  28. Hildmann, J., Hildmann, H.: Augmenting initiative game worlds with mobile digital devices. In: Ma et al. (2012), Chapter 8 (2012b)
  29. Holtel, S.: Short paper: more than the end to information overflow: how IBM Watson will turn upside down our view on information appliances. In: 2014 IEEE World Forum on Internet of Things (WF-IoT), pp. 187–188 (2014)
  30. Hsu, F.-H., Campbell, M.S., Hoane Jr., A.J.: Deep Blue system overview. In: Proceedings of the 9th International Conference on Supercomputing, ICS ’95, pp. 240–244. ACM, New York (1995)
  31. Jiang, Y., Dou, Z., Cao, J., Gao, K., Chen, X.: An effective training method for deep convolutional neural network. CoRR abs/1708.01666 (2017)
  32. Johnson, D.L., Kobler, A.L.: The man-computer relationship. Science 138(3543), 873–879 (1962)
  33. Jørgensen, A.: Context and driving forces in the development of the early computer game Nimbi. IEEE Ann. Hist. Comput. 31(3), 44–53 (2009)
  34. Kasparov, G.: The day that I sensed a new kind of intelligence. Time Magazine (1996)
  35. Kasparov, G., Greengard, M.: Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins. PublicAffairs, New York (2017)
  36. Kelly, J., Hamm, S.: Smart Machines: IBM’s Watson and the Era of Cognitive Computing. Columbia Business School Publishing, New York (2013)
  37. Kollia, I., Siolas, G.: Using the IBM Watson cognitive system in educational contexts. In: 2016 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 1–8 (2016)
  38. Kroeker, K.L.: Weighing Watson’s impact. Commun. ACM 54, 13–15 (2011)
  39. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521, 436 (2015)
  40. Lee, C.S., Wang, M.H., Teytaud, O., Wang, Y.L.: The game of Go at IEEE WCCI 2010. IEEE Comput. Intell. Mag. 5(4), 6–7 (2010)
  41. Lee, C.S., Teytaud, O., Wang, M.H., Yen, S.J.: Computational intelligence meets game of Go at IEEE WCCI 2012. IEEE Comput. Intell. Mag. 7(4), 10–12 (2012)
  42. Lee, C.S., Wang, M.H., Yen, S.J., Wei, T.H., Wu, I.C., Chou, P.C., Chou, C.H., Wang, M.W., Yan, T.H.: Human vs. computer Go: review and prospect. IEEE Comput. Intell. Mag. 11(3), 67–72 (2016a)
  43. Lee, C.S., Wang, M.H., Yen, S.J., Wei, T.H., Wu, I.C., Chou, P.C., Chou, C.H., Wang, M.W., Yan, T.H.: Human vs. computer Go: review and prospect – extended online version. IEEE Comput. Intell. Mag. 11(3), 67–72 (2016b)
  44. Love, N., Hinrichs, T., Haley, D., Schkufza, E., Genesereth, M.: General Game Playing: Game Description Language Specification. Technical report, Stanford Logic Group (2008)
  45. Lowood, H.: Guest editor’s introduction: perspectives on the history of computer games. IEEE Ann. Hist. Comput. 31(3), 4 (2009)
  46. Ma, M., Oikonomou, A., Jain, L. (eds.): Serious Games and Edutainment Applications. Springer, Huddersfield (2012)
  47. McCarthy, J.: Chess as the Drosophila of AI. In: Computers, Chess, and Cognition, pp. 227–237. Springer (1990)
  48. Mechling, J.: Toys as culture (Brian Sutton-Smith). J. Am. Folk. 100(397), 350–352 (1987)
  49. Metz, C.: The sadness and beauty of watching Google’s AI play Go. WIRED Magazine (2016)
  50. Metz, C.: A mystery AI just crushed the best human players at poker. WIRED Magazine (2017)
  51. Moravčík, M., Schmid, M., Burch, N., Lisý, V., Morrill, D., Bard, N., Davis, T., Waugh, K., Johanson, M., Bowling, M.: DeepStack: expert-level artificial intelligence in heads-up no-limit poker. Science 356(6337), 508–513 (2017)
  52. Murtaza, S.S., Lak, P., Bener, A., Pischdotchian, A.: How to effectively train IBM Watson: classroom experience. In: 2016 49th Hawaii International Conference on System Sciences (HICSS), pp. 1663–1670 (2016)
  53. Naddaf, Y.: Game-independent AI agents for playing Atari 2600 console games. PhD thesis (2010)
  54. Nash, J.: Non-cooperative games (PhD thesis). In: Kuhn, H., Nasar, S. (eds.) The Essential John Nash. Princeton University Press, Princeton (2001)
  55. Newell, A., Shaw, C., Simon, H.: Chess Playing Programs and the Problem of Complexity, pp. 29–42. Springer, New York (1988)
  56. Osborne, M.J., Rubenstein, A.: A Course in Game Theory. The MIT Press, Cambridge, MA (1994)
  57. Riley, T.: Artificial intelligence goes deep to beat humans at poker. Science 16, 37–43 (2017)
  58. Saarikoski, P., Suominen, J.: Computer hobbyists and the gaming industry in Finland. IEEE Ann. Hist. Comput. 31(3), 20–33 (2009)
  59. Samuel, A.L.: Some moral and technical consequences of automation – a refutation [cf. Wiener (1960)]. Science 132(3429), 741–742 (1960)
  60. Schaeffer, J., van den Herik, H.J.: Games, computers and artificial intelligence. Artif. Intell. 134, 1–8 (2002)
  61. Schwalbe, U., Walker, P.: Zermelo and the early history of game theory. Games Econ. Behav. 34(1), 123–137 (2001)
  62. Shannon, C.E.: Programming a computer for playing chess. Philos. Mag. (Series 7) 41(314), 256–275 (1950)
  63. Silver, D., Huang, A., Maddison, C.J., Guez, A., Sifre, L., van den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman, S., Grewe, D., Nham, J., Kalchbrenner, N., Sutskever, I., Lillicrap, T., Leach, M., Kavukcuoglu, K., Graepel, T., Hassabis, D.: Mastering the game of Go with deep neural networks and tree search. Nature 529, 484 (2016)
  64. Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., Lillicrap, T., Simonyan, K., Hassabis, D.: Mastering chess and shogi by self-play with a general reinforcement learning algorithm. ArXiv e-prints (2017)
  65. Simon, H.A.: Computers as chess partners. Science 169(3946), 630–631 (1970)
  66. Smith, A.: The Game of Go: The National Game of Japan. C.E. Tuttle Co., Tokyo (1956)
  67. Spencer, D.: Game Playing with Computers. Hayden Book Co., Rochelle Park (1975)
  68. Strickland, E., Guy, E.: Watson goes to med school. IEEE Spectr. 50(1), 42–45 (2013)
  69. Sutton-Smith, B.: Toys as Culture. Gardner Press, New York (1986)
  70. Taube, M.: Computers and game-playing. Science 132(3426), 555–557 (1960)
  71. Tung, C.: Humans out-play an AI at Texas Hold ‘Em – for now. WIRED Magazine (2015)
  72. Turing, A.M.: Computing machinery and intelligence. Mind 59(236), 433–460 (1950)
  73. Tylor, E.: The History of Games. D. Appleton, New York (1879)
  74. von Neumann, J., Morgenstern, O.: Theory of Games and Economic Behaviour, 3rd edn. Princeton University Press, Princeton (1974)
  75. Wang, F.Y., Zhang, J.J., Zheng, X., Wang, X., Yuan, Y., Dai, X., Zhang, J., Yang, L.: Where does AlphaGo go: from Church-Turing thesis to AlphaGo thesis and beyond. IEEE/CAA J. Autom. Sin. 3(2), 113–120 (2016)
  76. Weizenbaum, J.: On the impact of the computer on society. Science 176(4035), 609–614 (1972)
  77. Wiener, N.: Some moral and technical consequences of automation. Science 131(3410), 1355–1358 (1960)
  78. Yan, J.: Security design in online games. In: Proceedings of the 19th Annual Computer Security Applications Conference, pp. 286–295 (2003)
  79. Zermelo, E.: Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels. In: Proceedings of the Fifth International Congress of Mathematicians, pp. 501–504. Cambridge University Press, Cambridge (1913)

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Departamento de Ingenieria de Sistemas y Automatica, Universidad Carlos III de Madrid, Leganes/Madrid, Spain