“Moxon’s Master” seems straightforward enough—man makes machine, machine murders man—but the famously mordant “Bitter” Bierce often wrote stories in which a subtext offers a quite different reading to the surface text, and this story is one of those. SF critics have offered at least three divergent interpretations of “Moxon’s Master”: that Moxon had a mistress who hid in the automaton and killed Moxon in a fit of rage; that Moxon was gay, and his attempts to seduce the narrator provoked a jealous rage in Haley, who hid in the automaton and killed Moxon; and that the crime was staged by Moxon in order to convince the narrator that machines can be intelligent, with Moxon’s death by thunderbolt being the act of a vengeful God. Bierce’s tale has also been much commented upon in artificial intelligence (AI) circles. In particular, several AI pioneers argued that the automaton must have been intelligent—indeed, a conscious, cogitating entity—because it was good at chess. But just as a surface reading of the story might cause us to misunderstand who Moxon’s master was, might it have caused those early AI commentators to miss the point of Moxon’s chess game?
The rules of chess are, as boardgames go, complicated. And the number of choices faced by chess players during a game—should I move the bishop to that square or this one? or would it be better to move the knight there? or the queen here? or … —well, the number is immense. This mix of complexity and variety has led most people to believe that chess is a game for the “brainy” or intelligent. And perhaps this is why people long believed computers would never beat humans at chess: computers, after all, aren’t intelligent. (The Turk, a chess-playing automaton unveiled in 1770 by von Kempelen, caused wonder across Europe and America: the mechanical man was an extremely strong player, who won the majority of his games. But the Turk was merely a clever hoax. See Fig.
.) Since there isn’t a clearly defined winning strategy for chess, as there is for, say, tic-tac-toe, and since the number of possible moves in a game of chess is too large to permit even the most powerful processor to crunch through all the options, people thought computers would only ever be woodpushers—and even then they’d have to get humans to push the wood.
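To get a feel for the scale involved, consider a standard back-of-the-envelope calculation (the figures below are conventional rough estimates, not exact values): assume about 35 legal moves per position and games lasting about 80 plies. The game tree then holds roughly 35 to the power 80 lines of play, a number with well over a hundred digits:

```python
# Rough size of the chess game tree, using conventional estimates:
# about 35 legal moves per position, games of about 80 plies.
branching_factor = 35
plies = 80
lines_of_play = branching_factor ** plies
print(len(str(lines_of_play)))  # -> 124 (a 124-digit number)
```

For comparison, the number of atoms in the observable universe is usually put at around 10 to the power 80, which makes clear why no processor can simply enumerate every possibility.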
A reconstruction of von Kempelen’s chess-playing Turk, displayed in the Heinz Nixdorf MuseumsForum—the world’s biggest museum devoted to computers—in Paderborn, Germany. The Turk was a formidable chess player, but it wasn’t an automaton; rather, the Turk was an elaborate trick. A chess master, hidden inside the cabinet, played the game; intricate mechanisms caused the Turk’s arms and fingers to move; and von Kempelen provided the sleight-of-hand and diversions needed to distract the audience. Presumably Moxon’s automaton would have appeared something like the Turk (Credit: Marcin Wichary, San Francisco)
At least, that was the generally accepted belief. AI pioneers thought differently.
As long ago as 1949 Claude Shannon, one of the founders of our modern digital world, showed how one could restrict the number of possibilities a computer would need to consider if it were playing a game of chess. Shannon also showed how a program, if it had been fed the relative values of chess pieces and the values associated with certain positional factors, could evaluate the best move in a given position. In 1950, Alan Turing actually wrote a program to play chess—one he had to execute by hand, with pencil and paper, since no computer of the day could run it. (Turing’s achievement demonstrates that if there’s a connection between intelligence and chess-playing ability then it’s a tenuous one. Turing, by any reasonable definition of the term, was a genius. But he was, apparently, a weak chess player.)
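The core of Shannon’s evaluation idea can be sketched in a few lines. The piece values below are the conventional ones (pawn 1, knight 3, bishop 3, rook 5, queen 9); the board representation is a toy invented for illustration, not any real chess library’s API, and a real evaluator would add positional factors on top of raw material.

```python
# A minimal sketch of a Shannon-style material evaluation.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(board):
    """Score a position from White's point of view.

    `board` is a list of toy piece codes such as "wP" (white pawn)
    or "bQ" (black queen).
    """
    score = 0
    for piece in board:
        value = PIECE_VALUES.get(piece[1], 0)  # kings score 0 here
        score += value if piece[0] == "w" else -value
    return score

# White is a knight up, Black has an extra pawn: +3 - 1 = +2
print(evaluate(["wK", "wN", "wP", "bK", "bP", "bP"]))  # -> 2
```

A program then picks the move leading to the position with the best such score, looking a few moves ahead rather than to the end of the game.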
From 1950 onwards, progress in computer chess was slow but relentless. By the 1980s, cheap chess programs were capable of beating the likes of me without breaking a sweat. (That was hardly a landmark achievement. I like to think of myself as “brainy”, but if I am then my intelligence certainly doesn’t extend to chess. The average six-year-old can beat me.) Then, in 1997, the IBM supercomputer Deep Blue beat Garry Kasparov, the then reigning world champion and one of the strongest players of all time. Today, no human can possibly hope to compete in a game against a dedicated chess program running on half-decent hardware. Indeed, technology has advanced to the stage where it’s entirely possible for a tech-savvy person to develop a modern-day version of von Kempelen’s Turk: an automaton that not only plays an extremely strong game but that moves the pieces for itself. (See Fig.
Nowadays you can build your own “Turk”—a true automaton that can play chess (at a level that’s almost certainly better than your own). As you can see from the photograph, this doesn’t look as impressive as the original Turk. Nevertheless, unlike the original Turk, this one needs no human intervention to move chess pieces on the board (Credit:
Proponents of AI argued that Deep Blue’s victory was a watershed moment. Dissenters argued (correctly, in my opinion) that it was a mistake to regard chess-playing ability as a signifier of intelligence: just as tic-tac-toe can be solved by brute force, so can chess. It just takes more force. The dissenters, however, failed to heed their own argument. They began promoting Go, an ancient Chinese board game, as an example of an intellectual pursuit that could distinguish humans from machines.
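The claim about tic-tac-toe is easy to verify: the entire game tree is small enough to search exhaustively. A minimal sketch (the board here is a 9-character string, a representation invented for illustration) confirms that perfect play from both sides always ends in a draw:

```python
from functools import lru_cache

# Brute-force search over the full tic-tac-toe game tree.
# Returns +1 if the player to move can force a win, -1 if the
# opponent can, 0 if best play leads to a draw.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

@lru_cache(maxsize=None)
def solve(board, player):
    opponent = "O" if player == "X" else "X"
    # If the opponent just completed a line, the player to move has lost.
    if any(all(board[i] == opponent for i in line) for line in LINES):
        return -1
    moves = [i for i, cell in enumerate(board) if cell == "."]
    if not moves:
        return 0  # full board, no winner: a draw
    best = -1
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        best = max(best, -solve(child, opponent))
    return best

print(solve("." * 9, "X"))  # -> 0: perfect play is a draw
```

Chess is exactly this, scaled up: the same search idea, applied to a tree too vast to exhaust, so the search is cut off early and positions are scored heuristically instead.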
Compared to chess, the rules of Go are simple. But the game of Go contains many more possible moves, and so brute-force approaches, of the type favoured by computers, are ineffective. Furthermore, the best Go players seem to rely upon intuition—a phenomenon we don’t know how to replicate in lines of programming code. Therefore, the argument went, humans could rest assured of their intellectual superiority over computers: we might no longer be better chess players but we would always be better Go players. Except, of course, that’s not how it turned out.
In October 2015, AlphaGo—a program developed by the AI company Google DeepMind—beat a human professional Go player. Five months later, the program beat a human player of the top rank in a five-game match. And in May 2017, AlphaGo beat the world’s number one human player in a three-game match. This was an achievement worth noting. Whereas Deep Blue had followed a script for chess, and was successful simply because its raw processing power enabled it to follow the script quickly, AlphaGo learned to be successful at Go. It did so by studying a database of 30 million moves made by human experts in games—essentially, the program was made to watch thousands of hours of recorded human play. The developers reinforced that training by making AlphaGo play against different instances of itself in order to generate and analyze new moves. This combination of deep learning and reinforcement learning was enough to create a Go program capable of crushing any human opponent.
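To convey the flavour of learning from self-play, here is a toy sketch: tabular reinforcement learning on the simple game of Nim (take one, two, or three stones; whoever takes the last stone wins). Everything below is invented for illustration; AlphaGo’s actual architecture pairs deep neural networks with Monte Carlo tree search, which is far beyond this sketch.

```python
import random

random.seed(0)

# Value estimates learned purely from self-play: no human games,
# no hand-coded strategy, just outcomes fed back through play.
Q = {}  # (pile, action) -> estimated value for the player to move
ALPHA, EPSILON = 0.5, 0.2

def choose(pile):
    actions = [a for a in (1, 2, 3) if a <= pile]
    if random.random() < EPSILON:
        return random.choice(actions)  # explore a random move
    return max(actions, key=lambda a: Q.get((pile, a), 0.0))  # exploit

for _ in range(5000):
    pile, history = 10, []
    while pile > 0:
        action = choose(pile)
        history.append((pile, action))
        pile -= action
    # The player who took the last stone won; propagate the outcome
    # back through the game, flipping sign for alternate movers.
    reward = 1.0
    for pile_before, action in reversed(history):
        old = Q.get((pile_before, action), 0.0)
        Q[(pile_before, action)] = old + ALPHA * (reward - old)
        reward = -reward

# The greedy move from a pile of 10, as learned from self-play alone.
print(max((1, 2, 3), key=lambda a: Q.get((10, a), 0.0)))
```

After enough games the learned values tend to favour moves that leave the opponent a multiple of four stones, which is the known winning strategy for this game; nobody ever told the program that.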
The human factor was not entirely absent from AlphaGo: the program required a pre-existing database of human games from which to learn. But it turns out humans aren’t needed even for that purpose. In October 2017, Google DeepMind released AlphaGo Zero. This iteration of the program didn’t need to study past games. Instead, scientists programmed it with the rules of Go and then let it start: it played at random against itself and observed the results. After three days AlphaGo Zero had played 4.9 million games. In that three-day period AlphaGo Zero accumulated more Go knowledge than humans had acquired since they began playing the game about 2500 years ago. Within three days AlphaGo Zero became the best Go player on the planet, and trounced the initial version of AlphaGo 100–0 in a hundred-game match. Checkers, chess, Go … computers crush all human competition. Using machine learning techniques, computers are capable of beating humans at
any rule-based board game.
But so what? Does excellence in board games make computers intelligent? Computers can perform arithmetic better than people, too, but few of us would argue that makes them intelligent. To be clear: this is
not to say developments such as AlphaGo Zero are unimportant; indeed, the techniques that went into the creation of AlphaGo Zero might soon change the nature of our civilisation. But they aren’t necessarily a signifier of human-level intelligence. We don’t revere the names of Einstein, Shakespeare, and Van Gogh because they excelled at backgammon; the thought of judging our friends, family, and colleagues on the basis of an ability to strategise in checkers simply doesn’t enter our heads. Surely computers must exhibit something more than a prowess at chess if we are to consider them intelligent?
So why did those early AI pioneers think the development of a chess-playing computer, as in “Moxon’s Master”, would light the path to artificial intelligence? Perhaps part of the answer is that those AI pioneers were themselves exceptionally intelligent. Often such people spend a large part of their life lost in their thoughts. They reason, pontificate, imagine, deduce, argue, contemplate … and the body becomes merely a means of getting the brain from A to B. But of course being fully human isn’t solely about intellect. How could it be? We are biological creatures, and hundreds of millions of years of evolution ensure that the human body influences the thoughts and feelings passing through the brain to which it’s attached. It makes no sense to talk about human-level intelligence as if it were entirely separate from embodied experience.
The commentators who argued the automaton in “Moxon’s Master” must be intelligent because it played a decent game of chess were, I believe, misguided. The automaton possessed no more intelligence than Deep Blue. However, perhaps one could argue the automaton was intelligent, or at least possessed a mind capable of self-awareness, on the basis that it wasn’t a very good chess player—and was angry about the fact! The automaton killed Moxon in a fit of pique, a rage caused by a failure to foresee Moxon’s checkmating move. Anger might not be attractive, and it often leads to behaviour we would classify as stupid, but it’s an element of human intelligence. Similarly, all those other emotions that move us, whether positively or negatively, are an inevitable part of the mix. When attempting to define what we mean by mind, surely we shouldn’t pretend emotions are irrelevant, extract one intellectual pursuit—chess, arithmetic, or whatever—and then conclude that excellence in this single domain is a token of general intelligence or self-awareness?
From what source, however, would Moxon’s automaton or any other machine experience emotion? Forget chess—put the automaton in a room with one-way mirrors and play it some catchy music. If it dances as if no one is looking then I’ll grant it might be intelligent. But in the absence of glands, hormones, drives that originate in our mammalian and even pre-mammalian ancestors, a machine would simply stand still and do what it was programmed to do. Wouldn’t it?
My dance test is impractical, so let’s return to more mainstream thinking. What is artificial intelligence, if it isn’t the ability of a machine to beat a person at chess or Go?
One hugely influential contribution to the AI debate came from Alan Turing. The famous codebreaker was interested in the question: can machines think? It’s hard to get a handle on this question because of the difficulty, as we’ve just discussed, of defining precisely what we mean by “think”. (Was Deep Blue “thinking” when it beat Kasparov?) However, around the same time he was writing his chess program, Turing came up with a closely related question that
can be answered by experiment. The so-called “Turing test” has different versions and interpretations, but the general flavour is as follows.
Suppose an interrogator converses naturally with two test participants—one human, one machine, both hidden from view—and at the end of the interrogation is unable to distinguish the machine from the human. In other words, suppose the machine can imitate a human so well that its responses are indistinguishable from those a human might give. Under those circumstances the interrogator should grant that the machine can “think”.
But when you think about it—and I’m granting that you, the reader, can think—the Turing test has some obvious shortcomings. At least, it does if you are interested in measuring a machine’s intelligence.
For example, people often exhibit unintelligent behaviour (don’t get me started!) so an intelligent machine that couldn’t mimic
unintelligent behaviour would eventually fail the test. Similarly, a super-intelligent machine would be able to do things humans can’t; unless it “dumbed down”, and refused to show those behaviours, it would fail the test.
Another shortcoming of this approach: human interrogators might assign qualities to a machine because of our inherent tendency to anthropomorphise. An example of this from my own experience occurred back in the 1980s. I set my cheap chess program to run at low-intermediate level; sometimes I beat it, sometimes it beat me. On its turn to move the program would sometimes pause—and in those cases I couldn’t help but believe I was playing against a conscious entity that was “thinking” about the best move to make. I now know those pauses were inserted deliberately by the code, so human opponents wouldn’t lose heart. But back then I was a victim of anthropomorphism. Garry Kasparov, when he played Deep Blue in 1997, made a similar mistake (albeit one made when he was operating at a completely different level to me). On move 44 of game 1 the program sacrificed a piece to try to gain a short-term advantage, the sort of illogical move a human player might make. Kasparov won the game, but move 44 spooked him: he thought he saw in it a sign of “superior intelligence”. The reality was more mundane. The algorithm was unable to calculate the best move in this position, and so a programmed safeguard kicked in—Deep Blue produced a move at random. Kasparov anthropomorphised by seeing a programming bug as a sign of intelligence. This tendency to anthropomorphise isn’t restricted to chess. Religion, for example, provides yet another case study: throughout history worshippers of various faiths have bestowed personhood (or even some form of divinity) on non-humans. Anthropomorphism occurs in all walks of life.
In any case, the Turing test doesn’t address the deep problem: a computer might be able to pass the test by mechanically following a script. Indeed, some readers of this book might have interacted with a chatbot—a program capable of conversation, usually via text—without realizing it. Chatbots are used increasingly on customer service websites, and if the query that drove you to the website is easily handled then you probably completed the conversation by thanking the program and failing entirely to notice that you were interacting with code. But even if a chatbot is defined to be “thinking” because it passes the Turing test, it doesn’t follow that it’s conscious or acting with intent.
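The “mechanically following a script” point is easily illustrated. The earliest famous chatbot, Joseph Weizenbaum’s ELIZA, conversed by simple pattern-matching; a minimal sketch in that spirit (the patterns below are invented for illustration) shows there is no understanding anywhere in the loop:

```python
import re

# A tiny scripted chatbot: pattern in, canned template out.
RULES = [
    (r"\bi need (.+)", "Why do you need {0}?"),
    (r"\bi am (.+)", "How long have you been {0}?"),
    (r"\bhello\b", "Hello. How are you feeling today?"),
]
DEFAULT = "Please tell me more."

def reply(text):
    for pattern, template in RULES:
        match = re.search(pattern, text.lower())
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(reply("Hello there"))            # -> Hello. How are you feeling today?
print(reply("I am worried about AI"))  # -> How long have you been worried about ai?
```

Weizenbaum was famously dismayed to find users confiding in his script as though it understood them—anthropomorphism again, and a reminder of why conversational fluency alone is a weak measure of mind.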
Perhaps the sensible approach to the question of AI, at least at our present stage, is not to get too hung up on matters of philosophy. Perhaps we should just write programs, build machines, and simply watch how they get on. The approach exemplified by AlphaGo Zero, for example, works brilliantly for clearly defined worlds possessing simple rules that can be pre-programmed. On the other hand, this approach is not necessarily going to work for complicated tasks in the real world; AlphaGo Zero would make a lousy taxi driver, for example. Complex, real-world tasks might require a quite different approach—but there’s no reason to suppose AI practitioners won’t develop those approaches. At the current rate of progress there’ll come a time—a few years hence, perhaps a few decades—when AIs will be navigating the everyday world as effectively as humans. When that happens it will be difficult to argue those entities
aren’t intelligent. But their intelligence, I believe, will be quite alien. Their method of thinking will be entirely different to our own. And the future, perhaps, will lie in some type of marriage of our two different forms of intelligence.
This speculation naturally leads into the theme of Chapter 6: what will be the human response to human-like machines?