1 Introduction

In recent work I have argued for a new interpretation of Turing’s concept of intelligence and his test of intelligence in machines, based on his notion of an emotional concept and on the versions of the imitation game that he proposed in addition to the famous version in “Computing Machinery and Intelligence” (Proudfoot 2011, 2013, 2017b). On this interpretation, Turing proposed a response-dependence account of the concept of intelligence; his 1948 and 1952 papers, “Intelligent Machinery” and “Can Automatic Calculating Machines be said to Think?”, add intelligence to the list of putative response-dependent concepts or properties. In this chapter I apply Turing’s notion of an emotional concept to his discussion of free will.

A Turing machine (for Turing, a “logical computing machine”) is based upon the actual “human computer” in the process of calculating a number, of whom Turing famously said, “The behaviour of the computer at any moment is determined by the symbols which he is observing, and his ‘state of mind’ at that moment” (1936, p. 75). Not only the human computer’s behavior, but also his or her “state of mind”, is determined; state of mind and observed symbols “determine the state of mind of the computer after the operation is carried out” (ibid., p. 77). Analogously, the Turing machine’s behavior is determined by the machine’s “state of mind” and the scanned symbol on the square of the machine’s tape: this combination “determines the possible behaviour of the machine” (ibid., p. 59). This leads to the question whether a Turing machine can possess free will. In “Can Digital Computers Think?” Turing said:

To behave like a brain seems to involve free will, but the behaviour of a digital computer, when it has been programmed, is completely determined. These two facts must somehow be reconciled, but to do so seems to involve us in an age-old controversy, that of “free will and determinism”. (1951, p. 484)

Turing addressed this “controversy”, directly or indirectly, in all his papers on artificial intelligence. He did not explicitly claim that human beings do have free will, or that they do not; he allowed the possibility that “the feeling of free will which we all have is an illusion” (1951, p. 484). However, on the assumption that we do have free will, the problem of reconciling the “two facts” arose for his hypothesis that “real brains, as found in animals, and in particular in men, are a sort of [Turing] machine” (ibid., p. 483). It arose also for his hypothesis that “[Turing] machines can be constructed which will simulate the behaviour of the human mind very closely” (c. 1951, p. 472).

Turing has been read as holding that the mind is indeterministic (or unpredictable, or uncomputable)—and that this is the solution to the problem of free will and determinism. (For this reading, see (Aaronson 2013), (Lloyd 2012); (Copeland 2004, 2013) offers a more cautious interpretation.) I shall argue, in contrast, that for Turing the concept of free will is an emotional concept: whether or not an agent possesses free will depends on how we respond to the agent. His discussion suggests a new form of compatibilism—a response-dependence compatibilism. This is further evidence against the standard depiction of Turing as a behaviorist.

2 Intelligence as an Emotional Concept

The material in this section is set out in detail in (Proudfoot 2013).

In his 1948 report, “Intelligent Machinery”, Turing said that “the idea of ‘intelligence’ is itself emotional rather than mathematical” (1948, p. 411). In a section entitled “Intelligence as an emotional concept”, he wrote:

The extent to which we regard something as behaving in an intelligent manner is determined as much by our own state of mind and training as by the properties of the object under consideration. If we are able to explain and predict its behaviour or if there seems to be little underlying plan, we have little temptation to imagine intelligence. With the same object therefore it is possible that one man would consider it as intelligent and another would not; the second man would have found out the rules of its behaviour. (ibid., p. 431)

In places Turing used the expression “emotional” as proxy for “irrational”; he claimed that several arguments against the possibility of thinking machines are emotional in this sense (see Proudfoot 2014). An emotional concept, however, is not an irrational concept; it is a concept the application of which is, as Turing said here, determined “as much by our own state of mind and training as by the properties of the object under consideration”. In modern terminology an emotional concept is a response-dependent concept.Footnote 1

The notion of an emotional concept makes clear Turing’s approach to intelligence. In his view, in the case of intelligence in machines, the appearance of thinking is at least as important as the machine’s processing speed, storage capacity, or complexity of programming. These are examples solely of the machine’s behavior—in Turing’s words, the “properties of the object” rather than the properties assigned by “our own state of mind and training”. In his 1952 broadcast, “Can Automatic Calculating Machines be said to Think?”, Turing said, “As soon as one can see the cause and effect working themselves out in the brain, one regards it as not being thinking, but a sort of unimaginative donkey-work” (1952, p. 500). Here he emphasized that intelligence is not a feature of the world independent of our tendency to “regard” entities as thinking.

This response-dependence approach is embodied in Turing’s famous test of intelligence in machines. The paragraph (quoted above) beginning “The extent to which we regard something as behaving in an intelligent manner …” immediately precedes his description of the first version of the imitation game, which is restricted to chess-playing. Turing continued:

It is possible to do a little experiment on these lines, even at the present state of knowledge. It is not difficult to devise a paper machine which will play a not very bad game of chess. Now get three men as subjects for the experiment A, B, C. A and C are to be rather poor chess players, B is the operator who works the paper machine. (In order that he should be able to work it fairly fast, it is advisable that he be both mathematician and chess player.) Two rooms are used with some arrangement for communicating moves, and a game is played between C and either A or the paper machine. C may find it quite difficult to tell which he is playing. (This is a rather idealized form of an experiment I have actually done.) (1948, p. 431)

Turing’s “little experiment” is a trial to see whether or not C has the “temptation to imagine intelligence” in the “paper” machine (i.e. a human executing a program).Footnote 2 Whether or not the machine is intelligent is determined in part by C’s response; for example, if C can “predict its behaviour or if there seems to be little underlying plan”, the machine is not judged to be intelligent. Turing’s words make it clear that his game tests the observer rather than—as the canonical behaviorist interpretation of the imitation game assumes—the machine.

Turing made it clear that he was not proposing a necessary condition of intelligence in machines (1950, p. 435), so the response-dependence theory of the concept (or property) of intelligence to be derived from his remarks can provide only a sufficient condition.Footnote 3 A naïve theory might say: x is intelligent (or thinks) if, in normal conditions, x appears intelligent to normal subjects. The central task is then to specify non-vacuous normal (“standard”, “ideal”, or “favorable”) subjects and conditions. Turing’s 1950 and 1952 versions of his imitation game implicitly indicate such subjects and conditions. The imitation-game interrogator stands in for the normal subject, and is to be “average” and “not ... expert about machines” (1950, p. 442; 1952, p. 495). The unrestricted imitation game supplies the normal conditions: a contestant is required to answer questions on “almost any one of the fields of human endeavour that we wish to include” (1950, p. 435). This requirement prevents a simplistic conversational program—for example, the program briefly claimed in mid-2014 to have passed the test, or the other contestants in Hugh Loebner’s annual competition—from appearing intelligent.Footnote 4 Turing’s remarks suggest, then, something like this schema: x is intelligent (or thinks) if, in an unrestricted computer-imitates-human game, x appears intelligent to an average interrogator.

In Turing’s 1952 broadcast, he also emphasized—answering a lookup table objection raised by Max Newman—that his imitation game is a test of real-world machines (1952, p. 503). This suggests that the schema above should be modified as follows: x is intelligent (or thinks) if in the actual world, in an unrestricted computer-imitates-human game, x appears intelligent to an average interrogator. This is Turing’s “criterion for ‘thinking’” (1950, p. 436). This modification fits with the response-dependence interpretation of Turing’s test, since rigid response-dependence theories world-relativize their schemas in order to eliminate objections that are based on counterfactual situations. Only the logical possibility of an unintelligent machine that can pass the test in the actual world undermines the test.

The question arises, then, whether the notion of an emotional concept is to be found elsewhere in Turing’s thoughts on artificial intelligence. First, however, I turn to his explicit remarks on “free will”.

3 Spirit and Matter

In the late 1920s and early 1930s the problem of free will and determinism was energetically and publicly debated. According to the Bishop of Birmingham, “the notion that Nature was ruled by blind mechanism was general alike in the street and in the pew”.Footnote 5 The new idea of an indeterministic universe provided a solution. The Times reported Sir Arthur Eddington, Plumian Professor of Astronomy at the University of Cambridge, as claiming that “[s]o far as we had yet gone in our probing of the material universe, we could not find a particle of evidence in favour of determinism. There was no longer any need to doubt our intuition of free will”.Footnote 6 Eddington’s theories about the universe, set out in books, lectures, and on radio, were well-known.Footnote 7 His book The Nature of the Physical World, based on his 1927 Gifford Lectures and published the following year, was printed five times within the first two years. The Times described it as a work “which everyone interested in the modern development of science should procure and study”.Footnote 8 According to the Dean of St Paul’s Cathedral, “that books on astronomy, however clearly and brilliantly written, should be reckoned among the best sellers was a very remarkable and encouraging sign. ... [T]he multitudes who read [James] Jeans and Eddington, or listened to their lectures on the wireless … rightly felt that we were in the presence of a mighty revelation of the grandeur of Nature’s God”.Footnote 9 Even politicians entered the debate; according to the Home Secretary, Sir Herbert Samuel, the new picture of the universe held that “at the heart of Nature pure hazard reigned”, with consequences for “the freedom of the human will”.Footnote 10

In The Nature of the Physical World Eddington said:

It is a consequence of the advent of the quantum that physics is no longer pledged to a scheme of deterministic law. … The future is a combination of the causal influences of the past together with unpredictable elements—unpredictable not merely because it is impracticable to obtain the data of prediction, but because no data connected causally with our experience exist. … [S]cience thereby withdraws its moral opposition to freewill. (1928, pp. 294–5)Footnote 11

I think we may now feel quite satisfied that the volition [i.e. “the decision between the possible behaviours” of the brain] is genuine. The materialist view was that the motions which appear to be caused by our volition are really reflex actions controlled by the material processes in the brain, the act of will being an inessential side phenomenon occurring simultaneously with the physical phenomena. But this assumes that the result of applying physical laws to the brain is fully determinate. … [T]here is nothing in the physical world … to predetermine the decision; the decision is a fact of the physical world with consequences in the future but not causally connected to the past. (ibid., p. 311)

According to Eddington, there is “no cause” of “the decision of the brain” (ibid., p. 312). Decisions are uncaused events and so are safe from the specter of determinism.Footnote 12 Volition is “something outside causality” (ibid., p. 312). Eddington conceded that his account admitted “some degree of supernaturalism” (ibid., p. 347). (We might in consequence regard his “decisions” not as uncaused but as examples of substance causation.Footnote 13)

Turing borrowed The Nature of the Physical World from the Sherborne School library in March and April 1929 and in May, June, and July 1930—nearly three months in total.Footnote 14 Andrew Hodges suggests that Turing could have found many of the ideas that he expressed in “Nature of Spirit”,Footnote 15 a brief unpublished essay, in Eddington’s book (Hodges 1983/2012, p. 64). However, it is not known exactly when Turing wrote “Nature of Spirit”Footnote 16 and so the most that can be said is that there are similarities between this essay’s approach to free will and Eddington’s view.

In “Nature of Spirit”, Turing wrote:

It used to be supposed in science that if everything was known about the universe at any particular moment then we can predict what it will be through all the future. … More modern science however has come to the conclusion that when we are dealing with atoms & electrons we are quite unable to know the exact state of them; our instruments being made of atoms & electrons themselves. The conception then of being able to know the exact state of the universe then really must break down on the small scale. This means then that the theory which held that as eclipses etc were predestined so were all our actions breaks down too.

For Turing, actions are not “predestined” by preceding physical events; they are instead the result of “a will” that involves “a spirit”. He said:

Personally I think that spirit is really eternally connected with matter but certainly not always by the same kind of body. I did believe it possible for a spirit at death to go to a universe entirely separate from our own, but I now consider that matter & spirit are so connected that this would be a contradiction in terms. It is possible however but unlikely that such universes may exist.

Then as regards the actual connection between spirit and body I consider that the body by reason of being a living body can “attract” & hold on to a “spirit”, whilst the body is alive and awake the two are firmly connected & when the body is asleep I cannot guess what happens but when the body dies the “mechanism” of the body, holding the spirit is gone & the spirit finds a new body sooner or later perhaps immediately.

This is consistent with Eddington’s view of volition.

What is “the actual connection between spirit and body”? According to Turing:

We have a will which is able to determine the action of the atoms probably in a small portion of the brain, or possibly all over it. The rest of the body acts so as to amplify this.

This is analogous to Eddington’s view. He said:

At some brain centre the course of behaviour of certain atoms or elements of the physical world is directly determined for them by the mental decision … It seems that we must attribute to the mind power not only to decide the behaviour of atoms individually but to affect systematically large groups—in fact to tamper with the odds on atomic behaviour. (1928, pp. 312–3)

Eddington added, “This has always been one of the most dubious points in the theory of the interaction of mind and matter” (1928, p. 313).

In Turing’s later writings there is no (unequivocal) reference to uncaused events. Although he remarked that the “activity of the intuition consists in making spontaneous judgments”, there is no reason to think that “spontaneous” decisions are uncaused (1938, p. 192). Turing said that these judgments “are not the result of conscious trains of reasoning”, which is consistent with their having causes inaccessible to consciousness (ibid., p. 192). Nor is there any serious reference in his later writings to the notion of spirit. This notion appears only in his reply to the “Theological Objection” to the possibility of thinking machines—this is the objection that thinking is “a function of man’s immortal soul” (1950, p. 449). Turing’s reply was that this objection “implies a serious restriction of the omnipotence of the Almighty … [who] has freedom to confer a soul on an elephant if He sees fit” (ibid., p. 449; see also Turing 1948, p. 410). This reply is rhetorical and does not imply that Turing endorsed the notion of a supernatural soul or spirit. It would seem that Turing’s more considered view of free will must differ from his account in “Nature of Spirit”.Footnote 17

4 A Random Element

In 1951, in his broadcast “Can Digital Computers Think?”, Turing referred again to Eddington. Turing’s remark about the “age-old controversy” continues as follows:

There are two ways out. It may be that the feeling of free will which we all have is an illusion. Or it may be that we really have got free will, but yet there is no way of telling from our behaviour that this is so. In the latter case, however well a machine imitates a man’s behaviour it is to be regarded as a mere sham. I do not know how we can ever decide between these alternatives but whichever is the correct one it is certain that a machine which is to imitate a brain must appear to behave as if it had free will, and it may well be asked how this is to be achieved. One possibility is to make its behaviour depend on something like a roulette wheel or a supply of radium. The behaviour of these may perhaps be predictable, but if so, we do not know how to do the prediction. (1951, p. 484)

Turing said that “it was even argued by Sir Arthur Eddington that on account of the indeterminacy principle in quantum mechanics no such prediction is even theoretically possible” (ibid., p. 483). There is no hint here of uncaused events or substance causation. If Turing’s account of free will is indeterministic, it must involve nondeterministic causation—a “random element” rather than “spirit”.

Against this reading of Turing’s view of free will is the fact that in “Computing Machinery and Intelligence” Turing said:

An interesting variant on the idea of a digital computer is a “digital computer with a random element”. These have instructions involving the throwing of a die or some equivalent electronic process; one such instruction might for instance be, “Throw the die and put the resulting number into store 1000”. Sometimes such a machine is described as having free will (though I would not use this phrase myself). (1950, p. 445)

The parenthesis suggests that he did not think that equipping a digital computer with a random element sufficed for the machine to act freely.Footnote 18 The reasons for this are not difficult to guess. Eddington had implied that positing uncaused events failed to solve the problem of free will and determinism, saying that it seemed “contrary to our feeling of the dignity of the mind” to “put it at the mercy of impulses with no causal antecedents” (1928, p. 293). Positing nondeterministic causation has a similar defect. Accounts of free will typically hold that an agent’s action is free if and only if the agent could have acted differently or is the ultimate origin of the action. A machine equipped with a “roulette wheel” could have behaved differently, but only in that the spin of the wheel might have generated a different outcome—it is not that the machine could voluntarily have acted otherwise. Likewise the machine is the ultimate origin of its behavior, but only in that its behavior is settled by the spin of the wheel—the machine did not freely choose how to act. Equipping a computer with a random element does not have the result that the machine’s behavior is free in any intuitive sense.

Other remarks by Turing on including a random element in a machine concern the “education” of the machine:

Each machine should be supplied with a tape bearing a random series of figures, e.g. 0 and 1 in equal quantities, and this series of figures should be used in the choices made by the machine. This would result in the behaviour of the machine not being by any means completely determined by the experiences to which it was subjected, and would have some valuable uses when one was experimenting with it. By faking the choices made one would be able to control the development of the machine to some extent. One might, for instance, insist on the choice made being a particular one at, say, 10 particular places, and this would mean that about one machine in 1024 or more would develop to as high a degree as the one which had been faked. (c. 1951, p. 475)

He also advocated using a random element as a shortcut in calculation:

A random element is rather useful when we are searching for a solution of some problem. Suppose for instance we wanted to find a number between 50 and 200 which was equal to the square of the sum of its digits, we might start at 51 then try 52 and go on until we got a number that worked. Alternatively we might choose numbers at random until we got a good one. … Since there is probably a very large number of satisfactory solutions the random method seems to be better than the systematic. (1950, p. 463)

Turing also thought that the random method is used in “the analogous process of evolution” (ibid., p. 463).Footnote 19 There is no evidence in any of these remarks that he thought that a partially random machine possesses free will.
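Turing’s search example is concrete enough to run. The sketch below (in Python; the function names and the fixed seed are mine, not Turing’s) implements both of his methods: the systematic search starting at 51, and the random search that draws numbers until one works.

```python
import random

def digit_square_match(n):
    """True if n equals the square of the sum of its digits."""
    return n == sum(int(d) for d in str(n)) ** 2

# Systematic method: "start at 51 then try 52 and go on".
systematic = [n for n in range(51, 200) if digit_square_match(n)]

# Random method: "choose numbers at random until we got a good one".
def random_search(lo=51, hi=199, seed=0):
    rng = random.Random(seed)
    while True:
        n = rng.randint(lo, hi)
        if digit_square_match(n):
            return n

print(systematic)       # -> [81]
print(random_search())  # -> 81
```

As it happens, 81 is the only number in this range that equals the square of the sum of its digits (8 + 1 = 9, and 9² = 81), so both methods return the same answer; Turing’s remark about “a very large number of satisfactory solutions” is a claim about random search in general rather than about this toy problem.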

The principal reason to deny that Turing suggested indeterminism as the route to free will is that, after saying “One possibility is to make [the machine’s] behaviour depend on something like a roulette wheel or a supply of radium”, he remarked:

It is, however, not really even necessary to do this. It is not difficult to design machines whose behaviour appears quite random to anyone who does not know the details of their construction. (1951, p. 485)

This is Turing’s “apparently partially random” machine—a machine that

may be strictly speaking determined but appear superficially as if it were partially random. This would occur if for instance the digits of the number π were used to determine the choices of a partially random machine, where previously a dice thrower or electronic equivalent had been used. (1948, p. 416)

If an apparently partially random machine is Turing’s response to the “age-old controversy” of free will and determinism, he must have embraced some solution other than indeterminism.
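Turing’s contrast between a partially random machine and an apparently partially random one is easy to illustrate. In the sketch below (a Python illustration of mine, not Turing’s own construction), the digits of π stand in for a dice thrower as the source of binary choices; the machine is fully deterministic, yet its choice sequence looks random to anyone who does not know its source.

```python
# The first twenty digits of pi after the decimal point, fixed in advance.
PI_DIGITS = "14159265358979323846"

def pi_choices(n):
    """Return n binary 'choices' derived from successive digits of pi.

    The sequence passes for random to an observer ignorant of its
    source, but it is completely determined: the same call always
    yields the same choices, unlike a dice thrower or radium supply."""
    return [int(d) % 2 for d in PI_DIGITS[:n]]

print(pi_choices(8))                   # -> [1, 0, 1, 1, 1, 0, 0, 1]
print(pi_choices(8) == pi_choices(8))  # -> True: fully deterministic
```

In practice one would compute the digits on demand rather than fix twenty of them, but the philosophical point is unaffected: the appearance of randomness here depends on the observer’s ignorance of the details of the machine’s construction, not on any indeterminism in the machine.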

5 Free Will as an Emotional Concept

In describing and addressing this “controversy”, the specific language of “free will” is not essential. The issue is whether an action’s having the property that makes it free—for example, being produced by an agent able to act otherwise—is compatible with the action’s being determined. The critics (of the possibility of thinking machines) whom Turing challenged employed a variety of expressions to discuss exactly this issue.

Geoffrey Jefferson, a participant in Turing’s 1952 radio discussion and cited in “Computing Machinery and Intelligence”, said:

It can be urged, and it is cogent argument against the machine, that it can answer only problems given to it, and, furthermore, that the method it employs is one prearranged by its operator. … It is not enough … to build a machine that could use words (if that were possible), it would have to be able to create concepts and to find for itself suitable words in which to express additions to knowledge that it brought about. (1949, pp. 1109–10)

Jefferson’s criticism that the machine’s method is “prearranged by its operator” is the complaint that the machine could not have acted otherwise. Likewise, his requirement that the machine “create concepts” and “find for itself” words is the demand that the machine be the true source of its behavior. That free will is Jefferson’s concern is demonstrated by his saying, regarding the nervous system as opposed to “modern automata”, that “although much can be properly explained by conditioned reflexes and determinism (in which idea mechanism lurks in the background), there is a fringe left over in which free will may act (i.e. choice not rigidly bound to individual precedent)” (1949, p. 1107).Footnote 20

Likewise Ada Lovelace, also cited in “Computing Machinery and Intelligence”, claimed that the Analytical Engine could not “originate” anything. She wrote:

The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with. (Lovelace 1843, p. 722)

Turing quoted the first sentence of this remark, as quoted by Douglas Hartree in his Calculating Instruments and Machines. Hartree said, of “equipment such as electronic calculating machines, automatic pilots for aircraft”, that such a machine cannot “think for itself”: “all the thinking has to be done beforehand by the designer and by the operator who provides the operating instructions for the particular problem; all the machine can do is to follow these instructions exactly” (1949, p. 70). Lovelace and Hartree, like Jefferson, discussed the problem of free will and determinism without using the expression “free will”.

Turing’s approach can be gleaned, then, not only from his explicit remarks on “free will” but also from his responses to these criticisms. He focused on the question whether a machine could be the ultimate origin of its behavior. His response to Lady Lovelace’s objection was to say that this “whole question will be considered again under the heading of learning machines” (1950, p. 455; on Turing’s “child machines” see Proudfoot 2017a).

On the question whether a learning machine could be intelligent, Turing said:

One can imagine that after the machine had been operating for some time, the instructions would have altered out of all recognition, but nevertheless still be such that one would have to admit that the machine was still doing very worthwhile calculations. … In such a case one would have to admit that the progress of the machine had not been foreseen when its original instructions were put in. It would be like a pupil who had learnt much from his master, but had added much more by his own work. When this happens I feel that one is obliged to regard the machine as showing intelligence. (1947, p. 393)

In Turing’s view, a machine can be intelligent (or think) regardless of whether its behavior is determined. The same applies to the question whether a machine is the ultimate origin of its behavior. He said:

Certainly the machine can only do what we do order it to perform, anything else would be a mechanical fault. But there is no need to suppose that, when we give it its orders we know what we are doing, what the consequences of these orders are going to be. … If we give the machine a programme which results in its doing something interesting which we had not anticipated I should be inclined to say that the machine had originated something, rather than to claim that its behaviour was implicit in the programme, and therefore that the originality lies entirely with us. (1951, p. 485)

The machine can be the true source of its behavior even if its behavior is determined.

Using Turing’s approach, whether or not an entity possesses free will is determined (at least in part) by how the observer responds. He said, “We should be pleased when the machine surprises us, in rather the same way as one is pleased when a pupil does something which he had not been explicitly taught to do” (1951, p. 485). The concept of free will, like that of intelligence, is an emotional concept: whether or not an entity possesses free will is determined “as much by our own state of mind and training” as by the entity’s response-independent properties.

A response-dependence approach to the concept of intelligence provides a philosophical justification for Turing’s use of the imitation game—that is, for his move from the question Can machines think? to the question Are there imaginable digital computers which would do well in the imitation game? Turing made an analogous move in discussing free will. A variant of Lady Lovelace’s objection, he claimed,

says that a machine can never “take us by surprise”. This statement … can be met directly. Machines take me by surprise with great frequency. This is largely because I do not do sufficient calculation to decide what to expect them to do, or rather because, although I do a calculation, I do it in a hurried, slipshod fashion, taking risks. (1950, pp. 455–6)

Turing in effect replaced the question Can a machine possess free will? with the question Can a machine take us by surprise? A response-dependence approach to the concept of free will provides a philosophical justification for this move. It also explains why for Turing “it is certain that a machine which is to imitate a brain must appear to behave as if it had free will”—and why an apparently random machine will do in place of a genuinely random machine (with respect to free will). Appearing free is what matters to being free.

Copeland offers a different explanation of the apparently partially random machine’s role in Turing’s discussion of free will. He says:

Apparently partially random machines imitate partially random machines. As is well known, Turing advocated imitation as the basis of a test—the Turing test—that “[y]ou might call … a test to see whether the machine thinks”. An appropriately programmed digital computer could give a convincing imitation of the behaviour produced by a human brain even if the brain is a partially random machine. The appearance that this deterministic machine gives of possessing free will is, Turing said, “mere sham”, but it is in his view nevertheless “not altogether unreasonable” to describe a machine that successfully “imitate[s] a brain” as itself being a brain.

Turing’s strategy for dealing with what can be termed the freewill objection to human-level AI is elegant and provocative. (2013, p. 657; see also Copeland 2000, pp. 30–31)

However, Turing was inclined to say that a machine (that does something interesting and unexpected) “had originated something”. This suggests that in his view a machine can be the ultimate origin of its behavior—rather than, as Copeland suggests here, merely give a “convincing imitation” of an entity with free will. Copeland’s interpretation does not take into account Turing’s notion of an emotional concept. If the concept of free will is a response-dependent concept, a machine’s appearing to possess free will is not “mere sham” just because the machine is deterministic—any more than an object’s looking red is mere sham just because its particles lack color. Likewise, it is not merely “not altogether unreasonable” to describe this machine as free—any more than it is merely not altogether unreasonable to describe an object that looks red (by normal subjects in normal conditions) as red.

Turing did not expect his way of dealing with Lady Lovelace’s objection—substituting Can a machine take us by surprise? for Can a machine possess free will?—“to silence my critic”. The critic, he said,

will probably say that such surprises are due to some creative mental act on my part, and reflect no credit on the machine. This leads us back to the argument from consciousness, and far from the idea of surprise. It is a line of argument we must consider closed. (1950, p. 456)

The “argument from consciousness” is the claim that a machine’s mere behavior is not indicative of consciousness or thought. Turing “closed” this line of argument, he believed, by issuing a challenge. He argued that the options are: either behavior is a sign of consciousness or “the only way to know that a man thinks is to be that particular man” (ibid., p. 452). The latter, he said, is “the solipsist point of view” and he assumed that most people would accept his imitation game as a test of thinking rather than be committed to solipsism (ibid., p. 452). Perhaps Turing thought that the objection to the possibility of machines with free will is simply the argument from consciousness in another guise. Alternatively, perhaps he thought these objections analogous and susceptible to the same reply. Turing said that if “there is no way of telling from our behaviour” that “we really have got free will” then a machine’s behavior “is to be regarded as a mere sham” (Section 4). Mimicking his rejoinder to the argument from consciousness, we might say: either (surprising and interesting) behavior is a sign of free will or the only way to know that a human being possesses free will is to be that human being. With respect to thinking, Turing predicted that we would reject solipsism and accept the alternative—in this case, to admit that a machine’s appearing to possess free will is not “mere sham”.

6 Response-Dependence Compatibilism

A response-dependence approach to the concept of free will would be formulated along these lines: x has free will (or acts voluntarily) if and only if, in actual normal conditions, x appears to have free will to actual normal subjects.

Turing’s remarks point to both normal subjects and conditions in the case of machines. The child machine’s “teacher” is proxy for the normal subject—an experimenter who is ignorant of the machine’s engineering, just as a teacher is ignorant of the structure and function of the human child’s brain. Turing suggested that “the education of the machine should be entrusted to some highly competent schoolmaster who is interested in the project but who is forbidden any detailed knowledge of the inner workings of the machine” (c. 1951, p. 473). He said:

An important feature of a learning machine is that its teacher will often be very largely ignorant of quite what is going on inside, although he may still be able to some extent to predict his pupil’s behaviour. … The view that “the machine can only do what we know how to order it to do” appears strange in face of this. (1950, p. 462)

This parallels Turing’s requirement, with respect to the concept of intelligence, that the imitation-game interrogator be “average” and “not ... expert about machines”.

The teaching situation also provides normal conditions. The experimenter is surprised by the machine’s behavior only against a background of training the machine, just as in the case of teaching the human infant. According to Turing, the experimenter is to start from “a comparatively simple machine, and, by subjecting it to a suitable range of ‘experience’ transform it into one which was more elaborate, and was able to deal with a far greater range of contingencies. … This might be called ‘education’” (c. 1951, p. 473). The aim is to produce “initiative”:

[D]iscipline is certainly not enough in itself to produce intelligence. That which is required in addition we call initiative. … Bit by bit one would be able to allow the machine to make more and more “choices” or “decisions”. [Eventually] interference would no longer be necessary, and the machine would have “grown up”. (1948, pp. 429–30)

The machine that has “grown up”, makes “choices” and “decisions”, and acquires “initiative” is the ultimate source of its own behavior. Whereas in “Nature of Spirit” free will seemingly involves the supernatural, in Turing’s later writings whether an agent’s action is free depends on how we react to the agent.

If the concept of intelligence is response-dependent, it is very different from the concept of computation, even if brain processes implementing computations are the physical basis of “thinking” behavior. In analogous fashion, if the concept of free will is a response-dependent concept, it is very different from the concept of indeterminism—involving either an uncaused “will” or a “random element”—even if there is an indeterminism in the (physical or non-physical) basis of “free” behavior. This point applies more generally. Libertarian and compatibilist accounts of free will argue that some response-independent property of an agent (or the agent’s behavior) suffices for free will: nondeterministic causation, the role of internal states or second-order reflective mental states, the absence of impediments to action, responsiveness to reasons, and so on. If the concept of free will is an emotional concept, no observer-independent feature suffices for free will—regardless of whether the feature is internal or external to the agent.

Is a response-dependence approach to the concept of free will a form of illusionism? Response-dependence theorists typically distinguish their accounts from subjectivist or illusionist accounts; for example, Mark Johnston claims that response-dependence is consistent with a “qualified” realism (1989, p. 148) and Philip Pettit that “response-dependence does not compromise realism in a serious manner” (1991, p. 588). The notion of normal subjects and conditions is intended precisely to provide an objective basis for the application of a response-dependent concept; also, judgments of response-dependent concepts are held by many proponents of response-dependence accounts to be genuinely evaluable as true or false (Johnston 1989, p. 148). If these theorists’ arguments are sound, both intelligence and free will can be “real”—even if the concepts of intelligence and free will are response-dependent.

Turing’s discussion of the emotional concept of intelligence is consistent with this approach to response-dependent concepts (see Proudfoot 2013). An emotional concept is certainly not, in his view, merely subjective, since he said that the application of an emotional concept is determined in part by the object’s response-independent properties (“the properties of the object under consideration”). He provided normal subjects and conditions to exclude aberrant judgments of intelligence in machines. Moreover, he referred to the question Are there imaginable digital computers which would do well in the imitation game? as a “variant” of the question Can machines think? (1950, p. 442). The former is evaluable as genuinely true or false, and so (we can presume) is the latter. There is no reason to think that Turing took a different stance with respect to the emotional concept of free will.

If the concept of free will is a response-dependent concept, the “age-old controversy” of free will and determinism does not undermine Turing’s groundbreaking hypotheses about the mind. Instead the tension between determinism and free will is simply another example of the perplexing relation between primary and secondary qualities: just as an object can be colored even if its particles are not, so an action can be free even if it is also determined. The relation is certainly still to be explained, but does not (without considerable argument) rule out the possibility of color—or of free will. This leaves what is likely the other main concern about a response-dependence approach to free will: the close link between free will and moral responsibility. Does morality demand a more substantial account of free will? This is hardly a new concern, however, and proponents of anti-metaphysical accounts of free will or anti-realist ethical theories claim that such approaches are consistent with full-fledged accountability.

7 Conclusion

Can machines possess free will? Turing said, “It might be argued that there is a fundamental contradiction in the idea of a machine with intelligence. It is certainly true that ‘acting like a machine’ has become synonymous with lack of adaptability” (1947, p. 393). The same might be said of the idea of a machine with free will. If, however, the concepts of intelligence and free will are response-dependent concepts along the lines suggested by Turing’s remarks, there is no “fundamental contradiction” in the idea of a machine with intelligence or free will. Whether computers will pass Turing’s test of intelligence or will “take us by surprise” is an open question.

Turing’s own view is cautiously expressed. On intelligence, he said “Of course I am not saying at present either that machines really could pass the test, or that they couldn’t” (1952, p. 495). Yet he also made it clear that in his view some machine will do well in the imitation game (e.g. 1950, p. 449). His discussion of free will—in humans and machines—follows the same pattern. As we have seen, he said “It may be that the feeling of free will which we all have is an illusion”. Yet he also said that a “grown up” child or machine can possess “initiative”. He rejected “the view that the credit for the discoveries of a pupil should be given to his teacher”, saying:

In such a case the teacher would be pleased with the success of his methods of education, but would not claim the results themselves unless he had actually communicated them to his pupil. (1948, pp. 411–12)

For Turing, both humans and machines can be the ultimate source of their behavior, even if this behavior is also determined.