1 Turing’s Milieu

The British astronomer Arthur Stanley Eddington (1882–1944) was one of the first scientists to understand and disseminate the general theory of relativity of Albert Einstein (1879–1955) in the Anglophone world (Vibert, 2021). While Eddington regarded the universe as physically indeterminate, Einstein remained convinced that it was intrinsically deterministic: “God does not play at dice [with the universe].”Footnote 1 This dichotomy shall play a key role in the present article in connection with Alan Turing (1912–1954), who was eagerly reading Eddington’s writings in late 1929.Footnote 2

Eddington, who would become Turing’s mentor at Cambridge University, believed that beyond the symbols of the theoretical physicist lurks a mystical truth, which the modern scientist cannot grasp mathematically. As he wrote in 1929:

Surely then that mental and spiritual nature of ourselves, known in our minds by an intimate contact transcending the methods of physics, supplies just that interpretation of the symbols which science is admittedly unable to give.Footnote 3

Besides “symbolic knowledge,” there was “intimate knowledge,” which remained out of mathematical reach.Footnote 4 Enjoying a rainbow, laughing at a joke and playing a musical instrument all require intimate knowledge.Footnote 5 According to Eddington, there was an unbridgeable chasm between the universe in which we live (on the one hand) and logico-mathematical symbolism (on the other hand).Footnote 6

A similar narrative came from the astronomer James Jeans (1877–1946). By late 1929, Turing had already started reading one or more books by Jeans.Footnote 7 The following excerpt from The Mysterious Universe (Jeans, 1930) captures Turing’s intellectual milieu of idealism and free will:

The terrestrial pure mathematician does not concern himself with material substance, but with pure thought. His creations are not only created by thought but consist of thought, just as the creations of the engineer consist of engines.

To my mind, the laws which nature obeys are less suggestive of those which a machine obeys in its motion than of those which a musician obeys in writing a fugue, or a poet in composing a sonnet. The motions of electrons and atoms do not resemble those of the parts of a locomotive so much as those of the dancers in a cotillion.

If all this is so, then the universe can be best pictured, although still very imperfectly and inadequately, as consisting of pure thought, the thought of what, for want of a wider word, we must describe as a mathematical thinker.Footnote 8

Read in reverse, the passage shows that Jeans’s “mathematical thinker” was no machine; instead, he was comparable to a musician who writes a fugue. Likewise, nature was no automaton.

For Eddington, Jeans and other scholars in the twentieth century, scientific uncertainty replaced the Laplacian maxim that it is possible to analytically predict all future states of the universe from initial conditions and natural laws alone. A young Turing expressed a similar view of scientific uncertainty around 1932:

It used to be supposed in Science that if everything was known about the Universe at any particular moment then we can predict what it will be through all the future. … More modern science however has come to the conclusion that … we are quite unable to know the exact state … As McTaggart shews[,] matter is meaningless in the absence of spirit … Personally I think that spirit is really eternally connected with matter but certainly not always by the same kind of body.Footnote 9

Turing’s reference in the previous passage to the Hegelian philosopher, John McTaggart (1866–1925), of Cambridge University aligns with his reception of the works of Eddington and Jeans. For McTaggart, “the recognition of a unity of a universe” was “greater than that recognized by ordinary experience or by science” (McDaniel, 2020). He held the view that “it is possible to be conscious of this unity in a way different from that of ordinary discursive thought” (McDaniel, 2020). McTaggart’s idealism was, as the scholar Gerald Rochelle puts it, “not so far removed from the contemporary scientific world-view as we might at first think.”Footnote 10

Besides the writings of Eddington, Jeans and McTaggart, Turing was also aware of the work of Bertrand Russell (1872–1970) from the first decade of the twentieth century, a period in which Russell had promoted his logicism. His intent had been: “To discover a logically ideal language … that will exhibit the nature of the world in such a way that we will not be misled by the accidental, imprecise surface structure of natural language.”Footnote 11 Russell believed that mathematics in its entirety could be grasped by logical rules: “The fact that all Mathematics is Symbolic Logic is one of the greatest discoveries of our age; and when this [so-called] fact has been established, the remainder of the principles of mathematics consists in the analysis of Symbolic Logic itself.”Footnote 12 Mathematics could be completely reduced to a fixed, predefined logic, according to Russell.

Did Turing agree with Russell’s outlook? Did Turing believe that all creative actions of “the” mathematician can be reduced to one and the same symbolic framework? Perhaps not, in line with the aforementioned view of Eddington et al. Perhaps not, due to the intellectual influence exerted by the Cambridge University mathematician Ernest Hobson (1856–1931) on a young Alan Turing.Footnote 13 While Eddington was stressing the distinction between symbolic and intimate knowledge in the 1920s, Hobson had already distanced himself from Russell on several occasions a decade earlier, including with the following words: “Mathematics is a living and growing science” and “a mathematician is a human being, not a logic-engine” (Hobson, 1910). According to Hobson, “the” mathematician did not exist, and it was impossible to specify all future actions in mathematics in one fixed language. Russell’s logicism had to give way. In a similar vein, there is historical evidence suggesting that Turing gave a lecture to philosophers at Cambridge University in 1933, in which he made it clear that Russell’s logicism had shortcomings: “… a purely logistic view of mathematics was inadequate; and … mathematical propositions possessed a variety of interpretations, of which the logistic was merely one.”Footnote 14

According to W.J. Mander, author of British Idealism: A History, Russell and his colleague G.E. Moore (1873–1958) at Cambridge University produced “celebrated arguments against idealism” and “there can be no doubting their vigour” but “it must also be appreciated that, as the opening salvos of a war they went on to win, these attacks have been remembered as more powerful and decisive than they really were, either historically or philosophically.”Footnote 15 Did Turing argue for or against idealism in the 1930s?

Britain’s philosophical landscape had been predominantly idealist at the close of the nineteenth century.Footnote 16 To presuppose that 30 years later idealism was no more to be found (in, say, Turing’s Cambridge) would be a mistake. Mander’s idealists include McTaggart, Eddington and Jeans, but not Turing, hence my proposal for fellow Turing scholars to engage with Mander, particularly in connection with his opening statement:

The movement is known as British Idealism, and here too we find a vital point of unity; a common affiliation—not to Berkeley but to Plato, Kant, and Hegel—which bound a generation together. In 1860 there were scarcely any idealists, by 1900 the majority of philosophers so designated themselves, but thirty years later they were rare again. Yet it will not do just to leave matters at that; for although they were all idealists, the philosophers to be studied were not all idealists in the same way. Indeed as the movement progressed there came to be developed a great variety of such positions, ranged against a set of outlooks they variously called empiricism, materialism, or dualism …, many incompatible with each other and some scarcely distinguishable from those of their opponents.Footnote 17

In addition to philosophical inclinations, such as those of idealism and materialism, I shall also, from now on, explicitly distinguish between two groups of historical actors: Camp A versus Camp B. According to members of Camp A, the gap between the logical world of natural laws (on the one hand) and reality (on the other hand) can be bridged. It is merely a matter of finding those laws of nature. According to Gottfried Leibniz (1646–1716), Russell in the early twentieth century and several computer scientists in recent decades, we can grasp reality symbolically, once we have developed the right symbolic logic.Footnote 18 Moreover, that symbolism is even taken to be algorithmic among computer scientists today; it is then a matter of unraveling the universe’s algorithm. According to Camp B, which includes members such as Eddington and Hobson, the gap is difficult if not impossible to bridge. Bridging it is out of the question if only one fixed, symbolic framework can be employed. Any pre-specified symbolic language will fail to capture reality in all its facets.

The specific members of Camp A will be of less concern in the remainder of this article than those of Camp B. Most computer scientists and believers in artificial intelligence are members of Camp A, while the same cannot be said of physicists and engineers.

Five historical interpretations, which I wish to put forward to historians, can now be conveyed in brief. Firstly, at the start of his university studies in Cambridge, Turing was well aware of both views A and B; moreover, just like Eddington et al. and Hobson, he was a fervent supporter of Camp B. Secondly, as author of his 1936 paper, “On computable numbers …”, Turing adhered to the agenda of Camp A, but he did not believe in that agenda. Thirdly, in later years and especially after the war, Turing championed the viewpoint of Camp B, as my discussion of Turing’s work on machine intelligence will reveal. Fourthly, unlike Turing and many physicists at the time, most computer scientists today do not consistently distinguish between both views. Computer scientists believe that all processes in the physical world are algorithmic (and thus are also logically determined). Fifthly, they wrongly thank Turing for this algorithmic outlook on the universe.

I will largely substantiate these five points in my contribution, which consists of the present article and a forthcoming exposition. The present article provides a bird’s-eye view on Turing’s developing thoughts as an idealist and his legacy in the USA to date. My forthcoming narrative provides a worm’s-eye view, revealing that Turing was an idealist both before and after World War II.

Does the word “idealist” in the previous paragraph refer to purely Platonic idealism or to Eddington’s transcendental idealism? (Perhaps the word refers to one of Mander’s many other guises of idealism.) A key difference between both mentioned forms of idealism can be conveyed by revisiting the following part of Jeans’s excerpt:

… the universe can be best pictured, although still very imperfectly and inadequately, as consisting of pure thought …

If Jeans meant to say that more research would allow scientists to acquire a complete, mathematical grip on the universe, then I take him to be a purely Platonic idealist. (The mathematics involved would have to be, say, probabilistic in order to account for the universe’s uncertainty.) However, if Jeans believed that the mathematical picture of the universe would always remain imperfect, then his philosophical position aligned more closely with Eddington’s transcendental idealism; Eddington proclaimed that symbolic knowledge was orthogonal to intimate knowledge. The latter could not be reduced to the former, not even in principle. Zooming in on Eddington’s and Jeans’s forms of idealism, Mander reports thus:

[Eddington’s] thought that the world we experience is but a symbol of some ‘more behind’ relates this idealism back to that of Carlyle, but its subjective scepticism owes more to Berkeley than either Kant or Hegel. Jeans too felt that quantum theory had brought us to a world ‘very different from the full-blooded matter and the forbidding materialism of the Victorian scientist … But in so far as he regarded the new reality revealed as mathematical—‘The Great Architect of the Universe now begins to appear as a pure mathematician’ his creation ‘more like a great thought than like a great machine’ …—his was a more purely Platonic idealism.Footnote 19

My preferred way to read Turing (anno 2023) is to view him as an idealist without further stipulation. I will not substantiate my working hypothesis that Turing was a transcendental idealist, that his philosophical position was far from consistent and that he sensed a potential inconsistency in his own Weltanschauung—which would explain why he was keen to learn from Ludwig Wittgenstein in 1939 and Dorothy Sayers in 1941.Footnote 20 Even someone of Russell's caliber was in a conceptual muddle in the 1920s, largely because of the advent of modern physics.Footnote 21 It should, therefore, come as no surprise that Turing’s position was philosophically scrutinized at the time, as well as in later years.Footnote 22

Since Andrew Hodges is Turing’s leading biographer (Daylight, 2014), I shall repeatedly rely on his detailed findings.Footnote 23 In my reading, Hodges views Turing in the main as a materialist.Footnote 24 Since I propose that Turing was an idealist, the potential novelty of my contribution is immediate.Footnote 25

2 Contentious Points in Mathematics

At the turn of the twentieth century, scholars disagreed about the consistency of specific mathematical measures taken in connection with infinitely large sets. Too much freedom in pursuing abstract mathematics easily led to contradictions.Footnote 26 It was, for example, unclear whether a mathematician was allowed to posit the existence of a set containing an infinite number of elements and subsequently use it in his mathematical argument, without providing further specifications of how the set could come about. Was it permissible for a mathematician to rely on an infinite number of throws of a fair die, when postulating the existence of a new set? Or was he always expected to specify its contents in a logically determinate manner? In Europe, Georg Cantor (1845–1918) and Godfrey Harold Hardy (1877–1947) belonged to the first group: almost anything was allowed, to put it simplistically, so long as one continued to reason consistently—but guaranteeing that consistency was precisely where the difficulty lay. Hobson belonged to the second group, which insisted on logically determined sets. Russell was at first close to Hardy but gradually slipped to a position between Hardy and Hobson. These developments mainly occurred during the first decade of the twentieth century.Footnote 27

Hobson was not opposed in principle to the idea of rigorously formalizing various branches of mathematics. He championed logical determinacy and the dictum that every infinitely large set had to be specified on the basis of a “norm,” which meant that the arbitrariness obtained with an infinite number of throws of a fair die was not permitted inside each mathematical branch.Footnote 28 Hobson’s concept of “norm” would later be refined by Turing, leading up to Alonzo Church’s concept of “algorithm,” which depended on the so-called “Turing machine” (Church, 1937). Today, the Turing machine is rightly or wrongly regarded by many computer scientists as the mathematical model of the modern computer.Footnote 29

However, Hobson did not believe that mathematics could be captured once and for all. All creative actions of all mathematicians (including those yet to be born) were not specifiable in one predetermined language. Hobson expressed his disagreement with Russell on this point in the third part of his 1910 Address to the British Association for the Advancement of Science:

It is quite impossible for me here to discuss … the interesting question of the possibility of setting up a final system of indefinables and axioms which shall suffice for all present and future developments of Mathematics. (Hobson, 1910)

Mathematics was a living organism; it evolved and could not be captured symbolically in advance, as Russell, according to Hobson, believed was possible. Hobson continued:

After all, a mathematician is a human being, not a logic-engine. … Not every great mathematician possesses in a specially high degree that critical faculty which finds its employment in the perfection of form, in conformity with the ideal of logical completeness … (Hobson, 1910)

Mathematicians had to free themselves from Russell’s logicistic shackle. In the second part of his Address, Hobson put the matter thus:

The belief is very general amongst instructed persons that the truths of Mathematics have absolute certainty, or at least that there appertains to them the highest degree of certainty of which the human mind is capable … [A] considerable amount of difference of opinion of this character exists among mathematicians at the present time. (Hobson, 1910)

While “instructed persons,” such as Russell, belonged to Camp A, Hobson was defending the views of Camp B.

Perhaps Turing reasoned in a similar vein two decades later, advocating a pragmatic take on symbolic logic (cf. “a purely logistic view … was inadequate”). If it is fair to presume that Turing was, in line with Eddington, no advocate of a Laplacian worldview (let alone of an algorithmic universe), then perhaps Turing belonged to Camp B instead of Russell’s Camp A. The irony is that, as author of his 1936 article, he merely played along in conformity with the logicistic agenda of Russell and, to be more precise, the formalism of the German mathematician David Hilbert (1862–1943).Footnote 30

3 The 1936 Article

Turing’s pre-war contributions to modern logic were connected to the Russellian-Hilbertian developments of the second and third decades of the twentieth century. The title of Turing’s 1936 article in full is: “On computable numbers, with an application to the Entscheidungsproblem.” A discussion of the Entscheidungsproblem itself lies outside the scope of this article, but the words “On computable [real] numbers” echo the aforementioned disagreements between Hardy, Russell and Hobson with regard to the infinite number of digits constituting the representation of a real number.

Technically, Hobson had articulated his 1906 position concerning the admissibility of actual infinity thus:

The process of arbitrarily choosing figures one after the other, without cessation, involves the idea of endlessness only, and this is quite distinct from the truly infinite process which can be regarded as defining a definite object. In the latter case the process regarded from outside is a completed one embodied in the law which dominates it; in the former case it is impossible to regard the process from the outside.Footnote 31

The actual infinite was permissible for Hobson, provided the “process” at hand was “dominated” by a “law.” Hobson did not believe, however, that all of mathematics could be dominated by a fixed, predefined body of laws.
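Hobson’s distinction can be made concrete with a small illustration. The following sketch (in Python; it is my own illustration, not anything Hobson or Turing wrote) contrasts a digit sequence that is “dominated” by a law with one fixed only by repeated throws of a fair die:

```python
import random

def lawful_digits(n):
    """Decimal digits of 1/7, produced by long division: a sequence
    'dominated by a law' in Hobson's sense."""
    numerator, digits = 1, []
    for _ in range(n):
        numerator *= 10
        digits.append(numerator // 7)
        numerator %= 7
    return digits

def arbitrary_digits(n):
    """Digits fixed only by repeated throws of a fair ten-sided die:
    no finite rule determines the sequence as a whole."""
    return [random.randint(0, 9) for _ in range(n)]

print(lawful_digits(10))     # always [1, 4, 2, 8, 5, 7, 1, 4, 2, 8]
print(arbitrary_digits(10))  # differs from run to run
```

The first sequence is, in Hobson’s sense, a completed object when “regarded from outside”: the short rule determines every digit in advance. The second sequence is merely endless and is never law-governed as a whole.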

Should historians view Hobson (rather than, say, Russell) as Turing’s mathematical ancestor?Footnote 32 In my reading, the “computable numbers” of Turing in 1936 were real numbers with “computable” taking on one of two connotations: an A-connotation and a B-connotation:

A. “automatic machines,” in line with Hobson’s 1906 technical exposition

  • “… there is an axiom … which expresses the rules governing the behavior of the [human] computer …”Footnote 33

B. “choice machines,” in line with Hobson’s helicopter view on mathematics as a living organism

  • “… cannot go on until some arbitrary choice has been made by an external operator. This would be the case if we were using machines to deal with axiomatic systems …”Footnote 34

The behavior of a disciplined human computer is, as expressed in case A, completely logically determined by an axiomatic recipe. In case B, however, that same computer may rely on creative—or, at any rate, non-predetermined—input from the outside. While the logicistic shackle of Russell is expressed in passage A, I suggest that Turing’s overall take on mathematics was, in line with Hobson, that “the” creative actions of “the” mathematician could not be reduced to one a priori fixed symbolic framework; that is, Turing’s brief mention of option B reveals his personal preference.
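The difference between the two machine types can be conveyed in a few lines of code. The sketch below (in Python; the table format and names are mine, not Turing’s 1936 notation) uses a single stepping routine that behaves as an automatic machine when every configuration has exactly one successor, and as a choice machine when an ambiguous configuration must be resolved by an external operator:

```python
def run(table, tape, state="q0", head=0, operator=None, steps=100):
    """Step a machine whose table maps (state, symbol) either to a single
    move (automatic machine, option A) or to a list of candidate moves,
    in which case an external operator must choose (choice machine,
    option B). A move is (symbol_to_write, head_shift, next_state)."""
    cells = dict(enumerate(tape))
    for _ in range(steps):
        entry = table.get((state, cells.get(head, " ")))
        if entry is None:                      # no applicable rule: halt
            break
        if isinstance(entry, list):            # ambiguous configuration
            entry = operator(entry)            # arbitrary external choice
        write, shift, state = entry
        cells[head] = write
        head += shift
    return "".join(cells[i] for i in sorted(cells))

# Option A: every configuration has exactly one successor.
automatic = {("q0", " "): ("1", +1, "q0")}
print(run(automatic, " ", steps=5))            # always "11111"

# Option B: the same skeleton, but an operator resolves the ambiguity.
choosy = {("q0", " "): [("0", +1, "q0"), ("1", +1, "q0")]}
print(run(choosy, " ", operator=lambda moves: moves[0], steps=5))
```

In the first table every future configuration is fixed in advance; in the second, the machine’s history depends on choices that no entry of the table predetermines.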

Following Hobson, my Turing favored a pragmatic attitude toward modern logic. He did not believe that the gap between the practices of creative mathematicians (on the one hand) and their formalization (on the other hand) could be closed by relying solely on his A-notion of computability. The ability to switch from one symbolism to another, yet-to-be-developed, framework remained inherent in human intelligence. Turing used his A-notion to demonstrate that Hilbert’s finitistic stipulations were not powerful enough to solve the Entscheidungsproblem algorithmically.Footnote 35 The adverb “algorithmically” in the previous sentence refers to the A-connotation of computability, which Turing defined in 1936 on the basis of his “automatic machines” (including his universal automatic machines). Those machines were called “Turing machines” as early as 1937 by the logician Alonzo Church. Following Church and many of his post-World War II doctoral students, computer scientists today are very familiar with Turing machines, but most of them do not know about Turing’s “choice machines” (option B).

This brings us to Turing’s repeated usage of the word “machine” in his 1936 article and his machine metaphor in general. His writings indicate that, for him, both a human being (e.g., a creative mathematician) and an actual device were, in essence, mathematical machines—or “machines” for short. However, not every such machine was an automatic machine (option A); some could be choice machines (option B) instead. Likewise, Turing did not categorically distinguish between symbolic machinery (such as his automatic machines) and energy-consuming machinery. In essence, they were all mathematical. To recapitulate, I characterize Turing as a mathematician who did not categorically differentiate between an abstract static world of universals and the concrete changing world of individuals.Footnote 36 Zooming out, it then comes as little surprise that those scholars who were tasked with interpreting and evaluating Turing’s 1936 article—including his Cambridge lecturer (Max Newman, 1897–1984) and his future doctoral supervisor at Princeton (Alonzo Church, 1903–1995)—did not act in full accordance with Turing’s thoughts. They were sympathizers, if not members, of Camp A and they were less familiar with Eddington’s physical indeterminism and British idealism in general. Turing was thus the odd one out; on reflection, it is not surprising that he often felt misunderstood. In contrast to Turing and physicists such as Jeans and Eddington, many of Turing’s colleagues in both modern logic and computer building did persistently distinguish between abstract (non-causal) objects on the one hand and causal (spatiotemporal) objects on the other hand.

4 Machine Intelligence

Immediately after World War II and a decade before the advent of artificial intelligence in the USA, Turing became involved in the design of the Automatic Computing Engine [ACE] and subsequently programmed it to advance his personal “machine intelligence” project.Footnote 37 He wanted to use a post-war engineered machine to achieve intelligent behavior. However, his theoretical impossibility result in connection with the Entscheidungsproblem, along with those of Kurt Gödel and Alonzo Church also in the 1930s, showed that every disciplined (human) computer—i.e., every “automatic machine” from 1936, or every “Turing machine” or “algorithm” from 1937—is intrinsically limited in his/its computing power. That is, every type-A computing machine comes with problems which it cannot solve autonomously, while human intelligence leads to the insight just conveyed and, therefore, seems to be capable of accomplishing more. In Turing’s words: “The human intelligence seems to be able to find methods of ever increasing power for dealing with such problems ‘transcending’ the methods available to machines.”Footnote 38 The creative actions of “the” mathematician, if there were any such notion to begin with, could never be captured with a Turing machine.
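The limitation alluded to here is usually conveyed by the diagonal argument. A compressed sketch of that argument follows (in Python; “halts” and “contrary” are hypothetical names used only for the illustration):

```python
def halts(program_source, argument):
    """Hypothetical type-A decider: would return True exactly when the
    given program, run on the given argument, eventually stops."""
    raise NotImplementedError("no automatic machine can supply this")

def contrary(program_source):
    # Do the opposite of whatever the alleged decider predicts about a
    # program applied to its own source text.
    if halts(program_source, program_source):
        while True:      # loop forever if predicted to halt
            pass
    return "halted"      # halt if predicted to loop forever
```

Feeding contrary its own source text would force halts to be wrong either way, which is why no automatic machine can decide, once and for all, what every automatic machine will do.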

To nip the (just voiced) criticism against machine intelligence in the bud, Turing turned to his second source of inspiration: Eddington’s physical indeterminism. A creative mathematician, such as Carl Gauss (1777–1855), is not a disciplined mathematician, according to Turing.Footnote 39 From the perspective of Camp A, Gauss’s actions were logically determined, despite the fact that Gauss had made mistakes in his mathematical practice—mistakes that, in Turing’s view, might even have been essential for Gauss to accomplish his mathematical feats.Footnote 40 If Gauss had been allowed to make mistakes in his mathematical work, why would the ACE machine not also be allowed to enjoy such freedom? The deployment of the ACE and other post-war machines had to be, so Turing insisted, brought in line with his B-notion of undetermined computability.Footnote 41

According to Turing, the ACE need not be perceived as an A-machine, but rather, as an “interference machine” that continually interacts with its environment, like humans do, before exhibiting determined (and hopefully intelligent) behavior. In 1948, Turing wrote:

[Man] is in frequent communication with other men, and is continually receiving visual and other stimuli which themselves constitute a form of interference … We shall now consider machines in which such interference is the rule rather than the exception.Footnote 42

A student learns a lot by making mistakes and by regularly receiving feedback from his teacher. When the student has matured intellectually and isolates himself for several hours, then (and only then) does he approximate the behavior of a foolproof 1936 automatic machine:

It will only be when the man is ‘concentrating’ with a view to eliminating these stimuli or ‘distractions’ that he approximates a machine without interference.Footnote 43

Turing’s “machine intelligence” thus referred to machines that continually interact with their environment during a learning process of sufficiently long duration. Initially, the machinery at hand can be compared to a newborn whose brain is undeveloped:

All of this suggests that the cortex of the infant is an unorganized machine, which can be organized by suitable interfering training. The organizing might result in the modification of the machine into a universal machine or something like it.Footnote 44

For example, all readers of the present article are “interference machines” (option B) and not automatic machines (option A). However, as soon as one of my readers stops interacting with her peers, she becomes an automatic machine and, possibly, a universal automatic machine that is intelligent in certain areas of discourse. Every automatic machine has intrinsic limitations, for that is what the results of Gödel, Church and Turing from the 1930s tell us. However, as long as the reader continues to collaborate with fellow readers, engage in discussion with her peers, etc., she will be intrinsically worth more than any fixed universal machine.

To implement the ACE as an “interference machine,” Turing programmed it to have one or more “undetermined” states. For any undetermined state, two different follow-up computations were possible. In such cases, the subsequent computation of the ACE was determined, in line with Turing’s Eddingtonian outlook, by the randomness inherent in nature:

When a configuration is reached for which the action is undetermined, a random choice for the missing data is made and the appropriate entry is made in the description, tentatively, and is applied.Footnote 45

Turing provided the “random” values as separate inputs to the ACE. (I presume he did this with multiple throws of a die.)

In sum, Turing implemented a learning process in which certain values (of program variables) were first “uncertain,” then became “tentative” and then either became “definite” or “uncertain” again, depending on the human-ACE interaction. Turing’s learning process relied on “random choices” from an external operator and was consistent with his B-notion of computability from 1936.
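The flavour of this uncertain, tentative, definite cycle can be conveyed with a short sketch (in Python; the names and the data layout are mine and do not reconstruct Turing’s actual ACE routines):

```python
import random

def learn_step(table, state, symbol, teacher, choices=("0", "1")):
    """One round of the cycle sketched above: a missing entry is filled
    with a random, tentative choice; external feedback then either makes
    it definite or returns it to the uncertain state."""
    key = (state, symbol)
    if key not in table:                                 # undetermined
        table[key] = (random.choice(choices), "tentative")
    action, status = table[key]
    if status == "tentative":
        if teacher(key, action):                         # interference
            table[key] = (action, "definite")            # confirmed
        else:
            del table[key]                               # uncertain again
    return action

# A teacher that only approves writing "1".
table = {}
for _ in range(5):
    learn_step(table, "q0", " ", teacher=lambda k, a: a == "1")
print(table)   # typically {('q0', ' '): ('1', 'definite')} after a few rounds
```

The random choice plays the role of Turing’s missing data, the teacher plays the role of the interfering environment, and only confirmed entries harden into definite, automatic-machine behavior.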

Turing’s preference for B-computability is at odds with what computer science students are taught today, namely that humans and computers (including yet-to-be-invented machines) will never be able to compute more functions than those that a universal Turing machine can compute. Examples of such grand statements will follow later. While the universal Turing machine is taken in computer science to cover everything that is computable, the term was far too restrictive for Turing himself. Thus, he wrote in 1948:

A man provided with paper, pencil, and rubber, and subject to strict discipline, is in effect a universal machine.Footnote 46

The emphasized words refer to limitations set out by the Hilbert Program before World War II. However, for Turing, an intelligent human being, such as the creative Gauss, was anything but disciplined, anything but a universal automatic machine.Footnote 47

Turing’s 1936 universal machine did, however, come to play an increasingly significant role in the establishment of “computer science” as a new scientific discipline in the USA.

5 Turing’s Legacy in the USA

With the end of World War II came the dawn of the computer industry. In the 1940s and early 1950s, specialists in this growing industry converted their mathematically modeled problems into machine notation, which would then serve as input for their computer. Towards the end of the 1950s, some of these specialists were instructing their machine to carry out the translation for them, i.e., from a “program” in a “programming language” into the machine’s own code, whence the words “Automatic Programming of Digital Computers” in the quoted passage below.

Researchers in automatic programming sought “machine-independent languages” that were independent of any computer manufacturer (such as IBM). It is precisely in this context that the universal Turing machine began to play a key role in the history of computing. Turing’s theoretical concept from 1936 helped a select group of American and British computer programmers to see the wood for the trees. Each tree represented a programming notation (such as FORTRAN and AUTOCODE) and the forest itself corresponded to the brand new, yet-to-be-explored territory of automatic programming (Daylight, 2015). For example, one of those involved, Andrew Booth (1918–2009), spoke the following words in 1959:

It was Turing who first enunciated the fundamental theorem upon which all studies of automatic programming are based. In its original form the theorem was so buried in a mass of mathematical logic that most readers would find it impossible to see the wood for the trees. Simply enunciated, however, it states that any computing machine which has the minimum proper number of instructions can simulate any other computing machine, however large the instruction repertoire of the latter. All forms of automatic programming are merely embodiments of this rather simple theorem and, although from time to time we may be in some doubt as to how FORTRAN, for example, differs from MATHMATIC or the Ferranti AUTOCODE from FLOW-MATIC, it will perhaps make things rather easier to bear in mind that they are simple consequences of Turing’s theorem.Footnote 48

The italicized text expresses a fundamental insight of contemporary computer science. All kinds of developed programming notations are essentially equivalent to one another. At best, they can match the mathematical computing power of a universal Turing machine, but never exceed it. Computer scientists have since generalized this statement further, to yet-to-be-invented programming languages and computers, as the following words by David Harel from 1992 suggest:

[A]ny algorithmic problem for which we can find an algorithm that can be programmed in some programming language, any language, running on some computer, any computer, even one that has not been built yet but can be built … is also solvable by a Turing machine.Footnote 49

The previous quotations by Booth and Harel shed light on how Russell’s pre-war logicism became algorithmic throughout the second half of the twentieth century, with “algorithmic” referring to the universal Turing machine. Computer scientists (including Harel) take it for granted that the universal Turing machine covers the full extent of computability. They regard the universal Turing machine as the most suitable model for all kinds of products designed by engineers (iPhones, laptops, desktops, etc.). Engineers, on the other hand, are trained to deal with mathematical models in a completely different way. They rely on a multitude of models for each engineered product, and each model they employ has both advantages and disadvantages compared to any other model they use (Daylight, 2016, 2021).
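The simulation idea that Booth and Harel appeal to can be conveyed with a toy interpreter (in Python; the three-instruction machine is invented for the illustration and is not the formal universal machine construction): one machine executes another machine that is supplied purely as data.

```python
def simulate(program, data):
    """Execute a machine that is supplied purely as data: a list of
    instructions for a tiny, invented three-instruction computer."""
    acc, pc = 0, 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "ADD":                # add a data cell to the accumulator
            acc += data[arg]
        elif op == "JUMP_IF_ZERO":     # conditional jump to instruction arg
            if acc == 0:
                pc = arg
                continue
        elif op == "HALT":
            break
        pc += 1
    return acc

# The 'other machine' is just data handed to the interpreter.
adder = [("ADD", 0), ("ADD", 1), ("HALT", None)]
print(simulate(adder, [2, 3]))   # 5
```

Because the simulated machine is just data, a different program, in a different notation, can be handed to the same interpreter; this is the sense in which the various notations mentioned by Booth collapse into “simple consequences of Turing’s theorem.”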

Turing had gained recognition in the USA by the late 1950s, after his death in 1954. Initially, he was regarded by Booth and like-minded specialists as the intellectual father of automatic programming. A few decades later, he was even proclaimed to be the inventor of the modern computer (Bullynck et al., 2015). Over the past 10 years, historians have come to query this claim (Corry, 2017; Daylight, 2012; Mounier-Kuhn, 2012; Price, 2021), much in line with the sequel to Booth’s 1959 introduction:

Why was it, then, that Turing’s original work, finished in 1937 before any computing machine of modern type was available, assumed importance only some years after machines were in common use? The reasons, I think, stem entirely from the historical development of the subject.Footnote 50

According to Booth, the very first “computing machines” from the 1940s and early 1950s were “almost exclusively [used] by their constructors”Footnote 51 and thus by users who did not abstract from their machine:

… and, hence, by people who were intimately aware of their internal construction. It took some years before the machines were used for scientific applications, devised by people who were and wanted to remain ignorant of the machine itself and, hence, had to rely on automatic programming techniques.Footnote 52

The power of the universal Turing machine lay, according to Booth, in its abstraction, which only became relevant once applied mathematicians, who were not computer builders, began to register en masse as users of the new technology.

In addition to Turing’s posthumous recognition among a select but influential group of American and British computer programmers, I mention in passing that Turing himself came to realize much earlier that one general-purpose machine, such as the ACE, was sufficient to perform the tasks of several special-purpose machines. Andrew Hodges asks whether Turing was the first person ever to appreciate the programmable nature of modern machines in this manner. Not so, as it turns out.Footnote 53 A century earlier, Charles Babbage (1791–1871) had already had a penetrating view on essentially the same matter (Daylight, 2014). Yet neither Babbage’s work nor Turing’s theoretical work fundamentally influenced the construction of the first generation of post-war computers.Footnote 54 Moreover, many computing professionals came to Turing’s insight in a practical way, not via modern logic.Footnote 55

6 Artificial Intelligence

Turing’s 1936 article helped establish computer science as an academic discipline in the USA. Only a few years after Booth’s speech did American universities begin to offer “computer science” curricula, based on the theoretical Turing machine (Daylight, 2015). In 1966, the first “Turing Award” was presented to Alan Perlis (1922–1990), a specialist in automatic programming. Yet the name “Alan Turing” only gained popularity gradually through the last quarter of the twentieth century, if not later.Footnote 56

There is no indication that Booth and Perlis read Turing’s 1936 article in full, let alone understood it according to Turing’s original line of thought. Rather, their insights came from recast analyses of the early 1950s, written by former doctoral students of Alonzo Church. However, that literature, too, was only partially intelligible. The computer programmer was not a logician; conversely, the average logician knew very little about computers.

Besides Booth and Perlis, John Carr (1923–1997) appropriated modern Turing machinery as a computer programmer. Various computing concepts, such as “simulation,” acquired a theoretical underpinning. In Carr’s words from 1959:

If one universal machine can simulate any other machine of a somewhat smaller storage capacity (which is what Turing’s statement on universal machines means), it should therefore be possible for a computer to simulate a version of itself with a smaller amount of storage.Footnote 57

With the germination of academic computer science came algorithmic thinking. Every physical phenomenon could, according to Carr, be grasped symbolically via the underlying Turing machine. In Carr’s words:

Based on Turing’s proof about universal machines:

  1. Living organisms can be abstractly defined as [a] symbol manipulator.

  2. Actions of living beings can be described by a program.

  3. Digital computers have all the features of Universal Turing Machines.

  4. Digital computers can duplicate human beingsFootnote 58

These powerful words, coming from the President of the Association for Computing Machinery, helped build up a support base for American artificial intelligence. The world was algorithmically controllable, including the natural and artificial beings who lived or would come to live in it.

Carr’s view on artificial intelligence is consistent with Turing’s A-notion of computability and Russell’s imaginary bridge, linking the world of physical processes to a logical space. Note, however, that Russell’s logical space now consisted solely of computer programs. Fast forward to the present and we see Carr’s A-notion of artificial intelligence reigning, as the following words by theoretical computer scientist, Scott Aaronson, vividly illustrate:

I was lazily relying on the fact that everyone in the room already agreed with me—that ... it was simply self-evident that the human brain is nothing other than a “hot, wet Turing machine,” and weird that I would even waste … time with such a settled question. Since then, I think I’ve come to a better appreciation of the immense difficulty of these issues—and in particular, of the need to offer arguments that engage people with different philosophical starting-points than one’s own.Footnote 59

The last sentence presumably refers to scientists in established disciplines, such as physics, biology and geology. In these sciences, the prevailing idea is that nature contains randomness, which is something that a “hot, wet Turing machine” cannot capture (by its very design). Despite Aaronson’s seemingly more cautious stance in the second part of the previous passage, his public appearances confirm that he is a die-hard computer scientist: in principle, any physical process can be grasped intellectually with a computer program (or a Turing machine).Footnote 60

Yet, with mainstream thinking and discipline building comes freethinking. There are plenty of historical players who have come to challenge the Carr-Aaronson establishment, including Carl Hewitt and Giuseppe Longo, whose work I will briefly discuss below. Other freethinkers, whose writings will not be reviewed here, include Carl Petri, Peter Wegner, Dina Goldin, Jan van Leeuwen, B. Jack Copeland, Oron Shagrir, Mark Burgin, Graham Priest and Edward A. Lee.

7 Freethinkers

In the 1970s, Carl Hewitt began to take issue with the universal Turing machine as an overarching theoretical concept. Contrary to the classical, functional view of computability, he described “computation” as a “cooperating society of ‘little men’ each of whom can address others with whom it is acquainted and politely request that some task be performed.”Footnote 61 Hewitt viewed the real world, including the world of interrupts, messages (emails) and computer networks, as intrinsically indeterminate. The following words from his former doctoral student, Gul Agha, nicely sum up Hewitt’s B-view:

In any real network of computational agents, one cannot predict precisely when a communication sent by one agent will arrive at another. ... Therefore a realistic model must assume that the arrival order of communications sent … is physically indeterminate.Footnote 62

Hewitt’s modeling language does not abstract away the “physically indeterminate” character which engineers of distributed systems consider in their work. From Hewitt’s perspective, Russell’s logical atomism was futile:

The actor paradigm [of Carl Hewitt] stands in sharp contrast to the logicist approach, which not only postulates the existence of a unique reality, but commits us to representing our knowledge in terms of a consistent collection of information (Agha, 1987).

Hewitt’s postmodernist attitude to symbolic logic is largely unexplored territory in present-day computer science.Footnote 63 The gist is that while Hewitt’s actor paradigm does not guarantee algorithmic control in the Carr-Aaronson spirit, it has nonetheless served the software industry well.Footnote 64
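Agha’s point about arrival order can be illustrated with a few lines of ordinary concurrent code (in Python, using plain threads and a queue as stand-ins for Hewitt’s actors; the names are mine):

```python
import queue
import random
import threading
import time

# Several senders post requests to one mailbox; the arrival order is not
# fixed by the program text but by the physical timing of the run.
mailbox = queue.Queue()

def sender(name):
    time.sleep(random.uniform(0, 0.01))     # unpredictable transmission delay
    mailbox.put(f"request from {name}")

threads = [threading.Thread(target=sender, args=(n,)) for n in ("A", "B", "C")]
for t in threads:
    t.start()
for t in threads:
    t.join()

while not mailbox.empty():
    print(mailbox.get())                    # order may differ on every run
```

Nothing in the program text fixes the order in which the three requests reach the mailbox; that order is settled by the timing of the run, which is exactly the indeterminacy Hewitt refuses to abstract away.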

Theoretical computer scientist Longo also abandoned the standard view. In 1995, he wrote the reverse of what was posited above by Aaronson:

Nobody seems to doubt that our brain is a massively parallel, distributed, interactive device, even though a few still try to reduce it to Turing machines and claim that, “in principle”, any finite piece of the world should be fully describable by symbolic manipulations.Footnote 65

The last words (“any finite piece of …”) summarize the most extreme view of members of Camp A, a camp to which Longo himself had initially belonged alongside Carr and Aaronson.Footnote 66 To express his dismay, Longo recently wrote a letter to the late Turing, denouncing the notion that “everything is computational” and querying “the myth of the universe as a Turing Machine, against your very precise observations,” referring to Turing’s insights, which Longo shares.Footnote 67 Subsequently, Longo pointed to his colleagues in computer science “who are using the only technique that they know … flattening it onto a universe … made only of formal calculations,” and expressed agreement with Turing’s position that there is no reason to regard the universal Turing machine as the limit of computability:

...... as if yours is the last machine that man will be capable of inventing ... I am convinced that we shall invent others ...Footnote 68

Engineering is, according to Longo, not limited by Turing’s 1936 theory.

In a similar vein, Andrew Hodges has recently suggested that, for Turing, the human brain was anything but a hot, wet Turing machine (Copeland et al., 2017). Specifically, Hodges wrote in 2012:

[Turing] was also one of the first to use a computer for simulating physical systems. In 1951, however, Turing gave a radio talk with a different take on this question, suggesting that the nature of quantum mechanics might make simulation of the physical brain impossible.Footnote 69

In this article, I have proposed to interpret Turing as a member of Camp B from the very start of his university studies. According to Eddington, Turing, Hewitt, Longo and other B-members, the gap between brain processes and symbolic logic is difficult, if not impossible, to bridge. It is definitely not feasible if one can only resort to Turing machinery.

8 Conclusions

For Turing, his 1936 impossibility result did not, in general, apply to human mathematicians or to actual, programmable devices. His automatic machines had only served to formally capture the notion of a disciplined human computer, in line with Russellian and Hilbertian intellectual developments in the first third of the twentieth century, and to reveal their intrinsic limitations. After the war, Turing wanted to use the ACE machine so that it would resemble a creative mathematician rather than his automatic machinery from 1936. Hence, he turned to Eddington’s physical indeterminism—i.e., to “random” inputs for the ACE—to provide room for the making of errors, akin to those made by a child that learns.

Despite Turing’s preference for his broader B-notion of computability, computer scientists today follow a neo-Russellian tenet; that is, they look at actual computing devices through algorithmic glasses, in compliance with Turing’s A-notion of computability.Footnote 70 While computer science takes Turing’s universal machine as the limit of all achievable forms of computability, that machine was explicitly perceived as banal by Turing in 1948. From his perspective, a creative person is significantly more than that which a fixed symbolic logic or a universal Turing machine can offer. Human creativity cannot be fully captured by a series of logical rules, that is, by a program text written in a programming language. Conceptually, Turing’s machine intelligence thus differed significantly from American artificial intelligence, with which many readers are more familiar.

In the decades since his death in 1954, Turing’s universal machine has become the quintessential model of the modern computer and, by extension, of every process that physics has to offer. In my present contribution I provide reasons to suggest that Turing would have challenged the dictum that the computing power of any yet-to-be-invented machine is, a priori, limited by that of a universal Turing machine. In contrast to Turing, computer scientists generally regard the universal Turing machine as the most suitable model for all kinds of engineered products (e.g., iPhones, laptops, desktops), presumably because program productivity hinges on a digital (Turing machine) abstraction of physical reality. All programming languages are disguised universal Turing machines, according to the theoretical computer scientist. In retrospect, then, it is unsurprising that computer science portrays Turing as the intellectual father, and even as the inventor, of the modern computer.

A detailed chronology pertaining to several episodes in the intellectual life of the true Turing is forthcoming. This article already provides evidence to support the claim that Turing, himself, did not view computations in the real world to be exhaustively characterized by his automatic machines from 1936. The Turing machine is dead. Long live Turing!