Bringing Up Turing’s ‘Child-Machine’

  • Susan G. Sterrett
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7318)


Turing wrote that the “guiding principle” of his investigation into the possibility of intelligent machinery was “The analogy [of machinery that might be made to show intelligent behavior] with the human brain.” [10] In his discussion of the investigations that Turing said were guided by this analogy, however, he employs a more far-reaching analogy: he eventually expands the analogy from the human brain out to “the human community as a whole.” Along the way, he takes note of an obvious fact in the bigger scheme of things regarding human intelligence: grownups were once children; this leads him to imagine what a machine analogue of childhood might be. In this paper, I’ll discuss Turing’s child-machine, what he said about different ways of educating it, and what impact the “bringing up” of a child-machine has on its ability to behave in ways that might be taken for intelligent. I’ll also discuss how some of the various games he suggested humans might play with machines are related to this approach.


Keywords: Machine analogue · Human community · Central character · Education process · Computing machinery




References

  1. BCS Computer Conservation Society: Recollections of Early AI in Britain: 1942–1965 (2002); video for the BCS Computer Conservation Society’s October 2002 Conference on the history of AI in Britain; transcript (downloaded March 25, 2012)
  2. Floyd, J.: Turing, Wittgenstein, and Types: Philosophical Aspects of Turing’s ‘The Reform of Mathematical Notation and Phraseology’ (1944–1945). In: Cooper, S.B., van Leeuwen, J. (eds.) Alan Turing – His Work and Impact. North-Holland/Elsevier (to appear)
  3. James, W.: The Principles of Psychology, vol. I. Henry Holt and Company, New York (1890)
  4. Michie, D.: Trial and Error. Science Survey, part 2, 129–145 (1961)
  5. Quinn, N.: Cultural Selves. Annals of the New York Academy of Sciences 1001, 145–176 (2003)
  6. Sterrett, S.G.: Too Many Instincts: Contrasting Philosophical Views on Intelligence in Humans and Non-Humans. Journal of Experimental and Theoretical Artificial Intelligence 14(1), 39–60 (2002); reprinted in: Ford, K., Glymour, C., Hayes, P. (eds.) Thinking About Android Epistemology. MIT Press (2006)
  7. Sterrett, S.G.: Nested Algorithms and the Original Imitation Game Test: A Reply to James Moor. Minds and Machines 12, 131–136 (2002)
  8. Sterrett, S.G.: Turing’s Two Tests for Intelligence. Minds and Machines 10, 541–559 (2000); reprinted in: Moor, J.H. (ed.) The Turing Test: The Elusive Standard of Artificial Intelligence. Kluwer Academic (2003)
  9. Turing, A.M.: Computing Machinery and Intelligence. Mind 59, 433–460 (1950)
  10. Turing, A.M.: Intelligent Machinery. In: Ince, D.C. (ed.) Mechanical Intelligence, Collected Works of A. M. Turing, pp. 107–127. North-Holland (1948/1992)
  11. Wikipedia contributors: To Tell the Truth. Wikipedia, The Free Encyclopedia (January 27, 2012)

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Susan G. Sterrett
  1. Department of Philosophy, Carnegie Mellon University, Pittsburgh, United States of America
