
Abstract

Two important articles by Alan Turing are discussed: On Computable Numbers, with an Application to the Entscheidungsproblem (1936) and Computing Machinery and Intelligence (1950). The second article demonstrates Turing's conviction that the "universal machine" has unlimited possibilities for imitating human intelligence. But, paradoxically, the first article points out the limitations of such machines. This gap between the abilities of machines and the intelligence of humans can be traced throughout the development of computer technology.

Keywords

System Engineer, Turing Machine, Professional Knowledge, Turing Test, Universal Machine


Notes

1. Turing, AM (1937) On computable numbers, with an application to the Entscheidungsproblem. Proc London Math Soc (2), 42:230–265.
2. Gödel, Kurt (1986) Collected works, vol 1, publications 1929–1936. Oxford University Press, p 136.
3. Karlqvist, Anders (1984) Om Skapande Improvisation — några reflektioner utifrån matematikens perspektiv. In: Per Sällström (ed) Funderingar kring VETENSKAP & MUSIK. Royal Academy of Music series no. 44, Stockholm, p 14.
4. The author Göran Printz-Påhlson gave a penetrating portrayal in his poem "Turingmaskin", published in Säg Minns Du Skeppet Refanaut? Bonniers, 1984, p 96.
5. Whitemore, Hugh (1988) Enigmakoden. Royal Dramatic Theatre, Stockholm (translated into Swedish by Per-Erik Wahlund), p 26ff. The play is based on Alan Turing: the enigma of intelligence, by Andrew Hodges, Counterpoint, Unwin Paperbacks, 1983, and is published in English as Breaking the Code by Hugh Whitemore, Amber Lane Press, 1987. A Swedish radio programme on Alan Turing was broadcast in the series "Vetandets värld" on Programme 1 on 19 July 1988. It was called "Jag vill bygga en hjärna" (I want to build a brain), and presented the ideas contained in the Andrew Hodges biography of Alan Turing.
6. As above, p 26f. Translator's note: The parts of the quote in brackets do not appear in the English text of the play; they are translated from the Swedish version.
7. Wiener, Norbert (1961) Cybernetics, or control and communication in the animal and the machine, 2nd edn. MIT Press/Wiley, p 23.
8. Rosenblueth, Arturo, Wiener, Norbert and Bigelow, Julian (1943) Behavior, purpose and teleology. Philosophy of Science 10:18–24. The interest is focused on a characteristic of teleology or, in other words, "appropriate behaviour". This requires the term behaviour to be classified. In this classification "teleological" is used as synonymous with "intention controlled through negative feedback". This means that a given goal gradually influences a course of events, with the aim of achieving the goal.
9. As above, pp 18–24.
10. John McCarthy coined the phrase "Artificial Intelligence" as a heading for the first research seminar at Dartmouth College, USA, in 1956. Those present included Marvin Minsky, Allen Newell and Herbert Simon. See Pratt, Vernon (1987) Thinking machines: the evolution of artificial intelligence. Basil Blackwell, Oxford, pp 203, 215. See also Bolter, J David (1984) Turing's man: Western culture in the computer age. Duckworth, London, p 193.
11. Wiener, Norbert (1952) Materia, Maskiner, Människor. Cybernetiken och Samhället. Forum.
12. Early in his research programme Wiener established contacts with the prominent social anthropologists Margaret Mead and Gregory Bateson. See Wiener (1961) p 18.
13. Dreyfus, Hubert L (1979) What computers can't do: the limits of artificial intelligence. Harper Colophon Books.
14. Turing, AM (1960) Can a machine think? Published in Swedish in Sigma vol 6, Forum.
15. Bolter (1984) p 12.
16. Whitemore (1988) p 56. Compare the following quote: "We may hope that finally machines will compete with humans in all purely intellectual areas. But which areas is it best to start with? That too is hard to determine. Many people think that some very abstract activity, such as playing chess, would be the best. It may also be asserted that the best thing would be to equip the machine with the best possible sensory organs and then teach it to understand and speak English. This would be the normal teaching process for children. One could point at objects and ask the machine to name them etc. I do not know which is the best solution, but one should attempt them both." From Turing (1960) Can a machine think? Forum, p 227.
17. Hodges, Andrew (1987) Turing's conception of intelligence. In: Gregory RL, Marstrand PK (eds) Creative intelligences. Francis Pinter, London, p 84. The English philosopher AJ Ayer discusses this paradox of Turing's in the preface to Bolter (1984) p XI. The most remarkable thing in this context is, however, Kurt Gödel's refutation of Alan Turing's article Can a machine think? Gödel: "Expressing opposition to Turing's mechanistic view of mind", Gödel (1986) p 25.
18. Turing (1960) Forum, p 2207f.
19. In his article, Turing refers to Charles Babbage when he says that the idea of computers is an old one. Charles Babbage, professor of mathematics at Cambridge from 1828 to 1839, planned such a machine, the so-called analytical engine, but it was never completed. Even though Babbage had understood the basic principles, at the time his design did not look particularly attractive. Turing also says that Babbage's analytical engine, being exclusively mechanical in its working, helps us shake off the common prejudice of placing great importance on the fact that modern calculating machines are electrical, as is the human nervous system. As Babbage's machine was not electric, and as, seen logically, all computers are equivalent, we realize that whether we use electricity or not can have no theoretical significance. In the nervous system, chemical phenomena are at least as important as electrical phenomena. J David Bolter writes: "The artificial intelligence specialist is not interested in imitating the whole man. The very reason he regards intelligence (rational 'problem solving') as fundamental is that such intelligence corresponds to the new and compelling qualities of electronic technology. Today, as before, technology determines what part of the man will be imitated." Bolter (1984) p 213.
20. Neumann, John von, En allmän och logisk teori för automater. In: Sigma vol 6, p 2194.
21. Descartes formulated a robust version of the Turing test, which we discussed in the previous chapter. The Cartesian test is as follows: before it can be judged to be intelligent, a machine must be capable of linguistic actions and of sensible actions independent of the programmer. Descartes arrived at a completely different conclusion from Turing's: the difference between a human and an animal-machine is that, because he has a language, a human is able to develop his thinking and his way of formulating concepts.
22. Bolter (1984) p 13.
23. Searle, John (1988) Kognitivism och datormetaforer. In: Dialoger, no. 7–8, Artificial stupidity.
24. Buttimer, Anne (1983) Creativity and context. Lund Studies in Geography, Human Geography no. 50. Royal University of Lund, Department of Geography, p 17.
25. Sällström, Pehr (1987) Editorial comments in Dialoger magazine, no. 5, Artificiell intelligens, p 4.
26. Weizenbaum, Joseph (1976) Computer power and human reason: from judgment to calculation. Freeman, San Francisco.
27. As above, p 181.
28. Buttimer (1983) pp 14–15. See also the editorial comments in Dialoger no. 1, Dialogens väsen, and Dennett, Daniel (1984) The role of the computer metaphor in understanding the mind. In: Pagels HR (ed) Computer culture: the scientific, intellectual and social impact of the computer. Annals of the New York Academy of Sciences 426:274. The crossing of boundaries in this way was noted by Aristotle who, in Ethica Nicomachea, states that a sign of an educated person is that he only attempts to achieve the degree of precision in each subject that the nature of the subject permits. Using more exact terms than the subject permits may lead to a false description of reality. This theme is also discussed in Degerblad, Jan-Eric (1988) Planering och yrkeskultur. Council for Building Research, and in Göranzon, Bo (1985) Bildning vid systemutveckling. En förståelse av den mänskliga dialogens karaktär. In: Ahlin J (ed) Konsekvenser för industri- och arbetsmiljöplanering av ny informationsteknologi, Projektrapport 3. Department of Architecture, Stockholm Institute of Technology, pp 101–120. This essay is a comment on Systems development: a presentation of four different views. Development Programme for New Technology, Work Organization and Work Environment, Work Environment Fund, 1984. See also Göranzon, Bo (1991) The Practical Intellect. Unesco, Paris, and Springer-Verlag, London.
29. It may be of interest to introduce a distinction between two main categories of computer applications. One group comprises software for simple calculations of the kind previously done by hand, while the other comprises software intended for problems for which no manual calculation methods existed, or for which such methods are too lengthy and cumbersome. The two groups thus aim at either a quantitative or a qualitative improvement in the competence of an occupational group. Folke Peterson, professor of heating, water and sanitation technology at the Stockholm Institute of Technology, who introduced this distinction, says that using software to improve qualitative competence requires the staff to have a high degree of professional knowledge (in the case of heating, water and sanitation technology, a good engineering qualification) and the ability to analyse the results of the calculations. Peterson puts particular emphasis on the second of these two factors. Without a profound knowledge of the physical processes that underlie the software (the models), and without the ability to analyse the calculations, the user will obtain virtually none of the information it is possible to get. In addition to a thorough knowledge of heating, water and sanitation technology, the technicians of the future in this field will have to have a strong aptitude for analysis. The main use of the computer is to perform complex technical calculations.
30. This quote is from Göranzon, Bo (ed) (1983) Datautvecklingens Filosofi. Tyst kunskap och ny teknik. Carlssons, p 46.
31. As above, p 48.
32. As above, p 48.
33. See, for example, Janik, Allan (1988) Reflexioner över teknologi, konflikter, medborgarskap och mod. In: Dialoger no. 7–8, p 44.

Copyright information

© Springer-Verlag Berlin Heidelberg 1991

Authors and Affiliations

  • Bo Göranzon

