Passing Loebner’s Turing Test: A Case of Conflicting Discourse Functions

  • Sean Zdenek
Chapter
Part of the Studies in Cognitive Systems book series (COGS, volume 30)

Abstract

This paper argues that the Turing test is based on a fixed and de-contextualized view of communicative competence. According to this view, a machine that passes the test will be able to communicate effectively in a variety of other situations. But the de-contextualized view ignores the relationship between language and social context, or, to put it another way, the extent to which speakers respond dynamically to variations in discourse function, formality level, social distance/solidarity among participants, and participants’ relative degrees of power and status (Holmes, 1992). In the case of the Loebner Contest, a present-day version of the Turing test, the social context of interaction can be interpreted in conflicting ways. For example, Loebner discourse is defined (1) as a friendly, casual conversation between two strangers of equal power, and (2) as a one-way transaction in which judges control the conversational floor in an attempt to expose contestants that are not human. This conflict in discourse function is irrelevant so long as the goal of the contest is to ensure that only thinking, human entities pass the test. But if the function of Loebner discourse is to encourage the production of software that can pass for human on the level of conversational ability, then the contest designers need to resolve this ambiguity in discourse function, and thus also come to terms with the kind of competence they are trying to measure.

Key words

communicative competence, cooperative principle, discourse function, Grice, linguistic politeness, Loebner Contest, Turing test

References

  1. Brown, P. and Levinson, S.C. (1987), Politeness: Some Universals in Language Use, Reissue, Cambridge, UK: Cambridge University Press.
  2. Collins, H.M. (1997), ‘Rat-tale: Sociology’s contribution to understanding human and machine cognition’, in P.J. Feltovich, K.M. Ford, and R.R. Hoffman, eds., Expertise in Context: Human and Machine, Menlo Park, CA: AAAI Press, pp. 293–311.
  3. Collins, H.M. (1993), ‘The Turing test and language skills’, in G. Button, ed., Technology in Working Order: Studies of Work, Interaction, and Technology, London, UK: Routledge, pp. 231–245.
  4. Crawford, C. (1994), Letter in response to Shieber’s ‘Lessons from a restricted Turing test’ and Loebner’s ‘In response’, Communications of the ACM 37.9, pp. 13–14.
  5. Culpeper, J. (1996), ‘Towards an anatomy of impoliteness’, Journal of Pragmatics 25, pp. 349–367.
  6. Dennett, D.C. (1985), ‘Can machines think?’, in M. Shafto, ed., How We Know, San Francisco, CA: Harper & Row, pp. 121–145.
  7. Epstein, R. (1992), ‘The quest for the thinking computer’, AI Magazine 13.2, pp. 81–95.
  8. Garfinkel, H. (1972), ‘Studies of the routine grounds of everyday activities’, in D. Sudnow, ed., Studies in Social Interaction, New York, NY: The Free Press, pp. 1–30.
  9. Grice, H.P. (1991), ‘Logic and conversation’, in S. Davis, ed., Pragmatics: A Reader, Oxford, UK: Oxford University Press, pp. 305–315.
  10. Goffman, E. (1967), Interaction Ritual: Essays on Face-to-Face Behavior, New York, NY: Anchor Books.
  11. Holmes, J. (1992), An Introduction to Sociolinguistics, London, UK: Longman.
  12. Kasper, G. (1990), ‘Linguistic politeness: Current research issues’, Journal of Pragmatics 14, pp. 193–218.
  13. Lakoff, R.T. (1989), ‘The limits of politeness: Therapeutic and courtroom discourse’, Multilingua 8(2/3), pp. 101–129.
  14. Leech, G. (1983), Principles of Pragmatics, London, UK: Longman.
  15. Loebner, H. (1994), ‘In response’, Communications of the ACM 37.6, pp. 79–82. [http://pascal.acm.org/~loebner/In-response.html] (20 July 1999).
  16. Mauldin, M. (1994), ‘Chatterbots, Tinymuds, and the Turing test: Entering the Loebner Prize Competition’, in Proceedings of AAAI-94. [http://www.fuzine.com/m1m/aaai94-Slides.html] (27 Aug. 1999).
  17. Moor, J. (1976), ‘An analysis of the Turing test’, Philosophical Studies 30, pp. 249–257.
  18. Platt, C. (1995), ‘What’s it mean to be human, anyway?’, Wired 3.04. [http://www.hotwired.com/collections/robots_ai/3.04_smart_machines_pr.html] (27 Aug. 1999).
  19. Powers, D. (1999), ‘1999 Loebner Prize Competition’, [http://www.cs.flinders.edu.au/research/AI/LoebnerPrize/] (6 Oct. 1999).
  20. Quan, T. (1997), ‘Machine language’, Salon 21st (May). [http://www.salon.com/may97/21st/article970515.html] (22 Feb. 2000).
  21. Rees, R. (1994), Letter in response to Shieber’s ‘Lessons from a restricted Turing test’ and Loebner’s ‘In response’, Communications of the ACM 37.9, p. 13.
  22. Shieber, S. (1994), ‘Lessons from a restricted Turing test’, Communications of the ACM 37.6, pp. 70–78. [http://www.eecs.harvard.edu/shieber/papers/loebner-rev-html/loebner-rev-html.html] (29 Aug. 1999).
  23. Turing, A.M. (1950), ‘Computing machinery and intelligence’, Mind LIX.236, pp. 433–460.
  24. Weizenbaum, J. (1966), ‘ELIZA — A computer program for the study of natural language communication between man and machine’, Communications of the ACM 9.1, pp. 36–45.

Copyright information

© Springer Science+Business Media Dordrecht 2003

Authors and Affiliations

  • Sean Zdenek
  1. Department of English, Program in Rhetoric, USA
