AI & SOCIETY

Volume 31, Issue 1, pp 5–15

Effects of lying in practical Turing tests

Original Article

Abstract

Interpretation of utterances affects an interrogator’s determination of human from machine during live Turing tests. Here we consider transcripts from a series of practical Turing tests held on 23 June 2012 at Bletchley Park, England. The focus of this paper is the effect that lying and truth-telling by the hidden entities, whether human or machine, have on the human judges. Turing test transcripts provide a glimpse into short text communication of the kind that occurs in emails: how does a reader determine truth from the content of a stranger’s textual message? Different types of lying in the conversations are explored, and the judge’s attribution of human or machine is examined in each test.

Keywords

Deception detection · Hidden human interviewer · Lying · Machine · Truth · Turing test


Copyright information

© Springer-Verlag London 2014

Authors and Affiliations

  1. School of Systems Engineering, University of Reading, Reading, UK
