
Prevarication and the Polygraph

Can Computers Detect Lies?

Chapter in:
How Algorithms Create and Prevent Fake News

Abstract

Wouldn’t it be nice if we could take a video clip of someone talking and apply AI to determine whether or not they’re telling the truth? Such a tool would have myriad applications, including helping in the fight against fake news: a dissembling politician giving a dishonest speech would immediately be outed, as would a conspiracy theorist knowingly posting lies on YouTube. With the remarkable progress in deep learning in recent years, why can’t we just train an algorithm by showing it lots of videos of lies and videos of truth and have it learn which is which based on whatever visual and auditory clues it can find? In fact, for the past fifteen years, people have been trying this 21st-century algorithmic reinvention of the polygraph. How well it works and what it has been used for are the main questions explored in this chapter. To save you some suspense: this approach would create almost as much fake news as it would prevent—and claims to the contrary by the various companies involved in this effort are, for lack of a better term, fake news. But first, I’ll start with the fascinating history of the traditional polygraph to properly set the stage for its AI-powered contemporary counterpart.


Notes

  1. “The Polygraph and Lie Detection,” National Research Council, 2003: https://www.nap.edu/catalog/10420/the-polygraph-and-lie-detection.

  2. Kenneth Weiss, Clarence Watson, and Yan Xuan, “Frye’s Backstory: A Tale of Murder, a Retracted Confession, and Scientific Hubris,” Journal of the American Academy of Psychiatry and the Law, June 2014, Vol. 42, no. 2, pages 226–233: http://jaapl.org/content/42/2/226.

  3. Did you catch that? If it sounded too much like a line by Dr. Seuss, let me try again: if Marston could show that Frye was not lying about this failed scheme, then the detectives would be obliged to accept the retraction of the confession and enter Frye’s plea of innocence.

  4. https://www.law.cornell.edu/rules/fre/rule_702.

  5. Mark Harris, “The Lie Generator: Inside the Black Mirror World of Polygraph Job Screenings,” Wired, October 1, 2018: https://www.wired.com/story/inside-polygraph-job-screening-black-mirror/.

  6. See Footnote 5.

  7. “Use of Polygraphs as ‘Lie Detectors’ by the Federal Government,” H. Rep. No. 198, 89th Cong., 1st Sess.

  8. https://www.youtube.com/watch?v=bJ6Hx4xhWQs.

  9. See Footnote 5.

  10. See Footnote 5.

  11. Jay Stanley, “How Lie Detectors Enable Racial Bias,” ACLU blog, October 2, 2018: https://www.aclu.org/blog/privacy-technology/how-lie-detectors-enable-racial-bias.

  12. This name is Latin for “with truth,” though perhaps an unfortunate choice in our current time of coronavirus pandemic.

  13. For book-length treatments of the topic, I recommend Cathy O’Neil’s 2016 New York Times best seller Weapons of Math Destruction and Virginia Eubanks’s 2018 title Automating Inequality.

  14. Mark Harris, “An Eye-Scanning Lie Detector Is Forging a Dystopian Future,” Wired, December 4, 2018: https://www.wired.com/story/eye-scanning-lie-detector-polygraph-forging-a-dystopian-future/.

  15. See Footnote 14.

  16. “NIST Study Evaluates Effects of Race, Age, Sex on Face Recognition Software,” NIST, December 19, 2019: https://www.nist.gov/news-events/news/2019/12/nist-study-evaluates-effects-race-age-sex-face-recognition-software.

  17. This was true historically, and it might be even more true today because of “predictive policing,” which is one of the most devastating known instances of a pernicious data-driven algorithmic feedback loop. See, e.g., Karen Hao, “Police across the US are training crime-predicting AIs on falsified data,” MIT Technology Review, February 13, 2019: https://www.technologyreview.com/2019/02/13/137444/predictive-policing-algorithms-ai-crime-dirty-data/, and Will Heaven, “Predictive policing algorithms are racist. They need to be dismantled.” MIT Technology Review, July 17, 2020: https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/.

  18. See Footnote 14.

  19. Eliza Sanders, “The Logistics of Lie Detection for Trump,” Converus blog, April 5, 2019: https://converus.com/blog/the-logistics-of-lie-detection-for-trump/.

  20. Jake Bittle, “Lie detectors have always been suspect. AI has made the problem worse.” MIT Technology Review, March 13, 2020: https://www.technologyreview.com/2020/03/13/905323/ai-lie-detectors-polygraph-silent-talker-iborderctrl-converus-neuroid/.

  21. Incidentally, the first field study the team published, back in 2012, used the technology not to detect lies but to measure comprehension: in collaboration with a healthcare NGO in Tanzania, the facial expressions of eighty women were recorded while they took online courses on HIV treatment and condom use, and the system was able to predict with around eighty-five percent accuracy which of them would pass a brief comprehension test. See Fiona Buckingham et al., “Measuring human comprehension from nonverbal behavior using Artificial Neural Networks,” The 2012 International Joint Conference on Neural Networks (IJCNN), Brisbane, QLD, Australia, 2012: https://ieeexplore.ieee.org/abstract/document/6252414.

  22. See Footnote 20.

  23. “Smart lie-detection system to tighten EU’s busy borders,” European Commission, October 24, 2018: https://ec.europa.eu/research/infocentre/article_en.cfm?artid=49726.

  24. Ryan Gallagher and Ludovica Jona, “We Tested Europe’s New Lie Detector For Travelers—And Immediately Triggered a False Positive,” The Intercept, July 26, 2019: https://theintercept.com/2019/07/26/europe-border-control-ai-lie-detector/.

  25. See Footnote 24.

  26. Camilla Hodgson, “AI lie detector developed for airport security,” Financial Times, August 2, 2019: https://www.ft.com/content/c9997e24-b211-11e9-bec9-fdcab53d6959.

  27. See Footnote 26.

  28. See Footnote 20.

  29. Shuyuan Mary Ho and Jeffrey Hancock, “Context in a bottle: Language-action cues in spontaneous computer-mediated deception,” Computers in Human Behavior, Vol. 91, February 2019, pages 33–41: https://sml.stanford.edu/pubs/2019/context-in-a-bottle/.

  30. Andy Greenberg, “Researchers Built an ‘Online Lie Detector.’ Honestly, That Could Be a Problem.” Wired, March 21, 2019: https://www.wired.com/story/online-lie-detector-test-machine-learning/.

  31. See Footnote 20.

  32. Amit Katwala, “The race to create a perfect lie detector—and the dangers of succeeding,” Guardian, September 5, 2019: https://www.theguardian.com/technology/2019/sep/05/the-race-to-create-a-perfect-lie-detector-and-the-dangers-of-succeeding.


Copyright information

© 2021 The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature

About this chapter


Cite this chapter

Giansiracusa, N. (2021). Prevarication and the Polygraph. In: How Algorithms Create and Prevent Fake News. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-7155-1_5
