The emergence of “truth machines”?: Artificial intelligence approaches to lie detection

  • Original Paper
  • Published in Ethics and Information Technology

Abstract

This article analyzes emerging artificial intelligence (AI)-enhanced lie detection systems from ethical and human resource (HR) management perspectives. I show how these AI enhancements transform lie detection, followed by analyses of how these changes can lead to moral problems. Specifically, I examine how these applications of AI introduce human rights issues of fairness, mental privacy, and bias, and I outline the implications of these changes for HR management. The changes that AI is making to lie detection are altering the roles of human test administrators and human subjects, adding machine learning-based AI agents to the situation, establishing invasive data collection processes, and introducing certain biases into results. I project that the potential for pervasive and continuous lie detection initiatives (“truth machines”) is substantial, displacing human-centered efforts to establish trust and foster integrity in organizations. I argue that if it is possible for HR managers to do so, they should cease using technologically-based lie detection systems entirely and work to foster trust and accountability on a human scale. However, if these AI-enhanced technologies are put into place in organizations by law, agency mandate, or other compulsory measures, care should be taken that the impacts of the technologies on human rights and wellbeing are considered. The article explores how AI can displace the human agent in some aspects of lie detection and credibility assessment scenarios, expanding the prospects for inscrutable, “black box” processes and novel physiological constructs (such as “biomarkers of deceit”) that may heighten human rights concerns involving fairness, mental privacy, and bias. Employee interactions with autonomous lie detection systems rather than with human beings who administer specific tests can reframe organizational processes and rules concerning the assessment of personal honesty and integrity. The dystopian projection of organizational life in which analyses and judgments of the honesty of one’s utterances are made automatically, and in conjunction with one’s personal profile, presents unsettling prospects for the autonomy of self-representation.


References

  • Aizenberg, E., & van den Hoven, J. (2020). Designing for human rights in AI. Big Data & Society, 7(2), 1–14. https://doi.org/10.1177/2053951720949566

  • Alder, K. (2009). The lie detectors: The history of an American obsession. University of Nebraska Press.

  • Alliger, G. M., & Dwight, S. A. (2000). A meta-analytic investigation of the susceptibility of integrity tests to faking and coaching. Educational and Psychological Measurement, 60(1), 59–72.

  • Ayoub, A., Rizvi, F., Akram, S., & Tahir, M. A. (2018). The polygraph and lie detection: A case study. Arab Journal of Forensic Sciences & Forensic Medicine, 1(7), 902–908.

  • Bacchini, F., & Lorusso, L. (2019). Race, again: How face recognition technology reinforces racial discrimination. Journal of Information, Communication and Ethics in Society, 17(3), 321–335. https://doi.org/10.1108/JICES-05-2018-0050

  • Balmer, A. (2018). Lie detection and the law: Torture, technology and truth. Routledge.

  • Barathi, C. S. (2016). Lie detection based on facial micro expression, body language, and speech analysis. International Journal of Engineering Research & Technology, 5(2), 337–343.

  • Bard, J. S. (2015). Ah yes, I remember it well: Why the inherent unreliability of human memory makes brain imaging technology a poor measure of truth-telling in the courtroom. Oregon Law Review, 94, 295–332.

  • Barn, B. S. (2019). Mapping the public debate on ethical concerns: Algorithms in mainstream media. Journal of Information, Communication and Ethics in Society, 18(1), 124–139. https://doi.org/10.1108/JICES-04-2019-0039

  • Ben-Shakhar, G., & Barr, M. (2018). Science, pseudo-science, non-sense, and critical thinking: Why the differences matter. Routledge.

  • Bergers, L. (2018). Only in America? A history of lie detection in the Netherlands in comparative perspective, ca. 1910–1980. Master’s thesis, Utrecht University, The Netherlands.

  • Bird, L., Gretton, M., Cockerell, R., & Heathcote, A. (2019). The cognitive load of narrative lies. Applied Cognitive Psychology, 33(5), 936–942. https://doi.org/10.1002/acp.3567

  • Bittle, J. (2020). Lie detectors have always been suspect. AI has made the problem worse. Technology Review. Retrieved from https://www.technologyreview.com/2020/03/13/905323/ai-lie-detectors-polygraph-silent-talker-iborderctrl-converus-neuroid/. Accessed 16 Jan 2022.

  • Bryant, P. (2018). Will eye scanning technology replace the polygraph? Government Technology. Retrieved from http://www.govtech.com/public-safety/Will-Eye-Scanning-Technology-Replace-the-Polygraph.html. Accessed 16 Jan 2022.

  • Bunn, G. C. (2019). “Supposing that truth is a woman, what then?”: The lie detector, the love machine, and the logic of fantasy. History of the Human Sciences, 32(5), 135–163.

  • Burgoon, J. K. (2019). Separating the wheat from the chaff: Guidance from new technologies for detecting deception in the courtroom. Frontiers in Psychiatry, 9, 774–780. https://doi.org/10.3389/fpsyt.2018.00774

  • Comer, M. J., & Stephens, T. E. (2017). Deception at work: Investigating and countering lies and fraud strategies. Routledge.

  • Dafoe, A. (2018). AI governance: A research agenda. University of Oxford.

  • Darby, R. R., & Pascual-Leone, A. (2017). Moral enhancement using non-invasive brain stimulation. Frontiers in Human Neuroscience, 11, 77. https://doi.org/10.3389/fnhum.2017.00077

  • Denault, V., & Dunbar, N. E. (2019). Credibility assessment and deception detection in courtrooms: Hazards and challenges for scholars and legal practitioners. In The Palgrave handbook of deceptive communication (pp. 915–935). Palgrave Macmillan.

  • Domanski, R. (2019). The AI Pandorica: Linking ethically-challenged technical outputs to prospective policy approaches (pp. 409–416). Association for Computing Machinery.

  • Elkins, A. C., Dunbar, N. E., Adame, B., & Nunamaker, J. F. (2013). Are users threatened by credibility assessment systems? Journal of Management Information Systems, 29(4), 249–262. https://doi.org/10.2753/MIS0742-1222290409

  • Elkins, A. C., Gupte, A., & Cameron, L. (2019). Humanoid robots as interviewers for automated credibility assessment (pp. 316–325). Springer.

  • Farrell, B. (2009). Can’t get you out of my head: The human rights implications of using brain scans as criminal evidence. Interdisciplinary Journal of Human Rights Law, 4, 89–95.

  • Fischer, L. (2020). The idea of reading someone’s thoughts in contemporary lie detection techniques. In Mind reading as a cultural practice (pp. 109–137). Palgrave Macmillan.

  • Floridi, L. (2019). Establishing the rules for building trustworthy AI. Nature Machine Intelligence, 1(6), 261–262.

  • Fuller, C., Biros, D., & Delen, D. (2011). An investigation of data and text mining methods for real world deception detection. Expert Systems with Applications, 38, 8392–8398.

  • Garrett, B. L. (2020). Wrongful convictions. Annual Review of Criminology, 3, 245–259.

  • Giattino, C. M., Kwong, L., Rafetto, C., & Farahany, N. A. (2019). The seductive allure of artificial intelligence-powered neurotechnology (pp. 397–402). Association for Computing Machinery.

  • Gonzalez-Billandon, J., Aroyo, A., Pasquali, D., Tonelli, A., Gori, M., Sciutti, A., Sandini, G., & Rea, F. (2019). Can a robot catch you lying? A machine learning system to detect lies during interactions. Frontiers in Robotics and AI, 6(64), 1–12. https://doi.org/10.3389/frobt.2019.00064

  • Grubin, D., Kamenskov, M., Dwyer, R. G., & Stephenson, T. (2019). Post-conviction polygraph testing of sex offenders. International Review of Psychiatry, 31(2), 141–148.

  • Harding, C. D. (2019). Selecting the ethical employee: Measuring personality facets to predict integrity behavior. Carleton University.

  • Hashemi, M., & Hall, M. (2020). Criminal tendency detection from facial images and the gender bias effect. Journal of Big Data, 7(1), 1–16.

  • Heaven, D. (2018). AI to interrogate travellers. New Scientist, 240(3202), 5.

  • Iacono, W. G., & Patrick, C. J. (2018). Assessing deception. In R. Rogers & S. D. Bender (Eds.), Clinical assessment of malingering and deception. Guilford Publications.

  • Jupe, L. M., & Keatley, D. A. (2019). Airport artificial intelligence can detect deception: Or am I lying? Security Journal, 24, 1–4.

  • Katwala, A. (2019). The race to create a perfect lie detector - and the dangers of succeeding. The Guardian. Retrieved from https://www.theguardian.com/technology/2019/sep/05/the-race-to-create-a-perfect-lie-detector-and-the-dangers-of-succeeding. Accessed 16 Jan 2022.

  • Kennedy, P. (2014). Artificial intelligence lie detector developed by Imperial alumnus. Imperial College London. Retrieved from https://www.imperial.ac.uk/news/144486/artificial-intelligence-detector-developed-imperial-alumnus/. Accessed 16 Jan 2022.

  • Khatri, S., Pandey, D. K., Penkar, D., & Ramani, J. (2020). Impact of artificial intelligence on human resources. In Data management, analytics and innovation (pp. 365–376). Springer.

  • Kleinberg, B., Arntz, A., & Verschuere, B. (2019). Detecting deceptive intentions: Possibilities for large-scale applications. In The Palgrave handbook of deceptive communication (pp. 403–427). Palgrave Macmillan.

  • Kurland, J. (2019). Truth-detection devices and victims of sexual violence. Family & Intimate Partner Violence Quarterly, 11(4), 39–44.

  • La Tona, G., Terranova, M. C., Vernuccio, F., Re, G. L., Salerno, S., Zerbo, S., & Argo, A. (2020). Lie detection: fMRI. In Radiology in forensic medicine (pp. 197–202). Springer.

  • Landau, O., Puzis, R., & Nissim, N. (2020). Mind your mind: EEG-based brain-computer interfaces and their security in cyber space. ACM Computing Surveys, 53(1), 1–38.

  • Laws, D. R. (2020). A history of the assessment of sex offenders: 1830–2020. Emerald Publishing Limited.

  • Leonetti, C. (2017). Abracadabra, hocus pocus, same song, different chorus: The newest iteration of the science of lie detection. Richmond Journal of Law & Technology, 24(1), 1–35.

  • MacNeill, A. L., & Bradley, M. T. (2016). Temperature effects on polygraph detection of concealed information. Psychophysiology, 53(2), 143–150.

  • Maréchal, M. A., Cohn, A., Ugazio, G., & Ruff, C. C. (2017). Increasing honesty in humans with noninvasive brain stimulation. Proceedings of the National Academy of Sciences, 114(17), 4360–4364.

  • Maroulis, A. (2014). Blinking in deceptive communication. State University of New York at Buffalo.

  • Masip, J., Levine, T. R., Somastre, S., & Herrero, C. (2020). Teaching students about sender and receiver variability in lie detection. Teaching of Psychology, 47(1), 84–91.

  • Mayoral, L. P. C., Mayoral, E. P. C., Andrade, G. M., Mayoral, C. P., Helmes, R. M., & Pérez-Campos, E. (2017). The use of polygraph testing for theft investigation in private sector institutions. Polygraph, 46(1), 44–52.

  • McAllister, A. (2016). Stranger than science fiction: The rise of AI interrogation in the dawn of autonomous robots and the need for an additional protocol to the UN convention against torture. Minnesota Law Review, 101, 2527–2573.

  • Mecke, J. (2007). Cultures of lying: Theories and practice of lying in society, literature, and film. Galda & Wilch.

  • Meijer, E. H., & Verschuere, B. (2017). Deception detection based on neuroimaging: Better than the polygraph? Journal of Forensic Radiology and Imaging, 8, 17–21.

  • Melendez, S. (2018). Goodbye polygraphs: New tech uses AI to tell if you’re lying. Fast Company. Retrieved from https://www.fastcompany.com/40575672/goodbye-polygraphs-new-tech-uses-ai-to-tell-if-youre-lying. Accessed 16 Jan 2022.

  • Moreno, J. A. (2009). The future of neuroimaged lie detection and the law. Akron Law Review, 42, 717–737.

  • Nahari, G., Ashkenazi, T., Fisher, R. P., Granhag, P. A., Hershkowitz, I., Masip, J., Meijer, E. H., Nisin, Z., Sarid, N., Taylor, P. J., Vrij, A., & Verschuere, B. (2019). ‘Language of lies’: Urgent issues and prospects in verbal lie detection research. Legal and Criminological Psychology, 24(1), 1–23. https://doi.org/10.1111/lcrp.12148

  • Natale, S. (2019). Amazon can read your mind: A media archaeology of the algorithmic imaginary. In S. Natale & D. Pasulka (Eds.), Believing in bits: Digital media and the supernatural (pp. 19–36). Oxford University Press.

  • Noonan, C. F. (2018). Spy the lie: Detecting malicious insiders (No. PNNL-SA-122655). Pacific Northwest National Lab (PNNL).

  • Pasquale, F. (2015). The black box society. Harvard University Press.

  • Pasquali, D., Aroyo, A. M., Gonzalez-Billandon, J., Rea, F., Sandini, G., & Sciutti, A. (2020). Your eyes never lie: A robot magician can tell if you are lying (pp. 392–394). ACM.

  • Peleg, D., Ayal, S., Ariely, D., & Hochman, G. (2019). The lie deflator: The effect of polygraph test feedback on subsequent (dis)honesty. Judgment and Decision Making, 14(6), 728–738.

  • Poldrack, R. A. (2018). The new mind readers: What neuroimaging can and cannot reveal about our thoughts. Princeton University Press.

  • Prince, P. G., Rajkumar, R. I., & Premalatha, J. (2020). Novel non-contact respiration rate detector for analysis of emotions. In D. J. Hemanth (Ed.), Human behaviour analysis using intelligent systems (pp. 157–178). Springer.

  • Räikkä, J. (2017). Privacy and self-presentation. Res Publica, 23(2), 213–226.

  • Reiner, P. B., & Nagel, S. K. (2017). Technologies of the extended mind: Defining the issues. In Neuroethics: Anticipating the future (pp. 108–122). Oxford University Press.

  • Royakkers, L., Timmer, J., Kool, L., & van Est, R. (2018). Societal and ethical issues of digitization. Ethics and Information Technology, 20(2), 127–142.

  • Sánchez-Monedero, J., & Dencik, L. (2020). The politics of deceptive borders: “Biomarkers of deceit” and the case of iBorderCtrl. Information, Communication & Society. https://doi.org/10.1080/1369118X.2020.1792530

  • Schauer, F. (2009). Can bad science be good evidence? Neuroscience, lie detection, and beyond. Cornell Law Review, 95(6), 1191–1219.

  • Singh, E., & Doval, J. (2019). Artificial intelligence and HR: Remarkable opportunities, hesitant partners. In Proceedings of the 4th National HR Conference on Human Resource Management Practices and Trends. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3553448. Accessed 16 Jan 2022.

  • Singh, R. (2019). Profiling and its facets. In R. Singh (Ed.), Profiling humans from their voice (pp. 3–26). Springer.

  • Stathis, M. J., & Marinakis, M. M. (2020). Shadows into light: The investigative utility of voice analysis with two types of online child-sex predators. Journal of Child Sexual Abuse. https://doi.org/10.1080/10538712.2019.1697780

  • Strle, T., & Markič, O. (2019). Looping effects of neurolaw, and the precarious marriage between neuroscience and the law. Balkan Journal of Philosophy, 10(1), 17–26.

  • Stroud, M. (2019). Thin blue lie: The failure of high-tech policing. Metropolitan Books.

  • Takabatake, S., Shimada, K., & Saitoh, T. (2018). Construction of a liar corpus and detection of lying situations (pp. 971–976). IEEE Press.

  • Tambe, P., Cappelli, P., & Yakubovich, V. (2019). Artificial intelligence in human resources management: Challenges and a path forward. California Management Review, 61(4), 15–42.

  • Thomasen, K. (2016). Examining the constitutionality of robot-enhanced interrogation. Edward Elgar Publishing.

  • Trewin, S., Basson, S., Muller, M., Branham, S., Treviranus, J., Gruen, D., Hebert, D., Lyckowski, N., & Manser, E. (2019). Considerations for AI fairness for people with disabilities. AI Matters, 5(3), 40–63. https://doi.org/10.1145/3362077.3362086

  • Van den Hoven, J., & Manders-Huits, N. (2008). The person as risk, the person at risk. In ETHICOMP 2008: Living, working and learning beyond technology (pp. 408–414). SAGE.

  • Vissak, T., & Vadi, M. (2013). (Dis)honesty in management: Manifestations and consequences. Emerald Group Publishing.

  • Walczyk, J. J., Schwartz, J. P., Clifton, R., Adams, B., Wei, M., & Zha, P. (2005). Lying person-to-person about life events: A cognitive framework for lie detection. Personnel Psychology, 58(1), 141–170. https://doi.org/10.1111/j.1744-6570.2005.00484.x

  • Watson, H. J., & Nations, C. (2019). Addressing the growing need for algorithmic transparency. Communications of the Association for Information Systems, 45(1), 26. https://doi.org/10.17705/1CAIS.04526

  • Winter, A. (2005). The making of “truth serum”. Bulletin of the History of Medicine, 79(3), 500–533.

  • Witt, P. H., & Neller, D. J. (2018). Detection of deception in sex offenders. In R. Rogers & S. D. Bender (Eds.), Clinical assessment of malingering and deception (pp. 401–421). The Guilford Press.

  • Wright, E. (2018). The future of facial recognition is not fully known: Developing privacy and security regulatory mechanisms for facial recognition in the retail sector. Fordham Intellectual Property, Media & Entertainment Law Journal, 29(2), 611–685.

  • Zhang, X. (2011). The evolution of polygraph testing in the People’s Republic of China. Polygraph, 40(3), 181–193.

  • Zou, J., & Schiebinger, L. (2018). AI can be sexist and racist—it’s time to make it fair. Nature, 559, 324–326.


Author information

Correspondence to Jo Ann Oravec.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Oravec, J.A. The emergence of “truth machines”?: Artificial intelligence approaches to lie detection. Ethics Inf Technol 24, 6 (2022). https://doi.org/10.1007/s10676-022-09621-6

