Is Your Computer Lying? AI and Deception


Abstract

Recent developments in AI, especially the spectacular success of large language models, have renewed questioning of what remains distinctively human. As AI stands poised to take over more and more human tasks, what is left that distinguishes humans? One way we might identify a humanlike intelligence is by detecting it telling lies. Yet AIs lack both the intention and the motivation required to truly tell lies, producing instead mere bullshit. With neither emotions, embodiment, nor the social awareness that gives rise to a theory of mind, AIs lack the internal referents by which to judge truth or falsity. When we are deceived by our computers, we should look for the hidden agent who benefits from the deception.


Notes

  1. The remainder of this section draws on information from pages 108–109 of Herzfeld, 2023.

References

  • Augustine (1887). De mendacio. In P. Schaff (Ed.), Nicene and post-Nicene fathers, first series (Vol. 3). Christian Literature.

  • Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922

  • Bok, S. (1978). Lying: Moral choice in public and private life. Harvester.

  • Bond, C. F., & Robinson, M. (1988). The evolution of deception. Journal of Nonverbal Behavior, 12(4, Pt 2), 295–307. https://doi.org/10.1007/BF00987597

  • Bryson, J. (2019). Robot, all too human. XRDS, 25(3), 56–59.

  • Bryson, J., Diamantis, M., & Grant, T. (2017). Of, for, and by the people: The legal lacuna of synthetic persons. Artificial Intelligence & Law, 25, 273. University of Cambridge Faculty of Law Research Paper No. 5/2018. https://ssrn.com/abstract=3068082

  • Caspermeyer, J. (2019). When is it OK for AI to Lie? https://news.asu.edu/20190130-when-it-ok-ai-lie. Accessed 11 November 2023.

  • Chakraborti, T., & Kambhampati, S. (2018). Algorithms for the greater good! On mental modeling and acceptable symbiosis in human-AI collaboration. ArXiv. arXiv:1801.09854. Accessed 11 November 2023.

  • Damasio, A. (1994). Descartes' error: Emotion, reason, and the human brain. Grosset/Putnam.

  • Damasio, A. (2018). The strange order of things: Life, feeling, and the making of cultures. Pantheon.

  • Damiano, L., & Dumouchel, P. (2018). Anthropomorphism in human-robot co-evolution. Frontiers in Psychology, 9, 468.

  • Danaher, J. (2022). Robot betrayal: a guide to the ethics of robotic deception. Ethics and Information Technology, 22, 117–128.

  • Dennet, D. (1987). The intentional stance. MIT Press.

    Google Scholar 

  • de Waal, F. (2016). Are we smart enough to know how smart animals are? Norton.

  • Dragan, A., Holladay, R., & Srinivasa, S. (2015). Deceptive robot motion: synthesis, analysis and experiments. Autonomous Robots, 39(3), 331–345.

  • Frankfurt, H. (2009). On bullshit. Princeton University Press.

  • Hernandez, D. (2015). The Google Photos ‘Gorilla’ fail won’t be the last time AIs offend us, Fusion, http://fusion.net/story/160196/the-google-photos-forilla-fail-wont-be-the-last-time-ais-offend-us. Accessed 11 November 2023.

  • Herzfeld, N. (2023). The artifice of intelligence: Divine and human relationship in a robotic age. Fortress.

  • Hui, H. (2012). Piercing the corporate veil in China: where is it now and where is it heading? American Journal of Comparative Law, 60(3), 743–774.

  • Hurt, A. (2022). Are humans the only animal that lies? Discover Magazine. https://www.discovermagazine.com/planet-earth/are-humans-the-only-animals-that-lie. Accessed 11 November 2023.

  • Isaac, A., & Bridewell, W. (2017). White lies on silver tongues: Why robots need to deceive (and how). In P. Lin, R. Jenkins, & K. Abney (Eds.), Robot ethics 2.0: From autonomous cars to artificial intelligence. Oxford University Press.

  • James, W. (1884). What is an emotion? Mind, 9(34), 188–205.

  • Joseph, L. (2020). What robots can’t do, Commonweal, 147:11. https://www.commonwealmagazine.org/what-robots-cant-do. Accessed 11 November 2023.

  • Kagan, J. (2007). What are emotions? Yale University Press.

  • King, B. (2019). Deception in the animal kingdom. Scientific American, 321(3), 50–54. https://doi.org/10.1038/scientificamerican0919-50

  • Kneer, M. (2021). Can a robot lie? Unpacking the folk concept of lying as applied to artificial agents. Cognitive Science, 45(10). https://doi.org/10.1111/cogs.13032

  • Lakoff, G., & Johnson, M. (1999). Philosophy in the flesh: The embodied mind and its challenge to Western thought. Basic Books.

  • Marcus, G. (2022). A few words about bullshit. https://garymarcus.substack.com/p/a-few-words-about-bullshit. Accessed 11 November 2023.

  • Miller, C. (2020). Honesty and dishonesty: Unpacking two character traits neglected by philosophers. Revista Portuguesa de Filosofia, 76(1), 343–362.

  • Mori, M. (1970). The uncanny valley. Energy, 7(4), 33–35. (in Japanese).

  • Moro, C., et al. (2019). Social robots and seniors: a comparative study on the influence of dynamic social features on human-robot interaction. International Journal of Social Robotics, 11, 5–24.

  • Niebuhr, R. (1941). The Nature and Destiny of Man: A Christian Interpretation. (Volume 1: Human Nature). Scribner.

  • Nilsson, N. (2005). Human-level artificial intelligence? Be serious! AI Magazine, 26(4), 68.

  • Peretti, G., Manzi, F., Di Dio, C., Cangelosi, A., Harris, P. L., Massaro, D., & Marchetti, A. (2023). Can a robot lie? Young children’s understanding of intentionality beneath false statements. Infant and Child Development, 32(2), e2398.

  • Roff, H. (2020). AI deception: When your artificial intelligence learns to lie. IEEE Spectrum. Accessed 11 November 2023.

  • Ryan, K. (2017). Why it matters that artificial intelligence is about to beat the world's best poker players. Retrieved from https://www.inc.com/kevin-j-ryan/ai-system-libratus-beating-worlds-best-poker-players.html.

  • Schlosser, M. S. (2015). Agency. Stanford encyclopedia of philosophy. https://plato.stanford.edu/entries/agency. Accessed 11 November 2023.

  • Searle, J. (1990). Is the brain a digital computer? Proceedings and Addresses of the American Philosophical Association, 64(3), 21–37. https://doi.org/10.2307/3130074

  • Serota, K., Levine, T., & Docan-Morgan, T. (2022). Unpacking variation in lie prevalence: Prolific liars, bad lie days, or both? Communication Monographs, 89(3), 307–331. https://doi.org/10.1080/03637751.2021.1985153

  • Sharkey, N., & Sharkey, A. (2010). The crying shame of robot nannies: an ethical appraisal. Interaction Studies, 11(2), 161–190.

  • Shim, J., & Arkin, R. (2012). Biologically-inspired deceptive behavior for a robot. Proceedings of the 12th International Conference on Simulation of Adaptive Behavior, Odense, Denmark, 27–30 August, 401–411.

  • Sparrow, R., & Sparrow, L. (2006). In the hands of machines? The future of aged care. Minds and Machines, 16(2), 141–161.

  • Turkle, S. (2011). Alone together: Why we expect more from technology and less from each other. Basic Books.

  • Vallor, S. (2011). Carebots and caregivers: Sustaining the ethical ideal of care in the twenty-first century. Philosophy & Technology, 24, 251–268.

  • Wagner, A., & Arkin, R. (2011). Acting deceptively: providing robots with the capacity for deception. International Journal of Social Robotics, 3(1), 5–26.

  • Weaver, J. (2014). Robots are people too: How Siri, Google Car, and artificial intelligence will force us to change our laws. Praeger.

  • Wiener, N. (1950). The human use of human beings: Cybernetics and society. Houghton Mifflin.

  • White, T., & Baum, S. (2017). Liability for present and future robotics technology. In P. Lin, R. Jenkins, & K. Abney (Eds.), Robot ethics 2.0: From autonomous cars to artificial intelligence. Oxford University Press.

  • Williams, J. (2018). Stand out of our light: Freedom and resistance in the attention economy. Cambridge University Press.

  • Yu, C., et al. (2022). Socially assistive robots for people with dementia: Systematic review and meta-analysis of feasibility, acceptability and the effect on cognition, neuropsychiatric symptoms and quality of life. Ageing Research Reviews, 78, 1016–1033.

Funding

Funding was provided in part by Znanstveno-raziskovalno središče Koper and Slovenian Research Agency (ARRS) grants J6-1813, ‘Creations, Humans, Robots: Creation Theology Between Humanism and Posthumanism’ and P6-0434, ‘Constructive Theology in the Age of Digital Culture and the Anthropocene.’

Author information

Corresponding author

Correspondence to Noreen Herzfeld.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Herzfeld, N. Is Your Computer Lying? AI and Deception. SOPHIA 62, 665–678 (2023). https://doi.org/10.1007/s11841-023-00989-6
