
Philosophy & Technology, Volume 28, Issue 1, pp 91–105

Developing Automated Deceptions and the Impact on Trust

  • Frances S. Grodzinsky
  • Keith W. Miller
  • Marty J. Wolf
Original Paper

Abstract

As software developers design artificial agents (AAs), they often have to wrestle with complex issues of philosophical and ethical importance. This paper addresses two key questions at the intersection of philosophy and technology: What is deception? And when is it permissible for the developer of a computer artifact to be deceptive in the artifact’s development? While exploring these questions from the perspective of a software developer, we examine the relationship between deception and trust. Are developers using deception to gain our trust? Is trust generated through technological “enchantment” warranted? Next, we investigate how deception that involves AAs differs from deception that involves only humans. Finally, we analyze the role and responsibility of developers in trust situations that involve both humans and AAs.

Keywords

Deception · Trust · Artificial agents

Notes

Acknowledgments

The authors thank Herman Tavani, Mariarosaria Taddeo, and Luciano Floridi for their insightful comments that strengthened the paper.


Copyright information

© Springer Science+Business Media Dordrecht 2014

Authors and Affiliations

  • Frances S. Grodzinsky (1)
  • Keith W. Miller (2)
  • Marty J. Wolf (3)
  1. Sacred Heart University, Fairfield, USA
  2. University of Missouri—St. Louis, St. Louis, USA
  3. Bemidji State University, Bemidji, USA
