Developing Automated Deceptions and the Impact on Trust

Abstract

As software developers design artificial agents (AAs), they often have to wrestle with complex issues, issues that have philosophical and ethical importance. This paper addresses two key questions at the intersection of philosophy and technology: What is deception? And when is it permissible for the developer of a computer artifact to be deceptive in the artifact’s development? While exploring these questions from the perspective of a software developer, we examine the relationship of deception and trust. Are developers using deception to gain our trust? Is trust generated through technological “enchantment” warranted? Next, we investigate more complex questions of how deception that involves AAs differs from deception that only involves humans. Finally, we analyze the role and responsibility of developers in trust situations that involve both humans and AAs.

Notes

  1. For Floridi, a LoA qualifies the level at which a system is considered and informs the discussion of such a system. When we analyze a system, we do so from a particular perspective or level of abstraction. This often results in a model or prototype that identifies the system at the “given LoA.” Floridi refers to this as the “system-level-model-structure (or SLMS) scheme:” “Thus, introducing an explicit reference to the LoA makes it clear that the model of a system is a function of the available observables, and that it is reasonable to rank different LoAs and to compare and assess the corresponding models.” (Floridi 2008) from (Wolf et al. 2012, p. 24).

  2. Some philosophers disagree with the claim that trust is (simply) a relation; for example, some argue that it is a “disposition.” Others argue that it can be an attitude (affective) as well as cognitive (see, for example, Baier 1986). Also, see McLeod (2011) for a description of some alternative philosophical models of trust that have been advanced in the literature.

  3. See Buechner et al. (2013) for a discussion of trustworthiness.

  4. We thank Luciano Floridi for this example.

  5. We thank Herman Tavani for this clarification.

References

  • Aristotle (1984). Nicomachean Ethics, IX, 3. In J. Barnes (Ed.), The complete works of Aristotle (Vol. 2, p. 1842). Princeton: Princeton University Press.

  • Baier, A. (1986). Trust and antitrust. Ethics, 96(2), 231–260.

  • Bellah, R. N., Madsen, R., Sullivan, W. M., Swidler, A., & Tipton, S. M. (1985). Habits of the heart: individualism and commitment in American life. Berkeley: University of California Press.

  • Bryson, J. (2012). Patiency is not a virtue: suggestions for co-constructing an ethical framework including intelligent artefacts, presented at the Symposium on the Machine Question: AI, Ethics and Moral Responsibility, part of the AISB/IACAP World Congress 2012 (2–6 July 2012), Birmingham, UK.

  • Buechner, J., & Tavani, H. (2011). Trust and multi-agent systems: applying the ‘diffuse, default model’ of trust to experiments involving artificial agents. Ethics and Information Technology, 13(1), 39–51.

  • Buechner, J., Simon, J., & Tavani, H. T. (2013). Re-thinking trust and trustworthiness in digital environments. In E. Buchanan, P. de Laat, & H. T. Tavani (Eds.), Ambiguous technologies: Proceedings of the Tenth International Conference on Computer Ethics—Philosophical Enquiry. (July 1–3, 2013). Portugal: Autónoma University.

  • Carson, T. L. (2009). Lying, deception, and related concepts. In C. Martin (Ed.), The philosophy of deception (pp. 153–187). New York: Oxford University Press.

  • CBS News (2013). Manti Te’o says he’s the victim of “girlfriend” hoax. (January 16, 2013) http://www.cbsnews.com/8301-400_162-57564381/manti-teo-says-hes-the-victim-of-girlfriend-hoax/, accessed January 23, 2013.

  • Floridi, L. (2008). The method of levels of abstraction. Minds and Machines, 18, 303–329. doi:10.1007/s11023-008-9113-7.

  • Floridi, L. (2013). The ethics of information. Oxford: Oxford University Press.

  • Grodzinsky, F., Miller, K., & Wolf, M. J. (2009). Why Turing shouldn’t have to guess. Paper presented at the Asia-Pacific Computing and Philosophy Conference, Tokyo (Oct. 1–2, 2009).

  • Grodzinsky, F., Miller, K., & Wolf, M. J. (2011). Developing artificial agents worthy of trust: ‘would you buy a used car from this artificial agent?’. Ethics and Information Technology, 13(1), 17–27.

  • Lynch, M. P. (2009). Deception and the nature of truth. In C. Martin (Ed.), The philosophy of deception (pp. 188–200). New York: Oxford University Press.

  • McLeod, C. (2011). Trust. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2011 ed.). http://plato.stanford.edu/archives/spr2011/entries/trust/.

  • Mikulincer, M. (1998). Attachment working models and the sense of trust: an exploration of interaction goals and affect regulation. Journal of Personality and Social Psychology, 74(5), 1209–1224.

  • Miller, K., Wolf, M.J., Grodzinsky, F. (2012). Behind the mask: machine morality, presented at the Symposium on the Machine Question: AI, Ethics and Moral Responsibility, part of the AISB/IACAP World Congress 2012 (2–6 July 2012), Birmingham, UK.

  • Plato. (1965). The Republic of Plato (F. M. Cornford, Trans. & Ed.). New York: Oxford University Press.

  • Potter, N. N. (2002). How can I be trusted? A virtue theory of trustworthiness. Maryland: Rowman and Littlefield.

  • Simon, J. (2010). The entanglement of trust and knowledge on the Web. Ethics and Information Technology, 12(4), 343–355.

  • Solomon, R. C. (2009). Self, deception, and self-deception in philosophy. In C. Martin (Ed.), The philosophy of deception (pp. 15–36). New York: Oxford University Press.

  • Strudler, A. (2009). Deception and trust. In C. Martin (Ed.), The philosophy of deception (pp. 139–152). New York: Oxford University Press.

  • Taddeo, M. (2009). Defining trust and e-trust: from old theories to new problems. International Journal of Technology and Human Interaction, 5(2), 23–35.

  • Turkle, S. (2011). Alone together. New York: Basic Books.

  • Wilson, P. (2011). Computer spots micro clue to lies. http://www.ox.ac.uk/media/science_blog/111123.html (accessed 18 September 2012).

  • Wolf, M. J., Grodzinsky, F., & Miller, K. (2011). Is quantum computing inherently evil? In CEPE 2011 Proceedings: Crossing Boundaries (pp. 302–309). Milwaukee, Wisconsin: Center for Information Policy Research.

  • Wolf, M. J., Grodzinsky, F., & Miller, K. (2012). Artificial agents, cloud computing, and quantum computing: Applying Floridi’s method of levels of abstraction. In H. Demir (Ed.), Luciano Floridi’s philosophy of technology: Critical reflections (pp. 23–42). Springer: Philosophy & Engineering book series.

  • Wrathall, M. A. (2009). On the “existential positivity of our ability to be deceived”. In C. Martin (Ed.), The philosophy of deception (pp. 67–81). New York: Oxford University Press.

Acknowledgments

The authors thank Herman Tavani, Mariarosaria Taddeo, and Luciano Floridi for their insightful comments that strengthened the paper.

Corresponding author

Correspondence to Frances S. Grodzinsky.

Cite this article

Grodzinsky, F.S., Miller, K.W. & Wolf, M.J. Developing Automated Deceptions and the Impact on Trust. Philos. Technol. 28, 91–105 (2015). https://doi.org/10.1007/s13347-014-0158-7

Keywords

  • Deception
  • Trust
  • Artificial agents