
Levels of Trust in the Context of Machine Ethics


Are trust relationships involving humans and artificial agents (AAs) possible? This controversial question has become a hotly debated topic in the emerging field of machine ethics. Employing a model of trust advanced by Buechner and Tavani (Ethics and Information Technology 13(1):39–51, 2011), I argue that the “short answer” to this question is yes. However, I also argue that a more complete and nuanced answer will require us to articulate the various levels of trust that are also possible in environments comprising both human agents (HAs) and AAs. In defending this view, I show how James Moor’s model for distinguishing four levels of ethical agents in the context of machine ethics (Moor, IEEE Intelligent Systems 21(4):18–21, 2006) can help us to develop a framework that differentiates four (loosely corresponding) levels of trust. Via a series of hypothetical scenarios, I illustrate each level of trust involved in HA–AA relationships. Finally, I argue that these levels of trust reflect three key factors or variables: (i) the level of autonomy of the individual AAs involved, (ii) the degree of risk/vulnerability on the part of the HAs who place their trust in the AAs, and (iii) the kind of interactions (direct vs. indirect) that occur between the HAs and AAs in the trust environments.



  1. Wallach and Allen (2009) and Anderson and Anderson (2011) use the expression “machine ethics” to describe this relatively new field or sub-field. Others, however, such as Veruggio (2006) and Decker and Gutmann (2012), use the expression “robo-ethics,” while Lin et al. (2012) use the phrase “robot ethics” to describe this field.

  2. Whereas I use the expression “AA” in this essay, some researchers in the field of machine ethics use the term “machine” to refer to (computer-generated) artificial entities. Wallach and Allen use the expression “(ro)bot” to capture the broad meaning of AAs or “machines,” which, in their view, can be either pure digital entities (such as soft bots) or physical robots.

  3. Some philosophers, however, have questioned whether artificial entities are capable of being genuine “agents” because, in their view, these entities are not capable of acting or “performing” an act(ion). While (human) agents are typically viewed as persons that can act either on their own behalf or on the behalf of others, some critics suggest that an artificial entity is capable merely of a “doing”; in this scheme, an “act(ing)” is far more complex than a mere “doing.” However, I will not further examine this distinction here. (For a fuller discussion of this point, see Himma 2009.)

  4. Just as Moor’s conceptual framework involving “logical malleability,” “policy vacuums,” and “conceptual muddles” (in his seminal essay “What Is Computer Ethics?” (Moor 1985)) provides an alternative and insightful scheme for analyzing a classic debate in computer ethics about whether any ethical issues in computing are “unique” ethical issues, I believe that his framework of four levels of “ethical agents” provides an insightful and alternative scheme for analyzing the current debate in machine ethics about whether AAs can qualify as moral agents.

  5. See, for example, Tavani (2012) and Tavani and Buechner (2014).

  6. See Buechner and Tavani (2011) for a detailed analysis of how “trust relationships” between HAs and AAs are possible within certain contexts—i.e., in the kinds of “zones” that Walker (2006) calls “default trust” and “diffuse-default trust” (described in section 3 of this essay).

  7. My summary of Moor’s model in section 2 of this essay draws substantially from my analysis of his framework in Tavani (2013).

  8. For example, consider that while I may depend on my car to start tomorrow, and while I may rely on it for getting to work tomorrow, it would be a bit odd for me to say that I trust my car to start tomorrow.

  9. All three authors provide sophisticated theories of trust in connection with AAs. See, for example, Taddeo’s model of a “non-psychological approach” to trust, which rests on a “Kantian regulative ideal of rational agent”; Durante’s “socio-cognitive” approach to trust in the context of multi-agent systems; and the “object-oriented” model of trust in Grodzinsky et al. Because of space limitations, I am unable to analyze, or even summarize, those trust theories here; interested readers will want to examine in detail the models of trust presented in each of these three works, and may also wish to consult Taddeo (2009) for an excellent discussion of the main theories of trust and e-trust (affecting digital environments) published during the past 20 or so years.

  10. See Buechner and Tavani (2011) for a fuller account of this model, which has five distinct conditions or requirements that show how one acquires a “disposition to trust” based on certain normative expectations. (It is worth noting, however, that the sense of “disposition” used in this model refers to the mental state of the trustor and thus should not be confused with behavioristic accounts of dispositions.) Some alternative models that also stress the normative expectations that underlie a trust relation involving HAs are described in Baier (1986) and McLeod (2011).

  11. Note that because this essay is mainly concerned with normative issues, including “obligation,” that pertain to “ethical trust,” I do not explicitly examine the conditions required for “epistemic trust” involving AAs. For an excellent discussion of aspects of epistemic trust and AAs, see Simon (2010).

  12. The domain of “non-human agents” includes aggregates of individual HAs, such as organizations, institutions, and corporations, as well as (purely) computer-generated AAs. Consider that we frequently enter into trust environments that include not only HAs but also non-human agents (such as financial institutions and corporations that manufacture automobiles).

  13. As such, this model can be viewed as a “contextual theory of trust” that is similar, in some ways, to contextual models of privacy such as those articulated by Moor (1997) and Nissenbaum (2004, 2010). In both cases, contexts or zones play a critical role in understanding the nature of privacy, in much the same way that these kinds of zones also play an important role in grasping a key aspect of the theory of trust defended here. For some insights into the various conceptual connections between privacy and trust, see the collection of papers included in a special issue of Information (Tavani and Arnold 2011), especially the articles by Buechner (2011), deVries (2011), and Durante (2011). And for some alternative models for analyzing trust and e-trust in the context of AAs and digital environments, see the papers included in special issues of journals edited by Taddeo (2010b) and Taddeo and Floridi (2011).

  14. Note that in this essay, I focus on trust relationships involving HAs and AAs in one direction only (HA→AA). As mentioned earlier, I leave open the question of whether AAs are capable of having trust relationships with other AAs. For an interesting discussion of AA→HA and AA↔AA trust relationships, see Grodzinsky et al. (2011). Also, Lim et al. (2008) briefly examine some questions pertaining to the possibility of reciprocal and symmetric trust relationships between “machines” (or AAs).

  15. Elsewhere (Tavani and Buechner in press), I have introduced the concept of an FAAA. Building on the model of “autonomous artificial agent” (AAA) advanced by Floridi (2008), as well as on the notion of “autonomous system” described in the Royal Academy of Engineering (2009) report, I argue that FAAAs can be understood as AAs that are (i) rational, (ii) interactive, (iii) adaptive, and (iv) independent (i.e., in the sense that they can exhibit at least some independence from humans). Wallach and Allen (2009) use the expression “functional morality” to describe AMAs that exhibit some degree of autonomy (as well as some degree of “ethical sensitivity”). However, the authors do not comment specifically on either the degree or kind of autonomy, functional or otherwise, that an AA needs to qualify as an AMA. (This critique of their notion of functional morality is developed more fully in Tavani 2011.)

  16. For example, Turkle (2011) can be interpreted as suggesting that those AAs whose appearance is more human-like can “elicit” trust on the part of an HA in ways that AAs appearing less human-like would not. I consider Turkle’s point in more detail in section 5.

  17. Later in this essay, I will argue that in the case of HA–FAAA trust relationships, the FAAAs are capable of “disappointing” or “letting down” the HA. Consider that if an AA is not functionally autonomous, however, it would be odd to say that the AA let down an HA; an AA that was not autonomous (in some sense) could not have behaved differently than it did (except, of course, by malfunctioning).

  18. It is perhaps important to note that I have not discussed any concerns affecting the trustworthiness of the agents, as distinct from the trust relation itself. (Whereas philosophers typically regard trust as a certain kind of relation between two agents—a trustor and a trustee—trustworthiness tends to be viewed as a property or characteristic pertaining to the trustee.) In a separate paper (Buechner et al. 2014), I consider the question whether there can also be levels or “degrees of trustworthiness” for AAs in trust relationships affecting HAs and AAs and, if so, whether they also parallel the four levels of trust articulated in the present essay.

  19. For a fuller discussion of issues affecting the role of stakes in various kinds of trust relations, see Carr (2012).

  20. Turkle (2011) refers to this phenomenon as the “Eliza effect,” in light of the way in which Joseph Weizenbaum’s “Eliza” program was able to “elicit trust” on the part of some humans who interacted with that computer program.

  21. Some also suggest that the more human-like the AA appears, and the more emotion the AA seems to exhibit, the greater the amount of trust the HA will likely be willing to place in it. For example, Coeckelbergh (2010, 2012) suggests that for humans to trust AAs, future AAs will need to have some affective/emotive qualities built into them and will need to (physically) appear more human-like. Along somewhat similar lines, Turkle (2011) suggests that the more human-like an AA looks and behaves, the greater the amount of “attachment” and “trust” the HA (who interacts with such an AA) may be disposed to place in it. However, it is also worth pointing out that others believe that the “uncanny valley” hypothesis (in which robots that appear “almost,” but not exactly, human-like can repulse humans) might have the opposite effect on the amount of trust and attachment that an HA would place in the AA. Regardless of which view turns out to be correct, it would be useful to disentangle the concepts of attachment and trust in contexts involving AAs that appear human-like, because of the way in which the two concepts can become so easily conflated in such contexts.

  22. The “stronger” trust relationship that is possible in this scenario would also seem to qualify as a more “meaningful” (and more robust) trust relationship, as well. But that point needs to be developed in a separate paper, as it is beyond the scope of the present essay.

  23. Note that the levels of weak/low/minimal trust that apply to ethical-impact agents (and possibly to some implicit-ethical agents as well) should not be interpreted in a way that suggests that these kinds of agents are to be “distrusted.” In other words, the absence of a possibility for a high level of trust for some kinds of AAs should not be viewed as sufficient grounds for distrusting those AAs. Rather, trust in the context of AAs should be viewed as a kind of “threshold concept” (that agents either can exhibit, albeit in varying levels or degrees, or cannot exhibit because they fail to meet the required conditions for trust relationships).



Keywords: Artificial agent · Autonomous artificial agent · Artificial moral agent · Functionally autonomous artificial agent · Human agent


  • Anderson, M., & Anderson, S. L. (Eds.). (2011). Machine ethics. Cambridge: Cambridge University Press.


  • Baier, A. (1986). Trust and antitrust. Ethics, 96(2), 231–260.


  • Buechner, J. (2011). Trust, privacy, and frame problems in social and business E-networks. Information, 2(1), 195–216.


  • Buechner, J., & Tavani, H. T. (2011). Trust and multi-agent systems: applying the ‘diffuse, default model’ of trust to experiments involving artificial agents. Ethics and Information Technology, 13(1), 39–51.


  • Buechner, J., Simon, J., & Tavani, H. T. (2014). Re-thinking trust and trustworthiness in digital environments. In E. Buchanan et al. (Eds.), Ambiguous technologies: philosophical issues, practical solutions, human nature: Proceedings of the Tenth International Conference on Computer Ethics—philosophical enquiry (pp. 65–79). Menomonie, WI: INSEIT.


  • Carr, L.J. (2012). Trust: an analysis of some aspects. Available at

  • Coeckelbergh, M. (2010). Moral appearances: emotions, robots, and human morality. Ethics and Information Technology, 12(3), 235–241.


  • Coeckelbergh, M. (2012). Can we trust robots? Ethics and Information Technology, 14(1), 53–60.


  • Decker, M., & Gutmann, M. (Eds.). (2012). Robo- and Informationethics: some fundamentals. Berlin, Germany: LIT.


  • deVries, W. (2011). Some forms of trust. Information, 2(1), 1–16.


  • Durante, M. (2010). What is the model of trust for multi-agent systems? Whether or not E-trust applies to autonomous agents. Knowledge, Technology and Policy, 23, 347–366.


  • Durante, M. (2011). The online construction of personal identity through trust and privacy. Information, 2, 594–620.


  • Floridi, L. (2008). Foundations of information ethics. In K. E. Himma & H. T. Tavani (Eds.), The handbook of information and computer ethics (pp. 3–23). Hoboken, NJ: John Wiley and Sons.


  • Floridi, L. (2011). On the morality of artificial agents. In M. Anderson & S. L. Anderson (Eds.), Machine ethics (pp. 184–212). Cambridge: Cambridge University Press.


  • Grodzinsky, F. S., Miller, K., & Wolf, M. J. (2011). Developing artificial agents worthy of trust: ‘would you buy a used car from this artificial agent?’. Ethics and Information Technology, 13(1), 17–27.


  • Himma, K. E. (2009). Artificial agency, consciousness, and the criteria for moral agency: what properties must an artificial agent have to be a moral agent? Ethics and Information Technology, 11(1), 19–29.


  • Johnson, D. G. (2006). Computer systems: moral entities but not moral agents. Ethics and Information Technology, 8(4), 195–204.


  • Lim, H. C., Stocker, R., & Larkin, H. (2008). Review of trust and machine ethics research: towards a bio-inspired computational model of ethical trust (CMET). In Proceedings of the 3rd International Conference on Bio-Inspired Models of Network, Information, and Computing Systems. Hyogo, Japan, Nov. 25–27, Article No. 8.

  • Lin, P., Abney, K., & Bekey, G. A. (Eds.). (2012). Robot ethics: the ethical and social implications of robotics. Cambridge, MA: MIT Press.


  • McLeod, C. (2011). Trust. In E. N. Zalta (Ed.), Stanford Encyclopedia of Philosophy. Available at

  • Moor, J. H. (1985). What is computer ethics? Metaphilosophy, 16(4), 266–275.


  • Moor, J. H. (1997). Towards a theory of privacy for the information age. Computers and Society, 27(3), 27–32.


  • Moor, J. H. (2006). The nature, difficulty, and importance of machine ethics. IEEE Intelligent Systems, 21(4), 18–21.


  • Nissenbaum, H. (2004). Privacy as contextual integrity. Washington Law Review, 79(1), 119–157.


  • Nissenbaum, H. (2010). Privacy in context: technology, policy, and the integrity of social life. Palo Alto, CA: Stanford University Press.


  • Royal Academy of Engineering. (2009). Autonomous systems: social, legal and ethical issues. London. Available at:

  • Simon, J. (2010). The entanglement of trust and knowledge on the web. Ethics and Information Technology, 12(4), 343–355.


  • Taddeo, M. (2009). Defining trust and E-trust: old theories and new problems. International Journal of Technology and Human Interaction, 5(2), 23–35.


  • Taddeo, M. (2010a). Modeling trust in artificial agents: a first step in the analysis of E-trust. Minds and Machines, 20(2), 243–257.


  • Taddeo, M. (Ed.) (2010b). Trust in technology: a distinctive and problematic relationship. Special Issue of Knowledge, Technology and Policy 23(3–4).

  • Taddeo, M., & Floridi, L. (Eds.) (2011). The case for E-trust: a new ethical challenge. Special Issue of Ethics and Information Technology 13(1).

  • Tavani, H. T. (2011). Can we develop artificial agents capable of making good moral decisions? Minds and Machines, 21, 465–474.


  • Tavani, H. T. (2012). Ethical aspects of autonomous systems. In M. Decker & M. Gutmann (Eds.), Robo- and Informationethics: some fundamentals (pp. 89–122). Berlin, Germany: LIT.


  • Tavani, H. T. (2013). Ethics and technology: controversies, questions, and strategies for ethical computing (4th ed.). Hoboken, NJ: John Wiley and Sons.


  • Tavani, H. T., & Buechner, J. (in press). Autonomy and trust in the context of artificial agents. In M. Decker & M. Gutmann (Eds.), Evolutionary robotics, organic computing, and adaptive ambience. Berlin, Germany: LIT.

  • Tavani, H. T., & Arnold, D. (Eds.), (2011). Trust and privacy in a networked world. Special Issue of Information 2(4).

  • Turkle, S. (2011). Authenticity in the age of digital companions. In M. Anderson & S. L. Anderson (Eds.), Machine ethics (pp. 62–78). Cambridge: Cambridge University Press.


  • Veruggio, G. (2006). EURON roboethics roadmap (Release 1.1). In G. Veruggio (Ed.), EURON roboethics atelier. Genoa, Italy. Available at

  • Walker, M. U. (2006). Moral repair: reconstructing moral relations after wrongdoing. Cambridge: Cambridge University Press.


  • Wallach, W., & Allen, C. (2009). Moral machines: teaching robots right from wrong. New York: Oxford University Press.




An earlier version of this essay was presented at the Second International Symposium on Digital Ethics, Loyola University Chicago (USA), October 29, 2012. I am grateful to Jeff Buechner, Lloyd Carr, and Jim Moor for their very helpful comments and suggestions on earlier drafts of this essay. I am also grateful to the anonymous Philosophy and Technology reviewers for their constructive criticisms and keen insights, many of which have been incorporated into the final version of this essay. Finally, I wish to thank Joanne Abate Tavani for permitting me to include the various scenarios involving “my wife” in section 4; contrary to what these hypothetical scenarios might (unintentionally) suggest, I am pleased to note that Joanne is in excellent physical and mental health!

Author information



Corresponding author

Correspondence to Herman T. Tavani.


About this article


Cite this article

Tavani, H.T. Levels of Trust in the Context of Machine Ethics. Philos. Technol. 28, 75–90 (2015).
