
Design for the Value of Trust

  • Philip J. Nickel
Living reference work entry

Abstract

The relationship between design and trust has recently been a topic of considerable scholarly discussion. There are several reasons for this. First, interpersonal trust is an especially relevant concept in information, communication, and networking technologies, because these technologies are designed to facilitate transactions and exchanges between people. Second, digital information has become ubiquitous and can itself be the object of a trust-like attitude, since people rely on it to meet their expectations under conditions of time and information scarcity. And finally, perhaps as a result of the first two points, designers have started to take on the role of expressly encouraging user trust by incorporating in their designs perceptual and social cues known to increase trust. This chapter explores some of the philosophical issues surrounding trust “by design” and explains how to apply Design for Values to trust.

Keywords

Trust in technology · Technological mediation · Trustworthiness · Ethics of trust · Epistemology of trust

Introduction

A traditional approach to design for trust is to make artifacts, processes, and systems that are reliable or trustworthy. A car is reliable when it has been designed to function safely and efficiently, and this reliability fosters trust in the technology and the company that produces it. Recently, however, designers have taken on a different role, inviting trust directly by using perceptual and social cues known to encourage trust (e.g., Glass et al. 2008). The focus then shifts from the reliability of the system to the psychological state of the user. Whereas previously the psychological state of the user was left to the user himself or to the advertising department, now it is a focus of design. This shift in focus was caused in part by the ICT (information and communications technology) revolution, in which technologies were designed to mediate social relationships. The dispositions of users to place trust in other users became an explicit subject of design (Riegelsberger et al. 2003). As a result, design for trust in its new incarnation is reflexive: it encompasses both the creation of reliable and trustworthy products and systems and also explicit reflection on the trust of the user.

Trust is a good thing, but for many years the academic literature on trust has emphasized that it is not good always and everywhere (Baier 1986; Coleman 1990; Hardin 1993). The reason is not just that the value of trust is sometimes overridden by other values such as security or fairness. After all, many important values can be overridden in some circumstances by other more salient values. The reason is more pointed: unlike some other values, trust is a psychological state that represents the trusted person or object as being trustworthy, and this may or may not actually be the case. When the thing one relies on is not trustworthy, then trust is inappropriate or even dangerous. In other words, trust involves accepting one’s vulnerability to others, willingly placing oneself in their hands to some extent. Therefore, to encourage trust is to encourage a kind of vulnerability (Baier 1986). This is why it is important for designers’ current reflexive concerns about trust to remain coupled with a traditional concern for trustworthiness and reliability. In this chapter, I will not focus on reliability, responsibility, and the like, which provide backing or grounding to the value of trust, because these are separate values, treated elsewhere in this volume. I will instead keep the emphasis on the psychological state of trust in the user and the ethics of designing with this psychological state in mind, so as to deal with what is distinctive about trust.

This chapter introduces some prevalent conceptions of trust and indicates how they can be used to inform design. In the section “The Value of Trust”, the definition of trust is set out and some disagreements about how to conceptualize it are discussed. In the section “Design for Trust”, several philosophical approaches to the idea of design for trust are discussed, and the methodology of Design for Values is applied to the value of trust. In the section “Cases and Examples”, a number of case studies of design for the value of trust are described. The sections “Open Issues and Future Work” and “Conclusions” raise questions for future work and conclude the chapter, respectively.

The Value of Trust

Conceptions of Trust and Its Value

In order to design for trust, we want to know both what trust is and when it is appropriate or inappropriate. The first question – “What is trust?” – depends on what we hope to explain using the concept. Social scientists and philosophers agree, by and large, that trust refers to human reliance that is:
  • Willing and voluntary

  • Carried out under conditions of uncertainty and vulnerability

Trust is often conceptualized as a relationship between a trustor, a trustee, and a desired performance or a domain of interaction or cooperation. This is called “three-place trust” (Baier 1986), in which one person trusts a second person or entity to do some particular thing (e.g., to make a payment) or to promote his or her interests in a domain of shared interaction (e.g., the financial domain). Thinking in this way, we can focus on certain performances by the trustee that fulfill or satisfy the trustor’s trust-based expectations.

Although social scientists and philosophers are also sometimes interested in “two-place” trust, in which a person simply trusts another person or entity, with no particular performance or domain in mind, this two-place notion of trust is usually understood as being derivative from and less explanatorily useful than three-place trust. For this reason it will not be emphasized in what follows. It can, however, be fruitful to consider a less conscious notion of “basal” trust (the term comes from Jones 2004) that concerns the regular behavior of the natural world, the functioning of one’s own body and faculties, of social practices and institutions, and of the built and engineered environment. Throughout our lives we rely on tacit assumptions about how things will or are “supposed to” behave (Carusi 2009). This basal trust only becomes visible when there is some kind of breakdown (Jones 2004) or when we imagine a scenario of breakdown (Nickel 2010). Technology can induce this as well (Carusi 2009). Despite its implicitness, basal trust is not the same thing as two-place trust. Arguably, even in basal trust a person trusts some entity to do something, even if this reliance is tacit and unreflective.

In addition to questions about what trust is, there are also evaluative questions concerning when it is appropriate or good to trust and concerning the importance of trust in relation to other values. Suppose you are using a Web-based tool for project collaboration and want to know whether you can rely on another user whom you do not know personally. Should the system give you reasons to believe that the person is trustworthy before you are expected to interact with him or her, or would that impose an unrealistic barrier that gets in the way of the practical value of cooperation? Emphasizing the evidential dimension of trust over its practical and pragmatic dimension, or vice versa, can yield different outcomes for design. In addition, what kinds of other values within design strengthen or weaken trust? For example, accessibility by many users might be “democratic,” but it may also introduce untrustworthy users, thereby decreasing trust. Rigorous security and safety measures, on the other hand, seem to take the place of trust rather than encouraging it.

Issues of Controversy Regarding Trust and Its Value

There are a number of scientific controversies about the nature of trust, especially about the characteristic kinds of motives and evidence that underlie trust. First, philosophers tend to think of trust as a moral concept and say that moral or richly affective motivations underlie trust (McLeod 2002; Simpson 2011; Lagerspetz and Hertzberg 2013), whereas social scientists often leave this open or tie it to a nonmoral, impersonal motivation such as expectation that one will have additional interactions with the trusted party in the future (Coleman 1990; Hardin 2006).1 Second, some stress that the idea of trust applies most clearly to people who know each other well and have an ongoing relationship, whereas others emphasize the importance of cooperation between strangers. For our purposes here, we will leave these questions open and consider trust in a broad sense including all these motivations.

However, some philosophical problems and controversies surrounding trust have a special relevance for design and will reappear in other parts of this chapter:
  • Anthropocentrism: People often speak about trust in technology or in specific artifacts, but it seems inappropriate to take a rich affective or moral attitude toward a mere thing rather than a person (Nickel et al. 2010), since doing so commits a kind of pathetic fallacy, anthropomorphizing an object.

  • Evidence for Trust: Evidence is highly relevant when figuring out whether to rely on another person or entity (e.g., a computer system or another system user), particularly when the stakes are high. There is disagreement about whether trust is typically based on evidence that the person or thing relied upon is reliable (Gambetta 1988) or whether it is a “leap of faith” carried out under conditions of uncertainty on the basis of non-evidential information (Möllering 2006). People often trust on the basis of weak evidence, and so long as the trusted entity is actually trustworthy, it is not clear why having more evidence is better. Indeed, a policy of never trusting without conclusive evidence is usually harmful overall (Hardin 1993).

  • Discretion of the Trusted: Greater assurance of reliability seems to increase trust, but on the other hand, constant surveillance and strict enforcement of performance (e.g., with legal sanctions) tend to make trust irrelevant (O’Neill 2002; Smolkin 2008). Trust is most relevant when the person or thing trusted has the discretion to choose how to behave.

These problems raise a number of questions about willing reliance upon a technology, and they prompt a closer look at the motives for trust. Psychological research has mostly shown that risk aversion and overall trustfulness are not closely related (e.g., Ben-Ner and Halldorsson 2010). This points toward an important conceptual truth about trust: although we may wish to say that trust is based on evidence, it is a different kind of evidence than that relevant to a risk judgment. Whereas risk judgments are based on predictions and associated emotional states such as fear, the evidence relevant to trust concerns something else entirely, namely, knowledge of others’ motives and interests, norms regarding roles and relationships, and situational knowledge. Once one has a standing relationship with somebody on the basis of which normative expectations are formed, further evidence is no longer needed to trust them unless specific doubts arise. In addition, constant surveillance and strict enforcement make motive-based knowledge irrelevant. This means that how a technology mediates the motives of its users is a central issue of design for trust.

Design for Trust

Existing Approaches and Tools

This section looks at existing conceptions of design for trust and its methodology. Broadly speaking, we distinguish between anthropocentric conceptions of design for trust, which limit trust to interpersonal relationships, and non-anthropocentric conceptions which allow trust to have things other than humans as its object. Friedman, Kahn, and Howe (2000) define trust in a way that explains their anthropocentric approach: in trust one ascribes goodwill to others, allowing oneself to be vulnerable to them (citing Baier 1986). Goodwill requires consciousness and agency. But since “technological artifacts have not yet been produced … that warrant in any stringent sense the attribution of consciousness or agency,” it follows that “people trust people, not technology” (Friedman et al. 2000, p. 36). Despite their focus on interpersonal trust, they are nonetheless especially concerned about trust with regard to the Internet and other ICTs. The reason is that these are technologies that facilitate social transactions and commercial exchange – situations where interpersonal trust between users is required or highly instrumental.

In the previous section, we raised the question of what kind of evidence should be made available to users in order to establish trust (the issue of Evidence for Trust). Friedman, Kahn, and Howe define the relevant Evidence for Trust in terms of three types of information:
  • About possible harms

  • About the motives of the persons with whom a user interacts by way of the technology

  • About whether those persons’ motives could cause the indicated harms (2000, p. 35)

In accordance with this view, Friedman, Kahn, and Howe often restrict design for trust to voluntary human factors in interpersonal interaction. They argue that it is important “not to conflate trust with other important aspects of social interaction” that could also fail, such as having insufficient information about an entity or person on which one relies (2000, p. 37). Harm that occurs “outside the parameters of the trust relationship” does not count against trust (2000, p. 35). Friedman, Kahn, and Howe put forward a number of engineerable factors that help cultivate trust online, such as reliability and security of the technology, protection of privacy, self-assessment of reliability, honest informational cues, accountability measures, and informed consent. According to them, these factors help facilitate interpersonal trust between buyers and merchants and between individuals who participate in online fora.

By contrast, others writing about design for trust have an inclusive, non-anthropocentric conception of trust. They include technology itself as an appropriate object of trust. For example, Kelton, Fleischmann, and Wallace argue that digital information is itself a paradigmatic object of trust, pointing out that “the overwhelming volume of information on the Internet creates exactly the type of complexity that gives rise to the need for trust” (2008, p. 368). They argue that some important hallmarks of trust are present in our relation to information on the Internet, namely, uncertainty, vulnerability, and dependence. A similar but broader argument could be given regarding our more general reliance on technological systems. Given that we are uncertain about, vulnerable to, and dependent on technological systems and that this is ineliminable because of the sheer complexity of these systems and their integral involvement in our daily lives, we should also be willing to speak of trust in technological systems while acknowledging that this is of a different nature than interpersonal trust (Nickel 2013). Our expectations of technology are not just predictive but also normative – they involve the attitude that the technology should perform in certain ways and should promote or protect our relevant interests. In that case, design for trust should provide cues and evidence that help people ground their trust in technological systems (Evidence for Trust). Design for trust implies that the designer pays attention to the user’s expectations of a technological artifact and tries to create a condition in which the user has warranted expectations with regard to the actual functions of the artifact (Nickel 2011).

Another important strand of non-anthropocentric thought about design for trust draws on continental philosophy of technology. Heidegger, for example, has been interpreted by Kiran and Verbeek (2010) as holding that technology can itself be an object of trust or suspicion. In this line of thought, trust signals a constructive relationship to technology. As users of technology, we do not take technology as a mere instrument for prior goals. Instead, we engage with it actively and take responsibility for how our “existence is impacted” by it (2010, p. 424). The analysis of how technology mediates human action and freedom in this way, and of its foreseen and unforeseen effects on our concepts and social and ethical practices, provides a starting point for ethical and practical reflection. For example, Verbeek (2008) analyzes how imaging technology mediates medical decision-making, arguing that it has a profound impact on agency in this area. Although Verbeek (2008) does not mention trust, we can read Kiran and Verbeek’s analysis of trust in technology back into the case of medical imaging in two ways. First, imaging technology complicates trust between physician and patient, a relationship in which trust is acknowledged to be of central importance (see, e.g., Eyal 2012). The trustworthiness of the physician and the reliability of the imaging technology are defined in relation to one another: the physician vouches for its reliability, and its detailed images may invite or require expert interpretation. But furthermore, trust by both the physician and the patient in the imaging technology is a kind of purposive, constructive engagement with technology and plays a powerful role in determining what is going on in a given encounter: what kinds of considerations are taken as relevant, how clinical consultations proceed, and what responses are considered standard. An analysis of trust in technology in this deeper sense can be useful for design since it gives insight into how a technology transforms, or is likely to transform, our actions, perceptions, and practices. “Basal trust” is often the focus here, since basal trust concerns what we regard as “comfortable,” “everyday,” or “normal,” creating the background assumptions framing human action and interaction.

Now that we have discussed both anthropocentric and non-anthropocentric accounts of design for trust, we can discuss how design methods incorporate the value of trust. How can value-based reflection discover that trust is important to a design, and how can this reflection be made a part of the subsequent design process? Design for Values, as described in other chapters in this volume, covers several methods for doing this. Here we will focus on value-sensitive design. Value-sensitive design is a specific approach to Design for Values that has been applied to such widely divergent cases as the user interfaces for missile-guidance systems (Cummings 2006) and office interior design (Friedman et al. 2006). As originally set out in Friedman et al. (2006), value-sensitive design consists of three related aspects or phases of research and design work: a conceptual phase in which relevant values and potential conflicts between them are identified, an empirical evaluation of how (well) various values are realized in various permutations of a design, and a technical phase that attempts to resolve conflicts between values or achieve a more effective realization of those values through engineered solutions. Here I will briefly discuss these three phases in relation to trust.

Conceptual. According to value-sensitive design, trust will sometimes emerge as an important value during the conceptual phase. But it is not always clear how this is supposed to be determined: what process or criterion should be used? Manders-Huits, for example, raises the question of what counts as a value within value-sensitive design (2011). We can answer this question by referring to the conception of trust discussed above. There are two main cues that indicate that trust is a salient value in design. The first is whether a design mediates interpersonal relationships in a new way, or for new users, or in a new context, or whether existing relationship-mediating features of a technology are believed to be lacking in some way. The second is whether technology adoption occurs under conditions of uncertainty, dependence, and resource limitations for users, where the context is new or existing solutions are believed to be lacking. These two tests can be used to discover whether trust is a salient value in the conceptual phase of value-sensitive design or other Design for Values methodologies. This can be facilitated by explicitly discussing issues of trust during interactions with users and stakeholders, but often it will emerge naturally as a design problem is made more concrete.
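To make these two cues concrete, the following minimal sketch encodes them as a simple screening aid that a design team might apply to a design brief during the conceptual phase. It is purely illustrative: the field names and the decision rule are assumptions introduced here, not part of value-sensitive design itself.

    # Hypothetical screening sketch: the two cues from the text, encoded as a
    # checklist for the conceptual phase. The field names and decision rule are
    # illustrative assumptions, not part of value-sensitive design methodology.
    from dataclasses import dataclass

    @dataclass
    class DesignBrief:
        mediates_relationships: bool      # does the design mediate interpersonal relationships?
        new_way_users_or_context: bool    # ...in a new way, for new users, or in a new context?
        existing_mediation_lacking: bool  # are existing relationship-mediating features lacking?
        adoption_under_uncertainty: bool  # adoption under uncertainty, dependence, resource limits?
        existing_solutions_lacking: bool  # are existing solutions believed to be lacking?

    def trust_is_salient(brief: DesignBrief) -> bool:
        """Flag trust as a salient value if either of the two cues applies."""
        cue_relationship_mediation = brief.mediates_relationships and (
            brief.new_way_users_or_context or brief.existing_mediation_lacking
        )
        cue_uncertain_adoption = brief.adoption_under_uncertainty and (
            brief.new_way_users_or_context or brief.existing_solutions_lacking
        )
        return cue_relationship_mediation or cue_uncertain_adoption

In practice such a checklist would be answered in discussion with users and stakeholders rather than computed; the point is only that the two cues can be stated precisely enough to be applied systematically.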

Technical. When trust is discovered to be a key value, other design methods can be used to help determine how to balance it with other values and realize it technically within the design. Which method is used will depend on whether it is a redesign or a more radical innovation. Vermaas et al. (2010) embed design for trust within two existing design methodologies: Quality Function Deployment (King 1989; Akao 1990), used for redesign, and a creative design methodology for new designs as described by Cross (2006). In the case of redesign, “the values derived from trust … are listed as user requirements and … IT developers analyze which of the characteristics of their existing systems are relevant to meeting these values” (Vermaas et al. 2010, p. 502). In the case of creative design, an ongoing process of discussion results in an open dialogue in which “a ‘space’ of design solutions co-evolves with a ‘space’ of design problems” and strong engagement with users and clients is needed throughout the process as solutions evolve (Vermaas et al. 2010). Vermaas et al. view the use of these design methodologies as compatible with value-sensitive design, although they do not explicitly relate them to the structure of that approach which divides design tasks into conceptual, empirical, and technical phases.

Empirical. The empirical phase measures the realization of trust within different technical implementations of a design. Here it is important to remember three crucial points. First, trust is more than mere reliance. At a minimum, it is a voluntary disposition toward reliance under conditions of uncertainty. Trust uses information about the motives, interests, and character of individuals or about the functions of artifacts and systems, together with situational knowledge, to overcome uncertainty. For that reason, trust cannot be equated simply with a risk estimate (since estimating risk directly is not the characteristic basis of trust), nor can it be equated with a disposition to cooperate or engage in reliant behavior (since such a disposition might not be fully voluntary, e.g., when there are no other good options). If the expected behavior is certain, e.g., because it is being enforced coercively, then trust is not the explanation for one’s reliance on that behavior. (This is the issue of the Discretion of the Trusted mentioned earlier.) Second, trust is usually thought to involve a normative expectation that somebody or something should perform a certain way, or to a certain standard, and this normative expectation forms part of the reason for relying on the trusted entity. This is what distinguishes trust from a willing disposition toward reliance grounded in a purely predictive or statistical expectation. And third, trust is not the same thing as trustworthiness. When we measure trust, we are dealing with a psychological disposition centrally consisting of people’s expectations. Trustworthiness, on the other hand, is a quality of a person, system, or artifact such that it is likely to perform as expected. Showing that people trust (within) a design does not imply that it is trustworthy, nor the other way around.
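As a minimal illustration of this last point, the sketch below keeps the two measurements separate: self-reported trust on the one hand and observed reliability of the system on the other. The function names and the simple success-ratio formula are assumptions made for illustration; they are not drawn from any particular empirical study.

    # Illustrative sketch only: keeping measured trust separate from measured
    # trustworthiness (reliability). The function names and the simple success
    # ratio are assumptions for illustration, not a validated instrument.
    from statistics import mean

    def mean_trust_score(survey_responses: list[float]) -> float:
        """Aggregate self-reported trust, e.g., Likert-scale expectation ratings."""
        return mean(survey_responses)

    def observed_reliability(successes: int, trials: int) -> float:
        """Aggregate observed performance of the system itself."""
        return successes / trials if trials else 0.0

    # A prototype can score high on one measure and low on the other: a high
    # mean_trust_score combined with a low observed_reliability signals trust
    # that the design has elicited but not earned.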

Comparison and Critical Evaluation

It can be difficult to settle differences between philosophical views about design for trust, because each view emphasizes different elements that are important for design. Friedman, Kahn, and Howe’s emphasis on trust as involving the willful actions of those who interact online, such as buyers and sellers, is supported by the philosophical literature emphasizing interpersonal trust. However, it sharply restricts the domain of design for trust. For example, some harms or bad outcomes incurred through acts of reliance are caused by technical problems or human accidents rather than the (ill) will of those interacting, and yet these technical problems and accidents also seem to affect trust. Suppose a Web merchant accidentally doubles an order and overcharges the buyer. It would seem to follow from Friedman, Kahn, and Howe’s view that such an incident does not have to do with trust so long as the humans involved are non-culpable and non-negligent. Responsibility for the incident cannot be attributed to the merchant’s ill will, after all. Since Friedman, Kahn, and Howe’s guidelines for design for trust are intended to address cases such as these, they have to make an indirect argument, claiming that engineered factors affect trust because “people frequently draw on cues from the [engineered] environment to ascertain the nature of their own vulnerabilities and the good will of others” (Friedman et al. 2000, p. 37). However, this claim seems to assume that the technology has a strong mediating role in the formation and presentation of human motives. It does not sit easily with their claim, quoted earlier, that we must not confuse failure of trust with having insufficient (or incorrect) information about an entity or person on which one relies.

Broader, non-anthropocentric views of trust that allow for trust in technological artifacts do not have this problem, but they encounter the criticism that trust in technology is indistinguishable from mere reliance or judgments of reliability (Nickel et al. 2010). To some extent, this criticism can be met by giving an account of the moral, affective, or emotional aspects of trust in artifacts and technological systems, such as the frustration one feels when an artifact or system breaks down. Such emotions and normative judgments go beyond mere reliance. However, it may seem that normative, affective, or emotional attitudes about technological artifacts are irrational, since technologies are in the final reckoning just “brute matter.” One may get angry at one’s car when it does not start, but perhaps this does not signal a rich relationship of trust yielding insights for design (although the concept of “reactance” to technology is taken seriously by human-technology interaction theorists – see, e.g., Lee and Lee 2009; Roubroeks et al. 2011). It has even been argued that affective, social attitudes toward computers, robots, or persuasive technologies should be discouraged by design and that encouraging these attitudes is morally questionable because it deceives technology users (Friedman 1995).

A compromise view could be reached by focusing on the situation of users who in various situations want or need to rely on technological artifacts and systems and have little time or expertise for the evaluation of whether this reliance is a good idea. Think for a moment of users who are bombarded with information and opportunities to use technologies and who have a finite supply of attention and cognitive resources to spend on the question of whether and how to use them. Various possible ultimate objects of trust such as other users, system operators, designers, manufacturers, owners, and the technology itself are not clearly separated in the user’s mind. From a design point of view, it is more important to consider the parameters of user choices (How much time does the user have? How many options does he or she have? What is at stake for him or her? What does he or she expect?) and the various kinds of evidence available to him or her (Is the technology familiar? Have my past experiences with it been good? Does it look reliable? Does a known entity vouch for its reliability? Can I retaliate or complain if it does not work?) than to draw overly fine distinctions between types of objects of trust. It is also important for the designer to keep these questions in mind in case he or she wishes to create uncertainty, distrust, or doubt (the flip side of Evidence for Trust). The designer can help direct people’s attention, bringing them to focus critically on some questions of reliance and not others. Anthropocentric and non-anthropocentric theories of trust both contribute to these practical questions about reliance and the reasons behind it. Although interpersonal trust is special, in these contexts it is also useful to consider a trust-like attitude that can be taken toward technologies and socio-technical systems.

Now that we have compared these views of what design for trust includes in its scope, we briefly consider Design for Values in relation to trust. Critical issues about the methodology of Design for Values and value-sensitive design are discussed elsewhere in this volume (see the chapters “Value-Sensitive Design,” “Design Methods in Design for Values,” and “Operationalization in Design for Values”). Here we focus on two notes of critical caution specific to trust. First, it is important not to take too narrow a view of which potential trust relationships are relevant to a design. A residential community secured with forbidding gates and high walls may encourage trust among its residents, but discourage wider public trust among citizens. If a designer only looks at the effect on residents, they might deem this design to promote trust. However, a study with a wider perspective might judge that it hampers trust overall, partly because it enforces security physically instead of leaving it as a matter for the broader community to manage through a sense of mutual reliance and common purpose (a point that relates to the idea of Discretion of the Trusted from earlier). Manders-Huits (2011) makes the important point that value-sensitive design does not always make it clear who is a stakeholder requiring consideration from the perspective of values. From an ethical point of view, all of those affected by a design are potentially relevant. This is also true regarding the value of trust: one should begin with a wide view of whose trust is relevant and what objects of trust are relevant.

Second, it is important not to separate the process of design for trust from its outcome. How one involves stakeholders, clients, and users in a process of Design for Values can have an effect on whether they form trust in and within the system when the design is actually implemented. The lessons of participatory design and stakeholder involvement are crucial (Reed 2008). Participatory processes can facilitate trust. For example, as one pair of authors writing about participatory design in architecture writes, “it may be only after clients understand what architects face in designing that trust develops. That is one advantage of a participatory design process … where people have direct experience of the design challenges architects face” (Franck and Von Sommaruga Howard 2010). Here, it is the “experience” of Design for Values that reflexively stimulates trust in design.

Cases and Examples

In this section we consider some concrete examples and case studies of design for the value of trust. First, it is useful to note that these case studies do not explicitly mention or attempt to use the methodology of Design for Values described above (with the exception of Vermaas et al. 2010). A second general observation is that most of the examples and case studies come from the domain of ICT. Case studies in ICT often emphasize the importance of user identity mediation: how information about other users and their actions is mediated by the technology. For example, Pila (2009) and Carusi (2009) discuss how systems designed to distribute scientific knowledge and medical tasks can encourage (justified) user trust, focusing on the case studies of CalFlora (Van House 2002) and eDiaMoND (Jirotka et al. 2005). CalFlora is a library of botanical photographs and reports contributed by and available to scholars and horticulturalists. Van House raises the question of how the digital environment of CalFlora mediates judgments about the authenticity and authorship of the photos in the database, relating this to trust. She argues that an important condition for trust in such an environment is that contributors to the database have a stable virtual identity that can serve as a nominal pigeonhole in which to store knowledge about trustworthiness and authenticity (Van House 2002). Pila endorses this idea, but argues furthermore that having too much identifying information can cause users to rely on others in ways that emphasize existing biases, personal ties, and power relations, which distorts their judgment and blocks healthy skepticism (Pila 2009). The design of the system mediates these epistemic practices of trust and skepticism, in a way that links with the issue of Evidence for Trust mentioned earlier.
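Van House’s idea of a stable virtual identity as a “nominal pigeonhole” can be sketched, very roughly, as a registry that accumulates trust-relevant history under a persistent contributor identifier. The record fields below (endorsements, disputes, and so on) are hypothetical and do not describe CalFlora’s actual data model.

    # A rough sketch of the "nominal pigeonhole" idea: a registry that accumulates
    # trust-relevant history under a stable contributor identity. The record fields
    # are hypothetical; they do not describe CalFlora's actual data model.
    from collections import defaultdict
    from dataclasses import dataclass, field

    @dataclass
    class ContributorRecord:
        contributions: list = field(default_factory=list)  # e.g., photo or report identifiers
        endorsements: int = 0                               # peer confirmations of accuracy
        disputes: int = 0                                   # challenges to authenticity or authorship

    class ContributorRegistry:
        """Stores trust-relevant history under a persistent virtual identity."""

        def __init__(self) -> None:
            self._records = defaultdict(ContributorRecord)

        def add_contribution(self, contributor_id: str, item_id: str) -> None:
            self._records[contributor_id].contributions.append(item_id)

        def endorse(self, contributor_id: str) -> None:
            self._records[contributor_id].endorsements += 1

        def dispute(self, contributor_id: str) -> None:
            self._records[contributor_id].disputes += 1

        def history(self, contributor_id: str) -> ContributorRecord:
            # A stable identifier lets users consult accumulated evidence before
            # relying on a new contribution (the issue of Evidence for Trust).
            return self._records[contributor_id]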

Pettit (2004) takes a gloomier view of trust online, arguing that the anonymity of those who interact online is a barrier to trust. Basing his argument on an account of trust that links it strongly to the socially created and maintained currency of reputation (Pettit 1995), he argues that systems that do not allow for users to develop reputations make trust impossible. He argues that “on the Internet … we all wear the ring of Gyges,” referring to the tale in Plato’s Republic in which a shepherd becomes invisible and takes advantage of his ability to engage in unjust acts with impunity (Pettit 2004, p. 118). On this account, reputation and trust are both impossible on the Internet. Although Pettit’s argument has been criticized (de Laat 2005), there is something highly valuable in it, as one can see from the fact that major commercial websites such as Amazon and eBay make reputational information storable and visible to buyers, and this has been a central feature of their sites over many years (see Resnick and Zeckhauser 2002). Such reputation systems recreate this aspect of non-virtual trust within the virtual market environment, mediating the identities of users to one another.

Carusi (2009) also points out that it is important to build non-virtual aspects of trust into collaborative work-support systems. She focuses on the case study of eDiaMoND, a system for the distribution of breast cancer screening tasks among professionals. The eDiaMoND system allows judgments about medical images to be performed remotely as well as double-checked by professionals from another health-care facility. Jirotka et al. (2005) formulate the epistemological problems raised by the design of the system: “how can a reader who lacks knowledge of the (local) conditions of a mammogram’s production read that mammogram confidently[?] … second, how can a reader unknown to one be trusted to have read mammograms in an accountably acceptable manner?” (389, cited in Carusi 2009, p. 31). Carusi argues that when the system allows users to create and leave familiar kinds of contextual information, such as notes on a particular mammogram (even if they are anonymized), this helps professionals to situate their judgments of trust and distrust within familiar epistemic and social practices. New information technologies tend to disrupt these familiar practices and thereby bring issues of trust to the foreground. It may be necessary to build these practices into the system in some way in order to allow for trust among users.

User identity mediation and Evidence for Trust are major issues throughout these case studies and others. Describing an article by Bicchieri and Lev-On (2011) that looks at the effect of information about other agents on cooperative outcomes in a computer model of cooperative behavior, Vermaas et al. state that designers of ICT must confront questions of “which information about reputations, history, and identity should be made available … how much … and when” (2010, p. 498). Case studies from ICT thus clearly indicate that how a system constructs and mediates users’ identities, motives, and reputation, and how much it allows them to provide trust cues to other users, is one of the primary issues of design for trust.
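The design question formulated by Vermaas et al. (which information about reputations, history, and identity to make available, how much, and when) can be thought of as a policy with a small number of adjustable parameters. The following sketch is one hypothetical way of representing such a policy; the option names are assumptions and are not taken from any system discussed in this chapter.

    # One hypothetical way of representing the exposure of trust cues as an explicit
    # design policy. The option names are assumptions, not drawn from any system
    # discussed in this chapter.
    from dataclasses import dataclass
    from enum import Enum

    class Disclosure(Enum):
        HIDDEN = "hidden"        # cue not shown at all
        AGGREGATE = "aggregate"  # only summary statistics, e.g., an overall rating
        FULL = "full"            # full history visible to other users

    @dataclass
    class TrustCuePolicy:
        reputation: Disclosure = Disclosure.AGGREGATE  # feedback from past interactions
        history: Disclosure = Disclosure.AGGREGATE     # record of prior transactions
        identity: Disclosure = Disclosure.HIDDEN       # real-world identifying information
        cues_shown_before_commitment: bool = True      # cues shown before the user must rely

    # Pila's worry about excess identifying information can be expressed as keeping
    # identity hidden while reputation remains visible in aggregate or full form.
    marketplace_policy = TrustCuePolicy(reputation=Disclosure.FULL)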

Some of the insights of these case studies of user identity mediation in ICT can be extended to architecture, urban planning, and other areas of design. For example, Katyal discusses how architecture can be used to encourage trust. “As architects bring natural surveillance to an area, they may ease community-police tensions” by encouraging mutual trust (2002, p. 1073). More generally, “architects can create spaces that bring people together or ones that set them apart. They can reinforce feelings of familiarity and trust or emphasize harshness and social chaos” (2002, pp. 1086–1087). Like ICT, architecture mediates human relations, although it does so in physical space, where informational channels about other people are less rigidly controlled (and also less easy to change once the design has been brought into being).

User identity mediation is not the only issue concerning interpersonal trust in ICT, however. As mentioned in section “Comparison and Critical Evaluation” above, the trust of stakeholders who are not “users” in any standard sense is often highly relevant. For example, a system can facilitate the trust of an external party that has an interest in how the system functions. Vermaas et al. (2010) consider the design of ICT systems that allow companies to control and report commercial activities subject to tax and customs on behalf of the government tax authority. In this case, trust is relevant because the tax authority depends on the companies themselves to monitor and enforce the relevant laws and regulations on their transactions. The process of doing so is largely carried out by complex ICT systems. Vermaas et al. suggest that in these kinds of cases, what is needed is participatory involvement of the external control and regulatory body in the design of the system, ensuring that the system facilitates trustworthy behavior by companies who use it.2 Other examples also indicate the importance of trust for nonusers: the design of an electronic voting machine system should encourage justified trust, not just among voters, but among the wider public, government bodies, etc. (see Pieters 2006).

So far we have focused on case studies of interpersonal trust as realized in and mediated by technology. There are other cases concerning trust in technology, which concern how people make willing choices to rely on complex technology under time pressure with limited evidence about reliability. This process is highly subject to influence by designers, because designers can use the technology and its embedding to communicate with users, establishing and/or building on normative expectations these users already have. Consider the examples from earlier in this chapter of medical imaging technology used to help patients and physicians make diagnoses and decisions. The design of such systems mediates how patients and physicians rely on them. If a system is designed, as in Verbeek’s (2008) example, so that an electronic image is seen and partially understood by the patient, then it provides a source of information independent from the physician’s interpretation of the image. It can even be designed to deliver a written or symbolic message directly to the patient, to which the physician is a bystander. This creates a complex relationship of reliance between patient, system, and physician. In addition to mediating patient-physician interpersonal trust, then, such systems also invite trust in technology itself. And despite anthropocentric accounts that emphasize interpersonal trust, here the patient’s reliance on the system has complex normative aspects. Ideas of the technology’s purpose and function help determine what the patient expects of the system and what he or she relies on it to do. In the context of the practice of medicine and its associated ethical responsibilities, such a technology invites a normative, even moralized notion of trust.

Open Issues and Future Work

There are several areas where more research is needed to advance the idea of design for the value of trust. Three areas will be highlighted here. First, our normative and moral expectations of other persons and entities are part of our reason for trusting them. How are such expectations about technology learned and communicated, and how do they guide our interaction with technology and other technology users? A framework is needed for thinking about the role of the designer as a communicator of expectations, affordances, and norms that guide us in when and how to rely on technology and when and how to rely on other technology users whose identities are mediated by the technology. This would involve interdisciplinary attention from philosophy, psychology, and design theory.

Second, returning to the issue of Evidence for Trust, what evidential standard should we try to meet in providing people with the materials for trust? What counts as the right amount and kind of evidence for people in a position to rely on technology and its other users? Since users are often under time and resource pressure, what simple cues can be used that also indicate or “ground” genuine trustworthiness? Some recent work suggests that the relevant standard is one on which the potential trustor should have an “adequate, sound justification for her trust” in a given technological system (Nickel 2013). But this may be too conservative a standard, because if the potential trustor has to wait for a sound justification, it may actually inhibit his or her trust formation. To what extent should we allow or even encourage people to make a leap of faith, counting on the reliability of others to catch them? If we do encourage people to make that leap, then we should be sure that the technology really lives up to their (reasonable) expectations of reliability.

A third area of future research concerns technological artifacts that involve built-in social and linguistic attributes that invite interpersonal trust (e.g., a talking robot with a friendly face). To what extent may we design anthropomorphic trust-inviting attributes such as these for users who cannot discern that they are not “real,” e.g., children or severely mentally disabled persons? How does such technology need to be framed and implemented in order to be respectful to technology users? For example, who is responsible for what the robot says? Should we make such technology directly responsive to user expectations (e.g., about what values such as sustainability are implemented in the interface of a car dashboard) in order to enhance trust?

Conclusions

This chapter has explored design for trust, focusing on how our conceptualization of trust affects what we take “design for trust” to mean. If we take interpersonal trust as the only kind of trust, then technology’s role is to mediate interpersonal trust. The area of application in which this is most apparent in case studies is user identity mediation, the way in which users of a technological system are presented to other users. However, design can also mediate human interpersonal trust in other ways. For example, it can do so by changing relationships (e.g., between patient and physician or between a private company and a government agency) or by establishing a new relationship (e.g., between a homeowner in a gated community and a stranger from outside that community). Furthermore, it is useful to consider designed artifacts and systems as being the object of trust. This is accentuated even further as technology and design are used increasingly to mediate central elements of our lives such as friendships and family relationships, work, mobility, and political participation. Future empirical and theoretical work is needed to better understand what we owe to those who rely on design, so that we can foster trust in technology and provide sound, well-grounded support for it.

Footnotes

  1. Cf. Uslaner (2002), who links trust to a general moral worldview grounded in personality and childhood experiences.

  2. Decentralized processes of control and regulation have developed greatly over the past forty years (Power 2007), and this has coincided with the development and integration of ICT systems in virtually all financial and business processes, so we can expect that similar kinds of cases, and similar issues of trust, will also appear in other regulatory and institutional contexts.

References

  1. Akao Y (ed) (1990) Quality function deployment: integrating customer requirements into product design. Productivity, Cambridge
  2. Baier A (1986) Trust and antitrust. Ethics 96:231–260
  3. Ben-Ner A, Halldorsson F (2010) Trusting and trustworthiness: what are they, how to measure them, and what affects them. J Econ Psychol 31:64–79
  4. Bicchieri C, Lev-On A (2011) Studying the ethical implications of e-trust in the lab. Ethics Inf Technol 13:5–15
  5. Carusi A (2009) Implicit trust in the space of reasons and implications for technology design: a response to Justine Pila. Soc Epistemol 23:25–43
  6. Coleman J (1990) Foundations of social theory. Harvard University Press, Cambridge, MA
  7. Cross N (2006) Designerly ways of knowing. Springer, London
  8. Cummings ML (2006) Integrating ethics in design through the value-sensitive design approach. Sci Eng Ethics 12:701–715
  9. de Laat PB (2005) Trusting virtual trust. Ethics Inf Technol 7:167–180
  10. Eyal N (2012) Using informed consent to save trust. J Med Ethics. doi:10.1136/medethics-2012-100490
  11. Franck KA, Von Sommaruga Howard T (2010) Design through dialogue: a guide for architects and clients. Wiley, Chichester
  12. Friedman B (1995) “It’s the computer’s fault” – reasoning about computers as moral agents. In: Conference companion of CHI 1995. ACM Press, pp 226–227
  13. Friedman B, Kahn PH Jr, Howe DC (2000) Trust online. Commun ACM 43:34–40
  14. Friedman B, Kahn PH Jr, Borning A (2006) Value sensitive design and information systems. In: Zhang P, Galletta D (eds) Human-computer interaction and management information systems. M.E. Sharpe, New York, pp 348–372
  15. Gambetta D (1988) Can we trust? In: Gambetta D (ed) Trust: making and breaking cooperative relations. Basil Blackwell, Oxford, pp 213–237
  16. Glass A, McGuinness DL, Wolverton M (2008) Toward establishing trust in adaptive agents. In: Proceedings of the conference on intelligent user interfaces (IUI), pp 227–236
  17. Hardin R (1993) The street level epistemology of trust. Polit Soc 21:505–529
  18. Hardin R (2006) Trust. Polity, New York
  19. Jirotka M, Procter R, Hartswood M, Slack R, Simpson A, Coopmans C, Hinds C, Voss A (2005) Collaboration and trust in healthcare innovation: the e-DiaMoND case study. Comput Support Collab Work 14:369–398
  20. Jones K (2004) Trust and terror. In: DesAutels P, Urban Walker M (eds) Moral psychology: feminist ethics and social theory. Rowman and Littlefield, Lanham, pp 3–18
  21. Katyal NK (2002) Architecture as crime control. Yale Law J 111:1039–1139
  22. Kelton K, Fleischmann KR, Wallace WA (2008) Trust in digital information. J Am Soc Inf Sci Technol 59(3):363–374
  23. King B (1989) Better design in half the time: implementing QFD in America, 3rd edn. GOAL/QPC, Methuen
  24. Kiran AH, Verbeek P-P (2010) Trusting our selves to technology. Knowl Technol Policy 23:409–427
  25. Lagerspetz O, Hertzberg L (2013) Trust in Wittgenstein. In: Mäkelä P, Townley C (eds) Trust: analytic and applied perspectives. Rodopi, Amsterdam, pp 31–51
  26. Lee G, Lee WJ (2009) Psychological reactance to online recommendation services. Inf Manag 46(8):448–452
  27. Manders-Huits N (2011) What values in design? The challenge of incorporating moral values into design. Sci Eng Ethics 17(2):271–287
  28. McLeod C (2002) Self-trust and reproductive autonomy. MIT Press, Cambridge, MA
  29. Möllering G (2006) Trust: reason, routine, reflexivity. Elsevier, Amsterdam
  30. Nickel PJ (2010) Horror and the idea of everyday life: on skeptical threats in Psycho and The Birds. In: Fahy T (ed) The philosophy of horror: philosophical and cultural interpretations of the genre. University of Kentucky Press, Louisville, pp 14–32
  31. Nickel PJ (2011) Ethics in e-trust and e-trustworthiness: the case of direct computer-patient interfaces. Ethics Inf Technol 13:355–363
  32. Nickel PJ (2013) Trust in technological systems. In: de Vries MJ, Hansson SO, Meijers AWM (eds) Norms in technology. Philosophy of engineering and technology, vol 9, pp 223–237
  33. Nickel PJ, Franssen M, Kroes P (2010) Can we make sense of the notion of trustworthy technology? Knowl Technol Policy 23:429–444
  34. O’Neill O (2002) Autonomy and trust in bioethics. Cambridge University Press, Cambridge
  35. Pettit P (1995) The cunning of trust. Philos Public Aff 24:202–225
  36. Pettit P (2004) Trust, reliance and the internet. Anal Krit 26:108–121
  37. Pieters W (2006) Acceptance of voting technology: between confidence and trust. In: Stølen K et al (eds) Trust management. Lecture notes in computer science, vol 3986. Springer, Berlin, pp 283–297
  38. Pila J (2009) Authorship and e-science: balancing epistemological trust and skepticism in the digital environment. Soc Epistemol 23:1–24
  39. Power M (2007) Organized uncertainty. Oxford University Press, Oxford
  40. Reed MS (2008) Stakeholder participation for environmental management: a literature review. Biol Conserv 141:2417–2431
  41. Resnick P, Zeckhauser R (2002) Trust among strangers in internet transactions: empirical analysis of eBay’s reputation system. In: Baye MR (ed) The economics of the internet and e-commerce. Advances in applied microeconomics, vol 11. Elsevier, Amsterdam, pp 127–157
  42. Riegelsberger J, Sasse MA, McCarthy JD (2003) Shiny happy people building trust? Photos on e-commerce websites and consumer trust. CHI Lett 5:121–128
  43. Roubroeks M, Ham J, Midden C (2011) When artificial social agents try to persuade people: the role of social agency on the occurrence of psychological reactance. Int J Soc Robot 3(2):155–165
  44. Simpson E (2011) Reasonable trust. Eur J Philos. doi:10.1111/j.1468-0378.2011.00453.x
  45. Smolkin D (2008) Puzzles about trust. South J Philos 46:431–449
  46. Uslaner E (2002) The moral foundations of trust. Cambridge University Press, Cambridge
  47. Van House N (2002) The CalFlora study and practices of trust: networked biodiversity information. Soc Epistemol 16:99–114
  48. Verbeek P-P (2008) Obstetric ultrasound and the technological mediation of morality: a postphenomenological analysis. Hum Stud 31:11–26
  49. Vermaas PE, Tan Y-H, van den Hoven J, Burgemeestre B, Hulstijn J (2010) Designing for trust: a case of value-sensitive design. Knowl Technol Policy 23:491–505

Copyright information

© Springer Science+Business Media Dordrecht 2013

Authors and Affiliations

  1. Eindhoven University of Technology, Eindhoven, Netherlands
