Knowledge, Technology & Policy, Volume 23, Issue 3, pp 461–482

Competence and Trust in Choice Architecture

Authors

  • Evan Selinger
    • Rochester Institute of Technology
  • Kyle Powys Whyte
    • Michigan State University

DOI: 10.1007/s12130-010-9127-3

Cite this article as:
Selinger, E. & Whyte, K.P. Know Techn Pol (2010) 23: 461. doi:10.1007/s12130-010-9127-3

Abstract

Richard Thaler and Cass Sunstein’s Nudge advances a theory of how designers can improve decision-making in various situations where people have to make choices. We claim that the moral acceptability of nudges hinges in part on whether Thaler and Sunstein can provide an account of the competence required to offer nudges, an account that would serve to warrant our general trust in choice architects. What needs to be considered, on a methodological level, is whether they have clarified the competence required for choice architects to subtly prompt our behavior toward making choices that are in our best interest from our own perspectives. We argue that, among other features, an account of the competence required to offer nudges would have to clarify why it is reasonable to expect that choice architects can understand the constraints imposed by semantic variance. Semantic variance refers to the diverse perceptions of meaning, tied to differences in identity and context, that influence how users interpret nudges. We conclude by suggesting that choice architects can grasp semantic variance if Thaler and Sunstein’s approach to design is compatible with insights about meaning expressed in science and technology studies and the philosophy of technology.

Keywords

Trust · Nudge · Libertarian paternalism · Design ethics · Expertise · Interface

1 Introduction

Richard Thaler and Cass Sunstein’s Nudge: Improving Decisions about Health, Wealth, and Happiness advances a theory of how designers can improve decision-making in various situations where people have to make choices, or where failing to make deliberate choices results in their behavior being influenced by default settings (see also Thaler and Sunstein 2003a, b).1 The following four claims summarize some of the main ideas in Nudge:
  1. Behavioral economics suggests that people tend to rely on biases like mental shortcuts, inclinations, models, gut feelings, and heuristics when they make decisions in situations that do not afford them sufficient time or information, or in situations when people find themselves subject to unanticipated arousal and temptation.

  2. It is reasonable to assume that reliance on biases influences people to make some ill-informed and costly decisions about their health, financial security, and pursuit of the good life. The costs are often borne by other members of society. Examples of bad decisions range from choosing the wrong health or retirement plans when interfacing with the forms, paper or electronic, that human resource departments give to new employees, to driving too fast through a treacherous curve while taking in a scenic stretch of highway.

  3. Choice architects design those aspects of technologies, interfaces, and built environments that present users with distinctive opportunities (which we will refer to as the choice context) for interacting with people, objects, and surroundings. Their work is ubiquitous, present in computer interfaces, credit card consoles, strategically arranged merchandise, strongly worded contracts, and so on. They can help to improve choices and behavior by subtly calibrating the choice context to work with people’s predictable tendencies to rely on biases. These calibrations are called nudges.

  4. It is reasonable to expect that nudges will, on average, cut the costs of bad decisions and thereby increase savings to individuals and, in some cases, other members of society.

We argue in this paper that the moral acceptability of nudges hinges in part on whether Thaler and Sunstein can provide an account of the competence required to offer nudges—an account that would serve to warrant our general trust in choice architects. Our case will be presented as follows. In Sections 2 and 3, we expand upon our initial description of nudges and clarify why Thaler and Sunstein appeal to the principles of libertarian paternalism to set ethical limits on possible exploitative uses of nudges. In Section 4, we defend the idea that even if libertarian paternalism is as attractive as Thaler and Sunstein contend, it does not establish whether choice architects can be trusted to offer nudges that improve our health, wealth, and well-being. What needs to be considered, on a methodological level, is whether Thaler and Sunstein have clarified the competence required for choice architects to subtly prompt our behavior toward making choices that are in our best interest from our own perspectives. Competence is among the widely accepted characteristics that justify the trustworthiness of the testimony and products of scientists, engineers, technicians, and others whom we depend on to influence and improve our choices and behavior. Choice architects who offer nudges play an analogous role, which suggests that Thaler and Sunstein should have an account of competence that warrants our general trust in choice architects, especially since, most of the time, we will not be aware of the fact that we are being nudged.2

But because Nudge lacks an account of competence, we want to identify the salient features that a prospective account would have to include. In Section 5, we argue that, among other features, an account of the competence required to offer nudges would have to clarify why it is reasonable to expect that choice architects can understand the constraints imposed by semantic variance. Semantic variance refers to the diverse perceptions of meaning, tied to differences in identity and context, that influence how users interpret nudges.3 We conclude by suggesting that choice architects can grasp semantic variance if Thaler and Sunstein’s approach to design is compatible with insights about meaning expressed in science and technology studies and the philosophy of technology.

2 Nudges

In this section, we restrict our discussion of nudges to those aspects which are relevant to our main argument, beginning by noting that the theory of choice architecture and nudges is rooted in an understanding of biases that people are subject to in various situations where they have choices to make (for another description, see Lobel and Amir 2009). The biases that Thaler and Sunstein focus on are ones that affect the quality of our choices and behavior across the spectrum of racial, sexual, and educational differences. Some, like Thaler, classify them as basic constituents of human nature.4

Thaler and Sunstein build their account of biases from the basic tenets of dual-process theory, a view that stipulates that people’s thought is structured by two systems, automatic and reflective (Epstein 1994; Thaler and Sunstein 2008). Automatic thinking is characterized as uncontrolled, effortless, associative, fast, unconscious, and unskilled (Thaler and Sunstein 2008, 20). It is personified by the gut reactions of Homer Simpson, an impulsive cartoon character, and contrasted with the idealized assumptions about reasoning associated with homo economicus (22). By contrast, reflective thinking is controlled, effortful, deductive, slow, self-aware, and rule-following (20). Reflective thinking is embodied by science fiction character Mr. Spock’s rational, deliberate, and unemotional approach to problem solving (22).

Without reflective thinking, humans could not make careful and effective long-term plans. But since reflective thinking is time consuming and requires people to process good information effectively, it cannot be relied on in a variety of contexts, which include not only situations where we have to act quickly, but also situations where our choices have delayed effects, are difficult in nature, occur infrequently, yield poor feedback, and present choice contexts that we have little or no experience dealing with. In addition, reflective thinking does not help us in situations where triggers for arousal and temptation are greater than we initially anticipated (23). In these instances, we turn to the automatic system of thinking, rapidly drawing from prior, often unrelated, experiences and allowing our behavior to be guided by rules of thumb or mental shortcuts developed in those experiences (23).5 These rules of thumb serve as biases when the experiences from which they are drawn or the evidence to which they appeal are irrelevant to the decision at hand.

An illustrative example of a bias is the use of inappropriate anchors, which are mental shortcuts that compensate for lack of information. Consider what happens when someone is asked to estimate the total population of Milwaukee, but does not know the answer and has to respond immediately. In this case, if the person is from Green Bay (100,000 people), he or she likely realizes that Milwaukee has more people than Green Bay. As a result, the person could offer the guess that Milwaukee has about 300,000 people. But, if the person is from Chicago (3,000,000 people), then, knowing that Milwaukee is definitely smaller, he or she might guess that the population is 1,000,000 people. Unfortunately, neither answer is very accurate; the actual population is about 580,000. The reason why each person decides on a wrong answer is that he or she uses the city where he or she lives as the basis, or anchor, for making a decision and subsequently makes inappropriate mathematical adjustments. Of course, it is irrelevant that one person is from Green Bay and the other from Chicago, as the population of neither city has any objective bearing on estimating the population of Milwaukee (Thaler and Sunstein 2008, 23).
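To make the anchor-and-adjust pattern concrete, the following minimal Python sketch reproduces the two guesses; the adjustment factors are our own hypothetical choices, picked only to match the numbers in the example above.

    # Sketch of the anchor-and-adjust heuristic (illustrative only; the
    # adjustment factors are hypothetical, chosen to reproduce the
    # guesses discussed above).

    MILWAUKEE_ACTUAL = 580_000

    def anchored_estimate(anchor: int, adjustment: float) -> int:
        """Estimate an unknown quantity by scaling a familiar anchor."""
        return round(anchor * adjustment)

    guesses = {
        "Green Bay resident": anchored_estimate(100_000, 3.0),    # anchors low, adjusts up
        "Chicago resident": anchored_estimate(3_000_000, 1 / 3),  # anchors high, adjusts down
    }

    for who, guess in guesses.items():
        print(f"{who}: guesses {guess:,}, off by {abs(guess - MILWAUKEE_ACTUAL):,}")

    # Both estimates miss badly, and in opposite directions, because each
    # anchor (the home-city population) is irrelevant to the target quantity.

Both misses are large precisely because the anchor, not the evidence, is doing the work.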

Reliance on biases can be costly in situations where we interact with technologies, artifacts, and built environments, and there is often more at stake than correctly answering a trivia question. An apt example is the power exerted by default options. Thaler and Sunstein claim that many US citizens lack the willpower or relevant information about investment strategies to implement a sound savings plan. Motivated by the ease of sticking to the default opt-in setting found in many employee benefit forms, they fail to save enough money for retirement. This adverse outcome causes personal discomfort and stresses the social security system. A similar example concerns the limited influence exerted by street signs that convey the importance of slowing down for a treacherous curve. Drivers who are already comfortable speeding or who do not have the time or sufficient information to process the upcoming curve can be tempted to stay in their comfort zones rather than heed the warning to reduce speed. Such inertia can be costly, especially if accidents result. Other biases include but are not limited to the optimism bias (i.e., people have an unrealistic grasp of how good their abilities are), loss aversion bias (i.e., people weigh losses more heavily than gains, even in situations where incurring losses is in their best interest), confirmation bias (i.e., people have a tendency to overvalue information that reinforces things they already believe), hyperbolic discounting (i.e., people have a stronger preference for immediate payoffs relative to later payoffs), the focusing effect (i.e., people have a tendency to place too much emphasis on one variable when making predictions about future outcomes), and impact bias (i.e., people have a tendency to overestimate the length and intensity of future feeling states).

Nudges are simple solutions to our problems with biases. Choice architects design nudges when they subtly calibrate how choices are presented to us in order to work with our predictable tendencies to rely on biases. In this sense, a nudge, for Thaler and Sunstein, is “any aspect of the choice architecture that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives” (6). Paying students to study thus does not count as nudging them to be studious. Such a bribe changes the financial incentive.6 Additionally, Thaler and Sunstein stipulate that nudges must be “cheap and easy to avoid” (6). Mandates, such as banning junk food or requiring merchants to write their instructions, product labels, warranties, and the like, are not nudges either (6).
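Read as a checklist, the definition just quoted imposes four tests on a candidate intervention. The sketch below is our own rough formalization, not anything offered in Nudge; the field names are ours.

    # Our rough formalization of the definitional tests quoted above;
    # the field names are ours, not Thaler and Sunstein's.

    from dataclasses import dataclass

    @dataclass
    class Intervention:
        alters_behavior_predictably: bool
        forbids_options: bool
        changes_economic_incentives: bool  # significantly, e.g., paying students to study
        cheap_and_easy_to_avoid: bool

    def is_nudge(i: Intervention) -> bool:
        return (i.alters_behavior_predictably
                and not i.forbids_options
                and not i.changes_economic_incentives
                and i.cheap_and_easy_to_avoid)

    bribe = Intervention(True, False, True, True)              # paying students: fails
    cafeteria_layout = Intervention(True, False, False, True)  # food arrangement: passes
    print(is_nudge(bribe), is_nudge(cafeteria_layout))         # False True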

Put hyperbolically, Nudge is a kind of manifesto that claims that choice architects in all facets of life should get into the business of accounting for biases through nudges. As we read Thaler and Sunstein, nudges do better than educational programs for at least the following two reasons. First, the automatic system is an inherent part of how we think, and even serves us well in some situations. Consequently, it does not make sense to treat automatic thinking as undisciplined reasoning that should be eliminated. Second, since we rely on automatic thinking ubiquitously, no educational program could be comprehensive enough to anticipate every occasion in which we rely on biases, or offer effective rules for practical coping that correlate with all such occasions. In light of these and related reasons, Thaler and Sunstein repeatedly remind readers of Nudge that they remain as vulnerable to the sway of biases as everyone else.7

Beyond differing from educational programs and proposals that alter behavior through new financial incentives and mandates, choice architecture also can be contrasted with critical humanities and social science theories of design. Notably, philosophers of technology and science and technology studies theorists have developed a large literature on how values and politics influence the so-called “technical” assumptions and expertise that engineers employ when they design technologies and built environments (Winner 1980; Bijker et al. 1987; Law 1991; Bijker 1995; Franssen and Bucciarelli 2004; Feng and Feenberg 2008a, b; Vermaas 2008). Paradigmatic examples include Langdon Winner’s work on how race and class prejudices allegedly were built into the construction of bridges in Long Island, and Andrew Feenberg’s analysis of design space, which clarifies how political context can constrain technical options. One of the aims of this research is to show how technical assumptions can be mediated by social and political values, and perhaps really are not “technical” at all (Bijker et al. 1987). Ultimately, critical theories like these aim to clarify the nuances of insufficiently understood social relations and political context. Unlike Thaler and Sunstein’s project, they do not offer ways of changing people’s behavior, nor do they bracket politics. Despite this difference in orientation, it may be possible to bring the theory of choice architecture into productive dialogue with science and technology studies and the philosophy of technology. We will revisit this issue in Section 5 when we address the problem of meaning.

Interface design is perhaps the easiest domain in which to see choice architecture at work. At present, considerable energy and resources are being devoted to projects that try to create natural user interfaces—interfaces that can perceive, communicate, and act smartly on our behalf by responding rapidly and intuitively to our bodily movements, gestures of touch, and acts of speech, and that will not play into biases we may have in those particular situations (Fogg 1997; Fogg 2003). In order to build these interfaces, engineers use tools associated with choice architecture, appropriating insights from behavioral economics and psychology to anticipate how users will interpret and respond to various presentations of information. Of course, they also make use of other forms of behavioral and psychological knowledge. For example, choices people make about how to use their mobile phones are influenced by the size and position of the keyboard, the size and position of the screen, the geometry of the structures that link keyboard and screen, the software that governs how the phone functions and presents options for using different features in distinctly stylized ways, the phone’s weight, and its size. Another relevant example is the graphical user interface tools and methods that allow users to control and manipulate computers and the applications that run on them. They include the keyboard and mouse, the acts of clicking and scrolling, and the presentation of menus, files and folders, et cetera. These examples of interface designs can be improved if their designers understand how to work with users’ predictable biases.

In sum, nudges are a special kind of design calibration built for only one purpose: prompting better decisions by working with biases. The credit card console used for gaining entry to a parking garage is an example of choice architecture. The interface—which is constrained by the fact that there is only one possible direction for a credit card to be inserted—includes a diagram that serves as a heuristic for how users should insert their cards (Thaler and Sunstein 2008, 89–90). There are many different diagrams that could be used for credit card consoles like this one. Whereas some diagrams will incline the majority of users to insert their card the wrong way, others will incline the majority to get it right the first time. In cases where people insert their card the wrong way, they are likely relying on a bias developed from previous experience with a console, or making bad inferences as a result of having no prior experience. The diagrammatic interface plays a significant role in shaping how people decide to use the technology. This is no different from how, in cafeterias, the size of available plates influences how much food people will eat, and the arrangement of food options influences the items that hungry customers will select (Thaler and Sunstein 2008, 1–3). In each of these examples, nudging people is a matter of creating a better interface situation that will encourage them, whether they are aware of it or not, to make a better choice.

3 Libertarian Paternalism

Having just provided some additional details on nudges, we will now clarify why Thaler and Sunstein believe that nudges are permissible when they fall within the ethical limitations set forth by the principles of libertarian paternalism. We then argue, in Section 4, that even if libertarian paternalism is as attractive as Nudge stipulates, Thaler and Sunstein have not shown that choice architects who offer nudges merit our general trust.

Although libertarian paternalism may appear to be an oxymoron, Thaler and Sunstein defend it as an attractive moral outlook, which we will discuss as featuring three related principles. The first principle of libertarian paternalism is that benefits and savings that improve lives are good. The sorts of benefits and savings that Thaler and Sunstein refer to are those that individuals themselves view as such. They include common goods, such as increased health, improved safety, financial security, and so on. The second principle states that the freedom to select one’s own ends should be preserved. In other words, the savings endorsed in the first principle should not be pursued through means that lead to other people determining our preferences and interests. Thaler and Sunstein only equate savings with benefits in cases where individuals themselves see their preferences and best interests as being served. In this respect, page after page of Nudge distances the text from hard paternalist depictions of people as poor judges of value in need of assistance from an interest-directing, benevolent pater. Furthermore, the second principle is proposed with the standard proviso that extreme situations exist where free choice should be limited, including emergencies, the avoidance of catastrophes, and cases of violent criminality. The third principle expresses the paternalistic side of libertarian paternalism. It states that it is permissible and choiceworthy to help others achieve their self-directed ends when they cannot pursue these ends efficiently. Situations where people do not have enough time or information to make deliberative choices are paradigmatic contexts where this last principle holds. If the other principles are not violated, Thaler and Sunstein state, then helping others in this way actually enhances people’s ability to choose.

Thaler and Sunstein’s theory of nudges also includes a set of back-up principles for use in exceptional cases where compliance with the three core principles is insufficient. They insist that when conflicts of interest occur and when incentives cannot be lined up clearly, nudging is only permissible when the choice architect’s design intentions are transparent and capable of being monitored (242). Nudges that cannot be made transparent and public thus are impermissible, as are ones that reflect racist, sexist, or other oppressive agendas. Such agendas could not be defended in the public sphere. Ultimately, Thaler and Sunstein deem the combination of back-up and core principles sufficient for demarcating nudges from exploitative behavior-modifying techniques, such as those found in advertising and propaganda.

To illustrate the ethical limitations just discussed, let us briefly consider two examples of nudges that are consistent with libertarian paternalism. The first example brings us to Lake Shore Drive, a roadway that has stunning views of Chicago’s skyline. One particular segment includes a series of S curves that require drivers to slow down to 25 mph. Many drivers ignore the posted sign that states the reduced speed limit. They are easily distracted by the scenery, or else unable to assess how steep the curve is, and both causes result in accidents. A new way of nudging drivers to slow down has reduced the individual and societal costs of these accidents. Immediately after passing a warning sign, drivers encounter a series of white stripes that are painted onto the road. Thaler and Sunstein describe this interface as a prompt that inclines drivers to slow down: “When the stripes first appear, they are evenly spaced, but as drivers reach the most dangerous portion of the curve, the stripes get closer together, giving the sensation that driving speed is increasing. One’s natural instinct is to slow down” (38–39). In short, the stripes work with drivers’ tendencies better than conventional signs because they convey the point about slowing down intuitively and subtly. That is, the stripes do not require drivers to interpret propositional information and think about how they should behave in relation to considerations pertaining to the abstract unit of miles per hour and a potentially arbitrary speeding scale. Rather, at an embodied level, they influence how drivers perceive the turn, which becomes a way of decreasing the incidence of bad decisions, thereby cutting the costs of accidents to both individuals and other members of society.
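The perceptual mechanism admits a simple arithmetic gloss: at constant speed, the rate at which stripes sweep past is speed divided by spacing, so shrinking the spacing raises that rate, which the visual system reads as acceleration. The sketch below uses made-up spacings; Nudge does not report the actual Lake Shore Drive dimensions.

    # Why converging stripes feel like acceleration (our illustration;
    # the actual stripe spacings are not reported in Nudge). At constant
    # speed, stripe-crossing frequency is speed / spacing, so tighter
    # spacing raises the frequency the driver experiences.

    speed_mps = 13.4  # a constant ~30 mph, expressed in meters per second

    # Hypothetical spacings (meters): even at first, tighter near the curve.
    spacings = [10.0, 10.0, 8.0, 6.0, 4.0, 3.0]

    for spacing in spacings:
        crossings_per_second = speed_mps / spacing
        print(f"spacing {spacing:4.1f} m -> {crossings_per_second:4.1f} stripes/s")

    # The crossing rate more than triples while speed never changes, nudging
    # the driver to slow down without any propositional sign-reading.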

The second example is Clocky, a special alarm clock. At some point, we probably all have made plans to get up early on a given morning in order to get a fresh start on the day. The plan seems practical at night, but even when we have gotten enough sleep, it is often hard to wake up because fatigue and the expected comfort of additional sleep are too much to resist. Conventional alarm clocks do not solve this problem for everyone. They are easily turned off, or paused by hitting a snooze button. Clocky differs from the rest, as it “runs away and hides if you don’t get out of bed” (Thaler and Sunstein 2008, 44). To use Clocky, one has to set the acceptable number of snoozes and snooze minutes before going to sleep. When all the snooze time is used up, the clock literally springs off the nightstand and moves around the room while making annoying sounds. The only way to turn it off is to actually get out of bed and engage one’s mental powers by tracking it down. By the time Clocky is retrieved, the pursuer can expect to be awake. Clocky’s behavior helps people get up when their willpower and resolve require extra help.
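A toy state machine conveys the logic just described; the names and numbers below are ours, since the product's actual firmware is of course not published in Nudge.

    # Toy sketch of the Clocky behavior described above (names and logic
    # are ours; the actual product's internals are not public).

    class Clocky:
        def __init__(self, allowed_snoozes: int):
            self.remaining_snoozes = allowed_snoozes
            self.hiding = False

        def snooze_pressed(self):
            if self.remaining_snoozes > 0:
                self.remaining_snoozes -= 1
                print("Snoozing...")
            else:
                # Snoozes exhausted: spring off the nightstand and hide.
                self.hiding = True
                print("Rolling away and beeping -- get up to find me!")

        def caught(self):
            # Turning the alarm off requires physically retrieving the
            # clock, by which point the user is awake.
            self.hiding = False
            print("Alarm off. Good morning.")

    clock = Clocky(allowed_snoozes=2)
    for _ in range(3):
        clock.snooze_pressed()  # third press triggers the hiding routine
    clock.caught()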

Both the traffic stripes and Clocky examples count as nudges. In these cases, choice architects calibrate for the biases and blunders that constrain how some people cope with the relevant information. Crucially, neither example changes economic incentives, eliminates options, or makes freedom of choice difficult to exercise. After all, if one does not like Clocky, one need not use it. No one forces people to adopt it, and market competition is not skewed unfairly because the product exists. Likewise, the stripes simply yield an impression that a driver’s speed might be dangerous for the upcoming curves. If one is determined, it is still possible to drive fast on the road and take in the spectacular views. With these examples in the background, Thaler and Sunstein inform their readers that libertarian paternalism functions as a viable ethical constraint that prevents nudges from being abused.

4 Trust and Competence

Though libertarian paternalism can be shown to provide certain ethical limitations on the possible uses of nudges, not everyone is persuaded by Thaler and Sunstein’s account. Some critics claim that the foundations of libertarian paternalism are flawed, and can even be appealed to—at least in some cases—to endorse harmful outcomes (Stevenson 2005). While this is an interesting rejoinder, we will bracket assessment of it in order to focus on a related issue: the competence required for a choice architect to offer nudges. If our critique is valid, it will hold whether or not libertarian paternalism provides adequate normative constraints.

As Thaler and Sunstein define it, anyone can play the role of a choice architect, and do so in a variety of contexts. The appellation is not exclusive to engineers and designers, and Thaler and Sunstein ask everyone to offer nudges when in the position to do so. But even though most of us will encounter situations where it appears useful to offer nudges, it does not follow that everyone is capable of altering people’s behavior appropriately. To this end, Nudge should clarify how choice architects can construct technologies, interfaces, and built environments that help to bring about desired and appropriately predictable outcomes in people’s choices and behavior. Indeed, a convert to the nudge program who understands his or her role as a choice architect would still need to know how to prompt people subtly toward making better decisions when they do not have enough information or time, or are aroused and tempted in ways they had not anticipated.

Focusing on this issue leads us to ask: What kind of competence is required for a choice architect to offer a nudge? With this question in mind, we proceed by showing why Thaler and Sunstein’s theory should include an account of competence. We emphasize the fact that since nudges typically change behavior without people being aware that they are being nudged, there ought to be reasons offered for why we should, in general, trust the competence of choice architects to design nudges that improve our lives.

It is implausible to believe that creating nudging choice architectures requires no competence whatsoever. According to our interpretation of Thaler and Sunstein, choice architects must be able to do two things at a minimum. First, they must be able to figure out, from studies in behavioral economics, which biases, arousals, and temptations people are subject to. Second, they must have an adequate understanding of how people perceive choice contexts. To do so, choice architects must have a sufficient grasp of the scientific material and a good understanding of how people think in particular situations. They must also be able to pick out the appropriate biases, arousals, and temptations that track people’s thinking when they make choices in distinctive contexts and when presented with distinctive forms of information. Once choice architects identify the relevant mental stumbling blocks, they need to be able to postulate which calibrations in choice context will nudge people away from them. In other words, choice architects must grasp how people will perceive and respond to adjustments of their choice context. Without the ability to do so, there can be no basis for judging whether a nudge will succeed in altering people’s behavior appropriately.

Unfortunately, Thaler and Sunstein do not discuss how choice architects are supposed to determine which biases, arousals, or temptations are relevant to a given situation, or how to arrive at appropriate proposals concerning the adjustment of choice context. This omission raises the question of whether such inferences and postulates can be made at all, especially since designing nudges requires that choice architects have the competence to make inferences from a limited body of empirical studies in behavioral economics and psychology. Additionally, choice architects have to possess an adequate understanding of the situations for which they intend to insert a nudge, and be able to come up with proposals about which nudges, out of the host of possible adjustments that could be made to any situation, will encourage people to make better decisions. This is not a matter of whether choice architects can defend the purpose of a particular nudge; rather, it is a matter of whether choice architects can make the case that their nudge ideas will actually prompt people, on average, to make better choices.

It is unclear what sort of competence choice architects must have to be able to make these inferences and postulates. By asking for an account of competence, we have in mind an account of competence that would warrant our general trust in choice architects to offer nudges that would achieve the outcomes suggested by Thaler and Sunstein. By general trust, we mean the sort of trust that we ought to have of those whose testimony and products improve and influence our choices and behavior, such as scientists, engineers, lawyers, financial advisors, and the like. In the case of scientists, one of the characteristics which warrant our general trust in their testimony and research is their competence (Hardwig 1985, 1991). This is not only the case among scientists, but also between scientists and non-scientific members of the public (Scheman 2001; Rolin 2002; Wilholt 2009). For ordinary citizens to be able to benefit from the testimony and research of scientists, there ought to be reasons available to them to trust scientific testimony and research (Scheman 2001). Some of these reasons should be devoted to showing that scientists have the right kind of competence. In some ways, this is a matter of moral acceptability. Someone who wants to propose, for example, that science should play an increased role in some aspect of our lives, should be expected to show that the scientists in question are competent to do so. There is an analogy between this example of scientists and Thaler and Sunstein’s nudges. Choice architects who offer nudges are producing changes in choice context that will allegedly improve and influence our choices and behavior. Because of this, the moral acceptability of nudges hinges, in part, on whether an account of competence can be provided that is sufficient to warrant our general trust in these products (the nudges).

To avoid misunderstanding, the emphasis that we are placing on competence does not entail that we believe that for every nudge offered, there should be good reasons for people to trust the competence of the individual choice architects who designed it. Rather, we are claiming that Thaler and Sunstein should be able to vouch for the competence of choice architects by offering general reasons for why their competence to offer nudges should be trusted. To be even more specific, we will now present four problems with competence and nudges that are rooted in Thaler and Sunstein’s not providing an account of competence.

Problem of inference

If choice architecture is essentially an idea based on select empirical studies of biases and anecdotal stories of bias correction, but remains detached from an adequate account of competence, then it is unclear how choice architects are supposed to use these studies and anecdotes as supporting evidence for nudges. Indeed, anyone could claim to be offering nudges based on ad hoc inferences underwritten by nothing more than the popularized account of empirical studies and people’s perceptual habits that Nudge presents. Simply put, Thaler and Sunstein do not provide clear criteria for determining the minimal background conditions that need to be met in order for someone to be capable of claiming that they can offer a nudge based on appropriate consideration of the empirical studies. Here, we are not making substantive claims, such as that choice architects should be able to understand technical papers in behavioral economics. Instead, we are playing the skeptic’s role, and insisting that Thaler and Sunstein should clarify to what degree this is the case. This is reasonable because there are currently many efforts to understand how competence is related to making inferences and judgments based on evidence, an example being the Studies in Expertise and Experience method at Cardiff University (Collins and Evans 2007).

This problem also includes the issue of how choice architects are supposed to get feedback on the successes and failures of nudges they have offered. If there is some competence associated with offering nudges, then it should be possible to identify reliable methods for obtaining feedback. This is particularly important in cases where nudges prove successful when first introduced, but fail to yield the intended results as time passes, perhaps as a result of users making new decisions upon learning how the choice architecture is configured (e.g., perhaps some users feel annoyed about being nudged, and subsequently challenge the behavior-modifying trajectory through defiant behavior that has the potential to catch on and inaugurate a counter program).
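One candidate feedback method, offered as our sketch rather than anything proposed in Nudge, is to track the targeted outcome in rolling windows, so that a nudge that works at launch but decays as users adapt still shows up in the data.

    # A candidate feedback method (our sketch, not a proposal from Nudge):
    # compare rolling windows of the targeted outcome against the pre-nudge
    # baseline, so a nudge that decays as users adapt is still detected.

    from statistics import mean

    def rolling_effect(baseline, weekly_rates, window=4):
        """Difference between each rolling window's mean and the pre-nudge
        baseline; values near zero mean the nudge has stopped working."""
        return [mean(weekly_rates[i:i + window]) - baseline
                for i in range(len(weekly_rates) - window + 1)]

    # Hypothetical data: share of employees picking the recommended plan.
    baseline = 0.40
    after_nudge = [0.62, 0.61, 0.58, 0.55, 0.50, 0.46, 0.43, 0.41]
    print([round(e, 2) for e in rolling_effect(baseline, after_nudge)])
    # The effect shrinks toward zero: users may have learned the choice
    # architecture and begun to defy it, as discussed above.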

Problem of replication

If nudging is a good idea that should become more widely adopted, then the competence of choice architects should be defended against skeptics who may question whether choice architects can offer reliable nudges that fulfill their intended purpose. We may all agree that cases exist where people have been nudged. However, if there is no account of the techniques that willing choice architects should use to replicate successful nudges, then skeptics are entitled to claim that, given its basis in mere anecdotal evidence, choice architecture simply is not the sort of endeavor that can be cultivated as a competence or expertise.8 Skeptics could even claim that nudges are unacceptably risky to the people being nudged, inevitably leading to unintended consequences.

Another issue somewhat related to replication has to do with the sorts of available studies about successful nudges. Thaler and Sunstein insist that there is growing data confirming the success of certain nudges based on changing default settings in particular situations. But even if such confirming studies exist, all they justify is the claim that the particular change in default setting produced the desired results. They do not establish that there is some general competence behind nudges that was used in the particular situation and that can be transferred to other situations—especially situations where it is not the default setting that requires change.

Problem of domains

Thaler and Sunstein fail to clarify whether choice architecture is an independent science, technique, or expertise, or an adjunct to existing ones. If it is an adjunct, then the competence required to offer nudges largely depends on the competences and expertise associated with a professional domain. For example, if a group of choice architects are devising a plan to reduce speeding in a given area, then the competence at issue is the competence attributable to traffic calming professionals. However, if choice architecture should be conceptualized as an adjunct of this kind, then the following problem arises. Thaler and Sunstein do not specify how choice architecture can be integrated with the protocols, theoretical commitments, and tacit knowledge found in other domains. Nor do they clarify what exactly choice architecture is, such that it becomes possible to specify clearly what integration into another domain involves.

Problem of projection

A fourth problem is raised by Mario Rizzo and Glen Whitman in “Little Brother is Watching,” where they identify a variety of slippery slope possibilities that they believe justify skepticism towards nudges. Rizzo and Whitman insist that because choice architects necessarily have “only a tenuous grasp on the values of targeted agents,” they can only concretely apply their paternalist ideas by making inferences about what the users of their designs are likely to value (Rizzo and Whitman 2008, 26). In making such probabilistic inferences, Rizzo and Whitman assert, “there will be a tendency for the experts to reify their own values and simplify their own theories” (Rizzo and Whitman 2008, 26). When choice architects are unsure of the values that their targeted agents will possess, they will be inclined to perform the following four-step process (Rizzo and Whitman 2008, 26–29):
  1. Simplify the range of possible values by projecting their own contingently held predispositions onto a theoretical conception of genuine target audience preferences, such that if, for example, the experts possess “intellectual and middle class values,” they will assume that the same values should obtain for everyone;

  2. Justify using projection as the best means of accomplishing the needed simplification by associating the postulated ideals with “rational” thinking and the expected consequences of pursuing the identified “rational” ends with “optimal” outcomes;

  3. Treat the representations of expected preferences as isomorphic depictions of what the targeted agents definitively desire, and not fictions that were selected for pragmatic reasons;

  4. Obscure their “ethical” decision to use projection as a means of simplification by acting as if objective scientific principles were used to bridge the knowledge gap when, in fact, “neither scientific theory nor scientific evidence provided the basis for favoring one preference ordering over another.”

Rizzo and Whitman’s concerns are similar to what we termed postulates earlier in this section. Without an account of how choice architects can make competent postulates or projections, we have no reason to expect choice architects to make good inferences about what people’s preferences are. Thus, nudges are only good ideas if we can be sure that we can reliably know people’s preferences. But, how can we know this?

It might be claimed that the four problems just offered are really not problems at all because, ultimately, the competence of choice architects and the success of nudges are a function of trial and error. This counterclaim, however, does not match up with current controversies over how to design interfaces, which demonstrate that, especially for innovative technologies that promise public benefits, reasons need to be provided in advance for why a particular methodology, like choice architecture, should be trusted over its rivals.

Consider a recent NPR broadcast on smart meters that featured Dan Reicher, Director of Climate Change and Energy Initiatives at Google Inc., and Carnegie Mellon behavioral economist George Loewenstein. In it, Reicher took the more-information-is-always-better stance, which presumes that people will make better decisions when they have access to all salient information. Loewenstein, to the contrary, suggested that the informational interface provided by smart meters may actually activate biases that incline people to make costly decisions: “It’s amazingly cheap to air-condition your whole house for a few hours. And if the smart meter is giving you objective information about how much it’s costing you, you might be surprised at how cheap it is rather than surprised at how expensive it is.” Accordingly, Loewenstein suggested automating some aspects of energy use so that people could not control them on the basis of biased judgments about the cost of energy. Debates like this one impede the ability to set up trial and error tests precisely because they convey disagreement over what type of interface should be tested. Indeed, the dispute at issue concerns different hypotheses about how people perceive smart meter interfaces, and different judgments about which biases, if any, can impede users from using smart meters efficiently. Reicher assumes that it is possible to match information displays to people’s lifestyles in ways that allow them to make free yet more efficient choices about how much energy to use. By contrast, Loewenstein assumes that certain biases will make it more likely that people will make more costly decisions. Without reasons available to defend which side is correct, we have no reason to favor the plausibility of either side, or to expect that they exhaust the spectrum of possible interface designs. This problem extends to Nudge, where Thaler and Sunstein suggest that an ambient orb can nudge users towards energy-efficient behavior by distinctively conveying how much energy a household is using (Thaler and Sunstein 2008, 196).
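Loewenstein's worry turns on simple arithmetic. With illustrative figures of our own choosing (the broadcast reports none), a truthful cost display can make whole-house air conditioning look surprisingly cheap:

    # Back-of-the-envelope arithmetic behind Loewenstein's worry (figures
    # are our illustrative assumptions, not from the broadcast).

    ac_power_kw = 3.5     # hypothetical central air conditioner draw
    price_per_kwh = 0.12  # hypothetical electricity rate, USD
    hours = 3

    cost = ac_power_kw * price_per_kwh * hours
    print(f"Cooling the whole house for {hours} h: ${cost:.2f}")  # ~$1.26
    # If the user expected dollars rather than cents per hour, the objective
    # display reads as "surprisingly cheap" -- the opposite of the
    # efficiency nudge Reicher anticipates.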

So far, we have shown that the absence of an account of competence in Nudge has implications that compromise Thaler and Sunstein’s defense of the moral acceptability of choice architecture. But are there good reasons that could be offered for why we should trust choice architects to offer nudges? If so, these reasons would, in effect, have to allay the concerns just raised about competence. In Section 5, we extend the discussion beyond the general methodological issues raised here and argue that if such reasons were to be provided, then there is a particularly hard problem that they would also have to resolve. While Rizzo and Whitman focus on choice architects’ postulates about preferences, we believe that choice architects have to be competent at postulating how any calibrations will affect people’s perception of the meaning of those calibrations, especially since that meaning is likely to change in ways that are difficult to predict. We refer to this as the problem of semantic variance, which we will describe in more detail in the next section. A good account of competence should track semantic variance and provide reasons for why choice architects are able to design nudges that are sensitive to it.

5 Semantic Variance

As a basic definition that serves the purpose of this paper, meaning refers to significance. An invitation to smoke a Cuban cigar can mean different things to different people because each person can perceive the invitation as having different significance. For example, an aesthete can perceive the invitation as an opportunity to enjoy a pleasurable experience. A US citizen can perceive the invitation as an opportunity to enjoy a risky experience with contraband material. A Cuban expatriate can perceive the invitation as an opportunity to have a nostalgic experience of home. Many other possibilities exist; this brief list is being used for the sole purpose of concretizing a definition.

Meaning is a crucial feature of Thaler and Sunstein’s account of nudges because they write in a way that presupposes choice architects can competently identify people’s perceptions of meaning. An illustrative example of this presupposition can be found in their discussion of how fly-etched urinals in the men’s rooms of Amsterdam’s Schiphol Airport significantly reduce spillage.

Small and apparently insignificant details can have major impacts on people’s behavior. A good rule of thumb is to assume that “everything matters.” In many cases, the power of these details comes from focusing the attention of users in a particular direction. A wonderful example of this principle comes from, of all places, the men’s rooms at Schiphol Airport in Amsterdam. There the authorities have etched the image of a black housefly into each urinal. It seems that men usually do not pay much attention to where they aim, which can create a bit of a mess, but if they see a target, attention and therefore accuracy are much increased (Thaler and Sunstein 2008, 3–4).

While Thaler and Sunstein implore choice architects to consider “everything,” their own attention is focused selectively. In this example, only two variables are identified as being worthy of consideration: the normal attention men exhibit when urinating in public restrooms, and the capacity of a target to capture their attention in this context. While these considerations are sensible, they do not account for all relevant possibilities.

For example, Thaler and Sunstein do not ask if it is a universal fact that under certain conditions men exhibit a shortened attention span, or if only some do, and perhaps for culturally or personally contingent reasons. Nor do they inquire into whether certain predictable bodily responses to targets are more likely to occur in the case of bodies that have been culturally disciplined to behave in distinctive ways. Furthermore, Thaler and Sunstein do not examine whether the black housefly, which is the core choice architecture contribution that changes the standard urinal interface, is a value-laden symbol. As a thought experiment, we can imagine a culture existing that exhibits such deep reverence for all life that its members would be offended by the prospect of someone urinating on a representation of an insect. Their outrage could be extreme, and parallel the indignation that select Muslim communities felt over the infamous Danish political cartoons of the prophet Muhammad!

We would remind the reader who sees our thought experiment only as an unrealistic depiction of a far-fetched cultural reaction that the editorial staff of the Danish newspaper Jyllands-Posten did not anticipate that publishing the cartoons would lead to outspoken denunciations, legal motions alleging that violations of the Danish Penal Code had occurred, consumer boycotts, acts of retaliatory violence committed against Danish embassies, death threats made against those responsible for the cartoons, and a host of other unpleasant consequences. But even if it is scarcely conceivable that a culture could exist that would make death threats against people who design fly-etched urinals, the point of principle illustrated by the thought experiment merits further consideration. Thaler and Sunstein may have made a lucky pick when selecting a nudge that contains only innocuous detailing. Alternatively, they may have rhetorically disguised a heavy-handed example by making it appear to be an interface that choice architects could design solely by applying behavioral economics insights into cognitive bias. Only these two possibilities exist, as Thaler and Sunstein’s account of nudges presupposes that choice architects know how to calibrate choice contexts to capitalize on commonly shared perceptions of meaning, even though they never ask the following two questions, which should be considered basic to any design initiative that aspires to shape people’s behavior responsibly.
  • How do technologies, interfaces, and built environments come to be invested with meaning?

  • How does the meaning attributed to a technology, interface, and built environment change?

Both questions refer to the problem of meaning, a perceptual, epistemological, and sometimes political issue investigated by scholars in philosophy of technology and science and technology studies who analyze the significance attributed to material culture.

We believe that Thaler and Sunstein underestimate how difficult it can be to understand and predict how different communities of people will perceive the meanings that nudges present because Nudge’s illustrative examples all focus on situations where technologies, interfaces, and built environments are used in delimited contexts. In these contexts, (1) users appear to have common perceptions of meaning and (2) user interaction with the technologies, interfaces, and built environments does not appear to engender new perceptions of meaning. These contexts can be uniformly characterized as instances of semantic invariance, and a brief comparison with a relevant example discussed by science and technology studies theorist Bruno Latour will illuminate some of its features.

In a frequently cited passage, Latour analyzes a typical key found in European hotels that is bound by a cumbersome weight (Latour 2000, 41). The weight was added to the key to solve the problem of guests failing to return their keys to the concierge or hotel manager before leaving the hotel. Latour notes that the cumbersome weight is a more effective behavioral prompt than some of the other discursive solutions, such as posted reminders and inscribed keys (Latour 2000).

In this case, the hotel management clearly nudges the guests to return their keys before leaving. In many respects, they offer a key that exhibits the same design principles as any generic key. Its difference lies in the special interface, one that binds the key to a cumbersome weight and therein changes the arrangement of user options, nudging guests toward a distinctive outcome without altering any of the relevant incentive structures. Crucially, the more guests return their keys, the less the hotel incurs the costs associated with their mismanagement. Since such costs are often passed on to customers through penalty fees, savings are enhanced all around.

Designers can be choice architects in this instance because whether weighted or free, guests and managers can be expected to perceive the hotel keys as conveying two meaningful ideals: access and security. Indeed, although the key’s interface has been altered, nothing has been done to modify the guest’s perception that a key’s main purpose is to open and lock doors. In that situation, the key’s design can be adjusted without affecting anything else but the decision of the guest to avoid wandering around with the key in his or her possession. All things being equal (e.g., the quality of the stay, the cost of a room, the lack of desire for an illicit souvenir, and so on), adding the cumbersome weight simply makes it easier for guests to come to a non-controversial and self-benefitting decision.

Thaler and Sunstein’s examples of Clocky and Lake Shore Drive have a similar semantically invariant structure. Adding the nudge of the loud, annoying, and hiding routine changes nothing other than supporting the person’s decision to wake up at the intended time; adding stripes to the road helps people to slow down, but does not engender any other significant changes. In these examples, the nudges appear to be effective because people perceive common meanings, and no new perceptions of meaning are generated.

It is problematic to construct a theory of choice architecture exclusively around examples of semantic invariance because such examples do not capture the full range of situations where people interact with technologies, interfaces, and built environments. In many cases, such interactions occur under conditions of semantic variance, which means that diverse perceptions of meaning occur. To concretize this point, let us consider the raised surface of a speed bump, which provides a disincentive for people to pass over it too quickly, as approaching drivers readily recognize that the artifact can damage their cars and provide an uncomfortable, bumpy experience. Latour provocatively characterizes speed bumps as “actants” and reminds us that they have been referred to as “sleeping policemen” by virtue of their capacity to perform the same functional role as law enforcement officers. Using a similar, albeit more prosaically expressed perspective, Thaler and Sunstein characterize “make-believe speed bumps,” which are “painted 3-D triangles that look like speed bumps” but cost much less to make than the real ones, as nudges (Thaler and Sunstein 2008, 261).

The problem here is that categorically depicting speed bumps as nudges raises the question of how perceptions of meaning can vary. In some contexts, such as speed bumps being placed in roads adjacent to schools, most people likely will see the artifact in just this way. However, as science and technology studies theorist Trevor Pinch clarifies in “On making infrastructure visible: putting the non-humans to rights,” several contexts exist in which proposals concerning the use of “traffic calming devices” ranging from speed bumps to cobbled shoulders lead to acrimonious debate; diverse users attribute different meanings to the artifacts (Pinch 2009). Partisans who champion the cause of maintaining infrastructure that is supportive of safe bicycling can readily clash with partisans who place a higher premium on efficient automobile travel or pedestrian rights. While such debates ostensibly concern the desirability of a given traffic calming proposal, the contested values underlying the debates are broader, often involving issues related to environmental sustainability, economic expense, life-enhancing aesthetics, and the difficulties that attend allowing old technologies (e.g., bicycles) to be used under material conditions that are designed to support new ones (e.g., cars; Pinch 2009).

To clarify further why semantic variance poses a potential problem for Thaler and Sunstein’s needed account of competence, we will now offer a brief discussion of the following examples: (1) a standard global positioning system (GPS) device designed for car use, (2) a program that uses a mobile phone’s photographic capacities to help people eat better, (3) an exercise-promoting program on the Nintendo Wii, (4) a proposal for increasing organ donations, and (5) the Google Buzz program for Gmail. Each of these examples is similar to examples of nudges provided in Nudge (and covered previously in this paper), and (4) is an actual example of a nudge detailed in the book. We will show how each example suggests that perception of meaning may change depending on the contexts and the identities of the people involved in the situation. Though context and identity issues permeate each example, for analytic reasons, we will tend to emphasize one or the other for each example.
  1. GPS devices are designed to make it easy to navigate from one destination to another by providing drivers with step-by-step prompts (e.g., turn left or right) that guide a trip from start to finish. Such prompts are especially useful given the information processing limits of the typical human mind. Recently, GPS devices have been designed to do more than help drivers cope with these natural limits. They now nudge drivers away from speeding. In the case of a popular TomTom model, once one starts breaking the speed limit, a notification comes on the screen that is highlighted in red, a color that evokes stop signs and the stop signal of a traffic light. The driver is then able to see, at a glance, the difference between the speed he or she is traveling at and the speed that is legal to be driving at. The perception of this disparity is intended to motivate drivers to slow down. However, one of the authors of this paper experiences the TomTom as having precisely the opposite effect! When he notices that he is speeding, he also notices that the estimated time remaining before the trip is completed becomes shorter. Seeing the reduced trip time changes the meaning of speeding in his perception and experience. Rather than that awareness triggering a desire to minimize the likelihood of causing an accident or getting a speeding ticket, it actually prompts him to try to reduce the trip time by visually measurable increments—5–30 min, depending on the length of the trip—that correlate with affectively charged responses. As if playing a videogame, the driver finds himself increasingly satisfied as those increments go down.

     
People usually speed because they are not paying attention to the speedometer, not reflecting on the consequences of speeding, and focusing on phenomenological features that do not allow them to register just how fast they are going. It thus would seem perfectly reasonable to use these assumptions to build in the feature just described so as to decrease the number of drivers speeding on the road. But, as our example suggests, the choice architect is going to have to understand how the contexts in which TomToms are used will change how the meaning of the notification is perceived by drivers. The choice architect will have to have a sufficient grasp of how various contexts relate to the perception of meaning, and of how a notification in general will reduce speeding across these contexts.
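The two readings can be seen in a single display update. The sketch below is our illustration, not TomTom's proprietary logic: the same state change that triggers the red speed warning also shortens the displayed time to arrival.

    # Sketch of the two readings of one display update (our illustration;
    # TomTom's actual software is proprietary). The nudge assumes the red
    # warning dominates attention, but the same state change also shortens
    # the displayed ETA, which one of the authors reads as a score to beat.

    def display(speed_kmh: float, limit_kmh: float, km_remaining: float):
        eta_minutes = km_remaining / speed_kmh * 60
        warning = "SPEED IN RED" if speed_kmh > limit_kmh else "speed normal"
        return warning, round(eta_minutes)

    print(display(100, 100, 150))  # ('speed normal', 90)
    print(display(120, 100, 150))  # ('SPEED IN RED', 75)
    # Intended meaning: the red warning says "slow down."
    # Variant meaning: the ETA just dropped by 15 minutes -- speed up more.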
  2.

    The FoodPhone program allows mobile phone users to take pictures of food they plan on eating and electronically send the images to dietary experts, who promptly respond with nutrition guidance, the intended result being that people will naturally make better food choices when they lack the time and information to find out for themselves. Dutch philosopher of technology Peter-Paul Verbeek, however, depicts the program as a potential conduit of the following two externalities, both of which relate to the changes in meaning that the technological adjustment can engender when it mediates the experience of the person using it. First, it can make the activity of eating unduly stressful by transforming meal time into a period of constant judgment. Second, it can incline participants to view health through the overly narrow lens of food consumption, and it can negatively impact the quality of the social relations that transpire around shared meals by nudging participants to obsess over their food and act more like observers of eating behavior than absorbed participants in a communal experience (Verbeek 2009). As with the TomTom, the context in which the FoodPhone is used may change how people perceive the meaning of the situation. Whereas the FoodPhone was intended as a subtle prompt, it may actually make people overly reflective and stressed, acutely aware that they do not have enough time and information and yet must attend to every bite. Can choice architects account for how the perceived meaning of the choice context changes with context?

     
  3.

    The videogame Wii Fit is marketed as an entertainment system that can help players of all ages enhance their fitness through fun exercises. To inspire users to stay on track with their fitness goals, the Wii Fit scale makes groaning sounds when players gain weight, and it also analyzes players’ body mass index (BMI), providing them with correlative qualitative labels, such as underweight, ideal, and fat. The problem, though, is that different users can attribute different meanings to these outputs. Controversy thus arose when young girls, a population that is especially vulnerable to concerns related to body image, were informed that they were fat. Their parents perceived this as demeaning and complained that it is myopic to view the interface solely as inspiring healthy living. A similar outcome occurred when one of the authors tried playing sports games on Wii Fit with his young daughter. These games depict successful performances through avatars that express positive body language and unsuccessful performances through avatars that look downtrodden. While such outputs might nudge adults to do better, they had the opposite effect on his daughter. She was very upset to find her avatar looking despondent. Moreover, because she perceived such strong meaning in the body language conveyed, she refused to believe the white lies her concerned parents offered to try to make the situation less frustrating (i.e., that the avatar was tired, not sad). In this case, some of the features of Wii Fit are overt and some are subtle and similar to nudges. Here, we want to highlight not context in relation to meaning, but identity. The subtle prompts toward weight loss and persistent game play will be perceived differently according to the identity of the person using the technology. Choice architects would need some understanding of the relation between identity and meaning simply to avoid unintended harms like hurt feelings and low self-esteem.
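The BMI feedback can be stated compactly. In the following minimal sketch, the formula is the standard one (weight in kilograms divided by height in meters squared), but the cutoffs are the conventional adult thresholds, used here for illustration; we do not claim they match Nintendo’s implementation, and applying adult categories to a child is precisely the kind of identity mismatch described above.

    # A minimal sketch of BMI labelling, assuming standard adult cutoffs;
    # an illustration only, not Nintendo's implementation.
    def bmi_label(weight_kg: float, height_m: float) -> str:
        bmi = weight_kg / height_m ** 2  # BMI = weight (kg) / height (m)^2
        if bmi < 18.5:
            return "underweight"
        if bmi < 25.0:
            return "ideal"
        return "fat"  # the output that parents of young players found demeaning

    print(bmi_label(70, 1.75))  # 70 / 1.75**2 is roughly 22.9 -> "ideal"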

     
  4.

    In order to increase the rate of organ donation in the US, Thaler and Sunstein suggest it is worth considering instituting a new default setting called “presumed consent” (Thaler and Sunstein 2008, 179). Unlike “explicit consent,” which requires that citizens take active steps to demonstrate that they want to be organ donors, presumed consent would be guided by the ideal that “all citizens” should be “presumed to be consenting organ donors,” unless, through some easily available means, like checking an opt-out box when applying for a driver’s license, they specify otherwise (Thaler and Sunstein 2008, 181–182). While they acknowledge that such a nudge passes the libertarian paternalism test, they also concede that to avoid making citizens unduly upset about such a “sensitive matter,” it might be best to pursue the less radical option of “mandated choice”—an option that simply requires that everyone who applies for a driver’s license explicitly check off a box that indicates their preference to donate or not donate organs upon death.

     
Bioethicist Art Caplan correctly points out that even this less radical proposal fails to be viable, precisely because Thaler and Sunstein have not raised the appropriate questions related to meaning and identity.9 According to Caplan, what Thaler and Sunstein do not appreciate is that, in the US, many people are so skeptical about the motivations guiding a range of healthcare workers that the mandated choice option would backfire and lead to a decrease in organ donations. Caplan speculates that people would be too afraid to check off the box, out of concern that doing so would give healthcare workers an incentive to provide them with substandard treatment in order to obtain their organs.
  5.

    Google Buzz is a social networking service that, much like Facebook and Twitter, allows Gmail users to see content from other people’s Gmail accounts, including photos, videos, status updates, and more. The people at Google assumed that users should be automatically opted in to Google Buzz when they sign up for Gmail; enrollment as a default option would prompt people to join the program and receive its benefits. The problem that arose, and which is indicated by several class action suits, is that many people perceived being automatically opted in to Google Buzz as a violation of privacy: such a default option, even if it would benefit many people who normally would not have enrolled in Google Buzz, would not be considered beneficial by everyone. In fact, it appears that Google made the decision to enroll users automatically without any competent evaluation of data or information about Gmail users, which left open the possibility that many people would be upset. The particular context of Google Buzz and the identities of Gmail users had a great deal to do with how the meaning of the default option was perceived, and ultimately created difficulties for Google.

     

Each of these examples illustrates simple semantic variations that can occur when the choice context is calibrated. Sometimes semantic variations can lead to harms, as in the case of Wii Fit; in other cases, the intentions of the designer are undermined for some people, but perhaps not for others. The point is that choice situations include multiple ways in which the meaning of the choice context is perceived by those who inhabit them. By appealing to these examples, we are not claiming that semantic variance will always occur in some sense that rules out Thaler and Sunstein’s approach in Nudge. We include these examples only to suggest that semantic variance is something for which Thaler and Sunstein should account if they are to convince us of the moral acceptability of nudges. Our examples allow us to claim only that semantic variance is a significant concern: if there are reasons why choice architects who offer nudges should be trusted, at least some of those reasons must show how choice architects can competently anticipate and negotiate semantic variance.
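Two of the examples above, the organ donation proposal and Google Buzz, turn on default settings, and the design space there is narrow enough to state plainly. The following minimal sketch contrasts explicit consent (opt-in), presumed consent (opt-out), and mandated choice; the names and structure are our own hypothetical illustration, drawn neither from Nudge nor from Google’s systems.

    # A minimal sketch of the three default policies discussed above;
    # hypothetical names and structure, for illustration only.
    from enum import Enum
    from typing import Optional

    class Policy(Enum):
        EXPLICIT_CONSENT = "opt-in"    # not enrolled unless the user acts
        PRESUMED_CONSENT = "opt-out"   # enrolled unless the user acts
        MANDATED_CHOICE = "forced"     # no decision recorded without an answer

    def enrolled(policy: Policy, user_choice: Optional[bool]) -> bool:
        """user_choice is None when the user never expressed a preference."""
        if policy is Policy.MANDATED_CHOICE and user_choice is None:
            raise ValueError("mandated choice requires an explicit answer")
        if user_choice is not None:
            return user_choice  # an expressed preference always wins
        return policy is Policy.PRESUMED_CONSENT  # only the default differs

    print(enrolled(Policy.EXPLICIT_CONSENT, None))  # False
    print(enrolled(Policy.PRESUMED_CONSENT, None))  # True

Technically, almost nothing separates the three policies; on our account, everything that matters lies in how differently situated users perceive the meaning of the default.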

6 Conclusion

Our discussion of semantic variance draws on science and technology studies and the philosophy of technology. Because Thaler and Sunstein restrict Nudge’s examples to cases of semantic invariance, philosopher of technology Don Ihde characterizes their text in the following negative terms:

While Thaler and Sunstein are indeed more inventive and original than the econometric and technocratic pack that they run with, they remain through and through econometric and technocratic. That is, the people who inhabit their world are cardboard characters…These guys, like the Cold Warriors before them, think in a weird world—not one that I’d call a Lifeworld. It’s more a world inhabited by Heideggerian, calculating robots than messy emotional humans. Their world is like the one Heidegger fears, now already assumed to be the real one.10

Ihde’s point is well-taken, albeit only as an assessment of how Nudge is written, and not as a decisive judgment about whether Thaler and Sunstein can in principle provide an account of competence that would respond to the general methodological issues raised in Section 3 and the problem of tracking semantic variance discussed in Section 4. Whether such an account could be given, and given in a way that makes the proper connections between trust and competence, is an open question, but one about which many science and technology studies scholars and philosophers of technology would be skeptical, given how social and material reality is framed in Nudge.

We hope that the present essay succeeds in furthering the conversation about the nature and scope of choice architecture, and helps clarify fundamental issues that choice architecture proponents like Thaler and Sunstein need to address. Although our critical remarks have focused on omissions in Nudge, our structuring emphasis on the relation between competence and trust should be understood as falling directly in line with the type of project for which Thaler and Sunstein advocate. While examples like Clocky are interesting, Thaler and Sunstein appear to seek not merely a set of interpretations of such examples but a useful strategy for increasing savings, cutting costs, and getting people more of what they consider valuable by their own lights. This ambition matters, according to Thaler and Sunstein, because nudge projects can be high-stakes endeavors. For example, if changing the presentation of a human resources document for employee savings did not actually lead to enhanced savings, then employers and employees alike would have good reason to feel disappointed, and perhaps even betrayed.

We conclude by suggesting that choice architects can grasp semantic variance if Thaler and Sunstein’s approach to design is compatible with insights about meaning expressed in science and technology studies and the philosophy of technology. Further multidisciplinary and collaborative research should be undertaken that weighs the respective strengths of various approaches to understanding how people make decisions when they interface with technologies, artifacts, and built environments. Perhaps there are important projects ahead that unite behavioral economics, science and technology studies, and the philosophy of technology.

Footnotes
1

The authors are listed alphabetically and contributed equally to this article.

 
2

Our treatment of trust is limited to how trust relates to competence. Based on this treatment, we do not consider whether trust exists in certain technological interfaces (Taddeo 2009), nor do we address the philosophical debates on what trust is in social relations (Wright 2009; Baier 1986; Holton 1994; Jones 1996; Hinchman 2005a, b; Hieronymi 2008).

 
3

Readers familiar with Don Ihde's phenomenological philosophy will note that semantic variance bears conceptual affinity with multi-stability (Ihde 1977; Selinger 2006; Ihde 2007). We do not use the latter term because it is broader in scope than the former. We selected a more delimited concept for the simple reason that the broad scope of multi-stability has prompted skeptical debate that should not be applicable here (Cerbone 2009).

 
4

The following is an excerpt from the transcript of Thaler’s interview on Tavis Smiley’s PBS show: “Tavis: The research indicates that this varies from men to women, from racial group to racial group, or is this across the board? Thaler: Well, there are small differences. I think women pay a little more attention to detail than men. There are other cultural differences, but by and large, humans are all hardwired the same way. We’re all busy. We all have trouble controlling our impulses and the kinds of things that we talk about in the book are the universals. Tavis: Even for those of us who are more educated? Thaler: Absolutely. We're all human. You know, one of the most powerful biases we have is over-confidence.”

 
5

In such situations, automatic thinking can provide us with a sensible orientation to what some phenomenologists call practical intelligence and practical coping.

 
6

See Freakonomics (Levitt and Dubner 2005) for examples of microeconomic fixes. Of course, stipulating that nudges cannot change financial incentives does not entail that nudged behavior is immune to economic consequences. Rather, the whole point of a properly calibrated nudge is to promote savings and avoid the undue costs that come from poorly designed behavior-modifying interfaces. The most accurate way to make this point, therefore, is to say that nudges are not pecuniary. With this point in mind, we can revisit an example discussed in Section 2: cafeterias that nudge consumers to eat less by shrinking the portions they serve. A more robust way of putting the point is to say that the cafeteria owners at issue must be motivated to help their consumers make healthy choices. If these owners shrink the portion size so that they can charge consumers more for the food they are serving, then the act of changing the plate size does not count as a nudge.

 
7

Thaler and Sunstein also emphasize their vulnerability to cognitive bias to offset claims regarding the epistemic privilege of experts. See (Shrader-Frechette 2005).

 
8

Some philosophers have taken on a project similar to Thaler and Sunstein’s, but draw on a more complicated understanding of the sociality and materiality of design situations (Verbeek 2005).

 
9

Personal correspondence.

 
10

Personal correspondence.

 

Acknowledgments

We would like to thank several people for helping us with this essay. We appreciate Mariarosaria Taddeo taking the initiative to edit this special issue on trust and technology. Michael Lynch and Kathleen Vogel were extremely gracious, allowing one of us to present an early version of the paper at Cornell University’s science and technology studies seminar. Soren Riis generously allowed one of us to give two public presentations of the material at Roskilde University, and Peter-Paul Verbeek went out of his way to arrange for that same person to give a talk at the University of Twente.

Copyright information

© Springer Science+Business Media B.V. 2010