We started this book with a simple ethical question: How should you live your life? In closing, we turn to a related question: Who should decide how you live your life? This new question may seem strange given where started, but it’s important because we have recently witnessed the emergence of a powerful behavioral targeting revolution. This modern technology-driven reality is capable of shaping, tracking, and controlling many of our behaviors and decisions, while simultaneously influencing much of what we are exposed to (e.g., digital marketing, social media content, personalized browsing). Some related behavioral science advances are being leveraged by governments to promote prosperity and innovation across public and private sectors (e.g., U.S. Presidential Executive Order—Using Behavioral Science Insights to Better Serve the American People). Unfortunately, there is also an inestimable potential threat resulting from the rapid rise of science and technology that efficiently manipulates perceptions and decisions without any consent or awareness of those manipulated. Even for some of the most popular scientific frameworks that have been explicitly designed to help people make decisions in their own best interests (R. H. Thaler & C. R. Sunstein, 2008), some common applications appear to be just as ethically questionable as some predatory profiteering schemes or social credit systems. In the light of these rapidly evolving and powerful technologies, in what follows we present a novel conceptual framework to inform the design and evaluation of choice architectures and interactive systems. 
Specifically, based on the evidence and theory presented in previous six chapters, in this chapter we will establish a foundation for practical guidance on methods and heuristics for Interactive Policy Analysis (e.g., policy design, evaluation processes, and standards) based on Ethical Interaction Theory—i.e., a normative theory that provides a philosophically grounded and evidence-based account of how, why, when, and for whom various interaction policies and choice architecture are likely to be more or less ethical and efficient.

It is probably not surprising that Ethical Interaction Theory incorporates the diversity of fundamental philosophical intuitions we have reviewed in the previous chapters. But Ethical Interaction Theory also turns on one additional basic assumption, roughly stated as follows: All else equal, we assume that every person who is competent should have the opportunity to make their own decisions in most situations, absent unwanted infringement on the autonomy of others (e.g., so long as it doesn’t hurt other people). While this is a broad assumption with many nuances and caveats, it is also a relatively uncontroversial assumption that has been codified in wide-ranging practices and widely accepted standards for informed decision making (e.g., bioethics, law, professional ethics codes; Benn, 1976; Drane, 1985; Dworkin, 1981, 1988; Felsen, Castelo, & Reiner, 2013; Haworth, 1986; Mele, 1995).

To ensure Ethical Interaction Theory is philosophically and empirically sound, in the first half of this chapter we map connections among diversity of philosophical intuitions (Chaps. 26), scientific findings, and human values. We then consider theory and science related to one of our mostly widely agreed-upon values—i.e., autonomy—and its role in informed decision making and human well-being more generally. This is followed with a review of some powerful emerging approaches to non-rational persuasion techniques used to shape decision making without technically limiting choice—i.e., Libertarian Paternalism. Ultimately, we show that the fundamental philosophical biases and disagreements associated with personality that we have extensively documented in this book profoundly complicate arguments supporting Libertarian Paternalism. These complications are especially pronounced under conditions in which there are other known and available strategies that promote ethical interactive systems (e.g., transparent decision aids and representative educational materials). We then argue that the weight and breadth of the evidence implies that informed decision making is generally ethically and practically superior compared to alternative non-rational persuasion and paternalistic policies, especially in the context of value diversity related to fundamental philosophical values. However, we want to be very clear that Ethical Interaction Theory does not imply that one type of choice architecture or interactive policy is always better than another. In contrast, the framework simply holds that although some factors are ethically and practically preferable, this superiority may only be obtained under specific conditions. 
As such, under other common conditions viable alternatives (e.g., Nudges) may become preferable given necessary trade-offs among essential values and ethical priorities including autonomy, efficiency, and beneficence (e.g., sometimes it is too expensive or too slow to inform everyone).

Ultimately, it is not primarily the outcome of a decision that is the target of our ethical interactive policy analysis (e.g., the decision need not capture some Neo-Platonic truth about correct answers). Rather, what is ethically evaluated is the policy that determines the process that is used to shape how people interact with systems (or with each other), thereby shaping their judgments and decisions. In other words, interactive policy analysis involves strategies, tools, and methods that may be used to help evaluate ethical threats and vulnerabilities in systems designed to shape interactions (i.e., the how to of analysis), whereas Ethical Interaction Theory provides the theoretical foundation and philosophical justification for such analyses (i.e., the why and when of analysis). To begin to make our case, we turn our attention first to conceptions of human values and their connection with philosophical intuitions.

Values, Philosophical Intuitions, and Personality

What are “values” and how should we characterize the relations between values and fundamental philosophical intuitions? Clearly, one reason theorists purport to study basic philosophical issues is that these issues have some important implications for health, wealth, happiness, justice, and so many of the other things people value deeply (Bishop, 2015; Bishop & Trout, 2005; Kane, 1996). Indeed, fairly uncontroversial and converging conceptions of what (human) values are have been addressed across academic disciplines, including philosophy and behavioral science. For example, values are often said to be anything deemed good or that are appropriate to desire (Velleman, 2008). For practical purposes, we find the influential account offered by Ruth Chang instructive, wherein “A ‘value’ is any consideration with respect to which a meaningful evaluative comparison can be made” (Chang, 1997, p. 5). More specifically she notes:

[Values] can be oriented toward the good, like generosity and kindness; toward the bad, like dishonor and cruelty; general, like prudence and moral goodness; specific like tawdriness and pleasingness-to-my-grandmother; intrinsic, like pleasurableness and happiness; instrumental, like efficiency; consequentialist, like pleasurableness of outcome; deontological, like fulfillment of one’s obligations; moral, like courage; prudential, like foresight; aesthetic, like beauty; and so on. (Chang, 1997, p. 5)

Some of the items on this list might not initially strike you as values, such as fulfilling one’s obligation. But, on Chang’s view, we have clear and relatively non-controversial standards that allow us to unequivocally characterize them as such. That is, since we can compare and evaluate actions with respect to whether the action fulfills an obligation, fulfilling one’s obligation can and should be considered a value (similar conceptions can be found in the behavioral sciences, e.g., Schwartz and Bilsky (1987)).

Given standard and well-accepted notions of values, tight and manifold connections between values and the kinds of philosophical intuitions that are linked to personality that we’ve discussed throughout this book become clear. Consider some obvious cases in ethics. Some philosophers think that objectively wrong or right actions carry with them a special status compared to things that are judged to be conventionally right or wrong. We can expect that many people think that eating one’s soup with one’s salad fork is wrong but not as wrong as eating one’s neighbor with one’s salad fork. The latter may be perceived as objectively wrong, whereas the former may be perceived as conventionally wrong. And, the tendency to judge things objectively wrong as worse than things that are conventionally wrong is one way people can make better, worse, or equivalent to judgments. As such, this is an instance wherein one’s intuitions about moral objectivism inform a judgment of whether some actions are morally better, worse, or equivalent to others. Thus, ethical intuitions can directly shape and inform some values. Additionally, the tendency to have intuitions consistent with moral objectivism is diverse and linked to personality traits (e.g., openness to experience).

Intuitions about free will and moral responsibility also reflect and involve values in many obvious ways. If a person is morally responsible for an action, then that person is a more apt target for praise and blame than a person who is not. If there are degrees of moral responsibility (Mele, 2008), then judging that someone is more morally responsible should factor into judgments of more praise or blame for those actions compared to actions for which one is less morally responsible. As we have documented, the tendency to judge a person free and morally responsible in a variety of different contexts is related to personality traits (e.g., extraversion) suggesting that people have diverse values concerning moral responsibility.

Of note, there is a non-accidental connection between autonomy and freedom. Sometimes theorists simply take freedom and autonomy to be synonymous (for a discussion see Dworkin, 1981). From these perspectives, if autonomy is valuable then so is freedom (Haworth, 1986). And, it is almost universally agreed upon that autonomy is a value (cf. Skinner (1971) and see next section for a related review). Moreover, some argue that freedom underwrites things that we value such as friendship, worth of actions, and so much more (Kane, 1996). On these views, freedom has at least some instrumental value and can profoundly shape values more generally.

Intentional action intuitions also involve and reflect values in many ways. Like judgments of freedom and moral responsibility, intentionality judgments are important elements in how much praise or blame we attribute to somebody. This fact is reflected in everyday judgments where we blame somebody more for actions done intentionally compared to unintentionally (e.g., stepping on your foot). The values associated with intentional action are also reflected in ubiquitous legal standards throughout industrialized countries where typically the most severe punishments are reserved for somebody performing an action intentionally. Because intentionality judgments can be used in comparisons such as these, they can involve, reflect, or be values. Given the frequency with which people appear to reflect on their own intentions as well as those of others, here too it seems obvious that these intentional action intuitions may often be values in many contexts. Again, just as was the case for ethical and free will intuitions, some intentional action intuitions are associated with personality traits (e.g., extraversion) suggesting that there are stable, yet diverse values concerning intentional action.

In the light of these and many other examples, the weight of the evidence suggests that by and large philosophically relevant intuitions commonly reflect or are reflections of our values, and are often associated with personality. Of course, this does not imply that every single philosophical intuition or belief is related in some way to values or personality. Rather, it is sufficient for our current analysis that some philosophical intuitions are connected to values and that these philosophical intuitions are predictable, diverse, and stable. As we have argued, the diversity of philosophical intuitions associated with personality along with the conclusion of the Philosophical Personality Argument means that we may not be able to do some Neo-Platonic projects. The inability to do Neo-Platonic projects poses special challenges to common paternalistic strategies. In short, we may not be able to identify the single value to promote with the paternalistic policies and that inability may have important ethical costs that should be evaluated.

Autonomy: One Shared Value

As we have argued, many values and philosophical intuitions appear to be diverse and related to personality. However, there is surprisingly wide and enduring agreement on others. This fact does not entail that these values are right (in a Neo-Platonic sense). Nevertheless, the convergence and acceptance of some values are practically and theoretically noteworthy. Out of the many seemingly shared values, one such fundamental and widely shared human value is autonomy. While any well-specified definition of autonomy is philosophically contentious, all accounts in some way capture the central notion that autonomy involves people being self-determined and making informed decisions in accordance with their values (Benn, 1976; A. E. Buchanan & Brock, 1989; Dworkin, 1981, 1988; Ellis, 2008; Mele, 1995).

Many accounts of autonomy converge that the value of autonomy can be either instrumental (i.e., helps bring about other things that have value) or intrinsic (i.e., valuable in and of itself). Our review of the literature reveals great consistently across philosophical and empirical accounts on the instrumental value of autonomy. For example, Bentham (2008) famously noted that autonomy is an instrumental good that leads to higher overall well-being, which is a finding that is today among the most well-established in the scientific literature on psychological health, well-being, and achievement (e.g., self-determination theory; Deci & Ryan, 1995; see also Bandura, 1986; Peterson & Seligman, 2004; Seligman & Csikszentmihayli, 2000). Likewise, John Stuart Mill captured this sentiment when we wrote: “If a person possesses of any tolerable amount of common sense and experience, his own mode of laying out his existence is the best, not because it is the best in itself, but because it is his own mode” (Mill & Williams, 1993, p. 135). In these ways and others, philosophical accounts and empirical analyses demonstrate how and why autonomy contributes to human health and welfare and may generally support some other, perhaps more basic, values (e.g., justice, well-being) (Dworkin, 1988). The instrumental value of autonomy may also in part explain why so many practical policies and protections for autonomy have been institutionalized throughout modern societies and organizations (e.g., in professional ethics codes).

Clearly, the instrumental value of autonomy is well-established, but for many autonomy is also an intrinsic value. On these views, autonomy is good not (only) for anything that autonomy helps bring about, but it is good just for its own sake. For example, some people value making their own decisions that are expressions of their values and desires. They may value making these kinds of decisions for their own sake and for no other reason. For better or worse, we see that many people want to have the freedom to make independent, even if potentially poor, choices. The desire for the freedom to make potentially poor choices is clearly not because the ability always (or even on average) brings about good consequences (i.e., the bad choice actually brings about bad things, on average). Rather, the value is because it was “I” who made that choice. Theoretically, that state of affairs could have value in and of itself independent of any outcomes. As such, the presence of environmental constraints or even beneficial manipulators can infringe on this fundamental value because it is not “I” who is the (primary) author of change (Benn, 1976). Ultimately, for many it is simply important to be the authors of our own lives (Dworkin, 1988). Whether autonomy is an instrumental or intrinsic value (or both), autonomy is commonly thought to have significant value—a notion that is widely endorsed by experts and folk alike. But autonomy is not the only value, and under the right conditions, it is widely agreed that autonomy should be violated. In the next section, we discus some of the instances when autonomy can be violated and why.

The philosophical work on the value of autonomy is also reflected in empirical work and scientific theory. Schwartz and Bilksy (1987) conducted extensive cross-culture studies and they arrived at converging conceptions of values based on empirical studies of diverse people (and professionals). They found eight basic values that appeared to be universal in humans, which were later revised to ten basic values in the light of more comprehensive data and analyses (Schwartz, 1992). Among these values was the value of being self-directed. And on all accounts of personal autonomy that we are aware, self-direction is a core element. Consequently, not only is autonomy thought to be important philosophically there is good evidence that autonomy is in fact valued by people in various cultures.

A short set of principles provides an efficient basis for a discussion of how autonomy may factor into personal choice and broader policy debates. For example, consider Mele’s set of jointly sufficient (but not necessary) conditions for autonomy. According to Mele, a self-controlledFootnote 1 person acts autonomously if:

  1. 1.

    The agent has no compelled motivational states, or any coercively produced motivational states.

  2. 2.

    The agent’s beliefs are conducive to informed deliberation about all matters that concern him.

  3. 3.

    The agent is a reliable deliberator. (A. R. Mele, 2001)

The first condition states that one’s motivation should not be the result of things such as uncontrollable phobias or brainwashing. Mele’s second condition holds that one’s beliefs should not be the result of false or misleading information on which the person deliberates before deciding. The final condition covers the skills, conditions, and habits used to deliberate effectively about means and ends.

One set of factors Mele identifies is particularly important for our purposes—namely, being informed and competent (conditions 2 and 3). These two factors are constitutive of what we will call rational agency. Rational agency characterizes the state where one is competent and informed and can integrate information and one’s values into decisions. The relevant decision making pathway is consciously accessible and the person is actively involved in the decision. Broadly, in accord with standard philosophical conventions, we take rational agency to refer to a set of capacities (i.e., competence and being informed) that allows one to take information and representatively and coherently integrate the available information, values, and prior beliefs to make a decision (J. Baron, 2008; Weirich, 2004).

However, there are many influences on people’s decision making, some of which do not factor into rational agency. As detailed in Chap. 5, the underlying psychological processes involved in boundedly rational agency don’t require that a decision maker be neo-classically rational or employ formal normative decision analyses during decision making (e.g., deriving and solving a statistical equation in one’s mind or with the help of a computer). On our scientifically informed view, adaptive (boundedly) rational agency generally only requires that decision makers use the relevant information, along with their relevant values, to reach a locally coherent representative understanding of a decision that robustly accords with standards of normatively superior decision making (e.g., aligns with but does necessarily follow from logical, probabilistic, and statistical standards; see Skilled Decision Theory, Cokely et al., 2018; see also Gerd Gigerenzer et al. (1999); Levi (1967)). Of course, because people are not logical super-computers, sometimes the way that information is presented will predictably bias even the most skilled and informed decision makers.

To illustrate, people can be persuaded, coerced, or influenced by a number of (potentially) non-rational factors like the way that information is framed. Framing can happen when essentially logically identical, but different, descriptions of a choice are used to structure the presentation of information (for a review, see Levin, Schneider, and Gaeth (1998)). Among several robust behavioral biases that can result from framing, one influential bias exhibited by many people is Loss Aversion: People act as if losses loom larger than equivalent gains (almost three times larger on average). As a result, even when presenting people with logically equivalent information, people’s choices can be biased by framing choices with respect to the potential gains versus losses involved (see Chap. 5 for other examples).

Paternalism and Nudging: Features of Some Choice Architectures

There is no question that people often make bad decisions. They decide to do some things that are not in line with their own best interests or their own values. Sometimes, these decisions are driven primarily by environmental factors (e.g., framing, time constraints, ignorance). It may be reasonable to assume that in instances where people make predictably bad decisions it is ethically justified to intervene on their decision making to encourage those people to make better decisions. But the question is how best to intervene?

One way to intervene on decision making is to adopt some paternalistic policy. Paternalism, like most philosophically complicated concepts, is somewhat difficult to define precisely and satisfactorily. There is no consensus on any single definition of paternalism (Trout, 2005). Some think that the core element of paternalism is a violation of a person’s autonomy (Dworkin, 1988). Others think that one of the essential features of paternalism is the willful withholding of important information or the providing of false or misleading information to decision makers (A. Buchanan, 1978). For most practical purposes, Gert and Culver’s analyses of paternalism is instructive and representative:

A is acting paternalistically toward S if and only if A’s behavior (correctly) indicates that A believes that (1) his action is for S’s good; (2) he is qualified to act on S’s behalf; (3) his action involves violating a moral rule (or will require him to do so) with regard to S; (4) S’s good justifies him in acting on S’s behalf independently of S’s past, present, or immediately forthcoming (free, informed) consent; and (5) S believes (perhaps falsely) that he (S) generally knows what is for his own good. (Gert & Culver, 1979, p. 199)Footnote 2

Accordingly, the justification for any paternalistic policy is that the overall benefits that are accrued by the policy outweigh costs associated with the policy. The benefits are supposed to be for the individual for whom the policy is designed. However, these benefits come at the cost of violating some moral rule or violating some other moral good. For example, seat belt laws are often thought to be justifiable paternalistic policies. The benefit to individuals (reduction of risk of death and injury) justifies the infringement on personal freedom (which is typically thought to be a significant moral cost). Even though it is possible that nobody ever consents to the seat belt laws, the policy is justified based on the overall reduced risk of death and injury.

Nudging is one popular recent strategy that may greatly reduce the moral costs associated with “hard” paternalistic policies like seat belt laws (Johnson et al., 2012; Oliver, 2015; Sunstein & Thaler, 2003; R. Thaler & Sunstein, 2003; R. H. Thaler & C. R. Sunstein, 2008). According to R. H. Thaler and C. R. Sunstein (2008), nudging is

[A]ny aspect of the choice architecture that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives. (p. 6).

To be clear, nudges are just one specific type of Choice Architecture. Choice Architecture more generally has been described as “The idea that changes to the decision environment can affect individual decision making and behavior” (Muenscher, Vetter, & Scheuerle, 2016).Footnote 3 The idea that we can and should scientifically design interfaces and affordances (e.g., choice architectures) to facilitate and enhance human and sociotechnical system performance has a nearly century-long history in scientific sub-disciplines such as human factors, cognitive engineering, and engineering psychology (Wickens, Hollands, Banbury, & Parasuraman, 2013), in addition to many other historically long-standing efforts in marketing, business, political science, public relations, and communications (e.g., propaganda). What is somewhat more novel in recent years is the widespread, intentional application of scientifically grounded efforts applied for wider benefit to the decision maker themselves, via public or institutional policies.

Consistent with notions of paternalism and psychological biases, it is commonly thought that nudges “are called for because of the flaws in individual decision making, and work by making use of those flaws” (Hausman & Welch, 2010, p. 136). These flaws are often said to be the result of automatic processing that the nudges can take advantage of (Hansen & Jespersen, 2013). For example, Johnson et al. (2012) state “the same factors that lead us to make a mindless suboptimal or unhealthy choice can often be reversed to help us make a mindless better choice” (p. 500). Selinger and Whyte (2011) think that the characteristic feature of a nudge is that it changes the context “in subtle ways that often function below the level of our conscious awareness, to make decisions that leave us and our society better off” (p. 925). Muenscher et al. (2016) say that nudges “can be understood as a specific type of behavior technique primarily relying on reflexive cognitive processes.” Across these and many other characterizations, the common reflexive cognitive processes that are usually leveraged by nudges are not part of one’s conscious, rational agency (even if the automaticity is adaptive), and therefore nudges often bypass one’s conscious rational agency.Footnote 4

On our understanding, some such nudges may qualify as instances of an approach to paternalistic policies called Libertarian Paternalism. Libertarian Paternalistic policies influence people’s decisions while leaving alternative choices genuinely open without penalty or other incentives. Nudges that are Libertarian Paternalistic are paternalistic in that the nudges try to alter choices. Those nudges are libertarian in the sense that alternative choices are available without changes to incentives. Ultimately, however, Libertarian Paternalism is a type of (soft) paternalistic policy that involves some moral violation. So, ethically speaking, these kinds of nudges require the same kinds of justification that would be required for any other type of paternalistic policy.

It may appear that Libertarian Paternalism is identical with Choice Architecture. They both change the decisions that people make in predictable directions. But as we understand the two concepts, there is an important distinction. Recall that paternalism involves a moral violation, such as violations of autonomy. Yet instances of Choice Architecture do not necessarily involve any moral violation. To illustrate, providing people with accurate and relevant information that predictably changes decisions qualifies as a type of Choice Architecture. But informing people’s decision need not involve a moral violation because that information could be integrated into rational agency and actually promote autonomy (Johnson et al., 2012; Muenscher et al., 2016). In those instances, providing information does not violate a moral rule and hence is not a kind of paternalism (much less Libertarian Paternalism). However, other types of Choice Architecture can be justifiably characterized as Libertarian Paternalism in cases when interventions bypass rational agency (e.g., nudging people toward a choice by taking advantage of automatic processing alone, as happens when people influence choices by setting opt-in or opt-out defaults). Thus, Choice Architecture can, but does not necessarily, involve a moral violation and so should not be viewed as synonymous with Libertarian Paternalism. An illustrative model of the potential relations between Choice Architecture, Nudges, and Libertarian Paternalism is displayed in Fig. 7.1.

Fig. 7.1
Three nested circles with names, from inner to outer, libertarian paternalism, nudge, and choice architecture.

Conceptual diagram of distinctions and nested relations among choice architecture, nudging, and libertarian paternalism

On this model, Choice Architecture is the broadest kind of way to alter people’s choices (e.g., any behavioral interactions in the world can be described in terms of its Choice Architecture). Choice Architecture can be characterized as the environment in which people make decisions that may influence choices. Many instances of Choice Architecture are naturally occurring and non-intentional (e.g., sunlight can influence moods and related behaviors). A more specific, proper subset of Choice Architecture is nudging. Among the distinctions between nudging and choice architecture is that nudging in some way intentionally structures the decision making environment to promote a specific choice. For example, a dried riverbed (i.e., a natural environment) can change the path you walk just like a ditch could. But the ditch could be intentionally placed to alter your choice, nudging you down a different path. Finally, an even more specific kind of choice architecture is Libertarian Paternalism, a proper subset of Nudges (and, by transitivity, a proper subset of Choice Architecture). Libertarian Paternalistic policies involve some (perhaps justifiable) moral violation, whereas nudges need not (e.g., some nudges are transparent and engage rational agency). While there is still no wide consensus about these conceptual distinctions, our working assumption is that the most controversial kinds of choice architecture are those that are Libertarian Paternalistic because they involve some moral violation. These kinds of choice architectures (and nudges) will be the focus of the rest of this chapter unless specifically noted.

The Ethics of Libertarian Paternalism

The ethics of Libertarian Paternalism are hotly debated (Blumenthal-Barby & Burroughs, 2012; Hausman & Welch, 2010; Welch, 2013). Almost everyone agrees that in some circumstances, paternalistic policies that influence choices are justified and sometimes even necessary. For example, it is relatively morally uncontroversial that, on average, having more organs available for transplant is desirable and having fewer deaths is preferable to having more deaths. And, the argument goes, almost everybody values those outcomes. So, the ethical costs associated with nudging people to be organ donors are justifiable given sufficiently good enough outcomes.

However, even in the instances where the good seems to outweigh the cost of the libertarian paternalistic policy, there is still a cost. In particular, one cost that is commonly identified with libertarian paternalistic policies is that those policies undermine autonomy. To illustrate,

To the extent that they [nudges] are attempts to undermine the individual’s control over her own deliberation, as well as her ability to assess for herself alternatives, they are prima facie as threatening to liberty, broadly understood, as is overt coercion. (Hausman & Welch, 2010, p. 131)

The worry is that through the predictable influence of non-rational features, one’s choice can be influenced by the intentions of another person via the nudge: “Their actions reflect the tactics of the choice architect rather than exclusively their own evaluation of alternatives” (Hausman & Welch, 2010, p. 128). As such, the nudge could be coercive. In such instances, the first of Mele’s sufficient conditions for autonomy is not satisfied. The nudge could also run afoul of Mele’s second condition, which requires that the person have all the relevant information to make a decision. The choice that is nudged is a function of the information that is strategically provided. For many nudges, there is no intent or effort to ensure that the person has a minimally sufficient set of the relevant information to make an informed decision (i.e., a representative understanding). Rather, the information is an intentionally small (e.g., skewed or biased) subset of the relevant information. In such instances of nudging, that small set of biased information increases the probability that people will make the “desired” decision. Hence, one potential path to autonomy is not secured, and a moral rule is violated by Libertarian Paternalistic policies.

Providing appropriate justification for Libertarian Paternalism is complicated, especially when there are different values at stake. If what we have presented throughout this book is correct, then values will often be diverse and stably related to one’s personality. The diversity generates two challenges for libertarian paternalistic policies. One challenge is internal to Libertarian Paternalism:

Internal Challenge: Sometimes, it is not clear what values we should promote given the diversity of values (Hansen & Jespersen, 2013; N. Smith, Goldstein, & Johnson, 2013).

Value diversity presents a common problem for policies that attempt to predictably alter decisions in some direction because it is often contentious what that direction should be.

Our data also present an external challenge to Libertarian Paternalism:

External Challenge: Libertarian Paternalism is not simply justified if the (direct) goods that are generated by the policy are outweighed by the (direct) moral costs of the policy.

The External Challenge requires a bit of elucidation. One might think that if the benefit of the Libertarian Paternalistic policy is higher than the moral costs, then one should institute the Libertarian Paternalistic policy. However, looking only at the costs and benefits of the Libertarian Paternalistic policy leaves out an important element in policy decisions: namely, whether there are other policies that could be instituted that generate similar benefits without similar costs. If one focuses only on the good consequences and the opportunity to nudge, one may miss the opportunity to evaluate the relative benefits as compared to other potentially powerful alternatives (and the relative base-rates and mechanisms of each—i.e., including why, when, and for whom the various policies succeed). Ultimately, if there were other strategies that could achieve the same, or similar, ends as the paternalistic strategy but that did not violate a moral rule, then there would be little ethical or practical justification to opt for the paternalistic policy. Thus, it follows ethically that the relative costs and benefits of Libertarian Paternalism should be compared to those of alternative strategies that could achieve the same or similar ends. To more fully map these issues, we next consider one potential alternative to Libertarian Paternalistic policies.

Ethical Interaction Theory: Interactive Policy Analysis

Ethical Interaction Theory is a framework designed to offer techniques to quantify and compare ethical costs and risks associated with individual instances of choice architecture. In this light, our preferred alternative to Libertarian Paternalistic policies is to inform people, empowering them to make decisions on their own—a kind of informing we call representative education (cf. “Boosting,” Hertwig & Grüne-Yanoff, 2017). We will say more below about what we take representative education to be. But the main thrust of the approach is to provide people with enough relevant information so that they have a high-quality factual base to make decisions (e.g., promoting a representative understanding, see Cokely et al., 2018). Informing people is nothing new. However, there has been considerable debate about determining what constitutes the relevant quantity and quality of information (Berleur, Nurminen, & Impagliazzo, 2006; DiPazz, 2002; Turilli & Floridi, 2009; Winkler, 2000). We propose that representative education offers a new solution to the quantity and quality problem.

Let’s consider issues with the quantity of relevant information first. Complete information could theoretically avoid problems associated with the intentional use of non-rational factors to influence decisions. If one knows everything relevant to the decision and if one could integrate all of that information into a decision, then there would be no need for non-rational factors to play a role in the decision. Even if non-rational factors could potentially play a role in decision making, the information about those non-rational factors would simply be one more informational input in the decision. Those non-rational factors would no longer be non-rational factors since they would be integrated in the decision making process. Of course, a major worry is that if the necessary quantity of information is sufficiently high, autonomous decision making might rarely be an achievable standard for humans (Gigerenzer et al., 1999; Hardman & Macchi, 2003; Merz & Fischhoff, 1990). Think about the last time you sought a mortgage for your home, the last time you consented to a medical treatment, or even the last time you surfed for a TV program. Odds are you had access to a lot of relevant information that you didn’t, or perhaps could not, explore. Clearly, complete information is often impractical if not impossible.

If providing all information about a decision is neither a necessary condition nor part of a sufficient condition for autonomy, then it appears that the quality of the information is what matters most. We think that providing information that a person can efficiently integrate into a representative understanding of the decision problem is key to autonomous, adaptive, and informed decision making. In particular, if we can present information in ways that facilitate representative understanding, then we will promote autonomy compared to instances where people developed a biased or unbalanced understanding based on systematically skewed, persuasive information. That increase in the representative quality of the understanding may then contribute to that person’s autonomy, not because of full information but because one has a more prognostic understanding of the decision problem and how it factors into one’s own life and values. Accordingly, developing valid and robust scientific means and methods for assessing, characterizing, and evaluating such representations (e.g., costs/benefits, robustness, trade-offs) is a central enterprise in the science for informed decision making and a major marker of scientific maturity and progress (e.g., increasing prediction and control of representative understanding and decision vulnerabilities).

Generally, one has a representative understanding when that understanding is relatively robust against bias from random additional relevant or irrelevant information. Bias, in the sense that we mean here, implies only a tendency, not an error (e.g., many Americans have a bias to write with their right hand). More specifically, representative understanding (a person variable) as well as representative education (an interface variable) can be understood by an analogy to representative sampling for statistical inference. A sample is representative of a population when the sample accurately reflects the target population on the properties of interest (e.g., via a sufficiently large, random sample). When the sample is representative (and not too small), adding additional randomly selected data will not likely change the robustness of inferences made about the population, assuming appropriate statistical techniques are used and standard assumptions are met. Likewise, having a representative understanding means that one’s understanding is sufficiently (but not exhaustively) nuanced and detailed such that additional random aspects of information are unlikely to bias inferences made on the basis of that understanding. Following the sampling analogy, any random bit of information (either relevant or not, accurate or erroneous) is not likely to change one’s mind if one has a representative understanding. To the extent that one’s decisions are easily (or substantially) biased by additional random information, one does not have a representative understanding. Likewise, to the extent that more representative education changes the decision, one does not have a representative understanding.
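The sampling analogy can be made concrete with a short simulation. This is an illustrative sketch only: the population, the sample sizes, and the notion of “drift” below are invented for demonstration and are not drawn from the chapter’s studies. The point is that an inference from a large random (representative) sample barely moves when additional random information arrives, while an inference from a small, skewed sample remains easy to perturb:

```python
import random
import statistics

random.seed(42)

# Hypothetical population of 100,000 values of some decision-relevant quantity.
population = [random.gauss(50, 10) for _ in range(100_000)]

def drift_after_new_data(sample, extra_n=50):
    """How much the sample mean moves when extra random observations arrive."""
    before = statistics.mean(sample)
    after = statistics.mean(list(sample) + random.sample(population, extra_n))
    return abs(after - before)

representative = random.sample(population, 1_000)  # large, random sample
skewed = sorted(population)[:30]                   # small sample from one tail

print(drift_after_new_data(representative))  # small drift: inference is robust
print(drift_after_new_data(skewed))          # large drift: inference shifts
```

On these invented numbers, the representative sample’s mean changes only trivially when new random data are mixed in, whereas the skewed sample’s mean jumps substantially, mirroring how a representative understanding resists biasing by random additional information.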

To offer an oversimplified but potentially useful illustration, consider making a decision about getting burned. Once a person realizes that a flame burns and hurts their finger, most adults have sufficient personal understanding, knowledge, and reasoning capacities to develop a relatively representative understanding of key causal aspects of the relationship between fire and the rest of their body surfaces (e.g., fire burns and hurts my finger; even though some skin areas are less sensitive than fingers, something that hurts the skin of my finger will likely hurt the skin in another part of my body. Thus, fire will probably hurt anywhere it touches my skin). With this understanding they can make reasonable, robust inferences about how much they (don’t) want a flame to touch them elsewhere. By chance or intent, they may come across more information that could cause them to update their previous understanding. But, absent a relatively concerted and compelling effort to manipulate incoming information or to discount one’s previous knowledge about one’s self or one’s environment (i.e., accurate Bayesian priors), the simplified yet representative causal understanding of skin and fire (a representative sample) will tend to allow them to use simple decision strategies (heuristics) to make inferences that approximate the decisions they would make if they had an expert understanding of all the relevant information (the population).

Because representative understanding essentially involves rational agency, ensuring representative understanding is autonomy promoting. For this reason, any strategy that informs, even if slightly, is to be preferred to a strategy that does not inform, everything else being equal. And, the informing is autonomy promoting because people are free to integrate that information with whatever (diverse yet stable) values they may have. In this way, promoting representative understanding through education can be different from nudging. Nudges focus on the outcome of a decision process (e.g., eating a salad, installing energy-efficient light bulbs), whereas promoting representative understanding through education focuses on the decision making process (e.g., rational agency) that leads to the outcome.

In the light of theory and extant data, it is clearly possible to avoid many problematic aspects of and debates about Libertarian Paternalism by providing representative education. But we want to be very clear that this possibility does not entail that we should always prefer promoting representative understanding to Libertarian Paternalism. By our lights, the representative education framework does not imply that any specific class of decision policy is always best. Rather, the framework offers a conceptual basis for an ethically and empirically informed interactive policy analysis of the relative merits of viable options (e.g., comparing best practice Libertarian Paternalism to decision aids or educational interventions). As the science and practice mature, we expect this process will ultimately follow standards such as formal cost-benefit or policy analysis (e.g., Gramlich, 1990; Weimer & Vining, 2017).

To clarify further, let’s briefly consider what such a cost-benefit analysis would look like and which dimensions would be relevant. While a complete account is still to be discovered and established, we can give the contours of such an analysis. An efficient starting point is Trout’s (2005) suggestion concerning strategies that attempt to debias and inform (i.e., internal strategies) as compared to those that influence by taking advantage of biases (i.e., external strategies):

To the extent that these particular strategies work, their desirability is based on the particular features of the problem: their generality (the scope of the problems they address), their frequency (how frequently the types of problems they address actually occur), their significance (how important the problems are to human welfare), and the cost of implementation (how simply and cheaply the problem can be addressed by these methods). (Trout, 2005, p. 422) [Footnote 5]
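Trout’s four dimensions lend themselves to a simple multi-criteria sketch. The weights and 0–10 ratings below are entirely hypothetical placeholders (a real interactive policy analysis would replace them with empirical estimates), but they show how the criteria could be combined into a comparable score for two candidate policies:

```python
# Hypothetical multi-criteria comparison of two decision policies using
# Trout's (2005) four dimensions. All ratings and weights are invented
# for illustration only.

CRITERIA = ("generality", "frequency", "significance", "implementation_cost")
WEIGHTS = {"generality": 0.25, "frequency": 0.25,
           "significance": 0.35, "implementation_cost": 0.15}

def policy_score(ratings):
    """Weighted sum of 0-10 ratings; implementation cost counts inversely."""
    total = 0.0
    for criterion in CRITERIA:
        rating = ratings[criterion]
        if criterion == "implementation_cost":
            rating = 10 - rating  # cheaper policies score higher
        total += WEIGHTS[criterion] * rating
    return round(total, 2)

nudge = {"generality": 4, "frequency": 7, "significance": 6,
         "implementation_cost": 3}
representative_education = {"generality": 8, "frequency": 7,
                            "significance": 7, "implementation_cost": 4}

print(policy_score(nudge))
print(policy_score(representative_education))
```

The scoring rule itself is the point, not the invented numbers: once each criterion is estimated, two policies become directly comparable, and changing the weights makes explicit which values the analysis privileges.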

In some instances, promoting representative understanding will be superior to Libertarian Paternalism on these criteria. Take the implementation criterion first. There are some simple, efficient, and direct ways to increase representative understanding. The presentation of visual aids has been shown to increase understanding of basic information relevant to some decisions (Galesic & Garcia-Retamero, 2011; Garcia-Retamero, Petrova, Feltz, & Cokely, 2017). This kind of intervention is arguably often at least as easy to implement as structuring environments so that the message frame or defaults influence people in the desired direction.

Second, interventions that successfully inform decisions (e.g., promote representative understanding) may confer other general benefits that are missed by narrowly focused Libertarian Paternalistic strategies. Informing people’s decisions engages rational agency in ways that Libertarian Paternalistic policies do not, and therefore has the potential to encourage the development of character, skills, and wisdom that may be valuable or even essential for generally skilled decision making, personal growth, and more comprehensive rational agency. For example, perhaps bar graphs provide the opportunity for people to understand statistical information better, and that experience and familiarity with bar graphs may transfer to other, similar decisions that involve statistical information. Alternatively, perhaps careful evaluation of trade-offs and options, e.g., in high-stakes medical contexts, provides people with greater insight into their own deeply held values, or a greater sense of decision making self-efficacy. In any event, because Libertarian Paternalistic policies do not increase understanding or agency to the same extent as efforts to increase representative understanding (if at all), Libertarian Paternalistic policies are not likely to help nurture these powerful kinds of skills, insights, or resources, and thereby do not generally promote autonomy or rational agency, either proximally or distally.

Third, Libertarian Paternalistic policies derive their effects from interventions targeting relatively passive decision making, and thus the quality of decision outcomes depends on the wisdom and power of policy makers to shape environments in suitable ways. That is, Libertarian Paternalistic policies take time and resources, including political capital, to implement. Libertarian Paternalistic policies are also typically only effective under a narrow band of conditions (e.g., under routine conditions when everyone has similar biases and would benefit from similar outcomes). What’s more, Libertarian Paternalistic policies appear to run a significant risk of encouraging more passive, dependent decision making more generally (e.g., passive behavior is reinforced and rewarded). Even if the risk is small in any single instance, given enough time and exposure, Libertarian Paternalistic policies appear likely to reduce one’s decision making self-efficacy, potentially damaging one’s deep sense of competency. Factors that threaten self-efficacy and agency in turn tend to undermine motivation toward and resiliency of autonomous behavior, innovation, well-being, creativity, leadership, skill development, and a host of other factors with real social and economic implications. In contrast, efforts to develop more autonomous decision makers theoretically should promote personal development and agency, promoting more adaptive, self-determined, skilled, and resilient decision making more generally. In such cases, autonomous and skilled decision makers are individuals who are well-equipped to make good decisions for themselves, their families, and their communities. 
Those kinds of decision making skills and abilities are likely to provide even larger benefits when decision makers face rapidly changing, high-stakes, and evolving conditions (e.g., when infrastructure becomes less reliable or when threats escalate, such as during natural disasters, emergencies, and in unfamiliar social and economic conditions). Underlying skills and abilities also appear to provide some protection against the threat of misinformation and disinformation and may also reduce people’s susceptibility to motivated reasoning biases that often follow from conflicts of interest, particularly in controversial domains (e.g., climate change; for a recent review see Cho et al., 2024; see also Van der Linden, 2023; Roozenbeek & Van der Linden, 2024).

Of course, efforts to increase representative understanding are not free of problems. Representative education can be expensive in many senses (e.g., time, money, and other resources) and may not be as effective or efficient as Libertarian Paternalistic policies, particularly in some high-stakes instances (see Feltz (2015a, 2015b) and Trout (2005)). Nevertheless, the extant data consistently indicates that systems that promote skilled and informed decision making tend to empower autonomous, high-quality decision making. Given the overwhelming evidence on the mechanisms and value of skilled decision making and related outcomes (e.g., resiliency, well-being, and agency), even if the short-term costs and benefits are relatively comparable, we can be confident that autonomous decision making will usually be morally and practically preferable to Libertarian Paternalistic policies.

To illustrate, Benartzi et al. (2017) conducted a comparison of different types of choice architecture, including Libertarian Paternalistic policies and harder paternalistic strategies such as offering monetary incentives for some choices. For instance, they measured the effectiveness of a default nudge for flu vaccination versus monetary compensation for getting a flu vaccine. They found that, per $100 spent, about twice as many adults got a flu vaccine in the default condition compared to the monetary incentive condition. Hence, on the surface it looks like the default nudge is more effective than monetary incentives. However, we re-analyzed their data to respect the distinctions between choice architecture, nudges, and libertarian paternalistic strategies (the authors lumped all choice architectures besides incentives into one group). On our re-analysis, strategies that involved informing individuals (e.g., educational campaigns) were three times better than libertarian paternalistic nudges and eight times better than monetary incentives. This same pattern held not only for decisions about the flu vaccine, but also for decisions about saving energy and retirement savings. Hence, in these cases, it appears that the benefits of informing greatly outweigh the benefits of the libertarian paternalistic policies, independent of promoting autonomy. Critically, we would not have known how much better these policies were had we not compared them.
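The logic of the per-$100 comparison can be sketched with a simple normalization. The counts and budgets below are invented, not Benartzi et al.’s data; they are chosen only so that the default nudge comes out roughly twice as cost-effective as the incentive, with informing several-fold better still, in line with the overall pattern described above:

```python
def vaccinations_per_100_dollars(extra_vaccinations, program_cost):
    """Normalize a policy's effect to a common per-$100 budget."""
    return 100 * extra_vaccinations / program_cost

# Hypothetical program results (invented for illustration only).
incentive = vaccinations_per_100_dollars(extra_vaccinations=300,
                                         program_cost=15_000)
default_nudge = vaccinations_per_100_dollars(extra_vaccinations=120,
                                             program_cost=3_000)
informing = vaccinations_per_100_dollars(extra_vaccinations=240,
                                         program_cost=2_000)

print(incentive)      # extra vaccinations per $100 for the incentive program
print(default_nudge)  # for the default nudge
print(informing)      # for the informational campaign
```

Note that raw uptake alone would rank the incentive program first here (300 extra vaccinations); only after normalizing by cost does a nudge- or informing-favoring ordering emerge, which is why per-budget comparison matters for policy analysis.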

Choice Architecture Policy Analysis: A Detailed Case Study

To illustrate more concretely key aspects of an interactive policy analysis, we consider one case study comparing choice architectures used for risk communications (e.g., libertarian paternalism v. information transparency). We aim to illustrate some side-by-side comparisons of different choice architectures that are instructive with respect to how, when, and why we can infer that libertarian paternalistic decision policies are ethically inferior to informed decision policies (e.g., representative education that promotes representative understanding).

Our example comes from a line of research that attempts to promote better decision making related to sexual health and disease prevention. As we have already discussed, the way that choices are framed (i.e., how information is described) can predictably influence the choices that some people make, even if the information presented is formally logically identical (Levin, Johnson, & Davis, 1987; McNeil, Pauker, Sox, & Tversky, 1982; Rothman & Salovey, 1997; Tversky & Kahneman, 1981). In a series of experiments, Garcia-Retamero and Cokely (2015a, 2015b) demonstrated that choices and behaviors relevant to sexually transmitted infections (STIs) were predictably altered depending on how risk communications framed the risk information. In one part of the study, participants were randomly assigned to receive “positively” or “negatively” framed information about the risks associated with STIs. The “positive frame” emphasized that using condoms reduced the chances of contracting an illness and having long-term health consequences. The “negative frame” described the same basic information in terms of increased chances of contracting an illness and long-term health consequences if one does not use condoms. Remarkably, this small change in description had large impacts on the resulting screening or protective behaviors of the young adult sample involved in the study (note: young adults are among the most at risk for life-altering HIV and related STI infections that could be largely prevented with condom use). Those who received the positively framed information were nudged toward engaging in more preventative behaviors (e.g., using a condom) than those who received the negatively framed information. Those who were given the negatively framed information were more likely to engage in screening behavior (e.g., getting a test for an STI) than those given the positively framed information.

Given these results, the science indicates that a choice architecture policy using gain and loss framing for information about STI prevention and decision making can successfully, quickly, and robustly decrease some risky behavior (e.g., promote condom use or STI screening). Theoretically, this type of choice architecture represents a libertarian paternalistic policy, as the framing of the information is designed to encourage people to engage in the targeted behavior while preserving the ability to choose. Thus, even in the absence of representative education (and the resulting representative understanding), these nudges appear to have some beneficial effects. Taken at face value, these results may suggest that framing information to maximize one of those outcomes (prevention or screening) may be ethically defensible. How do these effects compare to other choice architectures that use representative education to promote a more representative understanding of detection and prevention costs and benefits?

To estimate the relative benefits of information transparency, Garcia-Retamero and Cokely (2015a, 2015b) gave participants a data visualization as a decision aid (see Fig. 7.2). The visual aid took the form of a simple bar graph that depicted the statistical information about risk and reduction of infection in a visually accessible (i.e., transparent) and easily comparable form. All other information was the same (e.g., presented with negatively or positively framed information), yet giving participants this visual aid resulted in no measurable differences in screening and prevention behaviors as a function of message framing. Even though there was no measurable difference between the gain and loss framing, the two behaviors together increased dramatically when the graph was presented compared to when it was not (e.g., reductions of risks in subsequent behaviors). Thus, the transparent bar graph encouraged people to engage in more prevention and screening behavior regardless of the frame (i.e., it was just as effective as either framing intervention, on average). On the face of it, it is reasonable to assume that those who were provided with the visual aid generally understood the decision problem better than those who did not receive the decision aid. Hence, on our view, the aided decision is ethically better than the nudged decision.

Fig. 7.2 Visual representation of infection rates with and without using condoms. The bar graph shows the percentage of people who have sexual intercourse with an infected partner and contract an STD: approximately 16% when using condoms versus 37% when not (values estimated from the figure).
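The estimated values in Fig. 7.2 also make it easy to see the arithmetic latitude that framing manipulations exploit: the same two numbers support both an absolute and a relative description of the risk reduction. (The 16% and 37% figures are approximate values read from the bar graph, not exact study statistics.)

```python
# Approximate infection rates estimated from Fig. 7.2.
risk_with_condoms = 0.16
risk_without_condoms = 0.37

# Absolute risk reduction: the percentage-point difference.
arr = risk_without_condoms - risk_with_condoms

# Relative risk reduction: the proportion of baseline risk eliminated.
rrr = arr / risk_without_condoms

print(f"Absolute risk reduction: {arr:.0%} points")  # 21% points
print(f"Relative risk reduction: {rrr:.0%}")         # 57%
```

A communicator could truthfully say either “condom use cuts your risk by 21 percentage points” or “condom use cuts your risk by more than half”; a transparent visual aid instead presents both rates side by side and leaves the integration to the decision maker.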

One could think that the presentation of the visual aid is simply one more libertarian paternalistic method to nudge people toward the desired choices. That is, perhaps there is something about the graph that takes advantage of non-rational features in order to increase the desired screening and prevention behavior. But there are some important and telling clues suggesting that this is not how the graph worked. Garcia-Retamero and Cokely (2015a, 2015b) assessed how well participants understood the information about sexually transmitted diseases. When people were given the visual aid, they understood and could reason about the information better than when they were given only the framed information, which translated into enduring changes in attitudes, plans (e.g., intending to buy condoms or visit a physician), and behaviors. This is consistent with a large literature on the effect that visual aids can have on improving information understanding (Garcia-Retamero & Cokely, 2013a, 2013b, 2013c; Garcia-Retamero, Cokely, & Hoffrage, 2015; Garcia-Retamero, Okan, & Cokely, 2012). Moreover, in a subsequent study, a similar design was used to test the benefits of visual aids compared to other kinds of framing effects (e.g., attribute instead of gain/loss framing) and compared to a validated and extensive (eight-hour) educational intervention (Garcia-Retamero & Cokely, 2013a, 2013b, 2013c, 2017). In both cases, the simple visual aid generally matched the large benefits of the other framing-based choice architectures and of the extensive educational intervention, while requiring equal or less time and cost to implement, and it resulted in similarly enduring changes in target attitudes, intentions, and behaviors.
Importantly, however, the results also revealed that the visual aids improved representative understanding to the greatest extent among the most vulnerable individuals (e.g., individuals with less knowledge of risks and lower risk literacy scores as measured by numeracy tests), largely eliminating disparities by roughly equating more and less skilled decision makers on most assessed decision making quality variables. Ultimately, representative understanding of information is a rational factor that tends to be by far the most influential variable giving rise to better decision making. So, at least with respect to framing information, providing visual aids may, and certainly does in some cases, provide a more representative understanding of the problem than the libertarian paternalistic policy.

Given the results of this randomized control trial and the associated estimates of the comparative value of the two architectures, we can directly compare some key costs, benefits, and potential trade-offs of the two decision policies. The visual aid was ethically superior because it protected (rather than infringed on) autonomy. Other things being equal, in accord with the Ethical Interaction framework, this alone would imply that the representative education policy should be preferred. But, of course, other things often aren’t equal, and so we next turn to other costs and benefits that would be relevant for wider-scale implementation of both policies.

Roughly, the production and implementation costs and benefits of both the visual aids and the framed information appear likely to be similar (e.g., printing, posting, and distribution), although the inclusion of a basic bar graph simplifies brochure development in the decision aid condition (e.g., it doesn’t require two separate brochures, one using a gain frame for prevention and another a loss frame for detection). The statistical model estimates from the experimental trial indicate relative equivalence across the direct effects on decision outcomes and targeted consequences. However, the cognitive and emotional costs and benefits, such as attention, processing time (i.e., reading), and overall cognitive workload, may ironically be lower for the visual aid condition, which is theoretically easier to use and remember (e.g., a picture is worth a thousand words). Moreover, the visual aid did not explicitly aim to induce mood-type states that may entail other non-rational carry-over effects that could be hard to counteract without a more representative understanding (e.g., inducing risk or loss aversion more generally). To the extent that the visual aids are likely to be easier to communicate and remember, they should also be at least marginally easier to accurately discuss with others and more resistant to distortion effects (e.g., misremembering).

Interestingly, people who were less numerate and less knowledgeable appeared to be affected by both the framing and visual aid manipulations (see Garcia-Retamero and Cokely (2013a, 2013b, 2013c)). Because less numerate people are also less prepared to independently evaluate information about risk (e.g., they have lower risk literacy), the framing manipulation also appears likely to create some disparity in the cognitive and decision making benefits in libertarian paternalistic (framing) conditions. Essentially, even in the framing condition, more numerate people are likely to use their risk literacy skills (e.g., reframing) to generate a more representative understanding of the underlying information. This processing makes it more likely that those who are risk literate will avoid being affected by framing but those with lower risk literacy won’t. Hence, there are disparities with respect to whose autonomy is diminished. To the extent framing effects do not influence decision making for numerate people but do for less numerate people, we have yet another ethical concern that was circumvented in the visual aid condition (e.g., all individuals who made better decisions were more likely to do so on the basis of a more representative understanding—roughly equating risk literacy for the current decision).

Taken together, the net benefits of the representative education policy (i.e., transparent visual aids) seem to dominate those of the libertarian paternalistic policy (i.e., the framing manipulation), without any appreciable trade-offs. Most implementation costs were similar, yet the decision aid protected autonomy and provided a more enduring means of empowerment that was shared more equitably among more and less vulnerable individuals (e.g., promoting informed and skilled decision making across all, instead of biasing less skilled individuals while informing others). In accord with the standards of Ethical Interaction Theory, in this case the representative education policy is both ethically and practically superior.

As this example suggests, in some instances there are important and viable alternatives to nudging. To put the point somewhat differently, at one time in the recent past the available science suggested “there is no evidence that the same corrective success could be effectively or routinely achieved by inside strategies, strategies of individual motivation that attempt to acquire more accurate representations or to consider alternative possibilities” (J.D. Trout, 2005, p. 430). Since then, decision aids and training programs have been shown to hold great promise with regard to the promotion of informed decision making. And these educational interventions did not “cost” more than nudges. Thus, we do not need to consistently or exclusively rely on nudges to help people make better decisions. That said, technologies that promote representative understanding do not always result in better decisions, even in instances where nudges are effective, and in some cases the costs associated with informed decision making may far outweigh any benefits. The task that is left to us, then, is to determine when nudges are ethically preferable to alternative strategies like representative education. We provide some preliminary criteria to use in such comparisons in the next section.

Choice Architecture Policy Analysis: Heuristic Evaluation

We have reviewed theory on some of the relative merits of nudging versus promoting representative understanding. In an effort to distill some practically useful and efficient guidelines, we next consider five primary concerns for heuristic evaluation by choice architects, designers, and policy makers. Each of these heuristics may be useful when trying to estimate the relative costs and benefits of various choice architectures. Based on the previous discussion, there is reason to think this heuristic evaluation will be a robust and practical (but not necessarily perfect) guide for complex interactive policy analysis. But, to be clear, this is only a starting place (see Appendix), as there simply is not enough science on nudges or their alternatives to propose or justify a full set of formal standards. Given these considerations, the following is a tentative list of heuristics to use when comparing nudges with policies that inform decision makers. These heuristics are Benchmarks, Disparities, Resources, Reputation, and Resiliency.

1. Benchmarks: What are the alternatives and benchmarks against which we should evaluate the benefits of nudges or other decision policies, and why? Whenever there are potential or actual alternatives to libertarian paternalism, an interactive policy evaluation of the relative costs and benefits of the libertarian policy should be conducted. We have already discussed some of the relevant criteria (e.g., those provided by Trout (2005)). One additional evaluation element should involve an assessment of the moral costs associated with alternatives. Recall that one of the central characteristics of paternalistic policies is that they involve some moral violation, which is not an issue for some alternative policies (e.g., transparent decision aids that promote representative understanding). Therefore, in order to ethically justify a libertarian paternalistic policy, that policy must perform better on the relevant criteria than the alternative in the light of other associated costs and pragmatic concerns (e.g., given costs, needs, constraints, etc.). Detailing these moral and other costs and benefits is necessary (but not sufficient) for the establishment of ethically defensible nudging policies.

2. Disparities: How widely do stakeholders and experts agree about the value(s) being promoted? Are common differences in fundamental values likely to result in conflicts of interest? Libertarian Paternalistic policies typically find their most persuasive support in instances where there are uncontroversial, normatively correct choices. The main reason is that Libertarian Paternalistic policies, by their very nature, promote only one choice. For example, default settings only attempt to encourage the default-congruent choice. If there is only one basic value, then Libertarian Paternalistic policies can sometimes efficiently help secure that value. However, as we have argued, values are often diverse and stably related to personality, and there is not only one value that should be maximized above all others (e.g., different people with different end-of-life values may have different priorities when designating a surrogate). Given that there typically is no known Neo-Platonic truth about which values are correct (or even which are most valuable), choice architects won’t be able to reliably guide people to make decisions in accord with the “right” values whenever there is a diversity of legitimate values. In cases such as these, Dworkin (1988) gives some helpful suggestions about how to weigh the alternatives to nudging when there is a plurality of values:

    1. The majority interest must be important.

    2. The imposition on the minority must be relatively minor.

    3. The administrative and economic cost of not imposing on the minority would be very high.

To the extent that there is good evidence that 1–3 are true, that may generally favor some Libertarian Paternalistic policies (Soll et al., 2015). However, in instances where there is little or weak evidence for some of 1–3, one must think carefully about the relative merits of the Libertarian Paternalistic policy versus the alternatives.

3. Resources: What are the essential skills and resources required of the individual decision maker, how well understood are those competencies, and what are the distributions of relevant competencies across various stakeholders? In general, Libertarian Paternalistic strategies will have the advantage when we are not sure what skills are required to make an independent, well-informed decision, or when developing those skills is prohibitively costly (e.g., becoming an expert violinist). A substantial literature suggests there are at least some domain-general skills that lead to better decision making in general (e.g., statistical numeracy is thought to give rise to risk literacy more generally; Cokely et al., 2012), as well as many other domain-specific competencies that can powerfully influence choice (e.g., expertise, time, financial resources, familiarity with the domain, etc.). Choice architects who have identified the relevant skills and resources (e.g., by reverse engineering superior decision making) can more accurately assess the feasibility and other costs associated with various designs (e.g., by “boosting” competencies; Hertwig & Grune-Yanoff, 2017). For example, it would be ethically irresponsible to design a decision education intervention that no one would understand or have time to consider (e.g., one that is too complex), given the availability of effective and otherwise useful nudges.

4. Reputation: What are the costs to the choice architect? All the other criteria in this heuristic evaluation concern the end-user who is the target of the choice architecture. However, attention should also be paid to the costs to the choice architect who implements the decision making intervention. Sometimes, choosing to intervene comes with risk to the intervener. These risks can take a variety of forms, including reputation costs, losses of trust (e.g., “trust in municipal authorities”), and, in some cases, risks of physical harm (e.g., interventions to decrease homophobia) (2011). There is little research on the effects of Libertarian Paternalistic policies with respect to what we call reputation costs. Some research exists on how general factors such as one’s level of trust predict acceptance of nudges, suggesting that greater institutional trust predicts acceptance of nudges (Sunstein, Reisch, & Kaiser, 2019). Other research suggests that nudges that target unconscious processes are viewed less favorably than those that target conscious processing (Felsen et al., 2013). However, little research indicates whether nudging influences levels of trust in the choice architect or their agency (Hoang & Feltz, in prep).

5. Resiliency: What are the relevant infrastructure, development, and similar constraints that bear on the feasibility of each instance of choice architecture? Special attention should be paid to the practical, legal, and implementation costs associated with deploying any choice architecture in the face of potentially dramatic changes in the choice environment (including cognitive or emotional changes within the decision maker and social-political changes in the environment). This is particularly true for high-stakes and time-sensitive choice architectures that rely on special kinds of infrastructure or resources (e.g., changing administrations can change priorities and reduce access to resources; cyberattacks can alter communication infrastructure and cognitive workload; natural disasters can alter physical infrastructure and emotional stability). For instance, it may be wise to use a Libertarian Paternalistic policy to help people make better choices, rather than to explore ways of informing them, in some deadly situations where public risk perceptions could interact to exacerbate the emergency (e.g., when fear can dramatically alter behavior). If there is an outbreak of a highly contagious disease, perhaps it is better to set up defaults (or even hard paternalistic policies) that keep people out of harm’s way. In those instances, the gains in expediency could outweigh the costs of the moral violation, even if there are, at least in principle, other alternatives that theoretically could inform decisions (e.g., risk communications that should be transparent but cannot be validated in time).
Of course, in similar emergency situations that involve rapidly changing or persistently unstable conditions, decision aids that enable independent informed decision making and non-technology-mediated person-to-person communication may be preferable, as might be the case after a natural disaster that involves long-term utility and transportation disruptions. In any case, choice architects should assess all relevant aspects of infrastructure needs and vulnerabilities, with careful attention to risks that accrue under changing cognitive, emotional, social, political, and environmental conditions.
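To make the comparison concrete, the five heuristics above could be organized as a simple scorecard. The following Python sketch is purely illustrative: the heuristic names follow the list above, but the particular scores, the equal weights, and the weighted-sum aggregation are our own assumptions for demonstration, not values or procedures prescribed by Ethical Interaction Theory.

```python
from dataclasses import dataclass

# The five heuristics discussed above; order is arbitrary.
HEURISTICS = ["benchmarks", "disparities", "resources", "reputation", "resiliency"]


@dataclass
class PolicyEvaluation:
    """One candidate choice architecture, scored on each heuristic.

    Scores are judgments in [0, 1], where higher means fewer concerns
    on that heuristic. Both policies and scores here are hypothetical.
    """
    name: str
    scores: dict

    def weighted_total(self, weights: dict) -> float:
        # A simple weighted sum; a real evaluation would also record
        # qualitative notes and the evidence behind each judgment.
        return sum(weights[h] * self.scores[h] for h in HEURISTICS)


def compare(policies: list, weights: dict) -> list:
    """Rank candidate policies by weighted heuristic score (best first)."""
    return sorted(policies, key=lambda p: p.weighted_total(weights), reverse=True)


# Illustrative comparison: a framing nudge vs. a transparent visual aid,
# loosely mirroring the example earlier in the chapter.
weights = {h: 1.0 for h in HEURISTICS}  # equal weights as a neutral default
nudge = PolicyEvaluation("framing nudge", {
    "benchmarks": 0.4, "disparities": 0.3, "resources": 0.8,
    "reputation": 0.5, "resiliency": 0.7,
})
visual_aid = PolicyEvaluation("transparent visual aid", {
    "benchmarks": 0.8, "disparities": 0.9, "resources": 0.6,
    "reputation": 0.8, "resiliency": 0.6,
})
ranked = compare([nudge, visual_aid], weights)
```

In practice, the qualitative record of evidence behind each judgment would matter far more than any single aggregate number; the sketch simply shows how a transparent, criterion-by-criterion comparison of decision policies might be structured.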

Conclusions

This book had an ambitious goal: to provide an integrative, evidence-based review of the growing literature showing that many philosophical values are predictably fragmented, while also detailing how and why that fragmentation has important theoretical and practical implications. We started by documenting the varied yet predictable nature of many people’s basic philosophical and ethical values. The main philosophical conclusion drawn from the relevant studies was that some philosophical projects (e.g., Neo-Platonic projects) run a serious risk of being impossible to carry out reliably with current methods and tools. In the light of other research, it follows that it is unlikely that we will ever come to know with any great confidence the mind-independent truth about a variety of philosophical issues, including essential truths about freedom, moral rightness, intentional action, and some of the values we hold most dear.

Given this analysis, what we are left with is a variety of different concepts and values that all seem, at least on the face of it, acceptable and normal for people to have. In the presence of irreducible and justifiable diversity and fragmentation, what are we to do? In closing our book, we have focused on providing a theoretical and practical framework for translating these philosophical insights into ethical interaction policies that have the potential to affect many people. Our preference in this book has been to focus on the commonly accepted and valued concepts of beneficence and autonomy. Of course, given other theoretical commitments, we could have chosen different sets of concepts and values. But these concepts, in concert with the empirical evidence on the fragmentation of fundamental philosophical intuitions, uniquely inform some influential emerging debates with major policy implications and direct relevance to people’s daily lives (e.g., when and why might Libertarian Paternalism be the preferred method for helping people make decisions that promote health, wealth, and happiness?).

In this final chapter, we took what we think is an important step toward providing a well-grounded, simple, and sound approach to ethical interaction theory and interactive policy analysis. The common and overarching goal is obvious: use systems and science to ethically and efficiently help people make better decisions—decisions that allow them to get more of what they (should and often do) value, including health, wealth, safety, and happiness. So that the intricate philosophical integration is not lost to esoteric ends, we deliberately worked to give structure to a prototype framework for heuristic evaluations of decision policies, following best practices in the systems design and human factors engineering fields. Ultimately, we are fairly comfortable with the notion that we have provided a rough but useful first draft of practical and easier-to-use tools for interactive policy analysis (e.g., for evaluating decision policies ranging from paternalistic policies like nudges to representative education policies such as decision aids; see Appendix). However, we once again want to emphasize that we do not know (or think that we can know) all of the values that people should or do have. Thankfully, we do not need to know them all in order to justify our approach, because there is substantial agreement that autonomy is at least one essential and enduring human value, and for good reason. Consequently, provided that people have a representative understanding of a decision, helping people make their own decisions is likely to increase overall welfare in complex and surprising ways. These benefits result not only because autonomy provides opportunities to obtain those values, but also because it provides an essential route to individual self-efficacy and personal resiliency.
Helping people become more autonomous has bountiful, measurable, and enduring well-being and welfare benefits (Deci & Ryan, 1995; Deci & Ryan, 2000; Devine, Camfield, & Gough, 2008; Ryan & Deci, 2017; Ryff, 1995; Sandman & Munthe, 2010), often giving rise to a deep sense of personal meaning and satisfaction (Peterson & Seligman, 2004; Seligman & Csikszentmihalyi, 2000).

Beyond these many more and less tangible benefits, ethical interaction theory provides an ethically defensible, evidence-based framework for the sustainable development of science for informed decision making—i.e., inclusive science designed to efficiently and ethically enhance skilled and autonomous decision making. After all, there is no question that providing domain-general education and decision making training can and often does help people make better decisions across a very wide range of high-stakes choices (Cokely et al., 2018; Garcia-Retamero et al., 2017). And while these interactions may often be about helping people obtain self-directed values (e.g., their own happiness, wealth, health), almost all humans also have other-directed values, including values about entities and issues they may never come in direct contact with (e.g., animals, organizations, future generations, biodiversity). Hence, at least on average, it is a very good bet that helping people effectively realize and express their values should also make society better off, independent of the specific benefits that accrue for the decision maker. In these ways, ethical interactive systems promote autonomous and efficient beneficence for individuals and societies more generally.

Given our analysis, a robust, theoretically and empirically sound framework is now taking shape. More than many other decision science-based approaches, this framework provides for the protection of diverse values, choices, and individuals. On this view, decision scientists and policy makers have a duty to defend our freedom to disagree and to decide for ourselves. Our charge now is to make the most of these new resources and related insights from Ethical Interaction Theory.