Introduction

A growing body of research investigates how people process information and form opinions, and how these reasoning processes can have considerable consequences for risk governance. Specifically, scholarship examines the ways in which individuals’ information processing deviates from a normative model of learning in which accuracy is the only goal. The concept of ‘motivated reasoning’, developed by political psychologists and political scientists, describes and explains these deviations. Theories of motivated reasoning seek to understand how reasoning works when accuracy is not the primary or sole goal directing individuals’ reasoning. Empirical studies indicate that individuals collect, process, and interpret information in a goal-driven fashion, which enables them to arrive at conclusions that are useful to them in some way—notably because they align with their prior beliefs, worldviews, or the positions of social groups they belong to. Because accuracy is not the sole goal, such reasoning is often perceived by others as ‘wrong’ or as insufficiently grounded in evidence.

Gaining a better understanding of the sources, mechanisms, and implications of motivated reasoning can help scholars and practitioners of risk governance to anticipate, understand, and address differences in people’s risk perceptions, as well as differences in their level of trust in scientific evidence about risk. In practice, differences in perception and processing of risk information can lead to conflicts—even over the factual evidence itself. Prominent examples of risk issues at the centre of long-standing public disputes over the underlying evidence include climate change (e.g., Druckman and McGrath 2019), vaccinations (e.g., Kahan et al. 2010), and new technologies (e.g., Druckman and Bolsen 2011). Motivated reasoning research may help risk practitioners to see that apparent conflicts over ‘the evidence’ related to risk may be rooted in differences in people’s values, identities, and prior beliefs. Importantly, motivated reasoning research also suggests that such conflicts are not easily overcome by simply presenting more evidence to people.

Despite the growing scholarship on motivated reasoning, fundamental conceptual challenges remain. This chapter provides an analysis and discussion of this body of work to increase awareness among risk practitioners and scholars of its insights and contributions. The chapter begins with a review of the main contributions to the literature in psychology, political science, and communication studies. The review finds that prominent theorists in the field use the term ‘motivated reasoning’ to explain patterns of behavior using different theoretical accounts—and they may even be describing different phenomena altogether.

In addition to identifying and exploring these discrepancies, the chapter also focuses on the normative evaluations inherent in particular uses of the concept of motivated reasoning. These judgments typically include ideas about what it means for individuals to reason in a ‘rational’ manner and for society to govern risks ‘rationally’. We find the use of ‘rationality’ problematic in assessments of motivated reasoning in the context of risk decision-making, in part because some theoretical accounts suggest that motivated reasoning can be perfectly rational. Given the historical abuses of ‘rationality’ to dismiss the beliefs of marginalized groups (e.g., women, Indigenous peoples, people of color, or disabled people), we urge caution in assessments of the rationality of the beliefs of marginalized groups and of motivated reasoning more generally. While some kinds of motivated reasoning are clearly irrational, others are not.

The chapter proceeds as follows. Section one, “The Theory of Motivated Reasoning”, presents basic motivated reasoning concepts and identifies key theoretical models. Section two, “Where Theoretical Models of Motivated Reasoning Diverge”, examines the conceptual differences between these models, while the third section, “Is Directional Motivated Reasoning a Problem?”, explores the normative implications of motivated reasoning. Section four, “Where to from Here? Theoretical Implications, Empirical Implications, Practical Implications”, discusses implications for theoretical and empirical research on motivated reasoning, along with implications for practice and policy. The final section offers conclusions.

The Theory of Motivated Reasoning

Basic Concepts

Generally, motivated reasoning is understood as a psychological description of how people process information and form/update their beliefs and/or attitudes about objects/events/issues. Motivated reasoning generally refers to how people’s goals or motivations affect their reasoning and judgments (Kunda 1990). When people pursue accuracy as their sole goal, they strive to reach a correct conclusion; when their goals are directional, they “unconsciously conform assessment of factual information to some goal collateral to assessing its truth” (Kahan 2016a, 2, emphasis in original). People may pursue both accuracy and directional goals to different degrees simultaneously (Kunda 1990), with directional goals, whether conscious or unconscious, exhibited as biases in people’s search for, interpretation of, and evaluation of information. In the empirical literature on motivated reasoning, correlations between people’s worldviews, goals, or values and their reasoning outcomes are often identified and examined in experimental designs, where study participants holding particular prior beliefs or values are presented with information and asked to assess it in some way (see for example, Lord et al. 1979; Redlawsk 2002; Taber and Lodge 2006).

Some of the key shared conceptual components of the literature addressing motivated reasoning include:

Motivation. The literature often draws on the definition of motivation by Fishbach and Ferguson (2007) as “cognitive representation of a desired endpoint that impacts evaluations, emotions and behaviors” (491). The terms ‘motivation’ and ‘goal’ are commonly used interchangeably.

Reasoning. Reasoning is commonly understood to incorporate multiple cognitive processes, including information collection, processing, and evaluation; memory retrieval; attitude formation; judgment and decision-making (Leeper and Mullinix 2018). This chapter focuses in particular on how motivated reasoning affects people’s reasoning about risk and their processing of risk information.

System 1 vs. System 2 thinking. Multiple theoretical accounts of motivated reasoning draw on this distinction. In the framework developed by Kahneman (2011), System 1 cognition is immediate and intuitive, while System 2 is deliberate and slow. Traditionally, biases in judgment are attributed to affect-driven System 1 reasoning. However, as we will see below, some argue that it is System 2 cognition that is centrally deployed in motivated reasoning.

Bayesian updating/learning. Motivated reasoning processes are often contrasted with truth-seeking Bayesian learning (Gerber and Green 1999; Redlawsk 2002). According to this model, individuals hold initial estimates of the probability that a hypothesis is true (the prior). The prior is updated when people receive new, relevant evidence. Importantly, a normative accuracy-seeking Bayesian model prescribes that people collect, assess, and adopt new evidence independently of their prior. As a consequence, people with opposite prior views should converge in their opinions when exposed to the same information. Motivated reasoning deviates from this Bayesian ideal because the process of updating is influenced by directional goals (Druckman and McGrath 2019). This means that the uptake of new evidence becomes dependent on prior beliefs. As discussed below, exposure to the same evidence may then push people with different priors in opposite directions, increasing division or polarization (a minimal numerical sketch of this contrast follows this list).

Bias. The term ‘bias’ is ubiquitous in the literature, with motivated reasoning commonly understood as leading to ‘bias’ in judgment and decision-making. Biased reasoning was first defined as a systematic and measurable deviation from the (known) correct answer (e.g., Tversky and Kahneman 1974). In this early work, biases were not correlated with motivations (ibid., p. 1130). The conception of bias has since expanded to include any correlation between a person’s beliefs and motivations other than accuracy goals, even if the true, correct answer remains unknown. Interestingly, none of the theoretical accounts reviewed here provides an explicit definition of bias.
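To make the contrast between accuracy-seeking and directional updating concrete, the following sketch is our own illustration, not drawn from any of the cited studies. Two agents with opposite priors observe the same evidence; under shared likelihoods they converge, while under a crude directional rule (the skeptic discounts evidence that contradicts its prior) the same evidence leaves them far apart.

```python
# A minimal numerical sketch (illustrative only): two agents with opposite priors
# on a hypothesis H observe the same stream of pro-H evidence. Under
# accuracy-seeking Bayesian updating they converge; under a crude directional
# rule, in which the skeptic treats pro-H evidence as nearly uninformative,
# the same evidence leaves the initial disagreement largely in place.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) computed from the prior P(H) and the two likelihoods."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

def directional_update(prior, p_e_given_h, p_e_given_not_h):
    """Stylized directional goal: evidence against the prior is heavily discounted."""
    if prior < 0.5:  # the skeptic barely lets pro-H evidence count
        p_e_given_h = p_e_given_not_h * 1.05
    return bayes_update(prior, p_e_given_h, p_e_given_not_h)

P_E_GIVEN_H, P_E_GIVEN_NOT_H = 0.9, 0.3   # how diagnostic each piece of evidence is

believer, skeptic = 0.8, 0.2              # opposite priors on H
for _ in range(3):                        # three identical pieces of pro-H evidence
    believer = bayes_update(believer, P_E_GIVEN_H, P_E_GIVEN_NOT_H)
    skeptic = bayes_update(skeptic, P_E_GIVEN_H, P_E_GIVEN_NOT_H)
print("accuracy-seeking:", round(believer, 2), round(skeptic, 2))   # ~0.99 vs ~0.87: converging

believer, skeptic = 0.8, 0.2
for _ in range(3):
    believer = directional_update(believer, P_E_GIVEN_H, P_E_GIVEN_NOT_H)
    skeptic = directional_update(skeptic, P_E_GIVEN_H, P_E_GIVEN_NOT_H)
print("directional:", round(believer, 2), round(skeptic, 2))        # ~0.99 vs ~0.22: gap persists
```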

Key Theoretical Models of Motivated Reasoning

First developed by psychologists in the second half of the twentieth century, the concept of motivated reasoning was later picked up by political scientists and communication scholars. Multiple theoretical models of motivated reasoning exist, but a small number of models are the backdrop for numerous research studies. The models do not agree on central theoretical components, but each contributes insights that help to understand the implications of motivated reasoning for risk scholarship and practice.

The model of ‘biased assimilation’. In their pivotal study on people’s views of the death penalty, Lord et al. (1979) find evidence of what they term ‘biased assimilation’. Their results show that people holding strong opinions about the death penalty evaluate and interpret new, ambiguous evidence on the topic in the light of their prior views. Study participants—both supporters and opponents of the death penalty—systematically considered evidence in line with their previous viewpoint as more convincing than incongruent evidence. In fact, the presentation of new evidence made both supporters and opponents become more attached to their initial positions, amplifying divisions between the two groups. Importantly, the model of biased assimilation explains information processing in cognitivist terms: people seek “consistency of […] evidence with the perceiver’s theories and expectations” (ibid., 2099), and those theories and expectations shape their “judgments about the validity, reliability, relevance, and sometimes even the meaning of proffered evidence” (ibid.).

The model of motivated skepticism. Taber and Lodge (2006) argue that people’s prior attitudes and beliefs about a contentious issue influence how they select and evaluate new information about it. In particular, the authors identify a ‘confirmation bias’ (seeking out evidence that supports prior attitudes), a ‘disconfirmation bias’ (discounting non-supportive arguments), and a ‘prior attitude effect’ (considering arguments supporting prior attitudes to be stronger than those contradicting prior attitudes). The result is what the authors term ‘motivated skepticism’: exposure to balanced information about a contested issue did not lead to people’s opinions converging, but rather, led to further polarization and a strengthening of people’s prior attitudes.

The John Q. Public (JQP) model of motivated reasoning. This model defines motivated reasoning as an affect-driven, unconscious judgment process that involves post hoc justification and rationalization (Lodge and Taber 2013; Kraft et al. 2015). Affect is considered the key driver: feelings (positive or negative) arise immediately and spontaneously when people are confronted with new information (the ‘hot cognition’ hypothesis), and these initial feelings are seen to influence all subsequent processing and reasoning. Consciously overriding these spontaneous responses is not impossible, but it is rare and requires time and effort, so only a strong motivation (e.g., an accuracy goal) makes it worthwhile for individuals. However, people often engage in conscious deliberations to vindicate their spontaneous, unconscious judgments after the fact in order to justify their positions to themselves and others.

The Politically Motivated Reasoning Paradigm. When the goal in motivated reasoning is identity protection, Kahan (2016a) refers to this as politically motivated reasoning, which he defines as “the formation of beliefs that maintain a person’s status in an affinity group united by shared values” (ibid., 3). Kahan emphasizes that ‘identity’ can be defined in various ways and along various dimensions, including political affiliation, ideology, values (see below), religion, gender, ethnicity, etc. (Kahan 2016a). No matter the group characteristics, the underlying mechanism that directs information processing is the same: people interpret information in ways that signal their agreement with the position associated with their identity-giving social group.

Cultural Cognition Theory of Risk Perception. While the foregoing models concern human reasoning in general—and may be applied to reasoning about risk—this theory focuses on directionality in risk perception. Based on cultural theory and an individual’s ‘cultural worldview’ or value system (Douglas and Wildavsky 1982), cultural cognition posits that people who belong to different cultural groups (see Footnote 1) systematically differ in their perception of risk and risk information through both psychological and social processes (Kahan 2012). Specifically, individuals tend to believe that what they value is not a source of risk and vice versa.

Multiple mechanisms of cultural cognition of risk are identified in the literature (Kahan 2012). A key mechanism is, again, identity protection—here, more specifically, cultural identity protection. For example, research has shown that white males systematically perceive risks from environmental hazards to be lower than women and non-white males do (the ‘white male effect’) (Kahan et al. 2007).

Where Theoretical Models of Motivated Reasoning Diverge

The above models of motivated reasoning agree on the general idea that directional ‘motivated reasoning’ (however it is understood in the various accounts) introduces bias in people’s reasoning. However, the models differ in how they explain the source and extent of directionality.

What Is the Motivation in Motivated Reasoning?

We distinguish among three goals: (1) consistency with prior beliefs and attitudes, (2) identity commitments, and (3) value commitments.

Consistency with prior beliefs and attitudes. The model of ‘biased assimilation’ (Lord et al. 1979) and the model of ‘motivated skepticism’ (Taber and Lodge 2006) understand motivated reasoning to be directed mainly by people’s intrinsic goal to uphold and confirm previously held beliefs and attitudes. Specifically, these models argue that people are motivated to select and evaluate more positively new evidence that supports their previously held beliefs and attitudes. Switching off this kind of inertia takes time and effort.

Empirical studies indicate that people’s tendency to process and assess new information about an issue in light of their prior positions can lead to conflict and polarization over scientific evidence. These findings underscore that providing people with more risk information—the ‘knowledge deficit’ model of risk communication—may not promote shared perceptions of risk. In fact, the opposite may obtain: people may diverge further in their beliefs.

This tendency can be positively correlated with people’s level of knowledge: the study on motivated skepticism by Taber and Lodge (2006) revealed that more knowledgeable individuals were more likely to exhibit motivated skepticism in their information processing, in part because their initial attitudes and beliefs were comparatively stronger. Crucially, this finding suggests that risk experts may not be less, but rather more, likely than the general population to reason in a motivated fashion. Simply put, more knowledge does not necessarily produce reasoning focused solely on truth-seeking.

Identity protection. Kahan’s model of politically motivated reasoning focuses on one particular goal in people’s reasoning—identity protection. In this model, holding on to familiar beliefs despite being confronted with new, contradicting evidence is not a goal in and of itself. Rather, people’s goal when processing new evidence is to align their position with that of a relevant social group to maintain and express their membership in it.

Hence, when belief or disbelief in scientific facts about a risk issue becomes associated with ‘identity-defining affinity groups’ (Kahan 2016a), individuals are motivated to reason about information in ways that express their group identity. For example, DeFranza et al. (2020) studied how religiosity (i.e., feelings, thoughts, experiences, and behaviors associated with the sacred) affected adherence to shelter-in-place directives in response to COVID-19. Before a directive was issued, religiosity did not affect people’s decisions; once a directive was in place, however, higher religiosity resulted in lower adherence.

Value commitments. Cultural cognition theory identifies worldviews and values as key motivators of directionality in people’s reasoning about risk. Cultural cognition specifies that individuals seek consistency with their values when forming beliefs about risk, and aim for alignment in their risk perceptions with cultural groups bound by the same values. Hence, cultural cognition theory includes both the consistency objective and the goal of identity protection as drivers of directionality in human reasoning, but considers these goals through a value lens.

What Is the Role of Affect and ‘Hot Cognition’ in Motivated Reasoning?

Some of the models of motivated reasoning above suggest that the phenomenon is primarily a result of immediate, affect-driven judgment; others suggest that motivated reasoning is the outcome of a more deliberate cognitive process. In other words, models differ with regard to whether motivated reasoning is theorized to occur mostly in System 1 or System 2 thinking (see Footnote 2).

The JQP model of motivated reasoning and the model of motivated skepticism specifically emphasize the influence of affect and ‘hot cognition’ on the formation/updating of beliefs and attitudes in response to new information. These models situate motivated reasoning firmly in immediate, automatic System 1 thinking, where spontaneous, affect-driven processes drive information processing by triggering selective attention, exposure, and judgment processes. The unconscious, immediate ‘hot’ response to new information determines the direction and strength of subsequent information processing. While people generally “want to get it straight” (Lodge and Taber 2013, 152), they are unconsciously held hostage by their powerful, affective priors. According to such affect-focused explanations of motivated reasoning, conscious deliberations (System 2) in most instances merely serve to justify spontaneous, unconscious judgments (System 1) after the fact.

In contrast, Kahan’s model of politically motivated reasoning suggests that deliberate, slow System 2 thinking is required to successfully direct reasoning. For example, Kahan (2013; 2016b) argues that when individuals defeat challenging arguments to ensure their position remains loyal to their identity-giving group, it is a deliberate and often sophisticated intellectual act that requires System 2 thinking.

What Are the Limits of Motivated Reasoning?

The studies reviewed above seem to agree that while “all reasoning is motivated” (Taber and Lodge 2006), individuals do not typically engage in directional motivated reasoning in an extreme manner all the time. For example:

  • Accuracy motivations can put a limit on the influence of directional motivations (Kunda 1990; Kahan 2013).

  • People with weaker beliefs and attitudes about a certain issue are less likely to engage in motivated reasoning about it (Taber and Lodge 2006).

  • People generally have a desire to appear rational and objective to outside observers, and their need to justify their judgment puts constraints on the judgment’s outcome (Kunda 1990, 1999).

  • Only a limited number of risk issues carry so much social meaning that an individual’s position on the issue signals belonging to a certain social group (Kahan 2013).

It is not clear from the literature whether and how public authorities might intervene to address directional motivated reasoning on contentious societal issues to facilitate consensus building. Research on motivated reasoning is still fairly new, and so far the main focus has been on understanding the underlying mechanisms rather than on investigating how to address the issue. However, all accounts agree that whether driven by consistency goals, value commitments, or identity protection goals, directional motivated reasoning about a societal issue is not easily addressed by more or better evidence. Instead of converging around the evidence, people’s opposing positions may harden and diverge further. Models also agree that people with greater expertise about an issue may be particularly prone and better equipped to engage in directional motivated reasoning.

Still, some of the theoretical models above suggest some responses, including information campaigns (Kraft et al. 2015) and preventing positions on important policy issues from becoming associated with certain ideological groups (Kahan 2016a). Cultural cognition theory suggests that risks should be communicated in ways that affirm rather than threaten cultural worldviews to elicit greater receptiveness and trust in the information (Kahan 2012). In practice, this may include working with culturally diverse risk communicators who enjoy credibility in the target communities. Others argue that more intrusive measures should be taken to prevent motivated reasoning. In particular, Kahan (2013) argues that individuals’ incentive structures should be modified in ways that promote the pursuance of accuracy goals rather than directional goals to link their beliefs more firmly to the truth.

This emerging debate about how to address motivated reasoning assumes that it is indeed a problem requiring intervention. Is motivated reasoning a problem for risk decision-making for the individual and/or society? These normative questions are examined next.

Is Directional Motivated Reasoning a Problem?

All of the theoretical models of motivated reasoning above include more or less explicit evaluations of the benefits of motivated reasoning for individual decision-making. As outlined above, processing information in a way that enables people to uphold their prior beliefs and attitudes allows them to build on their previous experiences and knowledge (Lord et al. 1979; Taber and Lodge 2006). This can be efficient at the individual level because updating beliefs is a time and resource-intensive process. Similarly, engaging in reasoning that protects identity and value commitments affords people an immediate benefit from maintaining loyalty to identity-giving groups, in contrast with the longer term (and often more nebulous) benefit from holding a factually accurate position (Kahan 2013, 2016a).

Examining the impacts of motivated reasoning on risk perception and assessment becomes more controversial when considered from a societal perspective. While it is generally fair to assume that motivated reasoning about risks provides some benefit to individuals, others might judge the risk attitudes and beliefs that they arrive at as simply ‘wrong’ or harmful to those individuals or to society. Even if people benefit from motivated reasoning, one may argue that collective decision-making about risk can suffer as a consequence. Kahan (2013; 2016a) argues that these benefits to individuals may come at a cost to democratic society as a whole, since evidence-based decision-making about risks becomes increasingly difficult when new evidence has little impact on people’s beliefs.

Judging the effects of motivated reasoning from a societal perspective requires a normative criterion to define ‘good reasoning’ about risk. The literature often uses ‘rationality’ as the criterion for evaluation, treating it as automatically at odds with any correlation between a person’s values, identity, or prior positions and their stated beliefs. However, the models reviewed above draw implicitly on different understandings of ‘rationality’.

Serving self-interest. Kahan et al. (2012) argue that evidence of identity-protective motivated reasoning shows “how remarkably well-equipped ordinary individuals are to discern which stances towards scientific information secure their personal interests” (733). Rational belief formation is here construed as what is overall in one’s self-interest, which Kahan argues is mostly driven by the need to fit in with one’s community. As a result, for the individual, Kahan (2013) does not see identity-protective cognition “as a reasoning deficiency but as a reasoning adaptation suited to promoting the interest that individuals have in conveying their membership in and loyalty to affinity groups central to their personal wellbeing” (418).

Based on a similar understanding of ‘rationality’ as ‘alignment with self-interest’, Lord et al. (1979) argue that it is rational for individuals to assess new information as more plausible when it aligns with their previous knowledge and experiences: “Willingness to interpret new evidence in the light of past knowledge and experience is essential for any organism to make sense of, and respond adaptively to, its environment” (ibid., 2107). Giving more weight to one’s prior attitudes in the collection and processing of new information is therefore seen as generally efficient and sensible (Taber and Lodge 2006).

Publicly defensible. Kunda (1990) draws on this understanding of rationality when she writes that “The biasing role of goals is thus constrained by one’s ability to construct a justification for the desired outcome: People will come to believe what they want to believe only to the extent that reason permits” (483). The need for a justification that could pass muster under the scrutiny of others is one sense of rationality that seems to constrain directional motivated reasoning. The contrast with the first understanding of ‘rationality’ can be sharpened by noting that, in many circumstances, ‘fit with one’s peer community’ would not be accepted as a defensible public reason for holding a belief.

Truth-seeking. Finally, directional motivated reasoning is generally considered irreconcilable with traditional, Enlightenment-era ideas of rationality. Directional motivated reasoning by definition interferes with the dispassionate, accuracy-seeking decision-making idealized by the norms underlying the Bayesian model. The JQP model explicitly treats ‘hot cognition’ (System 1 thinking) as driving human judgment and therefore suggests that humans generally process information in an ‘irrational’ manner. While others argue that directional motivated reasoning strongly engages System 2 thinking (traditionally equated with this conception of ‘rational’ thinking), the general assumption that slow, deliberate thinking necessarily results in accuracy-seeking reasoning does not hold (Kahan 2016c). Importantly, from a risk governance perspective, this conception of ‘rationality’ is typically reflected at the societal level in calls to base policymaking and regulation on ‘objective’ scientific evidence (Sanderson 2006).

‘Rationality’ as a Contested Concept

What counts as rational decision-making is rarely defined explicitly by theorists of motivated reasoning. Their understandings are nonetheless implicit in their normative evaluations of motivated reasoning phenomena: they cast it as ‘irrational’ either because it leads to assessments and decisions that do not accord with ‘the facts’ or ‘truth’ (in line with the third sense of rationality above) or because the reasoning would not offer a publicly defensible justification for a belief (in line with the second sense above), or they cast it as ‘rational’ in the sense that it serves individual purposes other than accuracy (in line with the first sense above).

We contend, therefore, that a more explicit engagement with what counts as rational in decision-making in the first place is critical to advancing understanding of motivated reasoning phenomena. Specifically, we draw attention to the fact that rationality is a contested concept, as is clear in the different senses of rationality noted above.

An additional line of work important in this regard is that of Gigerenzer and Gaissmaier (2011). A common element in the third sense of rationality noted above is that the assessment of the rationality of individuals’ decisions relies on whether their reasoning processes followed particular logical or statistical norms (Gigerenzer and Gaissmaier 2011). That is, the assumption is that it is possible to assess the rationality of a decision purely on the basis of universally applied norms, independent of the particular context in which the decision is made or of the person making it. Todd and Gigerenzer (2012) argue that the assessment of decision-making cannot rely solely on adherence to logical or statistical procedures; it must also take into account the success of decisions in the ‘real’ world. The authors use the term ‘ecological rationality’ to capture this alternative standard. Further, as this chapter makes clear, individuals make decisions in the context of particular values, goals, and larger purposes, such that it is rarely possible to impose common ideals about optimal decision outcomes (e.g., maximizing health, optimizing financial outcomes) on people in general.

The use of ‘rational’ as a desired trait also has societal implications that underscore its contested nature. First, it privileges the views of certain social and demographic groups that have defined what it means to be ‘rational’, e.g., being accurate, objective, and unemotional. What counts as rational or irrational depends to a large extent, then, on historical, cultural, and political contingencies. Groups and individuals generate diverse narratives of what is considered rational, and which meaning prevails depends in part on the power of those putting forward a particular definition. For example, the ‘rationality discourse’ has historically been used to discredit and disempower groups, as when men justified their power over women by labeling them irrational (Wolbring 2008; Buechler 1990; Viola 1986), a tactic still used today (Wolbring 2019; Daily Star 2014). The concept of ‘irrationality’ is also used as a tool to discredit one’s opponents in policy or societal debates (see, for example, Wolbring and Diep [2016], Posusney [1993], Van Montagu [2013], Osborne [2014]). Rationality discourse can also be used to question a person’s self-perception or self-acceptance. For example, disabled people who perceive their body as a variation that does not need to be fixed—not an aberration—are often told their perspective is not rational because it does not reflect the dominant view (Harris 2001, 2000).

Second, the social nature of rationality can be seen when it is used as a standard for making risk decisions. For example, in the governance of emerging technologies, there is always some level of potential risk to consider, but a great deal of uncertainty about its nature, severity, distribution, and probability. In this context, values play a central role in characterizing and mitigating risk based on the evidence. In fact, it is impossible to base societal decisions on scientific information alone (e.g., Kuzma 2018). Yet regulatory decisions are portrayed as rational and ‘science-based’, masking the embedded values that are never made explicit. Those with power and authority have defined what is a rational interpretation of the scientific evidence based on their own values—and often behind closed doors (Meghani and Kuzma 2011). Those outside the process who hold alternative views are often pegged as irrational Luddites.

In contrast, the idea of ‘strong objectivity’ challenges the monopoly that powerful actors hold on rationality (Harding 1995, 332). Arising out of feminist standpoint theory, it argues that what we can know is enabled by where we come from socially. Only through the inclusion of diverse standpoints, particularly those from marginalized groups, can we maximize our knowledge and achieve strong objectivity. Strong objectivity redistributes power to groups that have not been at the helm of ‘evidence-based’ decision-making by defining a more socially robust form of rationality.

Crucially, in contrast to the models of rationality discussed in the previous section, strong objectivity assigns the phenomena that drive motivated reasoning, such as prior beliefs, values, and identities, a positive role in achieving accuracy (e.g., through Bayesian updating) (Druckman and McGrath 2019). This is a fruitful insight that deserves more discussion in the literature.

Where to from Here? Theoretical Implications, Empirical Implications, Practical Implications

Our analysis has a number of implications for theory development, empirical studies of motivated reasoning, and the practice of risk governance. The following sections summarize these implications.

Theoretical Implications

As shown above, the terminology around motivated reasoning is ambiguous. There are discrepancies in key concepts and models, which suggest that not only the theoretical accounts but also the phenomena they describe vary. We need more theoretical clarity and consistent terminology, tied to empirical practice, to analyze how individuals form beliefs and attitudes.

Further, the normative differences around ‘rationality’ discussed above were distilled from work within the social sciences literature on motivated reasoning. Additional normative issues arise if one views the issues through the lens of philosophy of science. Work over the past few decades has led philosophers to examine the rational and necessary role of social and ethical values in science, which holds important implications for research on motivated reasoning. There are at least two crucial places where social and ethical values play a legitimate role in scientific reasoning and practice.

The first is in the direction of scientific research effort: deciding what is important to study and how research problems are framed. Public skepticism about scientific claims can arise because some segments of the public view scientific efforts as inappropriately contextualized or directed. For example, if scientists are incentivized to pursue patentable technology solutions to problems of food production but some people are more interested in changing agricultural practices (e.g., shifts to organic farming), those people can view scientific research as fundamentally misdirected and thus the results of such scientific work as inadequate for addressing policy issues. Similar concerns have been raised regarding research on the safety of vaccines (Goldenberg 2016).

The second role for values is in the assessment of evidential sufficiency in science. Science is an inherently inductive investigative process and the evidence underpinning scientific claims is never complete. When, then, is the evidence strong enough? Examinations of inductive risk reasoning in science (Douglas 2000; Elliott and Richards 2017) have shown the pervasive need to embed ethical and social values in this judgment. This means members of the public holding different values than scientists might disagree with scientific assessment of evidential sufficiency for value-based reasons—and do so rationally (Douglas 2017).

On the other hand, not all kinds of reasoning can be considered rationally acceptable (in the sense of publicly justifiable). For example, if many segments of the public consider evidence important, there should come a point when the evidence is strong enough for all. If no evidence could convince people, then they would have adopted an unfalsifiable position, which would be irrationally intransigent (as Taber and Lodge [2006] note). This insight can be stated using a Bayesian framework: it is not just priors that diverge among actors, but also likelihood ratios. This can explain why different kinds or levels of evidence might be needed by different actors.
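The point can be made explicit with the standard odds form of Bayes’ rule (a textbook formulation, not specific to any of the models discussed here): the posterior odds on a hypothesis H after observing evidence E equal the likelihood ratio multiplied by the prior odds,

\[
\frac{P(H \mid E)}{P(\neg H \mid E)} \;=\; \frac{P(E \mid H)}{P(E \mid \neg H)} \times \frac{P(H)}{P(\neg H)}.
\]

Two actors who receive the same evidence E but assign it different likelihood ratios, for instance because they judge the evidence to be differently diagnostic, can therefore arrive at different posteriors without either of them violating Bayes’ theorem. Divergence over how diagnostic the evidence is, and not only over priors, helps explain why different actors may require different kinds or amounts of evidence.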

Work by social scientists finding correlations between value-inflected motivations and beliefs or attitudes—including work on cultural cognition theory—tends not to differentiate between rational and irrational influences of values on the assessment of scientific claims. Future work could be geared to do so.

Empirical Implications

Given these theoretical implications, researchers must be more precise about the specific domains or constructs they aim to measure empirically. Take, for example, experimental research that relies on framing effects to evaluate differences in how people process information. Cacciatore et al. (2016) examine how the presentation of information affects people’s opinion formation. Equivalency framing, drawing largely on the psychological literature, manipulates the presentation of otherwise equivalent information to test whether it affects how an individual processes information that is (in)congruent with their beliefs (Druckman 2001). This makes the approach well suited to models of motivated reasoning that seek to assess consistency with previously held values or beliefs (e.g., Lord et al. 1979 or Taber and Lodge’s model of motivated skepticism); a hypothetical sketch of such a design follows below. Kahan et al. (2011) found that individuals were more likely to support scientific information congruent with their culturally predisposed position. Equivalency framing studies are most effective when the scientific evidence concerning an issue is fairly well established and researchers are seeking to assess which communication strategies may be more effective for a given scenario (Pedersen 2017; Cacciatore et al. 2016).
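The sketch below is a hypothetical equivalency-framing analysis, not a reconstruction of any of the cited studies: the same risk statistic is framed as ‘90% safe’ versus ‘10% harmful’, crossed with participants’ prior support for the technology, and the frame-by-prior interaction is the pattern a motivated reasoning account would predict. The variable names and toy data are our own assumptions.

```python
# Hypothetical equivalency-framing experiment (toy data, illustrative only).
# The same statistic is framed as "90% safe" vs. "10% harmful"; motivated
# reasoning predicts that responses to these logically equivalent frames
# depend on participants' prior attitudes (a frame x prior interaction).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "rating": [6, 7, 5, 6, 3, 2, 4, 3],                  # perceived safety, 1-7 scale (made up)
    "frame": ["safe", "safe", "harm", "harm"] * 2,        # which equivalency frame was shown
    "prior_support": ["high"] * 4 + ["low"] * 4,          # prior attitude toward the technology
})

# OLS with a frame x prior interaction term; with real data one would also
# model covariates and use an adequately powered sample.
model = smf.ols("rating ~ C(frame) * C(prior_support)", data=df).fit()
print(model.summary())
```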

By contrast, emphasis framing, drawing from sociology, examines how presenting specific aspects of an issue unconsciously affects how information is processed. The focus may be on manipulating what is received or made salient to different actors, as opposed to ensuring that equivalent content is presented (Cacciatore et al. 2016). Emphasis framing may align with the John Q. Public (JQP) model of political information processing, based on the assumption that unconscious thoughts predict the direction of subsequent reasoning despite conscious deliberation (Taber and Lodge 2016). Emphasis framing may also be a useful strategy in seeking to understand the evolution of a new or emerging risk situation. For example, Driedger et al. (2018) used qualitative thematic analysis to examine how different sets of actors were represented in Canadian news media and on Facebook regarding a controversial hypothesis about a ‘promising’ new therapy for people suffering from multiple sclerosis. While the need for ‘appropriate’ and ‘standard measures’ in following sound science was strongly promoted by scientists and government policy actors, other voices in the debate—patients, advocacy groups, and scientific experts with competing knowledge claims—used oppositional collective action frames to challenge the traditional scientific discourse. By creating a social and political maelstrom, people with multiple sclerosis were able to persuade governments and researchers to respond differently, culminating in the funding of a national clinical trial into a controversial hypothesis that defied all standards of evidentiary support. This type of oppositional collective action might be considered rational skepticism or irrational bias, depending upon the perspective. Nevertheless, while studies of motivated reasoning have used similar techniques to illustrate the presence of bias, more nuanced research approaches may be required to fully understand the causal relationship between stimuli and value-infused motivations, and to assess the public justifiability of different views.

Further, it may be possible to explore the boundary between rational skepticism and irrational bias by using affective computing and sentiment analysis. Previous studies have used natural language processing techniques to analyze transcripts from interviews with the general public and experts. The research found that people responded positively to information embedded in scientific narrative structures regardless of their stance on the issue (i.e., for or against) (Shanahan et al. 2019). This example may more closely align with emphasis framing. By contrast, it may also be important to examine different types of discourse to understand when individuals respond differently to the same types of information, much as in equivalency framing research. One study found that those with different political beliefs often respond to the same types of information positively or negatively in relation to ideology, not facts (Balasubramanyan et al. 2012). Nonetheless, such natural language processing techniques may help differentiate between rational skepticism as a response to uncertainty and irrational bias; a minimal sentiment analysis sketch follows below.
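As a purely illustrative example of the kind of tooling involved, the sketch below scores the affective tone of short interview-style excerpts with NLTK’s VADER sentiment analyzer. The excerpts are invented; the cited studies used their own corpora and methods, and real projects would work with full transcripts and validated coding schemes.

```python
# Illustrative sentiment scoring of invented interview-style excerpts using
# NLTK's VADER analyzer; real studies would use full transcripts and more
# elaborate affective computing pipelines.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # one-time lexicon download

excerpts = {
    "supportive": "The trial results are encouraging and the therapy deserves public funding.",
    "skeptical": "The evidence so far is weak, and the risks are being downplayed.",
}

analyzer = SentimentIntensityAnalyzer()
for label, text in excerpts.items():
    scores = analyzer.polarity_scores(text)   # returns neg/neu/pos and a compound score in [-1, 1]
    print(f"{label}: compound sentiment = {scores['compound']:+.2f}")
```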

Although this classification is imperfect, equivalency framing is likely more easily assessed with quantitative research and emphasis framing with qualitative studies. That said, looking at motivated reasoning in qualitative research or using non-experimental designs would help researchers to identify and explain motivational biases (Maxwell 2004). Regardless of approach, it is important for researchers to be clear in how they define or use the term ‘motivated reasoning’, since, as discussed in this chapter, important conceptual differences among models exist. It is also important for them to be explicit about how they define rationality, along with the role and place of values in their research and assessments.

Practical Implications

We identify four key takeaways from this discussion for risk practitioners. First, policymakers and regulators working on risk governance need a better understanding of motivated reasoning and how it affects risk perception. Importantly, research shows that motivated reasoning is a human phenomenon—citizens, public authorities, and scientific experts are not exempt from it. In fact, greater expertise on an issue can make individuals more sophisticated in their capacity to reason in a motivated fashion.

Second, the fact that motivated reasoning is inevitably part of any risk decision-making process does not necessarily make decision outcomes flawed or irrational. Rather, the above discussion of rationality deliberately challenges the idea that ‘rational’, accuracy-oriented, and value-free decision-making processes are superior. Instead, bringing people’s values, prior beliefs, and identities into public decision-making about risks is crucial to developing and implementing effective solutions and to pursuing democratic legitimacy. Rather than chasing an unattainable and ultimately undesirable ideal of purely ‘rational’ risk governance, policymakers and regulators with greater awareness and understanding of motivated reasoning (however defined) will be better able to detect the directional goals, values, and identities that shape people’s beliefs and attitudes toward risk issues, and to address them in the decision-making process rather than automatically writing them off as irrational and irrelevant distractions.

Third, this discussion also hints at recommendations on what not to do in response to motivated reasoning. For instance, simply providing more scientific evidence on a risk issue is not likely to ‘cure’ people’s motivated reasoning by bringing their opinion more in line with science. In fact, research indicates that this strategy may backfire. People may reject messages at odds with their goals and move in the opposite direction of the message (Zhou 2016).

Finally, and perhaps most importantly, since research reveals that everyone engages in motivated reasoning, including experts and scientists, the existence of the phenomenon should not be used as an argument against efforts to democratize risk governance. Using it this way, under the guise of ‘rationality’, implicitly decides whose values and objectives matter in risk governance and whose do not, potentially reinforcing the exclusion of marginalized groups.

Conclusion

Research indicates that motivated reasoning is ubiquitous in human thinking and decision-making. But as shown in this chapter, there remain large gaps in our understanding of the phenomenon. We need more clarity around theoretical concepts and models of motivated reasoning, as well as better approaches to studying its effects. Perhaps most importantly, we need to better integrate what we already know about human reasoning into risk governance practice. The normative (if implicit) connotations in research about motivated reasoning should be made transparent and critically discussed. Perceiving motivated reasoning as necessarily harmful to effective risk governance, and striving for ‘rationality’ in decision-making about risk, ignores the fact that values, identity, and other non-accuracy goals will always influence human beliefs and attitudes, sometimes in ways that are perfectly rational. Neither experts nor public authorities are immune to directional motivated reasoning. Instead, inclusive and transparent processes that explicitly acknowledge the presence of values and motivations in all people’s risk perceptions, assessments, and preferences about risk management open the door to effective and legitimate risk governance.