
5.1 Introduction

When President Trump indicated he might not accept the results of the 2016 election, the protests were loud and numerous. The USA prides itself on peaceful transitions of power, and, for some, Trump’s reluctance to say he would accept the election results if he lost was taken as a lack of loyalty and commitment to democracy. Real Americans, one could argue, accept the outcomes of our democratic processes, whether we like them or not, especially when it comes to transitions of power. In his defense, Trump claimed that the processes might not be democratic at all, but instead “rigged.” But perhaps most interestingly, when Trump won the election, the tables turned. After the election, Trump appeared less concerned about the possibility that the wrong processes had been used, and others, likely including some others who had accused Trump of disrespecting our democracy, were now more concerned that maybe the election was rigged after all.

It seems clear that people’s assessments of decision-making processes and outcomes are intertwined, with such assessments likely having bidirectional influences upon each other. The dual influences of perceived processes and outcomes have long been recognized in the literatures examining distributive justice, which focuses on the fairness of outcomes, and procedural justice, which focuses on the fairness of the procedures leading to those outcomes (Cropanzano & Folger, 1991; Lind & Tyler, 1988). Trust likely also plays a role (Brockner, 1996). Those most likely to question Trump’s commitment to democracy appeared to be those who also trusted him the least. Likewise, Trump’s own portrayal of the reasons why he might not accept the election results questioned the trustworthiness of the processes (are they fair or rigged?), the media (are they honest and competent or corrupt and incompetent?), and the Democratic and even Republican parties (do they care about the American public or only about their own interests?).

In a democracy such as the USA, processes are put into place with the intention and hope that the outcome will be accepted, even if (or especially if) not preferred, on the basis that appropriate processes were used. Acceptance might also depend on perceptions that the involved parties behaved trustworthily during the processes (i.e., in competent, caring, honest ways) (Mayer, Davis, & Schoorman, 1995) and can be trusted to follow and respect any implied outcomes of such processes. The outcomes are not always “accepted” entirely, but one hope that underlies our democracy is that the outcomes will be at least sufficiently accepted for a time—at least until the time that they are overturned, again, via the appropriate processes, as opposed to via violence.

The key words here are “acceptance” and “appropriate processes.” Instead of having wars to decide who has power and who gets to decide what rules are in place, democracies have set up other processes by which to decide such things. Ideally, the best ideas will win, and people who disagree with decisions made through the established processes will nonetheless accept—or at least tolerate and abide by—such decisions, at least for a time, that is, until the decision is changed by additional rounds of appropriate processes.

But what are appropriate processes? More central to the purposes of our studies, are there certain public engagement processes that can be used during decisions about policies, in order to increase public acceptance of decisions and the policies arising from such processes? Are there processes that might even reduce the impact of one’s policy preferences on policy acceptance, so that those who do not receive their preferred outcome are nonetheless willing to accept the outcome? Meanwhile, do any public engagement processes actually make things worse? Are there processes that decrease the likelihood that people will accept policies that are not preferred? And in either case, what accounts for the linkages between various public engagement processes and policy acceptance?

In this chapter, we examine variables that predict acceptance or support of nanotechnology policies across one or more of our four studies. In so doing, we also examine whether our experimental manipulations impact some theorized mediators and moderators, such as process fairness, conscientious engagement, and trustworthiness perceptions. Furthermore, given the importance of accepting policies even when they are not preferred, we explore factors that might alter the relationship between “policy preferences” and “policy acceptance,” by increasing or decreasing their dependence upon each other. As we give an overview of our results, we point to some findings that seem predictable and understandable. Other findings, however, may leave the reader (as well as us, the writers) uncomfortably searching for explanations. We hope this spurs more research and not just frustration. But before we get to that, let us first provide some theoretical context.

5.2 A Rough Draft Theory of Policy Preference, Acceptance, and Support

5.2.1 For What? Definitions and Relationships Between Some Key Variables

The outcome of greatest interest to this chapter is policy acceptance and support. While it may not always be important to make such fine-grained distinctions, because of our interest in whether certain public engagement processes might increase the acceptance or support of policies even when they are not preferred, we need to distinguish between such constructs. Currently, there is a noted lack of existing theory distinguishing and connecting policy preferences, acceptance, and support (Dreyer, Polis, & Jenkins, 2017). We draw from the little that we could find (e.g., see also Rau, Schweizer-Ries, & Hildebrandt, 2012), as well as from literatures that we perceive might be related (i.e., the literatures relating to legitimacy and procedural fairness), to develop our theoretical approach.

Our view of the relationships between constructs is illustrated in Fig. 5.1, which shows policy preference ranging from low to high along the horizontal dimension. We define policy preference as an evaluative attitude of preferring, liking, agreeing with, and being “for” a given policy. Our construct of preference is similar to constructs suggested by Schade and Schlag (2003) and by Dreyer and colleagues (Dreyer & Walker, 2013). Notably, these authors used terms like acceptance and acceptability for their similar constructs—whereas we argue that “preference” is a better term and that acceptance should be viewed as distinct from preference.

Fig. 5.1 An illustration of the relationships conceptualized among policy preference attitudes (horizontal light-colored yellow line), acceptance/nonacceptance (double vertical blue border and zones to the right and left of each), and support/resistance (subzones within the red nonacceptance and green acceptance zones, which begin at the point of the single vertical borders and extend to the extremes)

We use the term policy acceptance to refer to judgments and evaluations about the policy being in place. As we asked our participants in Studies 2–4, do you agree that the government made the right decision [by adopting the policy]? Acceptance is expected to be strongly affected by policy preferences but also affected by other factors such as processes used to make the policy. One could, for example, express acceptance of a policy (or some political outcome, like a presidency, to return to our earlier example) that one does not prefer. This could happen perhaps due to an assessment that the policy decision (or decision about who is in power) was arrived at fairly.

Although it is less important to our current analyses to distinguish between policy acceptance and support, we do so because the distinction is important theoretically and in case it later helps to make sense of some of our more puzzling results. Prior authors have distinguished policy support as reflecting behaviors, while constructs like policy preference or acceptance reflect attitudes (Batel, Devine-Wright, & Tangeland, 2013; Dreyer, Teisl, & McCoy, 2015). Policy support (and its opposite, policy resistance) is viewed as a behavioral expression of attitudes toward the policy. Therefore, models of policy acceptance and support generally view policy acceptance and nonacceptance attitudes as prerequisite to policy support and resistance behaviors.

Regarding the relationships between these constructs, as shown in Fig. 5.1, we theorize that policy acceptance, nonacceptance, and policy support and resistance usually occur in different zones of the policy preference dimension. Acceptance and support are expected to be more likely at higher levels of policy preference. Nonacceptance and resistance are expected to occur at lower levels of policy preference. And like Dreyer and colleagues, we theorize that the boundary between acceptance and nonacceptance will occur at more moderate levels of policy preference (i.e., closer to neutral) than active support and resistance.

5.2.2 What Works and How? Prior Research and Theory Concerning Factors Impacting Policy Acceptance and Support

In the past, and usually without carefully distinguishing between the constructs we just reviewed, a number of factors have been found to impact policy preference, acceptance, and support across a range of policy types. Although we do not test all of these relationships in the analyses presented in this chapter, the small black arrows shown in Fig. 5.1 reflect our current educated guesses as to examples of variables that may act more directly on each construct as defined here. For example, perceived fairness and effectiveness of a policy are commonly found to predict attitudes toward and willingness to support a policy (Drews & van den Bergh, 2016; Dreyer & Walker, 2013). Because these perceptions are appraisals directly relevant to the policies themselves, they likely correlate most strongly with, and most directly affect, preferences for those policies.

As another example, Dreyer et al. (2015) note that whether or not people behave in a supportive way will be influenced by how much effort the behavior requires. They argue it takes less effort to form an attitude than to act upon it and that policy acceptance is usually necessary for policy support (Dreyer et al., 2017). Consistent with their thinking about effort, recent political mobilization efforts use technology to reduce the effort required to engage in support/resistance behaviors (e.g., sending a letter to one’s representative) by creating pre-written letters and forms, already addressed to the appropriate representative, that can be sent in less than a minute with just a few clicks on a web-based form.

Especially relevant to our studies, deliberative and participatory engagement has been hypothesized to increase policy acceptance via, potentially, two specific mechanisms. First, the activities might inform people (as was supported by the knowledge increases noted in Chap. 3), and new information might change people’s attitudes (as was supported by attitude changes at the individual level, reported in Chap. 4). Resulting attitudes toward topics, in turn, tend to correlate with people’s policy preferences about those topics in predictable ways. In our data, for example, the simple correlations between the extent to which participants perceived the benefits of nanotechnology to outweigh its risks and the extent to which they preferred the policy that sped up nanotechnology development (or did not prefer the policy that slowed it down) ranged from about 0.45 to 0.55. If people’s attitudes are changed in such a way that they come to an attitudinal consensus, then most people may support the eventually chosen policy (Drews & van den Bergh, 2016). Unfortunately, this last linkage is not found in our data; recall that our results in Chap. 4 did not reveal that any of our methods produced a great deal of consensus that might, in turn, result in high policy acceptance in one experimental condition or another.

A second mechanism is that effective participatory engagement might increase positive perceptions of how the policy decisions were made. These positive perceptions might increase the perceived legitimacy of the policies themselves, which are then viewed as more acceptable. Thus, the boundary between acceptance and nonacceptance (in Fig. 5.1 this is represented by the vertical double bars, in between which is policy tolerance) might not occur at the neutral point of one’s preferences but instead at a higher or lower point, or might even shift along the preference dimension due to other factors. Consistent with this thinking, Wallner (2008) describes the role of policy legitimacy in policy failure. Wallner argues that, beyond policy characteristics (such as ineffectiveness, inefficiency, and poor performance), policies can fail because the public or affected stakeholders do not find them legitimate.

In addition to having a main effect on policy acceptance, positive process perceptions might also operate through another mechanism: by reducing the effect of other variables that otherwise would reduce acceptance or support—such as negative policy preferences. Evidence for such interactions (e.g., process perception × policy preference interaction effects upon policy acceptance/support) comes from research on procedural and distributive justice (e.g., Tyler & Huo, 2002). To our knowledge, prior research has not examined such interactions in the context of studies of policy acceptance. However, quite a lot of research related to procedural and distributive justice seems relevant and may generalize. Such studies examine when and why, for example, people are willing to accept outcomes that they do not favor, such as losing in an arbitration or receiving less-than-adequate severance pay in an employment layoff. Brockner (1996) provided a list of 20 studies finding that the relationship between outcome preferences and expressions of acceptance (which included things like expressions of satisfaction and organizational commitment) was moderated (i.e., reduced) by decision-making and implementation procedures that were perceived as fair, respectful, transparent, inclusive, and so on. Brockner (1996) argues that the reason such procedural justice factors reduce the influence of preferences on acceptance is that they produce trust in the persons making the decisions. In the context of our study, we might see this relationship if our measures of procedural perceptions predicted trust in policymakers, which in turn moderated the relationship between policy preference and acceptance.

On the negative side, however, some have hypothesized that public engagement might reduce policy acceptance rather than increase it. The arguments for the potential negative impacts of public engagement on acceptance and support are somewhat similar to those discussed in Chap. 4. If, during the engagements, people bolster their belief in and the evidence for their opinions, the effect may be greater resistance to policies among those who disagree with the policy than if they had not engaged. Indeed, prior research suggests that people who put more effort into their decisions—something that deliberative engagements seek to promote—can develop more coherent attitudes, which include feeling more favorable toward their choice and less favorable toward alternatives (Anderson, DeTour, & PytlikZillig, 2015; Goodwin & Wu, 1984; Svenson & Jakobsson, 2010). Thus, by increasing people’s certainty about their attitudes and preferences, it may be that deliberations simply make it easier for people who are for the policy to accept it while making it harder for those who are against the policy to accept it. In other words, such processes might increase the influence of preferences on policy acceptance/support by making those preferences more coherent.

To summarize, and to make explicit the connections between our deliberative features and prior literature and theory, we are hypothesizing two competing processes that may affect policy acceptance/support in our studies. These processes are conceptually illustrated in Fig. 5.2. One process is hypothesized to operate through perceptions of the processes as being fair and competent (see Fig. 5.2a). Public engagement features that increase these perceptions would thus also increase policy acceptance/support and may also decrease the relationship between policy preferences and acceptance/support. For example, each of our manipulations (peer discussion, active facilitation, strong information, and so on) might lead the deliberative processes to be viewed as more fair or competent, thus increasing positive process perceptions. In addition, factors that affect people’s perceptions of the background information—critical thinking prompts, use of strong versus weak information, and use of NIF-formatted materials—may also impact information evaluations, which we would expect to influence process perceptions, because the information was part of the process. Of course, given that process perceptions were assessed after participants learned about the policy decision outcome, we would expect that their preferences for or against the policy would impact their perceptions of the processes too. If process perceptions are impacted positively by these various factors, we would expect that those positive perceptions could then directly increase acceptance/support of the policies and/or indirectly increase acceptance/support by increasing trust in policymakers, as well as by reducing the relationship between preferences and acceptance/support (moderation effect shown with a dotted line).

Fig. 5.2 Theoretical models relating our experimental manipulations and mediator/moderator variables to policy acceptance/support. (a) Process perceptions model. (b) Attitude coherence model

The second process (see Fig. 5.2b) is hypothesized to operate through creating stronger and more coherent attitudes toward the topics (nanotechnology) and policies under discussion. For example, prompting people to think in an effortful and conscientious manner might not only increase knowledge as suggested by Chap. 3 findings but also increase how certain people are about their resulting attitudes. Use of peer discussion and active facilitation might also increase such deliberative efforts. Use of strong information that is optimally formatted (e.g., using NIF organization) might improve people’s perceptions of the information upon which their decisions are based and thus further increase their confidence in their attitudes. Confidence in one’s attitudes would then increase the strength of the relationship between one’s preferences and policy acceptance/support. However, because we did not directly assess people’s certainty in their attitudes, we could only look at the potential moderating impacts of deliberative engagement and information evaluations instead (moderating effects shown with dotted lines).

Importantly, these two competing processes might involve the same public engagement features. For example, strong information might both increase positive process perceptions and increase attitude certainty, resulting in both positive and negative effects on acceptance. Such mixed effects could hide any main effects of strong information on policy acceptance, making it important to examine the effects of our experimental manipulations not only on the desired end but also on the mediators through which they are thought to work.

5.3 The Current Study

The overarching research question we focus on, in the context of nanotechnology policy and the present analyses, is: What factors—especially factors relating to features of public engagement—predict willingness to accept/support a policy decision, whether or not that policy decision is/was preferred? Because the variables available across studies varied, for the purposes of this chapter we did not conduct formal path analyses to explore the relationships depicted in Fig. 5.2. Instead, using simple correlations and multiple regression analyses, we examined, across studies, three categories of results: (1) whether our experimental manipulations, overall, appear to impact policy acceptance/support and/or the degree to which policy preference relates to policy acceptance/support, (2) whether our experimental manipulations impacted various potential mediators that might have effects on policy acceptance/support, and (3) whether our potential mediators did indeed predict policy acceptance/support and/or moderate the impact of policy preference on policy acceptance.
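To give a concrete sense of how the third type of analysis can be specified, the minimal sketch below shows one way to test whether a manipulation moderates the preference-acceptance relationship using ordinary least squares in Python. The file and variable names (study_data.csv, condition, policy_preference, acceptance) are hypothetical placeholders, not the project's actual data or code.

```python
# Minimal sketch of a moderation test: does a dummy-coded experimental
# manipulation change the strength of the policy preference -> policy
# acceptance relationship? All names here are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study_data.csv")  # hypothetical: one row per participant

# Center the continuous predictor so the condition main effect is interpreted
# at the mean level of policy preference.
df["preference_c"] = df["policy_preference"] - df["policy_preference"].mean()

# 'condition' is a 0/1 dummy code for one manipulation (e.g., peer discussion).
# The condition:preference_c term tests whether the manipulation moderates the
# preference -> acceptance relationship.
model = smf.ols("acceptance ~ condition * preference_c", data=df).fit()
print(model.summary())
```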

5.3.1 The Policy Scenarios

We were especially interested in how participants viewed the use of public input in their decision-making processes leading up to the policy choice. Thus, as noted in Chap. 2, our method for measuring policy preferences and acceptance also purposely increased the salience of the public engagement processes used just prior to asking questions tapping policy acceptance/support and policy preferences. Specifically, in each of our studies, after all of the engagement activities, a scenario asked the participants to imagine that the very processes to which they had been exposed (which, of course, varied by experimental condition) were used to gather public input and that public input was then used to make a policy decision. In the scenario, the government’s decision—to either invest more in nanotechnology development and decrease regulations (pro-development of nanotechnology) or invest less in development and increase regulations (pro-regulation of nanotechnology)—was always portrayed to be consistent with the public input obtained from the engagement activities. Furthermore, the scenario was randomly assigned so that participants had roughly equal chances of receiving a scenario that did or did not match their preferences.

5.3.2 Key Variables

While many details of our measures and manipulations are given in Chap. 2 and in our detailed methodological reports, here we review those variables most important to this chapter. Following the scenario, we assessed our dependent variable, which we call policy acceptance/support, because we treat acceptance and support the same (without distinction) across our models and in our results. However, it may be important to note that acceptance/support was assessed in a manner more closely matching the conceptualization of policy acceptance in Studies 2–4, where we asked participants whether they agreed the government made the right choice and whether they agreed that the government made the same choice they would have made. In Study 5, our acceptance/support variable more closely reflected policy support, as we directly asked about willingness to support versus resist the policies as well as whether they accepted the decision due to the processes used to come to the decision.

Immediately after the scenario, we also asked about perceptions of the processes used to make the policy decision, including how fair and competent the process was.Footnote 1 The scenarios stressed that the public input methods used in the scenario were the same ones used with the survey respondent. Thus, we expected different process perceptions might occur as a result of students having been in different experimental conditions.

Finally, in each of our studies, our measure of policy preference consisted of a single item. In Studies 2–4, separate from but immediately after the scenario was administered and a few questions about the scenario were answered, participants were randomly assigned to one of two versions of the policy preference item: “If legislation were being considered that would slow down (speed up) nanogenomics research and development in the area of human enhancement by decreasing (increasing) funding and increasing (decreasing) restrictions...Would you be FOR or AGAINST such legislation?” Responses fell on a six-point scale ranging from “strongly AGAINST” to “strongly FOR” with no neutral point. Items were recoded to reflect whether or not the participants preferred the policy randomly assigned to them in the scenario.Footnote 2 Thus, “pro-slow” development of nanotechnology persons were identified as those who were for slowing down the research and development (or against speeding it up), and “pro-fast” persons as those who were for speeding it up (or against slowing it down). In Study 5, the policy preference item was not separate from the scenario but instead referenced the scenario and read: “If the legislation above [in the scenario] were really under consideration by the government...Would you be FOR or AGAINST such legislation?” It was followed by the same six-point scale used in Studies 2–4.Footnote 3
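As a rough illustration of this recoding logic (a sketch only; the column names and example values below are hypothetical, not the project's actual variables), one might compute a preference-for-the-assigned-policy score as follows:

```python
# Sketch of the recoding logic: the six-point for/against response is flipped
# when the item wording runs opposite to the policy scenario the participant
# was randomly assigned, so higher scores always mean "prefers the assigned
# policy." Column names and values are hypothetical placeholders.
import pandas as pd

df = pd.DataFrame({
    # 1 = strongly AGAINST ... 6 = strongly FOR (no neutral point)
    "for_against": [6, 2, 5, 1],
    # which item wording the participant saw: "slow_down" or "speed_up"
    "item_version": ["speed_up", "slow_down", "slow_down", "speed_up"],
    # randomly assigned scenario: "pro_development" or "pro_regulation"
    "scenario": ["pro_development", "pro_regulation", "pro_development", "pro_regulation"],
})

def preference_for_assigned_policy(row):
    # Being FOR speeding up (or AGAINST slowing down) marks a "pro-fast" person;
    # that preference matches a pro-development scenario and mismatches a
    # pro-regulation one, and vice versa for "pro-slow" persons.
    pro_fast = row["for_against"] if row["item_version"] == "speed_up" else 7 - row["for_against"]
    return pro_fast if row["scenario"] == "pro_development" else 7 - pro_fast

df["policy_preference"] = df.apply(preference_for_assigned_policy, axis=1)
print(df)
```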

Other variables in our model were also assessed. We used participants’ average reported “conscientious” engagement across all of the times it was measured to operationalize effortful, deliberative engagement. We used this variable because, as noted in Chap. 2, it was most reliably related to participants feeling as though they had gained knowledge from the activities. The conscientious engagement measure was available for all four studies. We used three negative valence scales assessing participant perceptions of the background materials as biased, unclear, and untrustworthy (i.e., not accurate, not thorough). These three scales were available in Studies 3, 4, and 5; Study 2 had only two of the scales (the biased and untrustworthy scales). Trust in policymakers was variably assessed across studies, with only Studies 3 and 4 providing measures of perceived trustworthiness and perceived untrustworthiness of policymakers across all participants. In Study 3, we additionally measured perceptions of policymakers’ ability and motivation to take into account ethical, legal, and social issues when considering the policies that they were making.

5.4 Analyses and Results

As previously noted, we broke our broad research question about “what works and how” into three smaller questions: (1) Do our experimental manipulations, overall, impact policy acceptance/support and/or the degree to which policy preference relates to policy acceptance/support? Although little evidence was found for the main or moderating impacts of our experimental manipulations on acceptance/support, given that we had theorized competing processes might be in play, we next conducted analyses to explore: (2) Do our experimental manipulations impact the various potential mediators illustrated in Fig. 5.2 which may have effects on policy acceptance/support? Finally, our analyses explored: (3) Do the mediators predict policy acceptance/support and/or moderate the impact of policy preference on policy acceptance? To answer these questions, we examined both the correlations among our variables and the results from multiple regression analyses.

5.4.1 Simple Correlations

Table 5.1 shows the simple correlations between each of our dummy-coded experimental conditions and the other variables in the study. Table 5.2 displays the correlations between other major measured variables, showing the individual correlations for each relevant study below the diagonal and the average of those simple correlations above the diagonal.
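For readers curious how the structure of Table 5.2 might be assembled, the sketch below computes a correlation matrix within each study and then averages the study-level correlations element-wise; the data file, study labels, and variable names are hypothetical placeholders rather than the project's actual code.

```python
# Sketch of the Table 5.2 structure: correlations within each study, then the
# element-wise average of those study-level correlations across studies.
# File and column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("all_studies_long.csv")  # hypothetical: one row per participant, with a 'study' column
variables = ["policy_preference", "acceptance", "process_fairness",
             "process_incompetence", "info_bias", "conscientious_engagement"]

# Correlations within each study (the below-the-diagonal entries of Table 5.2).
per_study = {study: grp[variables].corr() for study, grp in df.groupby("study")}

# Element-wise average of the study-level correlations (the above-the-diagonal
# entries); studies missing a variable contribute NaN and are simply skipped.
avg_corr = (
    pd.concat(list(per_study.values()))
      .groupby(level=0)
      .mean()
      .loc[variables, variables]
)
print(avg_corr.round(2))
```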

Table 5.1 Correlations between experimental conditions (variables 1–5) and other model variables
Table 5.2 Correlations among major study variables by study (below diagonal) and average correlations across studies (above diagonal)

In Table 5.2, the largest average correlations occurred where one might expect: (1) among similar sets of variables (e.g., the average correlations involving perceptions of the information ranged from |r| = 0.38 to 0.56; the correlations between perceptions of process fairness and process incompetence ranged from −0.41 to −0.55, averaging −0.48 across studies), (2) between policy preference and policy acceptance (average r across studies = 0.54), and (3) between perceptions of the process (fairness and competence) and policy acceptance (average |r|s = 0.50–0.59). There were also moderately strong relationships between policy preferences and process perceptions (average |r|s > 0.30). This was expected because process perceptions were measured after people learned about the policy outcome (which had been randomly assigned): those who received outcomes matching their preferences rated the process as more fair and competent than those who did not.

Other correlations are also interesting to note. Consistent with Fig. 5.2, perceptions of the information modestly predict process perceptions. Perceptions of information bias (see Table 5.2, variable 10) had the strongest relationships with perceptions of the process as fair and competent (variables 8, 9) (average |r|s = 0.14–0.15). Perceived trustworthiness of policymakers also sometimes positively correlated with policy acceptance as expected, although the correlations were not consistently statistically significant.

Diverging from the predictions illustrated in Fig. 5.2a, process perceptions did not correlate with perceptions of the trustworthiness of policymakers (average rs < 0.08, and none of the correlations for any of the studies were statistically significant). In part, this may be because participants were asked to rate policymakers in general, not the policymakers in the scenario. However, and somewhat unexpectedly, negative perceptions of the background documents (assessed during A2) were related to negative perceptions of policymakers (measured at A4). In addition, although hypothesized only as a potential moderator (see Fig. 5.2b), deliberative conscientious engagement (assessed across multiple assignments) was moderately positively related to perceptions of the information and the policymakers. That is, more conscientious engagement related to more positive perceptions of the information and policymakers. In Studies 2 and 5, conscientious engagement also was marginally predictive of process fairness perceptions.

Finally, although not the focus of our analyses, Table 5.2 shows the randomly assigned scenario (0 = pro-regulation, 1 = pro-development, variable 6) correlated with policy acceptance/support. Generally speaking, participants were more accepting and supportive of the policy scenario randomly assigned to them if it was pro-regulation rather than pro-development (rs = −0.11 to −0.29, with an average of −0.17 across studies).Footnote 4

We discuss other simple correlations as we discuss our three research questions and main categories of results, which include consideration of multiple regression analyses as well.

5.4.2 (1) Do Our Experimental Manipulations Impact Policy Acceptance/Support or Moderate the Policy Preference-Acceptance/Support Relationship?

The simple main effects of each of our experimental conditions were almost never significant for the full sample. One of the rare exceptions was found in Study 5, where active facilitation resulted in slightly greater policy acceptance (Table 5.1, r = 0.14, p < 0.05).

Using multiple regression procedures, we also examined the combined and interactive impacts of our manipulations on policy acceptance/support. Because our analyses so rarely revealed significant effects, we do not present the results in tables or figures. Only two statistically significant effects were found, both in Study 5. One was a significant effect of group type, such that those in the negative homogenous groups were most accepting/supportive of whatever policies they received and those in the heterogeneous attitude groups were least accepting/supportive of the policies they received. Additional analyses and studies are needed to understand this result; it is, of course, possible that it is a chance finding. The second effect (also found in Study 5) was a significant facilitation by information interaction such that active facilitation had a significantly positive impact on policy acceptance/support only in the weak information condition, and strong information had a significantly positive impact only when the facilitator was passive. Thus, it seemed that active facilitation compensated for the negative effects of weak information on policy acceptance/support and vice versa.

Relating to the second part of our research question, we tested for but found no significant interactions indicating our experimental manipulations had moderating effects on the relationship between policy preference and acceptance/support. That is, none of our experimental conditions appeared to significantly increase or decrease the extent to which policy preferences impacted policy acceptance.

5.4.3 (2) Do Our Experimental Manipulations Impact Potential Mediators ?

Although our experimental manipulations rarely affected policy acceptance/support, we next explored the possibility that the experimental manipulations might have indirect effects on policy acceptance/support via impacts on other related factors, such as the process perception variables (e.g., process fairness and competence), which, as Table 5.2 shows, are strongly related to policy acceptance/support. Alternatively, our manipulations might impact information evaluations (which were also related to perceptions of policymakers) or conscientious engagement—each of which we had hypothesized might moderate the relationship between policy preferences and acceptance/support. We also tested for potential impacts of the experimental manipulations on perceptions of policymaker trustworthiness.

For these analyses, we examined each potential mediator individually as a dependent variable in separate analyses for each study. We tested for the main and interactive effects of the relevant experimental manipulations in each study, always including all main effects simultaneously, but dropped any higher-order interactions that did not reach at least marginal significance (p < 0.10). Also, when examining effects, we usually controlled for the effect of policy preference and, when relevant, its interaction with the randomly assigned policy scenario as well as the main effect of the scenario. This is because, except in the case of information perceptions (which were measured during A2, after the first reading of the information) and conscientious engagement (which was measured throughout and averaged across activities), the other variables were assessed after people learned about the policy outcome and may have been affected by whether they received their preferred outcome. Below we describe our main findings regarding the combined impacts of our experimental manipulations on the proposed mediators.Footnote 5
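As a rough sketch of this model-trimming approach (with hypothetical file, column, and condition names, not the authors' actual syntax), one proposed mediator could be regressed on two dummy-coded manipulations while controlling for policy preference and the assigned scenario, dropping a nonsignificant higher-order interaction:

```python
# Sketch of the mediator-as-outcome analyses: regress one proposed mediator
# (here, perceived process fairness) on two 0/1 dummy-coded manipulations and
# their interaction, controlling for policy preference, the assigned scenario,
# and their interaction; then drop a higher-order manipulation interaction
# that does not reach marginal significance (p < .10). Names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study_data.csv")  # hypothetical per-participant data
df["preference_c"] = df["policy_preference"] - df["policy_preference"].mean()

full = smf.ols(
    "process_fairness ~ crit_thinking * peer_discussion"  # manipulations (0/1 dummies)
    " + preference_c * scenario",                          # controls
    data=df,
).fit()

# Trim the manipulation interaction if it is clearly nonsignificant.
if full.pvalues["crit_thinking:peer_discussion"] >= 0.10:
    model = smf.ols(
        "process_fairness ~ crit_thinking + peer_discussion + preference_c * scenario",
        data=df,
    ).fit()
else:
    model = full
print(model.summary())
```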

Process Perceptions

Examination of the correlations in Table 5.1 revealed the experimental manipulations did not have strong or reliable relationships with process perceptions. Multiple regression analyses regressing process perceptions on our experimental manipulations simultaneously also indicated some effects, but not consistent ones. Perceptions of process fairness were impacted by our manipulations only in Study 5, such that strong information conditions (compared to weak information conditions) were associated with higher ratings of process fairness while controlling for other manipulations. Meanwhile, ratings of process competence appeared to be affected (at least marginally) by the critical thinking conditions in the majority of our studies (again, once other manipulation effects were controlled), but not in a consistent direction. That is (and these results are consistent with the correlation results in Table 5.1), in Studies 2 and 3 there was a main effect such that critical thinking participants felt processes were less competent than control participants did. Later we note that our critical thinkers, in general, seemed critical—and this finding is consistent with that trend. However, in Study 4 the effect of critical thinking was in the opposite direction. Likewise, in Study 3 use of NIF materials increased perceived competence (i.e., decreased incompetence perceptions), but in Study 4 use of NIF materials decreased competence perceptions (increased incompetence perceptions).

Study 5 results might shed some light on the contradictory effects of critical thinking because Study 5 data suggests the effect of critical thinking depends on other factors we varied (and which could have varied unintentionally between other studies). For example, being in a homogenous positive group resulted in a significantly more positive effect of critical thinking on judgments of process competence compared to being in a homogenous negative group, in which case the effect was in the opposite (i.e., negative) direction. The effects of the critical thinking prompts also interacted with passive/active moderation and strength of information. The pattern of the three-way interaction suggested that using one positive factor (critical thinking prompts, strong information, or active moderation) could modestly increase perceived competence of the processes relative to having none of those factors. Adding a second one did not help much at all; however, you could further increase the perceived competence if all three were used. While further analyses are necessary to understand the contradictory effects found between studies, it is possible that our groups in Studies 3 and 4 were more or less homogenous in composition. It is also possible that some of our moderators were more passive than intended. Either of these situations might have impacted the direction of the critical thinking effect in Studies 3 and 4.

Information Perceptions

In each of our studies we found consistent evidence that critical thinking prompts impacted information perceptions. Specifically, people prompted to think critically rated the background documents more negatively on one or more of our scales. Such effects are apparent in the correlation results shown in Table 5.1, as well as in our multiple regression results. For example, in Study 2, our regression results indicated that people in the critical thinking condition rated the background information as less accurate and thorough than those in the control condition. In Study 3, those in the critical thinking condition felt the information was more biased, less thorough/accurate, and less clear than those in the non-critical thinking condition.Footnote 6 In Study 4, critical thinkers again rated the background documents as less clear and less accurate/thorough.

In Study 5, partly because we had begun to wonder if our critical thinkers were just, well, critical, we created strong and weak information conditions. We wanted to see whether our participants were actually sensitized to differences in the quality of the information rather than simply rating everything about the background information more negatively overall. We did, once again, find more negative evaluations of our materials among the critical thinkers. And consistent with Table 5.1 results, use of strong rather than weak information did very little to change participants’ ratings of the information overall. However, in the case of ratings of bias in the background documents, there was—if we reduced our sample to those student participants who completed our post measures at A4 as well as our A2 measures—a statistically significant interaction between critical thinking prompts and information quality such that the critical thinkers were less negative about the strong information than the weak information. This gave us hope that we had actually induced critical thinking and not just negativity. Still, the fact that we needed to reduce the sample to the more “participatory” or “engaged” students in the course (i.e., those who completed both the A2 and A4 measures) suggests that participant individual differences need to be taken into account in studies of public engagement, to fully understand effects that are found or not found.Footnote 7

Policymaker Trustworthiness

Policymaker trustworthiness was only measured across all participants in Studies 3 and 4. The simple correlation results in Table 5.1 suggest very little impact of our manipulations on perceptions of policymaker trustworthiness or untrustworthiness. Only the use of NIF information in Study 4 appeared to be related, specifically, to increased distrust in policymakers. Multiple regression analyses further indicated that, in Study 4, the NIF-formatted materials were associated with increased perceptions of untrustworthiness overall and also with lower perceptions of trustworthiness when participants were in the critical thinking condition. However, in Study 3, a different effect was found: there was an NIF-format by group discussion interaction such that, only when participants were in a peer discussion group, the NIF-formatted materials (compared to topically formatted materials) related to increased perceptions of the ELSI-specific trustworthiness of policymakers (i.e., trustworthiness as related specifically to their taking into account ethical, legal, and social issues). Thus, overall, the results were inconsistent and inconclusive regarding effects on trustworthiness.

Conscientious Engagement

Figure 5.2b suggested our experimental manipulations might also predict conscientious engagement (and recall there was support for this in Chap. 3, for the critical thinking manipulation). When simply examining the average correlations across studies (Table 5.1), however, we see small and somewhat unreliable correlations between conscientious engagement and the critical thinking manipulation, but more robust correlations between use of peer discussion and conscientious engagement, in all three studies where peer discussion was varied. In Study 5 there was also a significant correlation between use of strong (vs. weak) information and conscientious engagement.

Analyses regressing conscientious engagement on all of our experimental manipulations found that peer discussion increased reports of conscientious engagement in each study. However, the critical thinking manipulation was not consistently related. In Study 2, critical thinking prompts related to reduced conscientious engagement. Note that we had revised our critical thinking prompts in Studies 3–5 partly because we noticed the negative impact our critical thinking manipulations in Studies 1 and 2 had on engagement. In Studies 3 and 4, critical thinking prompts sometimes increased conscientious engagement, as noted in Chap. 3.

Additional analyses revealed the positive effects of the critical thinking prompts were primarily found when other elements commonly associated with good deliberation practices were missing. In Study 4 there was evidence of an interaction such that critical thinking increased average reported conscientious engagement if one was not in a group for discussion, and group discussion increased conscientious engagement if one was not in the critical thinking condition. Thus, it seemed that group discussion and critical thinking prompts again compensated for the lack of the other. In Study 5, there was a significant three-way interaction between information strength, active moderation, and critical thinking prompts, which is illustrated in Fig. 5.3. This interaction also suggested that certain positive elements could compensate for the lack of other positive deliberative elements. Compared to having none of the features commonly associated with high-quality deliberative events (i.e., passive rather than active facilitation, weak information, and control prompts rather than prompts to think critically), adding any one of those elements increased conscientious engagement (and the largest effect was seen by adding active facilitation). However, it never helped to add an element to a situation that already had a single existing element. In fact, adding critical thinking to either of the other elements seemed to reduce conscientious engagement. On the other hand, the highest levels of conscientious engagement were achieved if all three of the elements were present.

Fig. 5.3 Illustration of the three-way interaction between information (weak vs. strong), facilitation (passive vs. active), and deliberation prompts (control vs. critical thinking) when predicting conscientious engagement in Study 5. Note: Numbers indicate how conscientious engagement is affected by including one, two, or three elements commonly associated with high-quality deliberative events
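One simple way to inspect this kind of compensatory pattern (a sketch only, with hypothetical file and column names rather than the study's actual data) is to tabulate mean conscientious engagement in each cell of the 2 × 2 × 2 design and by the number of "high-quality" elements present:

```python
# Sketch: tabulate mean conscientious engagement for each combination of three
# dummy-coded deliberative elements, and collapse by how many elements are
# present, to read off the none / one / two / three pattern described in the
# text. File and column names are hypothetical placeholders (0/1 dummy codes).
import pandas as pd

df = pd.read_csv("study5_data.csv")  # hypothetical per-participant data

elements = ["strong_info", "active_facilitation", "crit_thinking"]

# Mean engagement in each of the eight design cells.
cell_means = (
    df.groupby(elements)["conscientious_engagement"]
      .mean()
      .unstack("crit_thinking")
)
print(cell_means.round(2))

# Collapse to the number of elements present in each condition.
df["n_elements"] = df[elements].sum(axis=1)
print(df.groupby("n_elements")["conscientious_engagement"].mean().round(2))
```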

5.4.4 (3) Do Our Mediators Impact Policy Acceptance/Support or Moderate the Preference-Acceptance/Support Relationship?

As a final step in evaluating the feasibility of the models illustrated in Fig. 5.2 across our studies, we tested for the impacts of our mediators on policy acceptance/support and also tested whether the mediators might moderate the effect of policy preference on policy acceptance/support. For these analyses, we conducted multiple regressions that always included a dummy code for the scenario type (pro-development or pro-regulation), the effect of policy preference, and, if relevant, the interaction between scenario type and policy preference. Note that, prior to testing the simple main effect of a mediator and its interaction with policy preference, we tested for and, when possible, ruled out additional interactions with scenario type (i.e., the two-way mediator × scenario interaction and the three-way mediator × policy preference × scenario interaction). This was because prior analyses, including the correlation between scenario and policy acceptance, suggested that some effects differed depending on whether participants received the pro-development or pro-regulation policy decision scenario. Whenever we found an interaction with scenario type, we report the two-way interactions (between the mediator and policy preference) separately for each scenario type in Table 5.3.
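A rough sketch of this analysis sequence is shown below (with hypothetical file and variable names, not the authors' actual code): fit a full model including the scenario interactions, and either report a reduced model or report the mediator × preference interaction within each scenario, depending on whether the scenario interactions can be ruled out.

```python
# Sketch of the mediator -> acceptance analyses: test a mediator's main effect
# and its interaction with policy preference, controlling for scenario type,
# after first checking whether the effects differ by scenario.
# File and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study_data.csv")          # hypothetical per-participant data
df["preference_c"] = df["policy_preference"] - df["policy_preference"].mean()
df["fairness_c"] = df["process_fairness"] - df["process_fairness"].mean()

# Step 1: full model including mediator x scenario and
# mediator x preference x scenario interactions.
full = smf.ols("acceptance ~ fairness_c * preference_c * scenario", data=df).fit()

# Step 2: if the scenario interactions can be ruled out (all p >= .10), report
# the simpler model; otherwise report mediator x preference within each scenario.
scenario_interactions = [t for t in full.pvalues.index if "scenario" in t and ":" in t]
if all(full.pvalues[t] >= 0.10 for t in scenario_interactions):
    reduced = smf.ols("acceptance ~ fairness_c * preference_c + scenario", data=df).fit()
    print(reduced.summary())
else:
    for name, grp in df.groupby("scenario"):
        print(name)
        print(smf.ols("acceptance ~ fairness_c * preference_c", data=grp).fit().params)
```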

Table 5.3 Mediator variable main effects on policy acceptance/support and interactions with policy preference when predicting acceptance/support

Process Perceptions

Selected results from the regression analyses examining the impacts of our process perception variables predicting policy acceptance/support are shown in Table 5.3. As shown in the top section of Table 5.3, perceptions of process fairness and process competence consistently and positively related to (had main effects upon) policy acceptance in each study.

Process perceptions also often interacted with policy preferences to predict policy acceptance/support. Contrary to our expectations, when interactions emerged between policy preferences and process perceptions, the interactions were usually positive, indicating that perceptions of fairness and competence increased the extent to which policy preferences positively predicted acceptance. For example, Table 5.3 shows that process fairness in Study 3 had a positive impact (B = 0.45) on policy acceptance for those in the pro-regulation condition at mean levels of policy preference. The interaction between fairness and policy preference differed by scenario (see the rightmost set of columns for the interactions). For those in the pro-development scenario condition, the relationship between process fairness and policy acceptance increased (grew stronger) as one’s positive preferences increased (by 0.16 for every 1 SD increase in preferences). Thus, unlike prior literature that has found procedural justice perceptions to be especially important when people receive outcomes that they do not favor, our studies relatively consistently found that the process perceptions were most highly related to policy acceptance when people received outcomes that they did prefer.

We suspect we may have found this pattern because the process perceptions in our study were assessed immediately after people were informed, in the scenario, of the policy decision made by the government. This outcome knowledge (in light of their policy preferences) likely impacted their perceptions of the processes used to come to the decision. Indeed, policy preferences were always strongly related to ratings of process fairness and process incompetence (see Table 5.2). We may have gotten a different pattern of results if we had assessed process perceptions prior to revealing the policy decision outcome.Footnote 8 Future research including measures of process perceptions prior to people learning about the policy outcomes is needed to clarify these patterns .

Information Perceptions

As shown in Tables 5.2 and 5.3, perceptions of information quality did not consistently have main effects on policy acceptance/support, but when effects were found, seeing the information as deficient (biased, untrustworthy, or unclear) was more likely to decrease than to increase policy acceptance. Information quality perceptions also often interacted with policy preferences to predict policy acceptance/support, as shown in the right-hand side of Table 5.3. Whenever the interaction occurred, it was negative, indicating that perceiving the information as inadequate reduced the effect of policy preferences on acceptance/support (and, conversely, that positive quality perceptions related to stronger relationships between policy preferences and acceptance/support). As previously described and illustrated in Fig. 5.2, we thought that perceptions of high information quality might either (1) weaken the relationship between policy preferences and acceptance by increasing people’s procedural fairness assessments or (2) strengthen the relationship between policy preferences and acceptance/support by increasing attitude certainty related to their preferences. Even though we did not assess attitude certainty, the pattern we found more closely matches the second account (Fig. 5.2b). In future research it would be interesting to investigate whether the effects of information perceived as poor quality are indeed due to increased uncertainty about the preferences that participants formed during the activities.

It is also noteworthy that three-way interactions with scenario type were again apparent in these analyses when examining the effect of perceptions of bias in the information. It was only in the pro-development scenario condition that increased perceptions of bias (measured at A2) related to weakened relationships between policy preferences and acceptance/support (measured at A4) (see Table 5.3, right-hand side; the Study 2 pro-development condition reveals a −0.32 interaction effect, and the Study 4 pro-development condition a −0.20 interaction effect). Note that in this case, information perceptions were assessed before participants learned about the outcome, so their reactions to the outcome could not have affected their perceptions of the information. While it is not entirely clear why bias perceptions would only impact preference-acceptance relationships in the pro-development condition, the three-way interactions underscore that not all policy decisions are equal and that accepting/supporting one policy might be very different from accepting/supporting a different or seemingly opposite policy. In our context, one policy decision (pro-development) might have been viewed as more risky than the other (pro-regulation) and thus activated risk aversion and a bias toward the status quo, which may have included greater consideration of aspects of the process that led to the decision (including quality of information considerations).

Perceptions of Policymaker Trustworthiness and Untrustworthiness

As previously noted, measures of perceptions of policymakers were only administered to all students in Studies 3 and 4, making it more difficult to assess replication of effects or lack of effects. Nonetheless, Table 5.3 shows that, in both studies, there was at least one indication that perceived trustworthiness of policymakers had a positive main effect on policy acceptance/support. In Study 4 there was also an interaction such that perceptions of untrustworthiness increased the relationship between preferences and acceptance/support. In other words, people who perceived policymakers as untrustworthy appeared to rely more strongly on their policy preferences when deciding whether to accept/support the policy. Future research should investigate whether these patterns hold across additional replications. That is, are trustworthiness perceptions most important for their main effects on policy acceptance? Are (low) untrustworthiness perceptions most important for accepting policies even when they are not preferred?

Conscientious Engagement

Finally, there was not much evidence that conscientious engagement had a main effect on policy acceptance/support, but there was evidence in both Studies 4 and 5 that conscientious engagement moderated the policy preference-acceptance relationship. In each case, the interaction was such that those who reported engaging more conscientiously showed stronger relationships between their reported policy preferences and acceptance/support. This is consistent with the theorizing behind Fig. 5.2b, but future research will need to establish whether the pattern is indeed due to increases in attitude certainty.

Once again, there was also evidence that conscientious engagement mattered most in the pro-development condition (in Study 5). Future research is needed to explain why this is so. It is possible that when people are more accepting of a policy overall (as was the case in the pro-regulation scenario), factors such as attitude certainty matter less.

5.5 Summary and Conclusions

In summary, our experimental manipulations rarely had direct impacts on policy acceptance/support and never directly moderated the relationship between policy preferences and policy acceptance/support. However, public engagement features may still matter, because our experimental manipulations sometimes did impact our proposed mediators, which in turn impacted policy acceptance/support or moderated the relationship between policy preferences and acceptance.

One of our most robust findings was that critical thinking prompts increased negative evaluations of the quality of information participants received. This is important because of the role of information evaluations in policy acceptance/support. That is, our analyses suggest that quality of information impacts two different and competing processes. As quality of information perceptions improve, process perceptions and perceptions of policymakers improve too, which can relate to increased policy acceptance.Footnote 9 At the same time, as perceptions of information improve, the relationship between policy preference and acceptance also increases, decreasing the extent to which people who do not get their preferred outcome will be accepting or supportive. Because our analyses rarely found main effects of information quality on policy acceptance, it is possible that these two processes cancel one another out.

There was also robust evidence that peer discussion increased reports of conscientious engagement. This could be important because conscientious engagement related to both improved perceptions of information quality and trust in policymakers—both of which, as previously noted, tend to predict greater policy acceptance.Footnote 10 In addition, conscientious engagement had a moderating effect on the policy preference-acceptance relationship, increasing that relationship in a manner similar to how the perceived quality of information increased it.

Taken together, our results suggest that certain features of public engagement that strive to meet the “deliberative ideal” will result in less acceptance of non-preferred policies. Practitioners strive to use high-quality information in deliberations and strive to have people engage by thinking carefully about the information—in a manner that is likely to increase their knowledge and their application of that knowledge to their opinions. This, however, does potentially result in people’s preferences driving their support/acceptance of the policy to a greater degree than if they had not consumed high-quality information conscientiously.

Other effects of our experimental manipulations were less robust. Critical thinking prompts sometimes interacted with other factors to predict process perceptions. NIF-formatted materials sometimes interacted with other factors to predict trustworthiness of policymakers. Some effects were found in Study 5, but future research is needed to see if the effects replicate. If the findings do replicate, Study 5 results suggest that certain design features can compensate for others (such as when active moderation compensated for weak background information and vice versa). If such compensatory effects are common in public engagement research, this may make it difficult to find effects in experiments but may also be encouraging for practitioners. That is, as they strive to do many things “right,” it may be reassuring to know that not everything needs to be perfectly right to achieve benefits from deliberative engagements.