2.1 Connecting Features, Processes, and Outcomes During Deliberative Discussions

As noted in Chap. 1, when we began our studies, the relative lack of experimental research on public engagement led us to try to fill that gap. Our consideration of what approaches (public engagement features) work for what purposes (outcomes) and why (i.e., via what processes or mediators) resulted in a general framework and conceptual strategy that we applied to the present research (PytlikZillig & Tomkins, 2011). This strategy involves considering some of the features commonly used and recommended for public engagement and then broadly considering how a variety of social and psychological theories might clarify how, when, and why those features might lead to various outcomes. This broad and inclusive consideration of relevant theories drove the design of our experiments, in which we experimentally varied several features while measuring and assessing a larger number of potential outcomes, mediators, and moderators. By conducting similar but varied procedures with highly similar samples over time, we were able to examine which findings are robust and which vary even within the context of our relatively narrow inquiry: engaging science students in deliberations about nanotechnology.

Our detailed methods, including all measures and materials for each study, accompany the data sets that are available in the supplemental files to this book. Note that Study 1 was conducted as a pilot in which we tested the incorporation of our experimental methods into the classroom setting. Several of our methods and measures were changed based on feedback from students at the end of that first semester. Because of Study 1’s different nature, and the need to focus our resources on sharing our best data, we do not discuss Study 1 at length in this book and do not present its data here.

In this chapter, we first describe the contexts and methodological features that were constant across the remaining studies (Studies 2–5) and explain why we believe studies in the college student context are important. We then summarize key differences and similarities between the studies’ experimental conditions and outcome measures and provide some of the rationale for the changes we made across studies.

2.2 Our Context: Future Scientists Deliberating About Nanotechnology over Time

It is important to study theories relevant to public engagement in the specific context of deliberation around science and technology development and policy. Empirical findings do not always generalize easily from the lab to the field, or even from one deliberative discussion to another. Science, technology, and society (STS) scholars generally agree that public engagement should be context sensitive (Delgado, Kjølberg, & Wickson, 2011), making it important to examine the impact of design factors within specific, concrete contexts. As part of our strategy for connecting features, processes, and outcomes, we held context as constant as possible across our experiments. While this approach necessarily limits the generalizability of results, it provides a solid foundation for establishing internally valid and robust results within our chosen context before extending to others. As the reader will see in Chaps. 3, 4, and 5, even with all the controls we employed, finding consistent effects was no small feat. The contextual features held constant in our studies include the type of participants involved, the topics of deliberation, and the use of a longitudinal, repeated measures design.

2.2.1 Participants: College Students in the College Science Classroom

To facilitate our use of experimental methods, we worked within the constraints of the college classroom, engaging consecutive semesters of students enrolled in a freshman-level biology course for science majors at the University of Nebraska-Lincoln (UNL). Table 2.1 describes the basic demographics of the students in each of our studies. As shown, an estimated 85–90% of the students who began the course participated in the study each semester. Those not participating may have either dropped the course or not consented to let us use their data. Across all studies, participants included slightly more females than males and had an average age of 19–20 years. Students in Study 5, however, were slightly older and more varied in age. In each study, 70–90% of the students were in their first 2 years of college, and around 70–80% were science majors. About 40–50% of the students reported an affiliation with the Republican party; the remainder were approximately equally likely to affiliate as Democrats or as Independents/Others, except in Study 2, in which there were proportionally more Independents/Others. The fall semesters generally involved larger numbers of students than the spring semesters and a greater proportion of students in their first year of college.

Table 2.1 Descriptive comparison of participants across studies

There are at least three reasons why we believe this context is worthy of study. First, as noted by McAvoy and Hess (2013, p. 19), classrooms are “one of the most promising sites for teaching the skills and values necessary for deliberative democratic life.” US college students have often just achieved voting age, and most are just beginning a fuller participation in democracy. It is an arguably worthwhile endeavor to include more deliberative democracy in the classroom, and experiments in such contexts will facilitate understanding of optimal ways to do just that. Second, within the realm of STEM (science, technology, engineering, and mathematics) education, there has been a movement toward creating curricula that result in more well-rounded graduates who are not only experts in their fields but also able to work with interdisciplinary teams and think about the implications of the technologies that they may develop, refine, or work with. Within biology education at UNL, discussion of a “New Biology” that focuses on interdisciplinary problem-solving and the application of science to solving societal problems makes our work applicable to the goals of that movement (Labov, Reid, & Yamamoto, 2010; National Research Council, 2009). As noted in the National Research Council’s 2009 publication, “New Biology for the 21st Century: Ensuring the United States Leads the Coming Biology Revolution” (p. 10):

Science and technology alone, of course, cannot solve all of our food, energy, environmental, and health problems. Political, social, economic, and many other factors have major roles to play in both setting and meeting goals in these areas. Indeed, increased collaboration between life scientists and social scientists is another exciting interface that has much to contribute to developing and implementing practical solutions.

Thus, work like ours is useful for introducing future scientists to the social science that is likely to impact the practical usefulness of their work as it intersects with the public and a variety of public viewpoints. Third, a very practical reason for studying public engagement within the college classroom is that it allowed us to use experimental methods such as random assignment to conditions, increasing the internal validity of our findings in that context.

2.2.2 Discussion Topics: Nano-Biological Technologies and Human Enhancement

In each study, the deliberative activities focused on emerging and potential nanotechnologies. Because the activities took place in a biology course, we focused on technologies that involved biological or health applications, such as the use of nanotechnology for creating new nanomedicines or for human enhancement. We chose nanotechnology as a topic of deliberation because, at the time of our studies, governments were calling for and sometimes requiring public engagement around nanotechnology. For example, in 2003, the US 21st Century Nanotechnology Research and Development Act (P.L. 108–153) required public input and outreach as part of ensuring “that ethical, legal, environmental, and other appropriate societal concerns…are considered during the development of nanotechnology.” Abroad, a government-commissioned report on nanotechnology by the Royal Society and Royal Academy of Engineering argued for widespread and early public involvement during nanotechnology development (Royal Society/RAE, 2004).

Table 2.2 shows some of the features of the background documents that varied between studies. The focus of the Study 2 document was nanogenomics. Between Studies 2 and 3, the background document was expanded to discuss nanotechnology in general, as well as nanogenomics and nanomedicine, and the ethical, legal, and social issues (ELSI) surrounding these technologies. In addition, between Studies 3 and 4, in response to student feedback that the document seemed positively biased toward nanotechnology, revisions added information and resources relevant to nanotechnology’s risks. Some changes were also made to the format of the documents and their integration with the experimentally varied prompts. In Study 2 the background information was a stand-alone, downloadable PDF document, and the prompts for engagement were presented separately. In the other studies, the information was presented as web page text with clickable links, and prompts for student responses were embedded in the information; students were instructed to stop and answer each prompt before continuing to read the next web page. Students were given a link to a downloadable PDF at the end of their reading assignment to refer to throughout the rest of the engagement activities.

Table 2.2 Comparison of background documents across studies

2.2.3 Repeated Measures Longitudinal Design

Each of the studies also involved repeated measures administered over approximately 4–14 weeks of the semester. Table 2.3 shows the timing sequence of activities for each study. As shown, most of the study activities were organized into assignments that all students were required to complete and turn in. Student work was graded for completion and at times for effort or quality; generally speaking, however, students who completed the work were given full credit. Students were given two opportunities to provide or withhold research consent: prior to assignment 1 and during assignment 4. Final consent decisions made in assignment 4 were honored. If a student did not complete assignment 4, their consent decision from assignment 1 was honored. If a student completed neither assignment 1 nor assignment 4, their data was omitted from the study.
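Expressed as code, the consent precedence rule above is a short function. The following is a minimal sketch in Python; the True/False/None encoding (consented/declined/assignment not completed) is an illustrative assumption, and the actual coding of consent in the data sets is documented in the supplemental materials.

```python
def resolve_consent(a1_consent, a4_consent):
    """Resolve a student's research consent status.

    Each argument is True (consented), False (declined), or None
    (the assignment was not completed). Hypothetical encoding.
    """
    if a4_consent is not None:  # the final decision, made in A4, is honored
        return a4_consent
    if a1_consent is not None:  # otherwise fall back to the A1 decision
        return a1_consent
    return False  # neither assignment completed: the data is omitted
```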

Table 2.3 Timing and common content of study activities (assignments)

Assignment 1 (A1) was assigned as homework for students to complete outside of class. This homework included reading an introduction that described public engagement, explained why it is important, and gave an overview of the public engagement activities that would take place as part of the course. As part of A1, students also were asked to complete measures of demographics, their attitudes toward and knowledge of nanotechnology, trust, and other individual differences. The time that elapsed between A1 and the other activities varied between studies, which may have affected whether and how much students were exposed to other sources of information between A1 and the other assignments. In Studies 2 and 3, A1 was completed very early in the semester, up to 10 weeks prior to the rest of the activities. In Studies 4 and 5, A1 was completed approximately 1–2 weeks prior to the rest of the engagement activities. The remainder of the activities, however, were always completed near the end of the semester, over the course of approximately 2–3 weeks.

Just prior to assignment 2 (A2), students attended a 50-min guest lecture during a regularly scheduled large-group meeting of their course. The lecture was delivered by a member of the research team and described the role of public engagement in science and research. Then, during the week following the lecture, students attended a small-group recitation. At that session, a researcher and their regular recitation instructor introduced students to the public engagement activities that would be done as part of the course. This introduction allowed students a chance to ask questions about the purposes of the activities and the assignment requirements. A video also was shown to introduce students to nanotechnology and its applications and to pique their interest; the video used for this purpose was a TED talk available on YouTube. A2 was then completed as homework on students’ own time. A2 included the readings about nanotechnology and the experimentally varied cognitive engagement prompts. In addition, students were asked to complete measures of attitudes, knowledge, and engagement and to evaluate the reading materials.

Unlike the other assignments, assignment 3 (A3) was almost always completed during the students’ 1-h recitation (see Footnote 1). During A3, students were given brief descriptions of imagined future scenarios and questions designed to prompt deliberation about ethical, legal, and social issues related to nanotechnology development. Table 2.4 lists the scenarios used across the different studies. These scenarios, which are available in the supplemental materials, were developed by the research team and/or inspired by or adapted from scenarios used by other teams conducting public engagements around nanotechnology (e.g., Hamlett, Cobb, & Guston, 2008; see Footnote 2). The students completed these deliberative activities during class, under different conditions (e.g., working alone or discussing with their peers) as discussed in the next section. Students also completed additional measures of attitudes and engagement and, when relevant, measures of group processes.

Table 2.4 Scenarios used as part of assignment 3 (A3) to prompt deliberation of ethical, legal, and social issues (ELSI)

Finally, immediately after finishing A3 in class, students were given a link to online assignment 4 (A4). As part of A4, students completed a variety of post-measures including reporting their final attitudes and completing knowledge assessments. In the next sections, we give additional detail on the conditions varied as part of the assignments and the measures administered during each phase (assignment).

2.3 What Works? Experimentally Varied Deliberative Engagement Features

Table 2.5 compares the experimental manipulations used across studies. As shown, in all of the studies, we varied the presence or absence of explicit prompts to think critically. In Studies 2–4, we experimentally manipulated the presence or absence of peer discussion. In Study 5, we varied the construction of discussion groups to represent homogeneous or diverse attitudes, the inclusion of passive or active facilitators, and the characteristics of the background information provided. We also varied an introductory opinion question about the assignments to see if the question might be affecting student perceptions of the assignment. While the full details are given in the detailed methodological reports, here we give an overview of the conditions and our rationale for examining them.

Table 2.5 Experimentally varied engagement features in each study

2.3.1 Importance of Ethical, Legal, and Social Issues (ELSI) Topics in Science Education

In our pilot Study 1 and in Study 2, we noticed a tendency for the science students who served as our participants to doubt whether the public participation activities were beneficial to them and should be part of their biology course. Part of the problem, as students told us, was that the assignments were too long and boring due to the many survey measures we had included. Another part of the problem, however, seemed to be an expectation that the activities themselves should not be part of a “basic biology” course. In response to these views, in Studies 3 and 4, prior to engaging in any of the deliberation activities, we asked all students the following open-ended question:

What do you think? In your opinion, how important is it that science students--including beginning science students such as you and your classmates--learn how to think about the ethical, legal and social issues (ELSI) pertaining to science? In 2-3 sentences, give your answer and a brief explanation of why you think as you do.

Interestingly, students’ open-ended responses to this question suggested largely positive views prior to engaging in the activities, which led us to wonder if we had changed student assessments of the activities by asking their opinions in A1. Thus, in Study 5, rather than asking all students the ELSI importance question, we randomly assigned one-half of the students to answer the question. The other half were not asked to reflect on the usefulness of the assignment; instead, they were asked to give initial answers regarding the development of nanotechnology and its regulation.

2.3.2 Characteristics of the Background Information

To ensure high-quality materials that were accurate in their depiction of nanotechnology, nanoscientists on our team assisted in finding or recommending source materials and reviewed our final readings for accuracy and appropriateness. In Study 2, all students read the same background document, which was organized topically—that is, around topics such as the definitions of nanotechnology, nanogenomics, and human enhancement; how nanogenomics is being used now and might be used in the future; and varying viewpoints on the benefits and risks associated with nanogenomics. At the end were links and references to additional information. In Study 2 we used prompts to encourage different approaches to engaging with the background document, including one condition that asked students to organize the material they were reading into different perspectives varying in the extent to which they viewed nanotechnology as risky versus beneficial. Because our initial analyses suggested that the information organization condition did not impact student learning or other outcomes, in Studies 3 and 4, we experimentally varied the organization of the document itself, rather than prompting students to organize the information.

In Studies 3 and 4, we expanded the information provided and created two different versions of the background document. Both versions began with the same overview providing definitions and examples of nanotechnology, nanogenomics, and nanomedicine. However, one version then presented the information related to ethical, legal, and social issues in a format similar to and inspired by that of the National Issues Forums (NIF format). The other version included the same information but presented it in a topical format such as was used in Study 2. In the NIF-format materials, we identified explicitly opposing perspectives (e.g., “human enhancement as forward progress” vs. “human enhancement as unnecessary risk”) and listed the action implications, supporting evidence, pros and cons (trade-offs), and opposing points for each perspective. In the topical format, we did not explicitly identify opposing perspectives but instead discussed relevant topics that may impact people’s views on the use of nanotechnologies for human enhancement, such as “the costs of not pursuing available benefits,” “changing social concepts,” “right to autonomy,” and “unforeseen, unpredictable, unacceptable risks.” Study 4 materials were largely similar to Study 3 materials, except that we included additional information about the risks or drawbacks of nanotechnology development in response to student concerns that the materials were positively biased.

Finally, in Study 5 we used only the NIF-formatted document, but we altered that document to create a stronger and a weaker version. In Studies 3 and 4, we had found relatively few effects of the NIF versus topically organized documents but had found that students in our critical thinking conditions (described in the next section) were more negative about the background information. The use of stronger and weaker background documents was intended to explore whether our critical thinking students were more attentive to the quality of information or simply more critical. To create the weaker document, we altered some of the content and wording to introduce bias toward nanotechnology and removed some of the references supporting certain statements in the document (see the detailed Study 5 methods in the supplementary files for a comparison of the two versions).

2.3.3 Prompts for Cognitive Engagement

Readings about nanotechnology (given to students in A2 and referenced in A3) were accompanied by prompts to encourage deeper processing of the information. As discussed in more detail in Chap. 3 and in PytlikZillig, Hutchens, Muhlberger, and Tomkins (2017), prior research has found that students can engage with reading materials in a variety of ways, with different effects. For example, people may learn more if they engage in deep rather than surface-level processing, and some researchers have found that the manner in which students take notes and organize information impacts learning (Dinsmore & Alexander, 2012; Robinson & Kiewra, 1995).

In Study 2 we included three types of prompts during A2. One type encouraged students to organize the information using matrix note-taking, a type of note-taking that emphasizes comparison and contrast and has been found to improve learning over outline-format note-taking (Robinson & Kiewra, 1995). Another type asked students to practice and then apply critical thinking strategies (e.g., looking for bias and examining the quality of evidence available to support claims). Critical thinking prompts were hypothesized to be directly beneficial to the goals of deliberation, which emphasize the weighing of evidence. In addition, we expected the prompts to induce deeper processing of the information, which could have positive impacts on learning. The third type of prompt was designed as a control that would evoke engagement and require students to respond but would not necessarily evoke deep or strategic engagement. The control prompts, which we often reference as the “feedback” condition, asked students to provide feedback on the readings and list “insights, realizations, reactions, or new things that you learned as a result of reading the background document or exploring the additional resources in that document.”

As noted above, in Studies 3–5, we dropped the information organization (matrix note-taking) condition and focused only on the critical thinking and control prompts. In part this decision was made because the information organization condition appeared to have little impact relative to the control condition. In addition, beyond the classroom it seemed more feasible that we might be able to prompt people to think critically than to take notes in a certain way. At the end of Study 2, students were invited to complete a fifth assignment during which we piloted refinements to our prompts and administered additional personality measures. In Studies 3–5, we used refined critical thinking prompts in A2 that more gently nudged students to think critically, without asking them to practice thinking critically as had been the procedure in Study 2. This change was implemented because our measures of cognitive-affective engagement suggested that our Study 2 critical thinking prompts resulted in student disengagement relative to the control prompts (PytlikZillig et al., 2017).

In addition, during Study 3, discussion facilitators leading groups of students in the critical thinking condition used discussion prompts that asked students to judge the quality of information shared and to be alert for sources of bias. In Study 4, we extended the critical thinking prompts to become part of the deliberative materials that all students received during A3. In Study 4 the critical thinking A3 prompts asked students to say what they thought about the scenario and included instructions reminding them of critical thinking skills they could apply (e.g., citing sources and looking out for bias), as well as explicitly asking them to consider what someone with a different perspective might think about the issues. In the Study 4 A3 control condition, students were simply asked to write down their reactions to the scenarios, and no scaffolding of responses was provided; instead of explicitly asking what someone with a different perspective might think, a follow-up question simply asked what other questions they had about the topics under discussion. In Study 5, however, the critical thinking prompts were not used during the A3 discussions; instead, active facilitators sought to promote brainstorming, analysis, and synthesis of information and different perspectives, while passive facilitators merely read the discussion instructions and scenarios to the group.

2.3.4 Peer Discussion

In each of Studies 2–4, we varied the presence versus absence of peer discussion during the in-class deliberative activities. We randomly assigned each student to a discussion or non-discussion condition. Students did not know whether they would be working alone or discussing issues with peers until immediately prior to A3. Students assigned to the discussion condition were then grouped with others who were in the same A2 condition (e.g., NIF vs. topical background information, accompanied by critical thinking vs. control prompts). Students assigned to deliberate without peer discussion were directed to a separate room to read through and work on their deliberative activities during the same class period as the discussions took place. Both the discussion and non-discussion classrooms were monitored by a researcher or recitation instructor. All students, whether in the discussion condition or deliberating alone, had condition-appropriate background materials available to them either in hard copy or via an online link. All students responded to the scenarios using online survey forms; a few paper copies of the forms were also available in case of technical difficulties.

In general, more students enroll in the introductory biology course in the fall than in the spring, resulting in larger sample sizes for fall semesters. In fall of 2011 (Study 3), we therefore assigned a convenient subset of the students to complete their deliberation activities via asynchronous online discussion with their group members. Because random assignment was not used to place these students in their condition, their data was treated as pilot data for exploring the impacts of students discussing online instead of face to face.

In fall of 2012 (Study 5), all students deliberated in groups. In part, the decision to have all students in groups was due to student reports of much greater enjoyment and engagement when deliberating with peers rather than alone. In addition, having all students in groups allowed us to vary a specific theory-relevant aspect of discussion: attitudinal homogeneity, which is discussed in greater detail in Chap. 4. Positive and negative attitudes toward nanotechnology were determined by examining student responses to attitude questions answered at the end of A2. Once students were identified as tending to have relatively positive or negative attitudes, two-thirds were randomly assigned to homogeneous (positive or negative) groups and one-third to mixed (heterogeneous) attitude groups.
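For readers who wish to see this assignment logic concretely, here is a minimal sketch in Python. The median split on attitude scores, the group size, the tuple data format, and the function name are illustrative assumptions rather than our exact procedure.

```python
import random

def assign_discussion_groups(students, group_size=4, seed=1):
    """Sketch: split students into relatively positive vs. negative
    attitude pools (here via a median split of an attitude score),
    then randomly send two-thirds of each pool to homogeneous groups
    and one-third to mixed-attitude groups.

    `students`: list of (student_id, attitude_score) tuples.
    """
    rng = random.Random(seed)
    scores = sorted(score for _, score in students)
    median = scores[len(scores) // 2]
    positive = [s for s in students if s[1] >= median]
    negative = [s for s in students if s[1] < median]

    def split_two_thirds(pool):
        rng.shuffle(pool)
        cut = (2 * len(pool)) // 3
        return pool[:cut], pool[cut:]  # (homogeneous share, mixed share)

    pos_homog, pos_mixed = split_two_thirds(positive)
    neg_homog, neg_mixed = split_two_thirds(negative)

    def chunk(pool):
        return [pool[i:i + group_size] for i in range(0, len(pool), group_size)]

    homogeneous = chunk(pos_homog) + chunk(neg_homog)
    # Interleave remaining students so each mixed group combines attitudes.
    # (zip truncates uneven pools; a real procedure would place every student)
    mixed_pool = [s for pair in zip(pos_mixed, neg_mixed) for s in pair]
    return homogeneous, chunk(mixed_pool)
```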

2.3.5 Active Facilitation During Discussion

In Studies 2–4, facilitators were used to lead group discussions during A3, following recommendations made by many deliberative theorists and practitioners (Dillard, 2013). The facilitators were students recruited from prior semesters of the course who had previously taken part in the deliberative activities. The facilitators were trained by project researchers and given facilitation guides to ensure the use of common methods, prompts, and follow-up questions. In Studies 3–4, the facilitators were also given prompts specific to the critical thinking or control conditions to build on the prompts used in A2. In Study 5, active facilitators were instead given a list of prompts intended to support and encourage brainstorming ideas, analyzing and evaluating different perspectives, and synthesizing information. To investigate the importance of active facilitation, Study 5 facilitators were trained in both active and passive methods of facilitation. Passive facilitators were instructed only to read the scenarios and questions used in the deliberation, whereas active facilitators were instructed to use the full range of facilitation techniques and the prewritten prompts and follow-up questions to evoke student interaction and consideration of different viewpoints.

2.4 For What Deliberative Engagement Outcomes?

Examination of the purported benefits and drawbacks of deliberation led us to focus upon, operationalize, and measure outcomes (which could serve as dependent variables) that included knowledge gains, changes in attitudes toward the topic of discussion, development of democratic or deliberative attitudes and other civic capacities such as political motivation and self-efficacy, changes in trust in scientists and regulators, and acceptance of policy resulting from such engagements. Table 2.6 lists the major constructs measured in each of our studies and the assignments during which the measures were administered. Here, we give an overview of the outcome measures available for each study, focusing on major measures used in at least two of the four reported studies. Readers are referred to the supplemental materials (the detailed methods and materials accompanying this book) for specifics, including all items, how item wording may have been revised over time, and additional measures and items.

Table 2.6 Major constructs assessed and measures administered in each study

2.4.1 Knowledge

Given the theoretical and empirical differences between subjective and objective knowledge (as discussed in greater detail in Chap. 3), in all studies, we assessed both subjective and objective knowledge with multiple items and nearly always at multiple time points. Subjective knowledge was assessed with items such as “How familiar are you with nanotechnology?” and “How familiar are you with how nanotechnology is used in genetics research and development?” followed by five-point response scales ranging from “not at all familiar” to “extremely familiar.” When multiple items were used, Cronbach alphas for scales created by averaging across subjective knowledge items at a specific time point were >0.80.
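For readers working with the accompanying data sets, Cronbach’s alpha for such multi-item scales can be recomputed directly from the item responses. Below is a minimal sketch in Python using the standard formula; the two-item example ratings are hypothetical.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents x items array of ratings.

    items: 2-D array-like, rows = respondents, columns = scale items
    (e.g., the subjective knowledge items at one time point).
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of scale totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Example: five respondents answering two five-point familiarity items
ratings = [[1, 2], [3, 3], [4, 5], [2, 2], [5, 4]]
print(round(cronbach_alpha(ratings), 2))  # ~0.91
```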

Objective knowledge was assessed with multiple choice and true-false items based on information presented in the A2 reading. In Studies 3 and 4, some of the objective knowledge items were accompanied by a confidence rating (“I am _____ confident in my answer,” with response options ranging from “not at all” to “completely”). As noted in Chap. 3, we evaluated the adequacy of our knowledge questions by examining the patterns of responses exhibited by individuals over time (Chatterji, 2003). Items sensitive to changes in knowledge should be more likely to be answered incorrectly prior to information presentation and relatively more likely to be answered correctly afterward. For each question, we examined the proportion of students who answered incorrectly prior to the reading and correctly after the reading, and compared that percentage to the percentages of students showing other patterns. For example, a large percentage of students answering a question incorrectly both before and after information presentation would indicate that the item was perhaps too difficult to detect knowledge changes, or that the information was not adequately covered during information presentation and consideration. By examining the questions in this way, we were able to revise our knowledge questions over the first couple of studies, and for each study we chose the items most likely to be sensitive to actual changes in student knowledge.
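This pre/post pattern analysis amounts to a simple cross-tabulation of correctness before and after the reading. Below is a minimal sketch in Python; the boolean input format and function name are illustrative assumptions.

```python
from collections import Counter

def item_response_patterns(pre_correct, post_correct):
    """Tabulate pre/post correctness patterns for one knowledge item.

    pre_correct, post_correct: parallel lists of booleans, one entry
    per student, indicating whether the student answered the item
    correctly before and after the A2 reading. Returns the proportion
    of students showing each pattern.
    """
    patterns = Counter(zip(pre_correct, post_correct))
    n = len(pre_correct)
    labels = {
        (False, True): "wrong -> right (sensitive to learning)",
        (False, False): "wrong -> wrong (too hard / not covered)",
        (True, True): "right -> right (too easy or prior knowledge)",
        (True, False): "right -> wrong (noise or guessing)",
    }
    return {labels[p]: count / n for p, count in patterns.items()}
```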

We also assessed students’ perceptions of their subjective learning during some of the assignments, or for the module as a whole, by asking questions such as “How much do you feel you learned about nanotechnology as a result of working on this assignment?” accompanied by a five-point scale ranging from “nothing that I didn’t know before” to “a great deal.”

2.4.2 Attitudes Toward Nanotechnology

Attitudes toward nanotechnology were assessed in a number of ways, with certain questions asked during every assignment. As noted in Chap. 4, two rating questions were used repeatedly throughout the studies. One item asked about relative risks and benefits, e.g., “Based on what you know right now, do you think the risks of nano-biological research and development outweigh the benefits? Alternatively, do you think that the benefits outweigh the risks?” The risk-benefit items were followed by a multipoint scale ranging from “The risks greatly outweigh the benefits” to “The benefits greatly outweigh the risks.” Risk versus benefit items have been commonly used in prior research on public attitudes toward nanotechnology (e.g., see the review by Satterfield, Kandlikar, Beaudrie, Conti, & Harthorn, 2009). The second item asked students about their perceptions of the need to regulate nanotechnology development. This item was often accompanied by a 100-point scale and read: “In your opinion, how much regulation is needed with respect to nano-biological research and development? Move the slider to reflect your view. 0 means that you believe there should be NO regulation of nano-biological developments. 100 means that you believe EVERY ASPECT of nano-biological research and development should be HIGHLY regulated.”

Beyond those two single-item measures, which are analyzed in detail in Chap. 4, we also assessed a number of other, more specific attitudes. These measures were often assessed pre and post (e.g., at A1 or A2 and again at A4) but less frequently throughout the assignments. A number of opinion items were obtained or adapted from prior research (Hamlett & Cobb, 2006; Scheufele & Lewenstein, 2005). Examples include “The government will effectively manage any risks associated with nanotechnology” and “There are serious ethical problems associated with not quickly developing nanotechnologies.” These items were typically accompanied by seven-point response scales ranging from “strongly disagree” to “strongly agree.”

To assess different values associated with nanotechnology, participants also were presented with goals related to nanotechnology development and asked to assess their importance. Examples of these goals included “minimizing potential environmental risks” and “maximizing potential benefits for human enhancement.” Response options typically fell on a five-point scale ranging from “unimportant” to “extremely important.” In some studies, however, we also asked students to allot percentages of the total available funding to the different goals, explaining as follows: “There are, of course, only a limited number of resources (time, effort, money) that can be spent on each of the things that you rated above. So, please consider what percentage of the U.S. resources (i.e., total time, effort, and money) should be spent on each. Make sure that your percentages sum to 100%, and make sure that you give the items you rated the most important the greatest percentage of resources. What percentage of resources would you assign to each category?”

A number of more specific potential risks and benefits were assessed using items taken or adapted from prior research on public attitudes toward nanotechnology (Cobb & Macoubrie, 2004; Lee, Scheufele, & Lewenstein, 2005). Examples included nanotechnology leading to “The extension of human life expectancy” and “Pollution of the environment by nanomaterials.” Responses fell on two different scales. The first, concerning the likelihood of the risks or benefits, was typically a five-point scale ranging from “not at all likely” to “extremely likely.” The second, concerning importance, was often a seven-point bipolar scale ranging from “very important to AVOID” to “very important to ACHIEVE,” with “NOT IMPORTANT to avoid or achieve” as the midpoint.

In addition to assessing attitudes directly, we also asked students to report their subjective sense of being likely to change, or of having changed, their minds. During the later assignments, after certain of the attitude questions, students were asked closed-ended rating questions such as, “Overall, from the time you began these exercises, until now, to what extent did you change your mind about the…question above?” followed by a five-point scale ranging from “not at all” to “a great deal.” Similarly, students were sometimes asked whether they changed their opinions about nanoscientists or policymakers. In Study 4 we also asked students to rate the extent to which they had strengthened their original opinions. Beginning in Study 3, during A1, we also asked how likely students thought it was that they would change their views. In some of the studies, the rating questions were accompanied by open-ended questions asking students to explain how and why their opinions changed or might change.

Finally, in each of our studies, we asked students to answer open-ended opinion questions regarding their attitudes toward nanotechnology. In most studies students were asked the same or similar questions at least twice, during A1 or A2 and again at A4. Overarching open-ended questions typically asked students to describe the developments relating to nanotechnology that should be prioritized or avoided, as well as the regulations they felt should or should not be imposed. Students were also asked to give reasons for their views and at times were explicitly asked to consider opposing views and how they would answer those opposing views.

2.4.3 Perceptions of Actors: Nanoscientists and Policymakers

We assessed perceptions of nanoscientists and policymakers in most of our studies, often at two different time points. Questions included items pertaining to familiarity/certainty (e.g., “How much do you know about nanoscientists?” and “To what extent do you feel certain in your views about policymakers who regulate nanotechnology?”). We also assessed confidence in, and perceptions of the trustworthiness of, nanoscientists and policymakers. For example, students were asked, “To what extent do you have confidence in nanoscientists to…” “do their jobs well,” “meet their professional responsibilities,” and “make decisions about the most important directions for future development [of nanotechnology].” Responses typically fell on a five-point scale ranging from “not at all” to “a great deal.” Following prior research on trustworthiness (Mayer, Davis, & Schoorman, 1995; PytlikZillig et al., 2016), we also often included items assessing specific trustworthiness and distrustworthiness components, including whether participants felt nanoscientists and policymakers “are fair,” “…are primarily motivated by what will benefit them personally,” “…don’t really care about the long-term risks of their decisions,” and “…are dishonest.”

2.4.4 Policy Scenario: Policy Preference, Acceptance, and Support

At the end of the study during A4, we assessed policy preferences and acceptance/support. As described in Chap. 5, we conceptualized policy acceptance/support as a willingness to accept, tolerate, and not resist a policy, even if only with reservations or for a time. In contrast, we conceptualized policy preference as reflecting one’s personally preferred policies: the extent to which people agree with, feel good about, and prefer a given policy.

We assessed policy preference by asking people to indicate the extent to which they were for or against a given policy. For example, participants were asked questions such as: “If legislation were being considered that would speed up nanogenomics research and development in the area of human enhancement by increasing funding and decreasing restrictions... Would you be FOR or AGAINST such legislation?” followed by a seven-point scale ranging from “strongly AGAINST” to “strongly FOR.” In Studies 2–4 we randomly assigned students to one of two versions of the question, one asking about speeding up research and development and one asking about slowing it down, in order to find out whether slowing-down and speeding-up policies were perceived as direct opposites or as somehow different.

We assessed policy acceptance using an imagined scenario. We were interested in whether salient public input processes could increase public acceptance, and thus we wrote a scenario designed to first make those processes salient and then assess acceptance/support in light of them. In our scenario, the government purportedly listened to input from the public, with the input purportedly gathered according to the same procedures the student had experienced; the government then made a decision consistent with the public input. In Studies 2–4, after describing the scenario, we assessed policy acceptance by asking participants if they agreed that “The government made the right choices with regard to this issue,” “The government made the same choices you would have made,” and/or “The government should have made different decisions about the issue” (reversed). In Study 5 we directly assessed policy support/resistance in addition to acceptance by asking if participants agreed or disagreed that “I would support this decision made by the government [in the scenario just read]” and “I would resist this decision made by the government” (reversed). In Study 5 we also included an acceptance item: “Because of the processes used, I would accept this decision made by the government.”

Regarding the validity of assessing policy acceptance or support as separate from policy preferences, students were allowed to explain their answers, and some explanations indicated that at least some students perceived the questions as distinct. Specifically, the acceptance answers of at least some students took into account the results of the public engagement processes used to come to the decision. As an example, here is an explanation given by a student in Study 4 who indicated a preference for pro-nanotechnology policies (expressing that he was “strongly against” slowing down research and increasing regulation). Despite this pro-development preference, the student also agreed (not strongly, but also not slightly) that the government in the scenario made the same choices that he would have made by slowing down development of nanotechnology and increasing regulations. As an explanation he wrote:

If the majority of people feel that they’re not ready for the technology then it’s hard to change that. The government is supposedly the will of the people…If people of a nation-state [choose] not to participate in new technologies then that is the will of the country, the government should abide by that will….

2.4.5 Motivational Variables

We also assessed certain individual differences in motivation and attitudes or beliefs that, based on prior theory, might be expected to be affected by participation in deliberative engagements. In each study, at both A1 and A4, we assessed attitudes toward deliberative engagement or deliberative citizenship using five items relating to the value of deliberation as an element of active citizenship. These included the statements “A good citizen should listen to people who disagree with them politically” and “A good citizen should be willing to justify their political views.” These items were taken from Muhlberger and Weber (2006), and when averaged into a scale, the scale’s internal consistency (Cronbach alpha) was >0.75 in each of the four studies.

Political self-efficacy can also be impacted by engaging in political discussions with others. We assessed political self-efficacy at A1 and A4 in each study, using four to six items that formed a scale with Cronbach alphas >0.70 in each of the studies. Examples of these items include the statements “Sometimes politics and government seem so complicated that a person like me can’t really understand what’s going on” (reverse-scored) and “I consider myself well-qualified to participate in politics.” These statements were adapted from similar items used in the American National Election Study (2000).
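Scoring such scales involves reverse-scoring the negatively worded items before averaging. Below is a minimal sketch in Python; the 1–5 response coding and example data are assumptions for illustration (see the supplemental materials for the actual response formats).

```python
import numpy as np

def score_scale(responses, reverse_items=(), scale_max=5):
    """Average multi-item ratings into a scale score, reverse-scoring
    where needed (e.g., the "politics and government seem so
    complicated..." item above).

    responses: respondents x items array of ratings on a 1..scale_max
    scale; reverse_items: column indices to reverse-score.
    """
    r = np.asarray(responses, dtype=float)
    for j in reverse_items:
        r[:, j] = (scale_max + 1) - r[:, j]  # e.g., 5 -> 1 and 1 -> 5
    return r.mean(axis=1)  # one scale score per respondent

# Example: four items, the first reverse-scored
print(score_scale([[2, 4, 5, 4], [5, 1, 2, 1]], reverse_items=[0]))
```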

Finally, drawing from self-determination theory, which suggests that people’s motivations can become internalized over time and with various experiences, we assessed three types of political motivation at both A1 and A4 in each study: intrinsic, introjected, and extrinsic. The items used for these measures were adapted from Koestner, Losier, Vallerand, and Carducci (1996) and Losier and Koestner (1999). To measure the extent to which a student felt motivated to be politically engaged by factors intrinsic to their concept of self, we used statements such as “I follow political and social issues because I want to learn more things” and “I follow political and social issues because I think it’s important.” The extent to which a student felt motivated by factors they had internalized from outside sources (introjected motivation) was assessed with items such as “I follow political and social issues because it bothers me when I don’t.” The extent to which a student felt motivated by external factors (extrinsic motivation) was assessed with a single item: “I follow political and social issues because that’s what I’m expected to do.” When multiple-item scales were used to assess these constructs, the Cronbach alphas were >0.70.

2.4.6 Evaluation of Public Engagement

We assessed student perceptions of public engagement and its usefulness in various ways. In our module evaluation questions, we often asked, “Overall, to what extent was the public participation module a beneficial part of your learning in this course?” with responses on a five-point scale ranging from “not at all” to “a great deal.” In addition, to better understand perceptions of conducting ELSI engagement activities in the context of science courses, we asked students at different points, sometimes both before and after engaging in all of the module activities, to answer an open-ended question that read, “In your opinion, how important is it that science students--including beginning science students such as you and your classmates--learn how to think about the ethical, legal and social issues (ELSI) pertaining to science? In 2–3 sentences, give your answer and a brief explanation of why you think as you do.” Finally, to assess perceptions of the value of public engagement in general, in A4 of Study 5, we additionally asked, “Some people feel that the government should primarily rely on expert opinions, not citizen opinions when making policy decisions. What do you think? In your opinion, how much weight should government give to citizen opinions (like you wrote above) when making decisions about the future of nanotechnological development and regulation?” This question was accompanied by a five-point response scale ranging from “None! The government shouldn’t be considering opinions of everyday people like me” to “A lot of weight! The government should take opinions like mine very seriously.”

2.5 How and Why: Mediators and Moderators

Having identified features and outcomes, we turn to the next challenge: connecting them via explanatory theories. What mechanisms might explain why certain public engagement features might or might not connect to certain outcomes in certain contexts? And what moderators might impact different effects? One primary mechanism examined in our studies was “engagement”—or, rather, the varieties of ways that people can engage (PytlikZillig et al., 2013). Other process measures assessed group processes that took place in A3; student perceptions of the quality of the background readings, assignments, and public engagement module as a whole; whether and why (or why not) students felt they changed their minds on different opinion or attitude questions; and their open-ended responses to questions about the development and regulation of nanotechnology. Potential moderators that we assessed included participant characteristics such as demographics, political ideology and party, and various personality traits such as openness, conscientiousness, need for cognition, and dispositional trust.

2.5.1 Cognitive-Affective and Behavioral Engagement

It is interesting that although engagement is, in name, central to public engagement, engagement research does not often focus on the varieties of ways individuals might be engaging. How do people feel when they are engaging? Are they bored, interested, or annoyed? Maybe they are distracted by things happening on their Instagram feed. If they are fully engaged with the topic, are their minds open and listening to a perspective they have not heard before, or are their minds busy plotting rebuttals to earlier remarks?

As described elsewhere (PytlikZillig et al., 2013), engagement is a varied and multifaceted state that includes the affective, cognitive, and behavioral experiences of individual participants at different points during the engagement activities. We drew from a number of theories in developing self-report engagement scales, including cognitive theories of deep and surface cognitive processing (Chin & Brown, 2000; Dinsmore & Alexander, 2012), and educational theories of metacognition and active learning strategies (McCormick, 2003; Veenman, Van Hout-Wolters, & Afflerbach, 2006; Vermunt & Vermetten, 2004). Drawing from theories of emotion and affect, we incorporated items asking about anger and boredom (Fahlman, Mercer-Lynn, Flora, & Eastwood, 2013; Harmon-Jones, Schmeichel, Mennitt, & Harmon-Jones, 2011). Drawing from personality and social psychology, we devised measures of states of open-mindedness, creativity, closed-mindedness, conscientiousness, and social engagement (Akbari Chermahini & Hommel, 2012; Fleeson, 2001; PytlikZillig, 2001). Then, in our studies, we asked participants to self-report how they had engaged during different activities—such as reading and responding to background readings in A1 or deliberating alone or with peers in A3. This allowed for the investigation of the roles of certain engagement “states” that are explicitly or implicitly referenced by numerous theories that could be applied in the contexts of public engagements.

To assess engagement, participants were asked to rate various statements, each beginning with the stem “During the assignment, I…” (e.g., “felt focused”). Responses fell on a five-point scale ranging from “not at all” to “a great deal.” Items were taken from or reported in PytlikZillig et al. (2013) and were intended to assess each of eight different ways of engaging: active learning (e.g., “identified questions that I still had about the topics”), conscientious (e.g., “gave careful consideration to all of the options presented”), open-minded (e.g., “felt open to hearing new ideas about the topics”), social (e.g., “discussed my ideas about the topics with others”), creative (e.g., “used my imagination”), disinterested (e.g., “was impatient to get this over”), angry (e.g., “felt angry”), and closed-minded (e.g., “felt like my mind was already made up”). Finally, in addition to the items assessing states of engagement, we also at times asked students to self-report the amount of time they spent on various portions of assignments.

2.5.2 Self-Reports of Influences on Attitudes

In addition to the questions about whether and how much attitudes changed (listed under outcomes above), in some of the studies we asked students to rate influences on their attitudes, including at times whether and how much a specific assignment influenced their opinions or attitudes about nanotechnology. In some studies, they rated how much certain factors (e.g., talking about the issue with others, the background reading, the views of people important to them) impacted the views they expressed in the surveys. At times the rating questions were accompanied by open-ended questions asking students to elaborate on why they changed their views.

2.5.3 Participant and Facilitator Perceptions of Group-Relevant Processes

Another set of process measures, used in Studies 4 and 5, asked student participants involved in discussions to reflect on their discussion experiences. Questions typically focused on perceptions of their group and group members and perceptions of their facilitator. In addition, some of our studies include ratings of group processes made by the facilitators.

Regarding participant perceptions of their group and group members, some statements concerned identification with the group. Examples include “I identified with my group during the discussion” and “I felt like an ‘outside’ member of my group” (reverse coded). Responses were on a seven-point scale ranging from “strongly disagree” to “strongly agree.” Other questions measured group consensus, such as “How much disagreement was there within your group at the beginning of discussion?” and “How much agreement was there within your group at the end of discussion?” Additional questions included “Overall, how do you feel about the other people in your group?” (seven-point scale from “very negative” to “very positive”) and “How satisfied were you with your small group discussion in class today?” (five-point scale from “not at all satisfied” to “extremely satisfied”). Some questions focused on individual group members. Participants were asked to identify each member of their group by first name and last initial and were then asked questions referencing each individual, for example, “Before today’s discussion, how familiar would you say you were with ________?” (five-point scale ranging from “not at all familiar” to “extremely familiar”) and “Regarding the ethical scenarios you just discussed, how similar do you think __________’s opinions are to your opinions?” (five-point scale ranging from “not at all similar” to “extremely similar”).

Regarding perceptions of their facilitator, participants in Study 5 were asked to indicate the extent of their agreement with statements about how active or passive their facilitator was. These statements all began with the phrase “The moderator of our group…” Examples include “was very active in leading our group discussion” and “summarized what he/she heard the group saying.” Responses fell on a seven-point scale ranging from “strongly disagree” to “strongly agree.” These questions served as a check on our manipulation of facilitator activity.

In each of the studies, facilitator perceptions of group processes were assessed at the group level, with the questions refined with each study. For example, in Study 2, facilitators gave their impressions regarding how interested participants were in the discussion questions and how much the students stayed on topic and discussed alternative perspectives. In Study 3, facilitators additionally gave impressions about whether students used critical thinking skills, such as providing and evaluating evidence for different perspectives. In Study 4, facilitators also reported some of the topics that came up in discussion, and in Study 5, they were asked to perform a self-check of how well they adhered to the active and passive conditions during their facilitation.

2.5.4 Assignment and Information Evaluations

After the most substantive of the assignments (A2 reading, A3 deliberation) and at times at the end of the entire set of public engagement activities (during A4), students were asked to evaluate the quality of various components of the public engagement. For example, they rated the quality of the background information by responding to statements beginning with the stem “The information provided in this assignment was…” and ending with phrases such as “unbalanced” and “fair in its presentation of the issues,” to assess perceptions of bias. They also rated the clarity or understandability of the background information and its quality in terms of accuracy and thoroughness. In addition to rating the background information, within specific studies, students were at times also asked to rate the quality of A2 as a whole or the usefulness and value of the entire public engagement “module” (set of activities) used in the course.

2.5.5 Written Responses and Comments

Open-ended responses also were gathered from our students. In addition to those already described in previous sections of this chapter, students’ written responses to open-ended questions during the A3 deliberation are available, by request, for merging with our data sets. These open-ended data include responses to the scenarios that students read in Studies 2–4 and the drafted and revised “law” for the regulation of nanotechnology that students were asked to write during A3 of Study 5. Often students also left open-ended comments at the end of a given assignment or study.

2.5.6 Data Quality Checks

For the most part, our open-access data sets include all consented data regardless of quality; exceptions are noted in our supplemental materials. In each of the studies, we included some assessments of data quality, including questions that overtly asked students whether they had answered honestly (vs. randomly, without reading, or in a way they felt they should answer even though it differed from their honest response). In Study 5 we also included, in some assignments, items explicitly directing participants to choose a certain response (e.g., directing them to choose “strongly agree”); students failing to follow those instructions may have been answering the survey inattentively. In addition, it may be possible to ascertain random or inattentive responding by examining the pattern of responses to certain of our measures—such as whether students classified presumed negative outcomes (e.g., pollution) as negative.
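As an illustration of how such quality checks might be scripted against the open data, below is a sketch using pandas. All column names and codings here (“directed_item,” “honesty_check,” “pollution_rating”) are hypothetical placeholders; the actual variable names appear in the supplemental codebooks.

```python
import pandas as pd

def flag_inattentive(df):
    """Flag potentially inattentive respondents in a survey data frame.

    Hypothetical columns: 'directed_item' should equal the instructed
    response ("strongly agree"); 'honesty_check' is the self-reported
    honesty item; 'pollution_rating' classifies a presumed negative
    outcome (negative values = classified as negative).
    """
    flags = pd.DataFrame(index=df.index)
    flags["failed_directed"] = df["directed_item"] != "strongly agree"
    flags["reported_dishonest"] = df["honesty_check"] != "honest"
    # Rating a presumed negative outcome (pollution) as positive may
    # indicate random responding
    flags["misclassified_negative"] = df["pollution_rating"] > 0
    flags["any_flag"] = flags.any(axis=1)
    return flags
```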

2.5.7 Demographics and Individual Differences

Other potential moderators assessed in our studies include measures of demographics and individual differences. In all studies, we assessed self-reports of demographics including age, gender, year in school, typical grades, and prior experience with ethics coursework. In Studies 2–4 we also asked for self-reports of both parents’ highest level of education.

In all four studies, we also included multi-item measures of ideological identity, interest in politics, dispositional trust, and need for cognition. In some but not all studies, we also assessed trust in institutions, cultural cognition, authoritarianism, and certain of the Big 5 traits of openness, agreeableness, emotional stability, extraversion, and conscientiousness. The details of these additional measures are in our supplemental materials.

2.6 Conclusion

In this chapter, we presented a large number of comparisons and details of variations between studies in order to give readers an overview that will allow them to assess whether our data may be useful for their own purposes, as well as to provide background on the measures we reference in our remaining chapters. Readers will find even more detail in the documentation that accompanies the data sets in the supplemental materials.

Next, to provide exemplars of how our data may be used to test various theories, we turn our attention to a much narrower set of variables that comprise some of the most desired outcomes of public engagement: increases in knowledge, changes in attitudes or opinions, and acceptance of policy decisions.