Recent discussions in this journal have focused on the observable “fragmentation” (Yurevich 2009) or “differentiation” (Zittoun et al. 2009) of psychology as a scientific discipline. This discussion is unquestionably a critical one given that further exploration in the various psychological disciplines is likely to produce more complex, rather than less complex, concepts and explanations regarding psychological phenomena; as these complexities mount, the distance between these different disciplines may continue to grow (Zittoun et al. 2009). It is important to note, however, that the phenomenon of fragmentation is not unique to psychology—it can be seen within and between most, if not all, domains of knowledge. As knowledge becomes more specialized and fine-grained, it becomes more difficult to communicate it to those who do not share the same experiences or expertise.

This discursive distance is especially noticeable between those in scientific fields and those outside them and has become apparent in the context of recent scientific controversies. In the case of global climate change, embryonic stem cell research, and vaccines, public controversies have emerged over the validity of science that is otherwise generally accepted within the relevant scientific fields. These controversies have continued despite scientific consensus—as a result, scientists have often thrown up their hands, lamenting that the public ‘just doesn’t understand.’ Throughout these public controversies, the media have played a specific and influential role as mediator between scientist and layperson. Indeed, journalists play a large part in filtering and shaping scientific messages as they are delivered to their intended audience.

Using Yurevich’s terms, one could say that scientists have been operating primarily along the lines of a natural-scientific paradigm, emphasizing the importance of rigorous and empirical findings, while the media have reflected (even cultivated) different popular sociodigms, placing a spotlight on communities of laypeople who have different “circles of communication” and refer to different people as authorities on the issue at hand (Yurevich 2009). However, to the extent that one takes an anthropological view of culture—in which culture reflects the common beliefs and behaviors that stem from shared experience (Strauss and Quinn 1997)—one could also view the divergence between paradigm and sociodigm as a gap between different cultures.

The idea of such a cultural gap harkens back to C.P. Snow’s famous ‘two cultures’ (Snow 1993), and represents a somewhat similar dichotomy between scientific and non-scientific ways of knowing; unlike Snow’s formulation, however, the cultures I refer to here are not restricted to so-called intellectuals. At the heart of these ‘scientific’ and ‘non-scientific’ cultures lie different conceptions regarding the types of information that can be considered valid and reliable when seeking answers to questions about the world. This is not to suggest that the scientific and non-scientific cultures are unified, homogeneous groups. Rather, the terms ‘scientific culture’ and ‘non-scientific culture’ are used here to reflect the idea, introduced above, that shared experiences can result in common beliefs and behaviors that contribute to the formation of different cultures.

According to pragmatist C.S. Peirce, there are four methods by which we can attempt to answer questions about the world and ‘fix’ our beliefs: tenacity, authority, a priori, and science (Peirce 1957). While we may not agree with the specific methods postulated by Peirce, his description of the relationship between method of inquiry and the fixation of belief applies well here. That is, those in both the scientific and non-scientific cultures engage in active inquiry about the world, but they rely on different methods for doing so. Within the scientific culture, the most valid and reliable method of fixing belief is, of course, that of science; within the non-scientific culture, however, other methods of fixing belief may be considered just as valid and reliable.

Central to this perspective, and in line with the theoretical perspectives of pragmatism and symbolic interactionism (e.g., Blumer 1969), is the idea that humans are constantly interacting with and shaping the world around them through the meaning-making process. Meaning is not inherent to things out in the world—rather, we actively derive meaning from our experiences with the people and things we encounter. To use Peirce’s terminology, when we attempt to ‘fix’ belief, we necessarily frame and shape that belief, and this is true for those in the scientific and non-scientific cultures alike.

With this perspective in mind, the purpose of this paper is to examine the epistemic gap between scientific and non-scientific cultures as illustrated by the ongoing public controversy over the measles-mumps-rubella (MMR) vaccine. Specifically, I will examine how the different epistemic positions espoused by those who value the scientific method of belief fixation and those who value non-scientific methods of belief fixation can influence public understanding and decision-making regarding scientific issues. In highlighting specific examples of discourse from scientific and journalistic sources, I hope to demonstrate how public discourse, and the positions revealed therein, can have serious downstream effects, especially in the case of scientific controversies. Ultimately, the goal of this analysis is to help identify the ways in which we can successfully bridge the gap between the scientific and non-scientific cultures so that the two groups can communicate with each other and participate in a deliberative and democratic policy-making process.

History of the MMR Controversy

In 1998, Andrew Wakefield and colleagues published a study in the prestigious medical journal The Lancet in which they investigated the co-occurrence of bowel disease and developmental disorder in 12 children (Wakefield et al. 1998). The authors noted that for eight of the 12 children studied, parents reported that behavioral symptoms of developmental disorder emerged shortly after MMR vaccination. Based on this reported correlation alone, the authors speculated that the MMR vaccine may play a causal role in the onset of autism spectrum disorders. The article received considerable coverage in the media and spawned a highly publicized debate between scientists, who uniformly found no empirical evidence for the MMR-autism link (see, for example, Offit and Coffin 2003), and advocates and parents, who cited anecdotal evidence for the relationship.

Subsequent to the publication of the study and the media attention it received, MMR vaccination rates declined in some areas of the United Kingdom (Jefferson 2000; O’Dell and Brownlow 2005; Serpell and Green 2006; Speers and Lewis 2004). In Dublin, Ireland, the decrease in MMR vaccination led to an outbreak of measles that resulted in the hospitalization of over 100 children and the death of three children (McBrien et al. 2003). As a consequence of low MMR vaccination coverage, measles is once again considered endemic to the United Kingdom (Health Protection Agency 2008). While the decline in vaccination has not been so dramatic in the United States (where vaccination is often a requirement for school entry), research indicates that some parents are delaying or forgoing vaccinations for their children due to concerns about adverse outcomes such as autism (e.g., Freed et al. 2010).

Many scientists consider media coverage of the Wakefield study to have been a driving influence in parents’ decisions not to immunize their children with MMR (Speers and Lewis 2004). On the other hand, some scholars have highlighted changes that they feel need to be made in the practice of both science and journalism in order to promote better public understanding of the science of vaccines and autism and to prevent such public health crises in the future. Regardless of who is considered to be ‘at fault,’ the MMR controversy highlights the epistemic and discursive gaps between scientists and non-scientists. Most importantly, the MMR controversy provides clear evidence that such gaps can have real consequences for both individual-level decision-making and public health.

The Role of Scientists in Public Understanding of Science

While many researchers seem ready and willing to engage with the public on scientific issues and improve their communication skills (Weigold 2001), naïve expectations about the process of effective science communication remain (Welch-Ross and Fasig 2007). Indeed, the sentiment prevails that scientists are generally ineffective at communicating with the public (Weigold 2001). One reason that efforts at science communication may be ineffective, especially on controversial issues, is that they are often based on a ‘deficit model’ of public understanding of science.

The deficit model essentially maintains that the reason people do not understand a particular scientific concept, or science more generally, is that they do not have sufficient information to do so (Bauer et al. 2007; Burns et al. 2003; Miller 2001; Sturgis and Allum 2004; Weigold et al. 2007). Because laypeople do not have the expert knowledge that scientists have, they cannot understand science and form scientific opinions in the way that scientists do. This lack of information is what supposedly leads to many of the public debates on scientific issues that are more socially than scientifically controversial (e.g., stem cell research, global warming, MMR-autism link) (Nisbet and Goidel 2007). If, according to the model, the problem is truly a matter of a deficit of information, then the resolution to the problem is clear: provide laypeople with the additional information they need to come to the correct understanding or opinion. Serpell and Green (2006) note that much of the MMR vaccination literature rests on the deficit model, under the assumption that better information will lead parents to vaccinate their children.

According to this model, the scientific facts speak for themselves—scientists simply have to fill in the gaps in people’s knowledge (Miller 2001; Nisbet and Goidel 2007) and any controversy will naturally disappear (Nisbet and Goidel 2007; Nisbet and Mooney 2007). If the public does not understand or appreciate the research as expected, then the fault can be placed either with the media, for inaccurately portraying the science, or with the public, for holding irrational beliefs about the science (Bauer et al. 2007; Hilgartner 1990; Nisbet and Goidel 2007; Nisbet 2009; Pardo and Calvo 2004; Sturgis and Allum 2004). Indeed, the MMR controversy underscores this implicit attribution of responsibility, with scientists casting parents’ fears about the MMR vaccine as ‘illogical’ (Murch, 2003 as cited in O’Dell and Brownlow 2005).

Scientific Discourse in the MMR Controversy

Examples of discourse from those who value the scientific method of belief fixation demonstrate how the boundaries between scientific and non-scientific knowledge are easily perpetuated. Indeed, a recent quotation from Lancet editor Dr. Richard Horton provides an example of the role that implicit deficit model assumptions play in the MMR controversy. In February 2010, The Lancet officially retracted the 1998 Wakefield study. In an interview for the National Public Radio show On the Media, Horton was asked what could be done in the future to prevent such a ‘debacle’ from occurring. He replied:

“We used to think that we could publish speculative research which advanced interesting new ideas which may be wrong, but which were important to provoke debate and discussion. We don’t think that now…[W]e don’t seem able to have a rational conversation in a public space about difficult, controversial issues, without people drawing a conclusion which could be very, very adverse.” (A shot of reality 2010)

Although Horton, in the same interview, describes the controversy as a ‘system failure’ involving media, government, and the scientific community, this quotation underscores the implicit assumption that the science should be able to speak for itself but cannot because the media distort the science and the public continues to hold on to irrational fears.

Similarly, when asked about the fact that some parents still believe in the association between MMR and autism, despite all findings to the contrary, Autism Science Foundation president Allison Tepper Singer responded:

“So I think where we are now is we have to be willing to accept what the science is saying and I think there is still a small faction of parents who are clinging onto this belief with almost religious conviction, as opposed to a scientific conviction, and that’s caused a lot of harm. I think you have to be willing to accept what the data show.” (Vaccines and autism: A story of medicine, science and fear 2011)

By contrasting scientific conviction and religious conviction, this excerpt clearly reveals the assumption that the scientific method is the only acceptable method of fixing belief. As in the quotation from Horton above, Tepper Singer implicitly assigns blame to parents for failing to understand the science on vaccines and autism—these parents refuse to accept the scientific data, desperately clinging to a causal story that science does not support.

While some scientists may believe that providing a science-illiterate public with more information will turn them into a science-literate and science-appreciative public, the relationship is not quite so simple. Many science communication scholars agree that the deficit model is too simplistic because it fails to account for people’s attitudes, values, and experiences, or for their cognitive biases. For this reason, many scholars argue for a ‘contextualist’ approach to public understanding of science, in which understanding arises from a two-way negotiation of meaning between expert approaches to knowledge and lay approaches to knowledge (Bauer et al. 2007; Cobern 1996; Miller 2001; Turney 1996; Wagner 2007; Yearley 2000; Zehr 2000). This contextualist model focuses specifically on the contributions of many different factors in the public understanding of science, including moral, social, and political values; trust; personal interests and preferences; alternative forms of knowledge; and culture (Burns et al. 2003; Cobern 1996; Nisbet and Goidel 2007; Pardo and Calvo 2004; Sturgis and Allum 2004; Turney 1996; Zehr 2000).

As Turney (1996) notes, after one recognizes the influence of the various factors on public understanding of science, “the impulse to try and bring people’s opinions into line with what scientists think they ought to be by insisting that they must understand the same facts in the same way dies hard” (p. 190). Commenting on the MMR controversy specifically, Leask et al. (2000) observe that “there is little empirical support for the hope that decision making about vaccination is based on ‘facts’ alone” (p. 108).

The Role of Scientific Conventions in Science Communication

One reason that scientists may adhere to the deficit model is that they are reluctant to take a more active role in shaping public understanding of science. As McCall and Groark (2007) note, scientists have often been hesitant to go beyond the strict boundaries of their data, avoiding prescriptions for best practices for fear of overgeneralizing. Furthermore, scientists are so accustomed to examining all the limitations of their research and alternative explanations for their findings that they are often uncomfortable with efforts to create a seemingly unqualified bottom-line message from their data (McCall and Groark 2007; Shonkoff 2000; Weigold 2001).

Offit and Coffin (2003) note that while scientists are the first to argue against the purported association between MMR and autism, the language they use to present the scientific evidence may not be clear enough to convince the public that MMR does not cause autism. Scientists’ claims that “there is no evidence that MMR vaccine causes autism” and that “autism occurs after MMR vaccine at the same rate that it occurs in children who did not receive the vaccine” are clearly generated out of respect for the scientific method but may be incomprehensible to a public that is unaware of or unconcerned with issues regarding scientific methodology (Offit and Coffin 2003). In order to reach a lay audience, scientists must be able to distill their research findings and create messages that are clear, jargon-free, and easy for non-experts to relate to (e.g., Gascoigne and Metcalfe 1997; Office of Science and Technology and the Wellcome Trust 2001).

While some scientists are not comfortable with the idea of actively shaping the information that is communicated to the public, Nisbet and Scheufele (2009) counter that “[f]raming is an unavoidable reality of the science communication process. Indeed, it is a mistake to believe that there can be ‘unframed’ information” (p. 5). Many scholars have highlighted the constructed nature of what scientists consider to be purely factual knowledge (e.g., Knorr-Cetina 1981). These scholars argue that the boundaries that exist between scientific knowledge and popular knowledge are not fixed in any real way but are instead embedded in a linguistic and behavioral repertoire that is employed by scientists to distinguish between the two types of knowledge (e.g., Hilgartner 1990). Although scientists themselves may not be aware of it, scholars of the sociology of scientific knowledge have observed that scientists actively shape what they consider to be objective inquiry by asking certain kinds of research questions, by following specific methodologies, and by relying on particular modes of analysis. Knorr-Cetina (1981) characterizes this as the process of ‘selection,’ in which a chain of such selections serves to structure and constrain the empirical data that result.

Beyond construction of the actual data, scientists also influence the way that their findings are portrayed and interpreted by emphasizing some details over others in their research reports, and by focusing on those findings that lend persuasiveness to their argument (Nisbet and Scheufele 2009). Along these lines, writing and submitting a grant proposal, a process central to the advancement of science today, could be considered a form of popularization in that it occurs downstream from original research and, therefore, involves some sort of framing (Hilgartner 1990). In continuously reinforcing artificially strict boundaries between scientific and non-scientific knowledge, and shying away from taking a more active role in shaping science communication, scientists simply leave their findings open to framing and interpretation by the media and the public. This is unfortunate, as no one is better positioned to interpret and disseminate a “murky and fragmented research literature than those who understand all of its limitations and qualifications” (McCall and Groark 2007, p. 24).

It is important to remember that we should expect a fundamental difference in the knowledge of a scientist and a layperson—promotion of public understanding of science would be no better served by an approach that locates the deficit entirely with the scientist for failing to understand the ordinary citizen (Miller 2001). It is clear that, when it comes to scientific knowledge, scientists and laypeople are not on the same footing (Miller 2001), and that lack of scientific knowledge is a problem that cannot be solved by simple information transmission. Laypeople are not equipped to digest science information in the way that scientists wish they would—there are various factors that influence the interpretation and evaluation of scientific research beyond those that scientists consider to be important. As such, ignoring the lay audience’s “needs, roles, circumstances, knowledge, motivations, values, beliefs, ways of interpreting and processing information from science” (Welch-Ross and Fasig 2007, p. 5) is a critical mistake. Failure to take these non-rational factors into account often results in science communications that are simply irrelevant to the public and ineffective at shaping their understanding of science (McCall and Groark 2007). As a consequence, “information rich science enthusiasts become even more informed while the broader American audience remains disengaged” (Nisbet 2009, p. 4).

The Role of Journalists in Public Understanding of Science

While many laypersons may not actively seek out science information (National Science Board 2008), research suggests that whatever science information individuals are exposed to most likely comes from the media—“[w]hen formal education in science ends, media become the most available and sometimes the only source for the public to gain information about scientific discoveries, controversies, events, and the work of scientists” (Nisbet et al. 2002, p. 592). Media consumers are thought to play both active and passive roles in media consumption, depending on the contextual factors that come into play at any given time (Potter 2009). As Iyengar (1991) points out, mass media often reinforce the beliefs and opinions viewers already hold, as people tend to select (confirmation bias) and interpret (assimilation bias) information in ways that confirm their existing beliefs. For issues on which they have little or no personal experience, however, media consumers may depend on information presented by mass media to form an opinion. This dependence, in turn, allows the media significant potential to influence public thinking (Iyengar 1991).

Research on media and communications suggests that there are certain journalistic conventions that serve to ‘frame’ the news on a given topic, helping to organize thought by “packaging complex issues in persuasive ways by focusing on certain interpretations over others” (Nisbet and Huge 2007, p. 200). These media frames signal to the media consumer what information is important and what information is irrelevant (Dunwoody 2007). This is not to say that journalists intentionally employ frames in order to evoke a particular response in their viewers or readers. Rather, by providing more weight to certain aspects of a story over others, the frames that journalists use promote a particular “problem definition, causal interpretation, moral evaluation and/or treatment recommendation for the item described” (Entman 1993, p. 52). Thus, in the case of a scientific controversy, these frames provide the media consumer with an implicit indication about who or what is responsible for the problem, what the outcomes of the controversy will be, and what course of action should be taken to ensure the best outcome possible (Nisbet and Mooney 2007).

Scientists’ reluctance to engage in science communication may stem in part from their wariness of the quality of media reporting of science. Researchers who have little experience in working with the media often view the media with distrust and are particularly critical of the media’s ability to produce coverage that accurately represents and situates scientific findings within the body of existing research (Gascoigne and Metcalfe 1997). Scientists decry the media push for ‘sensationalistic’ headlines, arguing that the objectivity of scientific research is often sacrificed in order to produce more ‘newsworthy’ stories (Weigold 2001). Scientists’ wariness of science reporting reveals the fundamentally different methods by which scientists and journalists engage in inquiry.

Journalistic Discourse in the MMR Controversy

Several studies provide evidence for the influence media may have in shaping public understanding of the debate and public decision-making regarding the MMR vaccine. For example, Mason and Donnelly (2000) compared vaccination rates in areas covered by the South Wales Evening Post (SWEP), which ran a focused anti-MMR campaign, with those parts of Wales not covered by SWEP. They found that vaccination coverage within SWEP’s distribution area declined at a rate four times greater than in the rest of Wales. Along similar lines, Ramsay et al. (2002) conducted a longitudinal study in England that demonstrated that both perceived safety of the MMR vaccine and actual MMR vaccination rates declined after periods of intense media coverage but increased again (though not to original levels) once media coverage died down.

While these studies do not indicate the mechanisms by which media may influence public opinion and behavior in relation to MMR, examples of discourse from media do provide some clues. Indeed, a recent episode of Frontline (Palfreman 2010), an hour-long documentary program that airs on the Public Broadcasting Service (PBS), showcases some of the journalistic conventions that are thought to contribute to scientific controversies. The documentary, entitled “The Vaccine War,” focuses on the ongoing MMR-autism controversy, featuring interviews with scientists, public health officials, and parents and advocates from both sides of the debate. The first 45 seconds of the introduction to the documentary provide a perfect demonstration of the journalistic approach to scientific issues:

Narrator: Tonight on Frontline: They’re hailed as medicine’s greatest triumph: conquering smallpox, diphtheria, polio, and more.

Scientist: ‘If you look at vaccines over the past hundred years, they’ve increased our life span by 30 years.’

Narrator: But today, some Americans question if all those vaccines are worth the risk.

Parent advocate: ‘And I said, “Why am I supposed to vaccinate my newborn baby against a sexually transmitted disease,” and the nurse got really mad.’

Narrator: And some parent groups attack vaccines as the cause of chronic diseases from ADHD to autism.

Parent advocate: ‘My kid got six vaccines in 1 day, and he regressed.’

Parent advocate: ‘Would I rather have the measles versus autism? We’ll sign up for the measles.’

Narrator: Despite numerous scientific studies that say vaccines are safe, public concern persists. The result: outbreaks of infectious diseases not seen for a generation.

(Palfreman 2010)

This introduction exemplifies three journalistic conventions that serve as particularly powerful media frames: conflict/controversy, human interest, and balanced coverage.

The Role of Journalistic Conventions in Science Communication

Conflict/Controversy

Journalists hew strongly to the belief that their job is not to create news but simply to report on it (Dearing 1995; Dunwoody 2007). At the same time, news stories exist as commodities in a market that must attract consumers. As such, journalists’ responsibilities reach beyond the mere transmission of facts to include reporting news in ways that are compelling and engaging. It is perhaps not surprising, then, that those scientific issues that contain such elements of drama and that easily fit the narrative structure of a typical news article receive the greatest coverage (see Nisbet and Huge 2007).

One dramatic framing device that easily fits within a typical narrative structure is that of conflict or controversy between two well-defined groups (Dearing 1995; Nisbet and Huge 2007; Zehr 2000)—pitting, in this case, scientists and public health officials against parents and advocates. The name of the Frontline episode—“The Vaccine War”—directly elicits notions of serious conflict. The conflict frame is further emphasized by the use of the word ‘attack’ and by the inclusion of an excerpt in which a parent describes her personal battle with a nurse over the administration of a vaccine. This kind of dramatic controversy frame conveys a “potency” and “urgency” that carries considerable emotional weight with the public and serves to further fuel the existing conflict and provoke public concern (Cobb and Elder, 1983 as cited in Nisbet and Huge 2007).

Balance

Journalists recognize that they are not experts on scientific issues and that they do not have the time to become experts on scientific issues; out of respect for the complexity of the issues they cover, most journalists refrain from issuing claims about what is true (Dearing 1995; Dunwoody 2007). Despite this well-intentioned neutrality, it is evident that the particular frames journalists rely on often implicitly suggest evaluations that steer public decision-making.

In lieu of providing judgments about truth claims on controversial issues, journalists have evolved other strategies with the goal of illuminating the truth. One of these strategies rests on the convention of balance, which entails an acknowledgement of the fact that different people have different ideas of what the truth is (Dearing 1995; Dunwoody 2007). This attempt at balance, in combination with journalists’ avowed neutral stance, means that media reports on controversial topics often devote equal coverage to differing viewpoints, regardless of the rationale or evidence for the viewpoints (Dearing 1995; Dunwoody 2007).

By alternating between the viewpoints of scientists and parent advocates, for example, the introduction to the Frontline documentary creates a sense of balance, which may leave viewers with the impression that the two sides are equally supported by science. Although balanced coverage may represent journalists’ honest attempts at promoting objectivity in reporting, this all-sides-are-equal approach does not provide the public with a full representation of the context surrounding the scientific issue at hand. Nisbet (2009) highlights the potential danger of ‘balanced’ coverage, referring to it as “the trap of ‘false balance’” (p. 19).

Human Interest

In order to lend stories their necessary dramatic weight, journalists tend to choose stories that easily fit a decontextualized episodic, or incident-specific, frame. Instead of focusing on larger trends over time, contextual factors (e.g., social, cultural, governmental, and environmental factors), or systemic solutions (i.e., targeted toward underlying institutions and/or policies), these stories home in on episodic aspects of a story, including specific individuals and particular events (Iyengar 1991). By including an excerpt in which a parent advocate describes how his child regressed the day after receiving multiple vaccines and another excerpt in which a parent advocate states that she would rather her child contract measles than be diagnosed with autism, the introduction to the Frontline documentary clearly emphasizes the human interest angle of the controversy.

These kinds of episodic stories elicit a powerful emotional response from the audience, while simultaneously overshadowing the more practical and technical aspects of the science behind vaccines and autism. The episodic frame implicitly approaches the audience as a media consumer and helps to ‘humanize’ a story in ways that are easy for media consumers to relate to (Benjamin 2007; Iyengar 1991; Bubela et al. 2009). Stories that are not easily distilled for viewers or readers are simply less likely to be covered (Iyengar 1991; Bubela et al. 2009).

Conclusions and Future Directions

As seen in the MMR controversy, scientific and journalistic practices interact with each other, and with the personal characteristics of lay audience members, to produce a public discourse that is often fragmented and contradictory. The discursive gaps between those who value scientific approaches and those who value non-scientific approaches to belief fixation often make it difficult for the groups to engage in effective and productive communications. This is of particular concern when it relates to issues such as MMR vaccination, which can have a direct and immediate impact on public health. Despite these public controversies, it is important to remember that laypeople tend to hold positive attitudes about science and technology in general (Miller and Pardo 2000; National Science Board 2008). These attitudes reflect a considerable amount of trust in scientists’ abilities to conduct research and communicate research findings in an unbiased manner. Trust in science and scientists can act as a powerful schema for people who are constantly confronted with information on topics that they know relatively little about (Nisbet and Scheufele 2009; Office of Science and Technology and the Wellcome Trust 2001; Sturgis and Allum 2004; Turney 1996; Yearley 2000).

These findings provide hope that the existing gap between scientific and non-scientific cultures that has helped to exacerbate scientific controversies may be bridgeable. Recent efforts at science communication have attempted to deconstruct the traditional boundaries between scientist and non-scientist, focusing much more on a democratic model of public participation, dialogue, and engagement (e.g., Going public 2004; Nisbet 2009; Wilsdon and Willis 2005). These public engagement efforts have taken many different approaches, including the use of focus groups, consensus conferences, and stakeholder dialogues (see Wilsdon and Willis 2005 for a more complete description).

Research indicates, however, that these efforts may ultimately fall short of their goal. For example, Kurath and Gisler (2009) found that projects that were ostensibly intended to foster more open exchange and mutual learning with members of the public on the topic of nanotechnology often, in practice, defaulted to a deficit model of public understanding. Unfortunately for those who engage in science communications, genuinely mutual ‘science-society’ exchanges are difficult to achieve. Entrenched biases, including knowledge boundaries that are perpetuated by both scientists and non-scientists, can easily derail such open dialogue (Cuppen et al. 2009).

Some scholars and scientists have expressed criticism regarding how far such engagement efforts ought to extend (see Tait 2009 for a comprehensive discussion). They are concerned that the lay attitudes and opinions that have contributed to existing scientific controversies would only further hinder scientific inquiry if they were to be given additional weight in policy-oriented decision-making. Past failures at effective science-society communications on controversial scientific issues—including climate change, evolution, embryonic stem cell research, and the MMR vaccine—indicate, however, that public dialogue is critical. On these issues, failure in communications between scientists and non-scientists has led to “major political fallout, increased public risk, increased public fear, mistrust and skepticism about science, and poor public policy making” (Parsons 2001, p. 303).

Ultimately, we must recognize that the goal of public engagement is not necessarily to move public opinion in the direction favored by scientists or policymakers. As discussed above, our existing values, beliefs, and experiences serve as powerful heuristics that influence how we interpret information and inform our decision-making processes. When scientific findings contradict people’s political leanings or personal values, they will often discount the findings in favor of their values (McCall and Groark 2007). Thus, effective public engagement must take into account the personal characteristics that are relevant to people, including their “values, interests, and worldviews” (Nisbet 2009, p. 3). While public engagement efforts may never result in absolute consensus among all parties involved, they can help to “generate new approaches to the governance of science that can learn from past mistakes, cope more readily with social complexity, to harness the drivers of technological change for the common good” (Wilsdon and Willis 2005, p. 12).

This push for public engagement does not require that we devalue or replace the scientific method of belief fixation; the point is simply to recognize that science, like all other ways of knowing, is itself framed and constructed by those who do it. The goal of public engagement efforts, then, should be to make the scientific process more transparent to non-scientists, and “to expose to public scrutiny the values, visions and assumptions [underlying science] that usually lie hidden” (Wilsdon and Willis 2005, p. 12). By doing so, scientists, policymakers, and others can open up a productive conversation in the public square that can begin to bridge the epistemic gap between scientific and non-scientific cultures. These kinds of public conversations may ultimately serve as a model for overcoming the fragmentation that has been observed both within and between different domains of knowledge, allowing people with different experiences and expertise to communicate openly and effectively with one another.