Introduction

Risk and crisis communication constitutes a rich field of expertise and practice. It has long been viewed (and still is) as a practical rather than a theory-based endeavor. Numerous manuals and “how-to” books have been published over the last decades (Lundgren and McMakin 2009; Heath and O’Hair 2010; Sellnow et al. 2009, to name only a few). They are often believed to offer recipes, refined over the years, rather than a solid scientific literature upon which an evidence-based risk and crisis communication strategy can be developed and fostered (McComas 2006). This review rests partly on a surprise: contrary to what was expected, there is an abundant stock of theories and approaches, albeit a very diverse one. The intention of this chapter is to guide the reader through some of them, considered, perhaps too hastily, as the most prominent. The objective is not to produce an exhaustive review, but rather to provide an orientation in a field whose popularity is growing across industries, companies, public health institutions, and public services.

The profile of risk and crisis communication was elevated to that of a major topic in the aftermath of 9/11, as scenarios of massive terrorist attacks, large-scale natural disasters, and the threat of reemerging diseases attracted resources, scholarship, funding, and new concepts (Bastide 2017). Indeed, it is nowadays deemed integral to any public policy intervention. However, repeated fiascos and less-than-adequate campaigns and responses have naturally shed light on the complex and often controversial issues that risk and crisis communication involves. The risk communication fiasco concerning anthrax in the United States in the aftermath of the 9/11 attacks is one famous example (Glik 2007). During Hurricane Katrina (2005), it is believed that people died in greater numbers than predicted as a result of internal communication and coordination failures at the State and Federal levels (Gheytanchi et al. 2007; Perrow 2007). The terrible mistakes made in West Africa during the Ebola crisis aggravated relationships with populations already in despair, leading them into hiding (Faye 2015; Calain and Poncin 2015; Le Marcis 2015).

Nevertheless, attempts to promote breakthroughs in risk and crisis communication and to foster evidence-based campaigns are on the agenda of numerous powerful organizations. For example, it should be recalled that risk communication has already been identified as one of the eight core capacities of the International Health Regulations (2005) (Malley et al. 2009). More recently, in 2015, in the aftermath of the Ebola Virus Disease crisis, the World Health Organization set up a working group in charge of drafting “guidelines on building national capacity for communicating health risks during public health emergencies”.

Research in the field of risk and crisis communication is at a crossroads: it can refine and sophisticate its current tools, or it can look for alternatives to its current practices and develop new thinking. Indeed, the emergent crises of our times are putting pressure on authorities of all kinds to develop communication tools and preparedness, as well as capacities to deal with unpredictable outcomes and lasting uncertainties. The general philosophy of risk and crisis communication is rather straightforward: plan in advance, announce early, be transparent, respect public concerns, and build trust (Abraham 2009). “Be first, be right, be credible” is the slogan of the famous Centers for Disease Control and Prevention (US CDC). But is it enough?

Risk and crisis communication now has a history of over three decades. It is often associated with technological disasters, natural disasters, floods, bioterrorism, and sanitary crises (including infectious disease epidemics). But this sort of communication is also associated with a wide range of public health priorities and concerns, such as the hazards of smoking, obesity, air quality, and HIV detection, to name only a few. Its profile was raised when powerful organizations such as the CDC in the United States issued, in 2002, their Crisis and Emergency Risk Communication (CERC) manual, followed in 2005 by the World Health Organization’s own guidelines for outbreak communication.

There is a historic distinction between “care communication”, “consensus communication”, and “crisis communication” (Lundgren and McMakin 2009). Risk communication and crisis communication differ in definition and scope; however, they are also interconnected. As Bennington explains (2014, p. 32): “Risk communication addresses probabilities and potential situations of harm and danger, while crisis communication focuses on a specific event or action that has already occurred or will almost certainly occur in the near future (…). Risk communication messages almost always address likely (future) consequences, are based on some form of persuasive and compelling evidence and are intended to prevent or modify specific behaviors and practices” (…) “crisis communication is an on-going process that occurs during the actual crisis” (…). “It addresses both what is known and not known about a situation”.

Interestingly, in today’s practice of risk communication at the CDC, the two concepts have been merged into a single model called “Crisis and Emergency Risk Communication” (the CERC model).

Disciplines and fields such as psychology, social psychology, economic psychology, behavioral sciences, sociology, media studies, information technology, and political science have greatly contributed to clarifying what risk and crisis communication is (Glik 2007). One can only stress in this brief overview that “good” risk or crisis communication can do little in the face of massive governance problems, where the controversies and conflicts naturally arising inside first-line institutions dealing with complex crises will inevitably also be visible on the outside. Suppressing controversies over uncertainties is not possible. Therefore, crisis management is increasingly becoming a question of (risk) governance (Renn 2008; Haferkorn, this volume) rather than being confined to a question of communication techniques.

Risk and crisis communication is part of risk management, which can be understood as a technical field applying probabilities to articulate and recommend prevention and mitigation strategies at the technical, organizational, and individual levels. Yet it has long been demonstrated that recommendations made by experts on the basis of a balance of risks and benefits are not sufficient. Anthropologist Mary Douglas and political scientist Aaron Wildavsky (1983) pioneered studies emphasizing the gap between different segments of society over what an acceptable future might look like. Slovic’s (2000) numerous studies confirmed that social actors hold different risk perceptions, depending on their position in society. In this book, several chapters touch upon these issues, and examples of risk communication strategies belonging to different perspectives, and even mixing some of them, are presented, reflecting the tensions that the field is currently experiencing.

In the remainder of the chapter, a brief overview of the main theories in use and their respective merits will introduce the reader to a more complex field. We have chosen a somewhat chronological presentation for practical reasons. However, the reader should not be misled by an apparent evolutionary trend: these successive attempts at enriching the field continue to coexist in the academic literature and within organizations. Fischhoff already cautioned against this false evolutionary promise and gave the humorous synthesis reproduced below (1995, p. 138).

  • All we have to do is get the numbers right

  • All we have to do is tell them the numbers

  • All we have to do is explain what we mean by the numbers

  • All we have to do is show them that they’ve accepted similar risks in the past

  • All we have to do is show them it’s a good deal for them

  • All we have to do is treat them nice

  • All we have to do is make them partners

  • All of the above.

The Crisis, the Experts and the Public

Baseline

Historically, risk and crisis communication has been considered a subset of technical communication (Ogrizek and Guillery 2000). Mainly, it has been viewed as the communication of some risk to affected parties by experts. The emblematic model of this conception is the classical crisis communication approach, which is mainly instrumental and rests solely with risk communication professionals. The idea is to give the audience only the information they need to protect themselves, sometimes radically (as when accepting an evacuation). This type of crisis communication can be referred to as the firefighter type, where threats are tangible and protection vital. In times of crisis, as Bennington (2014) explains, decision-makers need to “construct a crisis response narrative that (1) meets the organization’s goals of informing, reassuring, and protecting the public and (2) instills sufficient confidence in the organization to insure the public will be influenced to take the actions deemed necessary to manage the threat.” (p. 8).

The dominant paradigm is to persuade the general public of the sound basis of experts’ judgments. Irrationality, misperception, misconception, misinformation, inaccurate reporting, and rumors (Bennington 2014, p. 10) are the main obstacles to be suppressed by “effective” risk and crisis communication strategies.

This rather one-sided view has not disappeared and when it comes to crisis communication, public officials or industry representatives often refer to irrational public fears, or unfounded doubts, that they have to fight. Rumors, myths, urban legends, inaccurate information, fake stories, or conspiracy theories are often targeted as the main limitations of successful campaigns. They have always existed, especially during epidemics (Berce 1993). Nowadays, they have the potential to reach millions of people in an instant through the internet and via social media. Some studies show that rumors and conspiracy theories are now part of mainstream political opinion (Hargrove and Stempel III 2007). As academic research has highlighted since the 1930s, rumors develop and amplify where uncertainties and lack of leadership are apparent (Prasad 1935; Allport and Postman 1947). Rumors help to make sense of what is happening and reduce the level of anxiety. They provide narratives and attribute clear responsibilities. Other scholars also explained that groups with less access to “legitimate” sources of information are more subject to rumors and more prone to disseminate them in their communities (Mirowsky and Ross 1983; Knight 2003).

From this perspective, “educating” and “persuading” the public are the main drivers of any risk communication strategy. Using psychological and behavioral research to craft adequate and efficient messages, and to identify the proper vehicles (community and/or religious leaders; trustworthy institutions; and now social media), is the main focus of risk communicators who want to modify behaviors and reduce what they call “knowledge gaps”. Bringing more knowledge in better formats is key to these activities.

First Cracks in the Conventional Wisdom

The “Mental Models approach”, grounded in cognitive psychology and artificial intelligence and developed at Carnegie Mellon by researchers such as Baruch Fischhoff and Granger Morgan (Morgan 2002), represents a first attempt to enrich the perspective. Based on the Radon information program at the U.S. EPA (Environmental Protection Agency), researchers established that it is of crucial importance to understand what the audience already knows about a risk, and, for crisis communication purposes, what the culture of the audience is, in order to discuss ways to mitigate a crisis. In their publications, they strongly advocate in-depth qualitative interviews prior to developing risk communication programs of any kind: “Communications can be crafted to fill gaps, reinforce correct beliefs, and correct misconceptions—with some assurance that the messages are to the point and be comprehended by recipients” (Fischhoff 1995, p. 140).

With the “hazard + outrage” approach, later popularized by Sandman (2003), it is argued that the audience’s view of a risk (as opposed to that of the expert assessing it) reflects not only the danger of the activity (hazard) but also, and even more importantly, the emotions people feel about it (their outrage).

Later, Covello (2010) developed the “Mental Noise approach”, which stipulates that when people perceive themselves to be at risk, their ability to hear and process information decreases dramatically. Preoccupied with a great deal of “internal mental noise”, they are less able to attend to externally generated information. His various studies estimate that the ability to pay attention and retain information can drop by as much as 80% compared to normal. This is especially true in sudden and unexpected crises. Unfortunately, this line of research long distilled the idea that emotions lie only on the public’s side.

Contemporary with these lines of research, Everett Rogers and colleagues (Rogers and Kincaid 1981) developed the “Convergence Communication approach”. For these researchers, communication is an “iterative long-term process” in which the values (culture, experiences, and background) of both the risk communication organization and the audience affect the process of communication. In their view, this iterative process naturally pushes the two groups (the organization and the audience) to converge on common ground as they exchange information back and forth. In their theory, exchanging information modifies the outcome, bringing the two sides closer.

Unsurprisingly, these studies contributed to building a long-lasting consensus on the intrinsic qualities of a professional risk and crisis communication campaign (Covello et al. 1988): (i) Audiences tend to simplify messages and reduce their complexity; therefore, it is important to communicate with this principle in mind; (ii) Credibility and believability go hand in hand; therefore, experts really need to be independent; (iii) Risk messages should include some efficacious action that individuals can take to alleviate risk; (iv) Messages should be matched to audience needs and values, and to their particular economic, political, and sociological backgrounds; (v) Candor, openness, and transparency are the cornerstones of risk and crisis communication.

From these principles, Covello and Allen (1988) derived seven cardinal rules of risk communication: (i) Accept and involve the public as a partner; (ii) Plan carefully and evaluate your effort; (iii) Listen to the public’s specific concerns; (iv) Be honest, frank and open; (v) Work with other credible sources; (vi) Meet the needs of the media; (vii) Speak clearly and with compassion.

For years, these best practices have been the alpha and omega of any serious risk and crisis communicator. Major companies and institutions have largely integrated these principles, at least officially. Yet they have sometimes failed to embrace them fully in practice.

Disputing Experts’ Central Position: The Dialogic Turn

Since these pioneering studies, the risk communication field has experienced several tipping points. One of the main issues of the 1990s was revising and recasting the central position of experts in the communication process. From being all-too-powerful and central in the production of knowledge and guidance, experts are now encouraged to engage in genuine listening exercises. These exercises are not merely “nice to do”: they are crucial in terms of knowledge production and ultimately key to designing risk mitigation strategies.

The origin of this new line of research is often traced to an early comprehensive effort led by the U.S. National Research Council (NRC) in 1989 to improve risk communication. Its definition of risk communication is the following: “Risk communication is an interactive process of exchange of information and opinions among individuals, groups, and institutions concerning a risk or potential risk to human health or the environment (…) social context of the risk should start from the very beginning” and must incorporate an “exchange of information and opinions” (NRC 1989).

In this vein, Waddell (1995) opposed the view that, during a risk communication campaign and assessment, the scientific community provides technical knowledge while the audience or stakeholders contribute values, beliefs, and emotions through feedback on the risk communication effort. His approach holds that, in fact, inputs come from both sides: there are no “hard facts” on one side and “soft facts” on the other, no expertise on one side and emotions on the other. Experts get emotional about risk matters as well.

Finally, risk communication based on an understanding of the public as an active participant in the process of apprehending and controlling risk, on the basis of its own rational understanding of risk, implies a different, dialogic communication between so-called “experts” and so-called “lay publics” (Abraham 2009).

Entering the Twenty-First Century: Facing Social Networks and Governance Issues

At the turn of the twenty-first century, observers and experts in the field came to agree that risk and crisis communication had to move away from the now classical, state-of-the-art public relations campaigns toward strong anticipatory and elaborate strategies (Berger and Journé, this volume; Baram and Lindoe, this volume). It appeared to many that the “seven cardinal rules of risk communication” recalled above are insufficient when confronted with massive crises capable of challenging preestablished plans and preconceptions.

Recent examples include: the management of the A (H1N1) pandemic in 2009–2010 (mostly in Europe); the Deepwater Horizon drilling rig explosion in 2010 and its environmental consequences in the Gulf of Mexico; the Great East Japan Earthquake, Tsunami, and Fukushima Daiichi Nuclear Power Plant Accident in March 2011 and its lasting impact on the population (Nishizawa, this volume; see also Baumont, this volume); and the 2014 Ebola Virus Disease epidemic in West Africa and its burden on Guinea, Liberia, and Sierra Leone (Bastide, this volume). The corresponding book chapters look back at the complex challenges that risk communicators faced during these dramatic crises. They highlight the daunting tasks that communicators had to fulfill when disorganization was so complete and fears so overwhelming.

Some experts believe that current crises are very different from older ones: “The accidental, compartmentalized crises of the twenty-first century have mutated into systemic dislocations calling for new intelligence” (Granatt et al. 2009, p. 1). Furthermore, scholars call for more elaborate strategic thinking that deliberately moves away from the “planning” culture: plans give false comfort to managers and leaders and may be counterproductive in the face of rapidly evolving crises (Clarke 1999; Lagadec 2009). Moving away from a “planning culture” means allowing the actors in charge to develop ad hoc strategies according to local situations and needs. This cautionary note should not be understood as a plea against predetermined scenarios, nor against the scenario-planning philosophy (Bieder and Bourrier 2013); these remain important tools to develop and rehearse. However, they should be enriched and augmented with resilient, agile, self-designing risk communication strategies aimed at facing the unexpected (Weick and Sutcliffe 2011). Such strategies cannot be developed in a vacuum: they need to be supported by organizational practices that encourage mindfulness, reliability, and high performance.

Recent massive crises, such as the Fukushima Daiichi disaster or the Ebola Virus Disease epidemic, still give steam to the “social amplification of risk” approach developed in the late 1980s by Roger Kasperson and colleagues (among them Jeanne Kasperson, Paul Slovic, and Ortwin Renn) at Clark University (Kasperson et al. 1988; Kasperson and Kasperson 1996). Their most fundamental argument is that social activities magnify the consequences of a risk event, often in unexpected ways. Potential “social amplification stations” (Wiig et al., this volume) include the mass media and journalists, groups of scientists, governmental agencies, and politicians. Stigmatization is a primary concern. Later, Leiss and Powell (1997) theorized that a risk information “vacuum” is most likely to blame for the social amplification of risks: when experts refuse to provide information, or when they are seen as untrustworthy, a hungry public will fill the void, often with rumors, suppositions, easy-to-blame targets, and fake stories.

Following the social trust argument, Earle and Cvetkovich argue that “social trust, understood in everyday terms, is the process by which individuals assign to other persons, groups, agencies, or institutions, the responsibility to work on certain tasks” (1995, p. 4). They further explain that “within the realm of risk management, most tasks are too big and complex for individuals, regardless of technical training, to successfully complete alone” (p. 4). This situation requires a measure of trust to be allocated to the institutions or agencies in charge of communicating mitigation strategies. They further argue that if people do not trust an organization, negative information associated with that organization reinforces their distrust, whereas positive information is discounted. In essence, no matter how well thought through and well packaged a piece of information might be, it will not communicate risk effectively if trust and credibility are not established first.

Other scholars have also contributed to renewing the research agenda by importing concepts from other subfields. Taking seriously the network paradigm and the “network society” we live in (Castells 2011) could rejuvenate the risk communication perspective and imply a revision of the principles under which risk communication campaigns and activities are devised. Organization studies (Bovasso 1996; Burt 1987), for example, have demonstrated that social networks influence behavior and attitudes in the workplace, as well as within the family and among friends: “A majority of these network studies, particularly those exploring the idea of social contagion through cohesive network ties, have been conducted in organizational settings. In addition, thousands of studies have examined community networks from a diffusion of innovation perspective (Rogers 1995). These studies suggest that interpersonal networks influence the adoption of ideas, innovations, and behaviors” (Scherer and Cho 2003, p. 262). This strongly suggests that who we spend time with affects our worldview and our risk perception. Looking beyond individuals to their communities, networks, and neighborhoods, including social ties built on social media, might be a promising avenue for learning more about publics’ and audiences’ knowledge, adaptations, and prejudices toward risks.

The development of new social media and electronic networks (blogs, Facebook, Twitter, chats, forums, etc.) poses new challenges to risk communicators (Veil et al. 2011; Liu et al. 2016). Risk communication campaigns must now be tailored to a variety of audiences that do not read the same news media outlets, nor inform themselves in the same ways. Campaigns must be devised in many more subcategories and must reach out to many more different communities and stakeholders. This is true both for routine risk communication and ad hoc crisis communication.

Following this perspective, another avenue has gained momentum (Renn 2008). More recently, some scholars (Lofstedt et al. 2011) have suggested that risk communication should be envisaged as two-way communication, in which each side brings knowledge and expertise to the problem. When confronted with risks and threats, recurrent or sudden, affected communities and multiple stakeholders have generally developed mitigation strategies worth investigating. Prior knowledge exists and should not be ignored or too easily labeled as false preconceptions.

Issues of transparency, participation, and democracy are central to this agenda. Building trust and understanding and establishing symmetrical relationships are essential to these developments. In this view, risk communication is a long-term strategy that cannot be deployed only in case of emergencies. Nowadays, for many experts in this field and beyond, a mature risk communication strategy should include and articulate the different perspectives held by affected professionals, communities, and segments of the population, from experts to lay persons, in order to engage adequately with the risks considered. The coproduction of risk communication strategies is considered an optimal goal, not yet achieved in many arenas (Guérard, this volume). This is especially the case when concerned members of the public or nonofficial experts tweet and use the blogosphere to post their own analyses of the situation.

This new philosophy also implies moving away from dogmas such as “educating the public” or “educating the media”. Major public health emergencies and alerts will instantly engage the media, who should be seen as major stakeholders in all communication processes. The media are not an adjunct to the public emergency response; they have their own obligations to the public (see Wiig et al., this volume). Public health emergency planners now acknowledge the media’s role in a crisis and plan to meet reasonable media requirements during an outbreak. The idea is to use public perceptions, resources (cognitive, social, symbolic, etc.), opinions, knowledge, rationales, beliefs, and assumptions, as well as the opinions and perceptions of experts, stakeholders, and political appointees, to build a reasonable communication strategy capable of dealing with uncertainties.

After 2010: The Narrative Turn

An interesting development has recently taken place, concerning more directly the format of information transmission. Central to this discussion is the following question: how can an effective risk communication message, capable of bringing about change and altering behaviors, be designed? This is not a novel theme in risk communication research, nor is it new when looking back at the numerous handbooks: constructing guidelines and using adequate language, materials, graphs, and iconography are the subject of entire sections (see, for example, Lundgren and McMakin 2009, pp. 145–157).

Didactic, expository, nonnarrative forms are often opposed to narrative forms. “The (narrative style) consists of presenting the risk information in the form of a personal story instead of, or in addition to, presenting exposure calculations or other data. The story structure helps the audience understand the risk by simplifying it and focusing on cause and effect” (Lundgren and McMakin 2009, p. 150). For a long time, medical, public policy, and scientific organizations regarded narrative forms as less rigorous: “Non-narrative forms were viewed as objective, and therefore more credible than narratives, which were seen as anecdotal and subjective, and consequently unscientific” (Barbour et al. 2016, p. 813). Yet the evidence gathered so far favors narrative forms when dissemination of information is key: narrative formats reportedly fare better on social media and are shared and disseminated more readily than nonnarratives (Green and Sestir 2008; Hinyard and Kreuter 2007; Kreuter et al. 2007).

Nowadays, storytelling is considered integral to any business strategy and a key feature of organizational management (Brown et al. 2004). Organizations and institutions increasingly use stories to reach out to their audiences (donors, patients, advocates, employees, communities, etc.) and to communicate about their programs, products, services, and worldviews (Krause 2014). For example, during the Ebola Virus Disease epidemic, Médecins Sans Frontières, along with the World Health Organization and the Centers for Disease Control and Prevention, displayed numerous stories on their websites to explain the situation, document their activities, promote certain types of messages, and avoid others. These organizations also promoted their own narratives about the crisis and used the stories of their deployed staff to present facts in line with their baseline (Casaer 2015).

As Barbour et al. (2016) noted, recourse to narratives does not please every group inside these complex organizations, some of which fear losing their scientific reputation. The narrative turn is not always embraced by all constituencies inside an organization. This exposes yet another of its features: it affects power dynamics and asymmetries inside the organization and should not be treated only as a communication tool aimed at the public. The same can be said of the outside: narratives can also be understood as defensive and active propaganda. They can end up looking like poor transparency exercises, lacking candor.

It remains to be seen whether the narrative turn we are witnessing will be more beneficial than detrimental to risk and crisis communication: beneficial because of its power to disseminate key messages to wider audiences, tailored to their needs; or detrimental because of the potentially dangerous slide toward institutional propaganda.

Conclusion

Big data and citizen science, combined, might well open a new era embracing two crucial elements of risk and crisis communication success: (1) getting precise, accurate, reliable feedback from the field; and (2) allowing affected populations to develop efficacious actions to alleviate risk. However, it remains to be seen whether this new stage in the development of risk and crisis communication can deliver the empowerment that many hope for.

In conclusion, we might add a few stages to Fischhoff’s scale, in line with the developments one sees coming, as some of the chapters in this book exemplify. First, we could add “All we have to do is show them they are part of it”, to break through the “insider view” that still threatens risk and crisis communication strategies and alienates many publics, who feel estranged by what they frequently perceive as opaque organizational logics. Second, we might add this twist: “All we have to do is show them they are the experts”, which signals that knowledge and cognition are distributed. Inhabitants of contaminated areas near Fukushima Daiichi are indeed the experts of their land’s contamination and its monitoring. But the true leap forward will probably happen when it becomes possible to abandon “we” and propose: “All that has to be done is to make them part of ‘we’”.

  • All we have to do is get the numbers right

  • All we have to do is tell them the numbers

  • All we have to do is explain what we mean by the numbers

  • All we have to do is show them that they’ve accepted similar risks in the past

  • All we have to do is show them it’s a good deal for them

  • All we have to do is treat them nice

  • All we have to do is make them partners

  • All we have to do is show them they are part of it

  • All we have to do is show them they are the experts

  • All that has to be done is to make them part of “we”.

Adapted from Fischhoff (1995), with supplements by Bourrier.