Achieving Accuracy through Ambiguity: the Interactivity of Risk Communication in Severe Weather Events

Abstract

Risks associated with natural hazards such as hurricanes are increasingly communicated on social media. For hurricane risk communication, visual information products—graphics—generated by meteorologists and scientists at weather agencies portray forecasts and atmospheric conditions and are offered to parsimoniously convey predictions of severe storms. This research considers risk interactivity by examining a particular hurricane graphic which has been shown in previous research to have a distinctive diffusion signature: the ‘spaghetti plot’, which contains multiple discrete lines depicting a storm’s possible path. We first analyzed a large dataset of microblog interactions around spaghetti plots between members of the public and authoritative weather sources within the US during the 2017 Atlantic hurricane season. We then conducted interviews with a sample of the weather authorities after preliminary findings sketched the role that experts have in such communications. Findings describe how people make sense of risk dialogically over graphics, and show the presence of a fundamental tension in risk communication between accuracy and ambiguity. The interactive effort combats the unintended declarative quality of the graphical risk representation through communicative acts that maintain a hazard’s inherent ambiguity until risk can be foreclosed. We consider theoretical and practice-based implications of the limits and potentials of graphical risk representations and of widely diffused scientific communication, and offer reasons why CSCW attention should be paid to the larger enterprise of risk communication.

Introduction

Risk communication of severe weather events is an area of practice that has relatively recently evolved from the enactment of a reductive transmissive model of communication (Shannon 1948; Shannon and Weaver 1963) to a model that builds upon the dialogic ways we come to understand the complex matter of uncertainty (Eosco 2008; Morss et al. 2017; Gui et al. 2018). This conceptual transformation that is informing industry practice came about independently from the rise of social media. However, exchanges over social media make it possible to examine why interactivity in risk communication is important through the questions, answers, and comments that people make in response to difficult-to-understand representations of risk.

To this end, this research examines how risk communication happens over social media using the destructive 2017 Atlantic hurricane season as the site of study. Building upon initial research that examined the diffusion of multiple forms of hurricane risk graphics across a microblogging platform (Bica et al. 2019), this research further focuses on the ever-present challenge of conveying uncertainty for severe weather hazards. Through qualitative examination of a large data set of microblog interactions between weather authorities and members of the public, and supplemented by interview data, this research follows the trail of the parsimonious ‘spaghetti plot’ on social media to understand the interactive pursuit of risk interpretation (Eiser et al. 2012), which we posit to be a CSCW concern.

Crisis informatics research, which is often connected to the CSCW literature, has examined how messaging about disasters has arisen in social media since 2007 (Palen et al. 2007; Macias et al. 2009; Starbird et al. 2010; Vieweg et al. 2010; Sarcevic et al. 2012; Dailey and Starbird 2014; Reuter and Spielhofer 2017; Wong-Villacres et al. 2017), and also why it has become problematic (Mendoza et al. 2010; Starbird et al. 2014). This literature has also tackled other forms of digital integration into disaster management (Shklovski et al. 2008; Soden and Palen 2014; Kogan et al. 2016). We argue that at the core of much of this research are questions about the assessment, communication, and control of risk, in one way or another. Risk itself is an enormous area of study, with emphasis on the cognitive perceptions of risk (e.g., Slovic 1987). CSCW has an opportunity to expand the crisis informatics framing by taking up risk as an experience that is mediated through many people, artifacts, and representations. As natural hazards—the domain in which we are conducting this research—grow in frequency and degree of destruction, the importance of risk and its effective communication becomes increasingly clear.

Social media brings the interpersonal and sociobehavioral qualities of risk interpretation into view through the public interaction between experts and laypeople in the form of questions, answers, commentary, and humor. Parasocial relationships with meteorologists on social media (Sherman-Morris 2005; Klotz 2011) create opportunities for—but also expectations of—expert interaction, differently than for other kinds of crisis events in which official information is seen as unreliable (Gui et al. 2017). A wide audience of people—often distributed over vast regions in the case of hurricanes—affects how broadcast communication of risk can be localized in a meaningful way. The dynamics of expert-nonexpert relationships around scientific representations of hurricane risk are what is in question here.

Furthermore, we use images to scope and investigate risk communication because they depict uncertainty in ways that become widely diffused; they are at the center of hurricane risk communication. Images are powerful communicators on their own, and instruments of power as well. A series of tweets and videos of US President Donald Trump using a modified hurricane graphic to discuss the path of Hurricane Dorian in September 2019 had an effect on people’s interpretation of risk regionally, and called into question the relationships between scientific authority, national politics, human safety and more (Cappucci and Freedman 2019). These concerns compound the urgency to understand the nature of risk communication as it seeks in its practical applications to expand in scope and effectiveness.

In particular, we focus on the spaghetti plot or spaghetti model, a type of ensemble visualization of hurricane track forecast models (Figure 1). Our prior research on the diffusion of various hurricane risk images on Twitter revealed spaghetti plots to be unusual because they received the most responses (as measured by tweet replies) despite being relatively uncommon compared to other types of risk graphics (Bica et al. 2019). Spaghetti plots explicitly convey forecast uncertainty differently than other hurricane risk images by displaying a set of distinct potential paths that are simultaneously possible at a given point in time, and require some domain expertise for their interpretation (Hyde 2017).

Figure 1.

Example spaghetti plot image, with detail in box enlarged on right. Source: @spann. © WeatherBELL Analytics.

In the sections that follow, we first provide background information on hurricane risk products and related work. We then outline the study site, data collection procedures, and analysis methods. Our findings illuminate how members of the public and weather authorities interact on social media around spaghetti plot risk representations, how these interactions contribute to interpretations (and misinterpretations) of risk, and how authorities view their role and responsibilities in the risk communication.

Background

Hurricane Models and Forecasts

We offer a brief overview of how hurricane models and forecasts are generated and visualized. Forecasts for hurricanes, as well as other meteorological phenomena, can be generated using Numerical Weather Prediction (NWP) models: complex programs run on supercomputers that can only be produced by a limited number of operational forecast centers around the world.

An ensemble model is a set of such NWP forecasts that are valid for the same ‘forecast parameter’ (e.g., track, precipitation, or sea level pressure) at the same time. Hurricane track ensembles, which are the most available to the public and are the focus of this paper, are sets of forecasts plotted on the same map for hurricane tracks. Graphics for hurricane track ensembles are often called spaghetti models or spaghetti plots because of the way the multiple lines that represent the forecasted tracks look when displayed. In interpreting a spaghetti plot, confidence in the forecast is greater the more the potential track lines cluster together, and lower the less they cluster.
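The reading rule above—tighter clustering means greater confidence—can be made concrete with a small sketch. This is a hypothetical illustration, not any forecast center's method: the three toy tracks and the degrees-based distance proxy are invented for demonstration.

```python
# Hypothetical sketch: confidence in an ensemble track forecast read as the
# inverse of track spread. Tracks are illustrative (lat, lon) waypoints at
# successive forecast steps, not real model output.
import math

tracks = [  # three toy ensemble members for one storm
    [(25.0, -80.0), (27.0, -81.0), (29.0, -82.0)],
    [(25.0, -80.0), (27.5, -80.5), (30.0, -81.0)],
    [(25.0, -80.0), (26.5, -81.5), (28.0, -83.0)],
]

def spread_at_step(tracks, step):
    """Mean distance (in degrees, a rough proxy) of each member's position
    at a forecast step from the ensemble-mean position at that step."""
    pts = [t[step] for t in tracks]
    mean_lat = sum(p[0] for p in pts) / len(pts)
    mean_lon = sum(p[1] for p in pts) / len(pts)
    return sum(math.hypot(p[0] - mean_lat, p[1] - mean_lon) for p in pts) / len(pts)

# Spread grows with lead time: tight clustering early in the forecast
# corresponds to higher confidence, diverging 'spaghetti' later to lower.
spreads = [spread_at_step(tracks, s) for s in range(3)]
```

In this toy example the members agree at the first step and diverge thereafter, mirroring how real spaghetti plots typically fan out with forecast lead time.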

Spaghetti plot graphics differ in how they visualize forecast models. Some spaghetti models include a set of hurricane track forecasts based on multiple simulations with variations (known as ‘perturbations’) of a single operational forecast center’s model. Examples of this type include the ‘Euro’ model produced by the European Centre for Medium-Range Weather Forecasts and the Global Ensemble Forecast System model (GEFS), produced by the US National Weather Service (NWS) (see Figure 2a).

Figure 2.

Two spaghetti plots for Hurricane Irma. a GEFS ensemble model. Each line is an ensemble member, or a potential track based on a variation of the same forecast model. Source: @JohnMoralesNBC6. © WeatherBELL Analytics. b ‘Poor man’s ensemble.’ Each colored line is a potential path that is the result of a different forecast model listed by its acronym in the key. Source: @FOX35Glenn. © FOX Television Stations.

Another type of spaghetti model shows a set of tracks based on model simulations from various forecast centers, and is known as a poor man’s ensemble (Ebert 2001) (see Figure 2b). The individual track forecast lines included in poor man’s ensembles differ based on the models by which they are generated. For instance, dynamical models produce forecasts by using supercomputers to solve physical equations related to atmospheric properties. Statistical models, in contrast, are based on historical relationships between storm behavior and storm-specific details such as location and date. (Footnote 1)

Another common hurricane risk visualization is the cone of uncertainty, or simply the cone. This graphic is issued by the US National Hurricane Center (NHC) as a formal forecast product known as the Track Forecast Cone. As shown in Figure 3, it displays the forecast of the potential track of the center of a tropical cyclone as a black line which is surrounded by a cone. The cone represents the boundaries of two-thirds of historical official forecast errors over the previous five years, and thus indicates where the eye of the storm may travel. In contrast to spaghetti plots, research on how well the cone of uncertainty communicates risk has been extensive. Comparisons of the effectiveness of spaghetti plots to the cone of uncertainty are reviewed in Section 2.2.1.
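The two-thirds construction can be sketched as follows. This is an illustrative simplification, not the NHC's actual verification procedure: the error values and lead times are invented, and the real cone uses fixed circle radii derived from official error statistics.

```python
# Illustrative sketch (hypothetical data, not NHC's procedure): the cone
# radius at each forecast lead time is set so that two-thirds of historical
# official track errors at that lead time fall inside it.
import math

# Invented historical track errors (nautical miles) by lead time (hours),
# pooled over a five-year verification window.
historical_errors = {
    12: [10, 15, 20, 25, 30, 35],
    24: [20, 30, 40, 50, 60, 70],
    48: [40, 60, 80, 100, 120, 140],
}

def cone_radius(errors, coverage=2 / 3):
    """Smallest radius containing `coverage` of the historical errors."""
    ranked = sorted(errors)
    k = math.ceil(coverage * len(ranked))  # number of errors to contain
    return ranked[k - 1]

# The cone widens with lead time, reflecting growing forecast uncertainty.
radii = {hours: cone_radius(errs) for hours, errs in historical_errors.items()}
```

Note that under this construction roughly one storm in three historically tracked outside the cone, which is one reason the cone's apparent definitiveness can mislead readers.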

Figure 3.

Cone of uncertainty for Hurricane Irma. Source: NOAA/National Weather Service.

Related Work

Here we review related research on studies of visual risk perception primarily conducted in laboratory settings, as well as research about risk communication in the context of hazards events, especially over social media.

Visualizing Risk and Uncertainty

Risk involves uncertainty, often expressed as probabilities. One challenge in any risk communication is representing probability in a way that people can understand (Gresh et al. 2012). Visuals are often used for conveying probabilities of weather events because they reveal data patterns, assist viewers in interpreting numerical information, and attract and hold attention (Lipkus and Hollands 1999).

Furthermore, maps, as a subset of visuals, are well-suited for communicating risk, because they allow viewers to personalize to their location (Roth 2012). Maps are superior to text-based representations for risk interpretation and decision-making in response to disaster warnings (Canham and Hegarty 2010; Cao et al. 2016; Cheong et al. 2016; Liu et al. 2017), and they are strongly preferred over text-based messages for most types of warning information (Cao et al. 2016). However, it is not clear that maps can fully resolve the confusion people encounter when interpreting risk imagery because of the fundamental issue that some people are unable to accurately identify whether they are in a hurricane risk or evacuation area (Zhang et al. 2004; Arlikatti et al. 2006; Lazo et al. 2015).

Hurricane forecasts can be either deterministic, meaning there is a single predicted forecast, or probabilistic, meaning there is a range of forecasts and therefore probability or uncertainty. Laboratory-based simulation studies have been used to investigate the effectiveness of various visual hurricane risk messages (Meyer et al. 2013; Wu et al. 2014; Rickard et al. 2017). In a study comparing ensemble and cone visualizations, Ruginski et al. (2016) found fewer misinterpretations for the former. Follow-up work by Padilla et al. (2017), however, showed that cognitive biases exist for both visualization types, with individual members of ensemble visualizations (i.e., the ‘spaghetti’ hurricane tracks) given too much weight in people’s risk assessments, which we also find in our analysis. While ensemble visualizations may lead to better understanding of the uncertainty and unpredictability of a hurricane forecast, they may be overall more cognitively difficult to make sense of than the cone (Cox et al. 2013). Here we newly examine how ensemble visualizations—the spaghetti plot—are approached through social media for risk interpretation in situ.

Crisis Informatics and Risk Communication

Public response to warning information is influenced by attributes of the warning including source, frequency, and channel, as well as the attributes of the message including consistency, credibility, accuracy, and understandability (Mileti and Sorensen 1990). People are more likely to consider risk qualitatively in relation to their prior experience and heuristics (MacEachren et al. 2005; Eosco 2008; Hasan et al. 2011; Gresh et al. 2012). They are more influenced by forecasts they see on television if they trust their weathercaster (Bloodhart et al., 2015) because of the parasocial relationship (Sherman-Morris 2005; Klotz 2011).

Risk and crisis communication ‘best practices’ typically include communicating openly to build trust with audiences (Seeger 2006; Veil et al. 2011). Social media affords multidirectional communication during crises, and so it is important that authorities understand their stakeholders and their expectations and concerns (Goldgruber et al. 2018). However, social media use by emergency management is known to be fraught with challenges including limited resources and training (Latonero and Shklovski 2011; Tapia and Moore 2014).

Though much research has studied messaging over social media during disaster management and response (for reviews, see Fraustino et al. 2012; Houston et al. 2015; Palen and Hughes 2018; Reuter et al. 2018; Young et al. 2020), social media has also become a common channel for warnings in the pre-disaster phase (National Research Council, 2013). As such, studies have begun to examine behavior specifically in such contexts. Demuth et al. (2018) showed how people’s risk perceptions evolved with changing information over the course of a hurricane, and the impact this had on their protective decision-making. Risk perception and communication have also been studied in the context of the global Zika health epidemic, with a focus on online interactions among members of the public to aid in their decision-making in the face of unreliable authoritative information (Gui et al. 2017). In such an event, people’s risk perceptions are characterized as ‘speculative’ in that people had to curate and validate information and consider possible outcomes among themselves (Gui et al. 2018). Other work has examined emergency management’s use of localized hashtags on social media during the pre-crisis stage of a hurricane (Lachlan et al. 2016).

In summary, we see that the laboratory studies traditionally associated with risk perception help to determine how visual features correspond to interpretation of the risk images, but they cannot account for the effect of interpersonal interactions when making sense of risk in real situations (Eiser et al. 2012). In part due to the rise of social media and its impact on communication around severe weather events, a national report recently highlighted a critical knowledge gap regarding the impact of information and communication technologies on how people interpret and respond to weather and risk information (National Academies of Sciences Engineering and Medicine 2018). The social media studies that attend to the warning aspects of crisis informatics are attempting to close this gap, with this research diving into how interactive comprehension is organized around risk graphics, which are a core component of hurricane warning communication.

Data Collection and Analysis

2017 Atlantic Hurricane Season

The window for this research is the 2017 Atlantic hurricane season, which saw notably heavy hurricane activity with six major hurricanes, the third highest number in a single year over the past century (Lim et al. 2018). It produced 17 total named hurricanes and tropical storms which caused thousands of fatalities and billions of dollars in damage. The social media activity around these hurricane events was extensive and rapid. We focus on the particularly destructive seven-week portion of the season which included Hurricanes Harvey, Irma, Jose, Katia, Lee, Maria, and Nate. Much of the season’s destruction occurred in Texas (Harvey), Florida (Irma), the Caribbean, especially Puerto Rico (Maria), and Costa Rica (Nate).

Method

We collected tweets in multiple steps by first identifying a list of 796 Twitter accounts which were US-based authoritative sources of hurricane risk information during the 2017 season. These weather authorities were identified with the assistance of weather sociologists at the US National Center for Atmospheric Research (NCAR) who possess expert knowledge about the weather industry landscape. We then collected the ‘contextual tweet streams’ (Palen and Anderson 2016) of those accounts from 17 August to 10 October, 2017—that is, all the tweets they generated in the date range, regardless of whether they pertained to the subject of study—so that discourse could be examined in the context of the longer narrative. This initial collection totaled over 9.8 M tweets, of which 85 K were original tweets (i.e., not retweets/quote tweets) that contained media, either images, gifs, or videos. The first author and a hired team of five trained undergraduate researchers manually and iteratively coded these 85 K tweets. One round of coding determined which tweets had imagery portraying hurricane forecast or risk information, and a second round classified these tweets according to an inductively generated coding scheme with 22 types of hurricane risk imagery (one category was ‘spaghetti plot’). In total, we identified and categorized 16,531 hurricane risk image tweets from 489 authoritative source accounts. For full methodological details on this data collection, please see (Bica et al. 2019).

In this dataset, 478 tweets were coded as containing a spaghetti plot graphic, which served as the starting point for the research presented here. These tweets were filtered down to only those that received one or more replies so that we could examine the nature of the risk communication exchange. Thus, the dataset used for this study consists of 281 tweets (Footnote 2) containing spaghetti plot images that were shared—and in some cases, but not all, generated—by 76 authoritative accounts, along with each tweet’s ‘conversation,’ as Twitter refers to the set of replies associated with a post. (Footnote 3) The conversations include direct replies to the top-level tweet, which in our set total 1424 replies, as well as threaded replies-to-replies, which contributed several hundred additional reply tweets (this could not be computationally quantified as the sub-conversation reply counts are not made available by Twitter). The replies are mainly from members of the public but sometimes from authoritative sources, including the originators when they engage in dialogue with others on their own tweets.
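The successive filtering steps described above can be summarized schematically. This sketch is our own illustration, not the authors' actual pipeline: the field names (`is_retweet`, `media`, `coded_spaghetti`, `reply_count`) are hypothetical, and the manual image-coding stage is simulated with a precomputed flag.

```python
# Schematic sketch of the dataset filtering described in the text; field
# names and the sample records are illustrative, not the study's real data.

def build_study_set(tweets):
    """From a contextual tweet stream, keep original tweets with media, then
    (after manual coding) spaghetti-plot tweets receiving >= 1 reply."""
    originals_with_media = [
        t for t in tweets
        if not t.get("is_retweet") and not t.get("is_quote") and t.get("media")
    ]
    # Manual, iterative image coding happens here in the real study;
    # we stand in for it with a hypothetical precomputed flag.
    spaghetti = [t for t in originals_with_media if t.get("coded_spaghetti")]
    return [t for t in spaghetti if t.get("reply_count", 0) >= 1]

sample = [
    {"is_retweet": False, "is_quote": False, "media": True,
     "coded_spaghetti": True, "reply_count": 3},   # kept
    {"is_retweet": True, "media": True,
     "coded_spaghetti": True, "reply_count": 5},   # dropped: retweet
    {"is_retweet": False, "is_quote": False, "media": True,
     "coded_spaghetti": True, "reply_count": 0},   # dropped: no replies
]
study_set = build_study_set(sample)
```

In the actual study these steps took the collection from 9.8 M tweets to 85 K original media tweets, to 478 spaghetti-plot tweets, to the final 281 with at least one reply.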

The methodological approach began with discourse analysis (Gee 2010) of the individual tweets conducted in collaborative group data sessions modelled after Jordan and Henderson (1995), in which the research team carefully reviewed the images and discourse over multiple passes so as to uncover increasingly more detail while limiting biases of any individual analyst. To this end, we printed each of the 281 tweets and their conversations, totaling over 500 pages of content, as they appeared originally on Twitter, maintaining the original threading of replies. This enabled manual, collaborative annotation of tweets with the research team in data sessions with close reading of details of spaghetti plots and surrounding discourse.

With each tweet, the data analysis team analyzed and recorded the ways in which risk was communicated visually and linguistically by the authoritative source and the ways in which risk was perceived, interpreted, and questioned by those who replied to the tweet. We also recorded: 1) in which instances and how the original authoritative source responded to questions or other types of replies to their own spaghetti model tweets, 2) in which instances there were threaded conversations (as opposed to singular, unthreaded replies) involving either the authoritative source and/or other people, and 3) the timing or pacing of conversations based on timestamps. Note that tweets were analyzed not as discrete units but each as part of the larger set of interactions that occurred around each top-level spaghetti model tweet, as a way to reject the ‘tyranny of the tweet’ as its own individual data point (Palen and Anderson 2016). Accounting for context in this way promotes more ‘responsible interpretation’ of people’s responses to the spaghetti models because it examines what might potentially be an arc of questioning and meaning-making—something that analysis of single tweets cannot do (Palen and Anderson 2016; Kogan and Palen 2018; Anderson et al. 2019).

In addition to analyzing the text and accompanying graphics of each tweet, we conducted an inductive thematic analysis (Braun and Clarke 2006) to iteratively develop themes that encompassed how risk was communicated and interpreted across the dataset. Specifically, we maintained a list of themes with corresponding examples from the data that arose during both our group discourse analysis sessions and our analysis of the interviews, described below. The list was iteratively revised with multiple passes over the data. The final set of themes is used to organize our findings in Section 4.

To support this qualitative analysis, we maintained a database of the 281 spaghetti plot tweets with both the metadata originally collected from Twitter (e.g. tweet text, timestamp, user details, media format) and additional details we determined either manually or computationally. Tweet-level details include:

  • Number of direct replies: All 281 tweets received at least one reply, with 108 also containing threaded replies (i.e., replies-to-replies within the conversation space).

  • Corresponding 2017 hurricane(s): To determine this, we inspected the tweet text, date, and imagery. The majority of spaghetti plot forecasts were about Hurricane Irma (n = 162), followed by Maria (n = 53), Harvey (n = 25), Nate (n = 20), Jose (n = 18), and Lee (n = 2). (Some tweets pertained to more than one hurricane, so the sum is greater than 281.)

  • Type of authoritative source who posted the tweet: Each authoritative account was exclusively categorized as Weather News (n = 57), Non-Weather News (n = 16), Weather Other (n = 2, a meteorology student and an independent meteorologist), and Weather Government (n = 1, NWS Houston).

  • Media format of the spaghetti plot imagery: Tweets were categorized as containing still images (n = 274), video (n = 6), or animated gif (n = 1).

In this paper, we include tweet data excerpts throughout to illustrate observations. These are formatted as:

[tweet excerpt formatting example]

These excerpts are denoted by vertical gray bars on the left. The username is underlined if it is the authoritative weather source who originally authored the spaghetti plot tweet. Authoritative sources are named in this paper because they are self-identified public figures whose accounts are authenticated (‘verified’) by Twitter. For non-authoritative accounts, we follow best practices in academic reporting offered by Fiesler and Proferes (2018) and anonymize usernames (using the convention @person1, @person2, etc.), though we note that this is a higher standard than institutional review boards require. We make one exception where the meaning that a username signals becomes relevant to the analysis. For this we draw upon historical reporting precedents that support revealing a username where there is no expectation of privacy and no apparent risk to the person. We note that Twitter requires identification of all usernames to give authorship attribution, a stance opposed to the direction in which academic reporting standards are moving. The text of some tweets is minimally altered for readability. Note that all tweets are from August to October 2017 and were limited to 140 characters (the tweet limit was increased to 280 characters later in November 2017). Arrows are used to indicate replies, with threaded replies nested beneath direct replies.

To complement our analysis of the tweet dataset, we conducted semi-structured interviews with a subset of the authoritative sources who shared spaghetti plots to understand how they thought about communicating risk and uncertainty for hurricanes with the public on social media. This was motivated by prior work that suggests that both the source and the content of risk and warning information influence people’s risk perceptions and responses (Mileti and Sorensen 1990), as well as by our own preliminary findings about the varying roles of experts in the risk communication. We received IRB approval for this research.

For the interviews, we narrowed the sample of 76 weather authority accounts to include just those operated by individuals so that they could be interviewed, resulting in n = 57. We further narrowed the sample to include only those who interacted in the conversation spaces of their tweets, and who had enough of those interactions to be discussed in an interview. This resulted in ten—all meteorologists—whom we invited to participate in an interview, with six agreeing. Notably, the weather broadcasters we invited who are not chief meteorologists seemed to have more trouble seeking authorization from their employers to participate, and so the final set favors chief and independent meteorologists.

Interviews occurred between February and August 2019 with questions based on each person’s social media posts (that they had available for review) to ground interview responses in their real posting behavior. Interviews were conducted via phone or online video call, lasted between 60 and 90 minutes, and were subsequently transcribed by the authors. Analysis of the interview data involved refining and adding to the themes generated through the social media data analysis, with a focus on attributes of authoritative communication that could only be revealed through interviews, such as insight about professional identity, employer connections, and why responses were formulated as they were.

Table 1 shows details for the interview participants, who included broadcast meteorologists at local television news stations and one independent meteorologist/journalist in regions affected by the season’s storms. As part of the human subjects protocol, the interview participants agreed to be de-anonymized so that we could tie their social media communications to their narrative explanations as obtained from the interviews. In some cases, their remarks are de-identified as appropriate for the research and/or at their request.

Table 1 Authoritative source interview participants.

Findings

We first describe findings about how members of the public respond to spaghetti plots shared by weather authorities, and how authorities respond to them in turn. The queries, assessments, reactions, and responses in relation to the plots revealed how they were being interpreted and explained as representations of risk. Throughout this reporting, data from the social media set are supplemented with data from the interviews that clarify how weather authorities see themselves in these interactions.

Building ‘Hurricane Literacy’

Before assessing risk, viewers must first be able to ‘read’ the risk representation. Queries to weather authorities often pertain to what the different aspects of the plots mean—the lines, the colors, the numbers, and the legend if there is one—with people hoping to get their interpretations verified.

As we see in the following example, the ability to read the plots is what experts and laypeople alike referred to as ‘hurricane literacy.’ Here, @person1 poses a question in reference to a graphic for Hurricane Irma that depicts the GEFS ensemble model, for which colors represent atmospheric pressure at different points along the forecast tracks, as indicated by the legend at the top.

[tweet excerpt image]

We come to learn across the excerpts in this paper that ‘hurricane literacy’ is hard to achieve, and it is not helped by the lack of standardization of spaghetti plots across weather graphics producers. This makes ‘hurricane literacy’ an unstable concept at best, in part because knowledge about one type of graphic may not generalize to others. Even the ubiquitous cone of uncertainty, though formally standardized by the NHC, is often misinterpreted (Broad et al. 2007). Therefore, in the exchange above, the fact that @person1 fails to get an answer to their colorization question is problematic because the colors here do have distinct meanings; in this case, they correspond to atmospheric pressure, but in other graphics they correspond to different attributes. Another example from Harvey, with color representing models, illustrates this, showing how even the first stage of literacy is hard to achieve because of non-standardization:

[tweet excerpt image]

Here, the omission of a legend in the graphic perhaps suggests to @person2 that there is some standardization around color meaning that is simply unknown to them personally. The weather authority who posted the image—@spann, denoted by the underlining of his name—responds almost immediately, though with brevity. He answers the question, but does not elaborate on how probability might be understood when reading the plot, except to make an oblique reference to ‘models.’ This is consistent with Spann’s general approach, as we learn in his interview, which is to respond to as many posts as possible. This helps by correcting assumptions across a large number of people, which in turn helps people make better protective decisions and build literacy. However, it also restricts answer depth, which perhaps limits how one learns to generalize to future risk graphics.

In this next example, @person3 tries to make sense of the track lines for Harvey, this time by considering the legend. This person also receives a quick but unelaborated response from a weather authority. Note that the legend is not in alphabetized order, suggesting some other ordering effect that @person3 is trying to discern:

[tweet excerpt image]

What @person3 does not have an opportunity to learn in this interaction is that the legend is again non-standardized. Our consultation with the NCAR scientist who generates this specific plot reveals that the order of the models in the legend is based on how he parses the data, which roughly aligns with highest to lowest recent accuracy of each model, though this attribute is not apparent in the depiction itself (Jonathan Vigh, personal communication, 28 March 2019). With this in mind, we perhaps understand @SpaceCityWX’s unelaborated response of ‘No’ to mean something more: perhaps he as the mediator—not the producer—of the graphic could not offer more explanation about the ordering other than what he knew it was not.

In the following excerpts, we see people focus on individual hurricane tracks, especially outlying tracks or tracks that coincide with their local area (consistent with Padilla et al. 2017):

figuree

What this exploration of spaghetti plot legibility illustrates is that, though the tracks are meant to be viewed as a set (an ensemble), their depiction as lines affords a more deterministic reading. This sidesteps the problem the cone of uncertainty encounters with its readership—people sometimes think the cone depicts a hurricane increasing in size as it travels (Ruginski et al. 2016), when it actually depicts the wide area the storm could affect, with track uncertainty increasing over time. However, the spaghetti plot introduces the problem of artificial precision. When experts respond to this overreading of precision by members of the public, they strive to at least negate the assumption to keep misconception at bay, and sometimes elaborate on how to read ‘uncertainty’ as time and text space allow. When people do strive to read the tracks as a set, we sometimes see affirmation of the interpretation, as this excerpt illustrates:

figuref

In interviews, meteorologists commented that the public’s understanding of spaghetti plots was ‘not great,’ ‘probably pretty poor,’ and ‘low, and that is no fault of their own whatsoever’—with this last statement showing that they attribute poor understanding to the complexity of risk communication. Even with these concerns, one meteorologist expressed that providing spaghetti plots is important because:

most people in hurricane prone areas are used to them and quite frankly they expect to see them and if they don’t find them from you they are going to go find it from somebody else.

We might ask why this is so. Perhaps it is because readers do not know they do not understand the plots or, if they do know their knowledge is limited, the spaghetti plot may serve as a reassuring indication that the work of forecasting the hurricane is being done by someone, somewhere. It may be reassuring that one’s local meteorologist can explain the plot, or that agencies that produce the plots are in the background monitoring the activities of earth systems and taking stock of how the hazard will impact the public. This idea provides some basis for thinking about the relationship between science and the public, and the multiple roles of scientific communication, which we return to in the Discussion.

Localizing Risk

A prevalent theme found in responses to spaghetti plot posts was people’s desire to localize risk. The risk literature refers to how people ‘personalize’ risk, or interpret risk in terms of personal impacts (Mileti and O’Brien 1992; Lindell and Perry 2012). We use ‘localizing risk’ because it expresses the additional correspondence to the geographic qualities of the spaghetti plots in the personalization of risk. Spaghetti plots are superimposed over broad swaths of land, sometimes even displaying the entire map of the Atlantic Ocean or continental US (e.g., Figure 1). The track lines may also span an exceptionally large geographic region (e.g., Figure 2a), therefore putting a large population under potential threat. Despite this macro view in spaghetti plots, we see people narrow in on how risk may affect them personally or hyperlocally based on their circumstances:

figureg

Some localization is done with expertise or high ‘hurricane literacy’ as in this next exchange between two friends in reply to a spaghetti plot for Irma. They discuss risk at the geographical level of the state of Florida, and in relation to the prior year’s Hurricane Matthew:

figureh

People also comment on how the threat affects areas to which they intend to travel, not unlike what was seen in the Zika outbreak (Gui et al. 2017). This is seen in the following excerpts about Hurricanes Harvey and Nate, respectively:

figurei

In this excerpt above, meteorologist @EricBurrisWESH concedes that despite being too early to know the track or timing, the risk cannot be ruled out for that particular area. The three-day window he offers for potential landfall is conditioned on uncertainty (‘IFFFFFF’)—what we call an act of ambiguation that allows some precision in an answer without foreclosing possibilities, therefore making the whole of the messaging as accurate as possible. Another excerpt offers additional insight:

figurej

Here we see that @spann responds quickly and parsimoniously with only a link to his weather website, The Alabama Weather Blog, on which he had already provided a detailed forecast which included the ‘Saturday’ @person13 asks about. By doing this, he as an authority is offering an open-ended response to communicate the uncertainty of the forecasts, reflecting the most accurate interpretations of the data he has at the time. The choice to answer without confirmation or affirmation even when a question is framed that way indicates a commitment to holding open possibilities until there is enough information to rule them out. @Person13 responds in a way that suggests he understands that tomorrow will bring a forecast that is more predictive of the location he is concerned about.

Awareness of the Larger Region under Threat

In addition to localizing risk for oneself, we also found that people expressed awareness of the broader impacts of the storm to other geographic regions. This is likely enhanced by the public nature of the platform. For example:

figurek

We found instances of this phenomenon performed with some admonishment in response to forecasts that are ‘good’ for some but surely bad for others (like ‘Alabama’ in the previous excerpt). The following three cases show members of the public reacting to TV weathercasters who speak to their own regions of effect and neglect to account for the impacts to other areas:

figurel
figurem

This suggests that the collective gaze of members of the public provokes the display of empathetic understanding of how hazards affect others (Sontag 2003). In addition, in terms of communicating the forecast to the public, these excerpts show that when the forecast seems clear, experts are willing to close out some or all ambiguity. However, such an act only closes out the forecast for a particular geography, and so from the points of view of those outside the area, they remain as uncertain as before, perhaps even more so (‘but what about the rest of us’).

In interviews, TV broadcast meteorologists noted the tension between their professional responsibility to geographies they cover for their local TV news audience and the larger social media arena that might be attending to their posts. One explained,

I realize that people in South Texas—and I might have some followers down there, and I might have followers that live outside of my area, but there’s some meteorologists around the country who like to be everybody’s weatherman, and they want to post, you know, whatever the story is... I believe in serving my local community and so that is my focus.

To this point, another uses social media to reach a broader audience than TV allows:

In the television business we have a designated market area (DMAs) that define your market. My market…[has] like 23 counties. The digital world doesn’t stop at a county line. In the digital world you can reach anybody, so that is very appealing.

What we learn from the participation of many people from a large area under threat—all of whom are represented in the same scientific, high-level, earth-systems rendering—is that the established professional practice among TV meteorologists does not align with the communicative responsibilities they invite by acting online to help people make protective decisions. We now examine this in greater depth.

Communicative Responsibilities of Weather Authorities

The communicability of the spaghetti plot as an information artifact is not only a function of its design, but also of how authoritative sources employ expertise to explain the risk it depicts. This next section focuses on this mediation function by identifying three ‘communicative responsibilities’ enacted by meteorologists in dispersing risk information in the current information landscape: the responsibilities of interaction, interpretation, and maintaining uncertainty.

The Responsibility of Interaction

With the increasing importance of digital information sharing during severe weather events, weather authorities face increasing responsibilities of interaction on multiple platforms for both traditional and social media. We learn from the meteorologist participants that interacting with audiences is important to their understanding of their role in hurricane risk communication:

…to be a part of that conversation...as a meteorologist, as a TV station, you’re part of a larger brand and I think there is a responsibility and a professional, almost, requirement to be in that conversation.

Indeed for one meteorologist, reaching his audience via social media is a ‘performance metric’ by which he is evaluated, while another acknowledges that this multi-platform interaction is self-imposed:

I do my best to look at everything and it takes time—you don’t even know how long it takes to look at this stuff.

The demand to always be ‘on,’ even when not ‘on air,’ makes prioritizing time and resources around public interaction challenging but necessary during a severe weather event. During uninterrupted, ‘wall-to-wall’ coverage of a hurricane on his TV news station, a participant says:

I’ve got a hurricane that’s threatening South Florida…if I’ve got a threat to this market, then that means I’m on TV a lot and… carving out time for social media becomes challenging.

Furthermore, weather authorities reported employing multiple online platforms in different ways to communicate in hazards events. Some meteorologists use tweets to direct people to websites or other forums where they further elaborate the details of a spaghetti plot as well as other forecast information:

figuren

Participants described Twitter as being ‘like a cocktail hour’ or a ‘newswire’ that directs listeners to their own weather websites where ‘all [their] best information’ goes, which is similar to how a sheriff’s office studied during a flood event treated Twitter as ‘real time notification tool’ and their blog as an ‘information backbone’ (St. Denis et al. 2014).

Participants describe answering questions about weather risk as important because ‘people may not understand exactly what’s happening’ and need advice about protective actions such as evacuation or canceling travel plans. They describe the need to be ‘straightforward’ and to take on an attitude that ‘there are no dumb questions,’ but rather all should be taken ‘very seriously.’ One specifically tries to prioritize by answering questions that ‘could help not just that one person but many others as well.’ This approach resembles the purpose of early CSCW research on reusing answers to commonly asked questions to build an information resource over time while avoiding duplication of effort (Ackerman and Malone 1990).

The Responsibility of Interpretation

Weather authorities who post risk graphics online report that interpretation of those representations is part of the risk communication. They need to ‘tell a story,’ following the journalistic tradition with which they also identify (Demuth et al. 2012). By ‘telling a story’, they provide context for forecasts, as opposed to ‘regurgitating data’:

otherwise I’m just regurgitating data for you and that’s not my job. My job as a meteorologist is to simulate the data the best I can and to figure out what my message is that I want to put out based on the data that I’ve looked at…

Storytelling serves the purpose of creating a kind of abstract of the event. It reduces some of the complexity around the concept of uncertainty by personalizing and localizing the risk, which is important for protective decision-making (Mileti and Peek 2000). Images support the storytelling and are also seen to draw more attention to social media posts than text alone. One participant noted that graphics are helpful for his market which covers regions where people do not have high literacy. Another developed a communication policy around this which was to ‘always’ include images with his social media posts.

Graphics like the spaghetti plot also help externalize scientific practice. One meteorologist says that showing the range of potential outcomes helps explain that hurricane forecasting is not an ‘exact science’:

…because you’ll often read that even the [US National] Hurricane Center forecast discussion, ‘well we think this’ll happen but this could happen,’… and so just to kind of help people understand that it’s not obviously an exact science and so we explain to them that we think this could happen, but this could happen as well, and sometimes that works really well with the graphic.

Meteorologist @HellerWeather does exactly this when he engages with a Twitterer about forecast models and their representations in risk communication, encouraging them to not just seek the ‘one [model] that’ll scare you’:

figureo

Similarly, @JohnMoralesNBC6 received a question about his interpretation of a spaghetti plot for Irma that asked why he notes that the relative positioning of the models is ‘important’:

figurep

Morales’s response here is more technical and less accessible to those unfamiliar with spaghetti plots than most interactions we saw with members of the public. Morales elaborated on this exchange in his interview:

Look at their handle… ‘weather stud.’ Now I can’t recall ever going into this account to figure out who this person is, but [this] person’s asking me a question that would probably not come from a layman.

In this instance of recipient design (Palen and Dourish 2003), we learn that Morales recognized that this person had some level of expertise based on both their username (@wxstud, where ‘wx’ means weather) as well as the content of their question. This example is an exception to what we learned was Morales’s general strategy of carefully allocating time to answer questions that can help people broadly; in this case, he provides a specific, technical answer in a one-to-one interaction with someone he perceives as not a ‘layman.’

Finally, the tweet data reveal that there is a sense among members of the public that some computer models are better than others, or conversely, that some are worse. One participant described how he excludes track lines produced by ‘models that are just not as robust that don’t have a good historical track record’ in spaghetti plots that he creates to show on TV (though this is less feasible with spaghetti plots he shares on Twitter that are generated by other sources online). Another described what are known as poor man’s ensembles as graphics ‘where they’re all kind of thrown together, some garbage and some good models. I would not share those in a particularly serious way to get the forecast.’

The following exchange has a member of the public asking Spann to ‘clear out all the trash paths’ from a spaghetti plot for Harvey:

figureq

In the interview with Spann, we learned that this second image he shared shows only the official NHC center track of the hurricane—it is not probabilistic like the first image, but rather a deterministic one, showing only one outcome. However, he explained that he did not initially share this single-track image because the exact track was not indicative of the primary threat: that Harvey was ‘a flood problem’ because of its looping and stagnating trajectory that would affect a large region beyond what the track line showed. In this case, the ambiguity of the tracks in the initial image when considered as a whole more accurately communicates what Spann believed to be the most important risk message.

We learned in interviews that experts find providing such interpretive context in these ways to be a matter of professional responsibility, as we see in these statements from two meteorologists:

You see them [spaghetti plots] everywhere—they’re on national newscasts, they’re on the cable networks, they’re on the local channels, they’re on all the social media feeds—but we have to be responsible when we [share them], and give some context to what they’re looking at.

You can get the model plots anywhere, you can get them all over the place, but my job is to help you interpret what that data is actually showing.

The Responsibility of Maintaining Uncertainty

Though providing explanation through interaction seems to help audiences understand spaghetti plots, there often is only so much even an expert can provide. Hurricane forecasts are inherently uncertain. Our analysis revealed that authorities tend to orient their messaging around the uncertainty underlying these representations. For instance, many weather authorities use some variation of ‘it’s too soon to tell’ with regard to spaghetti plots as we see in these excerpts here:

figurer

Another interaction further underscores the unknowability of the storm’s path:

figures

In this case, @EricBurrisWESH’s post refers to the agency that produced the model, indicating that the meteorologist is a mediator who can interpret risk for his audience but who cannot predict risk for a large scale earth system—‘they’ do that, allowing @EricBurrisWESH to point tacitly to science as governing what it is we can know.

The following interaction takes an interesting turn around the idea of ‘honesty,’ which is used both colloquially and literally in the following exchange:

figuret

It seems that @person26 is using ‘be honest with me’ as a colloquial phrase, as in ‘tell me the whole of the threat; I can take it.’ It is the kind of talk one might have with a doctor who is explaining a diagnosis, with the patient presuming that the doctor is politely holding back the ‘whole truth’ until the patient is ready to hear the bad news. @MichaelHaynes appears to respond more literally (‘we try our best to be honest’) at first, but then clearly attempts to reframe the lesson into the bigger realm of the difficulty inherent in narrowing uncertainty. He is saying that there is no additional information to help @person26 assess their risk, nor is anything being hidden. This captures a core idea of conveying uncertainty—that unequivocality is not authoritative until enough information comes into view. Rather, holding onto ambiguity is authoritative until the likelihood of risk is low enough to release it.

Mediators like these weather authorities must disambiguate the meaning of the spaghetti plot (as discussed in Section 4.4.2). However, they must also re-ambiguate when members of the public try to read the plots as overly determined. We see meteorologists performing these acts of ambiguation in the following illustrative exchanges, which emphasize the dynamic (‘last minute’, ‘changing’) and probabilistic (‘uncertain’, ‘chance’, ‘possible’) nature of the forecasts:

figureu

The weather authorities are deliberately leaving things open rather than foreclosing possible scenarios that could still reasonably emerge (Soden et al. 2017). This is echoed in interviews:

If it’s a weak disorganized tropical storm that’s going to be a flood threat, that line means absolutely nothing. You know you’ve really got to communicate, ‘this could create flooding 300 miles up the coast here, not that little dot.’ And everyone’s different, every hurricane’s different, every tropical storm is different, the impact is different.

and:

When Harvey was going to make landfall…and you lived in Southeast Houston, we couldn’t say whether it was going to rain 10 in.…or 40 in. at your house—there’s enough uncertainty about the track and rainfall intensities that we just don’t know…so you need to be prepared for this scenario, but realize that something else may well happen. And it’s just part of like being real with people…not trying to be like uber hot shot forecaster who’s got it, ‘oh it’s going to be a hundred and twenty mile per hour storm going to hit Corpus Christi and go here…’ I mean it’s…look you know this could happen or this could happen.

One participant specifically described this maintaining of uncertainty in relation to spaghetti plots:

I think when you see a bunch lines and you’re not sure which ones to look at you feel uncertain. I think it—almost the confusion that it generates is almost like the confusion you should feel when you try to forecast a hurricane.

He also noted the importance of designing forecast visuals like the spaghetti plot and the cone in ways that introduce ‘more ambiguity in the forecast which really should be there.’

The problem of unequivocality in authoritative communication when uncertainty is present was also noted by a participant in the context of ‘rogue’ entities, i.e., weather companies that operate in the private sector and use commercial weather forecasting products that are separate from, and sometimes in competition with, those used in the public sector or government like NOAA and NHC. He also discussed ‘rogue’ social media accounts, run by ‘kids who love weather’:

They look for the worst case scenario from any map they can find even if they don’t understand it and they’re going to throw it out there, and those are the ones that get shared hundreds of thousands of times, and all the clicks and the likes and shares, and they learned the trick: the more outrageous the scenario, the better chance you’re going to get all the likes on Facebook.

It takes ‘no skill,’ he goes on to say, to find and share an early, sensationalized, unequivocal forecast that may appear confident but is actually only falsely reassuring. This acts as a form of misinformation, for which authorities have an increasing responsibility to correct in online communications (Starbird et al. 2018).

These observations underscore the idea that visual hurricane risk representations sit between multiple populations of scientists, mediators, and laypeople, working as unwilling boundary objects. They do not seamlessly support the needs of these groups, but rather require a great deal of interpretation work at the level of building ‘literacy,’ as well as re-ambiguation work to make them serve different geographic contexts. They sit at the intersection overburdened with this complicated work because little else currently does. Indeed, these graphics are generated from an earth systems view of things—a place of scientific authority—for purposes of application by experts ‘on the ground,’ but were not originally imagined for public consumption without a degree of expert mediation.

Discussion and Implications

This research considers risk communication as an interactive achievement that is mediated by various actors and information representations. By isolating one form of risk communication—that which is abetted by the spaghetti plot disseminated via social media—we can examine the responsibilities and practices borne by experts combined with the effort made by the public to assess risk.

Though interpretability of risk via graphics has been studied in laboratory settings (Wu et al. 2014; Ruginski et al. 2016; Padilla et al. 2017), when confronted with these graphics in real-life severe weather events, people typically encounter them with guidance from a weather authority such as a TV meteorologist who narrates the ‘stories’ behind them. These graphics are now available online, though without the rich, accompanying verbal and gestural narration of TV broadcasts. When shared via social media, they alternatively afford dialogue between and among authorities and viewers to aid in their interpretation. Furthermore, TV broadcast meteorologists today interact with their audiences across multiple media in different ways than they did before, demanding new ways in which they engage the trusting, parasocial relationship they have with their audiences. Past crisis informatics research has shown how the roles of journalists (Dailey and Starbird 2014) and emergency managers (Latonero and Shklovski 2011; Denef et al. 2013; St. Denis et al. 2014; Hughes et al. 2014) in disasters have evolved in this new media landscape due to the participation of the public in information production. In this research, we show how this has likewise shaped the role of meteorologists, who, in anticipating severe weather hazards, feel the responsibility to interact with large public audiences to understand and guide their interpretations of risk.

Achieving Accuracy by Maintaining Appropriate Ambiguity

At a high level, we see that experts and laypeople alike struggle with communication around representations of hurricane risk, citing inadequate ‘hurricane literacy’—a literacy that is quite hard to achieve. Spaghetti plots are created with non-standardized industry practices with respect to their visual features such as colorization, legend use, and model ordering. Public responses to the spaghetti plots reflect a range in levels of understanding from focusing on individual lines to considering larger trends, though, even so, the plots are an expected part of hurricane risk communication. They simplify features of risk representation that other graphics do not, though the interpretability of the tracks is often over-determined. Experts, then, must battle that over-determinism with re-ambiguation of risk to maintain the accuracy of the forecasts, such as by sharing additional information or clarifying misunderstandings.

We interpret the persistent presence of spaghetti plots in spite of these issues to be an important explicit signal that someone, somewhere is attending in a scientifically authoritative way to major weather systems. Mediators like weathercasters are needed to serve as translators of renderings into protective actions. Moreover, the collective gaze onto spaghetti plots in the public settings of social media invites viewers to acknowledge not just their own risk, but that of others, resulting in expressions of concern for others, though at the highest of levels (‘sorry Alabama,’ ‘bad for the Caribbean’). We wonder, however, if such acknowledgement of the global implications of weather systems can be heightened to increase awareness about climate change issues.

We also found that authoritative communicators of spaghetti plots assume a responsibility of interacting with their audiences even when they are not on-air (in the case of broadcast meteorologists). Yet, working on social media to answer questions requires time they may not have, especially when a hurricane is imminent. In the face of this, there is remarkable dedication by some meteorologists to answer as many questions as possible, even if briefly. Other responses direct readers to blog posts or websites, making Twitter a way-stop to long-form content and even narrated video explanations that cover forecasts in more detail. Authoritative communicators also bear responsibility to interpret the risk representations for both those who know little about how they are constructed and those who know much more. There is a sense that answering one person will help others, but the occasional weather hobbyist elicits an expert answer that tweet-length limits force to be parsimonious, which in turn limits how much novices can learn from it.

Together, many of these features of communicating risk can seem to make the risk representations more declarative and determined—more ‘accurate’—to viewers than intended. The over-determinism with which many people interpret the spaghetti plots is exacerbated by other factors, including the urgency of protective decision-making in the face of a hazard, and the space limit of microblogs with respect to how much text can be used to explain complex scientific imagery or answer questions. The interactions between the public and the experts serve to modulate this declarative quality by allowing experts and sometimes other members of the public to communicate accuracy through acts of ambiguation in response to people’s questions and (mis)interpretations of the graphics.

Implications of the Research

The implications of this research extend to matters of design to improve risk representations while also yielding insight about what acts of design can do to drive systems-level change; of practice, with respect to the mediation of risk products to a large audience looking for individual answers; of scientific communication, with respect to how the public can engage with earth systems; and of how CSCW can shape future risk communication.

Design and Design Demands as Pressure for System-Level Change

Spaghetti plot representations of hurricane risk can be confusing even to ‘hurricane literate’ readers for multiple reasons. For one, they are not standardized, so interpretations of one version of a spaghetti plot do not carry over to other versions. Second, viewers want to assign visual and graphical features such as color and legend ordering to have meaning so as to make sense of a complex graphic, but again these do not always have meaning, or have meanings that differ across graphics. Third, some representations are more complete than others in that they include legends and labels for individual models.

Standardizing features of spaghetti plots could certainly better support interpretation of them. For instance, color is often used meaningfully in other risk and uncertainty visualizations (MacEachren 1992; Bostrom et al. 2008; Roth 2012; Sherman-Morris et al. 2015); however, in many spaghetti plots it is used arbitrarily. Colors could be consistently mapped to specific models across spaghetti plots from different producers, so that viewers can always look, for instance, to the black line to see the official NHC track forecast.

The presence of a legend is essential whenever the image appears anywhere other than a TV broadcast, where the weathercaster in effect serves as the legend. Problems arise when graphics made for TV do double duty on social media. Additionally, the ordering of forecast models in the legend could be made meaningful, such as ordering by model accuracy or likelihood if such information is available, or otherwise alphabetically by model name so as not to introduce an unintended ordering effect. Of course, this sort of standardization is easier said than done, because renderings correspond to the idiosyncrasies of their origins; standardization would require changes in design policy that align weather agencies and the various groups that generate the plots.
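The standardization ideas above—a shared model-to-color mapping and a meaningful (or at least neutral) legend ordering—can be sketched in code. This is a minimal illustrative sketch of what such a convention might look like, not an actual industry standard: the model names, colors, and skill scores below are all assumptions for demonstration.

```python
# Hypothetical style guide for spaghetti plots. All model names, colors,
# and skill scores are illustrative assumptions, not a real standard.

# Shared palette: every producer would map the same model to the same
# color, e.g., the official NHC forecast track is always black.
MODEL_COLORS = {
    "OFCL": "black",   # official NHC track (hypothetical convention)
    "ECMWF": "blue",
    "GFS": "red",
    "HWRF": "green",
}

def legend_order(models, skill=None):
    """Order legend entries by recent model skill when available
    (highest first), otherwise alphabetically, so that no unintended
    ordering effect is introduced."""
    if skill:
        return sorted(models, key=lambda m: (-skill.get(m, 0.0), m))
    return sorted(models)

# Example: alphabetical fallback vs. skill-based ordering.
models = ["GFS", "ECMWF", "HWRF", "OFCL"]
print(legend_order(models))  # alphabetical when no skill data exists
print(legend_order(models, skill={"OFCL": 0.9, "ECMWF": 0.8,
                                  "GFS": 0.7, "HWRF": 0.6}))
```

Under such a convention, a viewer could rely on color alone to locate the official track in any producer’s plot, and the legend order would either carry stated meaning (skill) or be deliberately neutral (alphabetical), addressing the kind of ordering ambiguity @person3 encountered.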

This brings us to a larger point, that of the systems-level role that design can bring to a world where information artifacts are distributed across the organizational boundaries and media platforms in which they were previously contained. The vast distribution of similar but non-standardized spaghetti plots and the responses they receive might create a kind of bottom-up pressure on the origins of the artifacts themselves. Those creators—agencies, institutions, and individuals—might be required to rework what risk representations across populations and delivery platforms are like, thereby further engaging in and designing the larger informatics milieu in which people are making decisions. This might mean that specialized information artifacts, distributed across many people and supported by many intermediaries, could themselves be the drivers for new organizational relationships.

Communicative Practice by Weather Authorities

The data give rise to ideas about how hurricane risk communication in general could be improved to support meteorologists as mediators who need to communicate risk succinctly and persuasively, while maintaining the ambiguity that is part and parcel of uncertainty. In particular, they need to convey other aspects of hurricane risk: beyond where the storm will precisely travel, what additional harm will it cause as it traverses land? Flooding is often a major risk in hurricanes, and even more so if the storm is disorganized and stagnant, as Harvey was over the Houston, Texas region. Multiple weather authorities noted this particular communicative challenge of emphasizing flooding over other features of the forecast. One participant imagined a new risk graphic that could improve upon this by outlining day-by-day forecasts both visually and textually—here’s what’s going to happen—so as to emphasize how impacts like flooding continue after the point of landfall. This would support the kinds of questions from members of the public that we saw in the tweet data about interpretation of risks (How bad will it be? Should I be concerned?) and protective actions (Should I evacuate? Should I change my plans?). It refocuses the concern not on precisely where the track will be, but on how large the area of effect could be and the secondary threats that may arise. This is a complementary aspect of maintaining ambiguity about a storm’s track by refocusing on building capacity for preparatory activities.

Scientific Communication

An area of reflection emerged from this research on the interactivity between people who collectively gaze upon the implications of risk for many, ‘beyond the personal’ (Gui et al. 2018): can the ubiquity of these map-based representations of earth systems influence public understanding of the larger effects of climate change? What opportunities do we see to educate and broaden perspectives? The answers may lie in part in the other implications discussed above, but with elaboration around reconciling how the actual tracks play out against projections, thereby raising the question of how models are created in the first place. Does this become yet another responsibility of producers and mediators of these representations, or is there a place for more human and informational mediation of risk along the pipeline, a pipeline that also extends after a hurricane has wielded its wrath? Social computing has a role to play in scientific communication, as it already does here in hurricane risk assessment.

Risk Communication as an Opportunity for CSCW

As we consider risk communication more generally, beyond the specific form it takes here, we can ask how it further beckons CSCW engagement. Now that risk communication between formal entities and members of the public has entered the digital sphere, the platforms that host such communications draw a collective gaze that itself becomes visible, and hence more interactive, and different from the broadcast models out of which such communication was first born.

As we have learned from the crisis informatics research that has evolved over the years, the collective gaze can enable many to learn from the questions of a few, in keeping with the early CSCW hopes of systems like Answer Garden (Ackerman and Malone 1990), and it can check for errors, misunderstandings, and the one-off ‘rogue actor’ in the early crowdsourcing sense (Palen et al. 2009). However, we also know that benevolence is not universal, and that the collective gaze has often been hijacked by actors who know how to infiltrate and do harm on a larger scale in ways that are not detectable (Starbird et al. 2014). Risk communication around hazards of various kinds seems like a target of particular concern for this new age of misinformation, as a region or population already affected by threat might be additionally vulnerable to digitally-abetted misanthropes who are interested in operating at scale.

If this is our digital future, then risk communication must graduate quickly from its TV-based broadcast formula, which has been co-opted for social media venues—even to the point of TV weathercasters being the most interactive among meteorologists, at least within the scope of this research. It is in this need to move expeditiously that CSCW might respond, and for which we raise more questions than we currently offer answers.

What do platforms for next-generation broadcasting need in order to support interaction, such that many can interact and listen beyond the limits of just a few characters? We do not think we should be constrained by imagining the future to be only platforms that attempt to support everything and in doing so support so very little. Is there a place for dedicated environments in which certain forms of risk communication can ensue, perhaps at least for recurring natural hazards like hurricanes that we know to expect? If so, what is the governing model for such an environment, from the level of code to the level of human interaction? Who or what runs it?

How can risk representations be interactive and transcend some of the limits of static imagery, such that interpretations can be personalized through localization while also being accessible to many? How can the relationship between a few experts and many nonexperts be rethought and newly supported, such that the questions inform expert responses across events that differ from instance to instance?

What of the organizational arrangements of risk communication in particular and scientific communication more generally? Have we learned that scientific representations are meaningful to the public and should not be simplified, and that graphical improvements and ongoing public education are the best ways to engage people with earth-systems views when those systems are the origin of threat? The experiment of social media is currently testing this, as there are few to no steps between those who produce the graphics and persons on the ground. Or perhaps more of the question remains open, and we might ask whether to purposefully design more forking of explanation at different points in the chain of warning, and for different purposes.

Risk communication is itself a risky business. For hazards like hurricanes, risk messages are generated from a massive and world-wide enterprise of scientists, communicators, computation, sensors, and data. Easily disseminated visual risk representations in particular have implications for human safety and the economics of hazards response. The work of this enterprise might output representations as seemingly straightforward as a spaghetti plot, but the implications of those representations are sweeping. Those representations are political. In a time that is being held hostage by misinformation campaigns across the same social media platforms where safety-critical information is also being delivered, we must work toward supporting all aspects of the risk communication enterprise, as it winds its way from scientific genesis to a listener’s ear, with each step offering a host of new CSCW problems to tackle.

Conclusion

Risk communication is an interaction between readers, mediators, and messages which often contain graphics that are intended to parsimoniously describe risk. As we found in this research on the phenomenon during the 2017 Atlantic hurricane season, the interaction between weather authorities and members of the public strives to achieve accuracy regarding hurricane risk without being falsely precise; it is necessary for the communication to maintain uncertainty until it sufficiently narrows. Acts of ambiguity hold at bay a tendency toward over-determinism, characterizing the work that happens when graphics that traditionally are verbally and gesturally narrated appear on social media for even closer and perhaps wider inspection, but without the same interpretive support.

Notes

  1. https://www.nhc.noaa.gov/modelsummary.shtml

  2. An additional 10 tweets had spaghetti plot imagery and initially received replies, but these replies became unavailable due to deletion or account suspension and thus the tweets are not included in this analysis.

  3. https://help.twitter.com/en/using-twitter/twitter-conversations

  4. As it happened, the forecast models for Hurricane Irma, like those shown in @FOX35Glenn’s tweet, were poorly predictive of its track. Thus, even though many forecasts similarly predicted the storm to travel east, Irma eventually traveled up Florida’s west coast and had damaging impacts across the state.

References

  1. Ackerman, Mark S.; and Thomas W. Malone (1990). Answer Garden: a tool for growing organizational memory. ACM SIGOIS Bulletin, vol. 11, nos. 2–3, pp. 31–39.

  2. Anderson, Jennings; Gerard Casas Saez; Kenneth Anderson; Leysia Palen; and Rebecca Morss (2019). Incorporating Context and Location Into Social Media Analysis: A Scalable, Cloud-Based Approach for More Powerful Data Science. In Proceedings of the 52nd Hawaii International Conference on System Sciences. Grand Wailea, Maui, Hawaii, USA: IEEE Computer Society Press, pp. 2274–2283.

  3. Arlikatti, Sudha; Michael K. Lindell; Carla S. Prater; and Yang Zhang (2006). Risk area accuracy and hurricane evacuation expectations of coastal residents. Environment and Behavior, vol. 38, no. 2, pp. 226–247.

  4. Bica, Melissa; Julie L Demuth; James E Dykes; and Leysia Palen (2019). Communicating Hurricane Risks: Multi-Method Examination of Risk Imagery Diffusion. In CHI ‘19. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM, pp. 1–13.

  5. Bloodhart, Brittany; Edward Maibach; Teresa Myers; and Xiaoquan Zhao (2015). Local climate experts: The influence of local TV weather information on climate change perceptions. PLoS ONE, vol. 10, no. 11, pp. 1–14.

  6. Bostrom, Ann; Luc Anselin; and Jeremy Farris (2008). Visualizing seismic risk and uncertainty: A review of related research. Annals of the New York Academy of Sciences, vol. 1128, no. 1, pp. 29–40.

  7. Braun, Virginia; and Victoria Clarke (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, vol. 3, no. 2, pp. 77–101.

  8. Broad, Kenneth; Anthony Leiserowitz; Jessica Weinkle; and Marissa Steketee (2007). Misinterpretations of the “cone of uncertainty” in Florida during the 2004 hurricane season. Bulletin of the American Meteorological Society, vol. 88, no. 5, pp. 651–667.

  9. Canham, Matt; and Mary Hegarty (2010). Effects of knowledge and display design on comprehension of complex graphics. Learning and Instruction, vol. 20, no. 2, pp. 155–166.

  10. Cao, Yinghui; Bryan J. Boruff; and Ilona M. McNeill (2016). Is a picture worth a thousand words? Evaluating the effectiveness of maps for delivering wildfire warning information. International Journal of Disaster Risk Reduction, vol. 19, no. October 2016, pp. 179–196.

  11. Cappucci, Matthew; and Andrew Freedman (2019). Trump shows doctored hurricane chart. Was the White House trying to cover up for Alabama Twitter flub? The Washington Post, 5 September 2019. Retrieved from https://www.washingtonpost.com/weather/2019/09/04/president-trump-shows-doctored-hurricane-chart-was-it-cover-up-alabama-twitter-flub/.

  12. Cheong, Lisa; Susanne Bleisch; Allison Kealy; Kevin Tolhurst; Tom Wilkening; and Matt Duckham (2016). Evaluating the impact of visualization of wildfire hazard upon decision-making under uncertainty. International Journal of Geographical Information Science, vol. 30, no. 7, pp. 1377–1404.

  13. Cox, Jonathan; Donald House; and Michael Lindell (2013). Visualizing Uncertainty in Predicted Hurricane Tracks. International Journal for Uncertainty Quantification, vol. 3, no. 2, pp. 143–156.

  14. Dailey, Dharma; and Kate Starbird (2014). Journalists as Crowdsourcerers: Responding to Crisis by Reporting with a Crowd. Computer Supported Cooperative Work (CSCW), vol. 23, nos. 4–6, pp. 445–481.

  15. Demuth, Julie L.; Rebecca E. Morss; Betty Hearn Morrow; and Jeffrey K. Lazo (2012). Creation and communication of hurricane risk information. Bulletin of the American Meteorological Society, vol. 93, no. 8, pp. 1133–1145.

  16. Demuth, Julie L.; Rebecca E. Morss; Leysia Palen; Kenneth M. Anderson; Jennings Anderson; Marina Kogan; Kevin Stowe; et al (2018). “Sometimes da #beachlife ain’t always da wave”: Understanding People’s Evolving Hurricane Risk Communication, Risk Assessments, and Responses Using Twitter Narratives. Weather, Climate, and Society, vol. 10, no. 3, pp. 537–560.

  17. Denef, Sebastian; Petra S. Bayerl; and Nico Kaptein (2013). Social Media and the Police—Tweeting Practices of British Police Forces during the August 2011 Riots. In CHI ‘13. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM, pp. 3471–3480.

  18. Ebert, Elizabeth E (2001). Ability of a Poor Man’s Ensemble to Predict the Probability and Distribution of Precipitation. Monthly Weather Review, vol. 129, no. 10, pp. 2461–2480.

  19. Eiser, J. Richard; Ann Bostrom; Ian Burton; David M. Johnston; John McClure; Douglas Paton; Joop van der Pligt; and Mathew P. White (2012). Risk interpretation and action: A conceptual framework for responses to natural hazards. International Journal of Disaster Risk Reduction, vol. 1, no. 1, pp. 5–16.

  20. Eosco, Gina Marie (2008). A Study of Visual Communication: Cyclones, Cones, and Confusion. M.S. thesis. Cornell University.

  21. Fiesler, Casey; and Nicholas Proferes (2018). “Participant” Perceptions of Twitter Research Ethics. Social Media and Society, vol. 4, no. 1, pp. 1–14.

  22. Fraustino, Julia Daisy; Brooke Liu; and Jin Yan (2012). Social Media Use during Disasters: A Review of the Knowledge Base and Gaps. College Park, MD.

  23. Gee, James Paul (2010). How to do Discourse Analysis: A Toolkit. New York, NY, USA: Routledge.

  24. Goldgruber, Eva; Susanne Sackl-Sharif; Julian Ausserhofer; and Robert Gutounig (2018). ‘When the Levee Breaks’: Recommendations for Social Media Use During Environmental Disasters. In Social Media Use in Crisis and Risk Communication, Emerald Publishing Limited, pp. 229–253.

  25. Gresh, Donna; Léa A. Deleris; and Luca Gasparini (2012). Visualizing Risk. IBM Research.

  26. Gui, Xinning; Yubo Kou; Kathleen H. Pine; and Yunan Chen (2017). Managing Uncertainty: Using Social Media for Risk Assessment during a Public Health Crisis. In CHI ‘17. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. New York, New York, USA: ACM Press, pp. 4520–4533.

  27. Gui, Xinning; Yubo Kou; Kathleen Pine; Elisa Ladaw; Harold Kim; Eli Suzuki-Gill; and Yunan Chen (2018). Multidimensional Risk Communication: Public Discourse on Risks during an Emerging Epidemic. In CHI ‘18. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. New York, New York, USA: ACM Press, pp. 214:1-214:14.

  28. Hasan, Samiul; Satish Ukkusuri; Hugh Gladwin; and Pamela Murray-Tuite (2011). Behavioral Model to Understand Household-Level Hurricane Evacuation Decision Making. Journal of Transportation Engineering, vol. 137, no. 5, pp. 341–348.

  29. Houston, J. Brian; Joshua Hawthorne; Mildred F. Perreault; Eun Hae Park; Marlo Goldstein Hode; Michael R. Halliwell; Sarah E. Turner Mcgowen; et al (2015). Social media and disasters: A functional framework for social media use in disaster planning, response, and research. Disasters, vol. 39, no. 1, pp. 1–22.

  30. Hughes, Amanda L.; Lise A. St. Denis; Leysia Palen; and Kenneth M. Anderson (2014). Online public communications by police & fire services during the 2012 Hurricane Sandy. In CHI ‘14. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. New York, New York, USA: ACM Press, pp. 1505–1514.

  31. Hyde, James Tupper (2017). Avoiding the Windshield Wiper Effect: A Survey of Operational Meteorologists on the Uncertainty in Hurricane Track Forecasts and Communication. M.S. thesis. North Dakota State University.

  32. Jordan, Brigitte; and Austin Henderson (1995). Interaction Analysis: Foundations and Practice. Journal of the Learning Sciences, vol. 4, no. 1, pp. 39–103.

  33. Klotz, Adam M (2011). Social Media and Weather Warnings: Exploring the New Parasocial Relationships in Weather Forecasting. Ball State University.

  34. Kogan, Marina; and Leysia Palen (2018). Conversations in the Eye of the Storm: At-Scale Features of Conversational Structure in a High-Tempo, High-Stakes Microblogging Environment. In CHI ‘18. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM Press, pp. 1–13.

  35. Kogan, Marina; Jennings Anderson; Leysia Palen; Kenneth M. Anderson; and Robert Soden (2016). Finding the way to OSM mapping practices: Bounding large crisis datasets for qualitative investigation. In CHI ‘16. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. New York: ACM Press, pp. 2783–2795.

  36. Lachlan, Kenneth A.; Patric R. Spence; Xialing Lin; Kristy Najarian; and Maria Del Greco (2016). Social media and crisis management: CERC, search strategies, and Twitter content. Computers in Human Behavior, vol. 54, pp. 647–652.

  37. Latonero, Mark; and Irina Shklovski (2011). Emergency Management, Twitter, and Social Media Evangelism. International Journal of Information Systems for Crisis Response and Management, vol. 3, no. 4, pp. 1–16.

  38. Lazo, Jeffrey K.; Ann Bostrom; Rebecca E. Morss; Julie L. Demuth; and Heather Lazrus (2015). Factors Affecting Hurricane Evacuation Intentions. Risk Analysis, vol. 35, no. 10, pp. 1837–1858.

  39. Lim, Young-Kwon; Siegfried D. Schubert; Robin Kovach; Andrea M. Molod; and Steven Pawson (2018). The Roles of Climate Change and Climate Variability in the 2017 Atlantic Hurricane Season. Scientific Reports, vol. 8, no. 1, pp. 1–10.

  40. Lindell, Michael K.; and Ronald W. Perry (2012). The Protective Action Decision Model: Theoretical Modifications and Additional Evidence. Risk Analysis, vol. 32, no. 4, pp. 616–632.

  41. Lipkus, Isaac M.; and J. G. Hollands (1999). The visual communication of risk. Journal of the National Cancer Institute Monographs, vol. 1999, no. 25, pp. 149–163.

  42. Liu, Brooke Fisher; Michele M. Wood; Michael Egnoto; Hamilton Bean; Jeannette Sutton; Dennis Mileti; and Stephanie Madden (2017). Is a picture worth a thousand words? The effects of maps and warning messages on how publics respond to disaster information. Public Relations Review, vol. 43, no. 3, pp. 493–506.

  43. MacEachren, Alan M (1992). Visualizing Uncertain Information. Cartographic Perspectives, no. 13, pp. 10–19.

  44. MacEachren, Alan M.; Anthony Robinson; Susan Hopper; Steven Gardner; Robert Murray; Mark Gahegan; and Elisabeth Hetzler (2005). Visualizing Geospatial Information Uncertainty: What We Know and What We Need to Know. Cartography and Geographic Information Science, vol. 32, no. 3, pp. 139–160.

  45. Macias, Wendy; Karen Hilyard; and Vicki Freimuth (2009). Blog functions as risk and crisis communication during Hurricane Katrina. Journal of Computer-Mediated Communication, vol. 15, no. 1, pp. 1–31.

  46. Mendoza, Marcelo; Barbara Poblete; and Carlos Castillo (2010). Twitter Under Crisis: Can we trust what we RT? In SOMA ‘10. 1st Workshop on Social Media Analytics. New York, New York, USA: ACM Press, pp. 71–79.

  47. Meyer, Robert; Kenneth Broad; Ben Orlove; and Nada Petrovic (2013). Dynamic Simulation as an Approach to Understanding Hurricane Risk Response: Insights from the Stormview Lab. Risk Analysis, vol. 33, no. 8, pp. 1532–1552.

  48. Mileti, Dennis S; and Paul W. O’Brien (1992). Warnings during Disaster: Normalizing Communicated Risk. Social Problems, vol. 39, no. 1, pp. 40–57.

  49. Mileti, Dennis S.; and Lori Peek (2000). The social psychology of public response to warnings of a nuclear power plant accident. Journal of Hazardous Materials, vol. 75, nos. 2–3, pp. 181–194.

  50. Mileti, Dennis S.; and John H. Sorensen (1990). Communication of emergency public warnings: A social science perspective and state-of-the-art assessment. Oak Ridge, TN.

  51. Morss, Rebecca E.; Julie L. Demuth; Heather Lazrus; Leysia Palen; C. Michael Barton; Christopher A. Davis; Chris Snyder; et al (2017). Hazardous weather prediction and communication in the modern information environment. Bulletin of the American Meteorological Society, vol. 98, no. 12, pp. 2653–2674.

  52. National Academies of Sciences Engineering and Medicine (2018). Integrating Social and Behavioral Sciences Within the Weather Enterprise. Washington, D.C.: The National Academies Press.

  53. National Research Council (2013). Public Response to Alerts and Warnings Using Social Media: Report of a Workshop on Current Knowledge and Research Gaps. Washington, DC: The National Academies Press.

  54. Padilla, Lace M.; Ian T. Ruginski; and Sarah H. Creem-Regehr (2017). Effects of ensemble and summary displays on interpretations of geospatial uncertainty data. Cognitive Research: Principles and Implications, vol. 2, no. 1, p. 40.

  55. Palen, Leysia; and Kenneth M. Anderson (2016). Crisis informatics—New data for extraordinary times. Science, vol. 353, no. 6296, pp. 224–225.

  56. Palen, Leysia; and Paul Dourish (2003). Unpacking “Privacy” for a Networked World. In CHI ‘03. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM Press, pp. 129–136.

  57. Palen, Leysia; and Amanda L. Hughes (2018). Social Media in Disaster Communication. In H. Rodríguez; W. Donner; and J. E. Trainor (eds.): Handbook of Disaster Research, 2nd ed. Handbooks of Sociology and Social Research. Cham, Switzerland: Springer International Publishing, pp. 497–518.

  58. Palen, Leysia; Starr Roxanne Hiltz; and Sophia B. Liu (2007). Online forums supporting grassroots participation in emergency preparedness and response. Communications of the ACM, vol. 50, no. 3, pp. 54–58.

  59. Palen, Leysia; Sarah Vieweg; Sophia B. Liu; and Amanda Lee Hughes (2009). Crisis in a networked world: Features of computer-mediated communication in the April 16, 2007, Virginia Tech event. Social Science Computer Review, vol. 27, no. 4, pp. 467–480.

  60. Reuter, Christian; and Thomas Spielhofer (2017). Towards social resilience: A quantitative and qualitative survey on citizens’ perception of social media in emergencies in Europe. Technological Forecasting and Social Change, vol. 121, no. 2017, pp. 168–180.

  61. Reuter, Christian; Amanda Lee Hughes; and Marc-André Kaufhold (2018). Social Media in Crisis Management: An Evaluation and Analysis of Crisis Informatics Research. International Journal of Human-Computer Interaction, vol. 34, no. 4, pp. 280–294.

  62. Rickard, Laura N.; Jonathon P. Schuldt; Gina M. Eosco; Clifford W. Scherer; and Ricardo A. Daziano (2017). The Proof is in the Picture: The Influence of Imagery and Experience in Perceptions of Hurricane Messaging. Weather, Climate, and Society, vol. 9, pp. 471–485.

  63. Roth, Florian (2012). Visualizing Risk: The Use of Graphical Elements in Risk Analysis and Communications. Zurich: Risk and Resilience Research Group Center for Security Studies (CSS), ETH Zurich.

  64. Ruginski, Ian T.; Alexander P. Boone; Lace M. Padilla; Le Liu; Nahal Heydari; Heidi S. Kramer; Mary Hegarty; William B. Thompson; Donald H. House; and Sarah H. Creem-Regehr (2016). Non-expert interpretations of hurricane forecast uncertainty visualizations. Spatial Cognition & Computation, vol. 16, no. 2, pp. 154–172.

  65. Sarcevic, Aleksandra; Leysia Palen; Joanne White; Kate Starbird; Mossaab Bagdouri; and Kenneth Anderson (2012). “Beacons of Hope” in Decentralized Coordination: Learning from On-the-Ground Medical Twitterers During the 2010 Haiti Earthquake. In CSCW ‘12. Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work. New York, New York, USA: ACM Press, pp. 47–56.

  66. Seeger, Matthew W (2006). Best practices in crisis communication: An expert panel process. Journal of Applied Communication Research, vol. 34, no. 3, pp. 232–244.

  67. Shannon, Claude E (1948). A Mathematical Theory of Communication. Bell System Technical Journal, vol. 27, no. 3, pp. 379–423.

  68. Shannon, Claude E.; and Warren Weaver (1963). The mathematical theory of communication. Champaign, IL: University of Illinois Press.

  69. Sherman-Morris, Kathleen (2005). Tornadoes, television and trust--A closer look at the influence of the local weathercaster during severe weather. Environmental Hazards, vol. 6, no. 4, pp. 201–210.

  70. Sherman-Morris, Kathleen; Karla B. Antonelli; and Carrick C. Williams (2015). Measuring the Effectiveness of the Graphical Communication of Hurricane Storm Surge Threat. Weather, Climate, and Society, vol. 7, no. 1, pp. 69–82.

  71. Shklovski, Irina; Leysia Palen; and Jeannette Sutton (2008). Finding community through information and communication technology in disaster response. In CSCW ‘08. Proceedings of the ACM Conference on Computer Supported Cooperative Work. ACM, pp. 127–136.

  72. Slovic, Paul (1987). Perception of Risk. Science, vol. 236, pp. 280–285.

  73. Soden, Robert; and Leysia Palen (2014). From Crowdsourced Mapping to Community Mapping: The Post-earthquake Work of OpenStreetMap Haiti. In C. Rossitto; L. Ciolfi; D. Martin; and B. Conein (eds.): COOP 2014. Proceedings of the 11th International Conference on the Design of Cooperative Systems. Cham, Switzerland: Springer International Publishing, pp. 311–326.

  74. Soden, Robert; Leah Sprain; and Leysia Palen (2017). Thin Grey Lines: Confrontations With Risk on Colorado’s Front Range. In CHI ‘17. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. New York, New York, USA: ACM Press, pp. 2042–2053.

  75. Sontag, Susan (2003). Regarding the pain of others. New York, NY, USA: Farrar, Straus and Giroux. Vol. 201.

  76. St. Denis, Lise Ann; Leysia Palen; and Kenneth M. Anderson (2014). Mastering Social Media: An Analysis of Jefferson County’s Communications during the 2013 Colorado Floods. In Proceedings of the 11th International ISCRAM Conference, pp. 737–746.

  77. Starbird, Kate; Leysia Palen; Amanda L. Hughes; and Sarah Vieweg (2010). Chatter on The Red: What Hazards Threat Reveals about the Social Life of Microblogged Information. In CSCW ‘10. Proceedings of the 2010 ACM Conference on Computer Supported Cooperative Work. New York, New York, USA: ACM Press, pp. 241–250.

  78. Starbird, Kate; Jim Maddock; Mania Orand; Peg Achterman; and Robert M. Mason (2014). Rumors, False Flags, and Digital Vigilantes: Misinformation on Twitter after the 2013 Boston Marathon Bombing. In iConference 2014 Proceedings. iSchools, pp. 654–662.

  79. Starbird, Kate; Dharma Dailey; Owla Mohamed; Gina Lee; and Emma S. Spiro (2018). Engage early, correct more: How journalists participate in false rumors online during crisis events. In CHI ‘18. Proceedings of the Conference on Human Factors in Computing Systems. New York, New York, USA: ACM Press, pp. 1–12.

  80. Tapia, Andrea H.; and Kathleen Moore (2014). Good Enough is Good Enough: Overcoming Disaster Response Organizations’ Slow Social Media Data Adoption. Computer Supported Cooperative Work (CSCW), vol. 23, nos. 4–6, pp. 483–512.

  81. Veil, Shari R.; Tara Buehner; and Michael J. Palenchar (2011). A Work-In-Process Literature Review: Incorporating Social Media in Risk and Crisis Communication. Journal of Contingencies and Crisis Management, vol. 19, no. 2, pp. 110–122.

  82. Vieweg, Sarah; Amanda L. Hughes; Kate Starbird; and Leysia Palen (2010). Microblogging During Two Natural Hazards Events: What Twitter May Contribute to Situational Awareness. In CHI ‘10. Proceedings of the 28th International Conference on Human Factors in Computing Systems. New York, New York, USA: ACM Press, p. 1079.

  83. Wong-Villacres, Marisol; Cristina M. Velasquez; and Neha Kumar (2017). Social media for earthquake response: Unpacking its limitations with care. Proceedings of the ACM on Human-Computer Interaction, vol. 1, no. CSCW, pp. 32:1-32:22.

  84. Wu, Hao-Che; Michael K. Lindell; Carla S. Prater; and Charles D. Samuelson (2014). Effects of Track and Threat Information on Judgments of Hurricane Strike Probability. Risk Analysis, vol. 34, no. 6, pp. 1025–1039.

  85. Young, Camila E.; Erica D. Kuligowski; and Aashna Pradhan (2020). A Review of Social Media Use During Disaster Response and Recovery Phases. National Institute of Standards and Technology.

  86. Zhang, Yang; Carla S Prater; and Michael K Lindell (2004). Risk Area Accuracy and Evacuation from Hurricane Bret. Natural Hazards Review, vol. 5, no. 3, pp. 115–120.

Author information

Corresponding author

Correspondence to Melissa Bica.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Bica, M., Weinberg, J. & Palen, L. Achieving Accuracy through Ambiguity: the Interactivity of Risk Communication in Severe Weather Events. Comput Supported Coop Work 29, 587–623 (2020). https://doi.org/10.1007/s10606-020-09380-2

Keywords

  • Forecasts
  • Hurricanes
  • Imagery
  • Risk communication
  • Risk interpretation
  • Scientific representations
  • Social media
  • Uncertainty
  • Weather