Using Statistics to Quantify and Communicate Uncertainty During Volcanic Crises

  • Rosa Sobradelo
  • Joan Martí
Open Access
Part of the Advances in Volcanology book series (VOLCAN)


For decades, and especially in recent years, there has been an increasing amount of research using statistical modelling to produce volcanic forecasts, so that people can make better decisions. This research aims to add confidence by arming users with quantitative summaries of the chaos and uncertainty of extreme situations, in the form of probabilities, that is, measures of the likelihood that an event will occur.


Probabilistic terms and associated jargon are often part of the working environment of volcanologists. Research on volcanic hazard and the quantification of volcanic risk even led to officially defining volcanic hazard in terms of probability (Blong 2000). The last decade has produced a comprehensive framework of studies, surveys and computer-assisted procedures for transforming field data into probabilities of occurrence of a particular scenario (Newhall and Hoblitt 2002; Marzocchi et al. 2004, 2008, 2010; Aspinall 2006; Martí et al. 2008; Neri et al. 2008; Sobradelo and Martí 2010, 2015; Sobradelo et al. 2013). Following the successful development of probabilistic tools came the challenge of communicating their results. Research and operational strategies started to incorporate enhanced communication of these probabilistic forecasts to decision-makers and the public (Marzocchi and Woo 2007; Marzocchi et al. 2012; Sobradelo et al. 2014). At the same time, extensive work has been done on the psychological and sociological aspects of the perception and interpretation of uncertainty, both in volcanology and across other hazards. Despite this extensive use, there is sometimes confusion surrounding the statistical interpretation of probabilities, partly due to unclear statistical concepts: What is a probability? What is statistical science? How much can I rely on a probability estimate? What are probabilities used for? What is uncertainty? How do uncertainty and probability relate to each other? Why are statistics and probabilities sometimes misunderstood? Why do scientists and/or users (officials) not fully appreciate the uncertainty surrounding a probability estimate?

In this chapter we try to address the above questions by focusing on the statistical meaning of probability estimates and their role in the quantification and communication of uncertainty. We hope to provide some insights into best practices for the use and communication of statistics during volcanic crises.

Quantifying and Communicating Uncertainty in Volcanology

Volcanology is by nature an inexact science. Deciphering the nature of unrest signals (volcanic reactivation), and determining whether or not an unrest episode may be an indication of a new eruption, requires knowledge of the volcano’s past, current and future behaviour. To achieve such a complex objective, experts in field studies, volcano monitoring, and experimental and probabilistic modelling, amongst others, work together under pressure and tight time constraints. It is important that these stakeholders communicate on a level that caters for the needs and expectations of all disciplines; in other words, it is important to agree on a common technical language. This is particularly relevant when volcano monitoring is carried out on a systematic survey basis without continuous scientific scrutiny of monitoring protocols or interpretation of data.

By definition, uncertainty is the state of being uncertain. It is used to refer to something that is doubtful or unknown. It means lack of confidence about something. Hence, it is directly related to the amount of knowledge we have about a process. A forecast, in the form of a probability estimate, is an attempt to quantify this uncertainty and support decision-making. Forecasting potential outcomes of volcanic reactivation (unrest) usually implies high levels of scientific uncertainty. Anticipating whether a particular volcanic unrest will end with an eruption and where (temporal and spatial uncertainty) requires scientific knowledge of how the volcano has behaved in the past, and scientific interpretation of precursory signals. Whilst this may be less challenging for volcanoes that erupt often, it is far more difficult for volcanoes with long eruptive recurrence and less data available, and even more so for those without historical records.

The main goal of volcano (eruption) forecasting is to be able to respond to questions of how, where, and when an eruption will happen (Sparks 2003). To address those questions we often use probabilities in an attempt to quantify the intrinsic variability due to the complexity of the process. The communication of those probabilities has to adapt to the recipient of the information. Making predictions about the future behaviour of a volcano follows similar reasoning as for other natural phenomena (storms, landslides, earthquakes, tsunamis, etc.). Each volcano has its own characteristics depending on magma composition, physics, rock rheology, stress field, geodynamic environment, local geology, etc., which make its behaviour unique. What is indicative in one volcano may not be relevant in another. All this makes the task of volcano forecasting challenging and difficult, especially when it comes to communicating uncertainty to the population and decision-makers.

During a volcanic emergency, the relevant questions are first how to quantify the uncertainty that accompanies any scientific forecast, and second, how to communicate it to policy-makers, the media and the public. Scientific communication during volcanic crises is incredibly challenging, with no standardised procedures for how this should be done among the stakeholders involved (scientists, governmental agencies, media and local populations). Of particular importance is the communication link between scientists and decision-makers (often Civil Protection agents). It is necessary to translate the scientific understanding of volcanic activity into a series of scenarios that are clear to decision-making authorities. Direct interaction between volcanologists and the general public is also important, both during times of quiescence and of activity. Information that comes directly from the scientific community has a special impact on risk perception and on the trust that people place in scientific information. Therefore, the effective management of a volcanic crisis requires the identification of practical actions to improve communication strategies at different stages and across different stakeholders: scientists-to-scientists, scientists-to-technicians, scientists-to-Civil Protection, scientists-to-decision makers, and scientists-to-the general public.

The Role of Statistics and Probabilities in the Quantification of Uncertainty

Concepts, Definitions and Misconceptions

Formally speaking, Statistics is a body of principles and methods for extracting useful information from data, assessing the reliability of that information, measuring and managing risk, and supporting decision-making in the face of uncertainty. Rather than drowning in a flood of numbers, statistics helps to make better management decisions and gives a competitive advantage over intuition, experience and hunches alone.

Probability shows the likelihood, or chances, for each of the various future outcomes, based on a set of assumptions about how the world works. It allows handling randomness (uncertainty) in a consistent, rational manner and forms the foundation for statistical inference (drawing conclusions from data), sampling, linear regression, forecasting, and risk management.

With statistics, we go from observed data to generalisations about how the world works. For example, if we observe that the seven hottest years on record occurred in the most recent decade, we may conclude (perhaps without justification) that there is global warming. With probability, we start from an assumption about how the world works, and then figure out what type of data we are likely to see under that assumption. In the above example, we could assume the null hypothesis, H0: there is no global warming, and then test how likely it is to observe the seven hottest years within the last decade if H0 were true. We then use the observed data to look for significant statistical evidence to reject H0 in favour of the alternative, H1: some phenomenon related to global warming may be ongoing. To some extent, we could say that probability provides the justification for statistics.
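The reasoning above can be sketched numerically. Assuming, purely for illustration, a 100-year temperature record and that under H0 any ordering of years by temperature is equally likely, the chance that the seven hottest years all fall in the most recent decade follows from simple counting:

```python
from math import comb

# Under H0 (no warming) any ordering of years by temperature is equally
# likely, so P(the 7 hottest of 100 years all fall in the last 10)
# = C(10,7) / C(100,7). The 100-year record length is our assumption.
def p_under_h0(record_years=100, hottest=7, window=10):
    return comb(window, hottest) / comb(record_years, hottest)

p = p_under_h0()  # ~7.5e-9: far below any usual significance level,
                  # so such data would give strong evidence to reject H0
```

This is the probability-to-statistics direction the paragraph describes: a model assumption fixes what data are likely, and the observed data then either do or do not look plausible under it.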

However, there is no precise definition for probability. All attempts to define it must ultimately rely on circular reasoning. According to the Oxford Dictionary, probability is “the state of being probable; the extent to which something is likely to happen or be the case”. Roughly speaking, the probability of a random event is the “chance” or “likelihood” that the event will occur. To each random event A we attach a number P(A), called the probability of A, which represents the likelihood that A will occur. The three most useful approaches to obtaining a definition of probability are: the classical, the relative frequency, and the subjective (Jaynes 2003; Colyvan 2008), discussed further below.

The number of volcanic eruptions of magnitude greater than 1 in the next t years in a particular area is an example of a random variable, Y. When we try to quantify the value of Y we are implying that a true value exists, and we want to anticipate it so that we can make decisions in advance. That is, we want to estimate a range of values that we think will contain the true value of the random variable Y. The most common way of presenting this range of values is as a best estimate ± confidence margin. Here, we can distinguish between two types of uncertainty: the one surrounding the best estimate, type A, and the one that accounts for the level of confidence we have in that best estimate, type B. It is not enough to provide a best guess (point estimate) for a parameter; we also need to say something about how far from the true parameter value such an estimator is likely to be. The confidence interval is one way of conveying our uncertainty about a parameter. With it, we report a range of numbers in which we hope the true parameter will lie.
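As a minimal sketch of the “best estimate ± confidence margin” format (the sample values and the normal approximation below are our own assumptions, not data from this chapter):

```python
import math
import statistics

# 95% confidence interval for a mean inter-event time (in years),
# using the normal approximation; the data are hypothetical.
def mean_ci(data, z=1.96):
    n = len(data)
    best = statistics.mean(data)                # point estimate
    se = statistics.stdev(data) / math.sqrt(n)  # standard error of the mean
    return best, (best - z * se, best + z * se)

best, (low, high) = mean_ci([2, 3, 4, 3, 5, 2, 4, 3, 6, 3])
# Reported as "best estimate +/- margin": roughly 3.5 +/- 0.8 years here,
# i.e. a range in which we hope the true inter-event time lies.
```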

Measures of Uncertainty

Probability can be used as a measure of uncertainty, both type A and type B. The way we understand probabilities depends on our degree of numeracy. It is common in our daily lives to make choices under some level of uncertainty, for instance, whether or not to order the fish of the day in a new restaurant, or whether to buy one or two bags of fruit in a new shop. To make those simple decisions, we unconsciously go through previous knowledge of similar experiences to work out some kind of odds of making the right choice. If we are rushed to make up our mind at the restaurant, we simply decide more quickly. The main difference between this and the decision of whether to evacuate a populated area threatened by a destructive volcanic event is the penalty, or loss, for making the wrong decision. In the first case the loss is negligible to our daily lives, but a wrongly timed evacuation decision could have serious consequences. For this reason, the interpretation of a probability must be set in the context of how much we are willing to lose if we make the wrong decision. The difference between probability (the extent to which something is likely to happen) and risk (a situation involving exposure to danger) means that the relevance of a probability estimate for the occurrence of an event will depend on the associated risk, that is, on how much exposure to danger the occurrence of the event entails. Suppose the odds are one to ninety-nine (1:99) that our car breaks down in the middle of a trip. We would most likely still take our family on that trip. Instead, suppose we are given the same odds for an airplane crash. We would most likely not want to take our loved ones on that plane. In both cases the probability is the same, but the risk is different. This illustrates how probability estimates must be interpreted in the context of their associated risk.
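The car-versus-plane example can be made concrete with a toy expected-loss calculation; the monetary figures below are our own illustrative assumptions:

```python
# Same probability, different risk: risk scales with the loss attached
# to the bad outcome, not with the probability alone.
def expected_loss(p_event, loss_if_event):
    return p_event * loss_if_event

p = 1 / 100                                        # odds of 1:99
car   = expected_loss(p, loss_if_event=500)        # a breakdown: inconvenient
plane = expected_loss(p, loss_if_event=5_000_000)  # a crash: catastrophic
# car = 5.0 versus plane = 50000.0: identical probability, very different risk
```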

Clearly, emotions, values, beliefs, culture and interpersonal dynamics play a significant role in decision-making processes. Extensive work in psychology and sociology has examined the perception and interpretation of uncertainty, both in volcanology and across other hazards (weather, tsunami, operational earthquake forecasting, climate change) (Fischhoff 1994; Cosmides and Tooby 1996; Kuhberger 1998; Windschitl and Weber 1999; Bruine De Bruin et al. 2000; Gigerenzer et al. 2005; Patt and Dessai 2005; Risbey and Kandlikar 2007; Morss et al. 2008; Budescu et al. 2009; McClure et al. 2009; Joslyn et al. 2009; Mastrandrea et al. 2010; Jordan et al. 2011; Eiser et al. 2012; Doyle et al. 2014a, b). However, that is not the scope of this chapter. For the purpose of our argument, we focus on the ‘rational side’ of decision-making, that is, the quantification of uncertainty using statistical theory.

What makes statistics unique is its ability to quantify uncertainty, so that statisticians can make a categorical statement about their level of uncertainty, with complete assurance. But these statements have to be made taking into account all possible factors (sources of uncertainty) and making sure the data are correctly selected to eliminate all sources of bias. Such statements can have a significant impact and involve matters of life and death. So far we have assumed that the probability estimates have been calculated using the right methods. For the restaurant or supermarket examples this could be a simple arithmetic mean. Forecasting the occurrence of a volcanic event will require more elaborate mathematical modelling. The accuracy of the probability estimate will depend largely on the model selection.

Disciplines and Schools of Thought

To quantify uncertainty using statistics, statisticians rely on three main disciplines: (i) data analysis, (ii) probability, and (iii) statistical inference (Cooke 1991; Pollack 2003; Kirkup and Frenkel 2006). The first step is always the data analysis, that is, the gathering, display and summary of the data. In the case of volcanoes, we look at past and monitoring data, and we make the necessary adjustments for any inconsistencies (e.g. Sobradelo and Martí 2015). The second step is the formal study of the laws of chance, also called the laws of probability, born in the 17th century for no other reason than to be used in gambling (Cooke 1991). Probabilities are the result of applying probability models to describe the world, and this is done using the concept of random variables, that is, the numerical outcomes of a random experiment or random process we are trying to understand so that we can forecast its future outcome (height, weight, income, eruptive events in the last 500 years, number of seismic events in one day, etc.). Finally, we use the above to make inferences about the real world with a certain degree of confidence (Rice 2006).

The approaches to developing probability models, associated with different schools of thought, are: (1) the classical approach, based on gambling ideas, which assumes that the game is fair and all elementary outcomes have the same probability; (2) the relative (objective) frequency approach, which holds that if an experiment can be repeated, then the probability that an event will occur is equivalent to the proportion of times the event occurs in the long run; and (3) the personal (subjective) probability approach, which holds that most events in life are not repeatable (Cooke 1991; Jaynes 2003). Subjectivists base the probability on their personal belief in the likelihood of an outcome, and then update that probability as they receive new evidence (Cosmides and Tooby 1996). An objectivist uses either the classical or the frequency definition of probability. Subjectivists, also called Bayesians, apply the formal laws of chance to their own personal probabilities. What makes the Bayesian approach subjective is the choice of models and a priori beliefs to define the prior probabilities, even if the rules and observed data used to update and compute the posterior probabilities are quite “objective”. The Bayesian approach holds that any state of uncertainty can be described with a probability distribution, making it suitable for the study of volcanic areas where very little or no data exist other than theoretical models or expert scientific beliefs. These initial probabilities are updated each time new information arrives, making the approach quite dynamic and easy to apply.
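The Bayesian updating described above can be sketched in a few lines; the prior and the likelihoods below are hypothetical numbers chosen for illustration, not values from any real volcano:

```python
# Posterior P(eruption | signal) via Bayes' rule, applied sequentially
# as new monitoring evidence arrives. All probabilities are assumed.
def bayes_update(prior, p_signal_if_eruption, p_signal_if_no_eruption):
    num = p_signal_if_eruption * prior
    den = num + p_signal_if_no_eruption * (1 - prior)
    return num / den

p = 0.10                       # prior belief: 10% chance of eruption
p = bayes_update(p, 0.8, 0.3)  # a seismic swarm is observed
p = bayes_update(p, 0.9, 0.2)  # ground deformation is observed
# After two pieces of evidence the posterior has risen to about 0.57
```

Each arriving observation replaces the previous posterior with an updated one, which is what makes the approach dynamic and easy to apply.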

For many years there has been controversy over “frequentist” versus “Bayesian” methods. However, neither the Bayesian nor the frequentist approach is universally applicable (Jaynes 2003). For each situation, some approaches and models are more suitable than others to produce probability estimates as accurately as possible and with high confidence. It is the task of the statistician to decide and justify the model selection to ensure the reliability of the results. But a brilliant analysis is worthless unless the results, including their degree of statistical uncertainty, are successfully communicated.

Often presented as an alternative to the probabilistic approach is the deterministic approach, in which events are completely determined by cause-effect chains (causality), without any room for random variation. Here, a given input will always produce the same output, as opposed to probabilistic models, which use ranges of values for variables in the form of probability distributions. This approach is sometimes used in fields with a lot of data, as in weather forecasting, or where the underlying process can be explained with physics-based models, as in seismology. In any case, the reliability of probabilistic versus deterministic forecasts is sometimes a cause of debate, and a mix of the two is often the preferred option.

How Reliable Is a Forecast: Data and Methodology

By giving an expected value for a forecast we are already quantifying a measure of uncertainty. This value will be interpreted according to the degree of confidence with which the estimate is made, which in turn depends on the type, amount, quality and consistency of the evidence upon which the estimate is based, usually past data or theoretical models.

The degree of confidence, or certainty, is quantified and expressed via the variance or standard deviation (square root of the variance). Suppose we have three measurements of a random process (e.g. inter-event time in years) of 2, 3, and 4 years, and want to draw some conclusion about the inter-event time based on these values. We use 3 years, a simple arithmetic mean, as the estimate of the inter-event time. The three measurements are equally distant and symmetrical around the mean. The variance, which measures the dispersion of the values around the mean, is 1, and the median, the value in the middle, is 3, the same as the mean. Suppose we do the same exercise with measurements 1, 3, and 5: we still get a mean of 3, but now the values 1 and 5 are two units away from the mean, and so the variance, as a measure of dispersion around the mean, is now 4 instead of 1. Note, however, that the values are still symmetrically distributed around the mean, and that the mean and median are the same as before, 3. The only thing that has changed is the variance, now larger. The lower the variability around the point estimate, the more reliable our estimate is. Let us take a sample with 10 measurements: 1, 1.4, 2, 2.1, 2.2, 2.3, 3, 4, 5, 7. The estimated inter-event time, based on a simple arithmetic mean, is still 3, but we have based this estimate on 10 rather than 3 observations. The more data we have to compute our estimates, the more confident we are in the results (Rice 2006).
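The worked example can be checked directly with Python’s standard library:

```python
import statistics

# Same mean, different dispersion: [2, 3, 4] versus [1, 3, 5]
a = [2, 3, 4]
b = [1, 3, 5]
var_a = statistics.variance(a)  # 1 -- values close to the mean of 3
var_b = statistics.variance(b)  # 4 -- values further from the same mean

# A larger sample, again with a mean of 3, now based on 10 observations:
c = [1, 1.4, 2, 2.1, 2.2, 2.3, 3, 4, 5, 7]
mean_c = statistics.mean(c)     # ~3 -- more data, more confidence
```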

Apart from the reliability of the data used to produce an estimate, a crucial aspect of a forecast is the correct choice of methodology. Most of the time we do not know the underlying distribution of a random process (e.g. the number of volcanic eruptions in a time interval and particular area, assumed to be random), and so we make assumptions to help us find a function within a family of known distributions (Normal or Gaussian, Exponential, Binomial, Beta, Poisson, Chi-Squared, Log-normal, etc.) that would be suitable to model this unknown process (see Rice 2006; Gonick and Smith 2008; McKillup and Dyar 2010 for details on these distributions). This facilitates making inferences and forecasts based on the conveniently known properties of these functions. The choice of the distribution family depends on the characteristics of the sample data (how many observations there are, whether the distribution is symmetrical or skewed, what type of measurement was used, etc.). To select the most appropriate distribution, it is important that the data are an unbiased and representative sample of the population. Therefore, the data-gathering process and a preliminary, exhaustive analysis of the dataset are crucial to reduce uncertainty and increase confidence in the final results. Needless to say, the choice of distribution and the assumptions about the sample data add uncertainty to the results, and this must be taken into account when presenting the final outcome.
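As an illustration of this model-selection step (both the counts and the choice of the Poisson family below are our own assumptions): if eruption counts per fixed window are assumed to be Poisson distributed, the rate estimate is simply the sample mean, and the family’s known properties then give forecast probabilities.

```python
import math

counts = [0, 1, 0, 2, 1, 0, 0, 1]  # hypothetical eruptions per decade
rate = sum(counts) / len(counts)    # maximum-likelihood Poisson rate: 0.625

def poisson_pmf(k, lam):
    # P(X = k) for a Poisson random variable with rate lam
    return math.exp(-lam) * lam**k / math.factorial(k)

p_at_least_one = 1 - poisson_pmf(0, rate)  # P(>= 1 eruption in the next decade)
# About 0.46 under this assumed dataset and model
```

If the sample were skewed, heavy-tailed, or continuous, a different family (Exponential, Log-normal, etc.) would be the appropriate choice, which is exactly why the preliminary data analysis matters.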

Arithmetic means are purely descriptive measures used to summarise the information in a data sample. In practice, we would not use a simple arithmetic mean to estimate probabilities and make inferences about complex processes. There is a large number of statistical modelling techniques (beyond the scope of this chapter) whose suitability depends on the type of data we have, its distribution, quality and quantity, and the type of question we want to answer. In the end, the reliability of the probability estimate (whether the inter-event time is 3 years or not) will depend on the accuracy, reliability and amount of data used to reach that conclusion, together with the statistical model and approach. That is why a probability estimate should always be presented with some measure of its variability (estimated error, usually given by the variance or standard deviation), and it should be made clear that it is an estimate based on the available data, under the assumption that the future behaviour of the random event will follow the same pattern we have observed in this dataset. This might in fact not be the case, and that is why we sometimes hear about time series data being “stationary or not”, meaning that depending on what time interval the data come from, the pattern observed may be different. In short, there are many assumptions and sources of uncertainty around a probability estimate that have to be taken into consideration when interpreting probability.

Taking a bigger-picture view, ultimately all we are doing is drawing some general conclusions about an unknown process (the inter-event time) from some samples of observations. We do not have access to all the possible observations of this process, but still want to anticipate the future value of this event, so we can be better prepared should the event strike. This is why we use statistical approaches to model random events: unless we can see into the future, a probability estimate can never be either 0 or 100%.

Using Probabilities to Communicate Uncertainty

Since the late 1990s there has been significant focus on improving communications during volcanic crises (IAVCEI 1999; McGuire et al. 2009; Aspinall 2010; Donovan et al. 2012a, b; Marzocchi et al. 2012; Sobradelo et al. 2014). A common factor that emerges is the value of probabilities as a way to communicate scientific forecasts and their associated uncertainties, for natural hazards in general (Cooke 1991; Colyvan 2008; Stein and Stein 2013) or, more specifically, for volcano forecasting (Aspinall and Cook 1998; Marzocchi et al. 2004; Aspinall 2006; Sobradelo and Martí 2010; Marzocchi and Bebbington 2012; Donovan et al. 2012c). However, this also requires communicating the uncertainty that accompanies any forecast of the future behaviour of a natural system.

Making predictions about the future behaviour of a volcano involves analysis of past data, monitoring of the current situation and identification of possible scenarios. Quite often, these predictions are challenging to quantify and communicate due to lack of data and past experience. An added source of complexity arises when the probability estimates are very small, below 1%. Most lay people are not familiar with decimals or small fractions. A layman will easily understand a probability of 0.2 or 20%, but not so well one of 0.0002 or 0.02%, even when both are associated with the same level of risk. Scientists responsible for the communication of volcanic forecasts have the difficult task of selecting the scientific language to deliver a clear message to a non-scientific audience.
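One common remedy, sketched here purely as an illustration (the helper name and the base of 10,000 are our own choices), is to re-express small probabilities as natural frequencies, “about 2 in 10,000” rather than “0.02%”:

```python
# Convert a small probability to a natural-frequency statement,
# which many audiences find easier to grasp than decimals or percentages.
def natural_frequency(p, base=10_000):
    return f"about {round(p * base)} in {base:,}"

natural_frequency(0.0002)  # -> "about 2 in 10,000"
natural_frequency(0.2)     # -> "about 2000 in 10,000"
```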

The uncertainty that accompanies the identification and interpretation of eruption precursors derives from the unpredictability of the volcano as a natural system (aleatory or deep uncertainties) and from our lack of knowledge of the behaviour of the system (epistemic or shallow uncertainties) (Cox 2012; Stein and Stein 2013). These uncertainties will depend on how well we know the volcanic system. Active volcanoes with high eruption frequencies can be predicted more easily (i.e. they are reasonably well known, and so past events are good predictors of future ones; shallow uncertainties). In contrast, deep uncertainties are associated with probability estimates based on poorly known parameters or a poor understanding of the system; this is usually the case for volcanoes characterised by low eruption frequencies.

In everyday life we are often quite unaware that we use probabilities (commonly known as “common sense”) to evaluate the degree of uncertainty we face. The question is whether, to make our decisions, we prefer or better understand the mathematical expression of probability (e.g. a 20% chance of an event occurring) or verbal statements such as likely, improbable, or certainly. Greater precision does not necessarily imply greater understanding of what the message really is, as it will be perceived differently (Slovic 2016).

Some countries, like the USA, prefer to use probabilities to express uncertainties in weather forecasts, while some European countries prefer verbal expressions. In both cases, people react according to the forecast. There are different ways in which probabilities (and uncertainties) can be described, including words, numbers, or graphics. The use of words to explain probabilities tends to rely on language that appeals to people’s intuition and emotions (Lipkus 2007). However, it usually lacks precision, as it tends to introduce significant ambiguity through non-precise words such as “probable”, “likely”, “doubtful”, etc. A probability is the “measure” of the likeliness that an event will occur, so it makes sense to expect a numerical value (e.g. a percentage) associated with that measure. However, in volcanology there is most of the time insufficient observational data to present probabilistic forecasts with a sufficient level of confidence. Using only numerical expressions may also fail when the audience has a low level of numeracy. The interpretation of probabilistic terms can vary greatly depending on the educational level of the recipient and on whether verbal or numerical expressions are used (Budescu et al. 2009; Spiegelhalter et al. 2011; Doyle et al. 2014a; Gigerenzer 2014). To minimise this problem, a combination of verbal uncertainty terms (e.g. very likely) with quantitative specifications (e.g. >90% probability) has been recommended, for example, to better understand results from the Intergovernmental Panel on Climate Change (IPCC) (Budescu et al. 2009, 2012). Climate scientists working within the IPCC have adopted a lexicon to communicate uncertainty through verbal probability expressions ranging from “very likely”, “likely” and “about as likely as not” to “unlikely”, “very unlikely” and “exceptionally unlikely” (e.g. IPCC 2005, 2007). The terms are assigned specific numerical meanings but are typically presented in verbal format only, so that a probability of occurrence of 1% will be interpreted as “very unlikely” for that particular event, and a probability above 66% will be seen as “likely”. Similarly, anything in the range of 33–66% would be perceived as “about as likely as not”.
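The lexicon just described can be expressed as a simple lookup. The band edges below follow the IPCC-style scale sketched in the text and are simplified for illustration:

```python
def verbal_term(p):
    """Map a probability (0-1) to an IPCC-style verbal expression."""
    if p > 0.90:
        return "very likely"
    if p > 0.66:
        return "likely"
    if p >= 0.33:
        return "about as likely as not"
    if p >= 0.10:
        return "unlikely"
    if p >= 0.01:
        return "very unlikely"
    return "exceptionally unlikely"

verbal_term(0.01)  # "very unlikely", as in the 1% example above
verbal_term(0.50)  # "about as likely as not"
```

Combining the verbal term with its numerical band in the same message is what the dual-format recommendation above amounts to in practice.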

Since 2011 it has been increasingly common to use graphics to represent probabilities in natural hazards (Kunz et al. 2011; Spiegelhalter et al. 2011; Stein and Geller 2012). The advantage of communicating uncertainties (or probabilities) visually is that people are increasingly prepared and trained to use and understand infographics, an immediate consequence of the globalised use of the internet and information technology. A graphic can also be adapted both to stress the important content of the communication and to suit the needs and capabilities of the audience (Spiegelhalter et al. 2011).

In addition to considering the way probabilities (and uncertainties) are communicated, there is a need to consider the local context of the particular society in which the volcanic crisis is occurring. “Odds” is an expression of relative probability that is well understood by many communities (e.g. through gambling and games of chance) and can also be effective in communicating volcano forecasts if it is correctly adapted for the purpose. Regulations (i.e. legal and commonly accepted norms) frequently determine the articulation of uncertainty and risk used to manage environmental and natural hazards. Finally, culture is of key importance in communication (Oliver-Smith and Hoffmann 1999; Eiser et al. 2012). The way in which risk is perceived may change depending on the cultural beliefs of each society, and in the same way the cultural diversity of societies facing a volcanic threat may mean that communication methods that work in one country or culture may not work in another. Therefore, it is important to investigate and gain an in-depth understanding of the particular cultural aspects of each society in order to define the best communication procedures and language in each case. There are numerous studies that demonstrate the importance of public education, pre-crisis education programmes, and risk perception in better understanding scientific communication during crises (e.g. Bird et al. 2009; Budescu et al. 2012; Dohaney et al. 2015). Most of them agree that populations better educated about natural hazards understand risk communication better and behave in a more orderly way during the management of a crisis. There are additional sociological and qualitative aspects to consider when communicating probabilities that are beyond the scope of this chapter, but these address issues around risk perception, trust, decision-making, and managing disasters (e.g. Kilburn 1978; Fiske 1984; Tazieff 1977; Paton et al. 1999; Chester et al. 2002; Sparks 2003; Haynes et al. 2007, 2008; Baxter et al. 2008; Solana et al. 2008; Fearnley 2013; Doyle et al. 2015).

What Should Be Communicated?

The key questions focus on what can be forecasted. Should the forecast determine whether the volcano will erupt or not? How big or explosive the eruption will be? When? Where? What is the dimension of the problem? These are the basic questions that civil protection asks the scientists once an alert has been declared and the process of managing a volcanic crisis has started (IAVCEI 1999; McGuire et al. 2009; Aspinall 2010; Donovan et al. 2012a, b; Marzocchi et al. 2012; Sobradelo et al. 2014). Usually, scientists can answer these questions with approximations (probabilities) based on knowledge of previous cases from the same volcano or from other volcanoes with similar characteristics, knowledge of the past eruptive history of the volcano, warning signals (geophysical and geochemical monitoring), and knowledge of the significance of these warning signs. Whilst giving probabilities as the outcome of a volcano forecast may be relatively easy for the scientist (depending on the degree of information available), they may not be fully understood by the decision-maker or any other recipient of such information. It is necessary to find a clear and precise way to communicate this information between scientists and key decision-makers, to avoid misunderstandings and misinterpretations that could lead to incorrect management of the volcanic emergency and, consequently, to a disaster.

In recent years, one way to improve the communication of statistics, as well as the understanding of decision-maker needs, has been the development of exercises in which a volcanic crisis is simulated and all the key players involved in risk management, such as scientists, civil protection, decision-makers, the population and the media, are invited to participate, as in a real case. Exercises have been carried out at different volcanoes such as Vesuvius (MESIMEX, Barberi and Zuccaro 2004), Campi Flegrei, Cotopaxi and Dominica (VUELCO Project), and in New Zealand (DEVORA), among others. These simulations facilitate interaction and cooperation between the stakeholders, and the sharing and exchange of procedures, methodologies and technologies, including scientific communication. They present an opportunity to learn the exact role and responsibilities that each key player has in the management of a volcanic crisis, as well as to exchange concerns and feedback on specific matters.

Whilst volcanic forecasts centre on scientific data and probabilities as much as possible, scientists may also recommend safe behaviour directly to the public, providing advice that saves people's lives (e.g. going up a hill if a lahar threatens). Often this goes beyond the legal requirements of the scientists, who are required to comment on the volcanic science only, but they may feel a moral duty to assist (Fearnley 2013). However, this should not imply or be confused with making decisions on how to manage a volcanic emergency (e.g. evacuation), as this frequently falls under the remit of civil protection (or other such government organisations). In some countries, such as Indonesia, the scientists and the civil protection organisations work together rather than having distinct roles; it depends on the governance structures of the country.

When Should a Volcano Forecast Be Communicated?

Ideally, forecasts should be communicated as early as possible, and then with increasing frequency if, or when, an eruption nears. This means there should be a permanent flow of information between scientists, the vulnerable populations and policy-makers on the eruptive characteristics of the volcano, its current state of activity and its associated hazards, even when the volcano shows no signs for alarm. This aids preparation for when an emergency starts and things need to move much faster. However, in many cases scientific communication in hazard assessment and volcano forecasting is restricted to volcanic emergencies. When volcanic unrest starts and escalates, the origin of the unrest needs to be investigated to assess the level of hazard expected. Good detection and interpretation of precursors will help predict what will happen with a considerable degree of confidence. This means that scientific communication during a volcanic crisis needs to be constant and permanently updated with the arrival of each new piece of data. The longer it takes to make a decision, the greater the potential losses are likely to be as vulnerability increases. This constitutes the main challenge in communicating forecasts and probabilities during a volcanic crisis. In essence, the relationship between the reduction of uncertainty in the interpretation of the warning signs of pre-eruptive processes to acceptable (reliable) levels, and the time required to make a correct decision, is a function of the degree of scientific knowledge of the volcanic process and of the effectiveness of scientific communication. Therefore, scientific communication during a volcanic crisis needs to be effective from the start.
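Updating a forecast with each new piece of monitoring data is the core of the Bayesian event-tree tools cited in this chapter (e.g. BET, HASSET). A minimal sketch of a single-node Bayesian update is shown below; the prior and the likelihood values assigned to each observation are entirely hypothetical and stand in for the expert- and data-derived values a real tool would use:

```python
def bayes_update(prior, likelihood_if_eruption, likelihood_if_no_eruption):
    """Posterior P(eruption | observation) via Bayes' rule."""
    numerator = likelihood_if_eruption * prior
    evidence = numerator + likelihood_if_no_eruption * (1 - prior)
    return numerator / evidence

# Hypothetical values: a 10% prior from the eruptive record, updated as two
# (assumed) precursory observations arrive during unrest
p = 0.10
observations = [("seismic swarm", 0.8, 0.3), ("ground deformation", 0.7, 0.2)]
for name, l_eruption, l_no_eruption in observations:
    p = bayes_update(p, l_eruption, l_no_eruption)
    print(f"after {name}: P(eruption) = {p:.2f}")
```

Each observation that is more likely under an impending eruption than under continued quiescence raises the posterior; communicating that updated number promptly, each time it changes, is the communication challenge described above.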


In order to improve scientific communication during a volcanic crisis, it is recommended that the communication protocols and procedures used by the different volcano observatories and scientific advisory committees be compared for each level of communication: scientist to scientist, scientist to technician, scientist to civil protection, and scientist to general public. Experience from other natural hazards helps, as do clear and effective ways to show probabilities and their associated uncertainties. Although each cultural and socio-economic situation will have different communication requirements, comparing different experiences will help improve each particular communication approach, thus reducing uncertainty in communicating volcano forecasts.

Finally, it is worth mentioning that a crucial aspect in facilitating risk communication is education. This, however, is a long-term task that needs to be conducted permanently in societies threatened by natural hazards. Risk perception depends on cultural beliefs, but also on whether or not a society has been educated about its natural environment and potential hazards. In the same way, scientific communication is better received and understood when the population has prior knowledge of the existence and potential impacts of natural hazards. There are numerous studies that demonstrate the importance of public education, pre-crisis education programmes and risk perception for a better understanding of scientific communication during a crisis (e.g. Bird et al. 2009; Budescu et al. 2012; Dohaney et al. 2015). Most of them agree that populations better educated about natural hazards understand risk communication better and behave in a more orderly way during a crisis. Therefore, best practices in communication should also consider improving the education of the population on natural hazards, their potential impacts and the ways to minimise the associated risks, as well as on how to behave during the implementation of emergency plans in a crisis.


References

  1. Mastrandrea MD et al (2010) Guidance note for lead authors of the IPCC fifth assessment report on consistent treatment of uncertainties. IPCC cross-working group meeting on consistent treatment of uncertainties, Jasper Ridge, CA, USA, 6–7 July 2010
  2. Jordan TH et al (2011) Operational earthquake forecasting: state of knowledge and guidelines for utilization. A report by the international commission on earthquake forecasting for civil protection. Ann Geophys 54(4). doi: 10.4401/ag-5350
  3. Aspinall WP (2006) Structured elicitation of expert judgment for probabilistic hazard and risk assessment in volcanic eruptions. In: Mader HM et al (eds) Statistics in volcanology. Special Publication of IAVCEI #1, Geological Society of London, pp 15–30
  4. Aspinall WP (2010) A route to more tractable expert advice. Nature 463:294–295
  5. Aspinall W, Cook R (1998) Expert judgement and the Montserrat volcano eruption. In: Mosleh A et al (eds) Proceedings of the 4th international conference on probabilistic safety assessment and management PSAM4, 13–14 Sept. Springer, New York, pp 2113–2118
  6. Barberi F, Zuccaro G (2004) Soma Vesuvius MESIMEX. Final technical implementation report, 2004/393427
  7. Baxter PJ, Aspinall WP, Neri A, Zuccaro G, Spence RJS, Cioni R, Woo G (2008) Emergency planning and mitigation at Vesuvius: a new evidence-based approach. J Volcanol Geoth Res 178:454–473
  8. Bird DK, Gisladottir G, Dominey-Howes D (2009) Resident perception of volcanic hazards and evacuation procedures. Nat Hazards Earth Syst Sci 9:251–266
  9. Blong R (2000) Volcanic hazards and risk management. In: Sigurdsson H et al (eds) Encyclopedia of volcanoes. Academic, San Diego, pp 1215–1227
  10. Bruine De Bruin W et al (2000) Verbal and numerical expressions of probability: "It's a fifty-fifty chance". Organ Behav Hum Decis Process 81(1):115–131
  11. Budescu DV, Broomell S, Por HH (2009) Improving communication of uncertainty in the reports of the intergovernmental panel on climate change. Psychol Sci 20(3):299–308
  12. Budescu DV, Por HH, Broomell S (2012) Effective communication of uncertainty in the IPCC reports. Clim Change 113:181–200
  13. Chester DK, Dibben CJL, Duncan AM (2002) Volcanic hazard assessment in Western Europe. J Volcanol Geoth Res 115:411–435
  14. Colyvan M (2008) Is probability the only coherent approach to uncertainty? Risk Anal 28(3):645–652
  15. Cooke RM (1991) Experts in uncertainty: opinion and subjective probability in science. Oxford University Press, Oxford
  16. Cosmides L, Tooby J (1996) Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty. Cognition 58(1):1–73
  17. Cox LA Jr (2012) Confronting deep uncertainties in risk analysis. Risk Anal 32:1607–1629
  18. Dohaney J, Brogt E, Kennedy B, Wilson TM, Lindsay JM (2015) Training in crisis communication and volcanic eruption forecasting: design and evaluation of an authentic role-play simulation. J Appl Volcanol 4:12. doi: 10.1186/s13617-015-0030-1
  19. Donovan AR, Oppenheimer C, Bravo M (2012a) Science at the policy interface: volcano monitoring technologies and volcanic hazard management. Bull Volcanol 74:1–18
  20. Donovan AR, Oppenheimer C, Bravo M (2012b) Social studies of volcanology: knowledge generation and expert advice on active volcanoes. Bull Volcanol 74:677–689
  21. Donovan AR, Oppenheimer C, Bravo M (2012c) The use of belief-based probabilistic methods in volcanology: scientists' views and implications for risk assessments. J Volcanol Geotherm Res 247–248:168–180
  22. Doyle EEH, McClure J, Johnston DM, Paton D (2014a) Communicating likelihoods and probabilities in forecasts of volcanic eruptions. J Volcanol Geoth Res 272:1–15
  23. Doyle EEH, McClure J, Paton D, Johnston DM (2014b) Uncertainty and decision-making: volcanic crisis scenarios. Int J Disaster Risk Reduct 10:75–101
  24. Doyle EEH, Paton D, Johnston D (2015) Enhancing scientific response in a crisis: evidence-based approaches from emergency management in New Zealand. J Appl Volcanol 4:1
  25. Eiser JR, Bostrom A, Burton I, Johnston DM, McClure J, Paton D, van der Pligt J, White MP (2012) Risk interpretation and action: a conceptual framework for responses to natural hazards. Int J Disaster Risk Reduct 1:5–16
  26. Fearnley CJ (2013) Assigning a volcano alert level: negotiating uncertainty, risk, and complexity in decision-making processes. Environ Plan A 45(8):1891–1911
  27. Fischhoff B (1994) What forecasts (seem to) mean. Int J Forecast 10:387–403
  28. Fiske RS (1984) Volcanologists, journalists, and the concerned local public: a tale of two crises in the eastern Caribbean. In: Studies in geophysics: explosive volcanism: inception, evolution and hazards. National Academy Press, Washington, pp 170–176
  29. Gigerenzer G (2014) Risk savvy: how to make good decisions. Penguin, 312 pp
  30. Gigerenzer G, Hertwig R, van den Broek E, Fasolo B, Katsikopoulos KV (2005) "A 30% chance of rain tomorrow": how does the public understand probabilistic weather forecasts? Risk Anal 25(3)
  31. Gonick L, Smith W (2008) The cartoon guide to statistics. Paw Prints, 240 pp
  32. Haynes K, Barclay J, Pidgeon N (2007) The issue of trust and its influence on risk communication during a volcanic crisis. Bull Volc 70(5):605–621
  33. Haynes K, Barclay J, Pidgeon N (2008) Whose reality counts? Factors affecting the perception of volcanic risk. J Volcanol Geoth Res 172(3–4):259–272
  34. IAVCEI Subcommittee for Crisis Protocols (1999) Professional conduct of scientists during volcanic crises. Bull Volcanol 60:323–334; comment and reply in Bull Volcanol 62:62–64
  35. Intergovernmental Panel on Climate Change (2005) Guidance notes for lead authors of the IPCC fourth assessment report on addressing uncertainties
  36. Intergovernmental Panel on Climate Change (2007) A report of working group I of the intergovernmental panel on climate change: summary for policymakers
  37. Jaynes ET (2003) Probability theory: the logic of science. Cambridge University Press, Cambridge, 753 pp
  38. Joslyn SL, Nadav-Greenberg L, Taing MU, Nichols RM (2009) The effects of wording on the understanding and use of uncertainty information in a threshold forecasting decision. Appl Cognitive Psychol 23(1):55–72
  39. Kilburn CRJ (1978) Volcanoes and the fate of forecasting. New Scientist 80:511–513
  40. Kirkup L, Frenkel RB (2006) An introduction to uncertainty in measurement. Cambridge University Press, Cambridge, 248 pp
  41. Kuhberger A (1998) The influence of framing on risky decisions: a meta-analysis. Organ Behav Hum Dec 75(1):23–55
  42. Kunz M, Gret-Regamey A, Hurni L (2011) Visualization of uncertainty in natural hazards assessments using an interactive cartographic information system. Nat Hazards 59:1735–1751
  43. Lipkus M (2007) Numeric, verbal, and visual formats of conveying health risks: suggested best practices and future recommendations. Med Decis Making 27(5):696–713
  44. Martí J, Aspinall WP, Sobradelo R, Felpeto A, Geyer A, Ortiz R, Baxter P, Cole P, Pacheco JM, Blanco MJ, Lopez C (2008) A long-term volcanic hazard event tree for Teide-Pico Viejo stratovolcanoes (Tenerife, Canary Islands). J Volcanol Geotherm Res 178:543–552
  45. Marzocchi W, Bebbington M (2012) Probabilistic eruption forecasting at short and long time scales. Bull Volc 74:1777–1805
  46. Marzocchi W, Woo G (2007) Probabilistic eruption forecasting and the call for an evacuation. Geophys Res Lett 34
  47. Marzocchi W, Sandri L, Gasparini P, Newhall C, Boschi E (2004) Quantifying probabilities of volcanic events: the example of volcanic hazard at Mount Vesuvius. J Geophys Res 109. doi: 10.1029/2004JB003155
  48. Marzocchi W, Sandri L, Selva J (2008) BET EF: a probabilistic tool for long- and short-term eruption forecasting. Bull Volcanol 70(5):623–632
  49. Marzocchi W, Sandri L, Selva J (2010) BET VH: a probabilistic tool for long-term volcanic hazard assessment. Bull Volcanol 72:705–716
  50. Marzocchi W, Newhall C, Woo G (2012) The scientific management of volcanic crises. J Volcanol Geotherm Res 247–248:181–189
  51. McClure J, White J, Sibley CG (2009) Framing effects on preparation intentions: distinguishing actions and outcomes. Disaster Prev Manage 18:187–199
  52. McGuire WJ, Solana MC, Kilburn CRJ, Sanderson D (2009) Improving communication during volcanic crises on small, vulnerable islands. J Volcanol Geotherm Res 183(1–2):63–75
  53. McKillup S, Dyar MD (2010) Geostatistics explained: an introductory guide for earth scientists. Cambridge University Press, Cambridge, 414 pp
  54. Morss RE, Demuth JL, Lazo JK (2008) Communicating uncertainty in weather forecasts: a survey of the U.S. public. Weather Forecast 23(5):974
  55. Neri A, Aspinall WP, Cioni R, Bertagnini A, Baxter PJ, Zuccaro G, Andronico D, Barsotti S, Cole PD, Esposti Ongaro T, Hincks TK, Macedonio G, Papale P, Rosi M, Santacroce R, Woo G (2008) Developing an event tree for probabilistic hazard and risk assessment at Vesuvius. J Volcanol Geoth Res 178:397–415
  56. Newhall C, Hoblitt R (2002) Constructing event trees for volcanic crises. Bull Volc 64(1):3–20
  57. Oliver-Smith A, Hoffmann SM (eds) (1999) The angry earth: disaster in anthropological perspective. Routledge, New York
  58. Paton D, Johnston D, Houghton B, Flin R, Ronan K, Scott B (1999) Managing natural hazard consequences: information management and decision making. J Am Soc Prof Emerg Managers 6:37–48
  59. Patt A, Dessai S (2005) Communicating uncertainty: lessons learned and suggestions for climate change assessment. Comptes Rendus Geosci 337(4):425–441
  60. Pollack HN (2003) Uncertain science… uncertain world. Cambridge University Press, Cambridge, 256 pp
  61. Rice JA (2006) Mathematical statistics and data analysis. Duxbury Press, Belmont
  62. Risbey JS, Kandlikar M (2007) Expressions of likelihood and confidence in the IPCC uncertainty assessment process. Clim Change 85(1–2):19–31
  63. Slovic P (2016) The perception of risk. Routledge
  64. Sobradelo R, Martí J (2010) Bayesian event tree for long-term volcanic hazard assessment: application to Teide-Pico Viejo stratovolcanoes, Tenerife, Canary Islands. J Geophys Res 115. doi: 10.1029/2009JB006566
  65. Sobradelo R, Martí J (2015) Short-term volcanic hazard assessment through Bayesian inference: retrospective application to the Pinatubo 1991 volcanic crisis. J Volcanol Geoth Res 290:1–11
  66. Sobradelo R, Bartolini S, Martí J (2013) HASSET: a probability event tree tool to evaluate future volcanic scenarios using Bayesian inference. Bull Volcanol 76(1):1–15
  67. Sobradelo R, Martí J, Kilburn CRJ, López C (2014) Probabilistic approach to decision-making under uncertainty during volcanic crises: retrospective application to the El Hierro (Spain) 2011 volcanic crisis. Nat Hazards. doi: 10.1007/s11069-014-1530-8
  68. Solana MC, Kilburn CRJ, Rolandi G (2008) Communicating eruption and hazard forecasts on Vesuvius, Southern Italy. J Volcanol Geoth Res 172:308–314
  69. Sparks RSJ (2003) Forecasting volcanic eruptions. Earth Planet Sci Lett 210(1–2):1–15
  70. Spiegelhalter D, Pearson M, Short I (2011) Visualizing uncertainty about the future. Science 333(6048):1393–1400
  71. Stein S, Geller RJ (2012) Communicating uncertainties in natural hazard forecasts. Eos 93(38):361–362
  72. Stein S, Stein JL (2013) How good do natural hazard assessments need to be? GSA Today 23(4/5):60–61
  73. Tazieff H (1977) La Soufrière, volcanology and forecasting. Nature 269:96–97
  74. Windschitl PD, Weber EU (1999) The interpretation of "likely" depends on the context, but "70%" is 70%—right? The influence of associative processes on perceived certainty. J Exp Psychol Learn 25(6):1514–1533

Copyright information

© The Author(s) 2017

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Authors and Affiliations

  1. Willis Research Network (Willis Towers Watson), London, UK
  2. UCL Hazard Centre (Honorary Fellow Member), University College London, London, UK
  3. Institute of Earth Sciences Jaume Almera, CSIC, Barcelona, Spain