6.1 Introduction

There are many weather-related hazards, of which only a few occur in the atmosphere itself. Others arise from the interaction of the atmosphere with other components of the natural environment, including oceans, rivers and the land surface. In this chapter, we show that:

  • A wide variety of environmental hazards result from the weather

  • Hazards are predicted using process models or statistical models, which may be coupled with or integrated into NWP models

  • Some weather systems give rise to multiple hazards, which must be predicted consistently

  • Hazard forecasting methods and terminology have evolved separately to meet the needs of science and users, sometimes in very different ways from those used in weather forecasting

  • When linking hazard models to NWP models, care must be taken that variables represent the same things in each model, that space and timescales match and that biases have been removed

  • Observations of hazards are fundamental to understanding and verification but are not widely available or easily accessible

  • Successful partnerships rest on a shared understanding of the methods and viewpoints of the various hazard sciences and on agreed common objectives

6.2 Hazard Forecasting

In this section, we look at a variety of weather-related hazards for which forecasts might be needed. We start with a brief section covering some general issues in hazard forecasting: model building, availability of observations, consistency, timeliness and interfaces between models. Then, we look in turn at river flood, coastal marine hazards, surface water flood, wet landslide, winds in cyclones, orographic windstorm, severe convective storm, wildfire, extreme heat, air pollution, fog and winter weather. Table 6.1 provides an overview of aspects of forecasting in each area. We conclude with a consideration of multi-hazard forecasting and evaluation of hazard forecasts.

Table 6.1 Contributing weather variables, methods of prediction and ancillary data inputs for a selection of hazards

6.2.1 General Aspects of Hazard Prediction

Prediction of hydrometeorological hazards starts by identifying states of the environment that cause significant socio-economic impact. Observation and prediction of these states are the basis of monitoring and warning. While observations of the atmosphere are widely available, observations of land surface conditions, the oceans and atmospheric pollution are much sparser and more difficult to access. Observations of extreme conditions are especially rare. Process models are often developed and calibrated using data from field experiments, but these rarely include extremes. Remote sensing offers broad spatial and temporal coverage and can capture these rare events, but with less detail and lower accuracy than in situ measurements. Future improvements in hazard prediction, particularly using data-driven techniques, require clearer definition of the required hazard variables, development of sensing methods, reporting to agreed standards, open exchange and accessible archives (Fig. 6.1).

Fig. 6.1

Selection of commonly applied process-based hazard prediction models and some of the linkages between the processes involved. (© Crown Copyright 2021, Met Office)

From observations, a description of the physical processes involved in creating the hazardous states is developed, aimed at generating a mathematical model. Process models embody our understanding of how the hazard develops and can allow for complex interactions of multiple forcings. They may be used directly for prediction or may be used as simulators to generate data for use in training statistical models. When used for prediction, outputs often require further post-processing and transformation for them to be useful in warnings, including calibration to remove biases, downscaling to specific locations and translation into variables that relate to socio-economic impact.

Where the processes are not fully known or are too complex or uncertain to model, the process-based description is complemented or replaced by a statistical or empirical model derived from past or simulated data. Ranging from simple linear regression to convolutional neural networks operating in a deep learning framework, these techniques bypass the need to model the processes. They are unbiased, by design, and may be tuned for individual locations or areas. Statistical models require a large training dataset spanning the observed space of the predictand and an independent testing dataset for evaluation. Some methods, such as decision trees, Bayesian networks and fuzzy logic, include human reasoning in the design of the model, while data-driven approaches such as machine learning (ML) allow the model to be determined by the relationship between input and output data with only general guidance on model structure. ML is rapidly gaining use in hazard prediction, facilitated by open-source code libraries and fast access to data from multiple sources (e.g. Lagerquist et al. 2020).
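
As a minimal illustration of the data-driven approach described above, the sketch below fits a simple statistical model (logistic regression) to synthetic pairs of weather predictors and hazard occurrence and returns a probability for a new case. The predictors, data and coefficients are invented assumptions for demonstration and do not represent any operational system.

```python
# Minimal, illustrative sketch of a statistical hazard-occurrence model.
# Predictors, data and thresholds are synthetic assumptions for demonstration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic training data: 24-h rainfall (mm) and antecedent soil wetness (0-1)
n = 2000
rain = rng.gamma(shape=2.0, scale=15.0, size=n)
wetness = rng.uniform(0.0, 1.0, size=n)
# Hazard (e.g. local flooding) more likely for heavy rain falling on wet ground
p_true = 1.0 / (1.0 + np.exp(-(0.08 * rain + 3.0 * wetness - 6.0)))
occurred = rng.uniform(size=n) < p_true

X = np.column_stack([rain, wetness])
X_train, X_test, y_train, y_test = train_test_split(X, occurred, test_size=0.3,
                                                    random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("accuracy on independent test data:", model.score(X_test, y_test))

# Probability of the hazard for a new forecast case: 60 mm of rain on moist ground
print("P(hazard):", model.predict_proba([[60.0, 0.7]])[0, 1])
```

As the text notes, the training data must span the observed range of the predictand, and evaluation on an independent test set is essential before such a model is trusted near the extremes.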

The credibility and usefulness of hazard forecasts are dependent on consistency – in time and space – between variables and between products. A flood forecast for a different catchment from the rain forecast and an ice forecast for earlier than the corresponding fall in temperature are inconsistencies that could undermine credibility. While a single forecaster is unlikely to make mistakes of this sort, products issued from different locations on different schedules and with different spatial/temporal resolutions can easily fall into these traps. On the other hand, some apparent inconsistencies are real. In this case, they need to be justified, e.g. a river flood occurring days after the rain. Inconsistency between experts is inevitable but must be resolved before a warning is issued, so as to avoid confusion, e.g. if the meteorological forecaster says there will be flooding but the hydrological forecaster says there will not.

Time is critical for warning and response. Most hazard forecasting methods must wait for weather forecast inputs. Minimising this delay is important – but poor-quality inputs will produce misleading warnings however quickly they are available. Frequent rapid updates can be helpful if skill improves with each update, and fast post-processing for multiple output locations and requirements is an advantage. For some hazards, such as tornadoes or flash floods, minutes of lead time can be critical, but a misleading forecast is worse than none, so warnings should only be issued or changed when consistent guidance is available – from observations, multiple forecasts and ensembles.

The interfaces between process models can be sources of error. This is particularly the case where biases can accumulate. For instance, a small temperature error of say 0.5 °C may be ignored by the meteorologist as being unimportant, but a persistent bias of 0.5 °C in the evaporation calculations of a soil moisture deficit model can grossly distort flood and drought calculations.
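
The scale of the problem can be illustrated with a back-of-the-envelope sketch. The evaporation sensitivity and the length of the dry spell used below are assumed round numbers, chosen only to show how a small, persistent bias compounds at the model interface.

```python
# Illustrative only: how a persistent 0.5 degC temperature bias can distort a
# soil moisture deficit (SMD) accounting model over a season. The evaporation
# sensitivity is an assumed round number, not a calibrated value from any scheme.
temp_bias_c = 0.5        # persistent warm bias passed to the hydrology model (degC)
evap_sensitivity = 0.2   # assumed extra evaporation per degree (mm/day/degC)
days = 90                # length of a dry spell being simulated

extra_deficit_mm = temp_bias_c * evap_sensitivity * days
print(f"Spurious extra soil moisture deficit after {days} days: "
      f"{extra_deficit_mm:.0f} mm")
# Around 9 mm of spurious deficit: enough to delay the onset of modelled runoff
# and so to underestimate the flood response to the next heavy rainfall event.
```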

6.2.2 River Flood

Floods are the most frequent type of natural hazard associated with disasters and affect more people than all other types combined (55% of the global total from 1994 to 2013, CRED 2015). When heavy rain falls on the ground, some of it infiltrates. The excess may directly inundate the land surface or drain into streams and rivers that then swell beyond their banks. When heavy rainfall infiltrates pervious rock strata, it may also create excess groundwater, resurfacing in distant locations to cause flooding. Flooding may also result from melting of snow: seasonally in snow-dominated landscapes or episodically, such as in rain-on-snow events (Fig. 6.2).

Fig. 6.2

Processes involved in modelling terrestrial flooding. (© Crown Copyright 2021, Met Office)

The speed of the flood wave down a river depends on the steepness of the catchment and on the magnitude of the flood. Large floods can travel faster or slower than small ones, depending on the cross-section profile of the channel and the friction of the water passing over different materials in the riverbed. When the river overflows its banks, flooding spreads across adjacent low-lying areas. In the flattest regions of some continents, floods may spread over thousands of square kilometres and persist for weeks (Sajjad et al. 2019).

Natural systems interact with human systems throughout the landscape (Sene 2008). Built environments are often made of impervious materials that prevent infiltration, resulting in increased runoff which is then channelled through canals and pipes. Gates and levees are used to control the path of the flood, and reservoirs may allocate space for flood control, while in exceptional circumstances, dam failures can exacerbate floods. Deforestation and wildfire also affect the balance between runoff, evapotranspiration and infiltration.

Five primary modelling challenges for flood forecasters (Pagano et al. 2014; Adams and Pagano 2016) are:

  (a) Estimating antecedent conditions of the catchment/watershed, often through accounting of historical precipitation

  (b) Predicting future precipitation, often using Numerical Weather Prediction models

  (c) Partitioning precipitation into runoff and infiltration

  (d) Tracking the flood wave as it travels downstream across the landscape

  (e) Relating the river flow to river depth and/or extent at key sites of interest

The simplest approach for large rivers has been to observe the river flow (or level) at upstream locations and then to relate these, statistically, to the later observed flow or level at the location of concern. Given adequate observations, this approach can provide accurate flood forecasts. However, it can only be applied to measured locations and requires that the upstream-downstream relationship is recalibrated frequently.

For the upper reaches and rapidly responding rivers, rainfall-runoff models relate the river flow to the rainfall in the catchment or watershed with simple representations of rainfall infiltration and evapotranspiration of soil moisture. Traditionally, such models have been basic, representing the system as a collection of “leaky buckets” (e.g. Perrin et al. 2003), with parameters tuned to get a good fit between historical simulations and observations of river flow (Duan et al. 1993). The spatial dimension can be represented in one of three ways (Khakbaz et al. 2012): the catchment may be lumped, where a single time series of catchment average rainfall forces the model; it may be semi-distributed, with the landscape represented by irregularly shaped but hydrologically homogeneous areas; or it may be distributed, with the landscape represented by a regular grid (like Numerical Weather Prediction models). In the latter two cases, runoff is aggregated to the catchment outlet using routing methods. Traditionally, both runoff and routing components have required tuning (Duan et al. 1993; Overton 1966).
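
A minimal sketch of the "leaky bucket" idea is given below: a single lumped store receives catchment-average rainfall, loses water to evapotranspiration and releases water to the river. The storage capacity, drainage fraction and initial state are arbitrary assumptions for illustration; in practice such parameters are tuned against observed flows, as noted above.

```python
# Minimal lumped "leaky bucket" rainfall-runoff sketch (illustrative parameters).
def leaky_bucket(rainfall_mm, pet_mm, capacity_mm=150.0, drain_frac=0.05):
    """Step a single soil-moisture store through daily rainfall and potential
    evapotranspiration (PET), returning daily runoff in mm.

    capacity_mm : assumed storage capacity of the bucket
    drain_frac  : assumed fraction of storage released to the river each day
    """
    storage = 0.5 * capacity_mm       # assumed initial condition: half full
    runoff = []
    for rain, pet in zip(rainfall_mm, pet_mm):
        storage += rain
        storage -= min(pet, storage)  # evapotranspiration limited by storage
        spill = max(0.0, storage - capacity_mm)   # saturation excess
        storage -= spill
        baseflow = drain_frac * storage           # slow drainage to the river
        storage -= baseflow
        runoff.append(spill + baseflow)
    return runoff

# Example: a week of rain over an initially moist catchment
print(leaky_bucket([0, 5, 40, 60, 10, 0, 0], [2, 2, 1, 1, 2, 3, 3]))
```

Semi-distributed and distributed variants apply the same accounting to many stores and then route the resulting runoff to the catchment outlet.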

For the lower reaches of rivers, the shape of the channel and its surroundings, the composition of the channel bed and vegetation are important characteristics, requiring the application of hydraulic models. These have historically represented the river as a series of segments, with a cross-section shape and bed friction that vary along its length.

Where appropriate, flood forecasters also make use of water operations models (e.g. Klipsch and Hurst 2007), which simulate the filling and spilling of reservoirs, many of which operate to fixed rules, sometimes set by laws or treaties.

Current research is moving towards the application of three-dimensional gridded models that use process-based descriptions of the flow of water, integrating runoff, river flow and inundation (e.g. Yamazaki et al. 2011), with some also including subsurface hydrogeological flows. Satellite measurements of flood inundation are used to calibrate, update and verify these models (Wu et al. 2014). These approaches facilitate extended models covering whole countries, and outputs from global flood models (e.g. the ECMWF GLOFAS model, Alfieri et al. 2013) are now freely available to flood warning authorities.

6.2.3 Coastal Marine Hazards

Coastal erosion and flooding are hazards that affect many major cities. They may result from local wind-driven ocean waves, remotely generated swell waves or storm surges (storm tides) (Fig. 6.3). Here, we do not consider tsunamis, which are geologically initiated. The energy in storm waves can destroy coastal flood defences, both natural and man-made, while the combined elevation of storm tide and storm waves can project large amounts of water over remaining defences, inundating the land behind. The destruction of coastal defences may allow further flooding from subsequent regular high tides if no action is taken to repair the damage.

In situ observations of ocean waves are made by ships and buoys – the latter moored to fixed locations. Basic measurements are the average wave height (usually of the one third highest waves) and the average time between waves. Modern buoys can analyse the wave structure into a frequency and direction spectrum. Storm surges are measured by tide gauges after removal of the astronomical tide. In both cases, the measurements are very sparse compared to the variability of the phenomena being measured. Satellites are also able to provide information on waves and tides, but there are considerable challenges in using it.

The main meteorological inputs are the distribution of wind and pressure across upstream ocean areas, but accurate flood prediction also depends on knowledge of detailed bathymetry, especially near the shore, and of the coastal defences, whether natural or engineered.

Simple deep-water ocean wave predictions can be obtained using statistical relationships between wave height and wavelength/wave period and the local wind speed, the duration for which the wind has blown and/or the fetch of ocean over which it blows. Outside the influence of a storm, swell waves propagate along great circles with little loss of energy until they reach shallow water.
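
As an illustration of such a statistical relationship, the sketch below evaluates fetch-limited deep-water wave height and period using relations of the Sverdrup–Munk–Bretschneider (SMB) type. The coefficients are the values commonly quoted in textbooks and are reproduced here for illustration only; they are not a substitute for the wave models described below.

```python
# Illustrative fetch-limited deep-water wave estimate (SMB-type relations).
# Coefficients are commonly quoted textbook values, reproduced here for
# illustration only; they are not a substitute for a wave model.
import math

def smb_fetch_limited(wind_speed, fetch, g=9.81):
    """Significant wave height (m) and period (s) for a given 10-m wind
    speed (m/s) blowing over a given fetch (m)."""
    x = g * fetch / wind_speed**2                  # dimensionless fetch
    hs = 0.283 * wind_speed**2 / g * math.tanh(0.0125 * x**0.42)
    ts = 7.54 * wind_speed / g * math.tanh(0.077 * x**0.25)
    return hs, ts

hs, ts = smb_fetch_limited(wind_speed=20.0, fetch=200e3)   # gale over 200 km
print(f"significant wave height ~{hs:.1f} m, period ~{ts:.1f} s")
```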

Fig. 6.3

Processes involved in modelling coastal flood hazards. (© Crown Copyright 2021, Met Office)

Current best practice forecasts use third-generation ocean wave models (Komen et al. 1994), which incorporate energy transfers between different wavelengths. While the processes that transfer energy from wind to waves are broadly understood, the details are too complex for wave prediction models which, instead, rely on simplified relationships between mean wind at a defined height and the spectrum of wave energy. Wave breaking is also represented in a simplified way. The most advanced of these models represent the influences of ocean currents and of refraction and diffraction in shallow water (Booij et al. 1999).

Storm surges (storm tides) are generated by low pressure and wind acting on the ocean, producing a large-scale wave. When this propagates into shallow water, its height is magnified, producing an enhanced high tide which can overtop coastal defences producing serious flooding (Pugh and Woodworth 2014). In some locations, storm surges propagate along the coast, and simple predictions can then be obtained by upstream observation, using statistical relationships between surge heights at different locations, derived either from observations or from modelling. Simple predictions may also be obtained from statistical relationships between prior pressure and wind speed/direction patterns in specific areas of the ocean and observed surges. However, such relationships have limited application, and current best practice is to use a vertically integrated 2-D ocean model (Flather 2000). Since the surge and the astronomical tide interact, it is important that the ocean model reproduces the astronomical tides. Inshore amplification of the surge is sensitive to small-scale bathymetry, so requires a high-resolution local model.

Having computed mean water depth using the surge model and wave height using the wave model, whether offshore or using an inshore model, the estimation of erosion and of water volumes overtopping defences must usually be carried out statistically – either based on historical data or using offline models tuned to specific locations. The choice of forecast location is important; it is usually the location that local knowledge indicates will be the first to overtop. Once an overtopping volume has been forecast, flood extent and depth can be modelled in the same way as for a river breach.

Current research is leading towards the use of 3-D ocean models for surge forecasting and their integration with ocean wave and NWP models (Cavaleri et al. 2018). The benefits include a better energy budget and consistency, especially for wave-current interactions. The use of variable inshore resolution will enable improved representation of inshore processes, potentially requiring inclusion of time-varying water extent due to both tide and surge.

Major challenges remain, particularly in modelling inshore processes, including seabed and beach sediment transport during storms (e.g. Carniel et al. 2011). Detailed modelling of the interaction of waves with coastal defences remains possible only for simple geometries, while the regular observations needed to estimate failure likelihood are simply not available, particularly for natural defences. It therefore seems likely that the final step of computing defence failure and overtopping volume will continue to be based on statistical relationships for the foreseeable future.

Other coastal hazards for which warnings may be required include dangers to bathers from rip tides/currents and the growth of potentially toxic algal blooms (red tides). Prediction of algal blooms requires a much wider interaction of ocean, land and atmospheric processes, with temperature, nutrient runoff and biological processes as key components (Pettersson and Pozdnyakov 2012; Zohdi and Abbaspour 2019).

6.2.4 Surface Water Flood

Whereas a flood wave in a river can be tracked downstream, the prediction of flooding from intense rain that has yet to enter a watercourse or that overflows from drains, ditches and other minor watercourses is much more challenging, requiring detailed knowledge of the rainfall intensity distribution. Once the water is on the ground, predicting its movement requires an extremely high-resolution representation of the surface and how it might change as the flowing water picks up, transports and deposits material or erodes new channels. Predictions must also account for absorption of water into the ground, requiring knowledge of soil moisture for natural surfaces and drainage capacity in urban areas (Bach et al. 2014).

Simple approaches to surface water flood prediction rely on statistical analysis of thresholds in rainfall depth and duration beyond which flooding is observed to occur in particular locations. When using these, the rainfall amount should be adjusted for absorption into the ground, to give an “effective rainfall” threshold for flood occurrence. More sophisticated approaches have been developed and may be suitable for specific applications (Shaw et al. 2011). For small rural and upland catchments, rainfall-runoff models can be used to route water into minor watercourses for flash flood prediction. For urban areas, hydrodynamic models of various complexities can be used to model surface and/or subsurface drainage networks (Bach et al. 2014). More generally, distributed inundation models are now available with the capability to use gridded rainfall time series as input and to model the flow of water across the land surface and in water courses (Bates et al. 2004). These approaches are all highly sensitive to the land surface specification, with metre-scale horizontal resolution and centimetre-scale vertical accuracy necessary in urban areas. Given the limited predictability of intense local rainfall, and the consequent need for speed, approaches using pre-computed flood scenarios are being adopted as an alternative to real-time computation (Aldridge et al. 2020; Birch et al. 2021).
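
The threshold approach can be sketched as follows: the forecast accumulation is reduced by an assumed infiltration loss to give an effective rainfall, which is compared with a depth–duration threshold for the location. The threshold table and loss rate below are invented values; real thresholds are derived from the local flooding record.

```python
# Illustrative surface water flood threshold check. All numbers are invented
# for demonstration; real thresholds come from local historical analysis.
# Threshold table: duration (h) -> effective rainfall depth (mm) at which
# surface water flooding has historically been observed at this location.
thresholds_mm = {1: 30.0, 3: 45.0, 6: 60.0}

def exceeds_threshold(forecast_rain_mm, duration_h, infiltration_mm_per_h=3.0):
    """Return True if the effective rainfall (forecast minus an assumed
    infiltration loss) exceeds the depth-duration threshold."""
    effective = forecast_rain_mm - infiltration_mm_per_h * duration_h
    return effective >= thresholds_mm[duration_h]

print(exceeds_threshold(forecast_rain_mm=55.0, duration_h=3))   # True
print(exceeds_threshold(forecast_rain_mm=35.0, duration_h=3))   # False
```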

In the future, real-time computation will be coupled with real-time updating of flow paths in critical urban areas.

6.2.5 Wet Landslide

The spatial distribution of landslides in an area reflects variations in the underlying geological, geomorphological and hydrological conditions. Landslides can be triggered by intense rainfall, snowmelt and ground vibration such as earthquakes or human-induced changes in stress conditions, e.g. road construction and quarrying. Landslide inventories provide an overview of the extent of landsliding in an area, spatially and temporally, and can include valuable information on the types of processes occurring. Inventories can be populated using direct mapping, either on the ground or using remote sensing, or by accessing archive material and harvesting social media data and news reports.

Rainfall is one of the most common triggers of landsliding with global fatalities focused in south, southeast and eastern Asia linked to the summer monsoon, typhoons and La Niña events (Froude and Petley 2018; Petley 2009). The impact of these landslides, not restricted to loss of life, is wide-ranging, and many communities are affected by loss of livelihoods and damage to transport links and infrastructure.

Landslide type is strongly influenced by the intensity, frequency and duration of the rainfall as well as by antecedent conditions. Large, deeper-seated failures respond more slowly to hydrological changes, while shallow slides and debris flows are most commonly triggered by short-duration, high-intensity rainfall events (Martelloni et al. 2012). Shallow landslides, the subject of most early warning systems, can be extremely rapid and destructive. They commonly occur due to the rapid infiltration of rainfall leading to a rise in pore water pressures and shear failure or due to sediment entrainment in surface water runoff (Baum et al. 2010; Godt and Coe 2007; Wieczorek 1996) (Fig. 6.4).

Fig. 6.4

Processes involved in modelling landslide hazards. (© Crown Copyright 2021, Met Office with input from BGS © UKRI 2021)

Landslides are forecast at scales ranging from national to slope scale. At slope scale, monitoring of deformation alongside hydrogeological and meteorological parameters can be used to produce slope-scale thresholds related to rates of deformation and changes in groundwater level, as well as to highlight precursors to failure, including the build-up of destabilising groundwater pressures.

At the regional to local scale, landslide forecasting is commonly carried out through the estimation of rainfall thresholds that lead to failure. Empirically derived thresholds are widely used, based on statistical analysis of the local historical rainfall record alongside a detailed, dated inventory of landslides (Caine 1980; Guzzetti et al. 2008; Brunetti et al. 2010). The most widely used rainfall variables are cumulated event rainfall-duration, intensity-duration or antecedent rainfall. Rainfall data are mostly obtained from rain gauges but also from radar and satellite. The quality and reliability of the threshold depend on the rainfall data quality, network density, temporal resolution (hourly, daily or coarser) and accuracy as well as on the landslide record (Gariano et al. 2020; Nikolopoulos et al. 2015).
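
The intensity–duration form of threshold can be illustrated with the widely cited global relation of Caine (1980), which takes the form I = αD^(−β). The coefficients below are the values commonly quoted for that global curve and are shown for illustration only; as stressed above, operational thresholds are derived from local rainfall and landslide records and differ substantially between regions.

```python
# Illustrative intensity-duration (ID) landslide threshold check.
# Coefficients are the commonly cited values for Caine's (1980) global
# threshold (I in mm/h, D in hours); local thresholds should always be
# derived from the regional rainfall and landslide records.
def caine_threshold_intensity(duration_h, alpha=14.82, beta=0.39):
    """Rainfall intensity (mm/h) above which shallow landsliding has been
    reported globally, for a given event duration (h)."""
    return alpha * duration_h ** (-beta)

def above_threshold(mean_intensity_mm_h, duration_h):
    return mean_intensity_mm_h >= caine_threshold_intensity(duration_h)

# A 6-hour event averaging 8 mm/h
print(caine_threshold_intensity(6.0))        # ~7.4 mm/h
print(above_threshold(8.0, 6.0))             # True
```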

Antecedent conditions may be incorporated by setting hydrological rather than rainfall thresholds (Reichenbach et al. 1998). The Norwegian Water Resources and Energy Directorate (NVE) has developed a forecasting system for rainfall- and snowmelt-induced landslides which combines real-time data (discharge, groundwater levels, soil water content) with modelled hydrometeorological conditions (Krøgli et al. 2018).

Physically based models which couple slope stability and hydrological models, to produce a spatial and temporal forecast of landsliding, have also been developed, most commonly at a slope or catchment scale (Montgomery and Dietrich 1994; Baum et al. 2002; Salvatici et al. 2018; Guzzetti et al. 2020). However, this approach requires significant amounts of geotechnical, mechanical and hydrological data.

6.2.6 Extreme Winds in Cyclones

Damaging winds are associated with both tropical and extra-tropical cyclones. These travelling storms typically form or intensify over oceans, which provide their main energy source, but may then move over land.

Tropical cyclones gain their energy primarily from the condensation of water vapour, evaporated from tropical seas, as it is lifted in the updraughts of storm clouds. They are associated with the most destructive winds, which may reach speeds of up to 300 km/h in the eyewall, a ring of intense precipitation and high winds wrapped around the centre of circulation. Strong winds and tornadoes can also occur far from the centre of the storm, associated with spiral bands of heavy rain and thunderstorms. Forecasts and warnings of winds are generally based on surface observations (including ships and buoys), Doppler radar (when in range), satellite data (both imagery and scatterometer winds) and aircraft reconnaissance (where available), together with outputs from statistical, statistical-dynamical and NWP models (Cangialosi et al. 2020). For the nowcasting of tropical cyclone impacts, observations of tropical cyclone structure, track and intensity are crucial (Leroux et al. 2018), with satellite and aircraft reconnaissance flights in particular providing timely information on changes to the locations and intensity of the strongest winds, e.g. changes in the eyewall structure.

There has been a large improvement in tropical cyclone track prediction by NWP models, but it has proved more challenging until recently to improve intensity forecasts (Yamaguchi et al. 2017). However, both regional higher-resolution models and global NWP models are now providing useful guidance, although the forecasting of rapid intensification remains challenging (Short and Petch 2018; Magnusson et al. 2019; Knaff et al. 2020). Satellites provide critical observations of the storm’s initial conditions, together with information on the wider environment affecting its evolution. Predictions from a variety of NWP models are widely shared and synthesised into the official advisories issued for each ocean basin.

Improvements to model physics and dynamics are required, along with representation of model uncertainties, to reliably simulate the possible developments. There is also great potential to improve the use of ensemble-based uncertainty information in tropical cyclone forecasts and warnings (Titley et al. 2019). Improvements in model resolution, in atmosphere-ocean coupling and in the representation of momentum exchange at the sea surface should lead to further improvements in intensity prediction.

Extra-tropical cyclones acquire their energy from the temperature gradient between tropics and poles, supplemented by the release of latent heat of condensation of water vapour as it cools. These storms cover a much larger area than tropical cyclones, so it is particularly important to identify which areas will be subject to damaging winds. NWP models are generally very accurate at predicting the intensity and track of such cyclones, but it is only in recent years that details of the wind structure, such as the sting jet (Clark and Gray 2018), have been adequately captured. Damaging winds associated with embedded convection, in urban areas, and in areas of complex topography, are generally predicted using empirical rules and forecaster experience, though very-high-resolution models are showing some promise at direct prediction.

6.2.7 Orographic Windstorms

Damaging winds can occur when the wind blows across a mountain range and the upper air temperature structure creates a barrier to upward motion, resulting in either large amplitude atmospheric waves or a hydraulic jump (Whiteman 2000). Such windstorms have a variety of names around the world (Fig. 6.5).

Fig. 6.5

Processes involved in modelling orographic windstorms. (© Crown Copyright 2021, Met Office)

Traditional forecasting approaches use analysis of observed wind direction and vertical temperature soundings to identify conditions favourable for storm development. NWP models provide good guidance in many cases, but the requirements for vertical resolution of the temperature structure and horizontal resolution of the mountains make accurate predictions of the marginal conditions for onset and cessation difficult, so the forecaster often uses a combination of methods to refine the timing and intensity of the storm. Improvements to NWP resolution will continue to improve the skill of predictions, many of which are freely available from global and regional NWP centres.

6.2.8 Extreme Winds, Lightning and Hail in Severe Convective Storms

Hazards associated with severe convective storms include intense rainfall, damaging straight-line winds and tornadoes, large hail and lightning. Vertical motion in convective clouds is fuelled by the release of latent heat of condensation, producing a deep cloud composed of water at lower levels and ice higher up (Yau and Rogers 1996), characterised by severe turbulence and icing. Most of the damaging winds are a result of outflow generated by thunderstorm downdrafts or tornadoes (Church et al. 1993). Hail and lightning result from the processes of freezing and melting of raindrops (Mason and Mason 2003) (Fig. 6.6).

Fig. 6.6

Processes involved in modelling convectively generated severe weather. (© Crown Copyright 2021, Met Office)

Forecasters monitor Doppler radar scans for indicators of severe wind or hail, together with other remote sensing tools such as satellite imagery and lightning detection networks, and/or spotter reports. They also look for indices in the storm environment, such as convective available potential energy, convective inhibition or helicity, that are statistically related to the occurrence of severe thunderstorm phenomena.

Convective-scale NWP models provide useful guidance in issuing warnings of extreme wind, lightning and hail threat. Part of the challenge is the accurate analysis of storms and their surrounding environment in the initial conditions, for which the assimilation of radar and satellite data is essential. The rapid growth of uncertainty in these forecasts requires the use of ensemble systems to generate probabilistic forecasts (Wheatley et al. 2015). As resolution, data assimilation and ensemble prediction improve at these convection-permitting scales, the contribution of NWP to forecasting these hazards will become dominant.

6.2.9 Wildfires

Predicting wildfire is becoming increasingly important with climate change and the move of people into peri-urban spaces (NAS 2020; Ubeda and Sarricolea 2016) (Fig. 6.7). The primary meteorological inputs for wildfire prediction are temperature, wind speed and direction, relative humidity, lightning and precipitation, including antecedent precipitation which affects fuel moisture. Environmental inputs include vegetation type, fuel load and moisture content. The recent burn history of an area has a strong influence on the vegetation available to be burned. Topography influences fire spread through meteorological effects such as wind channelling, boundary layer structure and rain enhancement/shadow. Large, intense fires can modify the surrounding meteorological environment, which may lead to unpredictable and dangerous fire behaviour. The passage of a cold front, accompanied by a substantial shift in wind direction, can turn the long flank of a fire into a new head fire with a much longer fire front. A significant uncertainty in wildfire prediction is ignition, which is often caused naturally by lightning but can also be caused by inadvertent or deliberate human activity.

Fire weather prediction occurs on multiple timescales. Coupled modelling at seasonal and multi-week timescales gives outlooks for anomalous fire weather conditions based on indices that combine multiple weather and environmental variables (e.g. Bedia et al. 2018; Dowdy 2020). In the medium and short range, ensemble and deterministic NWP is routinely used to predict fire weather conditions and is particularly important for forecasting significant wind changes. When fires are occurring or expected, forecasters need to make forecasts of detailed weather conditions at locations specified by the agencies responsible for firefighting.

Fig. 6.7

Processes involved in modelling wildfire hazards. (© Crown Copyright 2021, Met Office)

Simple statistical models may be used to represent linear (downwind) fire spread in simple conditions, for example, grass fires in flat terrain. Fire spread models take meteorological and environmental inputs and compute the speed and direction of the fire spread and may provide other parameters, such as flame height or fire radiative power. These models can represent changes associated with up-slope and down-slope direction and may include spotting. The outputs are very sensitive to wind direction. When run in ensemble mode, the probabilistic outputs can represent uncertainty in weather and fuel inputs and fire behaviour processes.

Very-high-resolution coupled fire-atmosphere models are now becoming available that simulate the feedback processes causing fires to grow explosively into extremely dangerous and unpredictable fires (e.g. Filippi et al. 2018; Jiménez et al. 2018). These models can simulate the transport of embers to start new fires and the acceleration of winds in hydraulic jumps in down-slope flow. Some models can also represent fire-generated thunderstorms (pyrocumulonimbus), generated by convergence of air towards the fire, that can generate lightning and downbursts with gusty winds. In general, these models are still too costly to run operationally (Peace et al. 2020).

Successful application of the best fire risk and fire behaviour models has the potential to enable more effective fire prevention and firefighting, with reduced loss of life and damage to property. However, remaining challenges include reliable observation of both fuel state and wind at sub-kilometre scales; accurate forecasting of weather and environmental inputs, including fuel state, on these scales; effective modelling of complex fire behaviour; and more complete observations of fire behaviour to validate and improve the models. In the light of recent evidence of increased wildfire activity due to climate change, more emphasis on addressing these challenges can be expected in the next few years (Dowdy 2020).

6.2.10 Extreme Heat

Extreme heat is generally associated with meteorological conditions that vary slowly in space and time. Temperature is the main variable of interest, but humidity, wind speed and radiation are also relevant for predicting the impacts of heat stress on health. Cities are particularly vulnerable to extreme heat because of the urban heat island (UHI) effect, where the absorption and re-emission of radiation by asphalt and concrete surfaces and the concentration of industry and other heat sources cause heat to be retained (Fig. 6.8). The relationship between outside and inside temperature in buildings is also important for some health impacts. Thresholds for defining heat waves are often defined relative to the local climate (e.g. Nairn and Fawcett 2014), requiring both observed and model climatologies.
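
As an example of a heat wave index defined relative to the local climate, the sketch below computes an excess-heat-factor style index in the spirit of Nairn and Fawcett (2014), combining an exceedance of the local 95th percentile with a short-term acclimatisation term. The percentile, averaging windows and example data are illustrative; the operational definition should be taken from the cited reference.

```python
# Illustrative excess-heat-factor style heat wave index, in the spirit of
# Nairn and Fawcett (2014). The percentile and averaging windows follow the
# commonly described formulation; operational definitions may differ.
import numpy as np

def excess_heat_factor(daily_mean_temp, t95):
    """daily_mean_temp: daily mean temperatures (degC), most recent last
    (at least 33 values). t95: 95th percentile of the local daily mean
    temperature climatology (degC). Returns the index for the final day;
    positive values indicate heat wave conditions."""
    t = np.asarray(daily_mean_temp, dtype=float)
    recent3 = t[-3:].mean()                 # last three days
    prev30 = t[-33:-3].mean()               # preceding 30 days
    ehi_sig = recent3 - t95                 # significance: vs local climate
    ehi_accl = recent3 - prev30             # acclimatisation: vs recent weather
    return ehi_sig * max(1.0, ehi_accl)

# Example: a mild month followed by three very hot days, local T95 = 28 degC
temps = [24.0] * 30 + [33.0, 34.0, 35.0]
print(excess_heat_factor(temps, t95=28.0))  # strongly positive -> heat wave
```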

Coupled numerical modelling systems are now capable of predicting heat wave conditions more than a week in advance and the likelihood of weather regimes associated with heat waves several weeks in advance (e.g. Marshall et al. 2014). UHI effects can be estimated from satellite remote sensing and urban sensor networks and used in post-processing to predict heat stress conditions on sub-kilometre grids. Pinpointing local heat stress requires detailed understanding of neighbourhood and street-scale conditions, including sun/shadows, green canopy, wind flow around buildings and ventilation within buildings. This level of detailed modelling is currently beyond the routine prediction capability of operational weather services. Statistical models can be used to relate local effects to the larger-scale meteorological and environmental conditions, perhaps guided by urban canopy models.

Fig. 6.8

Processes involved in modelling heat and pollution hazards. (© Crown Copyright 2021, Met Office)

The future “smart city” will exploit new sensors, communication technology and the internet of things to predict heat stress and its impact using data-driven forecasting approaches.

6.2.11 Extreme Pollution

Hazardous pollution results when industrial and transport emissions are trapped in a stable boundary layer and when wildfire smoke and dust are transported into populated areas. Poor air quality from primary pollutants may be exacerbated by secondary pollution from photochemical reactions.

Quantitative measurement and forecasting of air pollution is a relatively young science, borrowing much from weather prediction. Variations of exposure at ground level within the urban fabric are important. Air quality monitoring relies on sparse surface networks of (primarily urban) air monitoring sites supplemented by limited satellite remote sensing capabilities (EEA 2020). The recent deployment of inexpensive fine particle (PM2.5) sensors offers opportunities to improve monitoring, forecast initialisation and post-processing (Lewis et al. 2018b).

For directly emitted chemicals such as the oxides of sulphur and nitrogen, the principal uncertainty is in specifying the emissions and how they diffuse close to source. Static emissions inventories from industry and other anthropogenic sources require a large effort to update, so often become outdated as the industrial landscape and regulatory practices evolve. Simple passive or chemical transport and diffusion modelling approaches can provide useful predictions of high concentration levels for use in warnings (WMO 2020).

State-of-the-art predictions of hazardous near-surface concentrations of PM2.5, ozone and other pollutants use high-resolution limited area numerical air quality models. These can be characterised as “offline”, meaning gas and aerosol chemistry is computed in a chemical transport model using meteorological conditions as input, or “inline/online”, meaning the gases and aerosols can influence the radiation, temperature and cloud microphysical properties of the weather model (e.g. Wang et al. 2020). Inline models, while more accurate, are more complex and expensive.

All approaches rely on the accuracy of the emissions and the wind, and for limited area models, the inward transport of pollutants at the boundaries must be accurately specified. Observations of aerosol optical depth from satellites and ground-level data can enhance forecast accuracy.

6.2.12 Fog

Fog is primarily a hazard to people using transport networks. The critical visibilities for which warnings are required vary significantly between users, depending on their speed of motion and the complexity of the landscape being navigated. For a pilot of a fast military jet in mountainous terrain, safe flight may require several kilometres of clear visibility, whereas for a car driver on a straight road, 100 m is normally sufficient (Call et al. 2018). Such visibility thresholds are challenging both to observe and to forecast. Variations occur at very small scales and may be related to local water and/or pollution sources. Over the ocean, minor changes in sea surface temperature can produce similarly sharp variations (Isaac et al. 2020; Fallmann et al. 2019). Once formed, a thick fog bank may persist until there is an air mass change. However, it is the time of clearance that the user requires, and this can be as hard to predict as formation.

Current forecasting methods mostly use representative vertical temperature soundings, from observations or model predictions, and apply detailed site-specific modelling of local heat and humidity budgets in the vertical column, allowing for any change in mixing due to the wind (see, e.g. Gultepe et al. 2007). However, very-high-resolution NWP models (of 100s of metres grid length) are showing useful accuracy and will likely become the preferred approach in the next decade (Price et al. 2018).

6.2.13 Frost, Ice, Snow and Freezing Rain

Prediction of ice-covered surfaces is required for warnings of hazardous road and footway conditions, for anti-icing treatment of roads and aircraft and for warnings of ice accumulation on structures and cables. Ice can form when pre-existing water freezes, by frost deposition, from freezing rain or from compaction of snow or other frozen hydrometeors. Freezing rain (Changnon and Creech 2003) is a major hazard for road users and for trees and towers (Fig. 6.9).

Fig. 6.9

Processes involved in modelling winter hazards. (© Crown Copyright 2021, Met Office)

While regular products from NWP models can provide useful guidance, warnings need to be based on careful analysis of the thermal conditions of the lower atmosphere and the exposure of the road or other surface (Karsisto et al. 2017). Road icing is generally forecast by 1-D heat balance models that incorporate influences on the local short- and long-wave radiation, including local shading from daytime sun and inhibition of nocturnal cooling by local heat sources such as walls and trees, together with a detailed representation of the thermal properties of the road. Large-scale conditions are taken from an NWP forecast. Accumulation of ice on structures and cables requires models of water availability and the heat budget (Fikke et al. 2006).
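
A heavily simplified sketch of the 1-D heat balance idea is shown below: the road surface temperature is stepped forward under net radiation, turbulent exchange with the air and conduction into the road, and an ice risk is flagged when a wet surface falls below freezing. All coefficients and forcing values are invented round numbers; operational road weather models treat each term in far more detail.

```python
# Highly simplified 1-D road surface heat balance (illustrative only).
# All coefficients are invented round numbers; operational road weather models
# treat radiation, turbulent exchange and conduction in far more detail.
def road_surface_temp(t_surface, t_air, t_deep, net_radiation_wm2,
                      wet=True, dt_s=300.0):
    """Advance the road surface temperature (degC) by one time step."""
    h_air = 15.0            # assumed turbulent exchange coefficient (W/m2/K)
    k_road = 4.0            # assumed conduction coefficient to deeper layers (W/m2/K)
    heat_capacity = 2.0e5   # assumed areal heat capacity of surface layer (J/m2/K)

    flux = (net_radiation_wm2
            + h_air * (t_air - t_surface)      # exchange with the air
            + k_road * (t_deep - t_surface))   # conduction from below
    t_new = t_surface + flux * dt_s / heat_capacity
    ice_risk = wet and t_new <= 0.0
    return t_new, ice_risk

# A clear, calm night: strong long-wave cooling, air at 1 degC, sub-surface at 4 degC
t, risk = 1.5, False
for _ in range(36):                 # three hours in 5-minute steps
    t, risk = road_surface_temp(t, t_air=1.0, t_deep=4.0, net_radiation_wm2=-70.0)
print(f"surface temperature after 3 h: {t:.1f} degC, ice risk: {risk}")
```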

NWP can also provide guidance on frozen precipitation, but accuracy remains low, due to limitations in cloud physics and inadequate resolution, so forecasters commonly rely on interpretation of the low-level vertical temperature structure, both observed and predicted. Even small amounts of snow or ice on untreated roads can be hazardous, so a judgement is needed as to the amount of snow that will settle on the road, bearing in mind the effects of wind, shelter by trees, buildings and hedges, local convective enhancements and variability due to orography.

6.2.14 Multi-hazards

A weather system may give rise to several hazards, each with distinct warning requirements, potentially leading to very complex warnings. To enable the most useful advice to be provided, hazard forecasters need precise information about the different hazards and their interactions. Table 6.2 gives some examples of multi-hazards in weather systems, which we illustrate by looking in more detail at the most extreme of these: the tropical cyclone.

Table 6.2 Multiple hazards associated with some weather systems

Tropical cyclones are one of the most destructive meteorological phenomena and are associated with several different hazards that can cause significant impacts on life and property (WMO 2017). In addition to destructive winds, the combination of wind-driven waves and low pressure can produce a coastal storm surge, destroying coastal defences and causing coastal flooding. Extremely heavy rainfall associated with tropical cyclones can lead to landslides and serious pluvial and fluvial flooding. Tornadoes and lightning are also commonly associated with tropical cyclones. Different hazards may occur together in the same or neighbouring regions, leading to difficulties for emergency preparedness, response and recovery programmes. Impacts from the strongest winds are greatest close to landfall location in the eyewall of the storm, but the highest storm surge is displaced to the side experiencing onshore winds, while precipitation and fluvial flooding can extend far inland from the landfall location. Warnings based only on track and peak intensity do not provide a complete picture of multiple and cascading hazards. For example, when planning evacuations from areas of extreme winds, it is important to ensure that evacuation centres are not in a flood area. The impact of a tropical cyclone also depends on its translation speed, which affects the duration of strong winds and the accumulated rainfall, and the land characteristics, vulnerability and exposure of the area being impacted. Accurate predictions of all hazards, and their associated uncertainty, across the entire multi-hazard event can provide the basis for more effective communication of the multiple risks to life and property.

6.2.15 Evaluation

We started this section with the challenge of obtaining credible hazard observations. When evaluating the performance of hazard forecasting systems, two different approaches are used to overcome the lack of observations. For a process model, it may be assumed that accurate prediction of normal conditions is a necessary, if not sufficient, condition for accurate prediction of extremes. For some hazards, such as ocean waves and storm surges, river levels or visibility, sufficient observations are available for statistical verification in non-hazardous conditions. Where the extreme distribution is sufficiently well defined by the observations, it may also be possible to establish whether the asymptotic behaviour of the predictions is consistent with that observed. Where a hazardous threshold is exceeded sufficiently often for statistical significance, the ability to forecast exceedance or non-exceedance of the threshold can be tested using a variety of binary scores (Jolliffe and Stephenson 2012). The second approach uses case studies to evaluate performance. Given several of these, it may be possible to infer conditions under which predictions are more, or less, realistic.
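
A minimal sketch of threshold-exceedance verification is shown below: forecasts and observations are reduced to a 2 × 2 contingency table, from which standard binary scores such as the probability of detection, false alarm ratio and critical success index are computed. The forecast and observation records are invented; Jolliffe and Stephenson (2012) describe the full range of scores and their significance testing.

```python
# Binary (threshold exceedance) verification from a 2x2 contingency table.
def binary_scores(forecast_exceeds, observed_exceeds):
    """forecast_exceeds, observed_exceeds: sequences of booleans, one per event."""
    hits = misses = false_alarms = correct_negatives = 0
    for f, o in zip(forecast_exceeds, observed_exceeds):
        if f and o:
            hits += 1
        elif not f and o:
            misses += 1
        elif f and not o:
            false_alarms += 1
        else:
            correct_negatives += 1
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false alarm ratio
    csi = hits / (hits + misses + false_alarms)     # critical success index
    return {"POD": pod, "FAR": far, "CSI": csi}

# Illustrative record of warned/observed threshold exceedances
fcst = [True, True, False, True, False, False, True, False]
obs  = [True, False, False, True, True, False, True, False]
print(binary_scores(fcst, obs))   # {'POD': 0.75, 'FAR': 0.25, 'CSI': 0.6}
```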

6.3 Capabilities of the Weather Forecast

The weather forecasting process is described in detail in Chap. 7. Here, we briefly introduce the three main prediction approaches of Numerical Weather Prediction (NWP), rapid update nowcasting and statistical forecasting, together with the role of the forecaster in relating these sources of guidance to the hazard of concern. We then summarise the capabilities of weather forecasts in general and for each of the main hazard-related weather variables. We conclude with a brief section on evaluation.

6.3.1 NWP Models/Ensembles

The primary source of quantitative meteorological input for hazard prediction comes from Numerical Weather Prediction (NWP) models, run routinely by weather services, which form the basis for public weather forecasts. The World Meteorological Organization’s Global Data-processing and Forecasting System (WMO 2019) supports an information cascade in which a small number of countries share information from global ensemble NWP systems and several countries in each region share information from regional NWP systems, providing every weather service with access to forecast guidance.

As will be described further in Chap. 7, NWP models encapsulate a complex set of processes in the atmosphere (Coiffier 2011) and in the interaction of the atmosphere with the land and ocean surfaces. The mathematical description of each process is an approximation to reality containing uncertain parameters. When integrated in a forecast model, these parameters must be mutually adjusted over extensive trial periods to reproduce the observed weather accurately.

Weather prediction is an initial value problem, which means that the resulting forecast depends on the initial state. Since that initial state is uncertain, so is the forecast, and as the forecast evolves, uncertainty eventually swamps the result as the limit of predictability is reached. Before that state is reached, the quality of the forecast depends critically on the initial state. Thus, the availability of observations and their incorporation into the model through data assimilation (see Chap. 7) are of critical importance. Many hazards are associated with parts of the atmosphere where energy is being released rapidly, and it is in these areas that the growth of uncertainty is also greatest. An important part of the NWP forecast is an assessment of this uncertainty, obtained using an ensemble of predictions from slightly different initial conditions.

Physical constraints also limit the accuracy of high-impact weather forecasts. Computer power has increased dramatically, but the accuracy of forecasts, especially of hazardous weather, remains limited by resolution, and every doubling of spatial resolution requires more than a tenfold increase in computer power (doubling resolution in both horizontal directions quadruples the number of grid columns and roughly halves the time step, with further cost from added vertical levels) and a fourfold increase in communication bandwidth. The best current NWP models have global grid spacings of ~10 km and regional grid spacings of ~1 km. Ensemble prediction systems tend to have slightly coarser resolution. Since models with a coarser grid spacing than about 4 km are unable to represent deep convection, except in a statistical sense, they are generally less skilful at predicting weather systems in which convection is a major energy source – including those in the tropics and summer mid-latitudes. Even models with a 1 km grid spacing are unable to represent the detailed near-surface processes that occur in fog and wildfire evolution and that determine the variation in heat and pollution between urban buildings. Speed of forecast production also impacts on the time between observation of the initial state and availability of the forecast. For global models, gathering observations from around the world takes time, while longer forecasts increase the computer time required, so it can be as much as 6 hours before a forecast is disseminated.

Communication restrictions mean that only a limited range of variables from an ensemble NWP forecast can be shared, often at much reduced time and space resolution and with limited probabilistic information. It is therefore important that the information required to meet users’ needs is selected carefully. In many cases, the variables of interest are not those that govern the atmospheric evolution, so they must be derived from the model output in a post-processing step (Vannitsem et al. 2021). State-of-the-art systems enable users to define the processing required remotely and to receive just the results required. This is increasingly being realised through hosting of the full database in “the cloud” (i.e. in shared storage facilities connected through the internet).

A state-of-the-art warning chain will carefully consider the trade-offs between timeliness and accuracy. Resolution must be good enough to represent the weather feature that is the source of the hazard, but given a significant degree of uncertainty, probabilistic information is crucial.

Challenges for the future are to increase the availability of information to hazard forecasters, including more relevant variables, at higher resolution, with a better description of uncertainty, but targeted to the areas and thresholds of concern.

6.3.2 Nowcasting Tools

We have seen that using ensemble NWP as the basis for hazard prediction takes time. For very rapidly developing hazards, a faster response to new information from local observations is required. The collection of resulting techniques is referred to as nowcasting (Browning 1982) and may be used for lead times of a few minutes up to a few hours, depending on the hazard.

To achieve maximum speed of response, nowcasting systems typically use a limited source of observations and a simplified predictive model, focused on a single variable, such as rainfall, strong wind or large hail. The first systems were based on radar reflectivity observations and used linear extrapolation of the position of areas exceeding a critical threshold (Wilson et al. 1998). Most current systems continue to be radar and/or satellite based, since these observing systems give detailed spatial coverage in a single data stream. Prediction also continues to be based on extrapolation, but with an increasingly sophisticated choice of variables and methods.
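
The simplest extrapolation nowcast described above can be sketched as follows: the recent motion of a radar-detected storm cell is estimated from two successive centroid fixes and projected forward linearly. The positions and times are invented; operational systems track many objects (or whole fields) and estimate motion far more robustly.

```python
# Minimal linear-extrapolation nowcast of a storm cell centroid (illustrative).
def extrapolate_cell(pos_earlier, pos_latest, dt_min, lead_times_min):
    """pos_*: (x, y) centroid positions in km from two successive radar scans
    separated by dt_min minutes. Returns extrapolated positions at the
    requested lead times (minutes)."""
    vx = (pos_latest[0] - pos_earlier[0]) / dt_min   # km per minute
    vy = (pos_latest[1] - pos_earlier[1]) / dt_min
    return [(pos_latest[0] + vx * t, pos_latest[1] + vy * t)
            for t in lead_times_min]

# Cell moved from (10, 5) km to (14, 8) km in 10 minutes of scans
for t, (x, y) in zip([15, 30, 60],
                     extrapolate_cell((10.0, 5.0), (14.0, 8.0), 10.0, [15, 30, 60])):
    print(f"+{t:>2d} min: x = {x:.1f} km, y = {y:.1f} km")
```

Running many such extrapolations with perturbed motion estimates gives a simple ensemble indication of positional uncertainty, as noted below.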

Nowcasts depend critically on the initial state. Processing of the observational input to remove errors is particularly important. Ensembles of nowcasts are also used, particularly where it is useful to identify the sensitivity of the output to different estimates of trend, whether in position or intensity. Since nowcasting tools generally produce a forecast of a single variable, it is important to avoid inconsistencies, either between different nowcasting tools or between the nowcast and NWP guidance. For example, a cloud nowcast based on satellite imagery, and a rain nowcast based on radar, may easily produce an intense rain forecast in the same location as clear skies. Such differences must be avoided if trust is to be built and maintained; effective methodologies to blend nowcasts and NWP are currently the subject of much research (e.g. Atencia et al. 2020).

6.3.3 Statistical Models and Machine Learning Algorithms

Statistical models contribute to weather forecasting, both to correct biases in NWP outputs (Vannitsem et al. 2021) and for processes that are too complex or time-consuming to incorporate in the NWP model, including very-short-range forecasting of the boundary layer and severe convective storms. Statistical methods are designed to be unbiased and may be tuned for individual locations, so are ideal for translating NWP outputs to site-specific forecasts. However, if spatio-temporal correlations are important, relationship-preserving methods such as analogue ensembles (using observations from historic cases similar to the current situation, e.g. Clark et al. 2004) may be needed.
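
A minimal sketch of the analogue-ensemble idea is given below: the current NWP forecast is compared with an archive of past forecasts, and the observations that followed the closest matches form the ensemble. The predictors, archive and distance measure are invented; operational implementations weight and normalise the predictors and restrict the search by season and location.

```python
# Minimal analogue-ensemble sketch: find past forecasts most similar to the
# current one and return the observations that followed them (illustrative).
import numpy as np

def analogue_ensemble(current_forecast, past_forecasts, past_observations, k=3):
    """current_forecast: vector of forecast predictors.
    past_forecasts: (n_cases, n_predictors) archive of old forecasts.
    past_observations: the n_cases observed outcomes that followed them."""
    past_forecasts = np.asarray(past_forecasts, dtype=float)
    dist = np.linalg.norm(past_forecasts - np.asarray(current_forecast), axis=1)
    nearest = np.argsort(dist)[:k]
    return np.asarray(past_observations)[nearest]

# Predictors: [forecast 10-m wind speed (m/s), forecast temperature (degC)]
archive_fcst = [[4.0, 12.0], [12.0, 8.0], [11.0, 7.0], [5.0, 15.0], [13.0, 9.0]]
archive_obs  = [3.5, 14.0, 12.5, 4.0, 15.5]      # observed gusts (m/s)
print(analogue_ensemble([12.5, 8.5], archive_fcst, archive_obs, k=3))
```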

As noted earlier, statistical models are limited by the availability of training and testing data that span the full range of required outputs. Since hazards are often associated with extremes, particular care is needed to ensure that the model gives sensible results in these conditions. Non-stationarity in the data (e.g. due to climate change) is also a challenge, requiring either frequent recalibration or the provision of an auxiliary model. These difficulties may be overcome if data can be generated with a sufficiently realistic simulation model.

Many different statistical approaches are available, ranging from decision trees involving multiple human inputs to purely data-driven approaches, such as multi-layer neural networks. However, machine learning techniques are rapidly gaining use, facilitated by open-source code libraries (e.g. Lagerquist et al. 2020). Performance assessments show these approaches can be competitive with human judgement and physical modelling.

6.3.4 The Professional Weather Forecaster

A forecaster combines outputs from a range of tools with experience and professional knowledge to reach a judgement on the future occurrence of weather, especially hazardous weather (Pagano et al. 2016). Forecast outputs are constrained by the processing capacity of the forecaster, which may restrict them to focusing on areas of expected hazard or particular vulnerability. Where the response depends on fine judgements of cost and benefit, the ability to estimate the distribution of hazard probability reliably is key and is only achieved by the very best forecasters.

Private sector weather forecasters provide a paid-for service, which may be part of a general media information service, funded by advertising, or a consultancy service to a specific industry that has a weather-related vulnerability. In the media, the forecaster is primarily a communicator, using their expert knowledge and judgement to interpret the general forecast from the model and/or weather service professional, to create actionable messages for their audience. The challenges for this sector are in understanding the strengths and weaknesses of their inputs and in relating the information received to the concerns of their audience. This role was considered in more detail in Chap. 4.

The consultant forecaster is generally focused on specific clients at specific locations, with particular needs and vulnerabilities. Translating the general information coming from models and forecasters into the advice needed by their clients requires them to select relevant data and to apply hazard-specific prediction techniques. They may themselves predict the hazard and its impact, or they may produce bespoke weather information for others to use. In achieving this, they will use nowcast and machine learning tools, tuned specifically to the needs of their customers. Key challenges for this sector are the trade-off between the accuracy and cost of the NWP data they source and maintaining the reliability and accuracy of their tools. Tools often originate from academia, but their maintenance requires either the regular purchase of upgrades or significant effort from the consultant. Users of multiple consultants may see consistency problems due to use of different tools, different base data sources or different judgements.

6.3.5 Evaluating Weather Forecasts

Using meteorological forecasts effectively for hazard forecasting requires thorough understanding of their performance. For medium-range forecasts, standard scores for probabilistic and deterministic forecasts provide a useful assessment of the prediction of large weather systems. Since the variables concerned change smoothly on these scales, the statistics are well behaved. As the event gets closer and the details become better resolved and predicted, the forecast timing and location of synoptic-scale storms, cold fronts, tropical cyclones and other features can be evaluated using object-based verification approaches. Standard observations of surface and upper air variables can also be used to verify forecasts of environments conducive to hazardous weather. In addition to surface-level variables such as rainfall and temperature, vertical elements such as stability, wind shear and boundary layer depth should be evaluated when they are direct inputs to hazard predictions (e.g. air pollution, fog, freezing rain).

Direct evaluation of weather forecasts at hazard scale is more difficult as the standard observation network is rarely dense enough to capture the important details and there may be few observations of extremes. Remotely sensed observations from radar and satellite are spatially complete but only approximate the variables of interest such as rainfall and wind. At high resolution, traditional verification scores are hampered by the “double penalty” where small timing or location errors in a forecast feature cause the event to be predicted where it didn’t occur and missed where it did occur. Spatial verification approaches accommodate this situation (Mittermaier and Roberts 2010; Raynaud et al. 2019), but for some hazards, the spatial context is important (e.g. a river basin, a coastal city), so correctly predicting the location is crucial.
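
A widely used way of accommodating the double penalty is to compare fractions of threshold exceedance within spatial neighbourhoods, as in the fractions skill score. The sketch below (Python, synthetic fields, illustrative threshold and window sizes) shows how a displaced rain area that scores poorly grid point by grid point is credited once the neighbourhood is wide enough to span the location error.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def fractions_skill_score(forecast, observed, threshold, window):
        """FSS of binary exceedance fields smoothed over a square window of grid points."""
        f = (forecast >= threshold).astype(float)
        o = (observed >= threshold).astype(float)
        # Fraction of exceeding grid points within each neighbourhood (zero padding outside).
        pf = uniform_filter(f, size=window, mode="constant")
        po = uniform_filter(o, size=window, mode="constant")
        mse = np.mean((pf - po) ** 2)
        mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
        return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

    # A displaced rain feature scores poorly point by point but well once the
    # neighbourhood is wide enough to span the location error.
    obs = np.zeros((100, 100)); obs[40:50, 40:50] = 10.0
    fcst = np.zeros((100, 100)); fcst[45:55, 45:55] = 10.0
    print(fractions_skill_score(fcst, obs, threshold=1.0, window=1))     # harsh "double penalty"
    print(fractions_skill_score(fcst, obs, threshold=1.0, window=25))    # credit for a near miss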

6.3.5.1 Resolution

Many hazards occur on small spatial and short temporal scales, e.g. a flash flood or a wind squall. It is a formidable challenge for weather forecast models to resolve these. The latest local-area NWP models use grid spacings of around 1 km, while research models use grid spacings down to 100 m (e.g. Lean et al. 2019). Such high-resolution models greatly improve forecast precision, but with considerable uncertainty in the small-scale detail.

Evaluation of operational forecasts indicates that reducing the grid length improves model performance at large scales as well as making it possible to resolve small scales. Improved accuracy in the 50–200 km scale range enables forecasters to better interpret observations and finer-scale predictions.

High-resolution models improve forecast accuracy most prominently for weather phenomena that are influenced by the improved representation of the atmosphere’s lower boundary – orography, the urban fabric, variability in land use, etc. For small-scale weather phenomena that are sensitive to the larger-scale flow, the benefits of high resolution may be masked by uncertainties at larger scales.

6.3.5.2 Precision and Accuracy

For use in hazard prediction, weather forecasts may need to be both precise and accurate. Precision refers to the “fineness” of the forecast in space, time and other attributes dictated by the hazard. For instance, if a heat stress threshold is crossed at 38.4 °C, the hazard forecaster wants to know where, when and whether that temperature will be reached, not just that it will be “extremely hot”. Similarly, distinguishing between intense rain during and after an outdoor festival could be very important.

Accuracy refers to how well a forecast matches the observation. When measuring accuracy, the spatial and temporal scales of forecast and observations must be matched by upscaling or downscaling. The choice of whether to verify at the finer or coarser scale depends on the precision required by the downstream hazard model. As well as verifying at specific locations, some verification methods can measure errors in the location and timing of meteorological features such as storm systems and fronts (e.g. Dorninger et al. 2018).

Forecasts have systematic error (bias) and random error components. Biased forecasts are particularly damaging when input to hazard models that were developed using observations, so it is advisable to remove biases if possible. Random errors can be reduced through aggregation or averaging, e.g. spatially by catchment or fetch averaging or temporally by accumulation or dose averaging, at the loss of some forecast precision. When observations have significant uncertainty associated with them due to instrument error or representativeness (e.g. rain gauge measurements of convective precipitation), aggregation and averaging of observations may also be needed.
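
The sketch below illustrates, with synthetic data and an assumed additive bias, the two steps described above: estimating and removing a systematic error over a training period, and then reducing random error by averaging over a catchment of gauges.

    import numpy as np

    rng = np.random.default_rng(2)
    truth = rng.gamma(shape=2.0, scale=3.0, size=(365, 50))                   # daily rain at 50 gauges
    forecast = truth + 1.5 + rng.normal(scale=2.0, size=truth.shape)          # biased and noisy forecast

    # Estimate the bias on a training period and subtract it from later forecasts.
    bias = (forecast[:180] - truth[:180]).mean()
    corrected = forecast[180:] - bias

    point_rmse = np.sqrt(np.mean((corrected - truth[180:]) ** 2))
    catchment_rmse = np.sqrt(np.mean(
        (corrected.mean(axis=1) - truth[180:].mean(axis=1)) ** 2))
    print(f"bias estimate: {bias:.2f} mm")
    print(f"point RMSE: {point_rmse:.2f} mm, catchment-average RMSE: {catchment_rmse:.2f} mm")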

6.3.5.3 Reliability

When a risk assessment is being made, the likelihood is as important as the intensity. Ensemble forecasting systems are used to provide probability forecasts, e.g. of rainfall accumulation exceeding a threshold in a particular location and over a certain time period. The reliability of probability forecasts from ensemble prediction systems has improved enormously over the last 25 years (Bauer et al. 2015), although post-processing and calibration of probabilities are still needed (Williams et al. 2014). A probability forecast must be verified as part of a collection of forecasts, not alone. Probability verification measures, such as the Brier score (Jolliffe and Stephenson 2012), assess the following qualities: (i) reliability, agreement between forecast probability and the observed frequency; (ii) sharpness, tendency to forecast probabilities near 0% or 100%, as opposed to values clustered around the mean; and (iii) resolution, ability of the forecast to resolve events into subsets with characteristically different outcomes.
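
For concreteness, the following sketch computes the Brier score together with its standard reliability-resolution-uncertainty decomposition for a set of synthetic probability forecasts; the division into ten probability bins is an illustrative choice, and the decomposition is only approximate when forecasts vary within a bin.

    import numpy as np

    def brier_decomposition(p, obs, n_bins=10):
        """p: forecast probabilities in [0, 1]; obs: 0/1 event occurrence."""
        p, obs = np.asarray(p, float), np.asarray(obs, float)
        brier = np.mean((p - obs) ** 2)
        base_rate = obs.mean()

        bins = np.clip((p * n_bins).astype(int), 0, n_bins - 1)
        reliability = resolution = 0.0
        for b in range(n_bins):
            in_bin = bins == b
            if in_bin.any():
                w = in_bin.mean()                          # fraction of forecasts in this bin
                p_bar, o_bar = p[in_bin].mean(), obs[in_bin].mean()
                reliability += w * (p_bar - o_bar) ** 2    # forecast vs observed frequency
                resolution += w * (o_bar - base_rate) ** 2 # separation of outcomes between bins
        uncertainty = base_rate * (1 - base_rate)
        return brier, reliability, resolution, uncertainty

    rng = np.random.default_rng(3)
    p = rng.uniform(size=10000)
    obs = rng.uniform(size=10000) < p                      # perfectly reliable synthetic forecasts
    bs, rel, res, unc = brier_decomposition(p, obs)
    print(f"Brier {bs:.3f} ~ reliability {rel:.3f} - resolution {res:.3f} + uncertainty {unc:.3f}")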

6.3.6 Predictability of Hazard-Relevant Variables

Due to scale interactions and the chaotic nature of the atmosphere, there are intrinsic limits to predictability that even an optimal (yet physically reasonable) forecast system could not overcome. These limits vary substantially between hazard-relevant variables and are a function of the weather system that is associated with the hazard. The intrinsic limit of predictability is a hypothetical concept because an “optimal” forecast system does not exist. Yet the concept is important because it underpins the use of probabilistic forecast frameworks while also guiding improvements of state-of-the-art forecast systems. Conceptually, predictability is most severely limited in the presence of potential “bifurcations” (e.g. Keller et al. 2019) such as are seen in the tracks of tropical cyclones. Bifurcations may occur in a more general sense when atmospheric conditions are close to specific thresholds, e.g. for freezing rain a temperature near 0 °C at the ground, for convective initiation a forcing that is close to the convective inhibition. Below, we provide an overview of practical limits of predictability, i.e. predictability limits as observed in current state-of-the-art systems, for several hazard-relevant variables and for typical weather situations.

Rain

Extreme rain is a function of duration and the area of interest. For large areas, extremes over long periods may dominate. Global ensemble NWP has considerable capability in predicting the persistent regimes that produce such long-period extremes, but usage is hampered by biases in the modelled rainfall and a lack of adequate datasets for recalibration. At shorter durations, we may identify three main types of rainfall extremes: interaction of atmospheric rivers or conveyor belts with orography, typically over multi-day periods (Shearer et al. 2020); organised rainbands, often with embedded convection, typically over periods up to a day; and intense convective rain over periods of an hour or so. Atmospheric rivers are predictable a few days ahead, but with limited spatial accuracy until lead times of a few hours. Organised rainbands are predictable a day or so ahead, but details of intensity and duration are uncertain until shorter lead times. Intense convection is typically only predictable in a regime sense for a few hours, and individual storms are currently unpredictable except by nowcasting methods at less than an hour’s lead time (Wang et al. 2019). Future developments in the assimilation of storm-related data in kilometre-scale models should lead to improvements.

Wind

While the mean wind is well predicted by current forecasting methods up to several days ahead, extreme local winds remain an unresolved challenge. Tropical cyclone winds are beginning to be captured skilfully by the latest generation of kilometre-scale models, at least for forecasts up to a day ahead. Prediction of tornadoes and other wind extremes related to severe storms is largely possible only through statistical inference using predicted indices of atmospheric structure, though diagnosis of predicted cloud structure and rotation in kilometre-scale forecasts is taking us closer to direct prediction (Wang and Wang 2020) at timescales of up to a few hours. Orographic wind prediction has some skill up to a day ahead in models that adequately resolve the orography, provided the vertical resolution is able to capture the vertical structure of the atmosphere. However, extreme winds in the vicinity of steep gradients and buildings are not currently predictable, except in a statistical sense, because of their scale. Improvements in resolution can be expected to provide significant progress in short-range prediction.

Winter precipitation

The snow/rain boundary is diffuse and difficult to define, yet the impact of crossing it at the surface is profound. The same is true of other varieties of freezing and frozen precipitation. The extents of liquid and frozen precipitation can often be predicted a day ahead, but accurate positioning and timing of the boundary, especially in slow-moving weather systems, may not be achieved until a few hours ahead. Prediction needs to be site-specific, because of the sensitive dependence on height, so requires post-processing of model gridded outputs.

Temperature

The general structure of temperature in the atmosphere is highly predictable up to several days ahead. However, models struggle with the detail of boundary layer structure, especially during the transitions from day to night and vice versa (Lapworth 2006; Papadopoulos and Helmis 1999). In low-turbulence situations, such as under nocturnal inversions, details of the land surface may be significant. The unpredictability of these flows may be such that deterministic prediction is not possible, even at very short lead times, with implications for fog and frost warnings. In urban areas, the crude representation of urban structures limits predictability.

Atmospheric boundary layer

The principal meteorological variables relevant for boundary layer hazards such as air pollution and fog are mean wind (for transport) and wind variance or turbulence (for diffusion). Since turbulence is not a primary variable, pollution models often infer it indirectly from gross boundary layer characteristics, such as boundary layer depth and mean temperature gradient. These are problematic for stable boundary layers, when elevated pollution levels are a particular problem. NWP models have some capability for prediction of widespread, persistent fog, though with significant uncertainty in density and timing, even at lead times of only a few hours. Patchy diurnal fog is currently unpredictable except in a very general sense, due to limitations in the representation of humidity, of the turbulent structure of stable boundary layers and of significant features of the land surface (McCabe et al. 2016; Fallmann et al. 2019; Ducongé et al. 2020).

6.4 The Bridge Between Weather and Hazard

In this section, we explore the barriers that impede communication between hazard forecasters and weather forecasters: differences in disciplinary languages, processes, timescales and cultures, in organisational hierarchies, in mindsets and in technical capabilities.

6.4.1 Institutional Barriers

It is increasingly recognised that partnerships between expert bodies, for example, national meteorological services and flood or other hazard agencies, are necessary for effective hazard prediction (e.g. Demeritt et al. 2013). For such partnerships to grow and flourish, the barriers that separate institutions must be overcome. Some of these barriers arise from political and economic decisions of government that, for instance, promote competition amongst public bodies for funding or power. Barriers may also arise from entrenched institutional procedures, which may be embedded in legislation, especially in institutions having a long history (Pagano et al. 2001). Such procedures may be tuned to the needs of particular customers, with their own history and governance structure, especially when these customers are the dominant funding source (e.g. civil aviation or the military). These barriers need to be recognised and strategies developed for overcoming them before proceeding with partnership building.

In building an institutional partnership, each party brings their scientific and technical expertise which, when integrated, can be enormously powerful. Successful communication between partners needs to start by translating the goal of the partnership into each partner’s language and then identifying a mutually beneficial objective. Although partners share the common goal of enhancing community safety, their differing mandates and areas of responsibility can lead to different priorities. National meteorological services typically operate at national scale, while hazard authorities often operate at state, region or city scale. If operational practices and requirements for meteorological information differ amongst hazard authorities in different regions, then the complexities of serving multiple users with similar but not identical data can slow down effective integration of weather and hazard prediction. Standardisation of service levels and practices can lead to improved consistency and facilitate broader and stronger partnerships.

When developing forecasting systems, meteorological agencies can choose to develop their own or to import systems developed elsewhere. While NWP models had only crude representations of the land and ocean surfaces, it was normal for meteorological scientists to incorporate the available knowledge themselves. With increasing complexity, the choice now is to import expertise (e.g. a team of hydrologists), to import a model (e.g. an ocean wave model) or to develop a partnership with a centre of expertise. For example, ECMWF developed its global flood forecasting system, GLOFAS (Alfieri et al. 2013), using an imported model, which it has further developed by employing hydrologists in-house.

Embedding meteorologists within operational agencies that are responsible for hazard prediction or, conversely, embedding hazard forecasters within meteorological agencies is becoming more common practice (Uccellini and Ten Hoeve 2019; see also Wildfire Case Study, below). This complements the integration of hazard models. Experts working in partnership can interpret and integrate important details of a hazardous situation and form a consensus view on the evolution (or possible trajectories) of the hazard. This more united view usually leads to better decisions as discussed in Chap. 4 and builds valuable trust between the partners.

6.4.2 Shared Situational Awareness

In real-time operational response, one person, however skilled, cannot provide expert interpretation for every hazard. However, as soon as responsibilities are split up, there is the possibility of inconsistency, even when the same model guidance is shared. Mechanisms for ensuring consistency of message need to be built into the operational structure of the partnership. This can involve sharing of observational data and guidance statements or of a multi-hazard dashboard (e.g. NOAA 2021a). It should also include frequent conferences between forecasters to enable different interpretations of the forecast guidance to be discussed and a common version agreed. With modern technology, the time between such conferences can be bridged using informal messaging tools, such as “chat rooms” (e.g. NOAA 2021b). All of this is greatly facilitated when all partners use a common set of tools and view the same data. This will increasingly be achieved by placing data in the “cloud”.

6.4.3 Connecting Disciplinary Cultures

6.4.3.1 Hydrology and Meteorology

Accurate weather forecasts are essential to most flood forecasting systems. While air temperature forecasts are needed for some applications (such as determining evaporation or snowmelt), the primary variable of interest is precipitation.

Precipitation is notoriously difficult to forecast, with NWP having relatively coarse resolution, substantial biases and limited skill (Cuo et al. 2011). However, recent advances in model resolution and the use of ensembles have made the outputs more relevant to flood forecasting applications. Historically, most rainfall-runoff models have been based on parametrised conceptualisations developed in the 1970s (Pagano et al. 2014). Much effort continues to be directed to improving the calibration of such models. Furthermore, much of the NWP improvement stems from better assimilation of remotely sensed observations, whereas research in hydrologic data assimilation (e.g. Chen et al. 2013) is little used.

These points reflect a cultural difference in the use of models by meteorologists and hydrologists (Pagano et al. 2016). Generally, NWP systems are run on supercomputers. After automated post-processing, the results are reviewed by the meteorologist, who may accept, adjust or replace them. Depending on the context, river forecasting may be more interactive, with lightweight models run repeatedly until the hydrologist is satisfied. Although this builds confidence, it prevents the use of more objective techniques, such as data assimilation and statistical post-processing. In response, there is an increasing operational trend towards running river forecasting systems side by side, one complex and objective, the other simple and adjustable.

Meteorologists are increasingly aiming to generate precipitation forecast products in probabilistic form (e.g. 25% chance of exceeding 15 mm). Although probabilities better represent the uncertainty in the forecast, they lack the spatial covariances and correlations of observations. Given that the relationship between rainfall and runoff is highly non-linear, such spatial information is essential to accurate runoff forecasting. In response to this, hydrologists have developed methods to convert probabilistic rainfall forecasts (including forecasts from different models at different lead times) into seamless, physically realistic ensembles, primarily through sampling patterns in historical observations (Clark et al. 2004; Bennett et al. 2017). Some of these approaches require objective hindcasts of NWP models, consistent with the current operational versions, which are expensive to generate.
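
The sketch below illustrates one such reordering method in the spirit of Clark et al. (2004), often called the Schaake shuffle: ensemble values at each site are re-ranked to follow the rank structure of observations from a set of historical dates, so that the marginal forecast distributions are preserved while the inter-site correlations come from the observations. Array shapes and data are synthetic and purely illustrative.

    import numpy as np

    def schaake_shuffle(ensemble, historical):
        """ensemble, historical: arrays of shape (n_members, n_sites).
        Returns the ensemble values reordered, site by site, so that their
        ranks match the ranks of the historical observations."""
        ensemble, historical = np.asarray(ensemble, float), np.asarray(historical, float)
        shuffled = np.empty_like(ensemble)
        for j in range(ensemble.shape[1]):                       # loop over sites
            sorted_fcst = np.sort(ensemble[:, j])                # forecast values, ascending
            ranks = np.argsort(np.argsort(historical[:, j]))     # rank of each historic date
            shuffled[:, j] = sorted_fcst[ranks]                  # member i gets the value with the
                                                                 # same rank as its historic date
        return shuffled

    rng = np.random.default_rng(4)
    raw_ensemble = rng.gamma(2.0, 5.0, size=(20, 3))             # 20 members, 3 catchment sites
    cov = np.array([[1.0, 0.8, 0.6],
                    [0.8, 1.0, 0.7],
                    [0.6, 0.7, 1.0]])
    historic_obs = rng.multivariate_normal(np.zeros(3), cov, size=20)   # spatially correlated obs
    coherent = schaake_shuffle(raw_ensemble, historic_obs)
    # Marginal values at each site are unchanged, but the inter-site rank
    # correlation now mirrors that of the historical observations.
    print(np.corrcoef(coherent.T).round(2))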

6.4.3.2 Oceanography and Meteorology

On the face of it, oceanographers and meteorologists should communicate easily, since both are physics-based sciences of geophysical fluids on the earth’s surface. The history of the two sciences has, however, resulted in quite different approaches to some aspects of their science. Oceanography is predominantly focused on research rather than operations and on ship-based experimental research rather than modelling, whereas meteorology has been focused on operational prediction since the nineteenth century. Indeed, the history of meteorology is dominated by the synoptic map – an analysis of conditions simultaneously sampled at multiple locations over a large area. Oceanographers rarely study the global ocean as a single entity, focusing on individual ocean basins, whereas meteorologists naturally take a global view. On the other hand, the object of a weather forecast is often a single point, whereas points in the ocean are rarely of interest, except on continental shelves where offshore production facilities have been constructed. The coastline is of tremendous importance to an oceanographer, as processes are very compressed in the inshore zone, and the water edge is a model boundary. While coasts are also important to meteorologists, they tend not to be considered in any greater detail than elsewhere over land. Until recently, the oceanographer has always had to work with minimal observational data, whereas the meteorologist is much better supplied – even over the oceans. Mathematically, the large-scale behaviour of the ocean is strongly constrained by boundaries – laterally at the coasts and vertically at the seabed – while its motion is driven by momentum transfer from the wind. On the other hand, small-scale motions are internally driven and much smaller than typical weather disturbances. In the atmosphere, internal dynamics drive much of the large-scale motion, while local forcing may be more important at small scales. Buoyancy is important in both but is driven by temperature and humidity in the atmosphere, as opposed to temperature and salinity in the ocean.

There is a long history of interaction in marine weather forecasts. However, the resulting interdisciplinary science has tended to be isolated from core ocean science. Genuine partnership has grown more recently with the development of coupled global models for climate studies. Such partnerships tended not to focus on coastal hazards. However, the coupled modelling approach is gaining increased use in weather forecasting (Pullen et al. 2017), and so the number of meteorologists and oceanographers working across this interface has grown.

6.4.3.3 Meteorology and Other Disciplines

Meteorological models run on a global grid in the medium and long range, often moving to limited area at shorter ranges to allow high-resolution grids. They are updated at regular intervals. Hazard models tend to be run on demand by the user, and the meteorological input may not be as fresh as desired. Hazard modellers frequently operate at jurisdictional (state, county or local government) level, so it is necessary to interpolate or “cookie cut” weather model output to obtain appropriate input for their hazard models. Post-processing of ensemble forecasts can provide the “worst-case”, “best-case” and “expected” weather inputs for downstream models. However, sophisticated users increasingly ingest individual ensemble members to generate ensemble hazard forecasts of flood, fire, air quality, etc.
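
As a simple illustration of this workflow, the sketch below cuts a gridded ensemble rainfall forecast down to a jurisdiction's bounding box and derives best-case, expected and worst-case area-average inputs for a downstream hazard model; the grid, coordinates and percentile choices are illustrative assumptions rather than any agency's practice.

    import numpy as np

    rng = np.random.default_rng(5)
    lats = np.linspace(-44.0, -10.0, 341)                          # illustrative 0.1-degree national grid
    lons = np.linspace(112.0, 154.0, 421)
    rain = rng.gamma(0.3, 8.0, size=(18, lats.size, lons.size))    # 18 ensemble members of daily rain

    # "Cookie cut" the jurisdiction of interest (a simple bounding box here).
    lat_sel = (lats >= -38.5) & (lats <= -37.5)
    lon_sel = (lons >= 144.5) & (lons <= 145.5)
    local = rain[:, lat_sel, :][:, :, lon_sel]

    # Area mean per member, then percentiles across members for downstream use.
    area_mean = local.mean(axis=(1, 2))
    best, expected, worst = np.percentile(area_mean, [10, 50, 90])
    print(f"best-case {best:.1f} mm, expected {expected:.1f} mm, worst-case {worst:.1f} mm")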

Different hazard models use weather information quite differently, so weather modellers and forecasters must be flexible in their capability to serve hazard models with the necessary weather information. Weather models offer much richer information than many hazard models have been designed to ingest. As weather and hazard models become more tightly coupled, some physical attributes and processes that are critical to accurate hazard forecasting, such as vegetation, soil moisture and aerosol, will need to be treated more carefully by meteorologists.

6.4.4 Technical Constraints

6.4.4.1 Data and Standards

The data communication bandwidth within institutions is often orders of magnitude greater than that between institutions. As a result, the downstream partner may receive highly degraded input data, e.g. in spatial or temporal resolution, in domain or in the resolution of the probability distribution. Increasing bandwidth requires not only faster datalinks but also bigger databases, more expensive processing and more sophisticated interfaces, so choosing the optimum is important.

Weather model outputs conform to standards set by the World Meteorological Organization (WMO 2019). Hazard modelling communities operate to different, often local, standards, so effort is required to make weather and hazard models “talk to each other”. State-of-the-art NWP models are computationally intensive and require high-performance computers (WMO 2013), whereas hazard models generally have less intensive compute requirements. When coupling hazard models more closely to weather models, there are three broad options: the hazard model can be transferred to the meteorological centre, where NWP outputs feed directly into it, as in the ECMWF GLOFAS system (Alfieri et al. 2013); NWP outputs can be transferred to the hazard modellers’ environments for local use; or one or both models can operate in a shared cloud environment. Each has challenges in terms of computational efficiency, control of model upgrades and speed of operation.

6.4.4.2 Spatial and Temporal Scales

The scales required to assess the impact of the hazard must be the driver for all parties. The resolution adopted by the hazard agency to meet these demands may imply unachievably fine resolution in the input weather data. In this case, downscaling of the weather forecasts may be needed. The benefits of doing this need to be clearly evaluated.

Hazard impacts often occur on small scales, and as a result, hazard prediction frequently places high demands on the meteorological information supplied as input. In a 1-day forecast, very-high-resolution limited area NWP may be able to meet this demand. For hazard prediction on timescales of days to a week or more, required for some mitigating actions, the influence of the large-scale meteorological flow and uncertainty on the local detail dictates the use of global models, with consequent coarser spatial resolution. The predicted timing of events similarly loses precision for longer-range forecasts. This may not meet the needs of the hazard agency for highly precise information. However, it follows from the inherent unpredictability of the atmosphere and has to be accommodated by the coupled prediction system.

6.4.4.3 Uncertainty and Bias

Weather forecasters and hazard modellers use numerical forecasts in different ways. Meteorologists accommodate errors in the predicted location and timing of high-impact weather from NWP, paying particular attention to the large-scale patterns (at medium range) or the mode of convection (at short range) and applying their experience to interpret the forecast. Hazard models are often less able to accommodate errors such as rainfall falling in a different catchment because of a “minor” positional error, a surge occurring at a different state of the tide because of a “minor” timing error or snow failing to reach the ground because of a “minor” temperature error. Ensemble prediction offers a means to transfer uncertainty in weather forecasts into uncertainties in hazard forecasts. However, while ensembles are good at capturing uncertainty, they may still be biased. It is therefore important to correct systematic errors in model outputs by statistical post-processing prior to their ingestion into hazard models (Gascon et al. 2019).
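
One common form of such post-processing is empirical quantile mapping, sketched below with synthetic data: each forecast value is mapped onto the observed climatological distribution at the same quantile, removing systematic errors across the whole distribution rather than only in the mean.

    import numpy as np

    def quantile_map(new_fcst, train_fcst, train_obs):
        """Map each new forecast value to the observed value at the same
        empirical quantile of the training distributions."""
        train_fcst, train_obs = np.sort(train_fcst), np.sort(train_obs)
        quantiles = np.linspace(0.0, 1.0, train_fcst.size)
        # Quantile of the new forecast within the forecast climatology...
        q = np.interp(new_fcst, train_fcst, quantiles)
        # ...then the observed value at that quantile.
        return np.interp(q, quantiles, train_obs)

    rng = np.random.default_rng(6)
    obs = rng.gamma(2.0, 4.0, size=3000)                            # observed rainfall climatology
    fcst = 0.7 * obs + rng.normal(scale=1.0, size=3000) + 2.0       # drizzly, biased model output
    corrected = quantile_map(fcst, fcst, obs)
    print(f"raw bias: {np.mean(fcst - obs):.2f} mm, corrected bias: {np.mean(corrected - obs):.2f} mm")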

6.4.4.4 Uncertainty

Atmospheric forecasts are essentially uncertain due to the chaotic behaviour of the atmosphere. An effective partnership will recognise that this is not a shortcoming in the data input to the hazard forecast but, rather, a fundamental limitation that must be reflected in the hazard forecasting system. Some hazards, such as those associated with severe thunderstorms, reinforce the uncertainty from the basic meteorology and must be predicted in probabilistic terms, while others, such as flood predictions for a large river system, may reduce it. An essential part of developing a hazard forecasting system is to identify how uncertainty will be incorporated, both in the meteorological inputs and in the hazard outputs.

6.4.4.5 Consistency of Heat, Water, Gas and Momentum Fluxes

Many hazards occur at the interface between atmosphere and land or ocean. The ability to model this interface is currently crude, with fluxes leaving one model often inconsistent with the requirements of the receiving model. Over land, the descriptions both of the land surface, including buildings, trees, rocks, etc., and of the turbulent processes through which interaction occurs are extremely simplified. Over the sea, the surface is generally considered to be horizontal, ignoring the turbulent effects of waves and moisture exchanges due to spray. Advances in understanding these processes require detailed and painstaking research supported by expensive field measurements.

6.4.5 Model Integration

From the first coupled climate models of the 1970s, decadal to centennial simulations have required representation of the long-timescale interactions between the physical and chemical state of the atmosphere, along with feedbacks between atmosphere, land surface, ocean and cryosphere. Since then, the scope of earth system coupling has been extended to introduce greater complexity and fidelity and to include processes such as aerosol chemistry, dynamic vegetation and ice sheet dynamics (see, e.g. Jones et al. 2011; Cornell et al. 2012).

The translation of this “whole system” thinking to weather forecast timescales is less mature but is a growing area of research and application (Rabier et al. 2015; Belair 2015). Typically, NWP has made simplifying assumptions that omit or parameterise earth system interactions so as to minimise the computational cost and complexity of forecasting systems, recognising that these processes usually make a second-order contribution to predictive skill. For example, assuming that the analysed sea surface temperature valid at the start of a simulation cycle will persist for the duration of the weather forecast has been considered sufficient at most operational NWP centres. Today, this situation is changing: atmosphere-land-ocean coupled ensemble NWP systems are increasingly common at global scale for forecasts on timescales of days to weeks (Harrigan et al. 2020). This has been shown to improve predictive skill in tropical regions, including a better representation of tropical cyclone evolution and intensity. Remaining challenges include extending this “whole system” approach to data assimilation and ensemble prediction. While assimilation methods and capability are well advanced for atmosphere and ocean components, they are less developed for other components such as atmospheric chemistry or hydrological state. Challenges also remain in coupled atmosphere-ocean data assimilation, arising from the different timescales required for initialisation of ocean and atmosphere components (Frolov et al. 2016). Further work is also required on the design of representative initial condition and model uncertainties in coupled ensemble systems, so as to capture the impact of interactions on uncertainty.

For shorter-range regional prediction systems, there is similarly a growing recognition of the potential value of increasing model complexity, including regional coupled environmental prediction systems (Lewis et al. 2018a; Fallmann et al. 2019). At kilometre scale, earth system processes become important for better representing the heterogeneity of the landscape and thereby improving model skill, notably at coastlines and around urban environments. The prospect of a more integrated catchment-resolving approach to hydrometeorological prediction also becomes possible. Critically, the advance towards ensemble numerical environmental prediction provides a framework from which to develop consistent outputs for simulation of multiple hazards. In general, environmental hazards have a strong meteorological driver: for example, a storm in a coastal environment may bring strong winds that produce inland inundation through sea surge and wave overtopping, together with heavy rainfall and saturation of the land surface that lead to high river flows, overbank inundation and the potential for landslide and other linked hazards. The goal is to represent multi-hazard probabilities, accounting for uncertainty propagation through a connected system. At these scales of interest, interactions of physical and biogeochemical systems with the built environment and human systems also become increasingly relevant and offer a further frontier for bridging across communities, science disciplines and modelling capability.

Integrated modelling requires attention to be paid to the scales of interest in each domain. Integration of NWP with land surface hydrology requires recognition of the much finer horizontal resolution required for accurate hydrological prediction, especially in urban areas (Cuo et al. 2011), as well as the sensitivity to heat fluxes and evapotranspiration. Ocean models need to run with fine horizontal resolution to represent the nearshore ocean and especially the inter-tidal zone, and vertical fluxes of heat and momentum depend on modelling of waves and currents. Estuaries are particularly complex, requiring interactions amongst inshore ocean, river and flood inundation, often in an area of complex meteorology, bearing in mind that the temperature and composition of river water may influence the temperature and biology of the coastal ocean and hence any coastal atmospheric circulations. Coupling of air composition into weather models requires the radiative and cloud microphysical impacts of aerosols to be considered alongside the accurate prediction of ground-level pollutant concentrations. Composition models typically contain large numbers of species and the chemical reactions between them, resulting in much expanded prediction codes with many parameters (Freitas et al. 2011). They also require specification of pollutant sources, which may change for a variety of reasons: some may have regular patterns in time and space, some may be weather dependent and others may be associated with specific events such as festivals.

6.4.6 User-Oriented Verification

To generate confidence across the partnership, the quality of the inputs delivered by each partner should be measured in terms that reflect their use by the other partner, as well as in terms that support their internal development (Ebert et al. 2018). For hazard forecasting, aspects of weather forecast quality that are important may include location and timing of features such as storms and fronts, structure and variability, and magnitude and extremity. For example, verifying rainfall for flood prediction requires assessing whether it was located over the catchment of interest and whether it had the right intensity distribution to produce the observed runoff and flood height. Temperature verification for heat wave forecasting assesses whether the predicted temperatures were sufficiently extreme and of sufficient duration to lead to health impacts. This sort of diagnostic evaluation complements more traditional metrics of forecast accuracy. The discipline of routine objective forecast verification practised in operational meteorology can be extended to hazard prediction, providing suitable hazard observations are available.

Identifying the root cause of a deficiency in hazard forecasts is important and requires collaboration. Where errors can be related to a bias in the meteorological input, this may be straightforward. However, where processes are complex, it may be necessary to explore them in detail to establish whether the cause is in the process representation, in the input meteorology or elsewhere. It may, indeed, be a mixture, and model tuning often leads to error cancellation that only becomes apparent in extreme conditions. Joint field programmes can be a valuable opportunity for exploring such issues.

Verification should be oriented to the aspects of the prediction system that are most relevant to the decision-maker at the end of the warning chain. Knowing how forecasts become less accurate at longer lead times helps the decision-maker understand the risks of acting (or not) on a forecast that may turn out to be a false alarm or a missed event. Verifying forecasts and warnings of socio-economic impact (if they have been made) is extremely difficult, as discussed in Chap. 4. Visual comparison of forecasts overlaid with evidence of hazard impact can be informative and helps tell the story. Measuring the performance of the forecast elements that can be objectively verified is also important. When decision-makers have thresholds for taking action based on the forecast, then verifying forecasts of threshold exceedance at the location of interest gives the user the quality information required to develop an appropriate level of confidence.
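
For threshold-based decisions, this can be as simple as the contingency-table verification sketched below (synthetic daily rainfall and an illustrative 25 mm action threshold), which yields the probability of detection, false alarm ratio and critical success index that a decision-maker can weigh against the costs of acting.

    import numpy as np

    def exceedance_scores(forecast, observed, threshold):
        """2x2 contingency-table scores for exceedance of an action threshold."""
        f, o = np.asarray(forecast) >= threshold, np.asarray(observed) >= threshold
        hits = np.sum(f & o)
        misses = np.sum(~f & o)
        false_alarms = np.sum(f & ~o)
        pod = hits / (hits + misses) if hits + misses else np.nan                 # probability of detection
        far = false_alarms / (hits + false_alarms) if hits + false_alarms else np.nan   # false alarm ratio
        csi = hits / (hits + misses + false_alarms) if hits + misses + false_alarms else np.nan
        return {"hits": int(hits), "misses": int(misses),
                "false_alarms": int(false_alarms), "POD": pod, "FAR": far, "CSI": csi}

    rng = np.random.default_rng(7)
    obs_rain = rng.gamma(0.4, 12.0, size=730)                         # two years of daily rain at one site
    fcst_rain = obs_rain * rng.lognormal(0.0, 0.5, size=730)          # imperfect forecasts of the same
    print(exceedance_scores(fcst_rain, obs_rain, threshold=25.0))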

6.5 Examples of Partnerships

Box 6.1 Flood Forecasting Centre Case Study

Graeme Boyce, Flood Forecasting Centre, UK

Following devastating floods during the summer of 2007, the UK government was determined to develop a more “joined-up” approach to both preparing for and responding to flooding. The Pitt Review (Pitt 2008) identified both the need for all organisations involved to be willing to work together and share information and the importance of forecasting and prediction in enabling emergency planners and responders to reduce the risk and impact of flooding. Its recommendation was clear – the Environment Agency (EA), as the lead flood risk management authority for England and Wales (subsequently responsibility for Wales was devolved to Natural Resources Wales (NRW)), and the Met Office, the UK national meteorological service, should work together, through a joint centre, to improve the technical capability to forecast, model and warn against all sources of flooding. In April 2009, the Flood Forecasting Centre (FFC) became operational, creating a national capability, for England and Wales, to provide advance notice of potential flood risk from all natural sources of flooding (river, coastal, surface water and groundwater) through a daily flood guidance service delivered to all organisations with a statutory responsibility to respond to flooding. The most important role for the FFC was, and still is, to provide flood guidance to the response community; however, with commendable foresight, the scope of the centre also included the remit to engage directly with customers/users and to deliver ongoing service improvements based on feedback from those using the service.

From its inception, the FFC has placed the science of hydrometeorology at its core. A small team of meteorologists from the Met Office, and hydrologists from the EA, was recruited and cross-trained to gain a deeper understanding of each other’s disciplines and customer needs – creating a cadre of professionally accredited operational hydrometeorologists. With different training, institutional backgrounds and employment terms, there was an initial challenge of creating trust, which was overcome by a combination of openness and establishing a common purpose. However, the complications of dual IT systems linked to the parent institutions remain and will not be easily solved.

The centre was set up with a goal to forecast the impact of floods from natural sources, with as long a lead time as possible. To do this, it was recognised that concepts of likelihood and uncertainty would need to be incorporated into guidance information and this resulted in a risk matrix taking a central role in presenting the likelihood of flooding, over the next 5-day period, within the FFC’s primary product – the Flood Guidance Statement (Fig. 6.10).

Fig. 6.10
figure 10

Flood Guidance Statement for England and Wales issued on 26 December 2020. (© Crown Copyright 2020, Flood Forecasting Centre)

The risk matrix was co-designed with the Met Office National Severe Weather Warning Service (NSWWS), ensuring both used consistent concepts and terminology, helping to promote a joined-up service with our response community. Partnership and collaboration were key to its initial ability to become embedded within the flood risk incident management structure and remain vital to its success. The Flood Guidance Statement is co-produced with local forecast teams from the EA/NRW and operational meteorologists from the Met Office. At times of heightened flood risk, FFC duty managers routinely brief senior officials within central government and the EA on the flood risk at a national scale to support strategic decisions. Considerable effort is made to maintain an authoritative and consistent flood risk message during periods of heightened flood risk across the FFC partnership to support flood incident management decision-making. This level of collaboration is maintained when planning improvements to the forecasting capabilities of customer-facing products. The default position is to maintain common forecasting and visualisation systems where possible and work in partnership with Met Office and EA/NRW colleagues to improve these for mutual benefit. The “bridge” that the FFC provides from the Met Office to the EA/NRW flood management authorities has improved the pull-through of science into operational use and has reduced the time taken for this to happen. Prior to 2009, the lead time generally provided by flood forecasts across England and Wales was measured in hours. Over the past 10 years, flood risk guidance has routinely been provided for the next 5 days, and now the Centre is expanding the user base for its 30-day Flood Outlook service (Fig. 6.11).

Fig. 6.11
figure 11

Flood Outlook for England and Wales issued on 30 December 2020. (© Crown Copyright 2020, Flood Forecasting Centre)

Engagement with the flood responder community is also coordinated across the partnership, and this allows the forecasting authorities to present a more coordinated approach and increase the benefit gained from forecasting and warning information. All these improvements have been overseen by a Joint Steering Group, with representation from the Met Office, EA and NRW, and guided by a User Group, with a wide membership from the flood response community, that has enabled this unprecedented partnership and collaborative approach to continue.

With over 10 years of operational experience providing a flood guidance service, including periods of significant flooding (e.g. winter 2013/2014, winter 2015/2016 and February 2020), the Flood Forecasting Centre can confidently claim that it has become a very successful partnership bringing together world-leading meteorological and hydrological science. With customers from the emergency responder community asked to rate the service provided by the FFC every 2 years since its inception, overall satisfaction rose to 91% in 2019, and 92% were satisfied with the daily Flood Guidance Statement. Trust is critical, both in terms of maintaining a successful partnership and in continuing to deliver a forecasting service that is acted upon and provides value to its user base. Perhaps the most visible example of this trust is the investment by the Environment Agency in over 40 km of temporary barriers to help defend communities at risk of flooding where no permanent defences exist. Their successful deployment is dependent on good, advance notice of flooding which is delivered by the FFC in partnership with forecasting colleagues from the Met Office and Environment Agency. This has only been possible through continued collaboration at all levels of governance and leadership, scientific/technological development and operational delivery over the past 12 years.

Box 6.2 Reflections on Working in Partnership with Fire Agencies During Extreme Fires

Mika Peace, Bureau of Meteorology and Bushfire and Natural Hazards Cooperative Research Centre, Australia

In recent fire seasons, Australia has experienced unprecedented fire events. Through many of these, I have worked inside the state operations centres (SOCs) of fire agencies, providing an enhanced briefing and interpretive role.

Unlike other severe weather phenomena, high fire risk doesn’t always translate to impacts; it depends on whether ignition occurs. When extreme fires are active, they happen fast; therefore, a deep appreciation of the complexity of the situation and a rapid response are required. My role involves working closely with the fire behaviour analysts, as fire prediction crosses the disciplines of fire science and meteorology, requiring cohesive teams with multidisciplinary knowledge and an ongoing exchange of information. I need to have an evolving narrative as the situation unfolds and new information becomes available through the day. Being embedded in another agency also requires a strong connection and established networks with my home agency, so I can reach out for additional information when required.

I see my role as ensuring there are “no surprises”, in the SOC or on the fireground. Extreme fire behaviour will happen, but response can be adapted and risk minimised if everyone has clarity on when and where the fire will run and what fire behaviour will occur. When analysing the data, I’m constantly thinking “what could the weather and fire potentially produce as extreme fire behaviour and what is the likelihood”. The process is not as simple as looking at the NWP output; a deeper level of interpretation and pattern recognition is required. Sometimes, communicating with confidence what won’t happen is extremely valuable because it focuses energy away from unnecessary concerns.

Inside the state operations centres, there is a prodigious demand for meteorological information, but the value is in interpretation of how the weather will impact fire behaviour. Copious amounts of data are readily available; intelligence is much more difficult to develop and deliver. On bad days, numerous briefings to various audiences are requested, frequently with minimal notice and requiring distillation of complex information into an understandable and immediately relevant message. My experience inside partner organisations is that briefings and conversations are more valued than products.

Emergency management involves political leaders in the decision and response process, and they rely on expert advice from trusted scientists who can communicate clearly. I’ve been surprised and impressed at how quickly politicians can read a room and determine who has deep expertise and can be trusted for advice, as well as by their perceptive questions, which require comprehensive knowledge to answer.

Extreme fire events are stressful, particularly in a room full of people who have responsibility for making decisions with life and death consequences. So far, I’ve had an ability to maintain a calm demeanour and clarity of thought during briefings. When I am particularly worried about a day, I’m aware that my concern is projected during briefings, and I’ve seen the emotion in the delivery of my briefing message being received and responded to just as clearly as the science content. On occasions during a disaster, I’ve switched from providing science briefings to being a listening ear and providing hugs to colleagues under stress.

In post-event debriefs, I have repeatedly heard our partners emphasise the value they place on trusted relationships with individuals. The counterargument I’ve heard is that our procedures should be sufficiently robust that relationships don’t matter. However, human nature is to value relationships, and emergency management tends to attract empathetic people with altruistic intent, so I believe relationships will continue to be important during extreme events.

Having researchers such as myself in operations has bilateral benefits as, ultimately, stronger research utilisation links will be built, enabling accelerated uptake and adoption of research findings. It will also focus research efforts towards high-impact outcomes addressing real-world issues. I am fortunate to have an extremely rewarding role straddling operations and research. However, what I do is not traditionally a defined career pathway in meteorological agencies. The benefits are intrinsic and therefore difficult to measure and are only fully realised during high-impact events. Long-term investment is required to build cross-disciplinary capability and develop partnerships before events happen so we can be ready to “hit the ground running”.

It is not possible to anticipate and plan contingencies for all possible future scenarios. It is probable that the worst-case scenario is beyond what we can imagine, and it is inevitable that cascading and overlapping events will present response challenges that stretch resources beyond capacity. A structure that supports organic response and enables well-connected people to call in any available assistance when faced with predicted and escalating situations will support optimal response to emerging disasters (Fig. 6.12).

Fig. 6.12
figure 12

Mika (left) briefing the New South Wales Rural Fire Service Commissioner (centre) and the New South Wales Premier (right) in the Rural Fire Service Operations Centre (“the room”) during the 2019–2020 fire season

Dr. Mika Peace is a fire meteorologist at the Australian Bureau of Meteorology. For 10 years, she worked as an operational forecaster in various locations around Australia, and for the past 10 years, she has held a fire meteorology research role. She is recognised as an expert in fire atmosphere interactions through research on case studies and simulations of extreme fire behaviour using coupled fire atmosphere models to understand the interaction processes between the energy release and the surrounding atmosphere.

6.6 Summary

  • Successful hazard predictions require effective application of expertise from each discipline.

  • Building partnerships amongst hazard prediction institutions requires time and effort to remove institutional barriers and build shared objectives.

  • Hazard disciplines have different languages and cultures. Successful hazard prediction requires members of each discipline to learn the language and culture of their partners.

  • Observations of hazards are fundamental to understanding their importance and their causes but are not widely available or easily accessible.

  • Linking hazard models to weather models requires care, based on an understanding of the different roles of the relevant variables in each model.

  • Linking hazard models to weather models requires choices of data standards, time and space resolution, update frequency, forecast length, representation of uncertainty and measures of quality. Compromises should be driven by user requirements wherever possible.

  • Integrated models will increasingly be the basis of hazard prediction in the future. Their implementation should be based on clear evidence of benefit to users.

  • Hazard forecasts should be verified against observations using methods that reflect the use of the predictions in warnings.

  • Hazard forecasts will be used alongside weather forecasts and should be consistent with them. Shared situational awareness tools can facilitate consistency.