7.1 Introduction

This chapter addresses the challenge of forecasting hazardous weather, focusing on the collaborations needed to meet the requirements of prediction models and forecasters for observations of the current state of the atmosphere. Figure 7.1 provides an overview of the scope of the chapter, in which we shall describe:

  • The role of the forecaster in extracting critical user-relevant information from model predictions.

  • Data-driven prediction tools, including nowcasting and statistical post-processing.

  • The structure and components of a Numerical Weather Prediction (NWP) system with emphasis on assimilation of observations and ensemble prediction in state-of-the-art kilometre-scale NWP systems.

  • Current in situ and remote sensing networks and new capabilities that are under development.

  • New sources of data that have the potential to enhance observational coverage and density.

  • Differences of methodology that need to be overcome to build effective partnerships that deliver the observational data needed for prediction.

  • Examples of working relationships that have successfully overcome these challenges.

Fig. 7.1

Simplified view of the components of the forecast systems as discussed in this chapter. The orange arrows indicate the interface between weather and hazard warnings/forecast (leftmost orange arrow, discussed in Chap. 6) and the gap between observationalists and the forecaster/prediction system (rightmost orange arrow) that is the subject of this chapter

Forecast information is required for a variety of weather variables at different lead times and spatial-temporal resolutions. For warnings, the probability of extreme or unusual conditions relevant to local standards (e.g. infrastructure and construction codes) and expectations are particularly important. Warnings are especially important in densely populated urban environments where hazards can lead to a cascade of impacts (Baklanov et al. 2010, 2018; Grimmond et al. 2020). For clarity, in this chapter, we make a distinction between predictions, produced by NWP models, nowcasting systems or statistical processing, and forecasts, generated by forecasters based on the interpretation of predictions (see Fig. 7.1), while recognizing that predictions are increasingly input directly into hazard models. Observations are not only critical inputs to the model prediction process but are also needed by the forecaster and for verification. Sophisticated interactive human-machine software is enabling forecasters to interact more with both observations and predictions and to create digital products based on human interpretation of observations or forecasts (e.g. the Interactive Multi-Sensor Snow and Ice Mapping System, Matson and Wiesnet 1981).

7.2 High-Impact Weather Forecasting

7.2.1 Multiscale Forecasts

High-impact weather involves multiscale processes, so both observations and predictions should capture atmospheric variation from large scale (global) to small scale (e.g. sub-urban). Different sources of forecast information are used to generate products at these different scales. Table 7.1 describes some of their characteristics, categorized by temporal scale. Spatial scale is related to temporal scale, so small-scale features such as convective thunderstorms are only explicitly resolved with fine temporal- and spatial-scale models. In the table, current typical model grid lengths are quoted, but it should be remembered that models can only resolve atmospheric features of several times (typically five to ten times) the grid spacing (Lewis and Toth 2011). As far as possible, models should be seamless across different space/time domains, but computational constraints still require them to use different resolutions for different domain sizes, which may imply using different parameterizations, data assimilation, ensemble perturbations or post-processing. The frequency of model predictions also varies with forecast length. Long-range (seasonal) predictions may be generated once or twice a month, whereas nowcast predictions may be initiated every hour or less (Table 7.1).

Table 7.1 Characteristics of systems for predicting weather and climate at a variety of time scales.

There is a commensurate wide variation in the observation requirements of models. Generally, observations need to be more accurate and less biased than model predictions and must be quality controlled with respect to the capability of the model to reproduce the observation (e.g. due to its resolution or the processes it represents). It is very much a case of “treasure versus garbage”. Fine-scale variability resolved by a high-resolution model may be “noise” to a lower-resolution model. Noise in the initial model state can grow rapidly and limit accuracy at longer forecast times, so must be filtered out in the observation quality control and assimilation. Variables that change little over the time scale of a forecast (such as sea surface temperature in a short-range forecast) may be set to a fixed value in that forecast but may need to be predicted in a longer forecast.
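As a concrete illustration, a gross-error check of this kind compares an observation's departure from the model background with the combined expected error of the two. The following is a minimal sketch; the function name, error values and tolerance factor are chosen for illustration and are not taken from any operational system:

```python
import math

def passes_gross_error_check(obs, background, sigma_obs, sigma_bkg, k=3.0):
    """Accept an observation only if its departure from the model background
    is within k times the combined expected error standard deviation."""
    tolerance = k * math.sqrt(sigma_obs**2 + sigma_bkg**2)
    return abs(obs - background) <= tolerance

# A 2 K departure with 1 K expected errors on each side is plausible...
print(passes_gross_error_check(285.0, 283.0, 1.0, 1.0))   # True
# ...but an 8 K departure is flagged as suspect ("treasure versus garbage").
print(passes_gross_error_check(291.0, 283.0, 1.0, 1.0))   # False
```

In practice the background error term also encodes representativeness: a coarse model is given a larger effective error against fine-scale observations, so that legitimate small-scale variability is not wrongly accepted as large-scale signal.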

Local physical influences often drive the details of high-impact weather, so the focus of much current high-impact weather research is in the development of high-resolution prediction, to capture not only the details of the local environment but also the hazard-related atmospheric structures. Accurate resolution of small-scale physical structures, such as orography or the urban fabric, can aid predictability. However, uncertainties in the initial conditions, under-resolved physical processes, inaccuracies in the numerical solutions and the rapid growth of small-scale perturbations ultimately pose limits to predictability that become shorter with decreasing scale. This paradox, of uncertainty increasing as resolution gets finer, can be overcome using ensembles to generate probabilistic forecasts.

In this section, we shall focus on the highest resolutions to illustrate the needs of prediction systems that support severe weather warnings. However, many of these characteristics are relevant also for the other prediction systems listed in Table 7.1.

7.2.2 Forecasters and Decision-Making

The Forecaster Process

Given the variety of missions and stakeholders, there is no single process for generating a forecast, but all forecasters draw on common resources: recognition of patterns and the established rules relating to them; knowledge of instruments, observations, models, products, societal impacts and responses; collaboration with peers inside and outside their organization; and the constraints of messaging.

At the start of a duty shift, the forecaster is briefed on and reviews the previous forecast to understand the context in which it was made and any special vulnerabilities, such as a first snow event of the season, or an unusual hazard. In reviewing the previous forecast, the forecaster will take account of not just the meteorology but also any extraneous constraints on the forecast environment such as hardware or staffing issues.

The forecaster must understand the overarching nature of their shift and for whom they are generating a forecast, as their approach and strategy will differ when issuing sub-seasonal, weekly or aviation forecasts. The forecast goals depend on the needs of the user, and the role of the forecaster is adapted to the message type and to the risk that must be communicated.

The forecaster will then interact with the data to develop a four-dimensional understanding of the weather situation. An analysis of the large-scale pattern sets the context for understanding meteorological structures at smaller scales. This process continues down to the smallest scales, a sequence Snellman (1982) called the forecast funnel. Longer lead-time forecasts typically depend more on NWP information than shorter lead-time forecasts.

The volume and sophistication of NWP products are increasing rapidly. The forecaster interprets them in the context of past performance in those types of weather, including model climatologies and verification. Even sub-hourly severe local thunderstorm warning decisions will take account of an available mesoscale NWP analysis from a rapid update convection-permitting model (Weisman et al. 1997). The forecaster will use automated guidance including basic weather variables, such as temperature or wind, but may also have access to hazard variables such as visibility, severe wind gusts and winter precipitation amounts.

Forecasters benefit and suffer from the increasing volumes and diversifying types of observations and model data now available to them. As data volumes grow, the forecaster is increasingly reliant on computer systems to organize and present the data in a form that enables easy navigation and interaction so as to avoid data overload causing a decline in forecasting performance (Hoffman et al. 1995). Forecaster workstations must be designed from a human-centric, not system-centric, perspective (Andra et al. 2002; Heizenreder et al. 2015) as ergonomics, human factors, system architecture, bandwidth and speed of presentation are all important for forecaster effectiveness. Automated products can provide valuable guidance, but providing the “answer” is useless without the capability to efficiently interrogate, assess and evaluate the data for decision-making (Joe et al. 2002; Stuart et al. 2007). It is essential that the forecaster is able to maintain a conceptual understanding of the underlying weather processes and to be able to view how the NWP prediction matches with the conceptual model and observations, so as to make sense of potentially conflicting information. For instance, if short-term model guidance fails to produce convection when the observational data shows the necessary ingredients are present, the forecaster will challenge, and may need to abandon, the model guidance.

A forecaster must collaborate with colleagues serving other users in the same or adjacent areas, whether they are located in the next desk or another country. This is especially true when forecasting for widespread weather systems, such as winter storms or tropical cyclones. On smaller scales, the signal from a single instrument may be critical, requiring expert input from technical experts for interpretation. The forecaster must also take account of how their forecast will be used and must be an effective team player within the warning production chain.

As new observations, NWP and prediction systems are introduced or upgraded, the expert forecaster must understand not only the weather but also the capabilities and limitations of each innovation in order to assess and evaluate their efficacy. System and product training must be continually refreshed for this purpose.

7.2.3 Data-Driven Prediction

Post-Processed Products

A numerical weather modelling system predicts the mean dynamic and thermodynamic weather variables such as temperature, pressure, wind and moisture using a discretized form of the continuous equations of fluid mechanics on a three-dimensional grid, with unresolved processes (e.g. those occurring in clouds or close to the ground) parameterized in various ways. Warnings of high-impact weather require knowledge of the basic variables at unresolved scales (e.g. wind gusts) and of other variables (e.g. visibility or snow depth). These may be estimated using statistical or empirical post-processing models; sub-grid wind gusts, for instance, are estimated using statistical relationships. The representation of terrain used in the model is smoothed and may not represent the urban texture/fabric, but the fine details are often important for warnings. For instance, there can be a considerable difference in height between the observation point and the nearest model point, and model data should be adjusted to account for these differences by post-processing, either with past observations or using an estimated gradient. In some cases, the model output is adjusted on an hourly basis (Landry et al. 2004). Bias correction is less of an issue for severe storm prediction, as the objective is to model the hazardous phenomena directly (e.g. hail, tornado).
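As a simple illustration of such a height adjustment, a model 2 m temperature can be corrected to the real station elevation using an assumed vertical gradient. The function and the standard-atmosphere lapse rate of 6.5 K km⁻¹ are illustrative choices; as noted above, operational post-processing often estimates the gradient from past observations instead:

```python
def adjust_temperature_to_station(t_model, z_model, z_station, lapse_rate=0.0065):
    """Adjust a model temperature (K) from the smoothed model orography
    height to the true station height using a fixed lapse rate (K/m).
    The 6.5 K/km default is the standard-atmosphere value, an assumption."""
    return t_model + lapse_rate * (z_model - z_station)

# Station sits in a valley 300 m below the smoothed model terrain:
t = adjust_temperature_to_station(280.0, z_model=800.0, z_station=500.0)
print(round(t, 2))  # 281.95: the valley station is warmed accordingly
```

Near 0 °C such a correction can change the predicted precipitation type, which is why the terrain-height difference matters so much for winter warnings.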

User-relevant warning products require the combination of weather elements in post-processing; for example, blizzard or dust storm warnings require predictions of snow or sand surface conditions combined with surface winds. A variety of statistical, artificial intelligence and analogue techniques are applied, combining model data with historical data and real-time observations to generate these user-oriented products (Burrows and Mooney 2018). For example, “ProbSevere” combines model predictions with highly processed real-time satellite data to create multi-sensor warning guidance products for the prediction of severe weather (Cintineo et al. 2018).

Nowcasting

Nowcasting is defined as forecasting a detailed description of the weather, by any method, over a period from the present to 6 h ahead (Sun et al. 2014; WMO 2017). Traditionally, nowcasting focused on severe thunderstorm warnings, but it has since evolved to serve many more applications.

Summer

Nowcasting of summer weather has focused on convective storms and their hazards, including heavy rain, flash floods, tornadoes, hail, damaging winds and lightning strikes, mainly using observation-based identification and extrapolation. These forecasts support warnings of the most immediate hazards where action should be taken immediately to save property and/or life and generally cover the 0–1 h time period (NWS 2021). Automated extrapolation has been based on spatial correlation of two-dimensional radar-derived precipitation maps at different scales (e.g. Bellon and Austin 1978; Rinehart and Garvey 1978) or tracking of thunderstorm features (Dixon and Wiener 1993).
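A toy version of correlation-based extrapolation can be sketched as follows: find the shift that best matches two successive precipitation maps, then advect the latest map by that shift. This brute-force search is only a stand-in for the multi-scale correlation and feature-tracking methods cited above, and the synthetic "echo" is invented for illustration:

```python
import numpy as np

def best_displacement(prev, curr, max_shift=3):
    """Brute-force search for the (dy, dx) grid shift that maximizes the
    match between two successive radar precipitation maps - a toy stand-in
    for spatial-correlation trackers such as Bellon and Austin (1978)."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(prev, dy, axis=0), dx, axis=1)
            score = np.sum(shifted * curr)  # unnormalized cross-correlation
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

def extrapolate(curr, displacement):
    """Advect the current map forward by the estimated displacement."""
    dy, dx = displacement
    return np.roll(np.roll(curr, dy, axis=0), dx, axis=1)

# Synthetic echo that moved one grid cell east between scans:
prev = np.zeros((10, 10)); prev[4:6, 2:4] = 1.0
curr = np.roll(prev, 1, axis=1)
d = best_displacement(prev, curr)
print(d)                      # (0, 1): eastward motion recovered
nowcast = extrapolate(curr, d)  # position forecast for the next scan
```

Real systems estimate displacements at multiple scales, allow echoes to grow and decay, and handle domain edges properly; the wrap-around `np.roll` here is purely a simplification.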

The automated warning of microbursts at US commercial airports is probably the most successful of all nowcasts (see example in this chapter). These warnings have eliminated crashes of jet aircraft on take-off and landing caused by microbursts, likely saving hundreds of lives. Controllers and pilots are warned of microbursts by an automated algorithm that ingests data from the Terminal Doppler Weather Radar located near most US airports (Wilson and Wakimoto 2001).

The biggest challenge in nowcasting is predicting a severe convective storm before it has formed. This requires observation of the 3-D pre-convective environment in the lower troposphere. Unfortunately, this is currently terra incognita in earth system science (Wulfmeyer et al. 2015) despite long-standing evidence that detection of boundary layer convergence lines and of upward motion at the top of the boundary layer are key for predicting the dynamics (Wilson and Schreiber 1986) and that observations of moisture and temperature profiles are needed for predicting clouds and precipitation. High-resolution networks of surface stations (spacing 5–20 km) are valuable for identifying the sharp gradients in wind, temperature and moisture characteristic of the mesoscale boundaries on which storms may develop. Doppler lidar and radar also have the ability to observe convergence boundaries, while geostationary satellites can detect the growth of the boundary layer and subsequent development of clouds at these boundaries (Purdom 1976; Weaver and Purdom 1995). The potential for growth of the incipient storm is dependent on the stability and wind shear of the deep atmosphere, which can be obtained from radiosondes and vertical profilers and from satellite soundings assimilated in NWP models (WMO 2017). However, current observing capabilities lack the high spatial, vertical and temporal resolution profiles of wind, temperature and moisture in the lower troposphere that are needed. Such observations are becoming possible with the new generation of Doppler, Raman and differential absorption lidar systems.
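The convergence boundaries mentioned above can, in principle, be flagged by computing horizontal divergence from a gridded analysis of surface winds; strongly negative values mark the convergence lines on which storms may develop. A minimal sketch with synthetic data follows (the grid, spacing and flow field are invented for illustration):

```python
import numpy as np

def horizontal_divergence(u, v, dx, dy):
    """Centred-difference divergence du/dx + dv/dy on a regular grid (1/s).
    Strongly negative values flag boundary layer convergence lines."""
    dudx = np.gradient(u, dx, axis=1)
    dvdy = np.gradient(v, dy, axis=0)
    return dudx + dvdy

# Two opposing flows meeting along a north-south line; 10 km grid spacing,
# comparable to a dense mesoscale surface network:
x = np.linspace(-50e3, 50e3, 11)
u = -np.tile(np.sign(x), (11, 1))  # westerlies west of the line, easterlies east
v = np.zeros_like(u)
div = horizontal_divergence(u, v, dx=10e3, dy=10e3)
print(div[5, 5] < 0)  # True: convergence along the boundary
```

In practice the wind analysis would come from the surface network, Doppler lidar or radar, and the divergence signal would be combined with thermodynamic profiles to judge whether convection can actually initiate on the boundary.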

Once a storm has formed, processing of channel differences in geostationary satellite data can be used to identify storm phenomena such as severe convection and overshooting tops. Storm intensity and movement can also be tracked using lightning detection, from both ground-based networks and satellite-based lightning imaging sensors. However, the most valuable observation source is Doppler radar. Within the storm, radar reflectivity information enables identification of developing and decaying areas and of storm movement, while Doppler wind information can pinpoint the development of storm rotation prior to tornado development and of severe up- and down-draughts.

Quantitative hazard information (e.g. from tornadoes, microbursts) is much more difficult to obtain, as it is generally at even finer spatial resolution than operational radars can provide. Severe convective weather warnings are thus often issued on proxy information such as radar-derived storm structure and precipitation intensity. In order to verify warnings, direct observations of the hazard are required that are typically obtained from human spotters – both professional and volunteer – and from post-event damage surveys. Increasingly, these reports are being obtained through social media with the possibilities of automated processing in real time for operational warning use.

Winter

Winter nowcasting is focused on the prediction of precipitation type, extreme cold, strong winds and poor visibility. Many of these weather variables are poorly observed, making verification of forecasts difficult. For example, in situ snowfall measurements are impacted by wind, and remote sensing by radar is insufficiently precise (WMO 2018; Boudala et al. 2017). Freezing precipitation is particularly difficult to observe and forecast (Strapp et al. 1996). Frost is not observed at routine weather observing stations. Whether snow or rain occurs depends on small changes near 0 °C, where the difference between model terrain height and reality may undermine the skill of the prediction. Blending of in situ observations with high-resolution models is an emerging technique (Huang et al. 2012; Bailey et al. 2014). However, good in situ high-resolution observations are rare, so validation studies of remote sensing techniques (e.g. for snow depth) are also rare. International projects on winter weather nowcasting have documented some of these problems (Isaac et al. 2014b; Kiktev et al. 2017) and have identified the need for observations at high time resolutions (1 min) and for fine-scale models (<2 km).

Typhoon/Hurricane Nowcasting

Tropical cyclones (TCs), including hurricanes and typhoons, pose significant threats to life, property and economic activity, particularly in tropical and subtropical coastal areas. The accuracy of TC track forecasts has improved continuously, but prediction of intensity (maximum surface wind and storm size), structure (symmetry and vertical structure), precipitation and the associated flooding and storm surge inundation remains a challenge. Emergency and rescue responses rely heavily on rapidly updated observations and nowcasts, using Doppler weather radar and surface rain gauges for frequent updates of precipitation estimates and forecasts for decision-making. Over the open ocean, geostationary and polar-orbiting multispectral satellite observations and products are the main data used for monitoring and analysis and for assimilation into global and regional NWP models. Significant efforts are being made to retrieve ocean surface winds, layered cloud motion vectors, cloud height, rainfall rate, atmospheric stability, etc. (EUMETSAT 2021). Recent advances, including the blending of radar-based and satellite-based data, allow precipitation forecasts to be extended up to 6 hours ahead and, over a broader area, provide the longer lead times that enable early preparation and decision-making. There has been progress, but significant challenges remain in detecting rapid intensification (Fig. 7.2; Kaplan et al. 2010; EUMETSAT 2021).

Fig. 7.2

Rapid intensification of Typhoon Higos (Aug 2020) approaching the coast of China over the Northwestern Pacific basin, as seen from the “Hot Tower” satellite products. Pink areas in the centre represent overshooting tops identified by the Rapid Development Thunderstorm product, while light grey areas are hot towers identified by the Hot Tower algorithm. The two images are only 6 hours apart. (Source: Hong Kong Observatory, based on the Himawari-8 satellite of the Japan Meteorological Agency)

Several existing observation platforms are currently under-utilized in operational TC nowcasts: for example, rapid-scan short-wavelength radar, multispectral geostationary satellite imagery, ground-based or spaceborne lightning mapping, dropsondes from reconnaissance flights, aircraft in situ measurements (viz. AMDAR/ACARS upper-air winds, temperature and humidity) and Global Positioning System constellation slant-path precipitable water vapour measurements. Studies are required to make better use of this information for rapid analysis of the atmospheric state. Ocean observations (buoys, oil rig AWSs), sea surface wave and current measurements, tide-level measurements, storm surge modelling, hydrological modelling, inundation modelling and their integration still require significant advancement for use in TC disaster nowcasting, warning and protection (Fig. 7.3; WMO RSMC 2021).

Fig. 7.3

An atmospheric/oceanic observation integrated platform for real-time analysis of the structure and intensity of a TC (left) using latest available radar, scatterometer, automatic weather station, oil rig, buoy, lightning, etc. observations. (Source: Hong Kong Observatory)

These nowcasts should include confidence or calibrated probability information to aid users’ risk assessments. Confidence information could be generated efficiently with current computers using an ensemble approach. Probabilities need to be related to the end user’s/decision-maker’s impact parameters involving the entire value chain.

7.2.4 Numerical Prediction

High-impact weather, related to hazards, occurs mostly on very small scales (e.g. individual convective storms, urban heat islands). Although considerable advances have been made in NWP-based warnings of some high-impact weather events, such as tropical cyclones and disruptive winter weather, detailed high-impact weather forecasts have, until recently, been largely based on observational detection and/or visual confirmation, due to the limitations of operational NWP models in providing accurate predictions at these scales. Skilful probabilistic forecasts are critical to provide timely and accurate warnings, requiring access to observations of the dynamics and thermodynamics of the atmosphere at these scales and their assimilation into kilometre (km)-scale ensemble-based numerical weather prediction models. Even though there are many challenges in developing kilometre-scale NWP systems, running them, and post-processing voluminous amounts of output into useful guidance for decision-making, the potential benefits are significant.

Kilometre-Scale Numerical Prediction

Kilometre-scale NWP models explicitly represent multiscale processes, including dynamic interactions between scales and organization of different types of high-impact weather. They have a more detailed representation of land surface heterogeneity than coarser-resolution models and use more sophisticated parametrizations of cloud microphysics, boundary layer mixing, turbulent entrainment and radiation. This allows for more realistic and more accurate forecasts of severe weather events. However, these sophisticated schemes are still subject to uncertainty, which needs to be captured in an ensemble prediction system, e.g. by using stochastic parameterization schemes. Improving the scientific foundation and the development of such schemes are on-going research efforts.
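One widely used way of representing this parameterization uncertainty in ensembles is to perturb the parameterized tendencies stochastically, as in SPPT-type schemes. The sketch below uses uncorrelated noise and an arbitrary amplitude purely for illustration; operational schemes draw the perturbation from spatially and temporally correlated patterns:

```python
import numpy as np

rng = np.random.default_rng(42)

def perturb_tendency(tendency, amplitude=0.3):
    """SPPT-style multiplicative perturbation: scale a parameterized
    tendency field by (1 + r), with r drawn from a bounded random pattern.
    The amplitude and the white-noise pattern are illustrative assumptions;
    real schemes use correlated patterns and physically motivated bounds."""
    r = np.clip(rng.normal(0.0, amplitude, tendency.shape), -1.0, 1.0)
    return tendency * (1.0 + r)

base = np.full((4, 4), 2.0)   # a uniform parameterized heating tendency (K/h)
members = [perturb_tendency(base) for _ in range(5)]
spread = np.std([m.mean() for m in members])
print(spread > 0)  # True: the members now sample model uncertainty
```

Each ensemble member thus integrates a slightly different model, so the ensemble spread reflects uncertainty in the physics as well as in the initial conditions.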

One of the most challenging aspects of kilometre-scale NWP is starting the model with an accurate depiction of the atmosphere that includes the representation of fine-scale atmospheric motion, including clouds. To explicitly resolve multiscale processes, including deep convection, frequent initialization of the model is critical. This is done by adjusting a very short forecast to match high-resolution observations (in both space and time) of the true state of the atmosphere using data assimilation, enabling frequently updated predictions of high-impact weather events and their associated hazards. A fundamental question is whether a variational, ensemble-based or hybrid data assimilation method yields the best kilometre-scale analyses and forecasts. Current research suggests that hybrid systems may provide the best results, the ensemble-based background error covariances providing the balance constraints needed to create better analyses. At the heart of these considerations is how the considerable uncertainties in the initial conditions can best be represented and how large an ensemble is needed to reliably capture the uncertainty. To maximize the utility and impact of kilometre-scale NWP models and storm-scale observations (e.g. radar and satellite) for users, hundreds of post-processed probabilistic forecast products from the ensemble system need to be generated within minutes of initialization. Figure 7.4 shows a simple timeline of a conceptual rapid update forecasting system based on a kilometre-scale model and ensemble data assimilation. It uses sophisticated process models, starting from frequently updated and perturbed initial states, to generate an ensemble of predictions from which estimates of the probability distribution of future hazards can be made. Successive forecasts should lead to converging advice on the likelihood and severity of the hazard.

Fig. 7.4

A simple kilometre-scale ensemble data assimilation timeline. This type of frequently updating probabilistic kilometre-scale forecast system (KFS) can assist forecasters with earlier and more accurate communication of hazardous weather threats

This system is conceptually simple but scientifically very challenging. Because the kilometre-scale forecast system aims to produce forecasts from minutes to a few days, it pushes the limits, not only of NWP modelling and advanced data assimilation but also of high-performance computing. Other challenges include lack of high-density observations and optimizing the initial state for multiple space scales.
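The forecast-analysis alternation at the heart of such a cycling system can be illustrated with a toy scalar example. Everything here is invented for illustration: the "model" is a relaxation with noise, the observation values are arbitrary, and the analysis step is a simple stochastic ensemble Kalman filter update rather than the hybrid methods discussed above:

```python
import numpy as np

rng = np.random.default_rng(0)

def forecast(state, dt=1.0):
    """Toy 'model': relax the state towards a fixed attractor with noise,
    standing in for a short kilometre-scale forecast step."""
    return state + dt * 0.1 * (10.0 - state) + rng.normal(0.0, 0.2, state.shape)

def analysis(ensemble, obs, sigma_obs=0.5):
    """Stochastic EnKF update of a scalar state: shift each member towards
    a perturbed observation, weighted by the Kalman gain."""
    var_b = np.var(ensemble, ddof=1)            # ensemble background variance
    gain = var_b / (var_b + sigma_obs**2)       # scalar Kalman gain
    perturbed_obs = obs + rng.normal(0.0, sigma_obs, ensemble.shape)
    return ensemble + gain * (perturbed_obs - ensemble)

ensemble = rng.normal(8.0, 1.0, 20)          # 20-member initial ensemble
for cycle in range(6):                        # rapid-update cycling
    ensemble = forecast(ensemble)             # short forecast to next obs time
    ensemble = analysis(ensemble, obs=9.5)    # assimilate the new observation
print(abs(ensemble.mean() - 9.5) < 1.0)       # analyses drawn towards the obs
```

Each pass around the loop corresponds to one cycle of the Fig. 7.4 timeline: a short ensemble forecast provides the background and its error statistics, and the analysis pulls the members towards the latest observations before the next forecast is launched.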

7.2.5 Probabilistic Prediction

The extreme variability of hazard-related weather requires that forecasts are probabilistic. The basis should always be the use of an ensemble of NWP forecasts that are perturbed in their initial state and/or in some aspects of the model. However, ensembles do not capture the full range of possible outcomes, so there are several post-processing methods used to estimate the probability distribution, and we describe some of them here.

  1. Neighbourhood Methods

    Despite the relatively fine horizontal grid spacing employed by kilometre-scale ensembles, probabilistic guidance products are typically not presented at the grid scale due to positional uncertainty. For example, small variations in the location of a small-scale feature, such as a mesocyclone, in different ensemble members, may result in low grid-scale probabilities of feature occurrence within a region, even if every ensemble member has predicted a mesocyclone. These same small displacement errors are responsible for the “double penalty” when applying traditional forecast verification measures to convection-allowing scales. To overcome this, neighbourhood approaches are commonly used for probabilistic forecast product generation (e.g. Schwartz et al. 2010) and for verification (e.g. Ebert 2009; Gilleland et al. 2009).

  2. Ensemble Probability of Exceedance and Percentile Products

    The ensemble probability distribution function is used to provide a measure of event likelihood, e.g. the probability of measurable precipitation at a given location, and can provide limited information on event severity as well (e.g. probabilities of updraught helicity values greater than 200 m² s⁻² imply the potential for a strong mesocyclone). However, specific measures of severity that span the range of ensemble solutions are desirable to forecasters (Novak et al. 2008; Evans et al. 2014). These can be found using values at a fixed position within the ensemble distribution, represented by a percentile, as opposed to finding the proportion of the ensemble exceeding a specific value. Percentiles that represent “reasonable” best- and worst-case forecast scenarios, such as the 10th and 90th percentiles, are often used to supplement the ensemble maximum (Novak et al. 2014) to avoid overprediction by outliers.

  3. Ensemble Statistical and Probability Matched Mean

    The statistical mean of an ensemble is possibly the most familiar ensemble product and provides a more skilful forecast than individual ensemble members when averaged over many forecasts (Leith 1974). The improved skill in the statistical mean comes from smoothing low-confidence events in a forecast while retaining higher-confidence or more frequent features. However, kilometre-scale ensembles are primarily aimed at providing guidance on rare and high-impact events with limited predictability rather than the mean. The localized probability matched mean (PMM) is a post-processing technique that restores characteristic amplitudes of ensemble members to the statistical mean field (Ebert 2001).

  4. Pseudo-deterministic Products

    While probabilistic guidance products efficiently condense information within the ensemble and provide measures of uncertainty, they provide limited information about the physical processes responsible for the model solutions. This limitation can be overcome by the “postage stamp” plot, which summarizes each ensemble member on a single plot. Postage stamps provide users with deterministic solutions from individual members and all the information available in continuous forecast fields; however, they sacrifice readability, often to the point of being impractical in large ensembles. Alternatively, web-based ensemble viewers can provide a means for rapidly interrogating individual member solutions (Roberts et al. 2019; Schwartz et al. 2019) while preserving output readability.

A second method for displaying deterministic aspects of an ensemble forecast is to extract limited information from each ensemble member on a single plot. These visualizations remove the complexity of full deterministic products, allowing forecasters to rapidly assess ensemble spread in features of interest. The most familiar of these feature-based visualizations is the spaghetti plot (Obermaier and Joy 2014; Rautenhaus et al. 2018), which provides specific contours of a given field for each ensemble member (Sivillo et al. 1997). Spaghetti plots are typically employed to provide information on ensemble spread of features in a continuous field, for example, shortwaves in a 500 hPa geopotential height field or air mass boundaries in a 2 m dew point field. Automated detection of features associated with specific phenomena may be used to produce analogous visualizations for features like frontal boundaries (Hewson and Titley 2010), tropical cyclone tracks (Hamill et al. 2012) or thunderstorm proxies (Schwartz et al. 2015). In particular, kilometre-scale ensembles frequently use feature-based “paintball” plots to display ensemble information of thunderstorm and mesocyclone positions in simulated reflectivity and updraft helicity forecasts, respectively (Schwartz et al. 2015; Roberts et al. 2019; Schwartz et al. 2019).
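Several of the ensemble products described above, neighbourhood probabilities, percentile fields and the probability matched mean, can be sketched in a few lines of NumPy. These are simplified illustrations (a square wrap-around neighbourhood, a pooled-subsampling PMM) rather than the operational algorithms, and the three-member "ensemble" is invented:

```python
import numpy as np

def neighbourhood_probability(members, threshold, radius=1):
    """Fraction of members with an event anywhere within `radius` grid
    points of each cell (simple square neighbourhood with wrap-around)."""
    n, ny, nx = members.shape
    hits = np.zeros((ny, nx))
    for m in members:
        event = m > threshold
        local = np.zeros((ny, nx), dtype=bool)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                local |= np.roll(np.roll(event, dy, axis=0), dx, axis=1)
        hits += local
    return hits / n

def probability_matched_mean(members):
    """Replace the sorted values of the ensemble-mean field with quantiles
    of the pooled member distribution, restoring member-like amplitudes
    to the smooth mean (in the spirit of Ebert 2001)."""
    mean = members.mean(axis=0)
    # Subsample the pooled sorted values down to the grid size:
    pooled = np.sort(members.ravel())[:: members.shape[0]]
    order = np.argsort(mean.ravel())
    pmm = np.empty_like(mean.ravel())
    pmm[order] = pooled
    return pmm.reshape(mean.shape)

# Three members with the same rain maximum displaced by one cell each:
members = np.zeros((3, 5, 5))
members[0, 2, 1] = members[1, 2, 2] = members[2, 2, 3] = 30.0
prob = neighbourhood_probability(members, threshold=20.0, radius=1)
print(prob[2, 2])                         # 1.0: all members agree within one cell
p90 = np.percentile(members, 90, axis=0)  # per-cell 90th-percentile product
pmm = probability_matched_mean(members)
print(round(pmm.max(), 1))                # 30.0: member amplitude restored
```

The example shows the point of each product: the grid-scale probability of exceeding the threshold at any one cell is only 1/3, yet the neighbourhood probability is 1.0, and the plain ensemble mean peaks at 10.0 while the PMM restores the 30.0 amplitude that every member actually predicted.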

7.2.6 Forecast Evaluation

Both model predictions and human forecasts must be evaluated regularly to ensure that they have value to the forecast user and to understand the weaknesses that need further research and development. Standard verification techniques are applied to compare the performance of global NWP models, but these have only limited relevance to understanding the prediction of weather-related hazards. Traditional approaches to hazard-related verification have relied on scores such as hit rate and false alarm rate, which relate well to the use of the forecast, but which can be misleading when used to compare different approaches. All evaluation depends on the availability of high-quality observations, and this is perhaps the greatest impediment to verification of hazard-related weather phenomena.
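The categorical scores mentioned above come from a 2×2 contingency table of warned/not-warned against observed/not-observed events. A minimal sketch follows; note that what warning practice often loosely calls the "false alarm rate" is computed here as the false alarm ratio, and the counts are invented for illustration:

```python
def categorical_scores(hits, misses, false_alarms):
    """Standard 2x2 contingency-table scores for yes/no hazard warnings."""
    pod = hits / (hits + misses)                 # probability of detection (hit rate)
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    csi = hits / (hits + misses + false_alarms)  # critical success index
    return pod, far, csi

# e.g. a season of warnings: 30 hits, 10 missed events, 20 false alarms
pod, far, csi = categorical_scores(30, 10, 20)
print(round(pod, 2), round(far, 2), round(csi, 2))  # 0.75 0.4 0.5
```

These scores relate directly to how a warning is used, but, as noted above, they can mislead when comparing systems: a forecaster can raise the hit rate simply by warning more often, at the cost of more false alarms, which is one reason neighbourhood and probabilistic verification approaches are preferred at convection-allowing scales.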

7.3 Observations for High-Impact Weather Monitoring and Prediction

Observations are the heart and language of science: they describe structural characteristics of the environment and advance our understanding of the key physical processes governing the track, intensity, structure and impacts of atmospheric systems. They play a fundamental role in constraining uncertainties in prediction models, directly by sampling the atmospheric initial state and indirectly by providing data for process studies and machine learning approaches to improve the representation of physical processes. They also provide datasets for evaluating the performance of models. Observations must be processed to fit the NWP model structure, so it is important to know the characteristics of both instrument errors and NWP model-observation representativeness differences.

The current in situ observation network of surface weather stations and upper air soundings was designed for short-range forecasting at synoptic or ~1000 km scales. Forecasting systems still rely heavily on these networks of weather stations operated according to WMO standards. For practical and cost reasons, stations in such networks are generally spaced some tens of kilometres apart for surface data and hundreds of kilometres for sounding stations.

Observation-based predictions have evolved over the years as a pragmatic approach to the nowcasting of quickly varying small-scale phenomena, such as thunderstorms, and to adjusting model predictions that diverge from the observations. Advances in forecast accuracy and forecast range have been demonstrated with high-resolution models using extra data obtained from other agencies, social media/citizen science and mixed-technology solutions for specific weather parameters (e.g. precipitation from satellites in remote regions). Vast amounts of observational data of varying or unknown quality are now available, and new collaborations are required to bring them into effective use. The challenges lie not only in accessing the data on the right time scales, in usable formats and with the required information on error characteristics, but also in optimizing the ways in which high-volume, low-quality data are mixed with low-volume, high-quality data to achieve the best forecasts.

Innovation and adoption of new observations require investment and long-term planning by governments. The development of new technology takes time and needs to be coordinated effectively and efficiently across different mandates and funding sources. In this section, we describe the capabilities and limitations of innovations that will meet the high-resolution observation requirements of high-impact weather, starting with in situ observations (including those derived from social media) and then remote sensing technologies.

7.3.1 In Situ Observations

High-impact weather forecasting requires meteorological observations at spatial and temporal densities significantly higher than those available from present National Meteorological and Hydrological Services (NMHS) networks. This has been a prime motivation for investigating the use of third-party data (TPD), which are often collected for purposes other than weather forecasting but nonetheless contain valuable meteorological information. In recent years, the increased reliability and decreased cost of atmospheric sensors, the advent of the “internet of things” and the introduction of machine learning technologies have made available a wealth of new and potentially very useful data on the fine-scale evolution of the atmosphere near the ground. The challenge is how to make this information accessible and usable for the purpose of high-impact weather forecasting. While “big data” and “artificial intelligence” tools and analytics are readily available, ensuring accuracy, routine and reliable long-term access to the data, sound interpretation and data quality control requires scientific and application expertise.

Perhaps the most well-known, and successful, example of third-party data is meteorological observations from commercial aircraft. For several decades, AMDAR (Aircraft Meteorological Data Relay) instruments have been installed by NMHSs on a limited number of aircraft to provide observations of wind, temperature and humidity at flight level and upon ascent/descent into airports. These data have proven to be of great value for the quality of weather forecasts (Petersen 2016; ECMWF 2020). For cost reasons, relatively few aircraft have been adapted to carry AMDAR instruments. However, more recently, aircraft position messages (Mode-S) have been processed to produce wind and temperature data ~100–1000 times more numerous than AMDAR observations, of comparable quality to radiosonde observations and at a significantly lower cost (WMO 2020a).

A potential new source of high-resolution precipitation data has been demonstrated from cell phone networks (Overeem et al. 2011). Microwave communication signals between cell towers are attenuated by precipitation along the link path, and the resulting drop in received signal level can be related to the path-averaged rainfall rate. It has been demonstrated that a network of cell phone towers can provide an accurate and detailed picture of the spatial distribution and amount of precipitation. However, access to these data is a problem as they are considered proprietary by the telecommunications operators.
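At its simplest, the retrieval inverts the standard power law relating specific attenuation k (dB/km) to rain rate R (mm/h), k = a·R^b, assuming uniform rain along the link. The sketch below is schematic: the coefficients are illustrative values for a link near 38 GHz, not calibrated constants, and real systems must first separate rain-induced attenuation from baseline signal fluctuations:

```python
def rain_rate_from_link(attenuation_db, length_km, a=0.33, b=1.05):
    """Invert k = a * R**b for a microwave link of the given length,
    assuming the measured attenuation (dB) is entirely rain-induced
    and the rain is uniform along the path. Returns R in mm/h.
    Coefficients a, b are frequency dependent and illustrative here."""
    if attenuation_db <= 0:
        return 0.0
    k = attenuation_db / length_km  # path-average specific attenuation, dB/km
    return (k / a) ** (1.0 / b)

# A 5 km link losing 10 dB to rain implies k = 2 dB/km, i.e. heavy rain.
r = rain_rate_from_link(10.0, 5.0)
```

Because the exponent b is close to 1 at these frequencies, the retrieval is nearly linear in attenuation, which is one reason commercial microwave links are attractive rainfall sensors.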

Observing sensors and platforms are becoming cheaper, more reliable, more widespread and of better quality. Examples include (i) drones equipped with meteorological and/or air quality sensors flying over areas difficult to access, (ii) measurements from wind energy turbines and (iii) near-surface observations from private or charitable weather stations (van de Giesen et al. 2014; Kucera 2017). These near-surface data can be acquired at affordable cost, often through crowdsourcing initiatives such as the Weather Observations Website (2021). Several studies have shown that with careful quality control and bias correction using machine learning techniques, these data (e.g. temperature, pressure and precipitation) can provide significant added value (e.g. Nipen et al. 2020; de Vos et al. 2019; Meier et al. 2017).

For the detection of highly localized severe weather events, the greater density, representativeness and coverage of TPD can be advantageous. Standard meteorological surface weather stations are situated in open fields free from obstacles. This makes them representative of idealized homogeneous surfaces but under-represents heterogeneous surface environments, particularly in urban areas, where the majority of humans live and where high-impact weather forecasts are most needed. A mixture of TPD, from several sources, can provide a useful addition to NMHS ground observing systems (e.g. de Vos et al. 2019; Fenner et al. 2019). Through data sharing, they may be obtained by NMHSs at a fraction of the cost of operating and maintaining their own networks. However, partners, such as internet service providers or wind energy farms, may be reluctant to share data for competitive reasons. For crowdsourced data from smartphones, there are legal, ethical and privacy aspects to consider. Care is needed to strip the acquired data of all but their meteorological information and to anonymize and possibly aggregate them so that the data cannot be traced back to the original provider.

Individual TPD sources are often unable to reach the standards of official meteorological in situ stations, and complex systematic errors need to be removed. However, many studies have indicated that, combined with professional in situ meteorological networks, and after careful quality control, they offer clear added value in the assimilation and post-processing of NWP forecasts. Machine learning algorithms are increasingly proving successful in providing fully automated quality control of TPD; however, the results need to be interpreted within the context, scale and purpose of the prediction system.

A long-term challenge will be how to coordinate the acquisition, use and exchange of TPD for meteorological use at a global level. Worldwide uptake of these new data types can be facilitated by creating and fostering a global community of meteorological TPD experts, exchanging experiences and best practices, and requires coordination such as that provided by the World Meteorological Organization to standardize protocols, metadata, formats and mechanisms for exchange of TPD.

Many weather-related elements are now measured from mobile platforms, such as smartphones and cars, and studies are being carried out to assess their value. Pressure data from smartphones have been shown to be of value for weather forecasting (Mass and Madaus 2014; McNicholas and Mass 2018). The mobile nature of these sensors presents some particular challenges to their quality control (Hintz et al. 2019). Lidars and radars are used in vehicle collision avoidance but have yet to be exploited for weather prediction.

Crowdsourcing information about hazardous weather may be provided through common social media (e.g. Twitter, Instagram) or specialized crowdsourcing apps. Given the ubiquity of mobile devices, the data are timely and may be spatially and temporally dense, depending on population density, and are especially valuable in otherwise data-sparse regions. These data are used in various ways for warning issuance, nowcasting, verification and providing feedback on the entire high-impact warning chain that cannot be achieved in any other way. Weather-specific apps (e.g. mPing, Elmore et al. 2014) can solicit particular information, such as the occurrence of particular weather phenomena (e.g. tornadoes, waterspouts, hail and hail size, storm damage and visibility restrictions), and interactively generate maps or time series products. The frequency and spatial pattern of the reports can provide significant scientific insights. For example, for small-scale hazards such as damaging hail, maps at high temporal (5–10 min) and spatial (sub-kilometre) resolution can be produced to understand the evolution of the storm, to support damage surveys and to validate and verify radar and NWP products. The apps are available globally but limited by market penetration into the social media environment. Over time, through peer experience or in-app or on-line training, the quality of the reports should evolve and improve. Statistical or artificial intelligence techniques can be used for quality control. These data greatly expand, complement and supplement reports from trained volunteer spotters, which in turn can be used to quality control the reports from the general population.

7.3.2 Ground-Based Remote Sensing

Ground-based remote sensing systems can observe the entire chain of processes leading from land-atmosphere exchange, atmospheric boundary layer (ABL) development, convergence zone formation and evolution and convective initiation to the formation, evolution and decay of clouds, precipitation and other hazards. This capability depends on the remote sensing methodology, e.g. whether passive or active remote sensing is applied and which wavelengths are utilized. For observation of the pre-convective environment, wavelengths from the ultraviolet (UV) up to the infrared (IR) are required. For observation of clouds and precipitation, the microwave spectrum must be used. In passive remote sensing, the emission spectrum of the atmosphere itself or the transmitted radiation of the sun or the moon is used. As the atmospheric variables of interest, such as water vapour and temperature, are only indirectly contained in these observations, a retrieval is necessary, which requires a first guess and limits vertical resolution and accuracy. In active remote sensing using sound waves or electromagnetic waves, a direct derivation of the variables of interest is generally possible, which intrinsically increases the accuracy as well as the temporal and range resolutions of the results (see, e.g. Wulfmeyer et al. 2015).

Land-Atmosphere Exchange

Clear-air observations are required with vertical and temporal resolutions of metres and sub-seconds to resolve atmospheric profiles in the surface layer, from the canopy top to a height of about 100 m, including turbulence fluctuations. Unfortunately, this is a weak point of both passive and active remote sensing. Recently, the first surface layer scans of wind, temperature and moisture profiles with sufficient resolution and accuracy became possible (Wulfmeyer et al. 2015; Späth et al. 2016), enabling us to study flux-gradient relationships in the surface layer and to make comparisons with current theories such as the Monin-Obukhov similarity theory.

Pre-Convective Environment

The simplest way to obtain information about the pre-convective environment is provided by ceilometers and backscatter lidars. These instruments are typically operated in vertically pointing mode and are used for cloud height observations and for volcanic ash monitoring (Adam et al. 2016a). However, a simple backscatter lidar provides only limited information about atmospheric dynamics and thermodynamics. For studies of the pre-convective environment, observations of lower tropospheric wind, temperature and humidity fields with temporal and spatial resolutions of the order of minutes and 100 m are fundamental. Unfortunately, due to a severe lack of availability and coverage with suitable remote sensing systems, this area must be considered as terra incognita in earth system science. As clear-air measurements are required, these observations must be performed with Fourier transform infrared (FTIR) spectrometry, microwave radiometer (MWR), Global Navigation Satellite System (GNSS) and lidar techniques. With respect to thermodynamic profiling, an overview is given in Wulfmeyer et al. (2015). For wind measurements, the operation of Doppler lidar systems or clear-air radar wind profilers is state of the art. If operated in scanning modes, wind profiles can be derived with resolutions of 1 min and 50 m, with an accuracy of 0.5 m s−1 in the ABL. The performance of coherent Doppler lidars depends on the presence of aerosol particles in the range of interest, so that typically a very high resolution and accuracy are achieved in the atmospheric boundary layer, but this can degrade substantially at greater heights. Scanning Doppler lidars have been developed for wind shear detection at airports (Chan and Lee 2012; Nechaj et al. 2019) and for boundary layer wind profiling.
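A standard way to turn one conical scan of radial velocities into a wind profile is the velocity-azimuth display (VAD) technique: the radial velocity varies sinusoidally with azimuth, and a least-squares fit recovers the horizontal wind components. A minimal sketch with synthetic data (names and noise level are illustrative):

```python
import numpy as np

def vad_fit(azimuth_deg, radial_velocity, elevation_deg):
    """Velocity-azimuth display (VAD) retrieval: least-squares fit of
    Vr = u*sin(az)*cos(el) + v*cos(az)*cos(el) + w*sin(el)
    to radial velocities from one conical scan. Returns (u, v, w) in m/s."""
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    A = np.column_stack([np.sin(az) * np.cos(el),
                         np.cos(az) * np.cos(el),
                         np.full_like(az, np.sin(el))])
    sol, *_ = np.linalg.lstsq(A, radial_velocity, rcond=None)
    return sol

# Synthetic 5-degree-elevation scan of a westerly wind (u=10, v=-3 m/s)
rng = np.random.default_rng(1)
az = np.arange(0, 360, 5.0)
el = 5.0
true_u, true_v, true_w = 10.0, -3.0, 0.0
vr = (true_u * np.sin(np.radians(az)) * np.cos(np.radians(el))
      + true_v * np.cos(np.radians(az)) * np.cos(np.radians(el))
      + true_w * np.sin(np.radians(el))
      + 0.1 * rng.normal(size=az.size))   # instrument noise
u, v, w = vad_fit(az, vr, el)
```

Note that the vertical component w multiplies sin(el), which is small at the low elevations used for boundary layer profiling, so w is retrieved far less accurately than u and v.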

For vertical measurements of temperature and moisture, passive remote sensing systems such as FTIR and MWR can be applied. However, their vertical resolutions are rather limited: ~1000 m for FTIR and ~2000 m for MWR at 2000 m height, degrading further with altitude. For the determination of true profiles of water vapour and temperature with resolutions of minutes and 100 m vertically, Raman lidar and differential absorption lidar (DIAL) can be applied (Turner et al. 2002; Späth et al. 2016; Weckwerth et al. 2016). The new generation of Raman lidars permits temperature measurements throughout the troposphere, day and night (Lange et al. 2018). Water vapour Raman lidar permits measurements up to the lower troposphere during daytime and throughout the troposphere during night-time. Water vapour DIAL measurements have similar performance during daytime and night-time, with resolutions of minutes and a few hundred metres, up to the middle and upper troposphere depending on the atmospheric moisture content. Operational water vapour measurements using DIAL are now possible with low-power, compact systems (Weckwerth et al. 2016).

Clouds and Precipitation

Integrated, mainly vertically profiling observations have been deployed for a variety of climate and weather research investigations (Kollias et al. 2007a, b). Wind profilers, aerosol, Doppler and water vapour lidars, radiometers, ceilometers and short wavelength radars (W and Ka band) are maturing technologies that measure within the boundary layer and middle troposphere and have been shown to improve high-impact weather forecasts (Benjamin et al. 2004; Loehnert et al. 2007).

Radars are fundamental tools for the provision of rapidly developing hazardous weather warnings as they observe precipitation in the atmosphere in three dimensions with spatial and temporal resolutions better than 1 km and 5 minutes, respectively. Radars can “see” precipitation at long distances (250 km or more). They transmit microwave energy into, and receive reflected energy from, the raindrops and other scatterers in the atmosphere – including airplanes, the ground, insects and, if sensitive enough, clouds. The current generation of polarization diversity radars provides greater quality control and hydrometeor identification capabilities. In addition, some highly sensitive modern radars can observe reflections from the clear air, due to insects or to Bragg scattering from refractive index fluctuations (Knight and Miller 1998; Wilson et al. 1994; Fabry 2004; Fabry et al. 1997), to retrieve low-level winds and humidity fields that enhance the forecaster’s ability to observe precursor signatures of convective initiation and hence potentially extend the lead time for thunderstorm warnings (Wilson and Schreiber 1986). Substantial processing is required to produce precipitation products (Zhang et al. 2016), wind fields (Browning and Wexler 1968; Sun and Crook 1997) or precipitation types (Park et al. 2009). For data assimilation, radar data must be quality controlled to remove features that cannot be represented in NWP models. Hence, partnerships amongst radar specialists, forecasters and assimilation scientists are needed to deliver appropriate application-based quality-controlled radar products. As an example, ground clutter, generally considered a nuisance and often eliminated, has proven useful for monitoring variations in calibration, leading to improvements in the quality of precipitation products (Wolff et al. 2015), and for the retrieval of humidity (Fabry 2004).
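At its simplest, the precipitation estimation step rests on a Z–R power law between radar reflectivity and rain rate. The sketch below uses the classic Marshall-Palmer coefficients for illustration; operational products apply far more elaborate processing, and the coefficients are tuned by season, region and radar:

```python
def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Convert reflectivity in dBZ to rain rate in mm/h via the
    Z = a * R**b power law (Z in linear units, mm^6 m^-3).
    a=200, b=1.6 are the classic Marshall-Palmer values."""
    z_linear = 10.0 ** (dbz / 10.0)   # dBZ -> linear reflectivity
    return (z_linear / a) ** (1.0 / b)

# 40 dBZ corresponds to moderate-to-heavy rain of roughly 11.5 mm/h
r40 = rain_rate_from_dbz(40.0)
```

The logarithmic dBZ scale means a 10 dB increase in reflectivity corresponds to roughly a fourfold increase in rain rate under these coefficients, which is why small calibration errors matter so much for quantitative precipitation estimation.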

Due to the curvature of the earth and beam propagation paths, the radar observing range is limited to near ranges (~50 km) when observing low-level weather phenomena such as tornadoes, wind shear and precipitation type near the ground. Depending on the radar network, and largely due to cost, radars generally have spacings of 150–400 km, leaving substantial low-level (<1 km altitude) coverage gaps. These are exacerbated by blockage by local obstructions or complex terrain. Dense networks of limited-range small radars to sense the lowest levels of the atmosphere have been proposed (McLaughlin et al. 2009) and have been deployed for demonstration in several urban environments (Cifelli et al. 2018; Misumi et al. 2020; Chandrasekar et al. 2018).
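The ~50 km figure can be checked with the standard 4/3 effective-earth-radius beam height model. A minimal sketch (the 4/3 factor assumes standard atmospheric refraction):

```python
import math

EARTH_RADIUS_M = 6_371_000.0
KE = 4.0 / 3.0  # effective-earth-radius factor for standard refraction

def beam_height_m(range_m, elevation_deg, antenna_height_m=0.0):
    """Height of the radar beam centre above the surface at a given
    slant range and elevation angle, using the 4/3 earth radius model."""
    re = KE * EARTH_RADIUS_M
    el = math.radians(elevation_deg)
    return (math.sqrt(range_m**2 + re**2
                      + 2.0 * range_m * re * math.sin(el))
            - re + antenna_height_m)

# A 0.5-degree beam is ~600 m high at 50 km but >2.5 km at 150 km,
# overshooting tornado- and wind-shear-relevant levels.
h_near = beam_height_m(50e3, 0.5)
h_far = beam_height_m(150e3, 0.5)
```

Even at zero elevation the beam climbs above 3 km by 250 km range, which is why gaps between widely spaced radars cannot be closed simply by scanning lower.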

Combining networks of heterogeneous radars, operated by different agencies for different purposes, across multiple countries, and often of mixed technology, can extend and improve the coverage domain. This requires exchanging voluminous quantities of radar data and sharing data quality information, with resulting benefits to NWP assimilation, at reduced cost, increased efficiency of operations and higher quality (Lopez 2011).

Weather radars are the main requirement for nowcasting warnings of convective storms. S-band polarimetric Doppler radars with a one-degree beam width and good sensitivity are preferred. C-band radars are second in preference: for the same pulse repetition frequency, the unambiguous velocity is lower, and attenuation is higher. Bragg scattering detection is also considerably reduced with C-band radars. X-band radars suffer severe attenuation in regions of high rainfall rates and are thus limited to deployment in local networks with a spacing of tens of kilometres. Developing, operating, maintaining and sustaining operational radar networks is expensive, and these on-going costs must be considered before initial installation.

Commercial ground-based lightning networks have become ubiquitous. Lightning is itself a severe weather hazard, igniting wildfires and threatening personal safety and infrastructure. It is associated with convective weather, and so statistical relationships with heavy rain, hail and strong winds have been used to generate precipitation proxy products for convective storms. Lightning has also been assimilated into NWP to improve kilometre-scale predictions (Dixon et al. 2016). The lightning “jump” is a sudden increase in flash rate associated with the onset of severe weather (Chronis et al. 2015). The causal physical relationships still need to be understood, but jumps have already been used for warnings (Holle et al. 2016).
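A simplified sketch of jump detection, in the spirit of published “sigma-level” algorithms: flag intervals where the rate of change of the flash rate exceeds its recent mean by a chosen number of standard deviations. The window length, threshold and data here are illustrative, not any operational configuration:

```python
import numpy as np

def lightning_jumps(flash_counts, dt_min=2.0, sigma_level=2.0, history=5):
    """Flag intervals where the rate of change of the flash rate
    exceeds the mean of the preceding `history` values by
    `sigma_level` standard deviations. flash_counts are flashes per
    dt_min-minute interval; returns indices into flash_counts."""
    rate = np.asarray(flash_counts, dtype=float) / dt_min  # flashes/min
    dfrdt = np.diff(rate) / dt_min                         # flashes/min^2
    jumps = []
    for i in range(history, dfrdt.size):
        window = dfrdt[i - history:i]
        mu, sd = window.mean(), window.std()
        if sd > 0 and dfrdt[i] > mu + sigma_level * sd:
            jumps.append(i + 1)  # map back to flash_counts index
    return jumps

# Steady storm that suddenly electrifies at interval 10
counts = [4, 5, 4, 6, 5, 5, 6, 5, 6, 5, 30, 42, 55]
```

Normalizing by the recent variability, rather than using a fixed flash-rate threshold, lets the same test work for both weakly and strongly electrified storms.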

7.3.3 Satellite Remote Sensing

Weather satellites are the backbone of the global weather observing system (Fig. 7.5). The principal satellite orbits are Geostationary Earth Orbit (GEO) and Low Earth Orbit (LEO) which provide different perspectives of the atmosphere and the earth (WMO 2020). GEO satellites are located at 35,786 kilometres above the earth’s surface with an orbit matching the earth’s rotation so that the earth and atmosphere can be monitored continuously at the same satellite sub-point. They are the primary source of near-real-time imagery used for nowcasting and the detection of rapidly evolving high-impact environmental phenomena (Goodman et al. 2018, 2019; Schmit et al. 2017, 2018). LEO satellites orbit at about 800 km above the surface, viewing the whole earth twice a day in multiple passes, each at the same local times. These satellites provide the primary source of temperature and humidity profiles of the atmosphere for use in NWP. Together, satellites in the LEO and GEO orbits provide a broad spectrum of atmospheric, land and ocean measurements used in weather forecasting and analysis (Table 7.2).
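The 35,786 km GEO altitude follows directly from Kepler's third law: it is the orbital radius whose period matches one sidereal day, minus the earth's equatorial radius. A quick check with standard constants:

```python
import math

GM_EARTH = 3.986004418e14   # earth's gravitational parameter, m^3 s^-2
SIDEREAL_DAY_S = 86_164.1   # one rotation of the earth, seconds
EARTH_RADIUS_KM = 6_378.1   # equatorial radius

def geostationary_altitude_km():
    """Kepler's third law, T^2 = 4*pi^2 * a^3 / GM, solved for the
    semi-major axis a with T equal to one sidereal day; the altitude
    is a minus the equatorial radius."""
    a_m = (GM_EARTH * SIDEREAL_DAY_S**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)
    return a_m / 1000.0 - EARTH_RADIUS_KM
```

Using the sidereal day (not the 86,400 s solar day) is what keeps the satellite fixed over the same sub-point as the earth rotates beneath it.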

Fig. 7.5
figure 5

Space-based component of the global observing system. (Source: WMO)

Table 7.2 Satellite backbone with specified orbital configuration and measurement approaches

The new-generation international “GEO-Ring” satellite constellation (Fig. 7.5) provides full disk earth and atmosphere imagery and derived products (e.g. cloud mask, cloud height, cloud phase, precipitable water, stability indices, winds) every 10 minutes and at high frequencies of 1–2.5 min over limited areas. The GEO cloud/moisture-derived atmospheric motion vectors (Fig. 7.6) are widely used in global NWP to fill gaps in the global radiosonde network. Information about winds at different levels, areas of wind shear or jet maxima can be identified. Wind vectors are computed using both visible and infrared spectral bands (GOES 2021).
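The core of atmospheric motion vector derivation is tracking a cloud or moisture feature between sequential images, typically by maximizing a cross-correlation over a small search window. The sketch below is a deliberately brute-force toy version with synthetic imagery (all names and numbers are our own); operational AMV processing adds target selection, quality flags and height assignment:

```python
import numpy as np

def track_patch(img0, img1, top, left, size=16, search=8):
    """Find the displacement (dy, dx) of a patch of img0 within img1
    by maximising the normalised cross-correlation over a +/-search
    pixel window -- the idea behind motion vector derivation."""
    patch = img0[top:top + size, left:left + size].astype(float)
    patch = patch - patch.mean()
    best, best_dydx = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = img1[top + dy:top + dy + size,
                        left + dx:left + dx + size].astype(float)
            cand = cand - cand.mean()
            denom = np.sqrt((patch**2).sum() * (cand**2).sum())
            if denom == 0:
                continue
            score = (patch * cand).sum() / denom
            if score > best:
                best, best_dydx = score, (dy, dx)
    return best_dydx

# Synthetic cloud field advected 3 pixels eastward between two images
rng = np.random.default_rng(2)
field = rng.random((64, 64))
img0 = field
img1 = np.roll(field, shift=3, axis=1)
dy, dx = track_patch(img0, img1, top=24, left=24)

# With a 2 km pixel and 10 min between images, dx = 3 would imply
# a zonal speed of 3 * 2000 m / 600 s = 10 m/s.
```

The wind estimate only becomes a usable observation once a height is assigned to the tracked feature, which is the dominant error source for AMVs in practice.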

Fig. 7.6
figure 6

Derived motion wind vectors (DMW) from the GOES East (GOES-16) Advanced Baseline Imager overlaid on a GeoColor false colour RGB image (Miller et al. 2020) at 13 UTC on 21 October 2020. Hurricane Epsilon (29.9°N, 58.8°W) in the central Atlantic was a Category 1 hurricane at this time with maximum sustained winds of 74 kt (85 mph). The wind speed and direction, derived using sequential images, are one of the most important inputs assimilated into the global NWP models, most notably filling gaps in data-sparse areas © NOAA, 2020

The spectral bands of these new imagers in the visible and infrared portions of the electromagnetic spectrum can be combined in various ways to make decision aids and products for nowcasting and short-range forecasting (e.g. fog, smoke, air mass classification and dust, amongst others). The NOAA GOES and the EUMETSAT MTG (Meteosat Third Generation) also have lightning imagers that provide storm-scale day/night imaging of lightning discharges including their radiant energy, areal extent and propagation.

With the advent of multispectral imagers such as MODIS and the Joint Polar Satellite System (JPSS, Goldberg et al. 2018) VIIRS (Visible Infrared Imaging Radiometer Suite), LEO satellites are also used increasingly as input to forecaster decision aids. Radiometer and spectrometer instruments in LEO may be active (radars, scatterometers, altimeters, lidars) or passive (multispectral visible (VIS)/near-infrared (NIR)/thermal infrared (TIR) imagers; IR and microwave (MW) sounders). Atmospheric soundings of the vertical temperature and moisture structure of the atmosphere are key contributions for assimilation into NWP. The LEO satellite constellation’s infrared and passive microwave sounders (Menzel et al. 2018) provide complementary information in clear and cloudy atmospheres, as clouds are opaque in the infrared part of the spectrum and largely transparent at microwave frequencies. Operating them together makes it possible to cover a broader range of weather conditions. Infrared sounders have better horizontal and vertical resolution, while microwave sounders, although having lower resolution (~10s of km), can observe the earth’s atmosphere and surface day and night, even through intervening clouds.

The Global Precipitation Measurement (GPM) mission uses multiple satellites. The core has two primary instruments, a dual-frequency precipitation radar (DPR) and a GPM passive microwave imager. The DPR consists of a Ku-band precipitation radar (KuPR, 13.6 GHz) and a Ka-band precipitation radar (KaPR, 35.5 GHz), both having 5 km spatial resolution at nadir and covering a swath width of 245 km. The DPR is more sensitive than its TRMM predecessor especially in the measurement of light rainfall and snowfall in mid-latitude regions. Rain/snow determination uses the differential attenuation between the Ku band and the Ka band. The GPM microwave imager is a multi-channel, conical-scanning, microwave radiometer that serves as both a precipitation and a radiometric standard for the other GPM international partner satellites. It has 13 microwave channels ranging in frequency from 10 GHz to 183 GHz. The GPM core and its partners are combined with GEO imagers, to create a widely used precipitation product called IMERG (Huffman et al. 2019a, b) that is updated every 30 minutes through temporal morphing of the instantaneous rainfall fields and is widely used in nowcasting, NWP and flood/landslide monitoring (Kirschbaum et al. 2017; Kirschbaum and Stanley 2018).

A constellation of satellites can fly in formation to produce synchronized data from several different instruments (Stephens et al. 2018). Current and planned constellations and future CubeSat swarms of sensors may greatly augment the capability of the global observing system and increase the revisit frequency from twice per day to perhaps hourly or better, making these data of potentially great interest and value for nowcasting and regional- to global-scale NWP. Intercalibration of these measurements will be a challenge with each instrument providing a different view geometry and atmospheric path.

Global Positioning System (GPS) radio occultation is another important satellite measurement for NWP data assimilation and is complementary to the infrared and microwave radiances observed by atmospheric sounders. The highly precise radio occultation signal is measured by the Global Navigation Satellite System. It is affected by the density, the moisture content and hence the refractive index of the atmosphere. This alters the propagation path and time of the signal between a GPS satellite and a receiver on a LEO satellite from which the atmospheric temperature and humidity can be retrieved to produce upper-troposphere to lower-stratosphere temperature profiles and lower-troposphere humidity profiles (Menzel et al. 2018).
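The quantity that radio occultation bending angles ultimately constrain is atmospheric refractivity, commonly expressed by the two-term Smith-Weintraub formula: a "dry" term proportional to total pressure and a "wet" term proportional to water vapour pressure. A minimal sketch (neutral atmosphere only; the full expression adds an ionospheric term at occultation altitudes):

```python
def refractivity(p_hpa, t_kelvin, e_hpa):
    """Smith-Weintraub refractivity N = (n - 1) * 1e6 for the neutral
    atmosphere: dry term from total pressure p (hPa) and temperature
    T (K), wet term from water vapour partial pressure e (hPa)."""
    return 77.6 * p_hpa / t_kelvin + 3.73e5 * e_hpa / t_kelvin**2

# Near the surface (p=1000 hPa, T=290 K, e=10 hPa) N is ~312 N-units,
# i.e. a refractive index n of about 1.000312.
n_surface = refractivity(1000.0, 290.0, 10.0)
```

Because the wet term scales with e/T², the moisture signal dominates in the warm lower troposphere while the dry (density) signal dominates aloft, which is why occultation yields temperature in the upper troposphere and stratosphere but humidity lower down.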

7.3.4 Aircraft Reconnaissance of Tropical Cyclones

Tropical cyclones (TCs) plague coastal communities around the world, threatening millions of people and causing many billions of dollars in damage to infrastructure – impacts that are increasing as coastal development continues worldwide. These impacts result in severe consequences in all affected ocean basins.

Many platforms are available for observing TCs, including airborne (both manned and unmanned), spaceborne and ground-based (Rogers et al. 2019). Each of these brings advantages and disadvantages to the challenge of observing TCs. For example, spaceborne platforms provide global coverage, but are generally unable to measure structures within the inner core. Aircraft can provide this inner-core information, but their range is limited, and even in the Atlantic basin, only about 35% of TCs are sampled.

The USA has a long history of airborne TC reconnaissance, dating back to the 1940s. Currently, the two main agencies responsible for airborne reconnaissance are the National Oceanic and Atmospheric Administration (NOAA), which operates two WP-3D hurricane-penetrating aircraft and one G-IV high-altitude jet for environmental surveillance, and the Air Force 53rd Weather Reconnaissance Squadron, which operates C-130J aircraft with capabilities similar to the WP-3Ds. An exciting development in recent years is the proliferation of airborne reconnaissance capabilities in other TC-prone regions of the world. Taiwan has carried out the DOTSTAR (Dropwindsonde Observations for Typhoon Surveillance near the Taiwan Region) programme using a high-altitude ASTRA jet since 2003. The Hong Kong Observatory (HKO) began flying reconnaissance missions for TCs over the northern part of the South China Sea in 2011 and continues to do so with a Bombardier Challenger jet aircraft. Japan uses a high-altitude G-II jet as part of its T-PARC II (Tropical cyclone-Pacific Asian Research Campaign for Improvement of Intensity estimates/forecasts) project, begun in 2016. The Shanghai Typhoon Institute (STI), in conjunction with HKO, has used a variety of airborne platforms in its Experiment on Typhoon Intensity Change in Coastal Area (EXOTICCA), begun in 2014.

Airborne instruments, both in situ and remote sensing, are used to sample the kinematic and thermodynamic characteristics of the TC inner core and its environment. Conventional instruments include the dropsondes, which provide profiles of temperature, moisture, pressure and winds; airborne Doppler radar, which provides three-dimensional distributions of reflectivity and horizontal and vertical winds in precipitation; flight-level measurements of basic state variables; and stepped frequency microwave radiometer nadir measurements of surface brightness temperatures, which can be used to infer surface wind speed. New technologies are continually being developed, including a variety of low-level and upper-level unmanned aerial systems (UAS; e.g. Braun et al. 2016; Cione et al. 2020; Wick et al. 2020), rocket sondes launched over the top of TCs in the South China Sea (Lei et al. 2017), lidars for the retrieval of kinematic and thermodynamic information when optically thick clouds are not present (Bucci et al. 2018) and dropsondes with infrared sensors to estimate sea surface temperature and provide co-located atmospheric and surface temperature and moisture needed for surface flux estimates (Zhang et al. 2017), to name just a few.

Depending on the platform, measurements are taken in the inner core to provide information vital to operational centres for accurate assessment of TC position and intensity, or they are taken in the environment in data-sparse regions over the ocean to sample features expected to impact the future track of the TC. Typically, missions sampling the inner core are performed every 6–12 h when a TC is a potential threat to land and even more frequently (e.g. every 3 h or less) when landfall is imminent. For environmental sampling, missions may be flown every 12–24 h.

In terms of value to the forecasting community, the main goals of TC airborne data collection (Rogers et al. 2006, 2013) are to (1) collect observations that span the TC life cycle in a variety of environments for model initialization and evaluation; (2) develop and refine measurement strategies and technologies that provide improved real-time monitoring of TC intensity, structure and environment; and (3) improve understanding of the physical processes important in track, structure and intensity change for a TC at all stages of its life cycle. When reported in real time and combined with other platforms, e.g. satellites and ground-based sensors, these data can be a powerful tool to provide situational awareness to the forecaster and input to NWP models. Figure 7.7 shows an image combining the near-surface wind field observed by airborne Doppler radar on the WP-3D with cloud-to-ground lightning detected in Hurricane Lane (2018). It was generated in the Advanced Weather Interactive Processing System (AWIPS) used by forecasters at the National Hurricane Center (NHC) to make real-time assessments and forecasts of TC position, structure and intensity. Such a capability provides an unprecedented opportunity to assess TC inner-core structure in real time and make more informed predictions of intensity changes, at least in the short term (e.g. 6–12 h).

Fig. 7.7
figure 7

Image in AWIPS-II showing 0.5 km wind speed (shaded, kn) from airborne Doppler radar and GOES-15 infrared image with superimposed cloud-to-ground lightning strikes (white “plus” and “minus” signs) at 1700 UTC in Hurricane Lane (2018). (Image courtesy of Stephanie Stevenson, NHC; lightning data courtesy of Vaisala)

7.4 Bridges: To Forecasts from Observations

7.4.1 Overview

The gap between observationalists and prediction practitioners/forecasters arises from several pragmatic issues, including the complexity of the individual components of the forecasting system and the differing pace of progress across them. The specifics of service and warning requirements evolve over time with advances in scientific understanding and technology. Solutions require research and development and technology transfer processes combined with long-term implementation strategies and investments. Figure 7.8 provides a simplified schematic showing the gaps, pathways and bridges from observations to weather warnings. In the following text, the numbered sequence corresponds to numbered items in grey boxes in Fig. 7.8.

Fig. 7.8
figure 8

Gaps and pathways bridging observationalists and forecast practitioners amongst research, operations and hazard warnings. See text for details

  • [1] Existing or anticipated future requirements (such as high-resolution urban observations, hazard impacts for verification, low cost) provide the starting point for the development of new observation technologies. A comprehensive study of gaps in current capability should be informed by user needs, model sensitivity studies and instrument design expertise.

  • [2] Requirements are translated into instrument concepts, based on knowledge of existing sensor and platform technologies that may have been developed in other fields of science and engineering. Within a dedicated collaboration, interaction between atmospheric scientists and instrument experts should be continuous, with funding for design and development of suitable instruments as new requirements emerge. Collaboration of this sort has contributed to the successful development of satellite remote sensing and would benefit ground-based observation.

  • [3] New scientific insights arise from new observations and their analysis, and are transferred to operations by early adopters in an ad hoc fashion [4]. Journal, conference and “grey” literature (internet) publications facilitate information exchange. Theoretical or laboratory studies from academia or research institutes lead to improved physical parameterizations and guide NWP development (e.g. sub-grid-scale and physical parameterizations, process rates, urban surface-canopy representations) [5].

  • [6] NWP models improve their resolution, representation of physics, etc. For models with an operational focus, use of existing available observations [7] remains the focus of data assimilation programmes [8]. Many existing operational observations are still not optimally used or assimilated by operational NWP despite the shortage of high-resolution data (e.g. Raman lidars: see Adam et al. 2016b; Thundathil et al. 2020).

  • [9] While some technologies are implemented through replacement or upgrade programmes (e.g. radar upgrade to polarization), the value of new technologies (e.g. ADM Aeolus, GOES-R) and network design may need to be demonstrated through improvements to forecast accuracy and cost-benefit analyses using observation system simulation experiments. For these, careful choice of success metrics is needed.

  • [10] On the global scale, there are existing protocols, standards and agreements for data sharing. For high-resolution weather data, new long-term partnerships, data sharing and quality control protocols need to be put in place, before research-operational prediction/forecast practitioners fully adapt [11], enabling the benefits to reach the hazard warning community [12].

Figure 7.8 also identifies other gaps to bridge both within and beyond the weather forecast system. The most intense partnerships between the observationalist and forecast practitioner occur in the “technology transfer or development” phase during research and development. The biggest gap is in the transfer of new monitoring technologies from research to operations. For operational implementation, developments and investments to reduce risk in terms of cost, support and maintenance cannot be overlooked, particularly with limited operating budgets. The success metric for operational monitoring is to ensure the reliable, accurate and timely delivery of observations and to protect the historical climate record. In this context, the culture is to maintain the status quo, and innovations in technology or procedures are appropriately approached with great caution. Successful innovation therefore requires partnerships between forecasting operations and research observationalists with the knowledge to properly formulate technical specifications and to demonstrate utility and cost-benefits. Since users of forecasts and warnings only have access to operational data and products, their early involvement in development projects or testbeds is needed to quantify benefits.

7.4.2 New Technology Development

New observing instruments are developed as technology evolves and becomes cost-effective. For example, lidars that measure moisture in the atmosphere are now available, using commercial off-the-shelf equipment, but are not yet deployed in operational weather observing networks. Humidity and precipitation information can also be retrieved by processing microwave signals that the atmosphere delays or attenuates, exploiting radar ground clutter, inter-cell tower links or GPS signals. Social media, sensors on mobile phones and on cars (including radars and lidars) and crowdsourcing have provided new opportunities to be exploited to observe the weather, to verify and validate forecasts and to determine hazard impacts.
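As a rough illustration of the GPS pathway, the zenith wet delay (ZWD) estimated from GPS signal processing can be converted to precipitable water using a temperature-dependent factor (the widely used Bevis-style conversion). The refractivity constants below are commonly cited values, and the function name is our own; this is a sketch, not an operational retrieval:

```python
def pw_from_zwd(zwd_mm: float, tm_kelvin: float) -> float:
    """Convert GPS zenith wet delay (mm) to precipitable water (mm).

    Bevis-style conversion: PW = Pi * ZWD, where the dimensionless
    factor Pi depends on the weighted mean atmospheric temperature Tm.
    Constants are commonly cited refractivity values, used here for
    illustration only.
    """
    k2_prime = 22.1   # K/hPa
    k3 = 3.739e5      # K^2/hPa
    rv = 461.5        # J/(kg K), specific gas constant of water vapour
    rho_w = 1000.0    # kg/m^3, density of liquid water
    pi_factor = 1.0e8 / (rho_w * rv * (k3 / tm_kelvin + k2_prime))
    return pi_factor * zwd_mm
```

For a typical mean temperature near 270 K the factor is about 0.15, so a 100 mm wet delay corresponds to roughly 15 mm of precipitable water.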

Assimilation of data is model dependent. Observations are often indirect measures (e.g. polarization radar reflectivity variables are used to estimate precipitation rate, type and winds) and must be converted into model variables, while the scale of the observations must be filtered or processed to match the model resolution. These differences can lead to misunderstandings and misinterpretations of the quantified impact or verification of new observations or monitoring networks. It is very much a case of “one person’s garbage being another’s treasure”, and quality control is an important issue.
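The conversion and quality-control steps described above can be sketched with a minimal observation operator and a gross-error check. All names, profiles and thresholds are illustrative and not those of any particular assimilation system; real observation operators (e.g. for radar or satellite radiances) are far more complex:

```python
import bisect

def observation_operator(model_heights, model_values, obs_height):
    """Map the model state into observation space: here, simple linear
    interpolation of a model profile to the observation height."""
    i = bisect.bisect_left(model_heights, obs_height)
    if i == 0:
        return model_values[0]
    if i == len(model_heights):
        return model_values[-1]
    z0, z1 = model_heights[i - 1], model_heights[i]
    w = (obs_height - z0) / (z1 - z0)
    return (1 - w) * model_values[i - 1] + w * model_values[i]

def gross_error_check(obs, model_equiv, obs_err, bg_err, n_sigma=3.0):
    """Accept an observation only if its innovation (observation minus
    model equivalent) is within n_sigma of the combined expected error.
    Returns (accepted, innovation)."""
    innovation = obs - model_equiv
    tolerance = n_sigma * (obs_err ** 2 + bg_err ** 2) ** 0.5
    return abs(innovation) <= tolerance, innovation
```

For a hypothetical temperature profile of 288, 285 and 282 K at 0, 500 and 1000 m, an observation at 750 m is compared against the interpolated 283.5 K: an observed 283.0 K passes a 3-sigma check (0.5 K observation error, 1.0 K background error), whereas an observed 295 K would be rejected as a gross error.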

Requirements are generally well known and largely unfulfilled as expectations continually increase. Innovation often originates through recognizing and filling gaps between a technology and an application or requirement. Therefore, partnerships between academia, industry and governments to fund technology innovation, development, demonstration and implementation are required to exploit the plethora of opportunities.

7.4.3 Demonstrations, Testbeds and Technology Transfer

The added benefit of new technology is assessed, verified and validated through inter-comparison with existing accepted standards. For example, winds from wind profilers, Doppler radars or lidars and temperature and humidity profiles from satellites, radiometers, DIAL lidars or AMDAR are compared against radiosondes. New satellite sensors are often first deployed on the ground, then on aircraft and perhaps on demonstrator satellite missions. With the critical role of numerical weather models in high-impact weather forecasting, assimilation studies and demonstrations are now a de facto requirement to bridge the gap between observations and forecasts. Assimilation studies may be conducted in limited field projects or through simulations, across various scales and weather scenarios. These studies take into consideration the errors in the observations and contribute to determining the optimal observation requirements. Good examples include the justification of the ADM Aeolus Doppler Wind Lidar (Reitebuch 2012) and the Global Precipitation Measurement (Hou et al. 2014) satellite missions.

In recent years, the World Weather Research Programme (WWRP) of the WMO has conducted Research and Development Projects (RDPs) and Forecast Demonstration Projects (FDPs) to accelerate and focus progress in key research areas and to demonstrate multiple prediction systems in parallel as part of the research-to-operations technology transfer process (Keenan et al. 2003; Fig. 7.8). The premise is that advances are made through cooperation and collaboration, taking advantage of experts working on a high-profile project with a very firm deadline. Working on common data across a common domain demonstrates the applicability, strengths, weaknesses and transferability of the technology and allows for proper comparison of the results through verification. The ability to successfully implement and demonstrate provides insights into the maturity of the technology. Demonstration projects are also important in terms of global capacity building, as many of the advances in high-impact warnings are technically challenging, requiring significant research and operational resources that are not affordable for all countries (see, e.g. WMO-HIGHWAY 2021). Local relevance and application must be demonstrated, and capacity-building training and workshops are a critical step, both in the technology transfer process and in tailoring to different weather regimes and local or national organizations and infrastructures.

Testbeds are on-going long-term programmes and are the next step in bridging the research-operations gap. They are essentially mini-weather services set up to test out new technologies (observations, NWP, products, paradigms) within and external to the weather forecasting system. Particularly valuable are testbeds attached to major observatories such as the Meteorological Observatory Lindenberg (MOL) of DWD in Germany, the Payerne Observatory of MeteoSwiss and the ARM research facilities at the Southern Great Plains (SGP) site. A key element is the iterative aspect where the technology can be developed over time and improved with feedback from users. Another key aspect is the participation of hazard researchers and forecast users with access to pre-operational datasets and products (DFW 2021; HMT-WPC 2021). This allows for the co-development and co-design of the system and products over time, to develop institutional/community partnerships and trust.

The introduction of new technology in the forecast office does not always result in immediate gains, particularly given heightened expectations and the complexity of the new tools and their consequences (Pliske et al. 1997; Pliske et al. 2004). Introducing new warning services requires forecasters to have more extensive knowledge of the weather, the user, observation limitations and the complexity of the prediction system (NWP, data-driven predictions, system concepts) and its products, together with effective access to relevant critical information. Expertise and decision-making skills are required to take advantage of innovations (Klein 2000; Andra et al. 2002; Hoffman et al. 2018). It takes time, even for an experienced forecaster, to re-develop the expertise, abilities and skills to adapt, adopt and exploit new innovations. Rapid development of judgment and decision-making expertise (e.g. identification of cues, maintaining situational awareness, consideration of alternative scenarios, managing second-guessing and maintaining self-awareness) has been demonstrated through scenario training and simulation (Klein 1998; WDTD 2021).

Technology transfer or adoption, particularly at the professional forecaster level, is a social diffusive process (Rogers 2003; Fig. 7.9). Innovations or new technologies are first discovered by “enthusiasts” within the forecast office or community, even prior to implementation, which begins the change process. However, their opinions are not necessarily trusted or followed by others because of their high-risk, technology-biased perspective. Over time, their enthusiasm may diffuse to a group identified as “early adopters”, who see the value and worthiness of the new technology and are able to demonstrate its use and effectiveness, similar to the role of FDPs (described above). They become recognized, respected and trusted for their opinions. Their ownership of the new technology persuades “early pragmatists” that it offers an easier and more effective way to perform their job functions. The remaining forecasters then follow the trend or peer pressure. A small group may remain who are inherently resistant and may need to transfer to a different task. One implication is that standard training practices aimed at the “majority” are perhaps misplaced and that training should initially focus on, and be tailored to, the needs and personalities of the “early adopters”, who then develop the training materials for the others.

Fig. 7.9
figure 9

The technology transfer process. (Adapted from Rogers 2003)

7.4.4 Strategic Planning: Integrated Observing Network Design

Given the complexity and long implementation time frames, long-term agile technology transfer, strategic plans and frameworks are critical. Integrated meteorological monitoring addresses the observation and measurement of the state and processes of the atmosphere and related geophysical and anthropogenic systems by a heterogeneous set of measurement technologies. Integrated design is the planning and design of interoperable upper-air and surface in situ technologies and surface- and space-based remote sensing technologies, together with data fusion capabilities, to meet the needs of different application areas. Ideally, the integrated network design achieves cost-efficiencies by considering observation equipment capabilities and capacities (e.g. altitude, spatial and temporal resolution, observation accuracy and uncertainty), the local weather and climate characteristics, topography (installability, representativeness), underlying surface characteristics (proneness to geological disasters, ecological protection, etc.) and population distribution.

New technology must be combined with existing equipment in a way that is demonstrated to add value to the global observation network. Separate (non-integrated) networks lead to inefficient use of human and material resources and waste of space and may be environmentally or occupationally hazardous. To resolve these issues and to set priorities for operational deployment of new technologies, the World Meteorological Organization has established a Rolling Review of Requirements (RRR) process to identify and prioritize gaps between observation capability and application requirements, to optimize the large investments that need to be made at global and local scales and to find the optimal layout and balance of observation facilities that meets the requirements of different spatial and temporal resolutions covering all space and time scales.

The RRR process provides the scientific justification for the monitoring network for different user applications in 14 categories that include numerical models, marine, transportation, agriculture, energy, etc. The RRR method analyses the gap between observation capabilities and different application requirements and then designs and implements the network taking into consideration the observation capabilities of different instruments. For example, in order to satisfy the numerical model prediction requirement (initial value and verification), the optimum geographic locations of surface stations are based on NWP assimilation and verification (Riishojgaard 2017). In China, a national surface network design satisfying numerical forecast requirements was completed after 3 years of testing and evaluation. Similarly, the weather radar network was designed taking into account satellite monitoring capabilities, radar coverage, population and economic zone distribution, topographic and geomorphic features and installation and maintenance factors. Network design also needs to respect the existing observations network (e.g. for continuity of the climate record), which may mean reconfiguring existing networks to fill the gaps in sparsely observed areas.
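The gap analysis and layout optimization the RRR supports can be posed, in highly simplified form, as a coverage-optimization problem. The sketch below (all sites, demand points and the radius are hypothetical; real designs also weigh topography, maintenance, satellite capability and climate-record continuity) uses a greedy algorithm that repeatedly picks the candidate site covering the most still-unserved demand points:

```python
def greedy_network_design(candidates, demand_points, radius, n_stations):
    """Choose up to n_stations sites from candidates, each covering the
    demand points within `radius`, maximizing coverage greedily.
    A deliberately simplified stand-in for multi-criteria integrated
    network design; coordinates are (x, y) pairs in arbitrary units."""
    def covers(site, point):
        dx, dy = site[0] - point[0], site[1] - point[1]
        return dx * dx + dy * dy <= radius * radius

    uncovered = set(range(len(demand_points)))
    chosen = []
    for _ in range(n_stations):
        best_site, best_gain = None, set()
        for site in candidates:
            if site in chosen:
                continue
            gain = {i for i in uncovered if covers(site, demand_points[i])}
            if len(gain) > len(best_gain):
                best_site, best_gain = site, gain
        if best_site is None or not best_gain:
            break  # no remaining candidate adds coverage
        chosen.append(best_site)
        uncovered -= best_gain
    return chosen, uncovered
```

The greedy choice is a classic approximation for maximum-coverage problems; the `uncovered` set returned afterwards identifies the residual observation gaps.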

The spatial and temporal resolutions of gridded data are defined at three levels (basic, objective and ideal) for different application areas. For example, for temperature profiles used by regional numerical models, the basic requirement is a spatial and temporal resolution of 1 km / 5 h at ground level, 5 km (horizontal) × 0.45 km (vertical) / 1 h at the bottom of the troposphere and 25 km (horizontal) × 1.5 km (vertical) / 1 h at the tropopause. Other application areas have their own standards.
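Requirement tiers of this kind lend themselves to a simple lookup table for automated gap analysis. The basic-level numbers below are those quoted above for regional-model temperature profiles; the structure, level names and function are illustrative only:

```python
# Basic-level temperature-profile requirements quoted in the text for
# regional NWP use; the "objective" and "ideal" tiers would be stricter.
BASIC_TEMP_PROFILE_REQS = {
    "surface":         {"dx_km": 1.0,  "dz_km": None, "dt_h": 5.0},
    "low_troposphere": {"dx_km": 5.0,  "dz_km": 0.45, "dt_h": 1.0},
    "tropopause":      {"dx_km": 25.0, "dz_km": 1.5,  "dt_h": 1.0},
}

def meets_basic(level, dx_km, dt_h, dz_km=None):
    """Check a network's resolution against the basic requirement:
    every resolution must be at least as fine as (<=) the threshold."""
    req = BASIC_TEMP_PROFILE_REQS[level]
    ok = dx_km <= req["dx_km"] and dt_h <= req["dt_h"]
    if req["dz_km"] is not None:
        ok = ok and dz_km is not None and dz_km <= req["dz_km"]
    return ok
```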

It is necessary to establish a data management system to support the analysis of three-dimensional real-time gridded fields. Requirements, data sharing, data management, quality control and data exchange are issues at the core of cooperatively building an observation network amongst national meteorological services and third-party providers. The WMO is a recognized authority for cooperation in meteorological services and provides substantial guidance in all of these key issues. New technologies require that standards, procedures and requirements are regularly updated.

Basic principles for operation and maintenance of observation stations and data are as follows:

  • Consider both the scientific basis and operational characteristics of the instruments, including temporal and spatial resolutions as well as measurement accuracy.

  • Categorize observing stations as surface (land and sea surface), upper air and/or space and as contributing to global, regional and/or local needs.

  • Meet all domestic laws, regulations and national standards as well as common practice.

  • Maintain overall stability/consistency through processing and dynamic adjustments that take into account the needs of operational developments, station categories and management levels, to realize “multi-purposes in one station”.

  • Manage all data, incorporating observations and meta-data collected by others. Observation stations contributed by other agencies, industry, volunteers and crowdsourcing need to be included.

7.5 Examples

Box 7.1 Satellite Observations

Steve Goodman

Satellite weather missions provide an example of a mature, formal process for bridging the observationalist-forecasting practitioner gap. Generally, all missions follow the same process: a nascent idea is first proposed in a competition that in some cases spans multiple sciences. The WMO Coordination Group for Meteorological Satellites (CGMS 2021a) is the primary international partnership body; its main goals are to support operational weather and climate monitoring and forecasting end-to-end in response to user requirements formulated by WMO and other international agencies. Impact and benefit studies are critical to justify a proposal (ESA 2021; JAXA 2021; NASA 2021a; WMO 2021). Workshops are conducted to consider stakeholder needs and to identify the next-generation constellation of satellites and instruments (NASA 2021b; CGMS 2021b). For example, with the recent increase and impact of wildfires and hurricanes in a warming climate, high-resolution multispectral VIS/IR imagers, IR hyperspectral sounders and lightning imagers will form the backbone of the future geostationary satellite constellation (GEO-Ring).

The process includes the following steps:

  1. Proposals are evaluated by an independent panel of science and technical experts.

  2. The evaluation is performed against stated mandates, strategic visions, scientific importance and impact, technical feasibility, cost (including life cycle costs and organizational impacts) and maturity (of both the science and the technology). The evaluation is difficult as it must compare conflicting goals and objectives across different sciences.

  3. Several missions may be selected for further study or demonstration or to resolve potential issues. Feasibility and trade-off studies may be initiated.

  4. Use of the data or products by practitioners must be demonstrated. Proposed user products are scrutinized by forecasters (or their surrogates), and data are assimilated by appropriately modified prediction systems and their impacts established, thereby ensuring that the observationalist-prediction practitioner gap is bridged. An example is the Atmospheric Dynamics Mission, for which benefits were quantified well before launch (ESA 1999; Tan and Andersson 2004; Tan et al. 2007).

  5. Feedback from the evaluation and impact studies is used to refine the proposals, which are then re-evaluated, and missions are selected for implementation in a final competition.

Exploitation of satellite data for NWP in the USA is facilitated by the Joint Center for Satellite Data Assimilation (JCSDA 2021). It is a partnership between observationalists and prediction practitioners to shorten the time to operational use of satellite data, particularly data from limited-lifetime platforms (e.g. science missions), in NWP models. Integrated modelling systems replicate operational capability to quantify the expected impacts of new data sources on forecast accuracy. The JCSDA transitions this research to operational and university communities through a robust data infrastructure and open-source software. Successful transitions of advanced satellite data into operations include QuikSCAT winds, MODIS winds, GOES-R winds, Atmospheric Infrared Sounder (AIRS) data and Suomi NPP (National Polar-orbiting Partnership) CrIS (Cross-Track Infrared Sounder) and ATMS (Advanced Technology Microwave Sounder) data.

Box 7.2 Aviation Partnerships

George Isaac, Jim Wilson, Ping Wah Peter Li and Paul Joe

Aviation has a long history of partnership amongst the forecasting community, air traffic management, pilots, airlines, service providers and other aviation stakeholders. It is one of the best examples of how information from observations through forecasts is integrated to support end-user planning and strategic and tactical operations. Governance of the airspace is globally coordinated by the World Meteorological Organization and the International Civil Aviation Organization. Regulations and standards are established for observations and products made by national meteorological services, airport authorities or third parties.

Aviation activities are highly weather dependent. Efficiency and safety issues are intertwined as weather can change rapidly. Planes are scheduled for take-off or landing every 30 seconds at some airports but can only take off and land under specific conditions. Aviation hazards include runway surface conditions, wind shear, visibility, crosswinds and the presence of lightning, amongst others. Widespread snowstorms can affect many airports and their alternates for hours or days, as aircraft must de-ice before taking off and runways must be kept clear. Similarly, summer convective systems can have small-scale features (tens of kilometres) that must be avoided, within a broad weather system (hundreds to thousands of kilometres), causing congestion and flight delays. All ground operations at airports stop when thunderstorms are in the vicinity. En route, aircraft are sensitive to turbulence, strong winds, volcanic ash, thunderstorms and in-cloud icing due to supercooled liquid and high ice water content.

The dependencies within aviation operations are intertwined, and interruptions at a single hub can have a domino effect elsewhere. Airports are rarely closed due to weather, and pilots (the ultimate authority) must exercise their expert judgement regarding weather hazards, in real time, under challenging situations.

Microburst Detection

The implementation of the Terminal Doppler Weather Radar (TDWR 2015) and similar systems around the world (Hong Kong 2021; JMA 2021), and the subsequent elimination of aircraft wind shear accidents, is a prime example of the high-impact “Perfect Warning”. It demonstrates the partnerships required to rapidly bridge the various gaps, from initial investigation (no knowledge), through research (including field programmes and analysis), to technology development, system co-design and implementation, to address a critical end-user hazard. The automated warning of microbursts at many airports is the most successful of all nowcasts and has saved hundreds of lives. Controllers and pilots are now warned of microbursts by automated alerts derived from the TDWR, the low-level wind shear alert system (LLWAS; a network of anemometers positioned around runways) and now Doppler lidars (Chan and Lee 2012; Nechaj et al. 2019).

Initially the reason for the crashes was unknown. A microburst is a small-scale (<4 km) and very short-lived (<20 min) divergent low-level (<200 m AGL) outflow from a thunderstorm (Fujita 1985). This is an end-user (rather than phenomenon-based) definition, based on the inability of airplanes to react and recover from such a small and intense feature, and illustrates the need to understand and involve the user community at early stages to determine the requirements of the hazard warning. The research community quickly conducted field programmes to understand microbursts and then to develop techniques to detect and anticipate them (Wilson and Wakimoto 2001). Studies indicated that microbursts occur in both wet (precipitation-related) and dry (precipitation evaporating before reaching the ground) forms. Specialized numerical weather prediction models were developed to test the new understanding. A warning strategy, with co-design of products that fit within the culture and technology environment of the aviation industry (tower and cockpit), was developed. Demonstration projects (with engagement of meteorologists and air traffic controllers) were conducted to develop, understand and demonstrate the interpretation of the products and to test the risk and communication modalities. This was followed by the development of the TDWR in universities and industry and the rapid installation of the radars at airports. Intensive education of pilots and controllers about microbursts and what pilots should do when encountering wind shear has completely eliminated wind shear crashes (Serafin et al. 1999). The entire process took less than 20 years.
This success story demonstrated the multi-agency support and quick funding by the US National Science Foundation and Federal Aviation Agency and the close working relationship of government, university and private companies, particularly the Lincoln Laboratory, National Center for Atmospheric Research and National Oceanic and Atmospheric Administration.
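The core radar signature behind such alerts is low-level divergence along a radial: radial velocity increasing with range over a short distance. A minimal sketch of that check might look like the following (the thresholds are illustrative, of the order of the Fujita-style criterion of roughly a 10 m/s differential over less than 4 km; operational TDWR algorithms are far more elaborate):

```python
def detect_divergence_segment(ranges_km, radial_velocity,
                              dv_threshold=10.0, max_len_km=4.0):
    """Scan one radar radial (ranges_km sorted ascending, velocities in
    m/s, away-from-radar positive) for a velocity increase of at least
    dv_threshold within max_len_km, i.e. a divergent-outflow signature.

    Returns (start_km, end_km, delta_v) for the strongest qualifying
    segment, or None if nothing qualifies. Thresholds are illustrative.
    """
    best = None
    n = len(ranges_km)
    for i in range(n):
        for j in range(i + 1, n):
            if ranges_km[j] - ranges_km[i] > max_len_km:
                break  # segment too long; later j only get longer
            dv = radial_velocity[j] - radial_velocity[i]
            if dv >= dv_threshold and (best is None or dv > best[2]):
                best = (ranges_km[i], ranges_km[j], dv)
    return best
```

For example, a radial with velocities of -12 m/s (towards the radar) at 1 km and +8 m/s (away) at 3 km yields a 20 m/s differential over 2 km, well inside the microburst scale, and would trigger the check.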

Icing and De-Icing

Snow and ice can accumulate on the aircraft fuselage and wings, significantly reducing lift. De-icing fluid is sprayed on aircraft to melt the ice and to prevent accumulation. The type and efficacy of the de-icing fluid are determined by the precipitation conditions. The aircraft has a limited window of time to take off before the fluid is diluted and becomes ineffective. This is typically 10 to 30 minutes, so nowcasting of precipitation conditions is critical for safe and efficient operations in winter. Similar to the microburst story, several accidents led to intense field programmes to better understand the meteorological conditions and the user processes and procedures, and to co-design effective products. This led to the implementation of prototype instruments and prediction techniques based on radar or in situ instruments (Rasmussen et al. 2001; Isaac et al. 2014a).

High ice water content in the top of cirrus clouds can also affect en route flight safety. In certain conditions, ice crystals are ingested into aircraft engines causing them to shut down. This resulted in several crashes as a result of which the meteorological research community, aircraft designers and aviation regulators have created partnerships to set new flight regulations and aircraft certification requirements and procedures (Strapp et al. 2016).

Future Aviation

The WMO and ICAO (International Civil Aviation Organization) have partnered to modernize global aviation (Global Aviation Navigation Plan, GANP 2019). A key requirement is the ability to produce highly accurate forecasts for the terminal area with a precision of minutes and hundreds of metres at a lead time of 6 to 12 hours. The Aviation Research and Demonstration Project (AvRDP, from 2015 to 2019) was conducted to develop innovative aviation-specific nowcasting services and to demonstrate their benefits to end users (AvRDP 2019). Eleven international airports participated, covering a variety of climate and technology scenarios. Observation and prediction technologies included advanced cloud radars, satellite, lidar, nowcasting and high-resolution, rapidly updated models and translation of the meteorological information into an Air Traffic Management (ATM) information system. The high-impact weather studied included convection, low visibility, low cloud, dust storms and low-level wind shear (Fig. 7.10).

Fig. 7.10
figure 10

Sample integration of impacting convection nowcast data with Air Traffic Management system. The colour scale is based on an agreed likelihood-impact risk metric

A second phase is planned to further demonstrate the concepts of research-to-operations and science-for-services throughout the full value chain through collaboration in the use of advanced aviation meteorological information to seamlessly support safe and efficient gate-to-gate operations (take-off, ascent, cruising, descent, until landing – see Fig. 7.11). Here, “seamless” refers not just to the continuous information across multiple spatial and temporal scales but also across the whole value chain from observations to users’ benefits. A long-term collaborative strategic plan provides direction and guidance for both the meteorological and aviation communities (WMO LTP 2019).

Fig. 7.11
figure 11

Seamless weather information required to support the whole gate-to-gate flight trajectory: immediate and short-range information during take-off/landing and ascending/descending, combined with regional/global long-term model information during the en route phase. (© Hong Kong Observatory)

Box 7.3 Testbeds, Proving Grounds and Observatories

Nusrat Yussouf, Steve Goodman and Volker Wulfmeyer

Testbeds and proving grounds are a programmatic bridge between operational forecasters, model developers, social scientists, emergency managers, broadcasters and the private sector to accelerate the transition of novel research ideas and forecast products into operations while ensuring that they receive critical feedback and are co-designed during the development process (Fig. 7.12). Testbeds facilitate the future implementation of new, cutting-edge high-impact weather data or products, improved analysis techniques, better statistical or dynamic models and forecast techniques to improve situational awareness and improve forecaster warning accuracy and lead time. The feedback process is often iterative – incorporating a test-feedback loop between users and developers. Testing and evaluation are conducted with operational forecasters in a quasi-operational environment with the tools and systems the forecasters use in their everyday workflow. NOAA operates a dozen such testbeds and proving grounds (NOAA 2021a), including several successful high-impact weather testbed facilities, e.g. the Joint Hurricane Testbed, Hazardous Weather Testbed, Aviation Weather Testbed and Hydrometeorology Testbed. Satellite observation capabilities are also evaluated at the Joint Center for Satellite Data Assimilation (JCSDA), while derived products are evaluated in various testbeds and proving grounds (Goodman et al. 2012).

Fig. 7.12
figure 12

NOAA’s Hazardous Weather Testbed during the annual Spring Experiment. The Storm Prediction Center is visible through the glass. (Photo Credit: James Murnan/NOAA)

Once a product has been tested with positive results, a project plan is submitted to a formal review committee, which assesses the operational value and identifies any infrastructure, training and funding gaps to ensure a successful implementation into operations. This process can take several years. A peer-reviewed publication documenting the new science is strongly encouraged before the product or algorithm is transitioned to operations.

A specific example is the Hazardous Weather Testbed, which jointly conducts the satellite product evaluation and the Experimental Warning Programme Spring Experiment demonstration in the USA. An Annual Guidance Memorandum from the National Weather Service provides a list of products to be demonstrated. Forecaster experiences are shared through weekly seminars (HWT 2021) and satellite application workshops, both nationally (COMET 2019) and internationally (European Severe Storms Laboratory, NWCSAF 2021), and through blogs (NOAA 2021b; ESSL 2021). The organizational structure, goals and objectives of the US and European testbeds are similar and include cross-fertilization as well as international participation from multiple NMHSs, researchers and industry practitioners.

Another example is the GEWEX Land-Atmosphere Feedback Observatory (GLAFO), a new project of the Global Land/Atmosphere System Study (GLASS) panel (see http://www.gewex.org/panels/global-landatmosphere-system-study-panel). The scientific goal of the GLAFOs is to understand the land-atmosphere feedback chains that pre-condition the lower atmosphere under different regimes of temperature, soil and snow conditions, vegetation properties and atmospheric boundary layer (ABL) evolution, in the context of large-scale forcing. The observatories will use new instrumentation for high-resolution observations of wind, temperature and moisture profiles. A network of GLAFOs in various climate regions will contribute to process understanding, the development of new parameterizations, climate monitoring, model verification and data assimilation.

Box 7.4 Seamless Prediction and Demonstration Project

Paul Joe

ECPASS (Environment Canada Pan Am Science Showcase) was a project that demonstrated the many facets, benefits and challenges of bridging seamless weather, air quality and health prediction (Joe et al. 2018; WMO 2016) in association with the Pan-American Games of 2015 (PA15) in Toronto.

The service requirement was to provide weather, air quality and health warnings at the sporting venues. Existing operational weather warnings are a national responsibility and are provided for areas that are generally 40 km × 40 km in size. These warnings are issued by a single regional forecast office with responsibility for a very large area (1000 km × 1000 km), and a single forecaster is responsible for monitoring more than ten radars (including overlapping radars from neighbouring jurisdictions). Air quality warnings are provided at the short-term area/time scale and are issued jointly at the national and provincial level. Health warnings are issued by the national authority (ECCC), but 36 public health units are responsible for implementing responses across the province, specific to their location and partnership arrangements. In addition, urban services are the direct responsibility of the local municipality in partnership with the various levels of government (Health Ontario 2021).

Venues (such as athletic or sailing facilities) are just a few hundred metres in size and are essentially “points” within the context of existing forecast service domains. The venue warnings were a specialized service provided to the public for PA15. Unlike in other Olympic demonstration projects, the higher level of service required for the conduct of fair or safe competition was not provided (Joe et al. 2010; Golding et al. 2014). As these venue warnings were outside the operational norms of monitoring, production and forecast services, a parallel weather service was set up, including separate forecast desks, data management, forecasters and dedicated briefers.

As a single official public warning area may contain several venues, the specificity of the venue warnings could confound, or be perceived to conflict with, the “official operational” warning. Hence, venue warning provision was limited to PA15 officials and to centralized emergency services, who were given special training. However, given the novelty and importance of the warnings, on-site briefers were provided to interpret and translate the high-resolution information accurately. A most important aspect of the on-site briefers’ presence was their engagement with the “early adopter” end users, which resulted in the development of trust and the technology transfer of state-of-the-art services.

ECPASS provided the opportunity to develop and demonstrate state-of-the-art seamless weather prediction services. Such demonstration projects are opportunities for researchers in different services and disciplines to interact. For example, early research-to-research interaction led to the deployment of black globe temperature sensors in the mesonet, producing high-resolution (100 m scale) heat stress prediction products on a specialized display system, all of which was unprecedented for health warning “technology innovators”. This was done through research collaborations, as it was outside the requirements and mandate of operational weather monitoring services. While we intended to train health warning users, such as long-term care facility operators and hospital admissions staff (for programming, staff scheduling and other purposes), the time limitations of the diffusion process precluded significant uptake by “early adopters”. Follow-up “testbed” programmes are needed to continue the technology transfer/adoption process.
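The black globe temperature sensors mentioned above are commonly combined with wet-bulb and dry-bulb measurements into a heat stress index such as the outdoor wet-bulb globe temperature (WBGT) of ISO 7243. The sketch below is illustrative only: the exact heat stress formulation used in ECPASS is not given here, and the function name and sample values are assumptions.

```python
def wbgt_outdoor(t_nwb, t_globe, t_air):
    """Outdoor wet-bulb globe temperature (deg C), ISO 7243 weighting:
    0.7 * natural wet-bulb + 0.2 * black-globe + 0.1 * dry-bulb (air)."""
    return 0.7 * t_nwb + 0.2 * t_globe + 0.1 * t_air

# Hypothetical hot, humid afternoon at a venue station:
# strong solar loading raises the black-globe reading well above air temperature.
print(round(wbgt_outdoor(t_nwb=24.0, t_globe=45.0, t_air=32.0), 1))  # -> 29.0
```

A mesonet of such sensors allows the index to be mapped at the 100 m scale described above, rather than estimated from a single reference site.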

One significant outcome was that the discussion and co-design of the heat stress products contributed to the harmonization of heat stress warning standards and policy among the participating health units (Herdt 2017). The current policy requires consecutive days of heat and humidity, while the high-resolution predictions provided a pathway to very-short-term (6 to 12 hour) heat warnings.

Most of the PA15 venues were located near a large lake (Lake Ontario); the lake breeze initiates thunderstorms, modifies air quality and affects temperature, and so is a factor (at high resolution) for all the warning services (Mariani et al. 2018). Previous experience with evenly dispersed mesonet stations (typically 10 km spacing) had been unsatisfactory, as the fine structures of high-resolution models could not be evaluated. The mesonet was therefore designed with the urban-lake breeze as a harmonizing focus: stations were aligned perpendicular and parallel to the lake geometry, with greater station density near the land-lake boundary for diagnostic and investigative studies.

High-resolution models were configured with parameterizations representing the urban fabric, including buildings of various heights and surfaces (e.g. green, white concrete, black roads). PA15 monitoring stations located on green, rooftop and other urban surfaces were valuable for model verification/validation studies; normally, only observations from green sites are acceptable for assimilation or verification in forecast models. Even with 1-minute data, there were not enough observations to verify all the parameterizations: for example, observations needed to evaluate the parameterization of outdoor cooking (barbecues) for air quality, the heat flux from rooftops, and temperature variations and wind gusts within urban canyons were identified as missing. For the 1-minute wind data, the reporting of the maximum wind gust (usually reported as the maximum wind in the past hour) needed re-defining.
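On the last point, one natural redefinition of the gust statistic for 1-minute data is a rolling maximum over a window much shorter than the conventional past hour. The minimal sketch below illustrates the idea; the window length, function names and sample speeds are assumptions, not the definition adopted for PA15.

```python
from collections import deque

def rolling_gust(window_minutes=10):
    """Return an update function that reports the maximum 1-min wind speed
    over the most recent `window_minutes` samples, i.e. a short-window
    alternative to the conventional maximum wind in the past hour."""
    recent = deque(maxlen=window_minutes)  # old samples drop off automatically

    def update(speed_ms):
        recent.append(speed_ms)
        return max(recent)

    return update

# Hypothetical 1-minute wind speeds (m/s) with a 3-minute gust window
gust = rolling_gust(window_minutes=3)
for speed in (4.2, 7.8, 5.1, 3.0):
    peak = gust(speed)
print(peak)  # -> 7.8 (maximum of the last three samples)
```

A short window like this lets a gust reported at a venue reflect conditions in the last few minutes rather than an hour-old event elsewhere in the warning area.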

7.6 Summary

  • NWP and nowcasting models provide the foundation of hazardous weather forecasts. Development of higher-resolution, more detailed process models, improved data assimilation, frequent updating and ensemble probability prediction are driving improvements in forecast accuracy.

  • The latest generation of kilometre-scale NWP models predicts small-scale weather hazards, such as thunderstorms, embedded in larger-scale weather systems. Optimizing both scales simultaneously is a challenge for NWP research.

  • Forecasters use model guidance to formulate scenarios of how the weather will develop, focused on the applications in which the information will be used.

  • Observations are the fundamental ingredient for monitoring and prediction of hazardous weather and for verification of forecasts.

  • Development of new observational capabilities is a long-term process which needs to be planned a decade or more before the data are required.

  • Even in well-observed countries, current observing capabilities are inadequate for the new generation of high-resolution models, so new sources of data are needed, including new instruments, new observing platforms and extraction of weather information from data obtained for non-meteorological purposes.

  • There are particular gaps in our capability to observe pre-convective dynamics and thermodynamics of the lower troposphere.

  • Prediction models require observations that can be related to model variables, for which there are well-defined performance data, and that can be delivered quickly. Meeting these needs depends on close collaboration between observationalists and forecasters.

  • Future forecasting systems will particularly require additional observations of weather variations within urban areas and in areas of complex topography.