Introduction

The mapping of flood hazards and risk via numerical modelling has become an integral component of flood risk management in many advanced economies. The benefits of these data with respect to land planning, insurance provision and disaster response are well established. Yet, for much of the world, flood hazard data are absent or lack the accuracy and precision required for most practical applications, including climate change impact assessment. With the hydrological cycle expected to intensify under climate change, more accurate modelling of which areas are at risk from flooding is a prerequisite to understanding how flood risk might change with the climate. The past decade has seen rapid advances in the modelling of flood hazards in data-scarce areas where the traditional local-scale engineering approaches used in developed nations are not possible. Since the production of such models can typically be automated, a focus on continental- to global-scale applications has emerged, leading to the development of global flood models (GFMs). An example of the output from a GFM can be seen in Fig. 15.1, where flood depth is plotted for the region surrounding Bangkok for a 1 in 100-year flood hazard. Here, recent innovations in the field of global flood hazard mapping are reviewed, with a steer as to how these might support enhanced climate impact assessment.

Fig. 15.1

Flood inundation depth for the 1 in 100-year flood hazard, simulated by the Fathom GFM (Sampson et al. 2015)

Overview of Progress in Global Flood Modelling

The ability of a GFM to estimate flood hazard is broadly contingent on four components:

  • The model of terrain elevations (the digital elevation model, DEM).

  • The method used to estimate extreme flows.

  • The definition of the river network.

  • The numerical model to simulate inundation.

Below, recent advances in each of these components are summarised; to keep the discussion brief, the focus is sometimes on the GFM developed by the University of Bristol. Ongoing data needs and key modelling uncertainties are identified along with some opportunities to improve the models over the next 5–10 years. An opinion on the current state of the field is then provided at the end.

Digital Terrain Modelling

Floods are shallow waves with long wavelengths and low amplitudes. As such, they are highly sensitive to the terrain over which they flow, which can both alter and block flow pathways. It is widely accepted that airborne LiDAR data offer the most accurate terrain data for flood mapping, with sub-metre resolution and vertical errors in the low decimetres. However, LiDAR data are absent in data-sparse areas and global-scale DEM data must be used instead. For much of the last two decades, data obtained by the Shuttle Radar Topography Mission (SRTM) have been the preeminent terrain data source for flood inundation mapping in data-sparse regions, and the latest revisions to these data seek to remove multiple sources of vertical error, including stripe noise, random errors, absolute bias, vegetation bias and urban biases due to buildings (Yamazaki et al. 2017). The impact of these error removal processes on the DEM can be substantial, as seen for the example from the Mekong Delta in Fig. 15.2. These data should be superseded by more accurate elevation models in the near future. For example, the TanDEM-X DEM at 90 m can in theory support more accurate flood simulation than SRTM-based DEMs (Hawker et al. 2019). However, the TanDEM-X DEM has yet to have vegetation biases systematically removed from the open data products, inhibiting its uptake by GFMs. Further advances in terrain data are most likely to come from very high resolution proprietary datasets such as satellite photogrammetry (<2 m) and the 12.5 m version of the TanDEM-X DEM. An increased availability of such data at reduced cost is essential if global terrain data are to drive substantial improvements in global flood hazard modelling in the near future.
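The vegetation-bias correction mentioned above can be sketched in its simplest form: because radar phase centres sit partway into forest canopies, a radar DEM over vegetated terrain is biased high, and a correction subtracts some fraction of an independently estimated canopy height. The fixed penetration fraction below is an illustrative assumption; operational corrections such as those behind the MERIT DEM vary it with canopy density and other factors.

```python
import numpy as np

def remove_vegetation_bias(dem, canopy_height, penetration=0.5):
    """Subtract an assumed fraction of canopy height from a radar DEM.

    `penetration` is the fraction of canopy height retained in the DEM
    (an illustrative placeholder, not a published value).
    """
    return dem - penetration * canopy_height

# toy 3x3 example: flat 10 m ground under a 20 m canopy,
# with the radar phase centre sitting halfway into the canopy
dem = np.full((3, 3), 10.0) + 0.5 * 20.0
canopy = np.full((3, 3), 20.0)
corrected = remove_vegetation_bias(dem, canopy)
```

With these toy values the corrected surface recovers the 10 m ground elevation everywhere, though in practice the penetration depth itself is a major source of residual error.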

Fig. 15.2
figure 2

Difference between the SRTM DEM (left) and MERIT DEM (right) over the Mekong Delta. Note the wavy stripe noise in the SRTM data, which makes the delta elevations appear to undulate from north-west to south-east

Extreme Flows

For extreme event simulation, it is necessary to estimate flows at ungauged sites using either regionalisation of extreme discharge observations from gauging stations or discharges simulated by a hydrological model over a long period. The gauged approaches (e.g. Smith et al. (2015)) benefit from regionalising direct observations of extreme flows, which simplifies the modelling process and can easily take advantage of new data sets and machine learning methods. They are, however, limited by data scarcity in many parts of the world, short record lengths and trends in river flows that mean the time series may not be representative of present-day conditions. Moreover, the observational evidence that climate change has an impact on extreme river flows is weak, with discharge trends rarely clear outside of catchments that have experienced substantial human modification. Considering climate impacts with this method is therefore difficult, and many studies look at the sensitivity of hazard and risk to event magnitude rather than climate change. Estimating extreme flows from hydrological modelling is appealing because flow estimates can be made for any location and the models can be forced by either observed or simulated weather (Alfieri et al. 2017). However, substantial uncertainties in the forcing, model structure and parameterisation of large-scale hydrological models mean that biases can be expected in flow simulation, along with regional differences in model performance. Climate impact studies often take this approach because the modelling cascade includes variables of direct relevance to climate (e.g. precipitation, temperature). Nevertheless, a detailed intercomparison of GFMs based on the gauged and hydrological modelling methods has yet to be undertaken, and numerous advances in local-scale modelling have yet to be applied at global scales. Thus, further work is needed to understand the value of each approach and the potential for multi-model ensemble prediction.
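At the core of both approaches sits flood frequency analysis: fitting an extreme-value distribution to a series of annual maximum flows and reading off return levels such as the 1 in 100-year discharge. A minimal sketch, using a synthetic record in place of a real gauge series (the GEV parameters below are arbitrary, not taken from any study):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)

# synthetic 40-year annual-maximum discharge record (m^3/s); a gauged
# regionalisation scheme would pool such records across many stations
ams = genextreme.rvs(c=-0.1, loc=500, scale=150, size=40, random_state=rng)

# fit a GEV distribution to the annual maxima by maximum likelihood
shape, loc, scale = genextreme.fit(ams)

# the 1 in 100-year flow is the level exceeded with probability 0.01 per year
q100 = genextreme.isf(0.01, shape, loc=loc, scale=scale)
```

The short record length illustrates the core limitation of the gauged approach: a 100-year return level extrapolated from 40 years of data carries wide sampling uncertainty, which regionalisation attempts to reduce by trading space for time.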

Hydrography: River Location, Width and Depth

Open water is relatively simple to observe from satellite platforms, yet only recently have comprehensive global data sets on river width and location been developed. Prior to these studies, and still for many GFMs, the location and size of rivers were based on digital terrain data, with the HydroSHEDS (https://hydrosheds.org/) data sets by far the most widely used. For steep catchments, this approach can be highly effective; however, in areas of low relief (e.g. deltas), the mapping of river locations based on topography often places rivers in the wrong location. River bifurcations and human alterations are also absent from such data sets. River networks that merge terrain-derived rivers, map data and surface water observations have only recently begun to emerge, but should enable substantial improvements to GFM hydrography (Yamazaki et al. 2019). Perhaps a more fundamental issue for GFMs is the parameterisation of river conveyance capacity, specifically river depth and friction. These are not observable from satellite platforms, and the conveyance capacity of rivers has been extensively altered via levee construction and channel modification, for which data are often poor or not openly available. Most GFMs make an assumption regarding the conveyance capacity of the river system linked to discharge return period, which conveniently acts as a form of bias correction for magnitude errors in the extreme flow generation process (Sampson et al. 2015). Inversion of river bathymetry from surface water dynamics perhaps offers the greatest potential for a paradigm shift in GFM hydrography, with the upcoming Surface Water and Ocean Topography (SWOT) satellite mission providing the necessary data for the world's larger rivers.
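The link between conveyance capacity and discharge return period is typically implemented via downstream hydraulic geometry: channel depth is expressed as a power law of an assumed bankfull discharge, often taken as a low return-period flow such as the 1 in 2-year event. A minimal sketch; the coefficients below are illustrative placeholders, as published values vary regionally and GFMs typically calibrate them:

```python
def bankfull_depth(q_bankfull, a=0.27, b=0.30):
    """Estimate channel depth (m) from bankfull discharge (m^3/s)
    using a downstream hydraulic-geometry power law, depth = a * Q**b.
    The coefficients a and b are hypothetical values for illustration."""
    return a * q_bankfull ** b

# example: assume bankfull flow corresponds to the 1 in 2-year discharge
q2 = 350.0                    # m^3/s, hypothetical 2-year return-period flow
depth = bankfull_depth(q2)    # a plausible depth for a mid-sized river
```

Because the fitted depth absorbs errors in the assumed bankfull discharge, this parameterisation is also where the bias-correction behaviour noted above arises: an overestimated extreme flow is partly offset by a correspondingly deeper channel.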

Inundation Modelling

Numerical modelling of floodplain inundation has a substantial development history at the reach scale using computationally expensive hydrodynamic models based on shallow water flow theory. However, early GFMs tended to be extensions of the simpler river routing models used for global hydrological and land surface modelling, which estimate inundation by computing a volume excess given river channel conveyance and distributing this volume across the lowest points in the DEM (Winsemius et al. 2013). These methods are simple to implement; however, the simulations are usually less accurate than those from hydrodynamic modelling approaches. The development of more efficient hydrodynamic models and ongoing reductions in computing costs have enabled global-scale hydrodynamic models to emerge (Sampson et al. 2015). Initially, these models were developed at relatively coarse resolutions for inundation simulation (≥1 km); however, recent models have simulated inundation over two-dimensional grids at resolutions down to 30 m. These improvements to both process representation and resolution have had substantial impacts on the estimation of flood exposure over large scales because resolution and process inaccuracy tend to bias simulations towards greater exposure estimates. This occurs mainly for two reasons. Firstly, the flat nature of floodplains means it is easy to fill a floodplain to the surrounding topography with a simple volume excess model, which then has little sensitivity to event magnitude. Secondly, people tend to live and place assets adjacent to, but not on, floodplains. Thus, any loss of resolution in the hazard or exposure data sets tends to unintentionally capture these objects within the inundated floodplain (Smith et al. 2019).
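The volume-excess approach described above can be sketched as a simple "bathtub" filling of the lowest DEM cells, which also makes its weakness visible: once the water surface reaches the surrounding topography, further volume barely changes the inundated extent. The implementation below ignores hydraulic connectivity for brevity; real schemes restrict filling to cells connected to the river.

```python
import numpy as np

def volume_fill(dem, volume, cell_area=1.0):
    """Spread a flood volume across the lowest cells of a DEM
    (volume-excess filling, as in early routing-based GFMs).
    Returns a water-depth grid in the same units as the DEM."""
    z = np.sort(dem.ravel())
    target = volume / cell_area        # volume expressed in depth units
    filled, level = 0.0, z[0]
    for k in range(1, len(z)):
        step = (z[k] - z[k - 1]) * k   # volume needed to raise level to z[k]
        if filled + step >= target:
            level = z[k - 1] + (target - filled) / k
            break
        filled += step
        level = z[k]
    else:                              # whole grid submerged
        level = z[-1] + (target - filled) / len(z)
    return np.maximum(level - dem, 0.0)

# toy example: spread 1 unit of volume over a 2x2 DEM
dem = np.array([[1.0, 2.0], [3.0, 4.0]])
depth = volume_fill(dem, volume=1.0)
```

In this toy case the full volume pools in the single lowest cell, raising the water surface exactly to the elevation of its neighbour.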

Discussion

A substantial challenge associated with global flood hazard simulation is that all the components listed above are needed to estimate hazard, and the necessary sophistication of each is contingent on the others. For example, it is only worth using a more accurate, yet computationally expensive, numerical scheme if the definition of the river network puts the river in the correct place. Furthermore, since each component has been advancing rapidly over the past decade, every global flood model has a different mix of component parts, to the extent that understanding model uncertainties and benchmarking models has been near impossible to date. It is also possible for a seemingly sophisticated GFM to be let down by one of its component parts: for example, if vegetation and speckle noise have not been removed from the DEM, a complex two-dimensional hydrodynamic model is unlikely to outperform a simpler method because important flow pathways will be blocked. Only limited intercomparison of GFMs has been possible to date, but the few studies completed have identified substantial differences between GFMs, to the extent that they disagree on which areas are at risk more often than they agree (Trigg et al. 2016). Validation studies on individual GFMs usually conclude they are more accurate than the model benchmarking suggests, indicating that validation studies to date are far from comprehensive, tend to select locations that are easy to simulate, and that the accuracy of GFMs is highly variable.
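Intercomparison and validation studies typically reduce model-versus-observation (or model-versus-model) agreement to a binary-map skill score. A common choice is the critical success index, sketched below on a toy pair of flood extent maps:

```python
import numpy as np

def critical_success_index(modelled, observed):
    """CSI = hits / (hits + misses + false alarms) for binary flood
    extent maps; 1 indicates perfect agreement, 0 none at all."""
    m = np.asarray(modelled, dtype=bool)
    o = np.asarray(observed, dtype=bool)
    hits = np.sum(m & o)               # wet in both maps
    misses = np.sum(~m & o)            # wet only in the observation
    false_alarms = np.sum(m & ~o)      # wet only in the model
    return hits / (hits + misses + false_alarms)

# toy example: two 2x2 binary flood maps agreeing on one wet cell
csi = critical_success_index([[1, 1], [0, 0]], [[1, 0], [1, 0]])
```

Because the CSI excludes the (usually vast) dry-in-both area, it is less easily inflated by large unflooded regions than simple percentage agreement, which is one reason intercomparisons favour it.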