Design and quantification of an extreme winter storm scenario for emergency preparedness and planning exercises in California
Dettinger, M.D., Martin Ralph, F., Hughes, M. et al. Nat Hazards (2012) 60: 1085. doi:10.1007/s11069-011-9894-5
The USGS Multihazards Project is working with numerous agencies to evaluate and plan for hazards and damages that could be caused by extreme winter storms impacting California. Atmospheric and hydrological aspects of a hypothetical storm scenario have been quantified as a basis for estimation of human, infrastructure, economic, and environmental impacts for emergency-preparedness and flood-planning exercises. In order to ensure scientific defensibility and necessary levels of detail in the scenario description, selected historical storm episodes were concatenated to describe a rapid arrival of several major storms over the state, yielding precipitation totals and runoff rates beyond those occurring during the individual historical storms. This concatenation allowed the scenario designers to avoid arbitrary scalings and is based on historical occasions from the 19th and 20th Centuries when storms have stalled over the state and when extreme storms have arrived in rapid succession. Dynamically consistent, hourly precipitation, temperatures, barometric pressures (for consideration of storm surges and coastal erosion), and winds over California were developed for the so-called ARkStorm scenario by downscaling the concatenated global records of the historical storm sequences onto 6- and 2-km grids using a regional weather model of January 1969 and February 1986 storm conditions. The weather model outputs were then used to force a hydrologic model to simulate ARkStorm runoff, to better understand resulting flooding risks. Methods used to build this scenario can be applied to other emergency, nonemergency and non-California applications.
Keywords: Storm hazards · California · Flood · Scenario · Emergency preparedness · Disaster management
- WRF model: Weather Research and Forecast model
- NNRP: National Centers for Environmental Prediction/National Center for Atmospheric Research Reanalysis Project
- UTC: Coordinated Universal Time (Greenwich Mean Time)
- VIC model: Variable Infiltration Capacity (hydrologic) model
1 Introduction

West Coast winter storms are natural phenomena that have always challenged Californians as major hazards. The storms are not as notorious as hurricanes in the southeastern U.S., but they rival hurricanes in important ways. West Coast winter storms bring comparable winds (in some storm sectors), comparably large rainfall totals, and major floods, storm surges and surf, and coastal erosion. Nonetheless, winter storms have often been considered normal on the West Coast, in part, because the most extreme historical West Coast storms occurred prior to modern data gathering and decades before current populations were born (e.g., in 1862). Thus, these most extreme historical storms are treated as quasi-mythological events rather than real incidents to be accommodated in planning. Major investments in flood control infrastructure have reduced risks from storms, but the risk of catastrophic flooding remains very real, as evidenced by major floods in 1986 and 1997, and by the continuing history of flood-caused levee breaches in the Central Valley (Florsheim and Dettinger 2007). The 1986 and 1997 storms came close to overwhelming flood-control systems, threatening inundation of downtown Sacramento, and led to National Research Council studies (NRC 1995, 1999), planned major enhancements to flood control, and long-term research (e.g., Ralph et al. 2005a).
The U.S. Geological Survey’s (USGS’s) Multihazards Demonstration Project uses hazards science to improve community resilience to natural disasters, including earthquakes, tsunamis, wildfires, landslides, and floods. The first public Multihazards Project product was the Great California Shakeout Earthquake Scenario (Perry et al. 2008), a detailed scenario describing physical conditions during and following a hypothetical magnitude 7.8 earthquake on the San Andreas fault zone in Southern California. Emergency planning and preparedness exercises based on this scenario involved over 300 scientists, 5,000 emergency responders and disaster recovery agents, and the participation of over 5.5 million citizens (http://www.shakeout.org/).
The “ARkStorm” Scenario described herein (more about this title below) represents the second major public project by the Multihazards Project. In the ARkStorm case, a major winter storm sequence—surpassing many of the historical “design storms” used in current structural and planning designs—is the focus. Experts are examining this scenario’s physical, socioeconomic, and environmental consequences, addressing possibilities, costs, and consequences of resulting floods, landslides, and debris flows, coastal erosion and inundation, wind damage, pollution dispersal, threats to endangered species, and physical damages like bridge scour, road closures, dam and levee failures, property loss, water- and power-supply disruptions, and long-term recovery conditions. Dozens of emergency preparedness, first-responder, and resource management experts and organizations have been interviewed to evaluate preparedness for such a storm with results reported by Porter et al. (2011). Through a series of workshops and media reports, the public is being invited to learn more about consequences of winter storms in California and to prepare for their safety and that of families, communities, properties, and livelihoods.
Recent research (e.g., Ralph et al. 2006, and other references cited herein) has shown that extreme precipitation in California is most often the result of land-falling atmospheric rivers (ARs). An intense AR striking the northern Sierra Nevada and flooding the State’s capital, Sacramento, could be the California analogue to Hurricane Katrina hitting the Gulf of Mexico Coast and flooding New Orleans. Sacramento is no less at risk now than New Orleans was before Katrina (NRC 1995, 1999; California DWR 2005; Rehmeyer 2006). (Locations of California settings mentioned herein are indicated on various maps in this paper.)
This paper describes the design and quantification of a geographically, temporally, and atmospherically detailed scenario of an extreme sequence of AR storms impacting California. The scenario was branded the “ARkStorm” early in the project (with the help of students from the Arts Center College of Design, Pasadena, California) to highlight the fact that the most threatening storms in California are atmospheric river (“AR”) storms (Ralph et al. 2006; Leung and Qian 2009), to highlight the intensity of the storm scenario (rising to 1,000-year (“k”) storm conditions in some areas), and by visual pun, to highlight the Noachian character (“ARk”) of such storms, historical and hypothetical.
Intense and sustained winter storms from California’s historical record—not least of which are the storms of the late 1800s (e.g., Kelley 1998)—provide both reality checks and motivations for developing a truly severe scenario, and are described next. Then the problems faced in constructing a realistic and detailed scenario of an extreme winter-storm sequence in California will be described and resolved, followed by a description of the resulting scenario.
2 Major historical storms
2.1 Storms and floods of 1862
The largest storm sequence in California’s history is generally considered to be that of the winter of 1861–1862, although other storm seasons late in the 19th century rivaled it. The most intense rains fell in January 1862, with about 420% of normal falling in Sacramento, 519% in San Francisco, and 300% in San Diego (Roden 1989). The unusually rainy season began on 24 December 1861 in Southern California, following what has been described as a “sunny, dry and warm” fall (Sadler 1957). In Northern California, rains in November and December 1861 contributed to early-season flooding, but the most intense period of storminess extended from 24 December 1861 through 21 January 1862, when rain fell on 28 of 30 days (Null and Hulbert 2007). Warm storm conditions exacerbated the storm’s consequences by melting snowpack in large areas of the Sierra Nevada in December.
The result of these uninterrupted storms was flooding throughout the state. William Brewer, a Yale geologist, reported the Central Valley to be inundated on 19 January 1862, by icy cold, muddy water over an area as much as 500 km long and from 20 to 100 km wide (Engstrom 1996; Null and Hulbert 2007). Thousands of farms were inundated. Freshwater flow from the Central Valley, through the Golden Gate at San Francisco, to the Pacific Ocean, was large enough so that tidal fluctuations stopped at the outlet for nearly 2 weeks, and freshwater capped the surface of San Francisco Bay for 2–3 months (Engstrom 1996). Upstream where the Sacramento River from the northern Central Valley and the San Joaquin River from the southern Central Valley merge at California’s Delta, southwest of Sacramento, the convergence of floods from multiple rivers backed water up in the Sacramento River for more than 45 km (Thompson 1957).
Flooding on the Santa Ana River (along the southern edge of Los Angeles) in Southern California lasted about 20 days; other recorded floods on that river have not lasted more than four days. Sadler (1957) estimated a peak Santa Ana River discharge of about 9,000 m3/s, compared to the highest modern measurements of 2,800 m3/s in March 1938 and 500 m3/s in January 1969, although this latter flood occurred in an era of extensive flood-control facilities. Broad areas between Los Angeles and the ocean, in areas (Anaheim) farther south, and in the Mojave Desert were inundated in the 1862 storm. The San Gabriel River through the center of Los Angeles and the San Diego River both abandoned their previous channels to cut new paths to the sea (Engstrom 1996). Indeed, channel cutting was pervasive enough so that along-river water tables fell and agriculture in several areas required irrigation for the first time after the floods.
Besides an improved understanding of the possible scale of damages from a truly extreme storm in California, the scattered records of the events of 1862 provided two major lessons for the ARkStorm designers: First, a prolonged storm sequence with truly extreme precipitation totals is demonstrably possible, and, second, extreme storm sequences can strike Northern and Southern California in rapid succession, e.g., peak flooding in Southern California occurred just 12 days after that in Sacramento in 1862.
2.2 20th Century storms
Because much more data are available to describe the more recent episodes, the storms of 1969, 1986, and 1997 became a focus of ARkStorm design efforts. The 1969 storms resulted in the highest measured flows in recent decades in many Southern California streams, reaching about 8 times the average high flows. Major storms in 1950, 1955, 1986, 1997, and 2005 were focused more in Northern California, with the New Year’s 1997 event yielding the largest storm and floods in Northern California’s modern era, and February 1986 also notable for its flooding. The floods of 1986 and 1997 were the largest recorded in the American River (at about 3 times the normal peak flow), and the 1986 flood was the largest in the Russian River (at about twice the normal peak flow). Each of these three storms, and probably the 1862 storm, was fed by ARs that drew tropical and subtropical heat and moisture into California (Bao et al. 2006; Leung and Qian 2009).
3 Scenario design
In order to develop a storm scenario that is historically plausible, scientifically defensible, internally consistent (from place to place, and among the various meteorological variables to be considered), and suitably challenging for emergency-management agencies, a team of meteorologists, climatologists, hydrologists, and other scientists and engineers was convened. The team consisted of two groups, a design team of 11 scientists (of whom most are coauthors here) and a review team of four experts, along with three Multihazards Program scientists. The team met to develop strategy, and then reconvened 6 months later to review the resulting storm depiction. In the interim, the lead authors here developed historical analyses and simulations to complete the ARkStorm scenario.
Although motivated by the Noachian events of 1861–1862, the designers felt that it was critical that construction of the scenario rely on events that were sufficiently recent so that modern observations exist. Several of the most recent, large storms and floods are well known to emergency managers and have already been used for design of existing flood-control structures and emergency plans. Although many local agencies focus flood preparedness on specific (generally, recent) historic floods, established procedures exist for determination of “standard project floods and storms” or “probable maximum precipitation” (e.g., U.S. Army Corps of Engineers 1952, updated in 1965) based on historical rainfall-duration-area curves, extending to 5-day durations or less. The present study expanded on the data, science, and models typically used, addressing a need to describe aspects of a major storm beyond just daily-level precipitation rates; in addition to large precipitation amounts, hazards posed by major storms include short-term rainfall intensities (which condition landslide risks), winds and barometric pressures (which determine coastal storm surges, wave heights, and erosion), winds (for their direct structural damages), and precipitation forms (i.e., rain vs. snow). Because the scenario is intended to challenge emergency managers and to approach the scale and consequences of the extended megastorms of the 19th century, the recent storms needed to be “amplified.” Nonetheless, in order to ensure maximum plausibility and defensibility, the project was reluctant to simply recapitulate precipitation extremes from “probable maximum precipitation” estimates or to simply specify a particular precipitation recurrence interval to be applied uniformly at every available weather station across the State.
Furthermore, in the interests of scientific defensibility and internal consistency of the meteorological conditions as they evolved through the course of the scenario, it was deemed important to avoid arbitrary rescaling of the recent storm events. Rather than arbitrarily scaling up observed storm precipitation rates, a strategy of artificially concatenating historical episodes—without directly modifying precipitation rates—was used to enhance the storms and challenges as needed. In doing so, the ARkStorm design draws upon recognition that sustained storms can yield remarkably large impacts without requiring improbable storm intensities.
Consequently, several components were concatenated (in time): First, a relatively wet autumn in much of the state was assumed to occur before the ARkStorm sequences, in order to precondition land surfaces to respond with rapid runoff to the ARkStorm events. Second, drawing upon the lessons from the floods of 1861–1862, the ARkStorm scenario was designed to have substantial impacts on both Southern and Northern California in rapid succession over the course of 23 challenging days, which is nonetheless still shorter than the month-long sequence of 1862. The statewide nature of ARkStorm impacts ensures that, even if local storm effects were within the capacity of individual emergency-management systems, potential problems arising from within-state competition for emergency resources could be realistically explored. For the ARkStorm, a major, historically large storm focused on (but not restricted to) Southern California was selected to begin the ARkStorm sequence. In order to ensure that the ARkStorm was more challenging in Southern California than the particular historical Southern-California storm used, the AR feeding moisture to this storm was assumed to stall, during the day with maximum precipitation in the Transverse Range of Southern California, spending one more day over Southern California than in the historical sequence. Thus precipitation rates and winds were not specifically amplified, but the storm total was made substantially larger by increasing its duration. Based on observed behavior of many recent ARs, this artificially imposed stalling of the storm is plausible and realistic. Finally, soon after the storm focused in Southern California, the ARkStorm scenario entails a second large storm focused in Northern California.
The wet autumn and early winter, together with the substantial precipitation from the first (more southern) storm, ensured that this second storm would yield even larger runoff and flooding in Northern California than occurred historically.
Thus the ARkStorm is a 23-day severe-storm scenario based on two major storm sequences from recent meteorological history, with an initial 10-day storm sequence focused mostly in Southern California followed, after a brief 5-day interlude during which the next storm sequence approached, by an 8-day storm sequence focused in Northern California. The Southern California phase is based on the storms of January 1969. Although this period was prior to the era of detailed satellite information, sufficient surface meteorological and streamflow data exist to provide excellent support for most of the ARkStorm design (with one notable exception to be discussed in Sect. 4.4). The autumn and early winter of 1968–1969 were 10–50% wetter than normal in the central and northern parts of the state, so that—in order to minimize tampering with the historical record in the construction of the ARkStorm scenario—the actual historical weather prior to the major storms of January 1969 (i.e., 1 October 1968 through 18 January 1969) was selected to provide the preconditioning period for ARkStorm. Thus, by designing the storm sequence to begin with the events of winter 1969, the stage was also set for strong flood responses in most of the state.
The ARkStorm scenario begins in earnest with the historical 19–27 January 1969 storms, a period of historically intense rains in Southern California, with considerable precipitation in much of the rest of the state as well. As discussed previously, in order to make this first, Southern California phase of ARkStorm more severe than history records, the storm was assumed to stall for an extra day on the day with peak precipitation in the Transverse Range of Southern California, 25 January. The measured Southern California precipitation on that day was so intense that it amounted to 30–40% of the entire (unusually wet) month’s total.
At the same time, the transition between storms needed to be short enough so that reservoir managers would have as little time as plausible to recover flood-control space completely from the first storm before the second storm arrived. A time window much longer than 5 days might allow reservoir operators to recover enough flood space to absorb the next storm’s runoff. The 1986 storm began with heavy rain and high snow levels on 12 February, yielding significant hydrological impacts almost from the start. In contrast, the first several days of the 1997 storm, beginning on 26 December, were characterized by significant precipitation but comparatively cold conditions, yielding limited hydrologic impacts until later in the storm period. The heaviest rainfall and highest snow levels in this storm only began about 5 days later. Furthermore, daily precipitation totals >380 mm were recorded in the 1986 storm (Fig. 2) but not in 1997, suggesting particularly high rainfall rates in the former.
In order to provide other components of the Multihazards Project with the hourly temperatures, precipitation rates, wind speeds, and barometric pressures needed to estimate resulting hazards and damages, the scenario described above was simulated using a modern weather prediction model, with results tested for accuracy by comparisons with historical observations and for efficacy by simulating hydrologic responses with a macroscale hydrologic model of the state. The use of the full meteorological model ensured that the ARkStorm description is dynamically and geographically consistent, and temporally complete, which a simple compilation of meteorological observations could not provide.
4.1 Meteorological simulations
Boundary conditions for the WRF simulations were drawn from the global-scale National Centers for Environmental Prediction/National Center for Atmospheric Research Reanalysis Project (NNRP; Kistler et al. 2001; Kalnay et al. 1996) atmospheric data set. WRF was initialized every 5 days at 18 UTC and run for 5 days, 6 h, with the first 6 h being discarded as model spin-up, a strategy previously used by Hughes et al. (2009) over Southern California. Interior boundary conditions were updated at each initialization, with the sea-surface temperatures and lateral boundary conditions updated continuously throughout the run.
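The re-initialization strategy above amounts to discarding the first 6 h of each 5-day, 6-h run as spin-up and concatenating what remains. A minimal sketch in Python of this bookkeeping, assuming hourly output arrays; the helper name `stitch_wrf_runs` and the synthetic data are hypothetical, not part of WRF:

```python
import numpy as np

def stitch_wrf_runs(runs, spinup_hours=6):
    """Join re-initialized WRF runs into one continuous hourly series.

    Each run spans 5 days + 6 h (126 hourly steps); the first
    `spinup_hours` steps of every run are discarded as model spin-up.
    """
    return np.concatenate([run[spinup_hours:] for run in runs], axis=0)

# Three hypothetical 126-h runs of a scalar field, offset for illustration
runs = [np.arange(126, dtype=float) + 1000 * i for i in range(3)]
series = stitch_wrf_runs(runs)  # 3 x 120 h of retained output
```

In practice each `run` would be a multidimensional WRF output field (time, y, x); only the time axis is trimmed and joined.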
The raw simulation data were then pieced together to form one continuous time series made up of the 19–27 January 1969 period, immediately followed by the 8–20 February 1986 period, with simulated conditions on 25 January 1969 repeated hour-by-hour. Because it was discovered that simply holding the large-scale boundary conditions steady for an extra 24 h on 25 January 1969 did not prevent the AR from continuing its progress southward through Southern California, and thus did not result in the desired stalling of the storm (as discussed in Sect. 4.4), inclusion of the “extra” 25 January 1969 in the ARkStorm was eventually handled outside of these WRF simulations. To preserve the correct diurnal variations of surface temperature on this repeated day, a typical diurnal cycle was identified from simulated temperatures for 24–26 January 1969, as the first principal component of the hourly values. This diurnal cycle was subtracted from the hourly temperatures simulated on 25 January. The residual for each hour of 25 January was then repeated hour-by-hour (00 UTC, 00 UTC, 01 UTC, 01 UTC, …, 23 UTC, 23 UTC), and two copies of the diurnal cycle identified earlier were added back to impose two normal diurnal temperature cycles during the resulting 48-h-long 25 January episode. Similar principal-component analyses indicated that diurnal cycles of precipitation, pressure, and wind variables were negligible, with any diurnal cycles being completely overwhelmed by synoptic changes associated with the storm. Therefore, those variables were simply repeated hour-by-hour to form 48-h-long “slow” versions of 25 January 1969 conditions in the scenario. AR conditions have been observed to stall for 24–26 h, so the 24-h stalling created here is entirely plausible.
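The temperature treatment for the repeated day can be sketched as follows. For simplicity this illustration estimates the diurnal cycle as the three-day mean cycle rather than the first principal component used in the study; the function name and synthetic data are hypothetical:

```python
import numpy as np

def stall_day_temps(temps_24_26):
    """Build a 48-h temperature series that repeats 25 January hour-by-hour
    while preserving a normal diurnal cycle.

    temps_24_26: hourly temperatures for 24-26 Jan, shape (72,).
    The diurnal cycle is estimated here as the mean cycle over the three
    days (a simple stand-in for the first principal component).
    """
    days = temps_24_26.reshape(3, 24)
    cycle = days.mean(axis=0)          # estimated diurnal cycle, shape (24,)
    residual = days[1] - cycle         # 25 Jan with the cycle removed
    doubled = np.repeat(residual, 2)   # 00,00,01,01,...,23,23 UTC
    return doubled + np.tile(cycle, 2)  # restore two full diurnal cycles

# Synthetic check: three days sharing one diurnal cycle, offset per day
cycle = np.sin(np.arange(24) / 24.0 * 2 * np.pi)
temps = np.concatenate([cycle + 0.0, cycle + 5.0, cycle + 10.0])
out = stall_day_temps(temps)  # 48 h spanning the doubled 25 January
```

With these synthetic inputs, the residual for the middle day is flat, so the output is simply two copies of the estimated diurnal cycle about the middle day's mean level.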
The meteorological requirements of ARkStorm’s other technical teams were used to define the WRF variables that were archived. Outputs provided (and still available for interested users) include hourly precipitation, 10-m winds, 2-m temperatures, water vapor, and sea level pressures.
4.2 Hydrological simulations
Simulations of runoff generation by a hydrologic model forced with the WRF-generated meteorology allowed evaluations of the efficacy of the storm-design strategy, the severity of WRF-generated storm conditions, and identification of problem areas where too little precipitation was simulated. The near-term Multihazards Program strategy was for resulting flood inundations to be hypothesized, initially, from semiquantitative linkages between WRF precipitation, simulated runoff rates, and Federal Emergency Management Agency floodplain maps to be followed, eventually, by more precise flood simulations using various operational flood-routing models at local and regional scales as models and willing partners become available. Only the former has been accomplished to date. Because it could be applied early on and repeatedly, the Variable Infiltration Capacity (VIC; Liang et al. 1994, model version 4.0.6) hydrologic model was applied as part of the efforts described herein to develop preliminary estimates of runoff responses to the WRF-simulated ARkStorm conditions. The VIC model was also used to estimate recurrence intervals for ARkStorm runoff rates and totals, as discussed in Sect. 5, to provide a basis for the semiquantitative inundations used in ARkStorm evaluations thus far (Porter et al. 2011).
VIC simulates (among other variables) daily runoff plus baseflow from grid cells at 1/8-degree resolution over the conterminous U.S., although we focused here on California. VIC is driven by daily maximum and minimum temperatures, daily precipitation, and wind speeds on the same grid. In all VIC simulations here, wind speeds used as inputs are climatological values interpolated from NNRP data by Hamlet and Lettenmaier (2005). Temperatures and precipitation values for historical simulations were obtained from interpolated fields of daily observations (Hamlet and Lettenmaier 2005) or, for scenario simulations, were aggregated onto the VIC grid from the 6- and 2-km WRF outputs. VIC uses a tiled representation of the land surface within each model grid cell to allow for sub-grid variability in topography, infiltration, and land surface vegetation classes (Maurer et al. 2002). Derived variables such as radiation, humidity, and pressure are simulated by the model based on the input precipitation and daily maximum and minimum temperature values using the algorithms of Kimball et al. (1997) and Thornton and Running (1999). Surface runoff is simulated with an infiltration formulation based on the Xinanjiang model (Wood et al. 1992), while baseflow follows the ARNO model (Liang et al. 1994; Sheffield et al. 2004). Soil parameters and vegetation cover were obtained from Andrew W. Wood of the 3 Tier Group, Seattle, and from the North American Land Data Assimilation System, respectively. Initial soil moisture conditions at the beginning of the 1969 ARkStorm period were derived from the 18 January 1969 output from a long VIC simulation of historical conditions beginning in 1916 and extending through September 2002 (a simulation that was also used to estimate ARkStorm-runoff recurrence intervals).
4.3 Historical data
Historical observations used in this study took several forms: First, as indicated previously, the WRF model simulations were nested within the global reanalyzed atmospheric fields from the NNRP (Kalnay et al. 1996) at each time step at the WRF boundaries and every 5 days in the interior of the WRF domain. A second set of historical weather observations used was the 12-km gridded interpolation of daily surface-air temperatures and precipitation over California river basins used to drive historical VIC simulations. This dataset is from Hamlet and Lettenmaier (2005), and represents a gridding of daily weather station observations that is informed by monthly fields from the PRISM dataset (Daly et al. 1994). Finally, ungridded daily precipitation-total and temperature reports from the historical Summary of the Day (NWS 1989 and updates thereto) database for cooperative weather stations in and around California were used for evaluation of the simulations, and were obtained from the National Climatic Data Center. Records from 435 cooperative weather stations were included in comparisons to WRF model results.
4.4 Completing the scenario
Initially, VIC was used to simulate runoff rates associated with a version of the gridded historical meteorological fields usually used to drive the model (Hamlet and Lettenmaier 2005) with historical storms concatenated according to the ARkStorm design. This simulation was compared to a simulation of the same historical days without the concatenations to test whether the ARkStorm design actually provided amplifications of runoff that were both plausible and challenging. Comparison of this initial simulation with the historical simulation indicated that daily peak and scenario-total runoff rates were doubled or more in much of the state (not shown).
Then, the WRF model was used to simulate the scenario. Initially, WRF was configured as in Sect. 4.1, except that the Morrison double-moment microphysics scheme (Morrison et al. 2009) was used and the 2-km resolution nest over Southern California was omitted. In this configuration, simulated precipitation in the Transverse Range during the January 1969 storms was less than observed. When the WRF output was used to force the VIC hydrologic model, runoff from the Transverse Range was accordingly undersimulated relative to the historical VIC simulations. Thus the team faced a problem: the aim was to produce a challenging storm over Southern California, but WRF was not producing enough precipitation to match the January 1969 storm. Several modifications to the WRF configuration were attempted, including adding the 2-km nest centered over the Transverse Range in hopes that higher topographic resolution would extract more precipitation from the model there. Adding the 2-km grid also offered advantages to other Multihazards Project teams, some of which were focused on Southern California and needed the highest possible resolution.
Upon close inspection, the AR that fed the WRF-simulated 1969 storm was found to be making its primary landfall ~100 km too far south with this configuration, with precipitation forming only peripherally over the Transverse Ranges. We hypothesized that this problem was due to errors in the placement and configuration of the AR and its synoptic-scale environment in the large-scale NNRP fields, noting that 1969 was prior to the availability and therefore the assimilation of satellite data into the NNRP, a time when offshore meteorological observations were much less complete than in the satellite era.
A number of different model configurations were explored in attempts to correct the errors in this part of the storm simulation. Unexpectedly, the change that allowed WRF to approximately capture the observed precipitation patterns and rates from January 1969 was replacing the Morrison double-moment representation of cloud microphysics (processes associated with the formation and fall of cloud droplets and precipitation) with the Lin microphysics scheme. The Lin scheme calculates precipitation differently from the Morrison scheme, and, as a result, latent heating of the atmosphere differs. We believe that, by chance, this change in heating was sufficient to cause the fronts, AR, and precipitation to evolve differently: with Lin microphysics, the fronts and AR progressed southward more slowly, so that the crucial AR made its landfall about 100 km farther north, in the Transverse Range, in much better agreement with historical events. Whether by chance or by dynamics, the purpose of the WRF simulation was to recreate a storm pattern like January 1969, and this particular realization provided the version that was needed for ARkStorm and thus was the simulation used thereafter.
In the process of solving the misplacement of the January 1969 AR in WRF, it was also determined that a simple repetition of the large-scale conditions on 25 January 1969 prior to downscaling by WRF did not produce the desired stalling of the storm. Doubling the duration of the large-scale boundary conditions for 25 January aggravated the problem of the poorly located AR landfall because it allowed even more time for the storm to migrate unrealistically southward: given 24 more hours of the large-scale 25 January boundary conditions, the WRF-simulated AR did not stall for an extra day but rather continued its within-region development and southward progress. Because such AR stalling is realistic in some cases but did not occur in the actual 1969 storm or in WRF with an extra day of large-scale 25 January conditions imposed, the desired stalling was instead produced by postprocessing, hour by hour, the outputs of a WRF simulation run without the extra 25 January, as described in Sect. 4.1.
Observed precipitation totals from several hundred stations are compared to WRF-simulated totals in Fig. 11c, indicating good agreement for stations that received lower- to midrange totals. WRF tended to overestimate totals at the stations that received the largest amounts of precipitation, most of which are located in the western Sierra Nevada. The WRF-simulated precipitation totals there are significantly larger than the historical totals, further ensuring that the ARkStorm scenario is a true megastorm by historical standards.
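A station-by-station comparison of this kind requires pairing each gauge with a model value, typically the nearest grid cell. The sketch below shows one simple way to do that, assuming 2-D latitude/longitude arrays for the model grid; the function name, distance approximation (flat lat/lon with a cosine correction, adequate at 6-km spacing), and toy data are all hypothetical, not the paper's actual procedure.

```python
import numpy as np

def nearest_cell_totals(sim_total, grid_lat, grid_lon, stn_lat, stn_lon):
    """For each station, return the simulated storm-total precipitation
    from the nearest model grid cell."""
    out = np.empty(len(stn_lat))
    for i, (la, lo) in enumerate(zip(stn_lat, stn_lon)):
        # squared angular distance, longitude scaled by cos(latitude)
        d2 = (grid_lat - la) ** 2 + ((grid_lon - lo) * np.cos(np.radians(la))) ** 2
        j = np.unravel_index(np.argmin(d2), d2.shape)
        out[i] = sim_total[j]
    return out

# Toy 2x2 grid with simulated storm totals (mm) and one hypothetical station
grid_lat = np.array([[34.0, 34.0], [35.0, 35.0]])
grid_lon = np.array([[-120.0, -119.0], [-120.0, -119.0]])
sim_total = np.array([[100.0, 200.0], [300.0, 400.0]])
stn_vals = nearest_cell_totals(sim_total, grid_lat, grid_lon,
                               np.array([35.0]), np.array([-119.1]))
assert stn_vals[0] == 400.0  # station pairs with its nearest cell
```

Once paired, observed and simulated totals can be scattered against one another, as in Fig. 11c, to reveal the low-to-midrange agreement and high-end overestimation described above.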
Finally, ARkStorm-simulated runoff rates were compared to a long VIC simulation of historical conditions from October 1916 through September 2002, to estimate recurrence intervals for the maximum 3-day, maximum 7-day, and scenario-long runoff totals resulting from ARkStorm, with the 3-day recurrence intervals mapped in Fig. 14c. In these comparisons of unrouted VIC-simulated runoff rates, the extreme events generated in response to ARkStorm range from 50- to 100-year events in most mountain headwaters of southern, central, and (some) northern California river basins, to 500- to greater than 1,000-year events in areas around Los Angeles, in the central and southern Sierra Nevada, and in the Feather River basin of Northern California. From these runoff-generation frequencies and current 100- and 500-year floodplain maps, ARkStorm flood inundations were roughly estimated (Porter et al. 2011) and used in interviews with emergency and resource managers.
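For readers unfamiliar with recurrence-interval estimation, the simplest approach ranks the scenario event against the annual maxima of the long historical simulation and converts the resulting exceedance probability into years, e.g., via the Weibull plotting position. The sketch below illustrates that rank-based approach only; intervals well beyond the 86-year record (the 500- to 1,000-year values above) would require fitting an extreme-value distribution, which this sketch omits, and the data here are hypothetical.

```python
import numpy as np

def return_period(annual_max, event_value):
    """Empirical recurrence interval (years) of an event relative to a
    series of simulated annual maxima (one value per water year).

    Uses the Weibull plotting position T = (n + 1) / m, where m is the
    event's exceedance rank among the annual maxima.
    """
    n = len(annual_max)
    m = np.sum(annual_max >= event_value) + 1  # m = 1 if event exceeds the record
    return (n + 1) / m

# Hypothetical 86-year record of annual-maximum 3-day runoff (mm)
rng = np.random.default_rng(0)
annual_max = rng.gamma(shape=2.0, scale=30.0, size=86)

# An event 50% larger than anything in the record caps out at (86+1)/1 = 87 years
print(return_period(annual_max, annual_max.max() * 1.5))  # 87.0
```

The cap at (n + 1) years for events beyond the record is precisely why distribution fitting is needed to quote the larger ARkStorm recurrence intervals.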
6 Summary and discussion
The USGS Multihazards Demonstration Project is working with a large number of science-, resource-, and emergency-management agencies to develop and explore an emergency-preparedness scenario in which an especially severe winter storm impacts California. The scenario was designed to be more extreme than the typical “storms of record” or “floods of record” that many emergency-preparedness plans use, while emphasizing scientific and historical plausibility. Rather than simply trying to recreate the largely unknown storm conditions that produced the largest known flood episode in California, in 1862, or arbitrarily scaling up one of the more recent, more adequately monitored historical storms, a strategy based on plausible sequencing of recent, large storms was developed.
The scenario constructed is based on a rapid sequencing of winter storms from January 1969 and February 1986, separated in this hypothetical storm scenario by only about 5 days, with the primary focus of the former (latter) being in Southern (Northern) California. The outcomes of this hypothetical sequence of storms were reconstructed by numerically simulating the local weather resulting from the large-scale conditions described by the global NNRP data sets. The rapid storm sequencing exacerbates flooding by depositing rain on already saturated surfaces and by reducing the time available for water managers to prepare flood-control facilities for later storms. The design also assumed that one of the storms stalls over Southern California, enhancing rainfall totals and flooding there. Because of these elements, the scenario describes a storm sequence that is more catastrophic than either of the major historical storms that went into it. The result was a wide range of severe storm conditions summarized previously in Fig. 6: High surf and strong winds would batter the coastal zone from northwest of Los Angeles to near the Oregon border. Broad areas with extreme runoff would also result from intense, warm rains falling at high altitudes, especially in the Sierra Nevada and across the southern mountain ranges. Finally, large areas would suffer strong surface winds (35–50 m s−1) in the scenario.
In designing ARkStorm, care was taken to avoid creating an implausibly severe storm: The Southern California phase of the storm sequence was stalled for only a single day; the Northern California phase was not stalled at all. A full 5-day lull was left between the southern and northern phases of the storm sequence, so that 12 days separated the storm peaks in Southern and Northern California, as in 1862. Temperatures during the storms were not increased above historical temperatures to reflect projections of a warming world. Nonetheless, the storms were warm storms, yielding extreme precipitation and high snow levels. Using cooler historical storms might have reduced the precipitation and flooding, but could also have introduced conditions conducive to other storm impacts, such as tornadoes and hailstorms. Finally, the scenario does not explicitly hypothesize widespread fires prior to the storms, which would have increased the chances of debris flows and flash flooding.
Unlike the first hazard scenario that the Multihazards Project addressed (an earthquake disaster scenario), the ARkStorm scenario requires that an important new element—forecasting—be addressed. Earthquakes happen unexpectedly, and their consequences follow thereafter. Storms, and especially the flooding that they cause, are typically forecasted (with varying accuracy) at lead times of a few days or more. Most flood managers make preparatory decisions during, and in immediate advance of, an approaching storm, and, in light of modern forecast systems, these decisions depend as much on forecasts as on the actual events at the time of the decisions. These decisions affect the distribution and timing of floods, ameliorating them when possible but, even when amelioration fails, modifying the floods nonetheless. Thus, the ARkStorm scenario as described here is only a first step for the Multihazards Project. Next steps will include developing realistic forecasts of the meteorological events and estimating plausible flooding outcomes. The forecasts can then be provided to expert flood managers from selected river basins so that they can better explore the effects of an ARkStorm.
The storm scenario (Fig. 6) was designed to be larger overall than the largest 20th century events in both Northern and Southern California. Nonetheless, it probably is not as large as the largest observed storm, that of the winter of 1862. Furthermore, recent analyses of AR conditions represented in an ensemble of climate-change projections from the Intergovernmental Panel on Climate Change’s Fourth Assessment suggest that larger-than-historical AR storms in California might become more common in a warming world (Dettinger 2011). If these projections and analyses prove correct, storm sequences like the ARkStorm scenario may become more likely as the century progresses, but even without such change, history indicates that an ARkStorm could occur in almost any winter.
Thanks to Andrew W. Wood, Alan F. Hamlet, and Dennis Lettenmaier for the VIC model, forcings, and parameters. The Delta Science Program’s postdoctoral fellowship provided partial salary support for TD, and support for MH was received from a National Research Council postdoctoral award at NOAA and the USGS Multihazards Program.
This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.