1 Introduction

The 2010 Winter Olympic and Paralympic Games took place 12–28 February 2010 and 12–21 March 2010, respectively, in the Vancouver and Whistler areas of British Columbia, Canada. In order to provide the best possible guidance and support to the Olympic Forecast Team (OFT), Environment Canada developed several experimental numerical weather prediction (NWP) systems for the Vancouver 2010 Games to augment its current operational products: a regional ensemble prediction system, a high-resolution deterministic prediction system, and an external land surface microscale modeling system. An overview of the atmospheric systems is given in Mailhot et al. (2010), Joe et al. (2010) and Isaac et al. (2012), while the land surface forecast system is described in detail by Bernier et al. (2011, 2012). The present paper focuses on the description of the high-resolution deterministic NWP system, which consisted of three nested grids (at 15-, 2.5-, and 1-km horizontal grid spacing).

High-resolution NWP model guidance has been used to support forecasting at many special events in the past. During the special observation period (September–November 1999) of the Mesoscale Alpine Programme (MAP-SOP), the Canadian Mesoscale Compressible Community Model (MC2) was run in real time at 3-km horizontal grid spacing to produce enhanced NWP forecasts over the complex terrain of the Alps (Benoit et al. 2002). The 3-km model generally provided useful guidance for the planning and operations of the aircraft missions. That study also emphasized, however, that the proper simulation of some fine-scale structures and patterns associated with significant weather events over the Alps, such as the Mistral inversion and intense gravity waves, required even higher horizontal (1-km) and vertical resolution.

Several past Olympic Games have also served as opportunities to develop and assess new high-resolution prediction systems. An Olympic weather support system was developed for the 2002 Winter Games in Salt Lake City, Utah (Onton et al., 2001; Horel et al., 2002). Real-time mesoscale numerical modeling was done using the Penn State/National Center for Atmospheric Research Mesoscale Model (MM5) with three nested grids of 36-, 12-, and 4-km horizontal grid-spacings, incorporating observations from the MesoWest network into the near-surface initial conditions. This Olympic system was often found to outperform operational models over complex terrain, due mainly to its improved resolution of orographic features. An advanced system combining a very dense weather observing network and high-resolution NWP modeling was developed for the 2006 Winter Games in Torino, Italy, by the Italian Weather Service (Oberto et al., 2007). It was found that higher model resolution (an additional nested grid at 1.3 km) and assimilation of data from the special observing network increased the accuracy of the MM5 model forecasts, especially near the surface and in the boundary layer (Stauffer et al., 2007).

During the period of the Vancouver 2010 Olympic and Paralympic Games, most competition venues experienced rapidly changing winter weather conditions due to their location near the Pacific Ocean and the surrounding mountains (Isaac et al., 2012). An episode of unusually warm temperatures and heavy rains in the Vancouver area occurred at the beginning of February 2010 just before the Olympic period, causing serious logistical problems for the freestyle skiing events at Cypress Bowl Mountain (Doyle 2012). Local effects also played an important role at several venues, with drainage flows in narrow mountain valleys and terrain-induced upslope flows generating fog and low clouds, heavy snowfall and rapid changes in precipitation types. The comprehensive study of Mo et al. (2012) documented the impacts of mid-mountain clouds on the Whistler alpine skiing competitions. These conditions generally represented major challenges to the forecasters throughout the Olympic and Paralympic Games. Therefore, it is worth examining the potential added value of the high-resolution Olympic forecast system and determining to what extent the high-resolution (1-km) model can improve forecasts over the lower-resolution (15- and 2.5-km) models in winter conditions over coastal complex terrain.

Section 2 provides an overview of the experimental high-resolution NWP system. The enhanced mesoscale observing network set up for the Olympics is described in Sect. 3. Objective verification scores based on this dataset are then discussed in Sect. 4, while Sect. 5 shows several examples and real-time verification results from case studies during the Olympics. Finally, a few concluding remarks are given in Sect. 6.

2 The 1-km Resolution Experimental Prediction System

Compared to mesoscale forecast systems currently operational at the Canadian Meteorological Centre (CMC)—the regional 15-km Global Environmental Multiscale (GEM) model (Mailhot et al., 2006) and the 2.5-km GEM–LAM (Limited Area Model) forecast system (Erfani et al., 2005)—the Olympic prototype system is a higher-resolution system incorporating several modifications to the dynamical core and the physics package. As described in Mailhot et al. (2010), a first version of the system was available to the forecasters during their 2008 and 2009 winter Practicum sessions. The final version of the Olympic prototype then included a few adjustments to this experimental system and was delivered in December 2009 with full operational support in time for the 2010 Olympic and Paralympic Games.

2.1 Dynamical Core

The dynamical core is based on GEM model version 4.0.6 which uses a hybrid terrain-following, log-pressure based vertical coordinate and an updated vertical discretization based on Charney–Phillips staggering. Three one-way nested GEM–LAM grids (at 15-, 2.5- and 1-km grid-spacings; see Fig. 1) are used to achieve the desired high horizontal resolution, providing a fairly good representation of the complex terrain over the area of interest. Our study will focus on a comparison of the performance of the operational regional 15-km model and the higher-resolution 2.5- and 1-km Olympic LAMs, hereafter referred to as REG15, LAM2.5 and LAM1, respectively. The configuration of the runs of the Olympic prototype system is shown schematically in Fig. 2. The system was integrated twice a day, from the 0000 and 1200 UTC REG15 runs (Mailhot et al., 2006). Data assimilation was only used in the REG15 runs, as no special mesoscale data assimilation system is yet available for the high-resolution GEM–LAM grids. The 15-km GEM–LAM grid was integrated for 39 h with a timestep of 7.5 min. The LAM2.5 was integrated for 33 h with a timestep of 60 s. Finally, the LAM1 was integrated for 19 h with a timestep of 30 s. The three model grids had the same vertical configuration with 58 levels and a model top at 10 hPa (approximately 30 km).

Fig. 1

The domains of the high-resolution forecast prototype for the Olympics consisting of a cascade of three one-way nested grids with (a) 15-km (261 × 260 grid points), (b) 2.5-km (344 × 349 grid points), and (c) 1-km (456 × 379 grid points) horizontal grid-spacings covering the Vancouver and Whistler areas. The shading denotes the terrain elevation

Fig. 2

The configuration of the high-resolution modeling prototype for the Vancouver 2010 Winter Olympics. For the 0000 UTC run, the cascade of integrations proceeds as follows: (1) a LAM 15-km run is initialized from the 0-h forecast of the REG15 run started at 0000 UTC (boundary conditions for the LAM integration are also provided by the REG15 run) and integrated for 39 h until 1500 UTC the following day; (2) a LAM2.5 run is initialized at 0600 UTC from the 6-h forecast (allowing for the model spin-up period) of the GEM–LAM 15-km run started at 0000 UTC (which also provides the boundary conditions) and integrated for 33 h until 1500 UTC the next day; (3) the LAM1 run is then initialized at 1100 UTC from the 5-h forecast of the LAM2.5 run (which also provides the boundary conditions) and integrated for 19 h until 0600 UTC the next day (i.e. from 0300 to 2200 local time). A slightly modified procedure is repeated for the REG15 run starting at 1200 UTC to provide the Olympics cascade (15, 2.5, and 1-km) forecasts valid for the afternoon and evening (from 2000 UTC to 1500 UTC, i.e. from 1200 to 0700 local time)

The Olympic system used the time-dependent adjustable topography procedure developed for MC2 (Benoit et al., 2002) and also implemented in GEM (McTaggart-Cowan et al., 2010). This procedure (also dubbed “growing orography”) consists of adjusting the orographic height over the first few hours of integration in order to reduce the interpolation/extrapolation problems associated with an abrupt change of topography at the beginning of the simulation. In the Olympic system, this procedure was applied during the first 3 h of integration of the 2.5-km grid (starting from the 15-km grid orography) and during the first hour for the 1-km grid (starting from the 2.5-km grid orography).
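A minimal sketch of this kind of time-dependent blending is given below (in Python); the linear ramp and the function itself are illustrative assumptions, not the actual MC2/GEM implementation.

```python
import numpy as np

def blended_orography(z_driver, z_target, t_hours, t_grow_hours):
    """Orography used at forecast hour t_hours during the 'growing' period.

    z_driver     : driving-grid orography interpolated to the LAM grid (2-D array)
    z_target     : full-resolution LAM orography (2-D array)
    t_grow_hours : length of the growth period (3 h for LAM2.5, 1 h for LAM1)
    """
    # Weight increases from 0 at initialization to 1 at the end of the growth
    # period, after which the full high-resolution topography is used.
    alpha = float(np.clip(t_hours / t_grow_hours, 0.0, 1.0))
    return (1.0 - alpha) * z_driver + alpha * z_target
```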

The Olympic prototype was run on the IBM pSeries 690 supercomputer installed at CMC. The full run (including the production of the model output package) typically took about 2 h of wall-clock time (52 min for the LAM2.5 grid on 256 CPUs and 68 min for the LAM1 grid on 320 CPUs). Timely delivery of model output products was ensured for the daily morning and early afternoon weather briefings of the OFT, which were held around 0700 and 1200 local time, respectively.

2.2 Physics Package

The Olympic prototype included several improvements to the operational physics package, in particular, to the geophysical fields and to the radiation and cloud microphysics schemes. A special emphasis was put on developing several new diagnostic model outputs that could be very useful to the forecasters, such as snow-to-liquid ratio (density of falling snow), visibility, and wind gusts. More detailed geophysical fields (orography, land-sea mask, soil and vegetation types, and surface roughness length) were generated from a variety of very-high-resolution geophysical databases newly available at CMC (going down to a 90-m horizontal grid spacing, for instance, in the case of the SRTM-DEM database—the Shuttle Radar Topography Mission-Digital Elevation Model).

The Olympic system used the radiative transfer scheme of Li and Barker (2005) which was recently included in our physics library. This new radiation package produced more realistic near-surface temperature forecasts by reducing the cold bias noted during winter conditions, and allowed a better representation of cloud-radiation interactions (detailed cloud optical properties, liquid/solid partition, etc.).

Cloud microphysical processes and precipitation were parameterized using the two-moment version of the Milbrandt–Yau bulk microphysics scheme (Milbrandt and Yau 2005). The scheme predicts the mass mixing ratio and total number concentration of six hydrometeor categories: cloud (non-sedimenting droplets), rain (drizzle and large drops), ice (pristine crystals), snow (large crystals/aggregates), graupel (heavily rimed snow), and hail (frozen drops and hail). The two-moment approach leads, in principle, to more accurate calculations of microphysical growth/decay rates and sedimentation (i.e. precipitation) compared to one-moment schemes, which typically predict only hydrometeor mixing ratios (Milbrandt and McTaggart-Cowan 2010). It also allows for better diagnosis of particle types (for example, the distinction between drizzle and rain) since the particle size distribution spectra are better represented and mean particle sizes are not simply one-to-one functions of the mixing ratios. To the authors’ knowledge, this is the first time a full two-moment microphysics scheme has been used for this type of operational forecast system.

Amongst several modifications to details of the microphysical processes themselves, a new method was developed to predict the instantaneous snow-to-liquid ratio (SLRinst) of precipitation directly from the microphysics scheme (Milbrandt et al., 2011). The method exploits the fact that “snow”, as an observer would call it (i.e. frozen, white precipitation), is represented as the sum of various hydrometeor categories in the scheme (ice, snow, and graupel) and that the snow category itself has a realistic bulk density that is inversely proportional to its size, which is in turn well simulated by a two-moment scheme. The method thereby removes the need to make any assumptions about an average snow-to-liquid ratio (SLR) such as the commonly used “10-to-1” rule, or estimates of this quantity based on available profiles (Roebber et al., 2003). Instead, it explicitly predicts the instantaneous unmelted volume flux as well as (independently) the liquid-equivalent flux. Ultimately, the unmelted snowfall amount is thus obtained. The ratio of the unmelted quantity to the liquid-equivalent quantity (i.e. the QPF) gives the SLR for a given snowfall event.
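In schematic form (the complete formulation, including the size-dependent snow bulk density, is given in Milbrandt et al., 2011), the diagnostic can be expressed as the ratio of the total unmelted volume flux to the total liquid-equivalent flux:

$$ {\text{SLR}}_{\text{inst}} = \frac{\rho_{w}\left( R_{i}/\rho_{i} + R_{s}/\rho_{s} + R_{g}/\rho_{g} \right)}{R_{i} + R_{s} + R_{g}} $$

where $R_i$, $R_s$ and $R_g$ are the liquid-equivalent precipitation rates of the ice, snow and graupel categories, $\rho_i$, $\rho_s$ and $\rho_g$ their bulk densities, and $\rho_w$ the density of liquid water; for pure graupel with a constant bulk density of 400 kg m−3, this reduces to 1000/400 = 2.5.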

The visibility through liquid fog, rain, and/or snow was provided using prognostic hydrometeor fields and the empirically-based parameterizations of Gultepe and Milbrandt (2007, 2010). Visibility through fog is parameterized from the prognostic cloud droplet mixing ratio and number concentration; visibility through drizzle/rain and through snow is computed from the precipitation rates of the rain and snow categories, respectively. Also, the diagnostic cloud-base height and snow level, based on thresholds of mixing ratios and mean-particle sizes for cloud/ice and snow, respectively, were provided as guidance for the OFT.
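As a rough illustration of how such empirically based power-law relations are applied at a grid point, a sketch is given below; the coefficients are placeholders and only the functional structure follows the text, not the published Gultepe–Milbrandt formulations.

```python
def visibility_km(q_fog_gm3, n_fog_cm3, rain_rate_mmh, snow_rate_mmh,
                  a_fog=1.0, b_fog=0.65, a_rain=3.0, b_rain=0.7,
                  a_snow=1.0, b_snow=0.8):
    """Illustrative power-law visibility diagnostic at one grid point (km).

    Coefficients are placeholders, NOT the published Gultepe-Milbrandt values;
    visibility decreases with cloud water content, droplet number, and
    precipitation rate, as described in the text.
    """
    eps = 1e-6
    vis_fog = a_fog / max(q_fog_gm3 * n_fog_cm3, eps) ** b_fog   # droplet mass and number
    vis_rain = a_rain / max(rain_rate_mmh, eps) ** b_rain        # rain rate
    vis_snow = a_snow / max(snow_rate_mmh, eps) ** b_snow        # snow rate
    # Combined here, for illustration, as the most restrictive of the three.
    return min(vis_fog, vis_rain, vis_snow)
```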

Winds near the surface are strongly influenced by surface-layer turbulence due to roughness elements and surface forcings, and can generally be described by Monin–Obukhov similarity theory supplemented with convective scaling considerations (Wyngaard and Coté 1974). The variances (or standard deviations) of the 10-m horizontal wind speed and direction can then be computed from the surface-layer turbulent variables. The derivation is given in the Appendix. Surface wind gusts can also result from the deflection of air parcels flowing in the boundary layer that are brought down to the surface by the large energetic turbulent eddies. A physical model for this mechanism has been proposed by Brasseur (2001) to estimate wind gusts, together with lower and upper bounds of confidence interval for the accuracy of these estimates. The method computes the wind gusts by assuming that an air parcel flowing at a given height will be able to reach the surface if the average turbulent kinetic energy of the large eddies is sufficient to overcome the negative buoyancy effects due to the boundary layer thermal stratification. The method has been applied in mesoscale models under various conditions, including severe windstorm events (Brasseur 2001). It has been found to perform well over both flat and complex terrain, with the skill of the method being mainly limited by the accuracy of the boundary layer wind forecasts from the mesoscale models.
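A minimal sketch of the parcel-deflection idea for a single model column is given below (in Python); it is a simplification under assumed input profiles, not the full Brasseur (2001) formulation with its confidence bounds nor the operational GEM diagnostic.

```python
import numpy as np

def gust_estimate(z, wind, tke, theta_v, g=9.81):
    """Simplified Brasseur-type surface gust estimate for one model column.

    z       : heights of model levels above ground (m), increasing upward
    wind    : wind speed at those levels (m/s)
    tke     : turbulent kinetic energy at those levels (m2/s2)
    theta_v : virtual potential temperature at those levels (K)

    A parcel at level k is assumed to reach the surface when the mean TKE of
    the layer below it exceeds the buoyant energy opposing its descent; the
    gust is the largest wind speed among the levels satisfying this criterion.
    """
    gust = wind[0]  # lower bound: the gust is at least the near-surface wind speed
    for k in range(1, len(z)):
        mean_tke = np.trapz(tke[:k + 1], z[:k + 1]) / z[k]
        # Energy needed to bring the (warmer) parcel down through the stable layer
        dtheta = np.maximum(theta_v[k] - theta_v[:k + 1], 0.0)
        buoyant_energy = np.trapz(g * dtheta / theta_v[:k + 1], z[:k + 1])
        if mean_tke >= buoyant_energy:
            gust = max(gust, wind[k])
    return gust
```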

2.3 Customized Output Package

With the help of the OFT following the Practicum periods of winters 2008 and 2009, a comprehensive list of useful model products was finalized together with specifications related to their most appropriate display formats. Table 1 gives the list of these model outputs, which were displayed in various formats such as 2D maps, time series or meteograms at a number of surface stations, vertical cross-sections along specific lines, and vertical soundings at standard and additional Olympic locations. Several examples of these outputs will be discussed in Sect. 5.

Table 1 List of the main model outputs

3 The OAN Observational Dataset

A special mesoscale observing network was set up at the end of 2007 to provide enhanced monitoring and forecaster training prior to the Olympic Games. An overview of the main Olympic measurement sites and their instrumentation is given in Joe et al. (2010, 2012), Isaac et al. (2012) and Mailhot et al. (2010). These sites make up the Olympic Autostation Network (OAN) of more than 40 standard and special surface observing sites (manual and automatic stations) with hourly or synoptic reports. The OAN provided an unprecedented mesoscale observational dataset over complex terrain during wintertime in Canada.

Mailhot et al. (2010) took advantage of this wealth of information from the OAN observations to make an objective verification of the preliminary version of the high-resolution Olympic NWP system, using a limited sample of significant weather cases from the winter of 2008. Objective verification error scores indicated marked improvements for daytime 10-m wind speeds in the LAM1 model as compared to the LAM2.5 and the REG15 models, while for 2-m temperatures both the LAM1 and the LAM2.5 configurations showed important improvements compared to the REG15 model.

In the present study, the OAN data are used to evaluate the guidance generated by the REG15, LAM2.5 and LAM1 models over the full Olympic and Paralympic period. The use of a longer period and the operational configurations of the NWP systems allows for the development of a more robust set of conclusions than those presented by Mailhot et al. (2010).

4 Verification of Near-Surface Meteorological Variables

Objective error statistics of wind speed and direction, air temperature, and dewpoint temperature have been computed using the OAN dataset from the 40-day period of 12 February to 23 March 2010. Bicubic interpolation of model outputs to observation sites is used. Two scores are used to evaluate the systems’ performance: bias is defined as,

$$ {\text{Bias}} = \frac{1}{N}\sum\limits_{i = 1}^{N} {(P_{i} - O_{i} )} $$

while standard error is defined as,

$$ {\text{SE}} = \left[ \frac{1}{N}\sum\limits_{i = 1}^{N} (P_{i} - O_{i})^{2} - {\text{Bias}}^{2} \right]^{1/2} $$

Here, $P_i$ is the model-predicted value and $O_i$ is the observed value for each of the $i = 1, \ldots, N$ observations. Confidence intervals were also computed using a block bootstrapping method (Candille et al., 2006) with 2000 re-sampling iterations in blocks of three consecutive days; following Goldstein and Healy (1995), intervals of plus and minus 1.39 standard deviations were used, corresponding to the 8.2 and 91.8 % lower and upper bounds and an 84 % confidence interval. Only the results from the 0000 UTC cascade runs are shown; the conclusions from the 1200 UTC runs are essentially the same. The common verification window against observations for the three models corresponds to the 19-h period valid from 1100 to 0600 UTC the next day (0300–2200 local time). Note that during the period of the Olympics, the sunlight hours were from about 1600 to 0200 UTC the next day (0800–1800 local time).
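For reference, a minimal sketch (in Python, not the operational verification code) of how these scores and the block-bootstrap interval can be computed from paired forecast–observation arrays tagged with their calendar day is given below; block alignment and other details may differ from the actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def bias_and_se(pred, obs):
    """Bias and standard error (SE) of the forecast errors, as defined above."""
    err = np.asarray(pred) - np.asarray(obs)
    bias = err.mean()
    se = np.sqrt((err ** 2).mean() - bias ** 2)
    return bias, se

def block_bootstrap_ci(pred, obs, day_index, n_iter=2000, block_days=3,
                       q_lo=8.2, q_hi=91.8):
    """84 % confidence interval on the bias by block bootstrapping.

    day_index gives the calendar day of each (pred, obs) pair; blocks of three
    consecutive days are resampled with replacement, as described in the text.
    """
    pred, obs, day_index = map(np.asarray, (pred, obs, day_index))
    days = np.unique(day_index)
    blocks = [days[i:i + block_days] for i in range(0, len(days), block_days)]
    idx_by_block = [np.flatnonzero(np.isin(day_index, b)) for b in blocks]
    stats = []
    for _ in range(n_iter):
        chosen = rng.integers(len(blocks), size=len(blocks))
        idx = np.concatenate([idx_by_block[c] for c in chosen])
        stats.append(bias_and_se(pred[idx], obs[idx])[0])
    return np.percentile(stats, [q_lo, q_hi])
```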

The 10-m wind speed bias (Fig. 3) shows significant improvements in the higher-resolution models over REG15 during the day (LAM1 has virtually no bias, while the REG15 winds are too weak by about 0.5 ms−1), but nighttime winds are slightly too strong, especially in LAM1. All models have similar standard errors, on the order of 1.4 ms−1, with a slight improvement in LAM1 during most of the period. As shown in Fig. 4, the 10-m wind direction appears more difficult to forecast, with all models having rather large standard errors between 40° and 50°. The direction biases are much smaller, however, and do not indicate any systematic errors.

Fig. 3

Time evolution (0–19 h forecasts) of objective verification scores [bias (a) and standard errors (b)] against the OAN for the 40-day period of 12 February–23 March 2010 for 10-m wind speed (in ms−1). Shading represents the 84 % confidence interval, thus a separation of the shaded backgrounds implies statistical significance at the 84 % level

Fig. 4

Time evolution (0–19 h forecasts) of objective verification scores [bias (a) and standard errors (b)] against the OAN for the 40-day period of 12 February–23 March 2010 for 10-m wind direction (in degrees). Note that light winds below 1.5 ms−1 are not taken into account in the verification of wind direction and the sample size is then reduced in this case

For 2-m air temperatures (Fig. 5), both LAMs greatly improve on REG15, with a reduction of the cold bias by more than 1 °C during the day. Standard errors are also much lower in the LAMs, by almost 1.5 °C throughout the period, with the LAM1 model being even better than LAM2.5, especially during the day. To better understand these differences, a histogram of the 2-m air temperature error distribution has been computed from the model forecasts valid at 1200 UTC (0400 local time), corresponding approximately to the time of minimum temperatures. As indicated in Fig. 6, large temperature errors are much reduced with the higher-resolution models. For instance, warm errors of more than 3 °C occurred 70 times with REG15, compared to 14 times with LAM2.5 and only four times with LAM1. Corresponding cold errors larger than 3 °C were found in 206 cases for REG15, but in only about half this number for the LAMs. For the 2-m dewpoint temperature (Fig. 7), all models are too dry (with biases reaching −2 °C), except in the afternoon when biases are quite small. There is a slight reduction of bias with the higher-resolution models during the morning hours, but they are worse overnight. In contrast, standard errors indicate significant improvements of more than 1 °C with the LAMs, similar to the results found for 2-m temperatures (cf. Fig. 5).

Fig. 5

Time evolution (0–19 h forecasts) of objective verification scores [bias (a) and standard errors (b)] against the OAN for the 40-day period of 12 February–23 March 2010 for 2-m air temperature (in °C)

Fig. 6

Histogram of bias distribution for 2-m air temperature of REG15 (in blue), LAM2.5 (in red) and LAM1 (in green) forecasts valid at 1200 UTC against the OAN for the 40-day period of 12 February–23 March 2010. The number of events (vertical axis) is indicated for bin intervals of 2 °C (horizontal axis)

Fig. 7

Time evolution (0–19 h forecasts) of objective verification scores [bias (a) and standard errors (b)] against the OAN for the 40-day period of 12 February–23 March 2010 for 2-m dewpoint temperature (in °C)

In summary, objective verification scores generally indicate that the higher-resolution models add significant value to guidance in these winter conditions over complex terrain for near-surface variables, such as wind speeds, air and dewpoint temperatures. In addition, the 1-km LAM often provided the best forecast accuracy, especially in terms of the smallest standard errors. All model configurations exhibit appreciable standard errors in wind direction and tend to be too dry near the surface except in the afternoon. Similar conclusions were reached by Chen et al. (2012) and Isaac et al. (2012) in their comparative verifications of several high- and lower-resolution models which were run during the 2010 Olympics.

5 Examples of Olympic Forecasts and Verifications

A thorough objective verification of new model products (e.g. snow-to-liquid ratio, visibility) poses a greater challenge than for traditional variables. Such verification is in progress and will be reported in the future. Meanwhile, OAN observations allowed real-time subjective assessment of several model outputs. A few examples of such real-time verification of the Olympic prototype through the Science of Nowcasting Olympic Weather for Vancouver 2010 (SNOW-V10) official website are presented here.

5.1 Instantaneous Snow-to-Liquid Ratio

Since the method for diagnosing SLR presented in Sect. 2.2 was experimental, official forecasts of snowfall amounts during the 2010 Games were not based on the proposed technique. However, the experimental SLRinst was made available to the forecasters to provide an opportunity for examination and subjective evaluation. Figure 8 shows an example of some of the available images for 23 February 2010. The model SLRinst values are seen to vary considerably in space and time, a behavior that was found to be typical of this coastal region in which the influence of complex orography had a dramatic impact on local temperatures and microphysical processes.

Fig. 8

Time series (a–c) at Cypress Bowl South station (VOG) from the LAM1 run for 23 February 2010 and snapshots of SLRinst for the LAM1 domain at (d) 2300 UTC (1500 local time) and (e) 0400 UTC (2000 local time). Total precipitation rate (including liquid) in (a). Graupel precipitation rate (dark blue), snow precipitation rate (medium blue), and ice precipitation rate (light blue) in (b). The red curve in (c) denotes SLRinst. The arrow in (d) and (e) indicates the location of the VOG station and warm (cold) shaded colors denote large (small) values of SLRinst

While there were no attempts to measure the SLRinst for the case depicted in Fig. 8, one of the OFT forecasters on site (Michel Gélinas) made the subjective observation that the precipitation falling at Cypress Bowl South consisted predominantly of “large, fluffy snowflakes” in the mid-afternoon and of “fast falling (like rain) snow pellets” in the early evening. This corresponds very closely to the model SLRinst (Fig. 8c–e), which predicted values near 20 during the afternoon, consistent with low-density aggregates, and values near 5 in the evening, approaching the value corresponding to pure graupel in the model (i.e., 2.5 for the constant bulk graupel density of 400 kg m−3). For this case, the rapid transition from large to small SLRinst was due to a riming period leading to the dominant model solid-phase category switching from snow to graupel near 0300 UTC, as indicated in the meteograms in Fig. 8b.

5.2 Comparison of Model Visibility to Observations

An illustration of the products generated with outputs from the model visibility parameterization is shown in Fig. 9. The model both overforecast and underforecast the poor visibility at the freestyle skiing venue in the hours prior to and during the women’s aerial final on 24 February 2010. For the period when the model predicts both snow and fog (from 1200 UTC to approximately 1800 UTC), the observations are relatively well matched by the visibility reduction due to snow alone, while the total visibility reduction is overforecast (total model visibility too low). However, once the modeled snow stops and the visibility is reduced by fog or liquid precipitation only, the model visibility is roughly a factor of 10 too high (the visibility reduction is underforecast), even though the forecast visibility at 2300 UTC drops below its earlier values. Just before the competition (before 1900 PST/0300 UTC 25 February), visibilities were so poor that spectators could barely see the ski jump. During the competition itself, visibilities improved enough for the event to be held (see the last observations between 0300 UTC and 0400 UTC). Although not perfect, the high-resolution forecasts could still be used by the forecasters, who adjusted them against observations and built a conceptual model: if it snows, visibilities should not be as poor as forecast because of scavenging, but if it does not snow, conditions could be worse than forecast.

Fig. 9

Time series (a, b) at Cypress Bowl South station (VOG) from the LAM1 run for 24 February 2010. In (a), relative humidity (blue) and cloud-base height (green, in m AGL) from 1200 UTC 24 February to 0600 UTC 25 February. In (b), reduction of visibility (in m) in fog (orange), rain (green), snow (blue), and all three combined (dashed) for the same period. In (c), visibility (in m) from observations (FD12P, green dots, and Parsivel, cyan dots) against model data (LAM1 in red, LAM2.5 in magenta, nowcast based on REG15 data in blue, nowcast based on LAM1 data in orange) from 0400 UTC 24 February to 0400 UTC 25 February. The freestyle skiing women’s aerial final was held approximately between 0300 UTC and 0500 UTC 25 February, at the far right end of the figure. The dashed blue lines show the common period covered by the graphs

Another case study in reduced visibility, this time taken after the Olympic period, is shown in Fig. 10. The model exhibited skill in predicting the reduction in visibility due to fog, though model visibility was slightly too high. In general, when the timing of the large-scale weather systems was handled well by the model, and the forcing for production of liquid water was resolved (e.g., due to upslope flow), the parameterized visibility from the model compared quite favorably to the measurements.

Fig. 10

Time series of observed and modeled visibility (in m) at Whistler Mountain station (VOA) on 3 May 2010. The green (cyan) dots depict instantaneous measurements from the FD12P (Parsivel) instruments. The curves depict model visibility (at the lowest prognostic model level) from various models/parameterizations as in Fig. 9c. The red curve corresponds to the visibility in LAM1 from the parameterization described in the text

Other cases of reduced visibility due to mid-mountain cloud on the Whistler Mountain, the so-called Harvey’s Cloud, are discussed in Mo et al. (2012). Comparisons with observations indicated that the precipitation and visibility forecasts from the LAM1 model were relatively successful in describing the evolution of the mid-mountain cloud events.

5.3 Added Value of High Resolution in Mountainous Terrain: A Squall Line on 14 February 2010

The most profitable use the OFT forecasters could make of the high-resolution LAMs was not to follow them in a purely deterministic fashion. Only once the collaborative forecast discussion of the OFT determined that the driving REG15 guidance was of sufficient quality could the on-site forecasters have some confidence in the more precise guidance from the LAM2.5 and LAM1 models. They would then adjust their site-specific forecast, taking into account the demands of the particular sport. A good example of this occurred on 14 February 2010, when a snow squall moving through the nordic ski venue disrupted the nordic combined competition. Figure 11 shows a meteogram from the 14 February run of LAM1 for the nordic ski venue site. It suggested a squall-line passage around 1700–1800 UTC, with falling temperatures, increasing cloudiness, a few millimeters of liquid-equivalent precipitation, and a rise in the estimated wind gusts. Although the driving REG15 model 0000 UTC integration (not shown) was not forecasting any measurable precipitation at the time of the competition (approximately 1700–2000 UTC), both the LAM2.5 and LAM1 models were predicting precipitation during that period. Because the forecasters trusted the larger-scale features and the overall unstable conditions forecast by the driving model, they felt confident in following the guidance of the higher-resolution integrations. After examining the latest observations and discussing all available NWP forecasts during the collaborative forecast discussion, the venue forecasters correctly interpreted that a squall-line passage was quite likely and could cause delays to the jumping portion of the event, but also that it would not occur exactly at 1700–1800 UTC. According to the ski-jump venue forecaster Andrew Teakles (extract from the SNOW-V10 blog entry of 16 February):

Fig. 11

Time series for the nordic ski venue (Callaghan Valley station, VOD) from the LAM1 run for 14 February 2010 of (a) 2-m air temperature (black) and dewpoint temperature (red), (b) cloud cover, (c) precipitation (liquid water equivalent, 30-min accumulations plotted with bars and integrated total precipitation plotted with a dark green line), and (d) 10-m wind speed (black), estimated wind gust (red), and wind direction (black arrows)

“An organized line of convection was noticed during the morning workup associated with upper support from a strong vorticity center aloft. Carl [Dierking] and I decided that this was the most important feature of the day and would likely drastically change the winds on the [ski jump]. During the briefing to the race official around 10 am [18 UTC], we emphasized the risk of turbulent wind… We advised that the current [favorable] conditions would last for about 1/2 h…. [We] had estimated the squall passing through at 1900 [UTC]. Unfortunately, the last round of the competition was already underway…. The jumps finished around 1905 [UTC] and the officials were wondering where the headwinds we were calling for were. At 1910 [UTC], the squall come through the site and gave us heavy wet flurries and strong headwind gusts.”

5.4 Temperatures Along the Alpine Ski Slopes

Another example of the added value of the high-resolution models can be found in the large differences in model temperatures on mountain slopes, as shown in Fig. 12 for the alpine ski venue on 13 February 2010, one of numerous days on which precipitation phase changes were among the main challenges at this venue. Note that for all NWP point forecast products (meteograms) from the REG15, LAM2.5, and LAM1 models, the gridpoint associated with a given observation site was chosen subjectively (by André Giguère, a member of the CMC NWP development team who was also part of the OFT) to be the most representative of the site, based on its elevation and its position relative to the surrounding topography. In the case presented here, each of the three observation sites was represented by a different gridpoint for the LAM2.5 and LAM1 models, but by only two different gridpoints for the REG15 model (note also that these selections may differ from the interpolation procedure used for the objective verification discussed in Sect. 4). In general, temperatures at the three sites in LAM1 were closer to the observed temperatures. This was often of great help in determining at what level and/or over what period of time a phase change of the precipitation falling along the alpine ski run would occur.

Fig. 12

Time series of observed and modeled 2-m air temperatures (in °C) at three sites of the alpine ski venue [VOA, downhill top, elevation 1,640 m (a), VOL, “mid-station”, 1,320 m (b), and VOT, competition finish, 800 m (c)] on 13 February 2010. Station observations (green dots) and model data from the REG15 (blue), LAM2.5 (magenta) and LAM1 (red) models, from 0400 UTC 13 February to 0400 UTC 14 February. A discontinuity in the model data lines indicates a change to the most recent model integration; the REG15 model is integrated 4 times a day while the LAM2.5 and LAM1 models only twice a day (driven by the most recent 0000 or 1200 UTC REG15 integration)

5.5 Diurnal Winds at Ski Jump Competition Site

The ability of the high-resolution models to accurately reproduce the diurnal wind cycle in the absence of large-scale forcing was appreciated by the forecasters at the ski jump venue. A typical daytime pattern of wind-flow reversal and wind gustiness is seen at the Callaghan Valley station VOW on 5 March 2010 in Fig. 13a, b, where the forecast wind speed and direction of the LAM2.5 and LAM1 models are shown along with observations. The figure shows strengthened, gusty winds as overnight drainage winds are replaced by thermally driven up-valley winds during the daytime period from 1700 UTC 5 March to 0100 UTC 6 March (0900–1700 local time). These conditions were well depicted by LAM1, in particular by the estimated standard deviation of the wind speed, which appears on the model meteogram (Fig. 13c) as a distinctive “lip” pattern, and by the change in the forecast wind direction. Note that in this type of situation the estimated wind gust usually takes its minimum value, which is equal to the forecast 10-m wind speed. Other examples of the usefulness of the wind forecasts from the LAM1 model at the ski jump site are discussed by Teakles et al. (2012).

Fig. 13

Time series of observed and modeled (a) 10-m wind speed (in ms−1) and (b) wind direction (in degrees) in Callaghan Valley at the ski jump top station (VOW, 940 m) on 5 March 2010. Station observations (green dots) and model data from the REG15 (blue), LAM2.5 (magenta) and LAM1 (red) models, from 0400 UTC 5 March to 0400 UTC 6 March 2010. In (c), time series from the LAM1 run for 5 March 2010 of the 10-m wind speed (black, in knots) and estimated gust (red, in knots), both coinciding most of the time, with ±1 standard deviation of the 10-m wind speed (light blue and pink areas), which displays the typical “lip” pattern giving a similar range of values to the observed winds in (a) and (b). The dashed blue lines show the common period covered by the graphs

5.6 Sharp Frontal Passage on 7 March 2010

Despite the lack of steep terrain in the immediate surroundings of Vancouver International Airport (YVR), the higher-resolution models added sharpness to the forecast of events such as the frontal passage observed on 7 March 2010. Figure 14 shows forecast and observed wind (speed and direction), temperature and precipitation rate at the YVR site: the higher-resolution models (LAM2.5 and LAM1) better predict the abrupt changes in temperature and wind and the prefrontal precipitation, whereas the driving REG15 model displays smoother transitions and large errors in wind direction. Although this date fell between the Olympic and Paralympic portions of the Games and no competitions were held, the forecasters arriving for the Paralympic period could evaluate the models’ performance for this event and build confidence in the higher-resolution models.

Fig. 14

Time series at Vancouver International Airport (YVR) on 7 March 2010. Station observations (green dots) and model data from the REG15 (blue), LAM2.5 (magenta) and LAM1 (red) models, from 0600 UTC 7 March to 0600 UTC 8 March 2010, for (a) wind direction (in degrees), (b) wind speed (in ms−1), (c) temperature (in °C), and (d) precipitation rate (in mm h−1; observations from three instruments, FD12P, Parsivel and “hot plate”, in green, blue and brown dots, respectively)

In summary, the higher-resolution models provided enhanced guidance to the on-site forecasters and helped them to adjust their forecasts, with better timing of precipitation phase change, squall line passage, wind flow reversal, and visibility reduction due to fog and snow, among other things. The real-time subjective evaluation by the OFT and the SNOW-V10 website allowed forecasters to gain confidence in the reliability of the high-resolution Olympic prototype and highlighted the added value of the new model outputs.

6 Concluding Remarks

As in previous Winter Games, the Vancouver 2010 Games presented a unique opportunity to serve as a testbed for the development and evaluation of new NWP products and to leave a significant legacy of improved high-resolution NWP systems. Continuous scrutiny of the experimental prototype products by experts proved to be quite beneficial for model development. The advanced system was used daily during the Games by the OFT in their internal weather discussions and during briefings with competition venue managers and team coaches, especially for weather-sensitive events such as alpine skiing, freestyle skiing aerials and ski jumping. Our objective verifications clearly indicated an added value of the higher-resolution Olympic prototype with respect to the usual operational CMC products. Furthermore, subjective evaluations showed that this system was reasonably skillful at forecasting fine-scale meteorological phenomena.

The model improvements in the experimental system formed the basis for a recent major upgrade to the LAM 2.5-km system running at CMC operations. This should help to increase Environment Canada’s predictive capability for high-impact winter weather in complex alpine terrain. Finally, the unique experience gained during the Vancouver 2010 Olympic Games with our high-resolution NWP system will also promote Canadian participation in the upcoming FROST-2014 (Forecast and Research in the Olympic Sochi Testbed) project, which will be held during the Sochi 2014 Winter Olympic and Paralympic Games.