Pure and Applied Geophysics

Volume 176, Issue 5, pp 2057–2079

R2O Transition of NCAR’s Icing and Turbulence Algorithms into NCEP’s Operations

  • Hui-Ya Chuang
  • Yali Mao
  • Binbin Zhou


The National Centers for Environmental Prediction (NCEP) started distributing global operational gridded in-flight icing, turbulence, and convective cloud products as part of the World Area Forecast System (WAFS) product suite in 2007. During this early stage, simple algorithms were used to derive these products from NCEP Global Forecast System (GFS) forecasts. These products quickly became an essential flight planning tool for the international aviation community and are especially important to developing countries that do not have the resources to run numerical models themselves. To further improve these products, the Environmental Modeling Center (EMC) started collaborating with the National Center for Atmospheric Research (NCAR) to transition their aviation research algorithms into NCEP's operations (R2O), particularly the Forecast Icing Potential (FIP) and Graphical Turbulence Guidance (GTG) algorithms. The initial effort applied FIP to GFS forecasts as a potential replacement for the WAFS icing product. Extensive evaluation demonstrated that FIP outperformed the original WAFS icing product and, with support from the Aviation Weather Center (AWC) and the Federal Aviation Administration (FAA), EMC replaced the US WAFS icing product with FIP in 2015. EMC also recently implemented GTG with the 2017 GFS upgrade, but GTG will not replace the WAFS turbulence product until 2019. This paper describes the methodology EMC used to transition NCAR's aviation research algorithms into NCEP's operations. It also describes how EMC generates icing analysis data to be used as truth for objective verification. Several case studies are presented, and the methodology and results of the objective validation are discussed. Finally, plans for future collaboration with NCAR and for implementations that continue to improve WAFS products are stated.


Keywords: aviation R2O · aviation product improvement · World Area Forecast System (WAFS) · global gridded icing forecast · global gridded turbulence forecast · global gridded icing analysis · verification of in-flight icing forecast · verification of turbulence forecast · improvement of global icing forecast · improvement of global turbulence forecast · collaboration between EMC and NCAR · case studies for in-flight icing forecast · case studies for turbulence forecast · FIP · GTG · FIS · CIP · G-FIP · G-GTG · G-FIS · G-CIP · speed-up of GTG algorithm · speed-up of R2O process · collaborative partnership · efficient R2O transition

1 Introduction

There are two World Area Forecast Centers (WAFCs), Washington and London, each providing real-time meteorological information broadcasts for aviation purposes and backing each other up. These broadcasts are supervised by the International Civil Aviation Organization (ICAO) to fulfill the requirements of ICAO Annex 3 covering meteorological information necessary for flight (International Civil Aviation Organization 2016). WAFC Washington provides ICAO with global forecasts of temperature, wind, height, relative humidity, and precipitation using forecasts from its Global Forecast System (GFS). In response to ICAO's initiatives to improve global flight safety, both WAFCs started to generate global gridded forecasts of flight hazard products, including in-flight icing, turbulence, and cumulonimbus clouds out to 36 h, as part of WAFS products in 2007 (Trojan 2007; Turp 2006).

In-flight icing (Gultepe 2018) occurs when aircraft fly through supercooled clouds and cloud droplets freeze and build up on the leading edges of the wings. The ice alters airflow over the wings and tail, reduces the lift force that keeps the plane in the air, and can potentially cause aerodynamic stall, a condition that can lead to temporary loss of control. Fortunately, most modern large commercial aircraft are equipped with deicing equipment, which requires burning additional fuel. Therefore, most commercial aircraft can fly through a predicted in-flight icing environment as long as they know ahead of time to carry enough fuel, which makes prediction of in-flight icing important for flight planning. It is noted that aircraft icing due to high-altitude ice water content can also be problematic and is the subject of many studies, although it is outside the scope of this study. Another issue important for flight planning is being able to avoid flying through severe turbulence, hence the importance of being able to predict such events.

Although international airlines quickly adopted these new gridded WAFS forecast products in their daily flight planning, occasional large differences between the products generated by the two WAFCs confused ICAO users. These differences are attributed not only to differences between the two global forecast models but also to the different aviation algorithms used to derive the products. The short-term solution was for both centers to start issuing the same single authoritative blended WAFS products in 2011. For the icing and turbulence forecasts, the blending method used by both centers simply takes the average of the two centers' products for the mean products and the maximum for the maximum products. More details are described in Sect. 2.5. It is noted here that blending is only done on the WAFS icing, turbulence, and cumulonimbus cloud products. The two WAFCs have been working on comparing and possibly unifying their icing and turbulence algorithms; more details will be provided later.

The first WAFC Washington aviation hazard products, implemented in 2007, were generated by a post-processing package that used low-resolution GFS output with 50 km horizontal and 50 hPa vertical resolution. This was the highest resolution available to the public at the time, although the model's native resolution was approximately 35 km. The algorithm used to derive the icing product was a simplified version of NCAR's Forecast Icing Potential (FIP) algorithm (Bernstein et al. 2005; Wolff 2009), while the Ellrod Index (Ellrod 1992) was used to derive the turbulence product. Therefore, the WAFC Washington turbulence product cannot explicitly predict the mountain wave component and has relied on WAFC London to predict mountain wave turbulence in the blended products. In addition to this shortcoming, aviation products derived from the low-resolution data appeared to lose skill quickly toward cruise altitude. More comparisons of the old WAFS icing and turbulence algorithms with the new ones are described in Sect. 3 on verification results. The cumulonimbus cloud product for WAFC Washington is derived within the GFS model using the Slingo algorithm (Slingo 1987). The Aviation Weather Center (AWC) was recently tasked with improving the WAFS cumulonimbus cloud product; therefore, improvement of this product will not be discussed in this paper.

To further improve the WAFC Washington aviation hazard products, the Environmental Modeling Center (EMC) of NOAA's National Centers for Environmental Prediction (NCEP) started collaborating with NCAR in 2009 to transition NCAR's aviation algorithms into the operational GFS production system. NCAR's aviation algorithms (Bernstein et al. 2005; Gultepe 2018) include in-flight icing, turbulence, and convection analyses and forecasts. The icing and turbulence algorithms have long been coupled with the North America Rapid Refresh Model (RAP) (Benjamin et al. 2016) to provide AWC with valuable aviation weather forecast guidance over the continental United States. EMC incorporated a global version of FIP (G-FIP) developed by NCAR into its Unified Post Processor (UPP) (Chuang 2010) and started generating experimental G-FIP products in late 2011. With support from AWC, EMC implemented G-FIP in 2015 to replace the then-operational WAFC Washington icing product. EMC also implemented Global Forecast Icing Severity (G-FIS) (McDonough 2010) to complement G-FIP during the 2016 GFS upgrade. A global version of the Graphical Turbulence Guidance (GTG) (Sharman et al. 2006) was also incorporated into UPP recently. The new global GTG (G-GTG) (Sharman and Pearson 2017) product was implemented with the 2017 GFS upgrade, but the product is not yet distributed, as it is undergoing objective evaluation by GSD and possibly further tuning by the NCAR turbulence group. To fulfill ICAO's goal of moving toward probabilistic aviation forecasts in the near future, EMC also started generating G-FIS operationally in all twenty-one Global Ensemble Forecast System (GEFS) (Zhou et al. 2017) members in 2018.

The objective of this paper is to describe the methodology used by EMC to transition NCAR's aviation algorithms into NCEP's operations. It also describes how EMC generates icing analysis data to be used as truth for objective verification. The results of the objective verification, as well as a subjective evaluation based on a few case studies, are discussed. Finally, future collaboration plans with NCAR and planned implementations to improve WAFS products and meet ICAO requirements are presented.

2 Methodology

2.1 Description of First Version of WAFC Washington Icing and Turbulence Algorithms Implemented in 2007

In 2007, EMC implemented the first version of the WAFS aviation hazard products, including icing potential and turbulence forecasts (Trojan 2007), on behalf of WAFC Washington. Both products used GFS output generated by UPP with a horizontal resolution of 0.5° and a vertical resolution of 50 hPa. The final products were then interpolated to the 1.25° WAFS grid on thinned vertical levels prior to dissemination.

2.1.1 WAFC Washington Icing Potential Algorithm 2007–2014

The icing potential algorithm was based on the FIP algorithm developed by NCAR under the AWRP program (Kulesa 2003). Strictly speaking, it was a version of FIP, but with a different strategy for defining cloud layers and different fuzzy functions for each member (Fig. 1), tuned by EMC staff.
Fig. 1

Fuzzy logic interest maps used in the icing algorithm implemented as the operational US WAFS icing product in 2007, where CTT represents cloud top temperature, T represents temperature, CC represents cloud cover, and VV represents vertical velocity

Fuzzy logic functions are applied to cloud-related fields (members) of the model input data, mapping each member to a 0–1 scale that represents the likelihood of icing. The members are chosen based on cloud physics, research, and forecasting experience. To estimate the initial icing potential, temperature, cloud top temperature, and cloud coverage were chosen for the 2007 icing product, while temperature, cloud top temperature, and relative humidity are the members for FIP 2015. In both versions of the icing algorithm, a fuzzy function of vertical velocity is then applied to adjust the initial icing potential.
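As a rough sketch of how such a fuzzy interest map can be evaluated, a piecewise-linear membership function maps a model field to a 0–1 interest value. The breakpoints below are purely illustrative, not the operational FIP values:

```python
def interest(value, points):
    """Piecewise-linear fuzzy interest map. `points` is a list of
    (x, y) breakpoints with y in [0, 1]; values outside the breakpoint
    range clamp to the end value."""
    xs, ys = zip(*points)
    if value <= xs[0]:
        return ys[0]
    if value >= xs[-1]:
        return ys[-1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= value <= x1:
            # linear interpolation between the two breakpoints
            return y0 + (y1 - y0) * (value - x0) / (x1 - x0)

# Hypothetical temperature map: icing interest peaks between -15 and
# -5 C and vanishes above 0 C or below -25 C (illustrative only).
t_map = [(-25.0, 0.0), (-15.0, 1.0), (-5.0, 1.0), (0.0, 0.0)]
icing_interest = interest(-10.0, t_map)   # on the 0-1 plateau
```

The operational maps (Fig. 1) follow the same pattern for each member, with breakpoints tuned from cloud physics and forecasting experience.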

2.1.2 WAFC Washington Turbulence Algorithm 2007-Present

To be compatible with the WAFC London product, the Ellrod Index (Ellrod 1992) was calculated for Clear Air Turbulence (CAT), while mountain wave turbulence was not calculated for GFS.

The Ellrod Index is the product of horizontal deformation and vertical wind shear. The total horizontal deformation (DEF) is composed of two parts, shearing deformation (DSH) and stretching deformation (DST), which are defined as follows.
$${\text{DSH}} = \frac{\partial v}{\partial x} + \frac{\partial u}{\partial y}$$
$${\text{DST}} = \frac{\partial u}{\partial x} - \frac{\partial v}{\partial y}$$
$${\text{DEF}} = \sqrt {{\text{DSH}}^{2} + {\text{DST}}^{2} }$$
The convergence (CVG) is defined as:
$${\text{CVG}} = - \left( {\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y}} \right)$$
and the vertical wind shear (VWS) is defined as:
$${\text{VWS}} = \sqrt {\left( {\frac{\partial u}{\partial z}} \right)^{2} + \left( {\frac{\partial v}{\partial z}} \right)^{2} }$$
The resulting Ellrod Index (EI) is given as:
$${\text{EI}} = {\text{VWS}}*\left( {\text{DEF + CVG}} \right)$$
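The index defined above can be sketched for a gridded wind field using centered differences. This is a simplified illustration assuming uniform grid spacing, not the operational UPP implementation:

```python
import numpy as np

def ellrod_index(u, v, dx, dy, dz):
    """Ellrod Index EI = VWS * (DEF + CVG) on a (z, y, x) wind grid.
    u, v are wind components in m/s; dx, dy, dz are uniform grid
    spacings in m (a simplifying assumption for this sketch)."""
    dudx = np.gradient(u, dx, axis=2)
    dudy = np.gradient(u, dy, axis=1)
    dvdx = np.gradient(v, dx, axis=2)
    dvdy = np.gradient(v, dy, axis=1)
    dsh = dvdx + dudy                  # shearing deformation
    dst = dudx - dvdy                  # stretching deformation
    deform = np.sqrt(dsh**2 + dst**2)  # total horizontal deformation
    cvg = -(dudx + dvdy)               # convergence
    dudz = np.gradient(u, dz, axis=0)
    dvdz = np.gradient(v, dz, axis=0)
    vws = np.sqrt(dudz**2 + dvdz**2)   # vertical wind shear
    return vws * (deform + cvg)
```

For a pure shear flow such as u = y + 2z, v = 0, the sketch gives DEF = 1, CVG = 0, VWS = 2, and hence EI = 2 everywhere.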
The table below matches Ellrod Index thresholds with levels of turbulence severity from pilot reports.

Pilot reports | Ellrod index threshold

2.2 Description of New WAFC Washington Icing and Turbulence Algorithms Incorporated into Unified Post Processor

2.2.1 New Icing (FIP) Algorithms Implemented in 2015

Through collaboration between NOAA and NCAR starting in 2010, NCAR developed a global version of FIP (G-FIP), based on the original FIP, to work with GFS output (McDonough 2010). G-FIP was integrated into NCEP's Unified Post Processor (UPP), which derives diagnostic variables at the highest model horizontal and vertical resolutions.

G-FIP uses GFS forecasts of temperature (T), relative humidity (RH), cloud water mixing ratio (CWM), and vertical velocity (VV). Unlike the Thompson microphysics scheme (Thompson and Eidhammer 2014) used in RAP, the Zhao scheme (Zhao and Carr 1997) used by GFS does not produce suspended rain and snow. Another adjustment made in G-FIP accounts for the higher RH values in GFS when the temperature is below − 20 °C. Based on these GFS features, the G-FIP algorithm defines cloud layers using different RH thresholds for three regions: 80% for the tropics (latitude ≤ 23.5°), 75% for the mid-latitudes (23.5° < latitude < 66°), and 70% for the polar regions (latitude ≥ 66°). Finally, the fuzzy functions of the G-FIP members differ from those of FIP (Fig. 2).
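The latitude-dependent cloud RH thresholds described above can be expressed directly. The helper below is only an illustration of the stated rule, not NCAR's code:

```python
def cloud_rh_threshold(lat_deg):
    """RH threshold (%) used to diagnose a cloud layer, by latitude
    band as described in the text (thresholds applied symmetrically
    about the equator, an assumption of this sketch)."""
    alat = abs(lat_deg)
    if alat <= 23.5:    # tropics
        return 80.0
    elif alat < 66.0:   # mid-latitudes
        return 75.0
    else:               # polar regions
        return 70.0
```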
Fig. 2

Fuzzy logic interest maps used in G-FIP, implemented as the operational US WAFS icing product in 2015, versus those used in the CONUS FIP, where CTT represents cloud top temperature, T represents temperature, RH represents relative humidity, and w represents vertical velocity

2.2.2 New Turbulence Algorithm, G-GTG to Replace WAFS Turbulence in 2019

To improve the WAFS turbulence product, EMC began collaborating with the NCAR turbulence group in late 2015 to expand GTG globally by coupling it with GFS output. The strategy is to integrate the GTG algorithm into UPP to take advantage of high-resolution GFS native model data, as G-FIP does, and to speed up delivery of G-GTG using UPP's MPI framework. With the same index configuration for GFS, UPP takes only 2.5 min per forecast to generate G-GTG on the GFS native 13 km grid using forty-eight processors, compared with 20 min per forecast when running serially.

The transition started from the latest version, GTG3 (GTG version 3; Sharman and Pearson 2017), which generates aircraft-independent EDR (= ε^(1/3), where ε is the energy dissipation rate in m^2/s^3). All GTG science remains the same in the transition of GTG3 into UPP. For GFS, G-GTG selects different indices for three vertical regions (low/mid/upper levels, as shown in Tables 1, 2), computes a turbulence diagnostic for each index, and maps each diagnostic to a common 0–1 EDR scale. To derive the final turbulence ensemble, the indices are divided into two categories, Clear-Air Turbulence (CAT) and Mountain-Wave Turbulence (MWT); within each category the indices are weighted equally. Research shows very similar results when applying static climatological weights versus dynamic weights (Sharman et al. 2006). The maximum of the CAT and MWT values is used as the final GTG value.
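The combination step described above (equal weights within each category, then the CAT/MWT maximum) can be sketched as follows. This is a simplified illustration assuming the diagnostics have already been mapped to a common EDR scale; `combine_gtg` is a hypothetical name, not a GTG routine:

```python
import numpy as np

def combine_gtg(cat_edr, mwt_edr):
    """Combine already-calibrated EDR diagnostics.
    cat_edr / mwt_edr: lists of arrays, one per index, all on the
    same grid. Equal-weight mean within each category, then the
    gridpoint maximum of the two category values."""
    cat = np.mean(cat_edr, axis=0) if cat_edr else 0.0
    mwt = np.mean(mwt_edr, axis=0) if mwt_edr else 0.0
    return np.maximum(cat, mwt)
```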
Table 1

Choice of indices for low/mid/high levels for CAT



  • Ellrod3: Ellrod Index with the addition of a divergence trend
  • NCEP's Nested Grid Model (NGM) predictor: (DEF × speed)
  • Shear vector magnitude
  • Square of horizontal deformation
  • Absolute value of horizontal divergence/Ri
  • Inverse of saturated Richardson number: 1/Ris
  • Structure-function-derived eddy dissipation rate
  • Inverse of Ri based on wind shear derived from the thermal wind relation
  • Lighthill–Ford–Knox spontaneous imbalance of inertia–gravity waves/Ri
  • Inertial-advective wind/Ri
  • Square of vertical velocity
  • Square of vertical velocity/Ri
  • Structure-function-derived sigma vertical velocity/Ri
  • Structure-function-derived sigma vertical velocity in x direction/Ri
  • Frontogenesis function on a constant z surface/Ri

Table 2

Choice of indices for low/mid/high levels for MWT; each diagnostic is multiplied by the MWT multiplier

  • Frontogenesis function on a constant z surface
  • Absolute value of horizontal divergence
  • NCEP's Nested Grid Model (NGM) predictor: (DEF × speed)
  • Inertial-advective wind
  • Square of structure-function-derived EDR
  • EDR derived from Schumann's gravity wave formulation

There are some changes to GTG3 when integrated into the NCEP UPP, for two main reasons: one is to speed up GTG product delivery; the other is to be compatible with UPP features.

2.3 Advantages of Using UPP as an Aviation R2O Tool

EMC's strategy for incorporating NCAR's aviation algorithms is to integrate them into its Unified Post Processor (UPP). EMC started developing UPP in 2006 to serve as the common post processor for all NOAA operational models. The request for a common post processor within NOAA came from NOAA forecasters and customers, who were often confused by the different algorithms used to derive the same fields. They felt a fair comparison could not be made between output from different models until the same post-processing algorithms were applied across all models. EMC decided to use its mesoscale post processor as the foundation for UPP and expanded it to interface with other NOAA operational models. The first implementation of UPP took place in 2007, when GFS adopted UPP operationally as its post processor. UPP currently supports the following eight operational models: GFS, the Global Ensemble Forecast System (GEFS), the North America Mesoscale Forecast System (NAM; Rogers 2017), the Short Range Ensemble Forecast System (SREF; Du et al. 2015), the Hurricane Weather Research and Forecasting Model (HWRF), the Rapid Refresh Model (RAP; Benjamin et al. 2016), the High Resolution Rapid Refresh Model (HRRR), and the NOAA GFS Aerosol Component (NGAC; Lu et al. 2016).

There are several advantages to this new strategy of utilizing UPP as the aviation Research to Operations (R2O) tool. First, because UPP carries and derives variables on the highest-resolution horizontal and vertical grids, one can expect more accurate aviation products when they are derived within UPP. This is especially true at cruising altitude near the tropopause, where most models place several vertical layers to resolve the tropopause. However, because most users can only access GFS output at 50 hPa vertical resolution, aviation products derived from this low-resolution output can easily miss important forecast signals at cruising altitude. Second, algorithms added to UPP can often be applied to most of the eight supported models. For aviation algorithms, minor tuning may be necessary; for example, the GTG algorithm is sensitive to model resolution, so different configurations or weighting functions may be used for different models. Nevertheless, UPP provides a centralized, unified framework for maintaining and exercising aviation algorithms for all operational models. Bug fixes and improvements in the performance or efficiency of the FIP and GTG packages need only be applied once within UPP, instead of to many different packages for different models. In addition, since UPP also supports NCEP's ensemble modeling systems, NCEP can output probabilistic aviation products by generating aviation products in all of NCEP's GEFS members. Finally, UPP uses the Message Passing Interface (MPI) to speed up operational post processing by taking advantage of the multiple processors available on today's High Performance Computing (HPC) systems. The domain is decomposed along the Y axis into multiple subdomains, one per processor, and computation is performed independently over each smaller subdomain to save time. This feature is especially important for adapting the GTG algorithm into NCEP's operational production suite.
The estimated run time for generating GTG at the current GFS 13 km native horizontal resolution is about 20 min per forecast without the parallel framework. Unfortunately, this long run time does not meet the operational product delivery timeline. With MPI, UPP is able to generate GTG within 2.5 min per forecast. Using UPP to generate GTG and FIP has enabled us to deliver operational aviation products much faster while still deriving them at the highest model resolution.
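The Y-axis decomposition mentioned above can be illustrated with a simple row-partitioning sketch. This is illustrative only; UPP's actual decomposition is done in Fortran within its MPI framework, and `y_decompose` is a hypothetical name:

```python
def y_decompose(ny, ntasks):
    """Split ny latitude rows as evenly as possible among ntasks.
    Returns a list of (jstart, jend) inclusive 0-based row ranges,
    one per MPI task."""
    base, extra = divmod(ny, ntasks)
    ranges, j = [], 0
    for rank in range(ntasks):
        # the first `extra` tasks get one additional row
        cnt = base + (1 if rank < extra else 0)
        ranges.append((j, j + cnt - 1))
        j += cnt
    return ranges
```

Each task then post-processes only its own latitude band, which is why forty-eight tasks can cut the 20 min serial run time to about 2.5 min.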

2.4 Adaptation of FIP and GTG Algorithm into UPP and NCEP’s Production Suite

EMC obtained two versions of the FIP algorithm from NCAR in 2010. The first was the then-operational version of FIP for the Rapid Update Cycle (RUC) model (Benjamin et al. 2004). The second was developed by McDonough (2010) to work with the simple Zhao microphysics (Zhao and Carr 1997) used in GFS. He also divided the earth into three climate zones based on latitude bands, each zone using different interest maps for temperature, relative humidity, cloud top temperature, and vertical velocity; more details can be found in McDonough (2010). The two versions were merged, recoded to use MPI and Fortran 90, and integrated into UPP. The adaptation of FIP into UPP was relatively simple because FIP employs a column-model concept.

EMC obtained GTG3 from NCAR in 2015 and began integrating the GTG algorithm into UPP. Because the GTG algorithm uses differentiation, averaging, and filtering throughout its package, proper halo exchanges had to be used to ensure these operations were performed correctly across the boundaries of each MPI task when integrating GTG into the UPP MPI framework. In addition, these operations needed to be adapted to work with the different staggered grid types supported by UPP, including Arakawa A, B, and E grids (Arakawa and Lamb 1977).
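The role of halo exchanges can be illustrated with a serial sketch in which each task's latitude slab is padded with rows copied from its neighbors before differencing. This is a conceptual stand-in for real MPI message passing; here edge slabs simply replicate their own boundary row:

```python
import numpy as np

def add_halo(subdomains, width=1):
    """Pad each task's Y-slab with `width` rows taken from its
    neighbors, so that centered differences near slab boundaries see
    real neighbor data. Edge slabs replicate their own boundary row
    (one possible boundary treatment, assumed for this sketch)."""
    padded = []
    n = len(subdomains)
    for i, slab in enumerate(subdomains):
        top = subdomains[i - 1][-width:] if i > 0 else slab[:width]
        bot = subdomains[i + 1][:width] if i < n - 1 else slab[-width:]
        padded.append(np.concatenate([top, slab, bot], axis=0))
    return padded
```

Without this padding, a centered difference at the first or last row of a slab would read data that belongs to a neighboring MPI task.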

Once FIP and GTG were integrated into UPP, EMC set up a real-time parallel run to generate these two products from operational GFS output for all four cycles each day. EMC then worked with NCAR developers, AWC, and the FAA to distribute and evaluate the performance of these experimental products; the verification results are discussed in Sect. 3. After a period of evaluation, EMC coordinates with AWC and the FAA to schedule operational implementation of these products from UPP. The current operational implementation paradigm at NCEP usually requires UPP to be implemented together with each major upgrade of the models it supports. For WAFS products, a separate package that blends the UK's and US's hazard products needs to be modified to ingest the new UPP-generated FIP and GTG as the new official US hazard products. As mentioned in the introduction, UPP-generated global FIP replaced the WAFS icing forecast product after the 2015 GFS upgrade. While UPP-generated G-GTG was implemented (but not disseminated) with the 2017 GFS upgrade, the replacement of the WAFS turbulence product with G-GTG is scheduled for late 2018 for two reasons. First, WAFS users need time to become accustomed to new products. Second, because GFS has undergone annual upgrades in recent years, this delay in implementing post-processed products allows time for tuning them.

2.5 Methodology to Blend WAFS Products from WAFC Washington and WAFC London

Both WAFCs have agreed to apply the same methodology to blend the maximum and mean WAFS hazard products. The approach is very simple: the mean icing and turbulence forecast products are obtained by averaging those from the two centers, while the maximum products are obtained by taking the maximum.
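The blending rule is simple enough to state in a few lines. This is an illustrative sketch, and `blend_wafs` is a hypothetical name:

```python
import numpy as np

def blend_wafs(mean_us, mean_uk, max_us, max_uk):
    """Single authoritative blend: average the two centers' mean
    products, and take the gridpoint maximum of their max products."""
    blended_mean = 0.5 * (mean_us + mean_uk)
    blended_max = np.maximum(max_us, max_uk)
    return blended_mean, blended_max
```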

3 Validation Data and Evaluation Method

3.1 Validation Data: Development of Global Current Icing Potential Product (G-CIP) to be used as Verifying Truth for Icing Forecast Product

Although a handful of observational datasets, such as CloudSat, can be used to infer the current icing environment, there are no direct measurements of in-flight icing with adequate global coverage at a given time. PIREP data are often used to perform icing verification over the CONUS, but their coverage is sparse over much of the globe. Therefore, global verification of in-flight icing has been difficult at best. Due to the lack of global coverage of many observational datasets, it is common practice for global model developers to perform objective verification against their own analyses (White 2017) to obtain global verification statistics.

NCAR developed the CONUS version of the Current Icing Potential (CIP) product to provide a real-time diagnosis of the current icing environment (Bernstein et al. 2005; Gultepe 2018) for users making route-specific decisions to avoid flying through hazardous icing conditions. It combines RAP model output with observational data from satellite, pilot reports, radar, surface stations, and lightning, with the final three datasets being optional. Similar to the FIP algorithm, the CIP algorithm uses fuzzy logic and decision-tree logic derived from cloud physics and from forecaster and onboard flight experience gained in field programs. However, a major difference between the FIP and CIP algorithms is that CIP uses observational data along with model output in determining the locations of clouds and precipitation.

Because it is common practice for developers of global models to verify their forecasts against their own analyses, EMC developed a global version of CIP (G-CIP) to be used as the verifying analysis for all global icing forecasts. Furthermore, the major purpose of product verification at NCEP is to demonstrate that skill is not degraded after a product upgrade; therefore, using G-CIP as the truth for both the old and new icing products is a good way to evaluate whether icing forecast skill improves. Note that verification scores computed against an analysis will be slightly more favorable than those computed against observational data. EMC's strategy to expand the CONUS CIP to a global CIP is as follows. First, GFS is used as the background model instead of RAP. Second, EMC requested that NESDIS produce a global composite of geostationary satellite imagery, which was implemented operationally in 2014. Data from five geostationary satellites are used to make this global composite: GOES-East, GOES-West, Meteosat at 0°, Meteosat at 63°, and MTSAT. The first two satellites are operated by NOAA, the two Meteosats are operated by the European Organisation for the Exploitation of Meteorological Satellites, and MTSAT is operated by Japan. Third, the existing NCEP operational global METAR data are used; METAR data are routine reports of sensible weather elements measured at airports and other stations. In addition, optional NCEP in-house PIREP, radar, and lightning data are used wherever they are available. Prior to the operational implementation of the global geostationary composite in 2014, EMC was able to start generating experimental G-CIP data in 2013 and began performing global verification of G-FIP and other icing forecast products using the parallel global geostationary composite.

The verification is grid-to-grid. Since WAFS icing is at 1.25° resolution while G-CIP is at 0.25°, G-CIP was upscaled to match the WAFS icing grid using bilinear interpolation. WAFS icing potential has two sets of values, mean and max, produced when the high-resolution forecast is converted to 1.25° per ICAO standards; therefore, two verification curves are provided, representing mean and max, respectively.

3.2 Objective Evaluation Method

Three categorical verification scores are derived for the icing potential forecast. Categorical verification uses a contingency table, converting continuous forecast values to dichotomous (yes/no) icing events. However, the validation data, G-CIP, is itself an icing potential with continuous values from 0 to 1, not a yes/no event. To address this, G-CIP is pre-defined with four thresholds (0.2, 0.4, 0.6, 0.8), representing categories of low, medium, high, and extremely high potential, respectively. The statistical scores are then derived in two steps. First, G-FIP is verified against the yes/no events of each of the four G-CIP categories using a set of increasing G-FIP probability thresholds (0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9). Then, for each G-FIP threshold, the yes/no events of the four G-CIP categories are summed.
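The two-step procedure above can be sketched as follows. This is an illustration of the described method, not the operational verification code, and `contingency_counts` is a hypothetical name:

```python
import numpy as np

def contingency_counts(fcst, anal, f_thresh, a_threshs=(0.2, 0.4, 0.6, 0.8)):
    """Accumulate a 2x2 contingency table (a = hits, b = false alarms,
    c = misses, d = correct rejections) for one forecast threshold,
    summed over the four analysis (G-CIP) category thresholds."""
    a = b = c = d = 0
    f_yes = fcst >= f_thresh          # dichotomized forecast events
    for t in a_threshs:
        o_yes = anal >= t             # dichotomized analysis events
        a += np.sum(f_yes & o_yes)
        b += np.sum(f_yes & ~o_yes)
        c += np.sum(~f_yes & o_yes)
        d += np.sum(~f_yes & ~o_yes)
    return a, b, c, d
```

Repeating this for each of the nine G-FIP thresholds yields one contingency table per threshold, from which the scores below are computed.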

First, the Equitable Threat Score (ETS, or Gilbert Skill Score, GSS; see Table 3) is used for verifying the icing potential forecast because, compared with other events, icing is a relatively infrequent event. In addition, for large-scale grid-to-grid verification of icing over the globe, the number of random hits can be very large. The equitable threat score not only excludes random hits but also treats correct rejections (no-event forecasts) and correct forecasts equally. This implies that with ETS, correct forecasts of less frequent events are weighted more strongly than those of frequent events (Wilks 2006).
Table 3

2 × 2 verification contingency table

              Observed yes    Observed no
Forecast yes  a (hits)        b (false alarms)
Forecast no   c (misses)      d (correct rejections)
PODy (Hit rate) = a/(a + c), POFD (False alarm rate) = b/(b + d)

ETS = (a − r)/(a − r + b + c), where r = (a + b)(a + c)/n and n is the sample size
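The formulas above translate directly into code; this is an illustrative sketch using the Table 3 notation:

```python
def verification_scores(a, b, c, d):
    """Scores from the 2x2 contingency table: a = hits,
    b = false alarms, c = misses, d = correct rejections."""
    n = a + b + c + d
    pody = a / (a + c)               # hit rate
    pofd = b / (b + d)               # false alarm rate
    r = (a + b) * (a + c) / n        # expected random hits
    ets = (a - r) / (a - r + b + c)  # equitable threat score
    return pody, pofd, ets
```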

Second, a frequency bias is derived for the icing verification, defined as the ratio of the total number of predicted icing events to the total number of observed icing events. The best score is 1. If the bias is larger than 1.0, icing events are over-predicted, implying a higher false alarm rate; otherwise they are under-predicted, implying a higher miss rate. An over-predicting system means more false alarms but not necessarily more hits.

Third, many aviation weather forecasters are more interested in the Receiver Operating Characteristic (ROC), another skill measure for icing potential forecasts. A ROC curve is constructed by plotting the hit rate (PODy; see Table 3) on the y-axis against the false alarm rate (POFD; see Table 3) on the x-axis in a 1.0 × 1.0 coordinate system over all events. The ROC score is the area between the ROC curve and the diagonal line, with a perfect score of 0.5. A positive ROC area means that the forecast hit rate is higher than the false alarm rate, a ROC area of 0 means no skill beyond a climatological forecast, and a negative area means the forecast is worse than climatology. ROC measures resolution, i.e., the forecast's ability to discriminate between two alternative outcomes. It helps developers improve the forecast through calibration and helps the aviation community decide the forecast icing thresholds representing low/medium/high potential.
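The ROC area described above can be computed with trapezoidal integration over the (POFD, PODy) points. This is an illustrative sketch; appending the (0, 0) and (1, 1) endpoints is an assumption of this sketch, and `roc_area` is a hypothetical name:

```python
import numpy as np

def roc_area(pofd, pody):
    """Area between the ROC curve and the no-skill diagonal.
    pofd/pody give one (x, y) point per forecast threshold; the
    endpoints (0, 0) and (1, 1) are appended before integrating."""
    order = np.argsort(pofd)
    x = np.concatenate(([0.0], np.asarray(pofd, float)[order], [1.0]))
    y = np.concatenate(([0.0], np.asarray(pody, float)[order], [1.0]))
    # trapezoidal area under the curve, minus 0.5 under the diagonal
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)) - 0.5)
```

A single threshold with PODy = 0.8 at POFD = 0.2 gives a positive area, while a point on the diagonal gives zero, matching the interpretation in the text.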

4 Verification Results and Discussions

Extensive daily global objective verification of the new UPP-generated G-FIP against G-CIP, along with verification of the then-operational US WAFS icing and blended WAFS icing products, was performed from 2013 until the implementation of the new G-FIP in 2015. Prior to 2013, EMC performed an objective verification comparison of the above-mentioned products using AWC's CONUS CIP as truth. Several case studies were also performed as sanity checks and to support qualitative validation. In Sect. 4.1, the authors first show a few case studies demonstrating the good performance of the new G-FIP and G-FIS products and of the G-GTG products to be implemented. In Sect. 4.2, objective verification results for all old and new WAFS icing forecasts before and after 2017 are presented. GSD recently performed an objective verification of G-GTG, and their results are shown in the summary section.

4.1 Qualitative Validation via Case Studies

4.1.1 Case Studies for G-FIP and G-FIS

When EMC started generating experimental G-FIP from UPP in 2011, numerous comparisons were made between G-FIP and the CONUS FIP and CIP generated from RUC forecasts as a sanity check. It was reassuring that the products had similar synoptic patterns, although they use forecasts from different models as well as slightly different versions of the FIP algorithm. Figure 3 shows such a comparison from a more recent case, valid at 13Z on January 16 2018, for both G-FIS and G-FIP as compared with the CONUS current icing severity and current icing potential analysis products, respectively. The current icing product plots from the AWC ADDS web site also display PIREPs of icing at various degrees of severity, as symbolized under the color bars. Note that this is an early-morning case, and hence one does not expect many PIREPs. In this particular case, both the G-FIP and G-FIS 7-h forecasts and the CONUS CIP indicated a high potential of moderate icing along the front from eastern Texas to Kentucky as well as from British Columbia to eastern Oregon. This agreement between forecast and analysis was further supported by a PIREP of moderate icing in southeastern Texas and light icing in central Tennessee. Although both the global and CONUS products agreed that there was icing in east-central Texas, G-FIP combined with G-FIS indicated severe icing in central Texas, while the CONUS analysis product indicated light to moderate icing over the same area. Unfortunately, there was no PIREP there to indicate the icing condition at that time.
Fig. 3

Comparison of G-FIS (a) and G-FIP (c) with the CONUS current icing severity (b) and current icing potential (d) analyses, all valid at 13Z January 16 2018. The right-hand panels also show PIREPs of icing at various severity levels, as indicated under the color bars

The second case involved an icing incident that took place at 07Z on November 14 2016 near Bergen, Norway, reported by a Jettime Avions de Transport Regional ATR-72-212A flying on behalf of Scandinavian Airlines (2014). The flight was initially cleared to fly to FL170 out of Bergen but encountered icing conditions while climbing through FL150. The crew received a "Degraded Perf" caution before the stick shaker was activated. The aircraft stopped climbing at FL160 at 07Z and began to descend as both wings dropped. The crew regained control manually, turned the flight toward the northwest while descending to FL100, and eventually landed safely at Aalesund. As shown in Fig. 4a, the 24-h blended WAFS icing forecast predicted approximately 70% icing potential near Bergen at 06Z. Note that WAFS forecasts are generated only every 3 h; therefore, we have chosen to show the available WAFS forecast closest to the time of this incident. In addition, the authors were able to re-run UPP to generate GFS icing severity from the same forecast cycle and lead time, as shown in Fig. 4b. This figure shows that severe icing was predicted near Bergen, in addition to the large icing potential indicated in the blended WAFS icing forecast.
Fig. 4

The 24-h forecasts of the blended WAFS icing potential product (a) and GFS icing severity (b) predicted a 70% potential of severe in-flight icing (purple lines) occurring near Bergen, Norway

4.1.2 Case Studies for G-GTG

NOAA's Aviation Weather Center alerted us to a severe turbulence flight incident that took place near Pueblo, Colorado at 00Z on February 28 2017. An American Airlines Boeing 737 from San Diego to Chicago had to make an emergency landing at the Denver airport after hitting extreme turbulence at FL370 that injured five passengers. The pilots were also concerned the plane might have been damaged by the turbulence. Figure 5 shows that this turbulence event was predicted consistently by EMC's then-experimental G-GTG products at both 12 h and 36 h lead times. Unfortunately, because EMC's G-GTG products were experimental at the time and undergoing evaluation and possible further tuning, they were distributed only to our collaborators rather than to the general public to provide guidance for this event.
Fig. 5

12 h (left) and 36 h (right) G-GTG forecasts over CONUS, both valid at 00Z February 28 2017, predicted the severe turbulence event that caused American Airlines flight 1296 to make an emergency landing at the Denver airport after hitting extreme turbulence near Pueblo, Colorado at FL370. PIREP turbulence locations are marked with x on both plots

An Eva Airways Boeing 777-300 from Taipei, Taiwan to Chicago O'Hare airport was enroute at FL310, about 180 nautical miles east southeast of Fukuoka, Japan, when the aircraft encountered turbulence that caused it to drop 550 feet and then climb 375 feet. A cabin crew member was seriously injured, while eight other crew members and three passengers received minor injuries. The flight went on to land safely at Chicago O'Hare airport. The incident took place at 14Z on November 22 2017. However, because EMC archives model output only every 6 h, we could re-run UPP to generate G-GTG valid only at 12Z and 18Z on November 22 2017. Figure 6 shows that the UPP-generated G-GTG predicted moderate turbulence southeast of the incident location 2 h before the event and northwest of it 4 h after. Through interpolation, it is reasonable to infer that G-GTG would have predicted this turbulence event at a location close to the incident.
Fig. 6

The 12 h G-GTG forecast (left) valid at 12Z November 22 2017 and the 18 h G-GTG forecast (right) valid at 18Z November 22 2017. The severe turbulence incident took place at 14Z November 22 2017 at the location marked with X

4.2 Objective Verification Results and Discussion of Icing Forecast

NCEP's global icing products include icing potential and icing severity predictions from both the deterministic GFS model and the GFS ensemble forecast system (GEFS). In this section, only the icing potential forecasts from the deterministic and ensemble forecasts are objectively evaluated, owing to the lack of icing severity truth data.

The objective evaluation of the icing potential forecast is based on grid-to-grid comparison in NCEP's unified verification system, which was established to verify both operational and parallel (test) forecasts for all models at NCEP. In other words, the gridded icing potential forecast at each grid point is verified against the gridded icing analysis at the same grid point. For the development and implementation of the WAFS icing products, performance and skill were evaluated and compared to make sure the new products are superior to the old version. In this section, the results of three types of evaluations are presented.

4.2.1 Evaluation and Comparison of Old and New Global Icing Potential Predictions

Because the global current icing potential analysis (G-CIP) data were under development until 2013, AWC's CIP analysis over CONUS was first used as truth in the grid-to-grid verification during the early development stage of the global icing forecast. This means the global icing prediction was evaluated only over CONUS at EMC until 2013. To apply categorical verification, the continuous predicted icing potential is first converted to dichotomous icing event forecasts by dividing the icing potential range into nine sub-ranges as mentioned in Sect. 3.2 (with thresholds 0.1, …, 0.9). For each threshold, a greater-than-threshold event is formed, and the nine events are then verified against CIP over CONUS to obtain the appropriate scores from the contingency table. The verification period runs from August 1, 2011 to April 15, 2012, about 8 months, and the verification focuses on flight levels of 14,000 feet and 10,000 feet. In total, 12 forecast lead times (3, 6, 9, …, 36 h) from all four cycles (00, 06, 12 and 18Z) were verified at eight validation times (00, 03, 06, 09, 12, 15, 18 and 21Z) each day. The resulting performance of the old version (labeled USFIP) and the new version (labeled GFIP), expressed as frequency bias and equitable threat score (ETS) for all sub-ranges, is shown in Fig. 7. UK Met's icing product uses a different strategy to achieve the same goal (icing potential) as GFIP; operationally, the UK and US provide the averaged/blended icing potential to the public. For reference, UK Met's global icing prediction (labeled UK Met) and the blended prediction (USFIP + UK Met, labeled Blended) are also shown in the same plots. From Fig. 7a, b we can see that, in terms of ETS, the new version of the icing potential prediction is clearly more skillful than the old version, with much higher ETS values at both flight levels and all thresholds except the largest threshold of 0.9, where no prediction has skill.
The figure also shows that the old version is worse than UK Met's prediction, while the new version is better than UK Met's and close to the blended product at both flight levels for all thresholds.
Fig. 7

Equitable threat scores (a, b) and frequency bias (c, d) evaluated against CIP over CONUS for icing potential forecasts from four different products at FL140 (a, c) and FL100 (b, d)
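The thresholding and contingency-table scoring described above can be sketched as follows. This is an illustrative re-implementation, not NCEP's operational verification code; the grids and the synthetic data are hypothetical.

```python
import numpy as np

def contingency_scores(fcst, obs, threshold):
    """Frequency bias and equitable threat score (ETS) for one
    greater-than-threshold icing event on matching grids."""
    f = fcst > threshold                      # dichotomous forecast event
    o = obs > threshold                       # dichotomous analysis event
    hits = float(np.sum(f & o))
    false_alarms = float(np.sum(f & ~o))
    misses = float(np.sum(~f & o))
    total = float(f.size)
    # Hits expected by random chance, used to make the ETS "equitable"
    hits_random = (hits + false_alarms) * (hits + misses) / total
    bias = (hits + false_alarms) / (hits + misses)        # >1 = over-forecast
    ets = (hits - hits_random) / (hits + false_alarms + misses - hits_random)
    return bias, ets

# Score all nine sub-range thresholds 0.1, ..., 0.9 on synthetic grids
rng = np.random.default_rng(0)
fcst, obs = rng.random((181, 360)), rng.random((181, 360))
scores = {round(t, 1): contingency_scores(fcst, obs, t)
          for t in np.arange(0.1, 1.0, 0.1)}
```

A perfect forecast yields a bias of 1 and an ETS of 1; a bias near 0 at high thresholds, as seen for the old USFIP, means nearly all severe events are missed.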

In terms of bias, Fig. 7c, d show that the old version generally under-predicts at both flight levels for all thresholds; in particular, when the threshold is larger than 0.5, the frequency bias is close to 0, implying that the old version has a very low hit rate and misses almost all severe icing events. The new version, on the other hand, generally over-predicts, but not by much: over the entire potential range its frequency bias is no larger than 2.0. This means the new version has a fairly good hit rate and a lower miss rate, at the cost of somewhat more false alarms, which is preferable to the near-zero hit rate of the old version. Figure 7c, d also show that UK Met's icing potential prediction over-predicts substantially, particularly for severe icing events, while the blended product eases the over-prediction but is still worse than the new version.

The ROC diagrams for the same time period and the same two levels are shown in Fig. 8a, b, from which we can observe that the new version has a much larger ROC area than the old version, although slightly smaller than UK Met's, which in turn is similar to that of the blended product.
Fig. 8

ROC curves evaluated against CIP over CONUS for icing potential forecasts from four different products at FL140 (a) and FL100 (b)
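As context for the ROC comparisons, the curve and its area can be built by sweeping a decision threshold over the forecast potential and plotting probability of detection against probability of false detection. The sketch below is illustrative only; the function name and inputs are hypothetical.

```python
import numpy as np

def roc_curve(fcst, obs, event_thr, decision_thresholds):
    """POD/POFD pairs for one icing event (analysis > event_thr),
    sweeping decision thresholds over the forecast potential."""
    o = obs > event_thr
    pods, pofds = [1.0], [1.0]                 # threshold -> 0 corner
    for t in decision_thresholds:
        f = fcst > t
        hits = float(np.sum(f & o))
        misses = float(np.sum(~f & o))
        fa = float(np.sum(f & ~o))
        cn = float(np.sum(~f & ~o))
        pods.append(hits / max(hits + misses, 1.0))
        pofds.append(fa / max(fa + cn, 1.0))
    pods.append(0.0)
    pofds.append(0.0)                          # threshold -> 1 corner
    # ROC area by trapezoidal integration along decreasing POFD
    area = sum((pofds[i] - pofds[i + 1]) * (pods[i] + pods[i + 1]) / 2.0
               for i in range(len(pods) - 1))
    return pods, pofds, area
```

An area of 1.0 corresponds to a perfect forecast, 0.5 to no skill; comparing areas is how the old and new products are ranked in Figs. 8 and 9.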

Figure 9 shows ROC curves of several different icing forecast products verified against G-CIP at 600 hPa and 400 hPa for the winter of 2014–2015, prior to the 2015 implementation of G-FIP to replace the first version of the WAFC Washington icing forecast product. For the mean product at 600 hPa, it is evident that G-FIP (dark solid blue) is significantly better than the old version of the WAFC Washington icing forecast product (solid red). In addition, the new blended WAFS icing forecast, a blend of G-FIP and the WAFC London icing product, performed better than the then-operational blended WAFS icing forecast. Both the new and old blended icing forecasts performed slightly better than G-FIP. The blended maximum product generally performed better than its corresponding mean product at thresholds above 0.2. It is worth mentioning that at 400 hPa the older version of the WAFC Washington icing forecast showed no skill at all during this period, while both the mean and maximum G-FIP retained good skill, especially the maximum G-FIP product.
Fig. 9

ROC curves evaluated against G-CIP globally for all WAFS icing potential forecasts at 600 hPa (L) and 400 hPa (R) during winter 2014–2015

4.2.2 Evaluation of Blended WAFS Product Against G-CIP

After G-CIP and G-FIP were implemented in 2015, the objective verification focused on the blended WAFS mean and blended WAFS max products, based on blending the new G-FIP and UK Met icing potential products. The results of the evaluation are continuously updated and displayed on NCEP's WAFC verification web site. Here, only ROC diagrams at three flight levels (FL100, FL140, FL180) and three forecast hours (12 h, 24 h and 36 h) from May 2015 to February 2017 are shown in Fig. 10. Both the blended mean and blended max are skillful, with very high ROC areas at all forecast hours and all three flight levels. In addition, the blended mean and max ROC curves at all three forecast hours and three flight levels lie very close to each other, although the ROC areas at FL140 are a little larger than at FL100 and FL180, and the ROC areas decrease with increasing forecast time.
Fig. 10

ROC curves for blended mean (solid line) and blended max (dash line) at three flight levels (left for FL180, middle for FL140, right for FL100) and three forecast times (upper for 12 h, middle for 24 h and lower for 36 h)

4.2.3 Ensemble Evaluation of GEFS Icing

The purpose of ensemble verification is to see whether the ensemble forecast provides adequate uncertainty information and is better than a single model forecast. During and after the implementation of the new G-FIP in all members of NCEP's global ensemble forecast system, the ensemble prediction of icing potential was evaluated against G-CIP and compared to the deterministic GFS icing potential prediction. The detailed probabilistic verification method can be found in Zhou and Du (2010), where the ensemble verification has two aspects. One is ensemble system evaluation, which estimates the overall ensemble system performance, particularly the member diversity, RMSE, mean error distribution, and bias. The second is event-based forecast skill, particularly in comparison to the single model forecast. The following plots show some of the ensemble verification results for the 5 months from June to October 2016, for all forecast cycles and all validation times.

Figure 11 shows the global distribution of the mean error at FL180 (18,000 feet) for the 24-h forecast. A positive mean error indicates over-prediction and a negative value under-prediction. Overall, the ensemble icing potential mean at FL180 is slightly over-predicted over low-lying land and oceans, more strongly over-predicted in tropical regions, and under-predicted in mountainous regions, particularly the Himalayas and the Rocky Mountains. The member diversity, expressed as a spread score at three flight levels for different forecast times, is shown in Fig. 12. The spread score is the ratio of the spread (standard deviation among all ensemble members) to the root mean square error (RMSE). A perfect ensemble forecast system has equal spread and RMSE, in which case the spread precisely represents the forecast error. If the spread score is over 1.0 (spread > RMSE), the ensemble forecast system is over-dispersive and the spread over-estimates the error; if it is under 1.0 (spread < RMSE), the system is under-dispersive and the spread under-estimates the error. We can see that the current GEFS icing potential prediction is under-dispersive, so the spread may under-estimate the actual forecast error at all forecast hours; this situation is worse at lower flight levels, although the lowest level shows a slight improvement.
Fig. 11

Mean error (ensemble mean minus G-CIP) at flight level 18,000 feet for the 24-h forecast

Fig. 12

Spread/RMS ratio for three flight levels and different forecast times over entire GEFS Domain
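The spread score of Fig. 12 can be illustrated with a minimal sketch, assuming the member forecasts and the G-CIP analysis are available as arrays (names are hypothetical; this is not the operational diagnostic code).

```python
import numpy as np

def spread_skill(members, analysis):
    """Domain-mean ensemble spread, RMSE of the ensemble mean against the
    analysis, and their ratio (the spread score: <1 means under-dispersive)."""
    ens_mean = members.mean(axis=0)                    # (ny, nx) ensemble mean
    spread = members.std(axis=0, ddof=0).mean()        # domain-mean spread
    rmse = np.sqrt(((ens_mean - analysis) ** 2).mean())
    return spread, rmse, spread / rmse
```

With `members` shaped `(n_members, ny, nx)`, a ratio well below 1.0, as found for the GEFS icing potential, indicates that the spread under-estimates the actual forecast error.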

The under-dispersion may not be due to the icing potential computation method; it is likely due largely to the global ensemble forecast system itself. Currently, almost all ensemble forecasts at NCEP suffer from under-dispersion, and improving member diversity has been one of the goals of ensemble research and development at NCEP.

To compare the ensemble forecast to a single model forecast (the reference), the continuous ranked probability skill score (CRPSS) is shown in Fig. 13, where the reference is G-FIP and the CRPSS, averaged between FL140 and FL180, is provided for the northern and southern hemispheres. A positive CRPSS indicates that the ensemble forecast has a smaller error than the reference forecast, and a negative CRPSS the opposite. The CRPSS curves show positive values over the entire forecast range for both hemispheres, with the northern hemisphere slightly better than the southern. Moreover, the ensemble forecast improves relative to the reference as the forecast hour increases.
Fig. 13

FL140–FL180 averaged CRPSS, with G-FIP as the reference, for various forecast times over the southern hemisphere (blue) and northern hemisphere (red)
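A minimal sketch of the CRPSS computation, using the standard kernel form of the ensemble CRPS and the fact that the CRPS of a single-valued reference forecast such as G-FIP collapses to the absolute error. This is illustrative only, not the code used to produce Fig. 13.

```python
import numpy as np

def crps_ensemble(members, obs):
    """CRPS of an ensemble at a single point, kernel form:
    mean|x_i - y| - 0.5 * mean|x_i - x_j|."""
    x = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(x - obs))
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return term1 - term2

def crpss(members, reference, obs):
    """Skill of the ensemble relative to a single-valued reference forecast.
    Positive values mean the ensemble has the smaller error."""
    return 1.0 - crps_ensemble(members, obs) / abs(reference - obs)
```

In practice the point-wise scores are averaged over the domain and verification period before the skill score is formed; the sketch shows the single-point building block.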

Reliability (also called statistical consistency or probability bias; Zhou and Du 2010) is an important measure of whether the forecast probability is statistically consistent with the observed frequency of occurrence of events over the verification period; it also reflects the confidence of the probability forecast. The reliability curves for nine icing potential thresholds (> 0.01, 0.2, …, 0.9) at three flight levels over the entire domain and all forecast hours are displayed in Fig. 14. They show that the higher ensemble probability forecasts are over-confident, or too aggressive (forecast probability > observed frequency), while the lower ensemble probabilities are under-confident, or too conservative (forecast probability < observed frequency). Such reliability information is useful to forecasters when they use ensemble probabilities in icing forecasting. As in the single model forecast, the ROC area can also be used to show forecast skill in terms of hit rate and false alarm rate; see Fig. 14 (right), where ROC curves for nine icing potential thresholds at three flight levels over the entire domain and all forecast hours are shown. The ensemble icing potential forecasts at all three levels are skillful, and comparing Fig. 14 to Fig. 10, one can conclude that the ensemble icing prediction is more skillful than the single model forecast in terms of hit rates and false alarm rates.
Fig. 14

Reliability diagrams (left) and ROC diagrams (right) for three flight levels over the entire global domain and all forecast times (note that 'observation' here refers to the analysis)
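A reliability curve of the kind shown in Fig. 14 can be sketched by binning the forecast probabilities and comparing each bin's mean probability with the observed (here, analyzed) event frequency. The function and its inputs are hypothetical, intended only to make the construction concrete.

```python
import numpy as np

def reliability_curve(prob_fcst, obs_event, bins=10):
    """Mean forecast probability and observed event frequency per bin.
    Points below the diagonal indicate over-confidence (too aggressive),
    points above it under-confidence (too conservative)."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    mean_prob, obs_freq = [], []
    for i in range(bins):
        sel = (prob_fcst >= edges[i]) & (prob_fcst < edges[i + 1])
        if i == bins - 1:                      # include probability 1.0
            sel |= prob_fcst == 1.0
        if sel.any():                          # skip empty bins
            mean_prob.append(float(prob_fcst[sel].mean()))
            obs_freq.append(float(obs_event[sel].mean()))
    return np.array(mean_prob), np.array(obs_freq)
```

A perfectly reliable system produces points on the diagonal (forecast probability equal to observed frequency).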

Finally, the event-driven icing potential forecast was evaluated. Just as the icing potential is converted to nine events, the ensemble probability forecast for each icing event (ranging from 0 to 100%) can also be converted to nine deterministic categorical forecasts using probability thresholds of 10, 20, …, 90%. The contingency table can then be applied to verify these categorical forecasts; in total, 9 × 9 = 81 events can be verified categorically. The ETS at FL180 over the entire domain and all forecast hours is summarized in Fig. 15, where only the icing potential events with thresholds 0.1, 0.3, 0.5, 0.7 and 0.9 are displayed. Using ensemble probability, the prediction of weak icing events generally performs better than that of severe icing events. We also see that for weak icing events the best probability thresholds are around 30–50%, while for severe icing events the best probability threshold is near 10%. This implies that severe icing events are difficult to predict and prone to being missed if a higher ensemble probability threshold is used; in other words, requiring too many members to capture a severe icing event leads to misses. This evaluation also provides valuable information about the forecast uncertainty in the icing potential ensemble forecast.
Fig. 15

ETS of GEFS icing prediction at FL180 for different ensemble probability thresholds and different icing potential thresholds over entire domain and all forecast hours

5 Summary and Future Work

To further improve the WAFC Washington aviation hazard products, EMC began collaborating with NCAR's icing and turbulence groups in 2009 to transition their icing forecast, icing analysis, and turbulence forecast algorithms into NCEP operations. The new paradigm is to use EMC's Unified Post Processor (UPP) as an aviation research-to-operations (R2O) tool. NCAR's algorithms were re-worked and incorporated into UPP, which supports eight operational models and uses MPI to speed up post-processing when multiple processors are used. This new strategy has proven successful. First, EMC is able to improve the global aviation products significantly through the use of the high resolution grids available to UPP. Second, because UPP also supports the operational Global Ensemble Forecast System (GEFS), EMC was able to start deriving probabilistic icing and turbulence products by scheduling an operational implementation in which all twenty-one global ensemble members output icing and turbulence products. Finally, the much improved efficiency gained by generating aviation products with UPP has not only sped up product delivery, but also made it feasible to start planning the generation of aviation products from very high resolution models such as the HRRR. Using UPP to generate Global Graphical Turbulence Guidance (G-GTG) products with forty-eight processors is eight times faster than generating G-GTG serially.

Adaptation of the FIP (Forecast Icing Potential), FIS (Forecast Icing Severity), and GTG algorithms into UPP was carried out through multiple years of careful collaboration with NCAR's icing and turbulence groups. The first effort was to incorporate a global version of the FIP algorithm (G-FIP) developed by NCAR (McDonough et al. 2010) into UPP to work with GFS output. In 2011, EMC started generating experimental G-FIP products from UPP. These products were soon distributed to the Aviation Weather Center (AWC) and the Alaska Aviation Weather Unit for evaluation. Subsequently, a global version of the FIS algorithm (G-FIS) was also incorporated into UPP. Because the National Weather Service (NWS) also committed itself to performing icing verification for both WAFCs, EMC was tasked with developing a global version of CIP (G-CIP) to be used as verifying truth for the WAFS icing forecast.

The global version of GTG (G-GTG) developed by Sharman and Pearson (2017) was the latest to be incorporated into UPP. Integration of G-GTG into UPP was more labor intensive, not only because of its much more extensive algorithms but also because it computes many derivatives and spatial averages, which require extra handling in UPP's MPI environment. More details on the algorithms can be found in Sect. 2.2, while information on how the algorithms were adapted in UPP can be found in Sect. 2.4.

Qualitative validation via case studies was performed to evaluate the performance of G-FIP, G-FIS, and G-GTG, along with objective verification. In Sect. 4.1, two in-flight icing cases were shown first to demonstrate the good performance of G-FIP and G-FIS. The first case showed how the G-FIP and G-FIS 7-h forecasts were able to predict the moderate icing event in southeastern Texas and the light icing event in central Tennessee that occurred at 13Z on January 16 2018 (Fig. 3). The second case involved a pilot who experienced an aerodynamic stall while climbing through icing conditions at flight level 150 at 07Z on November 14 2016 near Bergen, Norway. This incident was predicted by the 24-h WAFS icing forecast: as indicated in Fig. 4, the forecast showed a 70% chance of severe icing near Bergen, Norway.

Two high impact turbulence cases were also examined to demonstrate the strong performance of the recently implemented G-GTG products. The first was a highly publicized case of an American Airlines flight hitting extreme turbulence near Pueblo, Colorado at FL370 at 00Z on February 28 2017, resulting in five injuries. Figure 5 shows that this incident was predicted consistently by the then-experimental G-GTG products from 36 h ahead of time down to 12 h ahead of time. In the second case, Eva Airways reported hitting severe turbulence near Fukuoka, Japan at FL310 at 14Z on November 22 2017, which caused its Boeing 777 to first drop 550 feet and then climb 375 feet. As a result, a cabin crew member was seriously injured, and eight other crew members and three passengers received minor injuries. Although EMC only stores GFS output at 12Z and 18Z (Fig. 6) on the event day, it was evident from these forecasts that severe turbulence could have occurred at 14Z near Fukuoka, Japan.

Objective verification of G-FIP, against CONUS CIP at the early development stage and then against G-CIP after 2014, indicated that it out-performed the older version of the WAFC Washington icing forecast in terms of equitable threat score, bias score, and ROC score. Therefore, with support from AWC and the FAA, EMC implemented G-FIP as the new WAFC Washington icing forecast product in 2015. Since this implementation, EMC has continuously verified the blended WAFS mean and max icing forecasts against G-CIP and publishes updated statistics on a web site every few months. In this paper, the authors showed averaged ROC curves from May 2015 to February 2017 (Fig. 10) for various forecast times and flight levels and concluded that both the blended WAFS mean and max forecasts have been very skillful at all examined levels and forecast times since the 2015 G-FIP implementation. The skill of the ensemble icing forecast was also examined. Although the probabilistic icing forecast showed under-dispersion, most likely due to the inherent under-dispersion problem of GEFS, the ensemble icing forecast is more skillful than the deterministic icing forecast in terms of hit rates and false alarm rates.

GSD's verification group recently finished objective verification of the recently implemented G-GTG against both EDR and PIREP data from July to October 2017. Note that although G-GTG was implemented operationally, it has not yet replaced the WAFC Washington turbulence product. The verification results indicated that G-GTG performed better than the blended WAFS turbulence product at all thresholds above 0.1. When verified against EDR data, G-GTG is significantly better than the blended WAFS turbulence product in terms of ROC scores (Fig. 16). Furthermore, G-GTG on the higher resolution 0.25 degree grid has slightly better skill than G-GTG mapped to the operational WAFS resolution of 1.25 degrees. The authors wish to credit the Forecast Impact and Quality Assessment Section (FIQAS) within NOAA/OAR/ESRL/GSD for providing these verification results. This research was performed in response to requirements and funding by the FAA; the views expressed in this publication are those of the authors and do not necessarily represent the official policy or position of the FAA.
Fig. 16

Global verification of G-GTG and WAFS turbulence products against EDR data, expressed as ROC curves for all cycles, forecast hours, and levels from 150 to 400 hPa, from July to October 2017. The Y-axis (PODY) indicates the probability of detection and the X-axis the probability of false detection. The letters L, M, and S mark the forecast thresholds for light, moderate, and severe turbulence, respectively

This plot is courtesy of Forecast Impact and Quality Assessment Section (FIQAS) within NOAA/OAR/ESRL/GSD

EMC will continue to work with NCAR and AWC to further improve WAFS products and to meet ICAO milestones. With NOAA's plan to replace the Global Forecast System with the Finite Volume Model Version 3 (FV3) in 2019, EMC has begun to modify the G-FIP and G-FIS algorithms within UPP to work with the more sophisticated GFDL and Thompson microphysics schemes. EMC is also working with NCAR to ensure the G-GTG algorithm works as well with FV3 output and to perform minor tuning when necessary. The plan is to replace the US WAFS turbulence product with G-GTG in 2019. It is worth mentioning that the UK is also implementing G-GTG to replace the UK WAFS turbulence product; as a result, WAFS users will see improvement in the blended WAFS turbulence forecast product. EMC is collaborating with AWC to develop tools to compute a global probabilistic icing severity product to complete ICAO's milestone of starting to distribute probabilistic aviation products.



NCEP's WAFS work was previously funded by the former NOAA Aviation Service Branch and is currently funded by the NOAA Office of Science and Technology Integration (STI). The authors would like to thank NCAR's icing and turbulence groups for providing EMC with the latest versions of the G-FIP and G-GTG algorithms. The authors also wish to thank the NOAA Aviation Weather Center and EMC management for their support in providing feedback and coordinating implementations to meet ICAO milestones. Finally, the FAA's funding support to NCAR to collaborate with EMC on this R2O effort and to GSD to validate G-GTG products is much appreciated.


  1. Arakawa, A., & Lamb, V. R. (1977). Computational design of the basic dynamical processes of the UCLA general circulation model. Methods of computational physics (Vol. 17, pp. 173–265). New York: Academic Press.
  2. The aviation herald. Available online.
  3. Benjamin, S., et al. (2004). An hourly assimilation and model-forecast cycle: The RUC. Monthly Weather Review, 132, 495–518.
  4. Benjamin, S., et al. (2016). A North American hourly assimilation and model forecast cycle: The rapid refresh. Monthly Weather Review, 144, 1669–1694.
  5. Bernstein, B., et al. (2005). Current icing potential: Algorithm description and comparison with aircraft observations. Journal of Applied Meteorology, 44, 969–986.
  6. Chuang, H.-Y. (2010). Development and application of the unified post processor for WRF NMM, WRF ARW, and GFS. Boulder: WRF Workshop.
  7. Du, J., DiMego, G., Zhou, B., Jovic, D., Ferrier, B., & Yang, B. (2015). Regional ensemble forecast systems at NCEP. 23rd Conf. on Numerical Weather Prediction and 27th Conf. on Weather Analysis and Forecasting, Chicago, IL, Amer. Meteor. Soc., June 29–July 3.
  8. Ellrod, G. P., & Knapp, D. I. (1992). An objective clear-air turbulence forecasting technique: Verification and operational use. Weather and Forecasting, 7, 150–165.
  9. The global forecast system. Available online.
  10. Gultepe, I., et al. (2018). A meteorological supersite for aviation and cold weather applications. Pure and Applied Geophysics (this issue).
  11. International Civil Aviation Organization (2016). Review of Amendment 77 to Annex 3. MET SG/20-IP/05, Bangkok.
  12. Kulesa, G. J., Pace, D. J., Fellner, W. L., Sheets, J. E., Travers, V. S., & Kirchoffer, P. J. (2003). The FAA aviation weather research program's contribution to air transportation safety and efficiency. P9.1 ARMS, AMS.
  13. Lu, C. H., et al. (2016). The implementation of NEMS GFS aerosol component (NGAC) Version 1.0 for global dust forecasting at NOAA/NCEP. Geoscientific Model Development, 9, 1905–1919.
  14. McDonough, F., Politovich, M., & Wolff, C. (2010). The global forecast icing product. Paper AIAA-2010-8111, AIAA Atmospheric and Space Environments Conference, Toronto.
  15. Rogers, E., et al. (2017). Mesoscale modeling development at the National Centers for Environmental Prediction: Version 4 of the NAM forecast system and scenarios for the evolution to a high-resolution ensemble forecast system. 28th Conf. on Weather Analysis and Forecasting, Seattle.
  16. Sharman, R. D., & Pearson, J. M. (2017). Prediction of energy dissipation rates for aviation turbulence. Part I: Forecasting nonconvective turbulence. Journal of Applied Meteorology and Climatology, 56, 317–337.
  17. Sharman, R., Tebaldi, C., Wiener, G., & Wolff, J. (2006). An integrated approach to mid- and upper-level turbulence forecasting. Weather and Forecasting, 21, 268–287.
  18. Slingo, J. M. (1987). The development and verification of a cloud prediction scheme for the ECMWF model. Quarterly Journal of the Royal Meteorological Society, 113, 899–927.
  19. Thompson, G., & Eidhammer, T. (2014). A study of aerosol impacts on clouds and precipitation development in a large winter cyclone. Journal of the Atmospheric Sciences, 71, 3636–3658.
  20. Trojan, G. (2007). Grib aviation products: WAFC Washington progress report.
  21. Turp, D. J., Macadam, I., Bysouth, C., & Jerrett, D. (2006). Development of grib icing and turbulence products for WAFC London. WAFC London progress report.
  22. White, G., et al. (2017). Evaluation of May 2016 GFS upgrade. 28th Conf. on Weather Analysis and Forecasting, Seattle.
  23. Wilks, D. S. (2006). Statistical methods in the atmospheric sciences. International Geophysics Series (Vol. 59, p. 627). New York: Academic Press.
  24. Wolff, C., McDonough, F., Politovich, M., & Gary, C. (2009). Forecast icing product: Recent upgrades and improvements. 1st AIAA Atmospheric and Space Environments Conference, San Antonio.
  25. Zhao, Q. Y., & Carr, F. H. (1997). A prognostic cloud scheme for operational NWP models. Monthly Weather Review, 125, 1931–1953.
  26. Zhou, B., & Du, J. (2010). Fog prediction from a multi-model mesoscale ensemble prediction system. Weather and Forecasting, 25, 303–322.
  27. Zhou, X., Zhu, Y., Hou, D., Luo, Y., Peng, J., & Wobus, D. (2017). The NCEP global ensemble forecast system with the EnKF initialization. Weather and Forecasting, 32, 1989–2004.

Copyright information

© This is a U.S. government work and its text is not subject to copyright protection in the United States; however, its text may be subject to foreign copyright protection 2018

Authors and Affiliations

  1. NOAA/NCEP, Environmental Modeling Center, College Park, USA
  2. IMSG, Rockville, USA
