Climate Dynamics, Volume 38, Issue 1–2, pp 161–173

Hindcast skill and predictability for precipitation and two-meter air temperature anomalies in global circulation models over the Southeast United States

  • Lydia Stefanova
  • Vasubandhu Misra
  • James J. O’Brien
  • Eric P. Chassignet
  • Saji Hameed


This paper presents an assessment of the seasonal prediction skill of current global circulation models, with a focus on the two-meter air temperature and precipitation over the Southeast United States. The model seasonal hindcasts are analyzed using measures of potential predictability, anomaly correlation, Brier skill score, and Gerrity skill score. The systematic differences in prediction skill of coupled ocean–atmosphere models versus models using prescribed (either observed or predicted) sea surface temperatures (SSTs) are documented. It is found that the predictability and the hindcast skill of the models vary seasonally and spatially. The largest potential predictability (signal-to-noise ratio) of precipitation anywhere in the United States is found in the Southeast in the spring and winter seasons. The maxima in the potential predictability of two-meter air temperature, however, reside outside the Southeast in all seasons. The largest deterministic hindcast skill over the Southeast is found in wintertime precipitation. At the same time, the boreal winter two-meter air temperature hindcasts have the smallest skill. The large wintertime precipitation skill, the lack of corresponding two-meter air temperature hindcast skill, and a lack of precipitation skill in any other season are features common to all three types of models (atmospheric models forced with observed SSTs, atmospheric models forced with predicted SSTs, and coupled ocean–atmosphere models). Atmospheric models with observed SST forcing demonstrate a moderate skill in hindcasting spring- and summertime two-meter air temperature anomalies, whereas coupled models and atmospheric models forced with predicted SSTs lack similar skill. Probabilistic and categorical hindcasts mirror the deterministic findings, i.e., there is very high skill for winter precipitation and none for summer precipitation.
When skillful, the models are conservative, such that low-probability hindcasts tend to be overestimates, whereas high-probability hindcasts tend to be underestimates.
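The verification measures named in the abstract can be illustrated on synthetic data. The sketch below (not from the paper; the toy hindcast data, variable names, and thresholds are all assumptions for demonstration) computes three of them: the signal-to-noise ratio used as potential predictability, the anomaly correlation of the ensemble-mean hindcast, and the Brier skill score for an "above-median" event against a climatological reference forecast.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hindcast set: 25 years, 10 ensemble members. The "signal" is the
# SST-forced predictable component shared by observations and model runs.
n_years, n_members = 25, 10
signal = rng.normal(size=n_years)
obs = signal + rng.normal(scale=0.8, size=n_years)             # observed anomalies
ens = signal[:, None] + rng.normal(size=(n_years, n_members))  # member anomalies

# Potential predictability: variance of the ensemble mean across years
# relative to the mean within-ensemble (noise) variance.
fcst = ens.mean(axis=1)
snr = fcst.var(ddof=1) / ens.var(axis=1, ddof=1).mean()

# Deterministic skill: anomaly correlation between the ensemble-mean
# hindcast anomaly and the observed anomaly.
ac = np.corrcoef(fcst, obs)[0, 1]

# Probabilistic skill: Brier skill score for the above-median event,
# with climatology (constant p = 0.5) as the reference forecast.
event = (obs > np.median(obs)).astype(float)
prob = (ens > np.median(obs)).mean(axis=1)  # fraction of members above median
bs = np.mean((prob - event) ** 2)
bs_clim = np.mean((0.5 - event) ** 2)
bss = 1.0 - bs / bs_clim                    # > 0 means skill over climatology

print(f"SNR: {snr:.2f}  anomaly correlation: {ac:.2f}  BSS: {bss:.2f}")
```

Because the synthetic members share a genuine signal, the anomaly correlation and Brier skill score come out positive; shrinking the signal amplitude drives all three measures toward their no-skill values, mimicking the seasons and variables for which the paper reports no skill.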





We thank the various model providers for making available, through the APEC Climate Center (APCC), the hindcast datasets used in this study. We thank Ms. Kyong Hee An for facilitating the data access and providing associated documentation, and Ms. Kathy Fearon for her careful reading of the manuscript and helpful editorial comments. All model data used in this study are available online from APCC. This research was supported by NOAA grant NA07OAR4310221 and USDA grant 2088-38890-19013.



Copyright information

© Springer-Verlag 2011

Authors and Affiliations

  • Lydia Stefanova (1)
  • Vasubandhu Misra (1)
  • James J. O’Brien (1)
  • Eric P. Chassignet (1)
  • Saji Hameed (2, 3)

  1. Center for Ocean-Atmospheric Prediction Studies, Florida State University, Tallahassee, USA
  2. Asia–Pacific Economic Cooperation (APEC) Climate Center, Busan, Korea
  3. University of Aizu, Fukushima, Japan
