
Acta Meteorologica Sinica, Volume 26, Issue 1, pp 41–51

A comparison of three kinds of multimodel ensemble forecast techniques based on the TIGGE data

  • Xiefei Zhi (智协飞)
  • Haixia Qi (祁海霞)
  • Yongqing Bai (白永清)
  • Chunze Lin (林春泽)

Abstract

Based on the ensemble mean outputs of the ensemble forecasts from the ECMWF (European Centre for Medium-Range Weather Forecasts), JMA (Japan Meteorological Agency), NCEP (National Centers for Environmental Prediction), and UKMO (United Kingdom Met Office) in the THORPEX (The Observing System Research and Predictability Experiment) Interactive Grand Global Ensemble (TIGGE) datasets for the Northern Hemisphere (10°–87.5°N, 0°–360°) from 1 June to 31 August 2007, this study carried out multimodel ensemble forecasts of surface temperature and of 500-hPa geopotential height, temperature, and winds out to 168 h for the forecast period 8–31 August 2007. Three techniques were applied: the bias-removed ensemble mean (BREM), the multiple linear regression based superensemble (LRSUP), and the neural network based superensemble (NNSUP).
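For reference, the block below is a minimal sketch of the conventional formulations behind the two linear techniques, assuming the standard Krishnamurti-type construction; the exact variants used in this study may differ in detail. Broadly speaking, NNSUP replaces the linear combination in LRSUP with a neural network mapping from the bias-removed model forecasts to the observations, trained over the same period.

```latex
% A sketch of the conventional formulations (assumed; the study's exact
% variants may differ). N models; F_{i,t}: forecast of model i at time t;
% \bar{F}_i, \bar{O}: model and observed means over the training period;
% a_i: weights fitted by multiple linear regression over the training period.
\[
\mathrm{BREM}_t \;=\; \bar{O} \;+\; \frac{1}{N}\sum_{i=1}^{N}\bigl(F_{i,t}-\bar{F}_i\bigr),
\qquad
\mathrm{LRSUP}_t \;=\; \bar{O} \;+\; \sum_{i=1}^{N} a_i\bigl(F_{i,t}-\bar{F}_i\bigr).
\]
```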

The forecast skill is verified using the root-mean-square error (RMSE). Comparative analysis of the BREM, LRSUP, and NNSUP results shows that the multimodel ensemble forecasts are more skillful than the best single model for forecast lead times of 24–168 h. A roughly 16% improvement in the RMSE of the 500-hPa geopotential height is possible for the superensemble techniques (LRSUP and NNSUP) over the best single model for the 24–120-h forecasts, while the improvement is only 8% for BREM. The NNSUP is more skillful than the LRSUP and BREM for the 24–120-h forecasts; for the 144–168-h forecasts, however, the errors of BREM, LRSUP, and NNSUP are approximately equal. In addition, BREM appears to be more skillful when the UKMO model is excluded than when it is included, whereas LRSUP performs approximately the same in both cases.
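As an illustration of the verification measure only, the following Python sketch computes the RMSE of a gridded forecast against an analysis on synthetic data; the field sizes, noise amplitudes, and the unweighted averaging over grid points are illustrative assumptions, not the paper's verification setup.

```python
import numpy as np

def rmse(forecast, analysis):
    """Root-mean-square error over all grid points of a 2-D (lat x lon) field.

    Unweighted averaging is assumed here for simplicity; an area
    (cosine-of-latitude) weighting could be substituted.
    """
    return np.sqrt(np.mean((forecast - analysis) ** 2))

# Synthetic 500-hPa geopotential height fields (gpm) on a coarse grid,
# purely to illustrate the measure.
rng = np.random.default_rng(0)
analysis = 5500.0 + 50.0 * rng.standard_normal((32, 144))
single_model = analysis + 30.0 * rng.standard_normal(analysis.shape)
multimodel = analysis + 25.0 * rng.standard_normal(analysis.shape)  # smaller error noise

print(f"best single model RMSE:   {rmse(single_model, analysis):5.1f} gpm")
print(f"multimodel forecast RMSE: {rmse(multimodel, analysis):5.1f} gpm")
```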

A running training period is used for the BREM and LRSUP ensemble forecast techniques. It is found that the optimal length of the training period differs between BREM and LRSUP at each grid point. In general, the optimal training period for BREM is shorter than 30 days in most areas, whereas for LRSUP it is about 45 days.
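The sketch below illustrates, on synthetic data at a single grid point, how an optimal running training period length could be chosen for the bias-removal step underlying BREM: the bias is re-estimated each day from the preceding window of a candidate length, and the length giving the smallest RMSE over an evaluation period is selected. The data, candidate lengths, and single-model simplification are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily series at one grid point: observations and one model's
# forecasts with a slowly drifting bias (purely illustrative data).
ndays = 150
obs = 20.0 + 5.0 * np.sin(np.arange(ndays) / 15.0) + rng.standard_normal(ndays)
fcst = obs + (2.0 + 0.01 * np.arange(ndays)) + 0.8 * rng.standard_normal(ndays)

def running_bias_removal_rmse(train_len, eval_start=60):
    """RMSE of bias-corrected forecasts when the bias is re-estimated each
    day from the preceding `train_len` days (a running training period)."""
    errors = []
    for t in range(eval_start, ndays):
        est_bias = np.mean(fcst[t - train_len:t] - obs[t - train_len:t])
        errors.append(fcst[t] - est_bias - obs[t])
    return np.sqrt(np.mean(np.square(errors)))

candidate_lengths = [15, 20, 25, 30, 35, 40, 45, 50]  # days
scores = {n: running_bias_removal_rmse(n) for n in candidate_lengths}
best = min(scores, key=scores.get)
print({n: round(v, 2) for n, v in scores.items()})
print(f"optimal training period at this grid point: {best} days")
```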

Key words

multimodel superensemble; bias-removed ensemble mean; multiple linear regression; neural network; running training period; TIGGE



Copyright information

© The Chinese Meteorological Society and Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Xiefei Zhi (智协飞) (1)
  • Haixia Qi (祁海霞) (1)
  • Yongqing Bai (白永清) (1)
  • Chunze Lin (林春泽) (2)

  1. Key Laboratory of Meteorological Disaster of Ministry of Education, Nanjing University of Information Science & Technology, Nanjing, China
  2. Wuhan Institute of Heavy Rain, China Meteorological Administration, Wuhan, China
