Neural Computing & Applications, Volume 13, Issue 2, pp 112–122

An ensemble of neural networks for weather forecasting

  • Imran Maqsood
  • Muhammad Riaz Khan
  • Ajith Abraham
Original Article


Abstract

This study presents the applicability of an ensemble of artificial neural networks (ANNs) and learning paradigms for weather forecasting in southern Saskatchewan, Canada. The proposed ensemble method for weather forecasting has advantages over other techniques such as linear combination. Generally, the output of an ensemble is a weighted sum whose weights are fixed, having been determined from the training or validation data. In the proposed approach, the weights are determined dynamically from the respective certainties of the network outputs: the more certain a network seems to be of its decision, the higher its weight. The performance of the proposed ensemble model is contrasted with multi-layered perceptron network (MLPN), Elman recurrent neural network (ERNN), radial basis function network (RBFN) and Hopfield model (HFM) predictive models, as well as with regression techniques. Temperature, wind speed and relative humidity data are used to train and test the different models. With each model, 24-h-ahead forecasts are made for the winter, spring, summer and fall seasons. The performance and reliability of the seven models are then evaluated by a number of statistical measures. Among the direct approaches employed, empirical results indicate that HFM is relatively less accurate and RBFN is relatively more reliable for the weather forecasting problem. In comparison, the ensemble of neural networks produced the most accurate forecasts.
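The dynamic weighting idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: the function name and the scalar per-network certainty values are assumptions, with the certainties taken as given (e.g. derived from each network's output activations).

```python
import numpy as np

def certainty_weighted_forecast(predictions, certainties):
    """Combine ensemble member outputs into a single forecast.

    Each network's weight is proportional to its certainty, so the
    weights are recomputed for every input rather than fixed after
    training, as they would be in a static linear combination.
    """
    predictions = np.asarray(predictions, dtype=float)
    certainties = np.asarray(certainties, dtype=float)
    weights = certainties / certainties.sum()  # normalise to sum to 1
    return float(np.dot(weights, predictions))

# Example: two networks forecast tomorrow's temperature; the more
# certain network dominates the combined forecast.
print(certainty_weighted_forecast([10.0, 20.0], [3.0, 1.0]))  # 12.5
```

Because the weights track the networks' certainties on each individual input, the combination adapts from forecast to forecast, unlike a weighted sum whose coefficients are fitted once on training or validation data.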


Keywords: Artificial neural networks · Ensembles · Forecasting · Model · Weather



Acknowledgements

The authors would like to thank the staff of Environment Canada for providing the weather data used in this study. The authors are grateful for the comments of the anonymous reviewers, which helped to improve the presentation of this paper.



Copyright information

© Springer-Verlag London Limited 2004

Authors and Affiliations

  • Imran Maqsood (1)
  • Muhammad Riaz Khan (2)
  • Ajith Abraham (3)

  1. Faculty of Engineering, University of Regina, Regina, Canada
  2. AMEC Technologies Training and Development Services, Vancouver, Canada
  3. Computer Science Department, Oklahoma State University, USA
