
Deep Neural Networks for Wind and Solar Energy Prediction

Published in: Neural Processing Letters

Abstract

Deep Learning models have recently received considerable attention because of their very powerful modeling abilities, particularly on inputs that have an intrinsic one- or two-dimensional structure that can be captured and exploited by convolutional layers. In this work we apply Deep Neural Networks (DNNs) to two problems, wind energy and daily solar radiation prediction, whose inputs, derived from Numerical Weather Prediction systems, have a clear spatial structure. As we shall see, the predictions of single deep models and, even more so, of DNN ensembles can improve on those of Support Vector Regression, a Machine Learning method that can be considered the state of the art for regression.
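The ensemble gain the abstract mentions can be illustrated with a minimal synthetic sketch (not the paper's actual networks or data): when several independently trained regressors make errors that are at least partly uncorrelated, averaging their predictions reduces the mean squared error, in the spirit of bagging. Here each "model" is simulated as the target plus independent noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression target, e.g. a wind-energy-like signal over 200 points.
y = np.sin(np.linspace(0.0, 6.0, 200))

# Simulate M independently trained models: each prediction is the target
# plus independent noise, standing in for per-network estimation error.
M = 10
preds = y + rng.normal(scale=0.3, size=(M, y.size))

# MSE of each individual model vs. MSE of the averaged ensemble.
mse_single = np.mean((preds - y) ** 2, axis=1)
mse_ensemble = np.mean((preds.mean(axis=0) - y) ** 2)

print(f"mean single-model MSE: {mse_single.mean():.4f}")
print(f"ensemble MSE:          {mse_ensemble:.4f}")
```

With fully independent errors the ensemble variance shrinks roughly by a factor of M; in practice trained networks have correlated errors, so the real gain is smaller but, as the paper reports for its DNN ensembles, still present.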




Acknowledgements

With partial support from Spain’s Grants TIN2013-42351-P (MINECO), S2013/ICE-2845 CASI-CAM-CM (Comunidad de Madrid), FACIL (Ayudas Fundación BBVA a Equipos de Investigación Científica 2016) and the UAM–ADIC Chair for Data Science and Machine Learning. The second author is also kindly supported by the FPU-MEC Grant AP-2012-5163. The authors gratefully acknowledge access to the MARS repository granted by the ECMWF and the use of the facilities of Centro de Computación Científica (CCC) at UAM, and thank Red Eléctrica de España for kindly supplying wind energy production data and Sotavento for making their production data publicly available.

Author information

Correspondence to David Díaz–Vico.


About this article


Cite this article

Díaz–Vico, D., Torres–Barrán, A., Omari, A. et al. Deep Neural Networks for Wind and Solar Energy Prediction. Neural Process Lett 46, 829–844 (2017). https://doi.org/10.1007/s11063-017-9613-7
