
Deep Neural Network Modeling for Big Data Weather Forecasting

  • James N. K. Liu
  • Yanxing Hu
  • Yulin He
  • Pak Wai Chan
  • Lucas Lai
Chapter
Part of the Studies in Big Data book series (SBD, volume 8)

Abstract

The arrival of the big data era brings opportunities to greatly improve the accuracy of weather forecasting. Weather change is a complex process affected by thousands of variables. In traditional computational intelligence models, features must be selected from these variables according to some fundamental assumptions, so the correctness of those assumptions can crucially affect prediction accuracy. The principle of big data, by contrast, is to let the data speak: when the volume of data is large enough, the statistical regularities hidden in the domain data are revealed by the data set itself. Therefore, if a massive volume of weather data is employed, we may be able to avoid such assumptions in our models and improve weather prediction accuracy by learning the correlations hidden in the data. In our investigation, we employ a new computational intelligence technique, the stacked Auto-Encoder, to model 30 years of hourly weather data. This method automatically learns features from the massive data set via layer-by-layer feature granulation, and the large size of the data set helps the complex deep model avoid overfitting. The experimental results demonstrate that using the newly learned feature representations in a classical model yields higher accuracy on time series prediction problems.
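To make the approach concrete, the following is a minimal sketch (not the authors' implementation) of greedy layer-by-layer pretraining of a stacked Auto-Encoder in Python with NumPy. The layer sizes, learning rate, number of epochs, and the random stand-in for the hourly weather records are illustrative assumptions only.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class AutoEncoder:
    """A single auto-encoder layer with tied weights, trained to reconstruct its input."""
    def __init__(self, n_in, n_hidden, rng):
        scale = 1.0 / np.sqrt(n_in)
        self.W = rng.uniform(-scale, scale, (n_in, n_hidden))
        self.b_h = np.zeros(n_hidden)   # hidden-layer bias
        self.b_v = np.zeros(n_in)       # reconstruction bias

    def encode(self, x):
        return sigmoid(x @ self.W + self.b_h)

    def decode(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def train(self, X, lr=0.1, epochs=50):
        n = len(X)
        for _ in range(epochs):
            h = self.encode(X)                     # hidden representation
            r = self.decode(h)                     # reconstruction of the input
            d_r = (r - X) * r * (1.0 - r)          # output-layer error signal
            d_h = (d_r @ self.W) * h * (1.0 - h)   # back-propagated hidden error
            # Gradient of the squared reconstruction error w.r.t. the tied weights
            grad_W = (X.T @ d_h + d_r.T @ h) / n
            self.W -= lr * grad_W
            self.b_h -= lr * d_h.mean(axis=0)
            self.b_v -= lr * d_r.mean(axis=0)

def pretrain_stack(X, layer_sizes, rng):
    """Greedy layer-by-layer pretraining: each auto-encoder learns to
    reconstruct the hidden representation produced by the layer below it."""
    layers, inp = [], X
    for n_hidden in layer_sizes:
        ae = AutoEncoder(inp.shape[1], n_hidden, rng)
        ae.train(inp)
        layers.append(ae)
        inp = ae.encode(inp)        # this layer's output feeds the next one
    return layers, inp              # inp holds the learned deep features

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Illustrative stand-in for normalised hourly weather records (samples x variables);
    # the real input would be the 30-year observation set described above.
    X = rng.random((1000, 24))
    stack, features = pretrain_stack(X, layer_sizes=[16, 8], rng=rng)
    print(features.shape)           # deep features for a downstream classical predictor

In this layer-wise scheme each auto-encoder is trained only to reconstruct the representation produced by the layer below it, so the deep feature hierarchy can be learned without labels before the resulting features are passed to a classical time series predictor.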

Keywords

Weather forecasting · Big data · Deep neural network

Notes

Acknowledgment

The authors would like to acknowledge the partial support of the CRG grants G-YL14 and G-YM07 of The Hong Kong Polytechnic University.


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • James N. K. Liu (1)
  • Yanxing Hu (1)
  • Yulin He (2)
  • Pak Wai Chan (3)
  • Lucas Lai (1)

  1. Department of Computing, The Hong Kong Polytechnic University, Hung Hom, Hong Kong
  2. College of Mathematics and Computer Science, Hebei University, Baoding, China
  3. Hong Kong Observatory, Kowloon, Hong Kong
