
ST-DenNetFus: A New Deep Learning Approach for Network Demand Prediction

  • Haytham Assem
  • Bora Caglayan
  • Teodora Sandra Buda
  • Declan O’Sullivan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11053)

Abstract

Network demand prediction is of great importance for network planning and for dynamically allocating network resources based on the predicted demand. It is also very challenging, as demand is affected by many complex factors, including spatial dependencies, temporal dependencies, and external factors (such as regions’ functionality and crowd patterns, as shown in this paper). We propose a deep learning based approach, called ST-DenNetFus, to predict network demand (i.e. uplink and downlink throughput) in every region of a city. ST-DenNetFus is an end-to-end architecture for capturing the unique properties of spatio-temporal data. It employs separate branches of dense neural networks to capture temporal closeness, period, and trend properties. Within each of these branches, dense convolutional neural units capture the spatial dependencies of network demand across the regions of a city. Furthermore, ST-DenNetFus introduces extra branches for fusing external data sources of various dimensionalities that have not previously been considered in the network demand prediction problem. In our case, these external factors are crowd mobility patterns, temporal functional regions, and the day of the week. We present an extensive experimental evaluation of the proposed approach using two types of network throughput (uplink and downlink) in New York City (NYC), where ST-DenNetFus outperforms four well-known baselines.
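The following is a minimal, illustrative Keras sketch of a DenseNet-style spatio-temporal architecture with external-factor fusion, in the spirit of the branches described above. The grid size, number of frames per temporal branch, layer widths, and the external-feature dimensionality are assumptions made for illustration, not the paper's exact configuration.

```python
# Hypothetical sketch of an ST-DenNetFus-style model: three dense convolutional
# branches (closeness, period, trend) over a city grid, fused with an
# external-factor branch. All sizes below are illustrative assumptions.
from tensorflow.keras import layers, Model

GRID_H, GRID_W, CHANNELS = 32, 32, 2            # city grid and uplink/downlink channels (assumed)
LEN_CLOSENESS, LEN_PERIOD, LEN_TREND = 3, 3, 3  # stacked frames per temporal branch (assumed)
EXT_DIM = 10                                    # crowd patterns, region function, day of week (assumed)

def dense_block(x, growth_rate=12, num_layers=4):
    """DenseNet-style block: each conv layer sees all previous feature maps."""
    for _ in range(num_layers):
        h = layers.BatchNormalization()(x)
        h = layers.ReLU()(h)
        h = layers.Conv2D(growth_rate, 3, padding="same")(h)
        x = layers.Concatenate()([x, h])
    return x

def spatio_temporal_branch(num_frames):
    """One branch (closeness, period, or trend) over stacked throughput maps."""
    inp = layers.Input(shape=(GRID_H, GRID_W, CHANNELS * num_frames))
    x = layers.Conv2D(32, 3, padding="same")(inp)
    x = dense_block(x)
    out = layers.Conv2D(CHANNELS, 3, padding="same")(x)
    return inp, out

# Three temporal branches, each with dense convolutional units over the grid
in_c, out_c = spatio_temporal_branch(LEN_CLOSENESS)
in_p, out_p = spatio_temporal_branch(LEN_PERIOD)
in_t, out_t = spatio_temporal_branch(LEN_TREND)

# Learned per-branch weighting (1x1 convs) before summing the three branches
fused = layers.Add()([layers.Conv2D(CHANNELS, 1)(o) for o in (out_c, out_p, out_t)])

# External-factor branch: fully connected layers reshaped onto the city grid
in_ext = layers.Input(shape=(EXT_DIM,))
e = layers.Dense(64, activation="relu")(in_ext)
e = layers.Dense(GRID_H * GRID_W * CHANNELS, activation="relu")(e)
e = layers.Reshape((GRID_H, GRID_W, CHANNELS))(e)

# Fuse spatio-temporal and external branches; tanh keeps scaled outputs in [-1, 1]
pred = layers.Activation("tanh")(layers.Add()([fused, e]))

model = Model([in_c, in_p, in_t, in_ext], pred)
model.compile(optimizer="adam", loss="mse")
```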

Keywords

Spatio-temporal data · Deep learning · Convolutional neural networks · Dense networks · Network demand prediction

Notes

Acknowledgment

This work was partly supported by the Science Foundation Ireland ADAPT Centre (Grant 13/RC/2106) and by the EC project ASGARD (700381, H2020-ICT-2016-09, Research and Innovation Action).


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Cognitive Computing Group, Innovation Exchange, IBM, Dublin, Ireland
  2. School of Computer Science and Statistics, Trinity College Dublin, Dublin, Ireland
