Abstract
Internet traffic prediction is an important task for many applications, such as adaptive applications, congestion control, admission control, anomaly detection, and bandwidth allocation. In addition, efficient resource-management methods built on accurate forecasts can improve performance and reduce costs. Although the newest deep learning methods have been gaining popularity in several areas, studies applying them to time series prediction are still scarce. This paper compares two artificial neural network approaches to Internet traffic forecasting: a Multilayer Perceptron (MLP) and a deep learning Stacked Autoencoder (SAE). It is shown herein how a simpler neural network model, such as the MLP, can perform even better than a more complex model, such as the SAE, for Internet traffic prediction.
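The comparison rests on a standard sliding-window setup: a window of past traffic samples is fed to the network, which predicts the next sample. A minimal sketch of that setup with a small MLP trained by plain gradient descent is shown below; the synthetic series, window length, and layer sizes are illustrative assumptions, not the paper's actual dataset or architecture.

```python
import numpy as np

# Illustrative sketch only: one-step-ahead forecasting with a tiny MLP on a
# synthetic traffic-like series (daily cycle plus noise, scaled to [0, 1]).
# All sizes and hyperparameters below are assumptions for demonstration.
rng = np.random.default_rng(0)
t = np.arange(500)
series = 0.5 + 0.4 * np.sin(2 * np.pi * t / 24) + 0.05 * rng.standard_normal(500)

def make_windows(x, lag):
    """Slide a window of `lag` past values over x; target is the next value."""
    X = np.array([x[i:i + lag] for i in range(len(x) - lag)])
    y = x[lag:]
    return X, y

lag = 12
X, y = make_windows(series, lag)
split = 400
Xtr, ytr, Xte, yte = X[:split], y[:split], X[split:], y[split:]

# One hidden layer of tanh units, linear output.
hidden = 8
W1 = 0.1 * rng.standard_normal((lag, hidden))
b1 = np.zeros(hidden)
W2 = 0.1 * rng.standard_normal(hidden)
b2 = 0.0

lr = 0.1
n = len(ytr)
for epoch in range(3000):
    H = np.tanh(Xtr @ W1 + b1)          # hidden activations
    pred = H @ W2 + b2                  # linear output
    err = pred - ytr
    # Backpropagated gradients of the mean-squared error.
    gW2 = H.T @ err / n
    gb2 = err.mean()
    dH = np.outer(err, W2) * (1 - H ** 2)
    gW1 = Xtr.T @ dH / n
    gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

test_pred = np.tanh(Xte @ W1 + b1) @ W2 + b2
mse = float(np.mean((test_pred - yte) ** 2))
print(f"test MSE: {mse:.4f}")
```

An SAE would use the same windowed inputs but first pretrain each layer as an autoencoder before fine-tuning the whole stack on the forecasting target; the point of the paper's comparison is that this extra machinery does not necessarily pay off here.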
Copyright information
© 2014 IFIP International Federation for Information Processing
Cite this paper
Oliveira, T.P., Barbar, J.S., Soares, A.S. (2014). Multilayer Perceptron and Stacked Autoencoder for Internet Traffic Prediction. In: Hsu, CH., Shi, X., Salapura, V. (eds) Network and Parallel Computing. NPC 2014. Lecture Notes in Computer Science, vol 8707. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-44917-2_6
Print ISBN: 978-3-662-44916-5
Online ISBN: 978-3-662-44917-2