Bagging Technique Using Temporal Expansion Functions

  • Sebastián Basterrech
  • Andrea Mesa
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 303)


The Bootstrap aggregating (Bagging) technique is widely used in Machine Learning to reduce the prediction error of unstable predictors. The method trains many predictors on bootstrap samples and combines them into a new, more powerful learning tool. However, when the training data has temporal dependencies, the technique is not directly applicable. One of the most efficient models for the treatment of time series is the Recurrent Neural Network (RNN). In this article, we use an RNN to encode the temporal dependency of the input data; in the resulting encoding space, the Bagging technique can then be applied. We analyze the behavior of various neural activation functions for encoding the input data, and we use three simulated and three real time-series datasets to evaluate our approach.
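The pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' exact method: it assumes an Echo State Network-style reservoir with a tanh activation as the temporal expansion function, and ridge-regularized linear readouts as the bagged predictors; all sizes and scaling constants are arbitrary choices for the toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy time series: a noisy sine wave; the task is one-step-ahead prediction.
T = 300
u = np.sin(np.arange(T) * 0.2) + 0.1 * rng.standard_normal(T)

# --- Step 1: reservoir encoding of the temporal dependency ---
n_res = 50
W_in = rng.uniform(-0.5, 0.5, n_res)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # keep spectral radius below 1

x = np.zeros(n_res)
states = []
for t in range(T - 1):
    x = np.tanh(W_in * u[t] + W @ x)        # tanh expansion encodes the input history
    states.append(x.copy())
states = np.array(states)                    # shape (T-1, n_res)
targets = u[1:]

# Discard an initial washout so states no longer depend on the initial x(0).
washout = 20
X, y = states[washout:], targets[washout:]

# --- Step 2: Bagging in the encoded space ---
# The (state, target) pairs are treated as i.i.d., so bootstrap resampling
# is valid; each readout is a ridge regression on one bootstrap sample.
n_models, lam = 10, 1e-4
readouts = []
for _ in range(n_models):
    idx = rng.integers(0, len(X), len(X))    # bootstrap sample with replacement
    Xb, yb = X[idx], y[idx]
    w = np.linalg.solve(Xb.T @ Xb + lam * np.eye(n_res), Xb.T @ yb)
    readouts.append(w)

# Ensemble prediction: average the individual readouts' outputs.
pred = np.mean([X @ w for w in readouts], axis=0)
mse = np.mean((pred - y) ** 2)
print(f"ensemble one-step MSE: {mse:.4f}")
```

The key point is that bagging resamples reservoir states rather than raw observations: once the RNN has folded the temporal context into each state vector, the training pairs can be bootstrapped without destroying the time structure.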


Keywords: Bagging · Ensemble learning · Reservoir Computing · Recurrent Neural Network · Time series problems





Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  1. IT4Innovations, VŠB–Technical University of Ostrava, Ostrava, Czech Republic
  2. Facultad de Ciencias Económicas y de Administración, Universidad de la República, Montevideo, Uruguay
