
Covariance Features Improve Low-Resource Reservoir Computing Performance in Multivariate Time Series Classification

  • Conference paper
Computational Vision and Bio-Inspired Computing

Abstract

Biological systems exhibit tremendous performance and flexibility in learning from a broad diversity of inputs, which are in general time series. Inspired by their biological counterparts, artificial neural networks used in machine learning for classification aim to extract activity patterns from input signals and transform them into stereotypical output patterns that represent categories. Most of them rely on fixed target values in the output to represent probabilities or implement winner-take-all decisions, which in the case of time series correspond to first-order statistics. In other words, such classification of time series rests on transforming input high-order statistics into output first-order statistics. By contrast, the transformation of input statistics into second- or higher-order output statistics remains largely unexplored. Here, we consider a computational scheme based on a reservoir that maps information ingrained in the statistics of input multivariate time series to the second-order statistics of its own activity, before feeding it to a standard classifier (logistic regression). We compare this covariance decoding with the “classical” mean decoding applied to the reservoir for classification on both synthetic and real datasets of multivariate time series. We show that covariance decoding can extract a broader diversity of second-order statistics from the input signals, yielding higher performance with smaller resources (i.e., reservoir size). Our results pave the way for the characterization of elaborate input-output mappings between statistical orders to efficiently represent and process input signals with complex spatio-temporal structures.
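The scheme the abstract describes can be sketched in a few lines: a fixed random echo-state reservoir is driven by a multivariate time series, and either the temporal mean of the reservoir states (first-order statistics, "mean decoding") or the vectorised covariance of those states (second-order statistics, "covariance decoding") is fed to a logistic-regression classifier. The sketch below is a minimal illustration, not the authors' exact pipeline; the reservoir size, spectral radius, and the two-class toy generator (classes differing only in input channel correlations) are assumptions for demonstration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def reservoir_states(x, n_res=30, spectral_radius=0.9, seed=0):
    """Drive a random echo-state reservoir with a multivariate time
    series x of shape (T, n_in); return its states, shape (T, n_res)."""
    r = np.random.default_rng(seed)       # same seed -> same (fixed) reservoir
    W_in = r.normal(0.0, 1.0, (x.shape[1], n_res))
    W = r.normal(0.0, 1.0, (n_res, n_res))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    h, states = np.zeros(n_res), []
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ W_in + h @ W)  # leaky-free ESN update
        states.append(h.copy())
    return np.array(states)

def mean_features(states):
    return states.mean(axis=0)            # first-order: temporal mean

def cov_features(states):
    C = np.cov(states.T)                  # second-order: state covariance
    return C[np.triu_indices_from(C)]     # vectorised upper triangle

def sample(label, T=200, n_in=5):
    """Toy classes: identical means, different input correlations."""
    x = rng.normal(size=(T, n_in))
    if label == 1:                        # correlate channels 0 and 1
        x[:, 1] = 0.8 * x[:, 0] + 0.2 * x[:, 1]
    return x

X_cov, X_mean, y = [], [], []
for i in range(60):
    lab = i % 2
    S = reservoir_states(sample(lab))
    X_cov.append(cov_features(S))
    X_mean.append(mean_features(S))
    y.append(lab)

clf_cov = LogisticRegression(max_iter=1000).fit(X_cov[:40], y[:40])
clf_mean = LogisticRegression(max_iter=1000).fit(X_mean[:40], y[:40])
acc_cov = clf_cov.score(X_cov[40:], y[40:])
acc_mean = clf_mean.score(X_mean[40:], y[40:])
print(f"covariance decoding: {acc_cov:.2f}, mean decoding: {acc_mean:.2f}")
```

Because the two classes differ only in their correlation structure, the mean features carry little class information here, which is the kind of regime where the paper argues covariance decoding pays off.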



Acknowledgements

S.L. is supported by a FI fellowship from the Agència de Gestió d’Ajuts Universitaris i de Recerca (AGAUR, 2021 FI-B2 00121). R.M.B is supported by the Howard Hughes Medical Institute (HHMI, ref 55008742), MINECO (Spain; BFU2017-85936-P) and ICREA Academia (2016). M.G acknowledges funding from the German Excellence Strategy of the Federal Government and the Länder (G:(DE-82)EXS-PF-JARA-SDS005) and the European Union’s Horizon 2020 research and innovation program under grant agreement No. 785907 (Human Brain Project SGA2).

Author information


Correspondence to Sofía Lawrie.



Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Lawrie, S., Moreno-Bote, R., Gilson, M. (2022). Covariance Features Improve Low-Resource Reservoir Computing Performance in Multivariate Time Series Classification. In: Smys, S., Tavares, J.M.R.S., Balas, V.E. (eds) Computational Vision and Bio-Inspired Computing. Advances in Intelligent Systems and Computing, vol 1420. Springer, Singapore. https://doi.org/10.1007/978-981-16-9573-5_42
