On Improving the Classification Capability of Reservoir Computing for Arabic Speech Recognition

  • Abdulrahman Alalshekmubarak
  • Leslie S. Smith
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8681)

Abstract

Designing noise-resilient systems is a major challenge in the field of automated speech recognition (ASR). Such systems are crucial for real-world applications, where high levels of noise tend to be present. We introduce a noise-robust system based on Echo State Networks and Extreme Kernel Machines, which we call ESNEKM. To evaluate the performance of the proposed system, we used our recently released public Arabic speech dataset and the well-known spoken Arabic digits (SAD) dataset. The feature extraction methods considered in this study are mel-frequency cepstral coefficients (MFCCs), perceptual linear prediction (PLP), and RASTA-PLP. The extracted features were fed to ESNEKM, and the results were compared with a baseline hidden Markov model (HMM), so that nine models were compared in total. The ESNEKM models outperformed the HMM models across all feature extraction methods, noise levels, and noise types. The best performance was obtained by the model that combined RASTA-PLP with ESNEKM.
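To make the overall pipeline concrete, the sketch below shows one way an echo state network reservoir can be combined with a kernel ridge (kernel-ELM-style) readout for utterance classification. It is a minimal illustration only, not the authors' implementation: the function names, the choice of an RBF kernel, the use of the final reservoir state as the utterance representation, and all parameter values (reservoir size, spectral radius, leak rate, regularisation C, gamma) are assumptions made for this example.

```python
# Hypothetical sketch: ESN reservoir + kernel ridge readout (kernel-ELM style).
# Parameter values and state-aggregation choices are illustrative assumptions.
import numpy as np

def make_reservoir(n_inputs, n_reservoir=500, spectral_radius=0.9,
                   input_scale=0.5, seed=0):
    """Create random input and recurrent weights; rescale the recurrent matrix
    so its spectral radius is below 1 (a common echo-state heuristic)."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-input_scale, input_scale, (n_reservoir, n_inputs))
    W = rng.standard_normal((n_reservoir, n_reservoir))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    return W_in, W

def reservoir_state(W_in, W, frames, leak_rate=0.3):
    """Drive a leaky-integrator ESN with a (T, d) sequence of feature frames
    (e.g. MFCC / PLP / RASTA-PLP vectors) and return the final state as a
    fixed-length representation of the utterance."""
    x = np.zeros(W.shape[0])
    for u in frames:
        x = (1 - leak_rate) * x + leak_rate * np.tanh(W_in @ u + W @ x)
    return x

def rbf_kernel(A, B, gamma=1e-3):
    """Gaussian (RBF) kernel between two sets of reservoir states."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def train_kernel_readout(states, labels, C=10.0):
    """Closed-form kernel ridge readout with one-hot targets, in the style of
    a kernel ELM: alpha = (K + I/C)^{-1} Y."""
    Y = np.eye(int(labels.max()) + 1)[labels]   # one-hot class targets
    K = rbf_kernel(states, states)
    return np.linalg.solve(K + np.eye(len(states)) / C, Y)

def predict(alpha, train_states, test_states):
    """Class scores for new utterances; take argmax for the predicted label."""
    return rbf_kernel(test_states, train_states) @ alpha
```

In this sketch each utterance is summarised by the final reservoir state only; other aggregation schemes (averaging or concatenating states over time) are equally plausible, and the abstract does not specify which readout and aggregation the paper actually uses.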

Keywords

Reservoir computing · Speech recognition · PLP · MFCC · RASTA-PLP · Speech corpus · Arabic language

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Abdulrahman Alalshekmubarak, Dept. of Computing Science, University of Stirling, Stirling, UK
  • Leslie S. Smith, Dept. of Computing Science, University of Stirling, Stirling, UK
