RoboASR: A Dynamic Speech Recognition System for Service Robots

  • Abdelaziz A. Abdelhamid
  • Waleed H. Abdulla
  • Bruce A. MacDonald
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7621)


This paper proposes a new method for building dynamic speech decoding graphs for state-based spoken human-robot interaction (HRI). Current robotic speech recognition systems rely on either a finite state grammar (FSG), a statistical N-gram model, or both FSG and N-gram models combined in multi-pass decoding. The proposed method merges the FSG and N-gram models into a single decoding graph by converting the FSG rules into a weighted finite state acceptor (WFSA) and then composing it with a large N-gram-based weighted finite state transducer (WFST). The result is a tiny decoding graph that can be used in single-pass decoding. The proposed method is applied in our speech recognition system (RoboASR) for controlling service robots with limited resources. The proposed approach has three advantages. First, it combines the strengths of both FSG and N-gram decoders by composing them into a single tiny decoding graph. Second, it is robust: the resulting tiny decoding graph is highly accurate because it is tailored to the current HRI state. Third, its response time is fast compared with current state-of-the-art speech recognition systems. The system has a large vocabulary of 64K words with more than 69K entries. Experimental results on 15 interaction scenarios with live speech show an average response time of 0.05% of the utterance length and an average ratio between true and false positives of 89%.
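The core construction described above, composing a per-state FSG (represented as a weighted acceptor) with an N-gram language model (a weighted transducer) in the tropical semiring, can be sketched with a minimal product construction over plain Python dicts. This is an illustrative sketch, not the paper's implementation: the grammar, words, and weights below are invented examples, and for brevity both machines carry single labels (the N-gram is treated as an acceptor, as is standard for language models).

```python
# Minimal sketch of weighted composition in the tropical semiring:
# weights are negative log probabilities, so composition ADDS them
# and only paths legal in BOTH machines survive.
from collections import deque


def compose(wfsa, wfst, start_a, start_t, finals_a, finals_t):
    """Product construction: pair up states whose outgoing arcs share a label."""
    start = (start_a, start_t)
    arcs = {}                      # (qa, qt) -> [(label, weight, next_pair)]
    queue, seen = deque([start]), {start}
    while queue:
        qa, qt = queue.popleft()
        out = arcs.setdefault((qa, qt), [])
        for label_a, w_a, na in wfsa.get(qa, []):
            for label_t, w_t, nt in wfst.get(qt, []):
                if label_a == label_t:          # labels must match to compose
                    nxt = (na, nt)
                    out.append((label_a, w_a + w_t, nxt))
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
    finals = {s for s in seen if s[0] in finals_a and s[1] in finals_t}
    return arcs, start, finals


# Tiny example FSG for one interaction state: "go (left | right)".
# Acceptor arcs carry zero weight; the grammar only constrains word order.
fsg = {0: [("go", 0.0, 1)],
       1: [("left", 0.0, 2), ("right", 0.0, 2)]}

# Unigram-style language-model scores as a one-state machine
# (lower weight = more likely word).
ngram = {0: [("go", 0.5, 0), ("left", 1.2, 0), ("right", 0.9, 0)]}

graph, start, finals = compose(fsg, ngram, 0, 0, {2}, {0})
# `graph` now contains only FSG-legal word sequences, rescored by the
# language model -- the "tiny decoding graph" idea in miniature.
```

Because the FSG for a single interaction state admits few paths, the composed result stays small regardless of the size of the N-gram model, which is what makes single-pass decoding with a large vocabulary feasible on limited hardware.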


Keywords: Human-robot interaction · Automatic speech recognition · Weighted finite state transducers





Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Abdelaziz A. Abdelhamid (1)
  • Waleed H. Abdulla (1)
  • Bruce A. MacDonald (1)

  1. Electrical and Computer Engineering, The University of Auckland, New Zealand
