Environmental Sound Recognition for Robot Audition Using Matching-Pursuit

  • Nobuhide Yamakawa
  • Toru Takahashi
  • Tetsuro Kitahara
  • Tetsuya Ogata
  • Hiroshi G. Okuno
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6704)


Abstract

Our goal is a robot audition system capable of recognizing multiple environmental sounds and making use of them in human-robot interaction. The main problems in environmental sound recognition for robot audition are: (1) recognition under a large amount of background noise, including the noise from the robot itself, and (2) the need for feature extraction that is robust against the spectral distortion caused by separating multiple sound sources. This paper presents the recognition of two environmental sounds occurring simultaneously, using matching pursuit (MP) with the Gabor wavelet, which extracts salient audio features from a signal. The two environmental sounds arrive from different directions; they are localized by multiple signal classification (MUSIC) and, using this geometric information, separated by geometric source separation with the aid of measured head-related transfer functions. The experimental results show the noise robustness of MP, although performance depends on the properties of the sound sources.
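The matching-pursuit decomposition mentioned in the abstract can be sketched as follows: MP greedily projects the residual signal onto a dictionary of time-frequency atoms and subtracts the best-matching atom at each step. This is a minimal illustrative sketch, not the paper's implementation; the Gabor atom parameterization, grid of centers/scales/frequencies, and function names are assumptions for illustration.

```python
import numpy as np

def gabor_atom(n, center, scale, freq):
    """Real Gabor atom: Gaussian-windowed cosine, normalized to unit L2 norm.
    (Illustrative parameterization; the paper's dictionary may differ.)"""
    t = np.arange(n)
    g = np.exp(-np.pi * ((t - center) / scale) ** 2) \
        * np.cos(2.0 * np.pi * freq * (t - center))
    norm = np.linalg.norm(g)
    return g / norm if norm > 0 else g

def matching_pursuit(signal, dictionary, n_atoms=10):
    """Greedy matching pursuit (Mallat & Zhang style):
    at each iteration, pick the atom with the largest absolute correlation
    with the current residual, record its coefficient, and subtract it.
    `dictionary` holds unit-norm atoms as rows."""
    residual = signal.astype(float).copy()
    decomposition = []
    for _ in range(n_atoms):
        corr = dictionary @ residual            # correlation with every atom
        k = int(np.argmax(np.abs(corr)))        # best-matching atom index
        coef = corr[k]
        residual -= coef * dictionary[k]        # remove its contribution
        decomposition.append((k, coef))         # (atom index, coefficient)
    return decomposition, residual
```

In the recognition setting described above, the selected atoms' parameters (time position, scale, frequency) and coefficients would then serve as the salient features fed to a classifier; here the sketch only shows the decomposition step.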


Keywords: Environmental sound recognition · Matching pursuit · Robot audition · Computational auditory scene analysis




References

  1. Rosenthal, D.F., Okuno, H.G.: Computational Auditory Scene Analysis. L. Erlbaum Associates Inc., Mahwah (1998)
  2. Brown, G., Cooke, M.: Computational auditory scene analysis. Computer Speech and Language 8(4), 297–336 (1994)
  3. Okuno, H.G., Ogata, T., Komatani, K.: Computational Auditory Scene Analysis and Its Application to Robot Audition: Five Years Experience. In: ICKS 2007, pp. 69–76 (2007)
  4. Matsusaka, Y., Tojo, T., Kubota, S., Furukawa, K., Tamiya, D., Hayata, K., Nakano, Y., Kobayashi, T.: Multi-person conversation via multi-modal interface — a robot who communicates with multi-user. In: EUROSPEECH 1999, pp. 1723–1726 (1999)
  5. Nishimura, R., Uchida, T., Lee, A., Saruwatari, H., Shikano, K.: Aska: Receptionist robot with speech dialogue system. In: IROS 2002, pp. 1308–1313 (2002)
  6. Brooks, R., Breazeal, C., Marjanović, M., Scassellati, B., Williamson, M.: The Cog project: Building a humanoid robot. In: Computation for Metaphors, Analogy, and Agents, pp. 52–87 (1999)
  7. Nakadai, K., Takahashi, T., Okuno, H.G., Nakajima, H., Hasegawa, Y., Tsujino, H.: Design and Implementation of Robot Audition System 'HARK': Open Source Software for Listening to Three Simultaneous Speakers. Advanced Robotics 24(5–6), 739–761 (2010)
  8. Ikeda, Y., Jahns, G., Kowalczyk, W., Walter, K.: Acoustic Analysis to Recognize Individuals and Animal Conditions. In: The XIV Memorial CIGR World Congress, vol. 8206 (2000)
  9. Jahns, G.: Call recognition to identify cow conditions – A call-recogniser translating calls to text. Computers and Electronics in Agriculture 62(1), 54–58 (2008)
  10. Eronen, A.J., Peltonen, V.T., Tuomi, J.T., Klapuri, A.P., Fagerlund, S., Sorsa, T., Lorho, G., Huopaniemi, J.: Audio-based context recognition. IEEE TASLP 14(1), 321–329 (2005)
  11. Chu, S., Narayanan, S., Kuo, C.: Environmental sound recognition with time-frequency audio features. IEEE TASLP 17(6), 1142 (2009)
  12. Ntalampiras, S., Potamitis, I., Fakotakis, N.: Sound classification based on temporal feature integration. In: ISCCSP 2010, pp. 1–4 (2010)
  13. Mallat, S.G., Zhang, Z.: Matching pursuits with time-frequency dictionaries. IEEE TSP 41(12), 3397–3415 (1993)
  14. Schmidt, R.: Multiple emitter location and signal parameter estimation. IEEE TAP 34(3), 276–280 (1986)
  15. Parra, L.C., Alvino, C.V.: Geometric source separation: Merging convolutive source separation with geometric beamforming. IEEE TSAP 10(6), 352–362 (2002)
  16. Real World Computing Partnership: RWCP Sound Scene Database in Real Acoustical Environments
  17. Yamakawa, N., Kitahara, T., Takahashi, T., Komatani, K., Ogata, T., Okuno, H.G.: Effects of modelling within- and between-frame temporal variations in power spectra on non-verbal sound recognition. In: INTERSPEECH 2010, pp. 2342–2345 (2010)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Nobuhide Yamakawa (1)
  • Toru Takahashi (1)
  • Tetsuro Kitahara (2)
  • Tetsuya Ogata (1)
  • Hiroshi G. Okuno (1)

  1. Graduate School of Informatics, Kyoto University, Kyoto, Japan
  2. Department of Computer Science and System Analysis, College of Humanities and Sciences, Nihon University, Tokyo, Japan
