Integration of Multiple Sound Source Localization Results for Speaker Identification in Multiparty Dialogue System

  • Taichi Nakashima
  • Kazunori Komatani
  • Satoshi Sato
Conference paper

DOI: 10.1007/978-1-4614-8280-2_14

Cite this paper as:
Nakashima T., Komatani K., Sato S. (2014) Integration of Multiple Sound Source Localization Results for Speaker Identification in Multiparty Dialogue System. In: Mariani J., Rosset S., Garnier-Rizet M., Devillers L. (eds) Natural Interaction with Robots, Knowbots and Smartphones. Springer, New York, NY

Abstract

Humanoid robots need to turn toward human participants when answering their questions in multiparty dialogues. Some participants' positions are difficult for a robot to localize in multiparty situations, especially when the robot can only use its own sensors. We present a method for identifying the speaker more accurately by integrating the sound source localization results obtained from two robots: one that mainly talks with the participants and another that joins the conversation when necessary. We place the robots so that they compensate for each other's localization capabilities and then integrate their two results. Our experimental evaluation revealed that using two robots improved speaker identification compared with using only one. We furthermore implemented our method on humanoid robots and constructed a demo system.
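The abstract does not spell out how the two robots' localization results are combined. Below is a minimal sketch, assuming each robot reports a single azimuth estimate toward the active speaker and that the robots' positions and the participants' seats are known in a shared 2-D frame. The function names (`triangulate`, `identify_speaker`) and the nearest-seat assignment are illustrative assumptions, not the authors' exact formulation.

```python
import math

def triangulate(p1, theta1, p2, theta2):
    """Intersect two bearing rays (robot position, azimuth in radians)
    to estimate the speaker's 2-D position."""
    x1, y1 = p1
    x2, y2 = p2
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 via Cramer's rule.
    denom = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    if abs(denom) < 1e-9:
        return None  # bearings are (nearly) parallel; no reliable intersection
    t1 = ((x2 - x1) * (-d2[1]) - (y2 - y1) * (-d2[0])) / denom
    return (x1 + t1 * d1[0], y1 + t1 * d1[1])

def identify_speaker(estimate, seats):
    """Assign the triangulated position to the closest known participant seat."""
    if estimate is None:
        return None
    return min(seats, key=lambda name: math.dist(estimate, seats[name]))

# Hypothetical example: two robots at known positions, each reporting an azimuth.
robot_a, robot_b = (0.0, 0.0), (2.0, 0.0)
seats = {"participant_1": (1.0, 1.5), "participant_2": (2.5, 1.0)}
position = triangulate(robot_a, math.radians(56), robot_b, math.radians(124))
print(identify_speaker(position, seats))  # -> "participant_1"
```

Placing the two robots so their bearing rays cross at a wide angle keeps `denom` well away from zero, which is one way to read the paper's point that the robots are positioned to compensate for each other's localization capabilities.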

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  • Taichi Nakashima 1
  • Kazunori Komatani 1
  • Satoshi Sato 1
  1. Graduate School of Engineering, Nagoya University, Nagoya, Japan