Integration of Multiple Sound Source Localization Results for Speaker Identification in Multiparty Dialogue System
- Cite this paper as: Nakashima T., Komatani K., Sato S. (2014) Integration of Multiple Sound Source Localization Results for Speaker Identification in Multiparty Dialogue System. In: Mariani J., Rosset S., Garnier-Rizet M., Devillers L. (eds) Natural Interaction with Robots, Knowbots and Smartphones. Springer, New York, NY
Humanoid robots need to turn toward human participants when answering their questions in multiparty dialogues. Some participants' positions are difficult for a robot to localize in multiparty situations, especially when the robot can rely only on its own sensors. We present a method for identifying the speaker more accurately by integrating the multiple sound source localization results obtained from two robots: one talking mainly with the participants and the other joining the conversation when necessary. We position the robots so that they compensate for each other's localization capabilities and then integrate their two localization results. Our experimental evaluation showed that using two robots improved speaker identification compared with using only one robot. We furthermore implemented our method on humanoid robots and constructed a demo system.
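The abstract does not detail how the two robots' localization results are combined; one common way to integrate two direction-of-arrival (DOA) estimates from sensors at known positions is bearing triangulation followed by nearest-seat matching. The sketch below is an illustration of that general idea, not the paper's actual method; the function names, the known-seat assumption, and the world-frame bearing convention are all assumptions for this example.

```python
import math

def triangulate(p1, theta1, p2, theta2):
    """Intersect two bearing rays (robot position + world-frame DOA angle,
    in radians) to estimate the speaker's 2-D position.

    Solves p1 + t1*d1 = p2 + t2*d2 as a 2x2 linear system; returns None
    when the bearings are (nearly) parallel and no stable intersection exists.
    """
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    denom = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    if abs(denom) < 1e-9:
        return None  # degenerate geometry: rays almost parallel
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - ry * (-d2[0])) / denom
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

def identify_speaker(estimate, seats):
    """Map a triangulated position to the closest known participant seat.

    `seats` is a hypothetical dict of participant name -> (x, y) position.
    """
    return min(seats, key=lambda name: math.dist(estimate, seats[name]))
```

For example, robots at (0, 0) and (2, 0) that both hear a speaker at (1, 1) report bearings of 45 and 135 degrees, and the intersection of those two rays recovers the speaker's position; a single robot's DOA estimate alone only constrains the speaker to a ray, which is why fusing two vantage points helps.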