Real-time Sound Source Localization and Separation based on Active Audio-Visual Integration

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2686)

Abstract

Robot audition in the real world must cope with environmental noise, reverberation, and the motor noise caused by the robot’s own movements. This paper presents the active direction-pass filter (ADPF), which separates sounds originating from a specified direction using a pair of microphones. The ADPF is implemented by hierarchical integration of visual and auditory processing with hypothetical reasoning over the interaural phase difference (IPD) and interaural intensity difference (IID) of each subband. In creating hypotheses, the reference IPD and IID values are calculated on demand by auditory epipolar geometry. Since the performance of the ADPF depends on the direction of the sound source, the ADPF controls the robot’s facing direction by motor movements. Human tracking and sound source separation based on the ADPF are implemented on an upper-torso humanoid and run in real time on four PCs connected over Gigabit Ethernet. The signal-to-noise ratio (SNR) of each sound separated by the ADPF from a mixture of two speech signals of equal loudness improves from about 0 dB to about 10 dB.
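The paper itself gives no code; the following is a minimal sketch of a (non-active) direction-pass filter in this spirit, written in Python with NumPy/SciPy. It assumes a simple free-field IPD model (2πf·d·sin θ / c) in place of the auditory epipolar geometry, selects subbands by IPD matching only (the ADPF additionally uses IID hypotheses, e.g. for higher subbands), and the parameter names and values (MIC_DISTANCE, PASS_WIDTH) are illustrative, not taken from the paper.

```python
# Hedged sketch of a direction-pass filter: keep only the subbands whose
# observed interaural phase difference (IPD) matches the IPD predicted for a
# target direction. Free-field IPD model and all constants are assumptions,
# not the authors' auditory-epipolar-geometry formulation.
import numpy as np
from scipy.signal import stft, istft

SPEED_OF_SOUND = 343.0   # m/s
MIC_DISTANCE   = 0.18    # m, assumed baseline between the two microphones
PASS_WIDTH     = 0.35    # rad, assumed tolerance around the expected IPD
FS             = 16000   # Hz

def expected_ipd(freqs_hz, theta_rad):
    """Expected IPD for a source at angle theta under a free-field model."""
    delay = MIC_DISTANCE * np.sin(theta_rad) / SPEED_OF_SOUND
    return 2.0 * np.pi * freqs_hz * delay

def direction_pass_filter(left, right, theta_rad, fs=FS, nperseg=512):
    """Separate the sound arriving from theta_rad using a stereo recording."""
    f, _, L = stft(left, fs=fs, nperseg=nperseg)
    _, _, R = stft(right, fs=fs, nperseg=nperseg)

    # Observed IPD per subband and frame, wrapped to (-pi, pi].
    ipd = np.angle(L * np.conj(R))

    # Hypothesis: IPD predicted for the specified direction.
    ref  = expected_ipd(f, theta_rad)[:, None]
    diff = np.angle(np.exp(1j * (ipd - ref)))   # wrapped difference

    # Pass only the subbands consistent with the hypothesis.
    mask = (np.abs(diff) < PASS_WIDTH).astype(float)

    # Reconstruct the separated signal from the selected subbands.
    _, out = istft(L * mask, fs=fs, nperseg=nperseg)
    return out[: len(left)]
```

As a usage example, `direction_pass_filter(left, right, np.deg2rad(30.0))` would attempt to extract a talker located 30° off the midline. The "active" part of the ADPF, turning the robot so the target stays near the front where interaural cues are most reliable, is outside this sketch.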

Keywords

Linear Discriminant Analysis · Sound Source · Automatic Speech Recognition · Stereo Vision · Microphone Array



Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  1. Graduate School of Informatics, Kyoto University, Kyoto, Japan
  2. Kitano Symbiotic Systems Project, ERATO, Japan Science and Technology Corporation, Japan
