Sound Source Localization with Non-calibrated Microphones

  • Tomoyuki Kobayashi
  • Yoshinari Kameda
  • Yuichi Ohta
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4903)

Abstract

We propose a new method for localizing a sound source in a known space with non-calibrated microphones. Our method does not need the accurate microphone positions that are required by traditional sound source localization. It can accommodate a wide variety of microphone layouts in a large space because no calibration step is needed when the microphones are installed. After a number of sampling points have been stored in a database, our system can estimate the sampling point nearest to a sound by utilizing the set of time delays between microphone pairs. We conducted a simulation experiment to determine the microphone layout that maximizes localization accuracy. We also conducted a preliminary experiment in a real environment and obtained promising results.
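The matching step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a hypothetical database mapping each stored sampling point to its vector of pairwise time delays, and classifies a new sound by nearest neighbor over those delay vectors. The microphone and point coordinates below are invented for the example.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, assumed constant

def tdoa_vector(arrival_times):
    """Vector of time delays for every ordered microphone pair (i < j)."""
    n = len(arrival_times)
    return np.array([arrival_times[i] - arrival_times[j]
                     for i in range(n) for j in range(i + 1, n)])

# Hypothetical setup: 4 microphones and 3 stored sampling points (meters).
mics = np.array([[0, 0, 0], [5, 0, 0], [0, 5, 0], [5, 5, 2]], dtype=float)
sampling_points = {"A": np.array([1.0, 1.0, 1.0]),
                   "B": np.array([4.0, 4.0, 1.0]),
                   "C": np.array([2.0, 4.0, 1.0])}

# Database: sampling point label -> time-delay vector recorded at that point.
database = {label: tdoa_vector(np.linalg.norm(mics - p, axis=1) / SPEED_OF_SOUND)
            for label, p in sampling_points.items()}

def nearest_sampling_point(observed_arrival_times):
    """Return the stored point whose delay vector best matches the observation."""
    q = tdoa_vector(observed_arrival_times)
    return min(database, key=lambda label: np.linalg.norm(database[label] - q))

# A sound emitted near stored point B should be matched to B.
obs = np.linalg.norm(mics - np.array([3.9, 4.1, 1.0]), axis=1) / SPEED_OF_SOUND
print(nearest_sampling_point(obs))
```

Because only relative delays between microphone pairs are compared against stored patterns, no microphone coordinates are needed at query time, which is what removes the calibration requirement.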

Keywords

non-calibrated microphones · sound source localization · time-delay



Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Tomoyuki Kobayashi¹
  • Yoshinari Kameda¹
  • Yuichi Ohta¹

  1. Graduate School of Systems and Information Engineering, University of Tsukuba, 1-1-1 Tennoudai, Tsukuba, Ibaraki, 305-8573, Japan
