Extraction of Key Segments from Day-Long Sound Data

  • Akinori Kasai
  • Sunao Hara
  • Masanobu Abe
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 528)

Abstract

We propose a method to extract particular segments from sound recorded over the course of a day, providing sound segments that can be used to facilitate memory. To identify the important parts of the sound data, the proposed method exploits human behavior estimated through a multisensing approach. To evaluate its performance, we conducted experiments using sound, acceleration, and global positioning system (GPS) data collected by five participants over approximately two weeks. The experimental results are summarized as follows: (1) a variety of sounds can be extracted by dividing the day into scenes using the acceleration data; (2) sound recorded in unusual places is preferred over sound recorded in usual places; and (3) speech is preferred over nonspeech sound.
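
The abstract mentions dividing the day into scenes using acceleration data but does not give the algorithm here. As a rough illustration of one plausible realization, the sketch below splits a day-long accelerometer stream into "still" and "active" scenes by thresholding the per-window variance of movement magnitude. The function name, sampling rate, window length, and threshold are assumptions for illustration only, not the authors' actual method.

```python
import numpy as np

def segment_scenes(acc, fs=50.0, window_s=5.0, threshold=0.1):
    """Split an accelerometer stream into coarse 'scenes'.

    acc: (N, 3) array of accelerometer samples (x, y, z), e.g. in g.
    fs: sampling rate in Hz (assumed; not stated in the abstract).
    Returns a list of (start_sample, end_sample, label) tuples,
    where label is 'active' or 'still'.
    """
    win = int(window_s * fs)
    mag = np.linalg.norm(acc, axis=1)   # overall movement magnitude
    n_win = len(mag) // win
    if n_win == 0:
        return []

    # Standard deviation of magnitude per window as an activity measure.
    activity = mag[: n_win * win].reshape(n_win, win).std(axis=1)
    labels = activity > threshold       # True = moving

    # Merge consecutive windows with the same label into scenes.
    scenes, start = [], 0
    for i in range(1, n_win):
        if labels[i] != labels[start]:
            scenes.append((start * win, i * win,
                           "active" if labels[start] else "still"))
            start = i
    scenes.append((start * win, n_win * win,
                   "active" if labels[start] else "still"))
    return scenes
```

Sound segments could then be sampled per scene rather than uniformly over the day, which is one way to obtain the variety of sounds the abstract reports.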

Keywords

Life-log · Multisensing · Sound · Acceleration · GPS · Syllable count

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Graduate School of Natural Science and Technology, Okayama University, Okayama, Japan