
A Study on Demonstrative Words Extraction in Instructor Utterance on Communication Support for Hearing Impaired Persons

Conference paper
Computers Helping People with Special Needs (ICCHP 2008)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 5105)


Abstract

A remote transcription system is one type of communication support for hearing-impaired students in lectures. Because the system takes several seconds from the instructor's utterance to the display of the summarized text, hearing-impaired students cannot associate demonstrative words with their referents. In this paper, we analyze how demonstrative words appear in instructors' utterances. About 75% of demonstrative words follow a pause and are uttered within 500 ms after it. We propose a method for extracting demonstrative words based on acoustic features and conduct experiments. Experimental results show that the recall rate is more than 80%, but the relevance rate (precision) is less than 20% in the cross-validation test.
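The abstract's timing cue suggests a simple candidate detector: flag the 500 ms window that follows each sufficiently long silence. The Python sketch below illustrates only that heuristic, not the authors' full acoustic-feature method; the frame length, minimum pause length, and silence threshold are assumed values chosen for illustration.

```python
# Minimal sketch of the pause-based cue reported in the abstract: ~75% of
# demonstrative words are uttered within 500 ms after a pause. This is an
# illustrative heuristic, NOT the authors' actual extraction method; the
# frame length, minimum pause length, and silence threshold are assumptions.
import numpy as np

FRAME_MS = 10                 # analysis frame length (assumed)
PAUSE_MIN_MS = 200            # minimum silence counted as a pause (assumed)
WINDOW_AFTER_PAUSE_MS = 500   # candidate window taken from the abstract


def frame_energy(signal: np.ndarray, sr: int, frame_ms: int = FRAME_MS) -> np.ndarray:
    """Short-time energy over non-overlapping frames."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    return (frames.astype(np.float64) ** 2).mean(axis=1)


def candidate_regions(signal: np.ndarray, sr: int,
                      silence_ratio: float = 0.05) -> list[tuple[float, float]]:
    """Return (start_s, end_s) spans covering the 500 ms after each pause."""
    energy = frame_energy(signal, sr)
    threshold = silence_ratio * energy.max()     # crude relative silence threshold
    silent = energy < threshold

    regions = []
    run_start = None
    for i, is_silent in enumerate(silent):
        if is_silent and run_start is None:
            run_start = i                        # a silence run begins
        elif not is_silent and run_start is not None:
            run_ms = (i - run_start) * FRAME_MS
            if run_ms >= PAUSE_MIN_MS:           # long enough to count as a pause
                start_s = i * FRAME_MS / 1000.0  # speech resumes here
                regions.append((start_s, start_s + WINDOW_AFTER_PAUSE_MS / 1000.0))
            run_start = None
    return regions
```

For a recording loaded as a mono NumPy array at, say, 16 kHz, `candidate_regions(signal, 16000)` would return time spans (in seconds) that a downstream acoustic-feature classifier of the kind proposed in the paper could then examine for demonstrative words.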





Editor information

Klaus Miesenberger, Joachim Klaus, Wolfgang Zagler, Arthur Karshmer


Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Ito, A., Saito, K., Takeuchi, Y., Ohnishi, N., Iizuka, S., Nakajima, S. (2008). A Study on Demonstrative Words Extraction in Instructor Utterance on Communication Support for Hearing Impaired Persons. In: Miesenberger, K., Klaus, J., Zagler, W., Karshmer, A. (eds) Computers Helping People with Special Needs. ICCHP 2008. Lecture Notes in Computer Science, vol 5105. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-70540-6_90


  • DOI: https://doi.org/10.1007/978-3-540-70540-6_90

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-70539-0

  • Online ISBN: 978-3-540-70540-6

  • eBook Packages: Computer Science, Computer Science (R0)
