
Implementation of Automatic Captioning System to Enhance the Accessibility of Meetings

  • Conference paper

Computers Helping People with Special Needs (ICCHP 2018)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 10896)

Abstract

Among information accessibility tools that help hearing-impaired people follow meetings, expectations are shifting from manual handwritten summaries toward real-time captioning based on speech recognition technology. However, it remains difficult to provide automatic closed captioning stably at a practical level of accuracy across varied speakers and content. We have therefore developed a web-based real-time closed captioning system that is easy to use in face-to-face conferences, lectures, forums, and similar settings, refined through trials and feedback from hearing-impaired employees within the company. In this report, we outline the system and present the results of a simple evaluation conducted both inside and outside the company.
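
This excerpt does not include implementation details of the authors' system, which presumably streams audio to a server-side recognition engine. Purely as an illustrative sketch of the kind of browser-based live-captioning flow the abstract describes, the snippet below uses the Web Speech API available in some browsers; it is not the authors' implementation, and the element id "captions" is a hypothetical placeholder.

```typescript
// Minimal in-browser live-captioning sketch (illustrative only, not the
// authors' system). Relies on the Web Speech API, which some browsers
// expose only under the prefixed name webkitSpeechRecognition.

// Hypothetical DOM element that displays the running captions.
const captionEl = document.getElementById("captions") as HTMLElement;

// Fall back to the prefixed constructor where necessary.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

const recognition = new SpeechRecognitionImpl();
recognition.lang = "ja-JP";         // assumed meeting language
recognition.continuous = true;      // keep listening across utterances
recognition.interimResults = true;  // surface partial hypotheses immediately

let committed = "";  // finalized caption text accumulated so far

recognition.onresult = (event: any) => {
  let interim = "";
  // Each result is either a finalized segment or an interim hypothesis.
  for (let i = event.resultIndex; i < event.results.length; i++) {
    const result = event.results[i];
    if (result.isFinal) {
      committed += result[0].transcript + "\n";
    } else {
      interim += result[0].transcript;
    }
  }
  // Show committed text followed by the current interim hypothesis.
  captionEl.textContent = committed + interim;
};

// Restart automatically so captioning keeps running through a long meeting.
recognition.onend = () => recognition.start();

recognition.start();
```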



Author information

Corresponding author

Correspondence to Kosei Fume.


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper


Cite this paper

Fume, K., Ashikawa, T., Watanabe, N., Fujimura, H. (2018). Implementation of Automatic Captioning System to Enhance the Accessibility of Meetings. In: Miesenberger, K., Kouroupetroglou, G. (eds) Computers Helping People with Special Needs. ICCHP 2018. Lecture Notes in Computer Science, vol 10896. Springer, Cham. https://doi.org/10.1007/978-3-319-94277-3_31


  • DOI: https://doi.org/10.1007/978-3-319-94277-3_31

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-94276-6

  • Online ISBN: 978-3-319-94277-3

  • eBook Packages: Computer Science, Computer Science (R0)
