Integration of Caption Editing System with Presentation Software

  • Kohtaroh Miyamoto
  • Kenichi Arakawa
  • Masakazu Takizawa
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4554)

Abstract

An increasing amount of rich content includes audio and presentation slides. Captioning this content is important to ensure accessibility for hearing-impaired persons and seniors. We first conducted a survey and found that the combination of video with audio, captions, and presentation slides (hereafter "multimedia composite") helps viewers understand the content. Our investigation also shows that the availability of captioning is still very low, so there is a strong need for an effective captioning system. Based on this preliminary survey and investigation, we introduce a new method that integrates caption-editing software with presentation software. We identify three major problems: content layout definition, editing focus linkage, and exporting to speaker notes. This paper shows how our Caption Editing System with Presentation Integration (CESPI) solves these problems. Experiments showed a 37.6% improvement in total editing time.
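The "exporting to speaker notes" step described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the names `Caption`, `Slide`, and `export_to_speaker_notes` are hypothetical, and the sketch assumes each caption carries a start time and that each slide's appearance time is known, so a caption belongs to the slide being shown when it starts.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Caption:
    start: float  # caption start time in seconds
    text: str

@dataclass
class Slide:
    number: int
    start: float  # time (seconds) at which the slide appears
    notes: List[str] = field(default_factory=list)

def export_to_speaker_notes(captions: List[Caption],
                            slides: List[Slide]) -> Dict[int, str]:
    """Assign each caption to the slide on screen at its start time,
    then join the assigned captions into per-slide speaker-note text."""
    ordered_slides = sorted(slides, key=lambda s: s.start)
    for cap in sorted(captions, key=lambda c: c.start):
        current = ordered_slides[0]
        for s in ordered_slides:
            if s.start <= cap.start:
                current = s  # latest slide shown at or before the caption
            else:
                break
        current.notes.append(cap.text)
    return {s.number: " ".join(s.notes) for s in ordered_slides}
```

For example, with slides appearing at 0 s and 20 s, captions starting at 0 s, 12 s, and 30 s would be exported to slide 1, slide 1, and slide 2 respectively.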

Keywords

Accessibility · Captioning · Presentation · Voice Recognition

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Kohtaroh Miyamoto¹
  • Kenichi Arakawa¹
  • Masakazu Takizawa¹

  1. Accessibility Center AP, IBM Japan, 1623-14 Shimotsuruma, Yamato, Kanagawa, Japan