EnACT: A Software Tool for Creating Animated Text Captions

  • Quoc V. Vy
  • Jorge A. Mori
  • David W. Fourney
  • Deborah I. Fels
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5105)

Abstract

Music in captioning is often represented by only its title and/or a music note. This representation provides little or no information about the intended effect or emotion of the music. In this paper, we present a software tool that enables users to mark emotions in a script or lyrics and then renders those marks as animated text for display as captions. A pilot study was conducted to collect initial responses to, preferences for, and understanding of the animated lyrics of one song by a deaf and hard-of-hearing audience. Participants were able to identify the animated lyrics as belonging to a song and found that the animations helped them understand the portrayed emotions. They also identified the shaking style of animation, used to portray fear, as the least preferable.
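
To make the mark-then-render workflow concrete, the sketch below shows one way emotion marks on lyric lines could be mapped to kinetic-text parameters. The markup fields, the EMOTION_STYLES table, and the render_caption function are illustrative assumptions for this sketch, not the actual EnACT file format or code; only the underlying idea of tagging lyrics with emotions and rendering them as animated text (for example, a shaking motion for fear) comes from the paper.

```python
# Hypothetical sketch of the mark-then-render idea described in the abstract.
# The emotion set, style table, and parameter names are illustrative assumptions,
# not the actual EnACT markup or API.

from dataclasses import dataclass


@dataclass
class MarkedLine:
    """One lyric or script line plus the emotion mark assigned by the user."""
    text: str
    emotion: str       # e.g. "happy", "sad", "angry", "fear"
    intensity: float   # 0.0 (subtle) .. 1.0 (strong)


# Illustrative mapping from an emotion mark to kinetic-text parameters.
EMOTION_STYLES = {
    "happy": {"motion": "bounce",     "scale": 1.2, "color": "#FFD700"},
    "sad":   {"motion": "drift_down", "scale": 0.9, "color": "#4682B4"},
    "angry": {"motion": "pulse",      "scale": 1.4, "color": "#B22222"},
    "fear":  {"motion": "shake",      "scale": 1.0, "color": "#9370DB"},
}


def render_caption(line: MarkedLine) -> dict:
    """Turn one marked-up line into animation parameters for the caption display."""
    style = EMOTION_STYLES.get(
        line.emotion, {"motion": "none", "scale": 1.0, "color": "#FFFFFF"}
    )
    return {
        "text": line.text,
        "motion": style["motion"],
        # Scale the motion amplitude by the intensity of the emotion mark.
        "amplitude": round(style["scale"] * line.intensity, 2),
        "color": style["color"],
    }


if __name__ == "__main__":
    lyric = MarkedLine(text="I hear footsteps in the dark", emotion="fear", intensity=0.8)
    print(render_caption(lyric))
    # e.g. {'text': ..., 'motion': 'shake', 'amplitude': 0.8, 'color': '#9370DB'}
```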

Keywords

Music visualization · Kinetic text · Animation

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Quoc V. Vy (1)
  • Jorge A. Mori (1)
  • David W. Fourney (1)
  • Deborah I. Fels (1)

  1. Ryerson University, Toronto, Canada
