Synote: Accessible and Assistive Technology Enhancing Learning for All Students

  • Mike Wald
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6180)

Abstract

Although manual transcription and captioning can increase the accessibility of multimedia for deaf students, they are rarely provided in educational contexts in the UK because of the cost and the shortage of highly skilled, trained stenographers. Speech recognition has the potential to reduce the cost and increase the availability of captioning if it can satisfy accuracy and readability requirements. This paper discusses how Synote, a web application for annotating and captioning multimedia, can enhance learning for all students, and how improving the accuracy and readability of automatic captioning can encourage its widespread adoption and so greatly benefit disabled students.

Keywords

Speech Recognition, Automatic Speech Recognition, Speech Recognition System, Word Error Rate, Deaf People
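The keywords above include Word Error Rate (WER), the standard metric for the speech recognition accuracy the abstract says captioning must satisfy. As an illustrative sketch (not code from the paper; the function name and example strings are ours), WER is computed as a word-level Levenshtein edit distance normalised by the reference length:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count,
    computed via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution or match
            )
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word in a four-word reference gives WER = 0.25.
print(word_error_rate("the cat sat down", "the cat sat there"))
```

Note that WER alone does not capture the readability issues (punctuation, segmentation, speaker labelling) the paper argues also matter for real-time captioning.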



Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Mike Wald
  1. School of Electronics and Computer Science, University of Southampton, United Kingdom
