
Synchronized Subtitles in Live Television Programmes


Part of the Palgrave Studies in Translating and Interpreting book series (PTTI)

Abstract

Demand for multimedia products has increased substantially in recent years, and this demand is being met by prerecorded or live programmes delivered by broadcasters, IPTV or the Internet. At the same time, the number of adults in Europe who have difficulty accessing digital television is expected to grow in the coming years, as highlighted by the DTV4All project (Looms 2009). For this part of the population, subtitles are needed to access the audio content of TV programmes and to ensure broadcasters' compliance with the regulatory standards currently in place worldwide (Romero-Fresco 2011).

Keywords

  • Automatic Speech Recognition
  • Automatic Speech Recognition System
  • Digital Video Broadcasting
  • European Telecommunication Standard Institute
  • Original Audio



References

  • Álvarez, Aitor, Arantza del Pozo and Andoni Arruti. 2010. ‘Apyca: Towards the automatic subtitling of television content in Spanish.’ In Proceedings of the International Multiconference on Computer Science and Information Technology (IMCSIT 2010) (pp. 567–74). Wisła, Poland, http://www.informatik.uni-trier.de/~ley/db/conf/imcsit/imcsit2010.html.

  • AENOR. 2012. Subtitulado para personas sordas y personas con discapacidad auditiva. Madrid: AENOR.

  • Bisani, Maximilian and Hermann Ney. 2005. ‘Open vocabulary speech recognition with flat hybrid models.’ Proceedings of Interspeech 2005: 725–8.

  • Boulianne, Gilles, Maryse Boisvert and Frederic Osterrath. 2008. ‘Real-time speech recognition captioning of events and meetings.’ IEEE Spoken Language Technology Workshop, SLT 2008: 197–200.

  • de Castro, Mercedes, Manuel de Pedro, Belén Ruiz and Javier Jiménez. 2010. Procedimiento y dispositivo para sincronizar subtítulos con audio en subtitulación en directo. Oficina Española de Patentes y Marcas, Patent Id. P201030758.

  • ETSI. 2003. Digital Video Broadcasting (DVB); Specification for conveying ITU-R System B Teletext in DVB bitstreams. European Telecommunications Standards Institute, ETSI EN 300 472 V1.3.1.

  • ETSI. 2006. Digital Video Broadcasting (DVB); Subtitling systems. European Telecommunications Standards Institute, ETSI EN 300 743. V1.3.1.

  • ETSI. 2009. Digital Video Broadcasting (DVB); Specification for Service Information in DVB Systems. European Telecommunications Standards Institute, (DVB-SI) ETSI EN 300 468 V1.11.1.

  • Eugeni, Carlo. 2009. ‘Respeaking the BBC news.’ The Sign Language Translator and Interpreter 3(1): 29–68.

  • Gao, Jie, Qingwei Zhao and Yonghong Yan. 2010. ‘Automatic synchronization of live speech and its transcripts based on a frame-synchronous likelihood ratio test.’ IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2010): 1622–5.

  • García, José E., Alfonso Ortega, Eduardo Lleida, Tomas Lozano, Emiliano Bernues and Daniel Sanchez. 2009. ‘Audio and text synchronization for TV news subtitling based on automatic speech recognition.’ IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB ’09): 1–6.

  • ISO. 2007. Information Technology — Generic Coding of Moving Pictures and Associated Audio Information: Systems. International Organization for Standardization, ISO/IEC 13818-1:2007.

  • Lambourne, Andrew, Jill Hewitt, Caroline Lyon and Sandra Warren. 2004. ‘Speech-based real-time subtitling services.’ International Journal of Speech Technology 7: 269–79.

  • Luyckx, Bieke, Tijs Delbeke, Luuk Van Waes, Mariëlle Leijten and Aline Remael. 2010. Live subtitling with speech recognition: Causes and consequences of text reduction. Artesis VT Working Papers in Translation Studies 2010–2011. Antwerp: Artesis University College Antwerp.

  • Looms, Peter O. 2009. ‘E-inclusiveness and digital television in Europe - A holistic model.’ In C. Stephanidis (ed.) Universal Access in Human-Computer Interaction: Part I (pp. 550–8). Berlin: Springer Verlag.

  • Looms, Peter O. 2010. ‘The production and delivery of access services.’ EBU Technical Review, 2010 Q3.

  • Mason, Andrew and Robert A. Salmon. 2009. ‘Factors Affecting Perception of Audio-video Synchronisation in Television.’ BBC R&D White Paper WHP176.

  • Rander, Annie and Peter O. Looms. 2010. ‘The accessibility of television news with live subtitling on digital television.’ In Proceedings of the 8th International Interactive Conference on Interactive TV & Video (EuroITV ’10) (pp. 155–60). New York: ACM.

  • Reimers, Ulrich H. 2006. ‘DVB — The family of international standards for digital video broadcasting.’ Proceedings of the IEEE 94(1): 173–82.

  • Romero-Fresco, Pablo. 2011. Subtitling through Speech Recognition: Respeaking. Manchester: St Jerome.

  • Ruokolainen, Teemu. 2009. Topic Adaptation for Speech Recognition in Multimodal Environment. MA thesis. Helsinki University of Technology.

  • Siivola, Vesa. 2007. Language Models for Automatic Speech Recognition: Construction and Complexity Control. PhD thesis. Helsinki University of Technology.

  • Wald, Mike, John-Mark Bell, Philip Boulain, Karl Doody and Jim Gerrard. 2007. ‘Correcting automatic speech recognition captioning errors in real time.’ International Journal of Speech Technology 10(1): 1–15.

  • Wald, Mike. 2008. ‘Captioning multiple speakers using speech recognition to assist disabled people.’ Computers Helping People with Special Needs, Lecture Notes in Computer Science 5105: 617–23.


Copyright information

© 2015 Mercedes de Castro, Luis Puente Rodríguez and Belén Ruiz Mezcua

Cite this chapter

de Castro, M., Puente Rodríguez, L., Ruiz Mezcua, B. (2015). Synchronized Subtitles in Live Television Programmes. In: Baños Piñero, R., Díaz Cintas, J. (eds) Audiovisual Translation in a Global Context. Palgrave Studies in Translating and Interpreting. Palgrave Macmillan, London. https://doi.org/10.1057/9781137552891_4
