Web-based platform for a customizable and synchronized presentation of subtitles in single- and multi-screen scenarios

Published in Multimedia Tools and Applications.

Abstract

This paper presents a web-based platform for a customized and synchronized presentation of subtitles in both single- and multi-screen scenarios. The platform enables dynamic user-level customization of subtitles in terms of format (font family, size, color, transparency...) and position, according to the users’ preferences and/or needs. It also allows adjusting the number of subtitle lines to be presented, and clicking on a specific line skips playback to the playout position at which that line begins. Likewise, multiple languages can be presented simultaneously, and a delay offset can be applied to the presentation of subtitles. All these functionalities are also available on companion devices, by associating them with the session on the main screen. This enables the presentation of subtitles in synchronization with the content on the main screen, as well as their independent customization on each device. The platform provides support for different subtitle formats, as well as for HTML5 and YouTube videos. It includes a module to upload videos and their subtitle files, and to manage playlists. Overall, the platform enables personalized and more engaging consumption experiences, contributing to an improved Quality of Experience (QoE). It can additionally provide benefits in a variety of scenarios, such as language learning, and crowded, multicultural or noisy environments. The results of a subjective evaluation study, with the participation of 40 users without accessibility needs, reveal that the platform can provide relevant benefits for the whole spectrum of consumers. In particular, users were very satisfied with the usability, attractiveness, effectiveness and usefulness of all features of the platform.
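Two of the user-level features described above, applying a delay offset to the subtitles and jumping to the playout position of a clicked line, can be sketched in a few lines of JavaScript. This is an illustrative sketch only: the cue shape and function names are assumptions, not the platform's actual API.

```javascript
// Hypothetical sketch of two subtitle features described in the abstract:
// a user-chosen delay offset, and click-to-seek on a subtitle line.
// The cue object shape and function names are assumptions.

function applyDelayOffset(cues, offsetSeconds) {
  // Shift every cue by the user-chosen offset, clamping at zero.
  return cues.map(cue => ({
    ...cue,
    start: Math.max(0, cue.start + offsetSeconds),
    end: Math.max(0, cue.end + offsetSeconds),
  }));
}

function seekTargetForLine(cues, lineIndex) {
  // Clicking a subtitle line returns the playout position of its start.
  const cue = cues[lineIndex];
  return cue ? cue.start : null;
}

const cues = [
  { start: 1.0, end: 3.0, text: "Hello" },
  { start: 3.5, end: 5.0, text: "World" },
];

const delayed = applyDelayOffset(cues, 0.5);
console.log(delayed[0].start);           // 1.5
console.log(seekTargetForLine(cues, 1)); // 3.5
```

In a browser implementation the same arithmetic could be applied to the `startTime`/`endTime` of `VTTCue` objects on an HTML5 `<track>` element; the pure-function form here just shows the logic.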

Demo video: https://goo.gl/TdixNz





Acknowledgments

This work has been funded, partially, by the Spanish Ministry of Economy and Competitiveness, under its R&D&I Support Program, in the project with Ref. TEC2013-45492-R, and by “Fundación Española para la Ciencia y Tecnología (FECYT)”, in the project with Ref. FCT-15-9579. Work by Mario Montagud has been additionally funded by the Spanish Ministry of Science, Innovation and Universities with a Juan de la Cierva – Incorporación grant, with reference IJCI-2017-34611. Finally, the authors would also like to thank our student Juan González for his contribution to the development of the platform.

Author information

Correspondence to Mario Montagud.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Highlights

• Customized and synchronized presentation of subtitles in single- and multi-screen scenarios.

• Dynamic customization in terms of format (font family, size, color, transparency...) and position.

• Simultaneous presentation of multiple lines and of several languages.

• Web-based components: no need for installation and universal support.

• Better language learning, social integration and improved Quality of Experience (QoE).

• Support for different subtitle file formats, as well as for HTML5 and YouTube videos.

• Satisfactory results in terms of performance, usability and applicability.
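The multi-screen synchronization highlighted above, where companion devices join a session on the main screen and receive its playout position, can be sketched as a minimal publish/subscribe session. The in-memory bus below stands in for a real-time channel such as WebSockets; all class and method names are illustrative assumptions, not the platform's actual components.

```javascript
// Minimal sketch of session-based synchronization between a main screen
// and companion devices. An in-memory list of callbacks stands in for a
// real-time channel (e.g. WebSockets); names here are assumptions.

class SyncSession {
  constructor() {
    this.companions = [];
  }

  join(onUpdate) {
    // A companion device associates itself with the session
    // and registers a callback for playout-position updates.
    this.companions.push(onUpdate);
  }

  broadcast(position) {
    // The main screen publishes its current playout position,
    // so every companion can align its subtitle presentation.
    this.companions.forEach(cb => cb(position));
  }
}

const session = new SyncSession();
let companionPosition = 0;

// The companion stays aligned with the main screen while customizing
// subtitle format, position and language independently.
session.join(pos => { companionPosition = pos; });

session.broadcast(12.4);
console.log(companionPosition); // 12.4
```

A production design would additionally compensate for network delay and clock skew between devices; this sketch only shows the session/broadcast structure.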


About this article


Cite this article

Montagud, M., Boronat, F., Pastor, J. et al. Web-based platform for a customizable and synchronized presentation of subtitles in single- and multi-screen scenarios. Multimed Tools Appl 79, 21889–21923 (2020). https://doi.org/10.1007/s11042-020-08955-x

