Cognitive Computation, Volume 4, Issue 4, pp 497–514

A Collaborative Video Annotation System Based on Semantic Web Technologies


Abstract

In recent years, video has become an increasingly familiar multimedia format for ordinary users. In particular, the advent of Web 2.0 and the spread of video-sharing services over the Web have led to an explosion of online video content. The capability to provide broader support in accessing and exploring video content, and more generally other kinds of multimedia content such as images and documents, is becoming more and more important. In this context, the value of semantically structured data and metadata is recognized as a key factor both to improve search efficiency and to guarantee data interoperability. This latter aspect is critical for connecting different, heterogeneous content coming from a variety of data sources. At the same time, the annotation of video resources has been increasingly understood as a means to enable deep analysis of content and collaborative study of online digital objects. However, existing annotation tools provide poor support for semantically structured content or, in some cases, express semantics in proprietary, non-interoperable formats; as a result, the knowledge that users build by carefully annotating content rarely crosses the boundaries of a single system and often cannot be reused by different communities (e.g., to classify content or to discover new relations among resources). In this paper, a novel Semantic Web-based annotation system is presented that enables user annotations to form semantically structured knowledge at different levels of granularity and complexity. Annotations can be reused by external applications and mixed with Web of Data sources to enable “serendipity”: the reuse of data produced for a specific task (annotation) by different people and in contexts different from the one in which the data originated. The main ideas behind the approach are to build on ontologies and to support linking, at the data level, to precise thesauri and vocabularies, as well as to the Linked Open Data cloud. By describing the software model, developed in the context of the SemLib EU project, and by presenting an implementation of an online video annotation tool, this paper aims to demonstrate how such technologies can enable a scenario in which user annotations are created while browsing the Web, naturally shared among users, stored in machine-readable format, and then possibly recombined with external data and ontologies to enhance the end-user experience.
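
To make the idea of machine-readable, linkable annotations more concrete, the sketch below shows one possible way to express a video-fragment annotation as RDF using Python's rdflib library. It is an illustrative assumption, not the SemLib data model: the annotation, video, and user URIs are hypothetical, the video fragment is addressed with the W3C Media Fragments URI temporal syntax (#t=start,end), the annotation body is a DBpedia resource from the Linked Open Data cloud, and the properties are borrowed from the W3C Open Annotation vocabulary.

    # Illustrative sketch (assumed vocabulary and URIs), not the SemLib data model.
    from rdflib import Graph, Namespace, URIRef
    from rdflib.namespace import RDF, DCTERMS

    OA = Namespace("http://www.w3.org/ns/oa#")  # Open Annotation vocabulary (assumed here)

    g = Graph()
    g.bind("oa", OA)
    g.bind("dcterms", DCTERMS)

    # Hypothetical URIs for the annotation, the annotated video fragment, and the user.
    annotation = URIRef("http://example.org/annotations/42")
    # Media Fragments URI temporal syntax: seconds 95 to 128 of the video.
    fragment = URIRef("http://example.org/videos/lecture.mp4#t=95,128")
    user = URIRef("http://example.org/users/alice")
    # A Linked Open Data resource used as the semantic body of the annotation.
    concept = URIRef("http://dbpedia.org/resource/Semantic_Web")

    g.add((annotation, RDF.type, OA.Annotation))
    g.add((annotation, OA.hasTarget, fragment))
    g.add((annotation, OA.hasBody, concept))
    g.add((annotation, DCTERMS.creator, user))

    # Serializing to Turtle yields a machine-readable record that other
    # applications can consume, query with SPARQL, or merge with further data.
    print(g.serialize(format="turtle"))

Because both the fragment and the body are plain URIs, triples of this kind can later be combined with other RDF sources (for example, DBpedia descriptions) without any format conversion, which is the kind of reuse and "serendipity" the abstract refers to.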

Keywords

Semantic Web, Video annotation, Ontologies, Knowledge representation, Media fragment, Information sharing

Copyright information

© Springer Science+Business Media, LLC 2012

Authors and Affiliations

  • Marco Grassi (1)
  • Christian Morbidoni (1)
  • Michele Nucci (1)

  1. Semedia (Semantic Web and Multimedia), Polytechnic University of Marche, Ancona, Italy
