In recent years, video has become an increasingly familiar multimedia format for everyday users. In particular, the advent of Web 2.0 and the spread of video-sharing services over the Web have led to an explosion of online video content. The capability to provide broader support for accessing and exploring video content, and more generally other multimedia formats such as images and documents, is becoming increasingly important. In this context, the value of semantically structured data and metadata is recognized as a key factor both in improving search efficiency and in guaranteeing data interoperability. The latter aspect is critical for connecting heterogeneous content coming from a variety of data sources. At the same time, the annotation of video resources has been increasingly understood as an enabling factor for deep analysis of content and for the collaborative study of online digital objects. However, since existing annotation tools provide poor support for semantically structured content, or in some cases express semantics in proprietary, non-interoperable formats, the knowledge that users build by carefully annotating content hardly crosses the boundaries of a single system and often cannot be reused by different communities (e.g., to classify content or to discover new relations among resources). In this paper, a novel Semantic Web-based annotation system is presented that enables user annotations to form semantically structured knowledge at different levels of granularity and complexity. Annotations can be reused by external applications and combined with Web of Data sources to enable “serendipity”: the reuse of data produced for a specific task (annotation) by different people, in contexts different from the one in which the data originated. The main ideas behind the approach are to build on ontologies and to support linking, at the data level, to precise thesauri and vocabularies, as well as to the Linked Open Data cloud.
By describing the software model, developed in the context of the SemLib EU project, and by providing an implementation of an online video annotation tool, this paper aims to demonstrate how such technologies can enable a scenario in which user annotations are created while browsing the Web, naturally shared among users, stored in a machine-readable format, and then possibly recombined with external data and ontologies to enhance the end-user experience.
Keywords: Semantic Web · Video annotation · Ontologies · Knowledge representation · Media fragments · Information sharing