Multimodal Analytics Dashboard for Story Detection and Visualization
The InVID Multimodal Analytics Dashboard is a visual content exploration and retrieval system for analyzing user-generated video content from social media platforms including YouTube, Twitter, Facebook, Reddit, Vimeo, and Dailymotion. It uses automated knowledge extraction methods to analyze each collected posting and stores the extracted metadata for later analysis. The dashboard's real-time synchronization mechanisms help to track information flows within the resulting information space. Cluster analysis groups related postings and detects evolving stories, which can then be analyzed along multiple semantic dimensions such as sentiment and geographic location. Data journalists can not only visualize the latest trends across communication channels, but also identify opinion leaders (persons or organizations) as well as the relations among these opinion leaders.
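The story-detection step described above can be illustrated with a minimal sketch. The snippet below is a hypothetical, simplified single-pass clustering over bag-of-words cosine similarity; the dashboard's actual clustering pipeline and similarity measures are not specified here, so all function names and the threshold value are illustrative assumptions.

```python
from collections import Counter
import math

def vectorize(text):
    # Simple bag-of-words term-frequency vector (illustrative only).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def cluster_postings(postings, threshold=0.3):
    # Greedy single-pass clustering: assign each posting to the first
    # "story" whose first member is similar enough, else start a new story.
    stories = []  # each story is a list of (index, vector) pairs
    for i, text in enumerate(postings):
        vec = vectorize(text)
        for story in stories:
            if cosine(vec, story[0][1]) >= threshold:
                story.append((i, vec))
                break
        else:
            stories.append([(i, vec)])
    return [[i for i, _ in story] for story in stories]

postings = [
    "flooding hits the city center after heavy rain",
    "heavy rain causes flooding in the city",
    "election results announced tonight",
]
print(cluster_postings(postings))  # the two flood postings form one story
```

A production system would instead use richer features (entities, visual concepts, timestamps) and an incremental algorithm that supports the real-time synchronization described above, but the grouping principle is the same.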
The multimodal analytics dashboard presented in this chapter has received funding from the European Union’s Horizon 2020 Research and Innovation Programme under Grant Agreement No 687786. The authors would like to thank the researchers and software engineers of webLyzard technology gmbh, MODUL Technology GmbH, the Department of New Media Technology at MODUL University Vienna, and the Swiss Institute for Information Science at HTW Chur for their continued efforts to improve and extend the platform, as well as for their feedback on earlier versions of this article.