
Multimodal Analytics Dashboard for Story Detection and Visualization

Abstract

The InVID Multimodal Analytics Dashboard is a visual content exploration and retrieval system for analyzing user-generated video content from social media platforms including YouTube, Twitter, Facebook, Reddit, Vimeo, and Dailymotion. It applies automated knowledge extraction methods to each collected posting and stores the extracted metadata for later analyses. The dashboard's real-time synchronization mechanisms help to track information flows within the resulting information space. Cluster analysis groups related postings and detects evolving stories, which can then be analyzed along multiple semantic dimensions such as sentiment and geographic location. Data journalists can not only visualize the latest trends across communication channels, but also identify opinion leaders (persons or organizations) and the relations among them.
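To illustrate the story detection step, the minimal sketch below groups a handful of postings into story-like clusters via TF-IDF similarity and agglomerative clustering, then aggregates one semantic dimension (sentiment) per cluster. It assumes scikit-learn (version 1.2 or later for the `metric` parameter); the posting texts, sentiment scores, and distance threshold are illustrative assumptions and do not reflect the actual InVID pipeline.

```python
from collections import defaultdict

from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances

# Hypothetical postings with pre-computed sentiment scores; the dashboard
# extracts such metadata automatically from collected social media content.
postings = [
    {"text": "Flooding hits the city centre after heavy rain", "sentiment": -0.6},
    {"text": "Heavy rain causes severe flooding downtown", "sentiment": -0.5},
    {"text": "Local team wins the championship final", "sentiment": 0.8},
    {"text": "Fans celebrate the championship victory", "sentiment": 0.7},
]

# Represent each posting as a TF-IDF vector of its text.
vectors = TfidfVectorizer(stop_words="english").fit_transform(
    [p["text"] for p in postings]
)

# Group postings whose pairwise cosine distance falls below a threshold;
# the threshold of 0.8 is an illustrative assumption, not a tuned value.
distances = cosine_distances(vectors)
labels = AgglomerativeClustering(
    n_clusters=None, distance_threshold=0.8,
    metric="precomputed", linkage="average",
).fit_predict(distances)

# Aggregate postings per detected "story" and report average sentiment.
stories = defaultdict(list)
for posting, label in zip(postings, labels):
    stories[label].append(posting)

for label, members in sorted(stories.items()):
    avg_sentiment = sum(m["sentiment"] for m in members) / len(members)
    print(f"Story {label}: {len(members)} postings, "
          f"average sentiment {avg_sentiment:+.2f}")
```

In the dashboard itself, such clusters are recomputed over continuously synchronized data streams and enriched with further dimensions such as geographic location and the relations among opinion leaders.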

Acknowledgements

The multimodal analytics dashboard presented in this chapter has received funding from the European Union’s Horizon 2020 Research and Innovation Programme under Grant Agreement No 687786. The authors would like to thank the researchers and software engineers of webLyzard technology gmbh, MODUL Technology GmbH, the Department of New Media Technology at MODUL University Vienna, and the Swiss Institute for Information Science at HTW Chur for their continued efforts to improve and extend the platform, as well as for their feedback on earlier versions of this article.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. webLyzard technology gmbh, Vienna, Austria
  2. MODUL Technology GmbH, Vienna, Austria
