Graph-Based Format for Modeling Multimodal Annotations in Virtual Reality by Means of VAnnotatoR

  • Giuseppe Abrami (email author)
  • Alexander Mehler
  • Christian Spiekermann
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 1088)


Projects in Natural Language Processing (NLP), the Digital Humanities (DH) and related disciplines that apply machine learning to complex relationships between data objects need annotations to obtain sufficiently rich training and test sets. The visualization of such data sets and the underlying Human Computer Interaction (HCI) are perennial problems of computer science. Despite some success stories, the clarity of information presentation and the flexibility of the annotation process tend to decrease as the complexity of the underlying data objects and their relationships increases. To address this problem, VAnnotatoR was developed: a flexible annotation tool that uses 3D glasses and augmented reality devices to enable annotation and visualization in three-dimensional virtual environments. In addition, multimodal objects are annotated and visualized within a graph-based approach.


Keywords: Annotation · Virtual Reality · VAnnotatoR · Image analysis · Digital Humanities
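The graph-based approach outlined in the abstract can be illustrated with a minimal sketch: multimodal objects (text, images, 3D objects) become typed nodes, and annotated relationships between them become labeled edges. The class and method names below are hypothetical illustrations, not part of VAnnotatoR's actual API or data model.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a graph-based multimodal annotation model;
# VAnnotatoR's real data structures are not reproduced here.
@dataclass
class Node:
    node_id: str
    modality: str   # e.g. "text", "image", "3d-object"
    payload: str    # reference to the annotated resource

@dataclass
class AnnotationGraph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # (source, relation, target)

    def add_node(self, node: Node) -> None:
        self.nodes[node.node_id] = node

    def link(self, source: str, relation: str, target: str) -> None:
        # A typed edge models an annotated relationship
        # between two multimodal objects.
        self.edges.append((source, relation, target))

    def neighbours(self, node_id: str) -> list:
        return [(rel, tgt) for src, rel, tgt in self.edges if src == node_id]

# Example: linking a text passage to the illustration it describes
g = AnnotationGraph()
g.add_node(Node("t1", "text", "Faust, scene with Gretchen"))
g.add_node(Node("i1", "image", "illustration.png"))
g.link("t1", "depicts", "i1")
print(g.neighbours("t1"))  # [('depicts', 'i1')]
```

In such a model, visualization in a 3D environment reduces to laying out the nodes spatially and rendering the typed edges between them.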



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Giuseppe Abrami (1, email author)
  • Alexander Mehler (1)
  • Christian Spiekermann (1)
  1. Texttechnology Lab, Goethe University Frankfurt, Frankfurt, Germany
