
Virtual Reality Annotator: A Tool to Annotate Dancers in a Virtual Environment

  • Claudia Ribeiro
  • Rafael Kuffner
  • Carla Fernandes
Chapter
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10605)

Abstract

In this paper we describe the Virtual Reality Annotator, a visualization tool that allows users to dynamically annotate dancers in a virtual reality environment. The annotation types currently supported are sketches, speech-to-text, and highlighting. Each type of annotation can be applied to a bone or a set of bones of the skeleton data, or to a set of points of the point-cloud data. Using a wireless mouse, users interact with the 3D objects of the virtual reality space, in particular a 3D menu that lets them choose the type of annotation and the desired color, as well as select the skeleton and point-cloud data. Examples of the system's use with data from expert dancers are presented and discussed, along with suggestions for future work.
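To make the annotation model described above concrete, the sketch below shows one possible way to represent the three annotation types and their targets (bones of the skeleton data or indices into the point-cloud data). This is a hypothetical illustration, not the authors' implementation; the class and field names are assumptions.

```python
# Hypothetical data model (not from the paper) for VR annotations that can be
# attached either to skeleton bones or to point-cloud points.
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List, Optional, Tuple

Color = Tuple[float, float, float]  # RGB components in [0, 1]

class AnnotationType(Enum):
    SKETCH = auto()          # free-hand 3D stroke drawn around the dancer
    SPEECH_TO_TEXT = auto()  # transcribed voice note
    HIGHLIGHT = auto()       # color emphasis on selected bones or points

@dataclass
class Annotation:
    kind: AnnotationType
    color: Color
    bones: List[str] = field(default_factory=list)          # e.g. ["LeftArm", "LeftForeArm"]
    point_indices: List[int] = field(default_factory=list)  # indices into the point cloud
    text: Optional[str] = None                               # filled for speech-to-text notes

# Example: highlight the right leg of the captured skeleton in red.
leg_highlight = Annotation(
    kind=AnnotationType.HIGHLIGHT,
    color=(1.0, 0.0, 0.0),
    bones=["RightUpLeg", "RightLeg", "RightFoot"],
)
```

In such a scheme, the 3D menu would simply set the `kind` and `color` fields, while the selection made with the wireless mouse would populate `bones` or `point_indices`.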

Keywords

Virtual reality · Annotations · Contemporary dance · Visualization · Interaction

Notes

Acknowledgements

This work was supported by the European Research Council under the project “BlackBox: A collaborative platform to document performance composition: from conceptual structures in the backstage to customizable visualizations in the front-end” (Grant agreement n. 336200).

Supplementary material

Supplementary material 1 (mp4 336611 KB)


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Universidade NOVA de Lisboa, Faculdade de Ciências Sociais e Humanas, BlackBox Project, Lisbon, Portugal
