Enhancing Communication through Distributed Mixed Reality

  • Divesh Lala
  • Christian Nitschke
  • Toyoaki Nishida
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8610)

Abstract

A navigable mixed reality system in which humans and agents communicate and interact within a shared virtual environment is an appropriate tool for analyzing multi-human, multi-agent communication. We present FCWorld, a prototype system developed to meet these requirements. FCWorld integrates a range of technologies with a focus on enabling natural human communication. In this paper we discuss the requirements for FCWorld, the technical issues it must address, and our proposed solutions. We intend FCWorld to become a novel tool for a variety of communication tasks, such as real-time analysis and facilitation.

Keywords

Virtual Environment · Virtual World · Virtual Object · Task Environment · Mixed Reality
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

References

  1. Lala, D., Nishida, T.: Joint activity theory as a framework for natural body expression in autonomous agents. In: Proc. of the 1st Intl. Workshop on Multimodal Learning Analytics, MLA 2012, pp. 2:1–2:8 (2012)
  2. da Silva, C., Garcia, A.: A collaborative working environment for small group meetings in Second Life. SpringerPlus 2(1), 1–14 (2013)
  3. Bredl, K., Groß, A., Hünniger, J., Fleischer, J.: The avatar as a knowledge worker? How immersive 3D virtual environments may foster knowledge acquisition. Electronic J. of Knowl. Mgmt. 10(1), 15–25 (2012)
  4. Kim, K., Bolton, J., Girouard, A., Cooperstock, J., Vertegaal, R.: TeleHuman: Effects of 3D perspective on gaze and pose estimation with a life-size cylindrical telepresence pod. In: Proc. of the SIGCHI Conf. on Human Factors in Comp. Sys., CHI 2012, pp. 2531–2540 (2012)
  5. Benko, H., Jota, R., Wilson, A.: MirageTable: Freehand interaction on a projected augmented reality tabletop. In: Proc. of the SIGCHI Conf. on Human Factors in Comp. Sys., CHI 2012, pp. 199–208 (2012)
  6. Hirata, K., Harada, Y., Takada, T., Aoyagi, S., Shirai, Y., Yamashita, N., Kaji, K., Yamato, J., Nakazawa, K.: t-Room: Next generation video communication system. In: Glob. Telecom. Conf., pp. 1–4. IEEE GLOBECOM (2008)
  7. Beck, S., Kunert, A., Kulik, A., Froehlich, B.: Immersive group-to-group telepresence. IEEE Trans. on Visualization and Comp. Graph. 19(4), 616–625 (2013)
  8. Zhou, Z., Tedjokusumo, J., Winkler, S., Ni, B.: User studies of a multiplayer first person shooting game with tangible and physical interaction. In: Shumaker, R. (ed.) Virtual Reality, HCII 2007. LNCS, vol. 4563, pp. 738–747. Springer, Heidelberg (2007)
  9. Misawa, K., Ishiguro, Y., Rekimoto, J.: Ma petite chérie: What are you looking at? A small telepresence system to support remote collaborative work for intimate communication. In: Proc. of the 3rd Augmented Human Intl. Conf., AH 2012, pp. 17:1–17:5 (2012)
  10. Demeulemeester, A., Kilpi, K., Elprama, S.A., Lievens, S., Hollemeersch, C.-F., Jacobs, A., Lambert, P., Van de Walle, R.: The ICOCOON virtual meeting room: A virtual environment as a support tool for multipoint teleconference systems. In: Herrlich, M., Malaka, R., Masuch, M. (eds.) ICEC 2012. LNCS, vol. 7522, pp. 158–171. Springer, Heidelberg (2012)
  11. Cassola, F., Morgado, L., de Carvalho, F., Paredes, H., Fonseca, B., Martins, P.: Online-Gym: A 3D virtual gymnasium using Kinect interaction. Procedia Technology 13, 130–138 (2014); SLACTIONS 2013: Research Conference on Virtual Worlds, Learning with Simulations
  12. Lala, D., Nishida, T.: VISIE: A spatially immersive interaction environment using real-time human measurement. In: 2011 IEEE Intl. Conf. on Granular Computing (GrC), pp. 363–368 (2011)
  13. Google: Google Street View Image API (2014). https://developers.google.com/maps/documentation/streetview/ (online; accessed February 17, 2014)
  14. Lala, D.: VISIE: A spatially immersive environment for capturing and analyzing body expression in virtual worlds. Master's thesis, Kyoto University (2012)
  15. Shum, H., Kang, S.B.: Review of image-based rendering techniques. In: Proc. of SPIE, vol. 4067, pp. 2–13 (2000)
  16. Alexiadis, D., Zarpalas, D., Daras, P.: Real-time, full 3-D reconstruction of moving foreground objects from multiple consumer depth cameras. IEEE Trans. on Multimedia 15(2), 339–358 (2013)

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Divesh Lala (1)
  • Christian Nitschke (1)
  • Toyoaki Nishida (1)

  1. Graduate School of Informatics, Kyoto University, Kyoto, Japan
