Abstract
There are several ways a conversation partner can point to a remote place in videoconferencing: (1) displaying the partner’s pointing gesture on screen, as in ordinary videoconferencing, (2) displaying the partner’s arm on a tabletop display, (3) projecting a laser dot that is synchronized with a laser pointer held by the partner, and (4) embodying the partner’s pointing behavior with a robotic pointer or a robotic arm. In this study, we implemented these methods in a videoconferencing system and compared their effects on social telepresence (i.e., the sense that a participant feels as if he/she were meeting the conversation partner in the same place). We found that the fourth method, which embodied the remote partner’s pointing behavior, enhanced social telepresence.
Acknowledgment
This work was supported by JSPS KAKENHI Grant Numbers JP26280076 and JP15K12081, the KDDI Foundation, the Telecommunication Advancement Foundation, the Foundation for the Fusion of Science and Technology, the Tateishi Science and Technology Foundation, and JST CREST.
Copyright information
© 2017 Springer International Publishing AG
About this paper
Cite this paper
Onishi, Y., Tanaka, K., Nakanishi, H. (2017). Spatial Continuity and Robot-Embodied Pointing Behavior in Videoconferencing. In: Gutwin, C., Ochoa, S., Vassileva, J., Inoue, T. (eds) Collaboration and Technology. CRIWG 2017. Lecture Notes in Computer Science, vol 10391. Springer, Cham. https://doi.org/10.1007/978-3-319-63874-4_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-63873-7
Online ISBN: 978-3-319-63874-4
eBook Packages: Computer Science (R0)