Investigating Tangible Collaboration for Design Towards Augmented Physical Telepresence

  • Alexa F. Siu
  • Shenli Yuan
  • Hieu Pham
  • Eric Gonzalez
  • Lawrence H. Kim
  • Mathieu Le Goc
  • Sean Follmer
Chapter
Part of the Understanding Innovation book series (UNDINNO)

Abstract

While many systems have been designed to support collaboration around visual thinking tools, much less work has investigated how to share and collaboratively design physical prototypes, an important part of the design process. We describe preliminary results from a formative study of how designers communicate and collaborate in design meetings around physical and digital artifacts. Addressing some limitations of current collaboration platforms and drawing on guidelines from our study, we introduce a new prototype platform for remote collaboration. The platform leverages augmented reality (AR) to render the remote participant, together with a pair of linked actuated tabletop tangible interfaces that act as the participants' shared physical workspace. We propose the use of actuated tabletop tangibles to synchronously render complex shapes and to act as physical input.
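The chapter details the platform itself; as a rough illustration of the linked-tabletop idea described above, the following Python sketch models two tables that mirror each other's tangibles. All class, method, and message names here are hypothetical illustrations, not taken from the chapter: a move made by hand on one table (physical input) is sent as a sync message that "actuates" the matching tangible on the remote table, so both tables converge on the same physical shape.

```python
from dataclasses import dataclass

@dataclass
class Tangible:
    """One actuated tabletop tangible, identified by id, at a 2-D position (cm)."""
    tid: int
    x: float = 0.0
    y: float = 0.0

class LinkedWorkspace:
    """A minimal model of one of two linked actuated tabletops.

    Moving a tangible locally (physical input) returns a sync message;
    applying a message from the remote side drives the matching tangible
    to the same position (actuation), keeping the shared shape in sync.
    """
    def __init__(self, tangibles):
        self.tangibles = {t.tid: t for t in tangibles}

    def move_local(self, tid, x, y):
        t = self.tangibles[tid]
        t.x, t.y = x, y
        # Message sent over the link so the remote table can mirror the move.
        return {"tid": tid, "x": x, "y": y}

    def apply_remote(self, msg):
        t = self.tangibles[msg["tid"]]
        # In hardware this would command the tangible's actuators.
        t.x, t.y = msg["x"], msg["y"]

# Two tables, each starting with the same three tangibles.
local = LinkedWorkspace([Tangible(i) for i in range(3)])
remote = LinkedWorkspace([Tangible(i) for i in range(3)])

# A designer nudges tangible 1 on the local table; the remote table mirrors it.
remote.apply_remote(local.move_local(1, 4.0, 2.5))
```

A real implementation would of course involve continuous tracking, motion planning for the actuated tangibles, and network transport; the sketch only shows the symmetric state-synchronization pattern the abstract implies.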

Acknowledgements

This work is supported in part by the NSF GRFP, Stanford School of Engineering Fellowship, Hasso Plattner Design Thinking Research Program, and HP Inc.


Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  • Alexa F. Siu (1)
  • Shenli Yuan (1)
  • Hieu Pham (1)
  • Eric Gonzalez (1)
  • Lawrence H. Kim (1)
  • Mathieu Le Goc (1)
  • Sean Follmer (1)

  1. Stanford University, Stanford, USA
