Human-Computer Interaction – INTERACT 2015, pp. 296–314

MovemenTable: The Design of Moving Interactive Tabletops

  • Kazuki Takashima
  • Yusuke Asari
  • Hitomi Yokoyama
  • Ehud Sharlin
  • Yoshifumi Kitamura
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9298)

Abstract

MovemenTable (MT) is an exploration of moving interactive tabletops that can physically move, gather together, or depart according to people’s dynamically varying interaction tasks and collaborative needs. We present the design and implementation of a set of MT prototypes and discuss a technique that allows MT to augment its visual content in order to provide motion cues to users. We outline a set of interaction scenarios using single and multiple MTs in public, social, and collaborative settings, and discuss four user studies based on these scenarios, assessing how people perceive MT movements, how these movements affect their interaction, and how synchronized movements of multiple MTs impact people’s collaborative interactions. Our findings confirm that MT’s augmentation of its visual content was helpful in providing motion cues to users, and that MT’s movement had significant effects on people’s spatial behaviors during interaction, effects that peaked in collaborative scenarios with multiple MTs.

Keywords

Human-robot interaction · Social interfaces · CSCW


Copyright information

© IFIP International Federation for Information Processing 2015

Authors and Affiliations

  • Kazuki Takashima (1)
  • Yusuke Asari (1)
  • Hitomi Yokoyama (2)
  • Ehud Sharlin (3)
  • Yoshifumi Kitamura (1)

  1. Research Institute of Electrical Communication, Tohoku University, Sendai, Japan
  2. Institute of Engineering, Tokyo University of Agriculture and Technology, Koganei, Japan
  3. Department of Computer Science, University of Calgary, Calgary, Canada
