Understanding Multi-touch Manipulation for Surface Computing

  • Chris North
  • Tim Dwyer
  • Bongshin Lee
  • Danyel Fisher
  • Petra Isenberg
  • George Robertson
  • Kori Inkpen
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5727)

Abstract

Two-handed, multi-touch surface computing provides a scope for interactions that are closer analogues to physical interactions than classical windowed interfaces. The design of natural and intuitive gestures is a difficult problem as we do not know how users will approach a new multi-touch interface and which gestures they will attempt to use. In this paper we study whether familiarity with other environments influences how users approach interaction with a multi-touch surface computer as well as how efficiently those users complete a simple task. Inspired by the need for object manipulation in information visualization applications, we asked users to carry out an object sorting task on a physical table, on a tabletop display, and on a desktop computer with a mouse. To compare users’ gestures we produced a vocabulary of manipulation techniques that users apply in the physical world and we compare this vocabulary to the set of gestures that users attempted on the surface without training. We find that users who start with the physical model finish the task faster when they move over to using the surface than users who start with the mouse.

Keywords

Surface · Multi-touch · Gestures · Tabletop

References

  1. Forlines, C., Wigdor, D., Shen, C., Balakrishnan, R.: Direct-Touch vs. Mouse Input for Tabletop Displays. In: Proc. CHI 2007, pp. 647–656. ACM Press, New York (2007)
  2. van Ham, F., Rogowitz, B.: Perceptual Organization in User-Generated Graph Layouts. IEEE Trans. Visualization and Computer Graphics (InfoVis 2008) 14(6), 1333–1339 (2008)
  3. Hinckley, K., Baudisch, P., Ramos, G., Guimbretière, F.: Design and Analysis of Delimiters for Selection-action Pen Gesture Phrases in Scriboli. In: Proc. CHI 2005, pp. 451–460. ACM Press, New York (2005)
  4. i2 – Analyst’s Notebook, http://www.i2inc.com (accessed 29 January 2009)
  5. Long Jr., A.C., Landay, J.A., Rowe, L.A.: Implications for a Gesture Design Tool. In: Proc. CHI 1999, pp. 40–47. ACM Press, New York (1999)
  6. Klemmer, S.R., Newman, M.W., Farrell, R., Bilezikjian, M., Landay, J.A.: The Designers’ Outpost: A Tangible Interface for Collaborative Web Site Design. In: Proc. UIST 2001, pp. 1–10. ACM Press, New York (2001)
  7. Leganchuk, A., Zhai, S., Buxton, W.: Manual and Cognitive Benefits of Two-Handed Input: An Experimental Study. ACM Transactions on Computer-Human Interaction 5(4), 326–359 (1998)
  8.
  9. Pedersen, E.R., McCall, K., Moran, T., Halasz, F.T.: An Electronic Whiteboard for Informal Workgroup Meetings. In: Proc. CHI 1993, pp. 391–398. ACM Press, New York (1993)
  10. Proulx, P., Chien, L., Harper, R., Schroh, D., Kapler, T., Jonker, D., Wright, W.: nSpace and GeoTime: A VAST 2006 Case Study. IEEE Computer Graphics and Applications 27(5), 46–56 (2007)
  11. Robinson, A.C.: Collaborative Synthesis of Visual Analytic Results. In: Proc. VAST 2008, pp. 61–74. IEEE Press, Los Alamitos (2008)
  12. Stasko, J., Görg, C., Liu, Z.: Jigsaw: Supporting Investigative Analysis through Interactive Visualization. Information Visualization 7(2), 118–132 (2008)
  13. Thomas, J.J., Cook, K.A.: Illuminating the Path. IEEE Press, Los Alamitos (2005)
  14. Tse, E., Greenberg, S., Shen, C., Forlines, C., Kodama, R.: Exploring True Multi-User Multimodal Interaction over a Digital Table. In: Proc. DIS 2008, pp. 109–118. ACM Press, New York (2008)
  15. Wilson, A.D., Izadi, S., Hilliges, O., Garcia-Mendoza, A., Kirk, D.: Bringing Physics to the Surface. In: Proc. UIST 2008, pp. 67–76. ACM Press, New York (2008)
  16. Wise, J.A., Thomas, J.J., Pennock, K., Lantrip, D., Pottier, M., Schur, A., Crow, V.: Visualizing the Non-Visual: Spatial Analysis and Interaction with Information from Text Documents. In: Proc. InfoVis 1995, pp. 51–58. IEEE Press, Los Alamitos (1995)
  17. Wobbrock, J., Morris, M.R., Wilson, A.D.: User-Defined Gestures for Surface Computing. In: Proc. CHI 2009. ACM Press, New York (to appear, 2009)
  18. Wu, M., Shen, C., Ryall, K., Forlines, C., Balakrishnan, R.: Gesture Registration, Relaxation, and Reuse for Multi-Point Direct-Touch Surfaces. In: Proc. TableTop 2006, pp. 185–192. IEEE Press, Los Alamitos (2006)

Copyright information

© IFIP International Federation for Information Processing 2009

Authors and Affiliations

  • Chris North (1)
  • Tim Dwyer (2)
  • Bongshin Lee (2)
  • Danyel Fisher (2)
  • Petra Isenberg (3)
  • George Robertson (2)
  • Kori Inkpen (2)
  1. Virginia Tech, Blacksburg, USA
  2. Microsoft Research, Redmond, USA
  3. University of Calgary, Alberta, Canada
