SimpleFlow: Enhancing Gestural Interaction with Gesture Prediction, Abbreviation and Autocompletion

  • Mike Bennett
  • Kevin McCarthy
  • Sile O’Modhrain
  • Barry Smyth
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6946)


Gestural interfaces are now a familiar mode of user interaction, and gestural input is an important part of the way that users interact with such interfaces. However, entering gestures accurately and efficiently can be challenging. In this paper we present two styles of visual gesture autocompletion for 2D predictive gesture entry. Both styles enable users to abbreviate gestures. We experimentally evaluate and compare both styles of visual autocompletion against each other and against non-predictive gesture entry. The best-performing visual autocompletion is referred to as SimpleFlow. Our findings establish that users of SimpleFlow take significant advantage of gesture autocompletion by entering partial gestures rather than whole gestures. Compared to non-predictive gesture entry, users enter partial gestures that are 41% shorter than the complete gestures, while simultaneously improving the accuracy (+13%, from 68% to 81%) and speed (+10%) of their gesture input. The results provide insights into why SimpleFlow leads to significantly enhanced performance, while showing how predictive gestures with simple visual autocompletion impact the gesture abbreviation, accuracy, speed and cognitive load of 2D predictive gesture entry.
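The abstract does not specify the paper's recognizer, but the core idea of predictive gesture entry — matching a partially drawn stroke against a set of gesture templates so the system can autocomplete it — can be sketched in a few lines. The following is an illustrative, hypothetical implementation (all function names are our own): it truncates each template to the arc length of the partial stroke, resamples both to a fixed number of points, and ranks templates by mean point-to-point distance, in the spirit of template matchers such as the $1 recognizer.

```python
import math

def path_length(points):
    """Total arc length of a polyline given as a list of (x, y) tuples."""
    return sum(math.dist(points[i - 1], points[i]) for i in range(1, len(points)))

def resample(points, n=32):
    """Resample a polyline to n evenly spaced points ($1-style preprocessing)."""
    total = path_length(points)
    if total == 0:
        return [points[0]] * n
    step = total / (n - 1)
    pts = list(points)
    resampled = [pts[0]]
    acc = 0.0
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= step:
            t = (step - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            resampled.append(q)
            pts.insert(i, q)  # continue measuring from the interpolated point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(resampled) < n:      # pad/trim against floating-point drift
        resampled.append(pts[-1])
    return resampled[:n]

def truncate_to_length(points, length):
    """Prefix of a polyline with the given arc length."""
    out = [points[0]]
    acc = 0.0
    for i in range(1, len(points)):
        d = math.dist(points[i - 1], points[i])
        if acc + d >= length:
            t = (length - acc) / d if d else 0.0
            out.append((points[i - 1][0] + t * (points[i][0] - points[i - 1][0]),
                        points[i - 1][1] + t * (points[i][1] - points[i - 1][1])))
            return out
        acc += d
        out.append(points[i])
    return out

def predict(partial, templates):
    """Rank named template gestures by similarity to a partial stroke."""
    length = path_length(partial)
    p = resample(partial)
    scores = {}
    for name, tmpl in templates.items():
        prefix = truncate_to_length(tmpl, min(length, path_length(tmpl)))
        q = resample(prefix)
        scores[name] = sum(math.dist(a, b) for a, b in zip(p, q)) / len(p)
    return min(scores, key=scores.get), scores
```

In a SimpleFlow-style interface, the top-ranked template would drive the visual autocompletion overlay, letting the user lift the mouse button after a partial stroke; this sketch covers only the matching step, not the visual feedback or the paper's actual algorithm.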


Keywords: Cognitive Load, Visual Feedback, Mouse Button, Mouse Pointer, Short Gesture
(Keywords were added by machine, not by the authors.)

Electronic supplementary material (384 KB)



Copyright information

© IFIP International Federation for Information Processing 2011

Authors and Affiliations

  • Mike Bennett (1, 2)
  • Kevin McCarthy (2)
  • Sile O’Modhrain (3)
  • Barry Smyth (2)

  1. SCIEN, Department of Psychology, Stanford University, USA
  2. School of Computer Science, University College Dublin, Ireland
  3. Sonic Arts Research Centre, Queen’s University Belfast, UK
