
A Compositional Model for Gesture Definition

  • Lucio Davide Spano
  • Antonio Cisternino
  • Fabio Paternò
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7623)

Abstract

The description of a gesture requires temporal analysis of the values generated by input sensors, and it does not fit well with the observer pattern traditionally used by frameworks to handle user input. The current solution is to embed specific gesture-based interactions, such as pinch-to-zoom, into frameworks, notifying the application only when a whole gesture has been detected. This approach lacks flexibility unless the programmer performs an explicit temporal analysis of raw sensor data. This paper proposes a compositional, declarative meta-model for gesture definition based on Petri Nets. Basic traits are used as building blocks for defining gestures; each trait notifies the change of a feature value. A complex gesture is defined by composing sub-gestures through a set of operators. The user interface behaviour can be associated with the recognition of the whole gesture or with any sub-component, addressing the problem of granularity for notification events. The meta-model can be instantiated for different gesture recognition platforms, and its definition has been validated through a proof-of-concept library. Sample applications have been developed to support multitouch gestures on iOS and full-body gestures with Microsoft Kinect.
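To make the compositional idea concrete, the following is a minimal, hypothetical sketch (not the authors' library, and using simple state machines rather than Petri Nets): basic traits act as ground terms that complete on a single feature change, and operators such as sequence (`>>`) and choice (`|`) combine them into complex gestures. Behaviour handlers could then be attached to any sub-gesture, not only the whole composition. All class and event names are invented for illustration.

```python
class Gesture:
    """Base recognizer: consumes one event at a time, reports completion."""
    def feed(self, event):
        raise NotImplementedError
    def reset(self):
        raise NotImplementedError
    # Operator sugar for composition
    def __rshift__(self, other):   # g1 >> g2 : g1 followed by g2
        return Sequence(self, other)
    def __or__(self, other):       # g1 | g2 : either g1 or g2
        return Choice(self, other)

class Trait(Gesture):
    """Ground term: completes when a single named feature change occurs."""
    def __init__(self, name):
        self.name = name
    def feed(self, event):
        return event == self.name
    def reset(self):
        pass

class Sequence(Gesture):
    """Recognizes the first sub-gesture, then the second."""
    def __init__(self, first, second):
        self.first, self.second = first, second
        self.reset()
    def reset(self):
        self.stage = 0
        self.first.reset()
        self.second.reset()
    def feed(self, event):
        if self.stage == 0:
            if self.first.feed(event):
                self.stage = 1
            return False
        if self.second.feed(event):
            self.reset()
            return True
        return False

class Choice(Gesture):
    """Recognizes whichever sub-gesture completes first."""
    def __init__(self, left, right):
        self.left, self.right = left, right
    def reset(self):
        self.left.reset()
        self.right.reset()
    def feed(self, event):
        done = self.left.feed(event) or self.right.feed(event)
        if done:
            self.reset()
        return done

# A pan-like gesture composed from three traits:
pan = Trait("touch.start") >> Trait("touch.move") >> Trait("touch.end")
```

Feeding the event stream `touch.start`, `touch.move`, `touch.end` completes `pan` only on the last event; the same traits could be recombined with `|` to build alternative gestures without re-analyzing raw sensor data.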

Keywords

Input and Interaction Technologies · Model-based design · Software architecture and engineering · Gestural Interaction

References

  1. Accot, J., Chatty, S., Palanque, P.A.: A formal description of low level interaction and its application to multimodal interactive systems. In: 3rd DSVIS EUROGRAPHICS, pp. 92–105. Springer (1996)
  2. Bastide, R., Palanque, P.A.: A Petri-Net based Environment for the Design of Event-driven Interfaces. In: De Michelis, G., Díaz, M. (eds.) ICATPN 1995. LNCS, vol. 935, pp. 66–83. Springer, Heidelberg (1995)
  3. Buxton, W.: Lexical and pragmatic considerations of input structures. SIGGRAPH Comput. Graph. 17(1), 31–37 (1983)
  4. González-Calleros, J.M., Vanderdonckt, J.: 3D User Interfaces for Information Systems based on UsiXML. In: Proc. of 1st Int. Workshop on User Interface Extensible Markup Language, UsiXML 2010, Berlin, Germany. Thales Research and Technology France, Paris (2010)
  5. Jacob, R.J.K., Girouard, A., Hirshfield, L.M., Horn, M.S., Shaer, O., Solovey, E.T., Zigelbaum, J.: Reality-based interaction: a framework for post-WIMP interfaces. In: CHI 2008, Florence, Italy, pp. 201–210. ACM Press (April 2008)
  6. Kin, K., Hartmann, B., DeRose, T., Agrawala, M.: Proton: multitouch gestures as regular expressions. In: CHI 2012, Austin, Texas, USA, pp. 2885–2894 (May 2012)
  7. Kammer, D., Wojdziak, J., Keck, M., Groh, R., Taranko, S.: Towards a formalization of multi-touch gestures. In: ITS 2010, ACM International Conference on Interactive Tabletops and Surfaces, Saarbrücken, Germany, pp. 49–58. ACM Press (November 2010)
  8. Luyten, K., Vanacken, D., Weiss, M., Borchers, J., Izadi, S., Wigdor, D.: Engineering patterns for multi-touch interfaces. In: EICS 2010, Proceedings of the 2nd ACM SIGCHI Symposium on Engineering Interactive Computing Systems, Berlin, Germany, pp. 365–366. ACM Press (June 2010)
  9. NUI Group: Gesture Recognition, http://wiki.nuigroup.com/Gesture_Recognition (retrieved: May 27, 2012)
  10. Palanque, P.A., Bastide, R., Sengès, V.: Validating interactive system design through the verification of formal task and system models. In: EHCI 1995, Yellowstone Park, USA, pp. 189–212. Chapman & Hall (1995)
  11. Paternò, F.: Model-based design and evaluation of interactive applications. Applied Computing. Springer (2000)
  12. Paternò, F., Santoro, C., Spano, L.D.: MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environments. ACM Transactions on Computer-Human Interaction 16(4), 1–30 (2009)
  13. David, R., Alla, H.: Discrete, Continuous and Hybrid Petri Nets. Springer (2005)
  14. Scoditti, A., Blanch, R., Coutaz, J.: A Novel Taxonomy for Gestural Interaction Techniques Based on Accelerometers. In: IUI 2011, Proceedings of the 16th International Conference on Intelligent User Interfaces, Palo Alto, CA, USA, pp. 63–72. ACM Press (February 2011)
  15. Vanacken, D., De Boeck, J., Raymaekers, C., Coninx, K.: NIMMIT: A notation for modeling multimodal interaction techniques. In: GRAPP 2006, Setúbal, Portugal, pp. 224–231 (2006)

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Lucio Davide Spano (1)
  • Antonio Cisternino (2)
  • Fabio Paternò (1)
  1. ISTI-CNR, Pisa, Italy
  2. Dipartimento di Informatica, Università di Pisa, Pisa, Italy
