Unifying Events from Multiple Devices for Interpreting User Intentions through Natural Gestures

  • Pablo Llinás
  • Manuel García-Herranz
  • Pablo A. Haya
  • Germán Montoro
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6946)


As technology evolves (e.g. 3D cameras, accelerometers, multi-touch surfaces), new gestural interaction methods are becoming part of the everyday use of computational devices. This trend forces practitioners to develop applications for each interaction method individually. This paper tackles the problem of interpreting gestures in scenarios with multiple interaction methods by focusing on the abstract gesture rather than on the technology or technologies used to generate it. It describes the Flash Library for Interpreting Natural Gestures (FLING), a framework for developing multi-gestural applications that run across different gestural platforms. By offering an architecture for the integration and unification of different types of interaction, FLING eases scalability while providing an environment for rapid prototyping by novice multi-gestural programmers. Throughout the article we analyse the benefits of this approach, compare it with state-of-the-art technologies, describe the framework architecture, and present several examples of applications and experiences of use.
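The unification idea described in the abstract can be sketched as an event bus that adapters feed with abstract gesture events, so the application never sees device-specific input. This is an illustrative sketch only; the type and function names below are hypothetical and are not FLING's actual API.

```typescript
// Abstract gesture events: the application subscribes to these, regardless
// of which input technology produced them.
type AbstractGesture =
  | { kind: "tap"; x: number; y: number }
  | { kind: "drag"; dx: number; dy: number };

// Device-specific raw events from two different input technologies
// (shapes are illustrative).
interface TuioCursor { sessionId: number; x: number; y: number }
interface AccelSample { ax: number; ay: number }

// Adapters translate each technology's events into the shared abstraction.
function fromTuio(c: TuioCursor): AbstractGesture {
  return { kind: "tap", x: c.x, y: c.y };
}
function fromAccelerometer(prev: AccelSample, next: AccelSample): AbstractGesture {
  return { kind: "drag", dx: next.ax - prev.ax, dy: next.ay - prev.ay };
}

// A minimal bus that decouples gesture producers from consumers.
class GestureBus {
  private handlers: ((g: AbstractGesture) => void)[] = [];
  on(h: (g: AbstractGesture) => void): void { this.handlers.push(h); }
  emit(g: AbstractGesture): void { this.handlers.forEach(h => h(g)); }
}

const bus = new GestureBus();
const seen: string[] = [];
bus.on(g => seen.push(g.kind));

// Events from two unrelated devices arrive as the same abstraction.
bus.emit(fromTuio({ sessionId: 1, x: 0.5, y: 0.5 }));
bus.emit(fromAccelerometer({ ax: 0, ay: 0 }, { ax: 0.2, ay: 0.1 }));
```

Adding support for a new input technology then means writing one more adapter, without touching application code, which is the scalability property the abstract attributes to this style of architecture.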


FLING framework · Multi-touch interface · Multiple input peripherals · Application development



Copyright information

© IFIP International Federation for Information Processing 2011

Authors and Affiliations

  • Pablo Llinás (1)
  • Manuel García-Herranz (1)
  • Pablo A. Haya (1)
  • Germán Montoro (1)
  1. Dept. Ingeniería Informática, Universidad Autónoma de Madrid, Madrid, Spain
