
Multi-Touch, pp. 47–67

Declarative Programming of Multi-Touch Gestures

  • Dietrich Kammer
Chapter
Part of the Xpert.press book series (XPERT.PRESS)

Abstract

This chapter covers the current state of research and technology for the declarative programming of multi-touch gestures. The abstraction of gesture properties that this requires leads to greater reuse of gesture-recognition code and to the emergence of patterns for gesture development. It also makes it easier to develop new gestures based on existing ones. The approaches considered differ above all in their granularity and expressive power. Regular expressions, the JavaScript Object Notation, and domain-specific languages are examined as approaches. Concrete implementations of four approaches are described with a practical focus.
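
To make the core idea tangible, the following is a minimal sketch of the regular-expression approach (in the spirit of Proton): touch events are serialized into a symbol stream, and each gesture is declared as a pattern over that stream. The D/M/U token encoding, the gesture names, and the helper functions are illustrative assumptions for this sketch, not the actual syntax of Proton, GISpL, or GeForMT.

import re

# Hypothetical token encoding (an assumption of this sketch): each touch event
# becomes "D<n>", "M<n>", or "U<n>" for the down, move, or up event of the
# n-th touch point.
def encode(events):
    """Serialize (kind, finger) tuples into a token stream, e.g. 'D1M1U1'."""
    return "".join(f"{kind}{finger}" for kind, finger in events)

# Declarative gesture descriptions: each gesture is a regular expression
# over the token stream, registered under a name.
GESTURES = {
    # one finger goes down, moves arbitrarily often, and is lifted
    "one_finger_drag": re.compile(r"D1(M1)*U1"),
    # two fingers go down, move in any interleaving, and are lifted in order
    "two_finger_pinch": re.compile(r"D1(M1)*D2(M1|M2)*U1(M2)*U2"),
}

def recognize(events):
    """Return the names of all declared gestures matching the event sequence."""
    stream = encode(events)
    return [name for name, pattern in GESTURES.items() if pattern.fullmatch(stream)]

drag = [("D", 1), ("M", 1), ("M", 1), ("U", 1)]
pinch = [("D", 1), ("D", 2), ("M", 1), ("M", 2), ("U", 1), ("U", 2)]
print(recognize(drag))   # ['one_finger_drag']
print(recognize(pinch))  # ['two_finger_pinch']

Declaring gestures as data in this way is what enables the reuse and derivation of new gestures mentioned above; the four approaches discussed in the chapter differ mainly in how expressive this declaration layer is.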


Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  1. Technische Universität Dresden, Professur Mediengestaltung, Dresden, Germany
