Towards User-Aware Multi-touch Interaction Layer for Group Collaborative Systems

  • Vít Rusňák
  • Lukáš Ručka
  • Petr Holub
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7721)

Abstract

State-of-the-art collaborative workspaces are represented either by large tabletops or by wall-sized interactive displays. Extending bare multi-touch capability with metadata that associates touch events with individual users could significantly improve the collaborative work of a co-located group. In this paper, we present several techniques that enable the development of such interactive environments. First, we describe an algorithm for the scalable coupling of multiple touch sensors and a method for associating touch events with users. We then briefly discuss the Multi-Sensor (MUSE) framework, which builds on these two techniques and allows rapid development of touch-based user interfaces. Finally, we discuss preliminary results from the prototype implementation.
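The algorithmic details appear in the full paper; purely as an illustration of the general idea, the minimal sketch below shows one plausible way to map touch points from several coupled touch sensors into a single workspace coordinate system and to attribute each touch to the nearest tracked user (for instance, one followed by a depth sensor). The panel layout, the reach threshold, and all identifiers are assumptions of this sketch, not the authors' algorithm.

```python
from dataclasses import dataclass
import math

@dataclass
class TouchPoint:
    sensor_id: str
    x: float          # sensor-local coordinates, normalised to [0, 1]
    y: float

@dataclass
class TrackedUser:
    user_id: int
    x: float          # position in workspace coordinates
    y: float          # (e.g., estimated by a depth-sensor tracker)

# Hypothetical layout: each physical sensor covers a known rectangle of
# the combined "virtual sensor" surface (x offset, y offset, width, height).
SENSOR_LAYOUT = {
    "panel-0": (0.0, 0.0, 1.0, 1.0),
    "panel-1": (1.0, 0.0, 1.0, 1.0),   # mounted to the right of panel-0
}

def to_workspace(tp: TouchPoint) -> tuple[float, float]:
    """Map a sensor-local touch point into shared workspace coordinates."""
    ox, oy, w, h = SENSOR_LAYOUT[tp.sensor_id]
    return ox + tp.x * w, oy + tp.y * h

def associate(tp: TouchPoint, users: list[TrackedUser],
              max_reach: float = 0.9) -> int | None:
    """Attribute a touch to the nearest tracked user within arm's reach;
    return None when no user is close enough (the touch stays anonymous)."""
    tx, ty = to_workspace(tp)
    best_id, best_dist = None, max_reach
    for u in users:
        d = math.hypot(u.x - tx, u.y - ty)
        if d < best_dist:
            best_id, best_dist = u.user_id, d
    return best_id

if __name__ == "__main__":
    users = [TrackedUser(1, 0.4, 1.1), TrackedUser(2, 1.6, 1.1)]
    touch = TouchPoint("panel-1", 0.25, 0.5)   # workspace position (1.25, 0.5)
    print(associate(touch, users))             # -> user 2
```

In a real deployment the sensor layout would come from a calibration step and the user positions from a tracking pipeline; the sketch only conveys the shape of the association step.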

Keywords

Depth Sensor · Touch Sensor · Virtual Sensor · Touch Point · User Tracking

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Vít Rusňák (1)
  • Lukáš Ručka (1)
  • Petr Holub (1, 2)

  1. Institute of Computer Science, Masaryk University, Brno, Czech Republic
  2. CESNET z.s.p.o., Prague, Czech Republic