Multimedia Tools and Applications, Volume 70, Issue 3, pp 1837–1868

HumanTop: a multi-object tracking tabletop

  • Emilio Soto Candela
  • Mario Ortega Pérez
  • Clemente Marín Romero
  • David C. Pérez López
  • Gustavo Salvador Herranz
  • Manuel Contero
  • Mariano Alcañiz Raya

Abstract

In this paper, a computer-vision-based interactive multi-touch tabletop system called HumanTop is introduced. HumanTop implements a stereo vision subsystem that provides both an accurate fingertip tracking algorithm and a precise method for detecting touches on the working surface. Built on a pair of visible-spectrum cameras, a novel synchronization circuit decouples camera capture from image projection, providing the minimum basis for computer vision analysis with visible-spectrum cameras free of interference from the projector. The assembly of both cameras and the synchronization circuit not only behaves as an ad hoc depth camera, but also enables the recognition and tracking of textured planar objects, even when contents are projected over them. HumanTop additionally supports the tracking of sheets of paper and ID-code markers. This set of features makes HumanTop a comprehensive, intuitive and versatile augmented tabletop that provides multitouch interaction with projective augmented reality on any flat surface. To exercise all the capabilities of HumanTop, an educational application has been developed that uses an augmented book as a launcher to different didactic contents. A pilot study in which 28 fifth graders participated is presented, with results on efficiency, usability/satisfaction and motivation. These results suggest that HumanTop is an interesting platform for the development of educational contents.
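The stereo touch-detection idea described above lends itself to a compact illustration. The following Python/OpenCV sketch is a hypothetical approximation of the approach, not the authors' implementation: fingertip candidates are taken from convexity analysis of the hand contour in the left view, and a fingertip counts as a touch when its stereo disparity matches that of the previously calibrated table plane. All function names, thresholds and the precomputed plane_disparity map are illustrative assumptions.

```python
import cv2
import numpy as np

# Hedged sketch of stereo-based touch detection over a planar working
# surface. Assumes a rectified, synchronized camera pair (in HumanTop a
# synchronization circuit keeps capture free of projector interference)
# and a disparity map of the empty table (`plane_disparity`) captured
# once during calibration. Thresholds are illustrative, not the paper's.

TOUCH_TOLERANCE = 2.0  # max disparity deviation from the table plane (assumed)

stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)

def fingertip_candidates(gray):
    """Return fingertip-like contour points via convex-hull analysis."""
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    tips = []
    for contour in contours:
        if cv2.contourArea(contour) < 2000:          # skip small blobs
            continue
        hull = cv2.convexHull(contour, returnPoints=False)
        defects = cv2.convexityDefects(contour, hull)
        if defects is None:
            continue
        for start, _end, _far, _depth in defects[:, 0]:
            tips.append(tuple(contour[start][0]))    # hull points ~ fingertips
    return tips

def detect_touches(left_gray, right_gray, plane_disparity):
    """Report fingertips whose disparity coincides with the table plane."""
    disp = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    touches = []
    for x, y in fingertip_candidates(left_gray):
        d = disp[y, x]
        if d > 0 and abs(d - plane_disparity[y, x]) < TOUCH_TOLERANCE:
            touches.append((x, y))
    return touches
```

A real pipeline would add temporal filtering of fingertip trajectories across frames; the plane-proximity test above only captures the core touch criterion.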

Keywords

Tabletop · Multitouch · Markerless tracking · Finger detection · Camera-projector system · Technology enhanced learning

Notes

Acknowledgements

This study was funded by the Ministerio de Educación y Ciencia (Spain) through the projects SALTET (TIN2010-21296-C02-01), Game Teen (TIN2010-20187) and Consolider-C (SEJ2006-14301/PSIC), by the “CIBER of Physiopathology of Obesity and Nutrition, an initiative of ISCIII”, and by the Excellence Research Program PROMETEO (Generalitat Valenciana, Conselleria de Educació, 2008-157).


Copyright information

© Springer Science+Business Media, LLC 2012

Authors and Affiliations

  • Emilio Soto Candela (1)
  • Mario Ortega Pérez (1)
  • Clemente Marín Romero (1)
  • David C. Pérez López (1)
  • Gustavo Salvador Herranz (2)
  • Manuel Contero (1)
  • Mariano Alcañiz Raya (1)
  1. Universitat Politècnica de València, Valencia, Spain
  2. Universidad Cardenal Herrera (CEU), Moncada, Spain
