A Multi-touch Surface Using Multiple Cameras

  • Itai Katz
  • Kevin Gabayan
  • Hamid Aghajan
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4678)


In this paper we present a framework for a multi-touch surface using multiple cameras. With an overhead camera and a side-mounted camera we determine the fingertip positions. After estimating the fundamental matrix that relates the two cameras, we calculate the three-dimensional coordinates of the fingertips. The intersection of the epipolar lines from the overhead camera with the fingertips detected in the side-camera image provides the fingertip height. A touch is detected when the measured fingertip height above the touch surface is zero. Sequences of touch events are interpreted as hand gestures, which are mapped to commands for manipulating applications. We offer an example application of a multi-touch finger painting program.
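The abstract's core geometric test can be sketched in a few lines: given the fundamental matrix F relating the two views, a fingertip detected in the overhead image induces an epipolar line in the side-camera image; a side-camera fingertip lying on that line is the same finger, and its image row relative to the surface row gives the height used for contact detection. The snippet below is a minimal illustration, not the paper's implementation: the example F (a pure horizontal-translation stereo pair) and the function names `epipolar_line`, `fingertip_on_line`, and `is_touch` are assumptions for demonstration.

```python
import numpy as np

def epipolar_line(F, x1):
    """Epipolar line l' = F @ x1 in the side-camera image induced by a
    fingertip at homogeneous pixel x1 in the overhead image. The line is
    scaled so that |l' . x2| is the point-to-line distance in pixels."""
    l = F @ x1
    return l / np.linalg.norm(l[:2])

def fingertip_on_line(l, x2, tol=2.0):
    """A side-camera fingertip x2 (homogeneous pixel) corresponds to the
    overhead detection when it lies within tol pixels of the epipolar line."""
    return abs(l @ x2) < tol

def is_touch(y_fingertip, y_surface, tol=3.0):
    """Contact is declared when the fingertip's measured height above the
    touch surface is (approximately) zero, i.e. its image row in the side
    camera is within tol pixels of the surface row."""
    return abs(y_surface - y_fingertip) < tol

# Illustrative fundamental matrix for two cameras related by a pure
# translation along x: F = [t]_x with t = (1, 0, 0). Its epipolar lines
# are the horizontal rows y' = y. A real F would be estimated from point
# correspondences (e.g. the normalized 8-point algorithm with RANSAC).
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])

x1 = np.array([320.0, 240.0, 1.0])      # fingertip in the overhead image
l = epipolar_line(F, x1)                # matching row in the side image
```

With this F, a side-camera fingertip at row 240 falls on the epipolar line and matches the overhead detection, while one at row 260 does not; `is_touch` then compares the matched fingertip's row against the calibrated surface row.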


Keywords: Gesture Recognition · Fundamental Matrix · Multiple Cameras · Epipolar Lines · Contact Detection





Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Itai Katz, Kevin Gabayan, Hamid Aghajan
    Dept. of Electrical Engineering, Stanford University, Stanford, CA 94305
