
3D Gestural Interaction for Stereoscopic Visualization on Mobile Devices

  • Shahrouz Yousefi
  • Farid Abedan Kondori
  • Haibo Li
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6855)

Abstract

The number of mobile devices such as smartphones and tablet PCs has increased dramatically in recent years. New mobile devices are equipped with integrated cameras and large displays, which make interaction with the device more efficient. Although most previous work on interaction between humans and mobile devices is based on 2D touch-screen displays, camera-based interaction opens a new way to manipulate objects in the 3D space behind the device, within the camera’s field of view. In this paper, our gestural interaction relies heavily on particular patterns in the local orientation of the image called rotational symmetries. The approach is based on finding the most suitable pattern from a large set of rotational symmetries of different orders, which yields a reliable detector for hand gestures. Gesture detection and tracking can therefore be employed as an efficient tool for 3D manipulation in various computer vision and augmented reality applications. The final output is rendered into color anaglyphs for 3D visualization. Depending on the coding technology, different low-cost 3D glasses can be used by viewers.
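The abstract only summarizes the pipeline, so the two minimal Python sketches below (not part of the paper; NumPy/SciPy-based, with illustrative function names and parameters) indicate how the main steps described above could look in code. The first sketch computes a per-pixel response of an n-th order rotational-symmetry detector on the double-angle orientation image, the cue the paper uses for hand-gesture detection; the second composes a simple red-cyan color anaglyph from a stereo pair, corresponding to the final visualization step.

```python
import numpy as np
from scipy.ndimage import sobel, correlate

def rotational_symmetry_response(gray, order=2, window=15):
    """Sketch of an n-th order rotational-symmetry detector (cf. Johansson [7]).

    Correlates the double-angle orientation image with a symmetry basis
    function exp(i*n*phi) over a local window; strong responses mark
    curvature patterns (e.g. circular or star-like structures) of that order.
    """
    gray = gray.astype(float)

    # Local orientation in double-angle representation: z = (dI/dx + i*dI/dy)^2
    dx = sobel(gray, axis=1)
    dy = sobel(gray, axis=0)
    z = (dx + 1j * dy) ** 2

    # n-th order symmetry basis b(x, y) = exp(i * n * phi) on a (window x window) patch
    r = window // 2
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    basis = np.exp(1j * order * np.arctan2(ys, xs))
    basis[r, r] = 0.0  # orientation is undefined at the window centre

    # Complex correlation <z, conj(b)>, computed with real-valued filters
    real = correlate(z.real, basis.real) + correlate(z.imag, basis.imag)
    imag = correlate(z.imag, basis.real) - correlate(z.real, basis.imag)

    return np.abs(real + 1j * imag)  # high values indicate n-th order symmetry
```

```python
def red_cyan_anaglyph(left_rgb, right_rgb):
    """Minimal color anaglyph: red channel from the left view, green and blue
    from the right view, for viewing with red-cyan glasses. More faithful
    color reproduction uses projection methods such as Dubois [13]."""
    anaglyph = right_rgb.copy()
    anaglyph[..., 0] = left_rgb[..., 0]
    return anaglyph
```

Both fragments are sketches under the stated assumptions: the paper's actual detector searches over multiple symmetry orders and combines detection with feature-based tracking, which these snippets do not reproduce.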

Keywords

3D mobile interaction · rotational symmetries · gesture detection · SIFT · gesture tracking · stereoscopic visualization


References

  1. Bencheikh, M., Bouzenada, M., Batouche, M.C.: A New Method of Finger Tracking Applied to the Magic Board. In: Conf. on Industrial Technology (2004)
  2. Bretzner, L., Laptev, I., Lindeberg, T.: Hand gesture recognition using multi-scale colour features, hierarchical models and particle filtering. In: Proc. of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition, Washington, DC, USA (2002)
  3. Dorfmueller-Ulhaas, K., Schmalstieg, D.: Finger Tracking for Interaction in Augmented Environments. In: 2nd ACM/IEEE Symposium on Augmented Reality (2001)
  4. Erol, A., Bebis, G., Nicolescu, M., Boyle, R., Twombly, X.: Vision-based hand pose estimation: A review. Computer Vision and Image Understanding (2007)
  5. von Hardenberg, C., Bérard, F.: Bare-hand human-computer interaction. In: Proceedings of the 2001 Workshop on Perceptive User Interfaces, Orlando, Florida. ACM International Conference Proceeding Series, vol. 15 (2001)
  6. Iwai, D., Sato, K.: Heat Sensation in Image Creation with Thermal Vision. In: ACM SIGCHI Int. Conf. on Advances in Computer Entertainment Technology (2005)
  7. Johansson, B.: Low Level Operations and Learning in Computer Vision. Linköping Studies in Science and Technology, Dissertation, Linköpings universitet (2004)
  8. Kolsch, M., Turk, M.: Fast 2D Hand Tracking with Flocks of Features and Multi-Cue Integration. In: Proc. CVPR Workshop (2004)
  9. Fischler, M., Bolles, R.: Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. In: Readings in Computer Vision: Issues, Problems, Principles, and Paradigms, pp. 726–740 (1987)
  10. Maggioni, C.: A novel gestural input device for virtual reality. In: IEEE Annual International Symposium on Virtual Reality, pp. 118–124 (1993)
  11. Rehg, J., Kanade, T.: DigitEyes: Vision-based Human Hand Tracking. Technical Report CMU-CS-TR-93-220, Carnegie Mellon University (1993)
  12. Laptev, I., Lindeberg, T.: Tracking of Multi-state Hand Models Using Particle Filtering and a Hierarchy of Multi-scale Image Features. In: Proc. Scale-Space and Morphology in Computer Vision (1999)
  13. Dubois, E.: A Projection Method to Generate Anaglyph Stereo Images. In: Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing (2001)
  14. Jones, G., Lee, D., Holliman, N., Ezra, D.: Controlling Perceived Depth in Stereoscopic Images. In: Stereoscopic Displays and Virtual Reality Systems VIII (2001)
  15. Zhou, H., Ruan, Q.: Finger Contour Tracking Based on Model. In: Conf. on Computers, Communications, Control and Power Engineering (2002)
  16. Lowe, D.: Distinctive image features from scale-invariant keypoints. IJCV (2004)
  17. Hartley, R., Zisserman, A.: Multiple View Geometry. Cambridge University Press, Cambridge (2004)
  18. Holliman, N.: Mapping perceived depth to regions of interest in stereoscopic images. In: Proc. SPIE, Stereoscopic Displays and Virtual Reality Systems XI (2004)
  19. Tran, V.M.: New methods for rendering of anaglyph stereoscopic images on CRT displays and photo-quality ink-jet printers. Ottawa-Carleton Institute for Electrical and Computer Engineering, SITE (2005)
  20. Stenger, B., Thayananthan, A., Torr, P., Cipolla, R.: Model-based hand tracking using a hierarchical Bayesian filter. IEEE Transactions on Pattern Analysis and Machine Intelligence 28(9), 1372–1384 (2006)
  21. Yang, R., Sarkar, S.: Gesture Recognition using Hidden Markov Models from Fragmented Observations. In: Proc. of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2006)
  22. McAllister, D.F., Zhou, Y., Sullivan, S.: Methods for computing color anaglyphs (2010)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Shahrouz Yousefi (1)
  • Farid Abedan Kondori (1)
  • Haibo Li (1)
  1. Digital Media Lab, Department of Applied Physics and Electronics, Teknikhuset, Umeå University, Umeå, Sweden
