Multi-dimensional Game Interface with Stereo Vision

  • Yufeng Chen
  • Mandun Zhang
  • Peng Lu
  • Xiangyong Zeng
  • Yangsheng Wang
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3711)

Abstract

A novel stereo vision tracking method is proposed to implement an interactive Human-Computer Interface (HCI). First, a feature detection method is introduced to obtain the location and orientation of the feature accurately and efficiently. Second, a search method is applied that uses probability in the time, frequency, or color space to optimize the search strategy. The 3D information is then recovered through calibration and triangulation. Compared with other methods, up to 5 degrees of freedom (DOFs) can be obtained from a single feature, including the coordinates in 3D space and the orientation information. Experiments show that the method is efficient and robust for a real-time game interface.
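As a rough illustration of the calibration-and-triangulation step described in the abstract (a sketch, not the authors' implementation), the snippet below recovers a 3D feature position from its matched 2D locations in two calibrated views using linear (DLT) triangulation. The function name and the use of NumPy are assumptions for illustration; the projection matrices would come from a prior camera calibration.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices obtained from calibration.
    x1, x2 : 2D image coordinates of the matched feature in each view.
    Returns the 3D point in the world frame.
    """
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The point is the null vector of A, found by SVD (least squares).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize
```

With the feature tracked in both images of the stereo pair, this yields the three translational DOFs; the remaining orientation DOFs in the paper come from the feature's detected orientation, which this sketch does not cover.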

Keywords

Stereo Vision · Local Moment · Human-Computer Interface · Triangulation Process · Game Interface


Copyright information

© IFIP International Federation for Information Processing 2005

Authors and Affiliations

  • Yufeng Chen¹
  • Mandun Zhang¹
  • Peng Lu¹
  • Xiangyong Zeng¹
  • Yangsheng Wang¹

  1. Institute of Automation, Chinese Academy of Sciences, Beijing, P.R. China
