Abstract
Although many three-dimensional pointing-gesture recognition methods have been studied, the problem of self-occlusion has received little attention. When two body positions are used to define a pointing vector and both lie on a single camera perspective line, one position can occlude the other, causing detection inaccuracies. In this paper, we propose a pointing gesture recognition method for large display environments based on the Kinect skeleton model. To account for self-occlusion, the detected shoulder position is compensated when it is occluded by the hand. Experimental results show that, with this exception handling for self-occlusion, pointing accuracy at specific reference positions improves substantially: the average root-mean-square error was approximately 13 pixels at a screen resolution of 1920×1080.
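The abstract describes the core geometry: a pointing ray defined by two skeleton joints (shoulder and hand) is intersected with the display plane, and when the shoulder joint is unreliable because the hand occludes it, a compensated (last reliably tracked) shoulder position is substituted. The paper does not publish code, so the following is only a minimal sketch of that idea; the coordinate convention (origin at the screen's top-left corner, screen in the plane z = 0) and the physical screen size are assumptions, not values from the paper.

```python
import numpy as np

SCREEN_W_PX, SCREEN_H_PX = 1920, 1080   # screen resolution reported in the paper
SCREEN_W_M, SCREEN_H_M = 1.60, 0.90     # assumed physical screen size (metres)

def intersect_screen(shoulder, hand):
    """Intersect the shoulder->hand ray with the screen plane z = 0.

    Joints are 3-D points in metres in a frame whose origin is the
    screen's top-left corner, x right, y down, z toward the user
    (an assumed convention, not the paper's calibration).
    Returns the target in pixels, or None if the ray misses the plane.
    """
    s, h = np.asarray(shoulder, float), np.asarray(hand, float)
    d = h - s                       # pointing direction
    if abs(d[2]) < 1e-6:            # ray parallel to the screen plane
        return None
    t = -s[2] / d[2]                # ray parameter where z reaches 0
    if t <= 0:                      # pointing away from the screen
        return None
    p = s + t * d                   # intersection point (metres)
    return (p[0] / SCREEN_W_M * SCREEN_W_PX,
            p[1] / SCREEN_H_M * SCREEN_H_PX)

class OcclusionAwarePointer:
    """Substitute the last reliably tracked shoulder when the hand occludes it.

    `shoulder_tracked` stands in for the Kinect SDK's per-joint tracking
    state (tracked vs. merely inferred); the exact compensation used in
    the paper may differ from this simple hold-last-good strategy.
    """
    def __init__(self):
        self.last_good_shoulder = None

    def update(self, shoulder, hand, shoulder_tracked):
        if shoulder_tracked:
            self.last_good_shoulder = np.asarray(shoulder, float)
        elif self.last_good_shoulder is None:
            return None             # no reliable reference position yet
        return intersect_screen(self.last_good_shoulder, hand)
```

For example, a shoulder at (0.8, 0.45, 2.0) m with the hand at (0.8, 0.45, 1.0) m points straight at the screen centre, (960, 540) px; if the shoulder joint then drops to "inferred", the cached position keeps the estimate stable instead of letting the occluded joint corrupt the pointing vector.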
© 2014 Springer-Verlag Berlin Heidelberg
Cite this paper
Kim, H., Kim, Y., Ko, D., Kim, J., Lee, E.C. (2014). Pointing Gesture Interface for Large Display Environments Based on the Kinect Skeleton Model. In: Park, J., Pan, Y., Kim, CS., Yang, Y. (eds) Future Information Technology. Lecture Notes in Electrical Engineering, vol 309. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-55038-6_80
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-55037-9
Online ISBN: 978-3-642-55038-6
eBook Packages: Engineering (R0)