
A Modular Approach to Gesture Recognition for Interaction with a Domestic Service Robot

  • Stefan Schiffer
  • Tobias Baumgartner
  • Gerhard Lakemeyer
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7102)

Abstract

In this paper, we propose a system for robust and flexible visual gesture recognition on a mobile robot for domestic service robotics applications. This adds a simple yet powerful mode of interaction, especially for the targeted user group of laymen and elderly or disabled people in home environments. Existing approaches often use a monolithic design, are computationally expensive, rely on previously learned (static) color models, or require a specific initialization procedure to start gesture recognition. We propose a multi-step modular approach in which we iteratively reduce the search space while retaining flexibility and extensibility. Building on a set of existing approaches, we integrate an on-line color calibration and adaptation mechanism for hand detection, followed by feature-based posture recognition. Finally, after tracking the hand over time, we adopt a simple yet effective gesture recognition method that does not require any training.
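To make the final stage of the pipeline concrete, the Python sketch below illustrates one way a training-free gesture recognition step over tracked hand trajectories could look, in the spirit of a template matcher such as the $1 recognizer. It is an illustrative assumption, not the authors' implementation: the function names (resample, normalize, classify) and the example templates are hypothetical, and the observed trajectory is synthetic rather than coming from an actual hand detector and tracker.

```python
# Illustrative sketch (not the authors' code): a minimal, training-free
# trajectory matcher over 2D hand positions produced by upstream
# detection and tracking modules.
import math

def resample(points, n=32):
    """Resample a 2D trajectory to n roughly equidistant points."""
    total = sum(math.dist(points[i - 1], points[i]) for i in range(1, len(points)))
    if total == 0:
        return [points[0]] * n
    step = total / (n - 1)
    resampled = [points[0]]
    pts = list(points)
    acc = 0.0
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= step:
            t = (step - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            resampled.append(q)
            pts.insert(i, q)      # continue walking from the interpolated point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(resampled) < n:     # guard against floating-point shortfall
        resampled.append(pts[-1])
    return resampled[:n]

def normalize(points):
    """Translate to the centroid and scale into a unit bounding box."""
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    w, h = (max(xs) - min(xs)) or 1.0, (max(ys) - min(ys)) or 1.0
    return [((x - cx) / w, (y - cy) / h) for x, y in points]

def classify(trajectory, templates):
    """Return the template name with the smallest average point distance."""
    candidate = normalize(resample(trajectory))
    best, best_d = None, float("inf")
    for name, tmpl in templates.items():
        ref = normalize(resample(tmpl))
        d = sum(math.dist(a, b) for a, b in zip(candidate, ref)) / len(ref)
        if d < best_d:
            best, best_d = name, d
    return best, best_d

# Hypothetical example: distinguish a horizontal "wave" from a vertical "raise".
templates = {
    "wave":  [(x, 0.0) for x in range(0, 100, 5)],
    "raise": [(0.0, y) for y in range(0, 100, 5)],
}
observed = [(x, 2.0) for x in range(0, 100, 10)]   # roughly horizontal motion
print(classify(observed, templates))                # -> ('wave', small distance)
```

In a complete system, the observed trajectory would be supplied by the color-adaptive hand detection and tracking stages described above; new gestures are added simply by recording one template per gesture, which is what makes the approach training-free.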

Keywords

Random Forest · Gesture Recognition · False Detection · Color Model · Modular Approach



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Stefan Schiffer (1)
  • Tobias Baumgartner (1)
  • Gerhard Lakemeyer (1)

  1. Knowledge-Based Systems Group, RWTH Aachen University, Aachen, Germany
