Kitchen Scene Context Based Gesture Recognition: A Contest in ICPR2012
This paper introduces a new open dataset, the “Actions for Cooking Eggs (ACE) Dataset,” and summarizes the results of the contest on “Kitchen Scene Context based Gesture Recognition,” held in conjunction with ICPR2012. The dataset consists of naturally performed actions in a kitchen environment: five cooking menus were each performed by five different actors, and the cooking actions were recorded with a Kinect sensor. Both color image sequences and depth image sequences are available, and an action label is assigned to each frame. To estimate these labels, a recognition method has to analyze not only the actor’s motion but also scene context such as ingredients and cooking utensils. We compare the submitted algorithms and their results in this paper.
Keywords: Action Recognition · Depth Image · Kinect Sensor · Action Label · Motion History Image
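Because every frame carries an action label, a natural way to score a submission is frame-wise comparison of predicted and ground-truth label sequences. The sketch below computes per-class precision and recall from two frame-aligned label lists; the function name and label strings are illustrative assumptions, not the contest's official evaluation code.

```python
from collections import Counter

def framewise_scores(gt, pred):
    """Per-class (precision, recall) from frame-aligned label sequences.

    gt and pred are equal-length lists of per-frame action labels.
    (Illustrative sketch; not the contest's official metric code.)
    """
    assert len(gt) == len(pred), "sequences must be frame-aligned"
    tp, fp, fn = Counter(), Counter(), Counter()
    for g, p in zip(gt, pred):
        if g == p:
            tp[g] += 1          # correct frame for class g
        else:
            fp[p] += 1          # frame wrongly claimed for class p
            fn[g] += 1          # frame of class g that was missed
    scores = {}
    for label in set(gt) | set(pred):
        prec = tp[label] / (tp[label] + fp[label]) if (tp[label] + fp[label]) else 0.0
        rec = tp[label] / (tp[label] + fn[label]) if (tp[label] + fn[label]) else 0.0
        scores[label] = (prec, rec)
    return scores
```

For example, with ground truth `["cut", "cut", "mix"]` and prediction `["cut", "mix", "mix"]`, the class "cut" gets precision 1.0 but recall 0.5, since one of its two frames was mislabeled.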