Real-time dance evaluation by markerless human pose estimation
This paper presents a unified framework that evaluates dance performance by markerless estimation of human poses. Dance involves complicated poses such as full-body rotations and self-occlusions, so we first develop a human pose estimation method that is invariant to these factors; the method uses ridge data and data pruning. We then propose a metric that quantifies the similarity (i.e., timing and accuracy) between two dance sequences. To validate the proposed dance evaluation method, we conducted several experiments that evaluate pose estimation on the benchmark datasets EVAL and SMMC-10, and dance performance on a large K-Pop dance database. The proposed methods achieved a pose estimation accuracy of 0.9358 mAP, an average pose error of 3.88 cm, and 98% concordance with experts’ evaluations of dance performance.
Keywords: Human pose estimation; Dance performance evaluation
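The similarity metric summarized in the abstract must score both timing and pose accuracy between two sequences. As a rough illustration only (the paper's actual metric is not reproduced here), the idea can be sketched with dynamic time warping over per-frame joint distances: the warping path absorbs timing offsets, while the accumulated per-frame pose distance reflects accuracy. All names and shapes below are assumptions for the sketch.

```python
import numpy as np

def pose_distance(a, b):
    """Mean Euclidean distance between corresponding joints (J x 3 arrays).
    Illustrative choice; the paper's exact per-frame distance may differ."""
    return float(np.mean(np.linalg.norm(a - b, axis=1)))

def dtw_similarity(seq_a, seq_b):
    """DTW cost between two pose sequences, normalized by sequence lengths.
    Lower is more similar; identical sequences score 0."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = pose_distance(seq_a[i - 1], seq_b[j - 1])
            # Standard DTW recurrence: extend the cheapest of the three
            # predecessor alignments (skip in A, skip in B, or match).
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m] / (n + m)

# Usage: a sequence compared with itself gives cost 0.
rng = np.random.default_rng(0)
ref = [rng.standard_normal((15, 3)) for _ in range(20)]  # 20 frames, 15 joints
print(dtw_similarity(ref, ref))  # 0.0
```

Because DTW aligns frames nonlinearly in time, a performance that hits the right poses slightly early or late is penalized less than one with wrong poses, which matches the abstract's separation of "timing" from "accuracy".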
This research was partially supported by the MSIT (Ministry of Science, ICT), Korea, under the SW Starlab support program (IITP-2017-0-00897) supervised by the IITP (Institute for Information & communications Technology Promotion).
This work was partially supported by Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (IITP-2014-0-00059, Development of Predictive Visual Intelligence Technology).