The Visual Computer, Volume 33, Issue 10, pp 1279–1289

Joint entropy-based motion segmentation for 3D animations

  • Guoliang Luo
  • Gang Lei
  • Yuanlong Cao
  • Qinghua Liu
  • Hyewon Seo
Original Article

Abstract

With the recent advancement of data acquisition techniques, 3D animation is becoming a new and challenging subject for data processing. In this paper, we present a joint entropy-based key-frame extraction method, which further yields a motion segmentation method for 3D animations. We start by applying an existing deformation-based feature descriptor to measure the degree of deformation of each triangle within each frame, from which we compute the statistical joint probability distribution of the triangles’ deformation between two consecutive subsequences of frames of a fixed length. We then compute the joint entropy between the two subsequences. This allows us to extract key-frames by taking the local maxima of the joint entropy curve (or energy curve) of a given 3D animation. Furthermore, we classify the extracted key-frames by grouping key-frames with similar motions into the same cluster. Finally, we compute a boundary frame between each pair of neighboring key-frames that belong to different motions, which is achieved by minimizing the variance of energy between the two motions. The experimental results show that our method successfully extracts representative key-frames of different motions, and comparisons with existing methods show the effectiveness and efficiency of our method.
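The pipeline described above (per-triangle deformation values, a joint distribution between two consecutive fixed-length windows, joint entropy, and local maxima as key-frames) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the input array `defo`, the window length `w`, the histogram binning, and the function names are all assumptions made for the example.

```python
import numpy as np

def joint_entropy(defo, t, w, bins=8):
    """Joint entropy between the two length-w windows meeting at frame t.

    defo : (frames, triangles) array of per-triangle deformation magnitudes
           (a stand-in for the deformation-based feature descriptor).
    The joint distribution is estimated from the per-triangle pairs
    (mean deformation before t, mean deformation after t).
    """
    a = defo[t - w:t].mean(axis=0)   # per-triangle deformation, window before t
    b = defo[t:t + w].mean(axis=0)   # per-triangle deformation, window after t
    hist, _, _ = np.histogram2d(a, b, bins=bins)
    p = hist / hist.sum()            # statistical joint probability distribution
    p = p[p > 0]                     # drop empty cells (0 * log 0 := 0)
    return -np.sum(p * np.log2(p))   # joint entropy in bits

def key_frames(defo, w=5, bins=8):
    """Candidate key-frames as local maxima of the joint-entropy curve."""
    T = defo.shape[0]
    curve = np.array([joint_entropy(defo, t, w, bins) for t in range(w, T - w)])
    keys = [i + w for i in range(1, len(curve) - 1)
            if curve[i] > curve[i - 1] and curve[i] > curve[i + 1]]
    return keys, curve
```

A frame where the motion changes makes the two adjacent windows statistically dissimilar, which spreads the joint histogram over more cells and raises the joint entropy; hence local maxima of the curve mark candidate key-frames.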

Keywords

Joint entropy · Key-frame extraction · Motion segmentation · 3D animation

Notes

Acknowledgments

This work has been jointly supported by the National Natural Science Foundation of China (Nos. 61602222, 61562044), the Science and Technology Research Program funded by the Education Department of Jiangxi Province (No. GJJ150359), the Doctoral Research Project of JXNU (6754), the Science and Technology Research Project of Jiangxi Provincial Department of Education (No. GJJ14246) and the French national project SHARED (Shape Analysis and Registration of People Using Dynamic Data, No. 10-CHEX-014-01). We are also thankful for the assistance of our colleagues Lei Haopeng, Yi Yugen and Li Zhangyu.

Supplementary material

Supplementary material 1 (MOV 23331 KB)


Copyright information

© Springer-Verlag Berlin Heidelberg 2016

Authors and Affiliations

  1. Jiangxi Normal University (MIMLab), Nanchang, China
  2. Université de Strasbourg (ICube, UMR 7357, CNRS), Strasbourg, France