
Multimedia Tools and Applications, Volume 62, Issue 3, pp 895–912

Music similarity-based approach to generating dance motion sequence

  • Minho Lee
  • Kyogu Lee
  • Jaeheung Park

Abstract

In this paper, we propose a novel approach to generating a sequence of dance motions that uses music similarity as the criterion for finding appropriate motions for a new musical input. Based on the observation that dance motions used in similar musical pieces can be a good reference when choreographing a new dance, we first construct a music-motion database comprising a number of segment-wise music-motion pairs. When a new musical input is given, it is divided into short segments, and for each segment our system suggests dance motion candidates by finding in the database the music cluster most similar to the input. After a user selects the best motion segment, we perform music-dance synchronization by cross-correlating the novelty functions of the two music segments. We evaluate our system's performance through a user study, and the results show that the dance motion sequence generated by our system achieves significantly higher ratings than a randomly generated one.
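To make the synchronization step concrete, the sketch below shows one way the cross-correlation alignment could be implemented. This is a minimal Python illustration under stated assumptions, not the authors' code: it uses librosa's onset-strength envelope as the novelty function, and the file names and helper names are hypothetical.

    # Minimal sketch of novelty-based synchronization. Assumption: librosa's
    # onset-strength envelope serves as the novelty function (not the paper's code).
    import numpy as np
    import librosa

    def novelty(path, sr=22050, hop=512):
        # Load the audio segment and compute its onset-strength envelope,
        # a simple per-frame novelty curve.
        y, sr = librosa.load(path, sr=sr)
        return librosa.onset.onset_strength(y=y, sr=sr, hop_length=hop), sr, hop

    def best_lag(query_path, db_path):
        # Return the lag (in seconds) that maximizes the cross-correlation
        # between the novelty curves of the input segment and the segment
        # retrieved from the music-motion database.
        n_q, sr, hop = novelty(query_path)
        n_db, _, _ = novelty(db_path)
        # Zero-mean both curves so the correlation peak reflects shared
        # rhythmic structure rather than overall energy level.
        n_q = n_q - n_q.mean()
        n_db = n_db - n_db.mean()
        xcorr = np.correlate(n_q, n_db, mode="full")
        lag_frames = int(np.argmax(xcorr)) - (len(n_db) - 1)
        return lag_frames * hop / sr

    # Hypothetical usage: shift the retrieved segment's paired motion by this
    # offset before splicing it into the output dance sequence.
    offset_sec = best_lag("input_segment.wav", "db_segment.wav")

The lag at the correlation peak gives the time offset at which the retrieved segment, and hence its paired dance motion, best lines up rhythmically with the input.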

Keywords

Choreography · Dance motion generation · Music similarity · Music-motion database · Motion capture · Motion synthesis

Notes

Acknowledgements

This study was supported by a grant (No. 2011-P3-15) from the Advanced Institutes of Convergence Technology (AICT). We also gratefully acknowledge Dr. Junghoon Kwon for his help with the motion capture system.

Copyright information

© Springer Science+Business Media New York 2012

Authors and Affiliations

  1. Graduate School of Convergence Science & Technology, Advanced Institutes of Convergence Technology, Seoul National University, Seoul, Republic of Korea
