
A Basic Study on Ballroom Dance Figure Classification with LSTM Using Multi-modal Sensor

Activity and Behavior Computing

Part of the book series: Smart Innovation, Systems and Technologies ((SIST,volume 204))

Abstract

This paper presents a ballroom dance figure classification method that uses an LSTM with video and wearable sensors. Ballroom dance is a popular sport among people regardless of age or sex. However, it is difficult for less experienced dancers to learn because it involves many complex types of “dance figures”, each of which is a complete set of footsteps. We therefore aim to develop a system that assists dance practice by giving advice suited to the characteristics of each dance figure, which requires recognizing the figures correctly. Although the common approach to recognizing dance performance relies on video, it cannot simply be adopted for ballroom dance because the images of the dancers overlap each other. To solve this problem, we propose a hybrid figure recognition method that combines video and wearable sensors to improve accuracy and robustness. We collect video and wearable sensor data from seven dancers, including acceleration, angular velocity, and body-part location changes obtained by pose estimation. We then preprocess the data and feed it into an LSTM-based deep learning network. As a result, we confirmed that our approach achieved an F1-score of 0.86 for recognition of 13 figure types using the multi-modal sensors with trial-based fivefold cross-validation. We also performed user-based cross-validation and evaluated sliding window algorithms. In addition, we compared the results with our previous Random Forest method and evaluated robustness against occlusions. We found that the LSTM-based method worked better than Random Forest with keypoint data; on the other hand, the LSTM did not perform well with a sliding window algorithm. We expect the LSTM-based method to work better with a larger dance figure dataset, which is our future work. We will also investigate how to handle occlusion problems in pose estimation.
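The pipeline the abstract describes — fusing per-frame acceleration, angular velocity, and pose-keypoint features, segmenting the stream with a sliding window, and classifying each segment with an LSTM — can be sketched roughly as below. This is a minimal NumPy illustration only; the feature dimensions, window length, stride, hidden size, and the single-layer LSTM cell are assumptions for the sketch, not the authors' actual configuration (and the weights here are random rather than trained):

```python
import numpy as np

def sliding_windows(x, win, stride):
    """Segment a (T, D) multi-modal feature sequence into
    overlapping windows, returning an array of shape (N, win, D)."""
    starts = range(0, x.shape[0] - win + 1, stride)
    return np.stack([x[s:s + win] for s in starts])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyLSTMClassifier:
    """Single-layer LSTM followed by a softmax over figure classes.
    Hypothetical stand-in for a trained deep learning network."""
    def __init__(self, d_in, d_hid, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        # All four gates (input, forget, cell, output) stacked: 4*d_hid rows.
        self.W = rng.normal(0.0, 0.1, (4 * d_hid, d_in + d_hid))
        self.b = np.zeros(4 * d_hid)
        self.Wy = rng.normal(0.0, 0.1, (n_classes, d_hid))
        self.d_hid = d_hid

    def forward(self, seq):
        h = np.zeros(self.d_hid)
        c = np.zeros(self.d_hid)
        for x_t in seq:                       # one timestep per frame
            z = self.W @ np.concatenate([x_t, h]) + self.b
            i, f, g, o = np.split(z, 4)
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
        logits = self.Wy @ h                  # classify from the final state
        e = np.exp(logits - logits.max())
        return e / e.sum()

# Toy multi-modal stream: e.g. 3-axis acceleration + 3-axis angular
# velocity + 10 flattened keypoint coordinates per video frame.
T, D = 120, 16
stream = np.random.default_rng(1).normal(size=(T, D))
windows = sliding_windows(stream, win=30, stride=15)    # (7, 30, 16)
model = TinyLSTMClassifier(d_in=D, d_hid=8, n_classes=13)
probs = np.stack([model.forward(w) for w in windows])   # one distribution per window
print(windows.shape, probs.shape)
```

In the actual study the LSTM is trained end to end and evaluated with trial-based and user-based cross-validation; the sketch only shows how windowed multi-modal features flow through a recurrent classifier into a 13-way softmax.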



Acknowledgements

This research is partially supported by JSPS Grant-in-Aid for Scientific Research (B) Grant Number 17H01762 and JST CREST Grant Number 18071264. The ballroom dance performances were provided by the members of Nagoya University Ballroom Dance Club and its alumni.


Corresponding author

Correspondence to Hitoshi Matsuyama.


Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Matsuyama, H., Hiroi, K., Kaji, K., Yonezawa, T., Kawaguchi, N. (2021). A Basic Study on Ballroom Dance Figure Classification with LSTM Using Multi-modal Sensor. In: Ahad, M.A.R., Inoue, S., Roggen, D., Fujinami, K. (eds) Activity and Behavior Computing. Smart Innovation, Systems and Technologies, vol 204. Springer, Singapore. https://doi.org/10.1007/978-981-15-8944-7_13
