Robot Computing for Music Visualization

  • Pei-Chun Lin
  • David Mettrick
  • Patrick C. K. Hung
  • Farkhund Iqbal
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11436)

Abstract

This paper presents an algorithm design for Music Visualization on Robot (MVR), which automatically links flashlight, color, and emotion through music. The MVR algorithm is composed of two analyses: first, Music Signal Analysis; second, Music Sentiment Analysis. We integrate the two analysis results and implement the MVR algorithm on Zenbo, a robot released by ASUS. We demonstrate the Zenbo robot in luminous environments. The MVR system is not limited to the Zenbo robot and could be extended to other Artificial Intelligence (AI) equipment in the future.
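The abstract's second stage, Music Sentiment Analysis driving a light color, can be illustrated with a minimal sketch. The mapping below from a valence/arousal sentiment score to an RGB flashlight color is an illustrative assumption for exposition only, not the mapping used in the paper:

```python
# Hypothetical sketch of a sentiment-to-light-color step, in the spirit of
# the MVR idea: each music segment gets a sentiment score (valence, arousal)
# in [-1, 1], and the robot's flashlight color is chosen per quadrant.
# The quadrant-to-color choices are illustrative assumptions.

def sentiment_to_color(valence: float, arousal: float) -> tuple:
    """Map a (valence, arousal) pair in [-1, 1] to an (R, G, B) color."""
    if valence >= 0 and arousal >= 0:   # happy / excited
        return (255, 200, 0)           # warm yellow
    if valence >= 0 and arousal < 0:    # calm / content
        return (0, 180, 120)           # soft green
    if valence < 0 and arousal >= 0:    # angry / tense
        return (220, 30, 30)           # red
    return (40, 60, 160)               # sad / low energy: blue

# Example: a high-valence, high-arousal segment gets the warm color.
print(sentiment_to_color(0.7, 0.6))
```

A real pipeline would derive the sentiment scores from audio and lyric features (as the cited sentiment-analysis work does) and synchronize color changes to detected beats from the signal-analysis stage.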

Keywords

Music Visualization on Robot (MVR) · Beat tracking · Music Sentiment Analysis · Music Information Retrieval · Music Signal Analysis · Robot computing

Notes

Acknowledgment

This research work is supported by the Ministry of Education, R.O.C., under the grants of TEEP@AsiaPlus.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Information Engineering and Computer Science, Feng Chia University, Taichung, Taiwan
  2. Faculty of Business and IT, University of Ontario Institute of Technology, Oshawa, Canada
  3. College of Technological Innovation, Zayed University, Dubai, UAE