Dance to Music Expressively: A Brain-Inspired System Based on Audio-Semantic Model for Cognitive Development of Robots

  • Dengju Li
  • Rui Yan
  • Xiaoliang Xu
  • Huajin Tang
Conference paper
Part of the Communications in Computer and Information Science book series

Abstract

Cognitive development is one of the most challenging and promising research fields in robotics, and emotion and memory play important roles in it. In this paper, an audio-semantic (AS) model that combines a deep convolutional neural network with a recurrent attractor network is proposed to associate music with its semantic representation. Using the proposed model, we design a system, inspired by the functional structure of the limbic system in the human brain, for the cognitive development of robots. The system allows the robot to make different dance decisions based on the semantic features it extracts from music. The model borrows mechanisms from the human brain, using a distributed attractor network to activate multiple semantic tags for a piece of music, and the resulting activations match expectations. Experiments demonstrate the effectiveness of the model, and we deploy the system on a NAO robot.
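
To make the described architecture concrete, here is a minimal PyTorch sketch of an audio-semantic pipeline in the spirit of the abstract: a small convolutional encoder maps a mel-spectrogram to a feature vector, and a recurrent attractor layer relaxes toward a stable state whose units act as semantic tags, several of which can be active at once. The tag names, layer sizes, and update rule are illustrative assumptions, not details taken from the paper.

# Minimal sketch (not the authors' code): a CNN audio encoder feeding a
# recurrent attractor network whose settled state activates multiple
# semantic tags. All sizes, tags, and update rules are assumptions.
import torch
import torch.nn as nn

TAGS = ["happy", "sad", "calm", "energetic"]  # hypothetical semantic tags

class AudioEncoder(nn.Module):
    """Deep CNN: mel-spectrogram (batch, 1, mels, frames) -> feature vector."""
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        return self.fc(self.conv(spec).flatten(1))

class SemanticAttractor(nn.Module):
    """Recurrent attractor over tag units: the audio feature acts as a
    constant input current, and the state is iterated until it settles
    near a fixed point that may activate several tags at once."""
    def __init__(self, feat_dim: int = 64, n_tags: int = len(TAGS), steps: int = 20):
        super().__init__()
        self.project = nn.Linear(feat_dim, n_tags)
        self.recur = nn.Linear(n_tags, n_tags, bias=False)  # lateral weights
        self.steps = steps

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        drive = self.project(feat)
        state = torch.zeros_like(drive)
        for _ in range(self.steps):  # relax toward an attractor state
            state = torch.sigmoid(drive + self.recur(state))
        return state  # per-tag activations in [0, 1]

if __name__ == "__main__":
    spec = torch.randn(1, 1, 64, 128)  # fake mel-spectrogram clip
    activ = SemanticAttractor()(AudioEncoder()(spec))
    active = [t for t, a in zip(TAGS, activ[0].tolist()) if a > 0.5]
    print("active semantic tags:", active)  # would drive the dance decision

Thresholded tag activations from the settled state would then drive the dance-decision stage; the paper's distributed attractor network may of course differ in form from this simple sigmoid-relaxation toy.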

Keywords

Cognitive robot · Brain-inspired system · Emotional model · Semantic representation

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Neuromorphic Computing Research Center, College of Computer Science, Sichuan University, Chengdu, China
  2. School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, China
