Dance to Music Expressively: A Brain-Inspired System Based on Audio-Semantic Model for Cognitive Development of Robots
Cognitive development is one of the most challenging and promising research fields in robotics, and emotion and memory play an important role in it. In this paper, an audio-semantic (AS) model combining a deep convolutional neural network with a recurrent attractor network is proposed to associate music with its semantic representation. Using the proposed model, we design a system inspired by the functional structure of the limbic system in the human brain for the cognitive development of robots. The system allows the robot to make different dance decisions based on the semantic features extracted from music. The proposed model borrows mechanisms from the human brain, using a distributed attractor network to activate multiple semantic tags for a piece of music, and the results meet our expectations. In the experiments, we demonstrate the effectiveness of the model and deploy the system on the NAO robot.
Keywords: Cognitive robot · Brain-inspired system · Emotional model · Semantic representation
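To illustrate the attractor-based tag activation described in the abstract, the following is a minimal Hopfield-style sketch, not the authors' implementation: the tag patterns, dimensions, and update scheme are all assumptions made for this example. A few semantic-tag patterns are stored via Hebbian learning, and the recurrent dynamics recover a stored pattern from a noisy audio-derived cue.

```python
import numpy as np

# Hypothetical sketch of a distributed attractor network that "activates"
# stored semantic-tag patterns from a noisy cue. Patterns and sizes are
# invented for illustration only.
rng = np.random.default_rng(0)

dim = 64
# Three stored tag patterns (e.g. "happy", "calm", "energetic") as +/-1 vectors.
patterns = rng.choice([-1.0, 1.0], size=(3, dim))

# Hebbian weight matrix with zeroed self-connections (standard Hopfield rule).
W = patterns.T @ patterns / dim
np.fill_diagonal(W, 0.0)

def recall(cue, steps=20):
    """Iterate the synchronous attractor dynamics until the state settles."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0  # break ties deterministically
    return s

# Corrupt some bits of the first pattern to simulate a noisy semantic cue.
cue = patterns[0].copy()
flip = rng.choice(dim, size=6, replace=False)
cue[flip] *= -1.0

recovered = recall(cue)
overlap = float(np.mean(recovered == patterns[0]))
print(f"overlap with stored pattern: {overlap:.2f}")
```

With the network loaded far below the classical Hopfield capacity (~0.138 patterns per unit), the dynamics pull the corrupted cue back toward the stored tag pattern, which is the sense in which the attractor network "activates" a semantic tag.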