Dialogue-Based Neural Learning to Estimate the Sentiment of a Next Upcoming Utterance

  • Chandrakant Bothe
  • Sven Magg
  • Cornelius Weber
  • Stefan Wermter
Conference paper, part of the Lecture Notes in Computer Science book series (LNCS, volume 10614)


In a conversation, humans use cues such as changes in sentiment to anticipate safety-critical situations and react accordingly. We propose to use the same cues for safer human-robot interaction through early verbal detection of dangerous situations. Due to the limited availability of sentiment-annotated dialogue corpora, we use a simple sentiment classification for utterances to neurally learn sentiment changes within dialogues and ultimately predict the sentiment of upcoming utterances. We train a recurrent neural network on context sequences of words, defined as two utterances of each speaker, to predict the sentiment class of the next utterance. Our results show that this leads to useful predictions of the sentiment class of the upcoming utterance. Results on two challenging dialogue datasets show that the predictions are similar regardless of which dataset is used for training. The prediction accuracy is about 63% for binary and 58% for multi-class classification.
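The approach described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: an LSTM cell reads a context sequence of word embeddings (standing in for the last two utterances of each speaker), and its final hidden state feeds a softmax over sentiment classes. All dimensions, parameter names, and the toy random inputs are assumptions for demonstration; a trained model would learn the weights from sentiment-labelled dialogue data.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W: (4H, D), U: (4H, H), b: (4H,);
    gates stacked as [input, forget, output, candidate]."""
    z = W @ x + U @ h + b
    H = h.shape[0]
    i = 1.0 / (1.0 + np.exp(-z[:H]))        # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2*H]))     # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2*H:3*H]))   # output gate
    g = np.tanh(z[3*H:])                    # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def predict_sentiment(embeddings, W, U, b, W_out, b_out):
    """Run the LSTM over the word embeddings of the dialogue context
    and return a probability distribution over sentiment classes."""
    H = W_out.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    for x in embeddings:                    # one embedding per context word
        h, c = lstm_step(x, h, c, W, U, b)
    logits = W_out @ h + b_out              # scores for each sentiment class
    e = np.exp(logits - logits.max())       # numerically stable softmax
    return e / e.sum()

# Toy setup: embedding dim D, hidden size H, C sentiment classes.
rng = np.random.default_rng(0)
D, H, C = 8, 16, 3
W = rng.normal(size=(4 * H, D)) * 0.1
U = rng.normal(size=(4 * H, H)) * 0.1
b = np.zeros(4 * H)
W_out = rng.normal(size=(C, H)) * 0.1
b_out = np.zeros(C)

context = rng.normal(size=(12, D))          # 12 toy word embeddings
probs = predict_sentiment(context, W, U, b, W_out, b_out)
```

With three classes this corresponds to the multi-class setting (e.g. negative, neutral, positive); setting `C = 2` gives the binary case reported in the abstract.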


Recurrent neural networks · Safe human-robot interaction · Long short-term memory (LSTM) · Word embeddings · Personal care robots



This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 642667 (SECURE).



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Chandrakant Bothe (1)
  • Sven Magg (1)
  • Cornelius Weber (1)
  • Stefan Wermter (1)
  1. Department of Informatics, Knowledge Technology, Hamburg, Germany
