Abstract
Several studies report successful results on how socially assistive robots can be employed as interfaces in the assisted-living domain. In this domain, a natural way to interact with robots is through speech. However, humans often use particular intonations that can change the meaning of a sentence. For this reason, a socially assistive robot should be able to recognize the intended meaning of an utterance by reasoning over the combination of linguistic and acoustic analyses of the spoken sentence, so as to truly understand the user's feedback. We developed a probabilistic model that infers the intended meaning of a spoken sentence from the analysis of its linguistic content and from the output of a classifier, trained on a dataset, that recognizes the valence and arousal of the speech prosody. The results showed that reasoning over the combination of the linguistic content and the acoustic features of the spoken sentence outperformed using the linguistic component alone.
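To illustrate the kind of probabilistic fusion the abstract describes, the sketch below combines a literal (linguistic) interpretation with a prosodic valence cue under a naive-Bayes independence assumption. This is not the authors' actual model; the meaning labels, observation symbols, and all probability tables are invented for demonstration only.

```python
# Illustrative sketch (hypothetical numbers, not the paper's model):
# fuse linguistic and prosodic evidence about the intended meaning
# of an utterance, assuming the two cues are conditionally
# independent given the meaning (naive Bayes fusion).

def infer_meaning(priors, p_words, p_prosody, words_obs, prosody_obs):
    """Return P(meaning | words, prosody) by Bayes' rule with
    conditional independence of the two observations."""
    posterior = {}
    for m in priors:
        posterior[m] = priors[m] * p_words[m][words_obs] * p_prosody[m][prosody_obs]
    z = sum(posterior.values())          # normalizing constant
    return {m: p / z for m, p in posterior.items()}

# Hypothetical scenario: the sentence is literally positive
# ("great job"), but the prosody carries negative valence.
priors    = {"approval": 0.6, "irony": 0.4}
p_words   = {"approval": {"positive": 0.9, "negative": 0.1},
             "irony":    {"positive": 0.8, "negative": 0.2}}
p_prosody = {"approval": {"pos_valence": 0.85, "neg_valence": 0.15},
             "irony":    {"pos_valence": 0.20, "neg_valence": 0.80}}

post = infer_meaning(priors, p_words, p_prosody, "positive", "neg_valence")
# Negative prosody over literally positive words shifts the
# posterior mass toward "irony", which a text-only analysis
# would miss.
```

The same fusion idea extends naturally to a Bayesian network with additional evidence nodes (e.g. arousal, dialogue context), where the conditional probability tables would be estimated from annotated speech data rather than set by hand.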
Copyright information
© 2015 Springer International Publishing Switzerland
Cite this paper
De Carolis, B., Ferilli, S., Palestra, G. (2015). Improving Speech-Based Human Robot Interaction with Emotion Recognition. In: Esposito, F., Pivert, O., Hacid, M.-S., Raś, Z., Ferilli, S. (eds) Foundations of Intelligent Systems. ISMIS 2015. Lecture Notes in Computer Science, vol 9384. Springer, Cham. https://doi.org/10.1007/978-3-319-25252-0_30
DOI: https://doi.org/10.1007/978-3-319-25252-0_30
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-25251-3
Online ISBN: 978-3-319-25252-0