
New Communicative Strategies for the Affective Robot: F-2 Going Tactile and Complimenting

  • Conference paper
  • In: Creativity in Intelligent Technologies and Data Science (CIT&DS 2021)

Abstract

This paper is dedicated to modeling affective reactions in a communicative robot via the achievement of communicative goals. The affective robot F-2’s software processes multimodal input and facts extracted from input texts. Sentences in Russian are translated with a syntactic-semantic parser into semantic structures that represent sentence meanings and comprise valencies and semantic markers. Based on the input, the robot changes its emotional state over time, generating affective remarks along with gestures and gazes. The emotional state is modeled via microstates, each represented as a communicative goal, which is further matched to multimodal reaction scenarios from a database. New communicative strategies are introduced. Based on human feature extraction from video, the robot implements strategies of complimenting and making friends; in both cases, it comments on the addressee’s appearance, including clothes and glasses. With tactile sensors, the robot is taught to react to touch; this fills a perception gap that had previously caused people inclined to tactile communication to lose interest. Precision is estimated for both methods underlying the new communicative strategies.
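The pipeline the abstract describes — perceptual events activating emotional microstates, each tied to a communicative goal that is matched against a database of multimodal reaction scenarios — can be sketched as follows. This is a minimal illustrative sketch only; all class names, event labels, and scenarios are hypothetical and are not taken from the F-2 software.

```python
from dataclasses import dataclass

# Hypothetical sketch: perceptual events raise the activation of emotional
# microstates; each microstate carries a communicative goal, and the
# strongest goal is matched to a multimodal reaction scenario
# (speech, gesture, gaze). Activations decay over time.

@dataclass
class Microstate:
    goal: str            # communicative goal, e.g. "compliment"
    activation: float    # current strength of this microstate
    decay: float = 0.8   # activation fades after each step

# toy scenario database: goal -> (remark, gesture, gaze)
SCENARIOS = {
    "compliment": ("Nice glasses!", "nod", "look_at_addressee"),
    "react_to_touch": ("Oh, hello!", "turn_head", "look_at_touch_point"),
}

# toy mapping from perceptual events to communicative goals
EVENT_TO_GOAL = {
    "touch": "react_to_touch",
    "glasses_detected": "compliment",
}

class AffectiveModel:
    def __init__(self):
        self.microstates = {}

    def perceive(self, event, strength):
        # map a perceptual event to a goal and raise its activation
        goal = EVENT_TO_GOAL.get(event)
        if goal:
            ms = self.microstates.setdefault(goal, Microstate(goal, 0.0))
            ms.activation += strength

    def step(self):
        # emit the scenario of the strongest microstate, then decay all
        if not self.microstates:
            return None
        best = max(self.microstates.values(), key=lambda m: m.activation)
        for ms in self.microstates.values():
            ms.activation *= ms.decay
        return SCENARIOS.get(best.goal)

robot = AffectiveModel()
robot.perceive("touch", 1.0)
robot.perceive("glasses_detected", 0.4)
print(robot.step())  # the touch reaction wins: it has the highest activation
```

The design choice here mirrors the abstract's description: emotion is not a single scalar but a set of competing microstates, so a strong tactile event can momentarily override a complimenting goal without erasing it.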



Acknowledgments

This research is supported by a grant of the Russian Science Foundation (project No. 19-18-00547).

The F-2 team wishes to express gratitude to all of our respondents, including students of the Power Engineering department of BMSTU. All feedback is precious to us, as it shows new directions for F-2’s further development.

We also wish to thank our colleague Edward Klyshinsky for pointing out Asimov’s short story “Sally” as a source of epigraphs, an inspiration that illuminates this article.

Author information


Correspondence to Liliya Volkova.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Volkova, L., Ignatev, A., Kotov, N., Kotov, A. (2021). New Communicative Strategies for the Affective Robot: F-2 Going Tactile and Complimenting. In: Kravets, A.G., Shcherbakov, M., Parygin, D., Groumpos, P.P. (eds) Creativity in Intelligent Technologies and Data Science. CIT&DS 2021. Communications in Computer and Information Science, vol 1448. Springer, Cham. https://doi.org/10.1007/978-3-030-87034-8_13


  • DOI: https://doi.org/10.1007/978-3-030-87034-8_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-87033-1

  • Online ISBN: 978-3-030-87034-8

  • eBook Packages: Computer Science, Computer Science (R0)
