
Reflex or Reflection? The Oculomotor Behavior of the Companion Robot, Creating the Impression of Communicating with an Emotional Being

Published in Scientific and Technical Information Processing.

Abstract

The control system of the F-2 companion robot implements a competitive system of rules (scenarios) to model the robot’s reactions to a wide range of events. The system is designed to provide balanced responses to speech utterances, to events recognized by the computer vision system (the orientation of the user’s face and gaze, events in the tangram game), and to the user’s touches. In the experiment reported here, we use this system to compare two robots that can determine the orientation of a person’s face and the direction of their gaze but respond differently to that attention. We consider people’s implicit reactions to the robot’s gaze and the problem of distinguishing reflex from reflective (deliberate) behavior in eye movements as compared with other communicative actions.
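To make the idea of a competitive rule system concrete, the sketch below shows one way such scenario competition could look. It is a minimal illustration under our own assumptions: the names (Event, Scenario, choose_reaction), the weight-based arbitration, and the BML-like output strings are hypothetical, not the actual interface of the F-2 control system.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical event record: anything the robot perceives
# (a speech utterance, the user's gaze, a touch, a tangram move).
@dataclass
class Event:
    kind: str     # e.g., "speech", "gaze_at_robot", "touch", "tangram_move"
    payload: dict = field(default_factory=dict)

# A scenario is one competing rule: a trigger predicate, a priority,
# and a reaction that emits a behavior command for the robot body.
@dataclass
class Scenario:
    name: str
    triggers: Callable[[Event], bool]
    weight: float
    react: Callable[[Event], str]

def choose_reaction(event: Event, scenarios: list[Scenario]) -> str | None:
    """All scenarios triggered by the event compete; the one with the
    highest weight wins and drives the robot's output."""
    candidates = [s for s in scenarios if s.triggers(event)]
    if not candidates:
        return None
    return max(candidates, key=lambda s: s.weight).react(event)

# A gaze-response scenario outcompetes the idle scenario whenever the
# vision system reports that the user is looking at the robot.
scenarios = [
    Scenario("idle_blink", lambda e: True, weight=0.1,
             react=lambda e: "<blink/>"),
    Scenario("return_gaze", lambda e: e.kind == "gaze_at_robot", weight=0.9,
             react=lambda e: "<gaze target='user'/>"),
]

print(choose_reaction(Event("gaze_at_robot"), scenarios))  # <gaze target='user'/>
```

In a real system the winning scenario’s output would be rendered by the robot’s motors and speech synthesis; here it is just a string, and the point is only the arbitration step: every triggered rule competes, and one balanced response is selected.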


Notes

  1. The parser works with written text in Russian. When processing spoken input, the external Yandex Speech API service is used for speech recognition; ambiguous recognition hypotheses are also supported (see the sketch below).
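As a rough sketch of how several ambiguous recognition hypotheses might be carried into the parser rather than discarded, consider the fragment below. The hypothesis format and the parse_russian stand-in are our assumptions; they do not reproduce the actual Yandex Speech API response schema or the F-2 parser’s interface.

```python
# Sketch only: the hypothesis structure and parse_russian() are assumptions,
# not the real Yandex Speech API response format or the F-2 parser.
def parse_russian(text: str) -> dict:
    """Stand-in for the semantic parser over written Russian text."""
    return {"text": text, "frames": []}

# Suppose the recognizer returns several ranked hypotheses for one utterance.
hypotheses = [
    {"text": "подойди ко мне", "confidence": 0.82},  # "come to me"
    {"text": "пойдем к тебе", "confidence": 0.11},   # "let's go to you"
]

# Parse every hypothesis, not only the top one, keeping its confidence so
# that downstream scenarios can weigh alternative readings of the utterance.
parses = [
    {"confidence": h["confidence"], "parse": parse_russian(h["text"])}
    for h in hypotheses
]
```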


ACKNOWLEDGMENTS

We are grateful to the Russian State University for the Humanities for help in organizing the experiments during the difficult period of pandemic-related restrictions. We thank N.A. Arinkin, K.A. Kivva, and A.A. Filatov for their help in preparing the experiment.

Funding

The study was performed with partial financial support from the Russian Science Foundation, project no. 19-18-00547 (https://rscf.ru/project/19-18-00547/).

Author information

Corresponding author: A. A. Zinina.

Ethics declarations

The authors declare that they have no conflict of interest.

Additional information

Translated by L. Solovyova

Publisher’s Note. Allerton Press remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article


Cite this article

Zinina, A.A., Zaidelman, L.Y., Kotov, A.A. et al. Reflex or Reflection? The Oculomotor Behavior of the Companion Robot, Creating the Impression of Communicating with an Emotional Being. Sci. Tech. Inf. Proc. 50, 500–511 (2023). https://doi.org/10.3103/S0147688223050179

