Abstract
The control system of the F-2 companion robot implements a competitive system of rules (scenarios) to model the robot's reactions to a wide range of events. The system is designed to provide balanced responses to speech utterances, to events recognized by the computer vision system (the orientation of the user's face and gaze, events in the tangram game), and to the user's touches. In this experiment, we use this system to compare two robots that detect the orientation of a person's face and the direction of their gaze but respond differently to this attention. We consider the implicit reactions of a person to the robot's gaze and the problem of distinguishing reflex from reflective behavior in eye movements as compared with other communicative actions.
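The competitive rule system described above can be illustrated with a minimal sketch: each scenario has a trigger condition and a priority, and among all scenarios matching an incoming event, the highest-priority one produces the robot's response. This is an illustrative reconstruction, not the F-2 implementation; the scenario names, weights, and event fields are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Scenario:
    name: str
    trigger: Callable[[dict], bool]   # does this rule match the incoming event?
    weight: float                     # priority in the competition between rules
    respond: Callable[[dict], str]    # behavior to execute if this rule wins

def select_response(event: dict, scenarios: List[Scenario]) -> Optional[str]:
    """Among all scenarios whose trigger matches the event, run the highest-weighted one."""
    matching = [s for s in scenarios if s.trigger(event)]
    if not matching:
        return None
    winner = max(matching, key=lambda s: s.weight)
    return winner.respond(event)

# Hypothetical scenarios for the event types named in the abstract:
# speech, gaze direction, and touch.
scenarios = [
    Scenario("greet",
             lambda e: e.get("type") == "speech" and "hello" in e.get("text", ""),
             0.5, lambda e: "nod_and_greet"),
    Scenario("attend_gaze",
             lambda e: e.get("type") == "gaze" and e.get("on_robot", False),
             0.8, lambda e: "return_gaze"),
    Scenario("react_touch",
             lambda e: e.get("type") == "touch",
             1.0, lambda e: "turn_to_touch"),
]

print(select_response({"type": "gaze", "on_robot": True}, scenarios))  # return_gaze
```

A competition of this kind lets several rules observe the same event stream while only the most relevant one drives the robot's behavior at any moment.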
Notes
The parser works with written text in Russian. To convert speech to text, the external Yandex Speech API service is used; ambiguous recognition hypotheses are also supported.
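Support for ambiguous recognition can be sketched as follows: the recognizer returns several hypotheses, and the parser tries them in decreasing confidence until one yields a valid parse. This is a hypothetical sketch; the N-best list format and the `toy_parse` lexicon are illustrative assumptions, not the actual Yandex Speech API response or the F-2 semantic parser.

```python
# Hypothetical N-best list: (confidence, text) pairs.
def best_parse(hypotheses, parse):
    """Try recognition hypotheses in decreasing confidence;
    return the first result the parser accepts."""
    for _, text in sorted(hypotheses, key=lambda h: -h[0]):
        result = parse(text)
        if result is not None:
            return result
    return None

# Toy "semantic parser": accepts only utterances containing a known command word.
LEXICON = {"privet": "GREETING", "poka": "FAREWELL"}

def toy_parse(text):
    for word, frame in LEXICON.items():
        if word in text.lower():
            return frame
    return None

# The top hypothesis is misrecognized; the second, lower-confidence one parses.
hyps = [(0.9, "pri vet"), (0.7, "privet robot")]
print(best_parse(hyps, toy_parse))  # GREETING
```

Falling back to lower-ranked hypotheses in this way makes the dialogue system more robust to recognition errors than committing to the single best transcript.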
ACKNOWLEDGMENTS
We express our gratitude to the Russian State University for the Humanities for help in organizing experiments during a difficult period of restrictions caused by the pandemic. We thank N.A. Arinkin, K.A. Kivva, and A.A. Filatov for their help in preparing the experiment.
Funding
The study was performed with partial financial support from the Russian Science Foundation, project no. 19-18-00547 (https://rscf.ru/project/19-18-00547/).
Ethics declarations
The authors declare that they have no conflict of interest.
Additional information
Translated by L. Solovyova
Publisher’s Note.
Allerton Press remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Zinina, A.A., Zaidelman, L.Y., Kotov, A.A. et al. Reflex or Reflection? The Oculomotor Behavior of the Companion Robot, Creating the Impression of Communicating with an Emotional Being. Sci. Tech. Inf. Proc. 50, 500–511 (2023). https://doi.org/10.3103/S0147688223050179