Enriching the Human-Robot Interaction Loop with Natural, Semantic, and Symbolic Gestures

  • Katrin Solveig Lohan
  • Hagen Lehmann
  • Christian Dondrup
  • Frank Broz
  • Hatice Kose
Reference work entry

Abstract

In this chapter, we discuss the appearance of and need for gestures as a feedback strategy for humanoid robots interacting with humans. Gestures are a nonverbal means of communication that either supports or replaces verbal communication, and they offer a rich source of communicative shortcuts. We discuss the deliberation required behind the use of gestures, as well as their different levels of granularity, and propose a new definition and categorization of gestures. This leads to a discussion of why gestures as feedback strategies deserve further investigation and of how they support the interaction loop in human-robot interaction (HRI). Finally, the chapter categorizes the state of the art in humanoid gesture capabilities and proposes the next challenges.
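
To make the proposed categorization concrete, here is a purely illustrative sketch of how the three gesture categories named in the chapter title (natural, semantic, symbolic) might be encoded in a robot's feedback-selection loop. None of this code comes from the chapter itself; the types `GestureCategory` and `Gesture` and the function `select_feedback` are hypothetical names introduced only for illustration.

```python
# Hypothetical sketch, not the authors' implementation: encodes the chapter's
# three gesture categories and the abstract's distinction between gestures
# that support speech and gestures that replace it.
from dataclasses import dataclass
from enum import Enum, auto


class GestureCategory(Enum):
    NATURAL = auto()   # spontaneous movements, e.g. hesitation or gaze shifts
    SEMANTIC = auto()  # speech-accompanying gestures, e.g. deictic pointing
    SYMBOLIC = auto()  # conventionalized gestures, e.g. emblems or signs


@dataclass
class Gesture:
    name: str
    category: GestureCategory
    replaces_speech: bool  # substitutes for the verbal channel, or only supports it?


def select_feedback(repertoire: list[Gesture], speech_available: bool) -> list[Gesture]:
    """Pick gestures usable as feedback, given whether speech is also produced."""
    if speech_available:
        return repertoire  # any gesture may support the concurrent verbal channel
    return [g for g in repertoire if g.replaces_speech]


repertoire = [
    Gesture("pointing", GestureCategory.SEMANTIC, replaces_speech=True),
    Gesture("head nod", GestureCategory.SYMBOLIC, replaces_speech=True),
    Gesture("beat gesture", GestureCategory.NATURAL, replaces_speech=False),
]
# With no speech channel, only speech-replacing gestures remain as feedback.
print([g.name for g in select_feedback(repertoire, speech_available=False)])
```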

Copyright information

© Springer Nature B.V. 2019

Authors and Affiliations

  • Katrin Solveig Lohan (1)
  • Hagen Lehmann (2)
  • Christian Dondrup (1)
  • Frank Broz (1)
  • Hatice Kose (3)
  1. School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, UK
  2. School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, UK
  3. Istanbul Technical University, Istanbul, Turkey
