International Journal of Social Robotics, Volume 8, Issue 5, pp 685–694

Human Visual Attention Model Based on Analysis of Magic for Smooth Human–Robot Interaction

  • Yusuke Tamura
  • Takafumi Akashi
  • Shiro Yano
  • Hisashi Osumi

Abstract

In order to interact smoothly with humans, it is desirable that a robot be able to guide human attention and behavior. In this study, we developed a model of human visual attention for guiding attention, based on an analysis of a magic trick performance. We measured the gaze points of people watching a video of a magic trick performance and compared them with the area to which the magician intended to draw the spectator's attention. The analysis showed that the relationship among the magician's face, hands, and gaze plays an important role in guiding the spectator's attention. On the basis of these preliminary user studies, we developed a novel human attention model by integrating a saliency map with a manipulation map that describes the relationship between gaze and hands. An evaluation using the observed gaze points demonstrated that the proposed model explains human visual attention better than the saliency map alone while people watch a video of a magic trick performance.
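The abstract describes the combination of the two maps only at a high level. As a rough, informal illustration (not the authors' actual formulation), the Python sketch below builds a manipulation map as a Gaussian centred on the magician's hand position and blends it with a precomputed bottom-up saliency map by a weighted sum; the helper names `gaussian_map` and `attention_map`, the blending weight `alpha`, and the Gaussian width `sigma` are all hypothetical choices for this sketch.

```python
import numpy as np

def gaussian_map(shape, center, sigma):
    """2-D Gaussian bump with peak value 1, centred on `center` = (x, y)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cx, cy = center
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))

def attention_map(saliency, hand_pos, sigma=40.0, alpha=0.5):
    """Blend a bottom-up saliency map with a 'manipulation map' centred on
    the hand being attended to, then renormalise to [0, 1].

    saliency : (H, W) float array in [0, 1], e.g. an Itti-Koch saliency map.
    hand_pos : (x, y) pixel position of the magician's hand.
    alpha    : weight of the manipulation map (assumed value, not from the paper).
    """
    manipulation = gaussian_map(saliency.shape, hand_pos, sigma)
    combined = (1.0 - alpha) * saliency + alpha * manipulation
    return combined / combined.max()

# Usage: predict the most likely gaze target in one video frame.
sal = np.random.rand(360, 640)            # stand-in for a real saliency map
att = attention_map(sal, hand_pos=(420, 200))
y, x = np.unravel_index(att.argmax(), att.shape)
print("predicted gaze point:", (x, y))
```

In the paper, the manipulation map is derived from the observed relationship between the magician's gaze and hands; the fixed Gaussian used here simply stands in for that component to show how the two maps could be fused.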

Keywords

Attention · Magic · Saliency · Gaze · Human–robot interaction


Acknowledgments

This work was supported by the Japan Society for the Promotion of Science, Grant-in-Aid for Young Scientists (B), 24700190.


Copyright information

© Springer Science+Business Media Dordrecht 2016

Authors and Affiliations

  • Yusuke Tamura, The University of Tokyo, Tokyo, Japan
  • Takafumi Akashi, Chuo University, Tokyo, Japan
  • Shiro Yano, Tokyo University of Agriculture and Technology, Tokyo, Japan
  • Hisashi Osumi, Chuo University, Tokyo, Japan
