Should a Robot Guide Like a Human? A Qualitative Four-Phase Study of a Shopping Mall Robot

  • Päivi Heikkilä
  • Hanna Lammi
  • Marketta Niemelä
  • Kathleen Belhassein
  • Guillaume Sarthou
  • Antti Tammela
  • Aurélie Clodic
  • Rachid Alami
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11876)

Abstract

Providing guidance to customers in a shopping mall is a suitable task for a social service robot. To be useful to customers, the guidance needs to be intuitive and effective. We conducted a four-phase qualitative study to explore what kind of guidance customers need in a shopping mall, which characteristics make human guidance there intuitive and effective, and which aspects of that guidance should be applied to a social robot. We first interviewed staff working at the information booth of a shopping mall and videotaped demonstrated guidance situations. In a human–human guidance study, ten students each completed seven way-finding tasks by asking a human guide for directions. We then replicated the study setup to examine guidance situations with a social service robot, with eight students and four tasks; the robot was controlled using the Wizard-of-Oz technique. The characteristics that make human guidance intuitive and effective, such as estimating the distance to the destination and the appropriate use of landmarks and pointing gestures, appear to have the same impact when a humanoid robot gives the guidance. Based on the results, we identified nine design implications for a social guidance robot in a shopping mall.

Keywords

Shopping mall robot · Robot guidance · Design implications · Multi-phased study · Social robots

Acknowledgements

We thank Olivier Canévet for creating a Wizard of Oz user interface which we used in this study and Petri Tikka for giving technical support. We also thank Ideapark for their cooperation and the volunteer participants of the study. This work has been supported by the European Union’s Horizon 2020 research and innovation program under grant agreement No. 688147 (MuMMER project).

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Päivi Heikkilä¹ (corresponding author)
  • Hanna Lammi¹
  • Marketta Niemelä¹
  • Kathleen Belhassein²³
  • Guillaume Sarthou²
  • Antti Tammela¹
  • Aurélie Clodic²
  • Rachid Alami²

  1. VTT Technical Research Centre of Finland Ltd, Tampere, Finland
  2. LAAS-CNRS, Univ. Toulouse, CNRS, Toulouse, France
  3. CLLE, Univ. Toulouse, CNRS, UT2J, Toulouse, France