The MuMMER Project: Engaging Human-Robot Interaction in Real-World Public Spaces

  • Mary Ellen Foster
  • Rachid Alami
  • Olli Gestranius
  • Oliver Lemon
  • Marketta Niemelä
  • Jean-Marc Odobez
  • Amit Kumar Pandey
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9979)


MuMMER (MultiModal Mall Entertainment Robot) is a four-year, EU-funded project with the overall goal of developing a humanoid robot (SoftBank Robotics' Pepper robot being the primary platform) with the social intelligence to interact autonomously and naturally in the dynamic environment of a public shopping mall, providing an engaging and entertaining experience to the general public. Using co-design methods, we will work together with stakeholders, including customers, retailers, and business managers, to develop truly engaging robot behaviours. Crucially, our robot will exhibit behaviour that is socially appropriate and engaging by combining speech-based interaction with non-verbal communication and human-aware navigation. To support this behaviour, we will develop and integrate new methods from audiovisual scene processing, social-signal processing, high-level action selection, and human-aware robot navigation. Throughout the project, the robot will be regularly deployed in Ideapark, a large public shopping mall in Finland. This position paper describes the MuMMER project: its needs, objectives, R&D challenges, and our approach. It will serve as a reference for the robotics community and stakeholders about this ambitious project, demonstrating how a co-design approach can address some of the barriers and help in building follow-up projects.


Keywords: Humanoid robot, Social signals, Robot platform, Success metrics, Navigation planning



This research has been partially funded by the European Union's Horizon 2020 research and innovation programme under grant agreement no. 688147 (MuMMER).



Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Mary Ellen Foster (1)
  • Rachid Alami (2)
  • Olli Gestranius (3)
  • Oliver Lemon (4)
  • Marketta Niemelä (5)
  • Jean-Marc Odobez (6)
  • Amit Kumar Pandey (7)
  1. University of Glasgow, Glasgow, UK
  2. LAAS-CNRS, Toulouse, France
  3. Ideapark, Lempäälä, Finland
  4. Heriot-Watt University, Edinburgh, UK
  5. VTT Technical Research Centre of Finland, Tampere, Finland
  6. Idiap Research Institute, Martigny, Switzerland
  7. SoftBank Robotics Europe, Paris, France
