SPENCER: A Socially Aware Service Robot for Passenger Guidance and Help in Busy Airports

  • Rudolph Triebel
  • Kai Arras
  • Rachid Alami
  • Lucas Beyer
  • Stefan Breuers
  • Raja Chatila
  • Mohamed Chetouani
  • Daniel Cremers
  • Vanessa Evers
  • Michelangelo Fiore
  • Hayley Hung
  • Omar A. Islas Ramírez
  • Michiel Joosse
  • Harmish Khambhaita
  • Tomasz Kucner
  • Bastian Leibe
  • Achim J. Lilienthal
  • Timm Linder
  • Manja Lohse
  • Martin Magnusson
  • Billy Okal
  • Luigi Palmieri
  • Umer Rafi
  • Marieke van Rooij
  • Lu Zhang
Chapter
Part of the Springer Tracts in Advanced Robotics book series (STAR, volume 113)

Abstract

We present a comprehensive description of a socially compliant mobile robotic platform developed within the EU-funded project SPENCER. The purpose of this robot is to assist, inform, and guide passengers in large and busy airports. One particular aim is to bring travellers with connecting flights conveniently and efficiently from their arrival gate to passport control. The uniqueness of the project stems, on one side, from the strong demand for service robots in this application, with a large potential impact on the aviation industry, and, on the other side, from the scientific advancements in social robotics brought forward and achieved in SPENCER. The main contributions of SPENCER are novel methods to perceive, learn, and model human social behavior and to use this knowledge to plan appropriate actions in real time for mobile platforms. In this paper, we describe how the project advances the fields of detection and tracking of individuals and groups, recognition of human social relations and activities, normative human behavior learning, socially-aware task and motion planning, learning socially annotated maps, and conducting empirical experiments to assess socio-psychological effects of normative robot behaviors.
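As a concrete illustration of the navigation-related contributions, the sketch below shows a linear, feature-based cost function of the kind commonly paired with inverse reinforcement learning for socially-aware motion planning: candidate robot positions are scored by hand-named features, and IRL would fit the feature weights from human demonstration trajectories. All names and values (social_features, the personal-space radius, the example weights) are hypothetical placeholders; this is a minimal sketch under generic assumptions, not the SPENCER implementation.

```python
import numpy as np

def social_features(pos, goal, people, personal_space=1.2):
    """Feature vector for one candidate position (all feature names hypothetical)."""
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    goal_dist = np.linalg.norm(goal - pos)
    if len(people):
        d = np.linalg.norm(np.asarray(people, float) - pos, axis=1)
        # Penalty grows as the candidate intrudes into personal space (proxemics).
        intrusion = np.sum(np.maximum(0.0, personal_space - d))
        min_person_dist = d.min()
    else:
        intrusion, min_person_dist = 0.0, np.inf
    return np.array([goal_dist, intrusion, 1.0 / (1.0 + min_person_dist)])

def cost(pos, goal, people, weights):
    """Linear cost; in an IRL setting the weights would be learned, not hand-set."""
    return float(weights @ social_features(pos, goal, people))

if __name__ == "__main__":
    goal = [10.0, 0.0]
    people = [[4.0, 0.5], [6.0, -0.3]]           # observed pedestrian positions
    weights = np.array([1.0, 5.0, 2.0])           # placeholder values, not learned
    for candidate in ([3.0, 0.0], [3.0, 2.0]):    # straight through vs. small detour
        print(candidate, cost(candidate, goal, people, weights))
```

Under such a model, a planner compares candidate positions or trajectory segments by this cost, so that learned weights trade off progress to the goal against intrusion into people's personal space.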

Keywords

Social rules, motion planning, human social behavior, inverse reinforcement learning, normal distributions transform


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Rudolph Triebel (1)
  • Kai Arras (2)
  • Rachid Alami (3)
  • Lucas Beyer (4)
  • Stefan Breuers (4)
  • Raja Chatila (5)
  • Mohamed Chetouani (5)
  • Daniel Cremers (1)
  • Vanessa Evers (6)
  • Michelangelo Fiore (3)
  • Hayley Hung (7)
  • Omar A. Islas Ramírez (5)
  • Michiel Joosse (6)
  • Harmish Khambhaita (3)
  • Tomasz Kucner (8)
  • Bastian Leibe (4)
  • Achim J. Lilienthal (8)
  • Timm Linder (2)
  • Manja Lohse (6)
  • Martin Magnusson (8)
  • Billy Okal (2)
  • Luigi Palmieri (2)
  • Umer Rafi (4)
  • Marieke van Rooij (9)
  • Lu Zhang (6, 7)

  1. Department of Computer Science, TU Munich, Munich, Germany
  2. Social Robotics Lab, University of Freiburg, Freiburg im Breisgau, Germany
  3. LAAS-CNRS: Laboratory for Analysis and Architecture of Systems, Toulouse, France
  4. RWTH Aachen, Aachen, Germany
  5. ISIR-CNRS: Institute for Intelligent Systems and Robotics, Paris, France
  6. University of Twente, Enschede, The Netherlands
  7. Delft University of Technology, Delft, The Netherlands
  8. Örebro University, Örebro, Sweden
  9. University of Amsterdam, Amsterdam, The Netherlands