Supporting a Human-Aware World Model Through Sensor Fusion

  • Dominik Riedelbauch
  • Tobias Werner
  • Dominik Henrich
Conference paper
Part of the Mechanisms and Machine Science book series (volume 49)

Abstract

Recent research in robotics aims to combine the abilities of humans and robots through human-robot collaboration. Robots must overcome additional challenges to handle the dynamic environment of a shared workspace: in particular, they must perceive objects and the task progress in order to synchronize with humans on shared tasks. Because human interaction is unpredictable, local information about objects detected by eye-in-hand cameras and stored in a world model loses value as soon as the respective objects leave the field of view. Our contribution is an approach that makes world models aware of human influence and thus allows robots to decide whether stored information is still valid. To this end, we annotate pieces of information with certainty values that encode how trustworthy they are. Certainty is adapted over time according to additional knowledge about human presence in the workspace, provided by a global sensor. In this way, we achieve human-awareness through the fusion of local and global sensor data. Our concept is validated with a prototype implementation and experiments that examine object certainty in different scenarios of human presence.
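The mechanism described above (certainty-annotated world-model entries whose certainty decays while a global sensor reports human presence) can be sketched in a few lines of Python. All names and parameters below (WorldObject, observe, age, decay_per_s, the 0.5 threshold) are illustrative assumptions rather than the authors' implementation; a linear decay serves only as one example of how certainty might be aged.

```python
import time
from dataclasses import dataclass, field

@dataclass
class WorldObject:
    """World-model entry annotated with a certainty value in [0, 1] (assumed structure)."""
    name: str
    position: tuple                 # last observed position, e.g. (x, y, z)
    certainty: float = 1.0          # 1.0 right after a local (eye-in-hand) observation
    last_update: float = field(default_factory=time.monotonic)

    def observe(self, position):
        """Local sensor re-detects the object: refresh the data and reset certainty."""
        self.position = position
        self.certainty = 1.0
        self.last_update = time.monotonic()

    def age(self, human_nearby: bool, decay_per_s: float = 0.2):
        """Fuse in global sensor data: certainty decays only while a human is
        present in the workspace and could have moved the object."""
        now = time.monotonic()
        dt = now - self.last_update
        self.last_update = now
        if human_nearby:
            self.certainty = max(0.0, self.certainty - decay_per_s * dt)


# Usage sketch: the robot trusts an out-of-sight object only while certainty is high.
cup = WorldObject("cup", position=(0.4, 0.1, 0.0))
cup.age(human_nearby=True)   # would be called periodically with global sensor input
if cup.certainty < 0.5:      # threshold and decay rate are assumed tuning parameters
    print("Re-scan the cup before grasping: its stored pose may be outdated.")
```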

Keywords

World model · Data aging · Sensor fusion

Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  • Dominik Riedelbauch (1)
  • Tobias Werner (1)
  • Dominik Henrich (1)
  1. Lehrstuhl für Robotik und Eingebettete Systeme, Universität Bayreuth, Bayreuth, Germany