Task Representation in Robots for Robust Coupling of Perception to Action in Dynamic Scenes

  • Darius Burschka
Conference paper
Part of the Springer Proceedings in Advanced Robotics book series (SPAR, volume 10)

Abstract

Most current perception systems are designed to represent the static geometry of the environment and to monitor the execution of their tasks in 3D Cartesian representations. While this representation allows a human-readable definition of tasks in robotic systems and provides direct references to the static environment representation, it does not correspond to the native data format of many passive sensor systems. Additional calibration parameters are necessary to transform the sensor data into Cartesian space. These parameters reduce the robustness of the perception system, making it sensitive to calibration changes and errors. An example of an alternative coupling strategy for perception modules is the shift from look-then-move to visual servoing in grasping, where 3D task planning is replaced by a task defined directly in image space. Errors and goals are then represented directly in sensor space. In addition, ordering information spatially based on Cartesian data may lead to an incorrect prioritization of dynamic objects; the nearest objects are not always the prime collision candidates in the scene. We propose alternative ways to represent task goals in robotic systems that are closer to the native sensor space and are, therefore, more robust to errors. We present our initial ideas on how these task representations can be applied in the manipulation and automotive domains.
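To make the image-space coupling concrete, the following is a minimal sketch of the classical image-based visual servoing (IBVS) control law referred to in the abstract, not the authors' own method: the error is formed between tracked and desired point features in normalized image coordinates, and a camera twist is computed through the pseudoinverse of the stacked point interaction matrices. The function names, the fixed gain, and the example feature values are illustrative assumptions.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalized image point (x, y)
    at depth Z, relating its image velocity to the camera twist [v, w]."""
    return np.array([
        [-1.0 / Z, 0.0,       x / Z, x * y,        -(1.0 + x**2),  y],
        [0.0,      -1.0 / Z,  y / Z, 1.0 + y**2,   -x * y,        -x],
    ])

def ibvs_velocity(points, goals, depths, gain=0.5):
    """One IBVS step: stack the per-point Jacobians, compute the image-space
    error s - s*, and return the camera twist v = -gain * L^+ (s - s*).
    Errors and goals live entirely in sensor (image) space."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    error = (np.asarray(points) - np.asarray(goals)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Example: drive four tracked corner features toward a goal configuration.
current = [(0.12, 0.05), (-0.10, 0.04), (-0.11, -0.08), (0.09, -0.07)]
desired = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
depths  = [1.0, 1.0, 1.0, 1.0]   # rough depth estimates are sufficient here
print(ibvs_velocity(current, desired, depths))
```

Because the error never leaves the image plane, the control loop depends only weakly on the depth estimates and on extrinsic calibration, which is the robustness argument made in the abstract.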

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Department of Computer Science, Technische Universität München, Munich, Germany