Task Representation in Robots for Robust Coupling of Perception to Action in Dynamic Scenes

  • Conference paper
  • First Online:
Robotics Research

Part of the book series: Springer Proceedings in Advanced Robotics (SPAR, volume 10)

Abstract

Most current perception systems are designed to represent the static geometry of the environment and to monitor the execution of their tasks in 3D Cartesian representations. While this representation allows a human-readable definition of tasks in robotic systems and provides direct references to the static environment representation, it does not correspond to the native data format of many passive sensor systems. Additional calibration parameters are necessary to transform the sensor data into Cartesian space; they decrease the robustness of the perception system, making it sensitive to changes and errors. An example of an alternative coupling strategy for perception modules is the shift from look-then-move to visual servoing in grasping, where 3D task planning is replaced by a task defined directly in the image space. Errors and goals are then represented directly in sensor space. In addition, the spatial ordering of information based on Cartesian data may lead to a wrong prioritization of dynamic objects; e.g., the nearest objects are not always the prime collision candidates in the scene. We propose alternative ways to represent task goals in robotic systems that are closer to the native sensor space and are, therefore, more robust to errors. We present our initial ideas on how these task representations can be applied in the manipulation and automotive domains.
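To make the image-space task definition concrete, below is a minimal sketch of a classical image-based visual servoing (IBVS) control law, in which the error e = s - s* between tracked and desired image features never leaves the sensor space and the camera twist is computed as v = -lambda * L+ * e. This is a textbook illustration of the coupling strategy named above, not the paper's own implementation; the function names, feature coordinates, and depth estimates are illustrative assumptions.

    # Minimal IBVS sketch (assumed textbook formulation, not the paper's code).
    import numpy as np

    def interaction_matrix(x, y, Z):
        """2x6 image Jacobian of a normalized image point (x, y) at depth Z."""
        return np.array([
            [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
            [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
        ])

    def ibvs_twist(s, s_star, Z, lam=0.5):
        """Camera twist (vx, vy, vz, wx, wy, wz) from the image-space error."""
        e = (s - s_star).reshape(-1)                # error stays in sensor space
        L = np.vstack([interaction_matrix(x, y, z)  # one 2x6 block per feature
                       for (x, y), z in zip(s, Z)])
        return -lam * np.linalg.pinv(L) @ e         # v = -lambda * pinv(L) @ e

    # Illustrative data: four tracked points driven toward a desired square.
    s      = np.array([[0.10, 0.12], [0.32, 0.11], [0.31, 0.33], [0.09, 0.30]])
    s_star = np.array([[0.10, 0.10], [0.30, 0.10], [0.30, 0.30], [0.10, 0.30]])
    print(ibvs_twist(s, s_star, Z=np.full(4, 1.0)))  # commanded camera velocity

Note that neither a 3D reconstruction of the target nor a precise extrinsic calibration enters the loop; only a rough depth estimate Z appears in the interaction matrix, which is one reason such image-space formulations tend to degrade gracefully under calibration errors.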




Author information

Corresponding author

Correspondence to Darius Burschka.



Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Burschka, D. (2020). Task Representation in Robots for Robust Coupling of Perception to Action in Dynamic Scenes. In: Amato, N., Hager, G., Thomas, S., Torres-Torriti, M. (eds) Robotics Research. Springer Proceedings in Advanced Robotics, vol 10. Springer, Cham. https://doi.org/10.1007/978-3-030-28619-4_4

