Theta-Disparity: An Efficient Representation of the 3D Scene Structure

  • Lazaros Nalpantidis
  • Danica Kragic
  • Ioannis Kostavelis
  • Antonios Gasteratos
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 302)

Abstract

We propose theta-disparity, a new representation of 3D scene structure. It is a 2D angular depth histogram computed from a disparity map: it models the structure of the prominent objects in the scene and reveals their radial distribution relative to a point of interest. We analyze the representation and use it as a basic attention mechanism to autonomously resolve two different robotic scenarios. The method is efficient owing to its low computational complexity. We show that it can support the planning of various tasks in the industrial and service robotics domains, e.g., object grasping, manipulation, plane extraction, path detection, and obstacle avoidance.
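To make the construction concrete, the following is a minimal sketch (not the authors' implementation) of how such an angular depth histogram could be computed from a dense disparity map: every valid pixel is binned by its angle about a chosen point of interest and by its disparity value. The function name `theta_disparity`, the bin counts, and the use of the image centre as the reference point are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def theta_disparity(disparity, ref_point, n_theta=90, n_disp=64):
    """Sketch of a theta-disparity (angular depth) histogram.

    disparity : 2D array, 0 marks invalid pixels
    ref_point : (x, y) point of interest, e.g. the image centre
    Returns an (n_theta, n_disp) histogram of pixel counts.
    """
    d_max = max(float(disparity.max()), 1e-6)   # guard against an all-zero map

    ys, xs = np.nonzero(disparity > 0)          # coordinates of valid pixels
    dx = xs - ref_point[0]
    dy = ref_point[1] - ys                      # image y axis grows downwards
    theta = np.degrees(np.arctan2(dy, dx)) % 360.0

    # Quantize angle and disparity into histogram bins.
    t_bin = np.minimum((theta / 360.0 * n_theta).astype(int), n_theta - 1)
    d_bin = np.minimum((disparity[ys, xs] / d_max * n_disp).astype(int), n_disp - 1)

    hist = np.zeros((n_theta, n_disp), dtype=np.int64)
    np.add.at(hist, (t_bin, d_bin), 1)          # accumulate pixel counts per bin
    return hist

# Usage on a synthetic 640x480 disparity map, centred reference point:
# disp = np.random.rand(480, 640) * 64.0
# hist = theta_disparity(disp, ref_point=(320, 240))
```

Summing each angular row of the resulting histogram then exposes the radial distribution of prominent structure around the reference point, which is the property the abstract describes exploiting as an attention mechanism.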

Keywords

3D scene understanding · Object detection · Autonomous robotics · Industrial and service robots

Acknowledgments

This work has been supported by the Swedish Foundation for Strategic Research and the European Commission through the research projects “Extending Sensorimotor Contingencies to Cognition (eSMCs)”, FP7-ICT-2009-6-270212 and “Sustainable and Reliable Robotics for Part Handling in Manufacturing Automation (STAMINA)”, FP7-ICT-2013-10-610917.


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Lazaros Nalpantidis (1)
  • Danica Kragic (2)
  • Ioannis Kostavelis (3)
  • Antonios Gasteratos (3)

  1. Robotics, Vision and Machine Intelligence Laboratory, Department of Mechanical and Manufacturing Engineering, Aalborg University Copenhagen, Copenhagen, Denmark
  2. Computer Vision and Active Perception Laboratory, Centre for Autonomous Systems, Royal Institute of Technology (KTH), Stockholm, Sweden
  3. Robotics and Automation Laboratory, Production and Management Engineering Department, Democritus University of Thrace, Xanthi, Greece
