
Utilization of Depth and Color Information in Mobile Robotics

  • Maciej Stefańczyk
  • Konrad Bojar
  • Włodzimierz Kasprzak
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 226)

Abstract

Computer vision plays an increasing role in robotics, as the growing computing power of modern computers allows ever more advanced algorithms to be implemented. Along with visual information, depth is also widely used in mobile robot navigation, for example for obstacle detection. Now that cheap depth sensors have become popular, data from both sources can be combined to further enhance navigation and object detection. This article presents several ways of utilizing integrated video and depth images in mobile robotics: image segmentation for environment description, optical flow estimation for obstacle avoidance, and object detection for semantic map creation. All of the presented examples are based on real, working applications, which further confirms the validity of the proposed methods.
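The combination of the two modalities summarized above can be illustrated with a short sketch. The following Python/OpenCV fragment is a minimal illustration only, not the authors' implementation: dense optical flow is computed on the color stream with the Farneback method, and the resulting motion field is gated by depth to flag nearby moving pixels as obstacle candidates. The synthetic frames and the thresholds NEAR_M and MIN_FLOW_PX are placeholders standing in for real RGB-D sensor data and tuned values.

    # Minimal sketch: depth-gated optical flow for obstacle candidates.
    # Frames are synthetic placeholders for an RGB-D sensor stream.
    import cv2
    import numpy as np

    H, W = 480, 640
    rng = np.random.default_rng(0)

    # Placeholder frames; a real system reads these from the sensor driver.
    prev_gray = rng.integers(0, 256, (H, W), dtype=np.uint8)
    curr_gray = rng.integers(0, 256, (H, W), dtype=np.uint8)
    depth_m = rng.uniform(0.4, 5.0, (H, W)).astype(np.float32)  # metres

    # Dense optical flow on the (grayscale) color stream.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    speed = np.linalg.norm(flow, axis=2)  # per-pixel motion magnitude

    # Depth gate: keep pixels that move noticeably AND lie close to the robot.
    NEAR_M = 1.0       # hypothetical proximity threshold (metres)
    MIN_FLOW_PX = 2.0  # hypothetical motion threshold (pixels/frame)
    obstacle_mask = (speed > MIN_FLOW_PX) & (depth_m < NEAR_M)

    print("obstacle candidate pixels:", int(obstacle_mask.sum()))

The same gating idea extends to the other two uses named in the abstract: segmentation can treat depth as an extra channel alongside color, and object detections can be back-projected through depth into the map frame.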

Keywords

Mobile Robot · Color Image · Motion Vector · Object Detection · Obstacle Avoidance



Copyright information

© Springer International Publishing Switzerland 2013

Authors and Affiliations

  • Maciej Stefańczyk (1)
  • Konrad Bojar (2)
  • Włodzimierz Kasprzak (1)

  1. Institute of Control and Computation Engineering, Warsaw University of Technology, Warsaw, Poland
  2. Industrial Research Institute for Automation and Measurements, Warsaw, Poland
