
Journal of Real-Time Image Processing, Volume 11, Issue 2, pp 335–348

Real-time monocular image-based path detection

A GPU-based embedded solution for on-board execution on mobile robots
  • Pablo De Cristóforis
  • Matías A. Nitsche
  • Tomáš Krajník
  • Marta Mejail
Special Issue Paper

Abstract

In this work, we present a new real-time image-based monocular path detection method. It requires no camera calibration and works on semi-structured outdoor paths. The core of the method segments each image and classifies every super-pixel to infer a contour of navigable space. This allows a mobile robot equipped with a monocular camera to follow different naturally delimited paths. The shape of the contour is used to compute the robot's forward and steering speeds. To achieve the real-time performance necessary for on-board execution on mobile robots, the image segmentation is implemented on a low-power embedded GPU. The validity of our approach has been verified with an image dataset of various outdoor paths as well as with a real mobile robot.
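The pipeline described above — segment the image into super-pixels, classify each one as navigable or not, extract the contour of navigable space, and derive a steering command from its shape — can be illustrated with a simplified sketch. The code below is not the paper's implementation: the colour-distance classifier, the seed-region appearance model, and all function names are illustrative assumptions (the paper itself runs segmentation on an embedded GPU and uses its own classification scheme), and it assumes a super-pixel label map is already available.

```python
import numpy as np

def classify_superpixels(labels, image, seed_mask, thresh=40.0):
    """Label each super-pixel as path (True) or non-path (False).

    The mean colour of the seed region serves as a simple path-appearance
    model; the seed region is assumed to lie directly in front of the robot.
    """
    model = image[seed_mask].mean(axis=0)
    is_path = {}
    for sid in np.unique(labels):
        mean = image[labels == sid].mean(axis=0)
        is_path[int(sid)] = bool(np.linalg.norm(mean - model) < thresh)
    return is_path

def navigable_contour(labels, is_path):
    """For every column, find the topmost row still classified as path;
    this polyline approximates the contour of navigable space."""
    h, w = labels.shape
    mask = np.zeros((h, w), dtype=bool)
    for sid, ok in is_path.items():
        if ok:
            mask[labels == sid] = True
    contour = np.full(w, h, dtype=int)          # h means "no path in column"
    for x in range(w):
        rows = np.nonzero(mask[:, x])[0]
        if rows.size:
            contour[x] = rows.min()
    return contour

def steering_from_contour(contour, img_h, img_w):
    """Steer toward the horizontal centroid of the navigable area,
    normalised to [-1, 1], where 0 means straight ahead."""
    area = (img_h - contour).astype(float)      # free rows below the contour
    if area.sum() == 0.0:
        return 0.0                              # no navigable space found
    cx = (np.arange(img_w) * area).sum() / area.sum()
    return float(2.0 * cx / (img_w - 1) - 1.0)
```

A symmetric contour yields a steering command near zero, while a contour skewed to one side pushes the command toward that side; the forward speed can similarly be scaled by the total navigable area.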

Keywords

Mobile Robot · Graphics Processing Unit · Compute Unified Device Architecture · Horizon Line · Road Edge
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Acknowledgments

This work has been supported by the Ministry of Education of the Czech Republic under project 7AMB12AR022 and by the Ministry of Science of Argentina under project ARC/11/11.


Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Pablo De Cristóforis (1)
  • Matías A. Nitsche (1)
  • Tomáš Krajník (2)
  • Marta Mejail (1)
  1. Laboratory of Robotics and Embedded Systems, Computer Science Department, Faculty of Exact and Natural Sciences, University of Buenos Aires, Buenos Aires, Argentina
  2. Intelligent and Mobile Robotics Group, Department of Cybernetics, Faculty of Electrical Engineering, Czech Technical University in Prague, Prague 2, Czech Republic
