Towards Camera Based Navigation in 3D Maps by Synthesizing Depth Images

  • Stefan Schubert
  • Peer Neubert
  • Peter Protzel
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10454)

Abstract

This paper presents a novel approach to localizing a robot equipped with an omnidirectional camera within a given 3D map. The pose estimate builds upon the synthesis of panoramic depth images, which are compared to the current view of the camera. We present an algorithmic approach to computing the similarity between these synthetic depth images and visual images, and show how to utilize this image matching for mobile robot navigation tasks, i.e., heading estimation, global localization, and navigation towards a target position. The presented method requires neither colour nor laser intensity information in the map. We provide a first evaluation of the involved image processing pipeline and a set of proof-of-concept experiments on a mobile robot. The presented approach supports different use cases such as map sharing among heterogeneous robot teams, or the use of external sources of 3D maps such as extruded floor plans.
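
The abstract does not detail the image processing pipeline, but the two core steps it names, synthesizing a panoramic depth image from the 3D map and matching it against the camera view for heading estimation, can be illustrated roughly. The following is a minimal sketch in plain NumPy under loose assumptions: all function names are hypothetical, and a simple depth-edge cross-correlation over yaw shifts stands in for the paper's unspecified similarity measure between synthetic depth images and visual images.

```python
# Minimal sketch, NOT the paper's implementation: synthesize a panoramic
# depth image from a 3D point-cloud map at a hypothesized position, and
# estimate the heading by correlating its depth discontinuities with an
# edge panorama derived from the omnidirectional camera image.
import numpy as np


def synthesize_panoramic_depth(points, position, height=64, width=256):
    """Render an equirectangular depth panorama of `points` (N, 3)
    as seen from `position` (3,); empty pixels stay at +inf."""
    rel = points - position
    rng = np.linalg.norm(rel, axis=1)
    azim = np.arctan2(rel[:, 1], rel[:, 0])                      # [-pi, pi]
    elev = np.arcsin(np.clip(rel[:, 2] / np.maximum(rng, 1e-9), -1.0, 1.0))
    col = ((azim + np.pi) / (2 * np.pi) * width).astype(int) % width
    row = np.clip(((np.pi / 2 - elev) / np.pi * height).astype(int),
                  0, height - 1)
    depth = np.full((height, width), np.inf)
    order = np.argsort(-rng)          # write far-to-near: nearest point wins
    depth[row[order], col[order]] = rng[order]
    return depth


def depth_edges(depth):
    """Horizontal depth discontinuities; these tend to reappear as
    intensity edges in the camera panorama."""
    d = np.where(np.isfinite(depth), depth, 0.0)
    return np.abs(np.diff(d, axis=1, append=d[:, :1]))           # wraps around


def estimate_heading(synth_depth, camera_edges):
    """Visual compass: the column shift that best aligns the synthetic
    depth edges with the camera edge panorama, as yaw in [0, 2*pi)."""
    e = depth_edges(synth_depth)
    w = e.shape[1]
    scores = [np.sum(e * np.roll(camera_edges, -s, axis=1)) for s in range(w)]
    return int(np.argmax(scores)) * 2 * np.pi / w


if __name__ == "__main__":
    cloud = np.random.default_rng(0).uniform(-5.0, 5.0, size=(20000, 3))
    depth = synthesize_panoramic_depth(cloud, np.zeros(3))
    # Toy check: use the map's own depth edges, rotated by 90 degrees,
    # as the "camera" panorama and recover that heading.
    cam = np.roll(depth_edges(depth), depth.shape[1] // 4, axis=1)
    print(np.degrees(estimate_heading(depth, cam)))              # ~90.0
```

Global localization and homing would then amount to evaluating the same similarity for candidate poses sampled from the map and moving towards the best-scoring one; in practice, the brute-force shift search above would likely be replaced by an FFT-based correlation.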

Keywords

Camera-based localization · Visual compass · Visual homing · 3D map · Omnidirectional camera

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. TU Chemnitz, Chemnitz, Germany
