
Image-Based Indoor Topological Navigation with Collision Avoidance for Resource-Constrained Mobile Robots

  • Regular Paper
  • Published in: Journal of Intelligent & Robotic Systems

Abstract

This paper presents a complete topological navigation system for a resource-constrained mobile robot such as Pepper, based on image memory and the teach-and-repeat paradigm. The image memory is constructed from a set of reference images that are acquired during a prior mapping phase and arranged topologically. A* search finds the optimal path between the current location and the destination. Images from the robot’s RGB camera localize the robot within the topological graph, and an Image-Based Visual Servoing (IBVS) control scheme drives the robot toward the next node in the graph. Depth images update a local egocentric occupancy grid, and a second IBVS controller navigates the local free space. The outputs of the two IBVS controllers are fused to form the final control command for the robot. We demonstrate real-time navigation with the Pepper robot in an indoor open-plan office environment without the need for accurate metric mapping and localization. Our core navigation module runs entirely onboard the robot, which has quite limited computing capability, at 5 Hz without requiring any external computing resources. We successfully performed navigation trials over 15 days, visiting more than 50 destinations and traveling more than 1200 m with a success rate of over 80%. We discuss the remaining challenges and openly share our software.
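The path-planning step described in the abstract (A* search over a topological graph whose nodes are reference images) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the node names, edge costs, and zero heuristic are hypothetical placeholders.

```python
import heapq

def a_star(graph, start, goal, heuristic):
    """A* search over a topological graph of reference-image nodes.

    graph: dict mapping node -> list of (neighbor, edge_cost) pairs
    heuristic: admissible estimate of remaining cost to the goal
    Returns the node sequence from start to goal, or None if unreachable.
    """
    # Priority queue entries: (f = g + h, g, node, path so far)
    open_set = [(heuristic(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for neighbor, cost in graph.get(node, []):
            g_new = g + cost
            if g_new < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = g_new
                heapq.heappush(
                    open_set,
                    (g_new + heuristic(neighbor), g_new, neighbor, path + [neighbor]),
                )
    return None

# Toy topological map: nodes stand in for key images, edge costs for
# traversal effort between them (values are made up for illustration).
graph = {
    "img0": [("img1", 1.0)],
    "img1": [("img2", 1.0), ("img3", 2.5)],
    "img2": [("img3", 1.0)],
    "img3": [],
}

# With a zero heuristic this reduces to Dijkstra's algorithm.
print(a_star(graph, "img0", "img3", heuristic=lambda n: 0.0))
# → ['img0', 'img1', 'img2', 'img3']
```

In the system described above, each node would correspond to a reference image in the memory and the resulting node sequence would be handed to the IBVS controller, which drives the robot node to node.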


Code Availability

The complete software is open source; the source code is available at https://github.com/qcr/pepper_navigation.git


Funding

This research was supported by the Australian Research Council Centre of Excellence for Robotic Vision (project number CE140100016) and funded by the Queensland Government under an Advance Queensland Grant.

Author information


Contributions

All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by Suman Raj Bista. The manuscript was written by Suman Raj Bista and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Suman Raj Bista.

Ethics declarations

Conflict of Interests

Distinguished Professor Peter Corke (one of the authors of this manuscript) is on the Board of Governors of this journal.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Electronic supplementary material

Supplementary video (MP4, 93.8 MB).


About this article


Cite this article

Bista, S.R., Ward, B. & Corke, P. Image-Based Indoor Topological Navigation with Collision Avoidance for Resource-Constrained Mobile Robots. J Intell Robot Syst 102, 55 (2021). https://doi.org/10.1007/s10846-021-01390-6


Keywords

Navigation