Abstract
Funnel lane is a map-less visual navigation technique that qualitatively follows a path previously recorded by a camera. Unlike some other methods, funnel lane requires no calculation relating world coordinates to image coordinates. However, the funnel lane has some shortcomings. First, it provides no information about the robot's radius of rotation, which reduces the robot's maneuverability and, on some occasions, prevents it from correcting its path when a deviation occurs. Second, the funnel lane constraints sometimes cannot distinguish between forward and turning motion while the robot is inside the funnel lane, and they command the robot to go forward. This keeps the robot from following the desired path and leads to failure of its mission. This paper introduces the sloped funnel lane technique to address these shortcomings. It sets the rotation radius based on the observed frames, and it reduces the ambiguity between translation and rotation. As a result, the robot can follow any desired path, yielding more robust and accurate navigation. Experimental results in challenging scenarios on a real ground robot demonstrate the effectiveness of the sloped funnel lane technique.
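The qualitative decision at the heart of funnel-lane navigation can be illustrated with a small sketch. This is not the authors' implementation; it only shows the flavor of the per-feature test: each tracked feature's current horizontal image coordinate u_c is compared with its coordinate u_d in the corresponding teach-phase keyframe, a feature "votes" to go straight when it lies inside the funnel (same sign as u_d and smaller magnitude), and otherwise votes to turn. The function names (`funnel_vote`, `steer`) and the turn-direction convention are assumptions made for illustration.

```python
def funnel_vote(u_c: float, u_d: float) -> str:
    """Per-feature qualitative decision (illustrative only).

    u_c, u_d: horizontal coordinates relative to the image center
    (positive = right of center).
    """
    same_sign = (u_c >= 0) == (u_d >= 0)
    if same_sign and abs(u_c) <= abs(u_d):
        return "straight"          # feature lies inside the funnel lane
    # Outside the funnel: vote to turn toward the side that would
    # move the feature back toward its teach-phase coordinate u_d
    # (the sign convention here is an assumption).
    return "right" if u_c < u_d else "left"


def steer(pairs) -> str:
    """Majority vote over (u_c, u_d) feature pairs."""
    votes = [funnel_vote(u_c, u_d) for u_c, u_d in pairs]
    return max(("straight", "left", "right"), key=votes.count)


# Example: two of three features sit inside their funnels,
# so the majority vote is to keep going straight.
decision = steer([(10.0, 25.0), (-5.0, -12.0), (40.0, 30.0)])
# decision == "straight"
```

Note how this purely qualitative vote carries no notion of a turning radius and cannot tell a forward command from a turning one when most features fall inside their funnels, which is exactly the ambiguity the sloped funnel lane is introduced to reduce.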
Acknowledgements
The authors would like to thank the Artificial Intelligence Laboratory members for their support.
Cite this article
Kassir, M.M., Palhang, M. & Ahmadzadeh, M.R. Qualitative vision-based navigation based on sloped funnel lane concept. Intel Serv Robotics 13, 235–250 (2020). https://doi.org/10.1007/s11370-019-00308-4