Passive Global Localisation of Mobile Robot via 2D Fourier-Mellin Invariant Matching

  • Regular paper
  • Published:
Journal of Intelligent & Robotic Systems

Abstract

Passive global localisation is defined as locating a robot on a map, under global pose uncertainty, without prescribing motion controls. Most current solutions either assume structured environments or require tuning parameters that govern the establishment of correspondences between sensor measurements and segments of the map. This article advocates a solution that dispenses with both, in order to achieve greater portability and universality across disparate static environments. A single 2D panoramic LIght Detection And Ranging (LIDAR) sensor is used as the measurement device, thereby reducing computational and investment costs. The proposed method disperses pose hypotheses over the map of the robot’s environment and captures a virtual scan from each of them. Each virtual scan is then matched against the scan derived from the physical sensor: angular alignment is performed via 2D Fourier-Mellin Invariant (FMI) matching, and positional alignment via feedback of the position estimation error. To deduce the robot’s pose, the method sifts through the hypotheses using measures extracted from FMI matching. Simulations and experiments illustrate the efficacy of the proposed global localisation solution in realistic surroundings and scenarios. In addition, the proposed method is pitted against the most effective Iterative Closest Point (ICP) variant on the same task, and three conclusions are drawn: the proposed method is effective in both structured and unstructured environments; it yields fewer false positives; and the two methods are largely equivalent in terms of pose error.
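In outline, the hypothesis-sifting loop the abstract describes (disperse hypotheses, capture a virtual scan from each, match against the real scan, keep the best fit) can be sketched as a toy. In this hedged sketch, scans are 1D panoramic range arrays and the 2D FMI angular alignment is stood in for by circular cross-correlation, a 1D stand-in rather than the authors' matcher; all names are illustrative.

```python
def circ_xcorr_peak(a, b):
    """Best circular shift of b onto a, and the correlation score there."""
    n = len(a)
    best_shift, best_score = 0, float("-inf")
    for s in range(n):
        score = sum(a[i] * b[(i + s) % n] for i in range(n))
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift, best_score

def sift_hypotheses(real_scan, virtual_scans):
    """Match the real scan against every virtual scan; keep the best fit."""
    best = None
    for idx, v in enumerate(virtual_scans):
        shift, score = circ_xcorr_peak(real_scan, v)
        if best is None or score > best[2]:
            best = (idx, shift, score)
    return best

# Toy panoramic scans with 8 beams each.
real = [1.0, 1.2, 2.0, 3.5, 3.5, 2.0, 1.2, 1.0]
virtual = [
    real[3:] + real[:3],   # hypothesis near the true pose, rotated 3 beams
    [2.0] * 8,             # hypothesis in a featureless round room elsewhere
]
k, shift, _ = sift_hypotheses(real, virtual)
print(k, shift)   # → 0 5  (shift 5 undoes the 3-beam rotation modulo 8)
```

The rotationally displaced copy of the true scan wins over the featureless decoy, and the recovered shift gives the angular correction.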


Data Availability

Data are not available at this stage, as this article is a product of research in an active and ongoing research program.

Notes

  1. This remark is general, and assumes that as the area of an environment grows, so does the number of demarcating structures in it. In simple environments with few occluding structures, fewer hypotheses may be needed, owing to the decreased dissimilarity that the absence of occlusions induces.


Funding

This research has been co-financed by the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH CREATE INNOVATE (project code: T2EDK-02000).

Author information


Contributions

Alexandros Filotheou had the main idea, implemented it in code, and wrote the article. Anastasios Tzitzis provided support with respect to the simulation environment. Emmanouil Tsardoulias conducted the analysis with regard to the PL-ICP algorithm. Antonis Dimitriou conducted the analysis of the properties of the Fourier-Mellin Transform. Andreas Symeonidis provided guidance and support with regard to the problem of global localisation in robotics. George Sergiadis provided guidance and support with regard to the utility and properties of the Fourier-Mellin Transform. Loukas Petrou provided guidance and support with regard to the problem of global localisation in robotics.

Corresponding author

Correspondence to Alexandros Filotheou.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for Publication

Not applicable.

Competing interests

None.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix


Let us illustrate the introduced methodology with an example: consider again the map depicted in Fig. 3, and let pa(11.56, 12.20, 0.0) [m, m, rad] be the robot’s true pose, and pc(7.56, 11.20, π/4) [m, m, rad] a pose hypothesis displaced from it by (− 4.0, − 1.0, π/4). At the rotation estimation stage the range scan captured from the robot’s true pose, \(\mathcal {S}_{r}^{a}\), and the virtual range scan captured from the hypothesis, \(\mathcal {S}_{\text {v}}^{c}\), are projected onto the xy plane as if each had been captured from (0, 0, 0). Figure 27a shows the projected range scan points from pa, \(\mathcal {P}_{r}^{a}\), while Fig. 27b shows those from pc, \(\mathcal {P}_{\text {v}}^{c}\). Notice that these connected point-sets capture the surroundings of the real and virtual range scan sensors from the perspective of their local reference frames. These point-sets are then discretised into 2D grids, input to FMI-SPOMF, and the rotation angle extracted between them is used to align the orientation of pc with that of pa.
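The projection-and-discretisation step can be sketched as below; the grid size, resolution, and nearest-cell rasterisation are illustrative assumptions, not the authors' choices.

```python
import math

def scan_to_grid(ranges, size=64, resolution=0.25):
    """Project a panoramic range scan, taken as if captured from (0, 0, 0),
    onto a size x size grid centred on the sensor (1 = scan endpoint)."""
    grid = [[0] * size for _ in range(size)]
    n = len(ranges)
    for i, r in enumerate(ranges):
        theta = -math.pi + 2.0 * math.pi * i / n      # beam angle in [-pi, pi)
        x, y = r * math.cos(theta), r * math.sin(theta)
        col = int(round(x / resolution)) + size // 2  # nearest grid cell
        row = int(round(y / resolution)) + size // 2
        if 0 <= row < size and 0 <= col < size:
            grid[row][col] = 1
    return grid

# A circular room of radius 2 m seen by a 360-beam panoramic sensor.
grid = scan_to_grid([2.0] * 360)
print(grid[32][40])   # → 1: the endpoint of the theta = 0 beam, 2 m ahead
```

Two such grids, one per point-set, are what a 2D spectral matcher like FMI-SPOMF consumes.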

Fig. 27

Illustration of orientational and then positional alignment of candidate pose pc with respect to true pose pa in environment CORRIDOR (Fig. 3)

Once the orientation of the hypothesis is corrected, a new map scan is captured from the renewed hypothesis \(\boldsymbol {p}_{c}^{\prime }\). Then the centroids of \(\mathcal {P}_{r}^{a}\) and of the point-set of the newly projected map scan, \(\mathcal {P}_{\text {v}}^{c\prime }\), are computed. Figure 27c depicts \(\mathcal {P}_{r}^{a}\) and its centroid Ca(− 3.57, − 0.78) [m, m], while Fig. 27d depicts \(\mathcal {P}_{\text {v}}^{c\prime }\) and its corresponding centroid Cc(0.42, 0.09) [m, m]. Notice how the two shapes are almost identical but differ in their position in the xy plane. Notice also the discrepancy between the two point-sets at the left-hand side: due to the offset between the positions of pa and \(\boldsymbol {p}_{c}^{\prime }\), a larger proportion of the map is visible from the latter, and therefore the difference between the centroids of the two point-sets, CaCc = [3.99, 0.87], does not correspond exactly to the difference in position between the two poses, which is [4.0, 1.0]. Adding CaCc to \(\boldsymbol {p}_{c}^{\prime }\) and repeating the same translation estimation process, however, makes \(\mathcal {P}_{\text {v}}^{c\prime }\) converge to \(\mathcal {P}_{r}^{a}\), and therefore \(\boldsymbol {p}_{c}^{\prime }\) to pa. Figure 27e shows the final point-set \(\mathcal {P}_{\text {v}}^{c\prime }\), which is overlaid in Fig. 27f (in red) on top of \(\mathcal {P}_{r}^{a}\) (in black).
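The translation-feedback step walked through above amounts to a centroid correction. A minimal sketch using the example's centroid values; the `centroid` helper and the variable names are illustrative:

```python
def centroid(points):
    """Mean point of a 2D point-set."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

# Centroid values from the worked example in the text.
C_a = (-3.57, -0.78)   # centroid of the real point-set P_r^a
C_c = (0.42, 0.09)     # centroid of the virtual point-set P_v^c'

# Feedback step: shift the hypothesis position by the centroid difference.
correction = (C_c[0] - C_a[0], C_c[1] - C_a[1])   # ≈ (3.99, 0.87)
p_c = (7.56, 11.20)                               # hypothesis position
p_c_new = (p_c[0] + correction[0], p_c[1] + correction[1])
# p_c_new ≈ (11.55, 12.07): close to the true (11.56, 12.20); the residual
# shrinks when the capture-and-correct step is repeated.
```

One step recovers most of the [4.0, 1.0] offset; iteration drives the remainder towards zero as the visible portions of the map converge.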

Let us now examine the possible outcomes of the same process for a false candidate pose, for instance pb. In theory pb would either be discarded at the end of the angular estimation process, due to the extraction of an (externally viewed) arbitrary scaling factor \(\sigma \in (-\infty , \underline {\sigma }] \cup [\overline {\sigma }, +\infty )\), or be accepted for position estimation, whereupon the position of the hypothesis would in all probability be moved divergently from the robot’s true pose. If \(\boldsymbol {p}_{c} \in {\mathscr{H}}\) then FMI-SPOMF would report a higher similarity degree wc > wb, and pb would be filtered out as a true negative. Conversely, if no pose hypothesis resided in the vicinity of pa, the projected range scan images captured from the hypotheses could not be angularly aligned by FMI-SPOMF, and a false hypothesis would be erroneously reported as the system’s pose estimate. This leads to the formulation of the following observation:

Remark 1

A pose hypothesis \(\boldsymbol {h} \in {\mathscr{H}}\) that resides in the vicinity of the robot’s true pose is a necessary condition for a correct solution to problem P (in the sense of Remark 1) when the problem is approached by a scan–to–map-scan matching method. In consequence, the supplied number of pose hypotheses \(|{\mathscr{H}}|\) should be proportional to the area of M.Footnote 1
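The budget the remark calls for can be realised by a dispersal policy in which \(|{\mathscr{H}}|\) scales with the free area of M. A minimal sketch; the density value and the uniform dispersal are illustrative assumptions:

```python
import math
import random

def disperse_hypotheses(free_cells, resolution, density=2.0, seed=0):
    """Draw |H| pose hypotheses uniformly over the free space of map M,
    with |H| proportional to the free area (density: hypotheses per m^2,
    an illustrative value)."""
    area = len(free_cells) * resolution ** 2     # free area of M in m^2
    n = math.ceil(density * area)                # |H| proportional to area(M)
    rng = random.Random(seed)
    hypotheses = []
    for _ in range(n):
        x, y = rng.choice(free_cells)            # uniform over free cells
        hypotheses.append((x, y, rng.uniform(-math.pi, math.pi)))
    return hypotheses

# A fully free 5 m x 5 m map at 0.25 m resolution.
free = [(i * 0.25, j * 0.25) for i in range(20) for j in range(20)]
H = disperse_hypotheses(free, resolution=0.25)
print(len(H))   # → 50: ceil(2.0 hypotheses/m^2 * 25 m^2)
```

Doubling the free area doubles the hypothesis budget, keeping the expected distance from the true pose to its nearest hypothesis roughly constant.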

In more complex settings, where the environment and its map feature repetitive structures, it may be the case that ambiguity cannot be resolved at all, regardless of the number of pose hypotheses. In others, wide open spaces may result in missing information due to the sensor’s maximum range limit. The effects of these conditions may be so pronounced that a higher similarity is established between an incorrect pose and the robot’s true pose than between the robot’s true pose and a pose residing near it. The first issue plagues all global localisation methods, since its resolution is undecidable even for a human, and maximum sensitivity is of paramount importance under such conditions. The second issue is also uncontrollable, as it manifests as a limitation imposed by the combination of the environment and the limits of the robot’s equipment.
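Returning to the false candidate pb examined above, the sigma-based rejection and the subsequent weight comparison can be sketched as follows; the thresholds and the weight values are illustrative, not the paper's:

```python
def admissible(sigma, sigma_lo=0.8, sigma_hi=1.25):
    """Keep a hypothesis only if FMI reports a near-unity scaling factor:
    panoramic scans of the same place differ by rotation, not by scale,
    so a sigma far from 1 flags a false hypothesis. Thresholds illustrative."""
    return sigma_lo < sigma < sigma_hi

# (similarity weight w, extracted scaling factor sigma) per hypothesis.
candidates = {"p_b": (0.31, 2.70), "p_c": (0.86, 1.02)}
kept = {name: w for name, (w, s) in candidates.items() if admissible(s)}
best = max(kept, key=kept.get)
print(best)   # → p_c: p_b is rejected by the sigma filter outright
```

Here pb never reaches the weight comparison; had it survived, its lower similarity weight wb < wc would have filtered it out instead.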


About this article


Cite this article

Filotheou, A., Tzitzis, A., Tsardoulias, E. et al. Passive Global Localisation of Mobile Robot via 2D Fourier-Mellin Invariant Matching. J Intell Robot Syst 104, 26 (2022). https://doi.org/10.1007/s10846-021-01535-7


Keywords

Navigation