Simultaneous Localization and Mapping in the Epoch of Semantics: A Survey

  • Muhammad Sualeh
  • Gon-Woo Kim

Abstract

Simultaneous Localization and Mapping (SLAM), with a research history spanning more than three decades, has brought truly autonomous robotic systems within reach. The concept has advanced beyond building a map and localizing the robot within it. Nevertheless, long-standing challenges remain, in particular the lack of an out-of-the-box solution that works across a wide range of conditions. Even so, technological advances in the area are steadily finding their way into industry. This paper surveys state-of-the-art SLAM and discusses the insights behind existing methods. Starting from the classical definition of SLAM, a brief conceptual overview and the formulation of a standard SLAM system are presented. While discussing the auxiliary components used to solve SLAM, the influx of machine learning into SLAM is also addressed. Recent SLAM algorithms are then reviewed, with a focus on the emerging use of semantics to augment such systems. A taxonomy of recently developed SLAM algorithms, together with detailed comparison metrics, is presented. Finally, open challenges, future directions, and emerging research issues are laid out.
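
As a point of reference for the formulation mentioned above, the following is a minimal sketch of the standard probabilistic statement of full SLAM and its graph-based least-squares counterpart, as they commonly appear in the SLAM literature; it is not reproduced from this paper. With robot poses x_{0:T}, map m, control inputs u_{1:T}, and measurements z_{1:T}, full SLAM estimates the joint posterior

\[ p(x_{0:T}, m \mid z_{1:T}, u_{1:T}). \]

Under Gaussian noise assumptions, graph-based SLAM turns the maximum a posteriori estimate into a nonlinear least-squares problem over a graph of pose and landmark constraints,

\[ x^{*} = \arg\min_{x} \sum_{(i,j) \in \mathcal{C}} e_{ij}(x_i, x_j)^{\top} \, \Omega_{ij} \, e_{ij}(x_i, x_j), \]

where e_{ij} is the error between the predicted and observed relative measurement for constraint (i, j), \Omega_{ij} is the corresponding information matrix, and \mathcal{C} is the set of constraints. Bundle adjustment and pose graph optimization, listed in the keywords, are both instances of this optimization.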

Keywords

Bundle adjustment, deep learning, pose graph optimization, semantics, SLAM

Copyright information

© Institute of Control, Robotics and Systems and The Korean Institute of Electrical Engineers and Springer-Verlag GmbH Germany, part of Springer Nature 2019

Authors and Affiliations

  1. Intelligent Robotics Laboratory, Control and Robotics Engineering Department, Chungbuk National University, Cheongju, Chungbuk, Korea
