A New Visual Front-end Combining KLT with Descriptor Matching for Visual-inertial Odometry

  • Regular paper
  • Published in: Journal of Intelligent & Robotic Systems (2023)

Abstract

Feature-based visual-inertial odometry (VIO) currently relies predominantly on descriptor matching or Kanade-Lucas-Tomasi (KLT) tracking for its feature-tracking front-end. However, each method on its own is prone to short track lengths and large accumulated errors. In this study, we propose a novel approach that integrates the strengths of KLT and descriptor matching through a tightly coupled fusion for feature tracking. The proposed method overcomes the limitations of both techniques, yielding longer track lengths and reduced error accumulation. The enhanced feature-tracking module in turn improves localization accuracy and stability in VIO. To validate the proposed approach, we incorporate it into the feature-tracking module of mainstream VIO systems and evaluate its performance on open-source datasets. Experimental results show that the proposed feature-tracking method outperforms the original methods in both accuracy and robustness.
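To make the idea in the abstract concrete, the sketch below shows one plausible way a hybrid KLT/descriptor front-end could look: points are propagated frame-to-frame with OpenCV's pyramidal Lucas-Kanade tracker, and a track survives only while the ORB descriptors at its two endpoints still agree. This is a minimal illustration under our own assumptions; the function name, the Hamming-distance threshold, and the rejection rule are hypothetical and are not the paper's actual formulation.

```python
# Illustrative sketch (not the paper's implementation): pyramidal KLT
# tracking pruned by an ORB descriptor-consistency check.
import cv2

orb = cv2.ORB_create()

def track_hybrid(prev_img, cur_img, prev_pts, max_hamming=64):
    """Propagate prev_pts (N x 1 x 2, float32) from the previous to the
    current grayscale uint8 frame with KLT, then drop tracks whose ORB
    descriptors diverge. Threshold max_hamming is an assumed value."""
    # 1. Pyramidal Lucas-Kanade optical flow (the KLT step).
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_img, cur_img, prev_pts, None, winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    prev_kept, cur_kept = prev_pts[ok], cur_pts[ok]

    # 2. Compute ORB descriptors at both endpoints of each surviving track.
    #    class_id carries the track index, because compute() silently drops
    #    keypoints too close to the image border.
    def descriptors(img, pts):
        kps = [cv2.KeyPoint(float(x), float(y), 31, -1, 0, 0, i)
               for i, (x, y) in enumerate(pts.reshape(-1, 2))]
        kps, desc = orb.compute(img, kps)
        if desc is None:
            return {}
        return {kp.class_id: d for kp, d in zip(kps, desc)}

    d_prev = descriptors(prev_img, prev_kept)
    d_cur = descriptors(cur_img, cur_kept)

    # 3. Descriptor-matching check: keep a track only if the Hamming
    #    distance between its endpoint descriptors stays below threshold.
    keep = [i for i in d_prev
            if i in d_cur
            and cv2.norm(d_prev[i], d_cur[i], cv2.NORM_HAMMING) < max_hamming]
    return prev_kept[keep], cur_kept[keep]
```

In such a setup, prev_pts would typically come from cv2.goodFeaturesToTrack (Shi-Tomasi corners), which already returns the N x 1 x 2 float32 layout that calcOpticalFlowPyrLK expects; the descriptor check then serves only to cut drifted tracks, so the cheap KLT step still does the bulk of the work.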


Data Availability

The data and code are not available in a public repository.


Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 61973055, in part by the Natural Science Foundation of Sichuan Province of China under Grant 2023NSFSC0511, and in part by the Key Research and Development Program of Hebei (Project No. 19210906D).

Author information

Corresponding author

Correspondence to Rui Li.

Ethics declarations

Ethical Approval

Not applicable.

Consent to Participate

Not applicable.

Consent for Publication

Not applicable.

Conflicts of interest

The authors have no relevant financial or nonfinancial interests to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Wu, Z., Yu, B., Li, R. et al. A New Visual Front-end Combining KLT with Descriptor Matching for Visual-inertial Odometry. J Intell Robot Syst 109, 79 (2023). https://doi.org/10.1007/s10846-023-02008-9


Keywords

Navigation