Detection and Elimination of Dynamic Feature Points Based on YOLO and Geometric Constraints

  • Research Article - Computer Engineering and Computer Science
  • Published in: Arabian Journal for Science and Engineering

Abstract

Dynamic objects in complex environments strongly degrade the pose estimation and mapping accuracy of simultaneous localization and mapping (SLAM). To address this problem, a you only look once (YOLO) detection method combined with optical flow and geometric constraints is proposed to identify and eliminate dynamic feature points. First, potentially dynamic feature points are detected with YOLO, and the image is divided into static and dynamic regions according to motion consistency. Second, the optical flow method performs motion detection and tracking to obtain a preliminary estimate of each point's motion state, which is then re-judged under geometric constraints to reduce the tracking loss and accuracy degradation caused by falsely eliminated feature points. Finally, after the dynamic feature points have been removed, only static feature points are used for pose estimation, so that dynamic points cannot interfere with it. The proposed method is evaluated on the Technical University of Munich (TUM) dataset using standard SLAM accuracy metrics such as absolute trajectory error (ATE). Compared with ORB-SLAM2, the ATE of the proposed algorithm improves by 8.1% on TUM's low-dynamic sequences and by 96.35% on its high-dynamic sequences, showing that the algorithm achieves better accuracy in dynamic environments.
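
To make the pipeline concrete, the sketch below illustrates the core filtering step in Python with OpenCV. It is a minimal reading of the abstract rather than the authors' implementation: the box format, the epipolar-distance threshold, and the helper names are assumptions. YOLO boxes flag potentially dynamic regions, Lucas-Kanade optical flow tracks feature points between frames, a fundamental matrix fitted preferentially from points outside those boxes models the camera's own motion, and points lying far from their epipolar lines are re-judged as dynamic.

```python
# Minimal sketch of the dynamic-feature-point filtering described in the
# abstract (not the authors' code). Box format, threshold, and helper names
# are assumptions for illustration.
import cv2
import numpy as np

EPIPOLAR_THRESH = 1.0  # px; assumed threshold for the geometric re-judgment


def in_any_box(pt, boxes):
    """True if pt = (x, y) lies inside any YOLO box (x1, y1, x2, y2)."""
    x, y = pt
    return any(x1 <= x <= x2 and y1 <= y <= y2 for x1, y1, x2, y2 in boxes)


def filter_dynamic_points(prev_gray, curr_gray, prev_pts, dyn_boxes):
    """Split tracked points into (static, dynamic) arrays in the current frame."""
    # 1) Optical flow: track candidate feature points into the current frame.
    p0 = prev_pts.reshape(-1, 1, 2).astype(np.float32)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    ok = status.ravel() == 1
    p0, p1 = p0[ok].reshape(-1, 2), p1[ok].reshape(-1, 2)
    if len(p0) < 8:  # too few tracks to fit a fundamental matrix
        return p1, np.empty((0, 2))

    # 2) Fit the fundamental matrix with RANSAC, preferring matches outside
    #    YOLO's potentially dynamic boxes so that moving objects do not bias
    #    the model of the camera's own motion.
    bg = np.array([not in_any_box(p, dyn_boxes) for p in p1])
    src, dst = (p0[bg], p1[bg]) if bg.sum() >= 8 else (p0, p1)
    F, _ = cv2.findFundamentalMat(src, dst, cv2.FM_RANSAC, 1.0, 0.99)
    if F is None:  # degenerate geometry; keep all tracks rather than drop them
        return p1, np.empty((0, 2))

    # 3) Geometric re-judgment: a point far from its epipolar line moved
    #    inconsistently with the camera and is marked dynamic.
    ones = np.ones((len(p0), 1))
    h0, h1 = np.hstack([p0, ones]), np.hstack([p1, ones])
    lines = (F @ h0.T).T  # epipolar lines in the current frame
    dist = np.abs(np.sum(lines * h1, axis=1)) / np.linalg.norm(lines[:, :2], axis=1)
    dynamic = dist > EPIPOLAR_THRESH
    return p1[~dynamic], p1[dynamic]
```

Fitting the fundamental matrix from background points first lets the geometric check overrule YOLO in both directions: a point inside a detection box that still satisfies the epipolar constraint is kept as static, which is how false eliminations, and the tracking loss they cause, are reduced; only the surviving static points would then feed pose estimation.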


Funding

This work was supported by the National Natural Science Foundation Youth Project (42105143), the Wuxi Science and Technology Development Fund (N20201011), and the Vehicle-Road Collaboration Application Scenario Validation project (560122034).

Author information

Correspondence to Yue Tang.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Lu, J., Wang, X., Tang, Y. et al. Detection and Elimination of Dynamic Feature Points Based on YOLO and Geometric Constraints. Arab J Sci Eng (2024). https://doi.org/10.1007/s13369-024-08957-z
