Abstract
The precise positioning of oblique aerial images has been widely studied in recent years, but existing methods still fall short in highly time-sensitive engineering applications. For the real-time positioning of oblique images captured during Unmanned Aerial Vehicle (UAV) patrols, existing photogrammetry methods cannot meet the real-time requirement, existing binocular vision methods cannot meet the dynamic and precise positioning requirements, existing optical flow methods cannot meet the absolute positioning requirement, and existing multi-source feature matching methods cannot meet the robustness requirement. To satisfy the real-time, dynamic, precise, absolute, and robust positioning requirements of UAV patrolling images, a real-time positioning model based on airborne LiDAR point cloud fusion is proposed. First, a precise Digital Surface Model (DSM) is generated by rasterizing the raw airborne LiDAR point cloud into an image in which each pixel's grayscale equals the elevation of the local area that pixel covers. Second, the generated DSM and the UAV patrolling image are fused under specific geometric constraints, achieving real-time, pixel-by-pixel positioning of the image. Finally, selected key points on the UAV patrolling image are positioned more precisely by performing Principal Component Analysis (PCA) on the raw airborne LiDAR points surrounding them. The proposed methods are analyzed and verified in three groups of practical experiments. The results indicate that the model can position a single UAV patrolling image (4000 × 6000 pixels) with an accuracy of 0.5 m within 0.38 s in arbitrary areas, and can further position any selected key point on the image with an accuracy of 0.2 m within 0.001 s.
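The two point-cloud operations named in the abstract — rasterizing the LiDAR cloud into a DSM whose pixel values equal local elevation, and refining a key point by PCA over its surrounding LiDAR points — can be sketched as follows. This is a minimal illustration with NumPy, not the authors' implementation; the function names, the choice of mean elevation per cell, and the fixed search radius are assumptions for the sketch.

```python
import numpy as np

def rasterize_dsm(points, cell_size):
    """Rasterize an (N, 3) array of LiDAR (x, y, z) points into a DSM grid.

    Each cell's value is the mean elevation of the points it covers,
    mirroring the "pixel grayscale equals local elevation" rule.
    (Mean aggregation per cell is an assumption of this sketch.)
    """
    xy_min = points[:, :2].min(axis=0)
    cols = np.floor((points[:, 0] - xy_min[0]) / cell_size).astype(int)
    rows = np.floor((points[:, 1] - xy_min[1]) / cell_size).astype(int)
    dsm_sum = np.zeros((rows.max() + 1, cols.max() + 1))
    dsm_cnt = np.zeros_like(dsm_sum)
    np.add.at(dsm_sum, (rows, cols), points[:, 2])   # accumulate elevations per cell
    np.add.at(dsm_cnt, (rows, cols), 1)              # count points per cell
    with np.errstate(invalid="ignore", divide="ignore"):
        return dsm_sum / dsm_cnt                     # NaN where a cell holds no points

def local_pca_elevation(points, key_xy, radius):
    """Refine a key point's elevation via PCA over its LiDAR neighbourhood.

    Fits a local plane to the points within `radius` of `key_xy` (the plane
    normal is the smallest-variance principal direction) and evaluates the
    plane's elevation at the key point.
    """
    dist = np.linalg.norm(points[:, :2] - key_xy, axis=1)
    nbrs = points[dist < radius]
    centroid = nbrs.mean(axis=0)
    # Right-singular vectors of the centred neighbourhood are the principal
    # axes; the last one (least variance) is the local surface normal.
    _, _, vt = np.linalg.svd(nbrs - centroid)
    normal = vt[-1]
    # Solve the plane equation n . (p - c) = 0 for z at key_xy.
    dx, dy = key_xy - centroid[:2]
    return centroid[2] - (normal[0] * dx + normal[1] * dy) / normal[2]
```

For points sampled from a smooth surface, the PCA plane fit interpolates between raw LiDAR returns, which is why the key-point refinement can be both more precise than the rasterized DSM and essentially instantaneous (a small SVD over a local neighbourhood).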
Acknowledgments
We thank the editors for reviewing the manuscript, and the anonymous reviewers for providing suggestions that greatly improved the quality of the work.
This research was supported by China Postdoctoral Science Foundation (2021M701373).
Cite this article
Fan, W., Liu, H., Pei, H. et al. A Real-time Positioning Model for UAV’s Patrolling Images Based on Airborne LiDAR Point Cloud Fusion. KSCE J Civ Eng (2024). https://doi.org/10.1007/s12205-024-2254-2