
Edge-based cover recognition and tracking method for an AR-aided aircraft inspection system

  • ORIGINAL ARTICLE
  • Published in The International Journal of Advanced Manufacturing Technology

Abstract

The cabin hatch covers of a typical aircraft play an important role in ensuring product quality and flight safety. During test flight inspection and routine maintenance, one crucial step is to open the hatch covers one by one for detailed inspection, and authorized operators have difficulty telling apart covers that share similar shapes but carry different inspection requirements. In this paper, an edge-based cover recognition and tracking method is proposed to distinguish such covers. First, a fast edge feature is proposed to describe image contours with simple geometric constraints. Second, based on this edge feature, a novel cover descriptor, consisting of shape and position description vectors, is presented to recognize the different covers with similar shapes. Third, on the basis of the recognized cover landmarks, a direct visual odometry–based camera tracking method is presented to improve the robustness of cover recognition. Experiments were conducted on a simplified mockup of aircraft cabin skin, and the results show that the proposed method is practical and runs in real time. Its tracking accuracy is also sufficient for the augmented reality inspection environment.
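To make the pipeline concrete, here is a minimal Python sketch of the first two steps (edge features and a shape-plus-position descriptor). It assumes standard OpenCV primitives (Canny edge detection, Douglas-Peucker simplification via cv2.approxPolyDP, and Hu moments as a stand-in shape vector); every function name, threshold, and descriptor choice below is an illustrative assumption, not the paper's implementation.

    import cv2
    import numpy as np

    def extract_edge_features(gray, lo=50, hi=150, eps_ratio=0.01, min_len=40):
        # Canny edge map -> contours -> simplified polylines (Douglas-Peucker).
        edges = cv2.Canny(gray, lo, hi)
        contours, _ = cv2.findContours(edges, cv2.RETR_LIST,
                                       cv2.CHAIN_APPROX_SIMPLE)
        polys = []
        for c in contours:
            length = cv2.arcLength(c, True)
            if length < min_len:              # drop short edge fragments
                continue
            eps = eps_ratio * length          # tolerance scales with contour size
            polys.append(cv2.approxPolyDP(c, eps, True).reshape(-1, 2))
        return polys

    def cover_descriptor(poly, frame_shape):
        # Shape vector: log-scaled Hu moments (invariant to scale and rotation).
        m = cv2.moments(poly.astype(np.float32))
        hu = cv2.HuMoments(m).flatten()
        shape_vec = -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)
        # Position vector: contour centroid normalized by image size, so covers
        # with near-identical shapes can still be separated by where they sit.
        h, w = frame_shape[:2]
        cx = m["m10"] / (m["m00"] + 1e-12)
        cy = m["m01"] / (m["m00"] + 1e-12)
        return np.concatenate([shape_vec, [cx / w, cy / h]])

Recognition would then reduce to nearest-neighbor matching of such descriptor vectors against a database of known covers; the descriptor actually proposed in the paper, and the direct visual odometry stage that tracks the camera between recognitions, are more involved than this toy version.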



Funding

This work was supported by the Chengdu Aircraft Industry (Group) Co. Ltd. of Aviation Industry Corporation of China (Grant No. 40113000050X).

Author information


Corresponding author

Correspondence to Xiumin Fan.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Yang, X., Fan, X., Wang, J. et al. Edge-based cover recognition and tracking method for an AR-aided aircraft inspection system. Int J Adv Manuf Technol 111, 3505–3518 (2020). https://doi.org/10.1007/s00170-020-06301-x
