Visually Encoded Contact Inspection System for EMATs

Abstract

Contact inspections are commonly performed in industry to check for defects and degradation, such as corrosion or cracks. Non-destructive evaluation (NDE) probes are being deployed with increasing frequency using autonomous robots, especially in harsh environments or in areas where access is restricted and difficult. Together with the NDE measurement, it is important to capture the 3D position of the probe so that the location where the data originated is known. This allows the generation of 3D maps of the inspection, which ensure full scan coverage, and can be used for inspection reports and to generate digital twins of the structure for asset management. In this paper, a full inspection system integrating a robot-mountable stereo camera system with an electromagnetic acoustic transducer (EMAT) probe and a wireless NDE data acquisition system is presented. The system is capable of capturing 3D positional data of the probe as it is scanned and merging it with the NDE data. The system design in terms of hardware and software is described in this paper. A set of tests to evaluate its performance on relevant structural components is also presented, and the results are reported and discussed.

Introduction

Non-destructive evaluation (NDE) techniques are used routinely in industrial inspections to assess the structural integrity of components. An important part of these inspections involves scanning the surface of the component using an NDE probe to cover a surface area that can range from a few square centimetres to many square metres. Spatial encoding of the probe location during the inspection is necessary in most cases to produce a map of the inspection scan. In addition, there is a current trend in the NDE industry to use digital twins of inspected structures for asset management, in which case the ability to encode the probe location in 3D space is highly desirable.

Encoding technology for NDE exists; however, it is generally based on wheeled or wired encoders. These existing encoders can only capture the distance that the probe travels on the surface, not the 3D position and pose of the probe. Camera systems are available, but they require prior calibration and setup on site. In this paper, we explore the use of the Simultaneous Localization And Mapping (SLAM) technique for spatial encoding, and its fusion with ultrasonic measurements, leading to a complete inspection system with spatial encoding.

SLAM is a technique in which a robot constructs a map of its surroundings and estimates its own position within it at the same time. SLAM can be performed using monocular cameras. In that case, an RGB image is captured every frame, and visual odometry can be performed based on the change in the environment between two successive frames. However, monocular SLAM suffers from a problem known as scale drift, as it cannot retrieve the actual scale of the world without extra information. This problem has been addressed and solutions proposed in [1, 2]. Two other types of camera can also be used in SLAM: RGB-D and stereo cameras. An RGB-D camera provides, in addition to an RGB image, information about the distance of the camera from its surroundings. This is done by emitting an infrared light beam, receiving its reflection and calculating the time-of-flight (ToF) [3, 4]. For stereo cameras, two different pictures are taken by the left and right cameras at each frame; the difference between these two pictures is called a disparity map, from which the depth information of the surroundings can be extracted [5, 6]. Performance-wise, RGB-D cameras present less of a computational burden at the cost of being less accurate due to noise and environmental dependencies, while stereo cameras are more accurate at the cost of needing more computational resources, since a disparity map needs to be computed at every frame [7]. In both cases, the visual odometry and the depth information together allow a map of the camera's surroundings, including the location of the camera, to be generated.
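The depth recovery from disparity mentioned above follows the standard pinhole relation for a rectified stereo pair, depth = f·B/d, where f is the focal length in pixels, B is the camera baseline and d is the disparity. A minimal sketch (the focal length and baseline values in the example are illustrative, not ZEDm specifications):

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth (m) of a scene point from its disparity (pixels) in a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return focal_px * baseline_m / disparity_px

# Illustrative values: 700 px focal length, 63 mm baseline, 70 px disparity
print(depth_from_disparity(70.0, 700.0, 0.063))  # ~0.63 m
```

Larger disparities correspond to closer points, which is why depth precision degrades for distant backgrounds: a fixed sub-pixel matching error translates into a larger depth error as disparity shrinks.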

Over the last decade, numerous visual-based SLAM methods have been proposed, such as Oriented FAST and Rotated BRIEF SLAM (ORB-SLAM) [8], Real-Time Appearance-Based Mapping (RTAB-Map) [9] and ZEDfu [10]. ORB-SLAM is a monocular SLAM method proposed in 2015, in which ORB features [11] are used for feature matching. ORB-SLAM allows lifelong operation due to the compactness of the generated map. ZEDfu is a stereo SLAM method proposed by Stereolabs. It requires a stereo ZED camera, consists of four main modules (Depth Sensing, Positional Tracking, Spatial Mapping and Object Detection), and can be integrated with multiple third-party libraries and environments, such as the Robot Operating System (ROS) [12] and Unity [13]. RTAB-Map is a SLAM approach proposed in 2013 that can use either an RGB-D camera or a stereo camera; an optional 3D Lidar input is also accepted to perform 6-DoF mapping. RTAB-Map aims to provide a solution to loop closure in large-scale environments, where memory and runtime become a concern due to the large set of keyframes considered in the loop-closure check. By using the concepts of working memory and short-term memory, RTAB-Map was shown to keep the time required under a fixed limit [14]. A comparative analysis of visual-based SLAM methods was performed in [15], which showed that, of the three SLAM methods, ZEDfu has the least deviation and thus the best odometry quality; it is therefore selected for this work.

A relevant type of NDE probe for inspection scans is the electromagnetic acoustic transducer (EMAT) [16, 17]. EMAT probes have the advantage that they do not require direct contact with the inspected component, and thus there is no need for surface preparation. In addition, they can work through coatings on the component and do not require ultrasonic couplant. An acquisition system for EMATs was recently developed [18], which has the advantage that it is small, lightweight, wireless, standalone, and compatible with ROS. This acquisition system is thus portable, and can be easily used to scan components in industry, either manually or using robots.

In this paper, a stereo camera system using SLAM is integrated with an EMAT probe and its corresponding acquisition system, to create an inspection system capable of 3D spatial encoding. The ultrasonic data from the EMAT probe is fused with the SLAM data to generate a data set containing the full pose of each measurement and the component thickness information. The performance of the resulting system is demonstrated in a set of tests that involve scanning 3D components and digitally displaying the encoded inspection data in 3D.

The rest of the paper is structured as follows. The detailed design of the acquisition and localisation system is discussed in Sect. 2, the experimental setup is described in Sect. 3, the experimental results are discussed in Sect. 4, and concluding remarks are reported in Sect. 5.

Developed Integrated System

The integrated system consists of a localisation device, and an NDE system, combining their data to produce an output encoded NDE dataset. A ZEDm camera was selected in this work as the most suitable camera to perform localisation. This ZEDm camera was integrated with the acquisition system and EMAT probe [18]. The integrated EMAT probe and camera can then be moved over the surface to measure thickness while continuously estimating the pose.

Fig. 1

Block diagram of the complete system

The features of both the acquisition system and the localisation device are summarised in Sect. 2.1. The complete system with the full setup is described in Sect. 2.2. The hardware layout is summarised in Sect. 2.3. The software setup is summarised in Sect. 2.4.

Fig. 2

Picture of the holding structure

Features

The acquisition system [18] is a standalone system that is connected to an EMAT probe to perform measurements, which in this work are thickness measurements of the inspected component. The acquisition system first obtains the analogue signals from the probe. It then amplifies and digitises them, and lastly processes them to obtain the information of interest, which in this case is the component thickness. The resulting thickness values are then transmitted via Wi-Fi using ROS to a receiving platform, which in this case is a laptop. This workflow is schematised in Fig. 1.
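The thickness computation can be sketched as a standard pulse-echo time-of-flight calculation: the round-trip time between successive back-wall echoes corresponds to the wave crossing the wall twice. This is an illustrative sketch under that assumption, not the acquisition system's actual processing; the shear wave velocity used for aluminium is approximate:

```python
ALUMINIUM_SHEAR_VELOCITY = 3100.0  # m/s, approximate bulk shear wave velocity

def thickness_from_tof(round_trip_s: float,
                       velocity_m_s: float = ALUMINIUM_SHEAR_VELOCITY) -> float:
    """Wall thickness (m) from the pulse-echo round-trip time of a shear wave.

    Between successive back-wall echoes the wave traverses the wall twice,
    hence the division by two.
    """
    return velocity_m_s * round_trip_s / 2.0
```

For a 10 mm aluminium wall this predicts a round-trip time on the order of 6.5 microseconds, well within what a 2 MHz pulse can resolve.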

In parallel, the localisation device, which in this work is a ZEDm camera, continuously performs SLAM to obtain its pose. The localisation device does so by capturing images of the environment and reading its IMU while moving. The images and IMU readings captured at each frame are passed to the receiving platform via USB; this receiving platform can be either a robot or, as in this work, a laptop. The images and readings are then processed using the ZED SDK to compute the pose of the camera through visual motion estimation, taking into account the effect of fast movement. The localisation device is thus able to track its motion and to transfer the pose data as a ROS message. Since the acquisition system has a message publish frequency of 3 Hz, the localisation device needs to produce pose messages at a publish frequency of at least double this.

Complete System

The integrated system brings together the localisation device and the EMAT probe with the acquisition system for the EMAT, and the data is assembled in the receiving platform. The main elements of the integrated system, together with the communication between them, are summarised in Fig. 1. The detailed structure of the data flow within the acquisition system is presented in [18]. Within the receiving platform, a script was used in this work to combine the received messages into one. This is done by restricting the publication of combined messages to only when a thickness data point is received. The combined message is then passed to a script that relies on standard Matlab libraries to construct a point cloud of the structure.
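The gating logic described above can be sketched as follows: pose messages arrive continuously and only the most recent one is cached, while a combined record is emitted only when a thickness message arrives. This is a minimal, hypothetical Python illustration of the combining script; the `Pose` and `MessageCombiner` names are simplifications, not the ROS message types actually used:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Pose:
    """Simplified stand-in for a ROS PoseStamped: position and orientation quaternion."""
    xyz: Tuple[float, float, float]
    quat: Tuple[float, float, float, float]  # (x, y, z, w)

class MessageCombiner:
    """Cache the latest pose (published at ~60 Hz) and emit a combined
    (pose, thickness) record only when a thickness reading (3 Hz) arrives."""

    def __init__(self) -> None:
        self._latest_pose: Optional[Pose] = None
        self.records: List[Tuple[Pose, float]] = []

    def on_pose(self, pose: Pose) -> None:
        self._latest_pose = pose  # overwrite: only the most recent pose matters

    def on_thickness(self, thickness_mm: float) -> None:
        if self._latest_pose is not None:  # drop readings received before any pose
            self.records.append((self._latest_pose, thickness_mm))
```

Because the pose stream runs at least twice as fast as the thickness stream, the cached pose is always recent relative to a thickness reading, which is the rationale for the publish-frequency requirement stated in Sect. 2.1.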

Fig. 3

Full setup of the integrated system

The ZEDm camera and the EMAT probe need to be fixed together with a known relative position between them in order to achieve high accuracy in the positioning. A structure to hold the EMAT probe together with the ZEDm camera was created using 3D printing; a picture of it is shown in Fig. 2. In the structure, the ZEDm camera is positioned and secured on top of the EMAT, offset by 11 cm from the ultrasonically active area. Adhesive tape is placed at the base of the EMAT to protect it, and its thickness is considered negligible. This structure, including the EMAT and camera, can be considered the end effector of the integrated inspection system.

Figure 3 shows the full setup of the integrated system prepared to be used in practice. The acquisition system is connected to the EMAT via a LEMO connector. This acquisition system is then connected to the receiving platform via Wi-Fi connection. The ZEDm camera is connected to the receiving platform via a USB 3.0 cable.

Hardware

The acquisition system consists of three main elements: an arbitrary function generator and an oscilloscope, a board with analogue amplifiers, and a processing machine with Wi-Fi. The acquisition system can operate in a frequency range between 50 kHz and 3 MHz. In this work it excites a pulse with a central frequency of 2 MHz. This pulse is sent via a LEMO cable to the EMAT probe, which generates a polarised shear wave in the component at the 2 MHz frequency. The system can perform measurements at up to 12 Hz, using 10 averages per measurement, meaning that it can excite pulses with a burst rate of 120 Hz. Details of this system can be found in [18]. For the purposes of this work, the processor receives the digital raw signals from the EMAT probe and processes them to obtain the thickness values of interest. The processor has ROS installed on it, and transmits the resulting thickness values via Wi-Fi using ROS.

The localisation device used in this work is the ZEDm camera, supplied by Stereolabs. The camera has dimensions of 124.5 \(\times \) 30.5 \(\times \) 26.5 mm, weighs 62.9 g and is USB powered. It consists of a dual camera, an accelerometer and a gyroscope. The maximum resolution of the video mode is 2.2K at 15 Hz, and the maximum depth and pose update rate is up to 100 Hz in VGA resolution. The accelerometer and gyroscope have a sampling rate of 800 Hz, and the minimum and maximum depth sensing distances are 0.1 m and 15 m, respectively. A USB 3.0 cable is used to supply power and to transfer data from the camera to an operating laptop. In this work, the video mode is set to 720p, and the pose update rate is set to 60 Hz.

Software

The processor of the acquisition system has Ubuntu 16.04 as its operating system and ROS Kinetic installed. The inspection data is output using ROS via Wi-Fi. It is also possible to communicate with the processor from any external laptop connected to the Wi-Fi by establishing a secure shell (SSH) connection, in case any inspection parameters, such as the number of averages, need to be modified.

The captured image and IMU data from the camera are transferred to a receiving laptop through a USB 3.0 cable for further processing. In this work, the ZED SDK, the native SDK provided with the ZED camera, is used for acquiring the pose of the camera. The SDK is capable of performing positional tracking, spatial mapping and object detection. In this work, mainly the positional tracking module is used, and the ROS wrapper provided by Stereolabs is used to publish the pose data as a ROS message. It should be noted that the ZED SDK relies on the analysis of texture information and IMU data for pose computation, which has the limitation that no motion in the visual scene is allowed while the ZED camera is moving. As such, the ZED SDK is not expected to provide reliable pose estimation in dynamic environments.

In the implementation in this work, the camera pose estimates are offset by the dimensions of the 3D-printed structure holding the camera to the EMAT probe. This provides the pose at the base of the EMAT probe, which is the pose of the outer surface of the inspected component. The inner surface of the inspected component is obtained by offsetting the points from the outer surface of the component in the direction perpendicular to the outer surface, by an amount equal to the thickness values measured using the EMAT. The combination of the outer surface of the inspected component, obtained from the camera pose, and the inner surface, obtained by combining the camera pose and the measured thickness, provides a reconstruction of the scanned component. This is plotted as a point cloud that creates a digital twin of the inspected component, which inherently includes the inspection data.
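The two offsets described above can be sketched as follows. This is an illustrative implementation, not the authors' script: the quaternion convention, the choice of the camera z-axis as the probe axis, and the sign of the 110 mm holder offset (taken from the 11 cm offset stated in Sect. 2.2) are assumptions:

```python
import numpy as np

def quat_to_matrix(q):
    """Rotation matrix from a unit quaternion q = (x, y, z, w)."""
    x, y, z, w = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])

# Camera sits 11 cm above the EMAT's ultrasonically active area (Sect. 2.2);
# the axis and sign are illustrative assumptions.
HOLDER_OFFSET_MM = np.array([0.0, 0.0, -110.0])

def surface_points(cam_pos_mm, cam_quat, thickness_mm):
    """Outer-surface point: camera position plus the holder offset rotated
    into the world frame. Inner-surface point: outer point pushed along the
    inward probe axis by the measured thickness."""
    R = quat_to_matrix(cam_quat)
    outer = np.asarray(cam_pos_mm, dtype=float) + R @ HOLDER_OFFSET_MM
    inward = R @ np.array([0.0, 0.0, -1.0])  # probe axis in the world frame
    inner = outer + thickness_mm * inward
    return outer, inner
```

Accumulating `outer` and `inner` over all measurements yields the two point clouds (external and internal surfaces) that form the digital twin.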

Fig. 4

(Top) Scene of the measurement, including the scanned sample, the EMAT holder with attached ZED camera, and the background that the ZED camera is looking at. (Middle and bottom) Images that the ZED camera sees

Experimental Testing

The integrated system was first tested on an L-shaped aluminium bar with constant cross section, as shown in Fig. 4a. The bar has a thickness of 10 mm and a length of 1400 mm; the top face has a width of 100 mm and the side face has a width of 50 mm. Figure 4 shows the experimental setup used to scan the L-shaped aluminium bar. The integrated camera and EMAT probe are shown on top of the inspected component, and the background that the camera is facing is also shown. The majority of the background is at a distance of approximately 1 to 3 m from the camera. The edge of the testing component can also be seen in some of the background, at a distance of 10–50 from the camera. The distance between the background and the camera is important to the precision of the pose computation: backgrounds further away from the camera lead to lower precision. In the experimental setup, the background is located at a distance that can be representative of the background distances found in the field.

In the tests, the inspection device containing the EMAT probe and camera was moved along the surface manually, following an approximate raster scan pattern similar to the one that would be performed when inspecting an industrial structure. The pose of the ZEDm camera and the thickness of the specimen were recorded at each measuring point (obtained at a 3 Hz frequency). A point cloud with both the outer surface and the inner surface of the tested specimen was then generated for the full scan of the component.

This inspection scan of the component was repeated 5 times in the same configuration and the results were recorded. The average of the results was also computed in order to find the accuracy of the system.

During each test of the aluminium angle bar experiment, an average of around 600 data points, including both the inside and the outside surfaces, was recorded. The experiment was conducted along a section of the bar with a length of 90 mm.

The system was then tested on a circular aluminium pipe with a 128 mm outer diameter and a 113 mm inner diameter. The test was performed to illustrate the versatility of the system in terms of scanning curved surfaces. The setup used for the experiment on the pipe is shown in Fig. 5.

Fig. 5

Scene of the measurement, including the scanned sample, the EMAT holder with attached ZED camera, and the background that the ZED camera is looking at

A first scan of the pipe following a raster pattern was performed. The section of the pipe scanned can be described using cylindrical coordinates, and corresponds to 120 degrees of the pipe circumference and 70 mm in height. This scan of the pipe was then repeated to obtain 5 independent scans, which were recorded. The average of the results was also computed in order to find the accuracy of the system.

In all tests, once a complete point cloud was acquired, it was imported into CloudCompare [19] for comparison. CloudCompare is an open-source 3D point cloud and mesh processing software. The imported point cloud was then compared with the ground-truth 3D model, which was created as a 3D STL file with either the L-shaped bar or the pipe geometry. The distance between each cloud point and the nearest triangle of the ground-truth geometry mesh is computed as a measure of the error.
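The cloud-to-mesh (C2M) error can be sketched as a brute-force search over the mesh triangles, computing the exact point-to-triangle distance for each. This is an illustrative reimplementation of the metric using the standard barycentric region test (Ericson, "Real-Time Collision Detection"), not CloudCompare's code, which accelerates the search with an octree:

```python
import numpy as np

def closest_point_on_triangle(p, a, b, c):
    """Closest point to p on triangle (a, b, c), via barycentric region tests."""
    ab, ac, ap = b - a, c - a, p - a
    d1, d2 = ab @ ap, ac @ ap
    if d1 <= 0 and d2 <= 0:
        return a                                   # vertex region a
    bp = p - b
    d3, d4 = ab @ bp, ac @ bp
    if d3 >= 0 and d4 <= d3:
        return b                                   # vertex region b
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:
        return a + (d1 / (d1 - d3)) * ab           # edge region ab
    cp = p - c
    d5, d6 = ab @ cp, ac @ cp
    if d6 >= 0 and d5 <= d6:
        return c                                   # vertex region c
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:
        return a + (d2 / (d2 - d6)) * ac           # edge region ac
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:
        return b + ((d4 - d3) / ((d4 - d3) + (d5 - d6))) * (c - b)  # edge bc
    denom = 1.0 / (va + vb + vc)
    return a + (vb * denom) * ab + (vc * denom) * ac  # interior of the face

def c2m_distance(point, triangles):
    """Distance from one cloud point to the nearest mesh triangle (brute force)."""
    return min(np.linalg.norm(point - closest_point_on_triangle(point, *t))
               for t in triangles)
```

The mean of `c2m_distance` over all cloud points gives the mean error reported in the results tables, and the per-point values populate the C2M histograms.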

For each test of the aluminium pipe experiment, an average of around 400 data points, including both the inside and the outside surface, are recorded. The experiment was conducted along a section of the pipe with a length of 70 mm.

Results and Discussion

The point cloud of measurements obtained from a representative test on the aluminium bar is shown in Fig. 6. The external surface data points are shown in red, while the internal surface points are shown in blue.

Fig. 6

Scatter plot of the obtained point cloud, with the external surface colored red and the internal surface colored blue; units are in mm

Figure 7 shows this point cloud superimposed onto the ground-truth model within CloudCompare. The alignment was done using four corner reference points, followed by manual fine-tuning after the first alignment.

Fig. 7

Point cloud of the aligned geometry superimposed onto the STL model, with outer points labeled red and inner points labeled yellow. (Top) Isometric view, (bottom) side view

Table 1 Error of angled bar (overall)
Table 2 Error of angle bar (outer surface)

Table 1 shows the error of the reconstruction of the angled aluminium bar, with a mean error of 0.598 mm over both surfaces. Table 2 shows the error of the reconstruction of the outer surface. The outer surface reconstruction mean error is close to, but slightly smaller (by 0.1 mm) than, the error for the combined outer and inner surfaces. A histogram of the cloud-to-mesh (C2M) distance is also shown in Fig. 8; 83.4% of points have a C2M absolute distance smaller than 1 mm.

A scatter plot of the point cloud obtained on the aluminium pipe is shown in Fig. 9. The external surface data points are shown in red, while the internal surface points are shown in blue.

Figure 10 shows this point cloud overlaid onto the ground-truth model. The alignment was done by manually aligning the STL model onto the point cloud.

Fig. 8

Histogram of the C2M distances of the point cloud

Fig. 9

Scatter plot of the obtained point cloud, with the external surface colored red and the internal surface colored blue

Table 3 shows the error of the reconstruction of the pipe. The mean error is 0.770 mm over both surfaces. Table 4 shows the error of the reconstruction of the outer surface. The mean error in Table 4 is close to, but smaller than, the overall error in Table 3 by 0.1 mm. This is due to the inaccuracy of the combined NDE acquisition system and EMAT probe at the 2 MHz frequency used. A histogram of the C2M distances is also shown in Fig. 11. In that histogram, 71.4% of points have a C2M absolute distance smaller than 1 mm.

Fig. 10

Point cloud of the aligned geometry superimposed onto a CAD model that serves as ground truth, with outer points labeled red and inner points labeled yellow. (Top) Isometric view, (bottom) side view

Table 3 Error of pipe (overall)
Table 4 Error of pipe (outer surface)

It should be noted that, as previously mentioned in Sect. 2.4, the SLAM technology used in the ZED SDK relies on the analysis of texture information and IMU data for pose computation. This has the limitation that no motion in the visual scene is allowed while the ZED camera is moving, and thus in such scenarios the ZED SDK will not provide reliable pose information. This is a limitation that arises in dynamic environments, and it can be responsible for some of the discrepancies between the point clouds obtained and the actual component geometry.

Fig. 11

Histogram of the C2M distances of the point cloud

It should also be noted that the distance between the ZED camera and the background affects the precision of the pose computation, as described in Sect. 3. Thus, locating objects in the background closer to or further from the camera can lead to higher or lower precision in the results, respectively. The results presented in this work are for a representative background, described in Sect. 3.

Conclusion

An integrated inspection system consisting of an EMAT probe together with its acquisition system, and a ZEDm stereo camera system, was presented in this paper. The ZEDm camera system uses SLAM, and the integration results in merged NDE and position measurements. The system is capable of 3D spatial encoding of the probe location, as well as returning the location of the inner surface of the specimen. Thus, it can create 3D digital twins of the structures it scans, which inherently include the NDE inspection measurements. The system is small and lightweight (<1 kg), and is ROS-enabled. The performance of the system was showcased in a set of tests, whose results were point clouds of the internal and external surfaces of the scanned structures. The evaluation of these tests indicates that the system reconstructed the geometry with a mean error of 0.598 mm for a 90 mm section of an aluminium angle bar, and with a mean error of 0.770 mm for a 70 mm long, 120 degree section of a 12.8 cm outer diameter aluminium pipe.

References

  1. Antigny, N., Uchiyama, H., Servières, M., Renaudin, V., Thomas, D., Taniguchi, R.I.: Solving monocular visual odometry scale factor with adaptive step length estimates for pedestrians using handheld devices. Sensors (Switzerland) (2019). https://doi.org/10.3390/s19040953
  2. Strasdat, H., Montiel, J.M., Davison, A.J.: Scale drift-aware large scale monocular SLAM. In: Robotics: Science and Systems (2011)

  3. Cruz, L., Lucio, D., Velho, L.: Kinect and RGBD images: challenges and applications. In: Proceedings: 25th SIBGRAPI—Conference on Graphics, Patterns and Images Tutorials, SIBGRAPI-T 2012. (2012)

  4. Khoshelham, K.: Accuracy Analysis of Kinect Depth Data. ISPRS—International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. vol. XXXVIII-5/, pp. 133–138 (2012)

  5. Faugeras, O., Hotz, B., Mathieu, H., Vieville, T., Zhang, Z., Fua, P., Theron, E., Moll, L., Berry, G., Vuillemin, J., Bertin, P., Proy, C.: Real-time correlation-based stereo: algorithm, implementations and applications. INRIA report (1993)

  6. Kusupati, U., Cheng, S., Chen, R., Su, H.: Normal assisted stereo depth estimation. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2020)

  7. Sabattini, L., Levratti, A., Venturi, F., Amplo, E., Fantuzzi, C., Secchi, C.: Experimental comparison of 3D vision sensors for mobile robot localization for industrial application: Stereo-camera and RGB-D sensor. In: 2012 12th International Conference on Control Automation Robotics & Vision (ICARCV). IEEE, pp. 823–828 (dec 2012). [Online]. Available: http://ieeexplore.ieee.org/document/6485264/

  8. Mur-Artal, R., Montiel, J.M., Tardos, J.D.: ORB-SLAM: a versatile and accurate monocular SLAM system. IEEE Trans. Robot. 31(5), 1147–1163 (2015)
  9. Labbé, M., Michaud, F.: Online global loop closure detection for large-scale multi-session graph-based SLAM. In: IEEE International Conference on Intelligent Robots and Systems (2014)

  10. Stereolabs, Stereolabs—Capture the World in 3D. (2019)

  11. Rublee, E., Rabaud, V., Konolige, K., Bradski, G.: ORB: An efficient alternative to SIFT or SURF. In: 2011 International Conference on Computer Vision. IEEE, 2011, pp. 2564–2571. [Online]. Available: http://ieeexplore.ieee.org/document/6126544/

  12. Quigley, M., Conley, K., Gerkey, B., Faust, J., Foote, T., Leibs, J., Wheeler, R., Ng, A. Y.: ROS: an open-source robot operating system. In: ICRA Workshop on Open Source Software (2009)

  13. Unity Technology. Unity 3D. in Unity Technology. (2018)

  14. Labbé, M., Michaud, F.: Memory management for real-time appearance-based loop closure detection. In: IEEE International Conference on Intelligent Robots and Systems (2011)

  15. Ibragimov, I.Z., Afanasyev, I.M.: Comparison of ROS-based visual SLAM methods in homogeneous indoor environment. In: 2017 14th Workshop on Positioning, Navigation and Communications, WPNC 2017 (2018)

  16. Thompson, R.: Physical principles of measurements with EMAT transducers. Phys. Acoust. 19, 157–200 (1990)

  17. Choi, S., Cho, H., Lindsey, M.S., Lissenden, C.J.: Electromagnetic acoustic transducers for robotic nondestructive inspection in harsh environments. Sensors 18(1), 193 (2018)

  18. Garriga-Casanovas, A., Khalili, P., Cegla, F.: Development of a new wireless acquisition system for EMATs compatible with the Robot Operating System. IEEE Sens. J. 20(21), 12783–12790 (2020)
  19. Girardeau-Montaut, D.: CloudCompare: 3D point cloud and mesh processing software. Webpage: http://www.cloudcompare.org (2015)

Acknowledgements

This work was supported by research funding from EPSRC via grants with the following reference numbers: EP/R026173/1 and EP/R511547/1.

Open Access

This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Author information

Corresponding authors

Correspondence to Ka Yung or Frederic Cegla.

Ethics declarations

Conflicts of interest

There are no conflicts of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Reprints and Permissions

About this article

Cite this article

Yung, K., Garriga-Casanovas, A., Khalili, P. et al. Visually Encoded Contact Inspection System for EMATs. J Nondestruct Eval 41, 15 (2022). https://doi.org/10.1007/s10921-022-00848-3


Keywords

  • Robotics
  • Localisation
  • Encoding
  • EMAT