Autonomous robotic system for tunnel structural inspection and assessment
Abstract
This paper presents a robotic platform capable of autonomous tunnel inspection, developed under the ROBO-SPECT European-Union-funded research project. The robotic vehicle consists of a robotized production boom lift, a high-precision robotic arm, advanced computer vision systems, a 3D laser scanner and an ultrasonic sensor. The autonomous inspection of tunnels requires advanced capabilities of the robotic vehicle and the computer vision sub-system. Robot localization in underground spaces and on long linear paths is a challenging task, as is the millimetre-accurate positioning of a robotic tip installed on a five-ton crane vehicle. Moreover, the 2D and 3D vision tasks that support the inspection process must cope with poor and variable lighting conditions, low-textured lining surfaces and the need for high accuracy. This contribution describes the final robotic vehicle and the developments as designed for concrete-lining tunnel inspection. Results from the validation and benchmarking of the system are also included, following the final tests at the operating Egnatia Motorway tunnels in northern Greece.
Keywords
Autonomous robot; Tunnel inspection; Structural assessment; Computer vision system; Autonomous navigation; Ultrasonic sensors

1 Introduction
The maintenance and safe operation of the existing civil infrastructure, e.g., pipelines, tunnels, roads and bridges, is a tedious and challenging task (Frangopol and Liu 2007). Due to ageing, environmental factors, increased loading, inadequate or poor maintenance and deferred repairs, these structures are progressively deteriorating and urgently need inspection, assessment and repair work (Brownjohn 2007). In transportation tunnels, there is widespread evidence of deterioration, resulting in an increase in inspection and assessment budgets (Loupos et al. 2014; Montero et al. 2015; Koch et al. 2014). Partial collapses in tunnels have been reported in recent years, highlighting the need for research into ways to inspect and assess the stability of in-service tunnels (Klammer et al. 2012; Botelho 2001; Delatte and Norbert 2009). One should add here that in the coming decades the rate of expansion of the transport infrastructure will not keep pace with the increase in transport demand, necessitating the maximization of the operational uptime of tunnels. Thus, maintenance should be proactive and inspection speedy in order to minimise tunnel closures, making good use of the limited engineering hours available for tunnel inspection and assessment.
At the same time, there is a great boost in cognitive research fields, such as computer vision, machine learning and pattern analysis, reasoning and planning, with a tremendous impact on many real-life applications. Cognition research breaks new ground in computer vision under arduous conditions, e.g., industrial processes (Voulodimos et al. 2012), underwater inspection (Huet and Mastroddi 2016), video surveillance applications (Gutchess et al. 2001; Wang 2013), object manipulation, packaging and food quality analysis (Brosnan and Sun 2004; Ibrahim et al. 2009), and search and rescue using unmanned vehicles (Rudol and Doherty 2008). All this progress suggests that automatic cognitive systems can improve performance in many other application areas, including tunnel inspection.
In this paper, we propose an integrated Structural Health Monitoring (SHM) platform which exploits robotic technologies, computer vision, deep learning, multimedia data streaming, 3D modelling, reconstruction and laser scanning, as well as in situ analysis, defect recognition and structural engineering assessment. The platform targets the inspection of concrete-lining tunnels. The structural assessment of transportation tunnels is of primary importance in order to determine their reliability, i.e., their ability to carry existing and future loads and fulfil their task, having in mind human-life, financial, maintenance and operational risks. One of the largest challenges currently faced by real-time monitoring and inspection systems is the practically unique character of these infrastructures. This raises the need for a system able to adapt to different operational needs and structure types with different monitoring requirements (Loupos et al. 2016).
1.1 Previous works
Presently, structural tunnel inspection is predominantly performed through scheduled, periodic, tunnel-wide visual observations by inspectors who identify structural defects (e.g., cracking or spalling) and rate these defects by taking a series of measurements. For instance, a crack is considered minor when its width is up to 0.8 mm, moderate if its width is between 0.8 and 3.2 mm and severe if it is greater than 3.2 mm. The main drawbacks of the current manual inspection approach lie in the following four problems: it is slow, labour intensive, subjective and unreliable. As a result, a more automatic process should be implemented to improve the recognition process of a tunnel inspection. Towards this direction, computer vision and machine learning can be assistive.
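The width thresholds quoted above map directly onto a rating rule; as a minimal sketch (the function name and interface are our own, not part of any inspection standard):

```python
def classify_crack(width_mm):
    """Rate a crack by width, following the thresholds quoted above:
    minor up to 0.8 mm, moderate between 0.8 and 3.2 mm, severe beyond."""
    if width_mm <= 0.8:
        return "minor"
    elif width_mm <= 3.2:
        return "moderate"
    return "severe"
```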
Nowadays, most vision-based systems that are used or proposed for tunnel inspection make use of heuristic methods for detecting and tracing cracks. Although these methods are simple and their application is straightforward, they exhibit low generalization ability. Consequently, they inherently depend on initial assumptions and cannot be applied in different environments under different inspection conditions without reconfiguration or human supervisor/operator intervention. To overcome these problems, most of these systems impose multiple constraints on the crack detection and tracing process (Sulibhavi and Parks 2007). The works of (Yao et al. 2009; Yoon et al. 2009; Jeong et al. 2007) require a predefined surface scanning trajectory, while the system of (Paar et al. 2006), in the case of missed cracks, requires the intervention of a human expert to declare their start and end points.
The most common technique used by vision-based systems for crack detection and tracing relies on edge detection. Edge detection techniques are used by Fujita et al. (Fujita et al. 2006) by enhancing lines in captured images. However, blemishes and painted surfaces affect the efficiency and accuracy of these methods. To overcome this drawback, the work of (Fujita et al. 2006) makes assumptions about the crack size. The authors of (Yu et al. 2007a) present a mobile robot for fast crack acquisition by scanning the inspected surface with a line camera. However, the accuracy of the results depends on robot velocity and position. As the same authors mention, full automation of the robot is limited by the difficulty of obtaining accurate data in unpredictable environments. The system presented in (Victores et al. 2011) determines the distance between camera and target surface using proximity sensors, while the system of (Soga et al. 2010) tries to address perspective problems caused by the shape of the inspected surface by using a sophisticated mosaicking tool developed at Cambridge University.
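To make the edge-based idea concrete (this is a generic illustration, not the pipeline of any of the cited systems), a minimal Sobel gradient-magnitude detector can flag candidate crack pixels in a grayscale image:

```python
import numpy as np

def sobel_edges(img, thresh=0.5):
    """Gradient-magnitude edge map: the classic first step of heuristic
    crack-detection pipelines. Pixels whose gradient magnitude exceeds
    a fraction `thresh` of the maximum are marked as edge candidates."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # Sobel x
    ky = kx.T                                                   # Sobel y
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    mag = np.hypot(gx, gy)
    return mag > thresh * mag.max() if mag.max() > 0 else mag > 0
```

On a surface with a dark linear feature, the pixels bordering the line are flagged; as the cited works note, blemishes and paint produce the same response, which is why these systems need extra assumptions about crack size and shape.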
The works of (Jahanshahi and Masri 2012; Yu et al. 2007b) exploited 3D measurements for crack detection and mobile robotic systems. However, most of the aforementioned approaches lack cognitive-learning capabilities that would allow a more reliable and speedy tunnel inspection. In all of the above cases, when a more detailed structural assessment is needed for cross-sections of concern, measurements are taken in a subsequent step, through non-destructive or destructive means, to provide the required input for structural analysis. The main drawbacks of this assessment approach are: (a) it is slow, (b) it relies on expensive equipment and (c) it is labour intensive. Generally, any tunnel inspection methodology based on computer vision tools presents a set of challenges. A brief description is provided in the next paragraphs.
1.1.1 The visibility problem
There is an absence of natural light in tunnels and lighting conditions are very poor. Thus, it is genuinely challenging to maintain the performance of existing computer vision methods within such arduous environments.
1.1.2 The curvature problem
The main shapes of lining intrados are circular, horseshoe and oval/egg, while salient cracks can extend in any direction in 3D space on this surface. Such curvature of a lining distorts the way that a 3D crack is projected onto the 2D camera views, in terms of precise and accurate measurement of its length, width and depth. As a result, it is quite important for the computer vision algorithms to be combined with control mechanisms able to define the most suitable angular orientation in 3D space, the cameras' distance from the target (i.e., the crack) and its direction, so as to improve the automatic recognition and measurement accuracy of the cameras.
1.1.3 The depth problem
Although cracks are very narrow in width (less than one to a few millimetres), they can be quite deep. Their depth (as well as their width) is important for civil engineers in order to calculate tunnel stability. Such defect parameters make accurate recognition even more challenging, since the optical beams of the cameras and/or of the laser mechanisms have a small but finite width, which in this case is comparable with the width of the crack. In other words, the optical description of a crack through visual cameras may be severely distorted in terms of precision and accuracy.
1.1.4 The integration problem
Automatic and precise inspection of a tunnel should combine different technologies, ranging from computer vision and machine learning to improved and more accurate sensor measurement, control and robotic automation. All these different technologies should be properly integrated in order to achieve reasonable results from a civil engineering point of view. At the time of writing, there is no standard ontology for tunnel surface and deterioration description. New semantics must be defined to allow the learning agent to describe the environmental situation to the reasoning agent, and to allow the reasoning agent to guide the robot control towards optimal orientation and illumination for the vision system while performing incrementally enhanced trajectories. Additionally, the vision system should be assisted by other types of sensors in order to achieve precise positioning of the robotic tip around the crack to measure its status. The system is capable of placing an ultrasound sensor with a positioning accuracy of a few centimetres (about 5 cm) around the crack.
1.2 Our contribution
In this paper, we present an integrated, automatic system suitable for tunnel inspection. The robotic platform and its control room are the research outcome of the European-Union-funded project ROBO-SPECT (Loupos et al. 2014). Our system targets road tunnels constructed from concrete. In such tunnels, SHM focuses on deformation limits affecting the stability of the structure. The investigation is applied to stresses, strains and deflections that may be present (Sumitro et al. 2004). Currently, tunnel inspections are predominantly performed through scheduled, periodic, tunnel-wide visual observations by inspectors who identify structural defects manually.
Our platform paves the way towards an automatic, reliable and speedy inspection and assessment of road tunnels of concrete construction. The system addresses the aforementioned major problems in today's tunnel inspection: reliability, subjectivity and speed. The key modules of our platform are described in Sect. 2. In brief, the ROBO-SPECT system consists of a series of innovative modules properly integrated together to carry out the tunnel inspection.
1. A mobile robotic system with a large arm that is capable of autonomously moving within road tunnels and inspecting their structures at large scales (up to 7 m). The robotic system consists of the mobile platform and the arm. The latter is necessary so that, upon the detection of a crack, the robot reaches the crack (despite its very small width) and takes in situ measurements.
2. A cognitive computer vision system (primary inspection system) for tunnel inspection and assessment of the structural condition that can detect cracks and other types of defects, such as spalling, based on the automatic application of novel computer vision tools properly combined with deep learning techniques.
3. Precise 3D models of the cracks, used at later stages to derive more robust knowledge of tunnel stresses. Computer vision tools and sensory data are exploited towards this goal.
4. A secondary inspection system (see Sect. 5) that measures (a) the depth of cracks or the depth of the opening of joints of interest with an accuracy of 1 mm and (b) the width of these cracks and openings with an accuracy of 0.1 mm. This accuracy refers to the measurements made by the ultrasound sensor under the assumption that it has been positioned around the crack. Our robotic system exploits computer vision algorithms and 3D SLAM techniques to allow for precise positioning of the ultrasound sensor around the crack with an accuracy of a few centimetres (about 5 cm). The sensors are integrated in the robotic platform on moving arms, in order to be placed on cracks selected for measurement during tunnel inspection. This is performed using the computer vision module, which detects cracks and moves the robotic arm to a position near the crack for placing the ultrasound sensor.
This paper is organized as follows: Sect. 2 describes the main components of our ROBO-SPECT system. The robotic platform along with all main subcomponents is presented in Sect. 3. The computer vision tools and the machine learning algorithms implemented are discussed in Sect. 4. Section 5 describes the ultrasound sensor, which is responsible for in situ measurement of the condition of a crack. The ground control station and the decision support subsystem are presented in Sect. 6. On-field validation experiments are analyzed in Sect. 7, while Sect. 8 concludes the paper.
2 ROBO-SPECT overall architecture
The full ROBO-SPECT system components
More specifically, the robotic platform is equipped with a navigation unit and a laser scanner used for obstacle avoidance. The platform also features motorized and sensorized wheels so that it can navigate autonomously in the tunnel. A system of batteries supports the whole robotic platform, making it independent of an external power supply. Beacons are placed within the tunnel infrastructure for indoor positioning, since in such underground structures satellite data for GPS are not available. Finally, the robotic platform is also equipped with a dedicated control and navigation unit and turret sensing. A robotic arm is mounted on the robotic platform. The role of this arm is to reach a crack and take in situ measurements by placing ultrasound sensors on the crack. The robotic arm is of high precision despite its considerable length; a long arm is required for inspecting tunnels due to their large-scale size constraints.
A pair of visual cameras is installed on the robotic platform in order to perform a 3D reconstruction around a detected crack. An Arduino device is also installed to synchronize the two cameras. Images captured by the cameras are used for detecting cracks using advanced computer vision and machine learning algorithms. In our approach, deep learning is exploited to initially detect candidate crack regions, followed by computer vision methods that exploit connectivity analysis. 3D modelling is used to estimate the coordinates of the detected cracks. This is an important aspect for navigating the robotic arm in order to touch the crack and then take in situ measurements on it. The 3D coordinates are combined with a 3D simultaneous localization and mapping (SLAM) algorithm to dynamically navigate the robotic arm towards the crack.
ROBO-SPECT system in position to start autonomous tunnel inspection at Egnatia Motorway (Metsovo)
3 Robotic subsystems
3.1 Robotic vehicle and crane
The ROBO-SPECT robotic platform consists of an industrial wheeled robotic vehicle and an automated crane. The system is automated through the use of robotic controllers. The vehicle chosen is the Genie Z30/20N, along with an articulated crane that can inspect tunnels up to 7 m in size, so as to be able to touch any sector of the tunnel. The robotic vehicle is currently adapted to move in road conditions. The platform is able to move autonomously in the tunnel, exploiting collision avoidance procedures based on laser sensors that are placed on the vehicle and the arm. The main specifications are:
- Maximum vehicle speed in the tunnel of 1.2 m/s and
- Maximum payload on the tip of the crane of 227 kg.
1. Two relative encoders in the driving wheels with a resolution of 2000 dots, connected by USB to the vehicle control system,
2. One absolute encoder in the steering wheel with 3600 dots to handle synchronization aspects of the vehicle. This encoder is connected to the control system using a CANbus communication interface.
3.1.1 Vehicle sensors for navigation
The vehicle is equipped with several navigation sensors. The following sensors are considered: (1) two SICK S3000 laser systems for obstacle avoidance, one placed at the front and one at the rear of the vehicle with a working distance of 30 m, (2) one NAV200 navigation laser for detecting artificial landmarks inside the tunnel, used for indoor positioning of the vehicle, (3) one CRG20 gyroscope system to increase the accuracy of the vehicle orientation (odometry).
Vehicle navigation relies on a set of landmarks detected by the robot sensors. The landmarks can be fixed flat on the tunnel wall (flat landmarks) or mounted on cylindrical milestones. In the latter case, the reflection is better, allowing the distance between landmarks to be increased. If they are attached to the planar or curved part of the tunnel wall, the reflection index decreases. This is the main reason for adopting the solution with cylindrical reflective landmarks situated at the borders of the tunnel.
3.1.2 Navigation and localization control
The vehicle navigation control is based on a 3D simultaneous localization and mapping (SLAM) algorithm. Using the on-board navigation sensors and landmarks (beacons) installed in the tunnel, the navigation focuses on tracking the tunnel wall at a defined distance, commonly between 1.5 and 3 m. The landmarks are spaced approximately 15 m apart, with 3–4 marks commonly visible from the vehicle at any moment.
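With landmarks spaced roughly 15 m apart, a quick sketch shows why 3–4 marks are typically visible at once; the detection range used here is an illustrative assumption, not a NAV200 specification:

```python
def visible_landmarks(vehicle_x, spacing=15.0, sensor_range=25.0,
                      tunnel_len=500.0):
    """Count landmarks within sensor range of the vehicle, assuming
    landmarks placed every `spacing` metres along the tunnel.
    The range and tunnel length are illustrative values only."""
    marks = [i * spacing for i in range(int(tunnel_len // spacing) + 1)]
    return sum(1 for m in marks if abs(m - vehicle_x) <= sensor_range)
```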
Navigation errors: (a) without compensation (average error of 15% or 0.3 m), and (b) with SLAM compensation (average error of 12% or 0.24 m)
3.2 Intelligent global controller (IGC)
Intelligent global controller schematic
3.3 The robotic arm
The robotic arm positioning the ultra-sonic sensor on a crack
Detail of the IP camera (left) and image from the camera stream (right)
4 Computer vision subsystem
4.1 The computer vision module
ROBO-SPECT camera system in position to start autonomous tunnel inspection in Metsovo Tunnel at Egnatia Motorway
Stereo pairs collected from a drainage tunnel; various tunnel faults are indicated on this image set
4.2 Crack detection analysis using deep learning
The crack detection and position pinpoint process
4.2.1 Convolutional neural networks (CNNs)
Cracks are detected using convolutional neural networks. Initially, an RGB image pair is captured. The detection mechanism utilizes only one of these two images; the second one is exploited, in the case of a positive crack detection, to activate the 3D reconstruction. The image under consideration is initially converted to grayscale and then resized before being fed to the convolutional neural detector. Both the grayscale conversion and the resizing operation are used to reduce the computational complexity of the crack detection module.
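The two preprocessing steps can be sketched as follows; the luminance weights and nearest-neighbour interpolation are illustrative choices, not the project's exact implementation:

```python
import numpy as np

def preprocess(rgb, out_h, out_w):
    """Grayscale conversion and nearest-neighbour resize: the two
    complexity-reducing steps applied before the CNN detector.
    `rgb` is an (H, W, 3) array; returns an (out_h, out_w) array."""
    gray = rgb @ np.array([0.299, 0.587, 0.114])  # standard luma weights
    h, w = gray.shape
    rows = np.arange(out_h) * h // out_h  # nearest source row per output row
    cols = np.arange(out_w) * w // out_w  # nearest source column
    return gray[rows][:, cols]
```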
An illustration of the employed CNN crack detector topology
The training process requires a balanced data set, i.e., the ratio between cracked and non-cracked regions is set to 1:3. Typically, given an annotated image of size 3376 × 2704, less than 0.05% corresponds to cracked areas; i.e., we have at most 4500 positive training examples (image patches of size 13 × 13) per image. In order to further extend the positive training set, we apply patch rotations at 30° and 60°.
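The resulting training-set sizes follow directly from the 1:3 ratio and the two extra rotations; a hypothetical bookkeeping helper (names and interface are our own):

```python
def training_set_sizes(n_positive_patches, ratio_neg_per_pos=3,
                       n_rotations=2):
    """Size of a balanced training set: each positive 13x13 patch is
    kept, plus rotated copies at 30 and 60 degrees (n_rotations = 2),
    and negatives are sampled at the stated 1:3 positive-to-negative
    ratio."""
    positives = n_positive_patches * (1 + n_rotations)
    negatives = positives * ratio_neg_per_pos
    return positives, negatives
```

With the 4500 positives quoted above, this yields 13,500 positive and 40,500 negative patches per image at most.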
Illustration of the CNN annotation (b) over an image (a). Areas in white denote possible cracks. In order to pinpoint a crack position for depth assessment using the ultrasound module, a post-processing technique is applied over the annotated image. Only the contours within the red bounding boxes are considered (b) for the crack assessment
4.2.2 Connectivity analysis for crack mapping refinement
The post-processing mechanism is useful when unexpected occurrences (e.g., graffiti, bugs) appear on the tunnel surface, resulting in noisy areas in the binary annotated image. To refine the results obtained from the CNN deep learning module, connectivity analysis is exploited. First, we localize the contours of the regions derived from the CNN model and then detect their bounding boxes. We then apply heuristic rules referring to the expected characteristics of a crack. A crack is usually long but very narrow (a few pixels) in width. In addition, it does not follow the canonical form of a straight line; such straight-line forms are always due to artificial civil engineering structures existing in a tunnel. Instead, a crack mostly resembles a curve.
Therefore, we initially extract statistics within each bounding box, and those that do not satisfy the crack properties are excluded from further processing. The remaining bounding boxes are then processed using density-based algorithms and curve fitting tools. In particular, we perform a rough search for consecutive boxes within a certain image area, since such consecutive boxes probably belong to the same crack. The algorithm estimates the aspect ratio across consecutive bounding boxes. If this ratio is consistent with the crack properties and its continuity, the bounding boxes are merged and considered part of the crack region. On the contrary, if the aspect ratio between two consecutive boxes varies significantly, these boxes are considered noise and are excluded from further processing. Finally, a curve fitting algorithm is applied over all candidate consecutive bounding boxes to identify the crack. The DBSCAN density-based clustering algorithm is exploited to localize consecutive bounding boxes within a certain image region. Figure 11b presents the bounding boxes assigned to each potential crack region and the most probable region of a crack using connectivity analysis among consecutive boxes. Each bounding region is processed and those that satisfy the crack attributes are selected for further processing. Then, consecutive bounding boxes are merged together using the aforementioned algorithm to derive the final detected crack region.
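The clustering step can be illustrated with a minimal DBSCAN over bounding-box centres; the parameter values and the simplified core-point bookkeeping are our own assumptions, not the paper's configuration:

```python
import numpy as np

def dbscan(points, eps=30.0, min_pts=2):
    """Minimal DBSCAN over bounding-box centres: boxes whose centres
    lie within `eps` pixels of each other chain into one cluster
    (a candidate crack); isolated boxes are labelled -1 (noise)."""
    pts = np.asarray(points, float)
    n = len(pts)
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        neigh = np.where(np.linalg.norm(pts - pts[i], axis=1) <= eps)[0]
        if len(neigh) < min_pts:
            continue  # not a core point: stays noise unless reached later
        labels[i] = cluster
        stack = list(neigh)
        while stack:
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cluster
                nj = np.where(np.linalg.norm(pts - pts[j], axis=1) <= eps)[0]
                if len(nj) >= min_pts:  # expand only from core points
                    stack.extend(nj)
        cluster += 1
    return labels
```

Boxes along one crack form a single cluster, while a stray false-positive box far from the rest is labelled noise and discarded before curve fitting.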
4.3 3D laser scanning
An example of the reconstruction model derived around a detected crack. The first row depicts the region of the crack. In particular, the left image shows the original image captured by the vision module of the ROBO-SPECT robotic system, and the right image depicts the crack regions detected by the computer vision interface. The second row depicts the 3D reconstruction model created around the crack area. In particular, the right image presents the 3D model created, while the left image shows a zoom of the model to demonstrate the 3D modelling precision
The FARO 3D laser scanner used
In order to derive the geometry of the tunnel cross section, a surface of known equation is presupposed. Usually, the inner surface of a tunnel has a quadratic form, e.g., circle, parabola, or an assembly of circular arcs. A nonlinear least-squares solver is utilized to solve the surface fitting problem as in (Protopapadakis et al. 2016). The proposed approach also calculates the translation and rotation parameters, since the laser scanner is not located at the centre of the tunnel. The generated data allow for a 3D depiction of the section using point cloud based techniques.
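As an illustration of the surface-fitting step, a linear algebraic (Kåsa) circle fit can stand in for the nonlinear least-squares solver of (Protopapadakis et al. 2016) when the cross-section is circular; it recovers the centre (and hence the scanner's offset from it) and the radius from scan points:

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) circle fit. Rewrites x^2 + y^2 =
    2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) as a linear least-squares
    problem in (2*cx, 2*cy, k) and recovers centre and radius."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = c[0] / 2, c[1] / 2
    r = np.sqrt(c[2] + cx**2 + cy**2)
    return cx, cy, r
```

The offset (cx, cy) plays the role of the translation parameters the paper mentions; a full implementation would also estimate rotation and handle parabolic or multi-arc sections.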
4.4 Vision-based navigation of the robotic arm
The purpose of this module is to position the ultrasound sensor around the crack in order to take precise measurements, which are important for the structural engineers to assess tunnel conditions and the potential risks. This is performed by exploiting computer vision methods, control techniques as well as 3D SLAM algorithms. In particular, two visual cameras are embedded in the robotic system. We recall that these cameras are synchronized through an Arduino controller. The pair of cameras is first calibrated using synthetic image patterns. Each time a crack is detected by the computer vision module, the respective pixel coordinates are estimated on the 2D image planes. We then estimate the corresponding points between the two image planes so as to reconstruct the depth. This way, the coordinates of the crack are computed with respect to the local coordinate system of the cameras. The robotic system knows the affine transformation that relates the local coordinate system of the cameras to the coordinate system of the robot. This transformation is given as a set of translation parameters in the (x, y, z) space, as well as a set \((\varphi, \lambda)\) that expresses the rotation parameters of the transformation. Therefore, the robotic platform is able to project the coordinates of the detected crack to the coordinates of the robotic system.
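A sketch of the camera-to-robot transformation described above; the assignment of \(\varphi\) and \(\lambda\) to particular rotation axes is our assumption, since the text does not specify it:

```python
import numpy as np

def camera_to_robot(p_cam, t, phi, lam):
    """Transform a 3D crack point from the camera frame to the robot
    frame: rotation by phi about z and lam about y (an illustrative
    axis assignment), followed by translation t = (x, y, z)."""
    Rz = np.array([[np.cos(phi), -np.sin(phi), 0],
                   [np.sin(phi),  np.cos(phi), 0],
                   [0, 0, 1]])
    Ry = np.array([[np.cos(lam), 0, np.sin(lam)],
                   [0, 1, 0],
                   [-np.sin(lam), 0, np.cos(lam)]])
    return Rz @ Ry @ np.asarray(p_cam, float) + np.asarray(t, float)
```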
As we have mentioned in Sect. 3.1.2, landmarks are used in the tunnel to assist robot navigation as well as to geo-reference the robotic system. This way, the coordinates of the detected crack are projected to the global coordinates of the tunnel. 3D SLAM methods are exploited to navigate the robotic arm towards the crack. Each time a crack is detected, the controller moves the robotic arm towards the direction of the crack to approach it. Then, new images are shot and a new crack detection is activated to refine the position of the crack with respect to the local coordinate system of the cameras, driving the controller to move the robotic arm closer to the crack.
5 Ultra-sonic sensor
Ultrasonic sensor system design and operation on the robotic arm
The ultrasonic sensor is positioned on the actual tunnel crack automatically, and can be adjusted manually with high precision, in order to perform the measurement of the crack width with 0.1 mm accuracy, whereas an accuracy of a few mm is reached for the crack depth. That is, if the automated mechanism fails, joypad-based tele-operation is activated to readjust the placement. The robotic system is capable of placing the ultrasound sensor, carried by the robotic arm, with an accuracy of 5 cm around the crack. The system is also able to perform ultrasound surface velocity measurements on concrete.
Ultrasonic system diagram
6 Ground control station
6.1 The ground station units
Ground control station—mission preparation
Snapshots of the ground control station user interface, which display the remote planning and control interface of the mission. (a) The robot position is monitored on the top-view map of the tunnel and in a virtual reality environment; the real conditions of the tunnel are monitored via an IP camera; the detected anomalies are recorded and displayed on the lower right part of the snapshot. (b) The human inspector can intervene in the inspection process and annotate an image of the tunnel lining, either to correct a misclassified defect or to keep his own remark on the inspection
6.2 Decision support system
The ROBO-SPECT decision support system (DSS) is in effect the structural assessment tool and user interface, gathering and structuring the data collected by the robotic system to perform an overall assessment of the tunnel. The DSS provides a visual illustration used to determine the impact of various scenarios on structural safety. The civil engineers can then study the influence of the detected cracks on the structural condition of the lining. Additionally, the DSS assigns the positions of cracks to the relevant node to be studied by the structural assessment modules.
The DSS tool (3D model left and geometry assessment right)
7 Field validation and benchmarking
The ROBO-SPECT system inspecting the tunnel
7.1 Crane navigation performance
Motion performance scores recorded during the final trials
Control actuation hysteresis and errors recorded in final trials
An important issue to highlight is the autonomy of the robot, directly related to battery life. In our configuration, the batteries last for 5 h. Within this time interval, several procedures should take place, such as frame processing of visual information, 3D reconstruction modelling, placing ultrasound sensors on cracks, global positioning system calibration, etc. In our configuration, we need 3 h to process a tunnel of 500 m length.
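Treating the 3 h per 500 m figure as a constant rate, a back-of-the-envelope budget for one battery charge:

```python
def max_tunnel_length_per_charge(battery_h=5.0, hours_per_500m=3.0):
    """Rough inspection budget: with a 5 h battery and 3 h needed per
    500 m of tunnel, the length inspectable on a single charge.
    Assumes the inspection rate is constant, which is a simplification."""
    rate_m_per_h = 500.0 / hours_per_500m
    return battery_h * rate_m_per_h
```

Under this simplification, roughly 830 m of tunnel can be covered before recharging.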
7.2 Validation of the computer vision module
The computer vision module was validated and benchmarked on detecting cracks and other defects (delamination, spalling, joints with water leakage, etc.). For the validation, a set of different metrics was used, as described in Sect. 7.2.1, and different types of algorithms were compared, as shown in Sect. 7.2.2.
7.2.1 Performance metrics
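The metrics reported in the validation (ACC, PPV, TPR and F1) follow the standard confusion-matrix definitions:

```python
def metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics used in the validation:
    accuracy (ACC), precision (PPV), recall (TPR) and F1 score."""
    acc = (tp + tn) / (tp + fp + tn + fn)
    ppv = tp / (tp + fp) if tp + fp else 0.0
    tpr = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * ppv * tpr / (ppv + tpr) if ppv + tpr else 0.0
    return acc, ppv, tpr, f1
```

Here tp, fp, tn and fn count, per pixel or per patch, the true positives, false positives, true negatives and false negatives of the crack detector against the ground-truth annotation.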
7.2.2 Validation results
Validation results of the proposed computer vision method using different types of metrics
| Method | ACC | PPV | TPR | F1 |
|---|---|---|---|---|
| CNN | 0.637 | 0.720 | 0.720 | 0.494 |
| Classification trees | 0.622 | 0.300 | 0.300 | 0.281 |
| Feedforward neural networks | 0.752 | 0.012 | 0.012 | 0.024 |
| Linear discriminant analysis | 0.533 | 0.463 | 0.463 | 0.328 |
Different results of crack detection using the CNN model. The right part of each column shows the original image, while the left part shows the detected cracks. However, only the areas within the red bounding boxes are considered for the final crack assessment
Regarding the computational complexity of the algorithm, the proposed computer vision scheme requires 33 s to process an image frame. This figure was measured on a laptop with an Intel i7 (6700 series) processor at 2.6 GHz, 16 GB of memory and an NVIDIA 950M-series GPU. Clearly, better performance can be achieved if a dedicated embedded interface is used to perform the computer vision task; for example, an FPGA approach could be used to accelerate the computer vision module.
7.2.3 Other defect identification performance
Defect detection results for calcium leaching for the Malakasi tunnel in Egnatia
7.3 Laser scanner performance
The FARO Focus3D laser scanner was utilized to scan slices of tunnels, in order to extract the precise geometry of the tunnel inner surface and detect possible deformations. In order to evaluate the accuracy and precision of the FARO Focus3D, a number of experimental surveys were first performed in the laboratory. Regarding the laser scanner accuracy, "on-site" calibration methods based on planar feature identification were employed. The standard deviation of the distance of scan points from the calculated plane is used to discover whether the measurement data sets are contaminated with high-level noise or significant systematic errors. A total of eleven planes were detected in the field of view of the three scan sets. The scan point distance varies from 1.36 to 6.53 mm with a mean value of 3.53 mm. The measured standard deviation clearly exceeds the accuracy value provided by the manufacturer. On the basis of these calculations, the FARO Focus3D appears reliable for the detection of object features whose size exceeds the calculated mean error value.
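The planar calibration check described above amounts to fitting a plane to the scan points and examining the spread of the residuals; a simplified sketch using vertical rather than orthogonal point-to-plane distances:

```python
import numpy as np

def plane_residual_std(points):
    """Fit a plane z = a*x + b*y + c by least squares and return the
    standard deviation of the residuals: the noise indicator used in
    the 'on-site' calibration check. Uses vertical distances for
    simplicity; the true check uses orthogonal distances."""
    pts = np.asarray(points, float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coef, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    residuals = pts[:, 2] - A @ coef
    return residuals.std()
```

A residual standard deviation well above the manufacturer's stated accuracy, as observed here (mean 3.53 mm), signals either measurement noise or systematic error in the scan set.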
7.4 Robotic arm performance
The high-precision robotic arm movement was also tested and validated by actually positioning the ultrasonic measurement apparatus on the crack to be measured. Placing the ultrasonic sensors on the tunnel wall takes about 20–30 s on average. This time depends strongly on the geometry of the target point, as the robotic arm must first perform a 3D laser scan of the surface in order to move without colliding with the walls. After the ultrasonic measurements are completed, the robotic arm returns to its home position along the same trajectory in reverse. As this step requires no laser scanning, it is much faster, taking about 7 s on average.
Regarding the accuracy of the robotic arm positioning system, three different accuracies must be distinguished. First, the accuracy of the robotic arm itself is 0.2 mm, owing to its industrial-grade characteristics. The second refers to the accuracy of the isolated robotic arm system including all its processes (laser scanning, trajectory computation, execution); this accuracy was demonstrated in a laboratory environment and reaches 3–5 mm.
Finally, the end-to-end accuracy of the complete system, taking into account all subsystems (vehicle, crane, vision, arm), was measured to be about 5 cm between the tip of the ultrasonic frame placed by the robotic arm and the crack point detected by the camera algorithms. This value reflects the accumulated errors along the whole chain: vehicle position, crane encoders, vision, laser and arm components.
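How such component errors compound can be illustrated by combining independent error sources in quadrature (root sum of squares); the per-subsystem figures below are hypothetical placeholders, not the measured ROBO-SPECT values.

```python
import math

def rss(errors_mm):
    """Root-sum-square combination of independent, zero-mean error sources."""
    return math.sqrt(sum(e * e for e in errors_mm))

# Hypothetical 1-sigma errors (mm) along the chain: vehicle position,
# crane encoders, vision, laser scan, robotic arm
chain = [30.0, 25.0, 20.0, 4.0, 0.2]
total = rss(chain)  # a few centimetres, dominated by the largest terms
```

This also makes clear why the 0.2 mm arm accuracy is irrelevant to the end-to-end figure: the total is dominated by the coarsest subsystems in the chain.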
Characteristic photo of the tip approaching the crack to take ultrasonic measurements automatically
One limitation of the system concerns the length of the robotic arm. As the arm has 7 degrees of freedom, it can reach any position in several configurations; however, the reach at which it can place the ultrasonic sensors is limited to 0.8–1 m. Relaxing this length constraint would allow the vehicle crane to be positioned further from the wall, moving shorter distances and increasing the speed of the overall inspection. However, lengthening the robotic arm would reduce the accuracy of the system, and this trade-off must be taken into account.
7.5 Ultrasonic measurements
CAD representation of cracks measured during the “manned” inspection of Metsovo tunnel by Egnatia Road Engineers
For the depth measurements, the results were compared with manual measurements performed with the time-of-flight (ToF) method using a hand-held system. In these manual measurements, the ultrasonic sensors are used with an ultrasound gel to improve transmission into the concrete. The results obtained with the automatic system are fully comparable with those obtained manually. This comparison is not always favourable for the manual system, as the quality of the gel and the condition of the concrete surface can introduce errors into the manual ultrasonic measurement.
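The depth computation behind such a ToF measurement can be sketched as follows. This is a minimal illustration assuming the common symmetric transducer arrangement across the crack mouth, with the pulse diffracting around the crack tip; the pulse velocity, spacing and reading below are invented values, not the project's actual configuration.

```python
import math

def crack_depth(tof_us, velocity_m_per_s, half_spacing_mm):
    """Estimate crack depth from ultrasonic time of flight.

    Assumes transmitter and receiver placed symmetrically at
    `half_spacing_mm` on either side of the crack, so the pulse path
    around the crack tip is 2 * sqrt(x^2 + h^2), giving
    h = sqrt((v*t/2)^2 - x^2).
    """
    half_path_mm = velocity_m_per_s * tof_us * 1e-6 / 2 * 1000  # v*t/2 in mm
    return math.sqrt(half_path_mm ** 2 - half_spacing_mm ** 2)

# Example: 4000 m/s pulse velocity in sound concrete, 50 mm half-spacing,
# 55.9 microseconds measured transit time -> depth of roughly 100 mm
depth = crack_depth(tof_us=55.9, velocity_m_per_s=4000, half_spacing_mm=50)
```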
Considering the operational requirements, the automatic sensor system can be operated both in fully automatic mode, in which the robot positions it on the crack autonomously, and in semi-automatic mode, in which an operator pilots it remotely with the aid of a camera mounted on the sensor system. In the remote piloting mode, a very high placement precision on the crack, of the order of 1 mm, is reached.
In conclusion for the ultrasonic system, the width measurement method was found to be neither truly automatic nor reliable, as it requires the slow and complicated insertion of a steel spike, mounted in front of the robotic tip, into the crack. The crack depth measurement was found to be reliable and precise, free from the errors that may be introduced during the manual procedure, and faster once the robotic tip has approached the crack location.
7.6 System limitations
Despite the many advanced capabilities of the ROBO-SPECT robotic platform, several limitations exist. These limitations concern different components of the platform and are analysed in the following.
7.6.1 Limitations of the robotic platform module
Tunnels are large infrastructures with curved cross-sections, which makes their inspection a difficult task. Our platform reaches a low average inspection speed of about 2 km/h, which in turn increases the time needed for the inspection. Research should be carried out to improve the vehicle speed while keeping the indoor positioning and obstacle avoidance procedures accurate and precise.
In addition, due to the many subcomponents of the robotic platform, the overall navigation error is of the order of a few centimetres. Research is needed to compensate for this error and thus improve the overall system performance.
7.6.2 Limitations of the computer vision module
Detecting cracks and other defects in a tunnel in real time using computer vision tools is a very arduous and challenging task, mainly due to the visual complexity of the tunnel environment. Cracks are tiny fissures that can easily be confused with other structures, artificial or not, in the tunnel. In addition, they are hardly visible, especially when they must be detected from several metres away, that is, the distance of the robotic platform from the tunnel wall. Our system reaches an accuracy of about 65%, which is relatively adequate under such complex conditions. However, significant research effort is needed to improve this performance, especially for tunnels with different lighting and constructional conditions.
Another important limitation is the time needed to process the captured images to determine whether a crack exists. Due to the complexity of the problem, a single classifier is not adequate to effectively discriminate cracks from non-cracks. For this reason, post-processing algorithms are applied, employing connectivity analysis across the visually detected components that could potentially be cracks. The overall time achieved in our case is 33 s per frame using a powerful computer system (see Sect. 7.2.2). This time could be significantly reduced if dedicated hardware such as FPGA components were used. The overall inspection time would then greatly decrease, making the platform more exploitable.
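A minimal sketch of such connectivity-based filtering is shown below; the 4-connectivity choice, the component-size threshold and the toy mask are illustrative assumptions, not the project's actual post-processing pipeline.

```python
from collections import deque

def filter_small_components(mask, min_size):
    """Keep only connected components (4-connectivity) of at least
    `min_size` pixels in a binary mask given as a list of lists of 0/1."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # Flood-fill one component with BFS
                comp, queue = [], deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                # Elongated crack-like components survive; isolated
                # classifier responses (likely noise) are discarded
                if len(comp) >= min_size:
                    for cy, cx in comp:
                        out[cy][cx] = 1
    return out

# A thin 5-pixel "crack" in the top row survives; the two isolated
# single-pixel responses in the bottom row are dropped as noise
mask = [
    [1, 1, 1, 1, 1],
    [0, 0, 0, 0, 0],
    [1, 0, 0, 0, 1],
]
cleaned = filter_small_components(mask, min_size=3)
```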
Our computer vision module is applicable only to concrete tunnels. For other tunnel types, such as brick-lined ones, too many artificial structures would be visible, making the problem more difficult and decreasing the overall system performance.
7.6.3 Limitations of the ultrasound module
The key problem with this sensor is that it must be placed very close to the crack to take in situ measurements. This complicates the navigation problem, owing to the height of the tunnels and the distance of the cracks from the robot vehicle. Research effort is needed in this area to improve the module, so that it can take reliable measurements even when the cracks are not located accurately by the computer vision module and the robotic arm fails to touch them precisely.
8 Conclusions
A robotic platform capable of performing structural assessment in tunnels has been presented. The described work is the outcome of the ROBO-SPECT European Union funded research project. The platform is a multidisciplinary and multimodal approach exploiting state-of-the-art techniques from intelligent robotics, computer vision and machine learning. The holistic solution utilizes multiple sensors, i.e., RGB cameras, laser scanners and ultrasonic sensors, in order to inspect transportation tunnels autonomously, analyse potential defects and survey cross-section deformations, with emphasis on crack detection.
Multiple challenges have to be dealt with simultaneously. The first is robot localization in underground spaces and along long linear paths. The second comprises the 2D and 3D vision tasks, which must cope with poor and variable lighting conditions, low-textured lining surfaces and the need for high accuracy. The third is the millimetre-accurate positioning of a robotic tip installed on a five-ton crane vehicle.
The results obtained during the validation and benchmarking phase of the system, at the operating Egnatia Motorway tunnels in northern Greece, suggest that the ROBO-SPECT platform is capable of autonomous concrete lining tunnel inspection. The modular design of the platform allows for further improvement of any part, as well as the addition of further sensors.
The proposed architecture can be improved in terms of the time needed for crack detection, the accuracy of the detection process, the robustness of the algorithms across different types of tunnels, the speed of the robotic platform, the accumulated errors of the arm and crane, etc. For instance, the inclusion of FPGA structures can reduce the processing time per frame, and research on the fabrication of the ultrasonic sensor can make it less sensitive to the distance from the crack, so that reliable in situ measurements can be taken.
Acknowledgements
The research leading to the results described above has received funding from the EC FP7-ICT project ROBO-SPECT (Contract No. 611145). The authors would like to thank all partners within the ROBO-SPECT consortium.
References
- Arel, I., Rose, D.C., Karnowski, T.P.: Deep machine learning—a new frontier in artificial intelligence research [Research Frontier]. IEEE Comput. Intell. Mag. 5(4), 13–18 (2010)
- Botelho, F.: A light at the end of the tunnel. Public Roads 65(1) (2001)
- Brosnan, T., Sun, D.W.: Improving quality inspection of food products by computer vision—a review. J. Food Eng. 61(1), 3–16 (2004)
- Brownjohn, J.M.W.: Structural health monitoring of civil infrastructure. Philos. Trans. R. Soc. Lond. A Math. Phys. Eng. Sci. 365(1851), 589–622 (2007)
- Delatte, Jr., N.J.: Beyond failure. Forensic case studies for civil engineers (2009)
- Frangopol, D.M., Liu, M.: Maintenance and management of civil infrastructure based on condition, safety, optimization, and life-cycle cost. Struct. Infrastruct. Eng. 3(1), 29–41 (2007)
- Fujita, Y., Mitani, Y., Hamamoto, Y.: A method for crack detection on a concrete structure. Proc. 18th Int. Conf. Pattern Recogn. 3, 901–904 (2006)
- Georgousis, S., Stentoumis, C., Doulamis, N., Voulodimos, A.: A hybrid algorithm for dense stereo correspondences in challenging indoor scenes. IEEE International Conference on Imaging Systems and Techniques (IST) 2016, Oct., Chania, Greece (2016)
- Gutchess, D., Trajkovics, M., Cohen-Solal, E., Lyons, D., Jain, A.K.: A background model initialization algorithm for video surveillance. IEEE Int. Conf. Comput. Vis. (ICCV) 1, 733–740 (2001)
- Huet, C., Mastroddi, F.: Autonomy for underwater robots—a European perspective. Auton. Robots 40(7), 1113–1118 (2016)
- Ibrahim, Y.M., Lukins, T.C., Zhang, X., Trucco, E., Kaka, A.P.: Towards automated progress assessment of workpackage components in construction projects using computer vision. Adv. Eng. Inform. 23(1), 93–103 (2009)
- Jahanshahi, M.R., Masri, S.F.: Adaptive vision-based crack detection using 3D scene reconstruction for condition assessment of structures. Autom. Constr. 22, 567–576 (2012)
- Jeong, D.H., Kim, Y.R., Cho, I.-S., Kim, E.J., Lee, K.M., Jin, K.W., Song, C.G.: Real-time image scanning system for detecting tunnel cracks using linescan cameras. J. Korea Multimed. Soc. 10(6) (2007)
- Klammer, D.M., Bauer, F., Dietzel, C., Köhler, M., Leis, S.: Thaumasite Formation from Sulphate Attack (TSA). Case Study at Austrian Tunnel Sites. www.dmg-home.de/DMG-CD/filedir/365_abstract.pdf. Accessed 12 Feb 2012
- Koch, C., Paal, S.G., Rashidi, A., Zhu, Z., König, M., Brilakis, I.: Achievements and challenges in machine vision-based inspection of large concrete structures. Adv. Struct. Eng. 17(3), 303–318 (2014)
- Loupos, K., Amditis, A., Chrobocinski, P., Montero, R., Belsito, L., Lopez, R., Doulamis, N.: Autonomous robot for tunnel inspection and assessment. 6th International Symposium on Tunnels and Underground Structures in SEE (Urban Underground Structures in Karst), Radisson Blu Resort, Split, Croatia, March 16–18 (2016)
- Loupos, K., Amditis, A., Stentoumis, C., Chrobocinski, P., Victores, J., Wietek, M., Panetsos, P., Roncaglia, A., Camarinopoulos, S., Kallidromitis, V., Bairaktaris, D., Komodakis, N., Lopez, R.: Robotic intelligent vision and control for tunnel inspection and evaluation—the ROBINSPECT EC project. IEEE International Symposium on Robotic and Sensors Environments, October 16–18, Timisoara, Romania (2014)
- Loupos, K., Amditis, A., Stentoumis, C.: Integrated robotic system for tunnel structural assessment—the ROBO-SPECT EC project. World Tunnel Congress (2015)
- Makantasis, K., Protopapadakis, E., Doulamis, A., Doulamis, N., Loupos, C.: Deep convolutional neural networks for efficient vision based tunnel inspection. In: 2015 IEEE International Conference on Intelligent Computer Communication and Processing (ICCP), pp. 335–342 (2015)
- Metta, G., Fitzpatrick, P., Natale, L.: YARP: yet another robot platform. Int. J. Adv. Robot. Syst. 3(1), 43–48 (2006)
- Montero, R., Victores, J.G., Martínez, S., Jardón, A., Balaguer, C.: Past, present and future of robotic tunnel inspection. Autom. Constr. 59, 99–112 (2015)
- Paar, G., Kontrus, H.: Three-dimensional tunnel reconstruction using photogrammetry and laser scanning. 3rd Nordost, 9. Anwendungsbezogener Workshop zur Erfassung, Modellierung, Verarbeitung und Auswertung von 3D-Daten, Berlin (2006)
- Protopapadakis, E., Doulamis, N.: Image based approaches for tunnels' defects recognition via robotic inspectors. In: Bebis, G., Boyle, R., Parvin, B., Koracin, D., Pavlidis, I., Feris, R., McGraw, T., Elendt, M., Kopper, R., Ragan, E., Ye, Z., Weber, G. (eds.) Advances in Visual Computing, pp. 706–716. Springer, Berlin (2015)
- Protopapadakis, E., Makantasis, K., Kopsiaftis, G., Doulamis, N.D., Amditis, A.: Crack identification via user feedback, convolutional neural networks and laser scanners for tunnel infrastructures. In: RGB-SpectralImaging, Rome (2016)
- Quigley, M., Conley, K., Gerkey, B., Faust, J., Foote, T., et al.: ROS: an open-source robot operating system. In: ICRA Workshop on Open Source Software, vol. 3, no. 3.2, p. 5 (2009)
- Rudol, P., Doherty, P.: Human body detection and geolocalization for UAV search and rescue missions using color and thermal imagery. IEEE Aerospace Conference (2008)
- Soga, K., Chaiyasarn, K., Viola, F., Yan, J., Seshia, A., Cipolla, R.: Innovation in monitoring technologies for underground structures. In: Proceedings of the 1st Int. Conf. Information Technology in Geo-Engineering (ICITG), Shanghai, IOS Press, pp. 3–18 (2010)
- Stentoumis, C., Amditis, A., Karras, G.: Census-based cost on gradients for matching under illumination differences. IEEE International Conference on 3D Vision, Lyon, pp. 224–231 (2015)
- Stentoumis, C., Protopapadakis, E., Doulamis, A., Doulamis, N.: A holistic approach for inspection of civil infrastructures based on computer vision techniques. In: ISPRS—International Archives of the Photogrammetry, Remote Sensing and Spatial Information, pp. 131–138 (2016)
- Stentoumis, C., Grammatikopoulos, L., Kalisperakis, I., Karras, G.: On accurate dense stereo-matching using a local adaptive multi-cost approach. ISPRS J. Photogramm. Remote Sens. 91, 29–49 (2014). doi:10.1016/j.isprsjprs.2014.02.006
- Sulibhavi, G.R., Parks, W.A.: Advanced methods for tunnel assessment. In: Proceedings of the World Tunnel Congress 2007 and 33rd ITA/AITES Annual General Assembly, Prague (2007)
- Sumitro, S., Okam, T., Inaudi, D.: Intelligent sensory technology for health monitoring based maintenance of infrastructures. 11th SPIE Annual International Symposium on Smart Structures and Materials, March 14–18, San Diego, USA (2004)
- Victores, J.G., Martínez, S., Balaguer, C.: Robot-aided tunnel inspection and maintenance system by vision and proximity sensor integration. Autom. Constr. 20(5), 629–636 (2011)
- Voulodimos, A., Kosmopoulos, D., Vasileiou, G., Sardis, E., Anagnostopoulos, V., Lalos, C., Doulamis, A., Varvarigou, T.: Large-scale multimedia data collections: a threefold dataset for activity and workflow recognition in complex industrial environments. IEEE Multimedia Magazine, pp. 42–52, July–September (2012)
- Wang, X.: Intelligent multi-camera video surveillance: a review. Pattern Recogn. Lett. 34(1), 3–19 (2013)
- Yao, F.-H., Shao, G.-F., Yamada, H., Kato, K.: Development of an automatic concrete-tunnel inspection system by an autonomous mobile robot. 9th IEEE International Workshop on Robot and Human Interactive Communication (2009)
- Yoon, J.-S., Sagong, M., Lee, J.S., Lee, K.-S.: Feature extraction of a concrete tunnel liner from 3D laser scanning data. NDT E Int. 42(2), 97–105 (2009)
- Yu, S., Jang, J.-H., Han, C.-S.: Auto inspection system using a mobile robot for detecting concrete cracks in a tunnel. Autom. Constr. 16, 255–261 (2007a)
- Yu, S.-N., Jang, J.-H., Han, C.-S.: Auto inspection system using a mobile robot for detecting concrete cracks in a tunnel. Autom. Constr. 16(3), 255–261 (2007b)