Abstract
In this paper, we present an external pipeline inspection robotic system capable of detecting surface defects on above-ground pipelines and modeling their degradation. The system consists of two subsystems: the main base and the sensing system. The main base is a self-driving autonomous ground vehicle (AGV) equipped with Lidar, acoustic sensors, and motor encoders, which can track the pipeline over uneven terrain. The sensing system includes two cameras attached to a C-arm that rotates around the pipe. The cameras are placed 180° from each other and take pictures at 90° intervals, allowing the system to analyze the full 360° of the pipe with only half the rotation required by a single-camera system. When the C-arm encounters a flange or support, it retracts off the pipe, moves past the obstacle, and extends back to continue taking images. These images are used for defect detection, and the detected defect data is then used to model defect degradation and predict when defects become critical and require maintenance.
Keywords
- External pipeline inspection
- Defect detection
- Self-driving
- Autonomous ground vehicle
- Degradation modeling
1 Introduction
There are around half a million miles of high-volume pipeline transporting natural gas, oil, and other hazardous liquids across the United States [1]. The nation's pipeline networks are widespread, running through both remote and densely populated regions, some above ground and some below. These systems are vulnerable to accidents and terrorist attacks, and as the network has expanded, the number of incidents has grown with pipeline mileage. In the last three years, there has been an annual average of 353 reported pipeline accidents [1], amounting to a loss of over 87,000 barrels of oil and over $460,000,000 in property and environmental damage [1].
Stress corrosion and defects accounted for the majority of accidents on transmission pipelines from 1991–2005, as shown by the Lithuanian Energy Institute (LEI) [2]. Stress cracks form under the combined influence of stress from the pipeline's pressurized contents and a corrosive medium. Interlinking crack clusters form over time and eventually lead to pipeline failure. It was found that external defects are eight percent more common than internal defects.
Using advanced imaging techniques supported by an autonomous robotic system may offer a practical way to detect defects on pipelines before failure. In the present work, we aim to develop a robotic system that identifies pipeline defects, flags them for operator review, and predicts their degradation. We have designed and built an autonomous ground vehicle (AGV) (Fig. 1) capable of identifying the relative position of a pipeline and adjusting itself to the proper range and angle to scan the pipe for defects. After cracks and defects are identified, a commonly used degradation model, together with industry-standard thresholds for when a crack is considered critical and requires maintenance, forecasts the expected growth of the defects over time. A variety of sensors were used in this robotic system, including ultrasonic proximity sensors, an RPLIDAR system, a thermal camera, and two optical cameras.
2 System Design
2.1 Physical Design
The main base of the robotic system consists of a suspension system comprising 70 custom aluminum parts. The design is based on a double wishbone suspension, shown in Fig. 2, which enables the mobile platform to move over multiple terrains. Additionally, the system uses two 24 V motors to power the driving subsystem. They are capable of producing approximately 6 N·m of torque, enabling the system to negotiate more rugged terrain.
The second subsystem is the sensing unit, housed in a structure referred to as the C-clamp (seen in Fig. 3). The C-clamp is driven by a linear actuator and uses ultrasonic sensors to detect obstacles along the pipeline. When the system moves too close to a flange or support, the sensor triggers the machine to stop, retract the arm off the pipe, move forward past the obstacle, and extend back out to resume scanning. If no support is detected, the system moves in three-inch increments while a Raspberry Pi signals the other controllers to take pictures or rotate the C-clamp.
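The stop-retract-advance-extend sequence above can be sketched as a simple control loop. The hardware interface functions (`read_ultrasonic`, `set_actuator`, `drive_inches`, `capture_images`) and the obstacle trigger distance are hypothetical placeholders, not the system's actual driver API:

```python
# Sketch of the obstacle-avoidance cycle around pipeline supports and flanges.
# Hardware calls (read_ultrasonic, set_actuator, drive_inches, capture_images)
# are hypothetical placeholders, not the system's actual driver code.

SCAN_STEP_IN = 3    # the pipe is scanned in three-inch increments
OBSTACLE_CM = 10    # hypothetical ultrasonic trigger distance for a flange/support

def scan_cycle(read_ultrasonic, set_actuator, drive_inches, capture_images):
    """Run one scan step, detouring around an obstacle if one is detected."""
    if read_ultrasonic() < OBSTACLE_CM:
        set_actuator("retract")         # pull the C-arm off the pipe
        while read_ultrasonic() < OBSTACLE_CM:
            drive_inches(SCAN_STEP_IN)  # drive past the support or flange
        set_actuator("extend")          # re-engage the pipe
    drive_inches(SCAN_STEP_IN)          # advance to the next scan position
    capture_images()                    # signal the camera Pis to shoot
```

Passing the hardware functions as parameters keeps the control logic testable without the robot attached.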
Two Raspberry Pis are used on the system, both programmed in Python. The first Raspberry Pi controls a digital camera, receiving a serial command whenever the cameras need to take pictures. The captured images are processed in MATLAB using user-defined functions, and the results are stored in an Excel file. This file is used later for analysis of the pipeline, and a user can access it through the GUI. The second Raspberry Pi controls a second digital camera as well as the RPLIDAR. It continuously communicates with the Arduino responsible for driving to share obstruction information and the correction factor, which is explained in further detail below. The Arduino connected to the second Raspberry Pi controls both drive motors as well as the linear actuator used to extend and retract the C-arm. It also obtains the readings from the ultrasonic sensors used to detect pipeline supports and flanges. A second Arduino receives this signal over serial to time the movement of the C-clamp using a stepper motor driver. The overall flow of the system's software can be seen in Fig. 4.
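Because the correction factor travels from the Raspberry Pi to the Arduino over a serial link, it must be packed into bytes the Arduino can read cheaply. One hypothetical encoding (the paper does not specify its wire protocol) maps the factor's working range of [0.50, 2.00] onto a single byte:

```python
# Hypothetical single-byte encoding for the Pi-to-Arduino serial link.
# The linear byte mapping is an illustration, not the paper's actual protocol.

F_MIN, F_MAX = 0.50, 2.00   # correction-factor range used by the system

def encode_factor(f):
    """Pack a correction factor into one byte (0-255) for serial transmission."""
    f = min(max(f, F_MIN), F_MAX)          # clamp into the working range
    return round((f - F_MIN) / (F_MAX - F_MIN) * 255)

def decode_factor(b):
    """Arduino-side inverse of encode_factor (shown in Python for symmetry)."""
    return F_MIN + b / 255 * (F_MAX - F_MIN)
```

The round trip loses at most half a quantization step (about 0.003), well below the noise already smoothed out of the Lidar readings.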
2.2 Automation
The RPLIDAR A2 is a Lidar system with a spinning laser rangefinder that maps distances in a single plane, as shown in Fig. 5, at a refresh rate of 4000 samples per second. Connected to the Raspberry Pi through a serial adapter, it is integrated using Python. The RPLIDAR serves two purposes: obstacle detection, and maintaining distance and parallelism to the pipe. If the Lidar detects a point within a predetermined distance threshold, whether in front of, behind, or beside the system, it sends a signal to stop driving. The Lidar also sends a correction factor to the Arduino that controls the driving motors. This correction factor is defined as the ratio of the distance at a given angle to the correct hypotenuse at that angle when the system is parallel, smoothed over successive measurements:

f_n = (1 − α)·(d·cos θ / d_p)^s + α·f_(n−1)

where f_n is the nth correction factor, d is the distance at angle θ taken by the Lidar for θ ∈ [30°, 50°], d_p is the perpendicular distance to the pipe (d for θ = 0° ± 5°), s is the scale factor, and α is the smoothing coefficient. Note that for θ ∈ [330°, 350°], the first term is inverted (s is negative). If the system is travelling parallel to the pipe, the correction factor equals 1.00; if it is headed towards the pipe, the correction factor is greater than one, and vice versa. The smoothing prevents rapid overcorrection or system instability from faulty measurements; empirically, α = 0.75 was chosen. Once the correction factor is calculated by the Raspberry Pi, it is sent to the Arduino, which multiplies the inner drive wheel's PWM value by it and the outer wheel's PWM value by its inverse. This feedback allows the system to maintain its parallelism with the pipe. If this difference needs to be exaggerated or diminished, s can be increased or decreased, but s = 1 was found to work well. Note that for all values, f ∈ [0.50, 2.00]; otherwise the scaled PWM values sent to the motors would exceed 255 (the maximum at their resolution).
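The correction loop can be sketched in Python as follows. This is a minimal sketch assuming exponential smoothing of the raw ratio with α = 0.75 and the clamp range f ∈ [0.50, 2.00]; the function names and structure are illustrative, not the system's actual code:

```python
import math

ALPHA = 0.75               # smoothing coefficient chosen empirically
F_MIN, F_MAX = 0.50, 2.00  # clamp so the scaled PWM values stay within 0-255

def correction_factor(d, theta_deg, d_p, f_prev, s=1.0):
    """Smoothed, clamped parallelism correction factor (sketch).
    d: Lidar distance at angle theta_deg; d_p: perpendicular distance to pipe;
    f_prev: previous factor; s: scale factor (negative s inverts the ratio)."""
    # ratio of the measured distance to the parallel-case hypotenuse d_p / cos(theta)
    ratio = d * math.cos(math.radians(theta_deg)) / d_p
    f = ALPHA * f_prev + (1 - ALPHA) * ratio ** s
    return min(max(f, F_MIN), F_MAX)

def wheel_pwm(base_pwm, f):
    """Scale the inner wheel's PWM by f and the outer wheel's by 1/f."""
    return min(int(base_pwm * f), 255), min(int(base_pwm / f), 255)
```

When the robot is parallel, d equals d_p/cos θ, the ratio is 1, and both wheels receive the base PWM value unchanged.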
2.3 Crack Identification
After the images are taken by the digital cameras, both the section of pipe scanned and the angle at which the images were taken (Fig. 6) are recorded.
Each image is processed in three phases: crack detection, degradation modeling, and reliability analysis. During the first phase, the digital pictures are analyzed using several image processing techniques, shown in Fig. 7.
After data processing, the cracks (Fig. 8) are identified and measurements of the crack, such as its length and area, are extracted.
For these results to be accurate, it is critical that the pictures be taken under consistent, bright lighting, supplied by an attached LED strip (Fig. 6), and with consistent spacing between the pipe and all three cameras. This keeps the scale of all three images the same, which is necessary for the crack dimensions to be accurate and repeatable.
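The paper's image processing chain runs in MATLAB (Fig. 7) and is not reproduced here, but the final measurement step can be sketched in Python with NumPy: threshold the grayscale image, then take the crack's pixel area and bounding-box length. The threshold value and the pixel-to-millimeter scale below are hypothetical, not the system's calibration:

```python
import numpy as np

def measure_crack(gray, threshold=60, mm_per_px=0.5):
    """Return (length_mm, area_mm2) of dark crack pixels in a grayscale image.
    A sketch of the measurement step only; the threshold and scale are
    assumed values, and the actual MATLAB chain is more involved (Fig. 7)."""
    mask = gray < threshold                 # cracks appear darker than the pipe
    if not mask.any():
        return 0.0, 0.0                     # image dismissed: no degradation
    rows, cols = np.nonzero(mask)
    # approximate crack length by the longer side of its bounding box
    length_px = max(rows.max() - rows.min(), cols.max() - cols.min()) + 1
    area_px = mask.sum()
    return length_px * mm_per_px, area_px * mm_per_px ** 2
```

The early return implements the dismissal step described in Sect. 3: images with no dark pixels below the threshold are reported as crack-free.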
2.4 Degradation Modeling
If the path along which the crack length propagates is considered random and normally distributed, it may be modeled as a form of Brownian motion. At any point in the future, the probability that the crack has not yet reached the threshold is given by the CDF of the normal distribution, Φ. This yields the reliability of the pipeline at time t as

R(t) = Φ((D* − D(0) − μt) / (σ_D √t))

where D(0) is the initial value, μ is the drift parameter, σ_D is the diffusion parameter, and D* is the failure threshold level [3].
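Assuming the standard Brownian-motion degradation form from [3], the reliability can be evaluated with the normal CDF via `math.erf`; a minimal sketch with the parameter names mirroring the text:

```python
import math

def reliability(t, d0, mu, sigma, d_star):
    """R(t): probability that the crack length D(t), starting at D(0) = d0 and
    following Brownian motion with drift mu and diffusion sigma, has not
    reached the failure threshold d_star. Uses the normal CDF via math.erf."""
    if t <= 0:
        return 1.0 if d0 < d_star else 0.0
    z = (d_star - d0 - mu * t) / (sigma * math.sqrt(t))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

With positive drift, R(t) starts near 1 and decays toward 0 as the mean degradation path μt approaches the threshold margin D* − D(0).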
Using the initial crack lengths stored, predictions can be made for the crack growth over time, an example of which can be seen in Fig. 9. Additionally, a threshold can be designated within the code to indicate when the pipe will require maintenance or when it is in danger of failure.
Because this process is stochastic, different iterations on the same data yield different conclusions. To account for this, the calculation is repeated 1000 times and the results grouped together for a more accurate representation, seen in Fig. 10. The time at which crack growth crosses the threshold is known to follow the inverse Gaussian distribution, as the histogram suggests. The remaining useful life of the pipeline is calculated as the average of the failure times in the histogram.
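The Monte Carlo step can be sketched as follows. The Euler-discretized random walk, the step size, and the parameter values are illustrative, not the paper's actual code:

```python
import random
import statistics

def simulate_failure_time(d0, mu, sigma, d_star, dt=1.0, t_max=10_000):
    """First time a simulated crack-length path crosses the threshold d_star."""
    d, t = d0, 0.0
    while d < d_star and t < t_max:
        # Euler step of Brownian motion with drift: dD = mu*dt + sigma*dW
        d += mu * dt + sigma * random.gauss(0.0, 1.0) * dt ** 0.5
        t += dt
    return t

def remaining_useful_life(d0, mu, sigma, d_star, n=1000, seed=0):
    """Average of n simulated first-passage times, as in the histogram step."""
    random.seed(seed)                      # reproducible runs
    times = [simulate_failure_time(d0, mu, sigma, d_star) for _ in range(n)]
    return statistics.mean(times)
```

Plotting the simulated `times` as a histogram reproduces the inverse-Gaussian shape described above, and their mean is the remaining-useful-life estimate.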
For easy access to the information generated above, a simple graphical user interface (GUI) is provided; its home screen is displayed below in Fig. 11. On the left of the figure are three functions, Cracks, Image Processing, and Degradation Analysis, each activated by a push button. From this GUI, a user can analyze the collected images, track them over time, and find the remaining useful life of the pipeline. Infographics and instructions help users understand the data and use it in manageable ways.
3 Validation
The system was tested under various conditions to evaluate its capabilities. It was driven in an outdoor environment to test its terrain capability, where it performed very well. Since a sufficiently long pipe was not available, a wall was used to simulate a long pipeline for the Lidar correction described above. The system was able to correct itself along the wall, even with significant interference such as an operator intentionally moving it out of alignment.
To test the cameras along a pipe, a four-inch PVC pipe consisting of two sections with supports was used, as seen in Fig. 12. The system was able to iterate scans along the pipeline and avoid the middle support. Cracks were simulated in the pipe using a small hand saw, and many images were taken to test the image processing code. All of the artificial cracks were identified, and pictures devoid of signs of degradation were successfully dismissed and separated.
4 Conclusion
Pipelines are the primary means of oil transportation. Although they are the most cost-effective way for oil companies to transfer their product, pipelines are too often subject to failure. These failures can cause deaths, environmental disasters, and billions of dollars in long-run costs. Cracks and other types of degradation are the first signs of potential failure. The system presented here is an autonomous robot capable of detecting cracks on pipes. Its ability to maintain parallelism and navigate around supports minimizes the need for an operator, and its suspension system allows it to travel on the rough terrain where pipelines are generally located while taking accurate pictures. A simple GUI gives any user easy access to the collected data as well as the forecasting model. An easy-to-use machine that can forecast the remaining useful life of a pipe could save millions of dollars as well as entire ecosystems.
References
Parfomak, P.W.: Keeping America’s Pipelines Safe and Secure: Key Issues for Congress. DIANE Publishing (2012)
Dundulis, G., Grybanas, A., Janulionis, R., Kriakiena, R., Rimkevicius, S.: Degradation mechanisms and evaluation of failure of gas pipelines. Mechanics 21(5), 352–360 (2015)
Elsayed, E.A.: Reliability Engineering, 2nd edn. Wiley, New York (2012)
Huang, X.P., Moan, T., Cui, W.: Fatigue crack growth under variable-amplitude loading. In: Schijve, J. (ed.) Fatigue of Structures and Materials, pp. 329–369. Springer, Heidelberg (2009)
© 2017 Springer International Publishing AG
Costa, J. et al. (2017). Autonomous Robotic System for Pipeline Integrity Inspection. In: Duffy, V. (eds) Digital Human Modeling. Applications in Health, Safety, Ergonomics, and Risk Management: Health and Safety. DHM 2017. Lecture Notes in Computer Science(), vol 10287. Springer, Cham. https://doi.org/10.1007/978-3-319-58466-9_30
Print ISBN: 978-3-319-58465-2
Online ISBN: 978-3-319-58466-9