1 Introduction

The advancement of wind turbine technology has raised the bar for crane efficiency, lifting precision, real-time detection, and precise positioning control in wind power construction projects as well as in the installation and maintenance of wind turbines. At present, the installation method commonly used in China is to mount the nacelle on the tower first, assemble the blades and hub on the ground, and then lift the blade–hub assembly to height with large cranes for installation. During lifting, multiple cranes must work together, and the position of the blade–hub assembly is adjusted by rope pulling and similar means.

With the development of wind turbine technology, turbine capacity has continuously increased, and installation heights now exceed 100 m; Fig. 1 shows the installation of a wind turbine hub. Under these conditions, high-altitude work is difficult, many workers are involved in installation, and costs are high. When the hub is aligned for installation, factors such as wind loads and the coordination of technical personnel make the installation period long, often exceeding 70 h. With the rapid development of China's wind power industry, it is necessary to develop a pose detection method with less direct human participation, high safety, high detection efficiency, and high detection accuracy.

Fig. 1

Wind turbine hub lifting

Compared with other pose detection methods, visual positioning can detect the position of the measured object without contact and can adapt to various complex environments under sufficient illumination. Researchers in China and abroad have built up a relevant research base in machine vision. Liu et al. [1] designed a robot intelligent unstacking system based on visual positioning; the corresponding world coordinates are obtained by coordinate transformation of the target's pixel center, with a position error of 1.1 mm and an angle error of 1.2°. Chen et al. [2] proposed an automatic screw-fastening assembly system based on visual positioning; visual positioning based on sub-pixel edges is introduced to accurately compensate the positioning deviation of screw assembly, achieving a visual positioning accuracy better than 0.02 mm.

Current visual positioning research largely assumes a fixed camera detecting a moving object. This study addresses a pose detection scenario in which both the measured object and the camera are in unstable motion, and proposes a wind turbine hub pose detection method based on a circle fitting algorithm that integrates a laser ranging sensor with machine vision to cope with the complex working environment.

2 The Hub Pose Detection Scheme Under High-Altitude Conditions

Since the nacelle itself has few characteristics that can be used directly as positioning features, feature color blocks must be added as positioning markers, as shown in Fig. 3. The selection of positioning features is key to realizing visual detection. To meet practical engineering needs, two square color blocks of a certain size are installed in different directions on the drive shaft as the positioning features for secondary processing, and the bolt holes on the nacelle shaft are used as the features for primary processing.

The camera collects and processes the image to obtain the position information of the target points. Combined with the distance information from the ranging sensor, the pixel information in the image is converted into actual distance information, and the pose of the wind turbine hub is obtained.

The positions of the camera and the ranging sensor are shown in Fig. 2; both are mounted on the hub surface that faces the nacelle. After the camera is installed, the relative position matrix between the camera and the hub center is obtained to initialize the pose matrix.

Fig. 2
Panels a and b: the camera mounting position on the hub, and the corresponding industrial camera.

Sensor mounting

Fig. 3

Characteristic color block

3 Visual Positioning Scheme

3.1 Image Segmentation Scheme Based on Secondary Processing

Since the nacelle itself has no obvious features suitable for visual recognition and positioning, feature color blocks must be added [3]. Square color blocks have prominent features and, during contour fitting, can effectively suppress the influence of the external environment on visual recognition. Therefore, two square color blocks of a certain size are installed in different directions on the drive shaft as the positioning features for secondary processing, as shown in Fig. 3. The bolt holes on the nacelle shaft serve as the features for primary processing.

Because of the complexity of real operating conditions, the quality of the images captured by the camera cannot be guaranteed. Therefore, during image processing, a secondary-processing scheme is used to segment the images [4]. The captured image is first coarsely processed to separate the main recognition region from the rest, and that region is binarized. The image is then re-segmented: after binarization, candidate contours are screened by area, shape, and other parameters to produce the desired contour [5]. The flow of the secondary processing is shown in Fig. 4.

Fig. 4
Flow: obtain image → convert color gamut → extract color components → image segmentation → contour extraction → contour filtering → contour verification → drawing and calculation → secondary treatment.

Secondary image processing flow

This image processing method increases the computational load on the host computer to some extent and lengthens image processing time, but it effectively suppresses environmental influences on image processing and improves the accuracy of visual positioning [6]. In addition, during installation of the wind turbine hub the movement speed is slow, so the influence of algorithm latency on the installation process can be neglected.
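As an illustration, the sketch below outlines the secondary-processing pipeline of Fig. 4 in Python with OpenCV. The HSV color bounds, area limits, and squareness checks are illustrative assumptions for a blue marker, not the paper's actual parameters.

```python
import cv2
import numpy as np

# Illustrative HSV bounds for a blue color block; real values would be
# tuned for the actual marker and lighting (assumption, not from the paper).
HSV_LOW = np.array([100, 80, 50])
HSV_HIGH = np.array([130, 255, 255])

def segment_color_block(frame, min_area=500, max_area=50000):
    """Secondary processing: color-gamut conversion, component extraction,
    binarization, then contour screening by area and shape."""
    # Stage 1: convert color gamut and extract the target color component.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, HSV_LOW, HSV_HIGH)   # binarized candidate region

    # Stage 2: re-segment the candidate region and screen contours.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        area = cv2.contourArea(c)
        if not (min_area <= area <= max_area):
            continue                          # screen by area
        peri = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.04 * peri, True)
        if len(approx) == 4:                  # keep roughly square outlines
            x, y, w, h = cv2.boundingRect(approx)
            if 0.8 <= w / h <= 1.25:          # aspect-ratio check for a square
                candidates.append((x + w / 2, y + h / 2, area))
    return candidates  # pixel centers of the detected color blocks
```

If a block is found, its pixel center can feed the coordinate conversion of Sect. 3.4.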

3.2 Circle Fitting Algorithm Based on the Least Squares Method

When the hub and the nacelle are far apart, the positioning color blocks occupy only a small area in the image collected by the camera, and their features are difficult to extract. For the situation where the color blocks cannot be identified and located at long range, a circle fitting algorithm based on the least squares method is introduced [7].

After the image is binarized, the edge pixels of the image are highlighted. These highlighted pixels can be regarded as points \(({x}_{i}, {y}_{i})\) in a two-dimensional coordinate system.

First, the algebraic expression of the circle is:

$$(x-A)^{2}+(y-B)^{2}=R^{2}$$
(1)

Expanding (1) gives another form of the equation of the circle:

$${x}^{2}+{y}^{2}+ax+by+c=0$$
(2)

where \(a=-2A\), \(b=-2B\), and \(c=A^{2}+B^{2}-R^{2}\). That is, the parameters of the circle can be recovered by solving for a, b, c.

The distance \({d}_{i}\) from each point to the center of the circle satisfies:

$$d_{i}^{2}=(x_{i}-A)^{2}+(y_{i}-B)^{2}$$
(3)

The difference between the squared distance from each point to the center and the squared radius is:

$$\delta_{i}=d_{i}^{2}-R^{2}$$
(4)

Let Q (a, b, c) be the sum of squares of \({\updelta }_{i}\):

$$Q=\sum_{i=1}^{n}\delta_{i}^{2}$$
(5)

The parameters (a, b, c) are obtained by minimizing Q. Since \({\delta }_{i}={x}_{i}^{2}+{y}_{i}^{2}+a{x}_{i}+b{y}_{i}+c\) is linear in (a, b, c), this reduces to an ordinary linear least-squares problem.
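The NumPy sketch below implements the fit in that linear form. The sample arc data are synthetic and only meant to show that a partial arc still constrains the fit; they are not measurement data from the paper.

```python
import numpy as np

def fit_circle(xs, ys):
    """Least-squares circle fit: minimize Q = sum(delta_i^2) with
    delta_i = x_i^2 + y_i^2 + a*x_i + b*y_i + c, linear in (a, b, c)."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    # Design matrix for x^2 + y^2 + a*x + b*y + c = 0.
    M = np.column_stack([xs, ys, np.ones_like(xs)])
    rhs = -(xs**2 + ys**2)
    (a, b, c), *_ = np.linalg.lstsq(M, rhs, rcond=None)
    # Recover center and radius from a = -2A, b = -2B, c = A^2 + B^2 - R^2.
    A, B = -a / 2, -b / 2
    R = np.sqrt(A**2 + B**2 - c)
    return A, B, R

# Synthetic example: even a partial arc of noisy points constrains the fit.
rng = np.random.default_rng(0)
theta = np.linspace(0.2, 2.0, 8)               # assumed partial arc
xs = 503 + 120 * np.cos(theta) + rng.normal(0, 0.5, 8)
ys = 205 + 120 * np.sin(theta) + rng.normal(0, 0.5, 8)
print(fit_circle(xs, ys))                      # approximately (503, 205, 120)
```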

Figure 5 shows the binary image of the docking surface of the wind turbine nacelle. When the camera is deflected through a large angle, or cannot capture a complete nacelle image as shown in Fig. 6, this circle fitting algorithm can still detect and feed back the deflection.

Fig. 5

Cabin docking surface

Fig. 6

Simulation of large angle deflection

When the camera captures only part of the nacelle image, simple color-feature or circular-feature algorithms cannot recover the correct position relationship [8, 9]. The circle fitting algorithm based on the least squares method, however, can solve this kind of problem to a certain extent.

First, the center coordinates of each bolt hole are obtained by image processing, as shown in Fig. 7.

Fig. 7

Getting circle center coordinates

Then, a circle is fitted through the center coordinates of the bolt holes, and the pixel coordinates of the nacelle center are obtained, as shown in Fig. 8.

Fig. 8
Fitted circle (dashed) with center at pixel coordinates (503, 205).

Fitting the center of the circle

Based on the returned information, the position and posture of the hub are adjusted in time to prevent collision accidents and ensure the safety of the installation work.
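A hedged sketch of this two-step procedure is given below: bolt-hole centroids are extracted from the binary image with image moments and then passed to the circle fit. It reuses the fit_circle helper sketched above; the area bounds are assumptions for illustration.

```python
import cv2
import numpy as np

def nacelle_center_from_bolt_holes(binary_img, min_area=20, max_area=400):
    """Locate bolt-hole centroids in a binary image and fit the nacelle
    circle through them (works even when only part of the circle is seen)."""
    contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        if not (min_area <= cv2.contourArea(c) <= max_area):
            continue                               # drop non-hole blobs
        m = cv2.moments(c)
        if m["m00"] > 0:
            # Centroid of the bolt-hole contour in pixel coordinates.
            centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    if len(centers) < 3:
        raise ValueError("need at least 3 bolt-hole centers to fit a circle")
    xs, ys = zip(*centers)
    return fit_circle(xs, ys)   # reuses the least-squares fit sketched above
```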

3.3 Ranging Sensor Filtering Algorithm Based on Mean Filtering

The laser ranging sensor used in the experiment has a measurement range of 0.01–8 m, a blind zone of 10 mm, and a measurement error of 2%. During distance detection, the reported distance is unstable and carries a certain error. Simulation software is used to model the random error distribution of the ranging sensor over its range, as shown in Fig. 9.

Fig. 9

Random error simulation of ranging sensor

Ranging sensor data are collected at different distances, with several readings taken at each distance; Fig. 10 shows the measurement data at a distance of 1138 mm.

Fig. 10

Error floating of ranging sensor

From the repeatedly collected data, it is concluded that the error of the ranging sensor grows as the measured distance increases. Based on the working principle of the ranging sensor and statistical theory, filtering algorithms including the minimum filter, median filter, mean filter, and Kalman filter [10] are applied to the ranging data in both static and dynamic situations, as shown in Fig. 11.

Fig. 11

Comparison of minimum and mean filtering

A reference value is obtained by taking many measurements at a fixed distance and averaging them. The data from repeated experiments show that these filtering methods can effectively reduce the fluctuation of the measurement data, with mean filtering performing best.

According to the experimental data under different conditions, when the motion speed is greater than 0.5 m/s, the median filter and Kalman filter perform better, yielding values closer to the reference. When the motion speed is less than 0.2 m/s, the mean filter performs best, producing stable data close to the reference value.

In addition, the window size of the filter also affects filtering performance. Taking mean filtering as an example: when the motion is slow, a larger window gives a better filtering effect; when the motion is fast, a smaller window gives a better filtering effect, as sketched below.
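The sketch below illustrates this window-size trade-off with a simple moving-average (mean) filter, plus a sliding median for comparison. The noise level and window lengths are assumptions for demonstration; only the 1138 mm standard distance comes from the experiment above.

```python
import numpy as np

def mean_filter(samples, window):
    """Moving-average filter over the most recent `window` readings."""
    kernel = np.ones(window) / window
    return np.convolve(samples, kernel, mode="valid")

def median_filter(samples, window):
    """Sliding median; more robust to outliers during fast motion."""
    s = np.asarray(samples, dtype=float)
    return np.array([np.median(s[i:i + window])
                     for i in range(len(s) - window + 1)])

# Simulated readings around the 1138 mm standard distance (noise assumed).
rng = np.random.default_rng(0)
raw = 1138 + rng.normal(0, 8, 500)

# Slow motion: a larger window smooths more but lags more;
# fast motion: a smaller window tracks the changing distance better.
print(mean_filter(raw, window=25).std())   # smaller spread, more lag
print(mean_filter(raw, window=5).std())    # larger spread, less lag
```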

3.4 Coordinate Conversion Based on Image DPI

In the traditional coordinate transformation method, the intrinsic and extrinsic parameters of the camera are obtained by camera calibration, and the image coordinate system is then transformed into the world coordinate system through the camera parameter matrices and the pixel coordinates [11]. When the camera is stationary, this method achieves high accuracy and good stability. However, during installation of the wind turbine hub, the hub and the nacelle are both in unstable motion, so the traditional coordinate transformation method does not fully meet the positioning requirements of this working condition.

To address this inapplicability, a coordinate transformation method based on the relationship between distance and camera DPI (pixel density) is adopted. At a fixed distance, the number of pixels per unit of actual distance is fixed; as the camera moves, its effective DPI changes with distance. By randomly varying the distance between the camera and the measured object within a certain range, paired data of camera DPI and distance are obtained.

A polynomial curve is then fitted to the camera DPI against distance. The resulting relationship is shown in Fig. 12 (distance in mm).

Fig. 12
Left panel: fitted curve of D against distance; right panel: DPI fit residuals.

Camera DPI and actual distance fitting

Fitting multiple data sets yields the quadratic polynomial relating camera DPI to distance:

$$D=6.13\times {10}^{-8}x^{2}+1.38\times {10}^{-3}x-5\times {10}^{-2}$$
(6)

Here x is the vertical distance between the center of the camera imaging plane and the nacelle plane, in mm. D is measured in mm/pixel; it is the reciprocal of the camera's pixel density and represents the actual distance covered by each pixel at the given distance.

The sum of squared errors of the fitted equation is 0.02, and the R-square (coefficient of determination) is 0.998.

On this basis, the actual size L of the measured feature can be determined from image processing:

$$L=D\cdot l$$
(7)

where l is the pixel length of the measured feature in the image (here, the radius), in pixels.
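A minimal sketch of this calibration and conversion is shown below. The calibration samples are synthesized from the published quadratic (6) with added noise purely for illustration; in practice the (distance, D) pairs would come from moving the camera as described above.

```python
import numpy as np

def fit_dpi_curve(x_mm, d_mm_per_px):
    """Quadratic fit of D (mm/pixel) against distance x (mm), as in Eq. (6),
    plus SSE and R^2 goodness-of-fit values."""
    coeffs = np.polyfit(x_mm, d_mm_per_px, deg=2)
    pred = np.polyval(coeffs, x_mm)
    sse = np.sum((d_mm_per_px - pred) ** 2)
    r2 = 1 - sse / np.sum((d_mm_per_px - np.mean(d_mm_per_px)) ** 2)
    return coeffs, sse, r2

def pixels_to_mm(length_px, distance_mm, coeffs):
    """Eq. (7): L = D * l, with D evaluated at the measured distance."""
    d = np.polyval(coeffs, distance_mm)
    return d * length_px

# Synthetic calibration samples generated from the published quadratic (6)
# with small noise -- real data would come from moving the camera.
rng = np.random.default_rng(0)
x = np.linspace(300, 3000, 30)
d_true = 6.13e-8 * x**2 + 1.38e-3 * x - 5e-2
d_meas = d_true + rng.normal(0, 1e-3, x.size)

coeffs, sse, r2 = fit_dpi_curve(x, d_meas)
# Usage demo: convert a radius of l = 47.7 pixels measured at x = 1138 mm
# into millimetres (values combined here only for illustration).
print(pixels_to_mm(47.7, 1138, coeffs))
```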

Taking the nacelle center as the origin of the world coordinate system, the actual distance between the nacelle center and the camera center is obtained by calculation. Taking the direction perpendicular to the nacelle section as the y-axis, the position of the camera relative to the nacelle, i.e., the position matrix T of the hub, is obtained, as shown in Fig. 13.

Fig. 13

Pose coordinate system

The rotation matrix R of the hub is then obtained from the gyroscope, and the pose matrix P of the hub follows:

$$P=R\cdot T$$
(8)

R is the initialized rotation matrix, and T is the initialized position matrix.
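The composition in (8) can be sketched with homogeneous 4 × 4 matrices, as below. The Euler-angle convention and all numeric values are assumptions; the paper does not specify how the gyroscope angles map to R.

```python
import numpy as np

def rotation_from_gyro(roll, pitch, yaw):
    """4x4 homogeneous rotation built from gyroscope Euler angles (radians).
    The Z-Y-X convention here is an assumption, not specified in the paper."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    R = np.eye(4)
    R[:3, :3] = Rz @ Ry @ Rx
    return R

def translation(tx, ty, tz):
    """4x4 homogeneous translation from the vision + ranging measurement."""
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

# Eq. (8): pose matrix P = R * T (homogeneous composition assumed);
# the angles and offsets below are placeholder values.
P = rotation_from_gyro(0.01, -0.02, 0.005) @ translation(120.0, 980.0, -15.0)
```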

4 Experimental Analysis

First, the error of the visual algorithm itself is evaluated; the results provide a reference for subsequent accuracy improvement. The positioning mark used in the experiment is a blue circular mark with an actual radius of 85 mm, as shown in Fig. 14. The radius of the mark is measured at a fixed distance; averaging multiple measurements gives a radius of 47.7 pixels. The visual algorithm then processes the positioning mark repeatedly, and the measured pixel radius yields the error of the visual algorithm, as shown in Fig. 15.

Fig. 14

Positioning mark

Fig. 15

Algorithm error

According to the data from several experiments, the error of the algorithm in detecting the radius stays within 0.15 pixels under fixed conditions. Given the working conditions of hub installation, introducing a mean filter to optimize the algorithm keeps the error within 0.1 pixels.

On this basis, the ranging sensor is introduced for practical measurement experiments. The positioning markers undergo reciprocating motion and deflection at arbitrary angles and random speeds within 1 m in front of the camera to simulate the instability of the actual lifting process, as shown in Fig. 16.

Fig. 16
Panels a and b: the visual inspection device with camera and ranging sensor, and the model of the wind turbine nacelle with its bolt-hole circle pattern.

Experimental platform

The image processing algorithm, ranging sensor filtering algorithm, and pose detection algorithm described in this paper are used to measure the radius of the positioning markers repeatedly, as shown in Fig. 17. The measured values are compared with the actual radius of the markers to obtain the error, and the resulting error fluctuation curve is shown in Fig. 18.

Fig. 17

Measured radius at different distances

Fig. 18

Error fluctuation curve

The comprehensive results of multiple sets of experimental data show that the pose detection method in this paper can measure the radius of the positioning markers in real time under dynamic conditions, with a detection error within 5 mm. Repeated experiments show that the method has good repeatability. During wind turbine hub installation, the distance between the hub and the nacelle is within 1 m, so the hub pose detection method described in this paper is of reference value for the actual installation process.

5 Conclusion

In this paper, a method for detecting the pose of the wind turbine hub based on machine vision and a ranging sensor is proposed. The camera collects the image, which is processed with the secondary-processing scheme, and the circle fitting algorithm based on the least squares method obtains the pixel coordinates of the positioning mark. The relative position of the hub and the nacelle is obtained from the ranging sensor together with the fitted relationship between camera DPI and distance. The collected data are processed and optimized with several filtering algorithms, which improves the accuracy of visual positioning and meets the positioning requirements of actual working conditions. Experiments verify the feasibility and stability of the hub pose detection method, which meets the accuracy requirements of engineering hoisting, and demonstrate the feasibility of applying machine vision to construction machinery.

This visual positioning method reduces the direct participation of personnel in wind turbine hub hoisting and returns real-time position information to the crane console and the commander. When a large angle deflection or abnormal position occurs, technical personnel can adjust the hoisting process in time to eliminate the risk. It provides a technical reference for the research and development of intelligent hoisting and intelligent cranes.