
1 Introduction

The complexity of modern machines and their components is growing. While this trend improves the overall performance of higher-level systems, it introduces challenges in terms of maintenance. The increased complexity often leads to higher part costs. Thus, the regeneration of these parts may be more economical and, with increasing material scarcity, also more ecological.

This work focuses on the regeneration of turbine blades used in aircraft engines. To assist the regeneration process, a combination of different optical sensors is used to assess the current state of the component. This information can further be used to detect defects and to estimate reliability and performance. Based on the gathered information, a suitable regeneration strategy is chosen.

2 Motivation

The assessment of worn components is challenging due to the wide range of their possible conditions. Mechanical stress and thermal or chemical wear cause deviations from the original state and can impact the overall performance and reliability (Tabakoff et al. 1998; Kurz and Brun 2000; Laguna-Camacho et al. 2016). Detecting these changes and characterizing their appearance with non-destructive optical methods is the goal of this project.

However, deviations can occur in varying shapes, sizes and types. Thus, multiple geometric scales have to be taken into consideration when reconstructing the object.

While macroscopic defects (multiple centimetres), such as larger cracks and dents, may be detectable with one sensor, a different system is necessary to reconstruct defects which only have a size of a few micrometres.

In order to take various types of defects into account, a combination of multiple different sensors (S1, ..., Sn) is necessary, since a single sensor is not capable of covering all of the required scale ranges. The basic concept of multiscale geometry inspection and fusion into a holistic dataset in a common coordinate system, which was developed in this work, is shown in Fig. 1.

Fig. 1

Fusion of different sensors combined into a multi-sensor system to cover multiple scale ranges

Different scale ranges require different measurement techniques, and global data registration requires knowledge of the spatial relationships between the individual sensors so that data gathered from multiple sensors can be fused. Based on the available measurement methods, a suitable sensor can then be chosen to meet the requirements for reconstructing a given defect.

3 Multi-sensor Design

To provide a holistic model, a combination of geometric and non-geometric data has to be acquired. The developed sensor system consists of three sensors, which provide geometric data. A fringe projection unit is used to cover the macro- and mesoscale, ranging from sub-millimetre to multiple centimetres. A low-coherence interferometer is used to reconstruct microscale features of the measurement object. In addition to these sensors, an illumination sensor is used to provide information about non-geometrical surface properties. The structure, measuring principle and results of each individual sensor are introduced in the following sections.

3.1 Illumination Sensor System

This sensor is mainly designed to assess the local reflective properties under different illumination scenarios. For this purpose, the sensor is equipped with 42 white light-emitting diodes (LEDs). Each LED is collimated and placed on a hemisphere pointing towards the centre (see Fig. 2). To capture the illuminated scene, an industrial camera (AVT Manta G-1236B) is mounted in the centre of the hemisphere surface. Each light source is individually controllable, which leads to a large number of possible combinations and thus lighting scenarios.

Fig. 2

Inside of the half sphere of the illumination sensor. A camera is placed in the middle, 42 LEDs are arranged in four rings around the centre

3.1.1 Data Acquisition

To perform a measurement, a set of pre-defined light configurations is executed and an image is recorded for each, resulting in an image stack with one image per configuration. While more complex scenarios are possible, this approach mainly uses images captured with only one LED enabled at a time. Depending on which LED is enabled, the amount of reflected light, and thus the intensity of different image regions, changes. This is caused by the combination of light direction and geometry: macro- and microstructures on the surface lead to shadowing or alter the overall reflected light. These changes can be observed in the example images given in Fig. 3.

Fig. 3

Camera image of a turbine blade illuminated with varying light directions (Intensities have been slightly adjusted to improve visibility) a From right side. b From the bottom. c From the left

3.1.2 Measurement Principle

In order to take advantage of the reflective properties of the object's surface, different algorithms are applied to extract information about the measurement object. They are divided into two approaches for a macroscopic and a microscopic characterization. Further, it is assumed that the position of each LED with respect to the camera is sufficiently well known from the computer-aided design (CAD). Thus, the light vectors can be calculated under the assumptions of perfectly collimated light and a fixed working distance for the camera when placed in focus. These assumptions may deviate from the actual conditions due to assembly-related deviations, imperfectly collimated LEDs and varying distances to the measurement object, but experiments have indicated that these simplifications are acceptable for this application.

The evaluation of the data is based on the following surface reflectance model, utilising the surface normal n, the aforementioned incident light vectors ln with n ∈ [1, 42] for each LED and a view vector v. Figure 4 shows these vectors for a single surface point. Additionally, the angles θ and φ are defined for the incoming and outgoing rays, i.e. the light and view vectors. The parameter vector k is used to describe the surface properties; its length depends on the applied model. Utilizing these parameters, the shape of the measurement object can be approximated by applying a photometric stereo approach. This algorithm requires a set of images with varying light direction to determine the surface normal for each pixel. However, basic approaches assume that the surface is mainly diffuse with few specular features. Research has been carried out to overcome this limitation (Herbort and Wöhler 2011; Zheng et al. 2020). While highly specular objects are still challenging, the optimization process was adjusted to be more robust.

Fig. 4

Simplified surface reflectance model

The objective is to determine the surface normal from the set of pixel intensities for each light vector. A least-squares based approach was published by Woodham (1979), which provides good results for worn turbine blades but is limited under strongly specular reflection conditions.
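
As an illustration, a minimal least-squares photometric stereo step in the spirit of Woodham (1979) is sketched below. The function name, the synthetic test data and the use of NumPy are assumptions made for this example only; the sketch recovers per-pixel normals and albedo from an image stack and the associated light vectors under the diffuse-surface assumption discussed above.

```python
import numpy as np

def estimate_normals(images, light_dirs):
    """Least-squares photometric stereo sketch (cf. Woodham 1979).

    images:     (n_leds, h, w) pixel intensities, one image per LED
    light_dirs: (n_leds, 3) unit incident light vectors
    Returns surface normals of shape (h, w, 3) and the albedo of shape (h, w).
    """
    n, h, w = images.shape
    intensities = images.reshape(n, -1)               # (n, h*w)
    # Solve L @ g = I per pixel, where g = albedo * normal
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.maximum(albedo, 1e-8)
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)

# Synthetic example: 42 light directions, a flat 4x4 patch facing the camera
rng = np.random.default_rng(0)
L = rng.normal(size=(42, 3))
L /= np.linalg.norm(L, axis=1, keepdims=True)
true_normal = np.array([0.0, 0.0, 1.0])
imgs = np.clip(L @ true_normal, 0.0, None)[:, None, None] * np.ones((42, 4, 4))
normals, albedo = estimate_normals(imgs, L)
```

In practice, saturated or shadowed pixels would be excluded before solving the system, and robust variants would be used for specular regions.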

The resulting surface normals can further be used to create an unscaled 3-D model of the measurement object. Since there is no metric information, only qualitative statements can be made. This, however, is already sufficient to detect macroscale deviations of the overall geometry, e.g. missing parts of the nominal model. Experiments have shown that this method is not well suited for fine structures in the micrometre range, since the algorithm tends to smooth out filigree features.

In addition, the estimated normals can be applied to characterize the object's surface properties. This is achieved by approximating a reflectance function for each pixel. For this, a bidirectional reflectance distribution function (BRDF) is utilised. The basic formula in Eq. 1

$$\mathrm{BRDF}\left( \theta_{i}, \varphi_{i}, \theta_{o}, \varphi_{o} \right)$$
(1)

describes the ratio of reflected to incident light with respect to the incident and azimuth angles, θ and φ, of the respective vectors and the surface normal vector. These functions are mainly used in computer graphics to render photorealistic images of different materials. In general, a BRDF model requires the incident light beam l, the surface normal n, the viewing vector v and some model-specific parameters k (Guarnera et al. 2016), see Fig. 4. With the previously determined surface normals, all quantities except the model parameters, which are used to describe the reflectance, are known. There has been extensive research on identifying model parameters from real-world data (Ward 1992; Westin et al. 1992; Lafortune et al. 1997). However, in these cases the data is usually collected using gonioreflectometers, which provide a fine sampling of the possible incident and viewing angles (Guarnera et al. 2016; Schröder and Sweldens 1995; Ngan et al. 2005; Bieron and Peers 2020).

The data provided by the illumination sensor is rather limited and thus may not be sufficient to find the 'actual' model parameters. The data basis is, however, sufficient to approximate reflectance models. Figure 5 shows the pixel intensities of one pixel of the image for all 42 LEDs. While the fitted intensities do not perfectly overlap with the actual data, the trend is reproduced well. The parameters approximated by the model fit can then be divided into multiple classes, which can be used to perform a multi-class segmentation of the object surface. For this, state-of-the-art clustering algorithms like K-Means (Hartigan and Wong 1979) or DBSCAN (Ester et al. 1996) can be applied to determine cluster borders in parameter space. Depending on the algorithm used, the expected number of classes either has to be given or is derived from the data. An example result of a K-Means clustering with three classes is shown in Fig. 6b, where the regions are marked with different colours.

While the reflectance of rough surfaces, e.g. sand-blasted metal, is mainly diffuse, surfaces with lower roughness, e.g. polished metal, are shiny. Both effects are represented by parameters of the reflectance model, which are used for the classification. Thus, the classes correspond to areas with different roughnesses. This difference is shown in Fig. 7, where the marked spots from Fig. 6a have been measured with a confocal laser scanning microscope (Keyence VK-X200) as samples for the classified regions. The resulting roughness parameters for these measurements are listed in Table 1. The values suggest that the red region has a higher roughness than the green areas. As stated by Bons (2010), the roughness affects the performance of the gas turbine. Therefore, the resulting surface classification is useful to identify differing areas, which can then be examined with localized roughness measurements. A model-driven estimation of the local roughness parameters from the identified reflection parameters is, however, not reliably possible with the existing data basis.
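
The following sketch illustrates the idea of fitting a per-pixel reflectance model and clustering the fitted parameters. A simple diffuse-plus-specular (Phong-like) model is used here purely as a stand-in, since the exact reflectance model applied in this work is not specified; the function names and starting values are likewise illustrative.

```python
import numpy as np
from scipy.optimize import least_squares
from sklearn.cluster import KMeans

def phong_like(k, n, l_dirs, v):
    """Simple diffuse + specular reflectance model with k = (k_d, k_s, shininess)."""
    k_d, k_s, m = k
    diffuse = np.clip(l_dirs @ n, 0.0, None)
    r = 2.0 * (l_dirs @ n)[:, None] * n - l_dirs      # ideal reflection directions
    specular = np.clip(r @ v, 0.0, None) ** m
    return k_d * diffuse + k_s * specular

def fit_pixel(intensities, n, l_dirs, v):
    """Fit the model parameters for one pixel by non-linear least squares."""
    result = least_squares(
        lambda k: phong_like(k, n, l_dirs, v) - intensities,
        x0=[0.5, 0.1, 10.0],                          # illustrative starting values
        bounds=([0.0, 0.0, 1.0], [np.inf, np.inf, 200.0]))
    return result.x

def classify(parameter_vectors, n_classes=3):
    """Cluster the per-pixel parameter vectors into surface classes."""
    return KMeans(n_clusters=n_classes, n_init=10).fit_predict(parameter_vectors)
```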

Fig. 5

Actual and fitted pixel intensities for one pixel

Fig. 6

a One image from the sequence. Orange and yellow marked areas indicate measurement spots. b Partially classified section of the turbine blade. Different regions can be distinguished. Red sections are in areas with rough structures, while these are missing in green regions. Blue coloured pixels often occur in transitioning areas from red to green

Fig. 7

a Surface measurement of the yellow area. b Measurement data for the orange spot. (cf. Fig. 6a)

Table 1 Roughness parameters for the measured regions (cf. Figs. 6a and 7)

3.1.3 Conclusion Illumination Sensor

Since the reflectance of a surface depends on its roughness, the results can be used to make a qualitative distinction between regions of differing roughness. These sections can then be measured in subsequent steps to assign quantitative values. Thus, this sensor is mainly used to gather qualitative information about the measurement object: a rough geometric assessment is possible, in addition to the distinction between different surface regions based on their reflectance.

3.2 Fringe Projection Sensor

A regular fringe projection system (FPS) consists of a camera and a projector unit that projects structured patterns onto the measurement object, see Fig. 8. In this case, two industrial monochrome cameras (AVT Manta G-419B) are coupled with a programmable projector module (Wintech PRO4500). The second camera is equipped with a different focal length to expand the scale range of the acquired data. Both systems are calibrated by identifying the optical parameters and the spatial relation between the components, applying state-of-the-art camera, projector and stereo calibration methods such as Zhang (2000) or the methods proposed by Hartley and Zisserman (2003). To encode the projector pixels, an 8-step phase-shifting sequence following Peng (2007) is applied. Here, 8 phase-shifted sine patterns with increasing frequencies are projected onto the measurement object.
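
For illustration, a generic N-step phase-shift evaluation of the recorded images could compute the wrapped phase per camera pixel as sketched below; this is a textbook formulation and not necessarily the exact variant of Peng (2007).

```python
import numpy as np

def wrapped_phase(images):
    """Compute the wrapped phase of an N-step phase-shift sequence.

    images: (N, h, w) array; the k-th image was captured with the sinusoidal
            pattern shifted by 2*pi*k/N.
    """
    n = images.shape[0]
    k = np.arange(n).reshape(-1, 1, 1)
    num = np.sum(images * np.sin(2.0 * np.pi * k / n), axis=0)
    den = np.sum(images * np.cos(2.0 * np.pi * k / n), axis=0)
    return np.arctan2(-num, den)                      # wrapped phase in (-pi, pi]
```

The wrapped phases of the patterns with increasing frequencies would subsequently be unwrapped to obtain unambiguous projector coordinates for triangulation.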

Fig. 8

a Fringe projection principle. b Fringe projection sensor

3.2.1 Data Acquisition and Registration

Triangulation determines a corresponding surface point for each camera pixel by 3-D ray intersection, resulting in a high-density, three-dimensional, metric point cloud. For the unambiguous spatial assignment of the corresponding viewing rays of camera and projector, the projected patterns are captured and evaluated via a phase-unwrapping pipeline.

For a better assessment of the turbine blade's state, a 3-D model is necessary. Since the fringe projection sensor only provides surface measurements from a single viewing point, multiple measurements have to be combined. To register these, point correspondences have to be estimated. There are various algorithms which can be used to determine these correspondences. Depending on the algorithm, different kinds of data and prerequisites may be necessary. Below, those strategies are divided into two categories: 2-D based and 3-D based algorithms.

The first category uses organised 2-D data, e.g. image data, to detect features and calculate descriptors, which can then be used to estimate correspondences. The data used for the feature detection is the colour information, which can be greyscale or RGB. Some of the most common algorithms used for image feature detection are ORB (Rublee et al. 2011), SURF (Bay et al. 2008) and SIFT (Lowe 2004).

The other category handles unorganised 3-D data, e.g. point clouds. Here, each 3-D point and its neighbourhood are taken into consideration to calculate feature descriptors. Usually these require the surface normals of each point, which can easily be approximated (Mitra and Nguyen 2003). Algorithms of this kind are FPFH (Rusu et al. 2009), 'spin images' (Johnson and Hebert 1999) and SHOT (Salti et al. 2014). These feature descriptors can be used for object recognition (Gupta et al. 2019), while more recently neural networks have frequently been used for this task (Liang and Hu 2015).

However, the state of the measurement object, and thus the texture and the number of geometrical features present, can vary considerably. This means that a registration relying only on naturally occurring features is not robust to changing measurement objects. To overcome this issue, artificial textures are applied to the measurement object by projecting random patterns onto it (Betker et al. 2020). In this way, more features can be detected with 2-D feature detectors. The advantage of this approach is that no manual steps are necessary to apply markers next to or on the object. However, neither the position of the projected patterns with respect to the object nor their optical properties may change during the measurement process.

The setup for the random pattern projection is shown in Fig. 9a. Multiple projectors (Texas Instruments DLPDLCR2000EVM) are placed around the mounting system and aimed at the object. Each projector projects a different randomly generated pattern. To ensure a high density of features, different kinds of pattern designs and 2-D feature detectors have been examined. Overall, binary patterns performed better than greyscale patterns, partly because sharp borders are more robust against effects introduced by defocus, noise and the mixture with the object's actual texture. The best results were obtained with a combination of randomly placed, overlapping black rectangles on a white background and a SIFT feature detector and descriptor. Since the 3-D reconstruction is calculated in the camera coordinate system, additional colour information can be added by acquiring an image with active random pattern projection. This way, each point holds not only the spatial coordinates but also an arbitrary number of, in this case, greyscale intensities. These greyscale images can be used to determine 2-D correspondences by applying the aforementioned SIFT feature detector. Since each reconstructed 3-D object point corresponds to a camera pixel, any 2-D assignment can subsequently be transferred to the registration of the point clouds. An example image pair of two surface measurements is shown in Fig. 9b. To increase visibility, not all found correspondences are shown.
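
A minimal sketch of this 2-D correspondence search with OpenCV's SIFT implementation is given below; the ratio-test threshold and the brute-force matcher are assumptions for this example, as the exact matching strategy used in this work is not detailed.

```python
import cv2
import numpy as np

def sift_correspondences(img_a, img_b, ratio=0.75):
    """Detect SIFT features in two greyscale images and match them.

    Returns two (n, 2) arrays of matched pixel coordinates in img_a and img_b.
    """
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
    # Lowe's ratio test to discard ambiguous matches
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])
    return pts_a, pts_b
```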

Fig. 9

a Turbine blade inside the mounting system surrounded by multiple projectors which project random patterns onto the object. b Pair of measurements as gathered by the fringe projection camera. Each coloured line represents an estimated correspondence

Due to the chosen pattern, the algorithm detects a large number of corresponding points. Although the majority of correspondences is plausible, a certain number of false matches is present. This problem is addressed by utilizing a random sample consensus (RANSAC) approach to estimate the rigid body transformation between both measurements. The combination of high-density features distributed over the surface and an outlier-aware transformation estimation results in a robust alignment of multiple 3-D measurements, which is independent of natural features. To further improve the transformation and reduce the remaining alignment error, an iterative closest point (ICP) algorithm is applied. In this work, a coloured ICP (Park et al. 2017) is used to further benefit from the projected patterns.
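
One possible realisation of this registration step, using the RANSAC-based correspondence registration and the coloured ICP available in recent Open3D releases, is sketched below. It assumes that the 2-D matches have already been converted into pairs of 3-D point indices and that both point clouds carry colours and normals; the distance thresholds are placeholders.

```python
import numpy as np
import open3d as o3d

def register_pair(source, target, correspondences, voxel_size=0.5):
    """Estimate a rigid transform from given 3-D point correspondences with
    RANSAC and refine it with coloured ICP (Open3D >= 0.12 API assumed).

    source, target:   open3d.geometry.PointCloud with colours and normals
    correspondences:  (n, 2) array of index pairs (source index, target index)
    """
    reg = o3d.pipelines.registration
    corres = o3d.utility.Vector2iVector(np.asarray(correspondences, dtype=np.int32))
    ransac = reg.registration_ransac_based_on_correspondence(
        source, target, corres,
        max_correspondence_distance=voxel_size * 2.0,
        estimation_method=reg.TransformationEstimationPointToPoint(False),
        ransac_n=3)
    # Refine the RANSAC estimate with coloured ICP to exploit the projected patterns
    refined = reg.registration_colored_icp(
        source, target, voxel_size, ransac.transformation,
        reg.TransformationEstimationForColoredICP())
    return refined.transformation
```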

However, the pairwise registration of measurements is error-prone, and even small alignment errors add up when forming the complete 3-D model. This can result in a loop-closure problem, where the first and last measurements of a sequence are not aligned as expected. This problem is addressed by applying a multiway registration as proposed by Choi et al. (2015), which performs a graph optimization between all segments. Furthermore, neighbouring measurements are combined into fragments to increase the overlap between segments.
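
A multiway registration of this kind is available in Open3D. The hedged sketch below builds and optimizes a pose graph from given pairwise transformations and information matrices; how those pairwise results are obtained (e.g. from the registration above) is omitted, and the parameter values are placeholders.

```python
import numpy as np
import open3d as o3d

def optimize_pose_graph(pairwise, n_fragments):
    """Multiway registration via pose graph optimization (cf. Choi et al. 2015).

    pairwise:    list of (i, j, transformation, information) tuples, where
                 `transformation` is the 4x4 transform aligning fragment i to j
                 and `information` the corresponding 6x6 information matrix
    n_fragments: total number of fragments (pose graph nodes)
    """
    reg = o3d.pipelines.registration
    graph = reg.PoseGraph()
    for _ in range(n_fragments):
        graph.nodes.append(reg.PoseGraphNode(np.identity(4)))
    for i, j, trans, info in pairwise:
        uncertain = abs(i - j) != 1          # treat non-adjacent pairs as loop closures
        graph.edges.append(reg.PoseGraphEdge(i, j, trans, info, uncertain=uncertain))
    option = reg.GlobalOptimizationOption(
        max_correspondence_distance=1.0,     # placeholder value
        edge_prune_threshold=0.25,
        reference_node=0)
    reg.global_optimization(
        graph,
        reg.GlobalOptimizationLevenbergMarquardt(),
        reg.GlobalOptimizationConvergenceCriteria(),
        option)
    return graph                             # optimized poses in graph.nodes[k].pose
```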

Running the presented registration pipeline results in blade measurements as shown in Fig. 10. To reduce the number of points in the merged point cloud, close points are merged with a weighted, voxel-based filter. The respective weight is chosen according to the reconstruction quality of each point, which is derived from the signal quality during the measurement.
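
A simple NumPy sketch of such a weighted voxel filter is given below; the voxel size and the exact weighting scheme are assumptions for this example.

```python
import numpy as np

def weighted_voxel_filter(points, weights, voxel_size):
    """Merge points falling into the same voxel by a weighted mean.

    points:  (n, 3) point coordinates
    weights: (n,) per-point quality weights (e.g. derived from the signal quality)
    """
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    n_voxels = inverse.max() + 1
    w_sum = np.bincount(inverse, weights=weights, minlength=n_voxels)
    merged = np.empty((n_voxels, 3))
    for d in range(3):                        # weighted mean per coordinate
        merged[:, d] = np.bincount(inverse, weights=weights * points[:, d],
                                   minlength=n_voxels) / w_sum
    return merged
```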

Fig. 10

a–d Different views of multiple measurements combined into one 3-D model

3.2.2 Data Evaluation

In this section, some options to process the data provided by the fringe projection unit are discussed. The goal of the evaluation step is to detect defects or damaged regions on the turbine blade. Given the nominal geometry, the 3-D measurement can be aligned to it. This allows the estimation of deviations from the nominal structure of the turbine blade. An example deviation map is shown in Fig. 11. Since the actual nominal geometry from the manufacturer is unknown, another worn blade was chosen to represent the nominal geometry; deviations were therefore calculated between multiple worn blades to illustrate the process. Regions with missing material are coloured blue, added material is shown in red, and green regions have low deviations from the nominal structure.
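
Assuming the measurement has already been aligned to a point-sampled nominal geometry, such a deviation map can be computed, for example, as a signed nearest-neighbour distance; the following sketch uses SciPy's k-d tree and is an illustration rather than the exact procedure used here.

```python
import numpy as np
from scipy.spatial import cKDTree

def deviation_map(measured, nominal, nominal_normals):
    """Signed deviation of each measured point from the nominal geometry.

    measured:        (n, 3) measured, already aligned point cloud
    nominal:         (m, 3) point-sampled nominal geometry
    nominal_normals: (m, 3) outward unit normals of the nominal points

    Positive values indicate added material, negative values missing material.
    """
    tree = cKDTree(nominal)
    _, idx = tree.query(measured)             # nearest nominal point per measured point
    diff = measured - nominal[idx]
    return np.einsum('ij,ij->i', diff, nominal_normals[idx])
```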

Fig. 11

Deviations of the measured 3-D model from a given nominal geometry

While this representation allows a quick assessment of damaged and undamaged regions, there are some considerations to be made. First, the nominal geometry has to be given. Furthermore, the initial alignment of nominal and actual geometry may be influenced by errors introduced by variations of the structure. To ensure a good alignment, reference points which are not influenced by operational stress would be necessary. Nevertheless, this strategy can, depending on the requirements, be sufficient to draw conclusions about the blade's condition.

Because of the aforementioned drawbacks, another strategy has been investigated. With emerging research in the field of artificial intelligence, various algorithms and neural networks have been released to handle a multitude of tasks. To use these methods in the field of defect detection, convolutional neural networks (CNNs), more specifically image segmentation networks, were chosen. Since single measurements of the fringe projection sensor are organised in a matrix structure, this data can be used as input for CNNs.

The goal is to detect defects in single measurements using this approach. Firstly, a 'defect' has to be defined in order to create the corresponding labels. Because the worn turbine blades have numerous defects and a clear line between defective and intact regions is hard to define, the cooling holes of the blade are chosen instead. These are much more distinct and thus easier to label without expert knowledge. In some way, the shape of the cooling holes resembles the geometry of a defect: they interrupt an otherwise continuous surface by introducing high curvatures in a certain area.

With this definition, a dataset of multiple measurements has been labelled. A sample is shown in Fig. 12. The annotations are based on the greyscale image and can then be extended to the 3-D data. Different combinations of input data have been examined; a promising combination is the colour information with approximated normals. A prediction of the trained model and the difference between prediction and manual labels is shown in Fig. 13. It can be seen that more regions are marked than in the original labels. Subjectively, these predictions are plausible and usually represent regions with larger curvatures. However, the available training dataset is limited and the overall performance could be further improved with more data. Even with this rather small dataset, the network seems to learn the rules for a local deviation, which can also be interpreted as a defect. Therefore, it is expected that this type of neural network can be trained to segment more general deviating regions.
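
To illustrate how colour information and approximated normals can be combined as network input, the following PyTorch sketch stacks both into a four-channel tensor and feeds it to a deliberately small encoder-decoder. The architecture, channel counts and image size are placeholders and do not reproduce the network actually trained in this work.

```python
import torch
import torch.nn as nn

class SmallSegNet(nn.Module):
    """Tiny encoder-decoder for per-pixel segmentation of defect-like regions."""

    def __init__(self, in_channels=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1))               # one logit per pixel for the 'defect' class

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Stack greyscale intensity and approximated surface normals into one input tensor
grey = torch.rand(1, 1, 256, 256)              # placeholder camera image
normals = torch.rand(1, 3, 256, 256)           # placeholder approximated normals
x = torch.cat([grey, normals], dim=1)          # (1, 4, 256, 256)
logits = SmallSegNet()(x)                      # (1, 1, 256, 256)
loss = nn.BCEWithLogitsLoss()(logits, torch.zeros_like(logits))  # placeholder labels
```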

Fig. 12

a The raw greyscale image as recorded by the camera of the fringe projection unit. b The red sections show the labelled cooling holes. Regions without reconstructed 3-D data are omitted

Fig. 13

a Prediction given by the trained neural network. b Difference between labels and predictions. Red indicates additional predictions, blue indicates missing labels

3.2.3 Conclusion Fringe Projection Sensor

The main task of the fringe projection system is the reconstruction of turbine blades in 3-D metric space. As shown in the previous section, it is possible to use this data to detect defects or deviations. This can be achieved through the use of neural networks or through conventional deviation estimation based on point distances. Thus, this sensor is mainly used to gather geometrical data in the macro- and mesoscale ranges.

3.3 Low-Coherence Interferometer

The low-coherence interferometer (LCI) is an interferometer in a Michelson configuration. The basic setup is shown in Fig. 14. A regular industrial camera (Basler acA1920-48gm) in combination with a telecentric lens is used to capture the interferences which form on the surface. The objective has a comparatively large working distance compared to similar systems. This simplifies the positioning of the sensor in relation to the complex geometries and reduces the risk of collisions.

Fig. 14

Low-coherence interferometer setup

The low-coherence light source has a wavelength of 665 nm. The light beam is collimated and sent into a 50/50 beam splitter to split it into reference and measurement beams. The reference beam is then sent to a deflection mirror, which is implemented as a 50/50 beam splitter, and is reflected by the reference mirror, which is also realised with a beam splitter, but with a 90% transmission / 10% reflection ratio. As a result, the reference beam intensity is reduced, since the surfaces of the measurement objects are very rough and do not reflect much light. With these adjustments, the intensities of the measurement and reference beams are balanced in order to improve the overall contrast of the resulting interferences.

3.3.1 Data Acquisition and Evaluation

The LCI is mounted onto a high-precision linear axis (PI L-509) to perform the scanning process for each measurement. In this way, the optical path length of the measurement beam is changed, which allows sampling the surface in depth. For each step of the scanning process, an image is recorded and added to an image stack, which can then be evaluated. Li et al. (2015) presented a GPU-based evaluation strategy to calculate the corresponding depth maps. Paired with the magnification properties of the telecentric lens, it is possible to estimate 3-D data.
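
Conceptually, the depth map is obtained by locating, for each pixel, the scan position at which the interference contrast peaks. The following NumPy sketch uses a deliberately crude envelope estimate; it is not the GPU-based pipeline of Li et al. (2015) and serves only to illustrate the principle.

```python
import numpy as np

def depth_from_stack(stack, step_size):
    """Estimate a depth map from a low-coherence interferogram stack.

    stack:     (n_steps, h, w) camera images recorded while scanning the stage
    step_size: axial displacement of the linear axis per scan step

    For each pixel, the scan position where the interference contrast peaks
    is taken as the surface height (a very crude envelope estimate).
    """
    ac_part = stack - stack.mean(axis=0)      # remove the (roughly constant) background
    envelope = np.abs(ac_part)                # crude stand-in for the interference envelope
    peak_index = np.argmax(envelope, axis=0)  # scan step of maximum contrast per pixel
    return peak_index * step_size
```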

Some example results are given in Fig. 15. The left measurement shows a section of the pressure side of the turbine blade, the right side shows an area of the leading edge. Smaller defects and transitions into cooling holes are visible. Depth resolutions better than 200 nm are possible, but they strongly depend on the signal strength, which in turn depends on the surface properties. The lateral resolution is determined by the pixel size and lens magnification, which results in 1.2 µm.

Fig. 15

a Measurement of the leading edge. b Section of the pressure side

3.3.2 Conclusion Low-Coherence-Interferometer

With its rather small measurement area of around 3 mm² and a long measuring duration, the LCI is not suitable for digitizing complete geometries or surfaces, but rather for local inspections which require a high depth resolution. Thus, the LCI is used as a complementary sensor to gather data on the microscale in particularly relevant regions of the turbine blade, such as damaged areas, cooling holes or areas with fine structures which can influence the air flow.

3.4 Multi-sensor Measuring Head

In the previous sections, the different sensors were introduced. To combine these into one multi-sensor system, the individual requirements regarding working distance, orientation and field of view have to be taken into account. Given the interface of the robot, a measuring head has been developed; a CAD rendering is shown in Fig. 16b. All sensors are mounted on the end-effector of a 6-axis industrial robot (Stäubli TX90). This allows flexible and accurate positioning of the sensor head with respect to the specimen. The robot kinematics is extended with a rotational axis, which rotates the measurement object and the random pattern projectors used for the data alignment. The combination of robot and rotational axis makes it possible to fully reconstruct each individual specimen. With this design, each sensor can be selected by rotating the end-effector. The LCI and the fringe projection unit were placed in such a way that their fields of view overlap to allow direct interactions, see Fig. 16a. The illumination sensor is separated to prevent collisions when handling the other sensors. Thus, with this setup it is possible to use the individual sensors within the multi-sensor setup without directly affecting the other sensors.

Fig. 16

a Measurement system setup with important coordinate systems. b Rendering of the measuring head CAD. Cones are used to visualize working distances and field of view

4 Data Fusion

Every sensor operates on its own scale range and provides measurement data. However, the measurements of each sensor are initially given in the respective sensor coordinate system (see Fig. 16). This means the measurement data is scattered in space and does not refer to a unified coordinate system, so it is not possible to immediately combine the information gathered from different sensors. In order to achieve this, each sensor coordinate system has to be calibrated with respect to a unified coordinate system. The next sections outline this calibration process for each sensor.

4.1 Sensor Hand-Eye Calibration

For camera-based sensors, state-of-the-art hand-eye calibration methods can be applied to calculate the position of the camera coordinate system with respect to the robot's base coordinate system (Tsai et al. 1989). For this, the robot is moved and, for each pose, a stationary calibration target is captured with the camera. In this work, a 2-D dot pattern with known dot distance is used as the calibration target. From the known properties of the target, the transformation between it and each camera pose can be estimated. The set of camera-to-target transformations and the forward-kinematics-based transformations of the robot enable the hand-eye calibration of the fringe projection and illumination sensors. Both cameras can be modelled with the pinhole camera model. The camera and lens of the LCI, however, require a different model. In addition, the depth of field of the telecentric lens is much smaller and the required number of different robot poses cannot be achieved, because the blurring prevents the transformation estimation. Moreover, the camera centre of a telecentric lens cannot be explicitly determined, but has to be set manually. Therefore, a different calibration procedure has to be used for the LCI.
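
For the camera-based sensors, this step can be realised, for example, with OpenCV's calibrateHandEye routine. The sketch below assumes that the robot flange poses (from forward kinematics) and the target poses estimated from the dot pattern are already available as homogeneous matrices; the chosen Tsai-Lenz method is one of several options.

```python
import cv2
import numpy as np

def hand_eye_calibration(flange_poses, target_poses):
    """Estimate the camera-to-flange transform (hand-eye calibration).

    flange_poses: list of 4x4 flange-to-base transforms from forward kinematics
    target_poses: list of 4x4 target-to-camera transforms estimated from the dot pattern
    """
    R_g2b = [T[:3, :3] for T in flange_poses]
    t_g2b = [T[:3, 3] for T in flange_poses]
    R_t2c = [T[:3, :3] for T in target_poses]
    t_t2c = [T[:3, 3] for T in target_poses]
    R, t = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                method=cv2.CALIB_HAND_EYE_TSAI)
    T_cam2flange = np.eye(4)
    T_cam2flange[:3, :3] = R
    T_cam2flange[:3, 3] = t.ravel()
    return T_cam2flange
```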

To calibrate the interferometer with respect to the other sensors, a 3-D calibration strategy is applied. For this, a pair of LCI and fringe projection measurements is used. Both sensors measure a 3-D calibration target with distinct features, which allows an unambiguous alignment of both measurements. The resulting transformation, which aligns the LCI data to the fringe projection data, can be used to determine the 3-D calibration of the LCI coordinate system. Thus, the fringe projection system can be used as the reference coordinate system for the LCI, which closes the transformation chain to the robot base.

The calibration of each sensor makes it possible to transform data from one coordinate system to another. In order to transform fringe projection data (FPS) into the coordinate system of the interferometer (LCI), the homogeneous transformation matrix \({}^{\text{LCI}}\mathbf{T}_{\text{FPS}}\) is applied by multiplying it with the fringe projection data p (cf. Eq. 2).

$$\mathbf{p}_{\text{FPS,LCI}} = {}^{\text{LCI}}\mathbf{T}_{\text{FPS}} \cdot \mathbf{p}_{\text{FPS}}$$
(2)

The subscript of a data point indicates its coordinate system; for better readability, the data origin is kept as the first subscript for transformed data. The sub- and superscripts of transformation matrices denote the source and destination coordinate systems, respectively.

4.2 Calibration of the Rotational Axis

While the hand-eye calibrations are sufficient to transform data when only the robot is moved, the rotary axis is not yet considered. To include the additional rotational movement, the axis is integrated into the kinematic chain and calibrated. For this, the pose of the rotational axis with respect to the robot has to be identified. This is achieved by mounting the calibration target onto the axis and recording multiple images while rotating the axis in small steps. For each axis position, the target centre can be estimated using the calibrated camera and the known target properties. The 3-D information of each estimated centre is used to calculate a three-dimensional circle fit. The normal of this circle is used to describe the location of the rotational axis. The direction of the normal is chosen with respect to the actual rotation of the axis following the right-hand rule. The missing degree of freedom along the axis is set manually.
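
One possible realisation of this axis identification is sketched below: a plane is fitted to the estimated target centres, the centres are projected onto that plane and a circle is fitted; the plane normal then gives the axis direction and the circle centre a point on the axis. The implementation details are assumptions for this illustration.

```python
import numpy as np

def fit_rotation_axis(centres):
    """Estimate the rotation axis from target centres recorded on a circle.

    centres: (n, 3) estimated target centre positions, one per axis position
    Returns a point on the axis and the axis direction (sign not yet fixed).
    """
    mean = centres.mean(axis=0)
    # Plane fit: the right singular vector with the smallest singular value is the plane normal
    _, _, vt = np.linalg.svd(centres - mean)
    u, v, normal = vt[0], vt[1], vt[2]
    # Project the centres onto the plane and fit a circle by linear least squares
    p2d = np.column_stack(((centres - mean) @ u, (centres - mean) @ v))
    A = np.column_stack((2.0 * p2d, np.ones(len(p2d))))
    b = (p2d ** 2).sum(axis=1)
    (cx, cy, _), *_ = np.linalg.lstsq(A, b, rcond=None)
    axis_point = mean + cx * u + cy * v       # circle centre = point on the axis
    return axis_point, normal
```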

4.3 Combination of Measurement Data

With the calibrated systems it is now possible to transform all gathered data into a unified coordinate system. Figure 17 shows the registered measurements of the LCI and the FPS. Larger structures on the blade surface can be observed in both surface reconstructions. Roughness measurements can thus be used to complement existing data with a higher depth resolution. The transformation of the data is performed analogously to Eq. 2:

$$\mathbf{p}_{\text{FPS,RS}} = {}^{\text{RS}}\mathbf{T}_{\text{FPS}} \cdot \mathbf{p}_{\text{FPS}}$$
(3)
$$\mathbf{p}_{\text{LCI,RS}} = {}^{\text{RS}}\mathbf{T}_{\text{LCI}} \cdot \mathbf{p}_{\text{LCI}}$$
(4)
Fig. 17

LCI measurement (dark grey) registered onto the surface measured with the FPS (light grey)

In this case, both measurements are transformed into the coordinate system of the rotary stage (RS) to include rotations which may have been performed between both measurements. Since some geometrical features are available, this coarse alignment can be further improved by applying, e.g., a point-to-plane ICP.

The previous example demonstrates the combination of multiple scales of 3-D data.

The determined transformations additionally allow the merging of 2-D and 3-D measurements, such as the results of the illumination sensor. This, however, requires the intrinsic camera parameters in order to properly project the data into the respective camera coordinate system. Melchert et al. (2020) demonstrated this projection step, utilizing the surface normals from the FPS to improve the classification results and mapping them onto a 3-D measurement.

Figure 18 shows this process. The classification image, which is derived from the 2-D data (shown on the left), can be projected onto the 3-D points. This makes it possible to increase the amount of information attached to each point of the surface measurement and thus to each point of the 3-D model. Based on this enhanced model and the segmentation results of the CNN (see Fig. 13a), a more comprehensive defect detection can be carried out.
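
A minimal sketch of this projection step, assuming an ideal pinhole model without lens distortion and points that have already been transformed into the camera frame, could look as follows; the function and variable names are illustrative.

```python
import numpy as np

def project_labels(points_cam, K, label_image):
    """Attach a 2-D class label to each 3-D point via an ideal pinhole projection.

    points_cam:  (n, 3) points already transformed into the camera frame
    K:           (3, 3) intrinsic camera matrix
    label_image: (h, w) integer class labels from the illumination sensor
    """
    h, w = label_image.shape
    labels = np.full(len(points_cam), -1, dtype=int)  # -1 = not visible in the image
    in_front = points_cam[:, 2] > 0
    uvw = points_cam[in_front] @ K.T
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    idx = np.flatnonzero(in_front)[inside]
    labels[idx] = label_image[v[inside], u[inside]]
    return labels
```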

Fig. 18

2-D classification from illumination sensor with the projection of 2-D labels onto a 3-D point cloud (Melchert et al. 2020)

However, since multiple calibrations depend on each other, the registration process is prone to error. Since many calibration steps are based on robot-mounted cameras, even small errors in the camera calibration influence the hand-eye calibration, the identification of the rotary stage and the stereo calibration results. In addition, the robot exhibits assembly-related deviations in segment lengths and axis alignment. To reduce this effect, the robot is factory-calibrated. Nevertheless, these aspects impact the overall registration performance and have to be noted.

5 Conclusions and Outlook

The application of optical measuring instruments of different scales and modalities in a common, global coordinate system opens up a wide range of novel inspection possibilities. In this way, the advantages of different measuring principles can complement each other, taking resolution, measuring field size, accuracy and measuring duration into account in an optimal way, and ensure a fast and meaningful diagnostic process. The detection and characterization of defects can be derived from multiple layers of information. Approaches for a multilayered defect detection and interpretation of combined measurement data are the subject of further research.

In addition, individual sensors can profit from the multi-sensor setup. A good example is the combination of the fringe projection unit and the illumination sensor. While the illumination sensor is capable of estimating the surface normals of the object on its own, the results are strongly dependent on the surface properties and favour low-frequency structures. The surface reconstruction of the fringe projection system, on the other hand, can provide normals with a much higher certainty.

The interaction of multiple sensors also opens up possibilities for measurement planning. While the FPS and the illumination sensor can be used to get a good overview of the measurement object, the LCI is introduced for local detail measurements. The positions of these surface roughness measurements can be derived from the overview data, e.g. from damaged regions identified based on the classification results. This on-demand sensor selection can greatly improve the measurement process, since only necessary measurements are performed.