
1 Introduction

Project C2 is located in the area of early fault detection within the process chain of turbine maintenance of CRC 871. Over the course of the previous two funding periods, research was conducted on a novel type of sensor technology for inspection in confined spaces. The measurement method of fringe projection was transferred to new scales and applications by means of a borescopic setup (Schlobohm et al. 2015, 2016; Pösch et al. 2017). This enables optical 3D measurement in areas that are difficult to access and offer limited space for movement. In particular, recent advancements in the field of smartphone cameras have made the implementation of miniature sensors possible. The use of these sensors for high-precision geometric component characterization is demonstrated in this paper using the example of aero engine inspection.

2 Objective

The aim of this subproject is to develop a fast inspection approach for complex geometries. For this purpose, the inverse fringe projection method was initially researched and used to perform rapid condition assessments of aircraft engine blades (Pösch et al. 2012; Schlobohm et al. 2017a, b). However, a precise metric derivation of defect sizes is not possible based on a single inverse pattern. Furthermore, precise knowledge of the orientation and geometry of the measured object is required. Due to rapid technical developments in the fields of cameras, projectors and computing power, the fringe projection measurement method has caught up with the inverse fringe projection approach for rapid inspection. Today, high-speed fringe projection measuring systems can be built that perform 3D measurements within one second for high-precision damage analysis. With these, it is even possible to perform handheld data acquisition at reduced accuracy (Matthias et al. 2018).

Significant advancements in the field of camera sensor miniaturization have also been made due to the needs of the smartphone industry. In addition to the inspection of complex components, this also enables the development of 3D sensors for inspections in confined spaces. In particular, the early fault detection of complex capital goods such as blisks and turbine blades has been identified as an application area. Precise 3D inspection during a maintenance interval on assembled and disassembled aircraft engines is needed to investigate safety-critical aspects and prevent unnecessary and expensive engine disassembly and repair. Accordingly, the objective is to develop miniaturized 3D measuring systems based on the fringe projection method for the purpose of early fault detection. Two different inspection tasks within the maintenance process were defined for this specific application. On the one hand, the particularly challenging inspection of the assembled engine via maintenance openings has to be targeted. On the other hand, blisks and turbine blades of the partially disassembled engine have to be inspected. The two inspection tasks present different challenges for the development of the measuring systems: measuring head diameters of less than 10 mm are required, and the working distance of the sensor lies within the range of 10–20 mm or 40–60 mm, depending on the task. Miniaturized camera sensors in particular are more sensitive to malfunctions than industrial camera sensors and require a suitable measurement strategy with appropriate measurement poses when used within optical sensors. To achieve intelligent measurement pose planning, the reflective properties of the measurement object must be taken into account in addition to the sensor-specific requirements. In particular, multiple reflections on highly reflective (shiny) components can lead to falsely reconstructed points within the measurement. For this purpose, a GPU-based simulation approach for determining low-reflection measurement poses and a compensation approach for the error-causing reflections are presented.

This article is structured as follows: First, the concept of a borescopic fringe projection sensor and its challenges during miniaturization are presented. Then a simulation approach to plan suitable measurement strategies is introduced and finally the measurement capabilities and results of the in-situ inspection are demonstrated.

3 Borescopic Fringe Projection Sensor

3.1 Design

For the adaptation of the fringe projection measuring technique to another scale range and for application in confined spaces, the classic camera-projector design is adapted. In order to obtain such small measuring heads, miniaturized cameras in the “Chip-on-the-Tip” design are used instead of industrial cameras. To enable the projection of fringe patterns into small installation spaces, a borescope including a lens is used within the projection path. By projecting sinusoidal patterns through a borescope, the sensor head can be spatially separated from the projector and from the frame grabber board of the camera. Two iterations of borescopic sensors were developed within this subproject. First, a proof of concept for this class of measurement systems was designed with a measuring head diameter of about 10 mm (Fig. 1 right). As the project progressed, technical advances in the camera sector enabled further miniaturization of the borescopic sensor to a measuring head diameter of about 6.5 mm (Fig. 1 left).

Fig. 1

Borescopic fringe projection systems with a Chip-on-the-Tip camera. Left: 6.5 mm measuring head with a 1/6″ sensor and a borescope (ø = 4 mm); right: 10 mm measuring head with a 1/4″ sensor and a borescope (ø = 6.5 mm)

Figure 2 shows a schematic of the measuring head of both sensors. Here, the green cone visualizes the field of view of the projector and the pyramid the field of view of the camera. The camera is fixed to the borescope shaft by a custom-designed 3D-printed clip. This clip can be flexibly adapted according to the required triangulation base and thus adjusts the working distance of the measuring system. The two developed measurement systems are based on the components presented in Table 1.

Fig. 2

Essential components and field of view of the measuring head of a borescopic fringe projection sensor. Image according to Middendorf et al. (2022b)

Both camera sensors were manufactured by OmniVision Technologies, Inc. (Santa Clara, United States). For the projection of the sinusoidal patterns, an evaluation module 4500 from Texas Instruments Inc. (Dallas, United States) was used. Its digital micromirror device forms the fringe pattern by binary tilting of each individual micromirror.

Table 1 Components of both measuring systems. The 6.5 mm measuring head corresponds to the measuring system shown left in Fig. 1

3.2 Miniaturization

In addition to inspection in confined spaces, a miniaturized measuring head has been developed to enable inspection through the maintenance openings of engines. As electrical components and sensors continue to shrink, the quality of the optical signal decreases and with it the suitability for use in optical measurement systems. In order to investigate the most relevant properties of the sensors and the influence of miniaturization, analyses of the signal-to-noise ratio, the edge spread function and the spatial frequency response of the cameras are performed below. In addition, the illumination homogeneity, the distortions of the camera-projector pair and the measurement uncertainty of the two measurement systems are presented.

Camera Noise

The signal-to-noise ratio (SNR) of a camera indicates the ability to differentiate the phase information retrieved from the sinusoidal patterns from noise. The noise of an image sensor results from photon shot noise, sensor read noise, fixed pattern noise, thermal noise, pixel response non-uniformity and quantization noise. In order to determine a corresponding SNR in practical application, Eq. 1 is used to describe the SNR in the context of read and shot noise. Following the approach of Martinec (2008), the summed intensity of two images ($\mu_{summedImage}$) is related to the standard deviation of the difference of the images ($\sigma_{differenceImage}$).

$$SNR_{Read,Shot}=\frac{\sqrt{2}\,{\mu }_{summedImage}}{2\,{\sigma }_{differenceImage}}$$
(1)

This test requires the cameras to be out of focus while acquiring multiple images under constant illumination conditions. The exposure time is increased incrementally during the test to use the full intensity range of the sensor. In order to create comparable conditions and compensate for the inhomogeneous illumination, regions of interest (ROI) were analyzed in each image. Based on Fig. 3, it can be seen that miniature sensors (OV5647 and OV2740) with one-piece injection-molded lenses are significantly more susceptible to noise than industrial cameras with high-quality lenses (CB120).
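The following Python sketch illustrates how Eq. 1 can be evaluated on a pair of defocused images; it is a minimal illustration of the two-image approach of Martinec (2008), with the ROI boundaries left as free parameters rather than the values used in the project:

```python
import numpy as np

def snr_read_shot(img_a: np.ndarray, img_b: np.ndarray,
                  roi: tuple = (slice(None), slice(None))) -> float:
    """Read-and-shot-noise SNR (Eq. 1) from two images of the same
    static, defocused scene under constant illumination.

    Summing the two images doubles the signal; differencing them
    cancels the fixed pattern and leaves only temporal noise.
    """
    a = img_a[roi].astype(np.float64)
    b = img_b[roi].astype(np.float64)
    mu_summed = np.mean(a + b)        # mu_summedImage
    sigma_diff = np.std(a - b)        # sigma_differenceImage
    return np.sqrt(2.0) * mu_summed / (2.0 * sigma_diff)
```

Repeating this for a series of exposure times yields the SNR over the mean normalized intensity, as plotted in Fig. 3.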

Fig. 3

SNR of both miniaturized image sensors (OV2740, OV5647) and industrial sensor (CB120) for comparison. Results published in Middendorf et al. (2021a)

Spatial Frequency Response

The spatial frequency response (SFR) is one of the most important quality metrics in the camera sector, since it quantifies the extent to which a camera and lens system can resolve image details. The slanted edge method (SEM) according to the implementation of van den Bergh (2018, 2019) is used for this analysis. This approach is robust against distortions, which is particularly relevant for miniaturized systems. For the evaluation according to van den Bergh, the cameras were aligned in their focal plane and the intensity gradients at the edges of black rectangles on a white background were examined (test target USAF1951 from Thorlabs, Inc. (Newton, United States)). Figure 4a shows the edge spread function (ESF) and Fig. 4b the SFR of the miniaturized camera sensors.

Fig. 4

Edge spread function and spatial frequency response of the miniaturized camera sensors. Results published in Middendorf et al. (2021a)

The plot of the ESF depicts the intensity curve across an edge (of a black rectangle), normalized to the averages of the white and black areas. The sensor OV5647 overshoots and undershoots at the edge, which resembles an unsharp masking effect. This behavior is unexpected; the ESFs of the OV2740 and of other industrial sensors follow a smooth curve and therefore behave approximately ideally. Internal signal processing within the sensor (OV5647) can probably explain this effect. The plot of the SFR shows the resulting modulation transfer function (MTF) as the contrast over the spatial frequency related to the sensor pixel. The MTF50 is usually used to compare optical systems regarding the spatial frequency up to which a sensor images sharply. For the sensor OV2740, the MTF50 is about 0.15 cycles per pixel, which corresponds to an imageable frequency of 288 cycles across the sensor width (pixel count of 1920). In the context of fringe projection, the OV5647 is able to display higher frequencies more accurately than the OV2740, but the OV2740 behaves more predictably and can reproduce intensity transitions during fringe projection measurements more accurately.
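As a rough sketch of the evaluation, the following fragment derives an MTF from a measured ESF by differentiating it to the line spread function and taking the Fourier magnitude; the windowing and the oversampling factor are illustrative choices, not the exact processing of the cited slanted-edge implementation:

```python
import numpy as np

def mtf_from_esf(esf: np.ndarray, oversampling: int = 4):
    """MTF (contrast over spatial frequency in cycles/pixel) from an
    ESF that is oversampled relative to the pixel pitch."""
    lsf = np.gradient(esf)                      # line spread function
    lsf = lsf * np.hanning(lsf.size)            # window against noise
    mtf = np.abs(np.fft.rfft(lsf))
    mtf = mtf / mtf[0]                          # normalize to DC
    freqs = np.fft.rfftfreq(lsf.size, d=1.0 / oversampling)
    return freqs, mtf

def mtf50(freqs: np.ndarray, mtf: np.ndarray) -> float:
    """First frequency at which the contrast drops below 50 %."""
    return float(freqs[np.argmax(mtf < 0.5)])
```

For the OV2740, for example, an MTF50 of about 0.15 cycles/pixel corresponds to roughly 288 resolvable cycles across its 1920 pixel width.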

Homogeneity

The homogeneity of the combined sensor systems is also analyzed, since strong differences in exposure are caused by different lenses, working distances and borescopes. In particular, the need for high dynamic range approaches can be evaluated this way. The homogeneity evaluation is carried out using a white photographic target, which is illuminated by a solid field pattern of the projector.

A strong radial intensity gradient starting from a certain center can be observed in Fig. 5. The two sensors show contrary behavior due to the larger measuring volume of the OV2740 and a different alignment of the borescope. When the working distance of the sensor is varied, the centers of the radially symmetric intensity drops shift. In order to precisely adjust the projection of the light into the C-mount lens of the borescope, a more flexible design of the digital micromirror device must be used.
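One simple way to quantify this drop, sketched below under the assumption that the illumination center has already been located, is to average the normalized intensity over concentric rings around that center:

```python
import numpy as np

def radial_profile(img: np.ndarray, center: tuple) -> np.ndarray:
    """Mean normalized intensity per integer radius (in pixels) around
    the illumination center -- a 1D summary of the radial gradient
    visible in Fig. 5. `center` is (x, y) in pixel coordinates."""
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(x - center[0], y - center[1]).astype(int)
    norm = img.astype(np.float64) / img.max()
    sums = np.bincount(r.ravel(), weights=norm.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)
```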

Fig. 5

Homogeneity of both miniaturized sensors. Results published in Middendorf et al. (2021a)

Calibration

In the context of sensor miniaturization, it is important to verify whether the distortion of the smallest sensor can be corrected sufficiently. Distortion correlates inversely with the aperture size and is not an inherent property of a sensor. Therefore, camera and projector are calibrated according to the pinhole camera approach of Zhang (2000). The radial and tangential distortion of camera and projector is modeled via the polynomial approach of Conrady and Brown (Brown 2002). For the determination of the extrinsic system parameters, a final stereo calibration of camera and projector is performed. Figure 6 shows the resulting distortion plots. Considering the direction of pixel displacement (arrow directions within the image), it can be concluded that the camera exhibits pincushion distortion, while the projector has barrel distortion.

Fig. 6

Distortion visualization of the 6.5 mm measuring head. Results published in Middendorf et al. (2021a)

Camera and projector are subject to strong radial distortion, while the camera also shows tangential distortion. The tangential distortion is probably caused by the lens tilting in the thread of the camera package. The corners of the camera image are subject to strong field curvature, so that features extracted there are erroneous and are therefore neglected. The high pixel displacement within the distortion plot of the projector results from an artificially extrapolated resolution of the projector.
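A condensed sketch of such a calibration with OpenCV is shown below; it treats the projector as an inverse camera whose "image points" are obtained from decoded phase maps, and the board coordinates, image resolutions and flags are placeholder assumptions rather than the project's actual settings:

```python
import cv2
import numpy as np

# obj_points: list of (N, 3) float32 arrays with 3D target coordinates,
# img_points_cam / img_points_proj: lists of matching (N, 2) float32 arrays
# (the projector correspondences are decoded from projected phase maps).
cam_size, proj_size = (1920, 1080), (912, 1140)   # assumed resolutions

# Intrinsics and Conrady-Brown distortion coefficients per device
_, K_cam, dist_cam, _, _ = cv2.calibrateCamera(
    obj_points, img_points_cam, cam_size, None, None)
_, K_proj, dist_proj, _, _ = cv2.calibrateCamera(
    obj_points, img_points_proj, proj_size, None, None)

# Stereo calibration yields the extrinsics (R, T) between camera and projector
_, _, _, _, _, R, T, E, F = cv2.stereoCalibrate(
    obj_points, img_points_cam, img_points_proj,
    K_cam, dist_cam, K_proj, dist_proj, cam_size,
    flags=cv2.CALIB_FIX_INTRINSIC)
```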

Reconstruction Quality

The probing error with respect to form is often used to classify the reconstruction quality. Using a borescopic sensor with a measuring head at 30 mm working distance, it was determined to be 20 µm according to VDI/VDE 2634-2 (Deutsches Institut für Normung e. V. 2012) for a cylindrical feature of a calibrated micro contour standard. Additionally, the probing error with respect to size was calculated following the guideline JCGM 100:2008 (International Organization for Standardization 2008). The probing error with respect to size on this feature is 40 µm within 20 repeated measurements. Please refer to Matthias et al. (2017) for further accuracy and measurement uncertainty investigations and supplementary explanations. These specifications apply to surfaces with good optical cooperativity; for surfaces with limited optical cooperativity, the known physical limits of triangulating optical measurement principles apply.

4 Planning a Measurement Strategy Using Ray Tracing Simulations

During the optical measurement of complex geometries, and especially of optically non-cooperative (glossy) surfaces, multiple reflections caused by the shape of the specimen occur frequently. This is critical for fringe projection measurements, since multiple reflections lead to incorrect phase information being unwrapped from the camera images. An example of this can be seen in Fig. 7. Here, a fringe projection measurement was performed on the concave surface of a highly reflective compressor blade. Due to multiple reflections, false points are reconstructed outside the actual geometry. The use of such measurement data leads to erroneous damage derivations and prevents automated data evaluation.
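To make the failure mode concrete: in a standard phase-shifting pipeline (sketched here for four patterns shifted by π/2, one common variant that the text does not specify), the wrapped phase is computed pixelwise, and any additive stray light from a second reflection violates the underlying sinusoidal model and biases the estimate:

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe images with phase shifts of
    0, pi/2, pi, 3*pi/2. Multiply reflected light adds a second,
    differently phased sinusoid to each pixel, so the returned phase
    no longer encodes a single projector coordinate."""
    return np.arctan2(i4 - i2, i1 - i3)
```

Triangulating with such a biased phase produces the false points outside the geometry that are visible in Fig. 7.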

Fig. 7

Reconstructed point cloud of a compressor blade with false reconstructed points due to multiple reflections

In order to gain a deeper understanding of this problem and to develop a compensation approach for these influences, an optical simulation of the measurement is performed. Since such simulations can take up to several days on a CPU, a near real-time GPU-based approach has been implemented. Thus, a physically based, high-resolution simulation of the measurements can be carried out within one second. After explaining the simulation pipeline, an approach to identify low-reflection measurement poses and a method to compensate for erroneous phase information based on the ray tracing simulations are presented.

4.1 GPU-Based Ray Tracing Simulation

Modern ray tracing algorithms rely on the rendering equation of James Kajiya (1986). This equation describes the energy conservation of light rays in space and provides a physically based description of light based on radiometric quantities to simulate an image. To render an image, the following equation has to be solved for each pixel of the image:

$${L}_{o}\left(\mathbf{p},{\boldsymbol{\omega}}_{o}\right)= {\int }_{{H}^{2}}f\left(\mathbf{p},{\boldsymbol{\omega}}_{o}, {\boldsymbol{\omega}}_{i}\right)\,{L}_{i}\left(\mathbf{p},{\boldsymbol{\omega}}_{i}\right)\,\text{cos}\,{\theta }_{i}\,d{\omega }_{i}$$
(2)

The amount of outgoing radiance $L_o(\mathbf{p}, \boldsymbol{\omega}_o)$ at a surface point $\mathbf{p}$ is integrated over all incident light ray directions $d\omega_i$ of a corresponding hemisphere $H^2$ as a function of the incoming radiance $L_i(\mathbf{p}, \boldsymbol{\omega}_i)$ and the reflection properties of the object surface $f(\mathbf{p}, \boldsymbol{\omega}_o, \boldsymbol{\omega}_i)$. In the simulation of a fringe projection measurement, $\cos\theta_i$ describes the angle between the optical axis of the camera $\boldsymbol{\omega}_o$ and the optical axis of the projector $\boldsymbol{\omega}_i$. The bidirectional reflectance distribution function (BRDF) $f(\mathbf{p}, \boldsymbol{\omega}_o, \boldsymbol{\omega}_i)$ of an object surface describes the distribution of the reflected light. In order to simulate reflections physically based and to take the surface roughness into account, the BRDF model according to Torrance-Sparrow (Torrance and Sparrow 1967) is applied. The rendering of a gray-scale image of the projection of a sinusoidal pattern of the measurement sequence is depicted in Fig. 8a. For a more detailed mathematical breakdown, refer to Middendorf et al. (2021b). To calculate the occurrence of multiple reflections and their reflection locations efficiently, an inverse ray tracing approach is used. Since a large number of the light rays emitted by the projector are never reflected into the camera, the inverse approach reduces the computational effort significantly. As a consequence, the interactions of the light rays with the specimen surface are traced from the camera origin to the projector origin. To further limit the computational effort of tracing multiple reflections, a ray tracing approach according to Whitted (1980) is used. Starting from the camera origin, a primary ray is traced, and a secondary ray is generated for each intersection of a light ray with an object. This creates a path structure, where the secondary rays are calculated from the surface normal as the specular reflection of the incident ray. In order to perform the ray tracing efficiently, the algorithm was implemented using OptiX, a ray tracing engine developed by NVIDIA® (Parker et al. 2010). To parallelize the rendering on the graphics card, the ray tracing application is based on NVIDIA's Compute Unified Device Architecture (CUDA) (NVIDIA et al. 2020). This recursive ray tracing is performed until a self-defined reflection depth is reached. In this application, it can be assumed that light beams from the fourth reflection onwards have a minor effect on the resulting camera image. Figure 9 shows an exemplary reflection map calculated using the measurement pose from Fig. 8a. In this figure, the maximum reflection depth per camera pixel is color-coded.
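The recursion can be summarized in a few lines; the sketch below assumes a hypothetical `scene.intersect` helper (in practice this is the BVH traversal provided by OptiX) and only records the reflection depth, not radiance:

```python
import numpy as np

MAX_DEPTH = 4  # reflections beyond the fourth are assumed negligible

def reflect(direction: np.ndarray, normal: np.ndarray) -> np.ndarray:
    """Specular reflection of an incident direction about a unit normal."""
    return direction - 2.0 * np.dot(direction, normal) * normal

def reflection_depth(origin, direction, scene, depth=0):
    """Trace a camera ray (inverse ray tracing) along its specular path
    and return the number of surface interactions. `scene.intersect`
    returns (hit_point, unit_normal) for the nearest hit, or None."""
    if depth >= MAX_DEPTH:
        return depth
    hit = scene.intersect(origin, direction)
    if hit is None:
        return depth
    point, normal = hit
    secondary = reflect(direction, normal)
    # small offset along the secondary ray avoids self-intersection
    return reflection_depth(point + 1e-4 * secondary, secondary,
                            scene, depth + 1)
```

Evaluating this per camera pixel yields reflection maps such as the one in Fig. 9.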

Fig. 8

Rendered intensity image of a ray tracing simulation and the functional principle of Whitted ray tracing. Results published in Middendorf et al. (2021b)

Fig. 9

Calculated reflection depth distribution as a result of the predicted reflections of the measurement pose of Fig. 8a. Results published in Middendorf et al. (2021b)

4.2 Evaluation of Suitable Measurement Poses

To identify suitable measurement poses, a consistent evaluation metric is first defined. Based on the sum of all maximum reflection depths per camera pixel, a representative value per measurement pose is determined. To position the sensor according to a targeted field of view and its working distance, a surface point on the specimen is aligned in the focal plane of the camera sensor. This enables the comparison of the reflectivity of a region of interest around the object point. To identify a low-reflection measurement pose, a spherical scan of possible measurement poses around the defined surface point is performed. Figure 10 shows 1024 different measurement poses on the sphere, colored by the reflectivity of each pose. The yellow star marks the center of the sphere and represents the observed surface point. The green star indicates the measurement pose with the lowest reflectivity and the red star the measurement pose with the highest reflectivity. The renderings of both measurement poses are shown in Fig. 11a and b. In Fig. 11a, the fringe pattern is projected into the blade and in the direction of the blade root, causing the light to be reflected multiple times. This leads to incorrect phase information being recorded by the camera. In contrast, the fringe pattern in Fig. 11b is projected toward the blade tip, which avoids reflections.
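A minimal version of this pose search might look as follows; the Fibonacci sampling is one possible way to distribute the 1024 candidate positions (the original sampling scheme is not detailed here), and `render_reflection_map` stands in for the GPU simulation:

```python
import numpy as np

def sphere_poses(n: int, radius: float, center: np.ndarray) -> np.ndarray:
    """n candidate sensor positions, evenly spread on a sphere around
    the observed surface point (Fibonacci / golden-angle sampling)."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i          # golden angle
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z * z)
    dirs = np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)
    return center + radius * dirs

def least_reflective_pose(poses, render_reflection_map):
    """Score each pose by its summed per-pixel reflection depth and
    return the pose with the lowest value (green star in Fig. 10)."""
    scores = [float(render_reflection_map(p).sum()) for p in poses]
    return poses[int(np.argmin(scores))], scores
```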

Fig. 10

Spherical scanning of all 1024 measurement poses related to a single surface point. The color indicates the summed reflection depth per pose. Results published in Middendorf et al. (2021b)

Fig. 11

Comparison of the most reflective and least reflective measurement pose. Image a has a total reflection count of 41,681,535 while b has 2,476,560. Results published in Middendorf et al. (2021b)

4.3 Compensation of Multiple Reflections

In order to reduce the influence of faulty phase information within the reconstruction, a masking approach is presented below. Based on the calculated reflection locations and the reflection depth in the camera image, a binary mask can be created. The masking is applied to the images of the entire measurement sequence so that the areas of multiple reflections are suppressed everywhere. An application of this approach can be seen in Fig. 12. To apply this approach to real measurements and to identify reflections in them, a pose estimation of the measurement object has to be performed first. This yields the relative pose of the specimen in the camera coordinate system. Using the calculated pose, a ray tracing simulation of the measurement can be performed and a mask can be calculated. The real measurement can subsequently be filtered with the simulated mask. In addition, it is also possible to calculate a mask for the projection, which reduces the reflections during the measurement itself. Applying this approach to real measurements requires some assumptions. For example, the real position and size of the measured object differ from the simulated ones and thus cause a certain uncertainty budget. This is currently taken into account in a simplified manner by performing an erosion of the masking to allow for small errors. In addition, possible defects and other machining operations that change the shape and surface of the component cannot be taken into account, resulting in possibly uncompensated reflection effects.

To validate this simulation approach, three different simulations were performed and reconstructed according to the normal phase shift pipeline used for fringe projection measurements. Subsequently, a deviation analysis of the reconstructed simulations was performed in comparison to the CAD file. The first simulation (Fig. 13a) represents an ideal measurement with a reflection depth of one, which avoids erroneous multiple reflection effects. The deviation analysis proves that the simulation was successfully reconstructed despite the optical properties of the measurement system. In the second simulation, a fringe projection measurement was simulated with a reflection depth of four. Significant deviations can be seen in the reconstructed point cloud (Fig. 13b). Multiple reflections occur especially in the area of the leading edge and the blade root. By means of the masking approach, the deviations from the second simulation were compensated. The masked image areas and the reduced measurement deviations are shown in Fig. 13c.
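A compact sketch of the masking step, assuming the simulated per-pixel reflection depth map is already available and using SciPy's morphological erosion as the simplified uncertainty treatment described above:

```python
import numpy as np
from scipy import ndimage

def reflection_mask(depth_map: np.ndarray, max_depth: int = 1,
                    erosion_px: int = 3) -> np.ndarray:
    """Binary mask of pixels whose simulated reflection depth does not
    exceed `max_depth`. Eroding the valid region grows the suppressed
    area slightly, tolerating small pose and geometry errors."""
    valid = depth_map <= max_depth
    return ndimage.binary_erosion(valid, iterations=erosion_px)

def apply_mask(sequence: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Suppress multiply reflected pixels in every image of the fringe
    sequence (shape: [n_images, height, width])."""
    return sequence * mask[None, :, :].astype(sequence.dtype)
```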

Fig. 12

Example of a masking approach based on the ray traced image and the calculated reflectance map. Results published in Middendorf et al. (2021b)

Fig. 13

Deviation analysis of reconstructed raytracing simulations to the CAD geometry. a Reconstruction of the simulation with a reflection depth of one. b Reconstruction with a reflection depth of four and c the reconstruction of the simulation with a reflection depth of four and the multiple reflection mask applied. Results published in Middendorf et al. (2021b)

5 Inspection of Turbine Blades in Confined Spaces

For the study of measurement in confined spaces, a geometric blade arrangement from the mounted aircraft engine was reproduced. Using five blades in different states of wear, investigations were carried out on damages at the leading edge. Besides the wear at the blade tip, this is one of the main places of wear, as erosion, cracks, nicks, dents, burns, leading edge burn-through, coating damages and blocked film-cooling holes occur there (IAE. International Aero Engine 2000). Figure 14 shows an image of the borescopic sensor with two blades in different states of wear. The left-hand blade is heavily worn and only barely visible due to a burnt coating, while the right-hand blade is clearly visible, as only coating at the leading edge is missing. In addition to the missing coating, a considerable amount of material is missing at the leading edge. An exemplary measurement pose for the borescopic engine inspection is depicted here. Particularly noticeable is the inhomogeneous illumination caused by the miniaturized measuring system. Depending on the measurement pose, up to three blades are within the field of view of a borescopic measurement.

Due to the rotational degree of freedom of the engine shaft, the position of the turbine blades relative to the measuring system is initially unknown. In order to clearly identify the blades and assign their position in the engine, the unknown pose must first be determined. The identified measurement pose can then be used to perform wear and deviation analyses. Given the highly variable measurement and wear conditions within the aero engine, a feature segmentation approach is used in addition to the common iterative closest point (ICP) registration approach. The film-cooling holes of the turbine blades have proved to be unique features that can be detected even after increased wear. They can be segmented both in the two-dimensional image via color gradients and in the three-dimensional point cloud via their geometric shape. To register the measured point cloud to the reference geometry of the CAD models, the film-cooling holes in the CAD reference are first identified. A detailed description of this approach can be found in Middendorf et al. (2022a). Using an equivalent segmentation approach, the film-cooling holes within the measured point cloud are segmented using a clustering approach (DBSCAN). To identify the features, the coordinates of the centroids of the segmented clusters are used. Based on the identified film-cooling holes, a random sample consensus (RANSAC)-based numerical optimization approach is used to find the closest possible match between the set of film-cooling holes from the measurement and that of the model. The segmented film-cooling holes in the reference geometry are shown in Fig. 15. Based on the estimated pose of the turbine blades, a subsequent fine registration based on an ICP approach can be used to align the entire measurement to the reference geometry.

To evaluate the condition of the measured specimen and derive damages, a surface comparison to the reference geometry is performed. For this purpose, the deviation of the point cloud from the CAD geometry is determined in polygonal normal direction. To assign the respective (polygon) planes to the corresponding 3D points, a 1-nearest-neighbor (NN) classification between the reference point cloud (generated from all polygons) and the reconstructed point cloud is calculated. Subsequently, the Euclidean distance of each point to its nearest polygon is determined.
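The two data-driven steps of this pipeline, hole segmentation and the nearest-neighbor surface comparison, can be sketched as follows; the DBSCAN parameters are illustrative rather than the tuned values from the project, and the reference cloud is assumed to be pre-sampled from the CAD polygons:

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import DBSCAN

def hole_centroids(points: np.ndarray, eps: float = 0.3,
                   min_samples: int = 10) -> np.ndarray:
    """Segment candidate film-cooling holes in the measured point cloud
    with DBSCAN and return one centroid per cluster (label -1 is noise)."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    return np.array([points[labels == k].mean(axis=0)
                     for k in sorted(set(labels)) if k != -1])

def point_deviations(measured: np.ndarray,
                     reference: np.ndarray) -> np.ndarray:
    """1-NN surface comparison: Euclidean distance of every measured
    point to its nearest reference point."""
    dist, _ = cKDTree(reference).query(measured, k=1)
    return dist
```

The centroids feed the RANSAC-based matching against the holes identified in the CAD reference; the distances, signed along the polygon normals in the full pipeline, yield deviation maps such as the one in Fig. 16.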
The right-hand turbine blade from Fig. 14 is damaged along the leading edge. Deviations of more than 1 mm compared to the reference geometry can be measured, see Fig. 16. Based on the knowledge of the exact geometric shape and location of the damage, the damage can be classified and a disassembly decision for a particular engine can be made. With such metric, high-precision measurement data, the subjective and error-prone assessment of the conventional borescope process can be extended (Drury et al. 1997; See 2012; Aust and Pons 2022). In combination with damage classification approaches based on neural networks, such as the one implemented by Aust et al. (2021), very fast, efficient and reliable inspection becomes possible. With an appropriate mechanical connection to the engine and a positioning mechanism, an automation of the inspection process based on borescopic fringe projection sensors can be realized in the near future.

Fig. 14

Camera view of an exemplary measurement pose of the borescopic fringe projection sensor

Fig. 15

Segmented film-cooling holes plotted on the reference geometry

Fig. 16

Deviation analysis of an exemplary leading edge damage. Results published in Middendorf et al. (2022a)

6 Conclusions

Within this research project, a borescopic inspection approach for confined spaces was developed. Using the example of aero engine inspections, the successful miniaturization and the suitability of the measurement system for the intended measurement task were demonstrated. For the design and development of miniaturized 3D measurement systems, it became evident that the camera and the size of the borescope in particular are decisive. Compared to industrial cameras, strong noise influences, non-linearities in the intensity response, data processing pipelines possibly implemented on the sensor, and strong distortions due to small working distances have to be considered. Concerning the borescopes, bending effects, which influence the optical properties of the sensors, have to be taken into account in addition to the oscillation-sensitive design. A reduction of the borescope diameter also leads to a loss of intensity within the measurement scene and a drop in intensity within an image. This means that high dynamic range measurements must be carried out.

By means of a GPU-based ray tracing simulation, an approach for automated measurement pose planning was developed. This approach takes the sensor-specific properties and the reflective characteristics of the measurement objects into account. In the future, the measurement pose planning should be extended by further influencing factors such as the sensor noise as well as geometry- and pose-dependent measurement uncertainties. Furthermore, the influence of production-specific variances of the measurement objects and their effect on the masking and compensation approach should be investigated. For the application of fringe projection measurements on shiny components, a compensation approach for multiple reflections was developed. This enables the examination of highly reflective surfaces, which previously could only be measured with an anti-reflection spray. However, the subsequent compensation of multiple reflections in real measurements requires a successful pose estimation of the measured object and a precise reference geometry.

The inspection in confined spaces was also successfully implemented in an academic environment. In particular, navigation and orientation within the aero engine could be addressed with a rigid endoscopic sensor. The pose estimation of the turbine blades within the engine was realized using a feature segmentation approach. Finally, the condition assessment and damage derivation of worn turbine blades were demonstrated using exemplary damages with impacts at the leading edge. For future tasks, it is possible to bring the developed sensors to a level of industrial maturity and to test them on real aircraft engines outside the laboratory.