1 Introduction

Additive manufacturing (AM) of metallic components has attracted considerable attention from both the scientific and industrial communities. Powder bed fusion (PBF) processes have the largest share of metallic AM processes, followed by direct energy deposition (DED) [1]. This paper focuses on laser powder bed fusion (PBF-LB/M), also called L-PBF or selective laser melting (SLM), in which a controlled, planar bed of powder is melted by means of a laser beam. PBF-LB/M can be used to produce components with excellent mechanical properties and a high degree of geometrical complexity and accuracy. However, the build-up rates and the maximum possible component sizes are limited with this process. With DED, on the other hand, much larger components can be produced and the production speed is higher. However, the components have lower dimensional accuracy [2].

Major application fields for AM are the aerospace, medical and energy industries [3]. One application of AM technologies is the production of hybrid components. Here, a geometry is added to an existing component by means of AM. The geometry that is built onto the existing component using AM will be referred to as build-up. The process is called hybrid-AM and is not to be confused with approaches in which AM is combined with post processes in one set-up [3]. Using hybrid-AM, highly specialised or very application-related properties can be realised. The layer-by-layer production of the build-up by AM leads to additional freedom in design. Since the base component is typically manufactured using conventional manufacturing processes, a hybrid-AM process can save manufacturing time and costs. In addition, process steps are combined and assembly costs can be avoided. Hybrid-AM processes can also be used for component repair; the damaged areas of the component are removed using conventional methods, e.g. cutting or milling, and then rebuilt using AM. Components can be reused after the maintenance process instead of being replaced by new parts. This sustainable approach is especially interesting for high-cost components, parts with lengthy procurement schedules and expensive parts that have only local defects [4]. Furthermore, when using AM for the repair of components, the repaired area can be upgraded in geometry or material to the latest standard [5].

One possible application field for hybrid-AM is the repair of gas turbine components such as burners or blades. Parts of the components are exposed to heavy loads during operation while other areas experience lower loadings [5]. Due to uneven wear, sections of the components can be removed and then rebuilt by AM. As superalloys are frequently needed in the hot gas path of gas turbines, replacing the components is associated with high costs. Thus, partial repair with AM enables a cost-efficient and sustainable maintenance process.

Hybrid-AM for build-up on other components is mainly performed with DED (94% DED to 6% PBF in 2016), according to Leino et al. [6]. One reason for this is that the kinematics of DED ensures good accessibility of the tool centre point. Therefore, compared to PBF-LB/M, a repair process can be implemented more easily using DED. The repair of gas turbine components by DED is the subject of many investigations which focus on process and hardware development, characterisation of the interface microstructure and resulting mechanical performance [6,7,8]. Process combinations of DED and PBF-LB/M are also being investigated in order to exploit the advantages of both processes [2, 9, 10]. However, in these approaches, the build-up is manufactured using DED on top of a PBF-LB/M component. In order to manufacture more complex geometries and achieve a higher dimensional accuracy of the build-up, in this work, the build-up is to be manufactured using PBF-LB/M on the base component.

When manufacturing hybrid build-ups, the quality of the transition from the base component to the additively manufactured part is of key importance. Some of the major challenges are the dimensional accuracy to reduce or avoid rework, the formation of micro-cracks or pores in the transition area, the influence of powder cycles on defect formation, anisotropic material properties and their influence on subsequent machinability, and the positioning of the build-up during pre-processing to match the base component [10,11,12,13,14,15]. While the focus of previous research has mainly been on the formation of the microstructure and the mechanical properties [5, 9, 16], the focus of this work is on matching the base component to the build-up. This task poses two challenges: firstly, the individual adaptation of the build-up to the base component, which will not be detailed in this paper; secondly, the identification of the exact location of the base component to match the build-up. If the offset between build-up and component is too large, rework is required, which causes additional cost, complexity and in some cases even scrap. To avoid this, Merklein et al. [17] and Papke et al. [18] propose a process in which complex structures are built on conventionally fabricated sheet metal using PBF-LB/M. The exact position of the build-up on the sheet metal is of secondary importance. The final shape of the hybrid part is generated using laser cutting and subsequent metal forming [17]. This approach does not require the exact identification of the position of the sheet metal, since the final shape of the part is created during further processing steps.

In Smelov et al. [19], turbine blades are repaired using PBF-LB/M. The turbine blade is welded to the centre of the build platform and the build-up is positioned on the centre of the platform in the CAD system. Exact positioning of the turbine blade on the build platform is not possible. Due to distortions caused by the welding process, the position can still change. It is also difficult to measure and thus align the blade at the correct angle on the build platform. These inaccuracies cannot be taken into account during the digital positioning of the build-up in the CAD system. For this reason, the build-up has to be provided with an allowance of 0.5 mm. The final shape of the component is achieved by subsequent reworking.

To avoid these additional post-processing steps, position detection of the components has to be performed and the determined positions have to be transferred to the CAD system. Another reason for the position detection of the components is the future use as a repair application. The shapes of the components may deviate from each other due to different loads in use, resulting in individual wear and distortion. Therefore, individual build jobs with associated position detection are required. Position detection of the component can be performed by means of a camera installed in the PBF-LB/M machine. Kulkarni et al. [20] propose a camera calibration method for part alignment in PBF-LB/M machines. Their approach allows component positions to be determined with a location-dependent accuracy ranging from 69μm up to 2.5 mm. Positional deviations of up to 2.5 mm are not considered accurate enough for the purpose of the current work. To obtain smaller position deviations, higher resolution cameras could be used. The use of high-resolution cameras has often been proposed for in situ defect detection in PBF-LB/M processes [21,22,23,24]. However, the focus of those works lies on the detection of process instabilities and typical defect-associated deviations. In Andersson et al. [5], component and build-up are aligned using a camera. However, the workflow used for position detection is not explained.

A component positioning process for reducing the offset, or an offset-free hybrid-AM process, has not been described in the literature so far. In this work, a camera-based, highly accurate and precise position detection system is developed. With the developed method, the upper area, the so-called tip area, of gas turbine blades is to be repaired using hybrid-AM. The misalignment between component and build-up shall be less than 100μm so that no or only little mechanical rework is required. For this purpose, a workflow is developed allowing the position of components in the PBF-LB/M system to be determined with high accuracy and precision. This workflow will be applied to quantify the precision of the process using 2D geometries. Subsequently, the knowledge gained will be used to build a 3D demonstrator.

2 System technology and methods

2.1 System approach

For position detection by means of a camera, it is necessary to integrate the camera system into the PBF-LB/M machine. The camera system setup has to be calibrated in order to achieve the envisaged level of accuracy and precision. By calibrating the camera, perspective distortion can be equalised and the relevant process coordinate systems can be aligned. This is performed by engraving reference markers in a calibration plate with the laser beam of the PBF-LB/M machine. The camera is used to acquire an image of the engraved calibration plate. From this image, the actual positions of the reference markers are determined by image processing. Mapping the actual positions to their nominal target coordinates enables the calibration of the camera system. For the position detection of the reference markers, different image segmentation algorithms are compared regarding their accuracy and precision.

Accuracy and precision of the segmentation algorithms and the position detection process are evaluated by engraving several objects with the laser beam of the PBF-LB/M machine. The accuracy of the position detection system is calculated from the deviations between the nominal target coordinates of the engraved objects and the coordinates determined from the calibrated camera image. By repeating this process, the scatter of the deviations can be determined from which the precision of the position detection is derived.

2.2 System setup and requirements

The proposed approach for position detection is implemented in a commercial PBF-LB/M machine SLM 280HL (SLM Solutions Group AG, Germany) with a 280 mm by 280 mm build platform. The camera system being used is the monochrome Basler ace 2 (model: a2A5328-15umPRO; Basler AG, Germany) camera with 5328 pixel by 4608 pixel in combination with the Zeiss Dimension 2/50 (Carl Zeiss AG, Germany) lens. To protect the sensor from laser radiation, a 1025-nm shortpass filter (Edmund Optics Inc., USA) is mounted in front of the optical lens.

It is beneficial to place the camera outside of the process chamber to avoid damage from the process environment. For position detection of components, the ideal camera position is orthogonally above the object to be observed. In conventional PBF-LB/M machines, this position is occupied by the laser scanning head. This requires an off-axis positioning of the camera. For the SLM 280HL machine used, an off-axis position above an inspection window was found for the camera. The setup used is shown in Fig. 1.

Fig. 1

Camera system setup in SLM 280HL including mirror system and different coordinate systems (offset between CAD coordinate system and laser coordinate system is exaggerated)

Due to the off-axis positioning of the camera, the build platform is partly outside of the field of view. Therefore, two silver-coated mirrors are used to align the field of view with respect to the centre of the build platform. The position of the field of view on the build platform can be changed by adjusting the mirrors, resulting in a change in perspective of the image.

The accuracy of the position detection is limited by the smallest feature that can still be resolved. According to the Shannon sampling theorem, the sampling rate has to be at least twice the highest spatial frequency, i.e. the smallest feature to be detected must span at least two pixels [25]. Therefore, in addition to the camera position, the spatial resolution of the setup is also important for position detection. The spatial resolution R is calculated according to [21] as

$$ R=\frac{\mathit{FOV}_{m}(x,y)}{O_{p}(x,y)} , $$
(1)

where FOVm(x,y) is the size of the taken image (field of view, FOV) in mm and Op(x,y) is the optical format of the sensor in pixels in x- and y-direction (see Fig. 2). A smaller value of R represents a higher spatial resolution. Most camera sensors are not square, so the spatial resolution in x- and y-direction has to be calculated separately. As shown in Eq. 1, the spatial resolution can be increased either by reducing the field of view or by enlarging the optical format of the sensor.

Fig. 2

Simplified representation of the thin lens based on [25]

The sensor size can only be influenced by the selection of the camera. In this study, the inspection window limits the maximum possible sensor size. The camera has been selected accordingly, so the sensor size is fixed and cannot be varied. Thus, the spatial resolution can only be affected by the field of view. The factors influencing the field of view can be taken from the thin lens equation [25]

$$ \frac{1}{f}=\frac{1}{i}+\frac{1}{o} , $$
(2)

with f as the focal length, i as the working distance from the lens to the object plane and o as the distance from the lens to the sensor (see Fig. 2). The magnification, with Om(x,y) as the sensor size in mm and FOVm(x,y) as the image size, is calculated as follows [25]

$$ \frac{O_{m}(x,y)}{\mathit{FOV}_{m}(x,y)}=\frac{o}{i} . $$
(3)

From Eqs. 2 and 3, the field of view is calculated as follows

$$ \mathit{FOV}_{m}(x,y)=i\cdot\left( \frac{1}{f}-\frac{1}{i}\right)\cdot{O_{m}(x,y)} . $$
(4)

The field of view and thus the spatial resolution is affected by the working distance i, the focal length f and the optical format of the sensor Om(x,y). The optical sensor format is given by the camera and the working distance is given by the installation on the PBF-LB/M machine. Therefore, the field of view can only be adjusted by the focal length of the lens.

For the aforementioned setup and equipment used, the working distance of 520 mm corresponds to a field of view of 137 mm by 118 mm and the resulting spatial resolution is around 26μm/pixel (see Eqs. 1 and 4). In order to achieve the highest possible spatial resolution and considering the limiting physical factors of the PBF-LB/M machine, it has been accepted that the field of view does not encompass the entire area of the build platform.
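These relations can be checked with a short calculation. The following sketch reproduces the stated field of view and spatial resolution from Eqs. 1 and 4; it assumes the nominal 50 mm focal length of the lens and a pixel pitch of 2.74 μm, which is an assumed value, as the pitch is not stated above:

```python
# Sketch: field of view and spatial resolution of the camera setup (Eqs. 1 and 4).
# Assumptions: f = 50 mm (nominal focal length) and a pixel pitch of 2.74 um;
# the working distance i = 520 mm is taken from the text.

f = 50.0                        # focal length in mm
i = 520.0                       # working distance (lens to object plane) in mm
O_p = (5328, 4608)              # optical format of the sensor in pixels (x, y)
pixel_pitch = 2.74e-3           # assumed pixel size in mm

O_m = [p * pixel_pitch for p in O_p]                 # sensor size in mm
FOV_m = [i * (1 / f - 1 / i) * o for o in O_m]       # Eq. 4
R = [1000 * fov / p for fov, p in zip(FOV_m, O_p)]   # Eq. 1, in um/pixel

print(f"FOV: {FOV_m[0]:.0f} mm by {FOV_m[1]:.0f} mm")            # approx. 137 mm by 119 mm
print(f"R:   {R[0]:.1f} um/pixel (x), {R[1]:.1f} um/pixel (y)")  # approx. 26 um/pixel
```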

2.3 Process coordinate system alignment

For the hybrid build-up process, different coordinate systems are to be considered. All process coordinate systems are shown in Fig. 1. The machine coordinate system is set at the machine origin. The CAD coordinate system describes the ideal location of the component with respect to the machine origin in the CAD system or in the slicing software. The laser coordinate system specifies the actual working point of the laser beam. The camera coordinate system depends on the camera position and varies with every camera installation.

In an ideal process, machine and laser coordinate systems should be aligned. Van Le and Quinsat [26] showed that errors can occur between the actual and the desired position. This so-called laser drift occurs because the galvanometric laser scanning head of the PBF-LB/M machine drifts over time. The laser drift has also been observed during this investigation. To correct the laser drift, the scanning head has to be calibrated [26]. Since calibration is time consuming and expensive, it cannot be carried out before every repair process in future applications. For this reason, the laser drift is not quantified and not corrected by additional calibration of the scanning head in this investigation. This results in a deviation between the machine and the laser coordinate systems and thus in an offset between build-up and component.

To minimise this offset, the build-up and the current laser working point have to be aligned in the CAD system. As the laser coordinate system defines the actual position of the laser working point and thus the position of the build-up in the PBF-LB/M machine, this is the primary coordinate system to which the CAD coordinate system should be aligned. To achieve this, a calibration routine was developed. The laser beam was used to engrave reference markers onto a calibration plate. This process is called contouring. The calibration plate is matte black, so the high contrast with the bright laser contour simplifies image segmentation (see Section 2.5). During build job preparation, the positions of the reference markers in the CAD coordinate system are assigned to the contouring build job. The contours of the reference markers are created by the laser beam in the laser coordinate system. By detecting the reference markers with the camera, the CAD coordinate system is aligned to the laser coordinate system in which the part will be built. This represents the current machine state, including the laser drift. The workflow described above can be adapted to any PBF-LB/M machine. Since the positions of the reference markers are known in pixel coordinates and in CAD coordinates, a conversion from camera coordinates to CAD coordinates is possible. Because the camera coordinate system mediates the alignment of the CAD coordinate system to the laser coordinate system, maintaining the accuracy of the camera system is of utmost importance.

In conventional PBF-LB/M machines, components cannot be installed in the process chamber with repeatable accuracy. Therefore, no information about the position of the component in the laser coordinate system is available. The build-up has to be aligned according to the real position of the component and matched to the laser working point. For this purpose, the position of the component is determined in the camera image. Since the camera and the laser coordinate system are aligned per the previously described routine, the position of the component can be translated into the laser coordinate system by means of the camera system.

2.4 Undistorted perspective and coordinate transformation

The off-axis position of the camera, shown in Fig. 1, leads to perspective distortion in the acquired image. Practice shows that small mounting inaccuracies, such as rotation or tilting, increase the perspective distortion. Due to perspective distortion, it is not possible to accurately measure distances or determine positions in an image. This can be illustrated by Fig. 3a. The image shows a square calibration plate. The upper edge appears shorter than the lower edge. In addition, the left and right sides, which are parallel on the plate, appear to converge.

Fig. 3

Perspective undistortion process of squared calibration plate with source points in distorted image and target points in undistorted image

To enable position detection, the perspective distortion has to be corrected. Referring to [26], a routine was developed to undistort the perspective. The developed process includes the previously described alignment of the coordinate systems.

Camera calibration for lens distortion rectification is not performed. Initial tests showed that this does not significantly improve the results. This is due to the fact that a very high quality lens is used and the objects for position detection are in the centre of the field of view. The perspective distortion is corrected using homography. Homography can be used to determine the correspondence between source and target points in two images. The target points T(x,y) are calculated according to [27]

$$ T(x,y)=\mathbf{H}\cdot S(x,y) , $$
(5)

where S(x,y) are the source points and H the homography matrix. The projective transformation using the homography matrix is called warping. For the calculation of the homography matrix, at least four source and four target points are needed [26, 27]. For this purpose, five reference markers are contoured on a calibration plate. Four are arranged in a square and the fifth at the centre of the square. Next, an image is taken of the contoured calibration plate and the positions of the reference markers in the camera image are determined (see Fig. 3). The determined reference marker positions are the source points S(x,y). For the calculation of the target points, the minimum distance from the centre point C(x,y) to the four outer reference markers is determined. By adding the minimum distance to the centre point C(x,y) in every direction, the four new outlying target points are calculated as

$$ T(x,y)=C(x,y)\pm \min\left(\left|C(x,y)-S(x,y)\right|\right) . $$
(6)

Thus, the centre serves as a supporting point and is identical in the perspective-distorted image and in the warped image. Using the minimum distance leads to a loss of information, but no points between the reference markers need to be extrapolated when warping the image. Having the source and the target points, the homography matrix is calculated from Eq. 5 using the function findHomography from the open source programming library openCV [28]. With the homography matrix, a projective transformation of arbitrary images acquired in the same setup can be performed. In Fig. 3b, the warped image is shown. The black borders on the left, bottom and right of the image illustrate the corrected perspective distortion. Due to the loss of information in Eq. 6, the warped image appears smaller, which results in a reduction of the spatial resolution.
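A minimal sketch of this routine is given below. It assumes the five reference markers have already been located in the distorted image and that the four outer markers are ordered clockwise from the top left; the function name is illustrative, while findHomography and warpPerspective are the openCV calls named above:

```python
import cv2
import numpy as np

def undistort_perspective(image, outer_markers, centre):
    """Warp an off-axis image using five detected reference markers.

    outer_markers: pixel positions of the four outer markers, ordered
    top-left, top-right, bottom-right, bottom-left (4 x 2 array).
    centre: pixel position of the central marker; it serves as the
    supporting point and remains fixed (cf. Eq. 6).
    """
    src = np.asarray(outer_markers, dtype=np.float32)
    c = np.asarray(centre, dtype=np.float32)
    # Minimum distance from the centre to the outer markers (Eq. 6);
    # using the minimum avoids extrapolating beyond the markers.
    d = np.min(np.linalg.norm(src - c, axis=1))
    signs = np.array([[-1, -1], [1, -1], [1, 1], [-1, 1]], dtype=np.float32)
    dst = c + d * signs                    # target points around the fixed centre
    H, _ = cv2.findHomography(src, dst)    # Eq. 5
    h, w = image.shape[:2]
    return cv2.warpPerspective(image, H, (w, h)), H
```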

The perspective correction includes the alignment of the coordinate system described in Section 2.3. It also defines the pixel to metric ratio. Assuming lp is the distance between two target points in the warped image and lm is the reference distance between the reference markers in the CAD system, then the pixel to metric ratio r is equal to

$$ r =\frac{l_{m}}{l_{p}} . $$
(7)

By contouring the reference markers, CAD coordinates are assigned to the current laser working point. This aligns the CAD coordinate system with the laser coordinate system and the laser drift, described in Section 2.3, is considered. In this way, arbitrary pixel coordinates from the camera coordinate system can be converted into CAD coordinates in the CAD coordinate system. The reference marker in the centre is used as a support point with the coordinates Cp(x,y) and Cm(x,y). Then the pixel coordinate Dp(x,y) can be converted by

$$ D_{m}(x,y)=r\cdot(D_{p}(x,y)-C_{p}(x,y))-C_{m}(x,y) $$
(8)

into the CAD (and thus laser) coordinate Dm(x,y).
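A minimal sketch of this conversion is given below, keeping the sign convention of Eq. 8 as printed and anticipating the correction term Ccorr(x,y) of Eq. 20 in Section 3.1; all argument names are illustrative:

```python
import numpy as np

def pixel_to_cad(D_p, C_p, C_m, l_p, l_m, C_corr=(0.0, 0.0)):
    """Convert a pixel coordinate D_p into a CAD/laser coordinate D_m.

    r = l_m / l_p is the pixel-to-metric ratio (Eq. 7); the central
    reference marker serves as the support point with pixel coordinates
    C_p and CAD coordinates C_m. C_corr is the optional correction value
    of Eq. 20 (zero by default, which reduces the formula to Eq. 8).
    """
    r = l_m / l_p                                              # Eq. 7
    D_p, C_p = np.asarray(D_p, float), np.asarray(C_p, float)
    return r * (D_p - C_p) - np.asarray(C_m) - np.asarray(C_corr)  # Eqs. 8 / 20
```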

2.5 Image segmentation and contour extraction

Image processing is required to detect objects in an image. The main challenge is to separate an object, e.g. a component or a reference marker, from the background of the image. The output of image segmentation is a binary image. The various methods for image segmentation have been the subject of extensive research [25, 29,30,31].

In this investigation, image segmentation is done by thresholds and by edge detection algorithms. The threshold methods applied are global threshold using the openCV function threshold with the flag THRESH_BINARY [25, 28, 30] and Otsu threshold using the openCV function threshold with the flag THRESH_BINARY+THRESH_OTSU [28, 29, 32]. The edge detection methods applied are Sobel operator using the openCV function Sobel [28, 30, 31] and Canny edge detection using the openCV function Canny [28, 33]. The described segmentation methods are compared with respect to accuracy and precision. The method with the highest accuracy and precision is selected for further experiments. The design of experiment is explained in more detail in Section 2.7.

From the binarised image, the outer contours of the objects are extracted by the border following algorithm of Suzuki and Abe using the openCV function findContours [28, 34]. Each set of contiguous pixels is identified as a contour. In order to detect only the desired objects in the image, an area filter has been applied using the openCV function contourArea [28]. Thus, small interferences such as noise or dust particles are not considered in the object detection.
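The segmentation and contour extraction steps can be sketched as follows. The global threshold value, the Canny hysteresis thresholds and the minimum contour area are illustrative placeholders, not the values used in this study; the Sobel variant is omitted for brevity:

```python
import cv2

def extract_object_contours(gray, method="global", thresh=127, min_area=500.0):
    """Binarise a greyscale image and extract the outer object contours."""
    if method == "global":
        _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    elif method == "otsu":
        _, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    elif method == "canny":
        binary = cv2.Canny(gray, 50, 150)   # hysteresis thresholds assumed
    else:
        raise ValueError(f"unknown method: {method}")
    # Border following after Suzuki and Abe; outer contours only
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    # Area filter: drop small contours caused by noise or dust particles
    return [c for c in contours if cv2.contourArea(c) >= min_area]
```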

2.6 Position detection using image moments

Image moments describe weighted averages of pixel intensities and can be used to describe objects in an image. Properties such as size, centroid or orientation can be calculated [33, 35]. Hence, image moments of the processed image are used for position detection of objects described by a contour. From a binarised input image with extracted contours, the image moments are calculated using the openCV function moments. The centroid coordinates of a closed contour result in [35]

$$ x_{C}=\frac{m_{10}}{m_{00}} , y_{C}=\frac{m_{01}}{m_{00}} , $$
(9)

with m00 as the area of the object and m10 and m01 as the first-order moments in x- and y-direction. The centroid of an object defines the first position characteristic. As the coordinates xC and yC are calculated in pixel space in the camera coordinate system, they have to be converted into the laser coordinate system using Eq. 8.

The second position characteristic is the orientation of the object with respect to the centroid, which also has to be determined. For this, the second-order central image moments are calculated by [35]

$$ \begin{array}{@{}rcl@{}} \mu^{\prime}_{11}&=&\frac{m_{11}}{m_{00}}-x_{C}\cdot y_{C} , \\ \mu^{\prime}_{02}&=&\frac{m_{02}}{m_{00}}-{y_{C}^{2}} , \\ \mu^{\prime}_{20}&=&\frac{m_{20}}{m_{00}}-{x_{C}^{2}} . \end{array} $$

The orientation of the object has to be determined through the intensity distribution I(x,y) of the object's contour in the image. Therefore, the covariance matrix has to be calculated as follows [35]

$$ \text{cov}[\mathbf{I}(x,y)]= \begin{bmatrix} \mu^{\prime}_{20} & \mu^{\prime}_{11}\\ \mu^{\prime}_{11} & \mu^{\prime}_{02} \end{bmatrix}. $$
(10)

From the covariance matrix, the Eigenvalues and Eigenvectors can be determined. The Eigenvalues λi are calculated with

$$ \lambda_{i}=\frac{\mu^{\prime}_{20}+\mu^{\prime}_{02}}{2}\pm\sqrt{\frac{4\mu^{\prime 2}_{11}+\left( \mu^{\prime}_{20}-\mu^{\prime}_{02}\right)^{2}}{4}} $$
(11)

and the resulting Eigenvectors Vi through solving the equation system

$$ (\text{cov}[\mathbf{I}(x,y)]-\lambda_{i}\cdot \mathbf{E})\cdot \mathbf{V}_{\mathbf{i}}=0 , $$
(12)

using the identity matrix E [35]. Eigenvalues and Eigenvectors can be calculated using the numpy function linalg.eig. The Eigenvalues and Eigenvectors of the covariance matrix describe the minor and major axes of the object. The Eigenvectors are equivalent to the principal axes of the object, whereby the Eigenvector with the larger Eigenvalue describes the main orientation of the object [29, 35]. Therefore, no conversion of Eigenvectors from the camera coordinate system to the laser coordinate system using Eq. 8 is required. To quantify the orientation, the angle of the major Eigenvector Vl can be calculated with

$$ {\Phi}=\text{arctan2} \left( V_{l,C_{x}}, V_{l,C_{y}}\right) . $$
(13)

The position characteristics of an object are thus described by its centroid with the corresponding x- and y-coordinates and its major Eigenvector Vl. In the following, the position characteristics are denoted by the vector D

$$ \mathbf{D}(x_{C},y_{C},\lambda_{l}) . $$
(14)

Alternatively, D can also be expressed with the corresponding angle Φ

$$ \mathbf{D}(x_{C},y_{C},{\Phi}) . $$
(15)
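A sketch of the calculation of the position characteristics from a single closed contour is given below. The component order in the arctan2 call follows Eq. 13, i.e. the angle is measured against the y-axis; the conversion to degrees is an addition for readability:

```python
import cv2
import numpy as np

def position_characteristics(contour):
    """Centroid and orientation of a closed contour (Eqs. 9 to 13)."""
    m = cv2.moments(contour)
    x_c = m["m10"] / m["m00"]                  # Eq. 9, centroid
    y_c = m["m01"] / m["m00"]
    # Second-order central moments, normalised by the area m00
    mu20 = m["m20"] / m["m00"] - x_c ** 2
    mu02 = m["m02"] / m["m00"] - y_c ** 2
    mu11 = m["m11"] / m["m00"] - x_c * y_c
    cov = np.array([[mu20, mu11],
                    [mu11, mu02]])             # Eq. 10, covariance matrix
    eigvals, eigvecs = np.linalg.eig(cov)      # Eqs. 11 and 12
    v_major = eigvecs[:, np.argmax(eigvals)]   # major principal axis V_l
    phi = np.arctan2(v_major[0], v_major[1])   # Eq. 13: arctan2(V_x, V_y)
    return x_c, y_c, np.degrees(phi)
```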

Additional steps were performed to simplify the determination of the position characteristics. The entire workflow for determining the position characteristics is shown in Fig. 4.

Fig. 4

Position detection workflow including image segmentation, contour extraction and calculation of position characteristics

The first step is to crop the input image with a square mask. All non-relevant areas are thus masked out. It has to be checked whether the object is completely visible in the region of interest. The image is then binarised to separate the object from the background and extract the contour (see Section 2.5). If the contour is not closed, the threshold of the binarisation has to be adjusted. After binarisation, the area of all recognised contours is determined and small contours below a defined threshold are filtered out. In this way, interfering influences such as dust particles or noise are suppressed. In the further workflow, only the objects of interest are considered for the position detection. Then the image moments are calculated and the position characteristics D(xC,yC,λl) are determined for the objects of interest. The position characteristics can be converted to CAD coordinates using Eq. 8. By importing the CAD coordinates into the CAD system, the build-up can be positioned in the laser coordinate system.
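Reusing the helper functions sketched in Sections 2.5 and 2.6, the workflow of Fig. 4 can be condensed as follows; the mask rectangle and filter values are again illustrative:

```python
import cv2
import numpy as np

def detect_positions(image_path, mask_rect, thresh=127, min_area=500.0):
    """End-to-end sketch of the Fig. 4 workflow (pixel coordinates).

    mask_rect = (x, y, w, h) defines the region of interest; the
    conversion to CAD coordinates via Eq. 8 is a separate step.
    """
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Step 1: crop the input image with a mask; non-relevant areas are zeroed
    x, y, w, h = mask_rect
    roi = np.zeros_like(gray)
    roi[y:y + h, x:x + w] = gray[y:y + h, x:x + w]
    # Steps 2-3: binarise, extract contours and filter small interferences
    contours = extract_object_contours(roi, "global", thresh, min_area)
    # Step 4: one position characteristic D(x_C, y_C, phi) per object
    return [position_characteristics(c) for c in contours]
```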

2.7 Design of experiment

To determine the precision and accuracy of the position detection, the position characteristics from the camera image are compared with the CAD positions. For this purpose, reference objects are contoured onto a calibration plate and an image is acquired. The position characteristics of the reference objects in the CAD system are known. The position characteristics of the reference objects are determined according to the workflow shown in Fig. 4. By repeating this process, precision and accuracy of the position detection can be calculated. The precision can be quantified by the expanded measurement uncertainty [36, 37]

$$ P=k\cdot \sqrt{\sum\limits_{i=1}^{n}{s_{i}^{2}}} , $$
(16)

with si as the standard deviation of the measurand i and k as the factor for the size of the confidence interval. The measurands i are the centroid coordinates in x- and y-direction and the orientation Φ of the corresponding Eigenvector. Due to the different units, the precision for the x- and y-coordinates has to be calculated separately from that of the Φ-orientation. For the calculation of the measurement uncertainty, k = 3 is chosen, which corresponds to a confidence interval of 99.73%. The standard deviation si is calculated by

$$ s_{i}=\sqrt{\frac{{\sum}_{j=1}^{m}(x_{j}-\overline{x})^{2}}{m-1}} $$
(17)

with xj as the j-th measured value, x̄ as the mean and m as the sample size. The precision is to be distinguished from the accuracy. The precision makes a statement about the scatter of the values among each other, whereas the accuracy describes the deviation of the measured values from a nominal value [38]. The accuracy is calculated from the deviation between the position characteristics determined from the camera image and the nominal position characteristics provided by the CAD system. The system has two distinct accuracy characteristics: the first defining the centroid location by means of translational coordinates and the second defining the orientation of the object. The translational accuracy Atrans is calculated by error propagation of the x- and y-deviations between the camera and CAD centroid

$$ A_{\text{trans}}= \sqrt{\left( x_{C_{\text{Cam}}}-x_{C_{\text{CAD}}}\right)^{2}+\left( y_{C_{\text{Cam}}}-y_{C_{\text{CAD}}}\right)^{2}} . $$
(18)

The rotational accuracy Arot is calculated as follows

$$ A_{\text{rot}}={\Phi}_{\text{Cam}}-{\Phi}_{\text{CAD}} $$
(19)

with ΦCam as angle determined from the camera image and ΦCAD as angle provided by the CAD system, both calculated using Eq. 13.
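The evaluation metrics can be summarised in a short sketch; Eq. 17 is implemented as the sample standard deviation and Eq. 18 as the Euclidean norm of the centroid deviations:

```python
import numpy as np

def precision(samples, k=3.0):
    """Expanded measurement uncertainty (Eq. 16).

    samples: array of shape (m, n) holding m repeated measurements of
    n measurands, e.g. the centroid coordinates in x- and y-direction.
    """
    s = np.std(samples, axis=0, ddof=1)   # Eq. 17, sample standard deviation
    return k * np.sqrt(np.sum(s ** 2))    # k = 3: 99.73% confidence interval

def accuracy(D_cam, D_cad):
    """Translational and rotational accuracy (Eqs. 18 and 19)."""
    a_trans = np.hypot(D_cam[0] - D_cad[0], D_cam[1] - D_cad[1])  # Eq. 18
    a_rot = D_cam[2] - D_cad[2]                                   # Eq. 19
    return a_trans, a_rot
```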

2.7.1 Comparison of segmentation algorithms

For selection of the segmentation algorithm, the different methods described in Section 2.5 were compared. A reference object was contoured onto a calibration plate and the precision and accuracy determined according to Eqs. 16 and 18. The results were compared and a method for segmentation was selected. The results are shown in Section 3.1.

2.7.2 Position detection of cube geometry

For evaluation of the developed position detection method, nine cubes were contoured onto a calibration plate. The cubes appear as squares on the plate. After image segmentation, the position characteristics of the squares were determined from the camera image. Precision and accuracy were calculated from the deviation between the nominal position characteristics provided by the CAD system and the position characteristics determined from the camera image. In this series of experiments, only the translational position characteristics in x- and y-direction were determined by Eq. 9. The orientation of the squares was not taken into account. Precision and accuracy were calculated using Eqs. 16 and 18. A total of three plates were contoured. Thus, the number of samples was m = 27. The results are shown in Section 3.2.

2.7.3 Position detection of blade geometry

Subsequently, the geometry of a turbine blade was contoured. Besides the translational position characteristics in x- and y-direction, the orientation was also calculated by the major Eigenvector Vl (see Eq. 12).

As detailed in Section 2.3, the deviation between the laser coordinate system and the CAD coordinate system will always exist due to multiple system inaccuracies, including component insertion into the PBF-LB/M machine. To account for these installation inaccuracies, the translational and rotational positions were varied in the CAD system by ± 1 mm and ± 1°, respectively. Accordingly, 25 different blade positions were contoured and their positions determined from the camera images. The sample size was m = 25. Precision and accuracy were calculated using Eqs. 16, 18 and 19. The results are shown in Section 3.3.

2.7.4 Transfer to 3D geometry

The results from the above experiments are applied to build up a hybrid structure. For this purpose, the upper 10 mm of a turbine blade, the so-called tip, was manufactured using PBF-LB/M. Afterwards the tip was ground flat to ensure the plane-parallelism required for the process. The demonstrator was installed in the PBF-LB/M machine and the position characteristics were determined according to the process shown in Fig. 4. A Siemens NX routine was developed for aligning the CAD coordinate system to the laser coordinate system by means of the processed camera image outputs. The position characteristics were imported into Siemens NX 1988 (Siemens AG, Germany) and the translational positioning of the build-up was done automatically by aligning the centroids. The orientation was set by aligning the Eigenvector with the principal axis. The positioned build-up was then exported in its corrected position as an STL file. The build job preparation was done in Magics 25.0 (Materialise GmbH, Germany). After the build file was loaded into the PBF-LB/M machine, the manufacturing process was started. The workflow of position detection including correction of perspective distortion is shown in Fig. 5.

Fig. 5

Overall workflow for the hybrid assembly of components

After the AM manufacturing process, a 3D point cloud of the hybrid structure was generated with the high-precision 3D measuring device ATOS 5 Scanbox (GOM GmbH a Zeiss Company, Germany). In the ATOS 5 Scanbox, a blue stripe light projection is used to generate a 3D point cloud via a stereo camera system. The maximum bidirectional length measurement error of the system is given with reference to DIN EN ISO 10360-8 as EUni95%:Art:ODS = − 35.2μm with a bidirectional repeatability range of RUni95%:Art:ODS = 17.7μm [39]. With reference to DIN EN ISO 10360-8, 95% of the points are used to create the mesh from the 3D point cloud in order to eliminate measurement artefacts [39]. Subsequently, the mesh was overlaid with the ideal CAD model in the software GOM Inspect Pro (GOM GmbH a Zeiss Company, Germany). For this purpose, the CAD model was matched to the component from the mesh via a Gaussian best fit. By comparing the ideal CAD geometry to the build-up mesh, the form deviation between the two can thus be determined along the entire profile.

3 Results and discussion

3.1 Image segmentation

In the following, the different segmentation algorithms (see Section 2.5) are compared. The results of image segmentation by the different methods are shown in Fig. 6. The segmentation using Global threshold, Otsu threshold and Canny edge detection delivers good results. Only the use of the Sobel operator does not provide closed contours of the reference markers (see Fig. 6d). The segmentation was performed with kernel sizes of 1, 3, 5 and 7 and delivered comparably poor results in each case. For this reason, the Sobel operator was not investigated further.

Fig. 6

Comparison of different segmentation algorithms

The translational precision and accuracy were determined based on segmented images using Global threshold, Otsu threshold and Canny edge detection. The results are shown in Fig. 7. The mean of all measured values for each method is highlighted. All methods applied have a high precision. The scatter of the values among each other is approx. 1 pixel, i.e. 30μm. Global threshold segmentation provides the highest precision at Pglobal = 29.82μm, followed closely by Canny edge detection with Pcanny = 30.46μm. However, none of the applied methods is perfectly accurate. All algorithms deviate from the expected zero offset. Canny edge detection provides the highest accuracy of the compared methods at ACanny = 8.99μm.

Fig. 7

Comparison of different segmentation algorithms in terms of accuracy and precision

The scatter and the offset of the values have different causes. On the one hand, unavoidable inaccuracies in the subpixel range occur during position detection. These inaccuracies can be attributed to the lighting conditions in the image resulting from one-sided illumination (see Fig. 1). They are also included in the calculation of the homography matrix. As a result, the perspective cannot be corrected completely. On the other hand, the camera setup affects the image segmentation. By contouring the reference plates, small grooves are engraved in the plate's surface. Due to the perspective, the depth of the grooves is also captured in the image, leading to a varying width of the contour lines depending on their orientation relative to the camera. Hence, the grooves parallel to the perspective appear larger. The effect can be observed in the segmented images. Laser contours parallel to the x-direction appear wider than those in y-direction (see Fig. 8b), confirming the presence of a parallax error on account of the camera's perspective. This is the reason for the different accuracies of the methods (see Fig. 7). While the mean values of Global and Otsu threshold are slightly negative, the deviation of the Canny algorithm is shifted positively in y-direction. One reason for this is the thinning of the lines by non-maximum suppression in Canny edge detection. This compensates for the effect of thicker horizontal lines, resulting in the highest accuracy for this segmentation method.

Fig. 8

Binarised image of the contours of cube geometries with nominal CAD positions (circles) and positions determined from the camera image (plus signs)

However, for highly accurate position detection of contoured (2D) reference objects, the determined position characteristics have to be corrected. The required correction values can be determined by contouring the build-up geometry onto a calibration plate. Then the position characteristics of the build-up contour are determined from the camera image and the deviations from the position characteristics provided by the CAD system are calculated. The determined deviations are used as correction values. The correction value is taken into account when translating the pixel coordinates into CAD coordinates. Accordingly, Eq. 8 changes to

$$ \begin{array}{@{}rcl@{}} D_{m}(x,y)&=&r\cdot (D_{p}(x,y)-C_{p}(x,y))\\ &&-C_{m}(x,y)-C_{\text{corr}}(x,y) , \end{array} $$
(20)

with Ccorr(x,y) as a correction value. Repeated contouring and subsequent correction value determination can increase accuracy by averaging. The parallax correction of position characteristics only needs to be performed when determining the position of contoured objects in a 2D plane. Since 3D parts have no grooves on the surface caused by the laser beam, there is no parallax error in the images and thus the conversion from pixel coordinates to CAD coordinates is still performed by Eq. 8.
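A possible sketch of the correction value determination, averaging the deviations over repeated contourings as described above:

```python
import numpy as np

def correction_value(cam_positions, cad_positions):
    """Mean deviation between measured and nominal CAD positions.

    Repeated contouring yields several measured positions per nominal
    position; averaging their deviations gives C_corr for Eq. 20.
    """
    dev = np.asarray(cam_positions) - np.asarray(cad_positions)
    return dev.mean(axis=0)
```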

Since none of the examined algorithms is fully accurate, precision was chosen as the criterion for the selection of the segmentation method. As the global threshold shows the highest precision, the segmentation is performed with this method. In the following, the precision is used for the evaluation of the position detection.

3.2 Precision of cube geometry

In this section, the precision of the detected position of the cube geometry is determined. The image is binarised using global threshold. The result is shown in Fig. 8a. The position characteristics in x- and y-direction are determined from the image according to the workflow shown in Fig. 4. In Fig. 8, the nominal CAD positions of the cubes are shown as a circle and the positions determined from the camera image as plus signs. The deviation between nominal position provided by the CAD system and determined position from the camera can be seen in Fig. 8b or schematically in Fig. 8c.

For the determination of precision, the position characteristics have to be translated into CAD coordinates by Eq. 20. The results in Fig. 9 are already adjusted by the correction value described in Section 3.1. The correction value in x-direction is determined as 2.34μm and in y-direction as 59.38μm. This results in a precision of Pcube = 119μm using Eq. 16.

Fig. 9

Accuracy and precision of the translational offset between the nominal CAD position and the position determined from the camera image for the cube geometry

The scatter of the x-values of approximately ± 25μm around the nominal value is smaller than the scatter of the y-values. As a result, the standard deviation of the y-values, at about 32μm, is twice as large as that of the x-values. Thus, the y-direction has the main influence on the precision of the position detection. This is also evident from the correction value, which is larger in the y-direction than in the x-direction. Furthermore, a systematic pattern in the deviations can be observed. The deviation of the y-values from the expected value increases for three cubes in a row and then decreases abruptly. One reason for this is the residual perspective in conjunction with the parallax error mentioned in Section 3.1, which is more pronounced in the y-direction. Overall, the deviation of the y-values decreases, which is evident from the negative slope of the trendline in Fig. 9. This can be explained by the position of the cubes on the plate. Cubes 1 to 3 are located at the top of the plate and furthest away from the camera (see Fig. 8a). The distortion from perspective is greatest in this row. The effect of the wider horizontal lines, described in Section 3.1, can be seen most distinctly here. The closer the cubes are to the camera, the less this effect can be observed. This is why the deviation from the expected value is smallest for cubes 7 to 9.

Due to these effects, precision and accuracy of the position detection depend on the position of the objects in the field of view. To increase accuracy and precision, the field of view can be divided into zones for location-dependent position detection. The zones are highlighted in Figs. 8a and 9. For each zone, the correction values are determined individually. Applying the correction value individually for the objects within a zone according to Eq. 20, the precision is calculated as Pcorr,cube = 79μm using Eq. 16. To perform a location-dependent position correction, the objects have to lie within one zone. Objects must not span multiple zones; if they do, new zones have to be created. The highest accuracy and precision is achieved when the field of view is not divided into zones but the correction value is determined individually for each object.

3.3 Precision of blade geometry

When determining the position of the blade geometry, only one object is located in the field of view. For this reason, the mentioned special case of an object-specific position correction occurs. In addition to the translational position in x- and y-direction, the orientation is considered for the blade geometry. This is described by the major Eigenvector Vl (see Section 2.6). A correction value is determined for all three position characteristics and the positions are corrected by Eq. 20. The determined centroid as well as both Eigenvectors are shown in Fig. 10 on a binarised image of a blade.

Fig. 10

Processed blade contour on calibration plate including centroid and Eigenvectors

The result of the deviation between the 25 different position characteristics determined from the camera image and the CAD system is shown in Fig. 11.

Fig. 11

Accuracy and precision between the CAD position and the position determined from the camera image for the blade geometry at varying locations in the field of view

The precision is calculated using Eq. 16 and is Ptrans = 30μm in x- and y-direction and Prot = 0.021° in the Φ-orientation. The results show that higher precision can be achieved when each object's position is corrected individually. This is also illustrated by the lower slope of the trend lines. An additional increase in precision occurs because the size of the objects is taken into account in the calculation of the position characteristics using image moments (see Eqs. 9 to 12). The proportion of the thicker horizontal lines relative to the total size is smaller for the blade geometry than for the cube geometries.

3.4 3D demonstrator

The 3D turbine blade tip demonstrator was successfully manufactured. The hybrid-AM component is shown in Fig. 12.

Fig. 12

Hybrid-AM turbine blade tip demonstrator

The offset between component and build-up of the 3D demonstrator is determined by overlaying the measured 3D mesh with the ideal CAD model. It is given as an absolute deviation, normal to the profile section, evaluated at the build-up plane between the CAD model and the mesh. The deviations are therefore not given separately in each direction, i.e. x- and y-direction as well as Φ-orientation, but as an absolute value in μm. The results are shown in Fig. 13. The histogram shows the distribution of the deviations. On average, the deviation between component and build-up is 69μm, subject to the measurement error given in Section 2.7.

Fig. 13

Offset between measured 3D mesh and ideal CAD model determined by GOM Inspect Pro

Only in the lower tip area, highlighted in Fig. 13, is the deviation between mesh and CAD larger than in the main body. One possible reason for this is the taper of the geometry. This can cause heat accumulation during the PBF-LB/M process, which leads to distortion of the original geometry. In order to detect the source of distortion, in situ monitoring methods, such as thermography, could be utilised.

4 Conclusions

The focus of this study was to minimise the offset between component and build-up in a hybrid-AM repair process. A camera-based approach was chosen for the position detection of the component in the PBF-LB/M machine. An off-axis position outside the process chamber was used for the camera, taking into account the spatial limitations of the PBF-LB/M machine. A mirror deflection system was developed to adjust the field of view of the camera on the build platform. The resulting perspective distortion in the captured image was corrected. The developed algorithm to undistort the perspective is based on the calculation of a homography matrix. For this, reference markers were contoured onto a calibration plate. This also enables the alignment of the machine and CAD coordinate systems to the laser coordinate system using the camera. The homography matrix can be used to undistort images acquired in the same setup.

Position detection requires the segmentation of images to separate the objects from the background. For this purpose, different segmentation algorithms were compared with respect to the achievable precision. Accuracy was not used as an evaluation criterion since none of the applied algorithms provided accurate results. The best results were obtained with the global threshold. Consequently, all binarisations were performed with this algorithm. The contours of the objects were extracted from the segmented images using the border following algorithm. For the determination of the position characteristics, centroid and Eigenvectors were determined from the image moments.

Reference objects were lasered onto calibration plates to quantify accuracy and precision. This allowed the position characteristics determined from the camera image to be compared with the position characteristics provided by the CAD system. The test series of the cube geometry showed that inaccuracies in the subpixel range occur in the position detection. These inaccuracies could not be avoided. In addition, a small residual perspective could be detected in the images. The off-axis position of the camera leads to a parallax error on 2D calibration plates, which has a negative effect on accuracy and precision. To reduce this influence, a correction value was calculated to adjust the determined position characteristics. With this workflow, a precision of Ptrans = 30μm in the translational direction and Prot = 0.021° in the rotational orientation was achieved in the blade geometry test series. It was shown that the highest precision can be achieved if the position detection is performed object-specifically.

The developed workflow was transferred to a real turbine blade tip demonstrator. After determining the position characteristics, the build-up was automatically aligned accordingly in the CAD system and subsequently manufactured in a hybrid-AM process. By comparing a mesh created from a 3D scan and the CAD model, an offset from base component to build-up of 69μm was determined. Compared to the precision on 2D calibration plates, the demonstrator’s offset was 39μm larger. This is due to the fact that the contrast between the demonstrator and the powder bed was lower than for the black calibration plate and the laser contour. This affects the image segmentation and thus the offset between build-up and component.

In future work, the influence of perspective will be reduced by using a tilt-shift adapter between camera and lens. By tilting the image plane parallel to the build platform, the perspective distortion in the acquired image is reduced. As a result, less information is lost in the process of undistortion, leading to higher accuracy and precision of the image processing. In addition, the improvement of accuracy and precision by installing a higher resolution camera will be investigated. While maintaining the size of the field of view, a higher spatial resolution can be achieved and smaller features can be resolved. As a result, the position of the contours can be determined with higher accuracy and precision. Lastly, the effects of uniform lighting within the PBF-LB/M system will also be investigated. Uniform illumination can be used to specifically highlight contrasts and contours in the powder bed to simplify image segmentation and thus to increase the accuracy and precision of the position detection.