1 Introduction

This article is based on parts of a dissertation written by the lead author in German [1]; text passages from the dissertation were reused and translated for this article.

Material extrusion (MEX) is used in a wide range of applications [2,3,4]. However, to be able to additively manufacture products with high quality requirements, tools for systematic quality management must be available [5, 6]. This involves sophisticated monitoring technologies to collect quality data during the manufacturing process. In addition, definitions are needed to evaluate the collected quality data in a generally valid and comprehensible way [7, 8].

Requirements for a MEX product must be defined in terms of surface, geometry, mechanical properties and feedstock materials [9]. With regard to the mechanical properties, for example, there are a large number of part-specific studies (e.g. [10, 11]). Additionally, for MEX plastic parts, quality classes with respect to relative part density, dimensional accuracy and mechanical properties are defined in ISO/ASTM DIS 52924:2020–05; these apply to the entire part [12]. Generally, a classification of parts into three quality classes is proposed for MEX [13]. However, there are no definitions of quality indicators or limits for quality classes that allow a high-resolution evaluation of MEX manufacturing processes (e.g., of individual layers or subareas of layers) and that are usable in more than one individual application [8]. Therefore, the objective of this work is to derive generally applicable quality classes for specific quality indicators from empirical investigations of the MEX process.

Manufacturing a part using MEX involves melting a feedstock with a moving extrusion head and depositing it layer by layer onto a substrate in the form of beads [14, 15]. A wide variety of imperfections can occur during this process, for example stringing [16], scars [16], underfill and overfill [17], discolorations [18] or incorrect bead deposition positions [19].

The MEX layer surfaces are never completely planar. Depending on the process parameters, a regular surface profile is formed. Therefore, a promising way to evaluate layer surfaces is the detection of imperfections as deviations from the dominant regularity [20, 21]. In this work, quality indicators are used which are a measure for the regularity of layer surfaces. The quality classes are intended to be derived from the scatter of quality indicators in real MEX production processes and therefore must be collected with an appropriate monitoring system.

Technologies for the acquisition of condition data have already been the subject of numerous studies. Temperature sensors, vibration sensors, acoustic emission analysis, measurements of electrical quantities, as well as force and pressure sensors are mainly used to monitor mechatronic components of the MEX machine [7, 8].

Scheffel et al., for example, use vibration sensors mounted on the machine frame, the build platform and the extrusion head. The authors utilize a convolutional neural network to analyze the collected data. The system achieves an accuracy of 97.7% when differentiating between production sequences that correspond to normal process states on the one hand and crack formation between adjacent layers on the other [22]. Other approaches that use vibration sensors deal with the determination of the effectively existing nozzle diameter [23, 24] or the detection of faulty mechanical components of the MEX machine [25]. Furthermore, Xu et al. use an acoustic emission sensor placed on the build platform to categorize different severity levels of warpage with a decision tree and to distinguish them from a defect-free process [26]. Acoustic emission sensors can also be applied to analyze part detachment from the build platform and part deformation [27, 28] or the condition of the extruder [29].

In contrast, image processing systems, optical 3D measurements and thermographic methods enable the direct acquisition of part quality data [7, 8]. These approaches are particularly promising, because the entire part can be inspected with just one sensor if the individual layers are the target of the analysis.

A widely used approach is to analyze the layer surface with cameras attached to the extrusion head. For example, Liu et al. detect overfilling and underfilling using two digital microscopes [30, 31]. Brion and Pattinson also implement a camera attached to the extrusion head, with its viewing axis focused on the nozzle tip. They train an artificial neural network to classify the parameters filament flow rate, extrusion head lateral speed, Z offset of the nozzle tip and hotend temperature into three categories: “low”, “good” and “high”. In total, the 81 possible parameter combinations can be identified by the system with an average accuracy of 84.3% [32].

Other promising concepts for MEX process monitoring have been developed by Petsiuk and Pearce. In their setup, the camera is mounted on a tripod in front of the MEX machine. A ring light arranged around the build platform is moved by a motor to maintain a constant distance to the current layer. A model of the layer geometry is derived from the G-code. This enables a comparison with acquired layer images to determine geometric deviations of the layer outer edges and defects in the layer structure. In addition, an unsupervised machine learning method based on texture analysis is investigated to detect faulty infills [33, 34].

Besides monitoring individual layers, image processing systems are also used to inspect the side walls of parts. In this technical variant, the camera axis is often perpendicular to the normal vector of the build platform. Saluja et al. use this hardware setup to detect detachment of the part from the build platform using a convolutional neural network. The manufacturing processes of cuboids, prisms and cylinders are classified with an accuracy of 99.3% when assessed layer by layer [35]. Rill-García et al. investigate the inspection of side walls during the production of large-format MEX parts and evaluate the quality of deposited material beads during the processing of concrete [36]. A monitoring concept by Henson et al. is based on three cameras arranged around the build platform. Defects are detected by deriving simulations of images that correspond to the camera perspectives from the digital part data. These synthetic reference images can be compared with the captured camera images [37].

Caltanissetta et al. have developed a solution for large-format MEX in which the cooling characteristics of deposited material beads can be analyzed with a thermographic camera and local temperature deviations can be detected. These are a signal for underfilling and overfilling, incorrect extrusion temperatures and porosity [38]. To precisely assign acquired temperature data to the part elements of the CAD model, Binder et al. use perspective transformations. In addition, filters are used to exclude the visible extrusion head from the analysis [39].

3D data of the produced layers can be determined using a structured light scanner and cameras, for example [40]. Lyu and Manoochehri, on the other hand, record layer data with a laser triangulation sensor and analyze it with a convolutional neural network. The four states “normal print”, “over extrusion”, “under extrusion” and “severe under extrusion” can be distinguished with an accuracy of 90.1%. Moreover, a mechanism for controlling the process has been implemented [41]. In this context, a study by Kline et al. is relevant. The authors compare part data recorded firstly with a computed tomography scanner and secondly with a laser triangulation sensor. The subject of the analysis is the characteristics of voids [42].

Despite the large number of existing approaches to monitor the MEX process, no industrial implementation of a capable monitoring system has been realized yet, as the current systems are not able to monitor complex and varying parts. Furthermore, they are usually only designed for specific types of defects, and no project is pursuing a high-resolution inspection of the entire manufacturing process, including all layer subareas. The measurement of quality indicators as summarizing metrics for part quality is also not considered in any publication. In addition, there are fundamental deficits regarding the systematic and comprehensive investigation of developed process monitoring systems; for example, measurement uncertainties are generally not determined (see also reviews [7, 8]). Therefore, this work uses a novel image processing system for layer-by-layer data acquisition, which is intended to enable a high-resolution evaluation of part quality during manufacturing.

In Sect. 2, the developed image processing system is presented. On the one hand, this includes the hardware and the methods for layer-wise image acquisition. On the other hand, innovative approaches for the segmentation of the layer images as well as for the detection of anomalies are presented. Section 3 includes experimental investigations of the capabilities of the image processing system. The proposal for quality classes is presented in Sect. 4 and its significance is discussed in Sect. 5. Finally, Sect. 6 summarizes the main conclusions of the study.

2 Image processing system

Figure 1 shows the structure of the subchapters and the functional concept of the image processing system. The individual function modules generate analysis results, which serve as input information for subsequent processing steps. All function modules as well as the interfaces for data transfer are implemented in the programming language Python. Central for the realization of the required functions is the integration of the open-source libraries Matplotlib [43], NumPy [44], OpenCV [45], pandas [46], PyOD [47], scikit-image [48] and SciPy [49].

Fig. 1 Functional concept of the image processing system

2.1 Image acquisition

Figure 2 shows the hardware of the image processing system as a CAD model. The industrial MEX machine utilized is the Kühling & Kühling HT500.3, with a build platform of 200 mm × 180 mm and a build height of 290 mm. Furthermore, the monochrome camera Basler acA4572 17um is used, on which the Kowa LM35HC lens with a fixed focal length of 35 mm is mounted. Also included in the equipment is the round darkfield illumination IDRA-T450DWHV from the manufacturer Leimac Ltd., which has an inner diameter of 414 mm. The darkfield illumination is arranged around the build platform.

Fig. 2 CAD model of the image processing system

The principle for image acquisition is that layer images are taken under darkfield illumination directly after completion of a layer. A camera is mounted vertically above the build platform for this purpose. Specific key commands are inserted into the G-code at the end of each layer. These cause the extrusion head to move to the side after the completion of a layer and to clear the viewing axis of the camera. Additionally, the camera and the power supply of the darkfield illumination are triggered.
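
As a sketch of this mechanism, the following Python snippet post-processes an existing G-code file and inserts the layer-end key commands. The layer-change marker, the parking coordinates and the trigger command are illustrative assumptions and not the exact commands used by the authors' machine controller.

```python
# Sketch: insert layer-end key commands into existing G-code.
# Assumptions (not taken from the paper): the slicer marks layer changes with
# the comment ';LAYER_CHANGE', the extrusion head is parked at the front-left
# corner, and the camera plus darkfield illumination are triggered by a
# placeholder command understood by the machine controller.

LAYER_MARKER = ";LAYER_CHANGE"  # slicer-specific marker, assumed
KEY_COMMANDS = [
    "G1 X0 Y180 F12000 ; move extrusion head aside, clear the camera viewing axis",
    "M118 TRIGGER_IMAGE ; hypothetical trigger for camera and illumination supply",
    "G4 P500            ; short dwell so the image is captured at standstill",
]


def insert_key_commands(gcode_lines):
    """Return the G-code with the key commands inserted before every layer change."""
    modified = []
    for line in gcode_lines:
        if line.strip().startswith(LAYER_MARKER):
            modified.extend(KEY_COMMANDS)
        modified.append(line)
    return modified


# Usage example (file names are placeholders):
# with open("part.gcode") as src, open("part_monitored.gcode", "w") as dst:
#     dst.write("\n".join(insert_key_commands(src.read().splitlines())))
```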

2.2 G-code analysis

For each layer, the G-code is analyzed to create two different layer models. Firstly, a geometry model specifies the geometric properties of the layer (Fig. 3 center). Secondly, an illumination model is used to determine illumination angles that result in ideal image contrasts. To realize varying illumination angles in the manufacturing process, the height of the build platform is changed before image acquisition. The position of the darkfield illumination remains unchanged. If the build platform moves upward, for example, the light falls flatter on the layer surface. The vertical distance between the current layer surface and the plane of the darkfield illumination is called the illumination height (Fig. 3 left). To determine locally ideal illumination heights, geometric influences of the part as well as influences on the radiant energy of light beams are represented in the model.
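
The basic relationship between illumination height and illumination angle can be illustrated with the minimal geometric sketch below. This is not the authors' illumination model, which additionally represents part geometry and radiant energy; the ring radius is taken from the hardware in Sect. 2.1, while the evaluated point and the heights are only example values.

```python
import numpy as np

RING_INNER_RADIUS_MM = 414 / 2  # inner radius of the darkfield illumination (Sect. 2.1)


def elevation_angle_deg(radial_distance_mm, illumination_height_mm):
    """Elevation angle under which the ring light hits a surface point.

    Simplified 2D model: the horizontal distance between light source and point
    is the ring radius minus the point's radial distance from the platform
    center; the vertical distance is the illumination height.
    """
    horizontal = RING_INNER_RADIUS_MM - radial_distance_mm
    return np.degrees(np.arctan2(illumination_height_mm, horizontal))


# Example: a point 50 mm from the platform center for the two heights shown in Fig. 3
for h in (7, 44):
    print(f"illumination height {h:2d} mm -> elevation angle {elevation_angle_deg(50, h):.1f} deg")
```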

Fig. 3 Influence of the modeled illumination heights on the visibility of edges

The images on the right side of Fig. 3 illustrate that ideal illumination heights depend on the local layer geometry. For example, at the cylindrical recess, edges of deeper-lying layers are visible at a large illumination height (44 mm). For high-contrast imaging of the edges of the manufactured digit “0”, however, exactly this large illumination height is required. The smaller illumination height (7 mm) darkens the digit “0” and prevents high-contrast imaging. Depending on the layer geometry, it is therefore often necessary to acquire several images of one layer at varying illumination heights.

2.3 Segmentation

Segmentation aims to identify the zones of the image that contain material of the current layer. To implement an effective segmentation, the layer edges are detected algorithmically; due to the darkfield illumination, they are imaged brightly and with high contrast. Edge detection is based on a watershed algorithm with actively placed markers (see [50]). Two types of markers specify points that can be reliably assigned to either the current layer or the background. They are placed based on the geometry model of the layer and, in the case of fine, filigree layer structures (e.g., grid infills), additionally using a local thresholding method.
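
A minimal sketch of such marker-based watershed edge detection with scikit-image and SciPy is shown below. The marker placement via erosion and dilation of the geometry-model mask and the margin width are simplifying assumptions; the actual system additionally uses local thresholding for filigree structures.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed


def segment_layer(layer_img, model_mask, margin_px=15):
    """Marker-based watershed segmentation of a layer image (simplified sketch).

    layer_img:  grayscale layer image acquired under darkfield illumination
    model_mask: boolean mask of the current layer from the G-code geometry model
    margin_px:  assumed safety margin for reliable marker placement
    """
    markers = np.zeros(layer_img.shape, dtype=np.int32)
    markers[~ndi.binary_dilation(model_mask, iterations=margin_px)] = 1  # reliable background
    markers[ndi.binary_erosion(model_mask, iterations=margin_px)] = 2    # reliable current layer
    gradient = sobel(layer_img)        # layer edges appear bright and with high contrast
    labels = watershed(gradient, markers)
    return labels == 2                 # boolean mask of the segmented layer
```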

The geometry model may describe the layer geometry inaccurately due to various influencing variables. Therefore, a first edge detection is performed, in which it is accepted that some layer edges are not detected correctly. Subsequently, based on an image registration, the deviations between the layer models and the real layer are minimized with an iterative closest point algorithm (see [51]). This allows all layer edges to be correctly localized in a second edge detection. As exemplified in Fig. 4, the identified layer surface is finally divided into the following homogeneous layer subareas by masking with the geometry model:

  • perimeter (outer shell)

  • solid infill inner area

  • solid infill border area (extends 12 times the bead width inward from the boundary of the solid infill)

  • grid infill inner area

  • grid infill border area (grid elements whose centers lie no further than 75% of a grid element's diagonal extent from the boundary of the grid infill)

Fig. 4 Isolation of layer subareas by masking with the geometry model

2.4 Anomaly detection

Anomaly detection involves inspecting the layer subareas isolated from each other for imperfections (Fig. 5). Unsupervised machine learning algorithms are used to determine anomaly scores as a measure of the irregularities associated with single data points. Generally, no separate data set is required for the training of the machine learning methods. The image processing system takes a limited number of images during ongoing production and uses them to train the anomaly detection. This involves deriving models of the normal surface properties from the acquired layer images. These models represent the dominant data structure.

Fig. 5 Functional structure of anomaly detection

Feature extraction is the first step and is used to focus the anomaly detection on characteristic properties. For perimeters and solid infills, a computation of feature vectors takes place in the local neighborhoods of pixels. For grid infills, geometric features of the contours of individual grid elements are analyzed. The individual features are listed in Fig. 5.

A large variety of algorithms exists in the area of unsupervised machine learning and anomaly detection (e.g., [52, 53]). In this work, an isolation forest (see [54, 55]) is used for solid infills (inner area and border area), perimeters and the inner areas of grid infills. An isolation forest shows very good classification performance over a wide range of applications and is also time efficient [56,57,58]. In the border areas of grid infills, the feature vector consists of only one feature, and the deviations from the median (see [59]) are used as anomaly scores.
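
A minimal sketch of this scoring step with the PyOD library (one of the libraries listed in Sect. 2) and NumPy is given below. The feature matrices are assumed inputs; the isolation forest hyperparameters and the min-max normalization to the range [0, 1] mentioned in the following paragraph are illustrative choices, not the exact configuration of the system.

```python
import numpy as np
from pyod.models.iforest import IForest


def isolation_forest_scores(features):
    """Anomaly scores from an isolation forest, min-max normalized to [0, 1].

    features: array of shape (n_points, n_features), one row per pixel
              neighborhood or grid element of a layer subarea.
    """
    model = IForest(random_state=0)   # hyperparameters: illustrative defaults
    model.fit(features)               # unsupervised training on the production data itself
    scores = model.decision_scores_   # raw outlier scores of the fitted samples
    return (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)


def median_deviation_scores(feature):
    """Single-feature variant for grid infill border areas: deviation from the median."""
    deviation = np.abs(np.asarray(feature, dtype=float) - np.median(feature))
    return deviation / (deviation.max() + 1e-12)
```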

Figure 6 shows the determined anomaly scores for two example images of layer surfaces with manually inserted imperfections (manipulated G-code). To make the visualization uniform, the anomaly scores are normalized to the value range from zero to one.

Fig. 6 Layer images with imperfections (left), anomaly scores (middle) and detected imperfections after binarization with a threshold (right)

2.5 Quality indicators measurement

The final step of image processing involves determining quality indicators. This requires binarization of the anomaly scores with an individual threshold for each layer subarea (Fig. 6 right). Binarization consists of deciding whether a data point is normal or anomalous. To identify imperfections consisting of multiple image points, neighboring anomalous data points are interpreted as connected image regions.

As already suggested in [8], the characterization of imperfections of the layer surface is based on ISO 8785:1999–10. In accordance with the aforementioned standard, a measurement of the following quality indicators is implemented in the image processing system for each layer subarea [60]:

  • percentage of imperfect area \(IA\)

  • number of imperfections per area \(IN\)
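
A minimal sketch of how these two indicators can be computed from a binarized anomaly map with scikit-image and NumPy is given below. The subarea mask, the operating-point threshold and the pixel area used to convert counts into imperfections per area are assumed inputs; the unit conversion is illustrative and not the exact implementation of the system.

```python
import numpy as np
from skimage.measure import label


def quality_indicators(anomaly_scores, subarea_mask, threshold, pixel_area_mm2=0.001):
    """Percentage of imperfect area IA and number of imperfections per area IN.

    anomaly_scores: normalized anomaly scores of one layer image (2D array)
    subarea_mask:   boolean mask of the inspected layer subarea
    threshold:      subarea-specific operating point for binarization
    pixel_area_mm2: area represented by one pixel (illustrative value)
    """
    anomalous = (anomaly_scores >= threshold) & subarea_mask      # binarization
    imperfections = label(anomalous, connectivity=2)              # neighboring anomalous pixels -> regions
    subarea_px = np.count_nonzero(subarea_mask)
    IA = 100.0 * np.count_nonzero(anomalous) / subarea_px         # percentage of imperfect area
    IN = imperfections.max() / (subarea_px * pixel_area_mm2)      # imperfections per mm² of subarea
    return IA, IN
```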

3 Capability of the image processing system

The image processing system is used to empirically derive quality classes for MEX. Therefore, the capabilities of the image processing system must be investigated with respect to the functions for segmentation and for the measurement of quality indicators. The data set used for this purpose is presented in Sect. 3.1. Subsequently, the results of the analysis are described in Sect. 3.2 and Sect. 3.3.

3.1 Data set

The material ABS Fusion+ (color: gray) from the manufacturer BASF SE was used to produce the parts. The layer height was 0.2 mm and the bead width was 0.44 mm. The acquired layer images were evaluated both by the image processing system and manually, so that the analysis results of the image processing system can be compared with a manual reference.

Since imperfections occur rarely in some cases, a specimen (Fig. 7 top left) was manufactured several times with different synthetic imperfections by modifying the G-code. The imperfections introduced are voids, underfills, overfills, stringing, impurities and scratches. In addition, the data set contains images of layers that have not been actively modified. With respect to the specimen, these are ten layers covering characteristic geometric constellations. Furthermore, three layers were randomly selected from each of the other parts shown in Fig. 7. The three components of the glasses were produced in a single manufacturing process. In total, the data set contains images of 46 layers.

Fig. 7 Data set consisting of test specimen and realistic parts

3.2 Segmentation

The average symmetric surface distance (see [61]) between the edge points of a segmentation produced by the image processing system and the associated manual reference segmentation is used as the first evaluation criterion. The F-measure (see [62]) is used as the second criterion and is determined by identifying the true-positive, false-positive and false-negative pixels of the image processing results.
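
A minimal sketch of both measures, assuming boolean segmentation masks and edge maps as inputs, is shown below; the pixel size used to convert the surface distance into micrometres is a placeholder, not the calibration of the actual system.

```python
import numpy as np
from scipy import ndimage as ndi


def f_measure(seg_mask, ref_mask):
    """F-measure from true-positive, false-positive and false-negative pixels."""
    tp = np.count_nonzero(seg_mask & ref_mask)
    fp = np.count_nonzero(seg_mask & ~ref_mask)
    fn = np.count_nonzero(~seg_mask & ref_mask)
    return 2 * tp / (2 * tp + fp + fn)


def average_symmetric_surface_distance(seg_edges, ref_edges, pixel_size_um=50.0):
    """Mean nearest-neighbor distance between both edge point sets (in µm)."""
    dist_to_ref = ndi.distance_transform_edt(~ref_edges)[seg_edges]  # seg edge -> nearest ref edge
    dist_to_seg = ndi.distance_transform_edt(~seg_edges)[ref_edges]  # ref edge -> nearest seg edge
    return pixel_size_um * np.concatenate([dist_to_ref, dist_to_seg]).mean()
```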

In Fig. 8, the qualitative results of two segmentations are presented as examples. For the outer layer edges, an average symmetric surface distance to the manual reference segmentations of 56 µm is obtained as the mean over the 46 considered layer images. Grid infill edges exhibit an average symmetric surface distance of 18 µm. For the overall segmentation, an average symmetric surface distance of 43 µm and an F-measure of 0.981 are determined. In the field of MEX research, no results exist for a direct comparative assessment. However, evaluations of similarly complex segmentation tasks in the fields of medical (e.g., [63]) and biomedical (e.g., [64]) image processing show that the segmentation quality achieved here is very high.

Fig. 8 Qualitative results for the segmentation of the layer images of a gear (left) and a motor mount (right)

3.3 Quality indicators measurement

In order to be able to detect anomalies, the machine learning methods are trained based on the entire data set and the measured values of quality indicators are then determined for each layer separately. The scatter of measured values serves as a target value for the analysis and is an indicator for the uncertainty of the measurement results. In the considered data set, the characteristics of imperfections to be measured vary. Therefore, the relative error \({D}_{rel}\) between a manually determined reference measurement value \({x}_{ref}\) and the measurement value given by the image processing system \({x}_{img}\) is calculated:

$${D}_{rel}=\begin{cases}\dfrac{{x}_{img}-{x}_{ref}}{{x}_{ref}}\cdot 100\,\%, & {x}_{ref}\ne 0\\[6pt] 100\,\%, & {x}_{img}\ne 0 \wedge {x}_{ref}=0\end{cases}$$
(1)

Negative values of \({D}_{rel}\) describe measurement values that are too small and positive values represent results that are too large. If the image processing system measures imperfections despite manually determined freedom from defects, the result is \({D}_{rel}\) = 100%. The standard deviations of \({D}_{rel}\) over all 46 layers serve as the criterion for evaluating the capability of the image processing system. The thresholds of the anomaly scores that lead to minimum standard deviations specify the operating points of the image processing system.
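
A minimal sketch of this evaluation for one layer subarea, using illustrative (not measured) value pairs, is given below. The case in which both values are zero is not covered by Eq. (1) and is treated here as zero deviation, which is an assumption.

```python
import numpy as np


def relative_error(x_img, x_ref):
    """Relative error D_rel according to Eq. (1), in percent."""
    if x_ref != 0:
        return 100.0 * (x_img - x_ref) / x_ref
    return 100.0 if x_img != 0 else 0.0  # both zero: assumed as 0 % (not covered by Eq. 1)


# Illustrative value pairs for one layer subarea (image processing vs. manual reference)
x_img = [1.2, 0.0, 3.4, 0.8]
x_ref = [1.0, 0.0, 3.0, 1.1]

errors = [relative_error(i, r) for i, r in zip(x_img, x_ref)]
print(f"standard deviation of D_rel: {np.std(errors, ddof=1):.1f} %")
```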

Qualitative results are visualized in Fig. 9 as an example. The results shown in the bottom line demonstrate that true-positive regions (green) occur in every case and that the imperfections are thus correctly indicated by the image processing system. However, the false-negative (blue) and false-positive (red) zones illustrate that the determined areas of the imperfections deviate from the real geometries.

Fig. 9 Qualitative results when detecting imperfections

Table 1 lists the standard deviations of \({D}_{rel}\). Large standard deviations of \({D}_{rel}\) occur especially in the border areas of solid infill and grid infill. The uncertainties are smaller for the inner area of grid infill. Nevertheless, it can be stated that there are always geometrical errors in the measurement of imperfections, which cannot be compensated systematically. The reasons for this are the shape misinterpretations and the subdivision of single imperfections into several detected defects shown in Fig. 9.

Table 1 Standard deviations of relative errors \({D}_{rel}\) in the different layer subareas

As an example, a standard deviation of around 39% can be determined for perimeters in Table 1 when measuring the percentage of imperfect area \(IA\). If the measured value of \(IA\) is 1%, the conventional true value lies in the interval from 0.61 to 1.39% with a probability of 68.27%. For a measured \(IA\) of 10%, on the other hand, the conventional true value is in the interval from 6.1 to 13.9% with a probability of 68.27%. This example illustrates that the determined uncertainties have to be interpreted in the context of the magnitude of the quality indicators, as they are relative values. Large measured values lead to large absolute scatter. Small quality indicator values are associated with only small absolute scatter.

4 Derivation of quality classes

To derive quality classes, the quality indicators are measured in different MEX manufacturing processes. The procedure for generating a realistic data set is presented in Sect. 4.1. Subsequently, in Sect. 4.2, a proposal for the definition of quality classes is deduced based on a process scattering analysis.

4.1 Data set

The determination of quality indicators in accordance with ISO 8785:1999–10 (see [60]) is based on the assumption that the quality classes are valid independently of the part and layer geometry. To take this into account, a data set containing varying layer constellations is required. Therefore, the five parts shown in Fig. 7 as well as a single component of a 6-axis robot and the gear wheel of an iris diaphragm were manufactured. From each part, relevant layers are randomly selected, resulting in a total of 89 layer constellations. To consider the scatter of the manufacturing process and of the measurement results, the parts are produced five times each, leading to a data set consisting of data from 35 parts and 445 layers.

4.2 Quality classes

Figure 10 shows the acquired layer data. The arithmetic mean of all layers describes an averaging over the 89 layers for each of the five production runs; consequently, the blue box plots are derived from five values each. In contrast, the arithmetic mean of the individual layers corresponds to an averaging over the five identical layer constellations; in this case, the corresponding orange box plots are each based on 89 values.
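
The two averaging schemes can be sketched with pandas as follows; the column names and the tidy table layout (one row per layer constellation and production run) are assumptions for illustration.

```python
import pandas as pd

# Assumed tidy layout: one row per production run and layer constellation,
# e.g. columns 'run' (1..5), 'layer' (1..89) and the measured indicator 'IA'.


def boxplot_values(df, indicator="IA"):
    """Values behind the two box plot groups of Fig. 10 for one indicator."""
    mean_of_all_layers = df.groupby("run")[indicator].mean()           # 5 values (blue box plot)
    mean_of_individual_layers = df.groupby("layer")[indicator].mean()  # 89 values (orange box plot)
    return mean_of_all_layers, mean_of_individual_layers


# Usage example (file name is a placeholder):
# blue, orange = boxplot_values(pd.read_csv("quality_indicators.csv"))
```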

Fig. 10 Box plots used to derive quality classes. The length of the whiskers is limited to 1.5 times the interquartile range

The scatter of the measured values for the arithmetic mean of all layers is reduced compared to the analysis of the individual layers. This indicates that the individual layer constellations have an influence on the occurrence of imperfections and consequently on the magnitude of the quality indicators. A definition of quality classes that is valid independently of the layer constellation must therefore allow for wide tolerance ranges.

Furthermore, the scatter of \(IA\) is increased for borders of solid and grid infills compared to the respective inner areas (Fig. 10 left). The reason for this is presumably that the layer geometry has a direct influence on the properties of the layer surface in border regions. In the inner regions, on the other hand, the influence of the layer constellation is much less important.

Furthermore, the high medians of the quality indicators for solid infill border areas compared to the inner regions are remarkable. This is due to imperfections caused by directional changes and accelerations of the extrusion head. In addition, perimeters show relatively high medians of the quality indicators. This can partly be explained by dwell times of the extrusion head movement (e.g., during layer changes), which lead to local material accumulations.

Since the objective is to determine quality classes that are valid for each single layer, the arithmetic means of the individual layers must be analyzed (orange box plots in Fig. 10). The resulting proposals for the limits of the quality classes are shown in Table 2. Three quality classes (A, B and C) are listed for each layer subarea.

Table 2 Proposal for limits of quality classes

The MEX machine used, in combination with the selected process parameters, represents a medium quality level compared to other MEX processes. It can therefore be assumed that most of the measured values can be assigned to quality class B and that only outliers belong to quality classes A and C. Accordingly, the limits of quality class B correspond to the whisker extents of the box plots in Fig. 10. Quality class A includes all values that are smaller than the lower whisker extent. The upper limits of quality class C are estimated by doubling the maximum values of quality class B. For solid infill border areas, the results for \(IA\) lead to a quality class C in which imperfections are always permissible. For reasons of consistency, this definition is also applied to the quality indicator \(IN\).
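
Under the assumptions just described, the class limits for one layer subarea and indicator could be derived from the 89 per-layer means roughly as sketched below; the whisker definition follows the 1.5-fold interquartile range stated in Fig. 10, while the function and variable names are illustrative.

```python
import numpy as np


def quality_class_limits(per_layer_means, whis=1.5):
    """Derive limits for quality classes A, B and C from per-layer mean values."""
    values = np.asarray(per_layer_means, dtype=float)
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lower_whisker = values[values >= q1 - whis * iqr].min()  # lower whisker extent
    upper_whisker = values[values <= q3 + whis * iqr].max()  # upper whisker extent
    return {
        "A": (0.0, lower_whisker),                # below the lower whisker
        "B": (lower_whisker, upper_whisker),      # within the whisker extents
        "C": (upper_whisker, 2 * upper_whisker),  # up to twice the class B maximum
    }
```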

Table 2 shows that the sizes of the tolerance ranges vary depending on the layer subarea. For example, the permissible values of \(IA\) for perimeters are two times higher than those for solid infill inner areas. Furthermore, the permissible magnitudes of the quality indicators in border subareas are generally larger than in inner areas. It is noticeable that in quality class A, no imperfections are permitted for the inner subareas of solid infill and for grid infills in general. In contrast, in quality class C, imperfections are always permissible in the border areas of solid infills. This wide range of permissible quality indicator values depending on the layer subarea and the quality class reflects the highly variable properties of the MEX process.

The measurement uncertainties listed in Table 1 (standard deviations of \({D}_{rel}\)) are reduced by the repetitive production of the layer constellations in combination with the described averaging. The scatter of a mean value is reduced by the square root of the number of individual values compared to a single measurement [65]. Interpreting the measurements on the five similar layer constellations as measurement repetitions leads to the rough assumption that the random scatter of measurement results is cut in half.
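
Expressed as a worked relation, with the five production runs interpreted as measurement repetitions:

$$\sigma_{\bar{x}}=\frac{\sigma}{\sqrt{n}}, \qquad n=5:\quad \sigma_{\bar{x}}=\frac{\sigma}{\sqrt{5}}\approx 0.45\,\sigma$$

The random scatter of the mean is thus reduced by a factor of about 2.2, which supports the rough statement that it is approximately halved.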

5 Significance of the research results

The research results presented in the previous sections are summarized in Table 3. They are associated with different advantages and limitations. These are described below to put the significance of the findings into context.

Table 3 Summary of research objectives and key results

5.1 Advantages

It was possible for the first time to measure MEX quality indicators as condensed metrics for production quality separately for the different layer subareas. Quality classes determined with the image processing system are therefore an important contribution to creating an understanding of MEX quality and can be a basis for standardizing quality requirements.

Furthermore, the system enables the monitoring of MEX processes for complex parts and their termination if the quality does not meet the requirements. New, safety-critical fields of application, such as medical technology or the aerospace industry, can be opened up for MEX, as trust in the manufacturing process and the products is established and product characteristics can be documented.

If safety-critical parts are to be manufactured, one possible method is to manually evaluate part regions displayed by the image processing system using the captured layer images. Critical imperfections can be identified with a high degree of probability in this way.

The machine learning algorithms work unsupervised and learn normal layer characteristics autonomously. This makes it unnecessary to parameterize the image processing system manually when MEX process parameters are changed (e.g. feedstock materials). This enables low-cost operation. A further advantage is that no parts need to be manufactured solely for training the system. Manual classification of reference images is also not necessary.

The hardware used consists of inexpensive industrial standard components and is therefore well suited for practical use. With minor adjustments to the mechanical structure, similar concepts for the integration of camera and darkfield illumination can therefore be implemented in many MEX machines.

The high capability of the functions for segmenting the layer images should also be emphasized at this point. With an F-measure of 0.981 and average symmetric surface distances to manual reference segmentations of 43 µm, layer edges can be detected highly accurately despite low contrasts and challenging image scenes. These results go beyond the current state of the art and are the basis for the effective detection of imperfections. At the same time, this results in potential applications for the automated evaluation of part and layer geometries during the MEX process.

5.2 Limitations

It was possible to demonstrate the functionality of an image processing system that evaluates MEX layers based on self-learning anomaly detection algorithms and an analysis of the digital part information. However, additional developments and further investigations are required in order to further increase the maturity of the system with regard to the uncertainties in the measurement of quality indicators. This concerns systematic analyses of the optimal design of the methods for feature extraction and classification. For example, Nguyen et al. explain that a variety of artificial intelligence methods are used in the field of additive manufacturing and that neural networks are particularly promising for predicting part and production process characteristics [66].

Valid quality classes are normally based on many years of research into a manufacturing process and a wide range of experience from industrial applications [67]. Furthermore, the determined tolerance limits depend on many influencing variables (e.g. feedstock material). Despite the extensive investigations, the determined quality classes are therefore to be understood as an initial proposal and do not replace further analyses and specifications for the specific application.

It should also be noted that measured quality indicators and derived quality classes depend on the measurement process itself and are therefore only suitable to a limited extent for statements regarding the true properties of layer imperfections. This is particularly relevant for unsupervised machine learning methods, as the characteristics of normal layer surfaces are learned autonomously.

The data set used is practice-oriented and covers a large number of varying part configurations. However, the data set is limited in quantity, as the manual assessment of a single layer image takes several hours. In addition, the evaluation of layer images by a human is not error-free and is not completely reproducible [61, 68]. The determined capabilities of the image processing system are therefore subject to uncertainties regarding the general validity and reproducibility.

The results show that the image processing system can be used for manufacturing processes of complex part geometries. It is therefore possible to use the system in the manufacturing of bionic structures, as described for example by Nguyen-Van et al. [69] and Peng et al. [70]. However, mechanical properties are particularly relevant for the industrialization of these parts and the image processing system cannot be used to derive them directly.

6 Conclusion

Quality classes for quality indicators and monitoring technologies for the acquisition of MEX process data are essential for the implementation of systematic quality management. Experiments with a variant-rich data set show that the developed image processing system achieves an F-measure of 0.981 in the segmentation step. Furthermore, relative measurement errors with standard deviations of 25 to 76.1% are found for the layer-wise measurement of quality indicators.

Empirical investigations of real manufacturing processes have made it possible to derive quality classes for MEX experimentally for the first time. This involves acquiring MEX quality indicators as condensed features of the production quality separately for five different subareas of a layer. The quality classes and the image processing system are an important contribution to a deeper understanding of the MEX process and can be a basis for the standardization of quality requirements as well as the further industrialization of MEX in safety-critical areas. However, the determined quality classes are subject to a relatively high uncertainty due to the measuring principle and should therefore be understood as a proposal.

Despite the knowledge gained, there is still potential for research and development. The review and further development of the determined quality classes within a framework of industrial benchmark studies is the next logical step. It is also feasible to extend the existing investigation to other feedstock materials. In addition, the functions of the image processing system for measuring quality indicators can be optimized in order to minimize measurement uncertainties. One possibility is to classify types of imperfections. In this way, separate measurement processes can be designed and parameterized for each type of imperfection.