Recursive quality optimization of a smart forming tool using perception-based hybrid datasets for training a Deep Neural Network

In industrial metal forming processes, generating datasets for inline, optical quality assessment is expensive and time-consuming. Within the research project SimKI, conventional metal forming plants were digitalized using perception-based 3D sensors in combination with a completely redesigned forming tool. The integration of optical quality observation methods, connected with a retrofitting approach for the press tool, provides the opportunity to create an information-feedback loop that predicts part defects before they occur. Additionally, the SimKI method combines conventional statistical measurement methods with AI-based defect detection algorithms that are trained on generic datasets from a finite-element simulation, real component images from a 3D imaging device, and a combination of both. The generated datasets are used to accelerate the training of a DNN-based algorithm that identifies the position and extent of deviations from the agreed quality. The high degree of innovation lies in obtaining real-time component quality information through AI-based optical quality assessment, which in turn feeds the control algorithm of the smart forming tool.


Introduction
The manufacturing sector plays a decisive role in the economies of many industrial countries. Considering Europe's non-financial business economy in 2017, approximately 9% of all enterprises are classified as manufacturing industry, and manufacturing-related activities account for around 30% of the EU gross value added [1]. Growing competition from Asia poses new challenges for the manufacturing industry, as the associated price pressure makes continuous process improvements necessary. One approach to overcoming these challenges and becoming more efficient is digitalization and the implementation of Industry 4.0 objectives [2,3], which requires a widely connected production environment and the integration of real-time data analytics to gain benefits such as performance improvement or waste reduction [4]. Typically, a significant proportion of the machinery in manufacturing companies is older equipment that often cannot be fully replaced due to high investment costs. Digitalization is introduced by retrofitting [5], i.e. attaching sensors and data processing technology to acquire important process data such as information on machine usage and condition. For small and medium-sized companies, the absence of process data and the complexity of forming processes lead to structural disadvantages compared with bigger competitors [6]. To counteract this, relying on experienced and highly skilled workers to control the forming process is widespread and often considered the best solution [7]. Therefore, the use of Artificial Intelligence (AI) to develop process control systems that preserve the knowledge and experience of these employees has become a feasible approach. A number of contemporary studies focus on perception-based defect detection methods for metal parts using Convolutional Neural Networks (CNN).
To obtain more samples to support the CNN training process, generic image modification is a common method; e.g., Gaussian noise can be randomly applied to the sample images [8]. Other methods merge relevant image parts, such as surface cracks captured under different lighting conditions, into a fused training image dataset [9]. The construction of modern and complex forming tools demands a deep understanding of the tool influence in combination with the impact of heat zones on the achieved part quality. Aalen University's SimKI research activities target optical DNN-based quality assessment and the abstraction of assessment results to enhance the forming simulation process. The procedure is demonstrated by the example of deep-drawn parts using a retrofitted hot forming tool. In particular, generating knowledge prior to the production process and identifying early the tool areas that lead to defective or disordered production results can save post-processing costs. Thus, the objective is to demonstrate the applicability of the hybrid Deep-Neural-Network (DNN) based training algorithm that accelerates the AI learning process using generic training datasets. The main goal of the SimKI method is, on the one hand, to overcome the need to produce defective parts and thus shorten the DNN training process for optical quality inspection, and on the other hand to use the information obtained to improve the design process of forming tools, compare Fig. 1. The method presented here targets the initial data collection and assessment of the SimKI procedure. The objective is to verify a fast and sufficient method for obtaining appropriate quality information on the produced parts. For this purpose, perception-based acquisition of quality parameters captured by a 3D laser imaging device was used to acquire the geometry data of the formed parts.
The influence of forming parameter combinations is determined by a generic simulation process without the need to produce physical parts labelled as "good" or "defective". Thus, a digital twin of the forming tool is used in combination with a finite element method (FEM) forming simulation, which is capable of generating a wide range of generic, image-based DNN training datasets and of predicting defects as a function of the parameter settings used. Figure 1 illustrates the initial structure of the SimKI process. The SimKI approach consists of various complex system components that result in a fully digitised and validated digital twin of a retrofitted hot forming press tool. The forming simulation, Fig. 1a, is used to design the hot forming tool according to the component requirements and to generate generic images of the formed component. The forming simulation also contains a digital twin of the forming press. Data from the forming tool and the forming simulation are transferred to an IoT system for the subsequent process data prediction and analytics procedures, compare Fig. 1b. Further, Fig. 1c illustrates the data acquisition process of the 3D sensor device. The acquisition results are used for training the DNN as well as for automatic labelling of the hot-formed parts. Figure 1e displays a sample of the trained neural network used to assess the captured image datasets. The hybrid dataset used for training the DNN-Training Tool (DNNTRAIN) algorithm consists of generic FEM simulation images (Fig. 1a) and point-cloud images acquired from the 3D laser imaging device (Fig. 1c); the latter being attached to a collaborative robot to achieve high repeat accuracy of the component scans. In accordance with the quality demands and standards of the automotive industry, the structural shape of the sample parts was chosen to demonstrate the relevant hot forming techniques.
The parts were attached to a fixation unit to guarantee position congruence during the 3D image acquisition process.
In addition, the imaging device contains a web-based HMI interface, an embedded computer system for on-device measurement tasks and a TCP/IP interface to transfer point cloud data to the SimKI DNN training algorithm. The position and error level of the defects identified by the trained DNN algorithm are visualised by a heat map on the 3D scan image, compare Fig. 1d. To enable a user-friendly quality assessment of the scan data, the DNN algorithm is fully implemented in the AIBOT software tool. The tool was developed within the SimKI research project and provides a user interface that allows the user to automatically train the DNN with hybrid training data. A recursive data feedback system based on a real-time target machine (RTM) captures important process parameters that are necessary to influence the digital twin and the forming simulation, compare Fig. 1f. The captured quality information is used to change the simulation parameter settings in order to produce defect-free parts. Within the forming process, the SimKI system can be used to identify process parameters that influence the occurrence of cracks and shape deviations without producing test parts. Thus, time is saved and material waste is reduced throughout the overall process (Fig. 1: simulation-based recursive generation of DNN training data for the forming process). In addition, the data from the DNN evaluation method can be used to predict and correct wear-related quality problems in the manufacturing process.
Beyond this introduction, the paper is structured into three main parts. The first part gives an overview of the smart forming tool and discusses the sensors used with emphasis on the influencing factors along the manufacturing process. The second part provides information about the virtual forming tests based on the simulation model and the parameterized design of experiments (DoE) study. The third part presents experimental forming tests, the implementation of the DNN for quality assessment and the comparison of training strategies with a special focus on the different datasets.

Description of the smart forming tool
A smart forming tool is developed to capture detailed process data and to perform the demonstration manufacturing process using an AI-based parameter optimization of the RTM control algorithm. This includes a multitude of sensors that collect data sets using the RTM system. Furthermore, the RTM system provides the captured data sets to an Edge Microserver, which stores the process data in a database of the IoT platform PTC-ThingWorx for further process analysis. In order to create thermal distortion and deliberately produce reject parts, the forming tool is made of a four-zone punch, a four-zone die, and a one-zone blank holder, which is illustrated in Fig. 2. Those zones can be heated and controlled separately by the RTM.
The heated areas are thermally separated from the press and the standard parts by insulation plates to guarantee their long-term functionality during heat-assisted forming processes. Furthermore, the temperature of each tool zone can be monitored as a function of the processing time and displayed on the dashboard. Different temperatures can be applied in the forming tool zones to generate artificial distortion and therefore obtain both "good" and "defective" components, which is crucial for the evaluation by AI in the demonstration process. In addition, it is possible to demonstrate forming at elevated temperatures in order to form components made of high-strength aluminum alloys [10][11][12]. Especially the 7000 aluminium group can be shaped well using heat-assisted forming processes [13]. The forming tool is equipped with an integrated pneumatic short-stroke cylinder for hardness measurements on the formed component; these measurements can be converted into tensile strength values using a material-specific factor and thus provide detailed information on the component quality. A thermocouple monitors the blank temperature during forming at elevated temperatures, as the formability of high-strength aluminium alloys is highly dependent on the temperature profile used. A distance sensor measures the acceleration and speed profiles, which are essential for the formability of the component and for the load collective in the forming simulation. This sensor is also used to calculate the number of strokes and the cycle time of the process, which are displayed on the dashboard and stored for later productivity examination. For process monitoring, a camera is installed in the forming tool, which on the one hand captures conspicuous details and on the other hand verifies the correct position of the blank.
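The hardness-to-strength conversion mentioned above amounts to a single multiplication. A minimal sketch, assuming an illustrative conversion factor k (the project's material-specific value is not given in the text):

```python
# Hedged sketch of the hardness-to-tensile-strength conversion mentioned
# in the text. The factor k is illustrative only; a real conversion uses
# a material-specific value.
def tensile_strength_from_hardness(hv: float, k: float = 3.2) -> float:
    """Approximate tensile strength in MPa from a Vickers hardness value."""
    return k * hv

print(round(tensile_strength_from_hardness(100.0), 1))  # 320.0 MPa for k = 3.2
```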

Simulation and data generation
Virtual forming tests are created to teach the AI system using a large quantity of generic FEM datasets in combination with a few images acquired from the 3D imaging device. The finite-element method is used to simulate the real forming process including generic part defects. The simulation model contains both the process parameters of the real forming process (e.g. forming speed or pressure) and a thermal material model of the blank to be formed. For this purpose, temperature-dependent material parameters such as flow curves and anisotropy parameters were determined in separate test series. These experimental data are used to create a material model in LS-DYNA® (MAT36). As a result, the simulation provides a rendered image of the formed component in which characteristic values such as distortion, formation of wrinkles, thinning, or cracks are shown. These images are used for training the AI in addition to the real images of the formed component.
To generate as many virtual images as possible, a DoE study is created with LS-OPT® (see Fig. 3). For this purpose, the forming simulation is parameterized. The process parameters are varied within the limits that can occur in the forming process. For each simulation, the process parameters, the resulting image, and the evaluation of the formed part are stored as a data set. It is important to ensure that the point of view from which the simulation images are created corresponds exactly to that of reality. The results of the DoE study form the basis for training the AI. Teaching the AI with simulation data has the advantage that a large number of simulation parameters can be changed quickly and interactions can be detected using DoE. In addition, only a few physical trials are then sufficient to train the AI. For this purpose, different teaching strategies are investigated, namely AI training with (a) trial images only, (b) simulation data only, or (c) a hybrid variant consisting of 80% simulation images and 20% trial images.
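The DoE idea described above — varying the process parameters within their limits and storing each run as a data set — can be sketched as follows. The parameter names, ranges, and the defect rule are hypothetical placeholders, not the actual LS-OPT® setup:

```python
# Illustrative sketch of a full-factorial DoE sweep over forming parameters.
# All names, levels, and the defect rule are assumptions for demonstration.
from itertools import product

# Hypothetical process-parameter levels (mm/s, kN, deg C)
forming_speed = [5.0, 10.0, 15.0]
blank_holder_force = [100.0, 200.0, 300.0]
zone_temperature = [20.0, 150.0, 250.0]

def run_forming_simulation(speed, force, temp):
    """Placeholder for an FEM solver run; returns (image_name, label).

    A real setup would launch the parameterized LS-DYNA simulation and
    render an image of the formed part from the fixed camera viewpoint.
    """
    label = "defective" if temp > 200.0 and force < 150.0 else "good"
    return f"sim_{speed}_{force}_{temp}.png", label

dataset = []
for speed, force, temp in product(forming_speed, blank_holder_force, zone_temperature):
    image, label = run_forming_simulation(speed, force, temp)
    dataset.append({"speed": speed, "force": force, "temp": temp,
                    "image": image, "label": label})

print(len(dataset))  # 3 * 3 * 3 = 27 parameter combinations
```

Each stored record pairs the parameter combination with the rendered image and its evaluation, mirroring the data sets the DoE study produces.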

Laboratory
For the experimental forming test of the demonstration component, the designed smart forming tool and a hydraulic press (Rapp and Seidt) are used. The press provides a maximum ram force of 1200 kN. Further, a corresponding control system is provided to adjust the temperature of the zones in order to examine the hot forming process or other heat-assisted forming process routes. For the forming tests, the anti-friction agent Omega 35 is used due to its temperature resistance and good friction properties. Further, the sheet metal forming is carried out as a single-stage process with a forming speed of 10 mm/s. In addition, a SCARA robot and a two-part conveyor belt are used for partial automation (see Fig. 4, pos. 4, 5). The optical measuring system (GOM ARAMIS®) and the 3D laser imaging device (Gocator®) are used for inline data evaluation. The generated data is analysed in real time and uploaded to ThingWorx (see Fig. 4, pos. 6, 7). For the demonstration process, it is crucial for the evaluation by AI that both good and reject parts are produced. On the one hand, the insertion position of the sheet was changed to obtain slanted parts or incompletely formed parts with wrinkles. On the other hand, the temperature of the forming tool was changed to produce greater distortion or even component cracks.

Implementation of the DNN for quality assessment
The DNN for quality assessment is implemented using the Matlab® API. Here, predefined modules are available in the Deep Learning Toolbox with which the network architecture of a convolutional neural network is implemented. When implementing a convolutional neural network, the first step is to create a labelled image dataset. For this purpose, the images are sorted into "defective" and "good" folders according to their quality information. The training dataset is generated as described in the "Simulation and data generation" and "Experimental and comparison study of training strategies" sections. The initial 3D scan image dataset has a size of 267 training images, the simulation dataset 256 images and the hybrid dataset 322 images. To implement the coded architecture for neural network training, the generated image sets were uploaded algorithmically to an image datastore. These images are split into 90% training data and 10% validation data. The training dataset is augmented using random rotation and random translation. The initial images of the forming simulation were provided as a dataset with a native resolution of 2.07 megapixels; the 3D image sensor device used provides snapshots with a native resolution of 2.0 megapixels. Finally, the dataset is resized to a resolution of 227 × 227 × 3 according to the input layer of the DNN. The architecture of a CNN consists of several layers, with the convolution layers being the most important component. These convolutional layers consist of filters whose parameters include trained weights. The size and number of the filters in the network architecture are determined via hyperparameters. The filters are slid across the input matrices at intervals determined by the stride hyperparameter; at each position the input values are multiplied by the filter weights, summed, and added to the bias. The resulting output matrices are passed into an activation function.
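The dataset preparation described above (label folders, 90/10 split) can be sketched as follows. The paper's implementation uses the Matlab® Deep Learning Toolbox; this is an illustrative Python equivalent with an assumed folder layout:

```python
# Hedged sketch of the labelled-dataset preparation: images sorted into
# "good"/"defective" folders are collected and split 90/10 into training
# and validation sets. The folder layout and seed are assumptions.
import os
import random

def build_split(image_dir: str, val_fraction: float = 0.1, seed: int = 0):
    """Collect (path, label) pairs from label folders and split them."""
    samples = []
    for label in ("good", "defective"):
        folder = os.path.join(image_dir, label)
        for name in sorted(os.listdir(folder)):
            samples.append((os.path.join(folder, name), label))
    rng = random.Random(seed)          # fixed seed for a reproducible split
    rng.shuffle(samples)
    n_val = int(len(samples) * val_fraction)
    return samples[n_val:], samples[:n_val]  # (training, validation)
```

In the actual pipeline, the training portion would additionally be augmented with random rotations and translations and resized to 227 × 227 × 3 before being fed to the network.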
Like most modern networks, Squeezenet uses the ReLU activation function (compare Fig. 5). The ReLU function covers the range [0, ∞). In contrast, the sigmoid function covers the range (0, 1) and can therefore only be used to model probabilities, whereas all positive real numbers can be modelled using ReLU. When training CNNs, the main advantages of the ReLU function are that there are no vanishing gradients and that the training efficiency is higher [14].
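A minimal numeric illustration of the two activation functions discussed above:

```python
# The two activation functions compared in the text.
import math

def relu(x: float) -> float:
    return max(0.0, x)                   # range [0, inf): passes positives through

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))    # range (0, 1): usable as a probability

print(relu(-2.0), relu(3.5))   # 0.0 3.5
print(round(sigmoid(0.0), 3))  # 0.5
```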
Another important component of CNNs are pooling layers, as they summarize the outputs of neighbouring groups of neurons in the same kernel map. The convolution layers usually produce more output than input parameters; pooling reduces the resolution and thus the number of subsequent parameters and increases the robustness to noise and distortions [15]. In general, the groups of neurons clustered by adjacent pooling units do not overlap. More precisely, a pooling layer can be thought of as a grid of pooling units spaced s (stride) pixels apart, each summarising a cluster of size z × z centred on the position of the pooling unit. Setting s = z results in traditional local pooling as commonly used in CNNs. With max-pooling, the kernel map is summarised as the maximum unit value of the kernel map; with average-pooling, it is summarised as the average value of all units of the kernel map [16]. To reduce overfitting, dropout layers are used. Overfitting manifests itself in the DNN achieving better results on the training dataset but worse results on the validation and test datasets. The main idea of a dropout layer is to randomly remove units (along with their connections) from the neural network during training to prevent units from co-adapting. During training, samples are drawn from an exponential number of different "thinned" networks. At test time, the effect of averaging the predictions of all these networks can be approximated by simply using a single unthinned network with smaller weights. This significantly reduces overfitting and yields significant improvements over other regularization methods [17]. When applying neural networks for classification, the softmax activation function is used for the output layer to interpret the output values as probabilities (compare Fig. 6).
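The non-overlapping pooling described above (stride s equal to window size z) can be sketched as:

```python
# Non-overlapping pooling with stride s = z, as described in the text,
# shown on a small 4x4 feature map (plain lists, no framework needed).
def pool(feature_map, z, mode="max"):
    """Pool a 2D list with a z x z window and stride s = z (no overlap)."""
    h, w = len(feature_map), len(feature_map[0])
    out = []
    for i in range(0, h - z + 1, z):
        row = []
        for j in range(0, w - z + 1, z):
            window = [feature_map[i + a][j + b] for a in range(z) for b in range(z)]
            row.append(max(window) if mode == "max" else sum(window) / len(window))
        out.append(row)
    return out

fmap = [[1, 2, 5, 6],
        [3, 4, 7, 8],
        [9, 10, 13, 14],
        [11, 12, 15, 16]]
print(pool(fmap, 2, "max"))      # [[4, 8], [12, 16]]
print(pool(fmap, 2, "average"))  # [[2.5, 6.5], [10.5, 14.5]]
```

Each 2 × 2 window collapses to a single value, halving the resolution in both directions and reducing the parameter count of the following layers.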
The softmax activation function is a non-linear activation function that normalises the outputs of the network to a total sum of one and thus provides a condition for interpreting the outputs as class probabilities [18]. It can be written as softmax(z_i) = e^(z_i) / Σ_j e^(z_j) (compare Fig. 6). The denominator works as a normalisation to obtain values in the range [0, 1]. The Squeezenet architecture consists of 69 layers, whereby eight so-called "fire modules" (compare Fig. 7) are a special feature. One fire module consists of a 1 × 1 convolutional layer, which is used as a squeeze convolution layer and feeds, after a ReLU function, into an expand layer that has a mix of 1 × 1 and 3 × 3 convolution filters. If the number of filters in the squeeze layer is set smaller than the sum of filters in the expand layer, the number of input channels of the 3 × 3 convolutional filters decreases, and thus the number of parameters decreases overall [14].
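The softmax normalisation can be illustrated with a small two-class example, matching the "good"/"defective" output of the network (the logit values are arbitrary):

```python
# Softmax over a two-class output vector, with the usual max-subtraction
# for numerical stability (which does not change the result).
import math

def softmax(z):
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

scores = softmax([2.0, 1.0])          # arbitrary "good"/"defective" logits
print([round(v, 3) for v in scores])  # [0.731, 0.269]
print(round(sum(scores), 6))          # 1.0: outputs sum to one
```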
After the 9th fire module, a 50% dropout layer is implemented. The trained Squeezenet DNN is used for initial testing and adapted individually to the investigated use case. As Squeezenet is pre-trained for 1000 classes, the classification layer has to be replaced with a 2-class classification layer. In addition, a fully connected layer is added. In sum, this leads to 26 convolutional layers, 3 max-pooling layers and 1 global average-pooling layer. A dropout rate of 50% is used here. As described above, the ReLU function is used at every convolution layer output. In the presented use case, an Adam optimization function was used to optimize the parameters. Compared to stochastic gradient descent, Adam optimization often leads to a faster initial decrease in training loss [20] and has a default learning rate that works well across problem settings. In comparison to optimizers such as stochastic gradient descent with momentum, the Adam optimizer does not use a fixed step size; instead, the effective step is calculated individually for each time step. The updated parameters are calculated as shown in formula (1):

θ_t = θ_{t−1} − α · m̂_t / (√v̂_t + ε)   (1)

Here, θ_t represents the updated parameters, α is the learning rate, and ε is a constant added to avoid division by zero. The algorithm maintains an exponential moving average of the gradient (m_t) and an exponential moving average of the squared gradient (v_t); m̂_t and v̂_t denote their bias-corrected values. The hyper-parameters β_1, β_2 ∈ [0, 1) control the exponential decay rates of these moving averages, which are calculated according to formula (2) for m_t and formula (3) for v_t:

m_t = β_1 · m_{t−1} + (1 − β_1) · ∇E_{t−1}   (2)

v_t = β_2 · v_{t−1} + (1 − β_2) · (∇E_{t−1})²   (3)

∇E_{t−1} represents the gradient of the loss function at the current parameter vector [20]. The parameters α, ε, β_1 and β_2 are set in the training options. In the presented case a base learning rate of α = 1e−4 is used. The default values are used for ε = 10⁻⁸, β_1 = 0.9 and β_2 = 0.999. Due to limited computing memory, the complete dataset is divided into batches of 32. During training, the network was validated every 30 iterations with the validation dataset.
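A single Adam update implementing formulas (1)-(3) with the stated defaults can be sketched as follows; the bias-correction terms follow the original Adam formulation [20]:

```python
# One scalar Adam step with the hyper-parameters stated in the text:
# alpha = 1e-4, beta1 = 0.9, beta2 = 0.999, eps = 1e-8.
def adam_step(theta, m, v, grad, t, alpha=1e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1.0 - beta1) * grad          # formula (2)
    v = beta2 * v + (1.0 - beta2) * grad * grad   # formula (3)
    m_hat = m / (1.0 - beta1 ** t)                # bias correction [20]
    v_hat = v / (1.0 - beta2 ** t)
    theta = theta - alpha * m_hat / (v_hat ** 0.5 + eps)  # formula (1)
    return theta, m, v

theta, m, v = 1.0, 0.0, 0.0
theta, m, v = adam_step(theta, m, v, grad=0.5, t=1)
print(theta)  # just under 1.0: the first step has magnitude close to alpha
```

Note that the first step is close to alpha regardless of the gradient's magnitude, since the per-coordinate step is rescaled by the running second-moment estimate.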

Comparison of different training strategies for the AI
On the one hand, the adaptation of the simulation parameters is based on real-time process data (zone temperature, forming speed and duration, etc.) collected via the RTM, and on the other hand on perception-based data collected via a 3D imaging sensor [21]. The following section describes the perception-based quality assessment process using a DNN-based method. As a basis, a software interface for point-cloud data acquisition from a 3D laser image sensor is developed using a TCP/IP interface. The interface provides the opportunity to capture 3D data by an LED sensor with a resolution of up to 0.06-0.09 mm in the XY direction and 0.0047 mm in the Z direction. Further, the field of view is 142 × 190 mm and the scan rate is 4 Hz. The sensor provides a fully autonomous system including data processing, measurement, database, and a web server. Part dimensions and surface quality can be inspected inline using predefined features within the sensor system. Data can be accessed via TCP/IP from the Edge Micro-Server to provide them as process performance data for condition monitoring tasks. Furthermore, the additional image-based method is an advanced quality inspection instance that uses the sensor point cloud data for an AI-based analytics tool able to identify the intensity of occurring errors via a heatmap. Figure 8 provides an overview of defects that may occur during the forming process of the sample part. In particular, the formation of wrinkles (e) and cracks (f) is shown in the heatmap representation generated by the AI. The heatmap shows the position of the defect and the accuracy of detection (see Fig. 8).
A widespread DNN based on the SqueezeNet architecture is used for the reference training and result evaluation [22]. The training is performed on a workstation with an Intel Xeon® Quad-Core E5-2690 processor. One of the main challenges in applying this method widely is to provide a scalable amount of sample training images that represent the variety of defects and anomalies. For this purpose, the conventional point cloud scan data provided by the 3D imaging device is enriched with generic simulation datasets. The simulation generates generic defects such as cracks or distortion and provides them as greyscale images. The main challenge in this step is to match the image data representation of the simulation with the image data provided by the 3D sensor. Figure 9a displays a sample of the scan dataset. Figure 9b represents the filtered simulation image data. The training data is converted to a greyscale representation and filtered to match the simulation dataset. It has been shown that the best DNN evaluation results are achieved when the DNN has been trained with equalised greyscale images. This allows the DNN input layer to be adapted in subsequent configurations. The final acquisition results were labelled as "good" and "defective" and moved algorithmically to the respective training folder. The DNN training using this hybrid training dataset was performed with 20% abstracted images of formed parts captured with the 3D imaging device and 80% images of the generic forming simulation.
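The greyscale matching step described above can be sketched with a generic 8-bit histogram equalisation; this is a textbook version, not the project's exact filter chain:

```python
# Hedged sketch of histogram equalisation on an 8-bit greyscale image,
# illustrating how captured scans can be equalised to share a common
# representation with the simulation images. Generic textbook algorithm.
def equalize(gray):
    """Histogram-equalise a 2D list of 8-bit grey values (0-255)."""
    flat = [p for row in gray for p in row]
    n = len(flat)
    hist = [0] * 256
    for p in flat:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:                      # cumulative distribution function
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * 255) for c in cdf]
    return [[lut[p] for p in row] for row in gray]

img = [[50, 50, 100], [100, 150, 150], [200, 200, 200]]
print(equalize(img))  # [[0, 0, 73], [73, 146, 146], [255, 255, 255]]
```

The low-contrast input values are stretched across the full 0-255 range, so scan and simulation images presented to the DNN have comparable intensity distributions.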
Figure 10 presents the confidence scores of the verification runs with the trained DNN. For verification, "defective" and "good" parts are provided to the sensor in random order. Based on the test results, the following statements can be made:
• For the "good" samples, the DNN trained with simulation data only reaches a median confidence score of 0.228 for "defective" and 0.772 for "good" (compare Fig. 10). All components are correctly classified as "good" parts (compare Fig. 11).
• The "defective" parts are classified, using simulation training data, with 0.996 as "defective" and 0.004 as "good". Figure 11 indicates that 17.95% of the defective scan samples are correctly classified.
The initial performance validation of the hybrid-trained DNN indicates that a mixed training method using images with generic defect patterns is competitive with conventional training of a DNN on captured images only. The variance of the assessment results of the hybrid training method is even better than that of the scan-only trained DNN, compare Fig. 10. The number of misidentified "good" parts is approximately 10% higher than with the scan-only training method, while the identification of defective parts is significantly better than with the scan-only method. The initial validation was carried out under laboratory conditions and has to be confirmed in further test runs.

Conclusion
The research results indicate that even outdated forming machinery can be equipped with intelligent sensor technology for AI-based quality assessment. The perception-based acquisition of process images combined with generic training data offers the potential to identify critical areas of press tools that lead to insufficient component quality. Using a DoE study to simulate the forming process, the parameter combinations for good and defective components can be determined. This can be used to train an AI that later evaluates the quality of real components. By avoiding the physical defect components that are commonly required for a successful DNN training procedure, the hybrid SimKI method saves time as well as cost, and performs similarly to conventional training methods thanks to the set of generic image datasets. In combination with the heatmap representation of component defects, the design and try-out process of forming tools can be considerably improved through the identification of critical tool areas. In addition, parts with defects can be integrated algorithmically into the image datasets, e.g. for inline quality assessment. Thus, enriching the captured image training data with generic simulation datasets in a hybrid DNN training procedure promises enormous potential for increasing quality and efficiency in AI training procedures.
The future work within the research group targets the combination of sensor data from the retrofit approach of the forming tool and the AI-based information about the identified critical areas that lead to defective parts. Merging the process data within the upcoming development steps provides the opportunity to directly influence the forming process in real time.
Author contributions All authors contributed to the study conception and the production of this paper. WR, JS, JJ and MS contributed to the research work in the field of material characterization, simulation data, and forming tests. Data processing, system integration, and the development of a neural network were performed by SF, CR and TS. All the authors read and approved the final manuscript.

Authors information Sebastian Feldmann studied industrial engineering and earned his PhD at the Chair of Mechatronics at the Faculty of Engineering at the University of Duisburg-Essen. He is co-founder of the start-up STURM-INDUSTRIES. As winner of the "Adesso Mobile Solutions Award" he also founded NECTONE, a service provider for mechatronic and robotic product development. In 2018, he was appointed to the professorship for digital system integration in mechanical engineering at Aalen University, where he teaches advanced methods in the fields of mechatronics, artificial intelligence, internet of things, and robotics.
Wolfgang Rimkus has over 30 years of experience in the field of simulation and calculation using the finite element method. Since 2016 he has been head of the Lightweight Construction Technology Center, a consortium consisting of the city of Schwäbisch Gmünd, the University of Design in Schwäbisch Gmünd, the Institute for Precious Metals (FEM), and Aalen University of Applied Sciences. There is a high level of expertise from R&D projects in the application of industry-related simulation and calculation programs (including ANSYS, LS-DYNA, CREO/Simulate, Altair/Inspire/Optistruct), which is evidenced by a large number of publications, especially in the field of manufacturing simulation.

Michael Schmiedt graduated with a master's degree in "Materials and Production Engineering" from the University of Stuttgart. The knowledge of materials required for lightweight construction and the associated manufacturing processes was deepened through many years of work at Voestalpine Automotive Components. Since 2019, Michael Schmiedt has been working as a scientific assistant at the Lightweight Construction Technology Center at Aalen University. Michael Schmiedt is currently a PhD candidate at Glasgow Caledonian University with research interests in metal forming of high-strength aluminum alloys as well as hybrid components with local reinforcement.
Julian Schlosser completed his PhD in the field of hot forming technology at Glasgow Caledonian University after studying mechanical engineering at Aalen University of Applied Sciences. As a project manager in the Research and Development department at Voestalpine Automotive Components, he deepened his expertise in the field of simulation and forming techniques. Since 2017, Julian Marc Schlosser has been employed as a research assistant at Aalen University in the Lightweight Construction Technology Center.
Christian Rathmann holds a PhD in mechanical engineering from the Ruhr University Bochum in the field of smart materials and their technical and economic potentials. Prior to this, he studied industrial engineering at the University of Duisburg-Essen. With his extensive engineering and business expertise, as a management consultant, he advises multinational companies in production and supply chain mainly in operational excellence and digitalization topics. Christian Rathmann is also a freelance module director and lecturer at the IU International University of Applied Sciences in digitalization.
Tobias Stempfle graduated with a bachelor's degree in "General Mechanical Engineering" at Aalen University. Upon his graduation, Tobias Stempfle implemented an artificial intelligence algorithm to detect defects in metal die casting parts and improved the metal die casting process by increasing the part quality. Currently, Tobias Stempfle graduates with a Master of Science degree in "Advanced Materials and Manufacturing".
Funding Open Access funding enabled and organized by Projekt DEAL. Open access funding is provided by Aalen University of Applied Sciences. The presented results are part of the German research project "SimKI-Echtzeitdatenerfassung und Parameterkorrektur mittels einer mit Simulationsdaten angelernten KI". Funding was provided by the Ministerium für Wirtschaft, Arbeit und Wohnungsbau Baden-Württemberg within the research program KI-Innovationswettbewerb.
Data availability Raw data is available upon request from the corresponding author.

Code availability
The training and validation images analysed during the current study are available in the BW-materials cloud repository: https://archive.materialscloud.org/record/2022.16. The source code generated is available on reasonable request from the corresponding author Sebastian Feldmann at sebastian.feldmann@hs-aalen.de.

Declarations
Ethics approval and consent to participate Not applicable (this article does not contain any studies with human participants or animals performed by any of the authors).

Competing interests
The authors declare that they have no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.