
1 Introduction

This chapter focuses on low-level machine and sensor data from manufacturing processes. Data within the Internet of Production (IoP) can be sourced at different layers, ranging from a single sensor, via a production cell and the shop floor, to the World Wide Lab (WWL), where information is exchanged globally across company borders (Pennekamp et al., 2019).

Figure 11.1 visualizes the different process layers of production technology together with three typical process durations (left), which range from milliseconds to days. The involved parties (shown as examples) and their influence (right-hand side) also differ with each process layer.

Fig. 11.1

The process layers of production technology involve different time scales, ranging from close control loops to information flows during usage, and various parties (black: subject of this chapter, gray: upcoming research) (Mann et al., 2020). (With kind permission of Springer Singapore)

The workpiece as the core layer is influenced by events occurring within milliseconds, while the assembly is handled in the context of further production steps on a scale of minutes. The final product may ultimately involve the supply chain, which can span days. Therefore, each process layer requires specific methods of data acquisition and processing to meet its specific quality requirements. The connection of the different layers is a major challenge within the IoP.

In this chapter, we initially focus on low-level data sources on the shop floor. Here, raw data from production processes is recorded by sensors directly at the production devices. Thus, any resulting manipulations can have an immediate impact on the process. More specifically, the proximity to the production processes requires a tight connection of data-driven methods and expert knowledge of the production technologies to advance the state of production. By using combinations of technological information and data-driven methods in so-called gray box models, it is possible to utilize raw data to control the processes directly and provide useful information for high-level domains, such as process planning (cf. [Resilient Future Assembly Systems Operation in the Context of the Internet of Production]), production management (cf. Part V, “Production Management”), or production development (cf. Part VI, “Agile Development”). A digital shadow (Brauner et al., 2022) holds the resulting information and models and is the foundation for different approaches, thus enabling a better understanding of processes on shop floor level (cf. Jarke et al. 2018). In line with the vision of the IoP with its WWL, models are expected to be transferable to and (re)usable by other stakeholders and production sites with their own machines, shop floors, and domain experts. Thus, the continual and iterative exchange of process information allows for even more advances. We summarize this vision and our methodological approach in Fig. 11.2 following an abstract closed-loop control scheme.

Fig. 11.2

Abstract scheme of a model-based control system for manufacturing processes

The system that is to be controlled is represented by the machine and the process on the shop floor (upper right, green area). Information is gathered from different sources, combined via sensor fusion, and pre-processed before it is contextualized using expert knowledge (lower center-right, blue area). The enriched data is used to identify, build, and optimize gray box models (center, yellow area), which themselves are used for decision support and autonomous control systems of the machine and processes (upper center, red area). Models and data are shared within the IoP to connect different shop floors (left, gray area). To enable this complex exchange of information, common ontologies across company and even technological borders are necessary and need to be established. Existing ontologies (e.g., the process specification language (PSL) (Grüninger, 2004)) do not satisfy the requirements for cooperation within the IoP. This especially holds for low-level models and data, which are strongly adapted to specific manufacturing technologies.

This chapter addresses different aspects within the scope of connected and controlled processes following the structure in Fig. 11.2 and is organized as follows: First, Sect. 11.2 provides a general state of the art covering methodical fundamentals. Subsequently, Sect. 11.3 covers approaches and methods within the individual subsystems: data aggregation and sensors (Sect. 11.3.1), model identification and optimization (Sect. 11.3.2), autonomous systems and decision support (Sect. 11.3.3), and model usage in connected job shops (Sect. 11.3.4). As technological aspects remain essential, the developed methods are demonstrated on several domain-specific use cases in the manufacturing fields of milling and welding. Section 11.4 concludes the chapter and discusses future challenges.

2 State of the Art

The overarching scheme (cf. Fig. 11.2) of the applied approaches spans a wide field of disciplines. Applying the idea of the IoP to these disciplines requires specific fundamentals, which can roughly be grouped into three topics: data acquisition and semantics, model optimization, and model-based control and decision-making. Data acquisition and data semantics or ontologies both concern the flow of data from its source to the model identification and optimization at the center of Fig. 11.2. While the semantics describe dependencies and provide transferable knowledge from previous processes within the WWL, model optimization helps to tailor this transferable knowledge to the specific processes by contextualizing it, e.g., using existing domain knowledge or hybrid, gray box modeling approaches. Finally, the optimized models are utilized in advanced control approaches to maximize productivity while accounting for constraints regarding quality and safety. Eventually, the tailored models are fed back into the collective database. In the following, a brief overview of the fundamentals regarding these topics is given to enable further understanding of the applied approaches and their interconnection.

Data acquisition in connected job shops

Acquisition of data across the shop floor remains an obstacle in industrial production systems. The high costs of sensors, acquisition systems, and infrastructure as well as the costs for connecting and configuring multiple assets discourage investments. The machines themselves, however, come with a significant number of data sources. Such data was initially only used for control purposes, e.g., positioning the machine axes as precisely and as fast as possible. However, to monitor the machine tool, or even establish a digital shadow of the manufacturing process, it is essential to systematically collect and store said data (Siemens, 2022).

Internal machine data as well as further data from external sources can be combined in a digital shadow of the process using data fusion techniques. Data fusion combines the signals of different sensors into a single piece of information, enhancing signal quality and reducing uncertainty. According to Hall and Llinas (1997), data fusion can be performed on three different levels: (i) raw data level, (ii) feature or state vector level, and (iii) decision level. The main challenge when fusing these different signals is maintaining the high-quality information within the signal without deteriorating it by overly relying on poor signals (Hall and Llinas, 1997).
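As a minimal illustration of fusion at the raw data or state vector level, the sketch below combines two redundant estimates of the same quantity by inverse-variance weighting, so that poor signals contribute little to the fused value; the numbers and the weighting scheme are illustrative assumptions, not the fusion method applied later in this chapter.

```python
# Minimal sketch: inverse-variance weighted fusion of redundant estimates.
# Values and variances are illustrative only.
import numpy as np

def fuse(estimates, variances):
    """Weight each estimate by the inverse of its (assumed) noise variance."""
    est = np.asarray(estimates, dtype=float)
    var = np.asarray(variances, dtype=float)
    weights = 1.0 / var
    return float(np.sum(weights * est) / np.sum(weights))

# e.g., a force estimate from a dynamometer (low noise) and one derived from
# motor currents (higher noise): the fused value stays close to the better sensor
print(fuse([102.0, 95.0], [1.0, 16.0]))  # ≈ 101.6
```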

Model-based controllers

Model Predictive Control (MPC) is an intuitive control algorithm which explicitly considers process knowledge (through a corresponding model) to optimize the future behavior of the controlled system under given objectives and constraints (cf. Richalet 1993). Furthermore, compared to simple control methods, such as PI controllers, MPC can anticipate future changes in the reference trajectory and handle large time delays and high-order dynamics. While most real processes are not linear, they can often be approximated by linear models over a small operating range. Linear MPC approaches are used in the majority of applications. When linear models are not sufficiently accurate to represent the real process nonlinearities, several approaches can be used, such as linearization techniques (cf. Rawlings 2000). Due to its algorithmic complexity and demand for process knowledge, MPC is still not state of the art for the control of production processes. However, if process knowledge is available, research results show the effectiveness of MPC for quality- and safety-oriented control of production processes (cf. Stemmler et al. 2019; Wu et al. 2019).
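To make the idea concrete, the following sketch shows a generic linear MPC step for a scalar first-order system with input constraints, solved as a quadratic program with cvxpy; the model, horizon, weights, and limits are illustrative assumptions and not the controllers developed in this chapter.

```python
# Generic linear MPC step (illustrative): predict over a horizon, minimize a
# quadratic tracking cost subject to model and input constraints, apply only
# the first input, then repeat at the next sampling instant.
import numpy as np
import cvxpy as cp

A, B = np.array([[0.9]]), np.array([[0.1]])  # assumed discrete-time model
N = 20                                       # prediction horizon
x0 = np.array([0.0])                         # current state estimate
r = 1.0                                      # reference, e.g., a target force

x = cp.Variable((1, N + 1))
u = cp.Variable((1, N))
cost, constraints = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.square(x[0, k + 1] - r) + 0.01 * cp.square(u[0, k])
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    u[0, k] >= 0.0, u[0, k] <= 2.0]  # actuator limits

cp.Problem(cp.Minimize(cost), constraints).solve()
print(u.value[0, 0])  # first input of the optimized sequence is applied
```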

Data-driven modeling approaches

Existing MPC approaches typically require the availability of a sufficiently accurate model to achieve desired closed-loop stability and performance guarantees. As there are often only uncertain system models available, MPC approaches allowing for an online adaptation or learning of the underlying model are of crucial importance. Fueled by the widespread success of machine learning, the question of how to beneficially employ learning techniques in the context of control has received an increasing amount of attention in recent years. Gaussian processes have been used to define and adapt online a continuous stirred-tank reactor model for MPC by Maiworm et al. (2021) and for autonomous racing by Kabzan et al. (2019). Similarly, neural networks are used to learn models for MPC in numerous applications, such as distillation columns (Vaupel et al., 2019) or laser beam welding (Bollig et al., 2003). Finally, Support Vector Machines (SVM) have, e.g., been used for predictive path tracking of autonomous vessels (Liu et al., 2021) and the control of a neutralization reactor in chemical plants (Lawrynczuk, 2016).

The maximum achievable control performance of MPC is limited by the accuracy of the underlying process models. Identifying these models, especially for highly nonlinear systems, is a time-consuming and complex endeavor as most real systems are too complex to be fully described by physical models (Schoukens and Ljung, 2019). However, it is often possible to derive an approximate system model from first principles with sufficient accuracy. Such models are known as white-box models.

Since most industrial processes are closely monitored, measurement data describing the processes is often available. This data can be exploited to create data-driven input/output models, so-called black-box models. While they have the potential of being universal function approximators, they often lack physical interpretability as they only map relations between system inputs and outputs.

Gray box models intend to combine the advantages of both models: data-driven methods estimate/predict parameters or system dynamics and, thus, augment white-box models based on process knowledge.

The aforementioned SVM represent a modeling technique with gray box capability. They come with strong theoretical guarantees, such as globally optimal models without the risk of sub-optimal learning and favorable computational complexity for large feature spaces. The latter is often required for complex physical systems with multifaceted dependencies. SVM further map the given features into a higher-dimensional space to fully explore the complex dependencies between the features. Thus, SVM are proficient at discovering unknown dependencies for dynamic modeling (Suykens, 2009).

At the same time, the additional computational expense of the higher-dimensional space can be avoided by so-called kernel functions. Kernel functions avoid an explicit mapping of the features and instead operate directly on the target space (henceforth referred to as kernel space). Furthermore, specifications for the kernel function and kernel space allow for gray box modeling by pairing SVM with existing domain knowledge (Ay et al., 2019b). Hammerstein and Wiener approaches further allow embedding of existing knowledge (Falck et al., 2009). Hence, in combination with fast solution methods (e.g., sequential minimal optimization (Ay et al., 2021) or least-squares SVM (Liu et al., 2021)), SVM are highly suitable for online identification and MPC of systems with fast-changing dynamic behavior.
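A minimal sketch of this idea is given below: a custom kernel combines a linear term for a physically motivated feature with an RBF term for the unknown residual dependencies and is plugged into a standard SVM regressor; the data, feature split, and kernel form are assumptions chosen purely for illustration.

```python
# Gray-box-style kernel sketch: feature 0 is assumed to enter linearly
# (known physics), the remaining features nonlinearly (unknown dynamics).
import numpy as np
from sklearn.svm import SVR

def graybox_kernel(X, Y):
    lin = X[:, :1] @ Y[:, :1].T                                   # known linear part
    d2 = ((X[:, 1:, None] - Y[:, 1:].T[None, :, :]) ** 2).sum(1)  # squared distances
    return lin + np.exp(-0.5 * d2)                                # plus RBF residual part

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))
y = 2.0 * X[:, 0] + np.sin(3.0 * X[:, 1]) + 0.05 * rng.standard_normal(200)

model = SVR(kernel=graybox_kernel, C=10.0).fit(X, y)
print(model.predict(X[:3]))
```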

Integrated, data-driven quality control

Quality can be described in terms of workpiece quality (nonvolatile workpiece properties, e.g., workpiece surface quality or weld seam geometry) and process quality (volatile process properties, e.g., sustainability and economic efficiency) under appropriate process boundary conditions (e.g., materials and auxiliaries). Due to their physical mechanisms and interdependencies, production processes cannot be adjusted arbitrarily without violating basic process and machine limits such as stability boundaries. Moderating the application requirements within the available process space thus describes the core challenge of process and workpiece quality. Model-based quality control approaches must therefore not only represent quality on the basis of available sensor data and digital shadows but also offer a control strategy that takes basic stability criteria into account. The first key component is the digital shadow for describing the workpiece and process quality on the basis of sensor data. Primary sensor data is characterized by high availability and is directly available at the production system. Secondary process data requires dedicated sensors but directly describes the quality characteristics. The modeling effort for the digital shadow therefore depends inversely on the significance of the available data, e.g., whether only sensor data instead of dedicated process data is available. The control strategy ultimately contains the methodical competence to control the process according to the corresponding trade-offs on the basis of decisive and transient features. The production system finally gains the ability not only to control quality but also to provide usable quality data for the production network (Reisgen et al., 2019, 2020c).

Semantic Web and ontologies

The challenge of creating machine-understandable data is addressed by techniques of the Semantic Web (Barbau et al., 2012). In essence, it enables the design of universally valid ontologies, a formal description of how annotation data is structured, as well as an automatic annotation of data. While these technologies have long been a subject of research, there is still an ongoing discussion on their usability and appropriate use cases (Hogan, 2020). The common opinion is that these technologies have great potential, especially in domains like the IoT (ISO/IEC, 2018) or the IoP.

In the Semantic Web, formal ontology description and data storage are based on the Resource Description Framework (RDF), a data model standard developed by the World Wide Web Consortium (W3C) (Cyganiak et al., 2014). The RDF data model is represented as a directed graph, where each relation is a triple consisting of two nodes (subject, object) and an edge (predicate) linking them together. Every resource (subject, predicate, object) in RDF is formed analogously to the convention of URLs to ensure a worldwide unique identifier. The Web Ontology Language (OWL) (McGuinness and Van Harmelen, 2004) was developed on the basis of RDF and is used in the Semantic Web to describe ontologies.
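The sketch below illustrates this triple structure with rdflib; the resource names are made up for illustration and do not stem from an existing production ontology.

```python
# Minimal RDF example: a machine, the process it executes, and one property,
# expressed as subject-predicate-object triples with URL-like identifiers.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/iop#")
g = Graph()
g.add((EX.MillingMachine1, RDF.type, EX.MachineTool))
g.add((EX.MillingMachine1, EX.executes, EX.MillingProcess42))
g.add((EX.MillingProcess42, EX.hasSpindleSpeed, Literal(12000)))

print(g.serialize(format="turtle"))
```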

The aforementioned fundamentals lay the methodological base for the approaches described in the following sections. These applications rely on the described methods while providing domain-specific solutions within different fields of production technology.

3 Domain Application

This section covers the three main subsystems of the control scheme (cf. Fig. 11.2): data aggregation and sensors, model identification and optimization, and autonomous systems and decision support. The shop floor, consisting of machines and processes, marks the system to be controlled and is thus not covered in a separate subsection. The covered approaches are domain specific and aim at explaining different solutions following the common idea of the IoP. These approaches mostly cover specific technological solutions at shop floor level, while Sect. 11.3.4 explicitly targets model and data usage across shop floors and the IoP.

3.1 Data Aggregation and Sensors

While the installation of new sensor systems is expensive, the machine tool itself already has a large variety of integrated sensors that can be sampled. Furthermore, connecting and utilizing multiple data sources raises new challenges, namely, the handling of redundant data or data with different frequencies. A further approach targets quality control after the manufacturing process, aiming for quality measurement of the workpiece directly on the machine.

Data acquisition and signal fusion

When aggregating data from the shop floor, different sources are used to sample data from machines and processes. While machine tools have integrated sensors that can be used as data sources, commercial CNC controls have a large variety of interfaces, making it difficult to acquire data from different machines.

For the continuous acquisition of high-frequency data from machine tools, a middleware for commercial CNC controls, e.g., Sinumerik 840D sl and Mitsubishi M700/800, was developed (Brecher et al., 2018). Machine-internal data, including drive data, such as motor current, actual position, and spindle speed, as well as process- and control-related information, i.e., the actual tool number, the line number of the NC program, zero point offset, etc., is continuously captured in the position control cycle (500–1000 Hz). Furthermore, the trace software is extendable to synchronously sample signals from external sensors, for example, thermistors and piezoelectric sensors. For lower-frequency dimensional data, data acquisition using standardized Open Platform Communications Unified Architecture (OPC UA) interfaces to machine data can be sufficient. OPC UA interfaces exist for a multitude of machine controllers. Machine data is read from the machine control system, here realized using an edge computer (Brecher et al., 2018), which transfers the information into a downstream data infrastructure based on the publisher/subscriber protocol Message Queuing Telemetry Transport (MQTT), as illustrated by Sanders et al. (2021). Sustainable data collection and storage are essential for future reuse and analyses (Bodenbenner et al., 2021a). Hence, machine information is annotated with metadata using a defined data syntax so that it can be automatically structured and stored in a database (Bodenbenner et al., 2021b), laying the groundwork for sustainable data storage according to FAIR data principles (cf. Sect. 11.3.4). Sample rates of machine-internal data are usually limited to the abovementioned 1000 Hz.
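As a simplified illustration of such metadata annotation, the snippet below wraps a raw signal value in a JSON payload before it is published to an MQTT broker; the topic, field names, and units are assumptions and not the data syntax defined in the cited works.

```python
# Sketch: annotate a raw machine signal with metadata before publishing it
# via MQTT; field names, units, and the topic are illustrative assumptions.
import json
import time

sample = {
    "value": 3.72,                       # raw signal value
    "unit": "A",                         # physical unit
    "signal": "spindle_motor_current",   # semantic name of the data source
    "machine": "machining_center_01",    # asset identifier
    "nc_line": 1043,                     # process context from the NC program
    "timestamp_ns": time.time_ns(),      # acquisition time
}
payload = json.dumps(sample)
# e.g., publish("shopfloor/machining_center_01/spindle_motor_current", payload)
# with any MQTT client library; the annotated record can then be stored
# automatically in a database
print(payload)
```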

However, some applications require data with higher sampling rates. During rough machining in milling, e.g., the process forces determine productivity and product quality (Liang et al., 2004). Typical approaches for monitoring process forces still mostly rely on piezoelectric dynamometers. These are costly and reduce the stiffness of the tool-machine system, making them suboptimal candidates for use in industrial environments. As an alternative, the motor current of the machine tool's feed drives can be used as a soft sensor for indirect force measurement as it is directly proportional to the motor torque (Miura and Bergs, 2019). However, current signals of feed drives can have low sample rates, be noisy, and be of varying quality and, thus, require sensor fusion of different signal sources to obtain a more stable signal. At the core of the sensor fusion, the spindle provides a high-quality signal as it runs at a constantly high speed. External sensors with sample rates of 50 kHz are integrated in the motor power circuit and sampled on different real-time devices to capture high process dynamics.

In practice, the fusion can be realized using a Kalman filter which implements a system model and continuously corrects this model based on signal quality, while it is itself optimized through experiments (Schwenzer, 2022). Following the differentiation of signal fusion according to Hall and Llinas (1997), this can be considered on the second level, as the information needs to be transformed to a common coordinate system as well as put into relation using force models. The resulting high-quality signal is then used to identify models of the process at run-time. To use this in closed-loop systems (cf. Sect. 11.3.3), one main challenge is that the sensor data needs to be provided with low latencies to allow for fast responses. Process-near sensor processing is best suited for this purpose and, additionally, Time-Sensitive Networking (TSN) can help to ensure the low latencies. However, additional challenges arise as soon as the process control is moved to remote locations. Here, novel in-network computing approaches might provide a suitable middle ground as reasonably expressive computations (Kunze et al., 2021) as well as simple control functionality (Rüth et al., 2018) are possible on networking devices.
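The following one-dimensional sketch illustrates the Kalman filter principle behind such a fusion: a trivial constant-force model is corrected by each current-based observation, weighted by assumed process and measurement noise. It is a didactic simplification, not the fusion filter described by Schwenzer (2022).

```python
# Didactic 1D Kalman filter: the state is the process force, the prediction
# uses a constant-force model, and each noisy observation corrects the state
# in proportion to the assumed signal quality (measurement noise r).
import numpy as np

def kalman_1d(measurements, q=1.0, r=400.0):
    x, p = measurements[0], r          # initial state and covariance
    estimates = []
    for z in measurements:
        p = p + q                      # predict: model uncertainty grows
        k = p / (p + r)                # gain: model vs. sensor uncertainty
        x = x + k * (z - x)            # update with the new observation
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(1)
noisy_force = 400.0 + 20.0 * rng.standard_normal(500)  # current-based estimates
print(kalman_1d(noisy_force)[-1])                      # smoothed force estimate
```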

On-machine measurements

Common quality management measures for inspecting specification and tolerance compliance often involve transporting workpieces into climate-controlled measurement rooms, acclimatization periods, and dimensional measurements on Coordinate Measuring Machines (CMMs). A number of standards exist to enable traceable measurements that are, e.g., required for safety-critical applications and part certification. ISO standards 10360 and 15530 define ways to analyze the measurement uncertainty of CMM measurements accounting for a multitude of influences.

With on-machine measurements, the workpiece's geometry is measured with the machine tool's own probing system after material removal and in the original clamping situation. Advantages are immediate feedback on dimensional accuracy, allowing for direct rework while significantly reducing the number of necessary production steps compared to CMM measurements in a measurement room (Sanders et al., 2021). To create reliable workpiece measurements on the same machine the workpiece was machined on, geometric and thermoelastic machine and workpiece errors need to be accounted for. Thus, the aforementioned uncertainty analyses and corresponding modeling approaches for error compensation of machine (Dahlem et al., 2020) and workpiece deformation are required. Current work within ISO/TC 39/SC 2 aims at translating CMM-specific measurement definitions for machine tools into an additional technical report, part 13 of ISO 230.

To illustrate the relevance of this topic, Emonts et al. (2022) performed an experimental analysis of the thermoelastic deformation of an example turbine housing. They simulated machining heat influx by attaching heat pads to the workpiece, increasing the local workpiece temperature by 30 K and the average temperature by 15 K. Results showed a part diameter increase of approx. 500 \(\upmu \)m (nominal approx. 1400 mm), i.e., twice the expected diameter change assuming a homogeneous temperature distribution and linear thermal expansion with the average part temperature.
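As a rough plausibility check, assuming a thermal expansion coefficient typical for steel of \(\alpha \approx 12 \times 10^{-6}\,\mathrm{K}^{-1}\) (an assumed value, not stated in the cited study), linear expansion with the average temperature rise yields

\[
\Delta d \approx \alpha \, d \, \Delta T = 12 \times 10^{-6}\,\mathrm{K}^{-1} \cdot 1400\,\mathrm{mm} \cdot 15\,\mathrm{K} \approx 0.25\,\mathrm{mm},
\]

which is roughly half of the measured increase of approx. 500 \(\upmu \)m and illustrates why the homogeneous-temperature assumption underestimates the actual deformation.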

3.2 Data-Based Model Identification and Optimization

For the control of production processes, the usage of different data sources alone is not sufficient. Sensors are noisy and systems may change quickly. Online model identification and model optimization enable the consideration of system changes and extend the range of validity of the models. This subsection covers research in online force model identification in milling as well as model optimization in the condition monitoring of ball screw drives.

Online model identification

Model-based control systems require online identification to account for changes within the systems, which can occur in both the machine and the process. Models need to be as simple as possible while remaining as precise as necessary for use within control systems. Milling is a highly flexible process with constantly changing engagement conditions between tool and workpiece and, as a result, nonlinearly changing process forces. The relation between the geometrical engagement of tool and workpiece and the process forces is modeled using the force model according to Kienzle (1952). The coefficients of the Kienzle model are identified using engagement simulation data from the process planning phase and the fused force signals from the motor currents (cf. Sect. 11.3.1). As the Kienzle model is nonlinear and nonobservable, an ensemble Kalman filter is used as a nonlinear observer to enable an instantaneous model identification in one step (Schwenzer et al., 2020). This approach allows the usage of a simple force model to account for changes in the manufacturing process, instead of trying to apply a very specific model for each process beforehand. The identified models can be shared in the IoP and reused as a starting point for future identifications.
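In its common textbook form (the specific variant used in the cited works may differ), the Kienzle relation expresses the cutting force via the undeformed chip geometry as

\[
F_c = k_{c1.1} \cdot b \cdot h^{\,1-m_c},
\]

where \(b\) denotes the chip width, \(h\) the undeformed chip thickness, and \(k_{c1.1}\) and \(m_c\) the material-specific coefficients that are identified online from the fused force signals.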

Regarding the machine tool, SVM are utilized to identify the unknown behavior of the drives and enable model-based controllers like MPC to accurately forecast the future engagement of the tool while maximizing its velocity (Ay et al., 2019a). The data lake can thereby be deployed for an initial identification as it can be searched for already existing data from a comparable process. Thus, no additional resources (time and personnel) have to be expended for model identification experiments. The data lake models are then tailored to the present process online by SVM and sensor-acquired data, using the aforementioned methodologies for gray box modeling and efficient online identification (Ay et al., 2021).

The implicit data selection of SVM also has positive implications for data processing and memory efficiency. SVM assess the importance of every data sample for the resulting model to the extent that irrelevant data samples are excluded. Therefore, the limited memory during the process can be utilized more efficiently for only the relevant subset of data. Furthermore, when new sensor-acquired data emerges and the memory reaches its limit, SVM offer two measures to decide which data to exclude from memory in favor of new data: (i) the model weights of SVM for the aforementioned assessment of data relevance and (ii) the evaluated kernel function of SVM. The latter determines the similarity between data samples for correlation-based kernel functions. Thus, the most expendable data samples can be determined as those with low relevance for the resulting model and high similarity to already existing data samples.

However, utilizing the initial models from the data lake alone is not sufficient for forecasting within MPC. Heuristic methods are therefore combined with robust optimization to automatically tune the controller and its soft sensors. Thus, no additional configuration of the controller is needed at shop floor level. A suitable method for this purpose is Bayesian optimization as it can consider model uncertainties due to later model adaptation/optimization (Stenger et al., 2020).
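The sketch below shows the basic pattern of such an automatic tuning step with Gaussian-process-based Bayesian optimization from scikit-optimize; the tuned parameter, its bounds, and the placeholder cost function are assumptions for illustration, not the tuning problem of the cited work.

```python
# Sketch: Bayesian optimization of one controller tuning parameter.
# The closed-loop cost would normally come from a simulation or a short
# experiment; here it is a placeholder function.
from skopt import gp_minimize

def closed_loop_cost(params):
    weight, = params
    # placeholder for a real closed-loop performance measure
    return (weight - 0.3) ** 2

result = gp_minimize(closed_loop_cost,
                     dimensions=[(0.01, 1.0)],  # assumed search space
                     n_calls=20,
                     random_state=0)
print(result.x, result.fun)  # best found parameter and its cost
```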

Overall, the presented technologies enable a closed data cycle: Information from the data lake can be used for the initial deployment of the model-based quality control of the process. Subsequently, sensor-acquired data helps the model-based control framework to self-optimize during the process. Finally, the newly optimized models are fed back to the data lake, including relevant data and production context.

Model optimization for condition monitoring of ball screw drives

The availability of production facilities and equipment plays a decisive role in the competitiveness of manufacturing companies under the increasing pressure of globalized markets, with a particular focus on reducing downtimes due to unplanned maintenance activities to ensure sustained high productivity (Bullinger et al., 2008; Schapp, 2009). Empirical knowledge regarding the service life of machine components and tools can generally not be included in maintenance planning, as there is usually a large variance in the components to be manufactured. The resulting conflict of goals between the reduction of non-value-creating activities (through reactive maintenance) and the avoidance of unplanned downtimes (through preventive maintenance) represents a major challenge (Wloka and Wildemann, 2013). The component load is directly linked to the feed axis forces of a machine tool, which in turn correlate with manufacturing productivity. The poor accessibility of components within the machines leads to comparatively high costs and long downtimes for maintenance work (Brecher et al., 2008). The analytical prediction of the service life of ball screws is based on calculations according to the standards (DIN/ISO, 2011), which build on findings by Weibull as well as Lundberg and Palmgren. The service life \(L_{10}\) (number of revolutions) with a survival probability of 90% is calculated on the basis of empirically determined equations (Lundberg and Palmgren, 1949; Weibull, 1949):

\[
L_{10} = \left(\frac{C_{\mathrm{dyn}}}{F_m}\right)^{3} \cdot 10^{6}
\qquad (11.1)
\]

where \(C_{\mathrm {dyn}}\) is the dynamic load rating and \(F_m\) is the equivalent axial load.
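For illustration, Eq. (11.1) translates directly into a small helper for maintenance planning; the numerical values below are made up, and the conversion to operating hours assumes a constant mean speed.

```python
# Nominal service life according to Eq. (11.1) and conversion to hours;
# the example values for load rating, load, and speed are illustrative.
def l10_revolutions(c_dyn, f_m):
    """L10 in revolutions (90 % survival probability)."""
    return (c_dyn / f_m) ** 3 * 1e6

def l10_hours(c_dyn, f_m, mean_speed_rpm):
    return l10_revolutions(c_dyn, f_m) / (mean_speed_rpm * 60.0)

print(l10_hours(c_dyn=60_000.0, f_m=8_000.0, mean_speed_rpm=1500.0))  # ≈ 4690 h
```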

Because decisive influencing factors, such as the stroke length, manufacturing deviations of components, additional loads due to assembly errors, and the unknown loads occurring during operation, are not considered, the calculated and the actual service life match in only 20% of the cases. Denkena attributes this, among other reasons, to the fact that the service life calculations do not offer the possibility of taking the usage history of a system into account (Denkena et al., 2009). This highlights the need to develop new concepts for forecasting component failures.

Machine data of real processes, which are continuously available within the framework of the IoP, contain the potential to increase the availability of plants and to ideally plan necessary maintenance work by using the knowledge implicitly included in the data. The procedure for achieving this goal is divided into two parts: (i) Determination of the current machine condition on the basis of historical operating data, which can be extended with condition indicators obtained from reference runs. (ii) Prediction of the usage-dependent development of the machine condition based on the current condition and the assumption of a future usage profile. Since in the literature the feed axes of machine tools, and, in particular, the linear guides and ball screws, are identified as the cause of machine downtimes, the IoP will pay particular attention to these components. From the recorded data of the motor current, the effective feed force can be calculated in a first approximation for a horizontally installed feed axis with a ball screw (Brecher et al., 2020):

(11.2)

where \(F_P\)/\(F_F\) denotes the process/friction force, h the spindle pitch, n the motor speed, i the gear ratio, I the (torque-forming) motor current, \(J_G\) the gearbox inertia (input side, output side), \(J_{SP}\) ball screw inertia, \(J_M\)/\(J_T\) the motor/table inertias, \(k_T\) the torque constant, and \(T_F\) the frictional torque (input side, output side).

The necessary compensation of inertial and frictional forces requires different model depths depending on the design of the feed axis. These depend on the different designs of gearboxes, linear guides, and other machine components or disturbing influences (Brecher et al., 2020). In a first step, this procedure enables service life calculation and prognosis on the basis of historical load data and service life models according to the state of the art (Munzinger and Schopp, 2009; Huf, 2012).

Since known models are simplified by the assumptions made and thus reduced in their prognosis quality, several model extensions were developed. These allow load distributions within machine components to be calculated in a process-parallel manner as a function of geometry, material, and load parameters, so that a discrete-position service life calculation can be carried out. The model extensions described by Brecher and Biernat allow the consideration of load data determined in parallel with the process as well as further meta-information, including spindle pitch errors and the tolerance class of machine components.

The influence of the tolerance class and the stroke length on the service life can be specified as up to 30%. Relevant models are presented in detail in Brecher et al. (2020, 2021a,b,c), among others. Figure 11.3 shows the example transfer of traced data into a position- and force-resolved representation of fatigue-relevant load cycles. The availability of data from production and test benches, which are obtained in the context of the IoP, enables a cross-domain validation as well as further development of the reduced models according to requirements under real operating conditions. Finally, these models will be used to implement a prognostic maintenance planning with significantly improved quality.

Fig. 11.3

Transfer of a time series into matrix notation (Brecher et al., 2021b). (With kind permission of wt Werkstatttechnik online)

3.3 Autonomous Systems and Decision Support

The main objectives in enhancing production processes are improving productivity and reducing tool wear while maintaining part quality and process safety. Data acquisition and model identification are necessary steps to apply autonomous systems in production processes. However, to close the control loop, data and models also need to be utilized in autonomous systems or by humans as decision support. The IoP delivers an infrastructure for sharing models and data, but the application remains at shop floor level. This section addresses autonomous systems and methods for decision support, using previously acquired data (cf. Sect. 11.3.1) as well as the identified and optimized modeling techniques (cf. Sect. 11.3.2). The foci, however, target objectives at the manufacturing level, including workpiece quality, tool wear, process safety, and improvement of productivity.

Workpiece quality monitor

The workpiece quality is one of the most relevant indicators for machining production in milling. Errors in machining processes can be traced back to one of the following root causes: static, transient, and dynamic geometric errors as well as tool errors (see Fig. 11.4): (i) Static geometric errors are inherent to all mechanical platforms and are caused by imperfections in mechanical structures, guide systems, encoder systems, and numerical uncertainties. After calibration, modern machine tools allow for control-based compensation which significantly reduces said errors. (ii) Tool errors are caused by tool manufacturing imperfections and tool wear over time. While initial tool dimension errors can be measured and compensated for, tool wear over time must be predicted based on models and data. (iii) Transient thermal errors are caused by (inhomogeneous) thermal states in machine tool and workpiece and their respective thermoelastic deformation. While assumed homogeneous, linear thermal expansion can be compensated for, complex thermal deformation prediction in real time is an area of active research. Relevance of thermal errors increases with the ratio between required tolerances and part dimension. Thus, their importance increases for precision manufacturing and large workpieces. (iv) Dynamic errors in machine tool and workpiece are caused by acceleration, forces, and control system inaccuracies resulting from the machining process itself.

Fig. 11.4

Data-driven workpiece quality and machine state monitor: Multiple influencing factors reduce workpiece quality and degrade machine performance over time. A set of complementary applications predicting and, thus, enabling correction of these factors is presented. As a future step, the prediction output of said models can be combined into an overall machine state and workpiece quality

To analyze the workpiece dimensional accuracy, the machine's internal probing system is used to probe the workpiece (cf. Sect. 11.3.1). In order to estimate the workpiece surface quality, such as straightness or flatness, high-frequency process data and machine dynamic models are mandatory (Königs and Brecher, 2018). In the so-called process-parallel material removal simulation, the relative position between the workpiece and cutting tool is calculated from the encoder signal, which is continuously sampled by the middleware described in Sect. 11.3.1. When the tool intersects the workpiece, the corresponding volume is removed. Random errors caused by component wear, controller deviation, or material inhomogeneity are already contained in the encoder data. To determine the actual position of the tool center point, systematic errors, such as geometric-kinematic accuracy and force-induced tool deformation, still need to be accounted for. While the former can be compensated for by means of volumetric compensation based on machine calibration data, the latter requires knowledge of axis and tool stiffness as well as the cutting force. For this purpose, a real-time-enabled force model was developed by Fey et al. (2018), which is driven by process-parallel trace data. Machine and workpiece stiffness are either identified experimentally or simulated by finite element analysis. Thus, the real cutter engagement location is determined.

After the manufacturing process, a virtual measurement is performed based on the resulting virtual workpiece. Straightness, roundness, or surface flatness are evaluated by extracting the points on the measurement path from the point cloud. Negative trends regarding quality tolerances can thereby be detected immediately after the manufacturing process, which enables a quick and reliable quality feedback loop.

Tool life monitor

The prediction of the optimal timing for tool change is essential for automated mass production, as this is directly associated with the machine-idle time and workpiece quality. Without further monitoring sensors or signals, a very conservative tool changing timing has to be selected due to the complexity of the machining process and numerous random influences in manufacturing. Thus, the main challenge of tool monitoring is providing a practical and reliable solution at shop floor level.

Our proposed tool monitoring approach is based only on internal machine data, i.e., no additional sensors such as Charge-Coupled Device (CCD) cameras or dynamometers are required (Xi et al., 2021). To achieve a robust estimation of the tool condition, we adopt multiple built-in sensors and signals, creating a multi-domain evaluation. More specifically, the estimated cutting force, the spindle current, and the spindle speed are fused to create a wear indicator. Using the aforementioned trace solution from Sect. 11.3.1, it is possible to automatically recognize which cutting tool is used and when it is used, identified by tool and NC line numbers. A wear model then outputs a wear indicator for each cutting task, from which a tool wear progress chart can be generated. For roughing processes, as long as the indicator does not exceed the threshold, the roughing tool is assumed to be sufficient. However, for fine finishing processes, even medium tool wear could already affect the final surface quality. Thus, a comprehensive consideration of the wear indicator combined with the quality indicator is necessary. By utilizing the virtual quality inspection introduced above, the maximum lifetime of the cutting tools can be safely approached by means of a statistically controlled process.
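A deliberately simplified sketch of such a per-task indicator is given below; the normalization against a sharp-tool reference and the equal weighting of force and current are assumptions for illustration and not the indicator defined by Xi et al. (2021).

```python
# Illustrative per-task wear indicator fusing internal machine signals:
# estimated cutting force and spindle current are normalized against
# reference values recorded with a sharp tool and then averaged.
import numpy as np

def wear_indicator(force_estimate, spindle_current, ref_force, ref_current):
    f = np.mean(force_estimate) / ref_force
    c = np.mean(spindle_current) / ref_current
    return 0.5 * f + 0.5 * c

indicator = wear_indicator(force_estimate=[510.0, 530.0, 525.0],
                           spindle_current=[4.1, 4.2, 4.2],
                           ref_force=480.0, ref_current=3.8)
print(indicator, indicator > 1.15)  # compare against a tool-change threshold
```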

Force control in milling

The high flexibility of the milling process results in constantly and abruptly changing engagement conditions between tool and workpiece. Using mechanical force models, such as the Kienzle model, the relation between tool-workpiece engagement and process force is modeled. Model parameterization is usually based on literature values, which have been identified once for specific combinations of tool and workpiece materials assuming constant tool conditions and homogeneous material. Using online model identification approaches (cf. Sect. 11.3.2) and a material removal simulation, changes in the system due to tool wear or material inhomogeneity can be accounted for. However, the changing engagement results in a highly dynamic and nonlinear behavior of the process forces, which is difficult to grasp for conventional fixed-law controllers. As a consequence, these controllers typically fail at providing a stable force control. In contrast, more sophisticated control approaches can adapt models at run-time and account for the inherent nonlinear behavior of milling. To account for the changing process models in milling, an MPC is used to control the process force as it can predict the short-term future and, therefore, account for abrupt system changes before they actually occur (Schwenzer, 2022). At the same time, MPC is able to respect safety-critical constraints of the process while maximizing productivity.

Fume emission rate control

Gas Metal Arc Welding (GMAW) is one of the most frequently used industrial welding processes as it offers broad applicability with a wide variety of joining partners, high scalability, as well as low process and system costs. Nevertheless, arc-welding processes involve considerable physiological risks. In addition to process-related noise and strong IR and UV emissions, welding fume emissions have increasingly come to the fore. In 2018, the United Nations International Agency for Research on Cancer categorized welding fumes as carcinogenic. Consequently, the minimization of Fume Emission Rates (FER) as a crucial physiological and sustainable process quality is a central task of modern GMAW development.

In experimental investigations, it has been observed that a characteristic curve of the welding fume emission forms over the welding voltage (Quimby and Ulrich, 1999; Pires et al., 2010). In particular, correlations to the FER could be identified in process features of electrical and optical time series by Reisgen et al. (2020a,b). To generate a digital shadow of the FER, a dataset consisting of 273 welded process parameter sets with high-resolution time records (200 kHz) of welding voltage and welding current was recorded. To label the time series with a corresponding quality feature, the FER was simultaneously measured according to DIN EN ISO 15011-1. The process variables welding current and welding voltage can be recorded directly at the process and are thus characterized by high availability, which, however, also requires a high degree of interpretation and modeling with regard to the FER. For this purpose, time series can first be converted into feature vectors. The GMAW process is often characterized by a stochastic but also periodic material transfer (e.g., short circuiting transfer in Fig. 11.5). This periodicity can be used to derive significant features over each process period. With each additional feature vector and the associated period duration, a feature vector series is created. The result is a feature vector time series with equidistant feature vectors for constant period durations or an irregular sequence if the period durations are based on stochastic process events, e.g., for short circuiting transfer in Fig. 11.5. On the basis of these vector series, statistical features such as mean values or standard deviations can be formed, which provide a reliable statement about the process characteristics.

Fig. 11.5

Feature engineering for electrical GMAW time series data; top: electrical GMAW time series with periodicity TP; bottom: gathering single event features \(E_i\) to build one significant event series feature set E
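A minimal sketch of this period-wise feature engineering is shown below; the segmentation indices (e.g., detected short-circuit events) and the chosen statistics are assumptions for illustration.

```python
# Sketch: per-period feature extraction from GMAW voltage/current time series.
# period_bounds are sample indices delimiting the process periods, e.g.,
# detected short-circuit events; the features per period are illustrative.
import numpy as np

def period_features(voltage, current, period_bounds):
    feats = []
    for start, end in zip(period_bounds[:-1], period_bounds[1:]):
        u, i = voltage[start:end], current[start:end]
        feats.append([u.mean(), u.std(), i.mean(), i.std(), end - start])
    return np.array(feats)                      # one feature vector per period

def summarize(feature_series):
    # statistical features over the feature-vector series characterize the process
    return np.concatenate([feature_series.mean(axis=0), feature_series.std(axis=0)])

rng = np.random.default_rng(0)
u, i = rng.normal(25, 2, 2000), rng.normal(150, 10, 2000)  # placeholder signals
bounds = np.arange(0, 2001, 200)                           # assumed constant period length
print(summarize(period_features(u, i, bounds)).shape)      # (10,)
```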

This significant feature vector was then linked to each FER label and used for supervised learning. The model generation process, in addition to achieving a sufficiently accurate model, also aimed at revealing correlations for the subsequent control strategy. The concept of data-driven quality control for welding will contribute to the comprehensive acquisition of connected quality data. Nevertheless, the quality model or digital shadow must first rely on conventional process data often collected in the laboratory. However, typical laboratory tests only allow for a limited scope of experiments, which contrasts with the datasets required for deep learning. Nevertheless, with the manageable dataset used here and the feature engineering described, it was possible to show that high model accuracies can be achieved (Reisgen et al., 2020a). In particular, the XGBoost algorithm (Chen and Guestrin, 2016) achieved good results with an \(R^2=0.89\) on the test dataset. In contrast, conventional statistical modeling via multiple linear regression resulted in poorer model quality (\(R^2=0.80\)), but led to the necessary system transparency on which the control strategy could be built.
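The corresponding supervised learning step can be sketched as follows; the feature matrix, hyperparameters, and the train/test split are placeholders and not the actual setup that produced the reported \(R^2\) values.

```python
# Sketch: gradient-boosted regression of the FER from per-process feature
# vectors; data and hyperparameters are placeholders.
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.random((273, 10))   # per-process feature vectors (placeholder)
y = rng.random(273)         # measured fume emission rates (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_tr, y_tr)
print(r2_score(y_te, model.predict(X_te)))
```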

By investigating correlations, using multiple linear regression models, two FER minima were found in the data. After comparison with the basic process stability, a control system was implemented, which adapts the current process working point to the next FER minimum within 1 s. The control loop was therefore closed, using distinct time series features and welding voltage correction parameters on the welding power source.

Figure 11.6 clearly shows that the welding fume emission as a decisive process quality can be reduced by between 12 and 45 percent on average, starting from three different operating points. The optimization was carried out via the voltage correction, which, however, also influences the weld geometry. The resulting trade-off between different quality features must be considered depending on the application. Finally, and in addition to this control application, the FER can be extracted directly at the welding system without costly FER measurements in accordance with DIN EN ISO 15011-1, thus providing access to an essential sustainable process characteristic.

Fig. 11.6

FER minimization potential with data-based quality control at different process working points (A, B, and C)

With this approach, data-driven models in the sense of the digital shadow are applied on the one hand to solve domain-specific challenges. On the other hand, the welding system is empowered as a source of aggregated data and thus provides a valuable contribution to the data lake and cross-domain applications.

3.4 Model and Data Integration in Connected Job Shops

Distribution of models over different machines and even different job shops within the IoP requires common semantics. Heterogeneous data are combined using ontologies, including information from other domains, like process planning (cf. [Resilient Future Assembly Systems Operation in the Context of the Internet of Production]) or quality assurance. This section describes an approach to ontologies using Blade-Integrated Disk (BLISK) manufacturing as an example.

A BLISK is an integral rotor component which combines disk and blades within a single part. It is used in the compressor of modern turbo engines. The manufacturing of such components represents one of the most challenging tasks in turbomachinery manufacturing (Ganser et al., 2022). The extremely tight tolerances place the highest demands on product and process design. To efficiently achieve the required tolerances, the topics of model and data integration in BLISK manufacturing are of high importance.

Model integration refers to the integration of process models into a Computer-Aided Manufacturing (CAM) system to extend the digital shadow of a BLISK. Models include, e.g., a macroscopic engagement simulation based on a multi-dexel model (Minoufekr, 2015), an analytical model to calculate the microscopic engagement data (uncut chip geometry) (Cabral, 2015), a dual-mechanistic cutting force model (Altintas, 2012), and a model to predict tool and workpiece deflections. This information is stored in the digital shadow and used to optimize the process design (Fig. 11.7).

Fig. 11.7

A picture of a BLISK and its digital shadow

Data integration aims to develop possibilities for systematic and efficient storage of simulation, process, and product data along the product and process development chain. It also aims to connect data stored in different systems and annotate it with meta information by developing an ontology to describe the meaning of the metadata.

Data generated during the product and process development steps is stored in various data formats, e.g., in .stp, .igs, or .stl files for the design step (CAD), .odb or .csv files for the process design (CAE), and .nc or .mpf files for CAM. However, the semantic meaning of the data is only understandable by experienced or trained employees and only in some cases understandable by machines. In general, data integration can be divided into three parts (Schiller et al., 2022): (i) the definition of an ontology that describes the relations and semantic meaning of the data, (ii) adapting existing data into appropriate structures following the ontology, or adding additional metadata to the existing data so that it can be linked using the ontology, and (iii) the formal storage of the data, e.g., using an information management system that centrally manages the data and can check the correct semantic description of new data. While the last two steps are strongly influenced by a technical implementation, the creation of an ontology requires deep domain knowledge and a clear formal definition. For BLISK manufacturing, an ontology was defined using OWL (McGuinness and Van Harmelen, 2004), describing the core relations between the generated datasets in the individual steps of the product and process development chain. Figure 11.8 shows the structure and the main core classes and properties for each of the four steps (CAD, CAM, CAE, CNC).

Fig. 11.8

The core part of the ontology for the PLM of BLISK manufacturing

To structure the ontology, it is divided into four parts. For each featured process chain step, we define a single ontology namespace: the BLISK schema, the CAM schema, the milling simulation schema, and the manufacturing schema. For each schema, a core class was defined. Additional properties enable linking the classes and creating a knowledge graph connecting all four steps. The BLISK schema is related to the product design step. Its core class is the BLISK, which is a subclass of a geometry model from the CAM schema. The core class of the CAM schema is the milling operation class. The milling operation rules a milling process. The milling process class enables the connections between the CAM, CAE, and CNC steps. The milling process described by a CAM operation can be a simulated milling process, defined in the simulation schema, or a real milling process running on a machine, defined in the manufacturing schema. A graph like this can be extended with additional classes and relations. This makes it possible to append all the data that is generated in the steps of the product and process development chain to a single knowledge graph, thus enabling complete data integration. This methodology, shown here for BLISK manufacturing as an example, can be used for sharing data and models across job shops and the whole IoP.
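A hedged sketch of these core classes and linking properties, expressed with rdflib, is given below; the IRIs and property names are placeholders and may differ from the published ontology.

```python
# Sketch of the core classes of the four schemas and the properties linking
# them; namespaces and names are illustrative placeholders.
from rdflib import Graph, Namespace, OWL, RDF, RDFS

BLISK = Namespace("http://example.org/blisk#")
CAM = Namespace("http://example.org/cam#")
SIM = Namespace("http://example.org/milling-simulation#")
MFG = Namespace("http://example.org/manufacturing#")

g = Graph()
for cls in (BLISK.Blisk, CAM.GeometryModel, CAM.MillingOperation, CAM.MillingProcess):
    g.add((cls, RDF.type, OWL.Class))

g.add((BLISK.Blisk, RDFS.subClassOf, CAM.GeometryModel))             # a BLISK is a geometry model
g.add((SIM.SimulatedMillingProcess, RDFS.subClassOf, CAM.MillingProcess))
g.add((MFG.RealMillingProcess, RDFS.subClassOf, CAM.MillingProcess))

g.add((CAM.rules, RDF.type, OWL.ObjectProperty))                      # "a milling operation rules a milling process"
g.add((CAM.rules, RDFS.domain, CAM.MillingOperation))
g.add((CAM.rules, RDFS.range, CAM.MillingProcess))

print(g.serialize(format="turtle"))
```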

4 Conclusion and Outlook

This chapter gave an overview of current research in model-based control approaches for production processes in the IoP. The chapter followed the different subsystems necessary to control machine tools and production processes: data aggregation, model identification and optimization, and autonomous systems and decision support. A common usage of data and models connects machines, job shops, and the WWL. The individuality of different manufacturing processes results in domain-specific problems and, therefore, particular approaches in the different fields. However, a common ground lies in the methodical approaches, utilizing data as well as expert knowledge. Potential for future research lies, e.g., in automated data selection from the data lake based on ontologies, such that the processes can go online within the WWL. Furthermore, a greater formalization of the domain knowledge has the potential to further generalize the application of the presented methodology and thus allow symbioses between different domains. From a methodical point of view, new data-driven modeling approaches have to be considered due to their different strengths, such as Long Short-Term Memory (LSTM) networks. LSTMs are recurrent networks and are especially well suited for mapping time-varying dynamics. Following additional modeling approaches and the aforementioned abstracted domain knowledge, the applicability of the control approaches should be improved toward more generic models. In addition, the setup of the controllers should be further automated to eventually reach a near plug-and-play capability. The main challenge regarding the connection of job shops in the future is the connection of different manufacturing technologies as they usually appear consecutively during the production of components. Especially domain knowledge, which remains essential to automate systems, is highly focused on single manufacturing processes and does not include interconnections between them. Digital shadows need to be able to provide technology-specific data and models in a way that other domain experts and even users of the produced components are able to utilize them. The IoP has the potential to achieve these connections but remains rather visionary at present. Current work aims at closing the gaps between the technologies and will continue to do so.