Abstract
This chapter describes the various approaches to analyse, quantify and evaluate uncertainty along the phases of the product life cycle. It is based on the previous chapters that introduce a consistent classification of uncertainty and a holistic approach to master the uncertainty of technical systems in mechanical engineering. Here, the following topics are presented: the identification of uncertainty by modelling technical processes, the detection and handling of data-induced conflicts, the analysis, quantification and evaluation of model uncertainty as well as the representation and visualisation of uncertainty. The different approaches are discussed and demonstrated on exemplary technical systems.
The book at hand is devoted to portraying our holistic approach to master the uncertainty of technical systems in mechanical engineering over all the phases of the product life cycle. The conceptual basis of our specific approach, as motivated in Chap. 1 and elaborated in Chap. 3, as well as the consistent classification and definition of uncertainty in Chap. 2, form the foundation of this approach, see Fig. 1.12.
This chapter deals with the analysis, quantification and evaluation of data and model uncertainty in mechanical engineering as an essential first step to master uncertainty. This will then be extended and completed by the methods and technologies to master uncertainty presented in Chap. 5 and the strategies to master uncertainty introduced in Chap. 6. We provide both a mathematical and an engineering perspective on the analysis, quantification and evaluation of data and model uncertainty. Examples of this interdisciplinary approach are, among others, presented in Sects. 4.3.1 and 4.3.2. Furthermore, the methods are illustrated and their application is demonstrated using the technical systems presented in Sect. 3.6. The examples given appear in all phases of the product life cycle: design, production and usage, see Sect. 3.1, and thus offer a broad overview of the activities presented within this book.
We start with the identification of uncertainty by modelling technical processes in Sect. 4.1 with the aim to gain information on data uncertainty as introduced in Sect. 2.1; we consider uncertainty in single processes and its propagation in process chains. An important aspect in this domain is the detection and handling of data-induced conflicts and thus data uncertainty, which will be covered in Sect. 4.2; here the main goals are the prevention of critical failures, finding correlations among the data, and isolating faults. Computer models based on physical or empirical knowledge of a technical system are useful tools in the design phase. Since reality is commonly complex and cannot be completely or exactly represented by mathematical models, one faces the problem that all models are imperfect, i.e. model uncertainty occurs, see Sects. 1.3 and 2.2. In Sect. 4.3, the analysis, quantification and evaluation of model uncertainty are studied; here, different methods to identify sources of model uncertainty and to quantify model uncertainty are discussed with the aim to analyse the accuracy of a model of a technical system. Finally, in case uncertainty is detected, it is often unclear how to represent and visualise the information in an informative way. In Sect. 4.4, a three-layer architecture is presented to solve this issue.
4.1 Identification of Uncertainty During Modelling of Technical Processes
Uncertainty occurs if properties in the life cycle processes of product design, production and usage, as introduced in Sects. 1.2 and 3.1, cannot be determined completely or at all. However, these are not measurable characteristics of an individual product. Uncertainty becomes obvious in, e.g. deviations between the actual and the planned product geometry as a consequence of incompletely determined production processes, or in undesired behaviour during usage processes. This section covers the identification of data uncertainty during the modelling of technical processes as a step towards mastering uncertainty.
Besides model uncertainty, which has been introduced in Sect. 2.2 and is covered in Sect. 4.3 of this chapter, possible uncertainty in the modelling of technical systems has to be considered during product design as data uncertainty of the model parameters used for design and dimensioning, see Sect. 2.1.
In Sect. 4.1.1 we show how random deviations of the component properties can be taken into account probabilistically during system design for the example of passive and active vibration isolation. In Sect. 4.1.2, we present the improvement of mathematical model predictions for the simulation of systems by means of a Bayesian inference based parameter calibration.
Uncertainty propagates in process chains and can ultimately lead to undesirable behaviour in production or usage processes, see Sect. 3.2. Section 4.1.3 proceeds with the model-based description and analysis of uncertainty in consecutive machining processes, such as drilling and reaming or drilling and tapping.
4.1.1 Analysis of Data Uncertainty Using the Example of Passive and Active Vibration Isolation
The quantification and evaluation of uncertainty in load-bearing structures is of growing importance for decision-making in the early product design phase as introduced in Sect. 1.2. This may especially become necessary due to the increasing complexity and scope of structures with multifunctional properties like mechatronic, semi-active or active systems. For example, active vibration control in mobile applications, such as an active suspension strut of a car, needs additional energy sources, sensors, actuators and a controller, see Sect. 3.4. This makes the active system more complex compared to a passive system with tailored, but fixed inertia, damping and stiffness properties [155]. This section summarises investigations that numerically compare the influence of aleatoric data uncertainty in the model parameters, see Sect. 2.1, on the predicted dynamic behaviour of a passive and an active technology for vibration isolation of a one mass oscillator [128,129,130]. The variation and uncertainty of the model parameters of the passive system may lead to inadequate tuning. In addition, due to the growing complexity of the active system, new uncertainty in the dynamic behaviour may arise compared to the passive system. Most importantly, the energetic effort and possibly reduced availability of the active system may influence the acceptance of the active technology, see Sect. 1.6. Figure 4.1a shows the simple mechanical model of a one mass oscillator with only four model parameters: mass m, damping coefficient b and stiffness k for passive vibration isolation, as well as an additional gain factor g for active vibration isolation [130].
The mass m oscillates in z-direction when excited by the harmonic base point stroke \(w(t) = \widehat{w}\cos (\Omega \,t+\delta )\) with the excitation frequency \(\Omega \), excitation amplitude \(\widehat{w}\), time t, and phase shift \(\delta \). For simplification, \(\delta = 0\) throughout the analysis. We assume linear characteristics of the internal damping force \(F_\mathrm {b}\), stiffness force \(F_\mathrm {k}\), and actuator force \(F_\mathrm {a}\) in Fig. 4.1b. With \(2D\omega _0 = b/m\) and \(\omega _0^2 = k/m\) referring to the damping ratio D and the angular eigenfrequency \(\omega _0\) as well as with the frequency relation \(\eta = \Omega /\omega _0\) and the factor \(\zeta = \Omega /(m\,\omega _0^2)\), the complex amplification function (CAF) of mass displacement in z-direction in the frequency domain is
\[
\underline{V}(\eta) = \frac{\widehat{z}_p}{\widehat{w}} = \frac{1 + \text{i}\,2D\eta}{1 - \eta^2 + \text{i}\,(2D\eta + g\,\zeta)}
\tag{4.1}
\]
with the amplitudes \(\widehat{z}_p\) and \(\widehat{w}\) from the complex particular integral approach \(\underline{z}_p(t) = \widehat{z}_p\,e^{{\text {i}\,\Omega \,t}}\) and \(\underline{w}(t) = \widehat{w}\,e^{{\text {i}\,\Omega \,t}}\) as derived in [128]. The amplitude of (4.1) is
\[
V(\eta) = |\underline{V}(\eta)| = \sqrt{\frac{1 + (2D\eta)^2}{(1-\eta^2)^2 + (2D\eta + g\,\zeta)^2}}
\tag{4.2}
\]
and its phase is
\[
\varphi(\eta) = \arctan(2D\eta) - \arctan\frac{2D\eta + g\,\zeta}{1-\eta^2}.
\tag{4.3}
\]
Deterministic case studies for different damping
Figure 4.2 shows the amplitude (4.2) and phase (4.3) of the CAF (4.1) for different damping cases (a)–(f) depending on different damping coefficients \(b_{1}< b_2 < b_3\) and feedback gains \(g_{1}< g_2 < g_3\).
For the passive system in Fig. 4.2, cases (a)–(c), the mass m and stiffness k are assumed constant while the three different damping coefficients \(b_{1}\) to \(b_3\) are chosen, with gain \(g = 0\). The higher the damping, the lower the maximum amplitude \(V_\mathrm {max}\) at the resonance frequency \(\omega _\mathrm {0}\). However, the amplitudes beyond the isolation frequency, \(\Omega > \omega _\mathrm {iso}\), remain higher with increased damping, which is well known. In case of active vibration isolation, cases (d)–(f), the different gains \(g_{1}\) to \(g_3\) are chosen with assumed low passive damping \(b_{1}\). A higher gain leads to a lower maximum amplitude \(V_\mathrm {max}\) at the resonance frequency \(\omega _\mathrm {0}\) and keeps a low amplitude beyond the isolation frequency, \(\Omega > \omega _\mathrm {iso}\), which is the benefit of the active approach.
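These deterministic cases can be reproduced numerically. The sketch below assumes an absolute-velocity feedback force \(F_\mathrm{a} = -g\,\dot{z}\) (a skyhook-type law), which may differ from the actual control law used in [128,129,130]:

```python
import numpy as np

def caf(omega, m=1.0, k=1.0, b=0.1, g=0.0):
    """Complex amplification function V = z/w of the one mass oscillator
    under base excitation. Assumption: the actuator applies an
    absolute-velocity feedback force F_a = -g * z_dot (skyhook sketch)."""
    omega0 = np.sqrt(k / m)            # angular eigenfrequency
    D = b / (2.0 * m * omega0)         # damping ratio
    eta = omega / omega0               # frequency relation
    zeta = omega / (m * omega0**2)     # scaling factor for the gain term
    num = 1.0 + 1j * 2.0 * D * eta
    den = (1.0 - eta**2) + 1j * (2.0 * D * eta + g * zeta)
    return num / den

Omega = np.linspace(0.1, 5.0, 500)           # excitation frequencies
V = caf(Omega, b=0.1, g=0.0)                 # passive case, low damping
amplitude_db = 20.0 * np.log10(np.abs(V))    # amplitude of the CAF in dB
phase = np.angle(V)                          # phase of the CAF
```

A convenient sanity check: for the passive system the amplitude is exactly 0 dB at the isolation frequency \(\omega_\mathrm{iso} = \sqrt{2}\,\omega_0\), independent of the damping b.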
Probabilistic case studies for different CAF points of interest
The influence of aleatoric data uncertainty on the numerical simulation of the dynamic behaviour of the passive and active one mass oscillator subject to vibration isolation is investigated with a Monte Carlo Simulation (MCS), see Sect. 3.3. For that, additional CAF point-of-interest case studies (i)–(vi) for the damping cases (a)–(f) are discussed: (i) varying maximum amplitude \(V_\mathrm {max}\), (ii) varying vibration amplitudes \(\underline{V}_0\) at the undamped resonance frequency \(\omega _0\), (iii) varying isolation frequency \(\omega _\mathrm {iso} = \sqrt{2}\,\omega _0\), (iv) varying amplitudes \(\underline{V}_{100}\) at the excitation frequency beyond the passive system’s fixed isolation frequency, \(\Omega = 100\,1/\mathrm {s} > \omega _{\mathrm {iso}}\), (v) varying excitation frequency \(\omega _{15}\) for \(15\) dB isolation attenuation, and (vi) varying decay time \(t_{0.01}\) until steady state vibration is reached or, respectively, initial transient vibrations are damped so that only 1% is left, see also [128]. The model parameters m, k, \(b_{1}\) to \(b_3\), and \(g_{1}\) to \(g_3\) in Table 4.1 vary around an assumed nominal mean value; the maximum and minimum values of the variations in % are considered as the \(\pm 3\sigma \) interval per model parameter according to experience and literature [120, 128, 142]. The MCS uses 10,000 samples, which meet the convergence criteria [101, 128].
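Such an MCS can be sketched in a few lines. The nominal values and variations below are hypothetical placeholders (Table 4.1 lists the actual ones); each stated variation is interpreted as the ±3σ interval of a normal distribution, and the amplitude of the passive CAF at a fixed excitation frequency beyond the isolation frequency serves as the point of interest:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # number of MC samples

# Hypothetical nominal values and relative variations; Table 4.1 in the
# text lists the actual ones. The stated maximum/minimum variation is
# taken as the +/-3 sigma interval, so sigma = variation * nominal / 3.
nominal = {"m": 1.0, "k": 1000.0, "b": 5.0}    # kg, N/m, N s/m (assumed)
variation = {"m": 0.02, "k": 0.05, "b": 0.10}  # 2 %, 5 %, 10 % (assumed)

samples = {p: rng.normal(nominal[p], variation[p] * nominal[p] / 3.0, N)
           for p in nominal}

# CAF point of interest (iv): amplitude of the passive system at a fixed
# excitation frequency Omega beyond the isolation frequency.
Omega = 100.0  # 1/s
m, k, b = samples["m"], samples["k"], samples["b"]
omega0 = np.sqrt(k / m)
D = b / (2.0 * m * omega0)
eta = Omega / omega0
V = np.abs((1 + 1j * 2 * D * eta) / (1 - eta**2 + 1j * 2 * D * eta))
V_db = 20.0 * np.log10(V)

mean, std = V_db.mean(), V_db.std(ddof=1)  # empirical mean and standard deviation
```

Histograms of `V_db` then correspond to the relative frequencies discussed for Fig. 4.3.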
As an example, Fig. 4.3 shows histograms of the relative frequency \(M_{n_b}(x)/N\) for a varying number of bins \(N_b\), with \(n_b = 1, \dots , N_b\) bins and \(n=1,\dots , N = 10{,}000\) samples, and constant bin width \(\Delta \) per varying output \(x = V_{\mathrm {100}}\) and \(x = \omega _{\mathrm {15}}\) according to the CAF point-of-interest case studies (iv) and (v) for damping cases (c) and (d) [128].
In summary, Fig. 4.3a shows that for case (iv), the relative frequency \(M_{n_b}(V_{\mathrm {100}})/N\) of the amplitude \(V_{\mathrm {100}}\) is relatively narrow around the empirical mean \(\overline{V}_\mathrm {100,(c)} = 12.66\,\mathrm {dB}\), with a relatively small standard deviation \(s_{V\mathrm {100,(c)}} = 0.45\,\mathrm {dB}\), due to the high damping \(b_3\) of the passive approach, damping case (c). However, for the active approach, damping case (d), the standard deviation \(s_{V\mathrm {100,(d)}} = 0.33\,\mathrm {dB}\) around the empirical mean \(\overline{V}_\mathrm {100,(d)} = 19.23\,\mathrm {dB}\) is even smaller, although only the lowest gain \(g_1\) is used. For case (v) in Fig. 4.3b, the relative frequency \(M_{n_b}(\omega _{\mathrm {15}})/N\) of the angular frequency \(\omega _{\mathrm {15}}\) at \(15\,\mathrm {dB}\) vibration attenuation is relatively narrow around the empirical mean \(\overline{\omega }_{\mathrm {15,(c)}} = 122.82\,\mathrm {1/s}\), with a relatively small standard deviation \(s_{\omega \mathrm {15,(c)}} = 5.45\,\mathrm {1/s}\), at the higher passive damping \(b_3\), damping case (c). Again, for the active damping case (d), the empirical mean \(\overline{\omega }_{\mathrm {15,(d)}} = 80.48\,\mathrm {1/s}\) and the standard deviation \(s_{\omega \mathrm {15,(d)}} = 1.41\,\mathrm {1/s}\) are smaller than for the passive approach of damping case (c).
Conclusion
The observations described in this contribution show that if aleatoric data uncertainty occurs, high active damping results in less scatter at angular frequencies beyond the isolation point compared to the passive approach, see also [128]. Furthermore, the scatter of the amplitude attenuation beyond the angular isolation frequency is smaller with the active approach. Investigations are under way to validate the numerical comparison of uncertainty in passive and active vibration isolation with an experimental example.
4.1.2 Bayesian Inference Based Parameter Calibration for a Mathematical Model of a Load-Bearing Structure
Load-bearing structures with kinematic functions, such as the suspension of a vehicle or an aircraft landing gear, enable and disable degrees of freedom and are part of many mechanical engineering applications. For an adequate numerical prediction of their load path, which is necessary, e.g. to develop a controller during the design phase, see Sect. 3.1, we need an adequate mathematical model with calibrated model parameters. Therefore, in this section, the adequacy of an exemplary load-bearing structure’s mathematical model is evaluated and its predictability increased by quantifying and reducing model parameter uncertainty, compare Sect. 2.1. Conventionally, optimisation algorithms are used to calibrate the model parameters deterministically, as e.g. investigated in [51, 104, 161]. In contrast, and as presented here, the model parameter calibration is formulated to achieve a model prediction that is statistically consistent with the data gained from experiments [87, 118, 144]. The parameters most influential on the model prediction of interest, i.e. the load path through the load-bearing structure represented by the support reaction forces, are identified for calibration by a sensitivity analysis. Subsequently, the mathematical model is adjusted to the actual operating conditions of the experimental load-bearing structure via the model parameters by applying a Bayesian inference based calibration procedure. Uncertainty, represented by originally large model parameter ranges, is quantified and reduced to increase the model prediction accuracy.
Loadbearing structure
The investigated load-bearing structure in Fig. 4.4 is derived from the more complex load-bearing system MAFDS intended to provide the possibility of intentionally introducing uncertainty in an exemplary technical system, see Sect. 3.6.1.
The load-bearing structure consists of a translationally moving mass \(m_\mathrm {A}\) connected to a rigid beam with mass \(m_\mathrm {B}\) and mass moment of inertia \(\Theta _\mathrm {B}\) via a spring-damper with stiffness \(k_\mathrm {S}\) and damping coefficient \(b_\mathrm {S}\), as well as two semi-active guidance elements. The two semi-active guidance elements provide an approach to redistribute loads, e.g. in case of weakened or damaged structural components, see Sect. 5.4.8. Two supports at the ends of the beam are equipped with an adjustable stiffness to simulate weakened or damaged structural components. A weakened or damaged structural component is represented by a reduced support stiffness depicting a reduced load-bearing capacity [53,54,55].
Mechanical and mathematical model
With load redistribution in mind, according to [52] the mathematical model of the load-bearing structure in Fig. 4.4 comprises parts describing the general system dynamics, the friction and the electromagnetic actuator. The model part describing the general dynamics is chosen for model parameter calibration in this section. The friction model calibration is described in detail in [52, 56].
Figure 4.5 depicts the mechanical model and the free body diagram of the load-bearing structure. The mechanical model consists of a movable mass \(m_\mathrm {A}\), a rigid beam with mass \(m_\mathrm {B}\), length \(l_\mathrm {B}\) and mass moment of inertia \(\Theta _\mathrm {B}\) in the xz-plane, see Fig. 4.4. The associated independent degrees of freedom (DOF) are the vertical displacements \(z_\mathrm {A}\), \(z_\mathrm {B}\) and the rotation \(\varphi \) [55]. The linear system of equations of motion of the load-bearing structure becomes
\[
\boldsymbol{M}\,\boldsymbol{\ddot{r}} + \boldsymbol{D}\,\boldsymbol{\dot{r}} + \boldsymbol{K}\,\boldsymbol{r} = \boldsymbol{F}
\tag{4.4}
\]
with the \([3\times 3]\) mass \(\boldsymbol{M}\), damping \(\boldsymbol{D}\) and stiffness \(\boldsymbol{K}\) matrices, and the \([3\times 1]\) acceleration \(\boldsymbol{\ddot{r}}=[\ddot{z}_{\mathrm {A}}, \ddot{z}_{\mathrm {B}}, \ddot{\varphi }]^\mathrm {T}\), velocity \(\boldsymbol{\dot{r}}=[\dot{z}_{\mathrm {A}}, \dot{z}_{\mathrm {B}}, \dot{\varphi }]^\mathrm {T}\) and displacement \(\boldsymbol{r}=[z_{\mathrm {A}}, z_{\mathrm {B}}, \varphi ]^\mathrm {T}\) vectors. The \([3\times 1]\) force vector \(\boldsymbol{F}\) contains the excitation force \(F_\mathrm {ex}\), the friction induced force \(F_\mathrm {\mu }\) and the forces \(F_{\mathrm {ge,L}}\) and \(F_{\mathrm {ge,R}}\) for load redistribution provided by the semiactive guidance elements, see Sect. 5.4.8. A more detailed derivation of (4.4) is presented in [52, 55].
The mathematical model of the load-bearing structure is derived to capture the load path through the structure and to predict and evaluate the load redistribution capability in case of the semi-active structure in [52, 55, 56]. The derived mathematical model is subject to model simplifications, such as the assumption of lumped masses and rigid bodies. Furthermore, the spring-damper and the guidance elements are assumed to be massless, the model is assumed to be planar, and undesired friction is summarised in a single dissipative force \(F_\mathrm {\mu }\) [52, 57]. Although these model simplifications can be attributed to model uncertainty, they may contribute to data uncertainty, as introduced in Sects. 2.1 and 2.2, and can be, at least partly, considered via parameter calibration in the following.
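For illustration, (4.4) can be integrated numerically after conversion to first-order form. The matrices and the step force below are hypothetical placeholders; the actual M, D, K and F of the load-bearing structure follow from the geometry and parameters in [52, 55]:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical placeholder matrices for the three DOF z_A, z_B, phi.
M = np.diag([2.0, 5.0, 0.8])                 # masses m_A, m_B and inertia Theta_B
K = np.array([[ 2e4, -2e4, 0.0],
              [-2e4,  6e4, 0.0],
              [ 0.0,  0.0, 4e3]])            # stiffness matrix (assumed)
D = 0.002 * K                                # stiffness-proportional damping (assumed)
F = np.array([25.0, 0.0, 0.0])               # constant excitation force F_ex on m_A

Minv = np.linalg.inv(M)

def rhs(t, y):
    """First-order form of M r'' + D r' + K r = F."""
    r, v = y[:3], y[3:]
    a = Minv @ (F - D @ v - K @ r)
    return np.concatenate([v, a])

sol = solve_ivp(rhs, (0.0, 2.0), np.zeros(6), max_step=1e-3)
r_end = sol.y[:3, -1]   # displacements at the end of the simulation
```

For a constant force vector, the response settles to the static solution \(\boldsymbol{r} = \boldsymbol{K}^{-1}\boldsymbol{F}\), which serves as a quick consistency check of the implementation.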
Sensitivity analysis
The sensitivity of the mathematical model predictions to parameter variations is assessed by calculating the statistical significance of parameter variations with respect to the model prediction variation. Thus, the influence of the model parameters on a model prediction of interest is identified. We assess the statistical significance by an analysis of variance (ANOVA) using the coefficient of determination \(\mathrm {R}^2\) [9, 136]. The coefficient of determination of the model parameters \(\boldsymbol{\theta }\)
\[
\mathrm{R}^2(\boldsymbol{\theta}) = \frac{\mathrm{SST} - \mathrm{SSE}}{\mathrm{SST}}
\tag{4.5}
\]
calculates the proportion of the model output variability that can be ascribed to each calibration candidate parameter variation. The sum of squares total (SST) is the total model variability and the sum of squares error (SSE) is the unexplained model variability of the model parameters \(\boldsymbol{\theta }\). More details regarding SST and SSE can be found in [9, 36, 52, 136]. The three model parameters \(\boldsymbol{\theta }=[m_\mathrm {A}, b_\mathrm {S}, F_\mathrm {\mu }]\) turned out to be the most influential ones in the scope of the presented example and, therefore, are selected to be calibrated. Model parameters which are not selected for calibration are assumed to be deterministic. Their values are chosen, e.g. based on measurements or manufacturer information. The detailed sensitivity analysis is presented in [52, 57].
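A minimal sketch of this variance decomposition, assuming the model runs are grouped by parameter level; the toy output below is hypothetical and depends strongly on one parameter, so its coefficient of determination is close to one:

```python
import numpy as np

def r_squared(levels, outputs):
    """One-way ANOVA coefficient of determination R^2 = (SST - SSE)/SST.
    `levels` assigns each model run to a parameter level, `outputs` holds
    the corresponding scalar model predictions."""
    levels = np.asarray(levels)
    outputs = np.asarray(outputs, dtype=float)
    sst = np.sum((outputs - outputs.mean()) ** 2)   # total variability
    sse = sum(np.sum((outputs[levels == l] - outputs[levels == l].mean()) ** 2)
              for l in np.unique(levels))           # unexplained variability
    return (sst - sse) / sst

# Toy example (hypothetical): three levels of one parameter dominate the
# output, plus small noise from all other effects.
rng = np.random.default_rng(0)
levels = np.repeat([0, 1, 2], 50)
outputs = 10.0 * levels + rng.normal(0.0, 0.5, size=150)
r2 = r_squared(levels, outputs)   # close to 1: parameter is influential
```

Parameters whose \(\mathrm{R}^2\) stays near zero would be kept deterministic, as described above.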
Bayesian inference for model parameter calibration
Bayesian inference is used as a statistical calibration approach to calibrate the uncertain model parameters identified as most influential on the model prediction of interest in the previous paragraph. The aim is to statistically correlate the model predictions with the measurements by solving an inverse problem [118]. The relation between measurements and simulations according to [75, 87, 144] is given by
\[
Y^{\mathrm{E}}_n(t) = Y^{\mathrm{M}}_n(t,\boldsymbol{\theta}) + \varepsilon_n(t), \qquad n = 1,\dots,N,
\tag{4.6}
\]
where \(Y^{\mathrm {E}}_n(t)\) represents the experimental results and N is the number of measurements. The model prediction of interest \(Y^{\mathrm {M}}_n(t,\boldsymbol{\theta })\) is supplemented by the measurement error \(\varepsilon _n(t)\sim \mathcal {N}(0,\,\sigma ^2)\), which is assumed to be independent and identically normally distributed with zero mean and standard deviation \(\sigma \) [144]. Through the Bayesian inference approach, we can update current knowledge of the system and its model parameters with new information obtained from experimental tests. Thus, the parameter uncertainty is quantified and reduced by systematic inference of the posterior distribution [87, 144]. Using Bayes’ Theorem [13, 144], the posterior parameter distribution given the experimental results can be stated as
\[
P(\boldsymbol{\theta}, Y^{\mathrm{M}} \mid Y^{\mathrm{E}}) = \frac{L(Y^{\mathrm{E}} \mid \boldsymbol{\theta}, Y^{\mathrm{M}})\,P(\boldsymbol{\theta})}{P(Y^{\mathrm{E}})}
\tag{4.7}
\]
with the likelihood function \(L(Y^{\mathrm {E}}\mid \boldsymbol{\theta },Y^{\mathrm {M}})\) representing the probability of the experimental results \(Y^\mathrm {E}\) given a set of parameters \(\boldsymbol{\theta }\) for the model prediction of interest \(Y^\mathrm {M}\) [52, 144]. The total probability \(P(Y^{\mathrm {E}})\) is typically not computable with reasonable effort and only normalises the result anyway [65]. It is more practical to sample from a proportional relationship of the posterior parameter distribution.
The parameter space is explored using Markov Chain Monte Carlo (MCMC) sampling to approximate the posterior parameter distributions \(P(\boldsymbol{\theta },Y^{\mathrm {M}}\mid Y^{\mathrm {E}})\) by drawing multiple samples from them. That is, the histograms of the model parameters \(\boldsymbol{\theta }\) of all random samples produce the approximated posterior parameter distributions \(P(\boldsymbol{\theta },Y^{\mathrm {M}}\mid Y^{\mathrm {E}})\) in (4.7) [117, 144]. Figure 4.6 depicts the model parameter calibration results obtained from 25,000 MCMC runs. The parameter distributions are depicted as histograms representing approximations of the posterior parameter distributions for the three model parameters \(\boldsymbol{\theta } = [m_\mathrm {A}, b_\mathrm {S}, F_\mu ]\) [57]. Furthermore, the narrow histograms graphically depict the knowledge gain and the uncertainty reduction in the model parameter ranges. The model parameter ranges covering the 95% interpercentile range can be reduced by approximately 89% for the mass \(m_\mathrm {A}\), by approximately 82% for the viscous damping \(b_\mathrm {S}\) and by approximately 84% for the dissipative force \(F_\mu \) compared to the prior bounds represented by the limits of the x-axis in Fig. 4.6.
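The core of such a calibration can be sketched with a random-walk Metropolis sampler, one of the simplest MCMC variants; the sampler actually used in [117, 144] may differ. The toy model \(Y^{\mathrm{M}}(t,\theta)=\theta\,t\), the noise level and the prior bounds below are hypothetical stand-ins for the structural model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "measurements": Y^E = Y^M(theta_true) + noise, with the toy
# model Y^M(t, theta) = theta * t (hypothetical, for illustration only).
t = np.linspace(0.0, 1.0, 50)
theta_true, sigma = 2.5, 0.1
y_exp = theta_true * t + rng.normal(0.0, sigma, size=t.size)

def log_posterior(theta):
    """log L(Y^E | theta) + log P(theta), with a uniform prior on [0, 10]."""
    if not 0.0 <= theta <= 10.0:
        return -np.inf
    residual = y_exp - theta * t
    return -0.5 * np.sum((residual / sigma) ** 2)

# Random-walk Metropolis: samples from the posterior up to the (unknown)
# normalising constant P(Y^E), which cancels in the acceptance ratio.
n_steps, step = 25_000, 0.05
chain = np.empty(n_steps)
theta = 1.0
lp = log_posterior(theta)
for i in range(n_steps):
    prop = theta + step * rng.normal()
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis acceptance
        theta, lp = prop, lp_prop
    chain[i] = theta

posterior_samples = chain[5_000:]   # discard burn-in
```

The histogram of `posterior_samples` concentrates near the true parameter value and is much narrower than the prior range, mirroring the uncertainty reduction shown in Fig. 4.6.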
Comparison of the non-calibrated and calibrated model predictions
The effect of the statistical calibration procedure on the model prediction accuracy is exemplarily shown in Fig. 4.7 for a step load excitation \(F_\mathrm {ex}=25\,\mathrm {N}\). For both the non-calibrated and the calibrated model parameter ranges, 300 Monte Carlo (MC) simulation runs are conducted, and their envelopes are compared to the related support reaction force measurements \(F_\mathrm {L}\) and \(F_\mathrm {R}\), averaged over 10 measurement repetitions. The support reaction force measurements \(F_\mathrm {L}\) and \(F_\mathrm {R}\) are quite similar, as the load-bearing structure is undamaged. The non-calibrated model parameter ranges are uniformly distributed between the lower and upper prior bounds. The calibrated model parameter ranges are distributed according to the histograms in Fig. 4.6.
The simulations using calibrated model parameters tend to be closer to the measurement, with smaller envelopes. Even though both the calibrated and the non-calibrated envelopes widely encompass the measurements, the envelope area of the calibrated MC simulations is reduced significantly, by 75%, compared to the envelope area of the non-calibrated MC simulations [57], qualifying Bayesian inference as a suitable calibration method for the presented example.
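The envelope-area comparison can be reproduced schematically. The step responses below are hypothetical stand-ins for the simulated support reaction forces, with the calibrated parameter scatter assumed smaller than the non-calibrated one:

```python
import numpy as np

def envelope_area(runs, t):
    """Area enclosed by the pointwise min/max envelope of MC simulation runs."""
    upper = runs.max(axis=0)
    lower = runs.min(axis=0)
    return float(np.sum(upper - lower) * (t[1] - t[0]))

# Hypothetical step responses of a support reaction force: the calibrated
# parameter scatter (std 0.5 N) is assumed smaller than the non-calibrated
# one (std 2.0 N); the actual structural model is far more detailed [52, 55].
rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 200)
shape = 1.0 - np.exp(-5.0 * t)   # generic step-response shape
noncal = np.array([(25.0 + rng.normal(0.0, 2.0)) * shape for _ in range(300)])
cal = np.array([(25.0 + rng.normal(0.0, 0.5)) * shape for _ in range(300)])

reduction = 1.0 - envelope_area(cal, t) / envelope_area(noncal, t)
```

With these assumed scatter levels the envelope area shrinks by roughly three quarters, qualitatively matching the 75% reduction reported above.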
4.1.3 Model-Based Analysis of Uncertainty in Chained Machining Processes
In the production of components for technical systems, as described in Sect. 3.6, several processing steps are used. To generate the final product geometry, the processes are linked to a process chain, see Sect. 3.2. A widely used process chain in machining is roughing and finishing: a process with a high material removal rate is linked to one or more processes capable of generating the required machining quality. The production of high-quality bore holes is often realised by the process chain drilling–reaming, as shown in Fig. 4.8. The reaming process slightly enlarges the diameter of a bore hole in order to improve the surface quality and the circularity. Another frequently applied process chain is drilling–tapping. The desired thread geometry is created by removing material from the pre-drilled bore wall.
Machining processes are generally affected by data uncertainty in the form of incertitude, see Sect. 2.1.3. For this reason, functional parts of a component are always provided with interval-based tolerances, which ensure functionality in the overall system. Forms of geometrical uncertainty occurring in drilling process chains are shown in Fig. 4.9. The occurring uncertainty can be categorised according to its origin. One source of uncertainty is geometric deviations of the pilot hole, e.g. variations in diameter, straightness or cylindrical shape. Those deviations can be caused, e.g. by hardness deviations in the workpiece material. Another source is the process chain, in which positioning variations between the pilot hole and the following process step occur. Since uncertainty accumulates, but deviations can also neutralise each other, the uncertainty must be evaluated in the overall context. Axis misalignments of up to 0.03 mm occur due to the limited accuracy of the machine tool and reclamping operations. In industrial applications, e.g. reaming of valve guides in a cylinder head of a combustion engine, misalignments of up to 0.1 mm occur, which are caused by the joining process of the blanks [72]. A radial deviation of the pre-drilled bores is induced by oblique and uneven surfaces, incorrectly placed centring, cavities, transverse bores, blowholes and inclusions. The resulting radial forces lead to elastic bending of the pilot drill and thus to a sloped bore [125].
Additionally, uncertainty is caused by the final processes themselves, without influence from the preceding processing. A runout describes the radial displacement of the tool in the chuck. In industrial applications, the radial runout during reaming can be limited to 0.003 mm by adjustable adapters [72]. Earlier investigations on tapping indicate a radial runout of 0.03 mm [37]. During tapping, a synchronisation deviation between the translatory and the rotational axes often occurs. It is generated by a deviation between the axis movements when reversing the direction of rotation.
Different approaches are used to model machining processes in order to predict process variables or to improve process understanding. Zabel [166] differentiates models with regard to whether or not they are based on the finite element method (FEM). In the simulation of machining processes, which are characterised by material deformation as well as the dissolution of the material bond, considerable computing time is required when using FEM-based models.
Non-FEM-based modelling requires more detailed knowledge of the particular process but is less demanding with regard to computational effort. This allows for the simulation of the effects of uncertainty in less time. Basically, these approaches are divided into analytical and geometrical models. Analytical models use closed mathematical expressions to describe the considered phenomena [166]. Thus, models of chip formation, shear planes, temperature and process forces can be established. Geometric approaches determine the geometric quantities of machining processes, which are often used as input for analytical models. Mechanistic modelling represents a combination of geometric and analytical models. It is fundamentally based on the assumption that the process forces occurring during the machining process are proportional to the chip cross-section [85].
Examples of mechanistic models within the context of reaming operations and the occurring disturbance variables can be found in [21, 72]. For tapping, the first mechanistic models aiming at the analysis of the process and the influence of uncertainty were developed by Dogra [37].
The basic structure of the mechanistic process model based on [85] is shown in Fig. 4.10. The main input of the model is the original workpiece contour. We use a chip cross-section model, as shown in [1], to determine the chip sizes resulting from the tool geometry and position. The model is based on the intersection of 2D elements. The process forces are determined based on these chip sizes in an empirical force model. Summation of the forces caused by each of the tool’s cutting edges enables calculating the resulting force that leads to a radial deflection of the tool. In the model, the deflection of the tool is calculated, e.g. by a combination of dynamic and static modelling approaches [21]. When selecting suitable approaches, we consider the rotational speed of the tool and the prevailing geometric constraints. Due to the low rotational speeds in tapping, dynamic considerations can be largely neglected. However, the complex geometric boundary conditions must be mapped. Based on the tool’s radial deflection and its path, the tool position is determined. This information is used to specify the geometric intersection in the next calculation step.
In linked machining processes, individual process steps are linked via the generated geometry of the feature created in the previous step, e.g. the pilot hole geometry. This serves as the starting point for the subsequent machining process. For simultaneous processes, however, successful model linkage requires further connection points, as each step’s stability may affect the others. For the combined machining of e.g. valve guide and valve seat, the process forces of each tool step are taken into account in an overall system. Here a Jeffcott rotor with several masses is loaded with the resulting forces of each individual tool step, so that the mutual influences can be mapped [21].
In the discussed mechanistic model approaches, we represent the geometry of the pilot hole by individual plane elements arranged in a star shape around the rotational axis of the model, see Fig. 4.11. Therefore, each point on each of the planes has the same angle \(\varphi \) when viewed in cylindrical coordinates. In order to implement, for example, geometric deviations such as a sloped pilot bore or an axial offset, we modify the individual plane elements. For this purpose, we vary the radius of the pilot hole as a function of the angle \(\varphi \) and the cutting depth z, as shown in Fig. 4.11a. A similar procedure can be used for the tool, since it is also mapped using plane elements. By shifting the radius of these elements depending on the angle, we can also map runout errors, see Fig. 4.11b.
For mapping deviations in the synchronisation or, more generally, deviations in the tool path, we alter the displacement of the tool after each calculation step. In addition, we can model tooth chipping by altering the geometry or by completely removing a plane element of the tooth. Further disturbance variables can be implemented externally to the geometric intersection model. For example, hardness gradients in the component can be considered by manipulating the used force model.
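The plane-element parametrisation of the pilot hole can be sketched as follows; all numerical values (nominal radius, offset, slope) are hypothetical, and the shifted bore axis is approximated to first order for small eccentricities:

```python
import numpy as np

def pilot_hole_radius(phi, z, r0=5.0, axial_offset=0.03, slope=0.001,
                      offset_dir=0.0):
    """Radius of a plane element of the pilot hole at angle phi and depth z.
    Hypothetical values: r0 is the nominal bore radius in mm, axial_offset
    shifts the bore axis, slope tilts it over the depth. For small
    eccentricities e << r0, the shifted circle is approximated to first
    order by r0 + e * cos(phi - offset_dir)."""
    e = axial_offset + slope * z   # eccentricity of the bore axis at depth z
    return r0 + e * np.cos(phi - offset_dir)

# 36 plane elements arranged in a star shape around the rotational axis
phi = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False)
r_top = pilot_hole_radius(phi, z=0.0)       # bore entry
r_bottom = pilot_hole_radius(phi, z=30.0)   # at 30 mm depth
```

The same parametrisation applied to the tool's plane elements maps runout errors, as described for Fig. 4.11b.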
Our model approaches for tapping and reaming show that the axial offset between the pilot hole and the subsequent process step is the disturbance variable of greatest influence [72]. Due to the lack of radial guidance during tool immersion, radial forces can lead to tool misalignment and subsequently to tool inclination. As a result of the radial guidance of the tool after the immersion, its inclination over the drilling depth remains almost constant [2].
Uncertainty has many causes and is an unavoidable part of any machining process. Modelling approaches can describe and evaluate the uncertainty occurring in chained machining processes. One such approach is the mechanistic model approach, which is suitable for analysing the uncertainty in process chains like drilling–reaming and drilling–tapping. With the help of these models, we can deepen the process understanding and investigate the accumulation of uncertainty. Thus, we may derive recommendations for the design of the process chain and the individual processes contained therein. This finally facilitates the mastering of uncertainty in chained machining processes.
4.2 Data-Induced Conflicts
Active systems, as presented in Sects. 3.4 and 5.4, have proven their effectiveness in mastering uncertainty. But in turn, they rely completely on the veracity of data. In many applications, the fusion of redundant data has therefore become common practice. However, if the confidence intervals of data from two or more sources do not overlap, this leads to so-called data-induced conflicts, which cannot be resolved with classical fusion techniques. Such data-induced conflicts reveal ignored model or data uncertainty, see Sect. 2.2. In the case of real-time controls, they require an instantaneous decision on which source to trust. Data-induced conflicts aid in uncertainty identification and are therefore a valuable tool in mastering uncertainty, see Sect. 3.3.
In the past, unresolved and ignored data-induced conflicts led to several severe incidents. According to the European Space Agency (ESA), the crash of the ExoMars Schiaparelli probe on 14 March 2017 began when the altitude was calculated from a saturated inertial measurement unit (IMU) signal, which resulted in a large negative altitude. A conflict with the radar Doppler altimeter unit was detected. Since no other verification methods had been implemented at that time, the true value could not be determined. Even though the conflicting IMU had been detected at that moment, this information was not passed to other subsystems and thus caused a chain of fatal decisions during the descent, resulting in a crash at 150 m/s [152]. In the following two years, the Boeing 737 MAX repeatedly encountered problems with the angle of attack (AoA) sensor. One of the malfunctions manifested itself in an ignored conflict of 20\(^{\circ }\) between the left and right sensors [151]. Hence, the control system anticipated a stall and automatically pushed the nose down, which caused the fully manned plane to crash.
These incidents show that a general framework for verifying data as well as identifying and isolating a cause is required. Often a multitude of metadata, which include models, parameters and sensors, are involved in the generation of data. We refer to the general set of metadata as a data source. A data source consisting of a model and a sensor is also called a soft sensor, see Sect. 1.4. To master data-induced conflicts, the metadata involved and their uncertainty have to be taken into account. For example, statistical dependency between the metadata of different data sources invalidates majority decisions used by voting algorithms. However, the use of systematic redundancy allows the identification of the cause of conflicts.
In Sect. 4.2.1 we present and evaluate a method to establish systematic analytical redundancy and make it available for monitoring. Linking the metadata with the actual data allows us to link occurring conflicts with their cause. Two examples outline how to make use of physical models to infer specific causes in Sect. 4.2.2 and how to scale the method to systems where a multitude of conflicts might originate from a single fault in Sect. 4.2.3.
4.2.1 Dealing with Data-Induced Conflicts in Technical Systems
In the product life cycle phases of production and usage of modern technical systems, see Sect. 3.1, data is increasingly being generated redundantly. Redundancy is not necessarily physical redundancy of the sensor, but can also be established in the form of analytical redundancy, where measured values of the system are converted into the desired values via models. As introduced in Sect. 1.4, the combination of a sensor with a model to estimate target quantities using easily accessible auxiliary variables is called a soft sensor [48, 81]. An overview of the use of soft sensors can be found in a monograph by Fortuna et al. [48].
Redundancy increases the availability of information but may lead to contradictory statements and conflicts. These conflicts can be used to identify the uncertainty in the information about the system and, therefore, contribute to mastering uncertainty, see Sect. 3.3.
This section introduces the concept of data-induced conflicts, discusses the advantages and challenges, and presents a method for dealing with data-induced conflicts in technical systems. The method is a slightly extended version of [70].
Data-induced conflicts
Contradictory values of different redundant data sources are in conflict when their confidence intervals do not overlap. These so-called data-induced conflicts can be attributed to the model, see Sect. 2.2, to the parameters of the model, see Sect. 2.1, or to sensor errors; they are also a symptom of a lack of knowledge, see Sect. 1.4. If uncertainty is not sufficiently taken into account or if too few or overly uncertain data sources are considered, these conflicts remain unnoticed and thus unresolved. Figure 4.12 illustrates three redundant data sources for a target quantity with their respective uncertainty characterised by the confidence intervals, and a data-induced conflict between source A and the consensus of sources B and C. If two sources are in consensus, their confidence intervals overlap.
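The overlap criterion can be stated compactly. The following is a minimal sketch, assuming each data source is summarised by a mean and a standard deviation and using the coverage factor k = 1.96 of a 95% confidence interval; the function names are ours.

```python
def confidence_interval(mean, std, k=1.96):
    """Confidence interval [mu - k*sigma, mu + k*sigma]; k = 1.96 for 95%."""
    return (mean - k * std, mean + k * std)

def in_conflict(source_a, source_b, k=1.96):
    """Two data sources (mean, std) are in a data-induced conflict when
    their confidence intervals do not overlap."""
    lo_a, hi_a = confidence_interval(*source_a, k)
    lo_b, hi_b = confidence_interval(*source_b, k)
    return hi_a < lo_b or hi_b < lo_a

# Source A contradicts the consensus of B and C (cf. Fig. 4.12).
A, B, C = (10.0, 0.2), (12.0, 0.3), (12.3, 0.4)
assert in_conflict(A, B) and in_conflict(A, C)
assert not in_conflict(B, C)
```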
Different methods have been developed to deal with conflicting data sources. On the one hand, conflicts between data sources can be seen as part of erroneous system behaviour. Thus, different methods use conflicting data for fault detection and fault isolation [76, 80]. On the other hand, conflicts can also be seen as part of the system’s normal behaviour. In that case, data from multiple sources can be used to reduce uncertainty and to improve the overall level of data quality. Simple methods for data reconciliation of conflicting sensor data are voters [46]. More elaborate fusion methods are the Bayes method [27, 97], the Dempster–Shafer method [89, 165], and heuristic methods [94, 149]. In the process industry, reconciliation methods are implemented for the estimation of process state data. The goal is to fuse the conflicting data, i.e. reconcile the state of the system with the conservation laws of mass and energy. For this, the conservation laws and the measured values have to form an overdetermined equation system. With a quadratic minimisation method, the system states are adjusted until the values satisfy the conservation laws [76].
Method for dealing with data-induced conflicts
The methods mentioned above for dealing with conflicting data sources either fail to differentiate between sensor and model or do not take uncertainty into account. Therefore, we propose a methodology to support interpretation and decision-making processes in case of data-induced conflicts using the approach of redundancy via soft sensors. Through consideration of the relationships between sensors, models and information about the system, the cause of the data-induced conflicts can be isolated. For the proposed method, the following two points have to be addressed:

1.
Conflicts emerge when the confidence intervals of redundant data sources do not overlap. Hence, the uncertainty in an interconnected system has to be propagated, see Sect. 3.2. How can this be done efficiently in an environment with many sensors and models?

2.
The different data sources, i.e. soft sensors, may share sensors and models. How can the dependencies between different sensors and models be used in decision-making processes?
The proposed approach provides a methodology to identify lack of knowledge in the interpretation of conflicting sensor data by differentiating data sources into models and sensors and by spanning the investigation from the redundant observation of a single value to the interconnection between models and sensors throughout the system, see Fig. 4.13. Analytical redundancy via soft sensors is established by linking already existing, spatially distributed sensors with models to increase the availability of information about the desired values.
Each redundant data source \(Q_{i}\), cf. Sect. 1.4, is associated with a given level of uncertainty due to precision and accuracy of the sensor, as well as model uncertainty, see Sect. 2.2, which needs to be identified and propagated into the target quantity. The first step (i) is to examine whether each data source \(Q_{i}\) is within certain boundary conditions to ensure physical plausibility. Those limits need to be determined individually based on the respective metadata, such as calibration data and known characteristics.
On this basis, the redundant sources are compared among themselves to detect any possible conflicts in step (ii). In case of data-induced conflicts, a method for the compact visualisation of dependencies (iii) is provided to narrow down whether the conflict is caused by a sensor or by model uncertainty. Finally, the provided information supports the process of interpretation of sensor data (iv) and gives evidence on which sources to trust. In the following, each step of the systematic approach is presented in detail.

(i)
Plausibility. For checking the plausibility of data sources, sufficient metadata about the limit values derived from sensor characteristics, physical properties and technological limits are needed. With regard to the limits, there is a trade-off between sensitivity to erroneous behaviour and normal fluctuations [80]. In the case of a data source exceeding the prescribed boundary condition, the respective sensor or model can be excluded from further considerations in advance, and cross-checks with other redundant sources become unnecessary.

(ii)
Detection of Conflicts. Data-induced conflicts can be attributed to sensor errors (technical failure or application errors) or to model uncertainty. Model errors can be caused either by inadmissible simplifications or by changes of the underlying physical system, e.g. due to wear, deformations or failure of components, see Sects. 2.2 and 4.2.2. For the detection of data-induced conflicts, the uncertainty of sensors has to be considered. Soft sensors in particular may have several sensor inputs from which the target quantity is calculated. Therefore, the sensor uncertainty has to be propagated through the model. For the propagation of uncertainty, standard methods, e.g. [83], can be used. The technical implementation of error propagation and the necessary calculation of derivatives is done with automatic differentiation (AD). In comparison to numerical methods, AD has the benefit of calculating the exact derivative. In addition, data-driven models in the form of software code can be assigned a derivative with the help of AD [12, 109, 159].

(iii)
Visualisation. Especially in the case of data-induced conflicts, knowledge about possible dependencies between data sources is important. Erroneous sensors or models and the consequences of the errors for other values have to be found. For downstream interpretation, it is important to depict the relationship between the soft sensors and their inputs in a human- and machine-readable form. Therefore, a method for the visualisation of conflict scenarios has been developed in order to clearly depict interdependencies between soft sensors (sensors and models) throughout the technical system, see Sect. 4.2.3.

(iv)
Interpretation. The interpretation of the visualised dependencies is done by reasoning. If a particular data source is only involved in observations revealing no inconsistencies, the confidence in the respective sensor/model increases. If, on the other hand, a particular input is involved in one or many conflicting source values, it is suspected to be the cause of the conflict. For automation of the reasoning process, various classification methods can be used, e.g. pattern recognition, reasoning methods or neural networks [80]. Data sources deemed trustworthy can then be used in a fusion process with the methods mentioned above. Data sources suspected of causing the conflict are excluded.
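The uncertainty propagation required in step (ii) can be sketched with a minimal forward-mode automatic differentiation. The dual-number class and the first-order Gaussian propagation formula below are a textbook construction under our own naming, not the project's actual implementation.

```python
import math

class Dual:
    """Minimal forward-mode automatic differentiation (dual numbers)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def _lift(self, other):
        return other if isinstance(other, Dual) else Dual(other)
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = self._lift(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def propagate(model, inputs, sigmas):
    """First-order Gaussian error propagation:
    sigma_y^2 = sum_i (df/dx_i)^2 * sigma_i^2,
    with each partial derivative obtained by one forward-AD pass."""
    var = 0.0
    for i, s in enumerate(sigmas):
        args = [Dual(x, 1.0 if j == i else 0.0) for j, x in enumerate(inputs)]
        var += model(*args).dot ** 2 * s ** 2
    return math.sqrt(var)

# Soft sensor y = a * b with input standard deviations sigma_a, sigma_b.
sigma_y = propagate(lambda a, b: a * b, [3.0, 2.0], [0.1, 0.2])
assert abs(sigma_y - math.sqrt((2.0 * 0.1)**2 + (3.0 * 0.2)**2)) < 1e-12
```

In contrast to finite differences, the derivative carried in `dot` is exact up to floating-point rounding, which is the benefit of AD mentioned in step (ii).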
Our methodology (i)–(iv) reinforces redundancy and thereby exposes data-induced conflicts. Through the consideration of physical metadata and sensor data with their respective uncertainty, conflicts can be identified and the data quality increases. Furthermore, our method provides information about the relationships between sensors, models and the physical system to identify the cause of the conflict for human and machine interpretation. Section 4.2.3 shows the application of the outlined methodology to an experiment series conducted at the Modular Active Spring-Damper System introduced in Sect. 3.6, revealing data-induced conflicts.
4.2.2 Data-Induced Conflicts for Wear Detection in Hydraulic Systems
Due to propagation and chain reactions of contamination [99], hydraulic systems are particularly sensitive to wear and contamination. Therefore, it is of interest to detect wear in early stages during the operation of a system in order to avoid high costs due to unplanned maintenance. In the context of Sect. 4.2, this section serves to demonstrate the identification of ignorance, as introduced in Sect. 1.3, in the form of undetected wear by means of data-induced conflicts. As shown in Sect. 4.2.1, analytical redundancy can be used to learn about sensor or model errors by observing data-induced conflicts.
Wear itself is not directly measurable during operation but manifests itself in changed system characteristics. Different methods exist to detect and isolate the changing system characteristics [76, 80]. In this section, we demonstrate the use of soft sensors to determine wear via data-induced conflicts between redundantly calculated flow rates, as shown in [141]. For predictive maintenance, this approach is promising in terms of cost-efficiency, since existing sensors and models are used. The approach of this example is rather simple with only two data sources (soft sensors for pump and fluid system) for one calculated quantity (flow rate). A more complicated system can be found in Sect. 4.2.3. In the following, first the two data sources are presented. Then wear detection via data-induced conflicts is discussed.
Analytical redundancy by means of soft sensors
To demonstrate analytical redundancy, the generic fluid system in Fig. 4.14, consisting of a positive displacement pump and a valve, is considered. The system acts as an abstraction of real fluid systems, since the hydraulic resistance of a generic system is reduced to the hydraulic resistance of a valve. As indicated in Fig. 4.14, both components have an assigned soft sensor to determine the flow rate through the respective component. The purpose of both soft sensors, for the pump and the valve, is to generate redundant data of the volume flow rate in order to draw conclusions about the wear condition of the system.
For the pump, an internal leakage model is used. The pump’s flow rate \(Q_\mathrm {p}\) is given by the ideal flow rate, determined by the displacement volume V and the rotational speed n, diminished by the internal leakage \(Q_\mathrm {L}\)
$$\begin{aligned} Q_\mathrm {p} = Vn - Q_\mathrm {L}, \end{aligned}$$
where the gap losses \(Q_\mathrm {L}\) are modelled with a semi-empirical dimensionless approach [140].
For the valve soft sensor, the definition of the \(K_\mathrm {v}\) value for valves is used as a model. The valve’s flow rate \(Q_\mathrm {v}\) is given by
$$\begin{aligned} Q_\mathrm {v} = K_\mathrm {v} \sqrt{\frac{\Delta p_\mathrm {v}}{\Delta p_0}\,\frac{\varrho _0}{\varrho }}, \end{aligned}$$
where \(\Delta p_0 := 1 \ \mathrm {bar}\) and \(\varrho _0 := 1000\,\mathrm {kg/m^3}\). \(\Delta p_\mathrm {v} \) is the pressure difference over the valve and \(\varrho \) is the fluid density. The \(K_\mathrm {v}\) value is calibrated in dependence on the valve opening degree \(\alpha \). The uncertainty of both soft sensors is determined with error propagation. Both soft sensors depict an unworn state of the system.
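The two soft-sensor models can be sketched as follows. Variable names and SI units are our assumptions; in particular, the semi-empirical leakage model of [140] is represented here only by a precomputed value passed in as `Q_leak`, not reimplemented.

```python
import math

RHO_0 = 1000.0     # reference density rho_0 in kg/m^3
DELTA_P_0 = 1.0e5  # reference pressure difference, 1 bar in Pa

def pump_flow(V, n, Q_leak):
    """Pump soft sensor: ideal flow rate V*n diminished by the
    internal leakage Q_leak (here assumed precomputed)."""
    return V * n - Q_leak

def valve_flow(K_v, delta_p, rho):
    """Valve soft sensor based on the K_v definition:
    Q_v = K_v * sqrt((delta_p / delta_p_0) * (rho_0 / rho))."""
    return K_v * math.sqrt((delta_p / DELTA_P_0) * (RHO_0 / rho))

# At delta_p = 1 bar and rho = 1000 kg/m^3 the valve delivers exactly K_v.
assert valve_flow(2.5, 1.0e5, 1000.0) == 2.5
```

In an unworn system, both functions should agree on the volume flow rate at the operating point; their disagreement is exploited for wear detection below.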
Identification of wear
The fluid system in Fig. 4.14, consisting of a positive displacement pump and a valve, can be described by the flow rate–pressure characteristics in Fig. 4.15. For the valve, the flow rate increases with increasing pressure; for the pump, the flow rate decreases. The intersection of both curves is the operating point of the hydraulic system, where pressure and flow rate of both components coincide. When all components are new, both soft sensors are assumed to depict the relevant reality and therefore show the same flow rate.
Now imagine a worn valve. With wear, the cross-sectional area through which the flow passes increases. At the same pressure level, more fluid can pass the valve and consequently the operating point of the system changes. Since the soft sensors depict the unworn state, they do not recognise this change and consequently deliver contradictory flow rate measurements. For a worn pump, similar considerations hold.
The contradictory measurements are in conflict if their uncertainty intervals do not overlap, see Sect. 4.2.1. The conflict can arise due to a sensor breakdown or a model error. In both cases, the data-induced conflict represents ignorance. A sensor breakdown can normally be excluded with limit checking, as presented in Sect. 4.2.1. We therefore concentrate on model error or, in this case, a change of the component characteristics.
In order to review the presented soft sensor approach, the following two questions must be answered:

1.
Is the influence of wear greater than the soft sensor uncertainty?

2.
Are two components sufficient to identify if wear occurs and where?
To answer the first question, we carried out an experimental investigation of a worn valve [127]. The study revealed that the resulting flow rate changes from wear exceed the soft sensor uncertainty. This is a data-induced conflict indicating that wear occurs.
With regard to the second question, we carried out measurements with a test rig [69, 71]. Wear changes the cross-sectional area of the valve, and a worn positive displacement pump has larger gaps through which fluid can flow back. Consequently, we were able to simulate wear on the test rig by installing bypass flows for the pump and the valve. This offers the benefit of easily changing wear conditions without actually destroying the components. The principle of the test rig used can be found in Fig. 4.16. The studies show that identification of wear is possible via data-induced conflicts when the flow rate outputs of the two soft sensors differ by about 6%.
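The resulting decision rule can be sketched as follows. This is our illustrative reading of the finding above: we assume the roughly 6% deviation is applied as a relative threshold between the two soft-sensor outputs, normalised by the larger value.

```python
def wear_suspected(Q_pump, Q_valve, threshold=0.06):
    """Flag wear when the two redundantly calculated flow rates differ
    by more than the given relative threshold (about 6 % in the cited
    studies).  The normalisation by the larger value is our assumption."""
    return abs(Q_pump - Q_valve) / max(abs(Q_pump), abs(Q_valve)) > threshold

assert not wear_suspected(10.0, 9.8)   # 2 % deviation: within uncertainty
assert wear_suspected(10.0, 9.2)       # 8 % deviation: conflict, wear suspected
```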
From the studies, it can be concluded that the localisation of wear is not possible with only two soft sensors. According to Fig. 4.15, the possible operating range of the hydraulic system for various wear conditions is always below the characteristic curves of components in a new condition (light grey area). Therefore, the calculated volume flow rate of the valve soft sensor \(Q_\mathrm {v}\) is always lower than or equal to the calculated volume flow of the pump \(Q_\mathrm {p}\). This is independent of the wear condition of both components. For this reason, it is not possible to deduce the component subject to wear from these two calculated volume flows alone; the measuring system is underdetermined. However, the history of the measurement data, additional information in the two soft sensors or additional soft sensors would make it possible to deduce the worn component.
All in all, data-induced conflicts help in detecting ignored wear in hydraulic systems and can provide a measure for predictive maintenance. The localisation of the worn component is not possible with only two soft sensors. Additional system information in the form of additional soft sensors is needed for this purpose, see Sect. 4.2.3.
4.2.3 Fault Detection in a Structural System
Structural systems use mechanical elements to transfer forces along a path. The force transmission can be represented by mathematical models used to optimise the design, see Sect. 6.1.1. Combining such models with sensors allows quantities such as forces in spatially distant elements to be determined and can therefore be used to estimate states that are infeasible to measure.
Applications lie in the field of structural health monitoring of aeroplanes to detect defects in fuselage panels [78], wing panels [167], and their connecting elements [114]. The basis for these technologies are data obtained from integrated sensors and implemented models, which are assumed to be reliable. However, unreliable data due to data uncertainty have led to numerous incidents in the past [79], as mentioned in Sect. 4.2.
Today’s methods of online data validation are often based on the mere comparison of a few data sources. Information about the sensors and models involved as well as the results of these comparisons are not centrally fed back and forwarded to other subsystems. In the case of the ExoMars incident, faulty gyroscope data were detected in one subsystem but nevertheless reused at a later point in time [152].
Besides the mentioned applications, soft sensors also make it possible to generate redundant data throughout the system and to set up a sensor network. As a result, data sources can be continuously checked for being in conflict with each other. But when it comes to linking larger numbers of sensors and models, a single fault may cause numerous conflicts. To distinguish between possible faults, conflicts and their corresponding links in the network have to be visualised and analysed.
Both sensor data and models, which together form the soft sensor, are afflicted with uncertainty. The uncertainty of sensor data expresses itself in an unknown distribution, which is accounted for by means of confidence intervals. Furthermore, the models used to describe the behaviour of mechanical components often contain many assumptions and simplifications. Therefore, the uncertainty of a soft sensor can be classified as incertitude, as described in Chap. 2.
In the following, the method presented in Sect. 4.2.1 is applied to a complex technical system, which is introduced briefly as a sensor network. We describe the subsequent steps of our method from conflict detection to visualisation regarding a real sensor fault case. As introduced in Sect. 3.1, uncertainty occurs over different phases of the product life cycle. The presented and evaluated method is applicable in the system’s design phase to establish analytical redundancy as well as in the system’s usage phase by the visualisation of data-induced conflicts to master uncertainty.
Scaling analytical redundancy to sensor networks
The method presented in Sect. 4.2.1 allows us to analyse a large number of comparative quantities from analytically redundant data sources. These data sources consist of sensors only or of sensors linked with models, thus, soft sensors. Thereby, it is taken into account that the involved data sources can be afflicted with data and model uncertainty, see Sects. 2.1 and 2.2. The method for dealing with data-induced conflicts is applied to the Modular Active Spring-Damper System (German acronym: MAFDS) presented in Sect. 3.6.1, which represents a structural system with multiple integrated sensors. The MAFDS consists of one upper and one lower truss structure, which are connected to each other via guidance elements and a spring-damper. The two truss structures in turn consist of individual beams, which are assembled to form tetrahedral elements shown in Fig. 4.17. Further, the MAFDS is equipped with several sensors, such as force transducers \(F_{\mathrm {S}j}\) and \(F_{\mathrm {P}j}\) as well as strain gauges in the upper, \(\epsilon _{j}^\mathrm {U}\), and lower, \(\epsilon _{j}^\mathrm {L}\), truss structure. More details on the integrated sensors in the MAFDS are given in Sect. 3.6.1. To establish analytical redundancy, the measured sensor data are converted by means of models to the desired redundant quantity, denoted here as comparative quantity. As mentioned above, the linkage of measured data obtained by an arbitrary sensor with an analytical model to gain another quantity represents a soft sensor. In case of the MAFDS, an example of such a soft sensor is the linkage of the beam strain gauge \(\epsilon _{15}^\mathrm {U}\) in Fig. 4.18a with a mechanical model m for the conversion of the measured beam strains to beam forces; here we assume linear-elastic beam behaviour using Hooke’s law in Fig. 4.18b. The uncertainty of the parameters involved as well as the measurement uncertainty have been taken into account to estimate the confidence interval.
Based on the calculated beam forces, analytical redundancy can be established at the fixed support points, e.g. fixed support point 1 (FSP1) in Fig. 4.17. The forces measured by \(F_{\mathrm {P}1}\) at FSP1 must be in equilibrium with the beam forces \(F_{1}\), \(F_{11}\), and \(F_{15}\), which are calculated via the beam strain gauges \(\varepsilon _{1}^{\mathrm {U}}\), \(\varepsilon _{11}^{\mathrm {U}}\) and \(\varepsilon _{15}^{\mathrm {U}}\) of the corresponding beams B1, B11 and B15, labelled in Fig. 4.17. It should be noted that the beam forces are converted into the components of the global coordinate system, which corresponds to the coordinate system of the piezoelectric based force transducers \(F_{\mathrm {P}i}\) at the three fixed support points.
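The equilibrium check at a fixed support point can be sketched as a componentwise residual. The function name and the sign convention (the measured support force balancing the sum of the beam forces in the global coordinate system) are our illustrative assumptions.

```python
def equilibrium_residual(F_support, beam_forces):
    """Componentwise residual of the force equilibrium at a fixed support:
    the measured support force must balance the sum of the beam forces
    (all vectors expressed in the global coordinate system).  A residual
    outside the propagated uncertainty indicates a data-induced conflict."""
    total = [sum(f[c] for f in beam_forces) for c in range(3)]
    return [F_support[c] + total[c] for c in range(3)]

# Beam forces that balance the support force exactly give zero residual.
F_P1 = (0.0, 0.0, -300.0)
beams = [(0.0, 0.0, 100.0), (0.0, 0.0, 100.0), (0.0, 0.0, 100.0)]
assert equilibrium_residual(F_P1, beams) == [0.0, 0.0, 0.0]
```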
Interpretation of datainduced conflicts in sensor networks
To investigate the method presented in Sect. 4.2.1, an erroneous data set of a drop test at the MAFDS is used. In this case, the triaxial force transducer \(F_{\mathrm {P}1}\) at FSP1 was connected incorrectly, so that the measuring channels for the y- and z-components of the measured forces were switched, thus resulting in conflicts among multiple data sources \(Q_i\). In the first step, as shown in the proposed method according to Fig. 4.13, the plausibility of the sensor/transducer signals is verified by checking whether the measured data are within a reasonable range respecting the specific properties of the sensor, such as measuring ranges, as well as the structural system. After a successful plausibility check, the conflict detection for the data sources \(Q_i\) is continued.
Assuming the observed quantities to follow a Gaussian normal distribution, the measured values lie within a confidence interval around the mean value of all measurements with a certain probability. The uncertainty of sensors and, in this case, mechanical models is propagated through each data source \(Q_i\) with Gaussian error propagation, which is implemented using automatic differentiation, see Sect. 4.2.1. The level of confidence is set to 95%. To determine data-induced conflicts, the overlap of the confidence intervals of the data sources is regarded. A confidence interval is defined as the set \([\mu - k\sigma ,\mu + k\sigma ]\), where \(\mu \) is the expected value and \(\sigma \) the standard deviation, while the coverage factor k determines the considered amount of the probability space. If the confidence intervals match completely, there is no data-induced conflict; in turn, an absolute data-induced conflict exists if the confidence intervals do not overlap at all. As a measure of the severity of a data-induced conflict between two data sources \(Q_i\) and \(Q_j\), a discrepancy \(d_{ij}\) is defined in Eq. (4.10)
$$\begin{aligned} d_{ij} = \frac{|\mu _i - \mu _j|}{\sigma _i + \sigma _j}. \end{aligned}$$
(4.10)
The discrepancy \(d_{ij}\) is calculated for every sample of the two data sources \(Q_i\) and \(Q_j\) at the equidistant discrete-time instants \(t_n=\frac{n}{f_\text {s}}\), where \(f_\text {s}\) is the sample rate of data acquisition. The redundant sources for the force equilibrium \(\text {FE}_{1}\) of FSP1 are represented in Fig. 4.19a–c, in which the force components in each direction are plotted over time.
The plots in Fig. 4.19d–f show the discrepancy between the two data sources over time for each force component. To define whether there is a conflict or not, we introduced a measure, denoted as degree of conflict (DOC), \(c_{\text {ij}} = \overline{d_{\text {ij}}(t)}\), which is the time-averaged discrepancy over a time interval of interest \([t_0, t_{1}]\) that has to be defined individually for each scenario. If the calculated DOC is greater than a predefined threshold value \(c_{\text {thresh}}\), a data-induced conflict is detected. In this case, \(c_{\text {thresh}}\) has been set to 1.96, which is equal to the coverage factor \(k=1.96\) of a 95% confidence interval. That means that the time-averaged confidence intervals of the two data sources are exactly adjoining (\(\overline{\mu _{i}} - k\overline{\sigma _{i}} = \overline{\mu _{j}} + k\overline{\sigma _{j}}\)).
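The discrepancy and the DOC can be sketched as follows. We assume the discrepancy scales the distance of the means by the sum of the standard deviations, which is consistent with the adjoining-interval condition stated above: d = k exactly when the k-sigma confidence intervals just touch.

```python
def discrepancy(mu_i, sigma_i, mu_j, sigma_j):
    """Discrepancy between two data sources: distance of the means
    scaled by the sum of the standard deviations (our assumed form;
    d = k means the k-sigma confidence intervals are exactly adjoining)."""
    return abs(mu_i - mu_j) / (sigma_i + sigma_j)

def degree_of_conflict(d_series):
    """Degree of conflict (DOC): time-averaged discrepancy over the
    samples in the interval of interest [t0, t1]."""
    return sum(d_series) / len(d_series)

C_THRESH = 1.96  # coverage factor k of a 95 % confidence interval

d = [discrepancy(10.0, 0.2, 11.0, 0.3),   # adjoining at k = 2: d = 1.0/0.5 = 2.0
     discrepancy(9.9, 0.2, 11.0, 0.3)]    # d = 1.1/0.5 = 2.2
assert degree_of_conflict(d) > C_THRESH   # data-induced conflict detected
```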
For \(\text {FE}_{1}\), a data-induced conflict emerges for the y- and z-directions. The result of the experiment shown in Fig. 4.19 is illustrated graphically by the conflict matrix shown in Fig. 4.20. The displayed vertical bars symbolise the different data sources \(Q_{i}^{(m)}\) for the mth comparative quantity based on different sensors and models, which are listed on the left side. Two or more data sources are used to determine one comparative quantity, displayed at the bottom of Fig. 4.20. As described, these redundant values are examined for conflicts. Here, conflicts were detected in two of the evaluated comparative quantities: ‘support force 1’, which is based on the equilibrium of forces \(\text {FE}_{1}\), as described above, and ‘force symmetry’, which includes the condition that, in the case of a vertical impact, the forces in the support points 1–3 must act point-symmetrically around the centre axis due to geometric considerations. For the sake of simplicity, only sensors are shown in Fig. 4.20, but the described procedure can be extended analogously with the models contained in the applied soft sensors.
For ‘support force 1’, the DOC \(c_{ij}\) of the three data sources are illustrated in the conflict submatrix \(C^{(III)}\) above. Comparisons in this submatrix that exhibit a data-induced conflict are marked in black, as are the bars that represent sources involved in a conflict and the comparative quantities estimated by these sources. Other comparative quantities, which contain the same sensors, are highlighted in grey. An important benefit of the shown representation is the marking of sensors that are involved in a conflict as potentially faulty, so that this information can be considered elsewhere, which in the case of ExoMars, as mentioned above, could have supported the identification of a faulty data source.
Sensor \(F_{\mathrm {P}1}\) is the common component of both conflicting scenarios, thus it is advisable to check this sensor for a variety of sensor errors. To quantify this suspicion, a conflict rate \(R_{C,n}\) is introduced, which gives the operator of the system a hint on which sensor to check first. The \(R_{C,n}\) of sensor n is the number of conflicts in which the sensor is involved in relation to all comparisons of sensor n with other data sources. To illustrate this, it is shown exemplarily for \(\varepsilon _1^{\mathrm {U}}\). This sensor is contained in one source for the comparative quantity ‘contraction’, where it is compared with two other sources without a conflict, and in one source for the comparative quantity ‘support force 1’, facing one other conflicting data source, so its conflict rate is 33%. If the measuring channels for the y- and z-components of the force transducer at support 1, switched in the described case, were connected correctly, there would no longer be any conflict in the comparative quantities considered.
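The conflict rate is a simple ratio; the counting below follows the worked example above for the strain gauge (one conflicting comparison out of three in total), with our own function name.

```python
def conflict_rate(n_conflicts, n_comparisons):
    """Conflict rate R_C of a sensor: number of conflicts it is involved
    in, relative to all comparisons of that sensor with other data
    sources.  Higher values point the operator to the sensor to check first."""
    return n_conflicts / n_comparisons

# Strain gauge example: two conflict-free comparisons in 'contraction',
# one conflicting comparison in 'support force 1'  ->  1 of 3, i.e. 33 %.
assert round(conflict_rate(1, 3) * 100) == 33
```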
Conclusion
Misinterpretations can occur during the processing of sensor data due to uncertainty, especially ignorance. Data-induced conflicts occur when physical quantities used for monitoring are recorded redundantly and contradict each other. These conflicts can be used specifically to detect faults. For this purpose, we developed a method which is based on differentiating data sources into models and sensors and on linking them in such a way that relevant quantities are deliberately recorded redundantly.
The proposed approach was applied to the MAFDS, the structural system presented in Sect. 3.6.1. An information model was built that contains all relevant metadata of the underlying sensor system, such as quantified uncertainty, as well as the physical models used. Automatic differentiation was implemented to propagate and determine the resulting uncertainty for conflict checking.
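The uncertainty propagation via automatic differentiation mentioned above can be illustrated with a minimal forward-mode sketch; the dual-number class and the linear force model are illustrative assumptions, not the implementation used for the MAFDS:

```python
# Minimal forward-mode automatic differentiation via dual numbers, used for
# first-order Gaussian error propagation: var(f(x)) ~ (df/dx)^2 * var(x).
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot           # value and derivative part
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def propagate(f, x, var_x):
    """Return f(x) and the first-order variance of f under var(x)."""
    y = f(Dual(x, 1.0))
    return y.val, (y.dot ** 2) * var_x

# Hypothetical model: force F = k * x with stiffness k = 2.0, strain x = 1.5.
value, var = propagate(lambda x: 2.0 * x, 1.5, 0.01)
print(value, var)
```

The propagated variance of each soft-sensor output can then be compared against the directly measured quantity in the conflict check.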
While state-of-the-art fault-detection methods take only some redundant data into account, they lack a view of the whole system. A single fault may result in a multitude of conflicts, especially in time-variant processes. To assist an operator or developer in finding the fault, the amount of data from the conflict checks must be reduced and visualised in a way that makes it easier for humans to recognise a pattern. We presented a conflict interpretation method that furthermore takes the metadata into account. Hence, the method is scalable in both the number of soft sensors and the model depth, for example to identify faulty model parameters.
4.3 Analysis, Quantification and Evaluation of Model Uncertainty
Trying to predict the future is deeply rooted in mankind. In almost every field of science and engineering, more or less sophisticated models are used to predict processes or properties and finally to make decisions or draw conclusions [144]. Along these lines, models can be mathematical formulations, e.g. physical axioms and constitutive equations, or physical simplifications of reality, e.g. scaled prototypes. However, every prediction made by models comprises uncertainty, see Sect. 1.3. The uncertainty in model predictions arises essentially from two sources, data uncertainty and model uncertainty, cf. Chap. 2, supplemented by numerical errors in the case of mathematical models [87]. This section focuses on the analysis, quantification and evaluation of model uncertainty.
Reality is complex and cannot be completely represented by models, neither in mathematical formulations nor in prototype realisations, cf. Fig. 1.5. Simplifications, assumptions, conceptualisations, abstractions and approximations all result in model uncertainty [144]. No matter whether the underlying physics is only poorly understood or linearisations need to be used to reduce the computational burden: “Essentially, all models are wrong, but some are useful.” [23].
With this in mind, model uncertainty needs to be taken into account for any kind of decision making and for the evaluation of model predictions in general. The ongoing trend towards digitalisation and the related substitution of real experiments by virtual testing, or their combination in Hardware-in-the-Loop (HiL) tests, emphasises the necessity of a detailed analysis of model uncertainty to obtain reliable predictions.
This section is not meant as a textbook; rather, it considers model uncertainty for manifold examples and applications of mechanical load-bearing structures from both an engineering and a mathematical perspective. It shows the importance of evaluating model uncertainty in order to improve the models themselves and their predictions, and finally the conclusions to be drawn from these predictions. Additionally, mathematical approaches and algorithms are presented to analyse and quantify model uncertainty, both in theory and in practical examples of mechanical load-bearing structures.
4.3.1 Detection of Model Uncertainty via Parameter Estimation and Optimum Experimental Design
In this subsection we develop an algorithm to detect model uncertainty using tools from parameter estimation, optimum experimental design and statistical hypothesis testing. The mathematical models investigated consist of functional relations between input and output quantities, such as model parameters, state variables and boundary conditions, cf. Sect. 2.2. Within a probabilistic frequentist framework, it is assumed that the true values of the model parameters can be estimated by repeated calibration and validation processes with new observational data. The latter are subject to noise and, as a consequence, uncertainty propagates to the parameter estimates. In an optimally designed experiment we then find the best choice among experimental setups, so that the extent of data uncertainty upon the model parameters, which we quantify by confidence regions, is minimised. If the mathematical model is correct, then repeated model calibration and validation with different data sets obtained from optimal experimental setups should yield almost the same parameter values within a confidence region. We interpret inconsistencies in the parameter estimates obtained from different measurements as an indicator for model uncertainty, i.e. the mathematical model is incapable of explaining all the data with the same set of model parameters. An important feature of our algorithm is that we neither assume a prior distribution nor a specific algebraic form of the model discrepancy term in the mathematical equations. Thus, we identify the source of model uncertainty as ignorance, see Sect. 2.2. We first proposed our approach in [50].
Mathematical setting
Let \( u_j \in U_\mathrm {ad} \), \( j = 1, \ldots , n_\mathrm {u} \), be the inputs, such as boundary or initial conditions, \( p \in P_\mathrm {ad} \subset \mathbb {R}^{n_\mathrm {p}} \) be the model parameters, such as material constants, and \( y_j \in Y \) be the corresponding state variables. The first part of the mathematical model is given by an operator \( e : Y \times U_\mathrm {ad} \times P_\mathrm {ad} \rightarrow Y \) that defines the state equation
$$ e(y_j, u_j, p) = 0 \quad \text {for } j = 1, \ldots , n_\mathrm {u}. \qquad \text {(4.11)} $$
We require that for every \( p \in P_\mathrm {ad} \) and every \( u_j \in U_\mathrm {ad} \) there exists a unique solution \( y_j(u_j, p) \) of the state equation. Furthermore, the solution operators
$$ \mathcal {S}_j : U_\mathrm {ad} \times P_\mathrm {ad} \rightarrow Y, \quad (u_j, p) \mapsto y_j(u_j, p), \quad j = 1, \ldots , n_\mathrm {u}, $$
are demanded to be twice continuously differentiable in both arguments.
In order to compare the state \( y_j(u_j, p) \) to experimental data it is necessary to map certain components to quantities that are actually measured. This mapping forms the second part of the mathematical model. Therefore, let us define an observation operator by
$$ h : Y \times P_\mathrm {ad} \rightarrow \mathbb {R}^{n_\mathrm {s}}, $$
where \( n_\mathrm {s} \) is the number of data collecting sensors. The experimental setup is characterised by these predefined sensor types or locations and the inputs \( u_j \). We assume h to be twice continuously differentiable in both arguments as well.
It is commonly observed that the acquisition of measurements \( z \in \mathbb {R}^{n_\mathrm {M} \times n_\mathrm {u} \times n_\mathrm {s}} \) is subject to uncertainty, where \( n_\mathrm {M} \) is the number of repeated measurement series. To this end, we assume a Gaussian noise profile that is added to the true but in general unknown value \( z^\star \) of the quantity of interest:
$$ z_{ij} = z_j^\star + \varepsilon _{ij}, \quad \varepsilon _{ij} \sim \mathcal {N}(0, \sigma ^2), \qquad \text {(4.12)} $$
for all \( i = 1, \ldots , n_\mathrm {M} \) and \( j = 1, \ldots , n_\mathrm {u} \), where \( \sigma ^2 \in \mathbb {R}^{n_\mathrm {s} \times n_\mathrm {s}} \) is the diagonal variance matrix of the employed sensors. Thus, we assume that the noise profile is independently distributed for each sensor. If the model is correct then it is a valid explanation of the data, i.e.
$$ {\mathbb {E}}\left[ z_{ij}\right] = h\left( \mathcal {S}_j(u_j, p^\star ), p^\star \right) \qquad \text {(4.13)} $$
holds for all \( i = 1, \ldots , n_\mathrm {M} \) and all \( j = 1, \ldots , n_\mathrm {u} \) with the true but in general unknown parameter values \( p^\star \). Now, the following questions arise:

1.
How can we estimate \( p^\star \) from z and quantify the uncertainty in the estimation?

2.
What are useful criteria to determine whether Eq. (4.13) is incorrect?
The first question is extensively explored in the literature [10, 14, 92, 150] and we briefly introduce our method of choice below. The second question is strongly related to the detection and quantification of model uncertainty, cf. Sect. 2.2. This is still an active field of research. In the following we present our approach to detect model uncertainty as described in [50].
Parameter estimation
For given measurements z the following nonlinear least-squares problem with state equation constraints [19] is solved to obtain an estimate of the true values of the model parameters:
$$ \min _{p \in P_\mathrm {ad},\, y_j \in Y} \; \frac{1}{2} \sum _{i=1}^{n_\mathrm {M}} \sum _{j=1}^{n_\mathrm {u}} \sum _{k=1}^{n_\mathrm {s}} \frac{\omega _k}{\sigma _{kk}^2} \left( z_{ijk} - h_k(y_j, p) \right) ^2 \quad \text {s.t.} \quad e(y_j, u_j, p) = 0, \; j = 1, \ldots , n_\mathrm {u}, \qquad \text {(4.14)} $$
where \( \sigma _{kk}^2 \) are the variances of the sensors introduced above and \( \omega _k \in \left\{ 0,1\right\} \) are their weights, i.e. \( \omega _k = 1 \) if, and only if, sensor k is used. We allow sensors to remain unused to save operational costs. Since the parameter estimate depends on the data z as well as on the weights \( \omega \), which both remain fixed, we associate a solution operator \( (z, \omega ) \mapsto p(z, \omega ) \) with Problem (4.14).
We choose \( n_\mathrm {z} = n_\mathrm {s} n_\mathrm {u} n_\mathrm {M} \) as the new dimension to rewrite problem (4.14) in vector form and further insert the solution operators \( \mathcal {S}_j \) of the state equation (4.11). Let \( \tilde{z} \in \mathbb {R}^{n_\mathrm {z}} \) be the data vector obtained from rearranging z and let \( \tilde{h} \) consist of \( h(\mathcal {S}_j(u_j, p), p) \) for all \( j = 1, \ldots , n_\mathrm {u} \) in a row and copied \( n_\mathrm {M} \) times. We define \( \mathcal {S}(u, p) :=\mathcal {S}_j(u_j, p)_{j = 1, \ldots , n_\mathrm {u}} \) for brevity. Then
$$ r(\tilde{z}, p) = \tilde{z} - \tilde{h}\left( \mathcal {S}(u, p), p \right) \qquad \text {(4.15)} $$
are the residuals in vector form. The diagonal weight matrix \( \Omega \in \mathbb {R}^{n_\mathrm {z} \times n_\mathrm {z}} \) consists of copies of \( \omega \in \mathbb {R}^{n_\mathrm {s}} \) and the diagonal variance matrix \( \Sigma \in \mathbb {R}^{n_\mathrm {z} \times n_\mathrm {z}} \) contains copies of \( \sigma ^2 \). Then problem (4.14) can be rewritten into
$$ \min _{p \in P_\mathrm {ad}} \; \frac{1}{2} \, r(\tilde{z}, p)^\top \Omega \Sigma ^{-1} r(\tilde{z}, p). \qquad \text {(4.16)} $$
Each locally optimal solution of (4.16) is a random variable. In general, its probability distribution differs from that of the measurements z. This is due to the fact that the mapping \( (z, \omega ) \mapsto p(z, \omega ) \) is nonlinear. The computation of confidence regions would lead to non-ellipsoidal sets which are difficult to handle. We therefore choose, for a given confidence level \( 1 - \alpha \) with \( \alpha \in (0,1) \), a linear approximation of the confidence region K in the parameter space around \( {\mathbb {E}}\left[ p(z, \omega )\right] = p^\star \), see [92]. In fact, we approximate the distribution of \( p(z, \omega ) \) to be Gaussian with expected value \( p^\star \) and covariance matrix C. Then the set K is an \( n_\mathrm {p} \)-dimensional ellipsoid determined by the covariance matrix C:
$$ K\left( p^\star , C, \alpha \right) = \left\{ p \in \mathbb {R}^{n_\mathrm {p}} : (p - p^\star )^\top C^{-1} (p - p^\star ) \le \chi ^2_{n_\mathrm {p}}(1 - \alpha ) \right\} , $$
where \( \chi ^2_{n_\mathrm {p}} \) is the quantile function of the \( \chi ^2 \) probability distribution with \( n_\mathrm {p} \) degrees of freedom. We consider the following approximations for the covariance matrix C coming from a Gauss–Newton approach [19, 38] and a sensitivity analysis [10, 38], respectively:
$$ C_\mathrm {GN} = \left( J^\top \Omega \Sigma ^{-1} J \right) ^{-1} \quad \text {and} \quad C_\mathrm {S} = H^{-1} J^\top \Omega \Sigma ^{-1} \Omega J \, H^{-1}, $$
where J is the total derivative of the residual vector r with respect to the model parameters p and
$$ H = \nabla _p^2 \left( \tfrac{1}{2}\, r(\tilde{z}, p)^\top \Omega \Sigma ^{-1} r(\tilde{z}, p) \right) \qquad \text {(4.17)} $$
is the Hessian of the weighted least-squares functional.
Our choice is determined depending on the application and the computational effort for the Hessian H, which requires the calculation of second order derivatives of the solution operator, compare Eq. (4.17) with Eq. (4.15).
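The estimation step and the linearised confidence region can be sketched on synthetic data; the exponential observation model, the noise level and all numerical values below are illustrative assumptions, not the models of this section:

```python
# Sketch: nonlinear least-squares parameter estimation, Gauss-Newton
# covariance and the linearised chi-squared confidence ellipsoid.
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import chi2

rng = np.random.default_rng(0)
u = np.linspace(0.0, 2.0, 20)                      # inputs u_j
p_star = np.array([1.5, 0.7])                      # true parameters p*
h = lambda p, u: p[0] * np.exp(-p[1] * u)          # hypothetical observation model
sigma = 0.02
z = h(p_star, u) + rng.normal(0.0, sigma, u.size)  # noisy data as in Eq. (4.12)

# Residuals are pre-scaled by sigma, so the covariance is simply inv(J^T J).
res = least_squares(lambda p: (h(p, u) - z) / sigma, x0=[1.0, 1.0])
J = res.jac                                        # Jacobian of scaled residuals
C_GN = np.linalg.inv(J.T @ J)                      # Gauss-Newton covariance

# p lies in the 95 % confidence ellipsoid iff the quadratic form is below
# the chi-squared quantile with n_p = 2 degrees of freedom.
d = p_star - res.x
inside = d @ np.linalg.inv(C_GN) @ d <= chi2.ppf(0.95, df=2)
print(res.x.round(3), bool(inside))
```

In roughly 95 % of repeated experiments the true parameter vector falls inside the ellipsoid, which is exactly what the hypothesis test later exploits.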
Optimal design of experiments
In general, the goal in optimum experimental design is to minimise the confidence region of the parameter estimates by changing the experimental setup, namely, sensor locations and types represented by the variable \(\omega \), boundary and initial conditions described by the inputs \( u_j \), etc. Since we employ a linear approximation of the confidence region the aim is to minimise the “size” of the covariance matrix C. There exists extensive research on different design criteria \( \Psi \) that measure the “size” of a matrix in the context of optimum experimental design [49, 143, 158]. We list a few prominent options:
$$ \Psi _\mathrm {A}(C) = \frac{1}{n_\mathrm {p}} {\text {tr}}(C), \qquad \Psi _\mathrm {D}(C) = \det (C)^{1/n_\mathrm {p}}, \qquad \Psi _\mathrm {E}(C) = \lambda _\mathrm {max}(C), $$
which measure the average variance, the volume and the maximal expansion of the confidence ellipsoid, respectively.
Depending on the application, the computational effort, and the adaptability of the experimental setup, we formulate slightly different optimisation problems. If the calculation of the Hessian H is fast, the number of sensors is small and the experimental setup is limited to adapting sensor positions only, we consider the matrix \( C_\mathrm {S} \) from above in the optimisation model:
$$ \min _{\omega \in \{ 0,1\} ^{n_\mathrm {s}}} \; \Psi \left( C_\mathrm {S}(\omega ) \right) \quad \text {s.t.} \quad G(\omega ) \le 0. \qquad \text {(4.18)} $$
Note that in an iterative solver scheme a new parameter value \( p(z, \omega ) \) and new solutions \( \mathcal {S}_j(u_j, p(z, \omega )) \) to the state equation have to be computed after each step for all \( j = 1, \ldots , n_\mathrm {u} \). The inputs \( u_j \) remain fixed here as the experimenter can only adjust sensor positions. Moreover, \( C_\mathrm {S} \) depends on the measurements, and this would require new data if any input values were changed. The constraints \( G(\omega ) \) describe user-specific restrictions on sensor combinations and on the minimal number of used sensors, see [50, 92] for more details. Problem (4.18) is a non-convex mixed-integer nonlinear program. Since we assume the number of sensors to be small, we employ heuristic methods to solve it.
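For a handful of sensors, the simplest heuristic for a problem of type (4.18) is exhaustive enumeration of all admissible weight vectors; the Jacobian, variances and minimum-sensor constraint below are made-up stand-ins for illustration:

```python
# Exhaustive search over binary sensor weights omega: pick the subset that
# minimises the D-criterion of the (Gauss-Newton-type) covariance matrix.
import itertools
import numpy as np

J = np.array([[1.0, 0.2],
              [0.3, 1.0],
              [0.5, 0.5]])            # residual Jacobian, one row per sensor
sigma2 = np.array([1e-4, 1e-4, 1e-4]) # sensor variances
min_sensors = 1                       # user-specific constraint G(omega)

best = None
for omega in itertools.product([0, 1], repeat=3):
    if sum(omega) < min_sensors:
        continue
    W = np.diag(np.array(omega) / sigma2)
    F = J.T @ W @ J                   # information matrix
    if np.linalg.matrix_rank(F) < 2:
        continue                      # singular: parameters not estimable
    psi_D = np.linalg.det(np.linalg.inv(F)) ** 0.5   # D-criterion, n_p = 2
    if best is None or psi_D < best[0]:
        best = (psi_D, omega)
print(best[1])  # weights of the D-optimal sensor subset
```

Singular information matrices are skipped, which mirrors the factor-\(10^{20}\) blow-up of the design criteria observed in Table 4.2 when a decisive sensor is removed.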
If the computational effort for the Hessian H is large, the number of available sensors is high, and the experimental setup can be adapted in sensor positions and inputs, we use the covariance matrix arising from the Gauss–Newton approach in the following optimisation problem:
$$ \min _{\omega \in [0,1]^{n_\mathrm {s}},\, u \in U_\mathrm {ad}^{\,n_\mathrm {u}}} \; \Psi \left( C_\mathrm {GN}(\omega , u) \right) + \beta _1 R(u) + \beta _2 P_\varepsilon (\omega ) \quad \text {s.t.} \quad G(\omega ) \le 0, \qquad \text {(4.19)} $$
where \( \beta _1, \beta _2 \) are positive constants. Note that the experimenter is now given the possibility to optimise both sensor weights \( \omega \) and input variables \( u = \!\left( u_j\right) _{j = 1, \ldots , n_\mathrm {u}} \). Besides, \( C_\mathrm {GN} \) is independent of experimental data, and the parameter values p stay fixed in this setting. However, this approximation of the true covariance matrix may be less accurate than \( C_\mathrm {S} \), where the parameter values are continually updated within the optimisation scheme. The function \( R(\cdot ) \) serves as a regulariser for the inputs to guarantee smoothness. For a fixed \( \varepsilon \in (0, 1] \) the penalty term \( P_\varepsilon (\cdot ) \) is chosen to be a smooth approximation of the \( \ell _0 \) “norm”. We refer to [3] for a detailed mathematical description. This penalty is intended to yield sparse and \( \left\{ 0, 1\right\} \)-valued optimal sensor weights. To achieve this, we proceed in the following way. Problem (4.19) is first solved with \( \varepsilon _1 = 1 \) and we obtain optimal weights \( \overline{\omega }_{\varepsilon _1} \) and optimal inputs \( \overline{u} = \!\left( \overline{u}_j\right) _{j = 1, \ldots , n_\mathrm {u}} \). Then we choose another \( \varepsilon \) such that \( 0< \varepsilon < \varepsilon _1 \) and solve the following optimisation problem with \( \overline{\omega }_{\varepsilon _1} \) as starting point and fixed inputs \( \overline{u} \):
$$ \min _{\omega \in [0,1]^{n_\mathrm {s}}} \; \Psi \left( C_\mathrm {GN}(\omega , \overline{u}) \right) + \beta _2 P_\varepsilon (\omega ) \quad \text {s.t.} \quad G(\omega ) \le 0. \qquad \text {(4.20)} $$
By successively solving (4.20) with diminishing \( \varepsilon _i \) such that \( 0< \varepsilon _i < \varepsilon _{i-1} \) and with \( \overline{\omega }_{\varepsilon _{i-1}} \) as starting point, the optimal sensor weights tend to become sparse and \( \left\{ 0, 1\right\} \)-valued after a few iterations \( i = 1, 2, \ldots \), see [4].
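The successive ε-reduction can be illustrated on a toy problem; the A-criterion objective, the rational \(\ell_0\) surrogate \(P_\varepsilon(\omega) = \sum_k \omega_k^2/(\omega_k^2 + \varepsilon)\) and all numbers are assumptions for illustration only:

```python
# Toy illustration of the warm-started epsilon-reduction scheme: minimise a
# design criterion plus a smooth l0-type penalty, restarting each solve from
# the previous optimal weights so that the weights drift towards {0, 1}.
import numpy as np
from scipy.optimize import minimize

J = np.array([[1.0, 0.1], [0.2, 1.0], [0.6, 0.6], [0.1, 0.1]])
beta2 = 0.05

def objective(omega, eps):
    F = J.T @ (omega[:, None] * J) + 1e-9 * np.eye(2)  # information matrix
    psi_A = np.trace(np.linalg.inv(F))                 # A-criterion
    p_eps = np.sum(omega**2 / (omega**2 + eps))        # smooth l0 surrogate
    return psi_A + beta2 * p_eps

omega = np.full(4, 0.5)                                # relaxed start
for eps in [1.0, 0.1, 0.01, 0.001]:                    # diminishing eps_i
    res = minimize(objective, omega, args=(eps,), bounds=[(0.0, 1.0)] * 4)
    omega = res.x                                      # warm start next solve
print(omega.round(2))
```

The nearly uninformative fourth sensor is driven to zero, while the informative sensors saturate at one, mimicking the sparsification described in the text.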
Model uncertainty
We now employ the two previously introduced methods, parameter estimation and optimum experimental design, to identify model uncertainty. Here it is assumed that the model \( \mathcal {M} \) is valid in all sensor locations specified by \( \omega \), for all inputs \( u_j \in U_\mathrm {ad} \) and for the same true model parameters \( p^\star \). Since \( p^\star \) is in general unknown, we state the hypothesis that a particular solution of (4.16) serves as a good approximation. Then repeated solutions of (4.16) for measurement series taken at different sensor locations with possibly differing inputs should lie in the confidence region of previously estimated parameter values. If, however, certain data sets lead to estimates that lie outside the confidence region of previous tests, then the model is unable to predict the results of all experiments, i.e. the underlying model is inadequate.
Figure 4.21 summarises our algorithm to detect whether a model \( \mathcal {M} \) is inadequate. In line 02 initial (or artificial) data are introduced because they appear in the covariance matrix \( C_\mathrm {S} \) in Problem (4.18). In the alternative way (line 07), it is necessary to compute an initial parameter estimate from this data before solving Problem (4.19).
The acquisition of experimental data sets z in line 05 happens at the optimal sensor locations \( \overline{\omega } \) for those inputs \(u_j\) that entered the optimisation problem (4.18). Thus, the size of the predicted confidence region for the model parameters is at its minimum provided that the measurement error has the previously stated variance \(\sigma ^2\), see Eq. (4.12). In line 08, experimental data is acquired at the optimal sensor locations but with inputs in the vicinity of the computed optimum \( \overline{u} \). By continuity of the objective function in Problem (4.19) with respect to the inputs, the size of the confidence region for the model parameters stays close to the minimal one.
A fundamental assumption of our methodology is that the measurement errors are Gaussian. To check whether the measurement errors are normally distributed (line 10) we refer to conventional techniques as described, for example, in [30]. We do not consider experiments that yield data with non-Gaussian noise, since this violates our fundamental assumption in Eq. (4.12).
The choice of the calibration and the validation set in line 11 is crucial. The model \( \mathcal {M} \) may or may not pass the test depending on that choice. It is possible to divide the data set randomly as in a Monte Carlo cross-validation [39]. However, there are applications where expert judgement is necessary to perform a meaningful division. Additional help to target the worst-case split can improve the performance of our algorithm. An example is given in Sect. 4.3.2, where we detect uncertainty in mathematical models of the 3D Servo Press.
From line 13 onward, a classical hypothesis test with Bonferroni correction [40] is performed. The null hypothesis and the alternative hypothesis are
$$ \mathrm {HYP}_0 : p_\mathrm {val} \in K\left( p_\mathrm {cal}, C_\mathrm {cal}, \overline{\mathtt {TOL}}\right) \quad \text {versus} \quad \mathrm {HYP}_1 : p_\mathrm {val} \notin K\left( p_\mathrm {cal}, C_\mathrm {cal}, \overline{\mathtt {TOL}}\right) . $$
Let \( \overline{\mathtt {TOL}} = \mathtt {TOL}/n_\mathrm {tests} \) be the corrected test level. The null hypothesis \( \mathrm {HYP}_0 \) is rejected if \( p_\mathrm {val} \notin K(p_\mathrm {cal}, C_\mathrm {cal}, \overline{\mathtt {TOL}}) \). Recall
$$ K\left( p_\mathrm {cal}, C_\mathrm {cal}, \overline{\mathtt {TOL}}\right) = \left\{ p \in \mathbb {R}^{n_\mathrm {p}} : (p - p_\mathrm {cal})^\top C_\mathrm {cal}^{-1} (p - p_\mathrm {cal}) \le \chi ^2_{n_\mathrm {p}}\left( 1 - \overline{\mathtt {TOL}}\right) \right\} . $$
Since we usually perform more than one hypothesis test on similar data sets, we need to account for the problem of multiple testing. The Bonferroni correction of the test level, \( \overline{\mathtt {TOL}} = \mathtt {TOL}/n_\mathrm {tests} \), is a very conservative method to control the familywise error rate (FWER), i.e. the probability of rejecting at least one true null hypothesis. It is reasonable to choose a small threshold for the FWER, e.g. 5 %, since it represents the error of the first kind, which we want to be small when rejecting a model. The value \( \alpha _\mathrm {min} \) (line 13) is the p-value of the statistical test, i.e. the smallest test level at which the null hypothesis can just be rejected.
The greater the number of test scenarios \(n_\mathrm {tests}\) the easier it becomes for a null hypothesis to pass a particular test. Since we are interested in the overall null hypothesis that the true values \(p^\star \) of the parameters stay within the computed confidence regions, we interpret any rejected null hypothesis as significant, i.e. then the mathematical model itself is subject to uncertainty. In practice, it may occur that an inadequate model passes quite a lot of tests. This behaviour can be explained by the fact that even an inaccurate model may provide satisfactory results on a particular range of inputs. However, provided that enough data are available one can identify ranges of inputs for which an inadequate model fails the hypothesis test.
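The test decision can be sketched numerically; the parameter estimates and the covariance matrix below are illustrative placeholders, not values from this section:

```python
# Sketch of the ellipsoid test with Bonferroni correction: reject HYP_0 when
# the validation estimate falls outside the confidence ellipsoid of the
# calibration estimate at the corrected level TOL / n_tests.
import numpy as np
from scipy.stats import chi2

def reject_model(p_cal, C_cal, p_val, tol=0.05, n_tests=4):
    d = np.asarray(p_val) - np.asarray(p_cal)
    t = d @ np.linalg.inv(C_cal) @ d                  # quadratic test statistic
    threshold = chi2.ppf(1.0 - tol / n_tests, df=len(d))
    alpha_min = 1.0 - chi2.cdf(t, df=len(d))          # p-value of the test
    return t > threshold, alpha_min

C_cal = np.array([[1e-4, 0.0], [0.0, 1e-4]])          # hypothetical covariance
rejected, alpha_min = reject_model([1.50, 0.70], C_cal, [1.55, 0.70])
print(bool(rejected), alpha_min)
```

A validation estimate several standard deviations away from the calibration estimate yields a tiny \(\alpha_\mathrm{min}\) and hence a rejection, indicating model uncertainty.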
In summary, we proposed a new algorithm to detect model uncertainty and to quantify the quality of our decision when rejecting a mathematical model via error probabilities. We combined methods from parameter estimation, optimum experimental design and hypothesis testing to achieve this. Furthermore, our approach is suited to identify particular ranges of inputs for which the model fails to explain the data. This is especially helpful when reconsidering the system design phase in product development, see Sect. 3.1, to improve the models that have been used so far.
4.3.2 Detection of Model Uncertainty in Mathematical Models of the 3D Servo Press
The method proposed in Sect. 4.3.1 is demonstrated here using a component of the 3D Servo Press, a multi-technology forming machine that combines spindles with multiple eccentric servo drives, see Sect. 3.6.3. Forming machines have the task of performing accurate motions of the tool centre point (TCP) under high process forces. Besides control actions, this requires the acquisition of the TCP position, see Sect. 5.4.1. Since direct measurements of the TCP are technically infeasible, elastic models shall provide the basis for the state estimation of the TCP [66]. To calibrate and validate the elastic models, measurements were taken on the small-scale prototype of the press. Furthermore, the costs for obtaining these measurements are reduced in view of future experiments on the full-scale 160 t press. In this subsection, we briefly sketch mathematical models of the 3D Servo Press and present numerical results on the detection of model uncertainty from [50].
In order to model the elastic 3D Servo Press, components were classified according to their load scenario and their functional setting, respectively. The press mainly consists of coupling links and bearings, see Fig. 4.22. Additionally, friction between these components needs to be taken into account. In the following we describe the mathematical models that were employed for the different parts of the press and for the description of their behaviour:

A bar model is employed for those coupling links where the stress under load is very small. Each bar is discretised by the finite element method; each finite element represents two masses connected by a spring. However, the actual bar elements do not have a uniform cross-sectional area. To take this into account, a mass-spring model is derived from a finite element analysis [50].

The remaining coupling links which experience bending moments are modelled as beams. Each beam is again discretised by the finite element method and reduced to a mass-spring model with lumped masses, i.e. all non-diagonal elements in the mass matrix are neglected. The governing equations come from the Euler–Bernoulli beam theory [58].

Due to their progressive stiffness characteristics, all bearings are modelled as nonlinear spring elements, located between the joints of the coupling links.

As expected, friction was observed in all moving bearings. Thus, the results of the experiments reveal a hysteresis behaviour in the load-displacement curve. Since the complete physical modelling of friction is very challenging, we use application-specific substitute models and evaluate them experimentally. We propose three different models \( \mathcal {M}_1, \; \mathcal {M}_2 \) and \( \mathcal {M}_3 \) to deal with this phenomenon:
$$ \begin{array}{ll} \mathcal {M}_1: &{} \text {linear model where friction is neglected}, \\ \mathcal {M}_2: &{} \text {discontinuous Coulomb's friction model}, \\ \mathcal {M}_3: &{} \text {continuous friction model with rate-independent memory}. \end{array} $$
In order to validate each of these models, several experimental data sets were collected. The measurements were conducted with \(n_\mathrm {u} = 29\) different process forces, which we call input variables. The first 15 forces were part of the loading and the last 14 were part of the unloading cycle. Our quantities of interest are the vertical displacements in point D, the horizontal displacements in point F, and vertical displacements in point \(B_0\), when a vertical process load \( q_\mathrm {P} \) is applied to the press, see Fig. 4.22.
There are \(n_\mathrm {s} = 3\) sensors installed at these locations which measure the displacements. Each series of measurements was repeated \( n_\mathrm {M} = 6\) times, although with slightly different process forces. To deal with this variability, we work with the known setpoint values of the applied forces and linearly interpolate the data, see [50] for more details. We deviate from the algorithm presented in Sect. 4.3.1 to some extent in that we do not distinguish between initial data and the actual acquisition of measurements.
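The mapping of each measurement series onto the known setpoint forces by linear interpolation might look as follows; all force and displacement values are made up for illustration:

```python
# Hedged sketch: interpolate measured displacements onto common setpoint
# forces, since each repetition applied slightly different process forces.
import numpy as np

setpoints = np.array([10.0, 20.0, 30.0])        # known setpoint forces
forces_measured = np.array([9.8, 20.3, 30.1])   # actually applied forces
displ_measured = np.array([0.50, 1.02, 1.49])   # measured displacements

displ_at_setpoints = np.interp(setpoints, forces_measured, displ_measured)
print(displ_at_setpoints.round(3))
```

After this step, all \(n_\mathrm{M}\) series refer to the same force grid and can be compared point by point.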
After the experimental data had been acquired and the measurement errors had been checked to be Gaussian, our goal was to reduce measurement costs by selecting only two out of the three sensors. The model parameters to be estimated are the stiffnesses of two bars, \(k_7\) and \(k_5\), see Fig. 4.22. Since there are only two parameters to be estimated, it suffices to employ two sensors and repeat the measurement process. Table 4.2 shows the results for the most important design criteria for the model \(\mathcal {M}_3\), where we used the matrix \(C_\mathrm {S}\) as covariance matrix, see Sect. 4.3.1.
From Table 4.2 we infer that the absence of the second sensor entails an increase in all design criteria by a factor of \({\approx }10^{20}\) compared to the initial value where all sensors are employed. This is a strong indication that the covariance matrix became singular, i.e. it is impossible to estimate the model parameters with that sensor choice. The absence of the first sensor, though, increases the maximal expansion, which is related to the design function \(\Psi _E(C)\), and the volume, which is related to the design function \(\Psi _D(C)\), of the confidence ellipsoid. However, the sensor combination displayed in the last row of Table 4.2, i.e. measuring the vertical displacements in point D and the horizontal displacements in point F only, leads to the smallest expansion of the confidence ellipsoid. We choose this sensor pair. Computations for the models \(\mathcal {M}_1\) and \(\mathcal {M}_2\) bring us to the same conclusion.
As already mentioned, the experiments revealed a hysteresis behaviour. We want to apply the algorithm introduced in Sect. 4.3.1 to see whether model uncertainty is recognised in the friction models \(\mathcal {M}_1, \; \mathcal {M}_2\) and \(\mathcal {M}_3\). The output of these models together with the measurement data is shown in Fig. 4.23 for comparison.
The continuous friction model is trained by an artificial neural network using real and simulated data [16, 112]; hence, four of the six data series were used for training. Thus, only \(n_\mathrm {M} = 2\) measurement series remained for the application of our algorithm. For a fair comparison, we used these two measurement series for all models alike during the validation, which we perform by hypothesis testing. The splitting of the data set into a calibration and a validation set was done in four different ways, see [50]. First, we split the test set into loading \(z^l\) and unloading \(z^u\). For the loading case, we again split the data homogeneously into one calibration \(z^{l_1}_c\) and one validation \(z^{l_2}_v\) set. The same was done for the unloading case. Next, the loading scenario was tested against unloading, such that we split the data homogeneously into one calibration \(z^{l}_c\) and one validation \(z^{u}_v\) set. Finally, data points from both loading and unloading were tested against each other, i.e. we had \(z^{lu}_c\) for calibration and \(z^{lu}_v\) for validation.
For each of these \(n_\mathrm {tests} = 4\) test scenarios we computed the \(\alpha _\mathrm {min}\), respectively, as shown in Table 4.3. Adopting the usual threshold \(\mathtt {TOL} = 5\%\) for the error of the first kind and applying the conservative Bonferroni correction with \(n_\mathrm {tests} = 4\), see Sect. 4.3.1, the corrected test level becomes \(\mathtt {TOL}/n_\mathrm {tests} = 1.25\%\). Then, it is clear that model \(\mathcal {M}_1\) is rejected in all four test scenarios. As expected, the experimental data cannot be described by a linear model that neglects friction. A first attempt to model hysteresis which is caused by friction is given by \( \mathcal {M}_2 \). This model seems to be able to accurately model loading and unloading separately. However, it is insufficient in describing both phenomena with the same set of parameters. Our algorithm is able to detect this deficiency in the third and fourth test scenario. This result can be explained with the fact that the Coulomb model is discontinuous whereas friction is a continuous effect. The last column of Table 4.3 shows that \(\mathcal {M}_3\), which has been trained by an artificial neural network, passes all tests successfully. Thus, this model is able to explain the present type of hysteresis with the same set of parameters which are valid within their confidence region.
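The decision rule over the four test scenarios reduces to a threshold comparison against the corrected level; the \(\alpha_\mathrm{min}\) values below are placeholders, not the values of Table 4.3:

```python
# Decision rule: a model is deemed subject to model uncertainty if any of its
# alpha_min values falls below the Bonferroni-corrected level TOL / n_tests.
tol, n_tests = 0.05, 4
corrected = tol / n_tests          # 1.25 %

alpha_min = {                      # placeholder p-values per test scenario
    "M1": [1e-6, 1e-5, 1e-7, 1e-6],
    "M2": [0.30, 0.25, 1e-4, 1e-3],
    "M3": [0.40, 0.35, 0.20, 0.15],
}

for model, alphas in alpha_min.items():
    rejected = any(a < corrected for a in alphas)
    print(model, "rejected" if rejected else "passes")
```

With these placeholder values the pattern of the text is reproduced: the linear model fails everywhere, the Coulomb model fails only the mixed loading-unloading scenarios, and the continuous model passes all tests.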
In conclusion, we have seen that the algorithm introduced in Sect. 4.3.1 performs well when applied to the 3D Servo Press. The choice of the calibration and validation test sets has been made by expert judgement because of the special behaviour of the technical system, namely the loading-unloading cycles. By splitting the data set this way, we directly target the worst-case test scenario, so that a Monte-Carlo-like splitting is not necessary. Since further development steps and online algorithms of machines rely on a valid model, a statement about model uncertainty is a valuable indication for the engineer. Furthermore, our hypothesis test could be used as a stopping criterion for the training of an artificial neural network. Besides the optimal placement of sensors for model calibration, the presented method can also be used to identify uncertainty in different complex models, which can then be considered in model selection.
4.3.3 Assessment of Model Uncertainty for the Modular Active Spring-Damper System
Research on methods to quantify model uncertainty in structural engineering has intensified in recent years. The information gain of such methods typically relies on experimental data, such as structural responses or vibration analyses. Examples range from the well-known Bayes factor [137] to methods based on Bayesian inversion [60], an error-domain model falsification approach [122], and a technique using the adjustment factor method [124]. A comprehensive overview of methods for the quantification of model uncertainty can be found in Sect. 2.2. In this section, we introduce and compare two different methods to quantify model uncertainty by applying them exemplarily to the MAFDS, as presented in Sect. 3.6.1. First, a method based on the direct application of Bayes’ theorem is presented; subsequently, a method based on modelling a discrepancy function by means of a Gaussian process is shown.
Figure 3.23 depicts the MAFDS and Fig. 4.24 the two-degrees-of-freedom (2 DOF) model that captures its dynamic behaviour, where the drop height is denoted by h. The positions of the upper and the lower mass are given by the coordinates \(z_\mathrm {u}\) and \(z_\mathrm {l}\) of the 2 DOF model, and \(z_\mathrm {r} = z_\mathrm {u} - z_\mathrm {l}\) denotes their relative displacement. The equations of motion are
$$ m_\mathrm {u} \ddot{z}_\mathrm {u} + b(\dot{z}_\mathrm {r})\, \dot{z}_\mathrm {r} + k(z_\mathrm {r})\, z_\mathrm {r} = m_\mathrm {u} g, $$
$$ m_\mathrm {l} \ddot{z}_\mathrm {l} - b(\dot{z}_\mathrm {r})\, \dot{z}_\mathrm {r} - k(z_\mathrm {r})\, z_\mathrm {r} + k_{\mathrm {ef}}\, z_\mathrm {l} = m_\mathrm {l} g, $$
where \(k(z_\mathrm {r})\) denotes the stiffness and \(b(\dot{z}_\mathrm {r})\) the damping of the spring-damper system, as functions of the relative displacement \(z_\mathrm {r}\) and its rate, and \(k_{\mathrm {ef}}\) denotes the stiffness of the elastic foot, as can be seen in Fig. 3.23. The structure is subject to gravitation g. For details on the derivation of the equations of motion see [107]. Regression studies on the stiffness and damping properties of the spring-damper system yielded several model candidates that describe the dynamic behaviour by combinations of linear, bilinear and power functions [105]. This leads to uncertainty regarding which model candidate is most adequate to predict the dynamic behaviour of the MAFDS. This model uncertainty is assessed and analysed subsequently by comparing the different model candidates in terms of model adequacy. The main content of this section is based on [45, 108].
As outlined in Sect. 3.6.1, the inputs to the system are the drop height h and the additional weight \(m_{\mathrm {add}}\) that can be added to the frame. The system inputs are summarised in the input vector \(\boldsymbol{x} = \left( h , m_{\mathrm {add}}\right) ^\top \). Exemplarily, the system outputs are the maximum relative compression \(z_{\mathrm {r,max}}\), the maximum force in the elastic foot \(F_{\mathrm {ef,max}}\) and the maximum force on the spring-damper system \(F_{\mathrm {sd, max}}\), as depicted in Fig. 4.24. The system outputs were chosen in such a way that they can be calculated by simulation of the model candidates, as well as measured experimentally, and thereby enable a comparison of different models. In Fig. 3.24 in Sect. 3.6.1, the system outputs are shown as horizontal lines in the trajectories of the experimental drop test. For simplicity of notation, the scalar simulation model outputs \({\eta }\) and the scalar experimental outputs y are condensed in the vectors \(\boldsymbol{\eta }\) and \(\boldsymbol{y}\), respectively:
Application of Bayes’ theorem for quantification of model uncertainty
In [108] we presented a method to compare different mathematical models based on the extent of agreement between simulation model output \(\boldsymbol{\eta }\) and experimental output \(\boldsymbol{y}\). Model uncertainty was quantified for \(Q=4\) selected mathematical models. In a Bayesian framework, the posterior probability gives a measure of how adequately a mathematical model represents the dynamic behaviour of the MAFDS. It estimates the probability of a simulation model output \(\boldsymbol{\eta }\) of each model candidate with index \(q=1,\dots ,Q\) under the condition that the experimental output \(\boldsymbol{y}\) has been observed; measurement errors were not considered. Assuming an event-based Bayesian approach, \(H_{y,q}\) denotes the statistical event describing the output of each model candidate q and \(A_y\) is the statistical event associated with the observed experimental output \(\boldsymbol{y}\). Bayes' theorem [13] is then written as
where \(P(H_{y,q})\) denotes the prior probability that the model \(H_{y,q}\) is the true model and is assumed to be equal for all \(Q=4\) model candidates: \(P(H_{y,q}) = 1/4 \). The likelihood \(P(A_y\vert H_{y,q})\) is the probability that the experimental output \(\boldsymbol{y}\) is observed when assuming model q. Similar to a distance metric, it is estimated by the Cartesian vector distance \(d_q\) of the simulation model and experimental outputs (4.22):
The total probability \(P(A_y)\) serves as a normalisation constant and is determined analytically as the sum of the products of the likelihood \(P(A_y\vert H_{y,q})\) and the prior probabilities \(P(H_{y,q})\) over all mathematical models q. Subsequently, the posterior probability (4.23) is calculated for \(K=9\) different, independent events. Here, an event constitutes an experimental output \(\boldsymbol{y}\) and a simulation model output \(\boldsymbol{\eta }\) for the \(Q=4\) model candidates. For each event \(k=1,\dots ,K\), the numerical values of the system inputs \(\boldsymbol{x}_k\) are unique, as given in [108]. Figure 4.25 depicts the posterior probabilities \(P(H_{y,k,q}\vert A_{y,k})\) for the four model candidates and the nine events.
Models 3 and 4 exhibit a higher posterior probability for all \(K=9\) events than models 1 and 2, indicating that models 3 and 4 prove more adequate to predict the dynamic behaviour. In order to compare models over a multitude of events, the probability that one model holds true for all \(K=9\) events can be determined by
and is used as a measure of adequacy. For models 1 and 2, the overall posterior probability amounts to \(7.0 \cdot 10^{-8}\) and \(5.6 \cdot 10^{-5}\), respectively. Models 3 and 4 exhibit significantly higher values of the posterior probability with 0.38 and 0.618, respectively. In conclusion, model 4 is most adequate to represent the dynamic behaviour of the MAFDS. In summary, the presented method provides a straightforward, computationally inexpensive way to quantify model uncertainty for comparing different models.
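The Bayes-theorem-based comparison can be sketched in a few lines of numpy. The likelihood `exp(-d_q**2)` is a hypothetical quadratic function of the Euclidean distance standing in for the chapter's exact expression, and combining events by a normalised product is one plausible reading of the overall adequacy measure:

```python
import numpy as np

def posterior_probabilities(eta, y, prior=None):
    """Posterior P(H_q | A_y) over Q model candidates via Bayes' theorem.

    eta : (Q, d) array, simulation outputs, one row per model candidate
    y   : (d,)   array, experimental outputs
    The likelihood exp(-d_q**2) used here is a hypothetical quadratic
    function of the Euclidean (Cartesian) distance d_q.
    """
    eta = np.asarray(eta, dtype=float)
    y = np.asarray(y, dtype=float)
    Q = eta.shape[0]
    if prior is None:
        prior = np.full(Q, 1.0 / Q)            # equal priors P(H_{y,q}) = 1/Q
    dist = np.linalg.norm(eta - y, axis=1)     # Cartesian vector distance d_q
    likelihood = np.exp(-dist**2)
    evidence = np.sum(likelihood * prior)      # total probability P(A_y)
    return likelihood * prior / evidence

def overall_posterior(post_per_event):
    """Combine the posteriors of K independent events by a normalised
    product -- one plausible reading of the overall adequacy measure."""
    post = np.prod(np.asarray(post_per_event, dtype=float), axis=0)
    return post / post.sum()
```

A model whose outputs lie closer to the experiment receives a higher posterior, and the product over events sharpens the preference, as observed for models 3 and 4 above.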
A Gaussian processbased method for quantification of model uncertainty
Now, we apply a different method as presented in [45] to quantify model uncertainty. It is assumed that all models are wrong and incorporate a model error due to missing or incomplete physics in the mathematical model [23]. Based on this assumption, the method builds upon the pioneering work of Kennedy and O’Hagan [87], where a model discrepancy function is introduced to incorporate the model error; it thereby serves as a measure of adequacy of a mathematical model. In this framework, any experimental output of a system is represented as
where \(y_n \in \mathbb {R}, (n=1,\dots ,N)\) denotes the nth of a total of N measurements, \(\eta \) is the simulation model output (i.e. \(F_{\mathrm {ef,max}}\), \(F_{\mathrm {sd,max}}\), \(z_{\mathrm {r,max}}\)) with not necessarily unique inputs \(\boldsymbol{x}_n = \left[ h , m_{\mathrm {add}}\right] ^\top \), \(\delta \) is the discrepancy function and \(\varepsilon _n\) represents zero-mean normally distributed measurement noise for each measurement n. We model the discrepancy function \(\delta (\boldsymbol{x})\) by a Gaussian process \(\boldsymbol{\delta }(\boldsymbol{X}) \sim \mathcal {N}\big (m(\boldsymbol{X}), C(\boldsymbol{X,X})\big )\), where \(\boldsymbol{X}\) represents an input matrix \( \boldsymbol{X} = \left[ \boldsymbol{x}_1,\dots , \boldsymbol{x}_N \right] \). The mean function is denoted by \(m(\boldsymbol{X})\) and \( C(\boldsymbol{X,X})\) is the covariance matrix which is built up by the covariance function \(c(\boldsymbol{x}_i, \boldsymbol{x}_j)\) where \(\boldsymbol{x}_i, \boldsymbol{x}_j \) with \(i,j = 1,\dots ,N\) denote the input vectors. The Gaussian process itself is fitted to the difference between measurement \(y_n\) and the model output \(\eta (\boldsymbol{x}_n)\), using the data set
For the Gaussian process, a constant mean scale m and a squared exponential covariance function \(c(\boldsymbol{x}_i, \boldsymbol{x}_j)\) are selected
where \(\delta _{ij}\) denotes the Kronecker delta in this case. The matrix \(\boldsymbol{M}\) is set to \(\boldsymbol{M} = \boldsymbol{I}\ell ^{2}\) with identity matrix \(\boldsymbol{I} \in \mathbb {R}^{2\times 2} \) and length scale \(\ell > 0\) [132]. The signal variance \(\sigma _\text {f} > 0\) determines how much the discrepancy function values deviate from the mean value. Larger values of the signal variance \(\sigma _\text {f}\) lead to larger deviations of the discrepancy function. Measurement noise is accounted for by the noise level parameter \(\sigma _n\) in the covariance function (4.28). It is assumed to be an additive, independent identically distributed Gaussian noise with variance \(\sigma _n^2\) [132].
The hyperparameters (\(\beta , \ell , \sigma _\text {f}, \sigma _\text {n}\)) inherent to the mean and covariance function (4.28) essentially govern the behaviour of the Gaussian process and are determined using a Bayesian optimisation scheme. Using the optimised set of hyperparameters, the quantiles of the 95% confidence interval for the discrepancy functions are specified analytically by
Comparing the 95% confidence intervals of the discrepancy functions via their 2.5 and 97.5% quantiles \(C_{2.5}\) and \(C_{97.5}\) yields a measure to select between competing models. The maximum absolute value of the two quantiles shows how much the discrepancy function deviates from zero and consequently indicates how adequate the model is.
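A minimal numpy sketch of this procedure is given below. The hyperparameters are fixed here for brevity (the chapter determines them by Bayesian optimisation), the matrix \(\boldsymbol{M} = \boldsymbol{I}\ell^2\) is read as a single isotropic length scale, and the 95% interval is taken as mean plus/minus 1.96 posterior standard deviations:

```python
import numpy as np

def se_cov(X1, X2, ell, sf):
    """Squared-exponential covariance c(x_i, x_j) with an isotropic
    length scale ell (assumption) and signal variance sf**2."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

def gp_discrepancy_ci(X, delta, Xs, beta=0.0, ell=1.0, sf=1.0, sn=0.1):
    """2.5% and 97.5% quantiles of the discrepancy delta(x) at inputs Xs.

    X     : (N, 2) training inputs (h, m_add)
    delta : (N,)   observed residuals y_n - eta(x_n)
    beta, ell, sf, sn : hyperparameters, fixed here for illustration.
    """
    K = se_cov(X, X, ell, sf) + sn**2 * np.eye(len(X))   # noisy training cov
    Ks = se_cov(Xs, X, ell, sf)                          # test/train cov
    mean = beta + Ks @ np.linalg.solve(K, delta - beta)  # posterior mean
    var = sf**2 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    sd = np.sqrt(np.maximum(var, 0.0))
    return mean - 1.96 * sd, mean + 1.96 * sd            # C_2.5, C_97.5
```

If both quantiles are negative at the inputs of interest, the model systematically overestimates the output, which is exactly the diagnostic used for the MAFDS below.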
For the example at hand, the Gaussian processes describing the discrepancy functions \(\delta _{q,{z_\text {r}}}\), \(\delta _{q,{F_{\text {sd}}}}\) and \(\delta _{q,{F_{\text {ef}}}}\) are determined for the \(q=1,\dots ,4\) model candidates (note that these model candidates are not identical to the ones investigated in the previous section). The 95% confidence intervals of the discrepancy functions are shown in Fig. 4.26. All models overestimate the three outputs of the system, which can be seen from the fact that the 2.5 and 97.5% quantiles of the discrepancy functions are consistently negative. For the output \(z_{\mathrm {r,max}}\) displayed in Fig. 4.26a, the 2.5% quantile for model 3 is closest to zero, indicating a higher adequacy of the model. However, for the force outputs \(F_{\mathrm {sd, max}}\) and \(F_{\mathrm {ef, max}}\) shown in Fig. 4.26b, the 2.5% quantile for model 1 is closest to zero, suggesting that model 1 is most adequate. In conclusion, no model consistently ranks best in terms of model adequacy.
Conclusion
Both presented methods provide a measure of adequacy that can be used to quantify model uncertainty. They essentially differ in their assumptions about model error. The method presented first does not differentiate between model error and measurement error. In consequence, the chosen likelihood function (4.24) does not reflect a distribution but is rather to be understood as a distance metric. In contrast, the Gaussian process based method assumes the model discrepancy as a Gaussian process and accounts for measurement error separately.
Further, the discrepancy function provides valuable information about the difference between model and measurement. For example, the mean scale indicates whether a model tends to under- or overestimate system quantities. In contrast, for the method based on Bayes' theorem, this information is lost due to the quadratic form of the likelihood function.
For the modelling of an adequate discrepancy function, the Gaussian process based method essentially relies on assumptions on, or a priori knowledge about, suitable mean and covariance functions. As a priori knowledge is missing here, a rather simple choice of mean and covariance function was made. For the rare cases in which there is a priori knowledge about the model discrepancy, the mean function could, for example, be polynomial or consist of weighted basis functions.
As a concluding remark, the computing time for the first method is negligible, whereas for the Gaussian process based method it depends strongly on the number of model inputs, outputs and measurements due to the hyperparameter optimisation.
4.3.4 Model Uncertainty in Hardware-in-the-Loop Tests
Hardware-in-the-Loop (HiL) tests investigate the behaviour of real components connected to systems simulated in real time [82, 98]. As depicted in Fig. 3.20, HiL tests enable mastering uncertainty by a stepwise integration of a module into a real system, combining the cyber world and the real world. This section discusses the influence of the active interface between the two worlds. To this end, the HiL tests are compared to a simulation of the virtual component in the virtual system.
The first HiL tests were used in 1936 to simulate instruments in an aircraft cockpit [82]. In the mid 1960s, electrical and hydraulic actuators were used to simulate cockpit movements [82]. Since the late 1960s, HiL tests have been used to simulate the response of structures and components to earthquakes [119]. Since the 1980s, HiL has been used at universities as well as in research and development departments for component validation [133, 139].
Formulated briefly, HiL tests are a symbiosis of an experiment and a simulation as Fig. 3.20 shows. This results in the following advantages compared to classical tests and pure simulation [82, 98, 139]:

1.
Real system components can be tested in the virtual system at an early design stage. This saves costs and development time. It is a prerequisite for the agile development of physical systems.

2.
Parameters of the virtual system can be changed with little effort to investigate different test configurations.

3.
Components with complex nonlinear behaviour can be investigated as real components within the simulation. The model uncertainty is reduced, since the reality itself is investigated.
Therefore, HiL tests are ideal to examine components like the Active Air Spring with the associated parts we developed, cf. Sect. 3.6.2. It is not necessary to have a two-mass oscillator or a complete vehicle in hardware, since they can be simulated virtually to master the uncertainty of the component at an early design stage. HiL tests can therefore already be used in the design phase of the product life cycle introduced in Sect. 1.2. The disadvantages of HiL tests are that real-time-capable hardware is required and that this hardware influences the result due to signal propagation times, measurement uncertainty, filtering, and the dynamics of the test rig, as shown in this subsection. In addition, appropriate modelling is necessary, where a compromise between the required computing time and the complexity of the model has to be found. The relevant reality, cf. Sect. 1.3, is never represented completely, so there is model uncertainty. In this subsection we therefore investigate this model uncertainty in the form of incertitude and our approach to mastering it.
In our HiL tests, the Active Air Spring is coupled with the virtual quarter car, which is simulated in parallel in a real-time simulation environment. Figure 4.27 shows the principal structure and signal flows of these HiL tests. The air spring deflection \(\Delta z\) calculated in the real-time simulation is transmitted to the uniaxial servo-hydraulic test rig, the active interface, which deflects the Active Air Spring. The measured axial force F is fed back into the simulation; this is therefore a “closed-loop simulation”. The simulated quarter car model—a foot-point-excited two-mass oscillator—and the implemented controller are introduced in Sect. 3.6.2. The excitation is also used in the preview function of the implemented controller, which is equally integrated in the real-time simulation. In order to minimise the influence of measurement noise, measured and fed-back signals are filtered with a second-order Butterworth filter with a cut-off frequency of 170 Hz (ZQC in Fig. 4.27). A more detailed investigation of the results of the HiL experiments can be found in Hedrich [73].
The performance indicators examined with these HiL tests are driving safety, e.g. the wheel load fluctuation \(\sigma _{F_\mathrm {w}}\), and driving comfort, e.g. the variation of the body acceleration \(\sigma _{\ddot{z}_\mathrm {b}}\), both obtained from the real-time simulation.
The conflict diagram, Fig. 4.28, displays the measurement result of a HiL test driving on a highway at 100 km/h, marked by the diamond. To this end, the standard deviations of the wheel load \(F_\mathrm {w}\) and the body acceleration \(\ddot{z}_\mathrm {b}\) are determined from the time signals measured for \(T=20\,\text {s}\). Figure 4.28 shows the measured points for the designed controller with preview. The simulated result (square) is determined with a linear Active Air Spring model and the quarter car model from Sect. 3.6.2. The active Pareto line represents an ideal active system where the controller parameters have been optimised. The HiL influence appears as an 11% deviation of the experiment from the simulation in driving safety. The influence on driving comfort is negligible. Our investigations and results from the literature [26] with similar HiL tests show that the test rig in particular has a major influence on the results of the HiL tests. The dynamics of the test rig and the sensors had not yet been taken into account in the simulation, since this had never been necessary in any open-loop component measurement. Following the principle of simplicity from Heinrich Hertz introduced in Sect. 1.3, the transfer functions \(G_\mathrm {p}=G_\mathrm {sen,z}=G_\mathrm {sen,F}=1\) were assumed (cf. Fig. 4.29, square). The model used for the hardware therefore does not take reality sufficiently into account.
To consider the influence of the HiL test rig on the calculations, we modelled the hardware as shown on the right in Fig. 4.29. The transfer function \(G_\mathrm {P}\) is used to model the behaviour of the test rig. The transfer functions of the position sensor and the force sensor are also included via \(G_\mathrm {sen,z}\) and \(G_\mathrm {sen,F}\), respectively. Experiments carried out with a sampling time of \(1\ \mathrm {ms}\) have shown that (i) the transfer behaviour of the position-controlled servo-hydraulic test rig up to 25 Hz can be approximately described by a dead time of 10 ms [100, 102] and that (ii) the influence of the sensor system in this frequency range can be neglected compared to the dead time of the testing machine. These results are consistent with results from the literature [11]. A third-order Padé approximation \(G_\mathrm {p}\) of the 10 ms dead time is used to represent the dead time of the test rig in the hardware model [102].
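A dead-time element has no finite-order rational transfer function, which is why a Padé approximation is used. The following sketch shows the standard [3/3] Padé approximant of \(e^{-sT}\) for \(T = 10\) ms and compares it with the exact dead time at 25 Hz, the upper end of the frequency range quoted for the rig (the exact coefficients used in [102] may differ):

```python
import numpy as np

T_DEAD = 0.010  # dead time of the test rig in seconds (10 ms)

def pade3_deadtime(s, T=T_DEAD):
    """Standard [3/3] Pade approximation of the dead-time element
    G_p(s) = exp(-s*T); a sketch of a third-order rig model."""
    x = s * T
    num = 120 - 60 * x + 12 * x**2 - x**3
    den = 120 + 60 * x + 12 * x**2 + x**3
    return num / den

# Evaluate on the imaginary axis at f = 25 Hz and compare with the
# exact dead-time frequency response exp(-j*w*T):
w = 2 * np.pi * 25.0
G_pade = pade3_deadtime(1j * w)
G_exact = np.exp(-1j * w * T_DEAD)
```

Because numerator and denominator are complex conjugates on the imaginary axis, the approximation is all-pass (unit magnitude) and matches the phase of the true dead time closely up to 25 Hz.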
Figure 4.30 shows all results in the conflict diagram. The deviation of the HiL test from the adapted simulation (triangle) in driving safety is reduced to 2%. The following conclusions can be drawn: (i) The influence of the active interface is recognisable and it influences the results. (ii) If the influence of the test rig is considered in the calculation via a dead time element, measurement and simulation correspond quantitatively well. (iii) The remaining deviation is acceptable in the context of the linearised models used and measurement uncertainty.
The model uncertainty, i.e. the neglect of the influence of the active interface, can thus be mastered by taking the transfer function of the test rig into account. Since the influence on the driving comfort, which is the main focus of the tests, is small, the HiL influence on the experiments is tolerated. In the future we will investigate the Active Air Spring integrated in the MAFDS with foot point excitation. This enables validations of the HiL tests and the simulation of the virtual component in the virtual system with a real component in the real system.
4.3.5 Identification of Model Uncertainty in the Development of Adsorption-Based Hydraulic Accumulators
When starting a product development from scratch, not much is known about the intended system. There is general physical knowledge and experience in the form of physical axioms and constitutive equations (cf. Sect. 1.3). However, ignorance in these early stages of product development can lead to a significant model uncertainty (see Fig. 1.5 in Sect. 1.3).
In this section, we demonstrate this point in the development of innovative hydraulic accumulators. To ensure consistency with general knowledge we used axiomatic models to determine the potential of hydraulic accumulators filled with adsorptive material. In the following, we show that the omission of some of the system’s numerous interconnected physical effects can lead to a large model uncertainty.
Hydraulic accumulators are used to store energy in hydraulic systems, e.g. to cover dynamic energy demand. The storage medium is compressed gas. Especially in mobile applications, space and weight reduction by smaller and lighter components is very important. Thus, the quality, measured in acceptance and effort, is increased, cf. 1.9. However, with hydraulic accumulators there are two opposing dependencies:

(i)
The energy density of hydraulic accumulators depends on the excitation frequency due to heat transfer processes. At low frequencies hydraulic accumulators behave isothermally, whereas at high frequencies the state change is adiabatic. The transition frequency between isothermal and adiabatic behaviour is inversely proportional to the accumulator volume (more precisely, it is proportional to the specific surface); therefore, large volume and isothermal behaviour are mutually exclusive.

(ii)
The energy content of the hydraulic accumulators depends on the volume of the accumulator and thus on the size [93, 131]. Hence, energy density and energy content of conventional hydraulic accumulators cannot be maximised at the same time.
To overcome this limitation, different physical effects can be considered. One of these effects is adsorption, i.e. the adherence of gas molecules to the surface of a porous material (the adsorbent), which was proposed in [126, 131]. The idea behind filling hydraulic accumulators with an adsorbent material like activated charcoal is that adsorption acts as an additional gas storage capacity. In addition, gas molecules interact with the adsorbent and therefore lose a translatory degree of freedom during adsorption. The kinetic energy of adsorbed molecules is consequently lower than that of free molecules, and energy in the form of heat has to be released during adsorption (heat of adsorption \(E_\mathrm {A}\)). Adsorption is consequently a heat source [113]. The interdependence of these effects makes it necessary to evaluate the usability of adsorption in reducing the size of hydraulic accumulators via suitable models. In the following, we show some challenges of adequately modelling hydraulic accumulators with adsorption and the potentially huge impact of model uncertainty on the outcome.
The increase of energy density in hydraulic accumulators is equivalent to a stiffness reduction. Figure 4.31 illustrates this connection. The volume change work performed on a gas volume corresponds to the shaded areas in Fig. 4.31. It is limited by the upper working pressure \(p_1\), starting from the mean working pressure \(p_0\). A lower stiffness of the hydraulic accumulator is reflected by a lower gradient in the pV diagram. With the same upper working pressure \(p_1\), a lower stiffness leads to a higher compression from the average working volume \(V_0\) to \(V_1\) instead of \(V_1'\). Thereby more volume change work can be carried out and thus more energy can be stored.
The intended function of the adsorbent material for reducing the stiffness was originally thought to be the additional gas storage capacity. With this in mind, a simple model was analysed in the two publications [126, 131]. For completeness, the model from the two publications and some results are presented below.
Compared to the frequencies found in the application of hydraulic accumulators, the characteristic time of adsorption is much shorter. Consequently, adsorption was modelled as an equilibrium process. The number of adsorbed molecules q in mol depends on the pressure p in the accumulator and the mass of adsorbent \(m_\mathrm {ads}\). For small deviations from equilibrium conditions, the linear Henry approximation with Henry constant H is valid
For the gas we assume the ideal behaviour
to hold, where \(\varrho \) is the gas density, R is the specific gas constant and T is the absolute temperature. The equations of mass and energy conservation for the hydraulic accumulator result in
where V is the volume of the accumulator, M is the molar mass of the gas, and \(c_v\) and \(c_p\) are the specific heat capacities of the gas. The heat transfer to the surrounding gas with temperature \(T_u\) is modelled with the heat transfer coefficient \(\alpha \) and the surface area of heat transfer A. All parameters for the hydraulic accumulator (\(V_0\), \(p_0\), \(\alpha \), \(T_u\), A, \(\hat{V}\)) were chosen to represent typical accumulators found in the literature [93]. All adsorption parameters, namely the isosteric heat of adsorption \(E_\mathrm {A}\) and the Henry coefficient H, were estimated for nitrogen as described in [111].
In our case the accumulator volume V is changed dynamically, denoted by
where the index 0 denotes the precharged average working state of the accumulator.
Both energy density and energy content depend on the change of the pressure p with changing volume V, i.e. the stiffness
and, for comparison purposes, are non-dimensionalised with
For dynamic applications, the stiffness of the accumulator as a function of the loading frequency \(\Omega = 2 \pi f\) is of interest. Therefore, the frequency response of the stiffness is shown in Fig. 4.32 (white-filled circles). Comparing the frequency response to the response of a model without adsorption (light grey curve), a stiffness reduction in the isothermal range and a stiffness increase in the adiabatic range are visible [126, 131].
Measurements on a similar system, however, showed a stiffness reduction over the whole frequency range [29]. This deviation from reality is a sign of model uncertainty (cf. Sect. 2.2). Consequently, the assumptions of the model were revisited and the assumption of temperature independence of the adsorption was given up. In the updated model, the number of adsorbed moles is a function of pressure and temperature. For the temperature dependency of the Henry coefficient H(T), the following exponential Arrhenius relation can be assumed [113]
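A temperature-dependent Henry coefficient of Arrhenius type can be sketched as follows; the concrete form `H0 * exp(E_A / (R_M * T))` and all numerical parameters are assumptions for illustration, not the chapter's calibrated values:

```python
import math

R_M = 8.314  # universal gas constant in J/(mol K)

def henry(T, H0, E_A):
    """Arrhenius-type temperature dependence of the Henry coefficient.
    The form H(T) = H0 * exp(E_A / (R_M * T)) and the parameters H0, E_A
    are assumptions standing in for the chapter's relation [113]."""
    return H0 * math.exp(E_A / (R_M * T))

def loading(p, T, m_ads, H0, E_A):
    """Equilibrium loading q in mol under the linear Henry approximation,
    here taken as q = H(T) * p * m_ads (sketch)."""
    return henry(T, H0, E_A) * p * m_ads
```

With this form, the loading increases with pressure but decreases with temperature, so a temperature rise during compression drives desorption, which is exactly the mechanism discussed below.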
The resulting stiffness from numerical simulation of the full nonlinear equations can be seen in Fig. 4.32 (black dots). It shows a stiffness reduction in the whole frequency range.
To find the reason for the stiffness reduction, a parameter variation of the linearised new model was carried out in the adiabatic frequency range. Varying the parameters \(H_0\) and \(E_A\) (cf. Fig. 4.33) shows that the stiffness in the adiabatic range is mainly influenced by \(E_A\). This indicates that the magnitude of the adsorption enthalpy is more significant for the stiffness reduction than the process of adsorption itself.
To examine this issue further, a sensitivity analysis of the adsorption equilibrium was carried out. The results show that the sensitivity of the equilibrium loading with respect to temperature T is greater than that with respect to pressure, i.e.
Since \({\partial q}/{\partial T}\) dominates \({\partial q}/{\partial p}\), the number of adsorbed molecules decreases during compression: the rising temperature drives desorption. Both results therefore suggest that the stiffness in the adiabatic range is reduced by a lower increase of temperature, since the heat of adsorption \(E_A\) is drawn from the gas for desorption. The pressure and temperature rise during compression are thus diminished. In other words: in contrast to the original assumptions, the adsorptive material is an additional mass source and a heat sink instead of being a mass sink and heat source.
This is a totally different effect than originally intended and therefore demonstrates a large model uncertainty due to the omission of relevant effects. The discovery of this unexpected behaviour was only possible by comparing results from related areas with the model in early stages of the product development process. Inspired by these results, the model uncertainty, i.e. the relevant but ignored reality, was identified (cf. Sect. 1.3). This emphasises the large effect model uncertainty can have on the results, especially in systems with interdependent physical effects.
4.3.6 Uncertainty Scaling—Propagation from a Real Model to a Full-Scale System
Models may be mathematical, but they may also be physical, i.e. scaled real models representing a full-scale component or system. The real model is usually scaled in size or material. Geometrically scaled models are common in architecture. In mechanical engineering, scaled models are equally gaining relevance. In agile development, rapid prototyping is increasingly used, resulting in scaled real models.
Figure 4.34 shows two methods to predict the functionality of the fullscale component taking the example of a buckling beam: firstly, scaling from real model measurements; secondly, predicting the function of the fullscale prototype by means of a cyber model. This cyber model is a mathematical model of the component, e.g. a finite element model.
Here, we focus on the first method, the scaling of the prototype's function from model measurements. Compared to the cyber model, the advantage of the methodology presented here is that there is no need to consider the uncertainty of mathematical modelling, see Sect. 2.2. So far it has remained an open question how to scale the uncertainty in shape and measurement of the physical model test to the full-scale component.
State of the art for scaling are the four steps: (i) produce a scaled physical model, (ii) measure, (iii) non-dimensionalise and (iv) scale, see Fig. 4.35. When mastering uncertainty, it is no longer sufficient to only take the parameter uncertainty of a real model into account. In addition, the uncertainty must be scaled in a fifth process step, see Fig. 4.35. In this subsection we introduce a new methodology to propagate the uncertainty from a physical model to the real prototype. The beam and its buckling load are used as an application-related example, the buckling load being an important predefined functional restriction \(g\le 0\) of a load-carrying structure.
The following subsections describe the procedure of dimensional analysis, scaling and the newly introduced propagation of scaling uncertainty. We refer to the application of uncertainty scaling according to Vergé et al. [156].
Dimensional analysis
The following recap of dimensional analysis is based on Spurk [148]. A system function g and/or quality F is prescribed by n dimensional physical measures \(p_{j}, j = 1,\ldots , n\). The unit of each physical measure \(p_{j}\) is given as a monomial of the \(i = 1,\ldots , m\) base units \(P_i\). The dimension of the measure is
The matrix \(A=\left( a_{ij}\right) _{n,m}\) is the dimension matrix being central in dimensional analysis. The coefficients \(a_{ij}\) are the exponents of the ith base unit for the jth physical measure.
As a consequence of Bridgman's postulate [24], the relation of the \(p_{j}, j = 1,\ldots , n\), is equivalent to a relation of \(d\) dimensionless measures \(\Pi _{r}\), \(r = 1,\ldots , d\). Each \(\Pi _{r}\) is a monomial of the \(j = 1,\ldots , n\) physical measures.
The demand \([\Pi _{r} ]\overset{!}{=} 1\) for a truly relative quantity yields
This is only satisfied for
There are d linearly independent solutions of this linear system of equations. From linear algebra we know that \(d = n - \mathrm{rg}(A)\), where \(\mathrm{rg}(A)\) is the rank of the dimension matrix \(A=\left( a_{ij}\right) _{n,m}\).
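The count \(d = n - \mathrm{rg}(A)\) is easy to compute numerically. The sketch below uses a deliberately simplified set of measures (buckling load, length, Young's modulus), which is not necessarily the set used in the chapter:

```python
import numpy as np

# Dimension matrix A = (a_ij): rows are the physical measures, columns the
# exponents of the base units (kg, m, s). Simplified illustrative choice.
A = np.array([
    [1,  1, -2],   # buckling load F_c : kg m s^-2
    [0,  1,  0],   # length l          : m
    [1, -1, -2],   # Young's modulus E : kg m^-1 s^-2
])

n = A.shape[0]
d = n - np.linalg.matrix_rank(A)   # number of dimensionless products
# Here d = 1, corresponding to a single product such as Pi = F_c / (E l^2).
```

The kg and s columns are linearly dependent for this choice of measures, so the rank is 2 and exactly one dimensionless product remains.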
As an illustrative example we look for the buckling load of a beam. The analytic, i.e. mathematical, model goes back to Euler [43]; this analytic model is not in focus here. The buckling beam in Fig. 4.34 with fixed-free clamping is assumed to be a cylindrical beam of circular cross-section with nominal diameter D, length l, Young's modulus E and second moment of area I. For predicting the demanded buckling load \(F_\mathrm {c}\) of the full-scale component we use the measured buckling load \(F_\mathrm {c}'\) of the physical scaled model.
For the system there is only one dimensional product
Scaling
Scaling is used to predict the function g and quality F of the full-scale component. Not only geometric quantities but also other physical quantities, such as the buckling load \(F_\mathrm {c}\), can be scaled. The physical properties of the physical model \(p_j'\) (values of our physical model are marked by a prime) correlate with the full-scale values \(p_j\) by
with the scaling factors \(M_{j}\). If the dimensionless products of a real physical and a fullscale model are equal, both are said to be similar [148]:
If there is equality of all dimensionless products, we speak of complete similarity. With Eq. (4.44) and (4.45) we demand
for complete similarity.
For the beam we assume complete similarity in the dimensionless product \(\Pi \) given in Eq. (4.43). Hence, using Eq. (4.46) we get the scaling factor \(M_{F_\mathrm {c}}\) of the critical buckling load \(F_\mathrm {c}\)
The use of such a scaling law is straightforward: usually the geometric scaling factors \(M_{D}\) and \(M_l\) are known. The same is true for the scaling factor \(M_{E}\) for the Young's modulus. Hence, the scaling law helps to predict the full-scale function \(F_\mathrm {c}\) from the measured model function \(F_\mathrm {c}'\).
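The application of the scaling law can be sketched as follows. For an Euler beam of circular cross-section, \(F_\mathrm{c} \propto E D^4 / l^2\) (with \(I = \pi D^4/64\)), which gives the scaling factor used here; this is an assumed concrete instance of Eq. (4.47):

```python
def scale_buckling_load(F_c_model, M_E, M_D, M_l):
    """Predict the full-scale buckling load from the model measurement.

    For an Euler beam of circular cross-section, F_c ~ E D^4 / l^2,
    so M_Fc = M_E * M_D**4 / M_l**2 -- an assumed concrete instance of
    the scaling law; the chapter's Eq. (4.47) gives the exact factor.
    """
    M_Fc = M_E * M_D**4 / M_l**2
    return M_Fc * F_c_model

# Example: geometry scaled up by 2 in diameter and length, same material
# (M_E = 1); a hypothetical model measurement of 1000 N scales to 4000 N.
F_full = scale_buckling_load(1000.0, 1.0, 2.0, 2.0)
```

Because the diameter enters with the fourth power and the length with the inverse square, geometric scaling affects the predicted buckling load much more strongly than material scaling.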
Uncertainty scaling
In order to take uncertainty into account, the true value \({p}_{j}\) is given as the combination of the nominal value \({\overline{p}}_{j}\) and the tolerance range \(\delta p_{j}\) for incertitude, cf. Sect. 2.3:
With the definition of relative uncertainty \(U_{j}: = \delta p_{j}/{\overline{p}}_{j}\), the true value is
The same can be applied to the dimensionless products
When considering uncertainty, the product of the scaling factors reads
Since we assume complete similarity of the dimensionless products, \(\prod _{j = 1}^n{{\overline{M}}_{j}^{k_{rj}} = 1}\) applies. Hence, we obtain
With Eq. (4.51) the uncertainty of the function and/or quality of a physical system can be calculated. The equation only needs to be solved for the uncertainty sought.
The manufacturing of the physical model of the full-scale beam entails production tolerances. For uncertainty quantification, we refer to the ISO 2768-1 standard [33], which specifies general tolerances for components not specified in greater detail. Here, the incertitude of the distribution is covered by intervals. For the physical model we choose a length \(l' = 200\,\text {mm}\) and a diameter \(D' = 10\,\text {mm}\). We assume that the material used is not changed; hence, there is complete similarity of the material, \(M_{E} = 1\). The uncertainty of the critical buckling load of the model, gained by model measurements, is set to \({U}_{F_\mathrm {c}'}=0.085\) in our example. With Eq. (4.51) the uncertainty of the critical buckling load results in:
Equation (4.52) shows that the ratios of the uncertainty of the model and full-scale parameters influence the calculated uncertainty of the full-scale function. These ratios are multiplied by the term for the measured uncertainty of the model function. The uncertainty scaling is illustrated in Fig. 4.36. A geometric scaling factor \({M}_D = {M}_l = 1\) represents our real physical model. For scaling factors greater than one (upscaling), the relative uncertainty decreases, since higher precision is possible in the manufacturing of large diameters and lengths. For downscaling, i.e. geometric scaling factors smaller than one, the production uncertainty increases strongly. This affects the uncertainty of the critical buckling load \(U_{F_\mathrm {c}}\) of the beam, which varies over a wider range. Since the tolerances are defined for specific parameter regions, we obtain a discontinuous function for the uncertainties \(U_D\) and \(U_l\) and thus for \(U_{F_\mathrm {c}}\).
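Since Eqs. (4.51) and (4.52) are not reproduced in this excerpt, the following sketch assumes a possible form of the uncertainty scaling relation, obtained by inserting \(p_j = \overline{p}_j (1+U_j)\) into the similarity condition with the exponents of \(\Pi = F_\mathrm{c} l^2/(E D^4)\); the function and its argument order are illustrative, not the chapter's implementation.

```python
def scaled_uncertainty(U_Fc_m, U_E, U_E_m, U_D, U_D_m, U_l, U_l_m):
    """Possible form of the uncertainty scaling relation: inserting
    p_j = p̄_j (1 + U_j) into Pi = Pi' with Pi = F_c * l**2 / (E * D**4)
    and cancelling the nominal scaling factors gives
      1 + U_Fc = (1 + U_Fc') * (1+U_E)/(1+U_E')
                 * ((1+U_D)/(1+U_D'))**4 * ((1+U_l')/(1+U_l))**2.
    Arguments with suffix _m are the (primed) model values."""
    return ((1 + U_Fc_m)
            * (1 + U_E) / (1 + U_E_m)
            * ((1 + U_D) / (1 + U_D_m)) ** 4
            * ((1 + U_l_m) / (1 + U_l)) ** 2) - 1
```

If model and full scale share identical relative uncertainties, the ratios cancel and the full-scale uncertainty equals the measured model uncertainty; a larger full-scale diameter uncertainty inflates the result through the fourth-power term.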
Conclusion
The analysis has shown that there is a strong need for uncertainty scaling. In the example of the buckling load, the relative uncertainty of the predicted function, the buckling load \(F_\mathrm {c}\), increases when scaling down. This has to be considered in the design, as it may otherwise lead to unforeseen failure due to the increased uncertainty.
4.3.7 Improvement of Surrogate Models Using Observed Data
Computer models of technical systems play an increasingly important role in the design and construction of complex technical systems. Implemented as computer code, such models enable the use of so-called computer experiments, i.e. an experiment with the technical system is simulated via a computer program using the underlying mathematical model. An overview of the design and analysis of computer experiments can be found in [138] or [44]. In general, these computer models are imperfect in the sense that they do not predict reality perfectly, as discussed in Sect. 2.2. There are several reasons for this, e.g. missing knowledge of the underlying physical dependencies, or approximations made to reduce complexity. A typical example is neglecting friction or assuming it to be constant. Furthermore, uncertainty quantification often requires a large number of computer simulations of an experiment with the technical system, which can be time-consuming, since these computer simulations are typically computationally expensive. A solution to circumvent this problem is to use a so-called surrogate model. There is a vast variety of literature on methods for estimating a surrogate model. For example, [25, 31, 90] used quadratic response surfaces, [22, 32, 77] investigated surrogate models in the context of support vector machines, [121] concentrated on neural networks, [17, 86] used kriging and [160] used Gaussian processes. In the following, a method is described which is able to circumvent the challenges of the imperfection and the computational expense of the computer model by estimating an improved surrogate model, which has a smaller prediction error and is faster to compute than the computer model, as shown in [62, 88, 91]. Furthermore, the improved surrogate model can be used to quantify and analyse model uncertainty, as shown in [164]. According to Fig. 3.1, the method is applied in the product or system design phase (A).
Mathematical setting
The method described below is based on the following mathematical setting: let (X, Y), \((X_1,Y_1)\), \((X_2,Y_2)\), ... be independent and identically distributed random variables with values in \(\mathbb {R}^d\times \mathbb {R}\), and let \(m:\mathbb {R}^d\rightarrow \mathbb {R}\) be a measurable function. Here X describes the (random) inputs of an experiment with the technical system, Y the outcome of the experiment and m is a computer simulation of the experiment with the technical system; thus we use m(X) as an approximation of Y. Given the data
the aim is to estimate an improved surrogate model \( \hat{m}_{n}:\mathbb {R}^d\rightarrow \mathbb {R}\) of the computer simulation m. Note that the method implicitly assumes that the distribution \( \mathbf{P }_{X} \) of X is either known or that a large quantity of input values is available, i.e. stochastic uncertainty as described in Sect. 1.6 occurs. In an application, this is often not the case. How to circumvent this problem is described in Sect. 4.3.8.
Method
In the following, a method to estimate an improved surrogate model based on experimental data and a computer simulation is described. The method is based on the proposed estimators in [62, 88, 91, 164].
We start by estimating a surrogate model \(\hat{m}_{L_{n}}\) of the computer simulation m. There is a vast variety of methods to do so (cf. [44, 138]). Here, (penalised) least-squares estimates are used, defined by
where \( \mathcal{F}\) is a set of functions, \( (X_{1},m(X_{1})),\ldots ,(X_{L_{n}},m(X_{L_{n}})) \) is the set of input values evaluated with the computer model m of size \( L_{n} \in \mathbb {N}\) and \( pen_{n}^2(\cdot ) \) is a penalty term which usually penalises the ‘roughness’ of the function and which is non-negative for each \( f \in \mathcal{F}\), i.e. \( pen_{n}^2(f) \ge 0 \). If the input dimension is at most three, smoothing spline estimates can be used for \( \mathcal{F}\), as shown in [91]. For larger input dimensions, neural network estimates can be applied as in [62] or [88]. Of course, there exist other estimator function classes, as discussed above.
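A minimal numerical sketch of such a penalised least-squares surrogate fit is given below, using a ridge-penalised polynomial class as a simple stand-in for the smoothing-spline or neural-network classes named above; the computer model m and all values are hypothetical.

```python
import numpy as np

def penalised_ls_fit(x, y, degree=5, lam=1e-6):
    """Penalised least-squares estimate over a polynomial function
    class (a stand-in for the function classes F mentioned in the
    text): minimises sum_i (f(x_i) - y_i)**2 + lam * ||coefficients||**2
    and returns the fitted function as a callable."""
    A = np.vander(x, degree + 1)
    coeffs = np.linalg.solve(A.T @ A + lam * np.eye(degree + 1), A.T @ y)
    return lambda t: np.vander(np.atleast_1d(t), degree + 1) @ coeffs

# Hypothetical computer model m, evaluated on L_n = 50 input values
m = np.sin
x = np.linspace(0.0, 3.0, 50)
m_hat = penalised_ls_fit(x, m(x))   # surrogate of the computer model
```

The surrogate is then evaluated instead of the (expensive) computer model; for this smooth one-dimensional example the fit error is far below the model uncertainty discussed next.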
As discussed in Sect. 2.2, usually every computer model has an inherent model error. To circumvent this problem, an estimator of the residuals is constructed by first calculating the residuals of the surrogate model with respect to the experimental data
and then applying a (penalised) least-squares estimate to this sample, defined by
where \( \bar{\mathcal{F}} \) is a set of functions. Finally, the improved surrogate model is a composition of the estimators above, defined by
In case only a small sample of experimental data is available, the estimator of the residuals (4.55) usually does not yield satisfying results. In this case, if an additional independent sample of input values \( X_{n+L_n+1},\ldots , X_{n+L_n+N_{n}} \) of size \( N_{n} \in \mathbb {N}\) is available, one can use a weighted (penalised) least-squares estimate instead of (4.56), defined by
where \( w^{(n)} \in [0,1] \) is a weighting term, which should be chosen data-dependent. Here, adding the weighted mean square of the Euclidean norm of the vector \( (f(X_{n+L_n+1}),\ldots , f(X_{n+L_n+N_{n}})) \) of function values of the additional sample is used as a regularisation.
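The two-stage construction — surrogate plus estimated residuals — can be sketched as follows; the computer model with a constant model error and the degree-one residual fit are hypothetical illustrations, not the chapter's estimators.

```python
import numpy as np

def improved_surrogate(m_hat, X_exp, Y_exp, fit):
    """Improved surrogate as a composition of two estimators: fit an
    estimator to the residuals Y_i - m_hat(X_i) of the experimental
    data and add it to the surrogate m_hat of the computer model.
    `fit(x, y)` is any (penalised) least-squares routine returning a
    callable."""
    r_hat = fit(X_exp, Y_exp - m_hat(X_exp))   # estimate of the model error
    return lambda t: m_hat(t) + r_hat(t)

# Hypothetical example: the computer model carries a constant bias of +0.5
truth = np.sin                            # stands in for the real experiment
m_hat = lambda x: np.sin(x) + 0.5         # imperfect computer model/surrogate
X_exp = np.linspace(0.0, 3.0, 10)         # small experimental sample
fit = lambda x, y: np.poly1d(np.polyfit(x, y, 1))
m_improved = improved_surrogate(m_hat, X_exp, truth(X_exp), fit)
```

Because the residuals of this toy model are constant, the residual estimator recovers the bias exactly and the improved surrogate matches the experimental behaviour.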
Application
In order to demonstrate the usefulness of the approach described above, we apply it to the drop tests with the MAFDS described in Sect. 3.6.1; here we only consider the drop height as input variable and neglect the additional payload, as in [91]. The system output is the maximum relative compression \( z_{r,max} \). For \( \mathcal{F}\) and \( \bar{\mathcal{F}} \) we use a smoothing spline estimator as implemented in the MATLAB routine csaps(). A smoothing spline estimator depends on an additional smoothing parameter. In the estimation of \( \hat{m}_{L_{n}} \) this smoothing parameter is chosen by generalised cross-validation, cf. [157]. The smoothing parameter and the weighting parameter \( w^{(n)} \) in the estimation of \(\hat{m}_{n}^{\epsilon } \) are chosen by a k-fold cross-validation, cf. [68], where the smoothing parameter is chosen from the fixed set \( \{ 2^{l} : l \in \{-8,\ldots ,-1\} \}\) and the weighting parameter is chosen from the set \( \{0,0.1,\ldots ,1\} \).
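The data-dependent parameter choice can be illustrated by a generic k-fold cross-validation sketch; csaps() itself is a MATLAB routine, so a hypothetical polynomial-degree selection in Python stands in here.

```python
import numpy as np

def k_fold_cv_select(x, y, fit, params, k=5, seed=0):
    """Select the parameter with the smallest k-fold cross-validated
    squared error; `fit(x, y, p)` returns a callable predictor. This is
    a generic sketch of the parameter selection described in the text,
    not the actual MATLAB implementation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)
    def cv_error(p):
        err = 0.0
        for fold in folds:
            train = np.setdiff1d(idx, fold)       # indices not in this fold
            pred = fit(x[train], y[train], p)
            err += np.mean((pred(x[fold]) - y[fold]) ** 2)
        return err / k
    return min(params, key=cv_error)

# Hypothetical example: choose a polynomial degree on noisy linear data;
# cross-validation should prefer the simpler model
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 40)
y = 2.0 * x + 1.0 + 0.5 * rng.standard_normal(40)
fit = lambda xs, ys, d: np.poly1d(np.polyfit(xs, ys, d))
best_degree = k_fold_cv_select(x, y, fit, params=[1, 9])
```

The held-out folds penalise the overfitting high-degree model, which is the same mechanism used to pick the smoothing and weighting parameters above.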
The result is illustrated in Fig. 4.37. To conclude, we observe that model uncertainty occurs. The computer model overestimates the outcome of the experiments, whereas the improved surrogate model fits the experimental data quite accurately.
4.3.8 Uncertainty Quantification with Estimated Distribution of Input Parameters
Methods of uncertainty quantification are frequently applied in an experimental setting. This serves to quantify the uncertainty in the outcome Y of an experiment with a technical system, depending on an input X. This would be easy if a large quantity of experimental data were available, but in most cases running experiments is expensive and time-consuming. In order to circumvent this problem, one can use knowledge (e.g. physical knowledge) of the experiment with the technical system to implement a computer model m and use it to generate a data set of computer experiments. In this context, the input-output tuple (X, Y) is modelled as an \( \mathbb {R}^d\times \mathbb {R}\)-valued random variable, i.e. the experiment depends on a d-dimensional real-valued input and has a real-valued output. Then, if the input distribution \( \mathbf{P }_{X} \) is known, one can generate realisations of the input X and evaluate them with the computer model m to generate the data set
of computer experiments. This data set can then be used as an approximation of reality to apply a method of uncertainty quantification, for example see Sect. 5.2.6. In the case that the computer model does not fit reality and a sample of experimental data is available, one can also use the method described in Sect. 4.3.7 to construct an improved surrogate model, which then can be used instead of m.
Frequently, we encounter the situation that the distribution \( \mathbf{P }_{X} \) is unknown and instead only a (rather small) set of experimental data is available. In the following, a method to estimate the probability density function \( g :\mathbb {R}\rightarrow \mathbb {R}\) of Y based on the set of experimental data and a computer model \( m :\mathbb {R}^d\rightarrow \mathbb {R}\) is described. Comparing the probability density function estimated by this method with an estimate of the probability density function based on the computer model enables the detection of model uncertainty. According to Fig. 3.1, the method is applied in the product or system design phase (A).
Mathematical setting
The method described in the following is based on the subsequent mathematical setting: let (X, Y), \((X_1,Y_1)\), \((X_2,Y_2)\), ... be independent and identically distributed random variables with values in \(\mathbb {R}^d\times \mathbb {R}\), and let \(m:\mathbb {R}^d\rightarrow \mathbb {R}\) be a measurable function, i.e. stochastic uncertainty as described in Sect. 1.6 occurs. Here Y describes the outcome of an experiment with the technical system, X the (random) inputs of the experiment and m is a computer model of the experiment with the technical system; thus we use m(X) as an approximation of Y. Given the data
the aim is to estimate the probability density function \( g :\mathbb {R}\rightarrow \mathbb {R}\) of Y. Note that to apply the method described below, it will be necessary that the evaluation of m on specific input values is possible.
Method
The method described in the following is based on [88], which is an extension of [62, 91]. In the following, we will assume that X is multivariate normally distributed to estimate its distribution and generate a sample based on this estimated input distribution. An overview of methods to generate a data set based on a specific class of distribution can be found in [35].
In order to estimate the parameters of the distribution \( \mathbf{P }_{X} \) of X, a maximum likelihood estimate based on the data (4.60) defined by
and
is used, where \( X_{k}^{(i)} \) denotes the i-th component of the d-dimensional random variable \( X_{k} \). Alternatively, one can use the unbiased version of (4.62) defined by
Given these estimators of the parameters \( \mu \) and \( \Sigma \) of the input distribution \( \mathbf{P }_{X} \), a sample of size \( N_{n} = N_{n,1} +N_{n,2} \in \mathbb {N}\) can be generated which is independent and multivariate normally distributed with mean \( \hat{\mu } \) and covariance \( \hat{\Sigma }\). To this end, we first generate an independent sample \( Z_{1},\ldots ,Z_{N_{n}}\) of d-dimensional vectors, where for each vector the components are independent and standard normally distributed, and set for every \( i = 1,\ldots ,N_{n} \)
where \( \hat{O} \) and \( \hat{\Lambda } \) are defined by the eigendecomposition
of \(\hat{\Sigma }\). Here \( \hat{\Lambda } = {\text {diag}}(\hat{\lambda }_{1},\ldots ,\hat{\lambda }_{d}) \) is a diagonal matrix consisting of eigenvalues of \( \hat{\Sigma } \) and \( \hat{O} \) is an orthogonal matrix whose columns are eigenvectors of \( \hat{\Sigma } \).
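The estimation of \(\hat{\mu}\) and \(\hat{\Sigma}\) and the subsequent sampling via the eigendecomposition can be sketched as a direct transcription of the formulas above; the clipping of tiny negative eigenvalues is an added numerical safeguard, not part of the method.

```python
import numpy as np

def ml_estimates(X):
    """Maximum-likelihood estimates of mean and covariance of a
    multivariate normal sample (rows of X are observations); the ML
    covariance divides by n, the unbiased variant would use n - 1."""
    n = X.shape[0]
    mu = X.mean(axis=0)
    c = X - mu
    return mu, c.T @ c / n

def sample_mvn(mu, sigma, n, seed=0):
    """Generate n samples X_i = mu + O diag(sqrt(lambda)) Z_i with
    standard normal Z_i, using the eigendecomposition
    sigma = O diag(lambda) O^T of the (symmetric) covariance."""
    lam, O = np.linalg.eigh(sigma)
    Z = np.random.default_rng(seed).standard_normal((n, len(mu)))
    # row-wise: mu + O @ diag(sqrt(lam)) @ z_i
    return mu + (Z * np.sqrt(np.clip(lam, 0.0, None))) @ O.T
```

Re-estimating the parameters from a large generated sample recovers the prescribed mean and covariance up to sampling error.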
To estimate an improved surrogate model \( \hat{m}_{n} \) of m we use the method described in Sect. 4.3.7, with a few minor changes. To estimate the surrogate model \( \hat{m}_{L_{n}} \) of m we first generate the data set
of size \( L_{n} \in \mathbb {N}\), where the values in this set are independent and uniformly distributed on the centred cube \( B_{n} := [-c \cdot \log L_{n},\, c \cdot \log L_{n}]^d \) for some suitably chosen constant \( c > 0 \). This set is then used to construct the surrogate model \( \hat{m}_{L_{n}} \) of m, i.e. we define the estimator by
where \( \mathcal{F}\) is a set of functions. In case the data set (4.60) is sufficiently large, the estimator of the residuals can be defined as estimate (4.56) in Sect. 4.3.7. Otherwise we use a modification of estimate (4.58) from Sect. 4.3.7, where we replace the sample of additional input data by the first \( N_{n,1} \) values of the generated input data, i.e. the estimator is defined by
where \( \bar{\mathcal{F}} \) is a set of functions.
The improved surrogate model is constructed as in Sect. 4.3.7, i.e. it is defined by
In order to estimate the probability density function g of Y, the kernel density estimator of [123, 135] is applied to the sample \( \hat{m}_{n}(\bar{X}_{N_{n,1}+1}) \), ..., \( \hat{m}_{n}(\bar{X}_{N_{n,1}+N_{n,2}}) \), i.e. it is defined by
for some bandwidth \( h_{N_{n,2}} > 0 \) and some kernel \( K:\mathbb {R}\rightarrow \mathbb {R}\), which is usually chosen as a symmetric and bounded density, e.g. the Gaussian kernel \( K(t) = \frac{1}{\sqrt{2\pi }} \exp (-\frac{1}{2} t^2)\).
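Such a kernel density estimate with the Gaussian kernel can be sketched as follows; sample, bandwidth and evaluation point are hypothetical.

```python
import numpy as np

def kernel_density_estimate(sample, h, t):
    """Rosenblatt-Parzen kernel density estimate with Gaussian kernel
    K(u) = exp(-u**2/2)/sqrt(2*pi) and bandwidth h > 0, evaluated at
    the points t: g_hat(t) = (1/(n*h)) * sum_i K((t - s_i)/h)."""
    u = (np.atleast_1d(t)[:, None] - np.asarray(sample)[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(sample) * h * np.sqrt(2.0 * np.pi))

# Hypothetical check: for a standard normal sample the estimate at 0
# should be close to the true density value 1/sqrt(2*pi) ≈ 0.399
sample = np.random.default_rng(0).standard_normal(100000)
g_hat0 = kernel_density_estimate(sample, h=0.2, t=np.array([0.0]))[0]
```

The bandwidth h controls the bias-variance trade-off of the estimate, just as the smoothing parameters do for the surrogate models above.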
Application
As an example we consider a lateral vibration attenuation system with piezoelastic supports. A visualisation of the technical system can be found in [103, Fig. 1].
This system consists of a beam with circular cross-section embedded in two piezoelastic supports A and B. Support A is used for lateral beam vibration excitation and support B for lateral beam vibration attenuation, as proposed in [61]. The two piezoelastic supports A and B are located at the beam’s ends; each consists of one elastic membrane-like spring element made of spring steel, two piezoelectric stack transducers arranged orthogonally to each other and mechanically prestressed with disc springs, as well as a relatively stiff axial extension made of hardened steel that connects the piezoelectric transducers with the beam. For vibration attenuation in support B, optimally tuned electrical shunt circuits are connected to the piezoelectric transducers [63].
Our aim is to quantify uncertainty, i.e. to estimate the probability density function of the maximal amplitude of the vibration occurring in an experiment with this attenuation system. Five parameters vary during the construction of the attenuation system and influence the maximal vibration amplitude: the lateral stiffnesses \( k_{lat,y} \) and \( k_{lat,z} \) in the directions y and z, the rotatory stiffnesses \(k_{rot,y}\) and \(k_{rot,z}\) in the directions y and z, and the height of the membrane \(h_x\). In our setting these five values are the input X of the experiment with the technical system. A computer model (above denoted by m) is available with which we can compute an approximation m(X) of the maximal vibration amplitude Y for a corresponding input value X. To apply the density estimator (4.70), we measured the corresponding parameters for ten physically built systems; the resulting data is given in Table 4.4.
Since the parameters vary in scale, it does not make sense to estimate the surrogate model \(\hat{m}_{L_{n}} \) on \( U_{i,n} \sim U([-c \cdot \log (L_{n}), c \cdot \log (L_{n})]^d)\). Instead we rescale the components of \( U_{i,n} \) so that for each component \(U_{i,n}^{(j)} \sim U([\hat{\mu }^{(j)} - 2 \cdot \sqrt{\hat{\sigma }_{jj}},\hat{\mu }^{(j)} + 2 \cdot \sqrt{\hat{\sigma }_{jj}}])\) holds.
We apply the density estimator (4.70) to the data and obtain the result shown in Fig. 4.38, where we compare it to a density estimator based on the surrogate model \(\hat{m}_{L_{n}}\). The result shows that the estimator based on the improved surrogate model fits the data better, i.e. the improved surrogate model predicts reality more accurately than the surrogate model; hence the density estimator is more accurate. Model uncertainty occurs, which leads to the conclusion that the computer model does not fit reality.
4.4 Representation and Visualisation of Uncertainty
Product development is a knowledge-intensive process in which, despite its uncertainty [42, 95, 106], designers define what the product has to achieve in the physical domain and how this has to be accomplished, potentially according to customer specifications, cf. Sect. 1.2. Based on these definitions, it is determined which tests are necessary, how and in which quantity the product must be manufactured and at what time it needs maintenance. Uncertainty may have negative effects on decisions, which can lead to oversizing, unfulfilled customer demands and unforeseen failures [74]. As pointed out by Anderl et al. [6], software applications used in the engineering context rarely consider uncertainty quantification, which partly explains the lack of awareness of uncertainty among designers. This section introduces our approach to overcome this issue by visualising uncertainty and its consequences for developers, together with the required digital representation of knowledge about uncertainty. This aims to support engineers and designers in better understanding uncertainty regarding product and process properties, and thus helps them to recognise, evaluate and analyse uncertainty in their designs (cf. Sect. 1.7).
A three-layer architecture comprising the representation, presentation and visualisation of uncertainty [6] is the basis for the approach introduced in this section. The representation layer is dedicated to the digital representation of data uncertainty with all of its subtypes (cf. Sect. 2.1). For this purpose, it uses an ontology-based information model, see Sect. 4.4.1. The presentation layer serves as an auxiliary layer for the visualisation and creates use-case-specific objects, which serve as an intermediate representation for visualising uncertainty. The concept of the uncertainty cloud (uCloud) enables a tangible presentation of geometric tolerance uncertainty. To this end, it creates a Euclidean space that describes the probability distribution of the existence of a physical part's body [6]. The visualisation layer uses the instances of the presentation layer and maps its objects onto the functionality of computer graphics, see Sect. 4.4.2. The MAFDS (see Sect. 3.6.1) serves as an application example for the outlined approach and its methods, see Sect. 4.4.3.
4.4.1 OntologyBased Information Model
For the purpose of identifying uncertainty in early stages of the product development process and thus enabling uncertainty management, information about all product life cycle phases is necessary. Therefore, a suitable model is required to digitally represent information about uncertainty. The ontology-based approach offers an appropriate conceptual space based on scientific knowledge about uncertainty. In addition, it provides high semantic value.
Ontologies are defined as “formal models of selected aspects of the real world” [67]. They digitally represent objects or assets and their relations for use in advanced applications of information and communication technology. Ontologies are designed for the specification of higher-order semantics to enable knowledge representation. Ontologies use triples to formalise information; each triple comprises a subject, a predicate and an object.
For authoring the ontology, we use a variant of the Web Ontology Language in version 2 (OWL 2). OWL 2 is standardised by the World Wide Web Consortium (W3C) and comprises three language variants of varying expressive power. Here, we chose the variant OWL 2 DL (Description Logics), since it comes with the greatest possible expressive power while maintaining the computational completeness and decidability necessary for inference and validity checking. OWL 2 supports serialisation using the Extensible Markup Language (XML), which enables the easy exchange of information. A further advantage of OWL 2 for dealing with uncertainty, and especially ignorance, is the Open World Assumption made by OWL 2, under which a statement can be true irrespective of whether it is known to be true [64].
Since ontologies are based on description logic, so-called inference machines can infer new knowledge based on already known information. In addition, they can be used to verify the integrity of the knowledge [7, 20]. An ontology comprises an Assertional Box (ABox) and a Terminological Box (TBox). The TBox formalises the knowledge about the concepts—also called classes—of the described domain, whereas the ABox contains the knowledge about the specific instances of these concepts in the domain.
This section describes the information model used for the exchange and visualisation of uncertainty in load-bearing systems; it forms the foundation for the methods described in Sects. 4.4.2 and 4.4.3 and thus contributes to the modelling of uncertainty in information technology. Figure 4.39 provides an overview of the ontology-based information model, named Collaborative Ontology-based Property Engineering System (COPE), which we have developed, with its major components [145]. It incorporates the property-driven development approach [162] and comprises the three life cycle phases development, production and usage (cf. Sects. 3.2 and 1.7). Data uncertainty is characterised by an uncertain property value and an uncertain relationship that specifies the effect of the uncertain value. The product model and a process model define the context of these two components. A major approach of the ontology is property and process classification. The following paragraphs describe four of the major components of the ontology-based information model COPE in more detail.
Uncertain property value
For the representation of uncertain property values, we have developed a partial model referred to as ‘Uncertainty Data Type’ (UDT) [146]. Its aim is to digitally represent the uncertainty of product and process properties to improve the interchangeability of these data types. The approach is based on the digital uncertainty representation introduced and discussed in Sect. 2.1, and it covers all three types of data uncertainty described there.
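The following dataclass sketches how such an uncertain property value might be represented digitally; the field names are illustrative and not the published UDT schema, and the distinction between the three subtypes of data uncertainty follows Sect. 2.1.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class UncertainPropertyValue:
    """Illustrative sketch in the spirit of the Uncertainty Data Type
    (UDT) [146]; names are hypothetical. A property carries a nominal
    value plus either a distribution (stochastic uncertainty), an
    interval (incertitude), or neither (ignorance)."""
    name: str
    nominal: float
    unit: str
    distribution: Optional[Tuple[str, float, float]] = None  # e.g. ("normal", mean, std)
    interval: Optional[Tuple[float, float]] = None           # tolerance bounds

    def uncertainty_kind(self) -> str:
        if self.distribution is not None:
            return "stochastic"
        if self.interval is not None:
            return "incertitude"
        return "ignorance"
```

Such a structure makes the subtype of data uncertainty explicit when property values are exchanged between tools.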
Uncertain relation
Multivalent directed relations represent the dependencies of uncertain causal connections in the ontology. This makes it possible to define distinct relationship types and to parametrise the relations individually. These relations can refer to both the nominal value and the distribution of the uncertain property [147].
Process model
Processes are highly important in the context of uncertainty, and therefore they are integrated into the main parts of the information model. Four values characterise a process. The name of the process describes its type semantically (e.g. drilling, landing). Appliances are resources that the process needs but does not consume (e.g. drill, light aircraft). Input and output represent the transformation achieved by the process (e.g. speed, load). The last value comprises the influencing factors; they are structured into disturbance, information, resources and user (e.g. temperature, energy, qualification).
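The four process characteristics can be sketched as a simple data structure; field names and the example instance are illustrative, not the ontology's exact terms.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Process:
    """Sketch of the four process characteristics described in the
    text: name, appliances, input/output and influencing factors
    (structured into disturbance, information, resources, user)."""
    name: str                      # semantic type, e.g. "drilling"
    appliances: List[str]          # resources needed but not consumed
    inputs: List[str]
    outputs: List[str]
    influencing_factors: Dict[str, List[str]] = field(default_factory=dict)

# Hypothetical instance for a production process
drilling = Process(
    name="drilling",
    appliances=["drill"],
    inputs=["feed rate", "spindle speed"],
    outputs=["hole diameter"],
    influencing_factors={"disturbance": ["temperature"],
                         "user": ["qualification"]},
)
```

In the ontology itself these fields correspond to classes and relations in the TBox, with concrete processes as ABox instances.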
Standardised terminologies are used as far as possible within the process model. For production, we use the classification given by DIN 8580 [34], which provides an overview of production processes, such as forming and drilling. In contrast, the usage processes depend on the product in use. For the application of load-bearing structures in mechanical engineering, such a standard is not available. Therefore, the developer must anticipate the potential processes of the product during the development phase and specify them in more detail later.
Product model integration
The information model is integrated into the product model for two reasons: it references the uncertainty in the integrated product model, and it is used to assign uncertain property values to parts of the product model. This approach enables unique identification throughout all life cycle phases and improves the usability of uncertainty information. Furthermore, the integration into the product model provides an appropriate basis for the visualisation of the uncertainty information (see Sect. 4.4.2) in the respective product context. The items reference entities of the Boundary Representation (BRep) to localise the uncertainty information.
The ontology-based information model extends the product model based on the ISO 10303-108 standard for the parameterisation and geometric boundary conditions of explicit product models of parts and assemblies. In this context, the ontology constitutes the TBox; specific CAD models and attached data constitute the ABox. The definition of uncertain geometric entities results in a geometrically underdetermined state in the ABox. Systems of equations cannot further characterise the relations between the geometric entities without reducing the degrees of freedom of the geometric entities and thereby removing the information about the geometric variation. The ABox describes the geometry and topology of the geometric entities and allows the characterisation of the solutions of the system of equations, which are algebraically identical but geometrically different.
For the processing of time-variant uncertainty information (see Sect. 3.4), we have developed a concept, with the corresponding implementation, for the bidirectional connection of a CAD system (Siemens NX) and a numerical linear equation system solver (Matlab). The ontology serves as a mediator between the two systems, so that the results of the ontology queries are applied in the CAD system as well as in Matlab. The representation of time-variant uncertainty extends the ABox and TBox representing design variants in the parametric product model. Furthermore, time-variant changes in the geometry of assembly components are also represented [168]. Figure 4.40a shows a graph-based visualisation of a small section of the information model. It shows individual points and their connections; circular symbols indicate concepts, and diamonds indicate specific instances. This example depicts a point with its three Cartesian coordinates and four points derived from it. The derived points represent possible corner points after production and after consideration of the uncertainty. The figure equally shows a small part of the class hierarchy. Figure 4.40b is a visualisation of the CAD model and a larger quantity of derived points for selected vertices. A designer interprets the selected geometry and decides whether the boundary conditions meet the requirements.
The automated generation of the TBox is based on the software OntoSTEP developed by the National Institute of Standards and Technology (NIST) [96]. This software tool has been extended for the extraction of product parameters for the generation of the instances. In this way, data sets concerning uncertainty and its distributions are integrated.
The ontology-based information model was also adapted to a specific domain [170]. Here, we extended the ontology for the application scenario Uncertainty Mode and Effects Analysis (UMEA) for human effects in aerospace. UMEA is an extension of the failure mode and effects analysis (FMEA) and was proposed by Engelhardt [41]. Besides our own extensions, domain-specific [8] and cross-domain (e.g. DOLCE UltraLite [110]) ontologies were used. Thus, we could confirm that the ontology-based information model can be contextualised and reused in further specialised use cases.
As a further extension of the ontologybased information model, the automatic extension of the knowledge base and the automatic classification of contradictory data were taken into account. The methods used for this purpose comprise ontology matching and inductive reasoning.
For inductive reasoning, methods of pattern recognition and clustering extend the reasoning process. Entities are classified with respect to similarity, with the result that new inferences become possible [163]. In consequence, however, this classification and the knowledge acquired are uncertain. Nevertheless, this knowledge enables the improvement of product development decisions. Therefore, the designer is provided with the inferences, including a measure of confidence. Ontology matching is applied to integrate knowledge from heterogeneous and distributed sources automatically. Thereby, analogies between two or more ontologies need to be identified and used to join the knowledge.
Domainspecific rules for ontology matching and inductive reasoning of axioms of geometric relationships are the core of the integration of both methods in the ontologybased information model. This enables the integration of methods to detect and control data and model uncertainty into the ontologybased information model.
We presented an ontology-based information model that combines domain-specific knowledge to support product developers. In addition, it provides a basis for further analyses and for the visualisation of the effects of uncertainty. We chose an ontology-based information model that is based on description logics and OWL 2. Thereby, the advantages of an expressive, descriptive language are combined with those of decidable formal semantics. In contrast to alternative forms of data representation, such as databases, ontologies not only allow data queries but also automated classification, validation of the integrity of data, and extension of the knowledge base by inference. Furthermore, due to the high semantic value, knowledge interpretation improves and the exchange of information is simplified. The ontology-based information model offers the functionality to store not only time-invariant but also time-variant information about uncertainty. Furthermore, instances can be generated, and ontology matching and inductive reasoning can extend the knowledge base automatically. The use of an ontology-based approach allows the information model to be extended further. The integration into a digital twin, for example, can enlarge the knowledge base and thus increase the quality of product development decisions, see Sect. 4.4.3.
4.4.2 Visualisation of Geometric Uncertainty in CAD Systems
The visualisation of geometric uncertainty comprises the graphical presentation of the statistical distribution of data obtained from measurements conducted during production and usage, utilising the functionality of computer graphics [47] to generate an appropriate appearance of uncertainty [6, 84, 154]. The following section introduces our approach to the visualisation of geometric uncertainty in CAD (computer-aided design) systems, focusing on stochastic data uncertainty associated with geometrical model parameters, see Sect. 2.1. The visualisation of uncertainty is part of the middle layer of the framework for mastering uncertainty introduced in Sect. 1.7 and is thus an important element within the analysis, quantification and evaluation of uncertainty in mechanical engineering.
Although the consideration of uncertainty associated with geometry is crucial during the design process, today’s CAD systems provide only a limited design-oriented view with functionalities to specify nominal geometry and geometric tolerances. There is still a lack of functionalities for the visualisation of geometric uncertainty [6]. The effect of the different geometric tolerances on the part (e.g. shape, dimensions, features, locations) cannot be graphically visualised either. Advanced tools, such as Computer-Aided Tolerancing (CAT), focus mainly on geometric dimensioning and some basic stack-up analysis, but do not provide harmonised solutions for the graphical visualisation of tolerances and of uncertainty associated with measurement. Therefore, there is a need to integrate geometric uncertainty into the geometric product model in order to depict uncertainty explicitly [6, 145].
Our approach to the visualisation of geometric uncertainty focuses on the integration of information about uncertainty and its correlations into CAD systems via an ontology-based information model. To this end, the geometric product model representing the 3D CAD model is decomposed into appropriate elements, such as features and boundary representation elements (BREP elements), which enable the association of uncertainty. The hierarchical structure of the product model, as well as its uncertainty information, is mapped into the ontology-based information model in terms of a product and process representation, see Sect. 4.4.1. The mapping ensures that the product model can be transformed into an ontology-based representation and vice versa [6, 116, 145].
When integrating information about geometric uncertainty into the product model, it is necessary both to specify uncertainty explicitly and to derive a presentation appropriate for visualisation. For the presentation of geometric uncertainty associated with tolerances, we have developed the concept of the uncertainty cloud or “uCloud”. The uCloud concept creates a three-dimensional space that visualises the probability distribution of the location of the physical part surface. The uCloud is generated by a set-theoretical operation, which combines two volumes, each representing the maximum and the minimum value, respectively, of a particular geometrical property [6].
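As a rough numerical illustration of this set-theoretical construction — not the CAD implementation itself — the uCloud shell of a toleranced cylindrical surface can be computed as the difference between the bounding volume at the maximum diameter and the volume at the minimum diameter. All dimensions below are invented example values.

```python
import math

def ucloud_shell_volume(nominal_d, lower_dev, upper_dev, length):
    """Volume of the uCloud shell for a cylindrical surface: the
    set-theoretical difference between the volume at the maximum
    diameter and the volume at the minimum diameter (units: mm)."""
    r_max = (nominal_d + upper_dev) / 2.0
    r_min = (nominal_d + lower_dev) / 2.0   # lower_dev is negative
    v_max = math.pi * r_max ** 2 * length   # outer bounding volume
    v_min = math.pi * r_min ** 2 * length   # volume certainly inside the part
    return v_max - v_min                    # where the true surface may lie

# Shaft of nominal diameter 20 mm, tolerance +0.1/-0.1 mm, length 50 mm
print(round(ucloud_shell_volume(20.0, -0.1, 0.1, 50.0), 2))  # → 314.16
```

The shell is thin relative to the part, which is why the enlarged-detail technique mentioned below is needed to make it visible on screen.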
The resulting uCloud element is then used to apply visualisation techniques for geometric tolerances. Conceptually, visualisation techniques are divided into the domains of (i) graphical, (ii) symbolic, (iii) structural and (iv) verbal visualisation [6].
(i) Graphical visualisation uses the functionality of computer graphics, such as colour, colour intensity, transparency or coloured patterns. To attach the semantics of uncertainty to the graphical appearance of uncertainty, a cross-reference table is needed [6]. (ii) Symbolic visualisation associates predefined symbols with presentation objects and enables the attachment of uncertainty information. In the domain of (iii) structural visualisation, presentation objects are mapped onto structures, such as lists, tables, tree and graph structures. (iv) Verbal visualisation expresses uncertainty lexically and creates a textual output using the ontology approach [146, 147]. Figure 4.41 shows graphical visualisation techniques for uncertain properties and their uncertain value description with respect to the Uncertainty Data Type (UDT).
With the concept of the uCloud combined with visualisation techniques for uncertain geometric properties, we provide an approach for the static visualisation of time-invariant uncertainty on individual parts by creating a cloud-like space, which contains the part surface of the real product. The result is a visualisation through transparently shaded offset bodies, which are linked to the corresponding UDTs. Additionally, uncertainty information related to product structures, such as lists, tables or tree structures, can be displayed.
Figure 4.42 shows the uCloud approach for a geometric deviation of the diameter of a shaft with different types of uncertain value descriptions resulting from manufacturing-process-specific tolerances. Figure 4.42a illustrates the uCloud for an interval, or more specifically for a geometric tolerance, where an upper deviation, a nominal diameter and a lower deviation are graphically visualised [145]. With respect to this information, the diameter of the manufactured surface of the shaft lies within the transparently shaded offset. Since the geometric deviations are small in relation to the dimensions of the shaft, the technique of an enlarged detail, known from technical drawings, is used. This approach is also available for visualising stochastic tolerance data. In this case, a sigma level (e.g. Six Sigma) or a confidence interval of the distribution function (e.g. 99.9997 %) is selected, depending on the available input data. Both sigma levels and the expected value result in three characteristic points, indicated by an additional specific symbol [145].
Figure 4.42b shows the visualisation of the geometric deviation of the diameter of a shaft with stochastic uncertainty information regarding its geometric tolerance. With a given histogram as input, the colour density range is mapped onto the minimum and maximum frequency and is visualised by elements which correspond to the classes of the histogram. The element colour density corresponds to the probability distribution given by the representation of the histogram. In the case of a given distribution function, the uCloud element colour density is mapped to the probability of the function [6].
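The mapping of histogram class frequencies onto colour densities can be sketched as a simple normalisation between the minimum and maximum frequency. The class labels and counts below are invented, and the actual rendering in the CAD system is of course more involved.

```python
def colour_densities(histogram):
    """Map the class frequencies of a tolerance histogram onto colour
    densities in [0, 1]: the most probable class is drawn with full
    colour density, the least probable one fully transparent."""
    counts = list(histogram.values())
    lo, hi = min(counts), max(counts)
    span = (hi - lo) or 1   # avoid division by zero for a flat histogram
    return {cls: (n - lo) / span for cls, n in histogram.items()}

# Invented diameter classes (mm) with absolute frequencies
hist = {"19.90-19.95": 5, "19.95-20.00": 40,
        "20.00-20.05": 35, "20.05-20.10": 10}
print(colour_densities(hist))
```

Each uCloud element then receives the density of its histogram class, so the colour gradient directly reflects the measured probability distribution.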
Each uCloud element has to be generated according to the given type of uncertainty value description. With the approach of a sectional view, we enable engineers and designers to interpret different influences which occur in the product life cycle, such as imperfect manufacturing, wear and corrosion. Engineers are able to interpret uncertainty occurring within a single part or an assembly. Furthermore, the uCloud concept complements the 3D data model without manipulating the idealised description of the geometry, allowing its further usage, e.g. for Finite Element Method (FEM) or Digital Mock-Up (DMU) analyses [145]. Detailed information and further visualisation examples are available in [6, 145, 147].
Geometric uncertainty is not only crucial in the context of individual parts. The effects of component properties affected by uncertainty also appear in the context of assemblies, in which individual uncertainties are mutually influential and interdependent. A typical example is the stacking of geometric manufacturing tolerances. In order to make uncertainty information about afflicted part properties available throughout the entire assembly, this information is attached to elements of the topology of the 3D CAD model as attributes described by Product Manufacturing Information (PMI) [74]. PMI comprises non-geometric information and aims at providing annotations for 3D geometric models [74]. It is typically used to describe additional properties that define the product geometry more precisely, primarily for manufacturing purposes, such as manufacturing tolerances. PMI also refers to any data that is linked to the geometry or topology of a 3D CAD model [74].
For the purpose of visualising geometric uncertainty in assemblies, PMI is attached to topological entities of the 3D CAD model and specified for object-oriented implementation. In this context, the target topological entities for referencing PMI are body, face and edge attributes, as they are important for assembly constraints. The body attribute serves as a part-specific information carrier and contains all information from the face and edge attributes that belong to an entire body. Through the configuration of individual parts via assembly constraints, the body attributes associate corresponding parts with one another and enable a bidirectional PMI exchange [74]. One individual part contains exactly one body attribute but multiple face and edge attributes; these comprise uncertainty information mapped into a specific PMI that is associated with the afflicted object property. The ontology-based information model provides the informational context, which is linked to the different attributes [74].
Object attributes for edges refer to the information for the mathematical description of the geometrical instance of the edge in three-dimensional space. Object attributes for faces reference information for the corresponding surface, such as the direction of the surface normal, the radius and central axis of cylindrical surfaces, the surface content, the mathematical description of the surface, and the uncertainty type in relation to the geometric deviation [74]. Thus, the geometric deviation in the x-, y- and z-directions in three-dimensional space is described. Object attributes for bodies collect the information from the attributes attached to a part’s surfaces and edges to provide a complete attribute bundle for the neighbouring parts. In order to reference individual parts within an assembly, the designer assigns assembly constraints referring to different reference elements. Figure 4.43 illustrates the interlinking between attributes, individual parts, assembly constraints and the configuration logic.
By referencing two individual parts using an assembly constraint, the body, edge and face attributes of the individual parts are automatically linked bidirectionally with one another [74]. As a result, PMI containing geometric uncertainty is referenced to the neighbouring part. The configuration logic for assembly constraints defines how the geometric uncertainty is propagated into the neighbouring parts. With the help of object attributes and their internal processing in the ontology-based information model, the concept of the uCloud can be extended from single parts to assemblies. Combined uncertainty zones of individual parts within an assembly are visualised with respect to an absolutely positioned, freely selectable individual part. The validation of the concept, applied to the MAFDS (see Sect. 3.6.1), is outlined in Sect. 4.4.3, see [74].
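The attribute structure and its bidirectional linking can be sketched as a small data model. This is an illustrative sketch only — the attribute names, the PMI payload (nominal value with lower and upper deviation) and the two example parts are assumptions, not the actual NX implementation.

```python
from dataclasses import dataclass, field

@dataclass
class FaceAttribute:
    name: str
    pmi: dict   # e.g. {"diameter": (nominal, lower_dev, upper_dev)}

@dataclass
class BodyAttribute:
    part: str
    faces: list = field(default_factory=list)
    linked: list = field(default_factory=list)  # neighbouring body attributes

    def collect_pmi(self):
        # The body attribute bundles all PMI of its own face attributes
        return {f.name: f.pmi for f in self.faces}

def apply_constraint(body_a, body_b):
    """Referencing two parts with an assembly constraint links their
    body attributes bidirectionally, enabling PMI exchange both ways."""
    body_a.linked.append(body_b)
    body_b.linked.append(body_a)

shaft = BodyAttribute("shaft",
                      [FaceAttribute("cyl_face", {"diameter": (20.0, -0.1, 0.1)})])
hub = BodyAttribute("hub",
                    [FaceAttribute("bore_face", {"diameter": (20.05, -0.05, 0.05)})])
apply_constraint(shaft, hub)
# The hub can now read the shaft's PMI through the bidirectional link
print(hub.linked[0].collect_pmi())
```

With such links in place, the configuration logic can walk from part to part and accumulate the geometric uncertainty for the combined uncertainty zones.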
4.4.3 Digital Twin of Load Carrying Structures for the Mastering of Uncertainty
The digital representation of physical objects (e.g. a product, a production system, a test rig), as well as the biunivocal relationship between such physical objects and their digital counterparts, is the subject of the digital twin concept, together with the cyber-physical system approach [59, 115]. Having a digital twin allows the objects to be defined, simulated, predicted, optimised and verified along their life cycle phases, from conception and design, via production, to usage and servicing. Along the life cycle phases, different types of models are created and used to represent physical objects, e.g. system models, functional models, 3D geometric models, multiphysics models, manufacturing models and usage models, see Sect. 1.3. These models constitute the digital twin.
The transfer of data from the physical domain to the digital domain is a key approach to generating the digital twin. In the widest sense, a digital twin requires the implementation of a data flow in which data acquired from testing, production, maintenance and operation are integrated into the digital domain to support such models and to assist in predictive and decision-making processes, see Sect. 1.4. This section addresses the challenges of mastering uncertainty associated with the respective data (Sect. 2.1) and models (Sect. 2.2) and introduces our approach to the visualisation of data-induced conflicts (Sect. 4.2) for uncertainty identification (Sect. 3.3) in the digital twin context.
The benefits derived from the implementation of a digital twin depend on incorporating data from the physical domain into the digital domain. In the physical domain, data acquisition requires measuring physical magnitudes. The result of a measurement should be a threefold structure: the nominal value of the magnitude, the measurement unit and the uncertainty of the measurement [18]. The most widely used data quality dimensions are accuracy, completeness, currency and consistency [15]. Within the accuracy dimension, the uncertainty of a measured magnitude is a significant contributor to the validity of the data, see Sect. 2.1. However, the literature shows that making the uncertainty of measured data explicit is still a challenge. There is a lack of bidirectional semantic harmonisation of the uncertainty representation in the standards used to transfer data, both from the digital domain to the physical domain and vice versa [134].
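The threefold structure of a measurement result can be captured in a small data type. This is an illustrative sketch; the example values are invented, and the coverage factor k = 2 follows common GUM practice [18] rather than any scheme prescribed in this chapter.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Measurement:
    value: float        # nominal value of the magnitude
    unit: str           # measurement unit
    uncertainty: float  # standard uncertainty, same unit as the value

    def interval(self, coverage=2.0):
        # Expanded uncertainty with coverage factor k (k = 2 is common)
        u = coverage * self.uncertainty
        return (self.value - u, self.value + u)

force = Measurement(12.40, "kN", 0.05)
lo, hi = force.interval()
print(round(lo, 2), force.unit, "to", round(hi, 2), force.unit)
```

Keeping value, unit and uncertainty together in one structure is exactly what allows the data transferred into the digital domain to carry its validity information along.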
Geometric data obtained from the physical domain are used to recreate 3D geometric models of the physical objects. In the literature, these models are referred to as as-built, as-fabricated and as-manufactured [28, 59, 153]. The aim of having an as-manufactured 3D model is to represent the geometric deviations caused by the manufacturing processes and to use this model representation to perform simulations that were previously executed using an as-designed 3D model. Consequently, it is necessary to explicitly represent the uncertainty of the reconstructed as-manufactured 3D model, see Sect. 2.2.
Measurements are necessary to capture the main geometrical dimensions of the physical components. We used measurement results with their corresponding uncertainty to create a 3D model of the physical test rig MAFDS (see Sect. 3.6.1), referred to as the virtual demonstrator. The virtual demonstrator includes material and physical properties in addition to part geometries and product structures. It consists of a Multi-Body Simulation (MBS) model with a set of virtual sensors to simulate the functional behaviour while visualising the behaviour and movements of the test rig. The dynamic analysis allows the determination of the velocities, accelerations and displacements of the moving components, as well as of the reaction forces. Figure 4.44 shows the implementation of the virtual demonstrator in the Siemens CAD system NX12.
Internally, the moving components, joints and drivers are converted into a mathematical system of differential equations, which can then be solved to determine the desired quantities. This can be performed using different solvers, which depend on the respective CAD system and are mostly proprietary. Additionally, moving components are simplified to their mass, inertia properties and geometrical dimensions, while deformation properties are neglected. This leads to a major challenge for the quantification of the respective model uncertainty [5].
In the context of a digital twin, another effect that must be taken into account in MBS is the influence of the geometric tolerances of the physical components, see Sect. 4.4.2. Since such effects are often not taken into account, it may happen, for instance, that an interference fit occurs in the simulation when in reality there is a slight clearance in the joint, or vice versa. Therefore, it is not only necessary to explicitly represent the uncertainty of the reconstructed as-manufactured 3D model, but also to consider effects such as the classical stacking of geometric manufacturing tolerances in assemblies. With the help of object attributes and their internal processing in the ontology-based information model, as outlined in Sect. 4.4.2, the uCloud concept allows the visualisation of combined uncertainty zones of individual parts within an assembly.
Figure 4.45 shows the visualisation of an uncertainty zone in the subassembly of the upper structure of the MAFDS: information about stochastic data uncertainty associated with geometrical model parameters is exchanged bidirectionally between the individual parts, and the maximum deviation due to overlapping geometric uncertainties is displayed.
The uncertainty zone visualises, as faceted bodies, the possible geometric deviation resulting from the cumulative manufacturing tolerances of the individual parts in the context of assembly constraints. The object attributes of the topology elements contain information about geometric uncertainty, such as the divergence between actual and target geometry or the surface roughness. Using the concept of object attributes, it is also possible to consider non-geometric properties, such as damping properties and spring stiffness, with uncertainty ranges in order to simulate product behaviour under uncertainty [169].
In general, the digital twin concept aims at integrating measured data acquired from testing, production, maintenance and operation into the digital domain to assist in decision-making processes. These processes depend strongly on the quality of the underlying information base. The data to quantify and evaluate a system response are typically gathered by a variety of sensors, see Sect. 1.4. Because of the complexity of the context, several data streams must be integrated, and possible data-induced conflict situations must be identified, see Sect. 4.2. To identify possibly erroneous sensor behaviour, values of interest are observed redundantly in the physical domain. The objective is to avoid situations where possible errors remain unnoticed, see Sect. 4.2.1. Redundancy increases the availability of information and thus contributes to the verification of the data. On the other hand, if several sources provide inconsistent or conflicting data, a defective interpretation may occur. Therefore, it is necessary to provide methods for explicitly representing and visualising data-induced conflicts in the digital twin context, see Sect. 4.2.3.
Section 4.2.1 presents a methodology for the identification of data-induced conflicts and the interpretation of conflicting sensor data. The approach is based on differentiating data sources, such as soft sensors, into models and sensors, spanning the investigation from the redundant observation of a single value to the interconnection of models and sensors throughout a technical system. Here, the proposed methodology is applied to the virtual domain for the purpose of visualising data-induced conflicts in CAD systems, see Fig. 4.46.
In addition to information on components, such as dimensions and parameters with their respective uncertainty, the information model represents the underlying sensor system as well as the models for generating analytical redundancy. Each sensor of a physical test rig is represented by its metadata, including identification data, calibration data, known uncertainty, as well as its relative and absolute placement within the test rig. Soft sensors are used to convert the measured data values and to generate analytical redundancy, see Sect. 1.4. In order to represent information about the models, system knowledge, such as symmetry characteristics and the orientation of the system components, is integrated into the information model.
The information model is the basis for the data evaluation and is implemented as such in Matlab. The prototype development also comprises methods and classes for the propagation and calculation of the resulting uncertainty. The result is a software tool for the automatic detection of data-induced conflicts, see Sect. 4.2.3. The output of the system provides detailed information on data-induced conflicts, as well as on the interconnections between models and sensors throughout the system. In addition, the prototype software tool allows statistics to be generated for each sensor, providing information on the total number of redundant observations (comparisons), including sensing, as well as the total percentages of confirmations of and conflicts with other sources. Assuming that the models describe the system behaviour with sufficient accuracy (Sect. 2.2), the trustworthiness of the respective sensor is visualised in the form of a histogram.
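The per-sensor statistics can be sketched with a simple consistency test between redundant observations. This is not the Matlab prototype itself: the overlap criterion (expanded uncertainty intervals with coverage factor k), the placeholder observation values and the dictionary keys are all assumptions made for illustration.

```python
def consistent(m1, u1, m2, u2, k=2.0):
    """Two redundant observations confirm each other if their expanded
    uncertainty intervals (coverage factor k) overlap; otherwise a
    data-induced conflict is flagged."""
    return abs(m1 - m2) <= k * (u1 + u2)

def sensor_statistics(comparisons):
    """Confirmation/conflict percentages for one sensor, given a list of
    (value, uncertainty, redundant_value, redundant_uncertainty)."""
    total = len(comparisons)
    confirmed = sum(consistent(*c) for c in comparisons)
    return {"comparisons": total,
            "confirmed_pct": 100.0 * confirmed / total,
            "conflict_pct": 100.0 * (total - confirmed) / total}

# Invented force sensor readings vs. analytically redundant values (kN)
obs = [(12.40, 0.05, 12.50, 0.05),
       (12.40, 0.05, 12.42, 0.05),
       (12.40, 0.05, 13.10, 0.05)]
print(sensor_statistics(obs))
```

Two of the three comparisons confirm the sensor and one conflicts with it, so the sensor's trustworthiness histogram would show roughly 67 % confirmations.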
The virtual demonstrator is interconnected with the prototype data evaluation tool implemented in Matlab via NX Open, the application programming interface (API) of Siemens NX. The information model allows the metadata of the sensor system as well as the evaluation results to be mapped into its virtual counterpart. The CAD system serves as a user interface through which the data sets are loaded and the evaluation results are visualised. Figure 4.46 illustrates a conceptual example of the visualisation concept applied to a piezoelectric force sensor in the upper truss of the MAFDS, see Sect. 4.2.3. In any decision-making process where conflicting data could occur, this information helps engineers to identify uncertainty and upcoming conflicts, and to limit the selection of valid sensors to be considered in the process. The developed prototype supports the identification of the trustworthiness level and the interpretation of sensor data.
References
Abele E, Geßner F (2018) Spanungsquerschnittmodell zum Gewindebohren: Modellierung der Auswirkung von Unsicherheit auf den Spanungsquerschnitt beim Gewindebohren. wt Werkstattstechnik online 108(1–2):2–6
Abele E, Hauer T, Haydn M, Bölling C (2011) Reduzierte Unsicherheit bei der Bohrungsfeinbearbeitung – Neue Erkenntnisse zum Vorbohrungseinfluss auf den Reibprozess. Werkstattstechnik online 101(1–2):81–87
Alexanderian A, Petra N, Stadler G, Ghattas O (2014) A-optimal design of experiments for infinite-dimensional Bayesian linear inverse problems with regularized \(\ell_0\)-sparsification. SIAM J Sci Comput 36(5):A2122–A2148. https://doi.org/10.1137/130933381
Alexanderian A, Petra N, Stadler G, Ghattas O (2016) A fast and scalable method for A-optimal design of experiments for infinite-dimensional Bayesian nonlinear inverse problems. SIAM J Sci Comput 38(1):A243–A272
Anderl R, Binde P (2017) Simulationen mit NX/Simcenter 3D: Kinematik, FEM, CFD, EM und Datenmanagement. Mit zahlreichen Beispielen für NX 11, 4th edn. Carl Hanser Verlag. https://books.google.de/books?id=QDqZDgAAQBAJ
Anderl R, Maurer M, Rollmann T, Sprenger A (2013) Representation, presentation and visualization of uncertainty. In: CIRP design 2012. Springer, pp 257–266
Antoniou G, Franconi E, van Harmelen FF (2005) Introduction to semantic web ontology languages. In: Eisinger N, Maluszynski J (eds) Reasoning web: first international summer school, vol 3564. Lecture notes in computer science. Springer, Berlin, pp 1–21. https://doi.org/10.1007/11526988_1
Ast M, Glas M, Roehm T (2013) Creating an ontology for aircraft design: an experience report about development process and the resulting ontology. In: Deutsche Gesellschaft für Luft und Raumfahrt – LilienthalOberth e.V. (ed) Publikationen zum DLRK 2013, pp 1–11
Atamturktur S, Hemez FM, Laman JA (2012) Uncertainty quantification in model verification and validation as applied to large scale historic masonry monuments. Eng Struct 43:221–234. https://doi.org/10.1016/j.engstruct.2012.05.027
Bard Y (1974) Nonlinear parameter estimation. Academic press, New York
Batterbee DC, Sims ND, Plummer AR (2005) Hardware-in-the-loop simulation of a vibration isolator incorporating magnetorheological fluid damping. In: ECCOMAS thematic conference on smart structures and materials
Baydin AG, Pearlmutter BA, Radul AA, Siskind JM (2018) Automatic differentiation in machine learning: a survey. J Mach Learn Res 18(153):1–43
Bayes T (1763) An essay towards solving a problem in the doctrine of chances. Philos Trans R Soc Lond 53:370–418. https://doi.org/10.1098/rstl.1763.0053
Beale E (1960) Confidence regions in nonlinear estimation. J R Stat Soc: Ser B (Methodol) 22(1):41–76
Bertino E (2015) Data trustworthiness – approaches and research challenges. In: GarciaAlfaro J, HerreraJoancomartí J, Lupu E, Posegga J, Aldini A, Martinelli F, Suri N (eds) Data privacy management, autonomous spontaneous security, and security assurance. Springer, pp 17–25
Bertotti G, Mayergoyz ID (eds) (2006) The science of hysteresis. Academic, Amsterdam and Boston
Bichon BJ, Eldred MS, Swiler LP, Mahadevan S, McFarland JM (2008) Efficient global reliability analysis for nonlinear implicit performance functions. AIAA J 46(10):2459–2468. https://doi.org/10.2514/1.34321
BIPM, IEC, IFCC, ILAC, IUPAC, IUPAP, ISO, OIML (2008) Evaluation of measurement data – guide to the expression of uncertainty in measurement. JCGM 100
Björck A (1996) Numerical methods for least square problems. SIAM, Philadelphia
Bock J, Haase P, Ji Q, Volz R (2008) Benchmarking OWL reasoners. In: van Harmelen F, Herzig A, Hitzler P, Lin Z, Piskac R, Qi G (eds) Proceedings of the workshop on advancing reasoning on the web: scalability and commonsense, ARea 2008, at the 5th European semantic web conference, ESWC 2008, CEUR workshop proceedings, vol 350
Bölling C (2019) Simulationsbasierte Auslegung mehrstufiger Werkzeugsysteme zur Bohrungsfeinbearbeitung am Beispiel der Ventilführungs und Ventilsitzbearbeitung. Dissertation, TU Darmstadt
Bourinet JM, Deheeger F, Lemaire M (2011) Assessing small failure probabilities by combined subset simulation and support vector machines. Struct Saf 33(6):343–353. https://doi.org/10.1016/j.strusafe.2011.06.001
Box GEP, Draper NR (1987) Empirical modelbuilding and response surfaces. Wiley, New York
Bridgman PW (1922) Dimensional analysis. Yale University Press, New Haven
Bucher C, Bourgund U (1990) A fast and efficient response surface approach for structural reliability problems. Struct Saf 7(1):57–66. https://doi.org/10.1016/0167-4730(90)90012-E
Burrows CR (ed) (1994) The active control of vibration. Mechanical Engineering Publ
Castanedo F (2013) A review of data fusion techniques. The Scientific World Journal 2013:704504. https://doi.org/10.1155/2013/704504
Cerrone A, Hochhalter J, Heber G, Ingraffea A (2014) On the effects of modeling as-manufactured geometry: toward digital twin. Int J Aerosp Eng 2014, Article ID 439278. https://doi.org/10.1155/2014/439278
Coakley J, Elliot AS (2012) An air spring. Patent WO002012052776A1
D’Agostino RB (1986) Goodness-of-fit techniques. CRC Press
Das P, Zheng Y (2000) Cumulative formation of response surface and its use in reliability analysis. Probab Eng Mech 15(4):309–315. https://doi.org/10.1016/S0266-8920(99)00030-2
Deheeger F, Lemaire M (2010) Support vector machine for efficient subset simulations: 2SMART method. In: Proceedings of the 10th international conference on applications of statistics and probability in civil engineering (ICASP10)
Deutsches Institut für Normung (1991) DIN ISO 2768-1:1991-06. General tolerances – tolerances for linear and angular dimensions without individual tolerance indications
Deutsches Institut für Normung (2003) DIN 8580. Fertigungsverfahren – Begriffe, Einteilung. https://doi.org/10.31030/9500683
Devroye L (1986) Nonuniform random variate generation. Springer, New York
Dodge Y (ed) (2010) The concise encyclopedia of statistics. Springer, New York. https://doi.org/10.1007/9780387328331_62
Dogra APS, DeVor RE, Kapoor SG (2002) Analysis of feed errors in tapping by contact stress model. J Manuf Sci Eng 124:248–257. https://doi.org/10.1115/1.1454107
Donaldson JR, Schnabel RB (1987) Computational experience with confidence regions and confidence intervals for nonlinear least squares. Technometrics 29(1):67–82
Dubitzky W, Granzow M, Berrar DP (2007) Fundamentals of data mining in genomics and proteomics. Springer, Berlin
Dunn OJ (1961) Multiple comparisons among means. J Am Stat Assoc 56(293):52–64
Engelhardt R, Birkhofer H, Kloberdanz H, Mathias J (2009) Uncertainty-mode- and effects-analysis – an approach to analyze and estimate uncertainty in the product life cycle. In: Norell Bergendahl M (ed) DS 58-2: proceedings of ICED 09, the 17th international conference on engineering design, vol 2. Design theory and research methodology, ICED. Design Society, Glasgow, pp 191–202
Engelhardt R, Koenen J, Enss G, Sichau A, Platz R, Kloberdanz H, Birkhofer H, Hanselka H (2010) A model to categorise uncertainty in load-carrying systems. In: 1st MMEP international conference on modelling and management engineering processes, pp 53–64
Euler L (1744) Methodus inveniendi lineas curvas: maximi minimive proprietate gaudentes, sive solutio problematis isoperimetrici latissimo sensu accepti. Marcum-Michaelem Bousquet
Fang KT, Li R, Sudjianto A (2006) Design and modeling for computer experiments. Chapman & Hall/CRC, Boca Raton
Feldmann R, Platz R (2019) Assessing model form uncertainty for a suspension strut using Gaussian processes. In: Proceedings of the 3rd international conference on uncertainty quantification in computational sciences and engineering (UNCECOMP 2019)
Fischer MJ (1983) The consensus problem in unreliable distributed systems (a brief survey). In: Karpinski M (ed) Foundations of computation theory. Lecture notes in computer science, vol 158. Springer, Berlin, pp 127–140. https://doi.org/10.1007/3-540-12689-9_99
Foley JD, Van FD, Van Dam A, Feiner SK, Hughes JF, Angel E, Hughes J (1996) Computer graphics: principles and practice, vol 12110. AddisonWesley
Fortuna L, Graziani S, Rizzo A, Xibilia MG (2007) Soft sensors for monitoring and control of industrial processes. Advances in industrial control. Springer, London. https://doi.org/10.1007/978-1-84628-480-9
Franceschini G, Macchietto S (2008) Modelbased design of experiments for parameter precision: state of the art. Chem Eng Sci 63(19):4846–4872
Gally T, Groche P, Hoppe F, Kuttich A, Matei A, Pfetsch ME, Rakowitsch M, Ulbrich S (2020) Identification of model uncertainty via optimal design of experiments applied to a mechanical press. Submitted for publication
Gaul L, Albrecht H, Wirnitzer J (2004) Semiactive friction damping of large space truss structures. Shock Vib 11(3–4):173–186. https://doi.org/10.1155/2004/565947
Gehb CM (2019) Uncertainty evaluation of semiactive load redistribution in a mechanical loadbearing structure. Dissertation, TU Darmstadt
Gehb CM, Platz R, Melz T (2016) Active load path adaption in a simple kinematic load-bearing structure due to stiffness change in the structure’s supports. J Phys: Conf Ser 744(1):012168. https://doi.org/10.1088/1742-6596/744/1/012168
Gehb CM, Platz R, Melz T (2017) Global load path adaption in a simple kinematic load-bearing structure to compensate uncertainty of misalignment due to changing stiffness conditions of the structure’s supports. In: Barthorpe RJ, Platz R, Lopez I, Moaveni B, Papadimitriou C (eds) Model validation and uncertainty quantification, vol 3. Conference proceedings of the society for experimental mechanics series. Springer, Cham, pp 133–144. https://doi.org/10.1007/978-3-319-54858-6_14
Gehb CM, Platz R, Melz T (2019) Two control strategies for semiactive load path redistribution in a loadbearing structure. Mech Syst Signal Process 118:195–208. https://doi.org/10.1016/j.ymssp.2018.08.044
Gehb CM, Atamturktur S, Platz R, Melz T (2020) Bayesian inference based parameter calibration of the LuGre-friction model. Exp Tech 44(3):369–382. https://doi.org/10.1007/s40799-019-00355-7
Gehb CM, Platz R, Melz T (2020) Bayesian inference based parameter calibration of a mechanical loadbearing structure’s mathematical model. In: IMAC – 38th international modal analysis conference
Gere J, Timoshenko S (1997) Mechanics of materials, 4th edn. PWS, Boston
Glaessgen E, Stargel D (2012) The digital twin paradigm for future NASA and US Air Force vehicles. In: 53rd AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics and materials conference 20th AIAA/ASME/AHS adaptive structures conference 14th AIAA, p 1818
Goller B, Schuëller GI (2011) Investigation of model uncertainties in Bayesian structural model updating. J Sound Vib 330(25–15):6122–6136
Götz B, Schaeffner M, Platz R, Melz T (2016) Lateral vibration attenuation of a beam with circular crosssection by a support with integrated piezoelectric transducers shunted to negative capacitances. Smart Materials and Structures 25(9):095045. https://doi.org/10.1088/09641726/25/9/095045
Götz B, Kersting S, Kohler M (2018) Estimation of an improved surrogate model in uncertainty quantification by neural networks. Submitted for publication
Götz B, Platz R, Melz T (2018) Effect of static axial loads on the lateral vibration attenuation of a beam with piezoelastic supports. Smart Materials and Structures 27(3):035011
Grau BC, Horrocks I, Motik B, Parsia B, PatelSchneider P, Sattler U (2008) OWL 2: the next step for OWL. Journal of Web Semantics 6(4):309–322
Green PL, Worden K (2013) Modelling friction in a nonlinear dynamic system via Bayesian inference. In: Allemang R, de Clerck J, Niezrecki C, Wicks A (eds) Special topics in structural dynamics, vol 6. Springer, New York, pp 543–553
Groche P, Hoppe F, Sinz J (2017) Stiffness of multipoint servo presses: mechanics vs. control. CIRP Ann 66(1):373–376. https://doi.org/10.1016/j.cirp.2017.04.053
Gruber TR (1993) A translation approach to portable ontology specifications. Knowl Acquis 5(2):199–221
Györfi L, Kohler M, Krzyżak A, Walk H (2002) A distributionfree theory of nonparametric regression. Springer series in statistics. Springer, New York. https://doi.org/10.1007/b97848
Hartig J, Schänzle C, Pelz PF (2019) Concept validation of a soft sensor network for wear detection in positive displacement pumps. In: 4th international rotating equipment conference – pumps and compressors
Hartig J, Hoppe F, Martin D, Staudter G, Öztürk T, Anderl R, Groche P, Pelz PF, Weigold M (2020) Identification of lack of knowledge using analytical redundancy applied to structural dynamic systems. In: Model validation and uncertainty quantification, vol 3. Springer, pp 131–138
Hartig J, Schänzle C, Pelz PF (2020) Validation of a soft sensor network for condition monitoring in hydraulic systems. In: 12th international fluid power conference. Technische Universität Dresden
Hauer T (2012) Modellierung der Werkzeugabdrängung beim Reiben – Ableitung von Empfehlungen für die Gestaltung von Mehrschneidenreibahlen. Schriftenreihe des PTW. Shaker, Aachen. Dissertation, TU Darmstadt
Hedrich P (2018) Konzeptvalidierung einer aktiven Luftfederung im Kontext autonomer Fahrzeuge, Forschungsberichte zur Fluidsystemtechnik, vol 20. Shaker, Aachen
Heimrich F, Anderl R (2016) Approach for the visualization of geometric uncertainty of assemblies in cadsystems. Journal of Computers 11(3):247–257
Higdon D, Gattiker J, Williams B, Rightley M (2008) Computer model calibration using highdimensional output. J Am Stat Assoc 103(482):570–583. https://doi.org/10.1198/016214507000000888
Hodouin D (2010) Process observers and data reconciliation using mass and energy balance equations. In: Sbárbaro D, del Villar R (eds) Advanced control and supervision of mineral processing plants. Advances in industrial control. Springer, London, pp 15–83
Hurtado JE (2004) Structural reliability: statistical learning perspectives, vol 17. Lecture notes in applied and computational mechanics. Springer, Berlin
Ihn JB, Chang FK (2008) Pitchcatch active sensing methods in structural health monitoring for aircraft structures. Structural Health Monitoring: An International Journal 7(1):5–19. https://doi.org/10.1177/1475921707081979
Imai S, Blasch E, Galli A, Zhu W, Lee F, Varela CA (2017) Airplane flight safety using errortolerant data stream processing. IEEE Aerospace and Electronic Systems Magazine 32(4):4–17. https://doi.org/10.1109/maes.2017.150242
Isermann R (2006) Faultdiagnosis systems: an introduction from fault detection to fault tolerance. Springer, Berlin. https://doi.org/10.1007/3540303685
Isermann R, Ballé P (1997) Trends in the application of modelbased fault detection and diagnosis of technical processes. Control Engineering Practice 5(5):709–719. https://doi.org/10.1016/S09670661(97)000531
Isermann R, Schaffnit J, Sinsel S (1999) Hardwareintheloop simulation for the design and testing of enginecontrol systems. Control Engineering Practice 7(5):643–653. https://doi.org/10.1016/S09670661(98)002056
ISO (2008) Uncertainty of measurement – Part 3: guide to the expression of uncertainty in measurement
Johnson CR, Sanderson AR (2003) A next step: Visualizing errors and uncertainty. IEEE Comput Graphics Appl 23(5):6–10. https://doi.org/10.1109/MCG.2003.1231171
Kapoor SG, DeVor RE, Zhu R, Gajjela R, Parakkal G, Smithey D (1998) Development of mechanistic models for the prediction of machining performance: model building methodology. Mach Sci Technol 2(2):213–238
Kaymaz I (2005) Application of kriging method to structural reliability problems. Struct Saf 27(2):133–151. https://doi.org/10.1016/j.strusafe.2004.09.001
Kennedy MC, O’Hagan A (2001) Bayesian calibration of computer models. J R Stat Soc: Ser B (Stat Methodol) 63(3):425–464. https://doi.org/10.1111/14679868.00294
Kersting S, Kohler M (2019) Uncertainty quantification based on (imperfect) simulation models with estimated input distributions. Submitted for publication
Khaleghi B, Khamis A, Karray FO, Razavi SN (2013) Multisensor data fusion: a review of the stateoftheart. Information Fusion 14(1):28–44. https://doi.org/10.1016/j.inffus.2011.08.001
Kim SH, Na SW (1997) Response surface method using vector projected sampling points. Structural Safety 19(1):3–19. https://doi.org/10.1016/S01674730(96)000379
Kohler M, Krzyżak A (2017) Improving a surrogate model in uncertainty quantification by real data. Submitted for publication
Körkel S, Kostina E, Bock HG, Schlöder JP (2004) Numerical methods for optimal control problems in design of robust optimal experiments for nonlinear dynamic processes. Optim Methods Softw 19(3–4):327–338
Korkmaz F (1982) Hydrospeicher als Energiespeicher. Springer, Berlin. https://doi.org/10.1007/9783642817373
Kreß R, Crepin PY, Kubbat W, Schreiber M (2000) Fault detection and diagnosis for electrohydraulic actuators. IFAC Proceedings Volumes 33(26):983–988. https://doi.org/10.1016/S14746670(17)39273X
Kreye ME, Goh YM, Newnes LB (2011) Manifestation of uncertainty – a classification. In: DS 686: Proceedings of the 18th international conference on engineering design (ICED 11), impacting society through engineering design, vol 6. Design information and knowledge
Krima S, Barbau R, Fiorentini X, Sudarsan R, Sriram RD (2009) Ontostep: OWLDL ontology for step. NIST Pubs
Kumar M, Garg DP, Zachery RA (2006) A generalized approach for inconsistency detection in data fusion from multiple sensors. In: American control conference. IEEE Operations Center, Piscataway, NJ, p 6. https://doi.org/10.1109/ACC.2006.1656526
Ledin JA (1999) Hardwareintheloop simulation. Embedded Systems Programming 12(2):42–60
Lehner S, Jacobs G (1997) Contamination sensitivity of hydraulic pumps and valves. In: Totten GE (ed) Tribology of hydraulic pump testing, STP/ASTM, pp. 261–276. ASTM, Philadelphia, Pa. https://doi.org/10.1520/STP11852S
Lenz E (2017) Methodischer Reglerentwurf für eine aktive Luftfeder unter Unsicherheit. Internal report, TU Darmstadt
Lenz J, Platz R (2019) Quantification and evaluation of parameter and model uncertainty for passive and active vibration isolation. In: Barthorpe R, Platz R, Lopez I, Moaveni B, Papadimitriou C (eds) Model validation and uncertainty quantification, vol 3. Conference proceedings of the society for experimental mechanics series. Springer, Cham, pp 135–147
Lenz E, Hedrich P, Pelz PF (2018) Aktive Luftfederung – Modellierung, Regelung und HardwareintheLoopExperimente. Forschung in Ingenieurwesen, pp 1–15. https://doi.org/10.1007/s1001001802722
Li S, Götz B, Schaeffner M, Platz R (2017) Approach to prove the efficiency of the monte carlo method combined with the elementary effect method to quantify uncertainty of a beam structure with piezoelastic supports. In: Proceedings of the 2nd international conference on uncertainty quantification in computational sciences and engineering (UNCECOMP 2017), pp. 441–455. https://doi.org/10.7712/120217.5382.16762
Liu DP (2006) Parameter identification for LuGre friction model using genetic algorithms. In: Proceedings of 2006 international conference on machine learning and cybernetics. IEEE, Piscataway NJ
Locke R, Kupis S, Gehb CM, Platz R, Atamturktur S (2019) Applying uncertainty quantification to structural systems: Parameter reduction for evaluating model complexity. In: Barthorpe RJ (ed) Model validation and uncertainty quantification, vol 3. Conference proceedings of the society for experimental mechanics series. Springer, Cham, pp 241–256
Lutters E, Van Houten FJ, Bernard A, Mermoz E, Schutte CS (2014) Tools and techniques for product design. CIRP Annals 63(2):607–630
Mallapur S, Platz R (2018) Quantification of uncertainty in the mathematical modelling of a multivariable suspension strut using Bayesian interval hypothesisbased approach. In: Pelz PF, Groche P (eds) Uncertainty in mechanical engineering III, vol 885. Applied mechanics and materials. Trans Tech Publications, pp 3–17
Mallapur S, Platz R (2019) Uncertainty quantification in the mathematical modelling of a suspension strut using Bayesian inference. Mechanical Systems and Signal Processing 118:158–170. https://doi.org/10.1016/j.ymssp.2018.08.046
Margossian CC (2019) A review of automatic differentiation and its efficient implementation. Wiley Interdiscip Rev: Data Mining and Knowledge Discovery 9(4):e1305. https://doi.org/10.1002/widm.1305
Mascardi V, Cordi V, Rosso P (2007) A comparison of upper ontologies. In: Baldoni M, Boccalatte A, de Paoli F, Martelli M, Mascardi V (eds) WOA 2007: Dagli Oggetti agli Agenti. 8th AI*IA/TABOO joint workshop “From Objects to Agents”: Agents and Industry: Technological Applications of Software Agents. Seneca, Torino, Italy, pp 55–64
Maurer S, Markmann B, Mersmann A (1998) A priori Vorhersage von Adsorptionsgleichgewichten. Chemie Ingenieur Technik  CIT 70(9):1104–1105. https://doi.org/10.1002/cite.330700960
Mayergoyz ID (2003) Mathematical models of hysteresis and their applications. Elsevier. https://doi.org/10.1016/B9780124808737.X50002
Mersmann A, Kind M, Stichlmair J (2005) Thermische Verfahrenstechnik: Grundlagen und Methoden, second revised and enlarged. Chemische Technik Verfahrenstechnik, Springer, Berlin
Mickens T, Schulz M, Sundaresan M, Ghoshal A, Naser AS, Reichmeider R (2003) Structural health monitoring of an aircraft joint. Mechanical Systems and Signal Processing 17(2):285–303. https://doi.org/10.1006/mssp.2001.1425
Monostori L, Kádár B, Bauernhansl T, Kondoh S, Kumara S, Reinhart G, Sauer O, Schuh G, Sihn W, Ueda K (2016) Cyberphysical systems in manufacturing. CIRP Annals 65(2):621–641. https://doi.org/10.1016/j.cirp.2016.06.005
Mosch L, Sprenger A, Anderl R (2010) Approach for visualization of uncertainty in cadsystems based on ontologies. In: ASME 2010 international mechanical engineering congress and exposition. American Society of Mechanical Engineers Digital Collection, pp 243–249. https://doi.org/10.1115/IMECE201037651
Muehleisen RT, Riddle M (2014) A guide to Bayesian calibration of building energy models. In: ASHRAE/IBPSAUSA. https://doi.org/10.13140/2.1.1674.9127
Nagel JB (2017) Bayesian techniques for inverse uncertainty quantification. Dissertation, ETH Zürich
Nakashima M (2001) Development, potential, and limitations of realtime online (pseudodynamic) testing. Philos Trans: Math Phys Eng Sci 359(1786):1851–1867
Ondoua S (2016) Unsicherheit in der Bewertung von StrukturEigenschaftsbeziehungen zwischen aktiven und passiven Systemelementen in aktiven lasttragenden Systemen. Dissertation, TU Darmstadt
Papadrakakis M, Lagaros ND (2002) Reliabilitybased structural optimization using neural networks and Monte Carlo simulation. Computer Methods in Applied Mechanics and Engineering 191(32):3491–3507. https://doi.org/10.1016/S00457825(02)002876
Park I, Amarchinta HK, Grandhi RV (2010) A Bayesian approach for quantification of model uncertainty. Reliability Engineering & System Safety 95(7):777–785
Parzen E (1962) On estimation of a probability density function and mode. Ann Math Stat 33:1065–1076. https://doi.org/10.1214/aoms/1177704472
Pasquier R, Smith IF (2015) Robust system identification and model predictions in the presence of systematic uncertainty. Advanced Engineering Informatics 29(4):1096–1109
Paucksch E, Holsten S, Linß M, Tikal F (2008) Zerspantechnik: Prozesse, Werkzeuge, Technologien, twelfth edn. Studium. Vieweg + Teubner, Wiesbaden. https://doi.org/10.1007/9783834894946
Pelz PF, Groß TF, Schänzle C (2017) Hydrospeicher mit Sorbentien – Verhalten, Modellierung und Diskussion. O+P – Ölhydraulik und Pneumatik 61(1–2):42–49
Pelz PF, Dietrich I, Schänzle C, Preuß N (2018) Towards digitalization of hydraulic systems using soft sensor networks. In: 11th international fluid power conference 2018. RWTH Aachen, Aachen, pp 40–53
Platz R, Enss GC (2015) Comparison of uncertainty in passive and active vibration isolation. In: Atamturktur S, Moaveni B, Papadimitriou C, Schoenherr T (eds) Model validation and uncertainty quantification, vol 3. Conference proceedings of the society for experimental mechanics series. Springer, Cham, pp 15–25
Platz R, Melzer CM (2016) Uncertainty quantification for decision making in early design phase for passive and active vibration isolation. In: Proceedings of ISMA 2016 including USD 2016 international conference on uncertainty in structural dynamics, pp 4501–4513
Platz R, Ondua S, Enss GC, Melz T (2014) Approach to evaluate uncertainty in passive and active vibration reduction. In: Atamturktur S, Moaveni B, Papadimitriou C, Schoenherr T (eds) Model validation and uncertainty quantification, vol 3. Conference proceedings of the society for experimental mechanics series. Springer, Cham, pp 345–352
Preuß N, Schänzle C, Pelz PF (2018) Accumulators with sorbent material – an innovative approach towards size and weight reduction. In: 11th international fluid power conference, pp 504–517. http://wl.fst.tudarmstadt.de/wl/publications/ paper_180319_Aachen_11th_IFK_Proceedings_Hydrospeicher_Sorbentien_preuss_schaenzle_pelz.pdf
Rasmussen CE (2003) Gaussian processes in machine learning. In: Bousquet O, von Luxburg U, Rätsch G (eds) Advanced lectures on machine learning, vol 3176. Lecture notes in computer science. Springer, Berlin, pp 63–71
Rieger KJ, Schiehlen W (1994) Active versus passive control of vehicle suspensions – hardware in the loop experiments. In: Burrows CR (ed) The active control of vibration. Mechanical Engineering Publ
Ríos J, Staudter G, Weber M, Anderl R (2019) A review, focused on data transfer standards, of the uncertainty representation in the digital twin context. In: IFIP international conference on product lifecycle management. Springer, pp 24–33. https://doi.org/10.1007/9783030422509_3
Rosenblatt M (1956) Remarks on some nonparametric estimates of a density function. Ann. Math. Statist. 27:832–837. https://doi.org/10.1214/aoms/1177728190
Saltelli A (2008) Global sensitivity analysis: the primer. Wiley, Chichester. https://doi.org/10.1002/9780470725184
Sankararaman S, Mahadevan S (2011) Model validation under epistemic uncertainty. Reliability Engineering & System Safety 96(9):1232–1241. https://doi.org/10.1016/j.ress.2010.07.014
Santner TJ, Williams BJ, Notz WI (2018) The design and analysis of computer experiments. Springer series in statistics. Springer, New York
Sarhadi P, Yousefpour S (2014) State of the art: hardware in the loop modeling and simulation with its applications in design, development and implementation of system and control software. International Journal of Dynamics and Control 3(4):470–479. https://doi.org/10.1007/s4043501401083
Schänzle C, Ludwig G, Pelz PF (2016) ERP positive displacement pumps – physically based approach towards an applicationrelated efficiency guideline. In: 3rd international rotating equipment conference (IREC) 2016. Düsseldorf
Schänzle C, Dietrich I, Corneli T, Pelz PF (2017) Controlling uncertainty in hydraulic drive systems by means of a soft sensor network. Sensors and Instrumentation 5:1
Schuëller GI (2007) On the treatment of uncertainties in structural mechanics and analysis. Computers & Structures 85(5–6):235–243
Silvey SD (1980) Optimal design: an introduction to the theory for parameter estimation, vol 1. Springer, Netherlands
Smith RC (2014) Uncertainty quantification: theory, implementation, and applications, computational science and engineering, vol 12. SIAM, Philadelphia
Sprenger A, Anderl R (2012) Product life cycle oriented representation of uncertainty. In: Product lifecycle management. Towards knowledgerich enterprises. Springer, pp 277–286. https://doi.org/10.1007/9783642357589_24
Sprenger A, Mosch L, Anderl R (2011) Representation of uncertainty in distributed product development. In: 18th annual European concurrent engineering conference 2011
Sprenger A, Haydn M, Ondoua S, Mosch L, Anderl R (2012) Ontologybased information model for the exchange of uncertainty in load carrying structures. In: Hanselka H, Groche P, Platz R (eds) Uncertainty in mechanical engineering, vol 104. Applied mechanics and materials. Trans Tech Publications, pp 55–66. https://doi.org/10.4028/www.scientific.net/AMM.104.55
Spurk JH (1992) Dimensionsanalyse in der Strömungslehre. Springer, Berlin
Steinhorst W (1999) Sicherheitstechnische Systeme: Zuverlässigkeit und Sicherheit kontrollierter und unkontrollierter Systeme. Aus dem Programm Naturwissenschaftliche Grundlagen. Vieweg+Teubner, Wiesbaden. https://doi.org/10.1007/9783322909275
Stuart AM (2010) Inverse problems: a Bayesian perspective. Acta Numer 19:451–559
Tjahjono S (2019) Aircraft accident investigation report Boeing 7378 (MAX); PKLQP
TolkerNielsen T (2017) EXOMARS 2016Schiaparelli anomaly inquiry: DGI/2017/546/TTN. Technical report, Agency, European Space. https://sci.esa.int/documents/33431/35950/1567260317467ESA_ExoMars_2016_Schiaparelli_Anomaly_Inquiry.pdf
Tuegel EJ, Ingraffea AR, Eason TG, Spottswood SM (2011) Reengineering aircraft structural life prediction using a digital twin. Int J Aerosp Eng 1–14. https://doi.org/10.1155/2011/154798
Tufte ER (1983) The visual display of quantitative information, vol 2. Graphics Press, Cheshire
Verein Deutscher Ingenieure (2010) VDI 2064:2010–11 Aktive Schwingungsisolierung [Active vibration isolation]. Beuth, Berlin
Vergé A, Lotz J, Kloberdanz H, Pelz PF (2015) Uncertainty scaling – motivation, method and example application to a load carrying structure. In: Pelz PF, Groche P (eds) Uncertainty in mechanical engineering II, vol 807. Applied mechanics and materials. Trans Tech Publications, pp 99–108
Wahba G (1990) Spline models for observational data, vol 59. SIAM
Walter E, Pronzato L (1990) Qualitative and quantitative experiment design for phenomenological models  a survey. Automatica 26(2):195–213. https://doi.org/10.1016/00051098(90)90116Y
Walther A, Griewank A (2012) Getting started with ADOLC. In: Naumann U, Schenk O (eds) Combinatorial scientific computing. Chapman & Hall/CRC computational science, vol 20121684. CRC Press, Boca Raton, pp 181–202. https://doi.org/10.1201/b116448
Wang S, Chen W, Tsui KL (2009) Bayesian validation of computer models. Technometrics 51(4):439–451. https://doi.org/10.1198/TECH.2009.07011
Wang X, Lin S, Wang S (2016) Dynamic friction parameter identification method with LuGre model for directdrive rotary torque motor. Mathematical Problems in Engineering 2016:1–8. https://doi.org/10.1155/2016/6929457
Weber C (2007) Looking at “DFX” and “Product maturity” from the perspective of a new approach to modelling product and product development processes. In: Krause FL (ed) The future of product development. Springer, Berlin, pp 85–104
Weber M, Staudter G, Anderl R (2018) Comparison of inductive inference mechanisms and their suitability for an information model for the visualization of uncertainty. In: Pelz PF, Groche P (eds) Uncertainty in mechanical engineering III, vol 885. Applied mechanics and materials. Trans Tech Publications, pp 147–155. https://doi.org/10.4028/www.scientific.net/AMM.885.147
Wong RKW, Storlie CB, Lee TCM (2017) A frequentist approach to computer model calibration. J. R. Stat. Soc. Ser. B. Stat. Methodol. 79(2):635–648. https://doi.org/10.1111/rssb.12182
Yager RR (1987) On the DempsterShafer framework and new combination rules. Information Sciences 41(2):93–137. https://doi.org/10.1016/00200255(87)900077
Zabel A (2010) Prozesssimulation in der Zerspanung: Modellierung von Dreh und Fräsprozessen, Schriftenreihe des ISF/Technische Universität Dortmund H, vol 2. VulkanVerlag, Essen. TU Dortmund, Habilitation
Zhao X, Gao H, Zhang G, Ayhan B, Yan F, Kwan C, Rose JL (2007) Active health monitoring of an aircraft wing with embedded piezoelectric sensor/actuator network: I. Defect detection, localization and growth monitoring. Smart Materials and Structures 16(4):1208–1217. https://doi.org/10.1088/09641726/16/4/032
Zocholl M, Anderl R (2014) Ontologybased representation of time dependent uncertainty information for parametric product data models. In: Liu K, Filipe J (eds) KMIS 2014 – proceedings of the international conference on knowledge management and information sharing. Scitepress, Setúbal, Portugal, pp 400–404
Zocholl M, Trinkel T, Anderl R (2014) Methode zur Beherrschung von Unsicherheit in ex35 pliziten 3DCAD Geometrien. In: Rieg F, Brökel K, Feldhusen J, Grote KH, Stelzer R (eds) 12. Gemeinsames Kolloquium Konstruktionstechnik 2014: Methoden in der Produktentwicklung: Kopplung von Strategien und Werkzeugen im Produktentwicklungsprozess. Bayreuth, pp 173–182. https://epub.unibayreuth.de/1789/
Zocholl M, Heimrich F, Oberle M, Würtenberger J, Bruder R, Anderl R (2015) Representation of human behaviour for the visualization in assembly design. In: Pelz PF, Groche P (eds) Uncertainty in mechanical engineering II, vol 807. Applied mechanics and materials. Trans Tech Publications, pp 183–192
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
© 2021 The Author(s)
Schaeffner, M. et al. (2021). Analysis, Quantification and Evaluation of Uncertainty. In: Pelz, P.F., Groche, P., Pfetsch, M.E., Schaeffner, M. (eds) Mastering Uncertainty in Mechanical Engineering. Springer Tracts in Mechanical Engineering. Springer, Cham. https://doi.org/10.1007/978-3-030-78354-9_4
Print ISBN: 978-3-030-78353-2
Online ISBN: 978-3-030-78354-9