Abstract
Dynamic processes have always been of profound interest for scientists and engineers alike. Often, the mathematical models used to describe and predict time-variant phenomena are uncertain in the sense that governing relations between model parameters, state variables and the time domain are incomplete. In this paper we adopt a recently proposed algorithm for the detection of model uncertainty and apply it to dynamic models. This algorithm combines parameter estimation, optimum experimental design and classical hypothesis testing within a probabilistic frequentist framework. The best setup of an experiment is defined by optimal sensor positions and optimal input configurations which both are the solution of a PDE-constrained optimization problem. The data collected by this optimized experiment then leads to variance-minimal parameter estimates. We develop efficient adjoint-based methods to solve this optimization problem with SQP-type solvers. The crucial test which a model has to pass is conducted over the claimed true values of the model parameters which are estimated from pairwise distinct data sets. For this hypothesis test, we divide the data into k equally-sized parts and follow a k-fold cross-validation procedure. We demonstrate the usefulness of our approach in simulated experiments with a vibrating linear-elastic truss.
Keywords
- Model uncertainty
- Optimum experimental design
- Sensor placement
- Optimal input configuration
- k-fold cross-validation
1 Introduction
In science and technology, dynamic processes are often described by time-variant mathematical models. However, the accurate prediction of the motion and behavior of technical systems is still challenging. Due to our incomplete knowledge of the internal relations between model parameters, state variables and the time domain, the user frequently encounters model uncertainty [20]. In [12] we developed an algorithm to identify this model uncertainty using parameter estimation, optimal experimental design and classical hypothesis testing. It is the aim of this paper to extend this approach to dynamic models. In the following, we adopt the framework described in [12] and extend it to mathematical models composed of time-variant partial differential equations (PDEs).
There is abundant literature on the assessment of descriptive and predictive qualities of dynamic models. Most common are techniques like residual analysis [27, 29] and interval simulation [25], maximum likelihood methods [29] and Bayesian model updating [31]. Our approach comes from a frequentist perspective and offers an alternative: we minimize the extent of data uncertainty by optimizing the experimental design and employ a k-fold cross-validation to test the model’s fitness and consistency. The validation is hereby performed via a classical hypothesis test in the parameter space.
A subproblem that needs to be solved in our approach to detect model uncertainty is the PDE-constrained optimal experimental design (OED) problem where the PDE is time-dependent. We specifically focus on experiments where sensors need to be positioned and inputs must be chosen in order to achieve a maximum information gain for the estimated values of the model parameters. Optimal sensor placement has been addressed within the PDE context in [1, 2, 23] and optimal input configuration has been extensively analyzed for both linear and nonlinear ordinary differential equations in various engineering applications [6, 17, 21, 22, 28]. In these cases, however, the problem dimension is small compared to a (discretized) time-variant PDE, so gradient-based optimization with a sensitivity approach, as suggested by [4] and [19], works well. In our case, this approach is no longer computationally tractable. Our framework copes with the high dimensionality by employing efficient adjoint techniques in a sequential quadratic programming (SQP) solver scheme.
This paper is organized as follows. In Sect. 2 and Sect. 3, we introduce our model equations and briefly present the concepts of parameter estimation and OED followed by efficient solution techniques for the OED problem. Then, in Sect. 4 we show how our algorithm to detect model uncertainty is adapted to the dynamic setting. Section 5 contains numerical results for the OED problem applied to vibrations of a truss and the application of our algorithm to detect model uncertainty. We end the paper with concluding remarks.
2 Model Equations of Transient Linear Elasticity and Their Discretization
Let \( G \) be a bounded Lipschitz domain with sufficiently smooth boundary \( \partial G = \varGamma _\mathrm {D} \cup \varGamma _\mathrm {F} \cup \varGamma _\mathrm {N} \) where \( \varGamma _\mathrm {D}, \varGamma _\mathrm {F}, \varGamma _\mathrm {N} \) are pairwise disjoint and non-empty. Furthermore, let (0, T), with \( T > 0 \), be an open and bounded time interval. We consider the parameter-dependent equations of motion for the linear-elastic body G of mass density \( \varrho > 0 \) and weak damping constant \( a > 0 \), see [15, Sec. 7.2]:
$$ \begin{aligned} \varrho \, \partial ^2_{tt} y + a \varrho \, \partial _t y - \mathrm{div}\, \sigma (y) &= 0 && \text{in } (0,T) \times G, \\ y &= 0 && \text{on } (0,T) \times \varGamma _\mathrm {D}, \\ \sigma (y)\, n &= 0 && \text{on } (0,T) \times \varGamma _\mathrm {F}, \\ \sigma (y)\, n &= u && \text{on } (0,T) \times \varGamma _\mathrm {N}, \\ y(0, \cdot ) = 0, \quad \partial _t y(0, \cdot ) &= 0 && \text{in } G. \end{aligned} \tag{1} $$
We include Rayleigh damping in our modeling by the generalized law of Hooke
$$ \sigma (y) := \mathcal {C} \varepsilon (y) + b\, \mathcal {C} \varepsilon (\partial _t y), $$
where \( b > 0 \) is the strong damping constant, \( \varepsilon (y) = \frac{1}{2} \!\left( \nabla y^\top + \nabla y\right) \) is the linearized strain and \( \mathcal {C} \; : \; \varepsilon \mapsto p_1 \cdot \mathrm {trace}\left( \varepsilon \right) I + 2 p_2 \cdot \varepsilon \) is the fourth order elasticity tensor, see also [7]. The parameters in this PDE are the well-known Lamé constants \( p = (p_1, p_2)^\top \). It is evident from (1) that the displacement \( y \) is caused by the traction \( u \) on \( \varGamma _\mathrm {N} \) alone.
After adopting the weak formulation of (1) according to [15, Sec. 7.2] and [9] we perform a finite-dimensional approximation of this weak formulation known as the Galerkin ansatz. We employ standard quadratic finite elements on the elastic body G for the space discretization. The finite element approximation then leads to the (high-dimensional) second-order ordinary differential equation
$$ M\, \partial ^2_{tt} y(t) + C(p)\, \partial _t y(t) + A(p)\, y(t) = N u(t) \tag{2} $$
with the stiffness matrix A(p), the mass matrix M and the boundary mass matrix N. For the Rayleigh damping term, we introduce the damping matrix
$$ C(p) := a M + b A(p), \tag{3} $$
where \( a, b > 0 \) are the damping constants as before.
We want to use a numerical time-update scheme with a predefined step size \( \varDelta t \) to solve (2). Therefore, we rewrite (2) in the form
$$ M a_n + C(p)\, v_n + A(p)\, d_n = N u_n $$
with the acceleration vector \( a_n = \partial ^2_{t t} y(t_n) \), the velocities \( v_n = \partial _{t} y(t_n) \) and the displacements \( d_n = y(t_n) \) for time steps \( t_n \), where \( n = 1, \ldots , n_\mathrm {t} \), respectively. The implicit Newmark method is suitable to solve this equation. It can be implemented in the following way. First, we choose the constants \( \beta _\mathrm {N} = \frac{1}{4} \), \( \gamma _\mathrm {N} = \frac{1}{2} \) for stability reasons, see [15, 30], and define with them further constants \( \alpha _1, \ldots , \alpha _6 \) as depicted in [30, Sec. 6.1.2]; the iteration scheme then follows directly.
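As an illustration of this time stepping, a minimal Newmark integrator with \( \beta _\mathrm {N} = 1/4 \), \( \gamma _\mathrm {N} = 1/2 \) can be written in a few lines of Python. This is a dense-matrix sketch under our own naming (`newmark`, toy matrices `M`, `C`, `K` and load `f`), not the authors' implementation:

```python
import numpy as np

def newmark(M, C, K, f, d0, v0, dt, n_steps, beta=0.25, gamma=0.5):
    """Implicit Newmark time stepping for M a + C v + K d = f(t).

    beta = 1/4, gamma = 1/2 is the unconditionally stable
    'average acceleration' variant chosen in the text.
    f is a callable returning the load vector at time t.
    """
    d = d0.astype(float).copy()
    v = v0.astype(float).copy()
    # consistent initial acceleration from the equation of motion at t = 0
    a = np.linalg.solve(M, f(0.0) - C @ v - K @ d)
    # effective system matrix of the implicit solve (constant over time)
    S = M + gamma * dt * C + beta * dt**2 * K
    traj = [d.copy()]
    for n in range(1, n_steps + 1):
        # predictors from the current state
        d_star = d + dt * v + 0.5 * dt**2 * (1 - 2 * beta) * a
        v_star = v + dt * (1 - gamma) * a
        # implicit solve for the new acceleration
        a = np.linalg.solve(S, f(n * dt) - C @ v_star - K @ d_star)
        # correctors
        d = d_star + beta * dt**2 * a
        v = v_star + gamma * dt * a
        traj.append(d.copy())
    return np.array(traj)
```

For the truss model, `M`, `C` and `K` would be the finite element matrices M, C(p) and A(p), and `f(t)` the boundary load N u(t).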
This scheme can be written in matrix form
$$ L(p)\, y = F u, \tag{4} $$
where \( y = (y_1, \ldots , y_{n_\mathrm {t}})^\top \) are the states, \( u = (u_1, \ldots , u_{n_\mathrm {t}})^\top \) are the boundary forces at all time points, \( y_n = (a_n, v_n, d_n)^\top \) and \( u_n = (u_{n,x}, u_{n,y})^\top \). The matrix L is block lower triangular, reflecting the forward-in-time structure of the scheme, with diagonal blocks built from \( D(p) := \alpha _1 M + \alpha _4 C(p) + A(p) \), the system matrix of the implicit solve in each time step; the matrix F collects the load contributions \( N u_n \) of the time steps.
For the following optimization problems let \( {{\,\mathrm{\textit{Y}}\,}}:= \mathbb{R}^{n_{\mathrm {y}}} \) be the state space and let \( {{\,\mathrm{\textit{U}_{\mathrm {ad}}}\,}}\subseteq \mathbb{R}^{2 n_{\mathrm {t}}} \) be the space of admissible inputs with \( n_{\mathrm {y}} = n_{\mathrm {d}} n_{\mathrm {t}} \) being the product of the space dimension after discretization \( n_{\mathrm {d}} \) and the number of time steps \( n_{\mathrm {t}} \). Furthermore, let \( e :{{\,\mathrm{\textit{Y}}\,}}\times \mathbb{R}^2 \times {{\,\mathrm{\textit{U}_{\mathrm {ad}}}\,}}\rightarrow {{\,\mathrm{\textit{Y}}\,}}\) be an operator defining the state equation as
$$ e(y, p, u) := L(p)\, y - F u = 0 \tag{5} $$
and denote its unique solution by y(p, u). We assume the operator \( \partial _{y} e(y, p, u) \) to be continuously invertible such that we can use the Implicit Function Theorem to define a mapping \( p \mapsto y(p, u) \). Its derivatives \( s_i := \partial _{p_i} y(p, u) \) for \( i = 1,2 \) are computed by solving
$$ \partial _{y} e(y, p, u)\, s_i = -\, \partial _{p_i} e(y, p, u), \tag{6} $$
which in our setting is equivalent to
$$ L(p)\, s_i = -\left[ \partial _{p_i} L(p)\right] y(p, u). $$
Thus, the sensitivity variable \( s := [s_1, s_2] \in {{\,\mathrm{\textit{Y}}\,}}\times {{\,\mathrm{\textit{Y}}\,}}\) depends on the solution of the state equation and on the parameters, i.e., \( s_i = s_i(y(p, u), p_i) \). Equations (6) are solved by rewriting them using the iteration scheme (4).
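Because the state equation is linear in y, each sensitivity \( s_i \) costs one additional linear solve with the same system matrix. A small self-contained sketch with a hypothetical parameter-affine matrix \( L(p) = A_0 + p A_1 \), verified against a central finite difference:

```python
import numpy as np

def solve_state_and_sensitivity(L_of_p, dL_dp, p, rhs):
    """Solve L(p) y = rhs together with the sensitivity system
    L(p) s = -[dL/dp] y from the Implicit Function Theorem."""
    L = L_of_p(p)
    y = np.linalg.solve(L, rhs)
    s = np.linalg.solve(L, -dL_dp(p) @ y)
    return y, s

# toy stand-in for the parameter-dependent matrix, NOT the truss model
A0 = np.array([[4.0, 1.0], [0.0, 3.0]])
A1 = np.array([[1.0, 0.0], [1.0, 2.0]])
L_of_p = lambda p: A0 + p * A1
dL_dp = lambda p: A1
rhs = np.array([1.0, 2.0])

y, s = solve_state_and_sensitivity(L_of_p, dL_dp, 0.5, rhs)

# independent check: central finite difference of y(p)
h = 1e-6
y_plus = np.linalg.solve(L_of_p(0.5 + h), rhs)
y_minus = np.linalg.solve(L_of_p(0.5 - h), rhs)
assert np.allclose(s, (y_plus - y_minus) / (2 * h), atol=1e-6)
```

The same factorization of L(p) can be reused for the state and all sensitivity solves, which is what makes the direct sensitivity approach cheap per parameter but expensive when there are many parameters.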
For the input space \( {{\,\mathrm{\textit{U}_{\mathrm {ad}}}\,}}\) we employ a time discretization with linear finite elements and denote by \( M_{\mathrm {T}} \) the mass matrix and by \( A_{\mathrm {T}} \) the stiffness matrix in the time domain.
3 Lamé-Parameter Estimation and the Optimal Experimental Design Problem
Given a set of experimental data, we are concerned with an accurate estimation of the Lamé parameters which are part of the model equations. The measurements are taken at selected points on the discretized free boundary part \( \varGamma _{\mathrm {F}} \) of the elastic body G with specified sensor types. We denote by \( n_{\mathrm {s}} \) the number of available sensors. In order to compare the output of the model equations, i.e., the state, with experimental data we introduce a nonlinear observation operator
$$ {{\,\mathrm{\textit{h}}\,}}:{{\,\mathrm{\textit{Y}}\,}}\rightarrow \mathbb{R}^{n_{\mathrm {s}} n_{\mathrm {t}}} $$
that maps components of the state to quantities that are actually measured during the experiment at all \( n_{\mathrm {t}} \) time steps.
Within the framework of optimal experimental design, we introduce binary weights \( \omega \in \left\{ 0, 1\right\} ^{n_{\mathrm {s}}} \) for all sensor locations and types. These weights operate as a selection tool, i.e., \( \omega _k = 1 \) if, and only if, sensor k is used at its specified location. Since the position of these sensors and their usage throughout the experiments stay the same, the values of \( \omega \) are copied \( n_{\mathrm {t}} \) times and summarized in the diagonal matrix \( {{\,\mathrm{\varOmega }\,}}:= \mathrm{diag}(\omega , \ldots , \omega ) \in \left\{ 0, 1\right\} ^{n_{\mathrm {z}} \times n_{\mathrm {z}}} \), where \( n_{\mathrm {z}} = n_{\mathrm {s}} n_{\mathrm {t}} \). In addition, each sensor has a fixed operating precision, i.e., standard deviation, which we associate with the variable \( \sigma _{\mathrm {pr}} \in \mathbb{R}^{n_{\mathrm {s}}}_{>0} \). We again summarize \( n_{\mathrm {t}} \) copies of \( \sigma _{\mathrm {pr}} \) in a diagonal matrix \( {{\,\mathrm{\varSigma }\,}}:= \mathrm{diag}(\sigma _{\mathrm {pr}}, \ldots , \sigma _{\mathrm {pr}}) \in \mathbb{R}^{n_{\mathrm {z}} \times n_{\mathrm {z}}} \).
The data is used to estimate the parameters \( p = (p_1, p_2)^\top \) by solving a least-squares problem:
$$ \overline{p} \in \mathop{\mathrm{arg\,min}}\limits _{p \in \mathbb{R}^2} \; \frac{1}{2}\, r(z, y(p, u))^\top {{\,\mathrm{\varOmega }\,}}{{\,\mathrm{\varSigma }\,}}^{-2}\, r(z, y(p, u)), \tag{7} $$
where \( r(z, y(p, u)) := {{\,\mathrm{\textit{h}}\,}}(y(p, u)) - z \) are the residuals and y(p, u) is the unique solution of (5) for given p and u. Since the measurements are random variables \( z = z^*+ \varepsilon \) with unknown true values \( z^*\) and noise \( \varepsilon \), so are the parameters. We model the noise to be Gaussian, i.e., \( \varepsilon \in \mathcal {N}(0, {{\,\mathrm{\varOmega }\,}}^{-1}{{\,\mathrm{\varSigma }\,}}^2) \). In a first order approximation, like in a Gauss-Newton solver scheme, the parameters are also Gaussian with unknown mean \( p^*\) and covariance matrix C, see [8, 19]. Then the confidence region of the parameters with a fixed confidence level \( 1 - \alpha \), where \( \alpha \in (0,1) \), is given by
$$ K := \left\{ p \in \mathbb{R}^2 \; : \; \left( p - \overline{p} \right)^\top C^{-1} \left( p - \overline{p} \right) \le \chi ^2_{2, 1-\alpha } \right\} , $$
where \( \chi ^2_{2, 1-\alpha } \) denotes the \( (1-\alpha ) \)-quantile of the \( \chi ^2 \)-distribution with two degrees of freedom.
We assume that the solution \( \overline{p}\) of (7) for given z and \( \omega \), emerging from the Gauss-Newton algorithm, is sufficiently close to \( p^*\), i.e., that for a given data set \( \overline{p}\) is a fairly good approximation of \( p^*\). Then the covariance matrix C can be approximated by employing the Gauss-Newton scheme as well and it has the following form [8]:
$$ C_{\mathrm{GN}} := \left( J^\top {{\,\mathrm{\varOmega }\,}}{{\,\mathrm{\varSigma }\,}}^{-2} J \right)^{-1} \quad \text{with} \quad J := \partial _y {{\,\mathrm{\textit{h}}\,}}(y(\overline{p}, u))\, [s_1, s_2]. $$
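Numerically, the Gauss-Newton covariance is a small weighted normal-equations inverse. The following generic sketch (our own notation `J`, `omega`, `sigma` for the residual Jacobian, sensor weights and standard deviations) illustrates the computation:

```python
import numpy as np

def gauss_newton_covariance(J, omega, sigma):
    """Approximate parameter covariance C_GN = (J^T Omega Sigma^-2 J)^-1.

    J     : (n_z, n_p) Jacobian of the observations w.r.t. the parameters
    omega : (n_z,) sensor weights (binary or relaxed values in [0, 1])
    sigma : (n_z,) sensor standard deviations
    """
    w = omega / sigma**2              # diagonal of Omega Sigma^-2
    info = J.T @ (w[:, None] * J)     # Gauss-Newton information matrix
    return np.linalg.inv(info)
```

Switching a sensor off (setting its weight to zero) simply removes its rows from the information matrix, which is why too aggressive sparsification can make the matrix singular.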
We aim at minimizing the confidence region in which the estimated parameters \( \overline{p}\) lie.
The reduction of the size of the confidence ellipsoid K is equivalent to reducing the “size” of the covariance matrix. This is realized by choosing the best sensor locations, determined by the weights \( \omega \), and by finding optimal inputs u. In practice, there are various design criteria \( \varPsi \) that measure the “size” of a matrix C, see [11]. In this paper we use the E-criterion, which is related to the maximal expansion of K:
$$ \varPsi _{\mathrm {E}}(C) := \lambda _{\max }(C). $$
We add a cost term \( P_\varepsilon (\omega ) \) to penalize the number of used sensors and a regularizer \( R(u) := u^\top (M_{\mathrm {T}} + A_\mathrm {T}) u \) to the objective function. Moreover, we relax the binary restriction on \( \omega \) to employ gradient-based solution techniques for the following optimal experimental design problem.
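For intuition, the common design criteria, including the E-criterion used here, can be written in a few lines (a generic NumPy sketch, not tied to the truss model):

```python
import numpy as np

def a_criterion(C):
    """A-criterion: trace of C, the sum of the parameter variances."""
    return np.trace(C)

def d_criterion(C):
    """D-criterion: determinant of C, proportional to the squared
    volume of the confidence ellipsoid."""
    return np.linalg.det(C)

def e_criterion(C):
    """E-criterion: largest eigenvalue of C, i.e. the maximal
    expansion of the confidence ellipsoid K."""
    return np.linalg.eigvalsh(C)[-1]   # eigvalsh returns ascending order
```

The E-criterion is nonsmooth where the largest eigenvalue is not simple, which is why Clarke directional derivatives appear in Sect. 3.1.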
Definition 1
Let \( p \in \mathbb{R}^2 \) be an estimate of \( p^*\) and let \( \kappa , \beta > 0 \) be fixed. Furthermore, choose \( \varepsilon \in (0,1] \). Then we call \( (\overline{\omega }, \overline{u}) \) an optimal design of an experiment with the linear-elastic body G if it is the solution of
$$ \min _{\omega , u, y, s_1, s_2} \; \varPsi _{\mathrm {E}}\!\left( C_{\mathrm{GN}}(y, s, \omega ) \right) + \kappa \, P_\varepsilon (\omega ) + \beta \, R(u), \tag{8} $$
where (u, y, s) are subject to the equality constraints
$$ L(p)\, y = F u, \qquad L(p)\, s_i = -\left[ \partial _{p_i} L(p)\right] y \tag{9} $$
for \( i = 1, 2 \) and \( (\omega , u) \) satisfy the inequality constraints
$$ 0 \le \omega _k \le 1, \quad k = 1, \ldots , n_{\mathrm {s}}, \qquad u \in {{\,\mathrm{\textit{U}_{\mathrm {ad}}}\,}}. \tag{10} $$
The penalty term \( P_\varepsilon (\omega ) \) is a smooth approximation of the \( l_0 \)-“norm”. It ensures sparse solutions in \( \omega \) for suitable choices of \( \kappa \) but does not lead to \( \left\{ 0, 1\right\} \)-valued weights yet. To achieve the latter, we adopt a continuation strategy as described in [1, 2].
Note that the penalty parameter \( \kappa \) must not be chosen too large, since the matrix \( C_{\mathrm {GN}} \) becomes singular if too many weights \( \omega \) are switched to zero. We refer to [19] for more details on lower bounds for the sum of the weight variables.
In practice, problem (8)–(10) is solved using its reduced formulation, i.e., by eliminating the equality constraints (9) and inserting y(p, u) and s(y(p, u), p) into the objective function.
3.1 Derivative and Adjoint Computation
Let \( J(\omega , u, y, s_1, s_2) \) be the objective function in (8). We show how the derivative of the reduced objective function \( \hat{J}(\omega , u) \), where the solutions y(p, u) and s(y(p, u), p) of (9) have been inserted into J, with respect to the inputs u is efficiently computed. To do so, we follow a standard Lagrangian view of the optimization problem (8)–(10). For simplicity, we ignore the inequality constraints (10) and still denote by \( \partial _y \varPsi \) the derivative of \( \varPsi \) with respect to y even though we use the Clarke directional derivative in the case of \( \varPsi = \varPsi _{\mathrm {E}} \), cf. [13]. Let \( \mu , \lambda _{1}, \lambda _{2} \in {{\,\mathrm{\textit{Y}^{*}}\,}}\) be Lagrange multipliers and let the Lagrangian be defined as
$$ \mathcal {L} := J(\omega , u, y, s_1, s_2) + \mu ^\top \left( L(p)\, y - F u \right) + \sum _{i=1}^{2} \lambda _i^\top \left( L(p)\, s_i + \left[ \partial _{p_i} L(p)\right] y \right) . $$
The adjoint equations follow from \( \partial _{y} \mathcal {L} = \partial _{s_i} \mathcal {L} = 0 \) for \( i = 1, 2 \):
$$ L(p)^\top \mu = -\, \partial _y \varPsi - \sum _{i=1}^{2} \left[ \partial _{p_i} L(p)\right] ^\top \lambda _i, \qquad L(p)^\top \lambda _i = -\, \partial _{s_i} \varPsi , \quad i = 1, 2. \tag{11} $$
The fact that the matrix L(p) is transposed on the left-hand side of (11) leads to an iteration scheme backwards in time. We demonstrate this for the second and third adjoint equations in order to obtain \( \lambda _i \), thereby adopting ideas from [18, Sec. 5.4]. Let \( \lambda = \lambda _{i} \) and \( r := \partial _{s_i} \varPsi \) for \( i \in \left\{ 1,2\right\} \). Note that \( r = (r_1, \ldots , r_{n_{\mathrm {t}}})^\top \) and \( r_n = (r_n^d, 0, 0) \) since the velocities and accelerations do not enter \( \varPsi \). At the terminal time point \( t_{n_{\mathrm {t}}} \) the last block row of the transposed system has to be solved, or equivalently \( \lambda _{n_{\mathrm {t}}}^a = 0, \, \lambda _{n_{\mathrm {t}}}^v = 0 \) and a linear system with the transposed diagonal block of L(p) for \( \lambda _{n_{\mathrm {t}}}^d \).
For the other time points \( t_n \), \( n \ne 1 \), the current iterate is obtained from the iterate one step forward in time by backward substitution with the transposed blocks of L(p).
The adjoint variable at the initial time point \( t_1 \) is finally obtained from the first block row of the transposed system in the same fashion.
The matrix vector product \( q := \left[ \partial _{p_i} L(p)\right] ^\top \lambda _i \) is computed likewise using the iteration scheme.
Finally, the full derivative of the reduced objective function \( \hat{J}(\omega , u) \) with respect to the inputs u is given by
$$ \nabla _u \hat{J}(\omega , u) = \beta \, \nabla _u R(u) - F^\top \mu = 2 \beta \left( M_{\mathrm {T}} + A_{\mathrm {T}} \right) u - F^\top \mu , $$
where \( \mu \in {{\,\mathrm{\textit{Y}^{*}}\,}}\) is the adjoint variable obtained from (11).
3.2 Computational Remarks
In order to solve (8)–(10) we employ an SQP algorithm with BFGS updates [10] for the Hessian \( H_k \) of the Lagrangian. We modify the update formula in the following way:
$$ H_{k+1} = H_k - \frac{H_k d^k (d^k)^\top H_k}{(d^k)^\top H_k d^k} + \frac{r^k (r^k)^\top }{(d^k)^\top r^k}, $$
where \( d^k \) is the current step, \( y^k \) is the difference between the gradients of the Lagrangian at the new and the old iterate and
$$ r^k := \theta y^k + (1 - \theta ) H_k d^k $$
with \( \theta = \frac{0.8 (d^k)^\top H_k d^k}{(d^k)^\top H_k d^k -(y^k)^\top d^k} \) whenever \( (y^k)^\top d^k < 0.2\, (d^k)^\top H_k d^k \), and \( \theta = 1 \) otherwise. This damping keeps the updates positive definite. After every tenth iteration we reset the Hessian to \( H_0 \) to avoid matrix fill-in and to ensure a gradient descent with respect to \( \omega \) from time to time.
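This modification corresponds to Powell's damping of the BFGS update with constant 0.2; a compact sketch (our own illustration of the formula, not the authors' code):

```python
import numpy as np

def damped_bfgs_update(H, d, y, c=0.2):
    """BFGS update of the Hessian approximation H with Powell damping.

    If the curvature d^T y is too small (or negative), y is blended with
    H d so that the updated matrix stays positive definite."""
    Hd = H @ d
    dHd = d @ Hd
    if d @ y >= c * dHd:
        theta = 1.0                              # plain BFGS update
    else:
        theta = (1.0 - c) * dHd / (dHd - d @ y)  # 0.8 * dHd / (dHd - d^T y)
    r = theta * y + (1.0 - theta) * Hd
    return H - np.outer(Hd, Hd) / dHd + np.outer(r, r) / (d @ r)
```

By construction the update satisfies the modified secant condition \( H_{k+1} d^k = r^k \), and \( (d^k)^\top r^k > 0 \) guarantees positive definiteness even for steps with negative curvature.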
4 Detection of Uncertainty in Dynamic Models
We adopt the algorithm presented in [12] and describe the main differences when applied to a time-variant model \( \mathcal {M} \) of a dynamic process. In general, we presuppose that a valid model should reproduce all measurements obtained with all admissible inputs at all sensor locations with the same set of parameters. Our approach is summarized in Algorithm 1.
First, initial (or artificial) data \( z_{\mathrm {ini}} \) is needed for an appropriate guess \( p_{\mathrm {ini}} \) of the parameter values. Having fixed these parameters, one can solve the OED problem (8)–(10) to obtain best sensor positions \( \overline{\omega }\) and optimal input configurations \( \overline{u}\), see lines 02 and 03.
The acquisition of experimental data z in line 04 is done at the optimal sensor locations and for inputs close to the optimum. Since we assume that the true values of the model parameters remain the same for all inputs \( u \in {{\,\mathrm{\textit{U}_{\mathrm {ad}}}\,}}\), we can ensure that our data are truly diverse by performing measurements for different input values within a small neighborhood of the optimum \( \overline{u}\). The size of the confidence ellipsoid K nevertheless stays small because the objective function (8) is continuous with respect to the inputs.
Recall that for time-variant systems each measurement at a given time depends on the past. Because the order of the data matters, the splitting of z into one calibration and one validation set must not happen along the time axis. Our methodology differs from forecasting [5], so we do not allow such splittings over the time domain.
We perform the division with respect to the different inputs in a k-fold cross-validation manner [16]: the data are divided into k groups, where each group contains the measurements collected for one input over the whole time domain. We then use \( k-1 \) groups for calibration and the remaining group for validation. By repeating this procedure we run through all k possible combinations.
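The input-wise splitting can be sketched as follows, where `data_by_input` is a hypothetical list holding one complete time series of measurements per input:

```python
def input_grouped_folds(data_by_input):
    """Yield (calibration, validation) splits for k-fold cross-validation
    where every fold holds out all data of exactly one input; individual
    time series are never split along the time axis."""
    k = len(data_by_input)
    for i in range(k):
        calibration = [data_by_input[j] for j in range(k) if j != i]
        validation = data_by_input[i]
        yield calibration, validation
```

Each input's time series stays intact inside one group, so the temporal dependence of the measurements is never broken by the split.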
![figure a](http://media.springernature.com/lw685/springer-static/image/chp%3A10.1007%2F978-3-030-77256-7_22/MediaObjects/494324_1_En_22_Figa_HTML.png)
For the validation itself, we perform a classical hypothesis test from line 08 onward as documented in [12]. The threshold \( \mathtt {TOL} \) is identical to the error of the first kind; it is common to set a \( 5 \% \) limit to this error. The \( \alpha _{\mathrm {min}} \) computed in line 08 is the p-value of the statistical test, i.e., the smallest test level at which the null hypothesis is just rejected.
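For two parameters, the p-value of such an ellipsoid-based test is easy to evaluate because the \( \chi ^2 \)-distribution with two degrees of freedom has the closed-form survival function \( \exp (-t/2) \). The following sketch shows one plausible form of the computation; the exact test statistic and its covariance are specified in [12]:

```python
import math
import numpy as np

def alpha_min(p_validation, p_calibration, C_calibration):
    """p-value of H0: both estimates share the same true parameter vector.

    The quadratic form is a Mahalanobis-type distance; with two parameters
    it is compared against chi-square with 2 degrees of freedom, whose
    survival function is exp(-t/2)."""
    diff = np.asarray(p_validation) - np.asarray(p_calibration)
    t = float(diff @ np.linalg.solve(C_calibration, diff))
    return math.exp(-t / 2.0)
```

The null hypothesis is then rejected at level `TOL` whenever `alpha_min(...) < TOL`.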
There is no need to account for the problem of multiple testing here, since the k-fold cross-validation splitting of the data ensures pairwise disjoint validation sets.
5 Numerical Results for Simulated Vibrations of a Truss
We employ a 2D-truss consisting of nine beams and six connectors with about 5 000 spatial degrees of freedom in order to exemplify the application of Algorithm 1. The Dirichlet boundary \( \varGamma _{\mathrm {D}} \) is located at the two outer top connectors and the Neumann boundary \( \varGamma _{\mathrm {N}} \) on the bottom left connector, see Fig. 1. We use pairs of strain gauges as sensors that can measure either the axial deflection or the displacement caused by bending of the beams, see [14] and [26]. The strain gauges are located on the upper and lower boundaries of the beams, indicated as black bullets and connecting lines in the figure, which are part of the free boundary \( \varGamma _{\mathrm {F}} \) of the body G. Each strain gauge measures the relative displacement of two adjacent nodes: \( \varepsilon _{\mathrm {u}} = y_{\mathrm {N1}} - y_{\mathrm {N2}} \) and \( \varepsilon _{\mathrm {\ell }} = y_{\mathrm {N3}} - y_{\mathrm {N4}} \), see Fig. 1a. For simplicity, we compute the square of the axial deflection \( {{\,\mathrm{\textit{h}}\,}}_{\mathrm {a}}(y) \) and the square of the displacement caused by bending \( {{\,\mathrm{\textit{h}}\,}}_{\mathrm {b}}(y) \):
Thus, the overall observation operator \( {{\,\mathrm{\textit{h}}\,}}\) consists of \( {{\,\mathrm{\textit{h}}\,}}_{\mathrm {a}} \) and \( {{\,\mathrm{\textit{h}}\,}}_{\mathrm {b}} \) at all time points and we create for each such sensor five weight variables. These additional weights shall give the experimenter information about which pairs of strain gauges are more important than others. The discretization of the truss allows for 117 sensors in total. Hence, we have \( n_s = 117 \times 2 \times 5 = {1\,170} \) weight variables.
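Separating axial from bending strain with an upper/lower gauge pair is classically done via the mean and half-difference of the two readings [14]. The squared observations could then look as follows; this particular split is our assumption, since the exact formulas for \( {{\,\mathrm{\textit{h}}\,}}_{\mathrm {a}} \) and \( {{\,\mathrm{\textit{h}}\,}}_{\mathrm {b}} \) are not reproduced here:

```python
def squared_axial_and_bending(eps_upper, eps_lower):
    """Squared axial and bending observations of one strain-gauge pair.

    Mean / half-difference split (assumed form): axial strain shifts both
    gauges equally, bending shifts them in opposite directions.
    """
    h_a = ((eps_upper + eps_lower) / 2.0) ** 2
    h_b = ((eps_upper - eps_lower) / 2.0) ** 2
    return h_a, h_b
```

Squaring makes the observation independent of the sign convention of the relative displacements, at the price of a nonlinear observation operator.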
Throughout our numerical simulations we use pure stiffness damping, i.e., \( a = 0 \) in (3). This choice promises a better resemblance with actual experimental data, see [3] and [24]. The accuracy of our sensors is fixed to \( \sigma _{\mathrm {pr}, k}\) = 10 \(\upmu \mathrm {m}\) for \( k = 1, \ldots , n_{\mathrm {s}} \).
We simulated vibrations of the truss for \( n_{\mathrm {t}} = 600 \) time steps with a step size of \( \varDelta t = 5 \) ms. Thus, three seconds were simulated in total and the solution of the PDE (1) involves about 3 000 000 degrees of freedom. Initially, all 117 pairs of strain gauges were used, measuring both the axial deflection and the displacement caused by bending with maximum weight, respectively. We also use a constant maximally feasible force as a starting point for the inputs u. The excitation forces u act solely on the Neumann boundary \( \varGamma _{\mathrm {N}} \).
Since we were not able to conduct real experiments, all the data was simulated, i.e., generated on the computer with random numbers. Thus, line 05 in Algorithm 1 became obsolete. We assume the beams of the real truss \( \mathcal {R} \) to have an equal cross-sectional area in the displacement-free state except for two beams having a \( 5 \% \) and a \( 7\% \) smaller diameter, respectively. For the detection of model uncertainty it is not important to know which beams differ from the standard diameter. However, our model \( \mathcal {M} \) operates on the assumption that all beams have the same cross-sectional area. This directly impacts the mathematical terms in the mass, damping and stiffness matrices, see (2), since a model with different cross-sectional beam areas would induce other finite element terms. It is our aim in this section to show that Algorithm 1 successfully detects model uncertainty in \( \mathcal {M} \) when compared to \( \mathcal {R} \).
Since we only simulate experiments, we skipped line 02 in Algorithm 1 and adopted textbook values for \( p_{\mathrm {ini}} \), namely, the well-known Lamé constants for steel \( \lambda _{\mathrm {L}} = \) 121 154 N/mm\(^2 \) and \( \mu _{\mathrm {L}} = \) 80 769 N/mm\(^2 \). These are the values which we use to generate all measurements from the real truss \( \mathcal {R} \). Problem (8)–(10) is solved after about 80 iterations with an overall computation time of about 8 h on an AMD EPYC 48 \(\times \) 2.8 GHz machine. The design criterion decreased by \(\approx \)99%, which means that the maximal expansion of the confidence ellipsoid decreased by \(\approx \)98% compared to the initial design, see Fig. 2. The final design employs only two pairs of strain gauges that measure the axial deflection, the upper with weight two and the lower with weight five, cf. Fig. 1b.
Let \( \overline{u}\) be the optimal input force obtained from solving (8)–(10). For the application of the hypothesis test in Algorithm 1, consider the following perturbed inputs:
where \( \delta _1, \delta _4 \sim \mathcal {N}(0, 4 \cdot I) \) and \( \delta _2(t), \delta _3(t) \sim \mathcal {N}(0, 4t/n_{\mathrm {t}} \cdot I)\) for all \( t \in \left\{ t_1, \ldots , t_{n_{\mathrm {t}}}\right\} \) with equal time step size \( \varDelta t \) as introduced before. With these inputs we generate eight different data sets and perform an 8-fold cross-validation. We use seven sets for calibration and one set for validation in line 06 of Algorithm 1. Thus, we conducted eight different hypothesis tests, four of which are shown in Table 1. It is clearly seen that the model \( \mathcal {M} \) does not pass any test when a threshold of \( 5 \% \) is applied to \( \alpha _{\mathrm {min}} \). According to our assumption that a valid model should reproduce all measurements conducted with all admissible inputs with the same set of parameters, this is a significant indication of model uncertainty.
6 Conclusion
In this paper we showed that our algorithm to detect model uncertainty, which was first presented in [12], is applicable to dynamic models. We efficiently solved the OED problem with time-dependent PDE-constraints using modified BFGS-updates and adjoint methods within an SQP solver scheme. Thus, in finding optimal sensor positions and optimal inputs we were able to significantly reduce the size of the confidence region of the estimated model parameters. By an 8-fold cross-validation using hypothesis tests in the parameter space, we demonstrated on simulations of vibrations in a truss that our algorithm is able to detect inaccuracies of the linear-elastic model which is deficient in the geometrical description of the truss. It is the object of further investigation to show that our algorithm detects other forms or kinds of model uncertainty as well.
References
Alexanderian, A., Petra, N., Stadler, G., Ghattas, O.: A-optimal design of experiments for infinite-dimensional Bayesian linear inverse problems with regularized \(\ell _0\)-sparsification. SIAM J. Sci. Comput. 36(5), A2122–A2148 (2014)
Alexanderian, A., Petra, N., Stadler, G., Ghattas, O.: A fast and scalable method for A-optimal design of experiments for infinite-dimensional Bayesian nonlinear inverse problems. SIAM J. Sci. Comput. 38(1), A243–A272 (2016)
Alipour, A., Zareian, F.: Study rayleigh damping in structures; uncertainties and treatments. In: Proceedings of the 14th World Conference on Earthquake Engineering, Beijing, China, pp. 1–8 (2008)
Bauer, I., Bock, H.G., Körkel, S., Schlöder, J.P.: Numerical methods for optimum experimental design in DAE systems. J. Comput. Appl. Math. 120(1–2), 1–25 (2000)
Bergmeir, C., Benítez, J.M.: On the use of cross-validation for time series predictor evaluation. Inf. Sci. 191, 192–213 (2012)
Chianeh, H.A., Stigter, J., Keesman, K.J.: Optimal input design for parameter estimation in a single and double tank system through direct control of parametric output sensitivities. J. Process Control 21(1), 111–118 (2011)
Ciarlet, P.G.: Mathematical Elasticity. Studies in Mathematics and its Applications, vol. 20. North-Holland Publishing Co., Amsterdam (1988)
Donaldson, J.R., Schnabel, R.B.: Computational experience with confidence regions and confidence intervals for nonlinear least squares. Technometrics 29(1), 67–82 (1987)
Evans, L.C.: Partial Differential Equations. Graduate Studies in Mathematics, vol. 19, 2nd edn. American Mathematical Society, Providence (2010)
Fletcher, R.: Practical Methods of Optimization, 2nd edn. A Wiley-Interscience Publication. John Wiley & Sons Ltd, Chichester (1987)
Franceschini, G., Macchietto, S.: Model-based design of experiments for parameter precision: state of the art. Chem. Eng. Sci. 63(19), 4846–4872 (2008)
Gally, T., Groche, P., Hoppe, F., Kuttich, A., Matei, A., Pfetsch, M.E., Rakowitsch, M., Ulbrich, S.: Identification of model uncertainty via optimal design of experiments applied to a mechanical press. Optim. Eng. (2021). https://doi.org/10.1007/s11081-021-09600-8
Hiriart-Urruty, J.B., Lewis, A.S.: The Clarke and Michel-Penot subdifferentials of the eigenvalues of a symmetric matrix. Comput. Optim. Appl. 13(1), 13–23 (1999)
Hoffmann, K.: An Introduction to Stress Analysis Using Strain Gauges (1987)
Hughes, T.J.R.: The Finite Element Method. Prentice Hall Inc., Hoboken (1987)
James, G., Witten, D., Hastie, T., Tibshirani, R.: An Introduction to Statistical Learning, vol. 112. Springer, Heidelberg (2013)
Jauberthie, C., Bournonville, F., Coton, P., Rendell, F.: Optimal input design for aircraft parameter estimation. Aerosp. Sci. Technol. 10(4), 331–337 (2006)
Kolvenbach, P.: Robust optimization of PDE-constrained problems using second-order models and nonsmooth approaches. Ph.D. thesis, TU Darmstadt (2018)
Körkel, S., Kostina, E., Bock, H.G., Schlöder, J.P.: Numerical methods for optimal control problems in design of robust optimal experiments for nonlinear dynamic processes. Optim. Methods Softw. 19(3–4), 327–338 (2004)
Mallapur, S., Platz, R.: Uncertainty quantification in the mathematical modelling of a suspension strut using Bayesian inference. Mech. Syst. Signal Process. 118, 158–170 (2019). https://doi.org/10.1016/j.ymssp.2018.08.046
Mehra, R.: Optimal input signals for parameter estimation in dynamic systems-survey and new results. IEEE Trans. Autom. Control 19(6), 753–768 (1974)
Morelli, E.A., Klein, V.: Optimal input design for aircraft parameter estimation using dynamic programming principles. In: Proceedings of the 17th Atmospheric Flight Mechanics Conference (1990). https://doi.org/10.2514/6.1990-2801
Neitzel, I., Pieper, K., Vexler, B., Walter, D.: A sparse control approach to optimal sensor placement in PDE-constrained parameter estimation problems. Numer. Math. 143(4), 943–984 (2019)
Otani, S.: Nonlinear dynamic analysis of reinforced concrete building structures. Can. J. Civ. Eng. 7(2), 333–344 (1980)
Puig, V., Quevedo, J., Escobet, T., Nejjari, F., de las Heras, S.: Passive robust fault detection of dynamic processes using interval models. IEEE Trans. Control Syst. Technol. 16(5), 1083–1089 (2008)
Rohrbach, C.: Handbuch für elektrisches Messen mechanischer Größen. VDI-Verlag, Düsseldorf (1967)
Simani, S., Fantuzzi, C., Patton, R.J.: Model-based fault diagnosis techniques. In: Model-Based Fault Diagnosis in Dynamic Systems Using Identification Techniques, pp. 19–60. Springer (2003)
Stigter, J.D., Keesman, K.J.: Optimal parametric sensitivity control of a fed-batch reactor. Automatica 40(8), 1459–1464 (2004)
Willsky, A.S.: Detection of abrupt changes in dynamic systems. In: Detection of Abrupt Changes in Signals and Dynamical Systems, pp. 27–49. Springer (1985)
Wriggers, P.: Nonlinear Finite Element Methods. Springer Science & Business Media, Hoboken (2008)
Yuen, K.V., Kuok, S.C.: Bayesian methods for updating dynamic models. Appl. Mech. Rev. 64(1) (2011). https://doi.org/10.1115/1.4004479
Acknowledgements
This research was funded by the German Research Foundation (DFG) – project number 57157498 – CRC 805 within the subproject A3. We would like to thank the DFG for funding.
The authors would also like to thank Philip Kolvenbach for providing efficient finite element code.
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2021 The Author(s)
Matei, A., Ulbrich, S. (2021). Detection of Model Uncertainty in the Dynamic Linear-Elastic Model of Vibrations in a Truss. In: Pelz, P.F., Groche, P. (eds) Uncertainty in Mechanical Engineering. ICUME 2021. Lecture Notes in Mechanical Engineering. Springer, Cham. https://doi.org/10.1007/978-3-030-77256-7_22
DOI: https://doi.org/10.1007/978-3-030-77256-7_22
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-77255-0
Online ISBN: 978-3-030-77256-7