1 Introduction

Complex and highly detailed Finite Element (FE) models have been developed to assess structural performance in crash scenarios. However, the conflicting requirements of weight reduction and passenger safety call for advanced numerical methods such as optimisation, uncertainty quantification or robustness studies. Due to the high computational cost of these multi-query assessments, simplification techniques that yield efficient surrogate models are indispensable (Duddeck 2008).

For the creation of such low-fidelity models, data-driven reduction techniques have gained popularity in recent years. Previously computed results from full-order FE simulations are collected and combined into a so-called snapshot matrix. From this matrix a subspace can be identified, which approximates the system by a reduced number of unknowns. The main advantage of data-driven model order reduction (MOR) is its capability to represent non-linear problems, which cannot be reduced by classical projection methods. Moreover, if a database of existing simulations is already available, data-driven MOR techniques are especially beneficial. In general, non-intrusive and intrusive techniques are available; they are categorised according to the changes required within the FE program. The intrusive method is implemented within the solver environment itself, whereby a Galerkin projection is applied to set up the equations of motion in a smaller subspace (Farhat et al. 2014). The procedure consists of a Proper Orthogonal Decomposition (POD) via Singular Value Decomposition (SVD) in order to construct the reduced basis. In the case of non-linear problems an additional so-called hyper-reduction step is required to create an efficient surrogate model. Various approaches have been developed to combine a projection method with hyper-reduction techniques, e.g. gappy POD (Willcox 2006) or the Discrete Empirical Interpolation Method (Chaturantabut and Sorensen 2010). In this work, the Energy Conserving Sampling and Weighting (ECSW) technique (Farhat et al. 2014, 2015; An et al. 2008; Hernández et al. 2017) is adopted due to its superior numerical stability properties within FE analysis. For a comprehensive discussion the reader is referred to Rutzmoser (2018).

The intrusive MOR scheme presented here has recently been studied in various fields of application. For example, Zahr et al. (2017); Caicedo et al. (2019); Hernández (2020) and Rocha et al. (2020a, b) demonstrated its potential within the scope of multi-scale analysis and fracture mechanics.

Furthermore, the present intrusive MOR scheme was applied to explicit structural dynamics and crashworthiness problems (Bach et al. 2019b). Bach et al. (2018) and Krysl et al. (2001) revealed that the Galerkin projection has a beneficial effect on the critical time step within the scope of explicit dynamics. Bach et al. (2019a) also compared different algorithms to evaluate the SVD for large-scale snapshot matrices, which typically appear in industrial applications. Other approaches suggest an efficient snapshot selection to reduce the dimension of the snapshot matrix, e.g. Phalippou et al. (2020). Moreover, in the field of structural topology optimisation, first principal-component-based surrogate models have been developed (Alaimo et al. 2018; Xiao et al. 2020).

In contrast to intrusive MOR, the non-intrusive MOR technique uses data-fitted meta-models or machine learning techniques to construct the surrogate model. Subspace procedures as for the intrusive scheme are utilised to compute the reduced basis; however, the basis vectors are then weighted and combined with the help of a machine learning approach. In this context, Gaussian regression models are a popular choice (Kast et al. 2020; Guo and Hesthaven 2017, 2019). K-nearest neighbours and neural networks were also successfully applied within MOR schemes (Swischuk et al. 2019; Kneifl et al. 2021). Additionally, Yu et al. (2019) published a comprehensive survey regarding developments towards fluid applications.

In a recent study, Kneifl et al. (2021) showed that multiple machine learning algorithms are suitable for structural dynamic applications in the field of crashworthiness. Moreover, Le Guennec et al. (2018) reduced the frontal part of a car with a non-intrusive model, which combines a CUR matrix decomposition with a k-means clustering method. The so-called ReCUR technique was further enriched with a random forest model and the empirical interpolation method (Assou et al. 2019; Gstalter et al. 2020). Fehr et al. (2016) analysed and Ren et al. (2020) optimised crash problems by splitting the domain into linear and non-linear parts, whereby only linear reduction techniques were tested.

As intrusive and non-intrusive MOR are classically two independent research areas, the aim of this work is to evaluate the efficiency of both methods by comparing their training phase, training data, computational cost and complexity in order to highlight their benefits and limits for non-linear structural examples. The novelty of the presented work lies in the application of both schemes to an optimisation study for a crashworthiness problem. The presented studies show that the mentioned MOR schemes are able to support multi-query analyses also for non-linear transient structural problems. Furthermore, the presented work shall provide a general guideline for the application of intrusive and non-intrusive schemes. Limitations and potentials for future developments are further discussed to provide a comprehensive review.

In the following, the theoretical background of both projection-based non-linear MOR methods is described individually and compared with regard to crashworthiness. Section 2 focuses on the SVD and hyper-reduction for the intrusive scheme, and Sect. 3 explains the applied non-intrusive MOR technique. To assess both projection methods for crashworthiness, a dynamic crash box test case is evaluated and the benefits and drawbacks are discussed in terms of inter- and extrapolation in Sect. 5. Afterwards, an optimisation scheme including both MOR techniques is presented in Sect. 6, whereby its theoretical background is given in Sect. 4.

2 Intrusive model order reduction

The key idea of intrusive MOR is to construct a new subspace such that the desired solution can be represented in a lower dimension. In every classical FE analysis the system of equations is solved for a resultant vector with N degrees of freedom. To reduce the number of unknowns, the output resultants, e.g. the displacement vector, can be approximated by a projection. Thus, the projection of the degrees of freedom \({\mathbf {u}} \in \mathbb {R}^N\) reduces to a linear affine transformation with the reduced basis \({\mathbf {V}} \in \mathbb {R}^{N \times k}\), which is added to the initial state \({\mathbf {u}}_0\):

$$\begin{aligned} {{\mathbf {u}}} \approx {\mathbf {u}}_{0} + {\mathbf {V}} \mathbf {u_{r}}. \end{aligned}$$
(1)

Thereby, \(\mathbf {u_r} \in \mathbb {R}^k\) denotes the vector of unknowns of size k in the reduced subspace. We aim at a reduction such that \(k \ll N\), with N being the number of unknowns in the full order model. To this end, the system of equations is projected into the reduced subspace. For dynamic structural problems the partial differential equation takes the following well-known form:

$$\begin{aligned} {\mathbf {M}} \ddot{{\mathbf {u}}} + {\mathbf {f}} ( {\mathbf {u}}, \dot{{\mathbf {u}}} ) = {\mathbf {g}}, \end{aligned}$$
(2)

with \({\mathbf {M}} \in \mathbb {R}^{N \times N}\) being the mass matrix, \({\mathbf {u}}\), \(\dot{{\mathbf {u}}}\) and \(\ddot{{\mathbf {u}}}\) denoting the displacement, velocity and acceleration, and \({\mathbf {f}}\) and \({\mathbf {g}}\) representing the internal and external forces. In this work, a Galerkin projection-based model order reduction with a linear affine transformation is adopted to approximate the solution manifold. By the nature of the Galerkin projection, the residual of the over-determined system is constrained to be orthogonal to the subspace spanned by the reduced basis. Thus, introducing the approximation of Eq. (1) into the equation of motion for general non-linear dynamics yields:

$$\begin{aligned} \underbrace{{\mathbf {V}}^T {\mathbf {M}} {\mathbf {V}}}_{{\mathbf {M}}_r} \ddot{{\mathbf {u}}}_r + \underbrace{{\mathbf {V}}^T {\mathbf {f}}}_{{\mathbf {f}}_r} ( {\mathbf {V}} {\mathbf {u}}_r, {\mathbf {V}} \dot{{\mathbf {u}}}_r, t ) = \underbrace{{\mathbf {V}}^T {\mathbf {g}}}_{{\mathbf {g}}_r}. \end{aligned}$$
(3)

Consequently, an optimal construction of the orthonormal projection matrix \({\mathbf {V}}\) is crucial for an efficient and accurate surrogate model. Therefore, the subspace is built in a preceding training phase, which is also referred to as the offline phase. After the training is completed, the system of equations (3) is evaluated in the online phase, as illustrated in Fig. 1. The figure also depicts that the subspace construction is followed by a hyper-reduction step. In the next sections, both training steps are explained in detail.

Fig. 1

The workflow for intrusive MOR divided into online and offline phase

2.1 Reduced subspace

For the construction of the projection matrix, Sirovich (1987) first proposed a data-driven technique, as classical approaches are not suitable for non-linear problems. A quantity of interest \(\mathbf { u_i}(t,\mu )\), with \(t \in T\) denoting time and \(\mu \in P\) a configuration in the parameter domain P, is extracted from full-order training simulations. These resultant vectors, here displacements, have N entries corresponding to each degree of freedom. To construct a snapshot matrix \({\mathbf {A}} \in \mathbb {R}^{N \times n}\), n so-called snapshots \(\mathbf { u_i} \in \mathbb {R}^N\) are collected and concatenated along the horizontal axis. The total number of snapshots n consists of the snapshots at varying time instances \(t_1, t_2, ..., t_{n_t} \in T\) and different parameter configurations \(\mu _1,\mu _2, ..., \mu _{n_{\mu }} \in P\), such that \(n = n_t n_{\mu }\). A reduction based on a snapshot matrix from multiple simulations is also known as global POD (Taylor 2001; Taylor and Glauser 2004; Schmit and Glauser 2004).

As the number of snapshots can be arbitrarily large, matrix \({\mathbf {A}}\) is reformulated to retrieve a practically feasible reduced subspace.

Applying the Singular Value Decomposition (SVD) on \({\mathbf {A}}\) yields the left-singular vectors \({\mathbf {U}} \in \mathbb {R}^{N \times n}\), the diagonal matrix \(\Sigma \in \mathbb {R}^{n \times n}\) containing non-negative singular values \(\sigma _i\) in descending order and the right-singular matrix \({\mathbf {Z}} \in \mathbb {R}^{n \times n}\). In a next step, the decomposed terms are truncated to approximate matrix \({\mathbf {A}}\) by a reduced matrix of rank k, as shown in Eq. (4). Thereby, \({\mathbf {U}}_k\) can be identified as the projection matrix \({\mathbf {V}} := {\mathbf {U}}_k \in \mathbb {R}^{N \times k}\), which resembles the mapping from the full order space to the reduced subspace.

$$\begin{aligned} {\mathbf {A}} = {\mathbf {U}} \Sigma {\mathbf {Z}}^{T} \approx {\mathbf {U}}_{k} \Sigma _{k} {\mathbf {Z}}^{T}_{k} = {\mathbf {V}} \Sigma _{k} {\mathbf {Z}}^{T}_{k}. \end{aligned}$$
(4)

An optimal low-rank approximation is guaranteed by the Eckart-Young-Mirsky theorem (Eckart and Young 1936; Mirsky 1960). As a lower bound for k is not known a priori, an optimal value \({\tilde{k}}\) under the constraint of an approximation error \(\epsilon\) is determined, such that:

$$\begin{aligned} k = \min {\tilde{k}} \quad \text {with} \quad \dfrac{\Vert {\mathbf {A}} - {\mathbf {U}}_ {{\tilde{k}}} \Sigma _{{\tilde{k}}} {\mathbf {Z}}^T_{{\tilde{k}}} \Vert _F}{\Vert {\mathbf {A}} \Vert _F} \le \epsilon . \end{aligned}$$
(5)

However, for large-scale matrices a conventional evaluation of the SVD can be infeasible, as its complexity scales with \(O(n^2)\). To tackle such problems, randomised or incremental SVD techniques (Bach et al. 2019a; Oxberry et al. 2017) may be applied.
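
A minimal sketch of this offline step, assuming NumPy and a snapshot matrix already assembled as described above; the tolerance value is a placeholder for the \(\epsilon\) of Eq. (5):

```python
import numpy as np

def pod_basis(A, eps=1e-3):
    """Truncated POD basis V = U_k of a snapshot matrix A (N x n), Eqs. (4)-(5)."""
    U, s, Zt = np.linalg.svd(A, full_matrices=False)
    # The squared Frobenius error of a rank-k truncation equals the sum of the
    # squared, discarded singular values, so Eq. (5) can be checked from s alone.
    rel_err = np.sqrt(np.maximum(1.0 - np.cumsum(s ** 2) / np.sum(s ** 2), 0.0))
    meets_tol = np.nonzero(rel_err <= eps)[0]
    k = int(meets_tol[0]) + 1 if meets_tol.size else len(s)
    return U[:, :k], k

# Usage: V, k = pod_basis(A); u_r = V.T @ (u - u0) maps a full-order state to Eq. (1).
```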

2.2 Hyper-reduction

In the case of linear problems the projection, as described by Eq. (3), directly results in efficient surrogate models. However, for non-linear problems the internal forces still depend on the system variables in the full space. To circumvent this dependency, hyper-reduction is employed. Thereby, a subset of all elements is selected to approximate the non-linear internal forces without evaluating the full order model (Farhat et al. 2014, 2015). The Energy Conserving Sampling and Weighting (ECSW) technique, from the field of global cubature methods, is a promising hyper-reduction approach for structural FE analysis. With ECSW the internal forces are evaluated over a reduced number of elements, while the global energy of the system is conserved. Moreover, ECSW yields symmetric system matrices and exhibits superior numerical stability properties compared to other hyper-reduction methods.

To identify potential candidates for the hyper-reduction, a non-negative weighting factor is computed for each element to quantify its impact on the solution vector. In general, the global internal force vector \({\mathbf {f}}_\text {int}\) is assembled from the element force vectors \({\mathbf {f}}^e_\text {int}\) of all elements \(e \in \varOmega\) through the connectivity matrices \({\mathbf {L}}_e\). Through the hyper-reduction the full element set \(\varOmega\) is approximated by a subset \({\mathcal {E}}\) and a weighting vector \(\mathbf {\zeta ^{*}}\) of weights \(\zeta ^{*}_e\), one for each element:

$$\begin{aligned} {\mathbf {V}}^{T} {\mathbf {f}}_\text {int} = {\mathbf {V}}^{T} \sum _{e \in \varOmega } {\mathbf {L}}_{e}^{T} {\mathbf {f}}^{e}_\text {int} \approx {\mathbf {V}}^{T} \sum _{e \in {\mathcal {E}}} \zeta ^{*}_{e}{\mathbf {L}}_{e}^{T} {\mathbf {f}}^{e}_\text {int}. \end{aligned}$$
(6)

The product \({\mathbf {V}}^T {\mathbf {f}}_\text {int}\) can be interpreted as the virtual work, i.e. a virtual displacement multiplied by a force vector. Consequently, the energy can be conserved by choosing appropriate weighting factors (Farhat et al. 2014). To compute the weights \(\zeta ^{*}_e\), the unassembled training forces are collected in the matrix \({\mathbf {G}} \in \mathbb {R}^{n \times N_e}\)

$$\begin{aligned} {\mathbf {G}} = \begin{bmatrix} G_{1,1} &{} \cdots &{} G_{1,N_e}\\ \vdots &{} \ddots &{} \\ G_{n,1} &{} &{} G_{n,N_e} \end{bmatrix}, \quad {\mathbf {G}}_{i,e} = \mathbf {(V}^{T})_{e} {\mathbf {f}}_{e}^{(i)} \end{aligned}$$
(7)

with \({\mathbf {f}}_e^{(i)}\) being defined as the i-th snapshot of the unassembled internal force vector of element e, \(N_e\) the number of elements, and n the number of snapshots. Next, the vector \({\mathbf {b}} \in \mathbb {R}^{n}\) containing the sums over all elements, \({\mathbf {b}}_i = \sum _{e \in \varOmega } G_{i,e}\), is set up. This corresponds to the linear system \({\mathbf {b}} = {\mathbf {G}} \zeta\), which is satisfied exactly if all entries of \(\zeta\) are ones. A minimisation problem is then solved with a predetermined tolerance \(\tau\), such that:

$$\begin{aligned} \zeta ^{*} = \text {argmin} \left\Vert \zeta \right\Vert _{0} ~~~ \text {s.t.} ~~ \left\Vert {\mathbf {G}} \zeta - {\mathbf {b}} \right\Vert _{2} < \tau \left\| {\mathbf {b}} \right\| _{2} ~~~ \text {and} ~~~ \zeta \ge {\mathbf {0}}. \end{aligned}$$
(8)

Equation (8) is approximated by a sparse non-negative least square method. A solution can be obtained using greedy algorithms, where elements are added until the condition of Eq. (8) is satisfied.
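
A minimal sketch of such a greedy procedure is given below, assuming the matrix \({\mathbf {G}}\) and vector \({\mathbf {b}}\) have been assembled as in Eq. (7); the element-selection heuristic is a generic sparse NNLS variant and not necessarily the exact algorithm of the referenced implementation:

```python
import numpy as np
from scipy.optimize import nnls

def ecsw_weights(G, b, tau=0.01):
    """Greedy sparse NNLS: add elements until ||G z - b||_2 < tau * ||b||_2 (Eq. 8)."""
    n_el = G.shape[1]
    active = []                                  # indices of the selected elements E
    zeta = np.zeros(n_el)
    residual = b.copy()
    target = tau * np.linalg.norm(b)
    while np.linalg.norm(residual) >= target and len(active) < n_el:
        # Pick the element whose force contribution best matches the current residual.
        scores = G.T @ residual
        scores[active] = -np.inf
        active.append(int(np.argmax(scores)))
        # Non-negative least squares restricted to the active set.
        z_active, _ = nnls(G[:, active], b)
        zeta[:] = 0.0
        zeta[active] = z_active
        residual = b - G @ zeta
    return zeta, active
```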

3 Non-intrusive model order reduction

In contrast to the intrusive scheme, the non-intrusive method is independent of the FEM solver and merely based on data-fitted meta-models. However, the workflows share many similarities, such as the distinction between online and offline phase, as illustrated in Fig. 2. Besides the constructed reduced basis, the non-intrusive MOR relies on an additional regression model. This meta-model represents the full order system in the subspace spanned by the reduced basis (Guo and Hesthaven 2017; Swischuk et al. 2019; Kast et al. 2020).

Fig. 2

The workflow for non-intrusive MOR divided into online and offline phase

3.1 Training phase

Figure 2 depicts the workflow of the presented non-intrusive MOR approach, divided into an offline (training) and an online phase. Consistent with the construction of intrusive models, a snapshot matrix \({\mathbf {A}} \in \mathbb {R}^{N \times n_{t}n_{\mu }}\) is built from displacement snapshots of full order analyses. However, dimensionality reduction is not limited to displacement data; in principle, the described procedure could be applied to any quantity of interest.

In contrast to intrusive approaches, a list of time instances \(t_i \in T\) and parameter configurations \(\mu _i \in P\) corresponding to the snapshots is prepared, since the non-intrusive meta-models explicitly parameterise them. Hence, this data is additionally required as an input of the training phase. POD is performed on the snapshot matrix \({\mathbf {A}}\) and a truncated set of basis vectors is chosen, as discussed in Sect. 2.1.

Next, a machine learning approach is introduced, in particular a regression or supervised learning technique. Three different regression models, a polynomial regression function (poly), k-nearest neighbour regression (kNN), and Gaussian Process regression (GPR), are tested in this study. The corresponding model \(\phi : X \rightarrow Y\) maps the input \(t,\mu \in X\) to the outputs \({\mathbf {u}}_r(t,\mu ) \in Y\). Note that \({\mathbf {u}}_r(t,\mu )\) is the vector of unknowns in the reduced space. To prepare the training data, the snapshots are multiplied with the transposed matrix \({\mathbf {V}}^T\), which projects them into the reduced space.
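
This data preparation step can be sketched as follows; the array names and shapes are illustrative and assume the snapshot matrix and POD basis of Sect. 2.1:

```python
import numpy as np

def prepare_training_data(A, V, t_list, mu_list):
    """Project snapshots into the reduced space and pair them with their (t, mu) labels."""
    U_r = V.T @ A                                   # reduced coordinates, shape (k, n)
    X_train = np.column_stack([t_list, mu_list])    # regression inputs (t, mu), shape (n, 2)
    Y_train = U_r.T                                 # regression targets u_r(t, mu), shape (n, k)
    return X_train, Y_train
```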

To approximate an unknown regression function f, we assume a training set \(\mathbf {x,y} \in D\) is available, where \(y_i=f(x_i)\) holds for each pair \(x_i,y_i\). With a k-nearest neighbour algorithm (Friedman et al. 1977) new data points are approximated by linear combinations of their k nearest neighbours in the training set D. Each neighbour is weighted by its distance to the data point. In contrast to the local approximation of the kNN algorithm, the polynomial regression function belongs to the global approximation techniques. Here, the weight vector \({\mathbf {b}}\) of a polynomial function

$$\begin{aligned} f({\mathbf {x}}) = \mathbf {b_0} + \mathbf {b_1 x} + \mathbf {b_2 x}^2 + \cdots + \mathbf {b_d x}^d \end{aligned}$$
(9)

of order d is fitted to the output quantity vector \(f({\mathbf {x}})\). The weights are computed by a least-squares solver minimising the error between the approximation function and the training data.

Gaussian Process regression is a probabilistic regression approach, whereby the training points are treated as random variables with a joint Gaussian distribution. A Gaussian Process is completely defined by a mean \(m({\mathbf {x}})\) and a covariance function \(\kappa ({\mathbf {x}},{\mathbf {x}}')\), with

$$\begin{aligned} f({\mathbf {x}}) \sim GP(m({\mathbf {x}}),\kappa ({\mathbf {x}},{\mathbf {x}}')) \end{aligned}$$
(10)

describing a real process \(f({\mathbf {x}})\). The correlation between two random variables at locations x and \(x'\) is expressed by a correlation function, most commonly the squared exponential function

$$\begin{aligned} \kappa (x,x') = \text {exp} \left( - \frac{|| x - x' ||^2}{l_{c} ^{2}} \right) . \end{aligned}$$
(11)

However, many other kernels could be chosen. These kernel functions include a set of hyper-parameters, e.g. the correlation length \(l_c\), which are optimized for the training set. In general, a prediction for new test data can be computed through a reformulation of its joint distribution with the existing covariance matrix. For a detailed description the reader is referred to Rasmussen and Williams (2006).
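
As a sketch of this step in the reduced space, a GPR with an anisotropic Matérn kernel (as used later in Sect. 5) could be fitted with scikit-learn; the kernel parameters and the use of this particular library are assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def fit_gpr(X_train, Y_train):
    """Fit a GPR mapping the inputs (t, mu) to the reduced coordinates u_r."""
    # One length scale per input dimension yields the anisotropic Matérn kernel.
    kernel = Matern(length_scale=np.ones(X_train.shape[1]), nu=2.5)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    return gpr.fit(X_train, Y_train)     # kernel hyper-parameters are optimised during fit
```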

3.2 Online phase

A low fidelity model to predict the system’s behaviour is constituted by the regression model in the reduced subspace. The online phase of the presented scheme is depicted in Fig. 2, represented by the lower grey bar. For a parameter configuration \(\mu\) and time instance t the chosen regression model provides a system answer \({\mathbf {u}}_r(t, \mu )\). To interpret the predicted output, a back projection into the physical space is required. Moreover, the projection \({\mathbf {u}}={\mathbf {V}}{\mathbf {u}}_r\) can also be interpreted as a weighted linear combination of the basis vectors \({\mathbf {v}}_i\). Recalling Sect. 2.1, every column \(\mathbf {v_i}\) in the matrix \({\mathbf {V}} \in \mathbb {R}^{N \times k}\) represents one of the reduced basis vectors:

$$\begin{aligned} {\mathbf {V}} = \begin{bmatrix} \mathbf {v_1}&\mathbf {v_2}&\cdots&\mathbf {v_k} \end{bmatrix}. \end{aligned}$$
(12)

The full order displacement vector \({\mathbf {u}}\) is then approximated as stated in Eq. (13), with the basis vectors weighted by \(\pmb {\alpha } = {\mathbf {u}}_r\), such that:

$$\begin{aligned} {\mathbf {u}} \approx \sum _{i=1}^{k} \mathbf {v_i} \alpha _i(t,\mu ). \end{aligned}$$
(13)

Depending on the input parameters of the surrogate model, the scalar values \(\alpha _i\) define the influence of each basis vector.

In comparison to conventional surrogates, which are commonly used for multi-query analysis, the non-intrusive MOR method operates in the subspace and does not predict the full order system directly. Moreover, the corresponding model is restricted to physical solutions based on a strongly reduced number of degrees of freedom.
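
The complete online step therefore reduces to one regression call followed by the back projection of Eq. (13); a minimal sketch, reusing a fitted regressor such as the GPR above:

```python
import numpy as np

def predict_displacement(regressor, V, t, mu, u0=None):
    """Non-intrusive online step: alpha = u_r(t, mu), then u = u0 + V @ alpha (Eq. 13)."""
    alpha = regressor.predict(np.array([[t, mu]]))[0]   # weights of the basis vectors
    u = V @ alpha
    return u if u0 is None else u0 + u
```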

4 Optimisation

Equipped with the surrogate models, an optimisation study for large scale problems can be performed. In general, an optimisation problem can be formulated as follows:

$$\begin{aligned} \begin{array}{cc} \text {min} \quad \sum _{i=1}^r f_i(x) \\ \text {s.t.} \quad g_j(x) \ge 0 \quad \text {for} \quad j=1,2,...,m\\ \mathbf {x^l} \le {\mathbf {x}} \le \mathbf {x^u} . \end{array} \end{aligned}$$
(14)

The overall aim is to find the minimum of the objective function, whereby multiple objectives can be expressed as the sum of r single functions \(f_i(x)\) (Fang et al. 2017). The objectives \(f_i\) are normalised with respect to their initial values \(f_i^0\) and may be weighted with factors \(w_i\) according to a user-defined significance criterion:

$$\begin{aligned} f(x) = w_{1} \frac{f_{1}}{f_{1}^{0}} + w_{2} \frac{f_{2}}{f_{2}^{0}} + \cdots + w_{r} \frac{f_{r}}{f_{r}^{0}}. \end{aligned}$$
(15)

In addition, m constraints \(g_j(x)\) are formulated for the design variables \({\mathbf {x}}\) with lower limits \(\mathbf {x^l}\) and upper limits \(\mathbf {x^u}\).

In the present work, we focus on population-based algorithms, such as the differential evolution technique (Storn and Price 1997). This gradient-free method is initialised with a population of candidates, whose fitness levels are evaluated in an iterative manner. To further scan the design space, trial vectors are set up by the principles of mutation and crossover. The best performing candidates are selected and a new generation is built. The iterative process continues until the algorithm terminates, either when a predefined error tolerance is achieved or when a maximum number of generations is exceeded. For a more detailed explanation the reader is referred to Storn and Price (1997).
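
A minimal sketch of such a study with SciPy's differential evolution and a weighted, normalised objective in the spirit of Eqs. (14)-(15); the surrogate call, weights, reference values and optimiser settings are illustrative placeholders:

```python
import numpy as np
from scipy.optimize import differential_evolution

def combined_objective(x, surrogate, w, f0):
    """Weighted sum of normalised objectives f_i evaluated on a surrogate model (Eq. 15)."""
    f = np.asarray(surrogate(x))            # raw objective values f_1, ..., f_r for design x
    return float(np.dot(w, f / f0))

def optimise(surrogate, w, f0, bounds):
    # bounds: list of (lower, upper) tuples for the design variables x^l <= x <= x^u
    return differential_evolution(combined_objective, bounds,
                                  args=(surrogate, w, f0),
                                  popsize=18, recombination=0.7, tol=0.01, maxiter=50)
```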

5 Crashworthiness test case

In this section, the results of a transient, non-linear example including contact formulations are presented. For the intrusive MOR scheme the software LS-Dyna including a special implementation interface is utilised. The implementation was done within a previous work by Bach (2020) and has been further extended to perform optimisation studies. A stand-alone Python code holds the algorithm for the non-intrusive scheme, which was developed for this study.

To assess the two algorithms, a crash box exampleFootnote 1 is introduced. The crash box, as depicted in Fig. 11, is modelled as an elasto-plastic tube with a wall thickness of 2.0 mm and a length of 272.5 mm, whereby imperfections are introduced along the tube to trigger the folding mechanism. A rigid plate with an initial velocity of 40 km/h in negative z-direction crushes the tube, which is clamped at the bottom.

All deformable parts are discretised with fully integrated Reissner-Mindlin shell elements (ELFORM=16), which have translational and rotational degrees of freedom. Apart from the element formulation, the remaining contact and material parameters have not been changed compared to the template\(^{1}\). Penalty formulations are applied to model the contact between plate and crash box as well as the self-contact (CONTACT_AUTOMATIC_SINGLE_SURFACE). The material properties for the crash box are as follows: the mass density is \(\rho = 7830\) kg/m\(^3\), the Young’s modulus is \(E = 200\) GPa, the Poisson’s ratio is \(\nu = 0.30\) and the yield strength is \(\sigma _y = 0.366\) GPa with a piece-wise linear plasticity model. The reduced order models are constructed for the tube discretised by 1924 nodes and a termination time of T = 20 ms is set.

For the intrusive scheme, the projection matrix \({\mathbf {V}}\) is orthonormalised with respect to the mass matrix and built up for displacement and rotational degrees of freedom separately. As the focus is on transient analysis an error measure considering the full time domain is defined. To evaluate the accuracy the global mean relative error (GMRE) is computed using the full order displacements, \({\mathbf {u}}\), and the reduced order displacements \(\mathbf {u_r}\) for the full time domain T as follows:

$$\begin{aligned} \varepsilon _{GMRE} := \frac{\sqrt{ \sum \limits _{t \in T } ({\mathbf {u}}(t) - {\mathbf {V}}\mathbf {u_{r}}(t))^{T} ({\mathbf {u}}(t) - {\mathbf {V}}\mathbf {u_{r}}(t)) } }{ \sqrt{ \sum \limits _{t \in T } {\mathbf {u}}^{T}(t) {\mathbf {u}}(t) }}. \end{aligned}$$
(16)
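
A direct implementation of Eq. (16), assuming the full-order and reconstructed displacement histories are stored with one column per time step:

```python
import numpy as np

def gmre(U_fom, U_rom):
    """Global mean relative error over the time domain, Eq. (16).
    Both inputs hold one displacement vector per time step, shape (N, n_t)."""
    return np.linalg.norm(U_fom - U_rom) / np.linalg.norm(U_fom)
```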

In the following, the intrusive and non-intrusive MOR schemes are applied to the crash box example and their results are compared regarding accuracy and numerical effort. The discussion of the results is split into training accuracy and online accuracy. The former evaluates the ability to reproduce the training data, while the latter examines the performance on parameters that are not present in the training data. Furthermore, their inter- and extrapolation capabilities are studied, whereby the impacting kinetic energy and the thickness of the crash box are varied. To evaluate their overall computational cost a comparative analysis is presented. In Sect. 6 the reduced models of the crash box are embedded into an optimisation workflow.

5.1 Training accuracy

To test the intrusive and non-intrusive schemes for transient analysis, the first study evaluates a model reproducing the training simulation. Thereby, the parameter domain P is neglected and only the time domain T is considered. A snapshot is added to the snapshot matrix uniformly every 0.01 ms.

Figure 3 shows the results of two crash box simulations computed by the intrusive MOR, whereby the grey wireframe represents the full order model (FOM) and the orange and green shells indicate the selected hyper-reduction elements for two different reduction levels (Bach 2020). The parameter k is the number of basis vectors for the Galerkin projected reduced order model (ROM), and \(\tau\) is the tolerance value for the hyper-reduction algorithm in Eq. (8).

The Galerkin ROM and the hyper-reduced model (HROM), as depicted in Fig. 3, result in computational speed-up factors of 4.7 and 7.1, respectively.

Fig. 3

The deformed crash box at \(t=20\) ms for two different reduction levels, with the FOM as grey wireframe and the HROM results shown by the coloured elements (Bach 2020)

To further validate the approach, Fig. 4 shows the displacements obtained by intrusive and non-intrusive MOR for two reference nodes (highlighted in blue and green in Fig. 5), which are involved in the folding mechanism. On the left, the nodal displacement results of the FOM are compared to the intrusive Galerkin ROM and HROM, which run with a larger \(\Delta t\), as the Galerkin projection leads to a higher critical time step for explicit solvers (Bach et al. 2018; Krysl et al. 2001). On the right of Fig. 4 the results of the reference nodes are plotted utilising the non-intrusive model with \(k=20\) basis vectors and a Gaussian Process regression (isotropic Matérn kernelFootnote 2). Here, no physical system is solved and no contact algorithm is evaluated.

This example illustrates that the non-intrusive regression model can be fitted to the example data. To analyse the quality of non-intrusive ROMs, different regression models are fitted and tested with varying input parameter configurations in the next section.

Fig. 4

Displacement-time curve for two reference nodes (marked in Fig. 5 in blue and green correspondingly) using the intrusive (\(k=20, \tau = 0.01\)) and non-intrusive (\(k=20\), Matérn kernel) approach

5.2 Online accuracy

The previous section presented the ability of the MOR schemes to create a simplified model for a non-linear structural simulation. However, a replication of the training simulation is not the intention of the MOR methodology, as no computational efficiency is gained. Considering a multi-query analysis, such as optimisation or probabilistic analysis, the idea is to roughly identify the parameter space and perform a couple of high-fidelity simulations beforehand. These are utilised for the construction of the reduced model to enable fast online simulations within the multi-query analysis. Therefore, the inter- and extrapolation capabilities within the online phase are the key point for an efficient application. The following study investigating two different non-linear manifolds illustrates and compares the capabilities of intrusive and non-intrusive models for transient analysis. We first restrict ourselves to a one-dimensional parameter space; a larger space is evaluated within the optimisation study in the next section.

To evaluate the effect of parameter variations, the kinetic energy of the impacting plate and the thickness of the crash box are identified as suitable variables. For both parameters individual models are created for a variation of \(\pm 20 \%\) in all following studies. To change the kinetic energy applied to the crash box, the mass of the impacting plate is varied by \(\pm 20 \%\), as the kinetic energy is proportional to the mass at constant velocity.

The variation of kinetic energy and thickness under constant velocity has a drastic impact on the folding mechanism of the crash box, as depicted in Fig. 5. It is also noticeable that the varying kinetic energy has a smaller impact on the system than a \(20 \%\) deviation of the thickness. By discussing both variables we can investigate different “levels” of non-linearity and their effects on the online accuracy.

Fig. 5

Crash box deformation for a variation of \(\pm 20 \%\) of the tube’s thickness and the impacting kinetic energy realised by a variation of the plate’s mass

To analyse the capabilities of ROMs representing varying manifolds, first a study focusing on the non-intrusive approach is presented. The non-intrusive ROMs exploiting the machine learning (ML) techniquesFootnote 3 polynomial regression (poly), k-nearest neighbour regression (kNN) and Gaussian Process regression (GPR) are compared individually for the two cases: impacting kinetic energy and thickness of the tube. As the models highly depend on the quality of the training phase, not only the different regression techniques are assessed, but also an increasing number of training simulations.

To judge data-driven meta-models, a prior distinction between training and test data enables the computation of multiple error measures such as the \(R^2\) value or the global mean relative error (GMRE). A training data set is created by Sobol samplingFootnote 4 with increasing sample sets \(\mu _1,\mu _2, ..., \mu _{n_{\mu }} \in P\) with \(n_{\mu }\) of 2, 4, 8, 16, and 32. In addition, a testing set of 32 samples is built by a random Sobol sequence, whereby each number is multiplied by a scaled random value. The distribution of the corresponding training and test samples is depicted in Fig. 6, whereby the test points are highlighted with vertical blue lines.
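
A sketch of how such a one-dimensional training set could be drawn, here with scipy.stats.qmc (which may differ from the sampler actually used); the bounds correspond to the \(\pm 20 \%\) variation around the nominal 2.0 mm wall thickness:

```python
from scipy.stats import qmc

sampler = qmc.Sobol(d=1, scramble=False)       # one-dimensional parameter domain P
samples = sampler.random(n=8)                  # n_mu training points in [0, 1)
# Scale to the +/- 20% thickness range around the nominal wall thickness of 2.0 mm.
t_tube = qmc.scale(samples, l_bounds=[1.6], u_bounds=[2.4]).ravel()
```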

Fig. 6

Visualisation of training sets \(\mu _1,\mu _2, ..., \mu _{n_{\mu }} \in P\) with varying sample number \(n_{\mu }\) for the ROMs \(\phi (t,t_{tube})\), \(\phi (t,e_{impact})\) corresponding to the study of Fig. 8 and its 32 test samples, represented by blue lines

The training and testing configurations are scaled for the parameter space thickness of the crash box \(t_{tube} \in P\) and energy of the impacting plate \(e_{impact} \in P\). Full order analyses are simulated accordingly and snapshots are collected uniformly every 0.1ms. Five ROMs \(\phi (t,t_{tube})\) are individually built for the sets of 2, 4, 8, 16, and 32 training points \(n_{\mu }\), whereby each simulation contributes 200 snapshots \(t_1,t_2,..,t_{200} \in T\). To create five ROMs \(\phi (t,e_{impact})\) the procedure is repeated. All models are based on a reduced subspace with \(k=20\) basis vectors.

In Fig. 8 the displacement GMRE of Eq. (16) is plotted as an error measure for all 32 test samples. The bar is drawn from the smallest to the largest GMRE and the mean GMRE of all samples is highlighted individually by a marker for kNN, poly and GPR. For all models, the accuracy increases with the number of training simulations. One can notice that the poly models of order seven, constructed from two training simulations, do not provide useful surrogates, possibly due to over-fitting. Despite this exception, the regression techniques kNN with five neighbours, poly and GPR using an anisotropic Matérn kernel show GMREs in similar ranges.

Table 1 compares the GMRE and \(R^2\) for the models \(\phi _8(t,t_{tube})\) and \(\phi _8(t,e_{impact})\) built from 8 training simulations for all three ML techniques. It is noticeable that the GPR performs best in the framework of non-intrusive MOR, with a GMRE of 0.28% and \(R^2 = 0.999\) for the parameter space \(e_{impact}\). For this example, the models built by kNN have a higher accuracy than those obtained by polynomial regression. As the kNN technique averages a data point with its k nearest neighbours, its ability to approximate the very first and last time steps is reduced. The performance of kNN trained on sparse data, especially in the time domain, can drop significantly, as also observed by Kneifl et al. (2021). However, kNN is a fast and robust technique, and especially for a high number of data points a simpler regression model can be beneficial.

Comparing \(R^2\) and GMRE for the different parameter domains, all \(\phi (t,e_{impact})\) regression models show a higher \(R^2\) than \(\phi (t,t_{tube})\). This can also be observed in Fig. 8 for models with increasing training sets. On the left, the \(\phi (t,t_{tube})\) models have consistently higher GMREs than the \(\phi (t,e_{impact})\) surrogates. As a smaller range of displacement patterns (Fig. 5) corresponds to the variation of the impacting kinetic energy \(e_{impact}\), this could be expected.

To further understand the particular meta-models, the displacements in x, y, and z direction of the first test simulation are visualised for two folding points (marked in Fig. 5) in Fig. 7. As in Table 1, poly, kNN, and GPR are based on snapshots from 8 transient training simulations and a subspace of \(k=20\) basis vectors. An artificially smooth function can be observed for the polynomial regression of order seven. Compared to the other techniques, GPR using an anisotropic Matérn kernel has superior accuracy (Fig. 8) and is able to depict more irregular data points, as also exploited by Guo and Hesthaven (2019).

Fig. 7

Displacement-time curve for two reference nodes (marked in Fig. 5 in blue and green correspondingly) for varying regression models poly (p = 7), kNN (k = 5), GPR (anisotropic Matérn kernel) and a subspace \(k=20\)

Fig. 8

Displacement GMRE of 32 test simulations and its mean for varying ROMs with a subspace \(k=20\) and poly (p = 7), kNN (k = 5), GPR (anisotropic Matérn kernel) for sets of 2, 4, 8, 16, 32 training simulations

Table 1 Comparison of mean error measures \(R^2\) and GMRE for varying ROMs with a subspace \(k=20\) and poly (p = 7), kNN (k = 5), GPR (anisotropic Matérn kernel) for 8 training simulations and 32 test simulations

The next study compares the non-intrusive to the intrusive approach for varying parameter domains. For the intrusive MOR the question arises whether the projected system of equations enables an extrapolation of design variables. Therefore, the projection matrix is constructed from a single training simulation and tested in the online phase with extrapolated design variables.

The displacement GMREs according to Eq. (16) of the intrusive MOR are plotted on the right of Fig. 9a and b. The model is trained, collecting snapshots every 0.01 ms, using a plate mass of 150 kg and a wall thickness of 2.0 mm. It can be observed that the error increases with the distance to the training configuration. Also, the relative error due to a change in wall thickness is generally higher than the error associated with a change of the plate’s mass. This further supports the observation of higher non-linearities associated with a varying tube thickness.

In contrast to the small extrapolation capabilities of the intrusive ROMs, the non-intrusive scheme is restricted to interpolation. Training a non-intrusive model with snapshots corresponding to a single parameter instance would yield unreasonable results, as the surrogate is essentially a regression model and only enriched by physical information. For a comparison of the techniques, the minimum number of training simulations to reach a similar accuracy level is investigated. The snapshot time increments are enlarged to 1 ms, such that a total number of 20 snapshots is collected from each transient analysis. The number of training simulations was successively increased until the non-intrusive ROMs achieved GMREs in the same range as the intrusive approach. As the computational cost of non-intrusive models mainly depends on the number of training simulations, this is an important factor when comparing the overall efficiency gains.

On the left of Fig. 9a and b the displacement GMREs of the non-intrusive MOR results are depicted. The training samples are obtained by Latin hypercube sampling and marked with grey circles in Fig. 9a and b. Hence, three training simulations are needed for varying mass and thirteen for varying thickness in order to obtain error values similar to the intrusive MOR method. As already observed before, the accuracy of the simplified model strongly depends on the non-linear manifold and the quality of the training phase. Note that for the variation of thickness in Fig. 9b the number of training simulations rises to thirteen, which is equivalent to an increased online error if the same number of training simulations were kept.

Fig. 9

Comparison of the displacement GMRE with parameter variation of intrusive (Bach 2020) and non-intrusive ROMs (anisotropic Matérn kernel) with a subspace \(k=40\). (a) Overall displacement GMRE for non-intrusive and intrusive ROMs with varying mass of the impacting plate. (b) Overall displacement GMRE for non-intrusive and intrusive ROMs with varying thickness of the crash box

It could be shown that non-intrusive ROMs are capable of representing the crash box example, whereby especially kNN and GPR provided reliable regression models. The GPR is the most accurate technique, while kNN is favourable in terms of efficiency and robustness. Moreover, the accuracy of ROMs highly relies on the amount of training simulations and the snapshot intervals in the time and parameter domain. Note that the characteristic change in time and parameter space defines the non-linear manifold and therewith the required training set. A comparison to the intrusive approach illustrates that similar accuracy can be achieved by non-intrusive ROMs with a higher number of training simulations. For the intrusive ROMs of the crash box, extrapolation within the parameter domain is possible for a \(20 \%\) range; however, a general conclusion cannot be drawn from this single example. Further studies in the field of crashworthiness are required, e.g. including error measures.

5.3 Computational cost

After examining the accuracy of the models, we now focus on the computational speed-up of the reduced models. The construction cost of the machine learning model is negligibly small compared to the evaluation of the training simulations for all techniques. One ROM evaluation using kNN or poly is four orders of magnitude cheaper than a full order analysis, and three orders of magnitude for GPR. However, for a higher number of DOFs, but especially for a higher number of design parameters and snapshots, the cost of constructing a Gaussian Process can increase significantly. For a detailed comparison of online and offline costs for the different ML techniques the reader is referred to Kneifl et al. (2021).

Here, we focus on the comparison of the costs of the intrusive and the non-intrusive method. Table 2 shows the elapsed time for the crash box example of Fig. 9. The single online simulations of the corresponding ROMs are measured on an Intel Xeon 3.5 GHz processor with 4 CPUs. The intrusive scheme has a speed-up factor of approximately 4.7 and the non-intrusive scheme with GPR is three orders of magnitude faster than the FOM. However, including the offline phase in the evaluation of the computational cost, the picture changes. Figure 10 compares the cost functions of the intrusive and the non-intrusive ROMs \(\phi (t,e_{impact})\) and \(\phi (t,t_{tube})\), in grey, blue and green, respectively. The x-axis shows the number of online evaluations, whereby the start represents the training effort. The values on the y-axis are normalised by the computational cost of one full order simulation. Thus, the intrusive training cost equals 1 and the non-intrusive training costs equal 3 and 13. With speed-up factors of approximately four and \(10^4\), the efficiency of the non-intrusive scheme overtakes the intrusive one after 10 and 57 online evaluations of \(\phi (t,e_{impact})\) and \(\phi (t,t_{tube})\), respectively. Note that the costs of SVD and hyper-reduction are neglected here. The interested reader is referred to a detailed evaluation by Bach (2020).
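
The quoted break-even points follow from equating the accumulated, FOM-normalised costs of both schemes; a small sketch using the training costs and speed-up factors reported above (the non-intrusive speed-up is taken as \(10^4\) here, as an assumption consistent with Fig. 10):

```python
def break_even(train_non_intrusive, speedup_non_intrusive=1e4,
               train_intrusive=1.0, speedup_intrusive=4.7):
    """Number of online runs after which the non-intrusive ROM becomes cheaper.
    All costs are normalised by the cost of one full-order simulation."""
    rate_gap = 1.0 / speedup_intrusive - 1.0 / speedup_non_intrusive
    return (train_non_intrusive - train_intrusive) / rate_gap

print(break_even(3))     # ~ 9.4  -> about 10 online evaluations for phi(t, e_impact)
print(break_even(13))    # ~ 56.4 -> about 57 online evaluations for phi(t, t_tube)
```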

Fig. 10

Normalised computational cost for the crash box example of Fig. 9 comparing the full order simulation (FOM), intrusive (Galerkin ROM and HROM) and non-intrusive ROM for an increasing number of online simulations

Table 2 Elapsed time for the crashbox example of Fig. 9 comparing full order simulation (FOM), intrusive (Galerkin ROM and HROM) (Bach 2020) and non-intrusive ROM

6 Optimisation study

The previous section evaluated the effects of the training parameters on the online accuracy for the crash box example. Next, an optimisation is proposed to further illustrate the capabilities of the presented schemes. Therefore, the crash box is adapted for a dimensional optimisation of tailor welded blanks. The automotive industry discovered that for thin-walled structures tailor welded blanks can be exploited to improve their lightweight and crashworthiness properties simultaneously (Fang et al. 2017). The task is to combine multiple blanks with different thicknesses or material properties into a single structure to improve its mechanical behaviour. In the literature, multiple optimisation studies searching for the best thickness distribution of welded (Chen et al. 2019; Xu et al. 2014) or rolled (Duan et al. 2016; Sun et al. 2017; Klinke and Schumacher 2018) blanks can be found. These optimisation schemes commonly employ surrogate models such as radial basis functions (Klinke and Schumacher 2018; Sun et al. 2017) or support vector regression (Duan et al. 2016). Here, the two MOR schemes are used as surrogate models during an optimisation analysis, inspired by Chen et al. (2019), but utilising a differential evolutionFootnote 5 algorithm. Note that we focus on the performance of the surrogate models in a realistic application rather than on finding new results of the optimisation study itself.

Fig. 11

Initial and folded crash box of the optimisation study at \(t = 0\)ms and \(t = 35\)ms with optimised design variables \(t_1=1.08\)mm and \(t_2=1.85\)mm

For the optimisation the crash box of Sect. 5 is divided into three rings representing three blanks, as shown by the colour-highlighted parts in Fig. 11. Starting from the top, the thickness of the first ring \(t_1\) and of the second ring \(t_2\) are unknown design parameters, whereby \(t_3\) is assumed to be a constant thickness of 2 mm. The blanks are simplified to rings, whereby the transition zones are neglected, such that the design parameters \(x = [t_1, t_2]\) are varied in the range from \(x^l = 1.0\) to \(x^u = 2.0\) mm. In addition, the mass of the impacting plate \(m_{plate}\) is set as a design variable with a range of \(80 {-} 170\) kg. The termination time is extended to 35 ms, such that the tube is completely folded for all possible choices of design variables.

As a first objective, the kinetic energy of the impacting plate should be maximised, which is denoted by the objective \(f_1\) in [J]. Secondly, the acceleration a(t) and force F(t) resulting from the impacting plate are expected to be constant for optimal crashworthiness designs. Therefore, the objective function includes a term corresponding to the displacement curve over time u(t) of the impacting plate. Starting from \(t = 1\) ms, the displacement of the middle node of the impacting plate is observed, whereby the peak forces are neglected here. Transferring the objective to the accelerations \(\ddot{u}(t) = a(t)\), the deviation from the ideal quadratic displacement curve (i.e. related to a constant acceleration) is minimised with the second objective function \(f_2(u,t)\) in [mm]. A conflicting requirement is a lightweight structure; therefore, the mass of the crash box is also minimised with \(f_3(t_1, t_2)\) in [mm].
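
One possible evaluation of the second objective is sketched below: an ideal constant-acceleration (quadratic) reference curve is fitted to the observed plate displacement from \(t = 1\) ms onwards and the deviation is accumulated. The exact reference curve and norm used in the study are not specified, so the least-squares fit and the mean absolute deviation here are assumptions:

```python
import numpy as np

def f2_deviation(t, u_plate, t_start=1.0e-3):
    """Deviation of the plate displacement from an ideal quadratic curve (t in seconds)."""
    mask = t >= t_start                                  # peak forces before 1 ms are ignored
    coeffs = np.polyfit(t[mask], u_plate[mask], deg=2)   # ideal constant-acceleration curve
    u_ideal = np.polyval(coeffs, t[mask])
    return float(np.mean(np.abs(u_plate[mask] - u_ideal)))
```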

Thus, the total objective function \(f(x) = w_1 f_1 + w_2 f_2 + w_3 f_3\), with the weights \(w_i\) as depicted in Table 3, maximises the energy absorption and minimises the acceleration and mass to improve the crash box design. Note that normalised values of \(f_1 {-} f_3\) are combined to avoid problems due to dissimilar units.

The optimisation algorithm terminates if the tolerance \(\epsilon < 0.01\), defined as the population’s standard deviation of the objective (energy) values divided by their average, or a maximum iteration number is reached.

First, the study was performed with the FOM and repeated for ten runs with varying population sizes \((12 {-} 21)\), recombination factors \((0.7 {-} 0.9)\) and maximum iteration numbers (\(30 {-} 50\)) to gain a reference solution. With the resulting thicknesses \(t_1\) and \(t_2\) of all runs being similar, as marked in Fig. 12c under Reference, one can conclude that a well-defined minimum is found. Moreover, the resulting design parameters are averaged over all trials and listed in Table 4 under Reference.

Table 3 Combined objective functions with corresponding weights \(f(x) = w_1 f_1 + w_2 f_2 + w_3 f_3\) for the optimisation study

In Fig. 12a, the objective function is plotted over the full two-dimensional design space \(t_1\) and \(t_2\) in the vicinity of the optimum. Within the design space, 200 points are analysed by full order simulation and the minimum is marked. The results of the optimisation algorithm are confirmed by the clear minimum of the objective function observable in Fig. 12a.

The thickness of the first ring \(t_1\) lies at the lower boundary of the range and that of the second ring \(t_2\) at the upper limit, which also relates to the ideas of an optimised crash box by Chen et al. (2019). Corresponding to the objective \(f_1\) of maximising the kinetic energy, the mass of the plate \(m_{plate}\) has reached the upper end of its range. Exemplarily, the convergence of the design parameters \(t_1\) and \(t_2\) with a population size of 18 is displayed for 600 evaluations in Fig. 12b, and the corresponding deformed crash box is depicted in Fig. 11.

Table 4 Average optimised design variables for the thickness of ring \(t_1\), ring \(t_2\) and the mass of the impacting plate \(m_{plate}\) comparing full order simulation (FOM), intrusive (Galerkin ROM) and non-intrusive ROM

Next, the optimisation is performed with the intrusive and the non-intrusive MOR scheme. For the non-intrusive scheme 150 training simulations are created with Latin hypercube sampling corresponding to the ranges of the three design variables. The subspace is spanned by 50 basis vectors and a Gaussian Process regressor with an anisotropic Matérn kernelFootnote 6 is optimised to create the meta-model. The averaged results of the design variables are also listed in Table 4 and visualised in Fig. 12c in blue. In addition, the optimised values of the intrusive MOR scheme using Galerkin projection are plotted in green. It is built with 30 training simulations and set up with 50 degrees of freedom in the reduced space.

Both approximation methods can replicate the overall minimum and result in similar optimised design variables. The error of the optimised design variables lies in the range of \(0.1 {-} 7\) percent. However, in other regions, especially at the borders of the design space, the system answers deviate more from the reference solution. The interested reader is referred to Fig. 13 in Appendix A, which depicts the objective function over the design space \(t_1\) and \(t_2\) with \(m_{plate}=\) 170 kg. The approximation by the intrusive scheme has a larger area of low objective values in the vicinity of the minimum and therefore shows less robust optimisation results in comparison to the non-intrusive approximation. This also explains why the intrusive MOR has a higher error than the non-intrusive approach, in contrast to the previous studies of Sect. 5.2, e.g. Fig. 9.

The former analysis included ROMs for a single parameter, whereas for the optimisation a parameter space of dimension three is spanned. During the construction of the intrusive ROM it was noticeable that the enlarged parameter space reduces its accuracy significantly, as more variance is present in the data and therefore more basis vectors have to be considered in the reduced basis. Hence, the subspace projection restricts the intrusive MOR in capturing larger parameter spaces, as also observed by Bach (2020). As the projection is the decisive reduction step, only the Galerkin ROM is utilised for the optimisation study. Since the intrusive and non-intrusive methods differ in speed-up by several orders of magnitude, this limitation does not affect the conclusion of this study and the estimate presented in Sect. 5.3 remains reasonable.

Fig. 12

Evaluation of the optimisation algorithm applied to the crash box example simulated via full order (FOM), intrusive (Galerkin ROM) and non-intrusive ROM. (a) Objective function plotted over the design space \(t_1\) and \(t_2\) for \(m_{plate} = 170\) kg computed by the full order reference simulation. (b) Convergence of the design variables \(t_1\) and \(t_2\) of the FOM reference optimisation with a population size of 18 and recombination factor 0.7, terminated after 646 iterations. (c) Optimised design variables \(t_1\) and \(t_2\) resulting from the reference (FOM), intrusive and non-intrusive MOR models after 783 (FOM), 819 (non-intrusive) and 452 (intrusive) evaluations on average

Multiple aspects of intrusive and non-intrusive MOR are still challenging and require further analysis. For the non-intrusive scheme, the system of equations is exchanged for a purely data-driven regressor. In general, a regression model can suffer from over-fitting or sensitive hyper-parameters. Moreover, the amount of training simulations has a large impact on the quality of the subspace. Both points can lead to an unstable model, which was, however, not observed to be a major challenge within this study. Following the nature of a data-driven approach, non-intrusive MOR can only be applied for online simulations interpolating between the parameter configurations of the training simulations.

As shown in Sect. 5.2, the intrusive MOR scheme also enables extrapolation, as a higher amount of physical knowledge is included in the model. However, one can notice that the optimised design variables deviate from the reference solution by up to \(7\%\), which can be critical for certain applications. Since an optimisation requires large parameter variations, a global POD was used to combine training simulations within the parameter space. During the performed analysis one could observe that the dimension of the subspace must be increased if the design space is enlarged. Thus, the global POD can be critical as it destroys the optimal approximation property of the Galerkin projection. A detailed discussion about Galerkin projection and optimality can be found in Carlberg et al. (2017). In addition, if the underlying manifold is non-linear, it is problematic to fit a low-dimensional hyper-plane through the data with a linear method.

The results show that especially the non-intrusive MOR is suitable for applications with considerable parameter changes, although extrapolation should be avoided. In contrast, the intrusive ROM shows slight extrapolation and interpolation capabilities (see Sect. 5.2); however, its usability for optimisation has to be further investigated. In summary, the application of intrusive and non-intrusive MOR leads to reasonable results for the presented optimisation study, but can be critical for large parameter spaces with underlying highly non-linear manifolds.

7 Conclusion and outlook

Within this contribution, data-driven model order reduction (MOR) techniques for structural transient, multi-query applications were discussed. We compared intrusive MOR, which projects the system of equations into a reduced subspace, with a non-intrusive approach, a purely data-driven technique. First, similarities and differences of the implemented techniques were explained. However, the main interest lies in their applicability and efficiency within optimisation schemes for crashworthiness.

Multiple studies showed that projection-based MOR can provide suitable surrogate models for crashworthiness analysis, which reduce the computational effort while maintaining acceptable accuracy. In addition to the Galerkin projection for intrusive MOR, hyper-reduction overcomes the bottleneck posed by the non-linear terms.

Through a sensitivity study, the inter- and extrapolation abilities of intrusive and non-intrusive MOR were compared for a crashworthiness example. In comparison to the non-intrusive MOR methods, intrusive MOR is able to extrapolate input parameters in a small range and needs fewer training simulations. However, additional modifications to the solver are needed, which increases the implementation complexity and normally excludes the usage of commercial FEM solvers. In contrast, non-intrusive methods are easier to implement and lead to much faster online evaluations, but they generally need more training simulations to achieve acceptable accuracy and are more sensitive to hyper-parameter changes.

Moreover, a parameter study illustrated the performance of non-intrusive approaches in relation to their training set, whereby the performance strongly depends on the underlying non-linear manifold. In general, non-intrusive ROMs require a large amount of training data, and their efficiency is only assured if the number of training simulations does not exceed the number of online evaluations of the multi-query analysis.

With an exemplary optimisation study we could show that the non-intrusive MOR scheme is able to perform efficient multi-query analyses for structural, highly non-linear problems. However, when increasing the number of design parameters, especially the presented intrusive scheme reaches its limits. To increase the accuracy and robustness of the ROM while maintaining a low dimension of the reduced space, non-linear dimensionality reduction methods such as kernel principal component analysis (kPCA) or basis interpolation methods (Amsallem and Farhat 2008) may be used. Moreover, sampling methods are of high importance for non-intrusive schemes and should be further investigated to assure the quality of the snapshot matrix. In the field of crashworthiness only smaller test cases have been conducted so far; the application to a full-scale crash simulation is still missing.