Automatised selection of load paths to construct reduced-order models in computational damage micromechanics: from dissipation-driven random selection to Bayesian optimization
Abstract
In this paper, we present new reliable model order reduction strategies for computational micromechanics. The difficulties lie mainly in the high dimensionality of the parameter space represented by any load path applied onto the representative volume element. We take special care of the challenge of selecting an exhaustive snapshot set. This is treated by first using a random sampling of energy-dissipating load paths and then, in a more advanced way, using Bayesian optimization associated with an interlocked division of the parameter space. Results show that we can ensure the selection of an exhaustive snapshot set from which a reliable reduced-order model can be built.
Keywords
Model order reduction · Computational homogenisation · Reduced basis · Hyper-reduction · Damage mechanics · Multiscale

1 Introduction
RVE problems were traditionally solved approximately using analytical or semi-analytical approaches [2, 3, 9, 10, 11]. In the last 20 years, computational homogenisation has emerged as an interesting alternative approach [12, 13, 14, 15, 16], whereby the RVE problem is solved using direct numerical simulation. In linear elasticity, the homogenised constitutive relation can be precomputed by performing a small set of material tests. The results of these tests are then assembled in the form of a homogenised Hooke tensor that can be readily used at the coarse scale. In a nonlinear setting, a “naive” implementation of computational homogenisation requires solving the RVE problem at every (quadrature) point of the macroscopic domain, which, although attractive due to its generality, may render the approach prohibitively expensive. A considerable amount of recent work aims at providing an answer to this dilemma. On the one hand, the community that relied heavily on semi-analytical approaches to solve RVE problems has developed methods to circumvent the limitations due to the restrictive assumptions upon which these approaches were traditionally based, at the cost of increased computational requirements. The (non-)uniform transformation field analysis [17, 18, 19] (see also [20, 21]) and the Voronoi cell approach developed in [22] are remarkable instances of such developments. On the other hand, the community that relied primarily on computational homogenisation methods has tried to reduce the amount of RVE computations by using metamodelling, often called mesomodelling in this context. Such developments include the R3M [23, 24] and the method developed in [25], which both rely on a combination of a proper orthogonal decomposition (POD) expansion [26, 27] for the solution field and a response surface approach to interpolate the coefficients of this expansion over the space of admissible loading conditions.
Our proposed approach is a further step in this direction, which bypasses the need for the response surface step and replaces it by reduced-order modelling (ROM).
Projection-based ROM is an increasingly popular technique for the fast solution of parametrised boundary-value problems. The key idea is to represent the parametric variations of the solution in a low-dimensional subspace. This subspace can be identified using the snapshot-POD [28, 29, 30, 31, 32, 33, 34, 35, 36], which compresses the posterior information contained in an exhaustive sampling of the parameter domain, or the reduced basis method [37, 38, 39, 40, 41], which searches for this attractive subspace in the form of a linear combination of samples chosen quasi-optimally via a greedy algorithm (“offline stage”). In a second stage, the boundary value problem is projected into this subspace, for instance by a Galerkin method, resulting in a reduced model whose number of unknowns equals the dimension of the attractive subspace. This reduced model is used to deliver an approximation of the solution to the parametric BVP for any set of parameters, and as such can be seen as an implicit interpolation method over the parameter domain (“online stage”). Early contributions concerning this type of method have shown an increased accuracy compared to traditional response surface methods, for a given sampling of the parameter domain. Perhaps more importantly, these methods are based on approximation theories, and therefore “naturally” incorporate reliability estimates (e.g., [29, 35, 37, 40]).
In this paper, we propose to reformulate the nonlinear RVE problem as a parametrised boundary value problem, and subsequently to approximate it using projection-based ROM. Without loss of generality, we will consider an elastic damageable material represented by a network of damageable beams, with non-homogeneous material properties representing a random distribution of stiff inclusions in a softer matrix. The RVE problem will be parametrised by its far-field loading, represented by homogeneous Dirichlet conditions that belong to a vector space of dimension six (three in 2D), the time evolution of the coefficients of the associated linear combination being a priori unknown, which effectively results in a parametric space of infinite dimension for the RVE. Therefore, our aim is to characterise the solution of the RVE problem for any history of the far-field load, within the restriction of ellipticity (which implicitly defines the bounds of the parameter domain).
In a first attempt to approximate this parametrised solution, we will generate random loadings, enforcing a minimum amount of energy dissipation at each time step, and deploy the Galerkin-POD methodology to derive a reliable ROM. In a second, more advanced approach, we will develop a tailored reduced basis approach to sample the infinite-dimensional parameter space in a reliable and efficient manner. Our procedure relies on two major ingredients. Firstly, we will make use of an optimisation algorithm to find the points of the parameter space that need to be corrected during the iterates of the greedy algorithm. Although the gradient-based optimisation proposed in [39] is a promising strategy, we will make use of an alternative optimisation technique based on Bayesian optimisation [42]. More precisely, the load path of worst prediction will be found using a Gaussian process regression of an error indicator, following [43]. The second ingredient of our approach is to coarsen the a priori infinite-dimensional parameter space by applying the complex macroscopic load hierarchically, following a sequence of piecewise linear trajectories. Specifically, we will fully train a reduced basis method in a space of proportional loadings. We will then train a new reduced basis model in an enriched parameter domain, by representing the macroscopic load as a sequence of two piecewise linear loads, and further enrich our parameter domain in this hierarchical manner until a stagnation criterion is reached.
We will pay particular attention to the efficiency of the proposed strategy. In particular, projection-based ROM in the nonlinear setting is known to require an additional level of approximation to remain efficient, known as “hyper-reduction” or “system approximation” [31, 32, 33, 38, 44, 45, 46, 47]. We will make use of a tailored version of the discrete empirical interpolation method (DEIM) [38, 46], which is, to date, the most widely used system approximation methodology. The original DEIM will be slightly modified to allow for the approximation of a vanishing nonlinear term in the balance equations of the discrete RVE problem. We will also propose a way to choose a good ratio between the levels of approximation used in the truncation of the attractive subspace and in the system approximation.
The paper is organised as follows. In Sect. 2, we define the class of nonlinear homogenisation problems that we want to reduce, and explain how these problems can be parametrised. In Sect. 3, we develop specific model order reduction approaches based on the snapshot-POD and the reduced basis methodologies. We highlight the pros and cons of these two distinct approaches in the context of nonlinear homogenisation, and show results for each method. Conclusions are drawn in Sect. 4.
2 Computational homogenisation setting
2.1 RVE boundary value problem
2.2 Scale coupling
2.3 Space discretisation and Newton solution algorithm
2.4 Parametrised RVE problem: description of the macroscopic load
In a FE\(^2\) setting, the RVE problem is solved independently for every quadrature point of the macroscopic mesh. In order to apply our ROM technique, we recast the RVE problem as a family of boundary value problems subject to parameter dependency.
The parameters are the three independent components of the far-field load tensor \(\varvec{\epsilon }^M\) (\(\epsilon ^M_{xx},\,\epsilon ^M_{yy}\) and \(\epsilon ^M_{xy}\)). Physically, they correspond to scalar descriptors of the loading history applied to the macroscopic material point. We emphasise the fact that these parameters are three functions of time, which is not a classical setting for model order reduction. This high (theoretically infinite) dimensionality is a challenge. Some realisations of the loading functions are depicted in Fig. 4.
The next step is to define the parameter domain, or in other words the space in which the three load functions can vary freely. This seems to be a largely problemdependent issue, and we will focus the discussion on the class of rateindependent, damageable elastic materials. In this case, the first remark is that homogenisation loses its meaning once ellipticity is lost at the macroscopic level. Therefore, bounds are implicitly and collectively defined on the values of the loading functions by enforcing that the macroscopic tangent should remain positive definite. A second remark is that the speed at which the load is applied has no influence on the RVE solution; only the load path matters, which eliminates the need to describe loads that would be applied at different speeds but would essentially result in the same path.
Note that in this time-discrete setting, the number of parameters is two\(^{2}\) times the number of pseudo-time steps \(n_t,\) which highlights the high dimensionality of the problem.
3 Reduction of the RVE boundary value problem

How can we identify the reduced space?

How can we find the generalised coordinates in an efficient and stable manner?

How can we evaluate the reliability of the approach?
The answers to the first and third questions are strongly intertwined, and we describe in the following paragraphs two different manners to approach the problem.
A POD-based approach looks for the best reduced space, in the sense of the minimisation of the projection error on average over the parameter domain. In practice, this optimisation problem is reduced to a problem of minimum projection error over a representative set of solutions to the parametrised problem, the so-called snapshots [26]. In the case of large parametric dimensions, the sampling of the parameter domain needs to be done in such a way that it overcomes the “curse of dimensionality”, for instance by using quasi-random sampling techniques. The reliability of the approach can then be evaluated by resampling (cross-validation, bootstrap, \(\ldots \)) or other statistical tools. This approach suffers from two major drawbacks. Firstly, the optimality of the reduced space is established in an average sense over the parameter domain, which potentially results in inaccurate representation of outliers even for large dimensions of the reduced model. Secondly, the exhaustive sampling of the parameter domain might be prohibitively expensive, and is, in any case, inefficient if performed in a (statistically) uniform manner. The interested reader can find possible ways to tackle this difficulty in [50]. Nonetheless, the POD-based methodology remains attractive because the optimisation problem associated with the search of the reduced space can be solved using standard linear algebra tools, namely the singular value decomposition.
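The SVD step just mentioned can be sketched in a few lines. The snapshot matrix, the energy-based truncation rule and the tolerance below are illustrative choices, not the paper's exact settings:

```python
import numpy as np

def snapshot_pod(snapshots, tol=1e-6):
    """Reduced basis from a snapshot matrix via truncated SVD.

    snapshots: (n_dof, n_snapshots) array whose columns are sampled solutions.
    tol: fraction of the snapshot "energy" allowed to be truncated.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    n = int(np.searchsorted(energy, 1.0 - tol)) + 1  # smallest n capturing 1 - tol
    return U[:, :n]

# toy usage: snapshots drawn from a 3-dimensional subspace of R^100
rng = np.random.default_rng(0)
S = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 20))
Phi = snapshot_pod(S)  # Phi has 3 orthonormal columns
```

The left singular vectors returned here are exactly the POD modes: orthonormal, and ordered by their contribution to the average projection error over the snapshot set.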
The reduced basis [37] methodology aims at minimising the maximum projection error over the parameter domain. In practice, this is performed in a suboptimal manner using a greedy algorithm: the ROM is constructed iteratively by enriching the reduced space in order to decrease the error at the point of the parameter domain where some measure of projection error is at its largest. When reliable error estimates are available for the projection, the search for the highest level of error over the parameter domain is very efficient, which makes the approach very attractive. The sampling of the parameter domain is performed in a rational manner, which ensures that the construction of the ROM remains affordable. When error estimates are not available, the approach remains attractive in the context of large parametric dimensions. Indeed, the point of the parameter domain that corresponds to the largest level of projection error can be found using gradientbased optimization, whose numerical complexity may be made independent of the parametric dimension by using the adjoint methodology [39] to compute the sensitivities. In this setting, the “curse” of dimensionality can be overcome whilst retaining reliability of the ROM over the entire parameter domain.^{4}
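A schematic version of such a greedy loop is sketched below. This is a generic illustration, not the paper's algorithm: the exact projection error is used here as a stand-in for the error estimate, and the toy problem is an assumption:

```python
import numpy as np

def greedy_reduced_basis(solve_full, indicator, candidates, n_max=20, tol=1e-10):
    """Greedy enrichment: repeatedly correct the worst-approximated parameter.

    solve_full(mu)       -> full-order solution vector at parameter mu
    indicator(mu, basis) -> scalar surrogate for the ROM error at mu
    candidates           -> discrete training set of parameter values
    """
    basis = np.zeros((solve_full(candidates[0]).size, 0))
    for _ in range(n_max):
        errors = [indicator(mu, basis) for mu in candidates]
        worst = int(np.argmax(errors))
        if errors[worst] < tol:
            break
        u = solve_full(candidates[worst])
        u = u - basis @ (basis.T @ u)        # Gram-Schmidt against current basis
        if np.linalg.norm(u) > 1e-12:
            basis = np.column_stack([basis, u / np.linalg.norm(u)])
    return basis

# toy usage: solutions depend affinely on mu, hence span a 2-dimensional space
def solve_full(mu):
    u = np.zeros(50); u[0] = mu; u[1] = 1.0
    return u

proj_err = lambda mu, B: np.linalg.norm(solve_full(mu) - B @ (B.T @ solve_full(mu)))
basis = greedy_reduced_basis(solve_full, proj_err, [0.0, 1.0, 2.0, 3.0])
```

On this toy problem the loop stops after two enrichments, since the solution manifold is two-dimensional; in practice the full solve is expensive, which is precisely why a cheap indicator must replace `proj_err`.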
In the remainder of this section, we explore these two different possibilities for the reduction of the nonlinear RVE problem. We first propose a snapshot-POD approach, where the sampling is performed randomly, enforcing the random samples to undergo a minimum dissipation at each time step. In a second stage, we will develop a reduced basis approach for general loadings, which takes the error of the reduced model into account not only at the snapshots, but also between the snapshots, thanks to a Gaussian process regression. We will propose specific ideas to overcome the “curse of dimensionality”.
3.1 Galerkin projection of the governing equations in a reduced space
3.1.1 Snapshot POD
3.1.2 System approximation
Constraining the displacement to a low-dimensional space does not, by itself, provide a significant computational gain, even if the systems to be solved are of smaller dimension. This is because the material of study is nonlinear and history-dependent, and its stiffness varies not only in different areas of the material but also with time. This requires evaluating the stiffness everywhere in the material, at each time step of the simulation, so that the numerical complexity remains despite the simplification on the displacement. Hence, to decrease the numerical complexity, the domain itself needs to be approximated. Several authors have looked into this. Notable contributions include the hyper-reduction method [45], the missing point estimation [44], system approximation [32], the DEIM [51] and, more recently, the energy-conserving sampling and weighting method [47]. Those methods share the idea that the material properties are evaluated only at a small set of points or elements within the material domain (Fig. 6). They differ in the way of selecting those points and in the treatment of that reduced information. In this paper, we will use the “gappy” method [52], very much like in [32, 51].
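The greedy index selection at the heart of the DEIM can be sketched as follows (the standard version; the paper's modified variant for vanishing nonlinear terms is not reproduced here):

```python
import numpy as np

def deim_indices(Psi):
    """Greedy selection of interpolation indices from a basis Psi (DEIM).

    Psi: (n_dof, m) basis of the nonlinear term (here, the internal forces).
    Returns m row indices at which the nonlinear term is actually evaluated.
    """
    idx = [int(np.argmax(np.abs(Psi[:, 0])))]
    for j in range(1, Psi.shape[1]):
        # interpolate the j-th mode from the indices selected so far...
        c = np.linalg.solve(Psi[idx, :j], Psi[idx, j])
        residual = Psi[:, j] - Psi[:, :j] @ c
        # ...and pick the row where the interpolation error is largest
        idx.append(int(np.argmax(np.abs(residual))))
    return idx

# toy usage: an orthonormal basis of 4 modes over 30 "degrees of freedom"
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((30, 4)))
points = deim_indices(Q)  # 4 distinct interpolation indices
```

Each new index is chosen where the current modes are worst interpolated, which keeps the small interpolation system well-conditioned; the selected rows then play the role of the "control points" discussed above.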
Remark
Note that once the “offline” stage operations are done, the bases \(\mathbf {\varvec{\Phi }}\) and \(\mathbf {\varvec{\Psi }}\) have been calculated, the set of control points \({\mathcal {I}}\) has been selected and the gappy operator has been evaluated. In the “online” stage, all that remains to do is to build a system of dimension equal to the size of the displacement basis and solve it, which is computationally much cheaper. In particular, the evaluation of \(\mathbf {K}\) will be substituted by the evaluation of \(\mathbf {E}^T \mathbf {K},\) which allows great time savings.
Selection of the controlled elements: the selection of the controlled elements will be done using the DEIM [51]. This method finds, in a greedy manner, a set of degrees of freedom \({\mathcal {I}}\) from the internal forces basis \(\mathbf {\varvec{\Psi }}.\) We briefly describe the method.
3.2 A first “brute force” model reduction approach using snapshot-POD on a randomly generated snapshot ensuring dissipation
In this section, we present the construction of a reduced model based on a random selection of the snapshot, constraining the random load paths to dissipate some energy in the structure at each time step. This is done to ensure the variability of the load paths, so that maximum knowledge can be gained from the snapshot. In the following, we show the method used to approximate the generation of such snapshots.
3.2.1 Random sampling of the parameter domain
To ensure load paths that do not “turn back on themselves”, we enforce that they dissipate some energy in the structure at each time step. The idea is that if no energy is dissipated, the structure deforms in an elastic manner, which does not add to the complexity of the snapshot space and is not informative. We want the snapshot to be as varied as possible, so that the reduced basis built from it can be exhaustive (in the sense that it is able to represent any solution resulting from any load path with a controlled error). Note that one could omit the dissipation constraint on the random load paths, but one would then have to generate a much larger snapshot set for it to statistically extend to the edges of the parameter space. Forcing dissipation saves computational time by computing only the most “informative” solutions.
To generate snapshots following this dissipation property, we will divide the load paths into increments, and enforce that at each increment, the maximum value of the load path history is increased in either tension in \(x\), tension in \(y\), or shear. This is an approximation, since it is not strictly equivalent to dissipating energy. However, this constraint is explicit, easy to implement, and provides essentially the extended snapshots we are looking for.
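A minimal sketch of such a generator is given below. The increment size, the random perturbation of the remaining components and the seed are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def random_dissipative_path(n_steps, d=0.01, seed=0):
    """Random load path (eps_xx, eps_yy, eps_xy): at each increment, one
    component is pushed beyond its historical maximum -- the explicit
    surrogate used for "some energy is dissipated at each time step".
    """
    rng = np.random.default_rng(seed)
    path = np.zeros((n_steps + 1, 3))
    for k in range(1, n_steps + 1):
        # small random drift of all components...
        path[k] = path[k - 1] + 0.5 * rng.uniform(-d, d, size=3)
        # ...then force one component (x, y or shear) past its running maximum
        comp = rng.integers(3)
        path[k, comp] = path[:k, comp].max() + d
    return path

path = random_dissipative_path(100)
```

By construction, every increment strictly exceeds the historical maximum of at least one load component, so the generated path never "turns back on itself" in all three directions at once.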
3.2.2 Application of the random snapshotPOD procedure and numerical findings
System approximation: we follow the procedure described in 3.1.2. The basis \(\mathbf {\varvec{\Psi }}\) is extracted from the snapshot space generated by the same loading paths used for the displacement basis \(\mathbf {\varvec{\Phi }}.\) The set of controlled elements is selected using the DEIM [51]. The number of vectors in the basis \(\mathbf {\varvec{\Psi }}\) is chosen so that the error generated by the system approximation is of the same order as the global error of the ROM.
Remark
Note that in Eq. (31), we defined the quantity \(\widetilde{{\mathcal {Q}}^{{\text {HR}}}(\mu )},\) which measures the error between the reduced and the hyper-reduced model; this differs from \({\mathcal {Q}}^{{\text {HR}}}(\mu ),\) which measures the error between the exact solution and the hyper-reduced model.
Numerical savings: in this section, we test the performance of the method by comparing the relative error between the “truth” solution of the RVE problem, which is the solution obtained when using the full order model, and the ROM.
The load path considered for testing the efficiency of the model is set using the following effective strain: \({\varvec{\epsilon }}^{\mathbf{M}}(t) = \frac{t}{T} \cdot \begin{bmatrix} 1&1\\ 1&1 \end{bmatrix}.\) Note that this case is not in the snapshot set.

As expected, the error decreases when the number of either the displacement or static bases vectors increases. A higher dimensional representation of the solution leads unsurprisingly to more accuracy.

The speed-up provided by the reduced model becomes more and more significant as the number of vectors in the bases decreases.

Looking at Fig. 11b, it can be seen that the speedup is roughly dependent on the size of the static bases, rather than on the displacement basis. Indeed, the number of controlled elements, which is linked to the amount of computations to be done, is directly linked to the dimension of the static basis \({\varvec{\Psi }}.\)

To have a well-defined ROM, the dimension of the static basis \({\varvec{\Psi }}\) should at least match the dimension of the displacement basis \({\varvec{\Phi }}.\) However, it can be seen that, to achieve a reasonable tolerance on the error, the dimension of the static basis should actually be significantly larger.
We will deal with this issue in the next section by using a Bayesian-optimized snapshot selection, which will allow us to estimate the error of the reduced model between the discrete solutions computed for the snapshot set.
3.3 Model reduction using a POD-greedy algorithm based on a Bayesian-optimized snapshot selection designed for high-dimensional parameter spaces
As explained in the previous section, it may not be satisfactory to use an arbitrary sampling method, since some important information could be unwittingly dropped. The accuracy of the reduced model greatly depends on the snapshot space and on how well it samples the parameter space. Here, the parameter space contains any load path (based on the macro-strain \({\varvec{\epsilon ^M}}(t)\)) over a certain period of time, until ellipticity of the mechanical problem is lost. After time discretisation, the parameter space is of dimension \(2 \times n_{{\text {t}}},\) since in two dimensions the load can be uniaxial in the \(x\) or \(y\) direction or in shear, and we set a fixed load increment norm between two time steps. \(n_{{\text {t}}}\) stands for the number of time steps required to reach fracture.

First, the high-dimensional parameter space \(\mathcal {P}\) is restricted to a hierarchical sequence of much lower dimensional pseudo-parameter spaces \(\widehat{{\mathcal {P}}}^n,\) which enables us to avoid the “curse of dimensionality”. Starting from a pseudo-parameter space \(\widehat{{\mathcal {P}}}^0\) containing proportional loadings only, the sequence is iteratively refined until reaching some “convergence”. This approach is described in Sect. 3.3.1.

Second, within each pseudo-parameter space \(\widehat{\mathcal {P}}^n,\) rather than the random and fine sampling typically used in traditional POD-greedy approaches, an effective selection procedure requiring only few evaluations of an error indicator is carried out using a Gaussian process predictor. This strategy is explained in Sect. 3.3.2.

A statistical correspondence between the error indicator and the true error is built using Gaussian process regression to control the convergence of the procedure. This is described in Sect. 3.3.3.
3.3.1 Definition of a sequence of surrogate parameter spaces of low dimension
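One way to realise such a hierarchy is to describe each macroscopic load as a piecewise-linear trajectory in strain space, adding one vertex (hence one linear segment) per refinement level. The sampling below is an illustrative sketch of this idea, not the paper's exact parametrisation:

```python
import numpy as np

def piecewise_linear_load(vertices, n_steps):
    """Sample a load path made of straight segments through the given
    strain-space vertices (rows: (eps_xx, eps_yy, eps_xy))."""
    vertices = np.asarray(vertices, dtype=float)
    t = np.linspace(0.0, 1.0, n_steps)
    knots = np.linspace(0.0, 1.0, len(vertices))
    return np.column_stack(
        [np.interp(t, knots, vertices[:, i]) for i in range(3)])

# level 0: proportional loading (one segment); level 1: two segments, etc.
level0 = piecewise_linear_load([[0, 0, 0], [1, 0, 0]], 5)
level1 = piecewise_linear_load([[0, 0, 0], [1, 0, 0], [1, 1, 0]], 5)
```

Each refinement level only adds the coordinates of one new vertex to the pseudo-parameter vector, so the dimension of \(\widehat{{\mathcal {P}}}^n\) grows linearly with \(n\) instead of with the number of time steps.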
3.3.2 Exhaustive sampling of the surrogate parameter spaces using a Gaussian process predictor
In this section, given a dimension for the surrogate parameter space, we are looking for the value of the parameter leading to the highest error between the exact solution and the solution computed using our reduced model.
In practice, error bounds \(\Delta ^k(\mu )\) are available for linear problems. In the general nonlinear case, no sharp error bound is available, and one has to rely on an error indicator at the parameter value \(\mu \) instead: \({\mathcal {J}}(\mu ).\) This error indicator does not provide a bound on the error, but rather a measure of its magnitude. Though less expensive than computing the exact solution at \(\mu ,\) the evaluation of this indicator at all values of a fine discretisation of the parameter space is, as we will see in the next section, not affordable. To alleviate this issue, the error indicator surface over the parameter space will be approximated using a Gaussian process predictor, so that only a few, well chosen, evaluations of the indicator are required.
Remark
Note that the residual \({\mathcal {R}}\) will, in general, not vanish. Indeed, the reduced model only enforces the cancellation, at each time step, of the projected residual \({\varvec{\Phi }}^T \mathbf {G} \mathbf {E}^T{\mathbf {f}}_{{\text {int}}}(\mathbf {\bar{u}}(t_k;\,V{\varvec{\epsilon }^M}) + {\varvec{\Phi }} {\varvec{\alpha }}(t_k;\,\mathbf {\varvec{\epsilon }^M})),\) using Newton iterations.
Gaussian process regression of the residual surface for efficient evaluation of the error: in traditional POD-greedy procedures, a discrete set \(\Xi \subset {\mathcal {P}}^n\) is built arbitrarily to sample the parameter space. It is typically very fine. The goal of this section is to define a set \(\Xi \) of relatively small cardinality, chosen so that it is likely to contain the values of the parameter leading to the highest error. To this purpose, we follow a procedure similar to the one described in [42, 43].
The first ingredient is Gaussian process regression [53] (also called kriging in the literature): starting from an initial set \(\Xi _0,\) chosen randomly and containing few values of the parameter, and the associated set of values of the error indicator \(\{{\mathcal {J}}(\mu _i) \,|\, \mu _i \in \Xi _0 \},\) a Gaussian process regression approximating the error indicator \({\mathcal {J}}\) with a confidence interval over the entire parameter domain \({\mathcal {P}}^n\) will be constructed at each step \(m\) of the sampling process. This regression will be used to iteratively enrich \(\Xi _m\) with the values of the parameter \({\varvec{\mu }}_m\) where the probability of having large values of the error indicator \({\mathcal {J}}\) is the highest.
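This enrichment loop can be sketched with a minimal RBF-kernel kriging predictor. The kernel, its hyperparameters, the one-standard-deviation acquisition rule and the toy indicator below are illustrative assumptions, not the exact choices of [42, 43]:

```python
import numpy as np

def gp_predict(X, y, Xs, ell=0.1, sig2=1.0, nugget=1e-10):
    """Minimal 1D Gaussian process (kriging) regression with an RBF kernel."""
    k = lambda a, b: sig2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)
    K = k(X, X) + nugget * np.eye(len(X))
    Ks = k(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = sig2 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, np.sqrt(np.maximum(var, 0.0))

def next_parameter(X, y, grid):
    """Pick the point where a large indicator value is most probable
    (mean plus one standard deviation of the GP prediction)."""
    mean, std = gp_predict(np.asarray(X), np.asarray(y), grid)
    return float(grid[int(np.argmax(mean + std))])

# toy usage: enrich the sample where the surrogate of J(mu) = sin(5 mu) peaks
J = lambda mu: np.sin(5.0 * mu)
X = [0.0, 1.0]
grid = np.linspace(0.0, 1.0, 201)
mu_next = next_parameter(X, [J(x) for x in X], grid)
```

After each enrichment, the indicator is evaluated at the new point, the regression is refitted, and the loop continues; the confidence term steers the search towards unexplored regions while the mean term exploits regions already known to have a large indicator.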
3.3.3 Construction of a Gaussian regression between the exact error and the error estimator to monitor the convergence of the procedure
3.3.4 Optimal choice of the size of the reduced spaces to achieve a userdefined tolerance
One important matter, once a new solution has been computed together with a singular value decomposition of its error on the current reduced basis \({\varvec{\Phi }},\) is to choose how many basis vectors \(({{\varvec{\phi }}_{{\text {add}}}}_i)_{i = 1,\ldots ,n_{{\text {add}}}}\) should be concatenated to the basis \({\varvec{\Phi }},\) so that the ROM achieves some user-defined target tolerance \(\epsilon .\) The same question arises for the number of basis vectors in \({\varvec{\Psi }}\) representing the internal forces for the system approximation.
We choose to tackle this issue in two stages, by first making sure the size of the displacement reduced basis \({\varvec{\Phi }}\) is large enough for \({\mathcal {Q}}^{{\text {R}}}\) to achieve a certain fraction of the tolerance \(\epsilon \), and then choosing the dimension of \({\varvec{\Psi }}\) to achieve the tolerance, as well as ensuring a monotonic decrease of the reduced and hyper-reduced residuals \({\mathcal {J}}^{{\text {R}}}\) and \({\mathcal {J}}^{{\text {HR}}}.\)
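A possible realisation of this two-stage sizing, based on the singular values of the displacement and internal-force snapshots, is sketched below. The energy-based truncation rule and the 50/50 split of the tolerance are assumptions made for illustration:

```python
import numpy as np

def choose_basis_sizes(s_phi, s_psi, eps, frac=0.5):
    """Sizes of Phi and Psi from snapshot singular values.

    s_phi, s_psi: singular values of the displacement / internal-force snapshots.
    eps: user-defined tolerance; frac: share of eps granted to the projection.
    """
    def size_for(s, target):
        # relative "energy" left out when keeping the first n modes
        tail = np.sqrt(np.cumsum((s ** 2)[::-1])[::-1] / np.sum(s ** 2))
        keep = np.where(tail <= target)[0]
        return int(keep[0]) if keep.size else len(s)
    n_phi = size_for(np.asarray(s_phi, float), frac * eps)
    # Psi must be at least as rich as Phi for the hyper-reduced system
    n_psi = max(size_for(np.asarray(s_psi, float), (1.0 - frac) * eps), n_phi)
    return n_phi, n_psi

n_phi, n_psi = choose_basis_sizes([1.0, 0.1, 1e-3], [1.0, 0.2, 1e-2, 1e-4], 1e-2)
```

The internal-force spectrum typically decays more slowly than the displacement spectrum, which is consistent with the observation above that \({\varvec{\Psi }}\) needs to be larger than \({\varvec{\Phi }}\) in practice.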
The procedure starts by computing an initial solution, chosen for an arbitrary value of the parameter \(\mu ,\) as well as its residuals \({\mathcal {J}}^{{\text {R}}}_{ini},\,{\mathcal {J}}^{{\text {HR}}}_{ini}.\) These two residuals will be used as initial residual tolerances.
Determining the size of the displacement reduced basis \({\varvec{\Phi }}\): assume we are at step \(k\) of the greedy algorithm. We denote the displacement basis \({\varvec{\Phi }}^k.\) The snapshot was enriched with a new exact solution whose projection error with the current reduced basis \({\varvec{\Phi }}^k\) was decomposed into a POD expansion \({\varvec{\Phi }}_{{\text {add}}}\), i.e., \(e_{{\text {proj}}} \simeq \sum _i^{n_{{\text {add}}}} \alpha _i {{\varvec{\phi }}_{{\text {add}}}}_i.\)
The residual tolerance is updated as \(\nu ^{{\text {R}}}_{{\text {current}}} = {\gamma ^{{\text {R}}}_\nu }\cdot \nu ^{{\text {R}}}_{{\text {current}}},\) with \({\gamma ^{{\text {R}}}_\nu }<1.\) The condition on the residual ensures its decrease throughout the procedure. This is important since the indicator quantity \({\mathcal {J}}^{{\text {R}}}\) (which influences \({\mathcal {J}}^{{\text {HR}}}\)) drives the procedure, the exact error being used for the stopping criterion only (through the Gaussian process regression between the error indicator and the actual error). The value of \(\nu ^{{\text {R}}}_{{\text {current}}}\) is initialized with the residual of the initial ROM, whose size is chosen minimal, typically of dimension 1 to start with.
Note that this step is quite expensive, as it requires evaluating the reduced solution several times without hyper-reduction. It could be made cheaper by substituting the evaluations of the non-hyper-reduced ROM with a finely hyper-reduced ROM (i.e., one with a high-dimensional basis \({\varvec{\Psi }}\)), which would be cheaper to evaluate. However, the construction of the hyper-reduced ROM is expensive in itself, since it requires evaluations of its non-hyper-reduced counterpart to build the snapshot needed for the internal forces basis \({\varvec{\Psi }}.\) A trade-off would have to be found. In our case, we keep the strategy as it is, keeping in mind that, although computationally intensive, this procedure is performed offline.
Remark
Note that this step is not computationally expensive since it only requires evaluations of the hyperreduced model.
3.3.5 Application of the Bayesian PODgreedy algorithm
We now proceed to apply the POD-greedy Algorithms 2–4 on the RVE problem described in Sect. 2. We define the target tolerance \(\epsilon = 10^{-3},\,\gamma ^{{\text {R}}}_Q = \frac{1}{2},\,\gamma ^{{\text {R}}}_\nu = \frac{1}{2}\) and \(\gamma ^{{\text {HR}}}_\nu = 0.9.\) We proceed to build a ROM achieving tolerance \(\epsilon \) on the successive pseudo-parameter spaces \(\widehat{{\mathcal {P}}}\) of dimensions 2, 5 and 8. The very initial parameter value is the proportional loading of equal value in all directions (that is, in \(\epsilon _{xx},\,\epsilon _{yy}\) and \(\epsilon _{xy}\)). Results are displayed in Fig. 17.
After achieving the tolerance for snapshots in the initial pseudo-parameter space of dimension 2, the pessimistic value of the error (up to one standard error) \(\widehat{{\mathcal {Q}}^{+}}\) increases slightly when moving on to the space of dimension 5. This is not surprising, since the ROM was constructed to achieve the tolerance on the space of dimension 2 and does not represent the space of dimension 5 as well. However, this error increase is small and remains underneath the target tolerance \(\epsilon .\) Moving on to the space of dimension 8 leads to the same analysis. When considering the space of dimension 11, we can see that the error decreases. This means that even though the last computed solution belongs to a space of larger dimension than the spaces used to build the ROM (and is therefore the least well represented), it is correctly approximated. One can then argue that the current ROM is accurate enough to represent the solutions issued from parameter spaces of any dimension. Hence, there is no need to consider any finer spaces and the procedure can stop there.
4 Conclusion and perspectives

The sampling is done randomly, in a brute force manner, whilst enforcing that a certain increment of energy dissipation occurs at each time step of the discrete load history. The reduced space is found by using the POD.

The problem is solved using a POD-greedy reduced-basis method. To reduce the dimensionality of the RVE problem to tractable levels, the parameter space is substituted by a hierarchy of approximate spaces of small and increasing dimensions. A ROM is computed for each of these approximate spaces, using a POD-greedy training algorithm, in conjunction with a Bayesian-optimisation-based a posteriori error estimate.
Coming back to the context of multiscale modelling, we can question the necessity of computing reduced models of RVE problems in the space of arbitrary farfield loads. Indeed, in practical applications, only specific loadings may actually be applied to the RVE, making the pursuit of an exhaustive snapshot irrelevant. We are currently investigating the possibility of integrating some knowledge about potential macroscopic solutions in order to restrict the size of the parameter domain a priori.
Footnotes
 1.
The formulation is presented in a 2D context, but the same principles apply in 3D.
 2.
It is not 3, since the value of the load is fixed between two successive time steps.
 3.
 4.
This is arguable, as the gradient-based optimizer will converge to a local minimum in the parameter domain; see [39] for a more detailed discussion and a proposed remedy.
 5.
We work at a fully discrete level with vectors of degrees of freedom corresponding to continuous fields that belong to FE spaces, but we will refer to such quantities as “fields” or simply “displacements” to avoid unnecessarily complicating the explanations.
 6.
Note that in practice, \(n_t\) differs between load cases; here, we keep the notations simple.
 7.
To simplify the notations, we denote \({\mathbf {f}}_{{\text {int}}}({\bar{\mathbf {u}}} + {\varvec{\Phi }} {\varvec{\alpha }})\) by simply \({\mathbf {f}}_{{\text {int}}}({\varvec{\Phi }} {\varvec{\alpha }})\) in the remainder of this paper.
Acknowledgments
The authors gratefully acknowledge the financial support of the EPSRC High End Computing Studentship for Mr. Olivier Goury, as well as the support of the Schools of Engineering of Cardiff and Glasgow Universities. Pierre Kerfriden thanks EPSRC funding under Grant EP/J01947X/1, “Towards rationalised computational expense for simulating fracture over multiple scales”. Stéphane Bordas also thanks partial funding for his time provided by the European Research Council Starting Independent Research Grant (ERC Stg Grant Agreement No. 279578) entitled “Towards real-time multiscale simulation of cutting in nonlinear materials with applications to surgical simulation and computer-guided surgery”. Olivier Goury was also supported by the FP7 “Multifrac” Multiscale Methods for Fracture International Research Staff Exchange Scheme, which allowed him to be a Visitor at Northwestern University. Wing Kam Liu is supported by AFOSR Grant No. FA9550-14-1-0032.
References
 1. Sanchez-Palencia E (1980) Non-homogeneous media and vibration theory. Lecture notes in physics, vol 127. Springer, Berlin
 2. Suquet P (1987) Elements of homogenization for inelastic solid mechanics. In: Sanchez-Palencia E, Zaoui A (eds) Homogenization techniques for composite media. Springer, Berlin
 3. Nemat-Nasser S, Hori M (1999) Micromechanics: overall properties of heterogeneous materials, vol 2. Elsevier, Amsterdam
 4. Milton GW (2002) The theory of composites, vol 6. Cambridge University Press, Cambridge
 5. Fish J, Chen W (2001) Higher-order homogenization of initial/boundary-value problem. J Eng Mech 127(12):1223–1230
 6. Forest S, Pradel F, Sab K (2001) Asymptotic analysis of heterogeneous Cosserat media. Int J Solids Struct 38(26–27):4585–4608
 7. Allaire G (1992) Homogenisation and two-scale convergence. SIAM J Math Anal 23(6):1482–1518
 8. Buryachenko V (2007) Micromechanics of heterogeneous materials. Springer, New York
 9. Mori T, Tanaka K (1973) Average stress in matrix and average elastic energy of materials with misfitting inclusions. Acta Metall 21(5):571–574
 10. Willis JR (1977) Bounds and self-consistent estimates for the overall properties of anisotropic composites. J Mech Phys Solids 25(3):185–202
 11. Zohdi T, Feucht M, Gross D, Wriggers P (1998) A description of macroscopic damage through microstructural relaxation. Int J Numer Methods Eng 43(3):493–506
 12. Feyel F, Chaboche JL (2000) FE\(^2\) multiscale approach for modelling the elasto-viscoplastic behaviour of long fibre SiC/Ti composite materials. Comput Methods Appl Mech Eng 183(3–4):309–330
 13. Fish J, Shek K, Pandheeradi M, Shephard MS (1997) Computational plasticity for composite structures based on mathematical homogenization: theory and practice. Comput Methods Appl Mech Eng 148:53–73
 14. Miehe C (2002) Strain-driven homogenization of inelastic microstructures and composites based on an incremental variational formulation. Int J Numer Methods Eng 55(11):1285–1322
 15. Zohdi TJ, Wriggers P (2005) Introduction to computational micromechanics. Lecture notes in applied and computational mechanics, vol 20. Springer, Heidelberg
 16. Geers MGD, Kouznetsova VG, Brekelmans WAM (2010) Multiscale computational homogenization: trends and challenges. J Comput Appl Math 234(7):2175–2182
 17. Dvorak GJ (1992) Transformation field analysis of inelastic composite materials. Proc R Soc Lond A 437(1900):311–327
 18. Michel JC, Suquet P (2003) Nonuniform transformation field analysis. Int J Solids Struct 40(25):6937–6955
 19. Fritzen F, Böhlke T (2010) Three-dimensional finite element implementation of the nonuniform transformation field analysis. Int J Numer Methods Eng 84(7):803–829
 20. Oskay C, Fish J (2007) Eigendeformation-based reduced order homogenization for failure analysis of heterogeneous materials. Comput Methods Appl Mech Eng 196(7):1216–1243
 21. Fish J, Filonova V, Yuan Z (2013) Hybrid impotent-incompatible eigenstrain based homogenization. Int J Numer Methods Eng 95(1):1–32
 22. Ghosh S (2011) Micromechanical analysis and multiscale modeling using the Voronoi cell finite element method. CRC Press, Taylor & Francis Group, Boca Raton
 23. Yvonnet J, He QC (2007) The reduced model multiscale method (R3M) for the nonlinear homogenization of hyperelastic media at finite strains. J Comput Phys 223(1):341–368
 24. Monteiro E, Yvonnet J, He QC (2008) Computational homogenization for nonlinear conduction in heterogeneous materials using model reduction. Comput Mater Sci 42(4):704–712
 25. Breitkopf P, Xia L (2014) A reduced multiscale model for nonlinear structural topology optimization. Comput Methods Appl Mech Eng 280:117–134
 26. Sirovich L (1987) Turbulence and the dynamics of coherent structures. Part I: coherent structures. Q Appl Math 45:561–571
 27. Berkooz G, Holmes P, Lumley JL (1993) The proper orthogonal decomposition in the analysis of turbulent flows. Annu Rev Fluid Mech 25(1):539–575
 28. LeGresley PA, Alonso JJ (2001) Investigation of nonlinear projection for POD-based reduced order models for aerodynamics. AIAA Pap 926:2001
 29. Meyer M, Matthies HG (2003) Efficient model reduction in nonlinear dynamics using the Karhunen–Loève expansion and dual-weighted-residual methods. Comput Mech 31(1):179–191
 30. Kunisch K, Volkwein S (2003) Galerkin proper orthogonal decomposition methods for a general equation in fluid dynamics. SIAM J Numer Anal 40(2):492–515
 31. Kerfriden P, Gosselet P, Adhikari S, Bordas S (2011) Bridging proper orthogonal decomposition methods and augmented Newton–Krylov algorithms: an adaptive model order reduction for highly nonlinear mechanical problems. Comput Methods Appl Mech Eng 200(5–8):850–866
 32. Carlberg K, Bou-Mosleh C, Farhat C (2011) Efficient non-linear model reduction via a least-squares Petrov–Galerkin projection and compressive tensor approximations. Int J Numer Methods Eng 86(2):155–181
 33. Kerfriden P, Goury O, Rabczuk T (2012) A partitioned model order reduction approach to rationalise computational expenses in nonlinear fracture mechanics. Comput Methods Appl Mech Eng 256:169–188
 34. Radermacher A, Reese S (2014) Model reduction in elastoplasticity: proper orthogonal decomposition combined with adaptive substructuring. Comput Mech 54(3):677–687
 35. Kerfriden P, Ródenas JJ, Bordas SPA (2014) Certification of projection-based reduced order modelling in computational homogenisation by the constitutive relation error. Int J Numer Methods Eng 97(6):395–422
 36. Goury O (2015) Computational time savings in multiscale fracture mechanics using model order reduction. PhD thesis, Cardiff University
 37. Prud’homme C, Rovas DV, Veroy K, Machiels L, Maday Y, Patera AT, Turinici G (2002) Reliable real-time solution of parametrized partial differential equations: reduced-basis output bound methods. J Fluids Eng 124(1):70–80
 38. Barrault M, Maday Y, Nguyen NC, Patera AT (2004) An ‘empirical interpolation’ method: application to efficient reduced-basis discretization of partial differential equations. C R Math 339(9):667–672
 39. Bui-Thanh T, Willcox K, Ghattas O (2008) Model reduction for large-scale systems with high-dimensional parametric input space. SIAM J Sci Comput 30(6):3270–3288
 40. Quarteroni A, Rozza G, Manzoni A (2011) Certified reduced basis approximation for parametrized partial differential equations and applications. J Math Ind 1(3):1–44
 41. Constantine PG, Wang Q (2012) Residual minimizing model interpolation for parameterized nonlinear dynamical systems. SIAM J Sci Comput 34:118–144
 42. Jones DR (2001) A taxonomy of global optimization methods based on response surfaces. J Glob Optim 21(4):345–383
 43. Paul-Dubois-Taine A, Amsallem D (2015) An adaptive and efficient greedy procedure for the optimal training of parametric reduced-order models. Int J Numer Methods Eng 102(5):1262–1292
 44. Astrid P, Weiland S, Willcox K, Backx ACPM (2008) Missing point estimation in models described by proper orthogonal decomposition. IEEE Trans Autom Control 53(10):2237–2251
 45. Ryckelynck D (2008) Hyper-reduction of mechanical models involving internal variables. Int J Numer Methods Eng 77(1):75–89
 46. Chaturantabut S, Sorensen DC (2010) Nonlinear model reduction via discrete empirical interpolation. SIAM J Sci Comput 32:2737–2764
 47. Farhat C, Avery P, Chapman T, Cortial J (2014) Dimensional reduction of nonlinear finite element dynamic models with finite rotations and energy-based mesh sampling and weighting for computational efficiency. Int J Numer Methods Eng 98(9):625–662
 48. Arslan A, Ince R, Karihaloo BL (2002) Improved lattice model for concrete fracture. J Eng Mech 128(1):57–65
 49. Kerfriden P, Schmidt KM, Rabczuk T (2013) Statistical extraction of process zones and representative subspaces in fracture of random composites. Int J Multiscale Comput Eng 11(3):253–287
 50. Jouhaud JC, Braconnier T, Ferrier M, Sagaut P (2011) Towards an adaptive POD/SVD surrogate model for aeronautic design. Comput Fluids 40(1):195–209
 51. Chaturantabut S, Sorensen DC (2009) Discrete empirical interpolation for nonlinear model reduction. In: Proceedings of the 48th IEEE conference on decision and control, held jointly with the 2009 28th Chinese control conference (CDC/CCC 2009). IEEE, Shanghai, pp 4316–4321
 52. Everson R, Sirovich L (1995) Karhunen–Loève procedure for gappy data. J Opt Soc Am A 12(8):1657–1664
 53. Rasmussen CE (2006) Gaussian processes for machine learning. The MIT Press, Cambridge
 54. Wirtz D, Sorensen DC, Haasdonk B (2014) A posteriori error estimation for DEIM reduced nonlinear dynamical systems. SIAM J Sci Comput 36(2):A311–A338
 55. Drohmann M, Carlberg K (2015) The ROMES method for statistical modeling of reduced-order-model error. SIAM/ASA J Uncertain Quantif 3(1):116–145
 56. Stein M (1987) Large sample properties of simulations using Latin hypercube sampling. Technometrics 29(2):143–151
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.