Integration efficiency for model reduction in micro-mechanical analyses

Micro-structural analyses are an important tool to understand material behavior on a macroscopic scale. The analysis of a microstructure is usually computationally very demanding, and several reduced order modeling techniques are available in the literature to limit the computational costs of repetitive analyses of a single representative volume element. These techniques to speed up the integration at the micro-scale can be roughly divided into two classes: methods interpolating the integrand and cubature methods. The empirical interpolation method (high-performance reduced order modeling) and the empirical cubature method are assessed in terms of their accuracy in approximating the full-order result. To this end, a micro-structural volume element is considered, subjected to four load cases, including cyclic and path-dependent loading. The differences in approximating the micro- and macroscopic quantities of interest, e.g. micro-fluctuations and stresses, are highlighted. Algorithmic speed-ups for both methods with respect to the full-order micro-structural model are quantified. The pros and cons of both classes are thereby clearly identified.


Introduction
In recent years, the use of composite or multi-phase high-performance steels and hybrid materials in advanced structures has increased significantly. Due to the ability to tailor their mechanical properties, these materials are applied in high-tech systems and structures where optimal material properties, such as a high strength-to-weight ratio, are essential. To achieve these excellent properties, the underlying microstructures of these heterogeneous materials tend to become increasingly complex, with strongly non-linear behavior of the constituents.
As a consequence, material testing procedures have become intricate, time-consuming and expensive. The number of required experiments can be reduced drastically by reliable modeling of the material behavior. Computational homogenization is one of the most accurate modeling techniques currently available to analyze and design the behavior of heterogeneous materials on a structural scale. Microstructures of heterogeneous materials can be modeled adequately, but the problem remains computationally demanding when applied to materials with a large scale separation between the structural dimensions and the dimensions of the material heterogeneities.
The early concepts of computational homogenization were introduced by Suquet [1] and extended and refined by several authors over the years, e.g. Renard and Marmonier [2], Guedes and Kikuchi [3] and Terada and Kikuchi [4]. The framework captures the behavior on the macroscopic scale by solving a boundary value problem on a representative volume element of the microstructure of the material. Smit et al. [5] adapted the framework to account for large deformations and rotations. Miehe et al. [6] combined this framework with a crystal plasticity material model to study texture development in polycrystalline metals. By means of a volume averaging technique, the homogenized relations between stress, stiffness and strain can be found [7]. Feyel and Chaboche modeled a SiC/Ti composite material using computational homogenization in [8], where the term FE² was coined for the first time. A second-order computational homogenization framework, which inherently accounts for the size of the microscale model, was proposed by Kouznetsova et al. [9]. More recently, computational homogenization has been used to analyze material behavior in various fields, such as acoustics [10] and composites [11], among many others. A detailed overview of the advances in computational homogenization is presented by Geers et al. [12] and Matouš et al. [13].
The common denominator of the aforementioned models is that the material behavior on the macroscale is obtained from a detailed model of the microstructure of the material. When the microscopic model is sufficiently large and detailed to accurately capture the homogenized behavior of the material, it is referred to as a representative volume element (RVE). When solving a macroscopic problem, the computationally homogenized model of the RVE is used; in materials design, the RVE is used to analyze the resulting macroscopic response of different microstructures. In both cases, the same microscopic RVE problem has to be resolved numerous times for different load cases.
One approach to decrease the required computation time is to increase the efficiency of solving the microscopic system of equations; Michel et al. [14], for example, used a Fast Fourier Transform to reduce the computational costs of solving a stress-controlled microscopic problem. Another approach is to reduce the number of unknowns of the microscopic model. This can be achieved by using a more suitable global basis to solve the problem, as done in the classical Rayleigh-Ritz technique [15].
This reduced order modeling technique was originally proposed in the late 1970s and early 1980s by Almroth et al. and Noor et al. [16,17]. The global basis is generated a priori from a local basis using Proper Orthogonal Decomposition (POD) [18], going back to the works of Pearson [19] and Schmidt [20]. Variants of this technique are known as principal component analysis (PCA) [21], the Karhunen-Loève transform (KLT) [22,23], proper orthogonal eigenfunctions [24], factor analysis [25] and total least squares [26]. The procedure of finding the global basis is strongly related to the singular value decomposition [26].
The number of global basis functions required to capture the solution accurately is typically orders of magnitude smaller than the number of local basis functions used in classical discretization techniques such as the Finite Element Method. Rewriting the problem onto the global basis reduces the size of the algebraic system of equations that needs to be resolved at every iteration. A detailed overview of the advances in reduced order modeling is presented in [27] and [28].
As pointed out by Rathinam and Petzold [29], a reduction of the number of degrees of freedom does not necessarily result in a reduction of the computational costs, as the integration of the nonlinear terms in the model is not tackled by the reduced basis approach. To resolve the high computational costs of evaluating the nonlinear integrand, a number of methods have been proposed, such as Proper Generalized Decomposition [30], which reduces the costs through separation of variables. Transformation Field Analysis is another semi-analytical approach, introduced by Dvorak [31], in which the computational costs of the FE² scheme are reduced by assuming constant plastic strain fields. The method was later extended by Michel and Suquet [32] to account for nonuniform plastic strain fields. As pointed out by Hernández et al. [33], hyper-reduction techniques can be divided into two categories: (1) models reducing the costs by interpolation of the nonlinear terms, i.e. Empirical Interpolation Methods (EIM), and (2) models reducing the costs of the integration scheme by minimizing the required number of evaluations of the nonlinear term, i.e. cubature methods.
Empirical interpolation of the nonlinear stress is done by projecting locally sampled values, also known as gappy data, onto a basis for the nonlinear term. The reconstruction of gappy data using modes was pioneered by Everson and Sirovich [34], from which several techniques originated, such as the Empirical Interpolation Method [35], and later the Best Point Interpolation Method (BPIM) [36], Missing Point Estimation (MPE) [37] and the Discrete Empirical Interpolation Method (DEIM) [38]. A comparison between different interpolating methods is given by Soldner et al. [39]. That review considers a geometrically non-linear hyper-elastic RVE which is reduced using three hyper-reduction methods, namely the Discrete Empirical Interpolation Method (DEIM), Gappy-POD and Gauss-Newton with Approximated Tensors (GNAT); its focus lies on the robustness of the methods.
On the other hand, cubature methods reduce the cost of integrating the nonlinear integrand by using a reduced set of integration points (or elements) with weights that minimize the integration error. The first cubature method was proposed by An et al. [40]; later, Farhat et al. [41] introduced the Energy-Conserving Sampling and Weighting (ECSW) hyper-reduction method in the field of computational mechanics.
This paper studies the accuracy and efficiency of hyper-reduction techniques in the context of microstructural volume element analysis. To this purpose, a detailed comparison is made between the Empirical Interpolation Method on the one hand and the Cubature Method on the other hand. We compare two specific implementations of these methods, namely the High-Performance Reduced Order Modeling technique (HP-ROM) [42] and the Empirical Cubature Method (ECM) [33], respectively. Both implementations are selected for their capability to handle the singularities arising when hyper-reduction techniques are applied to microstructural models loaded through macroscopically prescribed fields. In this comparison, emphasis is placed on the reduction of the computational costs of evaluating the nonlinear stress, the efficiency in terms of computational time, and the accuracy in resolving the RVE problem. In particular, the comparison focuses on the path-dependency originating from the elasto-viscoplastic material behavior.
The paper is organized as follows. In Sect. 2, a small-strain computational homogenization procedure is outlined, after which the standard reduced order modeling approach is introduced in Sect. 3. In Sect. 4, the two classes of hyper-reduction are explained briefly. The core of this paper is the evaluation of the different hyper-reduction methods in resolving the expensive computation of the stress field, presented in Sect. 5. The conclusions are presented in Sect. 6.

Computational homogenization
In this section, the two-scale computational homogenization procedure is outlined briefly. In the computational homogenization framework, a high-fidelity problem at the macroscopic length scale l_M, involving microscopic details with a characteristic length scale l_m ≪ l_M, is decomposed into two boundary value problems at separate scales, as schematically depicted in Fig. 1. By decomposing the problem into a micro- and a macroscale problem, the numerical costs of solving the full-scale problem are reduced drastically.
At the macroscale, where the variables and fields are denoted by subscript M, the constitutive behavior is derived from the homogenized response of a Representative Volume Element living at the microscale. Computational homogenization is conveniently formulated as a deformation-driven problem, i.e. the RVE is loaded with the deformation tensor of a macroscopic material point, for which the resulting stress tensor is required. Using a Hill-Mandel based homogenization procedure, the macroscopic stress can be derived from the microscopic stress field of the RVE. The macroscale problem, the microscale problem and their coupling are outlined subsequently.

Macroscale problem
The macroscopic problem is defined on a material body V_M consisting of a heterogeneous material with a particular microstructure. A material point at the macroscopic scale is described by its position vector X(t); the position vector in the reference configuration at time t = 0 is denoted by X_0. In a small-strain framework, the motion of a material point over time can be described by

X(t) = X_0 + u(X_0, t),     (1)

in which u = u(X_0, t) is the displacement vector. Assuming small-strain kinematics, the strain at the macroscale, ε_M = ε_M(X_0, t), is then defined as

ε_M = ∇^s u = ½ (∇u + (∇u)^T).     (2)

Static equilibrium at the macroscale implies

∇ · σ_M + b = 0,     (3)

where the stress tensor σ_M(ε_M, ξ) depends on the strain tensor ε_M and the history parameters ξ. The body forces are denoted by b. The macroscopic stress σ_M is obtained from the RVE problem through the micro-macro scale transition.

Microscale problem
To characterize the microstructure of the material, a Representative Volume Element V_m is defined. The position of a material point in the RVE over time is described by x(t); the reference configuration at t = 0 is denoted by x_0. The displacement of a material point over time is a superposition of the displacement induced by the macroscopic deformation, ε_M · x_0, and the microscopic fluctuation w = w(x_0, t),

x(t) = x_0 + ε_M · x_0 + w(x_0, t).     (4)

The microscopic strain ε_m is then given by

ε_m = ε_M + ∇^s w.     (5)

The influence of the body forces is assumed to be negligible at this scale, such that the linear momentum balance of the RVE simplifies to

∇ · σ_m = 0,     (6)

where σ_m = σ_m(ε_m, ξ) is the microscopic Cauchy stress and ξ are the history variables required for the constitutive relation.

Coupling of the scales
The homogenization is based on two principles. The first principle is that the volume averages of the microscopic strain and stress should match the macroscopic strain and stress, respectively:

ε_M = (1/|V_m|) ∫_{V_m} ε_m dV,   σ_M = (1/|V_m|) ∫_{V_m} σ_m dV.
The second principle is the Hill-Mandel condition [43], which prescribes that the virtual work performed per unit volume at the macroscale should equal the volume average of the virtual work at the microscale:

σ_M : δε_M = (1/|V_m|) ∫_{V_m} σ_m : δε_m dV.
It has been shown that periodic boundary conditions on the microfluctuation field, w^+ = w^-, comply with these homogenization equations. To eliminate rigid body displacements and rotations, the microfluctuations on the corners of the RVE boundary S_m are constrained by w = 0. The macroscopic stress σ_M is then given by volume averaging the microscopic stress σ_m, and the macroscopic tangent stiffness can be deduced using, for example, direct condensation [44].
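The volume-averaging of the stress over the RVE's integration points can be sketched as follows; the array layout (one row of Voigt-ordered stress components per integration point, quadrature weights that include the Jacobian determinant) is an assumption purely for illustration.

```python
import numpy as np

def volume_average_stress(sig_m, w_g):
    """Volume-average the integration-point stresses of an RVE.

    Assumed layout: sig_m has shape (n_g, 3) with Voigt components
    [s_xx, s_yy, s_xy] per point; w_g holds the quadrature weights
    multiplied by the Jacobian determinants."""
    V_m = w_g.sum()                               # RVE volume |V_m|
    return (w_g[:, None] * sig_m).sum(axis=0) / V_m

# sanity check: a uniform stress field must average to itself
sig = np.tile([100.0, 50.0, 10.0], (8, 1))
w = np.full(8, 0.25)
print(volume_average_stress(sig, w))
```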

Spatial discretization
Both the macroscopic and microscopic problems, Eqs. (3) and (6), can be put in weak form following the Bubnov-Galerkin approach. Multiplying these equations with test functions φ_M(X_0) and φ_m(x_0) respectively, integrating over the respective domains, and applying the divergence theorem yields:

∫_{V_M} ∇^s φ_M : σ_M dV = ∫_{V_M} φ_M · b dV + ∫_{S_M} φ_M · t dS,     (7)

∫_{V_m} ∇^s φ_m : σ_m dV = 0.     (8)

Both the macro- and microscopic domains are discretized using standard Lagrangian finite elements. The discretization introduces interpolation functions N_i(X_0) and nodes for i = 1, ..., n_d^M at the macroscale, and N_j(x_0) for j = 1, ..., n_d^m at the microscale. Here, n_d^M and n_d^m denote the numbers of nodes and thereby implicitly define the numbers of unknowns.
The test and trial functions φ_M(X_0) and φ_m(x_0) are approximated by their discrete counterparts. A similar approximation is used for the displacement field, u(X_0) ≈ u^h(X_0), and the microfluctuation field, w(x_0) ≈ w^h(x_0), where w^h(x_0) is constrained by periodic boundary conditions on the RVE's boundary and vanishes at the corners. The macroscopic problem takes conventional boundary conditions, e.g. displacements u^h = u* or tractions σ_M · n = t*. The Newton-Raphson method is used to solve the resulting equilibrium Eqs. (7) and (8).

Computational size and cost of the problem
Applying computational homogenization to a high-fidelity problem with ample microscopic details entails a significant increase of the required time and computational resources.
The total number of floating point operations scales with the number of macroscopic integration points n_g^M, since every macroscopic integration point carries its own RVE problem:

#flop ∝ n_g^M · #flop_LA,

in which #flop_LA is the number of FLOPs required to solve the matrix-vector systems of a single RVE problem. The same scaling relation holds for the number of evaluations of the ordinary differential equations in the material model. The computational costs associated with FE² problems can be illustrated quantitatively with a straightforward example. Consider a micro-structural element that is discretized with a grid of 100 × 100 quadrilateral elements with 4 integration points per element, yielding n_g^m = 40,000 integration points per RVE.
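The numbers in this example can be checked with a short back-of-the-envelope computation; the macroscopic mesh size used below is a hypothetical addition, purely to illustrate the multiplicative scaling of FE² costs.

```python
# Size of the FE^2 example above: a 100 x 100 grid of bilinear
# quadrilaterals with 4 Gauss points per element.
n_el = 100 * 100
n_g_m = n_el * 4                 # microscopic integration points
n_nodes = 101 * 101              # nodes of the structured grid
n_d_m = 2 * n_nodes              # 2 displacement DOFs per node

# Each macroscopic integration point carries its own RVE; assume a
# (hypothetical) macroscopic mesh with 1000 elements, 4 points each.
n_g_M = 1000 * 4
evals_per_iteration = n_g_M * n_g_m   # constitutive updates per Newton step

print(n_g_m, n_d_m, evals_per_iteration)   # 40000 20402 160000000
```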

Reduced order modeling
As shown in the previous section, the computational cost of the homogenization scales with the numbers of DOFs n_d^M and n_d^m and the numbers of integration points n_g^M and n_g^m. By reducing the number of DOFs of the RVE problem, the costs of solving the matrix-vector equations in the computational homogenization problem decrease proportionally.

Using a reduced basis, the finite element discretization is mapped onto a global basis. In most physical problems this yields a significant reduction of the required number of DOFs n_d^m, since the problem has considerably fewer physical modes for the kinematics than the number of local basis functions required to capture all the local features. Reduced order modeling using the reduced basis technique is outlined in Sect. 3.1.

After the reduction of the number of DOFs, the number of integration points n_g^m is still equal to that of the original problem. The reduction of the integral is done using High-Performance Reduced Order Modeling or Empirical Cubature, as outlined in Sect. 4.

Reduced basis approach
The number of DOFs n_d scales with the spatial resolution of the mesh, whereas the number of kinematic modes n_w present in the RVE is independent of the spatial resolution. Moreover, many modes have a negligible contribution. When discretizing the RVE without prior knowledge, the number of DOFs needed to accurately capture these kinematics is therefore high relative to the number of relevant modes.
If there is prior knowledge of the kinematics occurring in the RVE, it is possible to construct a basis that captures these kinematics more efficiently, with a limited number of DOFs. Since computational homogenization often involves repetitive calculations on an RVE, such an additional optimization step is worthwhile: the extra computational costs of identifying a reduced basis are justified by the much higher costs of solving the RVE problem many times.
To find a more optimal basis for the microscale problem, prior knowledge of the relevant kinematics needs to be obtained. This is done by repetitively solving the RVE problem under different macroscopic strains ε_M^i(t_j), for i = 1, ..., n and j = 1, ..., n_t, to gather a database with snapshots of the occurring microfluctuation fields. Each separate time-step of each load-case is considered as a snapshot, numbered k; in total there are n_s = n · n_t snapshots of the microfluctuation field. A snapshot of the microfluctuation field is expressed as

w_k^h(x_0) = Σ_j N_j(x_0) w_jk.     (9)

Equation (9) can be rewritten as a matrix-vector equation, and the snapshots are collected in the so-called snapshot matrix X = [w_1, w_2, ..., w_{n_s}]. An orthonormal basis is derived from the snapshot matrix X using the Proper Orthogonal Decomposition (POD). This mathematical procedure provides the most optimal basis (in the least-squares sense) to represent the snapshots, together with the corresponding eigenvalues λ_i.¹ The eigenvalues denote the energy content of their corresponding eigenmodes v_i in representing the snapshots.
Based on their eigenvalues, the most important modes can be selected to construct a reduced basis for the microscale problem. The eigenvalues and their corresponding eigenvectors are sorted in descending order, allowing the basis to be constructed from the first n_w eigenvectors. The number of modes required to capture the snapshots up to a given tolerance δ ∈ [0, 1] is found by increasing n_w until the following relation holds²:

Σ_{i=1}^{n_w} λ_i / Σ_{i=1}^{n_s} λ_i ≥ 1 − δ.

The Proper Orthogonal Decomposition is outlined schematically in Fig. 2.
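The snapshot POD with energy-based truncation can be sketched via the SVD; the truncation follows the criterion above, while the snapshot data here is synthetic.

```python
import numpy as np

def pod_basis(X, delta=1e-4):
    """Truncated POD basis of a snapshot matrix X (n_d x n_s): keep the
    smallest n_w such that sum(lambda_1..lambda_n_w) / sum(lambda)
    >= 1 - delta, with lambda_i = sigma_i**2 the POD eigenvalues."""
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    lam = s**2
    energy = np.cumsum(lam) / lam.sum()
    n_w = int(np.searchsorted(energy, 1.0 - delta)) + 1
    return U[:, :n_w], lam[:n_w]

# synthetic snapshots built from two independent modes -> n_w = 2
rng = np.random.default_rng(0)
modes = rng.standard_normal((50, 2))
X = modes @ rng.standard_normal((2, 30))
V, lam = pod_basis(X)
print(V.shape)   # (50, 2)
```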

Construction of the reduced order model
After determining the number of required modes, V = [v_1, v_2, ..., v_{n_w}], the reduced basis functions R_l(x_0) can be constructed as a linear combination of the traditional basis functions N_m(x_0) using the modes v_l resulting from the POD:

R_l(x_0) = Σ_m N_m(x_0) v_ml.
Using a Galerkin approach for discretizing the reduced system, the test and trial functions are discretized using the reduced basis,

φ_m^h(x_0) = Σ_j R_j(x_0) Φ_j,   w^h(x_0) = Σ_i R_i(x_0) W_i,

in which Φ_j and W_i are the reduced test-function and microfluctuation coefficients. Substituting the reduced functions into the weak formulation of the microscopic momentum balance (8) yields the reduced linear momentum balance

f̃_int(W) = 0,     (12)

with the reduced internal force given by

f̃_int,i(W) = ∫_{V_m} ∇^s R_i : σ_m(ε_m(W), ξ) dV,  i = 1, ..., n_w,     (13)

where W = [W_1, W_2, ..., W_{n_w}]^T is the reduced vector of unknowns used to discretize the microfluctuation field. The number of modes n_w is often orders of magnitude smaller than the number of DOFs n_d. The reduced linear momentum balance is solved using the same Newton-Raphson procedure as the full microscale problem. The computational costs for solving the system of equations scale proportionally with the number of microfluctuation modes n_w.

¹ Note that the singular values σ_i resulting from the singular value decomposition (SVD) are related to the eigenvalues of the proper orthogonal decomposition via σ_i = √λ_i.
² This relation holds only for fluctuation fields that are present in the snapshot matrix.
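The Galerkin projection onto the reduced basis can be illustrated on a toy nonlinear problem; the cubic "internal force" and the random orthonormal basis V below are invented purely for this sketch. Newton's method then operates on the n_w modal coefficients W instead of the n_d nodal DOFs.

```python
import numpy as np

# Toy full-order nonlinear problem: f_int(u) = K0 u + beta u^3 (per DOF).
n_d, n_w = 20, 2
rng = np.random.default_rng(1)
V, _ = np.linalg.qr(rng.standard_normal((n_d, n_w)))  # orthonormal "POD" basis
K0 = 4.0 * np.eye(n_d)
beta = 0.5
f_ext = K0 @ (V @ np.array([0.3, -0.2]))   # load with solution near span(V)

def f_int(u):
    return K0 @ u + beta * u**3

def K_tan(u):                              # consistent tangent stiffness
    return K0 + np.diag(3.0 * beta * u**2)

W = np.zeros(n_w)                          # n_w unknowns instead of n_d
for _ in range(25):
    u = V @ W
    r = V.T @ (f_int(u) - f_ext)           # reduced residual
    if np.linalg.norm(r) < 1e-12:
        break
    K_r = V.T @ K_tan(u) @ V               # n_w x n_w reduced tangent
    W -= np.linalg.solve(K_r, r)

print(np.linalg.norm(r))                   # converged reduced residual
```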
After reducing the number of microfluctuation DOFs, the computational costs for solving the RVE problem are dominated by the evaluation of the material model. Due to the nonlinear character of the internal force, the integral in Eq. (13) cannot be precomputed. The procedure therefore remains computationally expensive, since the material model needs to be evaluated at each integration point to compute the integral of the internal force.

High-performance reduced order modeling
In this section two approaches to reduce the costs of evaluating the nonlinear internal force are reviewed:
1. The Empirical Interpolation Method (EIM) [42] projects the nonlinear term onto a reduced basis for the stress field. The stress field is approximated by the reduced modal basis, enabling pre-computation of the integral.
2. Alternatively, the integration scheme can be reduced directly. This approach is known in the literature as the Energy-Conserving Sampling and Weighting (ECSW) hyper-reduction method or the Empirical Cubature Method (ECM) [33]. The integration costs are reduced by using a preselected subset of integration points with optimized weights.

Empirical interpolation method
In the Empirical Interpolation Method, the stress field is approximated by interpolating between weighted stress modes Ψ_j(x_0), dependent on the microscopic coordinates, which are scaled by coefficients C_j(ε_M, ξ) that depend on the macroscopic deformation ε_M and the material point history ξ. The advantage of introducing modes for the stress field is that the integration of these modes can be performed in the off-line stage, since the interpolating coefficients no longer have a spatial dependency. The weighted strain modes Φ_k(x_g) are constructed by taking the symmetric gradient of the micro-fluctuation modes, weighted by the square-root of the integration weight (including the Jacobian). The weighted stress modes μ_k(x_g) are obtained from the square-root weighted stress snapshot matrix X_σ. The orthogonal weighted stress basis can be obtained by applying a standard POD procedure on the vectorized components of the sampled stresses, or by using a tensorial decomposition such as the procedure outlined by Roussette et al. [45].
The square-root weighted stress and strain fields can be approximated analogously on their respective modal bases; for the stress,

σ̂(x_g) ≈ Σ_j Ψ_j(x_g) C_j(ε_M, ξ).     (14)

Substituting the stress basis (14) into the reduced order model (12), after applying Gaussian quadrature to calculate the integral, leads to an expression for the internal force vector in which the integrals over the stress modes can be precomputed.
The modal coefficients C_j(ε_M, ξ) still need to be determined for the current strain state. This is done by selecting a set of integration points as sampling points and applying a least-squares fit, such that the error between the stresses sampled at these points and the approximated stresses is minimized in the least-squares sense.

Reformulation of the ill-posed stress-strain problem
The stress and strain bases are composed of stress and strain fields resulting from the converged linear momentum balance (8). This leads to an ill-posed system of equations, since the reduced linear momentum balance (18), constructed using the modal bases for the stress and strain fields, is in equilibrium regardless of the coefficients C_σ, as stated in [42].
In order to solve this problem, the Expanded Basis Approach (EBA) is applied, where the stress-basis Ψ and corresponding coefficients C σ are enriched with the weighted strain basis Φ and corresponding (inadmissible) coefficients C ε , such that the basis does not satisfy equilibrium independently of the coefficients C σ .
By rewriting the strains and stresses in Voigt notation, the tensorial bases Ψ and Φ are reformatted into matrix format. The gappy stress basis Ψ̂ is then formed by selecting all the rows corresponding to the selected integration points x_g for g ∈ I, where I is the set of selected integration point indices.
The coefficients of the expanded basis (19), C = [C_σ^T, C_ε^T]^T, are identified from a set of n̂_g ≪ n_g sampled gappy stresses in Voigt notation, σ̂, by approximating the stress in the least-squares sense:

C = arg min_C ||Ψ̂_ex C − σ̂||².     (20)

Using the Schur complement, the coefficients C can be expressed as

C_σ = Ψ̂† (σ̂ − Φ̂ C_ε),   C_ε = S^{-1} Φ̂^T (I − Ψ̂ Ψ̂†) σ̂,     (21)

in which the matrix Ψ̂† is the pseudo-inverse of the gappy stress basis matrix and S is the Schur complement matrix defined by

S = Φ̂^T (I − Ψ̂ Ψ̂†) Φ̂.     (22)

Matrix S is invertible since the sampled stresses are chosen such that Ψ̂_ex is of full rank. The problem can then be rewritten as finding a solution for which the inadmissible coefficients vanish, i.e. C_ε = 0, such that the stress solution is interpolated using only stress basis vectors in equilibrium. Since S is non-singular, as shown in [42], the coefficients C_ε can only vanish when the following holds:

Φ̂^T (I − Ψ̂ Ψ̂†) σ̂(W) = 0.     (23)

This form of the problem is referred to as the hyper-reduced problem [42].
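The gappy least-squares reconstruction at the heart of the method can be sketched as follows, with a synthetic stress "field" that lies exactly in the span of the basis (so the reconstruction is exact up to round-off); the sampled point set is chosen arbitrarily here, not greedy-selected.

```python
import numpy as np

# Gappy least-squares sketch: reconstruct a full field from samples at
# a few points using a modal basis Psi.
rng = np.random.default_rng(2)
n_g, n_modes = 200, 3
Psi = np.linalg.qr(rng.standard_normal((n_g, n_modes)))[0]  # stress modes
sigma = Psi @ np.array([2.0, -1.0, 0.5])   # field in the span of the modes

I = [3, 40, 77, 120, 181]                  # sampled ("gappy") points
Psi_hat = Psi[I, :]                        # rows of the basis at I
sigma_hat = sigma[I]                       # sampled gappy stresses

C = np.linalg.pinv(Psi_hat) @ sigma_hat    # least-squares coefficients
sigma_rec = Psi @ C                        # reconstructed full field
print(np.max(np.abs(sigma_rec - sigma)))   # ~ machine precision
```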

Gappy point selection
To complete the hyper-reduced model, a set of integration points x_g, g ∈ I, suitable for the empirical interpolation of the stress field is chosen. This is done by using the snapshots of the stress field as samples to find a set of integration points that yields a good approximation of the complete stress field. It is computationally intractable to evaluate the approximation quality of all (n_g choose n̂_g) possible subsets of integration points. Therefore a (sub-optimal) greedy algorithm is applied to find a set of integration points with a good interpolating quality for the snapshot data [42].
To improve the stability of the system of equations, the selected points are complemented with a second set of points chosen from the remaining integration points. These points are selected using the same greedy algorithm, with a criterion that aims to optimize the conditioning of the resulting reduced tangent stiffness matrix.
Accuracy points Hernández et al. [42] select the integration points by minimizing the error between the stress snapshots σ_j and the interpolated stresses σ*_j(Ψ, I) constructed using the subset of integration points x_g with g ∈ I,

e² = Σ_{j=1}^{n_s} ||σ_j − σ*_j(Ψ, I)||².

This error is split by Hernández et al. [42] into a truncation error, which represents the error introduced by using a truncated stress basis, and a reconstruction error e_rec, which measures the error introduced by the least-squares fit of the stresses onto the modal basis. The reconstruction error is the measure used to select the integration points best suited for an accurate stress interpolation.
Stability points Hernández et al. [42] reported that integration points selected on the basis of accuracy alone yield a system of equations that is not unconditionally stable. Therefore, the set of selected integration points is complemented with a set of extra points to ensure the stability of the system. The same greedy selection procedure is used, now with a criterion that aims to optimize the positive-definiteness of the reduced tangent stiffness matrix, i.e. the conditioning criterion given in [42] is minimized. This yields a second set of integration points in favor of the stability of the empirical stress interpolation.
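As an illustration of greedy point selection, the classic DEIM algorithm is sketched below; it is a representative member of this family, not the exact accuracy/stability criteria of Hernández et al. [42].

```python
import numpy as np

def deim_points(U):
    """Classic DEIM greedy selection on a basis U (n_g x m): each new
    point is placed where the interpolation error of the next mode,
    using the points chosen so far, is largest in magnitude."""
    n, m = U.shape
    I = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        c = np.linalg.solve(U[np.ix_(I, list(range(j)))], U[I, j])
        r = U[:, j] - U[:, :j] @ c            # interpolation residual of mode j
        I.append(int(np.argmax(np.abs(r))))
    return I

rng = np.random.default_rng(3)
U = np.linalg.qr(rng.standard_normal((100, 4)))[0]
I = deim_points(U)
print(len(I), len(set(I)))                    # 4 distinct points selected
```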

Reconstruction of the reduced stress field
The resulting weighted stress field in the RVE is given by the Empirical Interpolation (17). The coefficients C_σ follow from expression (21) by substituting C_ε = 0, i.e. C_σ = Ψ̂† σ̂, so that the weighted stress field is reconstructed from the gappy stresses as

Ψ C_σ = Ψ Ψ̂† σ̂ = R σ̂,

where R is referred to as the weighted reconstruction matrix.
To reconstruct the macroscopic stress, the reconstructed stress field σ_m(x_g, t) has to be integrated over the RVE and volume averaged. The relation between the gappy stresses and the macroscopic stress is then given by a linear operator T, which projects the gappy stresses σ̂ onto the macroscopic stress: σ_M = T σ̂.

Empirical cubature method
Instead of approximating the stress field using a proper orthogonal basis constructed from the stress snapshots, the expensive integral of the internal forces can be approximated using the Empirical Cubature Method. For a polynomial FEM basis one can construct an exact quadrature scheme. This classical Gaussian quadrature scheme approximates the integral by summing weighted samples of the integrand f at specific points, the Gauss integration points x_g:

∫_{V_m} f(x) dV ≈ Σ_{g=1}^{n_g} w_g f(x_g),

where w_g denote the Gaussian quadrature weights (including the determinant of the Jacobian).
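For reference, a minimal quadrature example: the two-point Gauss-Legendre rule on [-1, 1] integrates polynomials up to degree three exactly.

```python
import numpy as np

# Two-point Gauss-Legendre rule on [-1, 1]: exact for cubic polynomials.
x_g = np.array([-1.0, 1.0]) / np.sqrt(3.0)    # Gauss points
w_g = np.array([1.0, 1.0])                    # Gauss weights

f = lambda x: 3.0 * x**2 + x                  # exact integral over [-1, 1]: 2
approx = np.dot(w_g, f(x_g))                  # weighted sum of samples
print(approx)                                 # ~ 2.0
```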
When the problem is rewritten using the reduced-order basis, cf. (12), one can imagine that the integration can be reduced as well, since the number of unknowns describing the integrand has decreased drastically. The underlying idea of reduced integration is to select a subset of integration points and assign a (positive) weight to each of them, such that they approximate the integral as accurately as possible. This concept is demonstrated in [33,40,41].
The method will be outlined briefly in two steps. First the determination of the integration weights is discussed; next the selection procedure for the integration points is presented.

Determining the integration weights
To determine the weights, the internal force contributions at each integration point, f_i^p(x_g), resulting from each modal virtual strain ∇^s R_i(x_g), i = 1, ..., n_w, under each snapshot p = 1, ..., n_s, are considered. The modal internal force contributions for all snapshots are collected in a snapshot matrix X_f ∈ R^{n_g × n_w·n_s}, and the corresponding exact integrals are collected in a right-hand side vector b. The integration error for a selected subset of integration points G ⊂ {1, 2, ..., n_g}, using the associated integration weights α_i for i ∈ G, is given by

e = ||J_G α − b||,

where the matrix J_G = X_f[G, :]^T collects the (transposed) rows of X_f corresponding to the selected points. To find the optimal weights, the following minimization problem has to be solved:

min_{G, α ≥ 0} ||J_G α − b||²,     (35)

in which n̂_g integration points need to be selected and their corresponding (positive) integration weights α need to be determined.

Reformulation of the ill-posed minimization problem
Note that problem (35) is again ill-posed when the internal force fields in the snapshots are all equilibrium solutions of the RVE problem. This yields a right-hand side vector b = 0, and the minimization problem (35) then has the trivial solution α = 0. The ill-posed problem is regularized by additionally integrating the constant function g(x) = 1, whose integral must equal the volume of the RVE, |V_m| [33].

Cubature weights
It is intractable to evaluate all (n_g choose n̂_g) possible combinations. To obtain a good approximation of the minimum, a greedy procedure is applied to select integration points that reduce the integration error. For a given subset of integration points G, the coefficients α can be determined using a least-squares (and, when negative values occur, a non-negative least-squares) algorithm.
The criterion used to identify candidate points selects the point that is currently most aligned with the residual of the snapshot integrals. This procedure is repeated until enough integration points are selected to bring the residual below the required integration tolerance, or until the maximum number of gappy integration points n̂_g is reached.

Box 1: Construction of the Empirical Cubature Scheme
Start out with the set of available integration points A ← {1, 2, ..., n_g}, an empty set S ← ∅ and counter m ← 0 for the selected points, and the integration residual r ← b.
While ‖r‖/|V_m| > δ AND m < n̂_g:
1. Select the candidate integration point i ∈ A whose column J_i is most aligned with the residual r.
2. Move point i from A to S and increment m.
3. Determine the weights α for the points in S by a (non-negative) least-squares fit of J_S α to b.
4. Update the residual r ← b − J_S α.

It is not trivial to reconstruct the microscopic stress field from the sampled stresses, as the Empirical Cubature Method does not use stress modes. One can, however, follow the approach proposed in [33] to reconstruct the microscopic stress field using the weighted reconstruction matrix R for the selected integration points.
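A minimal sketch of the greedy construction in Box 1, assuming the snapshot-integral matrix J (one column per integration point) and the right-hand side b are given; for simplicity a relative residual tolerance is used here instead of the volume-normalized one:

```python
import numpy as np
from scipy.optimize import nnls

def empirical_cubature(J, b, tol, n_max):
    """Greedy point selection (a sketch of Box 1): repeatedly pick the
    column of J most aligned with the residual, then fit non-negative
    weights by NNLS. Requires n_max <= J.shape[1]."""
    available = list(range(J.shape[1]))
    selected = []
    r = b.copy()
    alpha = np.zeros(0)
    while np.linalg.norm(r) > tol * np.linalg.norm(b) and len(selected) < n_max:
        # 1. candidate point whose (normalized) column is most aligned with r
        cols = J[:, available]
        scores = np.abs(cols.T @ r) / np.linalg.norm(cols, axis=0)
        i = available.pop(int(np.argmax(scores)))
        selected.append(i)
        # 2. non-negative least-squares weights for the selected points
        alpha, _ = nnls(J[:, selected], b)
        # 3. update the residual of the snapshot integrals
        r = b - J[:, selected] @ alpha
    return selected, alpha

# Hypothetical test data: integrals generated by three known positive weights
rng = np.random.default_rng(1)
J_demo = np.abs(rng.standard_normal((5, 30)))
b_demo = J_demo @ np.bincount([2, 7, 11], weights=[0.5, 1.0, 0.2], minlength=30)
points, weights = empirical_cubature(J_demo, b_demo, 1e-10, 30)
```

Because a sparse non-negative solution exists for the demo data, the greedy loop terminates with a small residual and strictly non-negative weights.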

Reduced integration of the RVE
The linear momentum balance with reduced internal force (12) is solved by using the Empirical Cubature scheme to resolve the integral. This leads to the reduced integration scheme (37), in which the internal force integral is approximated by the weighted sum of the contributions at the selected points, and from which the unknown reduced micro-fluctuations W can be solved using a Newton-Raphson method.
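The weighted-sum evaluation of the reduced internal force can be sketched as follows; the interfaces B_sel and stress_fn are hypothetical placeholders, not the paper's implementation:

```python
import numpy as np

def reduced_internal_force(B_sel, alpha, stress_fn, W):
    """Reduced internal force as a weighted sum over the selected points.

    B_sel[g]  : matrix mapping the reduced fluctuations W to the strain
                at sampled point g (hypothetical interface)
    alpha[g]  : empirical cubature weight of point g
    stress_fn : constitutive update returning the stress for a strain
    """
    f = np.zeros_like(W)
    for B_g, a_g in zip(B_sel, alpha):
        eps_g = B_g @ W              # strain at the sampled point
        sig_g = stress_fn(eps_g)     # constitutive law evaluation
        f += a_g * (B_g.T @ sig_g)   # weighted modal force contribution
    return f

# Minimal check with a linear-elastic stand-in: sigma = 2 * eps
rng = np.random.default_rng(3)
B_demo = [rng.standard_normal((3, 4)) for _ in range(5)]
a_demo = np.array([0.2, 0.1, 0.3, 0.25, 0.15])
W_demo = rng.standard_normal(4)
f_demo = reduced_internal_force(B_demo, a_demo, lambda e: 2.0 * e, W_demo)
```

Inside a Newton-Raphson loop, W would be updated until this reduced force balances the prescribed macroscopic loading.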

A comparative analysis of EIM versus ECM
To assess the performance of the hyper-reduced models, a critical comparison is made between the empirical models, the reduced order models and the full-order model of an RVE. Focus is put on the suitability of the hyper-reduced models as a stand-in replacement of the complex full-order model of the RVE in a computational homogenization framework. A small-strain, plane-strain formulation is assumed in the model of the RVE. The RVE consists of stiff fibers embedded in a soft matrix. The matrix is modeled using a small-strain elasto-viscoplastic constitutive law of De Souza Neto et al. [46, p. 148]; the fibers are elastic. The stiffness ratio between the fibers and the matrix is chosen as a factor 10 to replicate a carbon-fiber epoxy bundle. The properties of both phases in the RVE are listed in Table 1.
Here E is the Young's modulus, ν the Poisson's ratio, γ̇_0 the initial slip rate, n the rate exponent, σ_y0 the initial yield stress, H the hardening parameter and m the hardening exponent. A time increment Δt = 2.5 × 10⁻⁴ s is used.
The dimensions and topology of the composite microstructure are shown in Fig. 3. The topology is meshed using 20,602 elements.

Snapshot construction
The reduced order models are initialized using an orthogonal set of macroscopic strains (computed off-line). The full-order model results for the microfluctuations and stresses are stored in the snapshot matrices for each time increment. The loading paths chosen to initialize the model run from 0 to an equivalent macroscopic strain of 0.2 in 20 steps, in the ε_xx, ε_yy and ε_xy directions. In this way, 60 equilibrium configurations are obtained.
The microfluctuation and stress modes are determined from the snapshot matrices using POD. The eigenvalues that correspond to these modes are shown in Fig. 4. To investigate the influence of the number of modes taken into account, two reductions are formulated with different numbers of modes. The reduced microfluctuation basis is formed from the n_w = 10 and n_w = 20 most dominant modes of the POD. In the remainder, these reductions will be denoted as the low- and high-fidelity models, respectively. The same numbers of modes are used to approximate the stress field in the empirical interpolation model. Following Hernández [42], the number of sampling points used for accuracy and stability corresponds to the number of stress and strain modes, respectively.
To generate the empirical cubature scheme, a cut-off tolerance on the integration error is required. The order of magnitude of the interpolation error is predicted by the sum of the squared eigenvalues of the truncated modes of the stress basis. These interpolation errors δ_1 = 10⁻³ and δ_2 = 10⁻⁴ are used as tolerances to cut off the integration point selection algorithm.

Table 2: The number of microfluctuation modes used in the reduced and hyper-reduced models, the number of modes used for the empirical interpolation of the stress field, and the tolerance with the resulting number of sample points in the empirical cubature model.

                ROM (n_w)   EIM (n_σ)   ECM (δ → n̂_g)
Low-fidelity       10          10        10⁻³ → 67
High-fidelity      20          20        10⁻⁴ → 321
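The POD construction and the eigenvalue-based truncation-error estimate can be sketched as follows; random data stands in for the actual snapshot matrix, and the eigenvalues correspond to the squared singular values of the snapshots:

```python
import numpy as np

def pod_basis(snapshots, n_modes):
    """Truncated POD basis via the SVD of the snapshot matrix
    (columns = snapshots); returns the basis and the singular values."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :n_modes], s

X = np.random.default_rng(2).standard_normal((1000, 60))  # e.g. 60 snapshots
R, s = pod_basis(X, 10)          # n_w = 10 most dominant modes

# Truncation-error estimate: sum of the squared singular values of the
# discarded modes, here normalized by the total energy of the snapshots
delta = np.sum(s[10:]**2) / np.sum(s**2)
```

The estimate delta plays the role of the cut-off tolerance fed to the integration point selection algorithm.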
In the remainder of this paper, three different techniques will be compared: (1) standard reduced order (with full integration), (2) empirical interpolation and (3) empirical cubature models. A summary of all the specifications of the (Hyper-)Reduced Order Models in terms of modes and sampling points are given in Table 2.
In order to assess the approximation quality of the reduced models, the microfluctuations and the micro- and macroscopic stresses are compared to those of the full-order model. In the reduced order model, no information is lost during the integration of the stress field to obtain the internal forces. This model can therefore be regarded as the best approximation for the applied reduced basis, and it serves as a reference model to investigate the performance of the hyper-reduced models. The achieved time reduction is evaluated by comparing the CPU time required to perform the computations to the CPU time required to solve the original full-order model.

Uniaxial loading
The accuracy of the empirical methods is first investigated for one of the load-cases used to generate the snapshots (ε_{M,xx}).
This case serves to highlight the influence of the reduced stress integration in the (hyper-)reduced order models. Note that the modes required to represent the microfluctuation and stress fields are then present up to the cut-off tolerance of the POD basis. The effects of the stress-field reconstruction for EIM and the integral reconstruction for ECM using gappy sampling on the final deformation and stress fields will therefore be dominant. The Representative Volume Element is loaded with a macroscopic strain ε_{M,xx} from 0 to 0.2 in 20 time increments (each with the specified time step Δt). Figure 5 shows the RVEs loaded up to a macroscopic strain of ε_{M,xx} = 0.2. The deformed RVEs solved by the full-order model and the low-fidelity hyper-reduced models are plotted in Fig. 5a-c. Not surprisingly, the essential deformation modes for this case are all captured by the low-fidelity models. The microfluctuation w_x along the line A-B and the corresponding errors of the hyper-reduced models with respect to the full-order solution are plotted in Fig. 5d, e respectively. The errors of the hyper-reduced RVEs are both of order O(10⁻⁴) or lower. Secondly, the errors in the microscopic stress field are analyzed. The stress fields σ̃(x) resulting from the (hyper-)reduced models are compared to the stress field σ(x) of the full-order model. It should be noted that the local stress field for the ECM method is obtained by creating the reconstruction operator R for the sampled points. The error in the stress component σ_xx is defined as the normalized difference between the hyper-reduced and full-order fields. The stress residuals are depicted in Fig. 6. For both the LoFi and the HiFi model, the EIM method performs better than the ECM method. This difference results from the extra information that the empirical interpolation model draws from the stress modes. Since these modes contain much more spatial detail, the stress field is captured more accurately.
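The pointwise field errors reported here can be computed as a normalized difference with the full-order field; the particular normalization below (by the maximum magnitude of the full-order field) is an assumption, not necessarily the paper's exact definition:

```python
import numpy as np

def field_error(field_reduced, field_full):
    """Pointwise error of a (hyper-)reduced field, normalized by the
    maximum magnitude of the full-order field (assumed normalization)."""
    return np.abs(field_reduced - field_full) / np.max(np.abs(field_full))

# Example: a reduced field deviating by at most 1e-4 from the full field
w = np.linspace(-1.0, 1.0, 101)
w_tilde = w + 1e-4 * np.sin(10 * w)
err = field_error(w_tilde, w)
```

For the uniaxial case, errors of order O(10⁻⁴) in this measure correspond to the curves plotted in Fig. 5e.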
Furthermore, it can be concluded that the error bounds, although lower for the high-fidelity model, do not differ significantly between the LoFi and HiFi models. The volume-averaged error in the stress dropped significantly for both models.
The macroscopic stresses resulting from the full- and (hyper-)reduced order models are plotted in Figs. 7 and 8. For this load-case, the resulting macroscopic stresses show no large deviations from the full-order result. Interestingly, the errors plotted on the right-hand side show a clear difference between the EIM and ECM approximations of the macroscopic stress. The EIM approximation follows the error trend of the reduced order model, while the error made by the ECM model is several orders of magnitude higher than the error of the reduced order model. The error in the microfluctuations of the hyper-reduced models (w̃) relative to the full-order model (w) is plotted in Fig. 9. The error in the microfluctuation fields decreases with an increasing number of strain modes. It is noteworthy that the empirical interpolation method seems to outperform the empirical cubature method only to a small extent, which is remarkable since the approximation errors of the macroscopic stress for the two models differed by several orders of magnitude.
The computation times for all models are assessed on a single CPU (Intel® Xeon® CPU E5-2667 v3 processor @ 3.20 GHz), without parallelization to keep the comparison clear. The speed-ups with respect to the full-order model are presented in Table 3. The speed-up of the ROM model is relatively insensitive to the number of modes used, and the gain in computational time is small compared to the full-order model of the RVE. This illustrates the poor performance of classical ROM for non-linear reduced models. The EIM model achieves a speed-up between a factor 112 and 123. The speed-up of the ECM models lies between 73 and 113. The ECM model is somewhat slower due to the larger number of integration points required.

RVE under biaxial loading
A biaxial load is applied next to test the performance of the reduced models under loads not present in the snapshots used to train them. The macroscopic strain is increased to

$\varepsilon_M = \begin{bmatrix} 0.2 & 0.0 \\ 0.0 & 0.2 \end{bmatrix}$

in 20 equal load-increments.
From the FOM and ECM stress fields presented in Fig. 10, it is clear that all the dominant plastic zones are recovered by the reduced model. The stress-strain curves of the biaxial load-case presented in Fig. 11 indicate that in the linear regime the EIM model outperforms the ECM model. Since the corresponding modes are present, the stress approximation of the interpolating model is more accurate. However, when entering the plastic regime, the ECM model maintains a consistent error level, while the error in the macroscopic von Mises stress predicted by the EIM model increases due to the lack of sampling points required to accurately interpolate the stress modes for this load case.
The CPU times of the simulations are given in Table 4. The hyper-reduced methods EIM and ECM are 54 and 88 times faster than the full-order method, respectively. The difference in speed-up between the empirical interpolation and the empirical cubature method can be explained by the extra gappy points required by ECM to obtain accurate solutions.

Path dependency
To investigate the sensitivity of both reduced models to history-dependent behavior, the models are first preloaded for 10 increments, after which the loading path changes direction. The resulting macroscopic stress per increment is plotted in Fig. 12.
In the first regime, the EIM model is more accurate than the ECM model due to the availability of the correct modes. When entering the shear strain regime at increment 10, the ECM method starts to perform better than the EIM method. The EIM model depends strongly on the stress modes found during the off-line phase. Since the reduced model was only trained using RVEs with virgin material, without prior loading history, the required modes are not present to accurately capture the path-dependent stress fields. The weak dependence on stress modes makes the ECM easier to train. Obviously, the strain modes also differ for different loading paths, whereas the modes found in the snapshots include monotonic paths only. For this small-strain model, this mainly affects the stresses, since the ECM does not show a significant drop in accuracy after increment 10. The speed-up achieved by the (hyper-)reduced models for the path-dependent case is presented in Table 5. The speed-ups achieved during the path-dependent loading are comparable to the speed-up factors found in the biaxial test-case presented in Table 4.

Cyclic loading
Finally, the behavior of the models under repetitive loading is analyzed. The RVE is exposed to a cyclic load in the xx-direction, shifting its history increasingly further away from the virgin state. The loading direction corresponds to the snapshots, excluding the influence of any interpolation between different loading directions.
The stress-strain curve is shown in Fig. 13. The error between the full-order model and the hyper-reduced models increases slowly over the increments. Due to the single loading direction, the difference between the EIM and ECM methods is much less pronounced than in the path-dependent case. When switching between tensile and compressive loading, the error shows significant jumps, indicating that the unloading stage, which is absent in the off-line initialization stage, is hard to capture for both the ECM and EIM models.
The speed-ups achieved by the (hyper-)reduced models for the cyclic load-case are presented in Table 6. Although the runtime of the simulations is longer due to the extra increments, the speed-ups achieved during the cyclic loading are comparable to the speed-up factors found in the previous test-cases presented in Tables 4 and 5.

Conclusions
Both the empirical interpolation method and the empirical cubature method yield a significant reduction in computation time and memory footprint compared to traditional ROM. Both models are capable of interpolating between the sampled deformations, as shown for the biaxial load-case. When the load-case includes unsampled states of the history parameters, the empirical interpolation method results in errors above 1% for the macroscopic stress with respect to the full-order model. The empirical cubature method is less sensitive to the extrapolation of the snapshot space, since it only depends on the strain modes and not on the stress modes. The main asset of the ECM method is that it naturally preserves the stability of the full-order model and therefore does not require the added stabilization needed for the EIM method. However, with the stabilizing integration points added, the EIM models showed good convergence for all the cases demonstrated in Sect. 5, using the (heuristic) criterion of n_w stabilization points suggested in literature.
The empirical cubature method requires more computational time and has a larger memory footprint than the empirical interpolation method, due to the extra integration points needed to accurately capture the behavior of the full-order model. The EIM method outperforms the ECM method when on-line computational efficiency is considered, due to the extra information it has available in the stress modes. However, this dependence on stress modes also makes EIM more vulnerable to ill-sampled path-dependent problems, as demonstrated in the path-dependent and cyclic examples.
The largest reduction in required memory and computation time in a computational homogenization setting can therefore be achieved using the empirical interpolation method, due to the smaller amount of memory required to store the history at the sampling points. To obtain accurate results, the high-dimensional snapshot space needs to be sampled sufficiently well, such that the paths encountered in the macroscopic problem can be interpolated sufficiently accurately by the EIM model.
Herein lies an open challenge: since the material model used in the above examples requires 7 history parameters and 3 strain parameters to calculate the macroscopic stress, the sample space of the model is 10-dimensional. It is therefore not trivial to obtain a sufficiently dense sampling of the parameter space to capture the path-dependent macroscopic stress.
In the considered computational homogenization problems, the microscopic RVE has to be evaluated numerous times in the deformation and history space. It is therefore necessary to construct an accurate yet compact snapshot space. The empirical interpolation method is expected to be sufficiently accurate for many engineering purposes (within 10% error). However, when a higher accuracy is required, ECM methods are more promising since they are less prone to produce large interpolation errors. In problems with a strong path-dependence, it is more straightforward to construct an accurate snapshot space for ECM models.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons. org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.