An Experimental, Computational, and Statistical Strategy for the Bayesian Calibration of Complex Material Models

The study of solids and structures under extreme conditions often relies on simulations that employ complex material models. These, in turn, are formulated using analytical expressions that depend on parameters whose values need to be adjusted to optimally reproduce the available experimental results and, especially, to maximize out-of-sample predictiveness. In this article we review the process required to calibrate all the parameters of the Johnson-Cook and Zerilli-Armstrong models for a nickel-based superalloy. To this end, we present in a unified fashion the thermomechanical problem, its numerical implementation, a complete experimental campaign that suffices to obtain the material constants, and a Bayesian calibration procedure that can be employed to obtain the optimal values for the model parameters as well as their uncertainty. The advocated methodology is ideally suited to calibrate strain rate-, temperature-, and age-dependent elastoplastic models. The procedure is, however, general enough to be employed as a guideline for other complex calibrations.


Introduction
Novel materials with advanced properties can lead to radically new designs of mechanical parts and structures. Precisely by exploiting these advanced features, parts can be made lighter, stronger, tougher, or better suited for extreme conditions such as high temperatures, corrosive environments, etc. For example, in the case of aircraft engines, new metallic alloys are being used that possess stable mechanical properties under demanding temperature and stress cycles. Defense applications, as well, depend on tough protection materials that must ensure the integrity of structures and personnel while being as light as possible (see, e.g., [1][2][3][4] for some illustrations of the use of new advanced materials).
In order to take advantage of these new materials, mechanical parts often have to be completely redesigned. Computer models are widely employed for that purpose since they allow inexpensive virtual tests that speed up prototyping and rule out far-from-optimal designs. In computer models, geometric modifications can be made rapidly and affordably. As a result, prototype manufacturing and testing are considerably reduced and pushed to later phases of the design process, potentially saving large amounts of money and time.
This common approach is used throughout the mechanical, naval, aerospace, and civil industries. In order to be effective, however, computer models need to be robust and accurate. In the design approach described before, in particular, material models play a critical role. New materials demand new models that need to be validated before they can be employed in actual designs, a process that is simple if the material itself is operated in the linear regime but complex otherwise. Material models suitable for the simulation of extreme conditions, in particular, are difficult to validate, especially because they depend on many parameters whose precise value is unknown a priori. Moreover, the interaction among parameters can be so complex that optimal values for one set of conditions can be terribly inexact for another.
To model metals subjected to large inelastic deformations, variable temperature, and high strain rates, engineers often rely on phenomenological models, most notably the ones due to Johnson-Cook [5] and Zerilli-Armstrong [6], among other similar ones. All of these models predict reasonably well the behavior of metals in a wide range of conditions and have proven their robustness for decades. A common and deleterious feature of all these models is that they depend on a fairly large number of material constants (between 10 and 20, when damage is included), making their calibration cumbersome and delicate [7,8].
In view of the reasons alluded to before, in this article we would like to provide a detailed account of the complete methodology required to calibrate and validate a complex material constitutive law. More precisely, we will use it to model the response of a nickel-based superalloy of the type employed for jet engine parts subject to high temperatures and potential high-velocity impacts. In order to comply with the confidentiality agreement signed with the funding industrial partner of this study, however, all the results are normalized by undisclosed values. While this normalization limits the findings of this article with regard to the material itself, it does not affect any of the methodological contributions that are the goal of this work.
For completeness, the starting point of this article is a succinct description of all the ingredients that participate in a complex solid mechanics model of the type we aim to calibrate. These include finite strain transient thermomechanics, inelastic material modeling, and space/time discretizations, including time-integration methods. These topics can be found scattered in the literature (e.g., [9][10][11] for thermomechanics, [12,13] for the numerical implementation and [14][15][16] for general presentations of inelastic constitutive modeling), but we find convenient to present them together and reap the benefits stemming from a unified notation.
After describing the models that are most often employed for the applications in mind, the chosen calibration procedure is described. This is an active topic of research, given the ubiquity of computer models in all fields of science. A straightforward calibration of the model can be performed by a least-squares minimization of the mismatch between some experimental measurements and model-predicted values [7,17,18]. This approach, however, might require an artificial regularization that affects the calibrated parameters and, moreover, does not provide any information regarding the uncertainty in the calibration. More powerful calibration methods, based on Bayesian inference, have gained popularity across disciplines (e.g., [19][20][21][22][23]) and in particular in solid mechanics and materials science [24][25][26][27][28]. In all of them, the expert's knowledge of the parameter values is accounted for by a prior probability distribution, which is updated so as to make this probability optimally consistent with the available data. In addition, by working with random variables instead of point estimates, Bayesian calibration intrinsically provides uncertainty measures for the fitted parameters.
In this work, we adhere to the Bayesian approach to calibration, using it to obtain stochastic distributions of the parameters for a complex material model. The methodology requires experimental data to feed the process. For that, we describe a complete experimental campaign that can be used to incrementally determine the values of the unknown parameters. These tests will, naturally, include quasistatic and dynamic tests at different temperatures. Moreover, we modify the properties of the material by exposing it to high temperature during a long period of time, a situation that is common, for example, in aircraft engine operations. The softening due to heat exposure is also included in the calibration through additional parameters.
The goal of this article is thus to give a reasonably complete and justified answer to the question: How do I calibrate a complex material model so that it can later be used in simulations? As detailed, any answer to this question involves aspects of mechanics, materials science, numerical methods, experimental mechanics, and statistics. This article will describe all the necessary ingredients in a systematic way, explaining the relationships between them and the unifying role of Bayesian inference. To the authors' knowledge, such a complete account cannot be found in the literature and, together with the bibliographic references provided, it should serve in the future as a guide for new calibrations.
The remainder of the article consists of the following sections. In Sect. 2 we summarize the model we want to calibrate; more specifically, we provide a brief account of the finite strain inelastic thermomechanics, the material models of interest, and the numerical solution of the coupled problem. Section 3 presents the theory of Bayesian calibration and explains the roles of the experimental and numerical data in this approach. In Sect. 4 we describe, step-by-step, the actual calibration procedure and we include information of the experimental campaign that needs to be carried out. Since every calibration needs to be validated, we describe in Sect. 5 tests that are representative of the harsh conditions where the material models investigated will be employed. Finally, Sect. 6 closes the article with a summary of the main results and some final comments.

The Model
The first step in a model calibration is, naturally, the model formulation. For the problems of interest here, the models that we need to build must provide approximate solutions to complete thermomechanical problems. This will involve finite strain thermomechanics, its space/time discretization and the constitutive model (see Fig. 1).

Nonlinear Thermomechanics
We summarize first the governing equations of strongly coupled thermomechanics in the finite strain regime. This initial boundary-value problem needs to accommodate the standard mechanical relations that govern the equilibrium of forces with the equations that describe the evolution of energy and, especially, the coupling effects that take place between the two phenomena. We are particularly interested in modeling rate-dependent and rate-independent plasticity, and the attendant dissipative processes that are responsible for the coupling.

Balance Equations
Nonlinear thermomechanics is governed by the equations of the balance of linear momentum and the balance of energy, and is closed with a constitutive model. Here, we focus on the thermomechanics of materials with internal variables, a class of models that employ one or more ancillary variables to account for the history of states at each point, the quantity and type of which depend on the phenomena they aim to represent (viscosity, plasticity, damage, etc.). As is customary in continuum mechanics, we study a body B_0, an open bounded set in R^3 with points denoted as X, with smooth boundary ∂B_0 and reference density ρ_0 : B_0 → R_+. The kinematics of this body is described by a motion ϕ : B_0 × [0, T] → R^3, where T is the length of the time interval of interest, and we write ϕ_t(X) = ϕ(X, t). At every instant, ϕ_t restricted to Γ_ϕ ⊂ ∂B_0 is known to be equal to a given function φ̄_t. Additionally, the motion is assumed to be smooth enough and its deformation gradient F_t = Dϕ_t must satisfy det(F_t) > 0.
Let us assume that the body is subjected to forces per unit mass given by a vector field B : B_0 × [0, T] → R^3. Then, the material form of the balance of linear momentum equation is given by

ρ_0 V̇_t = ∇_X · P_t + ρ_0 B_t    (1)

in the interior of B_0, with φ̇_t = V_t. In Eq. (1), and below, V_t = V(·, t) corresponds to the material velocity, the notation (˙) refers to the derivative with respect to time, ∇_X · is the material divergence operator, and P is the first Piola-Kirchhoff stress tensor, whose specific relationship with the motion and internal variables is discussed later. Equation (1) needs to be complemented with the equilibrium equation at the traction boundary Γ_T ⊂ ∂B_0, namely

P_t N = T̄_t,

where N is the unit outward normal to the boundary of the reference body and T̄_t is the prescribed traction. Balance of angular momentum implies the symmetry relation P F^T = F P^T.
To state the conservation of energy, consider that the thermodynamic properties of the material at equilibrium derive from its internal energy E, a thermodynamic potential of the form E = Ê(F, ξ, S), where F is the deformation gradient, ξ a set of internal variables, and S the entropy per unit of reference volume. The number and type of internal variables depend on the specific material model and serve to account for dissipative processes such as the ones arising in viscous and plastic phenomena. The functional dependence of Ê on its arguments cannot, in addition, be completely arbitrary. We refer to standard references on thermomechanics for these and other important technical considerations.
The balance of energy equation states that, in a thermomechanical context, the rate of change of the internal energy satisfies

Ė = P · Ḟ − ∇_X · H + H,

where H is the nominal heat flux vector, H represents the heat influx per unit of reference volume, and the dot between tensors or vectors indicates their inner product. Standard arguments based on the Clausius-Planck form of the second law of thermodynamics allow us to identify the relations between the derivatives of the energy with respect to its arguments and other thermodynamic quantities. In particular we have

P = ∂_F Ê,   Θ = ∂_S Ê,   β = −∂_ξ Ê,

where Θ is the absolute temperature and β are the variables that are conjugate to ξ. It is helpful to rewrite the balance of energy in terms of the free energy ψ = ψ̂(F, ξ, Θ). We note that the entropy can be recovered from the free energy with the relation S = −∂_Θ ψ and thus we have

c_0 Θ̇ − Θ M · Ḟ − Θ γ · ξ̇ = −∇_X · H + H + β · ξ̇,    (5)

where we have introduced the stress-temperature tensor M, the dissipative coupling array γ, and the heat capacity c_0 as

M = ∂²ψ/∂Θ∂F,   γ = ∂²ψ/∂Θ∂ξ,   c_0 = −Θ ∂²ψ/∂Θ².    (6)

Inserting the definitions (6) in Eq. (5), we verify that the balance of energy can be equivalently written as

c_0 Θ̇ = −∇_X · H + H + Θ M · Ḟ + (β + Θ γ) · ξ̇,    (7)

which is the form most often employed in thermomechanical simulations. Completing this statement, the boundary of the solid must be split as ∂B_0 = Γ_Θ ∪ Γ_H. On the subset Γ_Θ, Dirichlet boundary conditions on the temperature must be imposed. On the remaining part Γ_H, Neumann boundary conditions for the heat must be included, of the form

−H · N = H̄ on Γ_H,

where H̄ is the heat flux applied from the exterior of the body.
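For readers who want to trace the algebra, the passage from the entropy form of the energy balance to its temperature form can be sketched as follows. This is the standard derivation under the stated constitutive assumptions, written with the sign convention M = ∂²ψ/∂Θ∂F; it is offered as a consistency check, not as a replacement for the rigorous statements in the references.

```latex
\begin{aligned}
&E = \psi + \Theta S, \qquad S = -\partial_\Theta \psi, \qquad
\boldsymbol{P} = \partial_{\boldsymbol{F}}\psi, \qquad
\boldsymbol{\beta} = -\partial_{\boldsymbol{\xi}}\psi,\\[4pt]
&\dot E = \boldsymbol{P}\cdot\dot{\boldsymbol{F}}
          - \nabla_X\cdot\boldsymbol{H} + H
\;\Longrightarrow\;
\Theta\dot S = -\nabla_X\cdot\boldsymbol{H} + H
              + \boldsymbol{\beta}\cdot\dot{\boldsymbol{\xi}},\\[4pt]
&\dot S = -\partial^2_{\Theta\boldsymbol{F}}\psi\cdot\dot{\boldsymbol{F}}
          -\partial^2_{\Theta\boldsymbol{\xi}}\psi\cdot\dot{\boldsymbol{\xi}}
          -\partial^2_{\Theta\Theta}\psi\,\dot\Theta
        = -\boldsymbol{M}\cdot\dot{\boldsymbol{F}}
          -\boldsymbol{\gamma}\cdot\dot{\boldsymbol{\xi}}
          +\frac{c_0}{\Theta}\,\dot\Theta,\\[4pt]
&\Longrightarrow\;
c_0\dot\Theta = -\nabla_X\cdot\boldsymbol{H} + H
              + \Theta\boldsymbol{M}\cdot\dot{\boldsymbol{F}}
              + (\boldsymbol{\beta} + \Theta\boldsymbol{\gamma})
                \cdot\dot{\boldsymbol{\xi}}.
\end{aligned}
```

The first line is the Legendre transform relating internal and free energies; the last line is the temperature form used in the simulations.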
To close this section, we observe that the initial boundary value problem of thermomechanics requires that initial values be provided for the temperature, the deformation, and the material velocity. Hence, for all X ∈ B_0,

ϕ(X, 0) = ϕ_0(X),   V(X, 0) = V_0(X),   Θ(X, 0) = Θ_0(X).

Thermodynamic Relations
Equilibrium thermodynamics provides the constitutive relations that are required to define the stationary properties of the solid. As introduced in the previous section, the fundamental ingredient is the thermodynamic potential, e.g., the internal energy or the free energy, from which all equilibrium relations can be obtained.
In this section we focus on nonlinear thermo-elasto-plastic solid models, and include rate-dependent effects, motivated by the kind of problems that we are interested in studying. From the outset, we accept the multiplicative decomposition of the deformation gradient into an elastic and a plastic part, i.e., F = F^e F^p, and only model isotropic hardening, which we denote as ζ. Hence, the internal variables of all the forthcoming models are ξ = (F^p, ζ). The plastic flow is assumed to be isochoric and hence J = det(F) = det(F^e). Moreover, the free energy function is assumed to be always of the form

ψ = ψ^e + ψ^h + ψ^th + ψ^c,    (10)

where ψ^e is the free energy of the elastic deformations, ψ^h is the free energy of the hardening, ψ^th is the thermal free energy, and ψ^c is the coupling free energy. We will limit our models to isotropic materials, and thus a standard argument gives that ψ^e depends on the elastic deformation only through b^e = F^e (F^e)^T, the elastic left Cauchy-Green deformation, or, equivalently, through the Hencky elastic strain tensor ε^e = ½ log b^e. In order to formulate a plasticity theory in the current configuration it is convenient to work with the Kirchhoff stress tensor τ = P F^T. Using the chain rule it can be shown that, for isotropic materials, τ = ∂ψ/∂ε^e. The other equilibrium relations required, following Eq. (5) and the definition of the free energy, are

S = −∂ψ/∂Θ,   β = −∂ψ/∂ξ.

For concreteness, in all the examples employed in this article, we will always use the same functional form for the free energy, one that is commonly used for modeling metals. More precisely, using the split assumed in Eq. (10), we will consider the standard expressions

ψ^e = κ/2 (tr ε^e)² + μ |dev ε^e|²,   ψ^h = H/2 ζ²,
ψ^th = c_0 [Θ − Θ_0 − Θ log(Θ/Θ_0)],   ψ^c = −3 κ α (Θ − Θ_0) log J,

where κ, μ are, respectively, the bulk and shear moduli, H a hardening parameter, α the thermal expansion coefficient, and Θ_0 a reference temperature.

Kinetic Equations
Inelastic material models of the type considered heretofore are closed with kinetic relations governing the evolution of heat and the internal variables. For the heat transport, we will use a simple Fourier law: the nominal heat flux is related to the material gradient of the temperature by the expression

H = −K ∇_X Θ,

where K is a material conductivity tensor. The spatial heat flux vector h = J⁻¹ F H is related to the spatial gradient of the temperature by the analogous relation

h = −k ∇_x θ,

with k = J⁻¹ F K F^T being the spatial conductivity tensor and θ = Θ ∘ ϕ⁻¹.
To define the elastoplastic behavior, we will use isotropic J_2 yield functions of the form

f(τ, ζ, Θ) = |τ_dev| − √(2/3) σ_y,

where τ_dev ≡ τ − ⅓ tr[τ] I is the deviatoric part of the Kirchhoff stress and σ_y is the yield stress of the virgin material. Then, evolution equations, or flow rules, must be provided for the two internal variables F^p and ζ. A standard argument based on the postulate of maximum plastic dissipation [29] gives that their evolution is governed by

d^p = γ̇ ∂f/∂τ,   ζ̇ = √(2/3) γ̇,    (18)

where γ̇ is the plastic multiplier and d^p is the plastic rate of deformation, defined as the symmetric tensor d^p = sym[Ḟ^p F^{p−1}]. The differential equations (18) must be completed with the Karush-Kuhn-Tucker conditions

γ̇ ≥ 0,   f ≤ 0,   γ̇ f = 0.

Given the specific choice made for the internal variables in plasticity models, i.e. ξ = {F^p, ζ}, and the internal energy (10), the energy Eq. (7) reads

c_0 Θ̇ = −∇_X · H + H + Θ M · Ḟ + (β_p + Θ γ_p) · Ḟ^p + (β_ζ + Θ γ_ζ) ζ̇,    (20)

with the affinities

β_p = −∂ψ/∂F^p,   β_ζ = −∂ψ/∂ζ.

Often, to simplify Eq. (20), a new material constant is introduced [30], the so-called Taylor-Quinney parameter χ, such that

(β_p + Θ γ_p) · Ḟ^p + (β_ζ + Θ γ_ζ) ζ̇ ≈ χ τ · d^p.

It is a common assumption to assign a constant value to χ, typically around 0.7−0.8. See, e.g., [31] for a detailed description and alternative definitions of the parameter. With this definition, the energy equation in temperature form, namely Eq. (20), simplifies to

c_0 Θ̇ = −∇_X · H + H + Θ M · Ḟ + χ τ · d^p,

which is the expression that will be used in our simulations.
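To make the role of the Karush-Kuhn-Tucker conditions concrete, the sketch below implements the classical elastic-predictor/radial-return corrector for J_2 plasticity with linear isotropic hardening, in a small-strain setting for simplicity. This is our own minimal illustration, not the article's implementation; all function and variable names are hypothetical.

```python
import numpy as np

def radial_return(eps, eps_p, zeta, mu, kappa, H, sigma_y):
    """One elastic-predictor/plastic-corrector step for J2 plasticity
    with linear isotropic hardening (small-strain sketch)."""
    I = np.eye(3)
    eps_e_trial = eps - eps_p                      # trial elastic strain
    vol = np.trace(eps_e_trial)
    dev = eps_e_trial - vol / 3.0 * I
    s_trial = 2.0 * mu * dev                       # trial deviatoric (Kirchhoff-like) stress
    norm_s = np.linalg.norm(s_trial)
    f_trial = norm_s - np.sqrt(2.0 / 3.0) * (sigma_y + H * zeta)
    if f_trial <= 0.0:                             # elastic step: KKT satisfied with dgamma = 0
        return kappa * vol * I + s_trial, eps_p, zeta
    dgamma = f_trial / (2.0 * mu + 2.0 / 3.0 * H)  # consistency: f = 0 after the return
    n = s_trial / norm_s                           # return direction, n = df/dtau
    s = s_trial - 2.0 * mu * dgamma * n            # radial return of the stress
    eps_p_new = eps_p + dgamma * n                 # flow rule, d^p = dgamma * n
    zeta_new = zeta + np.sqrt(2.0 / 3.0) * dgamma  # hardening update
    return kappa * vol * I + s, eps_p_new, zeta_new
```

After a plastic step, the updated stress lies exactly on the (hardened) yield surface, which is precisely the discrete counterpart of the consistency condition γ̇ f = 0.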

Temperature-and Rate-Dependent Elastoplastic Material Models
In this work we are concerned with elastoplastic materials of the type described in Sect. 2.1 whose yield stress depends on the temperature, the plastic slip, and the rate of plastic deformation, that is,

σ_y = σ̂_y(ζ, ε̇_p, Θ)    (24)

for some smooth function σ̂_y. This class of rate-dependent plasticity models is widely employed for the study of metals subject to impact and other high strain rate problems. Among the elastoplastic material models encompassed by Eq. (24), we concentrate on the well-known models of Johnson-Cook [32] and Zerilli-Armstrong [6]. In the Johnson-Cook (JC) model, the yield stress is the function

σ_y = (A + B ε_p^n)(1 + C log ε̇_p*)(1 − Θ*^m),    (25)

where A, B, C, n, m are model parameters, ε_p ≡ ζ is the plastic slip, ε̇_p* ≡ ε̇_p/ε̇_0 is the dimensionless plastic slip rate, with

ε̇_p = √(2/3) |d^p|,

and ε̇_0 is a reference strain rate. The homologous or nondimensional temperature Θ* is defined as

Θ* ≡ (Θ − Θ_0)/(Θ_m − Θ_0),    (27)

where Θ_m and Θ_0 are, respectively, the melting and reference temperatures for the given material. The second material model we will use is due to Zerilli-Armstrong (ZA) and employs the following expression for the yield stress:

σ_y = C_1 exp(−C_3 Θ + C_4 Θ log ε̇_p*) + C_2 √ε_p exp(−C_3 Θ + C_4 Θ log ε̇_p*) + C_5 ε_p^n,    (28)

where C_1, C_2, C_3, C_4, C_5 and n are material constants, and a constant athermal stress can be added as yet another material parameter. In contrast with the Johnson-Cook model, the structure of the yield stress function in the Zerilli-Armstrong model is motivated by physical characteristics of the metal under consideration. For example, FCC metals always have C_1 = C_5 = 0, while in BCC materials the condition C_2 = 0 holds.
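To fix ideas, the Johnson-Cook expression (25) takes only a few lines to evaluate. The sketch below uses made-up, merely illustrative constants (no relation to the superalloy studied in this article) and clamps the rate term at the reference rate, a common implementation safeguard for quasistatic states.

```python
import math

def jc_yield_stress(eps_p, eps_p_rate, theta, A, B, C, n, m,
                    eps_rate_0, theta_0, theta_melt):
    """Johnson-Cook yield stress, Eq. (25): hardening x rate x thermal factors."""
    rate_star = max(eps_p_rate / eps_rate_0, 1.0)   # clamp so log(.) >= 0
    theta_star = (theta - theta_0) / (theta_melt - theta_0)
    hardening = A + B * eps_p ** n
    rate = 1.0 + C * math.log(rate_star)
    thermal = 1.0 - theta_star ** m
    return hardening * rate * thermal

# Made-up constants, roughly steel-like in order of magnitude (MPa, K):
sy = jc_yield_stress(eps_p=0.05, eps_p_rate=1e3, theta=600.0,
                     A=350.0, B=275.0, C=0.022, n=0.36, m=1.0,
                     eps_rate_0=1.0, theta_0=293.0, theta_melt=1800.0)
```

The multiplicative structure is what makes the calibration separable in practice: quasistatic isothermal tests isolate A, B, n; rate sweeps isolate C; and temperature sweeps isolate m.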

Model Enhancements
Rate-and temperature-dependent material models such as JC and ZA are widely employed in mechanics. As explained, their functional form is chosen so as to capture the effects on the yield stress of the hardening, the strain rate, and the temperature softening. In the same spirit, these models can be further enhanced to include in a phenomenological fashion additional terms that account for relevant physical phenomena.

Modeling Ageing
Some materials, when subjected to high temperatures for long periods of time, reduce their bearing capacity [33]. To account for this in a simple way, we propose to replace the expressions (25) and (28) for the yield stress with age-dependent counterparts for the JC and ZA models, respectively. In both expressions, a_1 and b_1 are material parameters, and E is a normalized age accounting for the operational life to which the material point has already been subjected. The normalized exposure is defined as E = LMP_exp − LMP_ref, where LMP_exp and LMP_ref are the Larson-Miller parameters (LMP) for the exposed and reference materials, respectively [34]. We note that the expression selected for the ageing softening allows for an initial regime in which no softening takes place.
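The Larson-Miller parameter itself has the standard form LMP = Θ (C + log₁₀ t), with Θ the absolute temperature, t the exposure time in hours, and C a material constant (traditionally taken near 20). A minimal sketch of the normalized exposure E, with hypothetical function names and an assumed default for C:

```python
import math

def larson_miller(theta_kelvin, hours, C=20.0):
    """Standard Larson-Miller parameter, LMP = T (C + log10 t).
    C ~ 20 is the traditional default; the true value is material-specific."""
    return theta_kelvin * (C + math.log10(hours))

def normalized_exposure(theta_exp, hours_exp, theta_ref, hours_ref, C=20.0):
    """Normalized age E = LMP_exp - LMP_ref, as used in the ageing term."""
    return (larson_miller(theta_exp, hours_exp, C)
            - larson_miller(theta_ref, hours_ref, C))
```

By construction E vanishes for the reference condition and grows with hotter or longer exposures, which is what allows the ageing term to leave the virgin material unaffected.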

Modeling Damage
Among all the damage models, one of the most commonly employed for high strain rates is also due to Johnson and Cook [35]. This model accounts for the cumulative damage of a metal subjected to dynamic loading in a wide range of conditions, ultimately defining a plastic strain capacity. The fundamental ingredient of this model, referred to later as JCF, is the definition of a scalar damage variable

D = Σ Δε_p / ε_f,

where Δε_p are the increments of accumulated plastic strain and ε_f is a material parameter representing the equivalent plastic strain to fracture. In our examples we will choose the fracture strain to be a function of the stress triaxiality σ*, defined as

σ* = tr[σ] / (3 σ_eq),

where tr[·] is the trace operator and σ_eq is the von Mises equivalent stress. Following Bao and Wierzbicki [36], the equivalent strain to fracture is defined piecewise in terms of the triaxiality, where D_1 to D_9 are material parameters, ε̇* is the dimensionless plastic strain rate, and Θ* is the homologous temperature, as defined in Eqs. (25) and (27), respectively. The fracture model itself scales the yield stress by the factor 1 − D, effectively reducing the strength of the material as it is damaged. It must be noted that, owing to the different strain rate hardening behavior under tension and compression, two different parameters D_4 are employed in the piecewise fracture model, interpolating between them in the range −0.3 ≤ σ* < 0.0 to avoid discontinuities.
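Since the full piecewise fracture locus is not reproduced here, the sketch below accumulates the damage variable D using the classical five-parameter Johnson-Cook fracture strain as a stand-in (the article's piecewise expression extends this form to negative triaxialities following Bao and Wierzbicki). Constants and names are purely illustrative.

```python
import math

def eps_fracture(sigma_star, rate_star, theta_star, D1, D2, D3, D4, D5):
    """Classical Johnson-Cook fracture strain: triaxiality x rate x thermal
    factors. Stand-in for the piecewise Bao-Wierzbicki extension."""
    return ((D1 + D2 * math.exp(D3 * sigma_star))
            * (1.0 + D4 * math.log(max(rate_star, 1.0)))
            * (1.0 + D5 * theta_star))

def accumulate_damage(increments, D=0.0, **jc_params):
    """Cumulative damage D = sum of plastic-strain increments over the
    (state-dependent) fracture strain; D = 1 signals failure."""
    for d_eps_p, sigma_star, rate_star, theta_star in increments:
        D += d_eps_p / eps_fracture(sigma_star, rate_star, theta_star, **jc_params)
    return D
```

Because ε_f is re-evaluated at every increment, the accumulation naturally tracks non-proportional loading paths in which the triaxiality changes over time.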

Numerical Methods
The approximate solution of the coupled thermomechanical problem requires a spatial and a time discretization.
Among the existing alternatives, the finite element method is probably the most common avenue to introduce a spatial approximation. Then, the method of lines can be used to integrate in time the ordinary differential equations that result from the first step.

Finite Element Discretization
To formulate the finite element equations we start by casting the initial boundary value problem of thermomechanics in weak form. Using the same symbols introduced in this section, in the weak formulation of thermomechanics one looks for an admissible deformation ϕ_t and a temperature field Θ_t such that

∫_{B_0} ρ_0 V̇ · δϕ dV + ∫_{B_0} P · ∇_X δϕ dV = ∫_{B_0} ρ_0 B · δϕ dV + ∫_{Γ_T} T̄ · δϕ dA,
∫_{B_0} c_0 Θ̇ δΘ dV − ∫_{B_0} H · ∇_X δΘ dV = ∫_{B_0} (H + Θ M · Ḟ + χ τ · d^p) δΘ dV + ∫_{Γ_H} H̄ δΘ dA,    (34)

for all admissible deformation and temperature variations δϕ, δΘ. We note that the complete definition of the problem requires, additionally, the Dirichlet boundary conditions for the deformation and temperature, the initial conditions, and the constitutive laws for the thermomechanical material. The finite element spatial semidiscretization is simply obtained by replacing the solution and weighting spaces by finite dimensional subspaces, both obtained as linear combinations of (known) piecewise polynomials, the so-called shape functions. More precisely, let B_0 = ∪_e K_e be a partition of the body into disjoint closed polyhedra K_e, or finite elements, connected at nodes collectively forming the nodal set N. Then, given piecewise polynomials {N_a} with compact support, and assuming for simplicity that the same elements are employed for the mechanical and the thermal balance equations, the deformation and temperature fields are approximated as

ϕ_h(X, t) = Σ_a N_a(X) ϕ_a(t),   Θ_h(X, t) = Σ_a N_a(X) Θ_a(t),    (35)

where ϕ_a, Θ_a are the nodal deformation and temperature values, both depending on time. Likewise, the variations of these fields are approximated as

δϕ_h(X) = Σ_a N_a(X) δϕ_a,   δΘ_h(X) = Σ_a N_a(X) δΘ_a,    (36)

where now the nodal values of the variations are independent of time. Replacing the approximations (35)-(36) in the weak forms (34) we obtain, after standard manipulations, the coupled system of equations

M Φ̈ + F^int(Φ, Θ) = F^ext,    (37a)
C Θ̇ + H^int(Φ, Θ) = H^ext,    (37b)

where Φ, Θ are the global vectors of nodal deformations and temperatures, M is the mass matrix, C is the capacity matrix, F^int, F^ext are the internal and external forces, respectively, and H^int, H^ext are the internal and external heat fluxes, respectively. This procedure is standard and details are thus omitted.
We note that the system (37) consists of two strongly coupled ordinary differential equations. The mechanical equation is a second order differential equation since it comes from the (hyperbolic) wave equation; the thermal part, however, is only of first order since it is the discretization of the (parabolic) heat equation.

Time Integration Methods
The initial boundary-value problem described before yields, after the finite element spatial discretization, a system of nonlinear, coupled ordinary differential equations. In order to approximate its solution, we describe next suitable time integration schemes, without their complete analysis. More precisely, we consider first an explicit time integration scheme based on the central difference method and then an implicit integrator based on Newmark's method. We refer to other works for the numerical analysis of their convergence [15,37].
To define both time integration methods, we consider first a partition of the time interval of interest into instants 0 = t_0 < t_1 < · · · < t_N = T, with time step size Δt_k = t_{k+1} − t_k. As customary, we denote as f_k the approximation of any variable f at time t_k, that is, f_k ≈ f(t_k).

Hyperbolic-Parabolic Explicit Integration
Starting with the explicit scheme, the selected approach for the time integration of the mechanical equation is the central differences method. To describe it, let Φ_k, V_k, A_k denote, respectively, the nodal deformations, velocities, and accelerations at time t_k. Similarly, let Θ_k, Ω_k denote the nodal temperatures and their rates, also at t_k. Then, Eq. (37a) can be integrated using the recurrence formula

Φ_{k+1} = Φ_k + Δt V_k + Δt²/2 A_k,
A_{k+1} = M⁻¹ [F^ext_{k+1} − F^int(Φ_{k+1}, Θ_{k+1})],
V_{k+1} = V_k + Δt/2 (A_k + A_{k+1}),    (38)

for k = 0, 1, . . ., and initial acceleration A_0 = M⁻¹ [F^ext_0 − F^int(Φ_0, Θ_0)]. We note that in order for the method to be explicit, M must be the lumped mass matrix of the body.
The energy Eq. (37b) must also be integrated in time. As in the case of the mechanical time marching method, let C be now the lumped heat capacity matrix and employ the update relations

Θ_{k+1} = Θ_k + Δt Ω_k,
Ω_{k+1} = C⁻¹ [H^ext_{k+1} − H^int(Φ_{k+1}, Θ_{k+1})],    (39)

with initial temperature rate Ω_0 obtained from the identity C Ω_0 = H^ext_0 − H^int(Φ_0, Θ_0). We note that Eqs. (38) and (39) have to be updated simultaneously, and with the same time step size. In fact, as in all explicit integrators, the size of the latter might be a limiting factor for the solution of the problem using the proposed scheme, since standard stability arguments [38] indicate that in all steps

Δt ≤ min_e { h_e √(ρ_e/(λ + 2μ)), h_e²/(2 ‖K_e‖) },

where h_e is the minimum element length, K_e is the thermal diffusivity tensor in the element (assumed to be constant), ρ_e is the density, and λ and μ are the Lamé parameters. In particular, the dependence on h_e² of this bound sets a stringent restriction on the time step size for fine meshes.
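One synchronized explicit step of the two recurrences (38)-(39) can be sketched as follows. The internal force and flux routines are placeholders standing in for the finite element assembly, and all names are hypothetical.

```python
def explicit_step(Phi, V, A, Theta, Omega, dt, M_lumped, C_lumped,
                  f_int, f_ext, h_int, h_ext):
    """One synchronized explicit step: central differences for the mechanical
    part, forward Euler for the thermal part (lumped M and C throughout)."""
    Phi_new = Phi + dt * V + 0.5 * dt**2 * A          # mechanical predictor
    Theta_new = Theta + dt * Omega                    # thermal forward-Euler update
    A_new = (f_ext - f_int(Phi_new, Theta_new)) / M_lumped  # lumped => trivial solve
    V_new = V + 0.5 * dt * (A + A_new)                # velocity corrector
    Omega_new = (h_ext - h_int(Phi_new, Theta_new)) / C_lumped
    return Phi_new, V_new, A_new, Theta_new, Omega_new
```

Because the lumped matrices are diagonal, each step costs only one internal force/flux evaluation and elementwise divisions, which is why the explicit route dominates short impact simulations despite the stability bound on Δt.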

Hyperbolic-Parabolic Implicit Integration
Explicit methods such as the one described in Sect. 2.5.1, while extremely convenient for the simulation of short dynamic problems like impacts, are not suited for the approximation of long transient simulations. In these cases, implicit methods prove more practical since they can employ much larger time step sizes.
For the implicit solution of coupled thermomechanical problems we will employ a combination of Newmark's method for the second-order structural semidiscrete equation and the generalized midpoint rule for the thermal equation. As in the case of the explicit method, the starting point is the coupled system of ordinary differential equations (37) and the definition of the rates V_k, A_k, Ω_k. Newmark's method introduces the updates

Φ_{k+1} = Φ_k + Δt V_k + Δt² [(1/2 − β) A_k + β A_{k+1}],
V_{k+1} = V_k + Δt [(1 − γ) A_k + γ A_{k+1}],    (41)

where β and γ are parameters of the method. On the other hand, using the generalized midpoint rule for integrating the energy balance equation we introduce the update

Θ_{k+1} = Θ_k + Δt [(1 − α) Ω_k + α Ω_{k+1}],    (42)

where α is the symbol chosen for the only parameter of the method.
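For a linear scalar model problem the implicit updates can be solved in closed form, which makes the structure of Eqs. (41)-(42) easy to verify before tackling the full nonlinear system. The sketch below is illustrative only; names and the decoupled test problems are our own.

```python
def newmark_step(u, v, a, dt, M, K, f, beta=0.25, gamma=0.5):
    """One Newmark step, Eq. (41), for the linear scalar problem M*a + K*u = f.
    beta = 1/4, gamma = 1/2 is the unconditionally stable trapezoidal choice."""
    u_pred = u + dt * v + dt**2 * (0.5 - beta) * a      # predictor
    a_new = (f - K * u_pred) / (M + K * beta * dt**2)   # closed-form linear solve
    u_new = u_pred + dt**2 * beta * a_new
    v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
    return u_new, v_new, a_new

def midpoint_step_linear(theta, lam, dt, alpha=0.5):
    """Generalized midpoint rule, Eq. (42), applied to theta' = -lam * theta;
    the implicit update is solved in closed form."""
    return theta * (1.0 - dt * (1.0 - alpha) * lam) / (1.0 + dt * alpha * lam)
```

For the undamped linear oscillator, the β = 1/4, γ = 1/2 choice conserves the discrete energy exactly, a useful sanity check on any implementation; α = 1 in the thermal update recovers backward Euler.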
The implicit solution of Eq. (37) with the time-stepping methods (41) and (42) has to be done with a nonlinear solver, for example, Newton-Raphson's. For that, the coupled mechanical and thermal equations are solved in a step-by-step fashion, solving iteratively at each equilibrium instant the linearized equations. Hence, given a trial solution at time t_{k+1} consisting of the pair (Φ_{k+1}, Θ_{k+1}), each iteration computes a correction (ΔΦ, ΔΘ) from the linearized system

K̃_ΦΦ ΔΦ + K̃_ΦΘ ΔΘ = −R_Φ,k+1,
K̃_ΘΦ ΔΦ + K̃_ΘΘ ΔΘ = −R_Θ,k+1,

where the blocks of the tangent stiffness matrix are evaluated at the known solution and (R_Φ,k+1, R_Θ,k+1) are the incremental residual vectors. From Eqs. (37), (41), and (42) it follows that the tangent blocks have the expressions

K̃_ΦΦ = M/(β Δt²) + ∂F^int/∂Φ,   K̃_ΦΘ = ∂F^int/∂Θ,
K̃_ΘΦ = ∂H^int/∂Φ,   K̃_ΘΘ = C/(α Δt) + ∂H^int/∂Θ.

The previously presented implicit integration scheme is unconditionally stable for linear problems.

Bayesian Calibration
To solve a practical problem in thermomechanics, the model of Sect. 2 needs to be supplemented with material-dependent parameters for the constitutive relation. In the case of complex materials, determining their optimal value is far from simple, a process that is referred to as model calibration.
An obvious choice for model calibration consists in selecting those parameters that minimize the squared error between certain experimental results and the corresponding model predictions [7,18]. This optimization path can provide good fits for some experimental results, but it might be ill-posed if there is insufficient data. Moreover, it sometimes requires regularization techniques that, in turn, establish an artificial dependence of the parameters on the regularization employed.
An alternative approach for model calibration that is theoretically sound and unconditionally well-posed is based on Bayesian inference and Gaussian processes. In addition to optimal parameter values, this technique provides a rigorous avenue to quantify the uncertainty in the model [19,20,39]. Bayesian calibration does not give the optimal values of the model parameters, but rather the parameter probability distribution that is optimally consistent with our pre-existing knowledge and the experimental evidence. As we will explain, moreover, this approach is flexible enough to encompass model inadequacy and experimental errors, all within the same framework. In particular, we will employ the ideas originally proposed by Kennedy and O'Hagan [19], which have been successfully employed in very different fields (see, e.g., [40][41][42]).

Model and Metamodel
To describe the Bayesian calibration process in general terms, let us denote as x an array of input data that can be controlled before performing an experiment or a field measurement. In the context of mechanical calibration, these can be the temperature, the loading, the impact velocity, the geometrical dimensions of the specimen, etc. For a set of variables x the output of the experiment or observation, assumed to be a scalar for simplicity and indicated as z, satisfies

z = ζ(x) + ε,

where ζ refers to the physical phenomenon and ε is the experimental error. A model or simulator of ζ is a function or a computer code η that uses x as input and also depends on some parameters t that are not physically measurable but nevertheless modulate its result. For example, in the case of the material models described in Sect. 2.2, these could be the Johnson-Cook constants A, B, C, n or the thermal diffusivity, the density, etc. In general, thus, the measurements verify

z = η(x, θ) + δ(x) + ε,

where δ(x) is the model inadequacy, an error that accounts for the inherent limitation of any simplified model, and θ is the value of the parameters t that best represents the physical system under consideration and whose precise value is the goal of the calibration.
The key assumption in the Bayesian approach to calibration is that the parameters θ are a multidimensional random variable whose distribution can only be approximated by combining our pre-existing knowledge about it, if any, and experimental measurements. More precisely, the prior knowledge about the parameters is corrected using Bayes' theorem so as to obtain a posterior probability distribution that is as consistent as possible with the experimental observations and the assumed errors. See Fig. 2.
In practical situations, the evaluation of the posterior probability requires the use of Markov Chain Monte Carlo methods (see, e.g., [43,44]). These numerical methods need to evaluate the simulator a large number of times in order to obtain a stationary probability distribution that approximates the posterior. For the kind of problems we envision in this work, the simulations will involve quasistatic or transient finite element solutions with thousands of degrees of freedom that are computationally too expensive to be run many times. Acknowledging this common situation, Bayesian calibration can be performed replacing η with a model surrogate η̂ that is much cheaper to run than the original simulator, at the expense of depending on additional parameters that need to be calibrated as well.
Surrogates or metamodels for computer models are often referred to as emulators. They can be based on polynomial chaos expansions, radial functions, neural networks, etc. [45][46][47][48][49]. In the context of Bayesian calibration, Gaussian processes are often favored since their structure fits naturally in the framework outlined above and can be used to represent not only the computer model, but its inadequacy as well.

Gaussian Processes
Gaussian processes are sets of random variables indexed by y ∈ R^d such that any finite subset of them has a multivariate Gaussian distribution [50]. They are completely characterized by their mean and covariance functions. The standard notation to denote that a scalar function f is distributed as a Gaussian process of mean m and covariance k is

f ~ GP(m, k).

Both the mean and the covariance are usually given analytical expressions that depend on so-called hyperparameters, which are precisely the additional unknowns that need to be calibrated. For instance, the mean is often assumed to be of the form

m(y) = β_1 g_1(y) + β_2 g_2(y) + ... + β_M g_M(y),

where g_i : R^d → R are known real functions and β_i are unknown coefficients collected in the array β = {β_1, β_2, ..., β_M}. The covariance is the most critical ingredient of the Gaussian process. Often, it is chosen to be stationary, meaning that it depends only on the difference y − y′, and even isotropic, in which case the covariance depends on the evaluation points only through the distance |y − y′|, hence it can be written as

k(y, y′) = σ² ρ(|y − y′|),

where σ² is a hyperparameter modulating the spread and ρ : R⁺ → R⁺ is a nonnegative, differentiable kernel. Several alternative expressions are customarily employed for the latter; in this work we have chosen the Matérn C^{5/2} function, commonly used in statistical fitting, with expression

ρ(r) = (1 + √5 r/ψ + 5 r²/(3 ψ²)) exp(−√5 r/ψ),

where ψ is a length-scale hyperparameter. Thus, the Gaussian process described above depends on a collection of hyperparameters χ = (β, σ², ψ).
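A minimal sketch of the isotropic covariance with the Matérn 5/2 kernel, and of sampling the resulting zero-mean process on a grid (the hyperparameter values are invented):

```python
import numpy as np

def matern52(r, psi):
    """Matern 5/2 correlation rho(r) for distance r and length scale psi."""
    s = np.sqrt(5.0) * np.abs(r) / psi
    return (1.0 + s + s**2 / 3.0) * np.exp(-s)

def cov_matrix(y, sigma2, psi):
    """Isotropic covariance k(y, y') = sigma^2 rho(|y - y'|) on 1-D points y."""
    r = y[:, None] - y[None, :]
    return sigma2 * matern52(r, psi)

# Draw one sample path on a grid (a small jitter stabilizes the
# near-singular covariance matrix).
y = np.linspace(0.0, 1.0, 50)
K = cov_matrix(y, sigma2=1.0, psi=0.3)
path = np.random.default_rng(1).multivariate_normal(np.zeros(50), K + 1e-10 * np.eye(50))
```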

Gaussian Processes as Emulators
To explain the relationship between Gaussian processes and calibration, we consider as before a physical process for which we have a model η depending on the controllable variables x and parameters t whose optimal value θ we aim to find. Since the model is expensive to evaluate, we replace it with a Gaussian process of the type described in Sect. 3.2. If we define w = (x, t), then the process η̂(w, χ_η) will be the surrogate of η(x, t), and will depend on the hyperparameters χ_η = (β_η, σ²_η, ψ_η). The discrepancy function δ in Eq. (46) has an unknown expression. Hence, it proves convenient to model it also as a Gaussian process δ̂(x, χ_δ) with hyperparameters χ_δ = (β_δ, σ²_δ, ψ_δ). As a consequence of these two assumptions, the relationship between the control variables and the observations, Eq. (46), is now replaced by

z = η̂(w, χ_η) + δ̂(x, χ_δ) + ε. (51)

We note that this new relation replaces the costly simulator and the unknown discrepancy error with two Gaussian processes with pre-determined analytical expressions, but depending on unknown hyperparameters χ_η, χ_δ whose value, or probability distribution, has to be found in the calibration. The complete description of the model requires a specific choice for the experimental error. For simplicity, ε will be chosen to be a random variable with a normal distribution of zero mean and unknown variance λ². This is not only a sensible choice, but also simplifies some of the later algebraic manipulations.
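The emulator idea can be illustrated by conditioning a zero-mean Gaussian process on a handful of simulator runs; the posterior mean then interpolates the outputs. The example below is one-dimensional and uses an invented stand-in for the simulator (the real emulator takes the joint input w = (x, t)):

```python
import numpy as np

def matern52(r, psi):
    # Matern 5/2 correlation for distance r and length scale psi.
    s = np.sqrt(5.0) * np.abs(r) / psi
    return (1.0 + s + s**2 / 3.0) * np.exp(-s)

w_train = np.array([0.0, 0.3, 0.6, 1.0])   # design points (simulator inputs)
z_train = np.sin(2.0 * w_train)            # stand-in for expensive simulator runs
sigma2, psi = 1.0, 0.4

# Precompute K^{-1} z via a linear solve (a small nugget keeps K invertible).
K = sigma2 * matern52(w_train[:, None] - w_train[None, :], psi)
alpha = np.linalg.solve(K + 1e-8 * np.eye(w_train.size), z_train)

def emulate(w_new):
    """Posterior mean of the GP at w_new: k(w_new, W) K^{-1} z."""
    return sigma2 * matern52(w_new - w_train, psi) @ alpha
```

Once fitted, `emulate` replaces the expensive simulator inside the MCMC loop; in the full scheme its hyperparameters are themselves inferred.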

Available Data
The goal of the Bayesian inference is to determine the most likely distributions of the hyperparameters χ_η, χ_δ and the variance λ² appearing in Eq. (51), given our best knowledge about them (the prior) and the observations. More precisely, there are two types of data available for the calibration: those coming from experimental measurements, and those coming from evaluations of the costly model η.
The numerical observations (z_{ex+1}, z_{ex+2}, ..., z_{ex+num}) are obtained from model computations whose inputs are chosen from a design of experiments. In Eq. (51), this latter set of data corresponds to known values of w and x with λ² = 0, but unknown hyperparameters.

Priors
Bayes' theorem can be used to rigorously update the prior information we have about any unknown random variable, given some observed samples. In the case of the Gaussian process described, the starting point is thus a set of prior probability distributions for the unknown variables (θ, χ_η, χ_δ, λ²). Denoting as π the joint prior probability of these variables, and assuming for simplicity that they are all independent, we can write

π(θ, χ_η, χ_δ, λ²) = π_θ(θ) π_η(χ_η) π_δ(χ_δ) π_λ(λ²). (52)

In this equation, π_θ refers to the prior probability of the parameters that we wish to calibrate; π_η, π_δ refer, respectively, to the probability of all the hyperparameters in the two Gaussian processes η̂, δ̂; lastly, π_λ is the prior probability for the variance of the experimental error. A prior probability density function has to be proposed for each of the unknown model parameters and hyperparameters. Regarding the former, information may be obtained from the literature or expert opinion. Bayes' theorem shows that the posterior probability will correct the prior with each available experimental data point. However, given the limited amount of experimental tests that can be carried out, the choice of prior can have a non-negligible effect on the calibrated probability for the model parameters.
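Under the independence assumption above, the joint log-prior is simply a sum of one term per block of unknowns. A sketch with invented densities (normal priors for the parameters and hyperparameters, a unit-rate exponential prior for the error variance):

```python
import math

def log_normal_pdf(x, mu, sd):
    """Log-density of a univariate normal distribution."""
    return -0.5 * ((x - mu) / sd) ** 2 - math.log(sd * math.sqrt(2.0 * math.pi))

def log_prior(theta, chi_eta, chi_delta, lam2):
    """log pi = log pi_theta + log pi_eta + log pi_delta + log pi_lambda.
    All densities here are illustrative placeholders."""
    lp = sum(log_normal_pdf(t, 1.0, 0.5) for t in theta)       # pi_theta
    lp += sum(log_normal_pdf(c, 0.0, 1.0) for c in chi_eta)    # pi_eta
    lp += sum(log_normal_pdf(c, 0.0, 1.0) for c in chi_delta)  # pi_delta
    lp += -lam2 if lam2 > 0 else float("-inf")                 # exponential pi_lambda
    return lp
```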

Bayes Rule
Given the observed data z = (z_1, ..., z_ex, z_{ex+1}, ..., z_{ex+num}) for the input variables w = (w_1, ..., w_{ex+num}) in Eq. (51), Bayes' rule states that the conditional probability of the unknown random variables of the model satisfies

p(θ, χ_η, χ_δ, λ² | z) ∝ L(z | θ, χ_η, χ_δ, λ²) π_θ(θ) π_η(χ_η) π_δ(χ_δ) π_λ(λ²), (53)

where L is the likelihood function and we have assumed that the model parameters are independent as in Eq. (52). The actual evaluation of the conditional probability in Eq. (53) requires the normalization of the right-hand side, which is precisely what Markov Chain Monte Carlo methods accomplish.
To conclude the Bayesian inference, hence, the likelihood needs to be calculated. Since finite-dimensional random variables sampled from a Gaussian process have a multivariate normal distribution, the likelihood L can be calculated in closed form. Noting that the right-hand side of Eq. (51) is a Gaussian process itself, we conclude that

L(z | θ, χ_η, χ_δ, λ²) = N(z; m, K),

with

m = m_η + m_δ, K = Σ_η + Σ_δ + λ² I,

where m_η, m_δ are the mean functions of the Gaussian processes η̂, δ̂ evaluated at w_i, x_i, respectively. Likewise, Σ_η and Σ_δ are the two covariance matrices of the Gaussian processes evaluated at pairs (w_i, w_j) and (x_i, x_j), respectively. Expression (53) gives the joint posterior probability density of all the unknown parameters in the model, that is, θ, the hyperparameters of η̂, δ̂, and the experimental error. Since we are only interested in the posterior probability of the parameters θ, these being the only ones in the original model η, the probability (53) needs to be marginalized as in

π(θ | z) = ∫ p(θ, χ_η, χ_δ, λ² | z) dχ_η dχ_δ dλ².

The steps of the sequential calibration are summarized, for convenience, in Box 1. More details on the whole calibration procedure can be found, for example, in [19,20,51].

Box 1. One step, k, of the sequential calibration:
1. Select the subset θ^(k) of model parameters to be calibrated, fixing the previously calibrated ones at their means, collected in θ̄.
2. Collect the experimental data z_i, i = 1, ..., ex, for the relevant tests.
3. Obtain synthetic data z_i = η^(k)(w_i, θ_i^(k); θ̄), for i = ex+1, ex+2, ..., ex+num.
4. Choose prior probability distributions π_θ^(k), π_χη, π_χδ, π_λ.
5. Using the priors and the pairs (w_i, z_i), i = 1, ..., ex+num, compute the joint posterior probability distribution for θ^(k), χ_η, χ_δ, using MCMC.
6. Marginalize the probability distribution for the parameters θ^(k).
7. Append θ^(k) to θ̄.
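The MCMC evaluation of the posterior can be sketched with a minimal random-walk Metropolis algorithm on a toy problem: calibrating a single parameter θ in z = θ x + ε from noisy data. All numbers are invented, and the emulator and discrepancy hyperparameters of the full Kennedy-O'Hagan scheme are omitted:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 20)
z = 2.0 * x + rng.normal(0.0, 0.05, x.size)   # synthetic observations, theta_true = 2

def log_post(theta, lam=0.05):
    """Gaussian likelihood times a broad N(0, 10^2) prior on theta."""
    resid = z - theta * x
    return -0.5 * np.sum((resid / lam) ** 2) - 0.5 * (theta / 10.0) ** 2

theta, chain = 0.0, []
lp = log_post(theta)
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.1)        # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta)

posterior_mean = np.mean(chain[1000:])          # discard burn-in
```

The retained portion of the chain approximates the (here unnormalized) posterior; its histogram plays the role of the calibrated parameter distribution.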

Calibration Process
In this section we describe the complete methodology employed to calibrate the temperature- and strain rate-dependent elastoplastic constitutive relations considered in this work and summarized in Sect. 2.2, when they are employed to model a metallic alloy. For that, we will use experimental data, Bayesian methods of the type introduced in Sect. 3, and finite element transient simulations using the discretizations presented in Sect. 2.4. The focus of this work is the identification of the complete sequence of steps required to calibrate a complex material model, not the material itself, whose actual experimental results are presented in non-dimensional form. Moreover, we restrict the calibration to the parameters in the JC and ZA models, fixing the remaining parameters of the mechanical model (density, conductivity, Taylor-Quinney factor, ...) at their nominal values.
The suggested methodology proceeds incrementally, calibrating the model parameters progressively, employing experimental data and simulation for restricted classes of tests in which a subset of the model parameters have known or irrelevant value. With this in mind, quasistatic or slow isothermal tests will be employed first to calibrate the equilibrium parameters in the models. Then, the temperature in the tests will be modified to gauge the corresponding parameters. Next, impact and fracture tests will be considered to calibrate the strain rate dependence and fracture parameters. To measure the impact of ageing, these tests and calculations will be repeated with samples of varying exposure times.

Experimental Campaign
We describe first the sequence of controlled experiments that need to be carried out to calibrate the parameters in the material models. Tables 1 and 2 provide the complete list of all the tests performed in a prototypical experimental campaign. Most tests are repeated to account for the variability of the process, which is later modeled by the Bayesian inference.
Following the notation of Sect. 3, we carry out tests to generate data that will later be employed for the calibration. More precisely, data points are of the form x = (ε, ε̇, T, E, σ*), or a subset of these. The symbols refer, respectively, to the strain, the strain rate, the absolute temperature, the normalized exposure age, and the triaxiality.
In theory, we could strictly follow the program set up in Sect. 3, calibrating all the parameters in the JC and ZA models at once. However, to simplify the Bayesian calibration, which is otherwise extremely costly, we propose instead to perform incremental calibrations, working at each step on a reduced set of model parameters while holding the remaining ones at fixed values. In order to do so, we outline a strategy where the parameters whose value is fixed either have no influence whatsoever on the obtained data, or have already been calibrated. See below and Sect. 4.4 for details.
Next we describe the tests that have been performed indicating clearly the inputs for each of them and the measured outputs. Tables 1 and 2 gather all the experiments for ease of reference.
1. Quasistatic traction tests are carried out in a universal testing machine employing unnotched specimens at room temperature (T_0 = 20 °C, corresponding to type I experiments) and without exposure (E = E_0). In each of the tests, the material is pushed to its ultimate stress, and force-displacement pairs are converted to nominal uniaxial stress-strain data, which are recorded for the calibration. The input variables for these tests are thus ε, which is measured by the test machine, ε̇ = 0 s⁻¹, T = T_0, E = E_0, and σ* = 1/3.
2. Split Hopkinson bar tests are performed at room temperature on unnotched specimens (see type II in Table 1 for set-up data). The input data are ε, measured during each test, ε̇ = 10³ s⁻¹, T = T_0, E = E_0, and σ* = 1/3.
3. Quasistatic traction tests on unnotched, unexposed specimens are repeated at elevated temperatures (see Table 1 for the precise values of the temperatures). As before, the controlled variables for these tests are ε, measured by the test machine, ε̇ = 0 s⁻¹, T, set according to Table 1, E = E_0, and σ* = 1/3.
4. Additional quasistatic traction and dynamic tests on unnotched specimens are repeated at room temperature, now under different exposure conditions corresponding, respectively, to type III and IV tests in Table 2. The strain ε is controlled by the testing machine, ε̇ = 0 s⁻¹, the temperature is T_0, σ* = 1/3, and E − E_0 = 1.0 and E − E_0 = 1.9.
5. To assess the influence of the stress triaxiality on the material behavior, several quasistatic traction tests are performed with non-cylindrical specimens (see Fig. 3) with stress distributions that follow Bridgman's equation [52]. In these tests, hence, ε̇ = 0, T = T_0, the triaxiality σ* and the exposure E are known from the geometry and history of the specimens, respectively, and ε is controlled by the testing machine. In Tables 1 and 2, these experiments correspond to types V, VI, IX and X. Similarly, quasistatic compression and shear tests are performed under the same values of the control variables but with specimens as indicated in Fig. 4. These tests correspond to types VII and VIII in Tables 1 and 2.
6. The traction, compression, and shear tests are completed at medium and high strain rates, all at room temperature. In these, both ε and ε̇ are controlled by the experiment, as well as T ≡ T_0, the exposure E, and the triaxiality σ*. These tests are indicated with the Roman numerals XI-XXI in Tables 1 and 2. Some of these tests are repeated at high temperatures, as also indicated in Tables 1 and 2.

Testing Facilities
We provide next a brief description of the testing facilities that we used in our experimental campaign. We present it for completeness and to give an idea of the type of equipment that is required to carry out a full experimental characterization of a metallic alloy under high strain rate and temperature conditions. Obviously, similar machines could alternatively be used. Compression, shear and quasistatic tensile tests were performed using an INSTRON 3384 universal testing machine. To measure deformation during compression and tensile testing, a linear variable differential transformer (LVDT) was used. The digital image correlation (DIC) technique was utilized to determine local shear deformation during shear testing. A basecoat of white paint was applied on the surface of each shear sample, followed by black speckles. Photos of the samples during shear testing were recorded at a high frequency (which depended on the cross-head speed) by a digital camera. These images were uploaded into the Vic-2D 2009 DIC software for full-field strain analysis and generation of plastic deformation maps. High strain rate testing was carried out using a Split Hopkinson pressure bar. For high temperature strain rate testing, a specially designed furnace was used. To control the sample temperature, a K-type thermocouple was directly attached to the specimens. The deformation of the sample during high strain rate testing was tracked using a Phantom VEO 710L high-speed camera. All testing parameters are summarized in Tables 1 and 2.

Table 1 Number and type of experimental tests to be used in the calibration procedure, considering non-exposed material
Drop-weight impact tests were performed using an INSTRON CEAST 9350 testing system and 50 × 50 × 1 mm³ samples. A hemispherical punch with a diameter of 16 mm was employed for testing, made of hard alloy steel with a hardness of 60 HRC. The samples were clamped by a pneumatic system. A multipurpose lubricant (Super Lube, Syncolon©) was applied to the contact surfaces.

The ballistic tests were performed using a special set-up. A K-type thermocouple was attached to the frame holding the plate in the furnace. The plate was heated to the testing temperature and soaked for 15 min to ensure a stable temperature. A single-stage gas gun operated with compressed air was used to shoot cylindrical projectiles made of high carbon bright steel (BS-1407). The hardness of the projectiles was in the range of 58-61 HRC; they had a diameter of 6 mm and a length of 30 mm. They were shot with different initial velocities to search for the ballistic limit. The speed of the projectile before impact and the speed of the plug (if the projectile penetrated through) were measured by a Phantom VEO 710L high-speed camera. The accuracy of the speed measurements was ±1%.

The Priors
The first step for the Bayesian calibration is the definition of the prior probability distributions of all random variables in the material model and the Gaussian processes as discussed in Sect. 3.5. See also Fig. 2.
The prior distributions for the variance of the experimental error λ² and the hyperparameters χ_δ of the discrepancy function are given in Table 3, while the rest of the hyperparameters are selected using modularization [53], and thus do not need an explicit prior distribution. The prior distributions for the model parameters are given in Table 4. These functions are selected based on preliminary measurements [54,55], noting that any initial inaccuracies will be corrected by the Bayesian calibration. All the values in these tables are adimensionalized.

Incremental Bayesian Calibration
As previously mentioned, the Bayesian calibration strategy selected in this work is incremental, to avoid the large computational costs that a monolithic calibration would entail. More specifically, the parameters in the constitutive models will be split into disjoint sets that will be studied independently. Each subset collects the parameters that model one physical aspect of the material (stiffness, temperature dependency, strain rate dependency, etc.). Once a set of parameters is calibrated, its value will be fixed at its mean for the subsequent steps. Once this simplification is accepted, and acknowledging that it reduces the ability to quantify the uncertainty of the interaction between calibrated parameters, we proceed to produce in silico samples, i.e., z_{ex+1}, z_{ex+2}, ..., z_{ex+num}, following the notation of Sect. 3. As in the experimental campaign, the goal is to generate pairs (x, z), where x = (ε, ε̇, T, E, σ*) is determined by the numerical model and initial conditions, and z = σ is the stress computed after integrating the thermomechanical equations and the model response. The generation of numerical samples falls into two categories. In the first one, samples are generated at the material point level, integrating the material response for a time-parameterized sequence of strains and temperatures with fixed exposure. These computations are carried out with MUESLI, an open-source library of material models that includes the constitutive laws of JC and ZA [56]. These calculations are inexpensive and can be used to explore the control space for a large number of strains, rates, temperatures, and exposure times.
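As a flavor of these material-point evaluations, the following sketch computes the yield stress of the standard Johnson-Cook model, σ_y = (A + B ε_p^n)(1 + C ln ε̇*)(1 − T*^m). The modified JC model of the paper additionally depends on exposure (omitted here), and the reference values ε̇_0, T_room and T_melt below are assumptions:

```python
import math

def jc_flow_stress(eps_p, eps_rate, T, A, B, C, n, m,
                   eps0=1.0, T_room=293.0, T_melt=1600.0):
    """Standard Johnson-Cook yield stress. The rate term is clamped to 1
    below the reference rate, and T* to 0 below room temperature."""
    hardening = A + B * eps_p ** n                            # strain hardening
    rate_term = 1.0 + C * math.log(max(eps_rate / eps0, 1.0)) # rate sensitivity
    T_star = (T - T_room) / (T_melt - T_room)                 # homologous temperature
    thermal_term = 1.0 - max(T_star, 0.0) ** m                # thermal softening
    return hardening * rate_term * thermal_term
```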
The second type of computations are designed to mimic the more complex experiments that involve non-trivial stress triaxialities. These calculations employ complete finite element discretizations of the specimens (see Fig. 5 for an illustration), solving the finite strain thermomechanical problem described in Secs. 2.1-2.3 and using implicit time integration schemes for the coupled equations, as defined in Sect. 2.5. These space and time approximations inevitably introduce further errors in the model but we will assume that they can be subsumed into the discrepancy error. To ensure that these errors are as small as possible, mesh and time step size tests have to be performed once for every set-up. By doing so, the calibration process employs true stress/strain data obtained from experiments and the finite element solution of virtual tests with complete specimens. This approach bypasses the need to determine, for each state of the specimen, the complex strain/stress tensors at a single material point, as we did in the first set of samples described above. Instead, the Bayesian calibration infers the optimal values of the material parameters (which are the same in all material points) in such a way that the overall specimen virtual response fits best the experimental measurements.
The whole calibration methodology is outlined next, noting that for each simulation type we obtain between 10 and 20 data points. In all cases, the model parameters are sampled from the prior PDFs (see Table 4) using the Latin Hypercube method [57].
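A basic Latin Hypercube design on the unit cube can be sketched as follows; the mapping to parameter ranges at the end uses invented bounds:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Latin Hypercube design on [0, 1]^d: each dimension is split into
    n_samples equal strata and every stratum is sampled exactly once."""
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_dims):
        rng.shuffle(u[:, j])  # decorrelate the dimensions
    return u

# Map the unit-cube design to two parameter ranges (bounds are placeholders).
rng = np.random.default_rng(3)
unit = latin_hypercube(10, 2, rng)
lo, hi = np.array([0.5, 0.1]), np.array([1.5, 0.5])
samples = lo + unit * (hi - lo)
```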
1. First, to calibrate the JC parameters A, B and n, the ZA parameters C_0 and C_2, as well as the elastic constants E and ν, experimental data are taken from tests of type I at room temperature. Numerical computations at the material point level are performed with the same control variables and values of the parameters sampled from their prior distributions (see Table 4). In all cases, the output variable z = σ is the average stress in the central cross section.
2. Next, the calibration of the parameters C and C_4 in the JC and ZA models, respectively, is undertaken. For that, the previously calibrated parameters of both material models are fixed at values corresponding to their means. To calibrate the JC parameter, experimental results from tests of type I and II at room temperature are used, as well as numerical simulations of these same experiments. In the latter, the uncalibrated parameters are sampled according to the PDFs given in Table 4. In the case of the ZA parameter, the experiments include all the temperature values for experiments I and II indicated in the same table.
3. In order to calibrate parameters m and C_3 of the JC and ZA models, respectively, we use again the experimental and numerical data points corresponding to tests of type I at all the temperatures of Table 1, fixing the previously calibrated parameters at their means and sampling the two parameters in the numerical simulations using the PDFs of Table 4. Since the alloy under study has an FCC crystalline structure, the constants C_1 and C_5 are exactly zero and need not be calibrated. See Sect. 2.3.
4. Last, to obtain the optimal distributions for the ageing parameters a_1 and b_1, we fix the previously obtained parameters at their means and collect experimental and numerical data points for test types I and III at room temperature. As in previous steps, the parameters to be calibrated are sampled for the numerical tests from the distributions in Table 4.
Table 5 (fragment) Elastic constants: E = 1.00 (2.64 × 10⁻³), ν = 1.00 (5.00 × 10⁻³)

The steps described up to here provide experimental and numerical data points that suffice to complete the calibration of the two material models, barring damage. As discussed in Sect. 2.3, an isotropic damage model is added to the constitutive behavior using a scalar function that scales down the yield stress in terms of the accumulated plastic strain and a material parameter ε_f, which depends on the triaxiality through the relation (33). Hence, ultimately, to calibrate the damage behavior we need to find the values of the constants D_1, D_2, ..., D_9 from experiments with different triaxialities.
The Johnson-Cook fracture model (JCF) is independent of the exposure. Hence, the value of this control variable will be fixed when obtaining samples, and the input data will consist of tuples x = (ε, ε̇, T, σ*). Moreover, since the goal of this second calibration is to model fracture, the output variable that will be measured (experimentally and numerically) and used to calibrate the model is z = ε_f, the accumulated plastic strain at failure. In the traction tests (both physical and numerical), this variable is calculated as ε_f = ln(A_0/A_f), where A_0 and A_f are the cross-section areas of the specimen at the start of the test and at the fracture point, respectively [54]. In the shear tests, the accumulated fracture strain is obtained from DIC data (see Sect. 4.2); in the simulations, its value is taken to be equal to its maximum in the elements on the fractured section.

Fig. 6 Prior vs. posterior probability distribution functions of parameters A ≡ θ_1, B ≡ θ_2, n ≡ θ_3, in the modified JC model

5. For the calibration of the parameters D_1-D_3 and D_6-D_9, numerical simulations must be performed that mimic test types I, III and V-X.
6. To calibrate parameters D_{4,1} and D_{4,2}, new data points have to be obtained from simulations of all test types (I-XXI), only at room temperature, using the means of the already known JCF fracture parameters.
7. Finally, parameter D_5 is calibrated using data obtained from experiments and numerical simulations of test types I, III, V, VI, VII, IX and X, at all temperatures, fixing, as before, the previously calibrated parameters at their means.

Once all the tests have been performed and the data have been collected, a complete calibration of the model parameters, including their uncertainties, can be performed following the ideas of Sect. 3. See Fig. 2.
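The area-based fracture strain used for the traction tests, ε_f = ln(A_0/A_f), is a one-liner:

```python
import math

def fracture_strain(a0, af):
    """Accumulated plastic strain at failure from the initial and final
    cross-section areas: eps_f = ln(A0 / Af)."""
    if a0 <= 0.0 or af <= 0.0:
        raise ValueError("cross-section areas must be positive")
    return math.log(a0 / af)
```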

Calibration Results
An incremental calibration, as described in Sect. 4.4, was done for the JC and ZA models applied to the superalloy under analysis. As explained, the methodology starts by calibrating the elastoplastic part of the response and subsequently, the fracture model.
Using the priors of Tables 3 and 4, the results for the first part of the calibration are reported in Table 5 and in Figs. 6, 7, 8 and 9. The mean values of the parameters given in Table 5 will be used in subsequent simulations. The posterior distributions not only provide optimal values for the calibrated parameters, but indicate that the procedure is able to significantly reduce our uncertainty with regards to these parameters. In the case of the ZA model, however, Fig. 8 reveals that, whereas the calibration for C_0, C_2 and C_3 is able to reduce their variance, the results for C_4 are surprisingly uninformative.

Fig. 9 Prior vs. posterior parameter probability distributions for the ageing parameters a_1 ≡ θ_10, b_1 ≡ θ_11, and the (elastic) Young modulus and Poisson ratio E ≡ θ_12, ν ≡ θ_13
The posterior probability distribution of parameter C_4 has a large variance that hints at problems in the calibration of the strain rate sensitivity of the ZA model [55]. Since the material is clearly strain rate-dependent, the failure in calibrating C_4 must be the consequence of deficiencies in our procedure. A critical analysis of the results suggests that the experimental and synthetic data points employed do not suffice to constrain this parameter. The inaccuracy just discussed regarding the calibration of C_4 must be seen as an advantageous feature of Bayesian calibration. A pointwise calibration of this material could, potentially, identify an optimal value for C_4 without any assessment of its variance. In contrast, the Bayesian calibration not only gives an optimal value for this parameter, but signals a large uncertainty in its calibration, indicating that the calibration is incomplete and that the ZA model with the obtained parameters should not be used for arbitrary strain rate and temperature conditions. More precisely, unless additional experimental and numerical samples are employed in the calibration, we can only be sure that the optimal value of the parameter probably lies in a wide interval, depending on the test. Since with the available data we cannot rely on the ZA model, we will discard it as a valid constitutive model for all operational ranges.
The calibration is concluded finding the posterior distributions for the fracture model parameters, using only the JC model for the elastoplastic yield stress. These distributions are given in Table 5 and depicted in Figs. 10, 11, and 12. Let us note that in Sect. 2 we assumed that the fracture model for JC and ZA was the same. Hence, the calibrated fracture parameters can be used with either constitutive model. We will see, however, in Sect. 5 that the limitations of the ZA model identified before preclude this model from being employed.

Intermediate Results of the Incremental Calibration
To verify the outcome of the incremental calibration, we compare next the experimental measurements with the model predictions at intermediate steps of the calibration. Also, the results of the simulation code will be compared against the values predicted by the meta-models obtained. For both the modified JC and the ZA models, data from tensile tests at different strain rates, temperatures, and exposure conditions will be compared with the simulated results employing the calibrated values. The comparisons hence correspond to models of increasing complexity in the incremental calibration. To illustrate the optimality of the fit, the experimental data will be compared, at every stage, with the results of computations obtained with four perturbed parameter sets sampled at a distance of ±3σ and ±5σ from the calibrated values, according to each posterior PDF found in the calibration. In this procedure, chosen for its simplicity, the parameters are taken as uncorrelated random variables. Since, in reality, the parameters are correlated, this is not equivalent to sampling the joint parameter distribution within a confidence interval of size ±3σ and ±5σ .
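Treating the parameters as uncorrelated, the four perturbed parameter sets at ±3σ and ±5σ can be built directly from the posterior means and standard deviations (the numbers below are invented placeholders):

```python
import numpy as np

# Hypothetical posterior means and standard deviations of three parameters
# (e.g., A, B, n in nondimensional form).
means = np.array([1.00, 0.48, 0.31])
sigmas = np.array([0.02, 0.03, 0.01])

# One shifted parameter set per perturbation level, each parameter moved
# independently (i.e., correlations between parameters are ignored).
perturbed_sets = {f"{s:+d} sigma": means + s * sigmas for s in (-5, -3, 3, 5)}
```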
The comparison between the experimental data and those obtained from the simulations employing the best set of parameters is shown in Figs. 13, 14, 15 and 16. Tables 6 and 7 summarize the RMSE of the model predictions for the optimal and perturbed parameter sets, confirming that the calibrated values yield, on average, the smallest errors.
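For reference, the RMSE reported in these tables is the usual root-mean-square error between predictions and measurements; a minimal implementation:

```python
import math

def rmse(pred, obs):
    """Root-mean-square error between model predictions and measurements."""
    if len(pred) != len(obs) or not pred:
        raise ValueError("inputs must be non-empty and of equal length")
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(pred))
```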
To carry out a similar comparison in the case of the modified JCF model, tensile, compression and shear fracture tests at RT and quasi-static conditions are considered. The results are shown in Fig. 17. Again, it is checked through Table 8 that the RMSE for the calibrated set is the smallest.
Next, the meta-model produced as an outcome of the Bayesian inference will be compared with the simulation code calibrated with the best parameter sets. For the JC and the ZA material models, we show the σ − ε curves obtained with the finite element code at quasi-static conditions and room temperature with the meta-model obtained in the calibration of the A, B, n and C 0 , C 2 model parameters (only the mean values of the Gaussian processη are considered). These results are shown in Fig. 18, and the good agreement between the two sets of predictions should be noted.
In the case of the JCF model, we will compare pairs (σ*, ε_f) obtained as the outcome of the simulations at quasistatic, room temperature, and varying triaxiality conditions, employing the JCF with the best set of parameters, against the predictions of the meta-model. More precisely, the latter will be obtained using the mean values of the Gaussian process that results from the inference step that calibrates parameters D_1-D_3 and D_6-D_9. The results are shown in Fig. 19: whereas the meta-model has a non-negligible error at all data points, it qualitatively captures the dependence of the fracture strain on the triaxiality.
The previous results illustrate that the Bayesian calibration is capable of extracting valuable information from the experimental data and incorporating it into the optimal parameter selection, including its uncertainty. The final test for the predictiveness of the calibrated models must be done, however, using out-of-sample data points. Only by using experimental results not accounted for in the calibration can one be certain of the generality of the approach. This aspect will be explored next.

Validation and Discussion
After a model has been calibrated, with Bayesian techniques or otherwise, it needs to be validated with out-of-sample experiments that can verify its predictiveness. We will show next two complex experiments and compare their results with the predictions provided by numerical simulations using the calibrated model.

Fig. 16 Experimental vs. simulated data at quasi-static/room temperature (left) and quasi-static/high temperature (760 °C) (right), with an exposure condition corresponding to an LMP = E_0 + 1.9, using the modified ZA model with the set of best parameters found. Stress values are adimensionalized

Drop-Weight Impact Tests
In the first validation, we study drop-weight impact experiments in which a rigid punch hits a plate. We analyze the plastic strain and temperature increase in the plate as a function of the punch energy and plate thickness. The numerical models employ finite elements and an explicit coupled thermomechanical solver, as detailed in Sect. 2.5. The material model is JC, with parameters taken as the mean values of the calibrated distributions. The results of the low-velocity impact tests are summarized in Table 9 and illustrated in Fig. 20. According to these data, the average difference between the experimental and simulated plastic strain is 8.32%, while for the temperature increase the error is 2.59%, using a Taylor-Quinney parameter χ = 0.79. Both figures indicate good agreement between the calibrated model and the experiments.
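The temperature increase compared above follows from the standard Taylor-Quinney assumption that a fraction χ of the plastic work converts into heat, ΔT ≈ (χ/(ρ cp)) ∫ σ dε. A minimal sketch of this estimate is given below; the density, specific heat, and flow curve are generic stand-ins for illustration, not the actual properties of the alloy under study.

```python
import numpy as np

def adiabatic_temp_rise(eps, sigma, chi=0.79, rho=8200.0, cp=435.0):
    """Temperature rise from plastic work: dT = chi/(rho*cp) * integral(sigma deps).

    chi : Taylor-Quinney coefficient (fraction of plastic work converted to heat)
    rho : density [kg/m^3]          (stand-in value)
    cp  : specific heat [J/(kg K)]  (stand-in value)
    sigma in Pa, eps dimensionless.
    """
    w_plastic = np.trapz(sigma, eps)   # plastic work per unit volume [J/m^3]
    return chi * w_plastic / (rho * cp)

eps = np.linspace(0.0, 0.5, 51)
sigma = 1.0e9 + 0.5e9 * eps            # illustrative linear-hardening flow curve [Pa]
dT = adiabatic_temp_rise(eps, sigma)
print(round(dT, 1))
```

For impacts that are fast relative to heat conduction, this adiabatic estimate is the quantity the coupled solver tracks locally at each integration point.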

Ballistic Impact Tests
High-velocity, ballistic impact tests are carried out to validate the calibrated model. Specifically, we consider the determination of the ballistic limit, the lowest impact velocity leading to complete perforation of the plate. The average error of the ballistic limit predictions using the JC model is 3.53%. In fact, it was observed that the calibrated model overestimates the strength of the thick plates but underestimates it for the thin samples. This could be due to the fact that the failure modes of the two types of plates are significantly different: whereas the thick plates exhibit large shear strains before plugging, the thin plates fail due to tearing. After repeating the simulations with several mesh densities, we have ruled out that the differences in the failure predictions are due to the discretization. Instead, we speculate that the fracture model has been calibrated in such a way that it cannot accurately predict both failure modes with the same parameters, and must reach a compromise. Let us note, however, that recent works [58][59][60][61] report large average errors when reproducing similar ballistic tests (up to 52.1%). We conclude that the approach described in the current work is fairly accurate.

As explained in Sect. 4, the Bayesian calibration has revealed that the ZA constitutive model is not appropriate for modeling the alloy under study at high strain rates and temperatures. We carried out, however, simulations of the ballistic impact tests with the calibrated ZA parameters and the same fracture model obtained with the JC calibration. The simulations (wrongly) predicted plate perforation at all velocities, confirming the conclusion of Sect. 4.
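Numerically, the ballistic limit can be bracketed by a simple bisection on the impact velocity, rerunning the (expensive) impact simulation at each midpoint. A sketch of this search is shown below, with a hypothetical perforates(v) predicate standing in for a full finite element run; the threshold velocity used here is invented for illustration.

```python
def ballistic_limit(perforates, v_low, v_high, tol=1.0):
    """Bisection for the lowest velocity that perforates the plate.

    perforates : callable returning True if impact at velocity v perforates
    v_low      : velocity known NOT to perforate [m/s]
    v_high     : velocity known to perforate [m/s]
    tol        : velocity resolution [m/s]
    """
    while v_high - v_low > tol:
        v_mid = 0.5 * (v_low + v_high)
        if perforates(v_mid):
            v_high = v_mid
        else:
            v_low = v_mid
    return 0.5 * (v_low + v_high)

# Stand-in for the FE simulation: illustrative true limit at 312.5 m/s
v50 = ballistic_limit(lambda v: v >= 312.5, 200.0, 400.0, tol=0.5)
print(round(v50, 1))
```

Each call to the predicate costs one full transient simulation, so the logarithmic convergence of bisection matters in practice.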

Summary and Main Conclusions
In this article we have described a complete methodology to perform the Bayesian calibration of a metallic alloy modeled with the Johnson-Cook and Zerilli-Armstrong constitutive relations, including damage and ageing softening. To present it, we have identified all the tests that are required to determine, progressively, all the model parameters. This experimental campaign includes standard quasistatic and dynamic tests that provide sufficient information for the two models. A combination of experimental and computational results is finally employed in the Bayesian calibration to obtain probability distributions for the model parameters.

Table note: exposure increments are relative to un-exposed specimens; entries whose error is abnormally large are marked and rejected as inconsistent.

Fig. 23 Relative error of the predicted ballistic limits for the thick plates employing the JC+JCF models
The purpose of this work has been to explain, in a complete fashion, how to calibrate a model for a new material. Figure 25 summarizes the methodology advocated in this article, collecting its three ingredients graphically: first, the nonlinear mechanics model itself, used to provide numerical samples; then the experimental tests, which feed in the experimental data; and finally the Bayesian calibration with its main parts, combining all inputs to yield optimal values of the model parameters as well as their uncertainties.
The methodology outlined has been used for the parameter calibration of a nickel-based superalloy. The article only provides non-dimensional results for this material, but shows that the strategy followed serves to obtain the sought calibration, hence proving that it can also be employed for other materials. Remarkably, the Bayesian inference is able to reveal that the Zerilli-Armstrong constitutive model is ill-suited for reproducing the behaviour of the chosen alloy, at least for the strain rates and temperature ranges under consideration. Finally, the calibrated model is validated with ballistic impact tests, verifying that the proposed route leads to a predictive model in complex scenarios.

Fig. 24 Relative error of the predicted ballistic limits for the thin plates employing the JC+JCF models
The calibrated models show certain limitations in the toughness predictions for high-velocity impact tests. This should be attributed to the assumed expression of the fracture model, which is necessarily simplistic. More elaborate models might improve the predictiveness of the material models at the expense of further calibration. In any case, and based on the experimental campaign selected, the calibration procedure yields optimal values of the model parameters.
The outputs of the Bayesian calibration are probability distributions for all model parameters, but also for the experimental error. Moreover, fast surrogate models for the constitutive laws and the discrepancy errors have been obtained as by-products. In this work we have detailed the calibration process and then employed the mean values of the parameter distributions for the validation computations. This is the main outcome of the process, but not the only one. For example, it could be interesting to study the global sensitivity of selected experimental outputs to the model parameters, and the uncertainty in critical quantities of interest induced by the uncertainty in the parameters. Using the surrogates already obtained, it would be fairly inexpensive to calculate sensitivities and propagated uncertainties. These goals fall, however, outside the scope of the current work.
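The kind of uncertainty propagation mentioned here can be sketched by sampling the calibrated parameter distributions and pushing each sample through a cheap evaluation of the JC flow stress, σ = (A + B εⁿ)(1 + C ln(ε̇/ε̇₀)). The posterior marginals below are invented for illustration only; in practice one would sample the actual joint posterior and evaluate the fitted surrogate instead of the closed-form law.

```python
import numpy as np

rng = np.random.default_rng(0)

def jc_flow_stress(A, B, n, C, eps, eps_rate, eps_rate0=1.0):
    """Johnson-Cook flow stress (isothermal form): (A + B eps^n)(1 + C ln(rate ratio))."""
    return (A + B * eps ** n) * (1.0 + C * np.log(eps_rate / eps_rate0))

# Invented posterior marginals (mean, std), non-dimensional, for illustration only
n_samples = 10_000
A = rng.normal(1.00, 0.02, n_samples)
B = rng.normal(0.60, 0.05, n_samples)
n = rng.normal(0.40, 0.03, n_samples)
C = rng.normal(0.015, 0.002, n_samples)

# Propagate parameter uncertainty to the flow stress at one loading condition
sigma = jc_flow_stress(A, B, n, C, eps=0.1, eps_rate=1.0e3)
print(f"mean = {sigma.mean():.3f}, std = {sigma.std():.3f}")
```

The sample standard deviation of sigma is the propagated uncertainty in the quantity of interest; replacing the closed-form call with the GP surrogate gives the same workflow for quantities that require a full simulation.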