Tolerance-based Pareto optimality for structural identification accounting for uncertainty
Abstract
Structural parameter identification often requires an estimate, at least in a qualitative fashion, of the uncertainty of the solution. This uncertainty quantification should account for the sensitivity of the response to the sought parameters, the error in the measurements and the repeatability of the test. In this paper, repeatability is taken into account within a multi-objective framework, while a non-standard definition of Pareto dominance, based on a given tolerance in the objective satisfaction, allows one to consider uncertainty in the experimental data. The solution of the identification is given not as a single value, but as a region of the parameter space which is compatible with the data and accounts for uncertainties and for the sensitivity of the response to the model parameters. The procedure is applied to an experimental test on a masonry panel, showing its effectiveness in discriminating identifiable parameters from those affected by higher uncertainty.
Keywords
Inverse problems · Multi-objective optimisation · Sensitivity analysis · Genetic algorithms

1 Introduction
Simulation of the mechanical response of structural systems has significantly improved in accuracy in recent decades, thanks to the combined availability of more sophisticated theories and increased computational resources. Nearly any feature of structural behaviour may now be represented by a numerical model, and topics such as viscoplasticity [1], large deformations [2], contact [3] and fracture mechanics [4] are familiar to researchers and practitioners. However, any numerical representation is accurate only insofar as the parameters entering its formulation are realistic. In many cases, standard material tests are not able to provide the information needed to fully characterise the mathematical model of the mechanical process. This is particularly true when the number of model parameters is large or their physical meaning is not straightforward, or when exhaustive material tests cannot be performed for practical reasons, e.g., in-situ characterisation. In such cases, inverse analysis techniques [5, 6], aimed at inferring material properties/boundary conditions (parameter identification) from the knowledge of the response of the structure under certain loading conditions, can be effective in estimating material parameters. Different procedures may be recognised according to the loading condition, i.e., dynamic [7] or static [8], and the numerical method used to obtain the solution, i.e., direct [9, 10] or indirect [6, 11], in which a functional of the mismatch (discrepancy) between experimental and computed responses is minimised.
In general, the results of an identification process are unavoidably affected by some uncertainty, mainly deriving from (a) propagation of model or measurement errors, (b) low sensitivity of the response to the sought parameters and (c) scattering coming from the aleatory nature of repeated tests. Application of stochastic-based (Bayesian) approaches [5, 12] to inverse analysis allows one to quantify the propagation of uncertainty from the data to the parameter estimation [12]. However, probabilistic approaches suffer from two limitations that restrict their use, namely (1) they strongly depend on the prior assumptions about the nature of the uncertainties, and (2) fully describing the posterior probability density function is a costly operation in terms of computational effort, especially when the number of sought parameters increases (curse of dimensionality). For this reason, deterministic approaches [13] are more widely used, with the notable drawback that the process outputs a single result and uncertainty is not directly quantified. Introducing some form of uncertainty quantification without resorting to full Bayesian approaches is thus attractive even in deterministic applications.
In [14], fuzzy arithmetic, coupled with a formulation making use of genetic algorithms and the particle swarm optimisation method, is used to estimate the uncertainty caused by measurement noise in modal parameters, and the procedure is validated by means of a numerical application on a frame structure. In [15], interval updating is proposed as a means to estimate the uncertainty in the solution based on the measured modal response. Uncertainty is also quantified in [16, 17] through an identification procedure based on interval FEM. The method provides an estimate of the intervals of material parameter values which are consistent with a prior assumption about the uncertainty of the measurements. The presence of model and measurement errors, which can be reduced but never removed, has two important effects on the optimisation problem: (1) the discrepancy function has a non-zero global minimum, and (2) the minimum-discrepancy solution may be shifted from the true value. This implies that the real solution may not correspond to the global minimum of the discrepancy function, and other solutions with similar or greater discrepancy values should be considered as well. In [18], families of model parameters that predict the observed data within the same tolerance are considered as equivalent solutions and analysed to quantify the confidence to assign to the given identification output and the risk in the prediction. In this sense, uncertainty quantification is related to the general area of calibration and validation [19, 20]. Consistently, in [21] a procedure involving parameter identification using calibration, testing and validation of an Artificial Neural Network is developed, and a parent solution is detected.
Based on this, an ensemble of offspring solutions is created, considering normal, log-normal and uniform statistical distributions for the material and geometrical parameters; finally, the probability of failure of the system is assessed following common procedures of reliability analysis.
The need to improve the usual approach, in which a single solution showing maximum fidelity to the recorded data is returned, is also recognised in [22], with the observation that, in minimising discrepancy, compensations between various forms of uncertainties and errors become inevitable. In this respect, the authors use info-gap theory and multi-objective genetic algorithms (MOGA) to search for a solution which is the best compromise in terms of fidelity and robustness.
In this paper, an identification procedure is proposed and described. Its main features are:

An optimisation procedure able to handle multiple inputs (test responses) in a multi-objective optimisation process, so as to take test repeatability into account;

A non-standard definition of Pareto dominance, accounting for the resolution below which two objective values cannot be distinguished from each other because of data errors;

A postprocessing phase, in which the analysis of the optimisation results provides the information needed to determine the uncertainty in the estimation.
The estimate of the model parameters is given not as a single value (as in deterministic inverse analysis), nor as a probability density function (as in probabilistic methods), but as a region of the parameter space which is compatible with the available data, in the sense that will be defined later. All the elements in this region will be considered as solutions of the inverse problem, given the data.
2 Methodology
2.1 Overview of the deterministic inverse problem
Generally, the computed response is deterministic, and thus given p and x, the response is univocally evaluated.
2.2 Data from different sources
2.2.1 Multi-objective optimisation and Pareto dominance
The Pareto Front is the set of all solutions which are not dominated by any other, and represents the general solution of the identification problem. From the PF, the analyst can select a unique solution a posteriori if needed, as shown, for instance, in [25] and [26] for the identification of a bridge under ambient vibration and of phenomenological models for steel members, respectively.
Having turned the search for a unique solution into the tracking of a set of equally acceptable solutions, some authors have proposed using these solutions to define an uncertainty range for the sought parameters. For example, [27] solved the inverse problem of detecting damage in truss structures by multi-objective optimisation and plotted the PF solutions as histograms in the parameter space to define uncertainty ranges for the parameters. The same approach was proposed in [28] for the detection of damage in plates, where all individuals in the Pareto Front are considered as solutions. The multi-objective approach produced a diverse set of solution estimates, which clustered near the "true" damage locations even in the presence of significant measurement errors. The rationale behind this approach is the following [29]. The PF consists of a region of the solution space whose members best approximate the experimental responses, in the sense defined by the concept of non-domination. Each element in the PF has a different degree of fit to each test response but, unless a ranking of the tests is preliminarily defined, there is no reason to prefer one solution over another. The PF solutions may be investigated in the parameter space (Fig. 1), where they identify an uncertainty region which may be analysed through simple statistical tools. The dispersion of a parameter, its average location and possible correlations with other variables may be easily detected by postprocessing the PF at the end of the analysis.
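This kind of post-processing can be sketched with NumPy. The parameter names and the PF samples below are synthetic, purely for illustration: one poorly identified parameter, one parameter correlated with it, and one narrowly identified parameter.

```python
import numpy as np

# Hypothetical post-processing of Pareto-front solutions in the parameter
# space: each row is one PF solution, each column one model parameter.
def summarise_pf(pf_params):
    """Return mean, sample standard deviation and correlation matrix."""
    pf = np.asarray(pf_params, dtype=float)
    mean = pf.mean(axis=0)                # average location of each parameter
    std = pf.std(axis=0, ddof=1)          # dispersion (uncertainty indicator)
    corr = np.corrcoef(pf, rowvar=False)  # correlations between parameters
    return mean, std, corr

# Synthetic PF: a widely spread parameter, a correlated one, a narrow one.
rng = np.random.default_rng(0)
e_m = rng.uniform(16.0, 20.0, 50)           # poorly identified (wide spread)
r = 0.05 * e_m + rng.normal(0, 0.01, 50)    # strongly correlated with e_m
e_b = rng.normal(11.7, 0.05, 50)            # well identified (narrow spread)
mean, std, corr = summarise_pf(np.column_stack([e_m, r, e_b]))
```

Reading `std` column by column immediately flags which parameters the PF leaves unconstrained, and off-diagonal entries of `corr` reveal compensating parameter pairs.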
2.2.2 Tolerance-based definition of Pareto dominance
While the definition of uncertainty based on the Pareto Front distribution is not a completely new idea, it should be recognised that its application to real cases may yield unrealistic results. For example, let us consider the situation depicted in Fig. 2, where a hypothetical PF is shown. Both points P_{1} and P_{2} belong to the PF but, while point P_{2} is clearly better than P_{1} according to objective ω_{2}, the gain in objective ω_{1} when shifting from P_{2} to P_{1} is hardly visible. In other words, while both numerically belong to the PF, intuitively P_{2} is "more optimal" than P_{1}.
The effect of relaxing the notion of Pareto dominance depends on the shape of the original PF. This is shown in Fig. 3, where typical PF configurations of a two-objective problem are displayed. There, ∆ω_{i} is the absolute value of the difference in objective i when going from point P_{1} to point P_{2} (optimal according to objectives ω_{1} and ω_{2}, respectively), while ε_{i} is the tolerance in the i-th objective. In Fig. 3a, the effect of the non-zero resolution ε_{i} > ∆ω_{i} is to increase the spread of the solutions. Conversely, in Fig. 3b, the original PF is characterised by nearly horizontal and nearly vertical regions, in which a small degradation of the minimised objective (ω_{1}, starting from P_{1} and following the arrow) entails a substantial improvement of the other. In this case, the effect of the tolerance-based formulation is to focus on the region near the "corner" of the L-shaped PF, which is where both objectives assume low values. On the contrary, a concave PF, as shown in Fig. 3c, has the feature that the minimised objective (again ω_{1}, starting from P_{1}) may be significantly degraded without improving the other objective (almost horizontal branch near P_{1} and vertical one near P_{2}). This is an indicator of scarce consistency of the two objectives, and the introduction of a finite resolution highlights this circumstance by splitting the original Pareto Front into separate subsets.
Moving to the parameter space, the benefit of using the tolerance-based definition of Pareto optimality is that it avoids both over- and under-confidence in the parameter uncertainty estimation. Firstly, if the multiple tests are very consistent with each other, i.e., the PF is located in a very small region of the objective space (Fig. 3a), low-sensitivity parameters may present an unrealistically limited associated uncertainty (over-confidence). Secondly, if the PF is characterised by nearly vertical or horizontal branches (Fig. 3b), it may span considerable regions of the parameter space for a limited improvement in fidelity to one test, leading to large uncertainty intervals for the parameters and consequently to under-confidence in the results. The non-standard definition of Pareto dominance proposed herein avoids both drawbacks, widening the PF bounds in the first case and focusing on the corner region of the PF in the second, thus leading to a more realistic uncertainty estimation. The practical usefulness of the approach will be shown with reference to an identification problem involving masonry panels in Sect. 3.
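A minimal sketch of how such a tolerance-based dominance test might be coded is given below. This is an illustrative reading of the concept, not necessarily the paper's exact formulation (Eq. (8)): all objectives are minimised, `eps` holds the per-objective resolutions, and a solution dominates another only if it improves at least one objective by more than the tolerance while being no worse than the tolerance in all others.

```python
def tol_dominates(a, b, eps):
    """True if objective vector a tolerance-dominates objective vector b."""
    # a must be no worse than b beyond the resolution in every objective...
    no_worse = all(ai <= bi + e for ai, bi, e in zip(a, b, eps))
    # ...and distinguishably better in at least one objective.
    strictly_better = any(ai < bi - e for ai, bi, e in zip(a, b, eps))
    return no_worse and strictly_better

def tolerance_pareto_front(points, eps):
    """Keep the points not tolerance-dominated by any other point."""
    return [p for i, p in enumerate(points)
            if not any(tol_dominates(q, p, eps)
                       for j, q in enumerate(points) if j != i)]
```

With `eps = (0, 0)` this reduces to standard Pareto dominance; with a finite tolerance, a point such as P_{1} in Fig. 2, whose gain in ω_{1} is below the resolution, drops out of the front.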
2.2.3 Numerical solution of the identification problem
To solve problem (5), it is necessary to use an optimisation algorithm able to track the entire PF_{t}. In this respect, population-based metaheuristics are preferable to gradient-based algorithms because they work on ensembles of alternatives, and are thus naturally designed to converge towards a region of the solution space instead of a single value. In this work, the Non-dominated Sorting Genetic Algorithm II (NSGA-II) [33] was used to solve the multi-objective identification problem. This state-of-the-art approach to multi-objective optimisation was implemented in the software TOSCA [34]. It exploits the concepts of non-domination ranking and crowding distance to reach convergence to the PF while maintaining diversity in the population. At the end of each generation, the individuals are divided into progressive non-domination fronts. Inside each front, the individuals are ranked based on a density-estimation metric, called crowding distance, which measures how close (in terms of objective values) an individual is to its neighbours; more isolated points are favoured to increase diversity in the population. Even though in the original formulation domination ranking is associated with tournament selection, this is not mandatory. In the examples reported in this paper, Stochastic Universal Sampling [35] was used as the selection operator. Blend crossover [36] and random mutation were then applied to create a new population.
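The crowding-distance metric mentioned above can be sketched as follows. This is the standard NSGA-II formulation of Deb et al., shown here as a simplified illustration rather than the TOSCA implementation; `front` holds the objective vectors of one non-domination front.

```python
def crowding_distance(front):
    """front: list of objective tuples. Returns one distance per point."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        # sort point indices by the k-th objective
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        # boundary points are always kept (infinite distance)
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue
        for j in range(1, n - 1):
            # distance grows with the normalised gap between a point's
            # neighbours, so isolated points score higher
            dist[order[j]] += (front[order[j + 1]][k]
                               - front[order[j - 1]][k]) / (hi - lo)
    return dist
```

During selection, points with larger crowding distance are preferred among solutions of equal non-domination rank, which preserves the spread of the population along the front.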
3 Estimation of elastic properties of masonry from diagonal compression tests
The procedure described above was applied to the results of an experimental activity involving brick–mortar unreinforced masonry, carried out at the University of Trieste (Italy). This was part of a broader experimental programme aimed at designing a novel experimental–numerical procedure for the identification of masonry properties [37]. Two masonry types (MT1 and MT2) were prepared at different times. The main material parameters obtained from standard tests on small specimens are reported in Table 1, as average values and coefficients of variation (CV).
Material properties as estimated from tests on small samples
Property  Symbol  MT1 average  MT1 CV (%)  MT2 average  MT2 CV (%)
Mortar compressive strength  f _{m}  7.855 MPa  6.24  14.15 MPa  5.61 
Brick compressive strength  f _{b}  18.27 MPa  14.08  24.72 MPa  6.71 
Brick tensile strength  f _{vb}  4.233 MPa  3.94  5.50 MPa  7.49 
Brick Young modulus  E _{b}  11.23 GPa  16.28  8.92 GPa  9.35 
Brick Poisson ratio  ν_{b}  0.19  35.71  0.13  11.12 
Masonry compressive strength (orthogonal to the bed joints)  f _{wc,⊥}  12.88 MPa  13.72  19.68 MPa  7.58 
Masonry compressive strength (parallel to the bed joints)  f _{wc, //}  9.32 MPa  2.52  17.74 MPa  3.43 
3.1 The experimental programme
Two 640 × 640 × 90 mm^{3} square panels per masonry type were prepared to be tested under diagonal compression. The tests are labelled CDx–MTy, where x is the identification number of the test and y is the masonry type. After curing, the panels were rotated and placed on a stiffened steel angle; a similar angle was applied on the opposite corner for the load application. Between the angles and the specimen, a thin layer of chalk and sand in 1:1 proportion by volume (plaster) was applied to ensure a uniform stress distribution. The load was applied by means of a hydraulic jack (load capacity 200 kN). Each specimen was loaded up to 20 kN, then unloaded and reloaded until failure; only for specimen 1 of MT2 (CD1–MT2) did the first loading reach 100 kN. Displacements were acquired using 12 Linear Variable Displacement Transducers (LVDTs) with 25 mm stroke, six on either side of the panel, measuring the diagonal displacements and the displacements of the four edges (Fig. 4a). To avoid capturing local effects, the sensors were not placed on the first and last brick layers. Aluminium bars were used to connect each LVDT to the opposite gauge point.
The results of the four diagonal compression tests, in terms of LVDT displacement on the diagonals, are summarised in Figs. 5 and 6, assuming a positive sign for LVDT lengthening and negative for LVDT shortening. An initial linear elastic branch is generally recognisable, especially for masonry MT1 (Fig. 5); the same can be noticed for specimen CD1–MT2, while the behaviour of CD2–MT2 appears nonlinear from the beginning (Fig. 6). This could be due to load eccentricity causing geometrically nonlinear effects in the specimen. This will be taken into consideration and properly analysed in Sect. 3.4. For the calibrations described in Sect. 3.4, the elastic stiffness was defined as the secant stiffness between 20 and 60 kN.
3.2 Description of the FE model
A three-dimensional finite element model of the test, shown in Fig. 7, was created in Abaqus 6.9 [38]. All materials were discretised by four-node tetrahedral elements (C3D4) with regular patterns along the three axes. The level of refinement was based on a preliminary convergence analysis, not reported here. The characteristic length of the tetrahedral mesh element representing the bricks and the steel elements was 27.5, 45 and 30 mm along the x-, y- and z-axis, respectively, while for the mortar and plaster materials the number of elements along the thickness was set equal to two (5 mm characteristic length). According to this scheme, the total number of elements was 17,904. The two steel angles were modelled using very stiff solid elements (E = 300 GPa) at the top and bottom of the panel; the bottom angle was fully restrained. Four vertical forces F_{00}, F_{01}, F_{10}, F_{11}, spaced 184 and 90 mm in the X- and Y-directions, respectively, were applied on the top angle: by changing the magnitude of each relative to the others, it is possible to simulate accidental eccentricities in the X- and Y-directions (Fig. 7). The parameters e_{ x } and e_{ y } identify the eccentricity of the load application point: the four forces shown in Fig. 7 are related to the total force F by the expressions:
It is easy to verify that the sum of the four forces always equals F. A perfectly centred load corresponds to e_{ x } = e_{ y } = 0.5.
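The explicit expressions are not reproduced in this extraction. A natural bilinear weighting consistent with the two stated properties (the forces always sum to F, and e_{x} = e_{y} = 0.5 gives a centred load) would be the following hedged sketch; it is a plausible reconstruction, not necessarily the paper's exact formula.

```python
def corner_forces(F, ex, ey):
    """Split the total force F among the four corner forces (assumed
    bilinear weighting; ex = ey = 0.5 gives four equal quarters)."""
    F00 = F * (1 - ex) * (1 - ey)
    F01 = F * (1 - ex) * ey
    F10 = F * ex * (1 - ey)
    F11 = F * ex * ey
    return F00, F01, F10, F11
```

Since (1 − e_x) + e_x = 1 in each direction, the four weights sum to one for any e_x, e_y, so the total applied force is F regardless of the eccentricities.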
The head joint Young modulus was assumed to differ from the bed joint modulus E_{m}. Very little is reported in the literature about the effects of the head joints on the response of a masonry wall. In general, due to the lack of significant normal stress, the shrinkage of the head joints and the subsequent loss of bond between unit and mortar, their contribution to shear transfer is usually considered smaller than that of the bed joints. While many works in the literature focus on the influence of head joints on strength [39, 40], to the author's knowledge there is a lack of significant studies investigating their elastic properties, which are likely to depend on a large number of factors, such as joint thickness, environmental conditions (shrinkage) and quality of workmanship. For this reason, in this work, the effective head joint stiffness contribution was accounted for by considering a different material, whose Young modulus was evaluated as r·E_{m}, with \(0.0 \leqslant r \leqslant 1.0\).
The connection between the masonry and the steel is a critical element of the FE approximation. Nonlinear interfaces or contact elements would be the most realistic way to simulate the connection between the two materials. However, they would make the model nonlinear, dramatically increasing the computation time and making the identification analysis cumbersome. To keep the model elastic, such connection types were therefore not applied. A rigid connection, however, would be similarly unfeasible, as it would model the transfer of stress from steel to masonry highly inaccurately. For these reasons, a layer of elastic material was interposed between the steel plate and the masonry panel, with elastic properties to be identified. Even though the elastic assumption is a very crude representation of reality, it allows one to deal with a fully elastic model.
3.3 Global sensitivity analysis
The global sensitivity measure is the finite distribution F_{ i } composed of all possible EE_{ i }. It may be represented by its mean and standard deviation, but Campolongo et al. [42] proposed to rank parameters by the value \(\mu_i^{*}=\frac{1}{N}\sum_{j=1}^{N} \left| \mathrm{EE}_i^{(j)} \right|\), as a large value of \(\mu_i^{*}\) indicates an input with an important "overall" influence on the output. This measure was used in this work.
The N different EEs may be computed by different techniques, starting from the original formulation based on trajectories [41]. Here, the procedure proposed in [43], based on sampling via the Sobol sequence, was followed. The input parameters (and the increments \({\Delta _i}\)) are first made non-dimensional with respect to their variation ranges, so that they all vary from 0 to 1. The parameters entering the model can be divided into:

Brick parameters: E_{b}, υ_{b};

Mortar parameters: E_{m}, υ_{m}, r;

Plaster parameters: E_{pl}, υ_{pl};

Boundary conditions: e_{ x }, e_{ y }.
They represent the parameter vector p. The variation ranges for the global sensitivity analysis are shown in Table 2.
Variation range for the material parameters in the sensitivity analysis
Parameter  Lower bound  Upper bound 

E_{b} (N/mm^{2})  5000  20,000 
υ _{b}  0.0  0.5 
E_{m} (N/mm^{2})  1000  20,000 
υ _{m}  0.0  0.5 
r  0.0  1.0 
E_{pl} (N/mm^{2})  10  2000 
υ _{pl}  0.0  0.5 
e _{ x }  0.0  1.0 
e _{ y }  0.0  1.0 
Ten (N = 10) sample points in the parameter space were selected according to the procedure proposed in [43]. The total number of evaluations is thus N(k + 1) = 100, where k = 9 is the number of sought parameters. The results in terms of \({\mu ^*}\) for the cost function for both masonry types are displayed in Fig. 8. The plot shows that, in all cases, the most influential parameters on the recorded responses are estimated to be e_{ y }, E_{b} and E_{m}. The other parameters have very low influence, meaning that they are expected to be identified with greater uncertainty.
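The μ* computation itself is straightforward. A minimal sketch with synthetic elementary effects (the Sobol-sequence sampling of [43] is not reproduced here) also illustrates why the absolute value matters: sign-changing effects, typical of non-linear or interacting inputs, would cancel in a plain mean but not in μ*.

```python
import numpy as np

# mu* of Campolongo et al.: mean of the absolute elementary effects of
# each parameter over the N repetitions.
def mu_star(elementary_effects):
    """elementary_effects: (N, k) array, one row per repetition."""
    ee = np.asarray(elementary_effects, dtype=float)
    return np.abs(ee).mean(axis=0)

# Synthetic example: parameter 0 has large effects of alternating sign,
# parameter 1 small constant effects.
ee = np.array([[ 2.0, 0.1],
               [-2.0, 0.1],
               [ 2.0, 0.1],
               [-2.0, 0.1]])
mu = mu_star(ee)
```

Here the plain mean of parameter 0's effects is zero, while μ* correctly reports it as the most influential input.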
3.4 Calibration of the elastic parameters

Population: 50 individuals;

Initial population generated by the Sobol algorithm;

Number of generations: 100;

Selection: Stochastic Universal Sampling, with linear ranking based on domination and scaling pressure equal to 2.0;

Crossover: Blend-α, with α = 2.0;

Crossover probability: 1.0;

Mutation probability: 0.005.
Both the operators and the GA internal variables were selected based on the results of previous research [34]. In particular, quasi-random sequences such as the Sobol algorithm [44] explore the parameter space more uniformly than simple random sequences, allowing a smaller population size, which was defined based on a preliminary sensitivity analysis. Stochastic Universal Sampling avoids the phenomenon of genetic drift; Blend-α crossover with α = 2.0 is designed to preserve the probability density function of the population, while keeping its ability to yield novel solutions in the finite-population case [45]. According to the same principle, the scaling pressure and the number of generations were designed to gradually narrow the probability distribution function of the population. Crossover and mutation probabilities are based on previous research and are consistent with the general literature assumptions.
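As a rough sketch, the Blend-α (BLX-α) operator in one common real-coded parametrisation samples each child gene uniformly from the parents' interval extended by α times its width on each side. This is an illustrative version, not the TOSCA implementation; the paper uses α = 2.0.

```python
import random

def blend_crossover(p1, p2, alpha=2.0, rng=None):
    """One child from two real-coded parents via BLX-alpha."""
    rng = rng or random.Random()
    child = []
    for x, y in zip(p1, p2):
        lo, hi = min(x, y), max(x, y)
        d = hi - lo
        # sample uniformly in the parents' interval, extended by alpha*d
        # on each side, so the operator can also explore outside the parents
        child.append(rng.uniform(lo - alpha * d, hi + alpha * d))
    return child
```

Note that identical parents always produce an identical child, so diversity at each gene comes only from pairs that actually differ; the exploration range then scales with the parents' spread, which is what gradually narrows as the population converges.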
3.4.1 Masonry type MT1
The Pareto Front identified by the algorithm considering no uncertainty (ε = 0 in Eq. (8a, 8b)) is shown in Fig. 9a in the objective plane. It is evident that even though the solution is not unique, the deterioration of one objective due to the satisfaction of the other objective is quite limited (Δω < 0.001 mm).
The good consistency between the two tests is evidenced by the scatter plot of the Pareto solutions in the parameter space (Fig. 9b–g). Apart from the mortar Young modulus and the head-joint-to-bed-joint stiffness ratio r, all parameters seem to be identified almost univocally. The load seems to have a considerable eccentricity e_{ x } in both tests (Fig. 9b), while a rather smaller out-of-plane eccentricity e_{ y } is identified (Fig. 9c). The in-plane eccentricity may be due to an accidental rotation of the steel angle caused by crushing of the plaster layer beneath it. The brick Young modulus is identified as about 11.7 GPa, very close to the experimental estimate from compressive tests (Table 1). Conversely, the mortar Young modulus is not identified, since all values greater than 16 GPa give discrepancy values belonging to the PF. This is reasonable: if the mortar is very stiff, its effect on the displacement field becomes negligible and the deformability is governed by the brick. The calibration of this parameter is thus not bounded, and any upper limit would represent a feasible solution. The head-to-bed-joint stiffness ratio r is not adequately identified, assuming values between 0.6 and 1.0. The Poisson's ratios for brick and mortar seem identified but, while the former assumes reasonable values around 0.13, similar to those recorded in the tests on brick samples (Table 1) and generally reported in the literature [46], the value of 0.5 may seem unrealistically high for mortar. It should be said, however, that the mortar Poisson's ratio depends strongly on the stress state, and values higher than 0.5 were recorded in [47] for different mortar mixtures at low levels of confining pressure. The assumption of low confining pressure seems sound in this case because, if the mortar is stiffer than the brick, the difference in stiffness results in tension in the mortar and compression in the brick [47] orthogonally to the direction along which the masonry is compressed.
Finally, the plaster properties seem identified too, with E_{pl} ≈ 200 MPa and υ_{pl} = 0.5. Here, in the author's opinion, the high value of the Poisson's ratio is justified by the fact that, as previously acknowledged, the linearly elastic behaviour assumed for the plaster layer is a rather rough approximation of the real mechanical response, which shows an almost immediate mode-II (shear) failure. This means that the (damaged) shear stiffness is very low compared to the axial stiffness and, since in the numerical elastic model \(\frac{G}{E}=\frac{1}{{2\left( {1+\upsilon } \right)}}\), the high plaster Poisson's ratio tries to reproduce this loss of stiffness. Conversely, the value of the plaster Young modulus seems reasonable.
Figure 10a compares numerical results and experimental data for the 12 measurements. The numerical values are those obtained using the best solution for each objective. It is possible to notice that the solution fits seven data points, which is a feature of L_{1}-norm regression. The other points, some of which appear to be outliers, present greater discrepancy values.
Even though some counterintuitive results (i.e., the high mortar Poisson's ratio) may be explained, it seems unrealistic to be able to identify the plaster properties or the mortar and brick Poisson's ratios with the low uncertainty displayed in Fig. 9. The reason may be found in the high consistency between the two tests, in a situation similar to that shown in Fig. 3a. In Fig. 11, the results of the identification analysis with resolution ε = 0.002 mm (typical of the LVDTs used to acquire the experimental data) are shown. It is now possible to distinguish the parameters which are identified with low uncertainty (e_{ x }, e_{ y }, E_{b}) from those that, under the chosen resolution, are not identifiable (E_{m}, r, E_{pl}). High values of υ_{m} and υ_{pl} (greater than 0.4) and low values of υ_{b} (less than 0.2) are detected, so the arguments previously made still hold.
3.4.2 Masonry type MT2
The PF with associated uncertainty ε = 0 is displayed in the objective plane in Fig. 12a. Unlike the previous case, the satisfaction of one objective implies a considerable increase in the other (Δω ≈ 0.01 mm). By the definition of the PF, if the solutions of the "basic" optimisation problems (i.e., those in which the two objectives are considered separately) are very different, the front becomes wider, both in the objective space (Fig. 12a) and in the parameter space (Fig. 12b–g). This results in a more uncertain identification of most parameters.
As in the previous case, the determination of the boundary condition parameters e_{ x } and e_{ y } seems affected by low uncertainty, but here the eccentricities appear more pronounced. The brick Young modulus is very narrowly dispersed around a value of 8.8 GPa, which corresponds well to the experimental outcome reported in Table 1. Similarly to MT1, nothing can be stated about the mortar Young modulus because of the high uncertainty, except that it is higher than the brick Young modulus. All the other parameters (Fig. 12e–g) cannot be identified because of the high scattering.
This gives the opportunity to underline the features of the proposed multi-objective approach. The availability of several tests, processed in the multi-objective optimisation framework, allows the reliability of the estimation with respect to repeatability to be assessed. For masonry type MT1, since the two tests were consistent with each other, the reliability was high, and thus the estimated parameters showed little scattering in the parameter space. For masonry type MT2, a large spread of PF solutions in the parameter space can be observed. The uncertainty in the estimation of some parameters, i.e., the mortar and plaster properties and the brick Poisson's ratio, is thus larger. In Fig. 13, the comparison between numerical and experimental measurements is shown.
3.4.3 The role of sensitivity analysis
As a final remark, it is interesting to note that a global sensitivity analysis, such as the EE method utilised in this work, may identify a parameter as important when in the specific case it is not. An example is the mortar Young modulus of this example. The reason is that the sensitivity indices may vary widely over the variation range of the parameter: when E_{m} is small compared to E_{b}, its influence on the response is large, as it governs the deformation field. On the contrary, when the mortar is very stiff, the displacement of the specimen is controlled by the brick Young modulus, while large variations in E_{m} have low or negligible influence. Conversely, the parameter e_{ x } was detected as having low influence, meaning that, based on the preliminary sensitivity analysis, one could have removed it from the set of parameters to identify, possibly leading to wrong estimation results. The approach followed in this paper overcomes these drawbacks. It does not impose a preliminary choice of the identifiable parameters by means of a sensitivity analysis, but is still able to associate a solution with its uncertainty as derived from sensitivity, test repeatability and precision of the experimental data. In Table 3, the PF solutions are shown in the form of mid-range values and intervals, as the final results of the identification procedure.
Table 3 Results of the identification process

Parameter      MT1 (ε = 0.0)   MT1 (ε = 0.002 mm)   MT2 (ε = 0.0)
E_{b} (GPa)    11.7 ± 0.2      12.8 ± 1.5           8.8 ± 0.4
ν_{b}          0.13 ± 0.0      0.11 ± 0.11          0.14 ± 0.14
E_{m} (GPa)    18.4 ± 1.6      14.3 ± 5.7           18.2 ± 1.79
ν_{m}          0.5 ± 0.0       0.45 ± 0.05          0.23 ± 0.23
r              0.86 ± 0.14     0.51 ± 0.49          0.63 ± 0.37
E_{pl} (GPa)   0.18 ± 0.03     0.77 ± 0.76          1.34 ± 0.66
ν_{pl}         0.5 ± 0.0       0.44 ± 0.06          0.29 ± 0.21
e_{x1}         0.71 ± 0.01     0.73 ± 0.12          0.87 ± 0.03
e_{x2}         0.40 ± 0.0      0.44 ± 0.10          0.52 ± 0.03
e_{y1}         0.57 ± 0.01     0.56 ± 0.07          0.58 ± 0.03
e_{y2}         0.56 ± 0.01     0.54 ± 0.06          0.31 ± 0.01
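The "mid-range value and interval" summary used in the table can be computed per parameter from the PF solutions as sketched below, assuming it denotes mid-range ± half-range of the PF values; the sample numbers are invented for illustration, not the actual PF data.

```python
def midrange_interval(values):
    """Summarise the PF values of one parameter as (mid-range, half-range),
    i.e. the "value +/- interval" format of the results table. A wider
    spread of PF solutions directly yields a wider interval."""
    lo, hi = min(values), max(values)
    return (lo + hi) / 2.0, (hi - lo) / 2.0
```
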
4 Discussion and conclusions
In this paper, a strategy for the identification of material parameters in structural problems accounting for uncertainty has been proposed. It is based on multi-objective optimisation of an appropriate functional of the discrepancy between experimental and numerical results, formulated for multiple experimental responses. The solution of the multi-objective optimisation is represented by the Pareto Front of the non-dominated solutions, which is then post-processed to study the uncertainty of the identification. A non-standard formulation of Pareto dominance allows one to account for the limited precision of the experimental data.
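One plausible formalisation of such a tolerance-based dominance is sketched below; it is an illustrative reading of the idea, not necessarily the paper's exact definition, and minimisation of all discrepancy objectives is assumed.

```python
import numpy as np

def eps_dominates(f_a, f_b, eps=0.0):
    """Solution a dominates solution b only if it is better by more than
    the tolerance eps in every objective (minimisation assumed). With
    eps = 0 this is strict dominance in all objectives, a slightly
    stricter relation than standard Pareto dominance, kept simple here."""
    return bool(np.all(np.asarray(f_a) + eps < np.asarray(f_b)))

def tolerance_front(points, eps=0.0):
    """Keep every solution not eps-dominated by any other. A positive eps
    can only enlarge the retained set: solutions whose objectives differ
    by less than the data precision are treated as equivalent, which is
    consistent with the wider parameter intervals reported for
    eps = 0.002 mm."""
    return [p for p in points
            if not any(eps_dominates(q, p, eps) for q in points if q is not p)]
```

With eps = 0 only the (strictly) non-dominated points survive; raising eps to the measurement resolution admits near-optimal solutions as well, and the spread of those solutions in parameter space is what the procedure reads as uncertainty.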
The application example concerns a diagonal compression test, generally used to estimate the shear strength of masonry and studied here as a means to identify the elastic properties of its components. Two tests for each of two different masonry types were considered to assess the procedure. The main results may be summarised as follows:
 1.
The standard definition of the Pareto Front accounts for the uncertainty related to the repeatability of the test. If the two tests are consistent (as for masonry type MT1, where the deterioration of one objective needed to reach the optimum of the other was limited), the identification may provide a relatively precise (i.e., low-uncertainty) estimate of the sought parameters, even when they are not actually influential on the response. If the two tests are not consistent, i.e., the maximum-fidelity solutions are attained for remarkably different parameter values, as for masonry type MT2 in the example, the uncertainty estimate will be accordingly larger.
 2.
The tolerance-based definition of Pareto optimality allows one to account for the uncertainty due to the tolerance ε typical of the experimental data. In the case of masonry type MT1, this uncertainty exceeded that due to the repeatability of the test.
 3.
The analysis of the PF and the subsequent determination of uncertainty avoid the reduction of the number of parameters based on a preliminary sensitivity analysis, which is a commonly used approach in identification problems. This reduction, if not performed carefully, may lead to erroneous results when the removed parameters are important for the global response.
In the case analysed, with a resolution ε = 0.002 mm, typical of common LVDTs, only the boundary conditions and the brick Young's modulus could be estimated with limited uncertainty. In future research, the procedure will be applied to different case studies of real-world tests to further validate the methodology.
Acknowledgements
The author is grateful to Prof. Amadio from the University of Trieste for fruitful discussions about uncertainty in the solution, and to Dr Franco Trevisan and the laboratory staff of the University of Trieste for the technical support necessary for the successful completion of the experimental tests described.
References
 1. Zienkiewicz O, Cormeau I (1974) Viscoplasticity—plasticity and creep in elastic solids—a unified numerical solution approach. Int J Numer Meth Eng 8(4):821–845
 2. Moresi L, Dufour F, Mühlhaus HB (2003) A Lagrangian integration point finite element method for large deformation modeling of viscoelastic geomaterials. J Comput Phys 184(2):476–497
 3. Wriggers P (2006) Computational contact mechanics. Springer, Berlin Heidelberg
 4. Anderson T (2005) Fracture mechanics: fundamentals and applications. CRC Press, Boca Raton
 5. Tarantola A (2005) Inverse problem theory and methods for model parameter estimation. Society for Industrial and Applied Mathematics, Philadelphia
 6. Buljak V (2011) Inverse analyses with model reduction. Springer, Berlin
 7. Cunha A, Caetano E (2006) Experimental modal analysis of civil engineering structures. Sound Vib 6(40):12–20
 8. Sanayei M, Imbaro G, McClain J, Brown L (1997) Structural model updating using experimental static measurements. J Struct Eng 123(6):792–798
 9. Caddemi S, Morassi A (2013) Multi-cracked Euler–Bernoulli beams: mathematical modeling and exact solutions. Int J Solids Struct 50(6):944–956
 10. Wang M, Dutta D, Kim K, Brigham J (2015) A computationally efficient approach for inverse material characterization combining Gappy POD with direct inversion. Comput Methods Appl Mech Eng 286:373–393
 11. Avril S, Bonnet M, Bretelle AS, Grediac M, Hild F, Ienny P, Latourte F, Lemosse D, Pagano S, Pagnacco E, Pierron F (2008) Overview of identification methods of mechanical parameters based on full-field measurements. Exp Mech 48(4):381–402
 12. Isaac T, Petra N, Stadler G, Ghattas O (2015) Scalable and efficient algorithms for the propagation of uncertainty from data through inference to prediction for large-scale problems, with application to flow of the Antarctic ice sheet. J Comput Phys 296:348–368
 13. Maier G, Buljak V, Garbowski T, Cocchetti G, Novati G (2014) Mechanical characterization of materials and diagnosis of structures by inverse analyses: some innovative procedures and applications. Int J Comput Methods 11:1343002
 14. Erdogan YS, Bakir PG (2013) Inverse propagation of uncertainties in finite element model updating through use of fuzzy arithmetic. Eng Appl Artif Intell 26(1):357–367
 15. Khodaparast HH, Mottershead JE, Badcock KJ (2011) Interval model updating with irreducible uncertainty using the Kriging predictor. Mech Syst Signal Process 25(4):1204–1226
 16. Liu J, Han X, Jiang C, Ning HM, Bai YC (2011) Dynamic load identification for uncertain structures based on interval analysis and regularization method. Int J Comput Methods 8(4):667–683
 17. Fedele F, Muhanna R, Xiao N, Mullen R (2015) Interval-based approach for uncertainty propagation in inverse problems. J Eng Mech 141(1):06014013
 18. Fernández-Martínez J, Fernández-Muñiz Z, Pallero J, Pedruelo-González L (2013) From Bayes to Tarantola: new insights to understand uncertainty in inverse problems. J Appl Geophys 98:62–72
 19. Roy CJ, Oberkampf WL (2011) A comprehensive framework for verification, validation, and uncertainty quantification in scientific computing. Comput Methods Appl Mech Eng 200:2131–2144
 20. The American Society of Mechanical Engineers (2006) Guide for verification and validation in computational solid mechanics. ASME
 21. Gokce H, Catbas F, Gul M, Frangopol D (2013) Structural identification for performance prediction considering uncertainties: case study of a movable bridge. J Struct Eng 139(10):1703–1715
 22. Atamturktur S, Liu Z, Cogan S, Juang H (2014) Calibration of imprecise and inaccurate numerical models considering fidelity and robustness: a multi-objective optimization-based approach. Struct Multidiscip Optim 51(3):659–671
 23. Claerbout JF, Muir F (1973) Robust modeling with erratic data. Geophysics 38(5):826–844
 24. Miettinen K (1999) Nonlinear multiobjective optimization. Springer, New York
 25. Jin SS, Cho S, Jung HJ, Lee JJ, Yun CB (2014) A new multi-objective approach to finite element model updating. J Sound Vib 333(11):2323–2338
 26. Chisari C, Francavilla AB, Latour M, Piluso V, Rizzano G, Amadio C (2017) Critical issues in parameter calibration of cyclic models for steel members. Eng Struct 132:123–138
 27. Jung S, Ok SY, Song J (2010) Robust structural damage identification based on multi-objective optimization. Int J Numer Meth Eng 81:786–804
 28. Wang M, Brigham JC (2014) Assessment of multi-objective optimization for nondestructive evaluation of damage in structural components. J Intell Mater Syst Struct 25(9):1082–1096
 29. Shim MB, Suh MW (2002) A study on multiobjective optimization technique for inverse and crack identification problems. Inverse Probl Eng 10(5):441–465
 30. Farina M, Amato P (2004) A fuzzy definition of "optimality" for many-criteria optimization problems. IEEE Trans Syst Man Cybern Part A Syst Hum 34(3):315–326
 31. Laumanns M, Thiele L, Deb K, Zitzler E (2002) Combining convergence and diversity in evolutionary multiobjective optimization. Evol Comput 10(3):263–282
 32. Santoso BJ, Chiu GM, Mumpuni R (2015) An efficient grid-based framework for answering tolerance-based skyline queries. In: Proceedings of the International Conference on Information & Communication Technology and Systems (ICTS), Surabaya, Indonesia
 33. Deb K, Pratap A, Agarwal S, Meyarivan T (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 6(2):182–197
 34. Chisari C (2015) Inverse techniques for model identification of masonry structures. PhD thesis, University of Trieste
 35. Baker JE (1987) Reducing bias and inefficiency in the selection algorithm. In: Proceedings of the Second International Conference on Genetic Algorithms and their Application, Hillsdale, New Jersey
 36. Eshelman LJ, Schaffer JD (1992) Real-coded genetic algorithms and interval schemata. In: Foundations of genetic algorithms. Morgan Kaufmann, San Mateo, pp 187–202
 37. Chisari C, Macorini L, Amadio C, Izzuddin BA (2015) An experimental-numerical procedure for the identification of mesoscale material properties for brick masonry. In: Proceedings of the Fifteenth International Conference on Civil, Structural and Environmental Engineering Computing, Prague
 38. Dassault Systèmes (2009) ABAQUS 6.9 documentation. Providence, RI
 39. Mann W, Müller H (1982) Failure of shear-stressed masonry—an enlarged theory, tests and application to shear walls. Proc Br Ceram Soc 30(1):223–235
 40. Ganz H (1985) Masonry walls subjected to normal and shear forces. PhD thesis, Institute of Structural Engineering, ETH Zurich
 41. Morris M (1991) Factorial sampling plans for preliminary computational experiments. Technometrics 33(2):161–174
 42. Campolongo F, Cariboni J, Saltelli A (2007) An effective screening design for sensitivity analysis of large models. Environ Model Softw 22:1509–1518
 43. Campolongo F, Saltelli A, Cariboni J (2011) From screening to quantitative sensitivity analysis. A unified approach. Comput Phys Commun 182:978–988
 44. Antonov IA, Saleev VM (1979) An economic method of computing LPτ-sequences. USSR Comput Math Math Phys 19(1):252–256
 45. Kita H, Yamamura M (1999) A functional specialization hypothesis for designing genetic algorithms. In: IEEE International Conference on Systems, Man, and Cybernetics. IEEE SMC'99 Conference Proceedings, vol 3. IEEE, pp 579–584
 46. CUR (1994) Structural masonry: an experimental/numerical basis for practical design rules. CUR, Gouda
 47. McNary W, Abrams DP (1985) Mechanics of masonry in compression. J Struct Eng 111(4):857–870
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.