Tolerance-based Pareto optimality for structural identification accounting for uncertainty


Abstract

Structural parameter identification often requires an estimate, at least qualitative, of the uncertainty of the solution. This uncertainty quantification should account for the sensitivity of the response to the sought parameters, the error in the measurements and the repeatability of the test. In this paper, repeatability is taken into account within a multi-objective framework, while a non-standard definition of Pareto dominance, based on a given tolerance in the objective satisfaction, allows one to consider uncertainty in the experimental data. The solution of the identification is given not as a single value, but as a region of the parameter space which is compatible with the data and accounts for uncertainties and for the sensitivity of the response to the model parameters. The procedure is applied to an experimental test on a masonry panel, showing its effectiveness in discriminating identifiable parameters from those affected by higher uncertainty.

Keywords

Inverse problems · Multi-objective optimisation · Sensitivity analysis · Genetic algorithms

1 Introduction

Simulation of the mechanical response of structural systems has significantly improved in accuracy in the last decades, thanks to the combined availability of more sophisticated theories and increased computational resources. Nearly any feature of structural behaviour may now be represented by a numerical model, and topics such as visco-plasticity [1], large deformations [2], contact [3] and fracture mechanics [4] are familiar to researchers and practitioners. However, any numerical representation is only as accurate as the parameters entering its formulation are realistic. In many cases, standard material tests are not able to provide the information needed to fully characterise the mathematical model of the mechanical process. This is particularly true when the number of model parameters is large or their physical meaning is not straightforward, or when exhaustive material tests cannot be performed for practical reasons, e.g., in-situ characterisation. In such cases, inverse analysis techniques [5, 6], aimed at inferring the material properties/boundary conditions (parameter identification) from the knowledge of the response of the structure under certain loading conditions, can be effective in estimating material parameters. Different procedures may be distinguished according to the loading condition, i.e., dynamic [7] or static [8], and the numerical method used to obtain the solution, i.e., direct [9, 10] or indirect [6, 11], in which a functional of the mismatch (discrepancy) between experimental and computed response is minimised.

In general, the results of an identification process are unavoidably affected by some uncertainty, mainly deriving from (a) propagation of model or measurement errors, (b) low sensitivity of the response to the sought parameters and (c) scattering coming from the aleatory nature of repeated tests. Application of stochastic (Bayesian) approaches [5, 12] to inverse analysis allows one to quantify the propagation of uncertainty from the data to the parameter estimation [12]. However, probabilistic approaches suffer from two limitations that restrict their use: (1) they strongly depend on the prior assumptions about the nature of the uncertainties, and (2) fully describing the posterior probability density function is computationally costly, especially when the number of sought parameters increases (curse of dimensionality). For this reason, deterministic approaches [13] are more widely used, with the notable drawback that the process outputs a single result and uncertainty is not directly quantified. Introducing some form of uncertainty quantification without resorting to full Bayesian approaches is thus attractive even in deterministic applications.

In [14], fuzzy arithmetic, coupled with a formulation based on genetic algorithms and the particle swarm optimisation method, is used to estimate the uncertainty caused by measurement noise in modal parameters, and the procedure is validated by means of a numerical application on a frame structure. In [15], interval updating is proposed as a means to estimate the uncertainty in the solution based on the measured modal response. Uncertainty is also quantified in [16, 17] through an identification procedure based on interval FEM; the method provides an estimate of the intervals of material parameter values which are consistent with a prior assumption about the uncertainty of the measurements. The presence of model and measurement errors, which can be reduced but never removed, has two important effects on the optimisation problem: (1) the discrepancy function has a non-zero global minimum, and (2) the minimum-discrepancy solution may be shifted from the true value. This implies that the real solution may not correspond to the global minimum of the discrepancy function, and other solutions with similar or greater discrepancy values should be considered as well. In [18], families of model parameters that predict the observed data within the same tolerance are considered as equivalent solutions and analysed to quantify the confidence to assign to the given identification output and the risk in the prediction. In this sense, uncertainty quantification is related to the general area of calibration and validation [19, 20]. Consistently, in [21] a procedure involving parameter identification using calibration, testing and validation of an Artificial Neural Network is developed, and a parent solution is detected. Based on this, an ensemble of offspring solutions is created considering normal, lognormal and uniform statistical distributions for the material and geometrical parameters, and finally the probability of failure of the system is assessed following common procedures of reliability analysis.

The need to improve the usual approach, in which a single solution showing maximum fidelity to the recorded data is given as the solution, is also recognised in [22], with the observation that, in minimising discrepancy, compensations between various forms of uncertainties and errors become inevitable. In this respect, the authors use info-gap theory and multi-objective genetic algorithms (MOGA) to search for a solution which is the best compromise in terms of fidelity and robustness.

In this paper, an identification procedure is proposed and described. Its main features are:

  • An optimisation procedure able to handle multiple inputs (test responses) in a multi-objective optimisation process, to take into account test repeatability;

  • A non-standard definition of Pareto dominance, accounting for the resolution under which two objective values are not distinguishable from each other because of data errors;

  • A post-processing phase, in which the analysis of the optimisation results provides the information needed to determine the uncertainty in the estimation.

The estimate of the model parameters is given not as a single value (as in deterministic inverse analysis), nor as a probability density function (as in probabilistic methods), but as a region of the parameter space which is compatible with the available data, in the sense defined below. All the elements in this region will be considered solutions of the inverse problem, given the data.

2 Methodology

2.1 Overview of the deterministic inverse problem

Performing a calibration test implies applying some known boundary conditions and loading, described by a control parameter vector \(\mathbf{x}\), to a specimen and recording some response data \(\mathbf{d}_{obs}\). The test can be formally expressed by a relationship \(\mathbf{d}_{obs}=\mathcal{H}(\mathbf{x},\boldsymbol{\varepsilon})\), where \(\boldsymbol{\varepsilon}\) is an error vector which describes the aleatory uncertainty of the response. In most cases, the aleatory uncertainty is assumed additive, and thus the experimental response may be expressed as:
$$\mathbf{d}_{obs}=\mathcal{H}(\mathbf{x})+\boldsymbol{\varepsilon}$$
(1)
with \(\boldsymbol{\varepsilon}\) a sample of a suitable probability distribution.
Correspondingly, a numerical model, resulting from the discretisation of the differential equations describing the physical process, represents a vector-valued functional \(\mathcal{F}\) which associates a computed response vector \(\mathbf{d}_c\) to a vector of model parameters \(\mathbf{p} \in \mathbf{P}\), with \(\mathbf{P}\) the parameter space, and to the control parameters \(\mathbf{x}\):
$$\mathbf{d}_c=\mathcal{F}(\mathbf{p},\mathbf{x})$$
(2)

Generally, the computed response is deterministic: given \(\mathbf{p}\) and \(\mathbf{x}\), the response is univocally evaluated.

The inverse problem consists of using the actual results of some measurements \(\mathbf{d}_{obs}\) to infer the values of the parameters \(\tilde{\mathbf{p}}\) characterising the system. Considering (1) and (2), this means solving the equation \(\mathcal{H}(\mathbf{x},\boldsymbol{\varepsilon})=\mathcal{F}(\mathbf{p},\mathbf{x})\) with respect to \(\mathbf{p}\). As this equation may not have a solution, the identification (or calibration) problem is often converted into the following optimisation problem:
$$\begin{cases} \text{Given} & \mathbf{x},\ \boldsymbol{\varepsilon} \\ \text{find} & \mathbf{p} \in \mathbf{P} \\ \min & \omega(\mathbf{d}_c,\mathbf{d}_{obs})=\omega\left(\mathcal{F}(\mathbf{p},\mathbf{x}),\mathcal{H}(\mathbf{x},\boldsymbol{\varepsilon})\right) \end{cases}$$
(3)
which gives an approximate solution to the inverse problem. Here, \(\omega(\mathbf{d}_c,\mathbf{d}_{obs})\) is a suitable discrepancy function measuring the inconsistency between the experimental and computed quantities. The most common formulation for the discrepancy function is:
$$\omega=\left(\left\|\mathbf{d}_{obs}-\mathbf{d}_c\right\|_q\right)^q$$
(4)
where \(\|\cdot\|_q\), with \(1 \leq q \leq \infty\), is the weighted \(L_q\)-norm of a vector. The choice of a particular norm has a statistical meaning, which is implicit but still holds in the deterministic framework. In the case of q = 2 (Euclidean norm), the solution is in a least-squares sense. This is the most common formulation for the inverse problem, and it derives directly from the assumption that all measurements are samples of a Gaussian probability distribution [5]. When measurements are significantly affected by errors and outliers that are not easily detectable, a robust formulation such as that given by the L1-norm is preferable [20, 23].
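For illustration, a minimal Python sketch of Eq. (4) follows; the function name, the optional weight argument and the example data are assumptions for demonstration only, not part of the original formulation.

```python
import numpy as np

def discrepancy(d_obs, d_c, q=2, weights=None):
    """Discrepancy of Eq. (4): the (weighted) L_q-norm of the residual raised
    to the power q, so q = 2 gives the familiar sum of squared errors."""
    r = np.abs(np.asarray(d_obs, dtype=float) - np.asarray(d_c, dtype=float))
    if weights is not None:
        r = np.asarray(weights, dtype=float) * r
    if np.isinf(q):
        return float(r.max())        # worst-case (Chebyshev) misfit
    return float(np.sum(r ** q))     # (||d_obs - d_c||_q)^q

# q = 1 yields the robust L1 misfit recommended for data with outliers [20, 23]
omega_l1 = discrepancy([1.0, 2.0, 3.0], [1.1, 1.8, 3.5], q=1)
```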

2.2 Data from different sources

2.2.1 Multi-objective optimisation and Pareto dominance

Let us now suppose that S tests have been performed on specimens made of the same material. They could be either of the same type (\(\mathbf{x}_i=\mathbf{x}_j\), with i, j = 1, …, S) or not. As the discrepancy function in (3) depends on both \(\mathbf{x}\) (setup) and \(\boldsymbol{\varepsilon}\) (aleatory error), the solution of the optimisation problem \(\mathbf{p}_i\) is generally different from any solution \(\mathbf{p}_j\), even though they nominally refer to the same material. The presence of multiple tests may thus give a measure of the uncertainty in the identification result due to repeatability. Generally speaking, we search for the solution \(\tilde{\mathbf{p}}\) which simultaneously minimises the discrepancy function for all given tests. This leads to the natural definition of a multi-objective optimisation problem:
$$\tilde{\mathbf{p}}=\arg\min_{\mathbf{p}}\left[\omega_1,\ \omega_2,\ \ldots,\ \omega_S\right]$$
(5)
where S is the number of tests, and \(\omega_i\), with i = 1, …, S, is the discrepancy function evaluated as in (4) for the ith test. The solution of a multi-objective optimisation problem is represented by a set of non-dominated alternatives, called the Pareto Front (PF) [24] after the Italian engineer Vilfredo Pareto (1848–1923), who first formulated the concept. The characteristic of the elements of the PF is that none of the objective functions can be improved in value without degrading some of the other objectives. In a minimisation problem, a solution \(\mathbf{p}_1\) is said to dominate \(\mathbf{p}_2\) (\(\mathbf{p}_1 \succ \mathbf{p}_2\)) if and only if:
$$\begin{aligned} \omega_i(\mathbf{p}_1) &\leq \omega_i(\mathbf{p}_2) \quad \forall i=1,\ldots,S \\ \omega_j(\mathbf{p}_1) &< \omega_j(\mathbf{p}_2) \quad \exists j=1,\ldots,S \end{aligned}$$
(6)

The Pareto Front is the set of all solutions which are not dominated by any other, and it represents the general solution of the identification problem. From the PF, the analyst can select a unique solution a posteriori if needed, as shown, for instance, in [25] and [26] for the identification of a bridge under ambient vibration and of phenomenological models for steel members, respectively.
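A brute-force sketch of definition (6) and of the extraction of the non-dominated set may help fix ideas; the O(n²) pairwise comparison shown here is for clarity only, since NSGA-II (Sect. 2.2.3) uses a faster sorting procedure, and the function names are illustrative.

```python
import numpy as np

def dominates(w1, w2):
    """Pareto dominance of Eq. (6) for minimisation: w1 dominates w2 if it is
    no worse in every objective and strictly better in at least one."""
    w1, w2 = np.asarray(w1), np.asarray(w2)
    return bool(np.all(w1 <= w2) and np.any(w1 < w2))

def pareto_front(objectives):
    """Return the indices of the non-dominated members of a list of objective
    vectors (one vector [w_1, ..., w_S] per candidate solution)."""
    return [i for i, wi in enumerate(objectives)
            if not any(dominates(wj, wi)
                       for j, wj in enumerate(objectives) if j != i)]
```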

Having turned the search for a unique solution into the tracking of a set of equally acceptable solutions, some authors have proposed to use these to define an uncertainty range for the sought parameters. For example, [27] solved the inverse problem of detecting damage in truss structures by multi-objective optimisation and plotted the PF solutions as histograms in the parameter space to define uncertainty ranges for the parameters. The same approach was proposed in [28] for the detection of damage in plates, where all individuals in the Pareto Front are considered as solutions. The multi-objective approach produced a diverse set of solution estimates, which happened to cluster near the "true" damage locations even in the presence of significant measurement errors. The rationale behind this approach is the following [29]. The PF consists of a region of the solution space whose members best approximate the experimental responses, in the sense defined by the concept of non-domination. Each element in the PF has a different degree of fit to each test response, but, unless a ranking of the tests is defined beforehand, there is no reason to prefer one solution over another. The PF solutions may be investigated in the parameter space (Fig. 1), where they identify an uncertainty region which may be analysed through simple statistical tools. The dispersion of a parameter, its average location and possible correlations with other variables may be easily detected by post-processing the PF at the end of the analysis.

Fig. 1

Uncertainty in the parameter space as given by the Pareto Front

2.2.2 Tolerance-based definition of Pareto dominance

While the definition of uncertainty based on the Pareto Front distribution is not a completely new idea, it should be recognised that its application to real cases may produce some unrealistic results. For example, let us consider a situation such as that depicted in Fig. 2, where a hypothetical PF is shown. Both points P1 and P2 belong to the PF, but while P2 is clearly better than P1 according to objective ω2, the gain in objective ω1 when shifting from P2 to P1 is hardly visible. In other words, while both numerically belong to the PF, intuitively P2 is "more optimal" than P1.

Fig. 2

Pareto Front with almost vertical and horizontal branches

In this respect, the definition of Pareto dominance given in (6) can sometimes be too strict, considering the limited precision of the instruments used to record the experimental data, and it is reasonable to consider a numerical tolerance under which two objective values should be considered indistinguishable. In mathematical terms, the equality and inequality relationships between real numbers a and b under uncertainty ε read:
$$a=b\ \Rightarrow\ |a-b|<\varepsilon$$
(7a)
$$a<b\ \Rightarrow\ a<b-\varepsilon$$
(7b)
The possibility of embedding a tolerance in the objective definition in the context of multi-objective optimisation is explicitly considered in [30], using fuzzy arithmetic. Without resorting to such a complex approach, particularly suited for many-objective optimisation, the resolution (or tolerance) \(\varepsilon_i\) under which the difference between the objective values \(\omega_i^a\) and \(\omega_i^b=\omega_i^a+\varepsilon_i\) is insignificant may be explicitly accounted for by a non-standard formulation of the Pareto dominance definition. This follows directly from definitions (7a) and (7b) and from the original Pareto optimality (Eq. (6)). A solution \(\mathbf{p}_1\) is said to dominate \(\mathbf{p}_2\) with tolerance \(\boldsymbol{\varepsilon}\) (\(\mathbf{p}_1 \succ_{\varepsilon} \mathbf{p}_2\)) if and only if:
$$\omega_i(\mathbf{p}_1) \leq \omega_i(\mathbf{p}_2)+\varepsilon_i \quad \forall i=1,\ldots,S$$
(8a)
$$\omega_j(\mathbf{p}_1) < \omega_j(\mathbf{p}_2)-\varepsilon_j \quad \exists j=1,\ldots,S$$
(8b)
with \(\varepsilon_i \geq 0\) the resolution of the ith objective. The set of tolerance-based non-dominated solutions will be called the tolerance-based Pareto Front PFt. While this definition resembles the concept of additive ε-dominance proposed by Laumanns et al. [31], it differs from it because of Eq. (8b), absent in ε-dominance, which enforces a strict (tolerance-based) inequality for at least one objective. Even though the dominance relation proposed in Eq. (8a, 8b) is not transitive in general, and thus the original pairwise comparison could fail in ordering the population and extracting the PFt, efficient algorithms have recently been developed [32] which convert the tolerance-based skyline query problem into Pareto-based dominance checking tasks over a grid space. In particular, it is demonstrated in [32] that, for any given tolerance vector ε = [ε1, …, εS], a solution \(\mathbf{p}\) belongs to PFt if and only if no \(\mathbf{r} \in PF\) dominates \(\mathbf{p}\) with tolerance ε according to definition (8a, 8b).
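Under these definitions, the tolerance-based relation (8a, 8b) and the PFt extraction rule demonstrated in [32] can be sketched as follows, reusing the brute-force helpers above (a hypothetical illustration, not the implementation used in TOSCA; `pf_indices` is the output of the `pareto_front` sketch).

```python
import numpy as np

def dominates_tol(w1, w2, eps):
    """Tolerance-based dominance of Eq. (8a, 8b): w1 must be within tolerance
    of w2 in every objective (8a) and better by more than the tolerance in at
    least one objective (8b). With eps = 0 it reduces to standard dominance."""
    w1, w2, eps = np.asarray(w1), np.asarray(w2), np.asarray(eps)
    return bool(np.all(w1 <= w2 + eps) and np.any(w1 < w2 - eps))

def tolerance_front(objectives, eps, pf_indices):
    """PFt via the result in [32]: a solution belongs to PFt if and only if
    no member of the standard Pareto Front dominates it with tolerance eps."""
    return [i for i, wi in enumerate(objectives)
            if not any(dominates_tol(objectives[j], wi, eps)
                       for j in pf_indices)]
```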

The effect of relaxing the notion of Pareto dominance depends on the shape of the original PF. This is shown in Fig. 3, where typical PF configurations of a two-objective problem are displayed. Here, \(\Delta\omega_i\) is the absolute difference in objective i when going from point P1 to point P2 (optimal according to objectives ω1 and ω2, respectively), while \(\varepsilon_i\) is the tolerance in the ith objective. In Fig. 3a, the effect of a non-zero resolution \(\varepsilon_i > \Delta\omega_i\) is to increase the spread of the solutions. Conversely, in Fig. 3b, the original PF is characterised by nearly horizontal and nearly vertical regions, in which a small degradation of the minimum objective (ω1, starting from P1 and following the arrow) entails a substantial improvement of the other. In this case, the effect of the tolerance-based formulation is to focus on the region near the "corner" of the L-shaped PF, where both objectives assume low values. On the contrary, a concave PF, such as that shown in Fig. 3c, has the feature that the minimum objective (again ω1, starting from P1) may be significantly degraded without improving the other objective (almost horizontal branch near P1 and vertical one near P2). This indicates poor consistency between the two objectives, and the introduction of a finite resolution highlights this circumstance by splitting the original Pareto Front into separate subsets.

Fig. 3

Different effects of relaxing Pareto dominance on the spreading of the solutions: a increasing, b decreasing, c splitting into subsets

Moving to the parameter space, the benefit of using the tolerance-based definition of Pareto optimality is that it avoids both over- and under-confidence in the parameter uncertainty estimation. Firstly, if the multiple tests are very consistent with each other, i.e., the PF is located in a very small region of the objective space (Fig. 3a), low-sensitivity parameters may present unrealistically limited associated uncertainty (over-confidence). Secondly, if the PF is characterised by nearly vertical or horizontal branches (Fig. 3b), it may span considerable regions of the parameter space for a limited improvement in fidelity to one test, leading to large uncertainty intervals for the parameters and consequently to under-confidence in the results. The non-standard definition of Pareto dominance proposed herein avoids both drawbacks, enlarging the PF bounds in the first case and focusing on the corner region of the PF in the second, thus leading to a more realistic uncertainty estimation. The practical usefulness of the approach will be shown with reference to an identification problem involving masonry panels in Sect. 3.

2.2.3 Numerical solution of the identification problem

To solve problem (5), it is necessary to use an optimisation algorithm able to track the entire PFt. In this respect, population-based meta-heuristics are preferable to gradient-based algorithms because they work on ensembles of alternatives, and thus are naturally designed to converge towards a region of the solution space instead of a single value. In this work, the Non-dominated Sorting Genetic Algorithm II (NSGA-II) [33], a state-of-the-art approach to multi-objective optimisation implemented in the software TOSCA [34], was used to solve the multi-objective identification problem. It exploits the concepts of non-domination ranking and crowding distance to reach convergence to the PF while maintaining diversity in the population. At the end of each generation, the individuals are divided into progressive non-domination fronts. Inside each front, the individuals are ranked based on a density-estimation metric, called crowding distance, which measures how close (in terms of objective values) an individual is to its neighbours; more isolated points are favoured to increase diversity in the population. Even though in the original formulation the domination ranking is associated with tournament selection, this is not mandatory. In the examples reported in this paper, Stochastic Universal Sampling [35] was used as the selection operator. Blend crossover [36] and aleatory mutation were then applied to create a new population.
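As an illustration of the density-estimation metric described above, a minimal sketch of the crowding-distance computation follows; the complete NSGA-II loop (non-dominated sorting, selection, crossover, mutation) is omitted.

```python
import numpy as np

def crowding_distance(F):
    """Crowding distance for one non-domination front. F is an (n, S) array
    of objective values; boundary individuals receive an infinite distance so
    that extreme solutions are always retained, and larger distances mark
    more isolated (hence preferred) individuals."""
    n, S = F.shape
    dist = np.zeros(n)
    for m in range(S):
        order = np.argsort(F[:, m])
        dist[order[0]] = dist[order[-1]] = np.inf
        span = F[order[-1], m] - F[order[0], m]
        if span == 0.0:
            continue
        # normalised gap between each interior point's two neighbours
        dist[order[1:-1]] += (F[order[2:], m] - F[order[:-2], m]) / span
    return dist
```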

3 Estimation of elastic properties of masonry from diagonal compression tests

The procedure described above was applied to the results of an experimental activity involving brick–mortar unreinforced masonry, carried out at the University of Trieste (Italy). This was part of a broader experimental programme aimed at designing a novel experimental–numerical procedure for the identification of masonry properties [37]. Two masonry types (MT1 and MT2) were prepared in different periods. The main material parameters obtained from standard tests on small specimens are reported in Table 1, as average values and coefficients of variation (CV).

Table 1 Material properties as estimated from tests on small samples

| Property | Symbol | MT1 average | MT1 CV (%) | MT2 average | MT2 CV (%) |
| --- | --- | --- | --- | --- | --- |
| Mortar compressive strength | fm | 7.855 MPa | 6.24 | 14.15 MPa | 5.61 |
| Brick compressive strength | fb | 18.27 MPa | 14.08 | 24.72 MPa | 6.71 |
| Brick tensile strength | fvb | 4.233 MPa | 3.94 | 5.50 MPa | 7.49 |
| Brick Young modulus | Eb | 11.23 GPa | 16.28 | 8.92 GPa | 9.35 |
| Brick Poisson ratio | νb | 0.19 | 35.71 | 0.13 | 11.12 |
| Masonry compressive strength (orthogonal to the bed joints) | fwc,⊥ | 12.88 MPa | 13.72 | 19.68 MPa | 7.58 |
| Masonry compressive strength (parallel to the bed joints) | fwc,∥ | 9.32 MPa | 2.52 | 17.74 MPa | 3.43 |

3.1 The experimental programme

Two 640 × 640 × 90 mm3 square panels per masonry type were prepared to be tested under diagonal compression. The tests are labelled CDx-MTy, where x is the identification number of the test and y the masonry type. After curing, the panels were rotated and placed on a stiffened steel angle; a similar angle was applied on the opposite corner for the load application. Between the angles and the specimen, a thin layer of chalk and sand in 1:1 proportion by volume (plaster) was applied to obtain a uniform stress distribution. The load was applied by means of a hydraulic jack (load capacity 200 kN). Each specimen was loaded to 20 kN, then unloaded and reloaded until failure; only for specimen 1 of MT2 (CD1-MT2) did the first loading reach 100 kN. Displacements were acquired using 12 Linear Variable Displacement Transducers (LVDTs) of 25 mm stroke, with six placed on each side of the panel to measure the diagonal displacements and the displacements of the four edges (Fig. 4a). To avoid capturing local effects, the sensors were not placed on the first and last brick layers. Aluminium bars were used to connect each LVDT to the opposite gauge point.

Fig. 4

Masonry panels used for the diagonal compression test: a scheme of the panel geometry and LVDT layout; b photo of a representative panel during the test

The results of the four diagonal compression tests in terms of LVDT displacement on the diagonals are summarised in Figs. 5 and 6, assuming a positive sign for LVDT lengthening and negative for shortening. An initial linear elastic branch is generally recognisable, especially for masonry MT1 (Fig. 5); the same can be noticed for specimen CD1-MT2, while the behaviour of CD2-MT2 appears nonlinear from the beginning (Fig. 6). This could be due to load eccentricity causing geometrically nonlinear effects on the specimen, and it will be properly analysed in Sect. 3.4. For the calibrations described in Sect. 3.4, the elastic stiffness was defined as the secant stiffness between 20 and 60 kN.
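As a sketch of how such a secant stiffness may be extracted from a recorded curve (the paper does not detail this post-processing, so the interpolation below is an assumption, valid for a monotonically increasing load branch):

```python
import numpy as np

def secant_stiffness(load_kN, disp_mm, f_low=20.0, f_high=60.0):
    """Secant stiffness (kN/mm) between two load levels on the initial branch
    of a load-displacement curve; 20 and 60 kN match the levels used here."""
    u_low = np.interp(f_low, load_kN, disp_mm)    # assumes monotonic loading
    u_high = np.interp(f_high, load_kN, disp_mm)
    return (f_high - f_low) / (u_high - u_low)
```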

Fig. 5

Load–displacement curves for MT1

Fig. 6

Load–displacement curves for MT2

3.2 Description of the FE model

A three-dimensional finite element model of the test, shown in Fig. 7, was created in Abaqus 6.9 [38]. All materials were discretised by four-node tetrahedral elements (C3D4) with regular patterns along the three axes. The level of refinement was based on a preliminary convergence analysis, not reported here. The characteristic length of the tetrahedral elements representing the bricks and the steel parts was 27.5, 45 and 30 mm along the x-, y- and z-axes, respectively, while for the mortar and plaster materials the number of elements along the thickness was set equal to two (5 mm characteristic length). According to this scheme, the total number of elements was 17,904. The two steel angles were modelled using very stiff solid elements (E = 300 GPa) at the top and bottom of the panel; the bottom angle was fully restrained. Four vertical forces F00, F01, F10, F11, spaced 184 and 90 mm in the X- and Y-directions, respectively, were applied on the top angle: by changing the magnitude of each relative to the others, it is possible to simulate accidental eccentricities in the X- and Y-directions (Fig. 7). The parameters ex and ey identify the eccentricity of the load application point: the four forces shown in Fig. 7 are related to the total force F by the expressions:

Fig. 7

View of the FE model of the diagonal compression test

$$\begin{aligned} F_{00}&=e_x \cdot e_y \cdot F \qquad & F_{01}&=e_x \cdot (1-e_y) \cdot F \\ F_{10}&=(1-e_x) \cdot e_y \cdot F \qquad & F_{11}&=(1-e_x) \cdot (1-e_y) \cdot F \end{aligned}$$
(9)

It is easy to verify that the sum of the four forces is always equal to F. A perfectly centred load is characterised by ex = ey = 0.5.
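A few lines suffice to check Eq. (9); the function name is illustrative.

```python
def corner_forces(F, ex, ey):
    """Distribute the total jack force F among the four corner loads of
    Eq. (9); ex = ey = 0.5 reproduces a perfectly centred load."""
    return (ex * ey * F,                  # F00
            ex * (1.0 - ey) * F,          # F01
            (1.0 - ex) * ey * F,          # F10
            (1.0 - ex) * (1.0 - ey) * F)  # F11

# The shares always sum to F, whatever the eccentricities:
assert abs(sum(corner_forces(100.0, 0.7, 0.55)) - 100.0) < 1e-9
```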

The head joint Young modulus was assumed to differ from the bed joint modulus Em. Very little is reported in the literature about the effects of the head joints on the response of a masonry wall. In general, due to the lack of significant normal stress, shrinkage of the head joints and the subsequent loss of bond between unit and mortar, their contribution to shear transfer is usually considered smaller than that of the bed joints. While many works in the literature focus on the influence of head joints on strength [39, 40], to the author's knowledge there is a lack of significant studies investigating the elastic properties, which are likely to depend on a large number of factors, such as joint thickness, environmental conditions (shrinkage) and quality of workmanship. For this reason, in this work the effective head joint stiffness contribution was accounted for by considering a different material, whose Young modulus was evaluated as r·Em, with \(0.0 \leq r \leq 1.0\).

The connection between the masonry and the steel is a critical element in the FE approximation. Nonlinear interfaces or contact elements would be the most realistic way to simulate the connection between the two materials. However, they would make the model nonlinear, dramatically increasing the computation time and making the identification analysis cumbersome. To keep the model elastic, such connection types were not applied. A rigid connection, however, would be similarly unfeasible, being highly inaccurate in modelling the transfer of stress from steel to masonry. For these reasons, a layer of elastic material was introduced between the steel plate and the masonry panel, with elastic properties to be identified. Even though the elastic assumption is a very crude representation of reality, it allows one to deal with a fully elastic model.

3.3 Global sensitivity analysis

A preliminary global sensitivity analysis was performed on the parameters entering the model. The method of elementary effects (EE) [41] was used in this work because of its efficiency and ease of application. The EE method is a screening method aimed at determining whether the effect of each parameter is (a) negligible, (b) linear and additive, or (c) nonlinear or involved in interactions with other inputs, with a reasonable computational effort (much less than Monte Carlo-based methods). It is based on the evaluation of the elementary effect EEi of the parameter pi on the scalar response d(p) when it is moved by a step \(\Delta_i\) while all the other parameters are kept fixed. It is defined as:
$$\mathrm{EE}_i=\frac{d(p_1,\ldots,p_{i-1},p_i+\Delta_i,p_{i+1},\ldots,p_k)-d(p_1,\ldots,p_k)}{\Delta_i}$$
(10)
with k the number of parameters.

The global sensitivity measure is the finite distribution Fi composed of all possible EEi. It may be represented by its mean and standard deviation, but Campolongo et al. [42] proposed to use the value \(\mu_i^*=\frac{1}{N}\sum_{j=1}^{N}\left|\mathrm{EE}_i^{(j)}\right|\) to rank parameters, as a large value of \(\mu_i^*\) indicates an input with an important "overall" influence on the output. This measure was used in this work.
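A simplified sketch of the EE screening may clarify the bookkeeping: for N base points in the normalised parameter space, each parameter is perturbed in turn and Eq. (10) is evaluated, for a total of N(k + 1) model runs (100 in this study with N = 10 and k = 9). One-at-a-time steps from given base points are used here for brevity, whereas the paper follows the Sobol-sequence sampling of [43].

```python
import numpy as np

def mu_star(model, base_points, delta=0.1):
    """Campolongo's mu* measure: the mean absolute elementary effect of
    Eq. (10) for each parameter. `model` maps a normalised parameter vector
    in [0, 1]^k to the scalar discrepancy of Eq. (11)."""
    base_points = np.atleast_2d(base_points)      # shape (N, k)
    N, k = base_points.shape
    ee = np.zeros((N, k))
    for j, p in enumerate(base_points):
        d0 = model(p)                             # 1 run per base point
        for i in range(k):                        # + k perturbed runs
            q = p.copy()
            step = delta if p[i] + delta <= 1.0 else -delta  # stay in [0, 1]
            q[i] += step
            ee[j, i] = (model(q) - d0) / step
    return np.abs(ee).mean(axis=0)
```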

The N different EEs may be computed by different techniques, starting from the original formulation based on trajectories [41]. Here, the procedure proposed in [43], based on sampling via a Sobol sequence, was followed. The input parameters (and the increments \(\Delta_i\)) are first made non-dimensional with respect to their variation ranges, so that they all vary from 0 to 1. The parameters entering the model can be divided into:

  • Brick parameters: Eb, υb;

  • Mortar parameters: Em, υm, r;

  • Plaster parameters: Epl, υpl;

  • Boundary conditions: e x , e y .

They represent the parameter vector p. The variation ranges for the global sensitivity analysis are shown in Table 2.

Table 2 Variation range for the material parameters in the sensitivity analysis

| Parameter | Lower bound | Upper bound |
| --- | --- | --- |
| Eb (N/mm2) | 5000 | 20,000 |
| υb | 0.0 | 0.5 |
| Em (N/mm2) | 1000 | 20,000 |
| υm | 0.0 | 0.5 |
| r | 0.0 | 1.0 |
| Epl (N/mm2) | 10 | 2000 |
| υpl | 0.0 | 0.5 |
| ex | 0.0 | 1.0 |
| ey | 0.0 | 1.0 |

The discrepancy function on which the sensitivity analysis is performed is:
$$\omega(\mathbf{p})=\frac{1}{L}\sum_{i=1}^{L}\left|u_{i,\exp}-u_{i,c}(\mathbf{p})\right|$$
(11)
where L = 12 is the number of measurements in a single test, \(u_{i,\exp}\) is the ith measured value and \(u_{i,c}\) is the corresponding numerical value computed by the FE model for a given value of the parameters \(\mathbf{p}\). Since this definition regards one single test, the sensitivity measures must be evaluated for each test and each masonry type.

Ten (N = 10) sample points in the parameter space were selected according to the procedure proposed in [43]. The total number of evaluations is thus N(k + 1) = 100, where k = 9 is the number of sought parameters. The results in terms of \(\mu^*\) for the cost function in both masonry types are displayed in Fig. 8. The plot shows that, in all cases, the most influential parameters for the recorded responses are estimated to be ey, Eb and Em. The other parameters have very low influence, meaning that they are expected to be identified with greater uncertainty.

Fig. 8

Results of the global sensitivity analysis for masonry type a MT1 and b MT2

3.4 Calibration of the elastic parameters

The procedure described in Sect. 2 was then applied to the results of the diagonal compression tests. For each masonry type, the parameter vector \(\mathbf{p}\) comprises the material parameters Eb, υb, Em, υm, r, Epl, υpl, plus the eccentricity parameters ex1, ex2, ey1, ey2 defined for each of the two tests. The solution \(\tilde{\mathbf{p}}\) is obtained by solving the multi-objective problem:
$$\tilde{\mathbf{p}}=\arg\min_{\mathbf{p}}\left[\frac{1}{N^{CD1}}\sum_{i=1}^{N^{CD1}}\left|u_{i,\exp}^{CD1}-u_{i,c}^{CD1}(\mathbf{p})\right|,\ \frac{1}{N^{CD2}}\sum_{i=1}^{N^{CD2}}\left|u_{i,\exp}^{CD2}-u_{i,c}^{CD2}(\mathbf{p})\right|\right]$$
(12)
where \(N^{CD1}\) and \(N^{CD2}\) are the numbers of measurements in tests CD1 and CD2, respectively; \(u_i\) is the ith LVDT displacement and the subscripts exp and c refer to experimental and computed data. To solve problem (12), a genetic algorithm with the following parameters was used:
  • Population: 50 individuals;

  • Initial population generated by the Sobol algorithm;

  • Number of generations: 100;

  • Selection: Stochastic Universal Sampling, with linear ranking based on domination and scaling pressure equal to 2.0;

  • Crossover: Blend-α, with α = 2.0;

  • Crossover probability: 1.0;

  • Mutation probability: 0.005.

Both the operators and the GA internal variables were selected based on the results of previous research [34]. In particular, quasi-random sequences such as the Sobol algorithm [44] explore the parameter space more uniformly than simple random sequences, allowing a reduction of the population size, which was defined based on a preliminary sensitivity analysis. Stochastic Universal Sampling avoids the phenomenon of genetic drift; Blend-α crossover with α = 2.0 is designed to preserve the probability density function of the population, while keeping its ability to yield novel solutions in the finite-population case [45]. According to the same principle, the scaling pressure and number of generations were designed to gradually narrow the probability distribution of the population. Crossover and mutation probabilities are based on previous research and are consistent with general literature assumptions.
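The variation operators listed above may be sketched as follows; genes normalised to [0, 1] are assumed, and selection and the generational loop are omitted, so this is an illustration rather than the TOSCA implementation.

```python
import numpy as np

rng = np.random.default_rng()

def blend_crossover(p1, p2, alpha=2.0):
    """Blend-alpha (BLX-alpha) crossover [36]: each offspring gene is drawn
    uniformly from the parents' interval enlarged by alpha times its width on
    both sides; alpha = 2.0 matches the setting used in the calibration."""
    lo, hi = np.minimum(p1, p2), np.maximum(p1, p2)
    span = hi - lo
    child = rng.uniform(lo - alpha * span, hi + alpha * span)
    return np.clip(child, 0.0, 1.0)   # keep genes inside the search range

def aleatory_mutation(p, prob=0.005):
    """Aleatory mutation: each gene is replaced by a fresh uniform random
    value with small probability (0.005 in this study)."""
    mask = rng.random(p.shape) < prob
    return np.where(mask, rng.uniform(size=p.shape), p)
```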

3.4.1 Masonry type MT1

The Pareto Front identified by the algorithm considering no uncertainty (ε = 0 in Eq. (8a, 8b)) is shown in Fig. 9a in the objective plane. Even though the solution is not unique, the deterioration of one objective due to the satisfaction of the other is quite limited (Δω < 0.001 mm).

Fig. 9

Inverse analysis results for MT1. Pareto front and first non-dominated fronts in terms of: a objectives, b, c boundary conditions, dg elastic properties for brick, mortar and plaster

The good consistency between the two tests is evidenced by the scatter plot of the Pareto solutions in the parameter space (Fig. 9b–g). Apart from the mortar Young modulus and the head-joint-to-bed-joint stiffness ratio r, all parameters seem to be identified almost univocally. The load appears to have a considerable eccentricity ex in both tests (Fig. 9b), while a rather smaller out-of-plane eccentricity ey is identified (Fig. 9c). The in-plane eccentricity may be due to accidental rotation of the steel angle caused by crushing of the plaster layer beneath. The brick Young modulus is identified as about 11.7 GPa, very close to the experimental estimate from compressive tests (Table 1). Conversely, the mortar Young modulus is not identified, since all values greater than 16 GPa give discrepancy values belonging to the PF. This is reasonable: if the mortar is very stiff, its effect on the displacement field becomes negligible and the deformability is governed by the brick. The calibration of this parameter is thus not bounded, and any upper limit would represent a feasible solution. The head-to-bed-joint stiffness ratio r is not adequately identified, assuming values between 0.6 and 1.0. The Poisson's ratios for brick and mortar seem identified, but while the former assumes reasonable values around 0.13, similar to those recorded in the tests on brick samples (Table 1) and generally reported in the literature [46], the value of 0.5 may seem unrealistically high for mortar. It should be said, however, that the mortar Poisson's ratio highly depends on the stress state, and values higher than 0.5 were recorded in [47] for different mortar mixtures at low levels of confining pressure. The assumption of low confining pressure seems sound in this case because, if the mortar is stiffer than the brick, the difference in stiffness results in tension in the mortar and compression in the brick [47] orthogonally to the direction along which the masonry is compressed. Finally, the plaster properties seem identified too, with Epl ≈ 200 MPa and υpl = 0.5. Here, according to the author, the high value of the Poisson's ratio is justified by the fact that, as previously acknowledged, the linearly elastic behaviour assumed for the plaster layer is a rather rough approximation of the real mechanical response, which shows an almost immediate mode-II (shear) failure. This means that the (damaged) shear stiffness is very low compared to the axial stiffness and, since in the numerical elastic model \(\frac{G}{E}=\frac{1}{2(1+\upsilon)}\), the high plaster Poisson's ratio tries to reproduce this loss of stiffness. Conversely, the value of the plaster Young modulus seems reasonable.

Figure 10 compares numerical results and experimental data for the 12 measurements. The numerical values are those obtained using the best solution for each objective. It may be noticed that the solution fits seven data points exactly, which is a feature of L1-norm regression. The other points, some of which appear to be outliers, present greater discrepancy values.

Fig. 10

Comparison of numerical results and experimental data for MT1: specimens a CD1-MT1, b CD2-MT1

Even though some counter-intuitive results (i.e., the high mortar Poisson's ratio) may be explained, it seems unrealistic to be able to identify the plaster properties or the mortar and brick Poisson's ratios with the low uncertainty displayed in Fig. 9. The reason may be found in the high consistency between the two tests, a situation similar to that shown in Fig. 3a. In Fig. 11, the results of the identification analysis with resolution ε = 0.002 mm (typical of the LVDTs used to acquire the experimental data) are shown. It is now possible to distinguish the parameters which are identified with low uncertainty (ex, ey, Eb) from those that, under the chosen resolution, are not identifiable (Em, r, Epl). High values of υm and υpl (greater than 0.4) and low values of υb (less than 0.2) are detected, so the arguments previously made still hold.

Fig. 11

Inverse analysis results for MT1 with resolution ε = 0.002. Pareto Front and first non-dominated fronts in terms of: a objectives, b, c boundary conditions, dg elastic properties for brick, mortar and plaster

3.4.2 Masonry type MT2

The PF with associated uncertainty ε = 0 in the objective plane is displayed in Fig. 12a. Unlike the previous case, the satisfaction of one objective implies a considerable increase in the other (Δω ≈ 0.01 mm). Implicit in the definition of the PF is that, if the solutions of the "basic" optimisation problems (i.e., those in which the two objectives are considered separately) are very different, the front becomes wider, both in the objective space (Fig. 12a) and in the parameter space (Fig. 12b–g). This yields a more uncertain identification of most parameters.

Fig. 12

Inverse analysis results for MT2. Pareto Front and first non-dominated fronts in terms of: a objectives, b, c boundary conditions, dg elastic properties for brick, mortar and plaster

As in the previous case, the determination of the boundary condition parameters ex and ey seems affected by low uncertainty, but here the eccentricities appear more pronounced. The brick Young modulus is very narrowly dispersed around a value of 8.8 GPa, which corresponds well to the experimental outcome reported in Table 1. Similarly to MT1, nothing can be stated about the mortar Young modulus because of the high uncertainty, except that it is higher than the brick Young modulus. All the other parameters (Fig. 12e–g) cannot be identified because of the high scattering.

This gives us the opportunity to underline the features of the proposed multi-objective approach. The availability of several tests, processed in the multi-objective optimisation framework, allows the reliability of the estimation due to repeatability to be quantified. For masonry type MT1, since the two tests were consistent with each other, the reliability was high, and thus the estimated parameters showed little scattering in the parameter space. For masonry type MT2, a large spread of PF solutions in the parameter space can be observed; the uncertainty in the estimation of some parameters, i.e., the mortar and plaster properties and the brick Poisson's ratio, is thus larger. In Fig. 13, the comparison between numerical and experimental measurements is shown.

Fig. 13

Comparison of numerical results and experimental data for MT2: specimens a CD1-MT2, b CD2-MT2

3.4.3 The role of sensitivity analysis

As a last remark, it is interesting to note that a global sensitivity analysis, such as the EE method used in this work, may identify a parameter as important when in the specific case it is not. An example is the mortar Young modulus. The reason is that the sensitivity indices may vary widely over the variation range of the parameter: when Em is small compared to Eb, its influence on the response is large, as it governs the deformation field. On the contrary, when the mortar is very stiff, the displacement of the specimen is controlled by the brick Young modulus, while large variations in Em have low or negligible influence. Conversely, the parameter ex was detected as having low influence, meaning that, based on the preliminary sensitivity analysis, one could have removed it from the set of parameters to identify, possibly leading to wrong estimation results. The approach followed in this paper overcomes these drawbacks: it does not impose a preliminary choice of the identifiable parameters by means of a sensitivity analysis, but it is still able to associate a solution with the uncertainty derived from sensitivity, test repeatability and precision of the experimental data. In Table 3, the PF solutions are shown in the form of mid-range values and intervals as the final results of the identification procedure.
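A plausible reading of how the entries of Table 3 are computed from the PF samples (the original post-processing is not detailed, so this summary is an assumption):

```python
import numpy as np

def midrange_interval(samples):
    """Summarise the PF solutions for one parameter as mid-range value and
    half-width of the interval they span, the format used in Table 3."""
    samples = np.asarray(samples, dtype=float)
    lo, hi = samples.min(), samples.max()
    return 0.5 * (lo + hi), 0.5 * (hi - lo)

# e.g. midrange_interval(pf_Eb) -> (11.7, 0.2) for the MT1 brick Young modulus
```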

Table 3 Results of the identification process

| Parameter | MT1 (ε = 0.0) | MT1 (ε = 0.002 mm) | MT2 (ε = 0.0) |
| --- | --- | --- | --- |
| Eb (GPa) | 11.7 ± 0.2 | 12.8 ± 1.5 | 8.8 ± 0.4 |
| υb | 0.13 ± 0.0 | 0.11 ± 0.11 | 0.14 ± 0.14 |
| Em (GPa) | 18.4 ± 1.6 | 14.3 ± 5.7 | 18.2 ± 1.79 |
| υm | 0.5 ± 0.0 | 0.45 ± 0.05 | 0.23 ± 0.23 |
| r | 0.86 ± 0.14 | 0.51 ± 0.49 | 0.63 ± 0.37 |
| Epl (GPa) | 0.18 ± 0.03 | 0.77 ± 0.76 | 1.34 ± 0.66 |
| υpl | 0.5 ± 0.0 | 0.44 ± 0.06 | 0.29 ± 0.21 |
| ex1 | 0.71 ± 0.01 | 0.73 ± 0.12 | 0.87 ± 0.03 |
| ex2 | 0.40 ± 0.0 | 0.44 ± 0.10 | 0.52 ± 0.03 |
| ey1 | 0.57 ± 0.01 | 0.56 ± 0.07 | 0.58 ± 0.03 |
| ey2 | 0.56 ± 0.01 | 0.54 ± 0.06 | 0.31 ± 0.01 |

4 Discussion and conclusions

In this paper, a strategy for identification of material parameters in structural problems accounting for uncertainty has been proposed. It is based on multi-objective optimisation of an appropriate functional of the discrepancy between experimental and numerical results, formulated for multiple experimental responses. The solution of the multi-objective optimisation is represented by the Pareto Front of the non-dominated solutions, which is then post-processed to study the uncertainty of the identification. A non-standard formulation for Pareto dominance allows one to account for limited precision of the experimental data.

The applicative example regards a diagonal compression test, generally used to estimate the shear strength of masonry and here studied as a means to identify the elastic properties of its components. Two tests for each of two different masonry types were considered to assess the procedure. The main results may be summarised as follows:

  1. The standard definition of the Pareto Front accounts for the uncertainty related to the repeatability of the test. If the two tests are consistent (as for masonry MT1, where the deterioration of one objective needed to reach the optimum in the other was limited), the identification may provide a relatively precise (i.e., low-uncertainty) estimate of the sought parameters, even when they are not actually influential in the response. If the two tests are not consistent, i.e., the maximum-fidelity solutions are attained for remarkably different choices of parameter values, as for masonry type MT2, the uncertainty estimate will be accordingly larger.

  2. The tolerance-based definition of Pareto optimality allows one to consider the uncertainty due to the tolerance ε typical of the experimental data. In the case of masonry type MT1, this uncertainty exceeded that due to the repeatability of the test.

  3. The analysis of the PF and the subsequent determination of uncertainty can avoid the reduction of the number of parameters based on a preliminary sensitivity analysis, which is a commonly used approach in identification problems. This reduction, if not performed carefully, may lead to erroneous results when the removed parameters are important for the global response.

In the case analysed, with a resolution ε = 0.002 mm, typical of common LVDTs, only boundary conditions and brick Young modulus could be obtained with limited uncertainty. In future research, the procedure will be applied to different case studies of real-world tests to further validate the methodology.

Acknowledgements

The author is grateful to Prof. Amadio from the University of Trieste for fruitful discussions about uncertainty in the solution, and to Dr Franco Trevisan and the laboratory staff of the University of Trieste for the technical support necessary for the successful completion of the experimental tests described.

References

  1. Zienkiewicz O, Cormeau I (1974) Visco-plasticity—plasticity and creep in elastic solids—a unified numerical solution approach. Int J Numer Meth Eng 8(4):821–845
  2. Moresi L, Dufour F, Mühlhaus H-B (2003) A Lagrangian integration point finite element method for large deformation modeling of viscoelastic geomaterials. J Comput Phys 184(2):476–497
  3. Wriggers P (2006) Computational contact mechanics. Springer, Berlin Heidelberg
  4. Anderson T (2005) Fracture mechanics: fundamentals and applications. CRC Press, Boca Raton
  5. Tarantola A (2005) Inverse problem theory and methods for model parameter estimation. Society for Industrial and Applied Mathematics, Philadelphia
  6. Buljak V (2011) Inverse analyses with model reduction. Springer, Berlin
  7. Cunha A, Caetano E (2006) Experimental modal analysis of civil engineering structures. Sound Vib 6(40):12–20
  8. Sanayei M, Imbaro G, McClain J, Brown L (1997) Structural model updating using experimental static measurements. J Struct Eng 123(6):792–798
  9. Caddemi S, Morassi A (2013) Multi-cracked Euler–Bernoulli beams: mathematical modeling and exact solutions. Int J Solids Struct 50(6):944–956
  10. Wang M, Dutta D, Kim K, Brigham J (2015) A computationally efficient approach for inverse material characterization combining Gappy POD with direct inversion. Comput Methods Appl Mech Eng 286:373–393
  11. Avril S, Bonnet M, Bretelle A-S, Grediac M, Hild F, Ienny P, Latourte F, Lemosse D, Pagano S, Pagnacco E, Pierron F (2008) Overview of identification methods of mechanical parameters based on full-field measurements. Exp Mech 48(4):381–402
  12. Isaac T, Petra N, Stadler G, Ghattas O (2015) Scalable and efficient algorithms for the propagation of uncertainty from data through inference to prediction for large-scale problems, with application to flow of the Antarctic ice sheet. J Comput Phys 296:348–368
  13. Maier G, Buljak V, Garbowski T, Cocchetti G, Novati G (2014) Mechanical characterization of materials and diagnosis of structures by inverse analyses: some innovative procedures and applications. Int J Comput Methods 11:1343002
  14. Erdogan YS, Bakir PG (2013) Inverse propagation of uncertainties in finite element model updating through use of fuzzy arithmetic. Eng Appl Artif Intell 26(1):357–367
  15. Khodaparast HH, Mottershead JE, Badcock KJ (2011) Interval model updating with irreducible uncertainty using the Kriging predictor. Mech Syst Signal Process 25(4):1204–1226
  16. Liu J, Han X, Jiang C, Ning HM, Bai YC (2011) Dynamic load identification for uncertain structures based on interval analysis and regularization method. Int J Comput Methods 8(4):667–683
  17. Fedele F, Muhanna R, Xiao N, Mullen R (2015) Interval-based approach for uncertainty propagation in inverse problems. J Eng Mech 141(1):06014013
  18. Fernández-Martínez J, Fernández-Muñiz Z, Pallero J, Pedruelo-González L (2013) From Bayes to Tarantola: new insights to understand uncertainty in inverse problems. J Appl Geophys 98:62–72
  19. Roy CJ, Oberkampf WL (2011) A comprehensive framework for verification, validation, and uncertainty quantification in scientific computing. Comput Methods Appl Mech Eng 200:2131–2144
  20. The American Society of Mechanical Engineers (2006) Guide for verification and validation in computational solid mechanics. ASME
  21. Gokce H, Catbas F, Gul M, Frangopol D (2013) Structural identification for performance prediction considering uncertainties: case study of a movable bridge. J Struct Eng 139(10):1703–1715
  22. Atamturktur S, Liu Z, Cogan S, Juang H (2014) Calibration of imprecise and inaccurate numerical models considering fidelity and robustness: a multi-objective optimization-based approach. Struct Multidiscip Optim 51(3):659–671
  23. Claerbout JF, Muir F (1973) Robust modeling with erratic data. Geophysics 38(5):826–844
  24. Miettinen K (1999) Nonlinear multiobjective optimization. Springer, New York
  25. Jin S-S, Cho S, Jung H-J, Lee J-J, Yun C-B (2014) A new multi-objective approach to finite element model updating. J Sound Vib 333(11):2323–2338
  26. Chisari C, Francavilla AB, Latour M, Piluso V, Rizzano G, Amadio C (2017) Critical issues in parameter calibration of cyclic models for steel members. Eng Struct 132:123–138
  27. Jung S, Ok S-Y, Song J (2010) Robust structural damage identification based on multi-objective optimization. Int J Numer Meth Eng 81:786–804
  28. Wang M, Brigham JC (2014) Assessment of multi-objective optimization for nondestructive evaluation of damage in structural components. J Intell Mater Syst Struct 25(9):1082–1096
  29. Shim M-B, Suh M-W (2002) A study on multiobjective optimization technique for inverse and crack identification problems. Inverse Probl Eng 10(5):441–465
  30. Farina M, Amato P (2004) A fuzzy definition of "optimality" for many-criteria optimization problems. IEEE Trans Syst Man Cybern Part A Syst Hum 34(3):315–326
  31. Laumanns M, Thiele L, Deb K, Zitzler E (2002) Combining convergence and diversity in evolutionary multiobjective optimization. Evol Comput 10(3):263–282
  32. Santoso BJ, Chiu G-M, Mumpuni R (2015) An efficient grid-based framework for answering tolerance-based skyline queries. In: Proceedings of the international conference on information & communication technology and systems (ICTS), Surabaya, Indonesia
  33. Deb K, Pratap A, Agarwal S, Meyarivan T (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 6(2):182–197
  34. Chisari C (2015) Inverse techniques for model identification of masonry structures. PhD thesis, University of Trieste
  35. Baker JE (1987) Reducing bias and inefficiency in the selection algorithm. In: Proceedings of the second international conference on genetic algorithms and their application, Hillsdale, New Jersey
  36. Eshelman LJ, Schaffer JD (1992) Real-coded genetic algorithms and interval schemata. In: Foundations of genetic algorithms. Morgan Kaufmann, San Mateo, pp 187–202
  37. Chisari C, Macorini L, Amadio C, Izzuddin BA (2015) An experimental-numerical procedure for the identification of mesoscale material properties for brick masonry. In: Proceedings of the fifteenth international conference on civil, structural and environmental engineering computing, Prague
  38. Dassault Systèmes (2009) ABAQUS 6.9 documentation. Providence, RI
  39. Mann W, Müller H (1982) Failure of shear-stressed masonry—an enlarged theory, tests and application to shear walls. Proc Br Ceram Soc 30(1):223–235
  40. Ganz H (1985) Masonry walls subjected to normal and shear forces. PhD thesis, Institute of Structural Engineering, ETH Zurich
  41. Morris M (1991) Factorial sampling plans for preliminary computational experiments. Technometrics 33(2):161–174
  42. Campolongo F, Cariboni J, Saltelli A (2007) An effective screening design for sensitivity analysis of large models. Environ Model Softw 22:1509–1518
  43. Campolongo F, Saltelli A, Cariboni J (2011) From screening to quantitative sensitivity analysis. A unified approach. Comput Phys Commun 182:978–988
  44. Antonov IA, Saleev VM (1979) An economic method of computing LPτ-sequences. USSR Comput Math Math Phys 19(1):252–256
  45. Kita H, Yamamura M (1999) A functional specialization hypothesis for designing genetic algorithms. In: IEEE international conference on systems, man, and cybernetics, vol 3. IEEE, pp 579–584
  46. CUR (1994) Structural masonry: an experimental/numerical basis for practical design rules. CUR, Gouda
  47. McNary W, Abrams DP (1985) Mechanics of masonry in compression. J Struct Eng 111(4):857–870

Copyright information

© The Author(s) 2018

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Department of Civil and Environmental Engineering, Imperial College London, London, UK
