Sensitivity or Bayesian model updating: a comparison of techniques using the DLR AIRMOD test data

Deterministic model updating is now a mature technology widely applied to large-scale industrial structures. It is concerned with the calibration of the parameters of a single model based on one set of test data. It is, of course, well known that different analysts produce different finite element models, make different physics-based assumptions, and parameterize their models differently. Also, tests carried out on the same structure, by different operatives, at different times, under different ambient conditions produce different results. There is no unique model and no unique data. Therefore, model updating needs to take account of modeling and test-data variability. Much emphasis is now placed on what has become known as stochastic model updating where data are available from multiple nominally identical test structures. In this paper two currently prominent stochastic model updating techniques (sensitivity-based updating and Bayesian model updating) are described and applied to the DLR AIRMOD structure.


Introduction
Finite element model updating is a cornerstone of model validation and verification, its purpose being to adjust an analytical model such that its outputs agree with experimental data. The statistical approach of Collins et al. in 1974 [1] is the source of two streams of research on (i) sensitivity-based [2][3][4][5] and (ii) Bayesian methods [6][7][8] that have developed largely independently. The purpose of this paper is to make a comparison of these two approaches in terms of the claims made for them, the assumptions upon which the methods are based and an assessment of the performance of the techniques when applied to the same experimental data from the DLR AIRcraft MODel (AIRMOD) [9,10]. The purpose of model updating depends upon the problem at hand. If the updated model is to be used to predict the forced vibration response of the system in the range of the first few modes, then it is required only that the parameters be adjusted so that the eigenvalues and eigenvectors are accurately determined in the range of interest. Conversely, if the updated model is to be used to predict stresses, or to identify the location and extent of damage [11] in a structure, then the updating parameters have a much greater significance: they must be physically meaningful.
The two approaches, sensitivity and Bayesian, are philosophically quite different. The sensitivity method, throughout its development in the latter decades of the twentieth century, was entirely deterministic. Even when the physics is linear, the outputs are generally nonlinear in the updating parameters, so the relationship must be linearized under an assumption of small perturbations. A residual is formed and minimized in the weighted least-squares sense, giving an overdetermined system of equations in the small changes to the updating parameters. This leads to a sequence of iterations which continues until convergence is achieved.
Both techniques use the same data: one set of output measurements (i.e. one point in the multi-dimensional output space). However, whereas the traditional sensitivity method is deterministic, the Bayesian approach is probabilistic and based on the application of Bayes' rule [8,12,13]. The required solution, known as the posterior, is a multi-dimensional probability distribution function over the updating parameter space. This distribution is generally difficult to obtain, its normalizing constant being, in principle, an integral over the parameter space; it is approximated by statistical sampling using modern algorithms that make use of specialized versions of the Markov chain Monte Carlo (MCMC) method [14]. The mode of the distribution, also known as the maximum a posteriori (MAP) estimate, defines the updated model, and the distribution around the mode provides a measure of confidence in it. A sharply peaked distribution would tend to imply high confidence in the updated model, whereas shallowness might cast doubt on the choice of parameters.
The term 'stochastic model updating' was introduced in 2006 by Mares et al. [15,16] to describe a frequentist approach, based on the sensitivity method, using multiple sets of output data (i.e. a distribution of points in the multi-dimensional output space). In this and more recent work [17][18][19], the objective is not to reproduce a single point in the output space, but to replicate a distribution of output measurements. This approach might typically be applied to the inspection of motor car bodies-in-white coming from a production line. All the bodies-in-white are nominally identical, but measured variability in natural frequencies, of interest to the company, is caused by the accumulation of manufacturing tolerances. Just as the outputs form a distribution, so too do the updating parameters; but the updating parameter distributions have a different meaning from those traditionally obtained by Bayesian updating. In stochastic model updating the distributions represent the physical variability in the updating parameters. Returning to the bodies-in-white example, knowing the physical variability of a parameterized joint between the door post and the floor pan might enable better control over variability in natural frequencies.
The parameterization of a finite element model for updating is of vital importance and a point of divergence between the sensitivity and Bayesian methods. Of course, by the sensitivity method it is a necessary, but not sufficient condition that the outputs should be sensitive to small changes in the updating parameters. Thus the choice of sensitive updating parameters has to be reinforced by formal procedures as well as engineering understanding of both the finite element model and test structure. One such procedure is that of subset selection. Lallement et al. [20] tested one-by-one the closeness of the columns of the sensitivity matrix to the vector of output residuals. This approach was extended by Friswell et al. [21] to allow the simultaneous consideration of multiple columns by investigating the angles between subspaces-small angles representing a good subset of parameters. There are numerous examples where the choice of sensitive updating parameters has been justified by engineering understanding (e.g. [22,23]). The inspection of vibration modes can be especially helpful, typically when a parameter is needed to improve the prediction of one natural frequency, whilst another (already well predicted) should remain unchanged. In such a case, one would aim to identify a region of the structure that is active (strained) in the first mode but passive (unstrained) in the second. Parameterization of this region of the structure will produce the desired effect.
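The subspace-angle idea for subset selection can be sketched numerically. The toy sensitivity matrix, residual, and the `subset_angle` helper below are illustrative assumptions rather than the full procedure of [20,21]; the principle is simply that a candidate parameter subset whose sensitivity columns span the output residual closely yields a small principal angle.

```python
import numpy as np

def subset_angle(S_cols, r):
    """Principal angle between the output residual r and the subspace spanned
    by a candidate subset of sensitivity-matrix columns S_cols (n x m).
    A small angle indicates a good candidate parameter subset."""
    Q, _ = np.linalg.qr(S_cols)                # orthonormal basis for the subset
    r_hat = r / np.linalg.norm(r)              # unit residual vector
    cos_angle = np.clip(np.linalg.norm(Q.T @ r_hat), 0.0, 1.0)
    return np.arccos(cos_angle)

# Toy sensitivity matrix: 4 outputs, 3 candidate updating parameters.
S = np.array([[1.0, 0.1, 0.0],
              [0.5, 0.2, 0.0],
              [0.0, 1.0, 0.1],
              [0.0, 0.0, 1.0]])
r = np.array([1.0, 0.5, 0.0, 0.0])             # residual lies along column 0

angle_good = subset_angle(S[:, [0]], r)        # parameter 0: near-zero angle
angle_poor = subset_angle(S[:, [2]], r)        # parameter 2: near right angle
```

Ranking candidate subsets by this angle, smallest first, reproduces the selection criterion described above on this toy problem.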
In stochastic model updating, Fang et al. [24,25] used the analysis of variance (ANOVA) for parameter selection. They carried out a statistical F-test to evaluate the contribution of each updating parameter (or a group of parameters) to the total variance of each measured output. Silva et al. [26] demonstrated in a numerical example that sensitive parameters chosen to correctly reproduce the mean of the output distribution would not necessarily reproduce the variance. They developed a method for parameter selection based on decomposition of the output covariance matrix.
Bayesian model classification is consistent with the philosophy of Bayesian model updating [8]. One of the advantages of such an approach is that it is very general. The purpose is to find the most probable model from a number of different model classes, each model class being a candidate parameterization of a model of the test structure. The different model classes might, for example, be different parameterizations of the same finite element model, but could also include different model structures, i.e. different finite element idealizations of the same structure. Muto and Beck [27] showed that the log evidence could be expressed as two terms, the first being a data-fitting term and the second a parsimony term (also known as the information gain measure [28]) that aims to choose the simplest model class.
There are generally multiple ways in which to parameterize a chosen region of the finite element model for updating and much attention has been concentrated on mechanical joints. It was shown by Mottershead et al. [29] that the parameters chosen for joint updating can be critical to obtaining an updated model with physical meaning. They considered generic element parameters based on eigenvalue decomposition of a substructure stiffness matrix, joint offset parameters, geometric parameters and masses.
Both methods aim to correct the model by using a reasonably small number of updating parameters. In the sensitivity method, an overdetermined system of updating equations is required. One is constrained by the frequency bandwidth of the modal test that produces the data and usually would not wish to have many more parameters than the number of vibration modes in the range of the test. Natural frequencies are generally sensitive to well-chosen parameters and although mode-shape terms can certainly be used, they are usually less sensitive than natural frequencies. This means that although the system of updating equations may be overdetermined, the quantity of information present in the data is limited. Consequently, the overdetermined system is ill-conditioned and requires regularization [30]. This may be done using classical Tikhonov regularization to penalize the deviation in the parameters from their original values, or side constraints may be applied to embed some physical understanding of the structure and the chosen parameters. Ahmadian et al. [31] used the L-curve method and applied a side constraint to prevent the deviation of nominally identical joints from each other. It may be shown (e.g. [11]) that the MAP estimate produced by Bayesian model updating incorporates Tikhonov regularization automatically.
There is, however, a second reason why the small number of updating parameters is important for the Bayesian method. As the number of updating parameters increases, so too does the dimension of the posterior and generally the complexity of the geometry of the posterior. This makes sampling from the posterior more complicated with more MCMC rejections and convergence becomes excessively slow.
In the sensitivity-based model updating community, the usual validation procedure is to check the accuracy with which the updated model predicts data not used in the updating process. Examples include the prediction of either out-of-range modes or modes of the structure after modification, typically by adding masses or constraints to change the modes of the system [32]. This approach could of course be used by the Bayesian updating community, but the authors are not aware that this has been done. A Bayesian updated model is generally considered valid if the posterior is sharply peaked.
In the following sections, the sensitivity and Bayesian methods are briefly described. The performances of the two techniques are compared by means of a small-scale numerical example and the experimental DLR AIRMOD structure, which was tested multiple times after disassembly and reassembly. Finally, remarks are made on remaining challenges for the model updating community.

The sensitivity method
The sensitivity method [2][3][4][5] in its deterministic form has already been very successful in numerous updating exercises carried out on large-scale industrial structures (e.g. [22,23]), including updating of FE models with nonlinear stiffness and damping terms [33]. It was first applied to problems of variability in the dynamics of nominally identical test pieces by Mares et al. [15,16] using a multivariate gradient-regression approach. Hua et al. [17] and Haddad Khodaparast et al. [18] considered the uncertainty of multiple nominally identical test pieces from the frequentist viewpoint using perturbation methods. Computation of the Hessian matrix was shown in [18] to be unnecessary. Govers and Link [19] extended the classical sensitivity model updating method by a Taylor series expansion of the analytical output covariance matrix and obtained parameter mean values and covariances. Covariance formulas produced by Haddad Khodaparast et al. [18] and Govers and Link [19] were shown to be identical within an assumption of small perturbation by Silva et al. [26]. Sensitivity methods for covariance and interval updating [34] have recently been compared [35] using data obtained by repeated disassembly and reassembly of the DLR AIRMOD structure [9,10], a replica of the GARTEUR SM-AG19 testbed [36].

Updating the mean
Updating the mean is a deterministic problem, usually cast as

$$\bar{z}_e - \bar{z}_j = \bar{S}_j \, \Delta\bar{\theta}_j \qquad (1)$$

where typically $\bar{z}_e$ is a vector of mean values of experimentally measured natural frequencies and mode-shape terms, $\bar{z}_j$ is the corresponding vector of predictions determined from an analytical model with mean parameters $\bar{\theta}_j$, and $j$ denotes the iteration index. The updated vector of parameter mean values is then given by

$$\bar{\theta}_{j+1} = \bar{\theta}_j + \bar{T}_j \left( \bar{z}_e - \bar{z}_j \right) \qquad (2)$$

$$\bar{T}_j = \left( \bar{S}_j^T W_\varepsilon \bar{S}_j + W_\theta \right)^{-1} \bar{S}_j^T W_\varepsilon \qquad (3)$$

where the transformation matrix $\bar{T}_j$ is the generalized pseudo-inverse of the mean sensitivity matrix $\bar{S}_j$, and $W_\varepsilon$ and $W_\theta$ are weighting matrices. This allows for the regularization of ill-posed sensitivity equations [30,31].
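A minimal sketch of this regularized iteration, assuming a toy linear model (the matrix `A`, the target parameters and the weighting choices are invented for illustration, not taken from the paper):

```python
import numpy as np

def sensitivity_update(theta, z_e, z_model, S, W_eps, W_theta):
    """One regularized sensitivity iteration:
    theta_new = theta + T (z_e - z(theta)),
    with T = (S^T W_eps S + W_theta)^(-1) S^T W_eps."""
    T = np.linalg.solve(S.T @ W_eps @ S + W_theta, S.T @ W_eps)
    return theta + T @ (z_e - z_model)

# Toy linear 'model' z = A @ theta with a known target -- illustration only.
A = np.array([[2.0, 0.0],
              [1.0, 1.0],
              [0.0, 3.0]])
theta_true = np.array([1.2, 0.8])
z_e = A @ theta_true                       # synthetic 'measured' outputs

theta = np.zeros(2)                        # initial parameter estimate
for _ in range(20):                        # iterate until convergence
    theta = sensitivity_update(theta, z_e, A @ theta, A,
                               np.eye(3), 1e-6 * np.eye(2))
```

For this linear toy problem the iteration converges to the target parameters almost immediately; in a real updating exercise the sensitivity matrix would be recomputed from the finite element model at each iteration.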

Covariance updating
Haddad Khodaparast et al. [18] used a perturbation approach to develop a first-order expression for the covariance matrix of the updating parameters as

$$\mathrm{Cov}\left( \theta_{j+1}, \theta_{j+1} \right) = \bar{T}_j \, \mathrm{Cov}\left( z_e, z_e \right) \bar{T}_j^T \qquad (4)$$

whilst Govers and Link [19] used the Frobenius norm for the minimization of the difference between measured and analytical output covariance matrices,

$$\min_{\theta} \left\| \mathrm{Cov}\left( z_e, z_e \right) - \mathrm{Cov}\left( z_j, z_j \right) \right\|_F$$

and obtained a corresponding expression, Eq. (5), for the updated parameter covariance. Later, Silva et al. [26] showed that within an assumption of small variability Eqs. (4) and (5) were identical.
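The small-perturbation covariance propagation can be sketched as follows; the sensitivity matrix and the measured output covariance are toy values invented for illustration:

```python
import numpy as np

# First-order (small-perturbation) propagation of measured output covariance
# to parameter covariance, Cov(theta) ~= T Cov(z_e) T^T, with T the weighted
# pseudo-inverse of the sensitivity matrix. All numbers are toy assumptions.
S = np.array([[2.0, 0.0],
              [1.0, 1.0],
              [0.0, 3.0]])                         # 3 outputs, 2 parameters
W_eps = np.eye(3)                                  # output weighting
T = np.linalg.solve(S.T @ W_eps @ S, S.T @ W_eps)  # pseudo-inverse, no damping term

cov_ze = np.diag([0.04, 0.09, 0.01])               # measured output covariance
cov_theta = T @ cov_ze @ T.T                       # propagated parameter covariance
```

By construction the result is a symmetric, positive definite parameter covariance, which is what the updated parameter distribution requires.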

Bayesian model updating
Bayesian model updating, making use of the well-known Bayes' rule, came to the attention of the engineering research community mainly through the pioneering work of Beck and Katafygiotis [6,7] in the late 1990s; a detailed up-to-date exposition is given by Yuen [8]. At first, the most serious obstacle to practical engineering application was the excessive level of computer resource needed. This problem persists, but considerable progress has been made in addressing it, as will be discussed after a brief review of the mathematical prerequisites. We begin with the set of uncertain updating parameters, $\theta$, represented by a prior probability distribution $p(\theta \mid M)$ conditioned upon a chosen mathematical model, $M$, that incorporates known information such as expert opinion or practical experience and is assumed to be true. The updated (posterior) distribution, $p(\theta \mid D, M)$, is then given by Bayes' rule [12,13] as

$$p(\theta \mid D, M) = \frac{p(D \mid \theta, M) \, p(\theta \mid M)}{p(D \mid M)} \qquad (6)$$

where the likelihood function, $p(D \mid \theta, M)$, is the probability of obtaining the data, $D$, when the value of the updating parameters $\theta$ is fixed and a specific model $M$ is chosen. The denominator term, known as the marginal likelihood or the evidence, is a normalizing factor which ensures that

$$\int p(\theta \mid D, M) \, d\theta = 1. \qquad (7)$$

In model updating, the data, $D$, typically consist of the eigenvalue residuals

$$\varepsilon_i(\theta) = \lambda_{e_i} - \lambda_i(\theta), \quad i = 1, \ldots, n \qquad (8)$$

where $\lambda_{e_i}$ and $\lambda_i(\theta)$ represent the square of the $i$th measured natural frequency and the $i$th eigenvalue of a finite element model, respectively, but could include mode-shape and other residuals based on the dynamic response. The likelihood function is often chosen to be a multivariate normal distribution,

$$p(D \mid \theta, M) = \prod_{k=1}^{N} \frac{1}{\sqrt{(2\pi)^n \left| \Sigma \right|}} \exp\left( -\frac{1}{2} \, \varepsilon_k(\theta)^T \Sigma^{-1} \varepsilon_k(\theta) \right) \qquad (9)$$

where $\varepsilon_k(\theta)$ denotes the residual vector for the $k$th data point and $N$ denotes the number of data points. It is assumed that the residuals are of zero mean, $\bar{\varepsilon} = 0$, with covariance $\Sigma = \mathrm{Cov}(\varepsilon(\theta), \varepsilon(\theta))$, often justified on the basis that the information entropy [37] for a given mean and covariance is maximized by a multivariate normal distribution.
An assumption that the data are independent is commonly applied in Bayesian model updating, especially when only a single data point, $N = 1$, is available:

$$p(D \mid \theta, M) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi \sigma_i^2}} \exp\left( -\frac{\varepsilon_i(\theta)^2}{2\sigma_i^2} \right). \qquad (10)$$

The solution of Eq. (6) for multi-degree-of-freedom structural dynamics problems is generally intractable by analysis, so that sampling techniques using modern variants of the Markov chain Monte Carlo (MCMC) method are the accepted procedure. The Metropolis-Hastings (MH) algorithm [38,39] carries out a random walk in the parameter space, concentrated on regions of significant probability. Successive samples $\theta_{\ell+1}$ drawn from the so-called proposal distribution depend only upon the immediately preceding sample $\theta_\ell$ in the chain (the subscript $\ell$ denotes the $\ell$th step) and are independent of earlier samples. Based upon the current sample, the algorithm picks a candidate next sample, which is accepted if it is more probable than the current sample and otherwise accepted only with probability equal to the ratio of the two probabilities. In the case of a rejection, the sample remains unchanged and is used again in the next iteration. It can be shown that convergence is asymptotic upon an equilibrium distribution of the updating parameters with increasing numbers of samples. If the data are large and sufficiently informative, then convergence is found to be independent of the proposal distribution; the choice of proposal is therefore not critical, but affects only the rate of convergence. It is generally necessary to discard a considerable number of initial samples in the burn-in period.
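A minimal random-walk Metropolis-Hastings sketch, targeting a standard normal 'posterior' purely for illustration (the target density, step size and burn-in fraction are assumptions, not values from the paper):

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings with a Gaussian proposal centred on the
    current sample; a rejected candidate repeats the current sample."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_samples, theta.size))
    for ell in range(n_samples):
        cand = theta + step * rng.standard_normal(theta.shape)
        lp_cand = log_post(cand)
        # Accept with probability min(1, p(cand) / p(current)).
        if np.log(rng.uniform()) < lp_cand - lp:
            theta, lp = cand, lp_cand
        chain[ell] = theta
    return chain

# Target: a standard normal 'posterior' (log-density up to a constant).
chain = metropolis_hastings(lambda t: -0.5 * np.sum(t**2),
                            theta0=np.zeros(1), n_samples=20000)
burned = chain[6000:]                      # discard a 30% burn-in period
```

After burn-in, the sample mean and standard deviation of the chain approximate those of the target distribution.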
The classical MH is not suitable for high-dimensional problems where the posterior is concentrated in a small region of the sample space, so that too many samples are rejected and huge computing resource is required. This restriction has now been overcome to a considerable extent by the development of modern sampling routines: efficient variants of the MH algorithm. Beck and Au [40] introduced the adaptive Metropolis-Hastings (AMH) method capable of simulating peaked, flat and multi-modal posterior PDFs. A similar approach using intermediate (or transitional) PDFs was followed by Ching and Chen [41] with the transitional Markov chain Monte Carlo (TMCMC) method, overcoming inefficiencies inherent in the use of kernel density estimation in AMH by a resampling approach. The method is based on a series of intermediate PDFs,

$$p_k(\theta) \propto p(\theta \mid M) \, p(D \mid \theta, M)^{\beta_k}, \qquad 0 = \beta_0 < \beta_1 < \cdots < \beta_m = 1 \qquad (11)$$

where $k$ denotes the intermediate stage number. It can be seen that the series begins with the prior ($\beta_0 = 0$) and ends with the posterior PDF ($\beta_m = 1$), therefore fulfilling the model updating requirement with much improved efficiency over the classical MH algorithm, whilst retaining all the desirable convergence properties. Upon completion of the $k$th stage, $p_k(\theta)$ becomes the new prior and the subscript $k$ is replaced by $k + 1$ in Eq. (11) above.
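A compressed sketch of the transitional idea: a sample population is driven from prior to posterior through a sequence of tempered targets by incremental reweighting and resampling. Full TMCMC also performs MCMC move steps at each stage, omitted here for brevity; the toy prior, likelihood and beta schedule are assumptions for illustration only.

```python
import numpy as np

# Bridge prior -> posterior through intermediate targets
# p_k(theta) proportional to p(theta) * L(theta)^beta_k,
# 0 = beta_0 < ... < beta_m = 1, by reweighting/resampling the population.
rng = np.random.default_rng(1)

def log_like(theta):                        # toy Gaussian likelihood centred at 2.0
    return -0.5 * (theta - 2.0) ** 2 / 0.25

samples = rng.normal(0.0, 3.0, 5000)        # draws from a broad Gaussian prior
betas = [0.0, 0.1, 0.3, 0.6, 1.0]           # assumed tempering schedule
for b_prev, b in zip(betas[:-1], betas[1:]):
    w = np.exp((b - b_prev) * log_like(samples))   # incremental stage weights
    w /= w.sum()
    idx = rng.choice(samples.size, size=samples.size, p=w)
    samples = samples[idx]                          # resample the population
```

For this conjugate toy problem the exact posterior has mean near 1.95 and standard deviation near 0.49, and the final resampled population approximates it.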
Other efficient variants of the MH algorithm include the method described by Lam et al. [42] whereby the sampling process is divided into multiple levels to explore the parameter space efficiently. This multilevel approach, which is similar to simulated annealing, includes kernel density estimation of the posterior marginal PDFs and a new stopping criterion. The practical value of the method was demonstrated by its application in a coupled-slab system [42] based on field test data. In the field test verification, the enhanced MCMC algorithm was applied twice for approximation of the posterior marginal PDFs of uncertain parameters conditional on two model classes. The use of surrogate models (artificial neural networks [43], Gaussian process emulators [44] or the polynomial chaos expansion [45]) is essential for the Bayesian model updating of even moderately large engineering systems.

Numerical example: 3DoF mass-spring system
The example considered is the 3 degree-of-freedom mass-spring system shown in Fig. 1. In this example, the three uncertain stiffnesses, k 1 , k 2 , k 5 , are correctly deemed responsible for the observed variability in the three natural frequencies of the system. The measured data consisted of 30 separate measurement points (30 points in the 3-dimensional space of the natural frequencies) and the predictions were represented by 1000 points, needed for forward propagation by Latin hypercube (LHC) sampling with imposed correlation from a normal distribution $\theta_j \sim N_n(\bar{\theta}_j, \mathrm{Cov}(\theta_j, \theta_j))$, in order to determine z j from θ j . The synthetic data in the form of measured PDFs are shown in Fig. 2.

Results obtained by the sensitivity method

Equation (5) above was applied and an initial cloud of predicted natural frequencies was made to converge upon the cloud of 'measured' natural frequencies. Parameter convergence is seen in Fig. 3 to be obtained after 6 iterations, and the converged output clouds are shown to be in excellent agreement with the synthetic data in Fig. 4.

Results obtained by the Bayesian method

Parameter probability densities, shown in Figs. 5 and 6, were obtained by the Bayesian-MCMC approach after a 30% burn-in period from 12,000 samples, and the converged output clouds, again in excellent agreement with the data, are shown in Fig. 7. The model updating results obtained by the sensitivity and Bayesian methods are summarized in Table 1, where it can be seen that excellent agreement with the data is obtained in both cases. It will be appreciated that the parameter standard deviations used to produce the data are not exactly the same as either the standard deviations of the updated parameters or those of the data itself, because of the limitation of using just 30 measurement points. The wall-clock time for the Bayesian approach is approximately 10 times greater than for the sensitivity method.
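The forward propagation step, from sampled stiffness parameters to a cloud of natural frequencies, can be sketched for a generic 3DoF chain. The layout, masses and stiffness values below are assumed stand-ins for the system of Fig. 1, and plain Monte Carlo sampling replaces the correlated LHC sampling used in the paper.

```python
import numpy as np

def natural_frequencies(k, m):
    """Natural frequencies (Hz) of a generic fixed-free 3DoF spring-mass chain.
    The layout and values are assumed stand-ins, not the exact system of Fig. 1."""
    k1, k2, k3 = k
    K = np.array([[k1 + k2, -k2,     0.0],
                  [-k2,     k2 + k3, -k3],
                  [0.0,     -k3,     k3]])
    m_inv_sqrt = np.diag(1.0 / np.sqrt(m))
    lam = np.linalg.eigvalsh(m_inv_sqrt @ K @ m_inv_sqrt)  # mass-normalized eigenvalues
    return np.sqrt(lam) / (2.0 * np.pi)

# Forward propagation: sample the uncertain stiffnesses and collect the
# resulting cloud of natural frequencies.
rng = np.random.default_rng(0)
k_mean = np.array([1000.0, 2000.0, 1500.0])       # assumed mean stiffnesses (N/m)
masses = np.array([1.0, 1.0, 1.0])                # assumed masses (kg)
k_samples = rng.normal(k_mean, 0.05 * k_mean, size=(1000, 3))
freqs = np.array([natural_frequencies(k, masses) for k in k_samples])
```

Each row of `freqs` is one point in the 3-dimensional output space, so the array as a whole is the kind of predicted output cloud that the updating methods drive towards the measured cloud.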

Experimental example: DLR AIRMOD structure
The experimental example is the DLR AIRMOD structure shown in Fig. 8 and described in detail in references [9,10,36]. The structure essentially consists of six beam-like components with bolted joints at the connections, which were disassembled and reassembled 130 times, producing a maximum of 260 modal data sets from single-point excitation at two locations. The structure was supported on soft bungee cords to closely represent free-free boundary conditions. The modes used in updating were the lower modes with the generally simpler shapes shown in Fig. 9. The first four modes are the rigid-body modes in yaw, roll, pitch and heave. Modes 5-8 include the first two out-of-plane wing bending modes and the first antisymmetric and first symmetric wing torsion modes. Mode 10 is the third wing bending mode. Modes 11 and 12 are the first two in-plane wing bending modes, and modes 14, 19 and 20 are local tail-plane modes. This combination of modes ensures that the measured data include information from all the joints in strained conditions. The modes not included are all the higher-order wing bending modes with more complicated shapes that also include bending and torsion of the tail-plane; these latter modes are useful in validation of the updated model. Modal tests resulted in the statistical data shown in Table 2. The finite element model described in [10] consisted of 1440 CHEXA and 6 CPENTA elements representing the main aluminium structure, 561 CELAS1 elements that form the elastic connections between the beams at the joints, 55 CMASS1 elements and 18 CONM2 elements that account for additional non-structural masses (such as cables, sensors and bungee connectors), and 3 CROD elements for the bungee cords.
The chosen parameters, both stiffness and mass, are listed in Table 3. The stiffness parameters include the support stiffnesses (θ 1 -θ 5 ) and joint stiffnesses (θ 6 , θ 7 and θ 14 -θ 18 ). Mass parameters (θ 8 -θ 13 ) are included to represent variability in the position of cable bundles, screws and glue after each reassembly of the structure. These were selected after a detailed sensitivity study had been carried out [10].

Results obtained by the sensitivity method
Model updating was completed using Eqs. (2) and (5). When updating the means of the parameters, the residuals for the eigenvalues and mode shapes listed in Table 2 were included, but only the eigenvalue terms were used for covariance updating. The resulting parameter means and standard deviations, normalized by initial values, are shown in Table 4. The correlation matrix is given in Fig. 10. The strongest correlations are found between parameters located at the same joint, for example between θ 4 and θ 5 , which are sensor harness stiffnesses at the wing-fuselage connection. Also, θ 6 , θ 7 and θ 8 , all located at the VTP-HTP joint, are strongly correlated. Mass and stiffness parameters are found to be negatively correlated simply because a reduction in mass has a similar effect on natural frequencies as an increase in stiffness and vice versa; this correlation is not meaningful physically. Mode 19 (first HTP bending) is seen in Table 2 to have a large coefficient of variation (CoV) and is sensitive to parameter θ 7 (z-direction VTP-HTP stiffness), which is shown to have the largest variability in Table 4. Similarly, mode 20 (VTP fore-aft) is strongly affected by parameter θ 17 (x-direction VTP-fuselage stiffness). It is seen in Table 4 that θ 17 has a high CoV, and the mode with highest variability in Table 2 is mode 20. The VTP-HTP and VTP-fuselage joints are found to have greater variability than the wing-fuselage joint. The masses have generally small variability except for the mass of cables θ 12 on the outer wing regions. This parameter shows a reduction of 34.5% in the mean and a large CoV, which cannot be explained physically and is likely to be a compensating effect for other uncertainties not accounted for in the chosen updating parameters.
Converged output scatter diagrams with the measured data and superimposed covariance ellipses are shown in Fig. 11, where excellent agreement is observed. The geometry and orientation of the ellipses are found to be in very good agreement with the distribution of the measured outputs for every mode. This result is to be expected since the objective function [19] requires convergence of the mean values and all the covariance terms on the test data.

Results obtained by the Bayesian method

Two sets of Bayesian model updating results using TMCMC are presented. Results are first produced by using an artificial neural network as a surrogate model. Afterwards, further results are obtained to refine the results already presented in Sect. 5.1. In this latter case, the updated parameter mean values and covariances from the sensitivity method are used to form the prior. Use of a surrogate is then unnecessary, and converged results are obtained using the full finite element model. All the results presented in this section were obtained using OpenCossan [46].

Bayesian updating with an artificial neural network (ANN) surrogate
A uniform prior is used in this case, within the range of 5%-200% of the initial parameter values, to indicate virtually no knowledge of the AIRMOD physics. The updated results are therefore, in principle, entirely dependent upon the data. The likelihood function takes the form given by Eq. (10), with the diagonal matrix having entries given, at the beginning of the Markov chain, by the variances of the test data $\mathrm{Var}(z_{e_i})$ rather than the variances of the error $\mathrm{Var}(\varepsilon_i(\theta))$. This is based on the common assumption that the outputs are independent, $E[z_{e_i} - z_i(\theta)] = 0$ and $E[z_{e_i} \, z_i(\theta)] = E[z_{e_i}]^2$, which, if untrue, is considered to have very little effect on the posterior ($E[\cdot]$ denotes the mathematical expectation). In reality, there is a further complication in that the ANN model, being a surrogate, does not provide a perfect replica of the FE model input-output relationship. Different configurations of the ANN were tested using the $R^2$ criterion for each of the fourteen outputs,

$$R_i^2 + \frac{\overline{\left( z_{e_i} - \hat{z}_i \right)^2}}{\mathrm{Var}\left( z_{e_i} \right)} = 1$$

where the second term on the left-hand side approximates the square of the error divided by the variance (the overbar denotes the mean and $\hat{z}_i$ the ANN prediction of the $i$th output). The mean performance was determined by

$$R^2 = \frac{1}{14} \sum_{i=1}^{14} R_i^2$$

so that a well-performing ANN has an $R^2$ close to unity. Among several tested ANNs, the two best performing had configurations (input : hidden nodes : output) of (18:56:48:14) and (18:16:6:1), the latter with a single output, i.e. a separate ANN for each output. Whilst in the latter case the outputs are assumed to be independent, the ANN is still able to capture the correlation among the outputs. The mean $R^2$ values for these two configurations were 0.9446 and 0.9479, respectively, whereas the worst individual $R^2$ values were 0.5605 and 0.7996. The single independent-output ANN is clearly the closest approximation to the FE model.
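The per-output R² criterion and its mean over outputs can be computed as below; the two-output test data and the constant-bias 'surrogate' are invented for illustration:

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Per-output coefficient of determination: 1 - SSE / total variance,
    so a well-performing surrogate scores close to unity for every output."""
    ss_res = np.sum((y_true - y_pred) ** 2, axis=0)
    ss_tot = np.sum((y_true - y_true.mean(axis=0)) ** 2, axis=0)
    return 1.0 - ss_res / ss_tot

# Invented two-output test set: an exact surrogate and a slightly biased one.
y = np.linspace(0.0, 1.0, 50).reshape(-1, 1) * np.array([[1.0, 2.0]])
r2_exact = r_squared(y, y)                 # perfect predictions
r2_biased = r_squared(y, y + 0.01)         # small constant bias
mean_r2 = r2_biased.mean()                 # mean performance over the outputs
```

As in the ANN comparison above, the mean R² summarizes overall surrogate quality, while the per-output values expose individual poorly predicted outputs.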
Also, the training time (excluding the cost of running the full FE model 2000 times) for this configuration was found to be 172 seconds of CPU time. Selected posterior parameter histograms (in orange) are shown in Fig. 12, together with smooth PDFs constructed from a Gaussian mixture based on 500 points determined by LHC sampling. The uniform prior is indicated by the horizontal dashed blue line in each case, and the vertical grey-dotted lines denote the mean and two STDs determined by the sensitivity method. Considerable confidence can be attributed to these results because the peaks of the orange histograms are seen in general to agree quite closely with the means obtained independently by the sensitivity method. The spread of the histograms is greater than that of the Gaussian distributions from the sensitivity method. This is likely to be due to the use of a uniform prior, whereas the sensitivity method uses regularization to penalize the deviation of the parameters from their original distributions. Also, care should be taken when using uniform or bounded priors, since samples of the posterior distribution will never be generated in the region of the input domain where the prior PDF is zero valued.
The effect of broader distributions on the parameters obtained using the ANN surrogate with uniform priors can be seen in Fig. 13 where in the higher modes, especially f 14 , f 19 and f 20 , the geometry of the output (frequency) distributions from tests is not replicated closely enough by the results of Bayesian model updating. The effect of using a surrogate becomes clear when the updated parameter distributions are applied to the full FE model. This is shown in Fig. 14 where lack of fidelity of the ANN model is apparent particularly in modes f 7 , f 11 and f 12 .
In order to correct the error caused by use of the surrogate, sometimes called code uncertainty, additional Markov chain steps were added using the full FE model. In this case the posterior from the Bayes-ANN updating process became the prior for a further Bayes-ANN-FE update. The results are shown in Figs. 12 and 15. The light purple histograms in Fig. 12 are those produced by the Bayes-ANN-FE update, and the smooth green curves show distributions from a Gaussian mixture as described previously. A shadow effect is used where the Bayes-ANN and Bayes-ANN-FE histograms overlap. The Bayes-ANN-FE histogram for θ 1 appears to show a bimodal distribution, and the θ 6 histogram shows a well-defined peak for one of the sensor-harness stiffness parameters. θ 8 and θ 11 are mass parameters, with the widening spread of the distribution in the latter case indicating a reduction in confidence. A shift in the peak is quite pronounced in the case of θ 14 (wing-fuselage joint x-direction stiffness), which is considered to account for the improved frequency distributions for mode f 11 (first wing fore-aft) shown in Fig. 15 (cf. Fig. 14). The geometry of the distribution for f 12 (second wing fore-aft), however, is still not in sufficient agreement with the test data. In the following section, Bayesian refinement of the updated parameter statistics from the sensitivity method is presented, thereby avoiding completely the use of the ANN surrogate.

Bayesian refinement of sensitivity model updating results
In this section, the updated Gaussian model obtained by sensitivity analysis is used as the prior, and Bayesian model updating is carried out using the full FE model, without the use of a surrogate, to refine the results already presented in Sect. 5.1. As explained before, the likelihood function makes use of the diagonal error covariance matrix.
Selected parameter histograms are shown in Fig. 16. The blue and yellow-ochre histograms describe the prior (sensitivity) and posterior (Bayes-sensitivity-FE) updating results, respectively. The smooth curves in orange and purple are produced by Gaussian mixtures, as described previously, for the prior and posterior, respectively. It is clear that the distributions do not change substantially, except for a generally fairly modest widening in most cases. Sample 2D sections through the hyper-PDF of the parameters are shown in Fig. 17, the left-hand side showing the prior distribution and the right-hand side the posterior. The points (in grey) are the 500 LHC samples used to create the continuous PDFs from the Gaussian mixture. This procedure results in prior distributions that are close to, but not exactly, Gaussian, as can be seen on the left-hand side of Fig. 17. It is clear from this figure that the geometry of the hyper-PDF changes only slightly, with parameters that are found to be correlated by the sensitivity method remaining so after Bayesian refinement. Parameters θ 1 and θ 2 represent the stiffnesses of the front and rear bungee cords, which principally affect the pitch and heave rigid-body modes. In fact, the pitch mode is likely to include some heave behaviour and vice versa, both probably quite sensitive in opposite ways to small changes of mass distribution, so that they appear to be correlated. Parameters θ 2 and θ 9 are the rear bungee cord stiffness and the right wingtip mass of glue and screws; as expected, these parameters are almost completely independent of each other. VTP stiffnesses θ 6 and θ 7 , located at the same joint, show considerable positive correlation (as one stiffens, so does the other), an effect which might indeed be expected when screws are made tighter, or less tight, upon reassembly. Negative masses and stiffnesses are not permitted, and hence the θ 6 -θ 7 section is curtailed in Fig. 17.
There is interaction between mass parameters θ10 and θ11, the mass of screws and glue at the left wingtip and the mass of sensor cables in the outer wing regions. Since the total mass remains approximately constant, when one mass parameter increases another must decrease. Stiffnesses θ14 and θ16 at the wing-fuselage joint, in the x and z directions, respectively, show a hardening/softening interaction due to the particular geometry of the connection.
The output frequency covariances now show excellent agreement with measurements, as shown in Fig. 18. The geometry of the higher-mode distributions, some of which had been difficult to replicate using the uniform prior and in the Bayes-ANN-FE analysis, is now reproduced faithfully, as can be observed especially for frequencies f12 to f20.
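Forming the output frequency covariances amounts to propagating the posterior parameter samples through the model and taking the sample covariance of the outputs, which is then compared with the measured covariance. A minimal sketch, using a hypothetical linear model in place of the FE model so the result can be checked in closed form:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical linear model: output "frequencies" as A @ theta.
# (The real AIRMOD outputs come from the full nonlinear FE model.)
A = np.array([[1.0, 0.2],
              [0.3, 1.5]])

# Posterior parameter samples (illustrative Gaussian stand-in).
post_mean = np.array([1.0, 2.0])
post_cov = np.array([[0.04, 0.01],
                     [0.01, 0.09]])
thetas = rng.multivariate_normal(post_mean, post_cov, size=5000)

# Push every sample through the model and form the output covariance.
outputs = thetas @ A.T
out_cov = np.cov(outputs, rowvar=False)

# For a linear model the exact output covariance is A @ post_cov @ A.T,
# so the Monte Carlo estimate can be verified against it.
exact = A @ post_cov @ A.T
```

In the paper this propagation uses the full FE model, and the resulting output covariances are what is plotted against the measurements in Fig. 18.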

Conclusions
The results presented in Sect. 5 show that reliable updated models can be obtained by both sensitivity and Bayesian updating methods. It is a particularly positive result that the means of the parameter distributions obtained independently by the sensitivity method and by Bayesian model updating with the uniform prior are found to be so similar. Propagation of the Bayesian updated parameters (with uniform prior and ANN surrogate) through the full FE model revealed serious discrepancies due to lack of fidelity of the surrogate. This problem was partly, but not completely, overcome by additional Markov chain steps using the full FE model, with the Bayes-ANN updated model as prior. One of the differences between the sensitivity approach and the Bayesian method with uninformative prior is that regularization is present in the former to penalize deviations from initial parameter values, whereas the latter assumes no knowledge of the physics of the system under test and relies solely upon the data. The results obtained in Sect. 5.2.1 highlight the Bayesian approach's need for a surrogate model, the difficulty in selecting its form (the configuration of the ANN in the example presented here), and the time required to train it. The use of the uninformative prior (uniform distribution) is likely to have resulted in the wide-banded distributions shown in Fig. 12, the information available from the data being insufficient to provide the required degree of confidence in the means of the parameters.
The sensitivity method assumes the multivariate parameter distribution to be Gaussian, whereas in reality it is not. A potential advantage of the Bayesian approach is that it makes no such assumption and results in the most probable model based upon the given data and prior. The significance of an informative prior is revealed in the results produced when using the Gaussian sensitivity-updated model as prior: excellent results are then obtained and the distributions from sensitivity-based model updating are refined. In the case of AIRMOD this refinement widens the spread of the updating-parameter distributions, which might be of practical engineering significance, possibly with regard to tolerances on manufactured components.
The use of a surrogate enables efficient stochastic model updating, but at the same time introduces additional uncertainty. Training it with sufficient fidelity to the full FE model is an expensive task, both in practitioners' time and in computational expense (2000 FE model executions for AIRMOD). Bayesian model updating also requires substantial computational resources: in the case of AIRMOD, it was carried out on a parallelized system using 20 cores of an AMD Opteron 6168 multi-processor machine, with a wall-clock time of approximately 8 h. By contrast, sensitivity model updating can be carried out on a standard desktop computer in a few minutes and does not require a surrogate.
Techniques for the selection of updating parameters are available from both the sensitivity and the Bayesian updating communities, as described in the Introduction. This topic remains open for further research, and it seems vitally important that such methods embody a thorough understanding of both the physical structure under test and the finite element model to be updated.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http:// creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.