
Fast model updating coupling Bayesian inference and PGD model reduction

  • Original Paper
  • Published in Computational Mechanics

Abstract

The paper focuses on a coupled Bayesian–Proper Generalized Decomposition (PGD) approach for the real-time identification and updating of numerical models. The purpose is to use the general framework of Bayesian inference to address inverse problems while accounting for several sources of uncertainty (measurement and model errors, stochastic parameters). To keep the CPU cost reasonable, the idea is to replace the direct model called during Monte Carlo sampling with a PGD reduced model and, in some cases, to compute the probability density functions directly from the resulting analytical formulation. The procedure is first applied to a welding control example with the updating of a deterministic parameter. In a second application, the identification of a stochastic parameter is studied on a glued-assembly example.




Corresponding author

Correspondence to Ludovic Chamoin.


Appendix

In this section, the details of a PGD solution are illustrated on the welding example from Sect. 4.

1.1 Problem

The same problem as in Sect. 4 is considered, governed by the convection–diffusion equation:

$$\begin{aligned} \frac{\partial T}{\partial t} + \underline{v}.\underline{\text {grad}} T - \varDelta T = s \end{aligned}$$
(61)

With:

$$\begin{aligned} s(x,y;\sigma )=\frac{u}{2 \pi \sigma ^2} \text {exp} \left( - \frac{ \left( x-x_c\right) ^2+ \left( y-y_c\right) ^2 }{2 \sigma ^2}\right) \end{aligned}$$
(62)

The purpose is to build a multiparametric reduced-order model with separation of the space, time, and parameter \(\sigma \) variables.

1.2 Progressive Galerkin PGD

As presented in Sect. 3, the PGD modes are built recursively thanks to the Galerkin orthogonality. The spaces of variation of the extra-coordinates are defined as follows: \(I=[0,T_f]\) is the time interval and \(\varSigma =[\sigma _\text {min},\sigma _\text {max}]\) is the space of variation of \(\sigma \). The admissible field spaces are defined as:

$$\begin{aligned} \mathcal {T}= & {} \{ T \in H^1(\varOmega =]0;5[\times ]0;1[), T=0 \text { on } \varGamma _D \} \end{aligned}$$
(63)
$$\begin{aligned} \mathcal {I}= & {} \{T, \int _I{\Vert T(x,y,.;\sigma ) \Vert ^2_{H^1(\varOmega )}} < \infty , \forall (x,y,\sigma ) \in \varOmega \times \varSigma \} \end{aligned}$$
(64)
$$\begin{aligned} \mathcal {E}= & {} \{T, \int _{\varSigma }{\Vert T(x,y,t;.) \Vert ^2_{H^1(\varOmega )}} < \infty , \forall (x,y,t) \in \varOmega \times I \} \end{aligned}$$
(65)

The weak formulation of Eq. (61) on each space reads:

Find \(T \in \mathcal {T} \otimes \mathcal {I} \otimes \mathcal {E}\), such that \(\forall T^* \in \mathcal {T} \otimes \mathcal {I} \otimes \mathcal {E}\):

$$\begin{aligned} a(T,T^*)=l(T^*) \end{aligned}$$
(66)

with:

$$\begin{aligned}&a(T,T^*)= \end{aligned}$$
(67)
$$\begin{aligned}&\int _{I \times \varSigma \times \varOmega } \frac{\partial T}{\partial t}.T^* + \underline{v}.\underline{\text {grad}} T .T^*+\underline{\text {grad}} T .\underline{\text {grad}} T^* dt d\sigma d \varOmega \end{aligned}$$
(68)
$$\begin{aligned}&l(T^*) = \int _{I \times \varSigma \times \varOmega } s.T^* dt d\sigma d \varOmega \end{aligned}$$
(69)

The solution is searched in the separated form:

$$\begin{aligned} T_{m}(x,y,t;\sigma )=\sum _{n=1}^{m} \varLambda _n(x,y) \lambda _n(t) \alpha _n(\sigma ) \end{aligned}$$
(70)
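As a minimal sketch (in Python, with hypothetical array layouts not taken from the paper), a separated representation of this form is evaluated by summing rank-one terms over the modes:

```python
import numpy as np

def eval_pgd(Lambda, lam, alpha):
    """Evaluate a rank-m separated PGD approximation
    T_m = sum_n Lambda_n(x,y) * lam_n(t) * alpha_n(sigma).

    Lambda: (m, Nx) spatial modes sampled at Nx nodes
    lam:    (m, Nt) time modes sampled at Nt instants
    alpha:  (m, Ns) parameter modes sampled at Ns values of sigma
    Returns a (Nx, Nt, Ns) array: the sum of the rank-one terms.
    """
    # einsum builds the outer product of each triplet of modes and sums over n
    return np.einsum('ni,nj,nk->ijk', Lambda, lam, alpha)

# rank-2 toy example
Lambda = np.random.rand(2, 5)
lam = np.random.rand(2, 4)
alpha = np.random.rand(2, 3)
T = eval_pgd(Lambda, lam, alpha)
```

Evaluating the reduced model for a new \((t,\sigma )\) thus costs only a few vector operations, which is what makes the Monte Carlo sampling affordable.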

The first \(m-1\) modes are supposed to be known and the m-th mode is sought. The solution then reads:

$$\begin{aligned}&\{T_m(x,y,t;\sigma ) \} \nonumber \\&\quad =\sum _{n=1}^{m-1} \lambda _n(t) \alpha _n(\sigma ) \varLambda _n(x,y)+ \lambda (t) \alpha (\sigma ) \varLambda (x,y) \end{aligned}$$
(71)

The unknowns are the functions \(\varLambda \), \(\lambda \) and \(\alpha \).

The test field \(T^* \in \mathcal {T} \otimes \mathcal {I} \otimes \mathcal {E}\) is taken in the separated form:

$$\begin{aligned} T^*=\lambda ^* \alpha \varLambda +\lambda \alpha ^* \varLambda +\lambda \alpha \varLambda ^* \end{aligned}$$
(72)

Using this form, the variational formulation (66) leads to decoupled problems defined through the applications \(S_m\), \(T_m\), \(P_m\) such that:

$$\begin{aligned}&\varLambda =S_m(\lambda ,\alpha ) \end{aligned}$$
(73)
$$\begin{aligned}&\lambda =T_m(\alpha , \varLambda ) \end{aligned}$$
(74)
$$\begin{aligned}&\alpha =P_m(\lambda , \varLambda ) \end{aligned}$$
(75)
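The alternating resolution of these three applications within a fixed-point loop, repeated greedily for each new mode, can be sketched as follows; `greedy_pgd` and the callables `S`, `T`, `P` are hypothetical stand-ins for the applications \(S_m\), \(T_m\), \(P_m\), not code from the paper:

```python
import numpy as np

def greedy_pgd(S, T, P, n_modes, sizes, max_fp=50, tol=1e-8):
    """Greedy PGD enrichment: each new triplet (Lambda, lam, alpha) is computed
    by fixed-point (alternating-direction) iterations on the three applications.
    S, T, P each map the two frozen functions and the list of previously
    computed modes to the updated function (as a nodal vector)."""
    modes = []
    nx, nt, ns = sizes
    for _ in range(n_modes):
        Lam, lam, alph = np.ones(nx), np.ones(nt), np.ones(ns)
        for _ in range(max_fp):
            Lam = S(lam, alph, modes)        # spatial problem, Eq. (73)
            lam = T(alph, Lam, modes)        # time problem, Eq. (74)
            alph_new = P(lam, Lam, modes)    # parametric problem, Eq. (75)
            # stop when the parameter function stagnates
            if np.linalg.norm(alph_new - alph) <= tol * max(np.linalg.norm(alph_new), 1e-30):
                alph = alph_new
                break
            alph = alph_new
        modes.append((Lam, lam, alph))
    return modes
```

The stagnation criterion on \(\alpha \) is one common choice; any of the three functions (or the rank-one term itself) can serve as the convergence indicator.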

1.2.1 Spatial application \(S_m\)

The decoupled weak formulation for the space problem reads:

$$\begin{aligned} a\left( T_{m-1}+ \lambda \alpha \varLambda ,\lambda \alpha \varLambda ^*\right) =l( \lambda \alpha \varLambda ^*) \end{aligned}$$
(76)

with:

$$\begin{aligned}&a\left( \lambda \alpha \varLambda ,\lambda \alpha \varLambda ^*\right) \nonumber \\&\quad =\int _{I \times \varSigma \times \varOmega } \lambda \dot{\lambda } \alpha ^2 \varLambda \varLambda ^* +\alpha ^2 \lambda ^2 \underline{v}.\underline{\text {grad}} \varLambda \varLambda ^*\nonumber \\&\qquad +\,\alpha ^2 \lambda ^2 \underline{\text {grad}} \varLambda \underline{\text {grad}} \varLambda ^* d \varOmega d \sigma dt \end{aligned}$$
(77)

A P1 discretization of all fields gives:

$$\begin{aligned}&\varLambda =[N_x]\{\varLambda \} \end{aligned}$$
(78)
$$\begin{aligned}&\lambda =[N_t]\{\lambda \} \end{aligned}$$
(79)
$$\begin{aligned}&\alpha =[N_\sigma ]\{\alpha \} \end{aligned}$$
(80)

where \([N_\bullet ]\) represents the shape functions matrix and \(\{ \bullet \}\) the nodal values of the fields.

Then:

$$\begin{aligned} a\left( \lambda \alpha \varLambda ,\lambda \alpha \varLambda ^*\right) =\{\varLambda ^*\}^T[A_{\varLambda }] \{\varLambda \} \end{aligned}$$
(81)

with:

$$\begin{aligned}{}[A_{\varLambda }]= & {} \left( \{ \alpha \}^T [M_{\sigma }] \{ \alpha \} \right) . \nonumber \\&\left[ \left( \{ \lambda \}^T [H_t] \{ \lambda \} \right) [M] + \left( \{ \lambda \}^T [M_t] \{ \lambda \} \right) [C_H] \right] \end{aligned}$$
(82)

where the following matrices are defined:

$$\begin{aligned}&[M_\bullet ]=\int _\bullet [N_\bullet ]^T[N_\bullet ] d \bullet \end{aligned}$$
(83)
$$\begin{aligned}&[H_\bullet ]=\int _\bullet [dN_\bullet ]^T[N_\bullet ] d \bullet \end{aligned}$$
(84)
$$\begin{aligned}&[C_\bullet ]=\int _\bullet [dN_\bullet ]^T[dN_\bullet ] d \bullet \end{aligned}$$
(85)
$$\begin{aligned}&[C_H]=\kappa .[C_x]+[H_x] \end{aligned}$$
(86)
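For a uniform 1D P1 mesh, the generic matrices \([M_\bullet ]\), \([H_\bullet ]\), \([C_\bullet ]\) can be assembled element by element from the standard closed-form P1 element matrices; the helper below is a hypothetical sketch (no boundary conditions applied):

```python
import numpy as np

def p1_matrices(n_el, h):
    """Assemble the P1 matrices on a uniform 1D mesh of n_el elements of size h:
    mass M = int N^T N, convection-type H = int dN^T N, stiffness C = int dN^T dN."""
    n = n_el + 1
    M = np.zeros((n, n)); H = np.zeros((n, n)); C = np.zeros((n, n))
    Me = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])    # element mass matrix
    He = 0.5 * np.array([[-1.0, -1.0], [1.0, 1.0]])        # element int dN^T N
    Ce = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness
    for e in range(n_el):
        idx = np.ix_([e, e + 1], [e, e + 1])
        M[idx] += Me; H[idx] += He; C[idx] += Ce
    return M, H, C
```

Quick sanity checks: the entries of M sum to the mesh length, and C applied to a constant vector vanishes.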

Likewise:

$$\begin{aligned} a\left( T_{m-1}, \lambda \alpha \varLambda ^*\right) =\{ \varLambda ^*\}^T \sum _{n=1}^{m-1} [A_{\varLambda _n}]\{ \varLambda _n\} \end{aligned}$$
(87)

with:

$$\begin{aligned}&[A_{\varLambda _n}]= \left( \{ \alpha \}^T [M_{\sigma }] \{ \alpha _n\} \right) . \nonumber \\&\quad \left[ \left( \{ \lambda \}^T [H_t] \{ \lambda _n\} \right) [M]+ \left( \{ \lambda \}^T [M_t] \{ \lambda _n\} \right) [C_H] \right] \end{aligned}$$
(88)
$$\begin{aligned}&l(\{\lambda \alpha \varLambda ^*\}) \nonumber \\&\quad =\int _{I \times \varSigma \times \varOmega } \lambda (t) \alpha (\sigma ) s(x,y;\sigma ).\varLambda ^*(x,y) dt d\sigma d\varOmega \end{aligned}$$
(89)

The right-hand side of the variational formulation can be written as:

$$\begin{aligned} l(\lambda \alpha \varLambda ^*)=\{\varLambda ^*\} ^T\{ Q_{\varLambda }\} \end{aligned}$$
(90)

However, the integrand is not in separated form. To obtain one, a Taylor expansion of order d around the center \(\sigma _0\) of each P1 element is used.

The volumetric load s is approximated as:

$$\begin{aligned} s(x,y;\sigma _0+\delta \sigma ) \approx \sum _{i=0}^{d} \frac{\partial ^i s}{\partial \sigma ^i} (x,y;\sigma _0) \frac{\delta \sigma ^i}{i!} \end{aligned}$$
(91)

which leads to:

$$\begin{aligned}&\int _{\sigma _{\text {min}}}^{\sigma _{{\text {max}}}} s(x,y;\sigma ) \alpha (\sigma ) d\sigma \nonumber \\&\quad = \sum _{k=1}^N \int _{\sigma _k}^{\sigma _{k+1}} s(x,y;\sigma ) \alpha (\sigma ) d\sigma \nonumber \\&\quad = \sum _{k=1}^N \int _{\sigma _k-\sigma _{0k}}^{\sigma _{k+1}-\sigma _{0k}} s(x,y;\sigma _{0k}+\tilde{\sigma }) \alpha (\sigma _{0k}+\tilde{\sigma })d\tilde{\sigma } \nonumber \\&\quad = \sum _{k=1}^N \sum _{i=0}^{d} \int _{\sigma _k-\sigma _{0k}}^{\sigma _{k+1}-\sigma _{0k}} \frac{\tilde{ \sigma }^i}{i!} \alpha (\sigma _{0k}+\tilde{\sigma })d\tilde{\sigma } \frac{\partial ^i s}{\partial \sigma ^i} (x,y;\sigma _{0k}) \nonumber \\&\quad = \sum _{k=1}^N \sum _{i=0}^{d} \int _{\sigma _k}^{\sigma _{k+1}} \frac{ (\sigma -\sigma _{0k})^i}{i!} \alpha ({\sigma })d{\sigma } \frac{\partial ^i s}{\partial \sigma ^i} (x,y;\sigma _{0k}) \end{aligned}$$
(92)

In practice, \(d=1\) is sufficient to obtain a good approximation with the P1 discretization of the interval \(\varSigma \).
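As an illustration of expansion (91) with \(d=1\), the sketch below compares the exact Gaussian source of Eq. (62) with its first-order expansion around \(\sigma _0\); the helper names are hypothetical and the source is centered at the origin for simplicity:

```python
import numpy as np

def s_exact(x, y, sigma, u=1.0, xc=0.0, yc=0.0):
    """Gaussian heat source of Eq. (62)."""
    r2 = (x - xc) ** 2 + (y - yc) ** 2
    return u / (2 * np.pi * sigma ** 2) * np.exp(-r2 / (2 * sigma ** 2))

def s_taylor(x, y, sigma0, dsigma, u=1.0, xc=0.0, yc=0.0):
    """First-order (d=1) Taylor expansion of s around sigma0, as used to
    separate the sigma dependence in the right-hand side."""
    r2 = (x - xc) ** 2 + (y - yc) ** 2
    s0 = s_exact(x, y, sigma0, u, xc, yc)
    # d s / d sigma = s0 * (r2 / sigma^3 - 2 / sigma)
    ds = s0 * (r2 / sigma0 ** 3 - 2.0 / sigma0)
    return s0 + ds * dsigma
```

For perturbations small compared with the P1 element size, the first-order expansion reproduces the exact source to within a fraction of a percent.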

The right-hand side now reads:

$$\begin{aligned}&l(\lambda \alpha \varLambda ^*)=\{1_t\}^T[M_t]\{\lambda \}.\nonumber \\&\quad \sum _{k=1}^N \sum _{i=0}^{d} \int _{\sigma _k}^{\sigma _{k+1}} \frac{ (\sigma -\sigma _{0k})^i}{i!} \alpha ({\sigma })d{\sigma } \{\varLambda ^*\} ^T\{ S_{ik}\} \end{aligned}$$
(93)

Finally, the application \(S_m\) leads to a linear system in the unknown \(\{ \varLambda \}\) at each iteration of the fixed-point algorithm.
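The assembly of \([A_{\varLambda }]\) in Eq. (82) reduces the time and parameter directions to scalar products, leaving a purely spatial matrix; a minimal sketch (hypothetical helper, NumPy arrays for the discrete matrices and nodal vectors):

```python
import numpy as np

def assemble_A_Lambda(alpha, lam, M_sigma, H_t, M_t, M, C_H):
    """[A_Lambda] of Eq. (82): the sigma and time directions collapse to
    scalar products that weight the spatial matrices [M] and [C_H]."""
    c_sigma = alpha @ M_sigma @ alpha   # {alpha}^T [M_sigma] {alpha}
    c_ht = lam @ H_t @ lam              # {lambda}^T [H_t] {lambda}
    c_mt = lam @ M_t @ lam              # {lambda}^T [M_t] {lambda}
    return c_sigma * (c_ht * M + c_mt * C_H)
```

Only a spatial-size linear system is then solved, which is what keeps each fixed-point iteration cheap.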

1.2.2 Time application \(T_m\)

For the time application, a Runge–Kutta algorithm with automatic time-step adjustment is used to solve the resulting ordinary differential equation:

$$\begin{aligned} a.\dot{\lambda }(t)+b.\lambda (t)=c-\sum _n^{m-1} \left( a_n.\dot{\lambda }_n(t)+b_n.{\lambda }_n(t) \right) \end{aligned}$$
(94)

with:

$$\begin{aligned} a&= \int _{\varSigma \times \varOmega } \alpha ^2 \varLambda ^2 d\sigma d\varOmega \end{aligned}$$
(95)
$$\begin{aligned}&=\left( \{ \alpha \}^T [M_{\sigma }] \{ \alpha \} \right) \left( \{ \varLambda \}^T [M] \{\varLambda \} \right) \end{aligned}$$
(96)
$$\begin{aligned} b&= \int _{ \varSigma \times \varOmega } \alpha ^2 \left( \underline{v}.\underline{\text {grad}} \varLambda \varLambda + \underline{\text {grad}} \varLambda \underline{\text {grad}} \varLambda \right) d\sigma d\varOmega \end{aligned}$$
(97)
$$\begin{aligned}&=\left( \{ \alpha \}^T [M_{\sigma }] \{ \alpha \} \right) \left( \{ \varLambda \}^T [C_H] \{\varLambda \} \right) \end{aligned}$$
(98)
$$\begin{aligned} c&= \int _{\varSigma \times \varOmega } \alpha (\sigma ) s(x,y;\sigma ).\varLambda (x,y) d\sigma d\varOmega \end{aligned}$$
(99)
$$\begin{aligned}&=\sum _{k=1}^N \sum _{i=0}^{d} \int _{\sigma _k}^{\sigma _{k+1}} \frac{ (\sigma -\sigma _{0k})^i}{i!} \alpha ({\sigma })d{\sigma } \{\varLambda \} ^T\{ S_{ik}\} \end{aligned}$$
(100)
$$\begin{aligned} a_n&= \int _{\varSigma \times \varOmega } \alpha \alpha _n \varLambda \varLambda _n d\sigma d\varOmega \end{aligned}$$
(101)
$$\begin{aligned}&=\left( \{ \alpha \}^T [M_{\sigma }] \{ \alpha _n\} \right) \left( \{ \varLambda \}^T [M] \{\varLambda _n\} \right) \end{aligned}$$
(102)
$$\begin{aligned} b_n&=\int _{ \varSigma \times \varOmega } \alpha \alpha _n \left( \underline{v}.\underline{\text {grad}} \varLambda \varLambda _n+ \underline{\text {grad}} \varLambda \underline{\text {grad}} \varLambda _n \right) d\sigma d\varOmega \end{aligned}$$
(103)
$$\begin{aligned}&=\left( \{ \alpha \}^T [M_{\sigma }] \{ \alpha _n\} \right) \left( \{ \varLambda \}^T [C_H] \{\varLambda _n\} \right) \end{aligned}$$
(104)

At each iteration of the fixed-point algorithm, an ordinary differential equation in the unknown \(\lambda \) is solved.
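Once the scalars a, b and c and the previous-mode contributions are known, Eq. (94) is a scalar linear ODE; a minimal sketch using SciPy's adaptive Runge–Kutta solver (the forcing g(t) lumps c and the previous-mode terms; a zero initial condition is assumed here):

```python
import numpy as np
from scipy.integrate import solve_ivp

def solve_time_mode(a, b, g, t_span, t_eval):
    """Solve a*lambda'(t) + b*lambda(t) = g(t), i.e. Eq. (94) with the
    previous-mode contributions lumped into g, by adaptive Runge-Kutta."""
    rhs = lambda t, lam: (g(t) - b * lam) / a
    sol = solve_ivp(rhs, t_span, [0.0], method='RK45',
                    t_eval=t_eval, rtol=1e-8, atol=1e-10)
    return sol.y[0]

# constant forcing: lambda(t) relaxes towards g/b, a quick sanity check
t = np.linspace(0.0, 5.0, 50)
lam = solve_time_mode(a=1.0, b=2.0, g=lambda t: 1.0,
                      t_span=(0.0, 5.0), t_eval=t)
```

The automatic step adjustment of RK45 plays the role of the time-step control mentioned above.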

1.2.3 Parametric application \(P_m\)

Here, the same approach as for the spatial application is followed. The decoupled weak formulation for the parametric problem reads:

$$\begin{aligned}&a\left( T_{m-1}+ \lambda \alpha \varLambda ,\lambda \alpha ^* \varLambda \right) =l( \lambda \alpha ^* \varLambda ) \end{aligned}$$
(105)
$$\begin{aligned}&a\left( \lambda \alpha \varLambda ,\lambda \alpha ^* \varLambda \right) \nonumber \\&\quad =\int _{I \times \varSigma \times \varOmega } \lambda \dot{\lambda } \alpha \alpha ^* \varLambda ^2 +\alpha \alpha ^* \lambda ^2 \underline{v}.\underline{\text {grad}} \varLambda \varLambda \nonumber \\&\qquad +\,\alpha \alpha ^* \lambda ^2 \underline{\text {grad}} \varLambda \underline{\text {grad}} \varLambda dt d\sigma d\varOmega \end{aligned}$$
(106)

A P1 discretization leads to:

$$\begin{aligned} a\left( \lambda \alpha \varLambda ,\lambda \alpha ^* \varLambda \right) =\{\alpha ^*\}^T[A_{\alpha }] \{\alpha \} \end{aligned}$$
(107)

with:

$$\begin{aligned}{}[A_{\alpha }]= & {} \left[ \left( \{ \varLambda \}^T [M] \{ \varLambda \} \right) \left( \{ \lambda \}^T [H_t] \{ \lambda \} \right) \right. \nonumber \\&\left. +\, \left( \{ \varLambda \}^T [C_H] \{ \varLambda \} \right) \left( \{ \lambda \}^T [M_t] \{ \lambda \} \right) \right] \left[ M_{\sigma }\right] \end{aligned}$$
(108)

The contribution of the previous modes reads:

$$\begin{aligned} a\left( T_{m-1}, \lambda \alpha ^* \varLambda \right) =\{ \alpha ^*\}^T \sum _{n=1}^{m-1} [A_{\alpha _n}]\{ \alpha _n\} \end{aligned}$$
(109)

with:

$$\begin{aligned}{}[A_{\alpha _n}]= & {} \left[ \left( \{ \varLambda \}^T [M] \{ \varLambda _n\} \right) \left( \{ \lambda \}^T [H_t] \{ \lambda _n\} \right) \right. \nonumber \\&\left. + \left( \{ \varLambda \}^T [C_H] \{ \varLambda _n\} \right) \left( \{ \lambda \}^T [M_t] \{ \lambda _n\} \right) \right] \left[ M_{\sigma }\right] \nonumber \\ \end{aligned}$$
(110)

Finally the right-hand side reads:

$$\begin{aligned}&l(\lambda \alpha ^* \varLambda )\nonumber \\&\quad =\{ 1_t\}^T[M_t]\{\lambda \}. \nonumber \\&\quad \sum _{k=1}^N \sum _{i=0}^{d} \int _{\sigma _k}^{\sigma _{k+1}} \frac{ (\sigma -\sigma _{0k})^i}{i!} \alpha ^*({\sigma })d{\sigma } \{\varLambda \} ^T\{ S_{ik}\} \end{aligned}$$
(111)

At each iteration of the fixed-point algorithm, a linear system is solved in the unknown \(\{\alpha \}\).


Cite this article

Rubio, PB., Louf, F. & Chamoin, L. Fast model updating coupling Bayesian inference and PGD model reduction. Comput Mech 62, 1485–1509 (2018). https://doi.org/10.1007/s00466-018-1575-8
