Limited Sensor-Based Probabilistic Damage Detection Using Combined Normal–Lognormal Distributions

Abstract

A methodology for probabilistic damage detection in a Bayesian framework, without any requirement of mode matching, is presented, together with detailed formulations for finite element model updating using incomplete modal data measured with a limited number of sensors. The proposed methodology can use multiple modal data sets from multiple sensor set-ups, with further scope for repeated measurements from any single set-up. A combined normal–lognormal multivariate distribution is adopted in the Bayesian framework: strictly positive random parameters are assigned a lognormal distribution, while the remaining random parameters are assigned a normal distribution. In this work, the normal distribution is used for the likelihood function, which consists of the eigen-system equation error and the error between the system mode shapes and the experimental mode shapes, whereas the mass and stiffness parameters are assigned the lognormal distribution. Detailed formulations for probabilistic identification of changes/damages are also developed. The proposed approach is validated on a three-dimensional building structure considering multiple simulated damage cases, and its updating and damage-detection performance is evaluated for multi-set-up and multi-data-set scenarios. Finally, the proposed technique is compared with a similar Bayesian updating approach based solely on the normal distribution and with Gibbs sampling.



Abbreviations

\( {{\varPhi }} \) :

System mode-shape

\( {\hat{{\lambda }}},\;{\hat{{\varPsi }}} \) :

Experimental eigenvalue and experimental eigenvector respectively

\( {{\theta }}_{1} ,\;{{\theta }}_{2} \) :

Mass-parameter vector and stiffness-parameter vector respectively

\( N_{d} ,\;N_{m} ,\;N_{i} \) :

Total number of degrees of freedom of the system, number of measured modes and number of measured degrees of freedom for the ith setup respectively

\( N_{r} ,\;N_{s} \) :

Number of sensor-setups and number of modal data sets for each setup respectively

\( N_{\theta 1} ,\;N_{\theta 2} \) :

Number of mass parameters and stiffness parameters respectively

\( {\text{M}},\;{\text{M}}_{0} ,\;{\text{M}}_{l} \) :

Mass matrix, non-parameterized component of mass-matrix and sub-system mass matrix corresponding to the lth mass-parameter respectively

\( {\text{K}},\;{\text{K}}_{0} ,\;{\text{K}}_{l} \) :

Stiffness matrix, non-parameterized component of stiffness-matrix and sub-system stiffness matrix corresponding to the lth stiffness-parameter respectively

\( {{\eta }}_{1} ,\;{{\eta }}_{2} \) :

Mean-vector of \( \ln \left( {{{\theta }}_{1} } \right) \) and \( \ln \left( {{{\theta }}_{2} } \right) \) respectively

\( {{\varSigma }}_{\ln 1} ,\;{{\varSigma }}_{\ln 2} \) :

Covariance matrix of \( \ln \left( {{{\theta }}_{1} } \right) \) and \( \ln \left( {{{\theta }}_{2} } \right) \) respectively

\( {\text{e}}_{{\left( {i,m,s} \right)}} ,\;{{\varepsilon }}_{{\left( {i,m,s} \right)}} \) :

Eigen-equation error and the error measuring the discrepancy between the measured eigenvector and the system mode shape, respectively, corresponding to the mth mode of the sth modal data set for the ith setup

\( {{\theta }}_{1}^{\eta } ,\;{{\theta }}_{2}^{\eta } \) :

Nominal values of \( {{\theta }}_{1} \) and \( {{\theta }}_{2} \) respectively

\( {\text{L}}_{i} \) :

Observable matrix mapping the system mode shapes down to the measured degrees of freedom for the ith setup

\( {{\varSigma }}_{{\left( {i,m} \right)}} \) :

Covariance matrix of the error measuring the discrepancy between the mth measured mode shape and the system mode shape for the ith setup

\( \sigma_{eq}^{2} \) :

Eigen equation error variance

N :

Normal distribution

0, I :

Zero vector/matrix and identity matrix respectively

\( \chi \) :

Auxiliary variable for normalization of mode shapes

\( \beta \) :

Lagrange multiplier

\( {{\varGamma }} \) :

Covariance matrix of the posterior PDF

\( {\text{H}} \) :

Hessian matrix of the objective function

\( \hat{f},\;f \) :

Experimental frequency and analytical frequency respectively

d :

Damage-extent

c :

Extent of change

\( \begin{aligned} & {\text{G}}_{M}^{{\left( {i,m,s} \right)}} ,{\text{G}}_{K}^{m} ,{\text{V}}_{M}^{{\left( {i,m,s} \right)}} ,{\text{V}}_{K}^{m} , \\ & {\text{J}}_{1} ,{\text{J}}_{2} ,M_{\theta 1} ,M_{\theta 2} ,{\text{C}}_{1_I} ,{\text{C}}_{2_I} , \\ & {\text{G}}_{1_I} ,{\text{G}}_{2_I} ,{\text{T}}_{1} ,{\text{T}}_{2} ,{\text{T}}_{3} ,{\text{T}}_{4} ,{\text{T}}_{5} ,{\text{T}}_{6} \\ \end{aligned} \) :

Various sub-matrices/vectors


Author information


Correspondence to Nirmalendu Debnath.

Appendix

Appendix A: Formulations Related to Optimization Framework

The optimization of the objective function in Eq. (19) is presented in detail in this appendix. Minimizing the objective function \( F\left( {{{\varPhi }},{{\chi }},{{\beta }},{{\theta }}_{1} ,{{\theta }}_{2} } \right) \) with respect to the auxiliary variable \( \chi_{i,m} \) yields the optimal value \( \chi_{i,m}^{*} \) given in Eq. (A.1).

$$ \chi_{i,m}^{*} = \frac{{\sum\limits_{s = 1}^{{N_{s} }} {{\hat{{\varPsi }}}_{{\left( {i,m,s} \right)}}^{T} {{\varSigma }}_{{\left( {i,m} \right)}}^{ - 1} {\text{L}}_{i} {{\varPhi }}^{\left( m \right)} } }}{{\left[ {N_{s} {{\varPhi }}^{{\left( m \right)^{T} }} {\text{L}}_{i}^{T} {{\varSigma }}_{{\left( {i,m} \right)}}^{ - 1} {\text{L}}_{i} {{\varPhi }}^{\left( m \right)} + 2\beta_{i,m} \left\| {{\text{L}}_{i} {{\varPhi }}^{\left( m \right)} } \right\|^{2} } \right]}} $$
(A.1)

Using Eq. (A.1) in Eq. (18), we get two possibilities:

$$ \frac{{\sum\limits_{{s = 1}}^{{N_{s} }} {{{\hat{\Psi }}}_{{\left( {i,m,s} \right)}}^{T} {{\Sigma }}_{{\left( {i,m} \right)}}^{{ - 1}} {\text{L}}_{i} {{\Phi }}^{{\left( m \right)}} } }}{{\left[ {N_{s} {{\Phi }}^{{\left( m \right)^{T} }} {\text{L}}_{i}^{T} {{\Sigma }}_{{\left( {i,m} \right)}}^{{ - 1}} {\text{L}}_{i} {{\Phi }}^{{\left( m \right)}} + 2\beta _{{i,m}} \left\| {{\text{L}}_{i} {{\Phi }}^{{\left( m \right)}} } \right\|^{2} } \right]}} = \pm \left\| {{\text{L}}_{i} {{\Phi }}^{{\left( m \right)}} } \right\|^{{ - 1}} $$
(A.2)

Solving Eq. (A.2), we get two solutions for \( \beta_{i,m} \) as follows:

$$ \beta_{i,m} = - \frac{{N_{s} {{\varPhi }}^{{\left( m \right)^{T} }} {\text{L}}_{i}^{T} {{\varSigma }}_{{\left( {i,m} \right)}}^{ - 1} {\text{L}}_{i} {{\varPhi }}^{\left( m \right)} }}{{2\left\| {{\text{L}}_{i} {{\varPhi }}^{\left( m \right)} } \right\|^{2} }} \pm \frac{{\sum\limits_{s = 1}^{{N_{s} }} {{\hat{{\varPsi }}}_{{\left( {i,m,s} \right)}}^{T} {{\varSigma }}_{{\left( {i,m} \right)}}^{ - 1} {\text{L}}_{i} {{\varPhi }}^{\left( m \right)} } }}{{2\left\| {{\text{L}}_{i} {{\varPhi }}^{\left( m \right)} } \right\|}} $$
(A.3)

Now, the second derivative of the objective function \( F\left( {{{\varPhi }},{{\chi }},{{\beta }},{{\theta }}_{1} ,{{\theta }}_{2} } \right) \) with respect to \( \chi_{i,m} \) is given by Eq. (A.4).

$$ \frac{{\partial^{2} F}}{{\partial \chi_{i,m}^{2} }}{ = }N_{s} {{\varPhi }}^{{\left( m \right)^{T} }} {\text{L}}_{i}^{T} {{\varSigma }}_{{\left( {i,m} \right)}}^{ - 1} {\text{L}}_{i} {{\varPhi }}^{\left( m \right)} + 2\beta_{i,m} \left\| {{\text{L}}_{i} {{\varPhi }}^{\left( m \right)} } \right\|^{2} $$
(A.4)

A minimum of \( F\left( {{{\varPhi }},{{\chi }},{{\beta }},{{\theta }}_{1} ,{{\theta }}_{2} } \right) \) requires \( \frac{{\partial^{2} F}}{{\partial \chi_{i,m}^{2} }} > 0 \); applying this condition to Eq. (A.4) gives the following expression:

$$ \beta_{i,m} > - \frac{{N_{s} {{\varPhi }}^{{\left( m \right)^{T} }} {\text{L}}_{i}^{T} {{\varSigma }}_{{\left( {i,m} \right)}}^{ - 1} {\text{L}}_{i} {{\varPhi }}^{\left( m \right)} }}{{2\left\| {{\text{L}}_{i} {{\varPhi }}^{\left( m \right)} } \right\|^{2} }} $$
(A.5)

Therefore, the larger root is taken as the optimal value \( \beta_{i,m}^{*} \), as given by Eq. (A.6).

$$ \beta_{i,m} = - \frac{{N_{s} {{\varPhi }}^{{\left( m \right)^{T} }} {\text{L}}_{i}^{T} {{\varSigma }}_{{\left( {i,m} \right)}}^{ - 1} {\text{L}}_{i} {{\varPhi }}^{\left( m \right)} }}{{2\left\| {{\text{L}}_{i} {{\varPhi }}^{\left( m \right)} } \right\|^{2} }} + \left| {\frac{{\sum\limits_{s = 1}^{{N_{s} }} {{\hat{{\varPsi }}}_{{\left( {i,m,s} \right)}}^{T} {{\varSigma }}_{{\left( {i,m} \right)}}^{ - 1} {\text{L}}_{i} {{\varPhi }}^{\left( m \right)} } }}{{2\left\| {{\text{L}}_{i} {{\varPhi }}^{\left( m \right)} } \right\|}}} \right| $$
(A.6)

Substituting the value of \( \beta_{i,m} \) from Eq. (A.6) into Eq. (A.1) gives the optimal value of \( \chi_{i,m} \), as in Eq. (A.7).

$$ \chi_{i,m}^{*} = \frac{{\sum\limits_{s = 1}^{{N_{s} }} {{\hat{{\varPsi }}}_{{\left( {i,m,s} \right)}}^{T} {{\varSigma }}_{{\left( {i,m} \right)}}^{ - 1} {\text{L}}_{i} {{\varPhi }}^{\left( m \right)} } }}{{\left| {\sum\limits_{s = 1}^{{N_{s} }} {{\hat{{\varPsi }}}_{{\left( {i,m,s} \right)}}^{T} {{\varSigma }}_{{\left( {i,m} \right)}}^{ - 1} {\text{L}}_{i} {{\varPhi }}^{\left( m \right)} } } \right|\left\| {{\text{L}}_{i} {{\varPhi }}^{\left( m \right)} } \right\|}} = \text{sgn} \left( {\sum\limits_{s = 1}^{{N_{s} }} {{\hat{{\varPsi }}}_{{\left( {i,m,s} \right)}}^{T} {{\varSigma }}_{{\left( {i,m} \right)}}^{ - 1} {\text{L}}_{i} {{\varPhi }}^{\left( m \right)} } } \right)\left\| {{\text{L}}_{i} {{\varPhi }}^{\left( m \right)} } \right\|^{ - 1} $$
(A.7)

where ‘sgn’ denotes the signum function. The optimal mode shape vector for the mth mode, \( {{\varPhi }}^{\left( m \right)*} \), is obtained by setting the derivative of the objective function with respect to \( {{\varPhi }}^{\left( m \right)} \) to zero, which yields Eq. (A.8).

$$ \begin{aligned} {{\varPhi }}^{\left( m \right)*} = & \left[ {\sigma_{\text{eq}}^{ - 2} \sum\limits_{i = 1}^{{N_{r} }} {\sum\limits_{s = 1}^{{N_{s} }} {\left( {{\text{K}}\left( {{{\theta }}_{2}^{ * } } \right) - {\hat{{\lambda }}}_{{\left( {i,m,s} \right)}} {\text{M}}\left( {{{\theta }}_{1}^{ * } } \right)} \right)}^{2} }}\right. \\ &\quad\left. {+ N_{s} \sum\limits_{i = 1}^{{N_{r} }} {\chi_{i,m}^{2*} {\text{L}}_{i}^{T} {{\varSigma }}_{{\left( {i,m} \right)}}^{ - 1} {\text{L}}_{i} } + 2\sum\limits_{i = 1}^{{N_{r} }} {\beta_{i,m}^{*} \chi_{i,m}^{2*} {\text{L}}_{i}^{T} {\text{L}}_{i} } } \right]^{ - 1} \\ = & \left[ {\sum\limits_{i = 1}^{{N_{r} }} {\sum\limits_{s = 1}^{{N_{s} }} {\chi_{i,m}^{*} {\text{L}}_{i}^{T} {{\varSigma }}_{{\left( {i,m} \right)}}^{ - 1} {\hat{{\varPsi }}}_{{\left( {i,m,s} \right)}} } } } \right] \\ \end{aligned} $$
(A.8)
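For illustration, below is a minimal NumPy sketch of the closed-form updates of Eqs. (A.6)–(A.8) for a single mode m, assuming the current matrices \( {\text{K}}\left( {{{\theta }}_{2}^{ * } } \right) \) and \( {\text{M}}\left( {{{\theta }}_{1}^{ * } } \right) \), the selection matrices \( {\text{L}}_{i} \), the inverse covariance matrices \( {{\varSigma }}_{{\left( {i,m} \right)}}^{ - 1} \) and the measured eigenvalues/mode shapes are already available as arrays; all function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def optimal_beta_chi(Phi_m, L, Sigma_inv, Psi_hat, Ns):
    """Eqs. (A.6)-(A.7): optimal beta*_{i,m} and chi*_{i,m} for one mode m.
    L[i]         : (N_i x N_d) selection matrix of set-up i
    Sigma_inv[i] : (N_i x N_i) inverse covariance of the mode-shape error
    Psi_hat[i]   : (N_i x N_s) measured mode shapes, one column per data set
    """
    beta, chi = [], []
    for Li, Si, Pi in zip(L, Sigma_inv, Psi_hat):
        LPhi = Li @ Phi_m
        nrm = np.linalg.norm(LPhi)
        a = Ns * LPhi @ Si @ LPhi                 # N_s Phi^T L^T Sigma^-1 L Phi
        b = np.sum(Pi.T @ Si @ LPhi)              # sum_s Psi_hat^T Sigma^-1 L Phi
        beta.append(-a / (2.0 * nrm**2) + abs(b) / (2.0 * nrm))   # Eq. (A.6)
        chi.append(np.sign(b) / nrm)                              # Eq. (A.7)
    return np.array(beta), np.array(chi)

def optimal_mode_shape(K, M, lam_hat, L, Sigma_inv, Psi_hat, beta, chi, sigma_eq2):
    """Eq. (A.8): optimal system mode shape Phi^(m)* for one mode m.
    K, M       : K(theta_2*) and M(theta_1*)
    lam_hat[i] : length-N_s array of measured eigenvalues for set-up i
    sigma_eq2  : eigen-equation error variance sigma_eq^2
    """
    Nd = K.shape[0]
    A = np.zeros((Nd, Nd))
    rhs = np.zeros(Nd)
    Ns = Psi_hat[0].shape[1]
    for i, (Li, Si, Pi) in enumerate(zip(L, Sigma_inv, Psi_hat)):
        for s in range(Ns):
            E = K - lam_hat[i][s] * M
            A += E @ E / sigma_eq2                # sigma_eq^-2 (K - lam M)^2
            rhs += chi[i] * Li.T @ Si @ Pi[:, s]  # chi* L^T Sigma^-1 Psi_hat
        A += Ns * chi[i]**2 * Li.T @ Si @ Li
        A += 2.0 * beta[i] * chi[i]**2 * Li.T @ Li
    return np.linalg.solve(A, rhs)                # linear solve instead of explicit inverse
```

Using a linear solve rather than forming the bracketed inverse of Eq. (A.8) explicitly is a standard numerical choice and does not change the result.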

Similarly, setting the derivative of the objective function with respect to \( {{\theta }}_{1} \) to zero gives the optimal vector \( {{\theta }}_{1}^{*} \), as shown in Eq. (A.9).

$$ \begin{aligned} {{\theta }}_{1}^{ * } &= \left[ {\sigma_{eq}^{ - 2} \sum\limits_{m = 1}^{{N_{m} }} {\sum\limits_{i = 1}^{{N_{r} }} {\sum\limits_{s = 1}^{{N_{s} }} {\left( {{\text{G}}_{M}^{{\left( {i,m,s} \right)}} } \right)^{T} {\text{G}}_{M}^{{\left( {i,m,s} \right)}} } } } } \right]^{ - 1} \\ &\quad\left[ \begin{aligned} \sigma_{eq}^{ - 2} \left\{ {\sum\limits_{m = 1}^{{N_{m} }} {\sum\limits_{i = 1}^{{N_{r} }} {\sum\limits_{s = 1}^{{N_{s} }} {\left( {{\text{G}}_{M}^{{\left( {i,m,s} \right)}} } \right)^{T} \left( {{\text{G}}_{K}^{m} {{\theta }}_{2}^{ * } + {\text{V}}_{K}^{m} - {\text{V}}_{M}^{{\left( {i,m,s} \right)}} } \right)} } } } \right\} - {\text{C}}_{1I} \hfill \\ - {\text{G}}_{1I} \left\{ {{{\varSigma }}_{\ln 1}^{ - 1} \left( {\ln \left( {{{\theta }}_{1}^{ * } } \right) - {{\eta }}_{1} } \right)} \right\} \hfill \\ \end{aligned} \right] \end{aligned} $$
(A.9)

The matrices \( {\text{V}}_{K}^{m} \), \( {\text{V}}_{M}^{{\left( {i,m,s} \right)}} \), \( {\text{G}}_{K}^{m} \), \( {\text{G}}_{M}^{{\left( {i,m,s} \right)}} \), \( {\text{C}}_{1I} \) and \( {\text{G}}_{1I} \), introduced in Eq. (A.9) for readability, take the forms given in Eqs. (A.10)–(A.15), respectively.

$$ {\text{V}}_{K}^{m} { = }\left[ {{\text{K}}_{0} {{\varPhi }}^{\left( m \right)*} } \right]_{{N_{d} {\mathbf{ \times 1}}}} $$
(A.10)
$$ {\text{V}}_{M}^{{\left( {i,m,s} \right)}} { = }\left[ {{\hat{{\lambda }}}_{{\left( {i,m,s} \right)}} {\text{M}}_{0} {{\varPhi }}^{\left( m \right)*} } \right]_{{N_{d} \times 1}} $$
(A.11)
$$ {\text{G}}_{K}^{m} { = }\left[ {\begin{array}{*{20}c} {{\text{K}}_{1} {{\varPhi }}^{\left( m \right)*} } & {{\text{K}}_{2} {{\varPhi }}^{\left( m \right)*} } & \cdots & {{\text{K}}_{{\left( {N_{\theta 2} } \right)}} {{\varPhi }}^{\left( m \right)*} } \\ \end{array} } \right]_{{N_{d} \times N_{\theta 2} }} $$
(A.12)
$$ {\text{G}}_{M}^{{\left( {i,m,s} \right)}} { = }\left[ {\begin{array}{*{20}c} {{\hat{{\lambda }}}_{{\left( {i,m,s} \right)}} {\text{M}}_{1} {{\varPhi }}^{\left( m \right)*} } & {{\hat{{\lambda }}}_{{\left( {i,m,s} \right)}} {\text{M}}_{2} {{\varPhi }}^{\left( m \right)*} } & \cdots & {{\hat{{\lambda }}}_{{\left( {i,m,s} \right)}} {\text{M}}_{{\left( {N_{\theta 1} } \right)}} {{\varPhi }}^{\left( m \right)*} } \\ \end{array} } \right]_{{N_{d} \times N_{\theta 1} }} $$
(A.13)
$$ {\text{C}}_{1I} = \left[ {\begin{array}{*{20}c} {{\raise0.7ex\hbox{$1$} \!\mathord{\left/ {\vphantom {1 {\left( {{{\theta }}_{1} } \right)_{1}^{*} }}}\right.\kern-0pt} \!\lower0.7ex\hbox{${\left( {{{\theta }}_{1} } \right)_{1}^{*} }$}},} & {{\raise0.7ex\hbox{$1$} \!\mathord{\left/ {\vphantom {1 {\left( {{{\theta }}_{1} } \right)_{2}^{*} }}}\right.\kern-0pt} \!\lower0.7ex\hbox{${\left( {{{\theta }}_{1} } \right)_{2}^{*} }$}},} & \cdots & {{\raise0.7ex\hbox{$1$} \!\mathord{\left/ {\vphantom {1 {\left( {{{\theta }}_{1} } \right)_{{N_{\theta 1} }}^{*} }}}\right.\kern-0pt} \!\lower0.7ex\hbox{${\left( {{{\theta }}_{1} } \right)_{{N_{\theta 1} }}^{*} }$}}} \\ \end{array} } \right]_{{N_{\theta 1} \times 1}}^{T} $$
(A.14)
$$ {\text{G}}_{1I} = {\mathbf{diag}}\left[ {\begin{array}{*{20}c} {{\raise0.7ex\hbox{$1$} \!\mathord{\left/ {\vphantom {1 {\left( {{{\theta }}_{1} } \right)_{1}^{*} }}}\right.\kern-0pt} \!\lower0.7ex\hbox{${\left( {{{\theta }}_{1} } \right)_{1}^{*} }$}}} & {{\raise0.7ex\hbox{$1$} \!\mathord{\left/ {\vphantom {1 {\left( {{{\theta }}_{1} } \right)_{2}^{*} }}}\right.\kern-0pt} \!\lower0.7ex\hbox{${\left( {{{\theta }}_{1} } \right)_{2}^{*} }$}}} & \cdots & {{\raise0.7ex\hbox{$1$} \!\mathord{\left/ {\vphantom {1 {\left( {{{\theta }}_{1} } \right)_{{N_{\theta 1} }}^{*} }}}\right.\kern-0pt} \!\lower0.7ex\hbox{${\left( {{{\theta }}_{1} } \right)_{{N_{\theta 1} }}^{*} }$}}} \\ \end{array} } \right]_{{N_{\theta 1} \times N_{\theta 1} }} $$
(A.15)

where ‘diag’ denotes a diagonal matrix formed from the listed elements.
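As a hypothetical illustration, the sketch below assembles these sub-matrices for one \( \left( {i,m,s} \right) \) triple; the analogous \( {\text{C}}_{2I} \) and \( {\text{G}}_{2I} \) of Eqs. (A.17)–(A.18) follow the same pattern with \( {{\theta }}_{2}^{ * } \). Names are illustrative.

```python
import numpy as np

def assemble_submatrices(K0, K_sub, M0, M_sub, Phi_m, lam_ims, theta1):
    """Eqs. (A.10)-(A.15) for one (i, m, s) triple.
    K_sub, M_sub : lists of sub-system matrices K_l and M_l
    Phi_m        : optimal mode shape Phi^(m)* (length N_d)
    lam_ims      : measured eigenvalue for set-up i, mode m, data set s
    theta1       : current mass-parameter vector theta_1^*
    """
    V_K = K0 @ Phi_m                                                 # Eq. (A.10)
    V_M = lam_ims * (M0 @ Phi_m)                                     # Eq. (A.11)
    G_K = np.column_stack([Kl @ Phi_m for Kl in K_sub])              # Eq. (A.12)
    G_M = np.column_stack([lam_ims * (Ml @ Phi_m) for Ml in M_sub])  # Eq. (A.13)
    C_1I = 1.0 / theta1                                              # Eq. (A.14)
    G_1I = np.diag(1.0 / theta1)                                     # Eq. (A.15)
    return V_K, V_M, G_K, G_M, C_1I, G_1I
```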

The optimal vector \( {{\theta }}_{2}^{*} \) may be similarly obtained as shown in Eq. (A.16).

$$\begin{aligned} {{\theta }}_{2}^{ * } &= \left[ {\sigma_{\text{eq}}^{ - 2} N_{r} N_{s} \sum\limits_{m = 1}^{{N_{m} }} {\left( {{\text{G}}_{K}^{m} } \right)^{T} {\text{G}}_{K}^{m} } } \right]^{ - 1} \\ &\quad \left[ \begin{aligned} \sigma_{\text{eq}}^{ - 2} \left\{ {\sum\limits_{m = 1}^{{N_{m} }} {\sum\limits_{i = 1}^{{N_{r} }} {\sum\limits_{s = 1}^{{N_{s} }} {\left( {{\text{G}}_{K}^{m} } \right)^{T} \left( {{\text{G}}_{M}^{{\left( {i,m,s} \right)}} {{\theta }}_{1}^{ * } + {\text{V}}_{M}^{{\left( {i,m,s} \right)}} } \right)} - N_{r} N_{s} \sum\limits_{m = 1}^{{N_{m} }} {\left( {{\text{G}}_{K}^{m} } \right)^{T} {\text{V}}_{K}^{m} } } } } \right\} \hfill \\ - {\text{C}}_{2I} - {\text{G}}_{2I} \left\{ {{{\varSigma }}_{\ln 2}^{ - 1} \left( {\ln \left( {{{\theta }}_{2}^{ * } } \right) - {{\eta }}_{2} } \right)} \right\} \hfill \\ \end{aligned} \right] \end{aligned} $$
(A.16)

The matrices \( {\text{C}}_{2I} \) and \( {\text{G}}_{2I} \), introduced in Eq. (A.16) for readability, take the forms given in Eqs. (A.17) and (A.18), respectively.

$$ {\text{C}}_{2I} = \left[ {\begin{array}{*{20}c} {{\raise0.7ex\hbox{$1$} \!\mathord{\left/ {\vphantom {1 {\left( {{{\theta }}_{2} } \right)_{1}^{*} }}}\right.\kern-0pt} \!\lower0.7ex\hbox{${\left( {{{\theta }}_{2} } \right)_{1}^{*} }$}},} & {{\raise0.7ex\hbox{$1$} \!\mathord{\left/ {\vphantom {1 {\left( {{{\theta }}_{2} } \right)_{2}^{*} }}}\right.\kern-0pt} \!\lower0.7ex\hbox{${\left( {{{\theta }}_{2} } \right)_{2}^{*} }$}},} & \cdots & {{\raise0.7ex\hbox{$1$} \!\mathord{\left/ {\vphantom {1 {\left( {{{\theta }}_{2} } \right)_{{N_{\theta 2} }}^{*} }}}\right.\kern-0pt} \!\lower0.7ex\hbox{${\left( {{{\theta }}_{2} } \right)_{{N_{\theta 2} }}^{*} }$}}} \\ \end{array} } \right]_{{N_{\theta 2} \times 1}}^{T} $$
(A.17)
$$ {\text{G}}_{2I} = {\mathbf{diag}}\left[ {\begin{array}{*{20}c} {{\raise0.7ex\hbox{$1$} \!\mathord{\left/ {\vphantom {1 {\left( {{{\theta }}_{2} } \right)_{1}^{*} }}}\right.\kern-0pt} \!\lower0.7ex\hbox{${\left( {{{\theta }}_{2} } \right)_{1}^{*} }$}}} & {{\raise0.7ex\hbox{$1$} \!\mathord{\left/ {\vphantom {1 {\left( {{{\theta }}_{2} } \right)_{2}^{*} }}}\right.\kern-0pt} \!\lower0.7ex\hbox{${\left( {{{\theta }}_{2} } \right)_{2}^{*} }$}}} & \cdots & {{\raise0.7ex\hbox{$1$} \!\mathord{\left/ {\vphantom {1 {\left( {{{\theta }}_{2} } \right)_{{N_{\theta 2} }}^{*} }}}\right.\kern-0pt} \!\lower0.7ex\hbox{${\left( {{{\theta }}_{2} } \right)_{{N_{\theta 2} }}^{*} }$}}} \\ \end{array} } \right]_{{N_{\theta 2} \times N_{\theta 2} }} $$
(A.18)
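Because \( {{\theta }}_{1}^{*} \) appears on both sides of Eq. (A.9) (through \( \ln \left( {{{\theta }}_{1}^{ * } } \right) \), \( {\text{C}}_{1I} \) and \( {\text{G}}_{1I} \)), and \( {{\theta }}_{2}^{*} \) likewise in Eq. (A.16), these updates lend themselves to a simple fixed-point iteration. The minimal sketch below shows the \( {{\theta }}_{1} \) update of Eq. (A.9), assuming the sub-matrices of Eqs. (A.10)–(A.15) have been pre-assembled for every \( \left( {i,m,s} \right) \); the \( {{\theta }}_{2} \) update of Eq. (A.16) is handled analogously. Function and variable names are illustrative.

```python
import numpy as np

def update_theta1(GM_list, GK_list, VK_list, VM_list, theta2, theta1_init,
                  Sigma_ln1_inv, eta1, sigma_eq2, n_iter=20):
    """Fixed-point iteration on Eq. (A.9).
    GM_list, VM_list : G_M^(i,m,s) and V_M^(i,m,s) for every (i, m, s) triple
    GK_list, VK_list : G_K^m and V_K^m repeated for the matching triples
    """
    # Left bracket of Eq. (A.9): sigma_eq^-2 * sum G_M^T G_M
    A = sum(GM.T @ GM for GM in GM_list) / sigma_eq2
    # Data part of the right bracket: sigma_eq^-2 * sum G_M^T (G_K theta2 + V_K - V_M)
    data = sum(GM.T @ (GK @ theta2 + VK - VM)
               for GM, GK, VK, VM in zip(GM_list, GK_list, VK_list, VM_list)) / sigma_eq2
    theta1 = theta1_init.copy()
    for _ in range(n_iter):                       # iterate: theta1 also enters the prior terms
        C_1I = 1.0 / theta1                       # Eq. (A.14)
        G_1I = np.diag(1.0 / theta1)              # Eq. (A.15)
        rhs = data - C_1I - G_1I @ (Sigma_ln1_inv @ (np.log(theta1) - eta1))
        theta1 = np.linalg.solve(A, rhs)
    return theta1
```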

Appendix B: Formulation of Hessian Matrix

The Hessian matrix H of the objective function \( F\left( {{{\varPhi }},{{\chi }},{{\beta }},{{\theta }}_{1} ,{{\theta }}_{2} } \right) \) in Eq. (19) is formulated here with respect to the uncertain parameter set \( \left( {{{\varPhi }},{{\chi }},{{\beta }},{{\theta }}_{1} ,{{\theta }}_{2} } \right) \). The Hessian matrix is evaluated with the uncertain parameters ordered as follows: (i) \( {{\varPhi }} = \left( {\left( {{{\varPhi }}^{\left( 1 \right)} } \right)^{T} ,\left( {{{\varPhi }}^{\left( 2 \right)} } \right)^{T} , \ldots ,\left( {{{\varPhi }}^{{\left( {N_{m} } \right)}} } \right)^{T} } \right)^{T} \), (ii) \( {{\chi }} = \left[ {\begin{array}{*{20}c} {{{\chi }}_{1}^{T} } & {{{\chi }}_{2}^{T} } & \cdots & {{{\chi }}_{{N_{m} }}^{T} } \\ \end{array} } \right]^{T} \), (iii) \( {{\beta }} = \left[ {\begin{array}{*{20}c} {{{\beta }}_{1}^{T} } & {{{\beta }}_{2}^{T} } & \cdots & {{{\beta }}_{{N_{m} }}^{T} } \\ \end{array} } \right]^{T} \), (iv) \( {{\theta }}_{1} = \left[ {\left( {\theta_{1} } \right)_{1} ,\left( {\theta_{1} } \right)_{2} , \ldots ,\left( {\theta_{1} } \right)_{{N_{\theta 1} }} } \right]^{T} \), (v) \( {{\theta }}_{2} = \left[ {\left( {\theta_{2} } \right)_{1} ,\left( {\theta_{2} } \right)_{2} , \ldots ,\left( {\theta_{2} } \right)_{{N_{\theta 2} }} } \right]^{T} \). The mth blocks of \( {{\chi }} \) and \( {{\beta }} \) are \( {{\chi }}_{m}^{T} = \left[ {\chi_{1,m} ,\chi_{2,m} , \ldots ,\chi_{{N_{r} ,m}} } \right]^{T} \) and \( {{\beta }}_{m}^{T} = \left[ {\beta_{1,m} ,\beta_{2,m} , \ldots ,\beta_{{N_{r} ,m}} } \right]^{T} \), respectively. For a simplified formulation of the Hessian matrix \( {\text{H}}\left( {{{\varPhi }},{{\chi }},{{\beta }},{{\theta }}_{1} ,{{\theta }}_{2} } \right) \), \( {{\varSigma }}_{\ln 1}^{ - 1} \) and \( {{\varSigma }}_{\ln 2}^{ - 1} \) are assumed to be diagonal matrices, as shown in Eqs. (B.1) and (B.2), respectively.

$$ {{\varSigma }}_{\ln 1}^{ - 1} = {\mathbf{diag}}\left[ {\begin{array}{*{20}c} {\sigma_{M1}^{ - 2} } & {\sigma_{M2}^{ - 2} } & \cdots & {\sigma_{{MN_{\theta 1} }}^{ - 2} } \\ \end{array} } \right]_{{N_{\theta 1} \times N_{\theta 1} }} $$
(B.1)
$$ {{\varSigma }}_{\ln 2}^{ - 1} = {\mathbf{diag}}\left[ {\begin{array}{*{20}c} {\sigma_{K1}^{ - 2} } & {\sigma_{K2}^{ - 2} } & \cdots & {\sigma_{{KN_{\theta 2} }}^{ - 2} } \\ \end{array} } \right]_{{N_{\theta 2} \times N_{\theta 2} }} $$
(B.2)

The overall Hessian matrix is composed of multiple sub-matrix blocks, as shown in Eq. (B.3).

$$ {\text{H}}\left( {{{\varPhi }},{{\chi }},{{\beta }},{{\theta }}_{1} ,{{\theta }}_{2} } \right) = \left[ {\begin{array}{*{20}c} {{\text{H}}_{11} } & {{\text{H}}_{12} } & {{\text{H}}_{13} } & {\begin{array}{*{20}c} {{\text{H}}_{14} } & {{\text{H}}_{15} } \\ \end{array} } \\ {} & {{\text{H}}_{22} } & {{\text{H}}_{23} } & {\begin{array}{*{20}c} {{\text{H}}_{24} } & {{\text{H}}_{25} } \\ \end{array} } \\ {} & {} & {{\text{H}}_{33} } & {\begin{array}{*{20}c} {{\text{H}}_{34} } & {{\text{H}}_{35} } \\ \end{array} } \\ {\begin{array}{*{20}c} {} \\ {\text{sym}} \\ \end{array} } & {\begin{array}{*{20}c} {} \\ {} \\ \end{array} } & {\begin{array}{*{20}c} {} \\ {} \\ \end{array} } & {\begin{array}{*{20}c} {\begin{array}{*{20}c} {{\text{H}}_{44} } \\ {} \\ \end{array} } & {\begin{array}{*{20}c} {{\text{H}}_{45} } \\ {{\text{H}}_{55} } \\ \end{array} } \\ \end{array} } \\ \end{array} } \right] $$
(B.3)

It may be noted that the components \( {\text{H}}_{24} = \frac{{\partial^{2} F}}{{\partial^{2} {{\chi \theta }}_{1} }} \), \( {\text{H}}_{25} = \frac{{\partial^{2} F}}{{\partial^{2} {{\chi \theta }}_{2} }} \), \( {\text{H}}_{33} = \frac{{\partial^{2} F}}{{\partial {{\beta }}^{2} }} \), \( {\text{H}}_{34} = \frac{{\partial^{2} F}}{{\partial {{\beta \theta }}_{1} }} \) and \( {\text{H}}_{35} = \frac{{\partial^{2} F}}{{\partial {{\beta \theta }}_{2} }} \) are zero matrices. The remaining components of the Hessian matrix are expressed as follows (a block-wise assembly sketch is given after Eq. (B.30)):

  1. (i)

    \( {\text{H}}_{11} = \frac{{\partial^{2} F}}{{\partial {{\varPhi }}^{2} }} \) of size \( N_{d} N_{m} \times N_{d} N_{m} \) is a block diagonal matrix formulated as shown in Eq. (B.4) where each block matrix is of size \( N_{d} \times N_{d} \).

$$ {\text{H}}_{11} = \frac{{\partial^{2} F}}{{\partial {{\varPhi }}^{2} }} = {\mathbf{diag}}\left[ {\begin{array}{*{20}c} {\frac{{\partial^{2} F}}{{\partial {{\varPhi }}^{{\left( 1 \right)^{2} }} }}} & {\frac{{\partial^{2} F}}{{\partial {{\varPhi }}^{{\left( 2 \right)^{2} }} }}} & \cdots & {\frac{{\partial^{2} F}}{{\partial {{\varPhi }}^{{\left( {N_{m} } \right)^{2} }} }}} \\ \end{array} } \right]_{{N_{d} N_{m} \times N_{d} N_{m} }} $$
(B.4)

The expression for the mth diagonal block of Eq. (B.4) is shown in Eq. (B.5).

$$\begin{aligned} \frac{{\partial^{2} F}}{{\partial {{\varPhi }}^{{\left( m \right)^{2} }} }} &= \left[ {\sigma_{eq}^{ - 2} \sum\limits_{i = 1}^{{N_{r} }} {\sum\limits_{s = 1}^{{N_{s} }} {\left( {{\text{K}}\left( {{{\theta }}_{2}^{ * } } \right) - {\hat{{\lambda }}}_{{\left( {i,m,s} \right)}} {\text{M}}\left( {{{\theta }}_{1}^{ * } } \right)} \right)}^{2} } } \right. \\ & \quad \left. {+ N_{s} \sum\limits_{i = 1}^{{N_{r} }} {\chi_{i,m}^{2*} {\text{L}}_{i}^{T} {{\varSigma }}_{{\left( {i,m} \right)}}^{ - 1} {\text{L}}_{i} } + 2\sum\limits_{i = 1}^{{N_{r} }} {\beta_{i,m}^{*} \chi_{i,m}^{2*} {\text{L}}_{i}^{T} {\text{L}}_{i} } } \right]_{{N_{d} \times N_{d} }} \end{aligned}$$
(B.5)
(ii)

    \( {\text{H}}_{12} = \frac{{\partial^{2} F}}{{\partial^{2} {{\varPhi \chi }}}} \) of size \( N_{d} N_{m} \times N_{m} N_{r} \) is a block diagonal matrix formulated as shown in Eq. (B.6) where each block matrix is of size \( N_{d} \times N_{r} \).

$$ {\text{H}}_{12} = \frac{{\partial^{2} F}}{{\partial^{2} {{\varPhi \chi }}}} = {\mathbf{diag}}\left[ {\begin{array}{*{20}c} {\frac{{\partial^{2} F}}{{\partial^{2} {{\varPhi }}^{\left( 1 \right)} {{\chi }}_{1} }}} & {\frac{{\partial^{2} F}}{{\partial^{2} {{\varPhi }}^{\left( 2 \right)} {{\chi }}_{2} }}} & \cdots & {\frac{{\partial^{2} F}}{{\partial^{2} {{\varPhi }}^{{\left( {N_{m} } \right)}} {{\chi }}_{{N_{m} }} }}} \\ \end{array} } \right]_{{N_{d} N_{m} \times N_{m} N_{r} }} $$
(B.6)

The expression for the mth diagonal block of Eq. (B.6) is shown in Eq. (B.7).

$$ \frac{{\partial^{2} F}}{{\partial^{2} {{\varPhi }}^{\left( m \right)} {{\chi }}_{m} }} = \left[ {\begin{array}{*{20}c} {\frac{{\partial^{2} F}}{{\partial^{2} {{\varPhi }}^{\left( m \right)} \chi_{1,m} }}} & {\frac{{\partial^{2} F}}{{\partial^{2} {{\varPhi }}^{\left( m \right)} \chi_{2,m} }}} & \cdots & {\frac{{\partial^{2} F}}{{\partial^{2} {{\varPhi }}^{\left( m \right)} \chi_{{N_{r} ,m}} }}} \\ \end{array} } \right]_{{N_{d} \times N_{r} }} $$
(B.7)

where

$$ \frac{{\partial^{2} F}}{{\partial^{2} {{\varPhi }}^{\left( m \right)} \chi_{i,m} }} = \left[ {2N_{s} \chi_{i,m}^{*} {\text{L}}_{i}^{T} {{\varSigma }}_{{\left( {i,m} \right)}}^{ - 1} {\text{L}}_{i} {{\varPhi }}^{\left( m \right)*} + 4\beta_{i,m}^{*} \chi_{i,m}^{*} {\text{L}}_{i}^{T} {\text{L}}_{i} {{\varPhi }}^{\left( m \right)*} - \sum\limits_{s = 1}^{{N_{s} }} {{\text{L}}_{i}^{T} {{\varSigma }}_{{\left( {i,m} \right)}}^{ - 1} {\hat{{\varPsi }}}_{{\left( {i,m,s} \right)}} } } \right]_{{N_{d} \times 1}} $$
(B.8)
(iii)

    \( {\text{H}}_{13} = \frac{{\partial^{2} F}}{{\partial^{2} {{\varPhi \beta }}}} \) of size \( N_{d} N_{m} \times N_{m} N_{r} \) is a block diagonal matrix formulated as shown in Eq. (B.9) where each block matrix is of size \( N_{d} \times N_{r} \).

$$ {\text{H}}_{13} = \frac{{\partial^{2} F}}{{\partial^{2} {{\varPhi \beta }}}} = {\mathbf{diag}}\left[ {\begin{array}{*{20}c} {\frac{{\partial^{2} F}}{{\partial^{2} {{\varPhi }}^{\left( 1 \right)} {{\beta }}_{1} }}} & {\frac{{\partial^{2} F}}{{\partial^{2} {{\varPhi }}^{\left( 2 \right)} {{\beta }}_{2} }}} & \cdots & {\frac{{\partial^{2} F}}{{\partial^{2} {{\varPhi }}^{{\left( {N_{m} } \right)}} {{\beta }}_{{N_{m} }} }}} \\ \end{array} } \right]_{{N_{d} N_{m} \times N_{m} N_{r} }} $$
(B.9)

The expression for the mth diagonal block of Eq. (B.9) is shown in Eq. (B.10).

$$ \frac{{\partial^{2} F}}{{\partial^{2} {{\varPhi }}^{\left( m \right)} {{\beta }}_{m} }} = \left[ {\begin{array}{*{20}c} {\frac{{\partial^{2} F}}{{\partial^{2} {{\varPhi }}^{\left( m \right)} \beta_{1,m} }}} & {\frac{{\partial^{2} F}}{{\partial^{2} {{\varPhi }}^{\left( m \right)} \beta_{2,m} }}} & \cdots & {\frac{{\partial^{2} F}}{{\partial^{2} {{\varPhi }}^{\left( m \right)} \beta_{{N_{r} ,m}} }}} \\ \end{array} } \right]_{{N_{d} \times N_{r} }} $$
(B.10)

where

$$ \frac{{\partial^{2} F}}{{\partial^{2} {{\varPhi }}^{\left( m \right)} \beta_{i,m} }} = 2\chi_{i,m}^{2*} {\text{L}}_{i}^{T} {\text{L}}_{i} {{\varPhi }}^{\left( m \right)*} $$
(B.11)
(iv)
    $$ {\text{H}}_{14} = \frac{{\partial^{2} F}}{{\partial^{2} {{\varPhi \theta }}_{1} }} = - \sigma_{eq}^{ - 2} \sum\limits_{m = 1}^{{N_{m} }} {\sum\limits_{i = 1}^{{N_{r} }} {\sum\limits_{s = 1}^{{N_{s} }} {{\text{L}}_{1}^{{\left( {i,m,s} \right)}} } } } $$
    (B.12)

where

$$ \left( {{\text{L}}_{1}^{{\left( {i,m,s} \right)}} } \right)_{{l^{th} \, col}} = \left[ {\hat{\lambda }_{{\left( {i,m,s} \right)}} \left\{ {\left( {{\text{K}}^{*} - \hat{\lambda }_{{\left( {i,m,s} \right)}} {\text{M}}^{*} } \right){\text{M}}_{l} + {\text{M}}_{l} \left( {{\text{K}}^{*} - \hat{\lambda }_{{\left( {i,m,s} \right)}} {\text{M}}^{*} } \right)} \right\}{{\varPhi }}^{\left( m \right)*} } \right]_{{N_{d} \times 1}} $$
(B.13)
(v)
    $$ {\text{H}}_{15} = \frac{{\partial^{2} F}}{{\partial^{2} {{\varPhi \theta }}_{2} }} = \sigma_{eq}^{ - 2} \sum\limits_{m = 1}^{{N_{m} }} {\sum\limits_{i = 1}^{{N_{r} }} {\sum\limits_{s = 1}^{{N_{s} }} {{\text{L}}_{2}^{{\left( {i,m,s} \right)}} } } } $$
    (B.14)

where

$$ \left( {{\text{L}}_{2}^{{\left( {i,m,s} \right)}} } \right)_{{l^{th} \, col}} = \left[ {\left\{ {\left( {{\text{K}}^{*} - \hat{\lambda }_{{\left( {i,m,s} \right)}} {\text{M}}^{*} } \right){\text{K}}_{l} + {\text{K}}_{l} \left( {{\text{K}}^{*} - \hat{\lambda }_{{\left( {i,m,s} \right)}} {\text{M}}^{*} } \right)} \right\}{{\varPhi }}^{\left( m \right)*} } \right]_{{N_{d} \times 1}} $$
(B.15)
(vi)

    \( {\text{H}}_{22} = \frac{{\partial^{2} F}}{{\partial {{\chi }}^{2} }} \) of size \( N_{m} N_{r} \times N_{m} N_{r} \) is a block diagonal matrix formulated as shown in Eq. (B.16) where each block matrix is of size \( N_{r} \times N_{r} \).

$$ {\text{H}}_{22} = \frac{{\partial^{2} F}}{{\partial {{\chi }}^{2} }} = {\mathbf{diag}}\left[ {\begin{array}{*{20}c} {\frac{{\partial^{2} F}}{{\partial {{\chi }}_{1}^{2} }}} & {\frac{{\partial^{2} F}}{{\partial {{\chi }}_{2}^{2} }}} & \cdots & {\frac{{\partial^{2} F}}{{\partial {{\chi }}_{{N_{m} }}^{2} }}} \\ \end{array} } \right]_{{N_{m} N_{r} \times N_{m} N_{r} }} $$
(B.16)

The expression for the mth diagonal block of Eq. (B.16) is shown in Eq. (B.17).

$$ \frac{{\partial^{2} F}}{{\partial {{\chi }}_{m}^{2} }} = {\mathbf{diag}}\left[ {\begin{array}{*{20}c} {\frac{{\partial^{2} F}}{{\partial \chi_{1,m}^{2} }}} & {\frac{{\partial^{2} F}}{{\partial \chi_{2,m}^{2} }}} & \cdots & {\frac{{\partial^{2} F}}{{\partial \chi_{{N_{r} ,m}}^{2} }}} \\ \end{array} } \right]_{{N_{r} \times N_{r} }} $$
(B.17)

where

$$ \frac{{\partial^{2} F}}{{\partial \chi_{i,m}^{2} }} = N_{s} {{\varPhi }}^{{\left( m \right)*^{T} }} {\text{L}}_{i}^{T} {{\varSigma }}_{{\left( {i,m} \right)}}^{ - 1} {\text{L}}_{i} {{\varPhi }}^{\left( m \right)*} + 2\beta_{i,m}^{*} \left\| {{\text{L}}_{i} {{\varPhi }}^{\left( m \right)*} } \right\|^{2} $$
(B.18)
(vii)

    \( {\text{H}}_{23} = \frac{{\partial^{2} F}}{{\partial^{2} {{\chi \beta }}}} \) of size \( N_{m} N_{r} \times N_{m} N_{r} \) is a block diagonal matrix formulated as shown in Eq. (B.19) where each block matrix is of size \( N_{r} \times N_{r} \).

$$ {\text{H}}_{23} = \frac{{\partial^{2} F}}{{\partial^{2} {{\chi \beta }}}} = {\mathbf{diag}}\left[ {\begin{array}{*{20}c} {\frac{{\partial^{2} F}}{{\partial^{2} {{\chi }}_{1} {{\beta }}_{1} }}} & {\frac{{\partial^{2} F}}{{\partial^{2} {{\chi }}_{2} {{\beta }}_{2} }}} & \cdots & {\frac{{\partial^{2} F}}{{\partial^{2} {{\chi }}_{{N_{m} }} {{\beta }}_{{N_{m} }} }}} \\ \end{array} } \right]_{{N_{m} N_{r} \times N_{m} N_{r} }} $$
(B.19)

The expression for the mth diagonal block of Eq. (B.19) is shown in Eq. (B.20).

$$ \frac{{\partial^{2} F}}{{\partial^{2} {{\chi }}_{m} {{\beta }}_{m} }} = {\mathbf{diag}}\left[ {\begin{array}{*{20}c} {\frac{{\partial^{2} F}}{{\partial^{2} \chi_{1,m} \beta_{1,m} }}} & {\frac{{\partial^{2} F}}{{\partial^{2} \chi_{2,m} \beta_{2,m} }}} & \cdots & {\frac{{\partial^{2} F}}{{\partial^{2} \chi_{{N_{r} ,m}} \beta_{{N_{r} ,m}} }}} \\ \end{array} } \right]_{{N_{r} \times N_{r} }} $$
(B.20)

where

$$ \frac{{\partial^{2} F}}{{\partial^{2} \chi_{i,m} \beta_{i,m} }} = 2\chi_{i,m}^{*} \left\| {{\text{L}}_{i} {{\varPhi }}^{\left( m \right)*} } \right\|^{2} $$
(B.21)
(viii)
    $$ {\text{H}}_{44} = \frac{{\partial^{2} F}}{{\partial {{\theta }}_{1}^{2} }} = \sigma_{eq}^{ - 2} \sum\limits_{m = 1}^{{N_{m} }} {\sum\limits_{i = 1}^{{N_{r} }} {\sum\limits_{s = 1}^{{N_{s} }} {\left( {{\text{G}}_{M}^{{\left( {i,m,s} \right)}} } \right)^{T} {\text{G}}_{M}^{{\left( {i,m,s} \right)}} } } } - {\text{T}}_{1} + {\text{T}}_{2} + {\text{T}}_{3} $$
    (B.22)

The matrices \( {\text{T}}_{1} \), \( {\text{T}}_{2} \) and \( {\text{T}}_{3} \), introduced in Eq. (B.22) for readability, take the forms given in Eqs. (B.23)–(B.25), respectively.

$$ {\text{T}}_{1} = {\mathbf{diag}}\left( {{\raise0.7ex\hbox{$1$} \!\mathord{\left/ {\vphantom {1 {\left( {{{\theta }}_{1} } \right)_{1}^{2} }}}\right.\kern-0pt} \!\lower0.7ex\hbox{${\left( {{{\theta }}_{1} } \right)_{1}^{2} }$}},{\raise0.7ex\hbox{$1$} \!\mathord{\left/ {\vphantom {1 {\left( {{{\theta }}_{1} } \right)_{2}^{2} }}}\right.\kern-0pt} \!\lower0.7ex\hbox{${\left( {{{\theta }}_{1} } \right)_{2}^{2} }$}}, \ldots {\raise0.7ex\hbox{$1$} \!\mathord{\left/ {\vphantom {1 {\left( {{{\theta }}_{1} } \right)_{{N_{\theta 1} }}^{2} }}}\right.\kern-0pt} \!\lower0.7ex\hbox{${\left( {{{\theta }}_{1} } \right)_{{N_{\theta 1} }}^{2} }$}}} \right) $$
(B.23)
$$ {\text{T}}_{2} = {\mathbf{diag}}\left( {{\raise0.7ex\hbox{${\left( {1 - \ln \left( {{{\theta }}_{1} } \right)_{1} } \right)}$} \!\mathord{\left/ {\vphantom {{\left( {1 - \ln \left( {{{\theta }}_{1} } \right)_{1} } \right)} {\left( {{{\theta }}_{1} } \right)_{1}^{2} \sigma_{M1}^{2} }}}\right.\kern-0pt} \!\lower0.7ex\hbox{${\left( {{{\theta }}_{1} } \right)_{1}^{2} \sigma_{M1}^{2} }$}},{\raise0.7ex\hbox{${\left( {1 - \ln \left( {{{\theta }}_{1} } \right)_{2} } \right)}$} \!\mathord{\left/ {\vphantom {{\left( {1 - \ln \left( {{{\theta }}_{1} } \right)_{2} } \right)} {\left( {{{\theta }}_{1} } \right)_{2}^{2} \sigma_{M2}^{2} }}}\right.\kern-0pt} \!\lower0.7ex\hbox{${\left( {{{\theta }}_{1} } \right)_{2}^{2} \sigma_{M2}^{2} }$}}, \ldots \, {\raise0.7ex\hbox{${\left( {1 - \ln \left( {{{\theta }}_{1} } \right)_{{N_{\theta 1} }} } \right)}$} \!\mathord{\left/ {\vphantom {{\left( {1 - \ln \left( {{{\theta }}_{1} } \right)_{{N_{\theta 1} }} } \right)} {\left( {{{\theta }}_{1} } \right)_{{N_{\theta 1} }}^{2} \sigma_{{MN_{\theta 1} }}^{2} }}}\right.\kern-0pt} \!\lower0.7ex\hbox{${\left( {{{\theta }}_{1} } \right)_{{N_{\theta 1} }}^{2} \sigma_{{MN_{\theta 1} }}^{2} }$}}} \right) $$
(B.24)
$$ {\text{T}}_{3} = {\mathbf{diag}}\left( {{\raise0.7ex\hbox{${\left( {{{\eta }}_{1} } \right)_{1} }$} \!\mathord{\left/ {\vphantom {{\left( {{{\eta }}_{1} } \right)_{1} } {\left( {{{\theta }}_{1} } \right)_{1}^{2} \sigma_{M1}^{2} }}}\right.\kern-0pt} \!\lower0.7ex\hbox{${\left( {{{\theta }}_{1} } \right)_{1}^{2} \sigma_{M1}^{2} }$}},{\raise0.7ex\hbox{${\left( {{{\eta }}_{1} } \right)_{2} }$} \!\mathord{\left/ {\vphantom {{\left( {{{\eta }}_{1} } \right)_{2} } {\left( {{{\theta }}_{1} } \right)_{2}^{2} \sigma_{M2}^{2} }}}\right.\kern-0pt} \!\lower0.7ex\hbox{${\left( {{{\theta }}_{1} } \right)_{2}^{2} \sigma_{M2}^{2} }$}}, \ldots {\raise0.7ex\hbox{${\left( {{{\eta }}_{1} } \right)_{{N_{\theta 1} }} }$} \!\mathord{\left/ {\vphantom {{\left( {{{\eta }}_{1} } \right)_{{N_{\theta 1} }} } {\left( {{{\theta }}_{1} } \right)_{{N_{\theta 1} }}^{2} \sigma_{{MN_{\theta 1} }}^{2} }}}\right.\kern-0pt} \!\lower0.7ex\hbox{${\left( {{{\theta }}_{1} } \right)_{{N_{\theta 1} }}^{2} \sigma_{{MN_{\theta 1} }}^{2} }$}}} \right) $$
(B.25)
(ix)
    $$ {\text{H}}_{45} = \frac{{\partial^{2} F}}{{\partial^{2} {{\theta }}_{1} {{\theta }}_{2} }} = - \sigma_{\text{eq}}^{ - 2} \sum\limits_{m = 1}^{{N_{m} }} {\sum\limits_{i = 1}^{{N_{r} }} {\sum\limits_{s = 1}^{{N_{s} }} {\left( {{\text{G}}_{M}^{{\left( {i,m,s} \right)}} } \right)^{T} {\text{G}}_{K}^{m} } } } $$
    (B.26)
(x)
    $$ {\text{H}}_{55} = \frac{{\partial^{2} F}}{{\partial {{\theta }}_{2}^{2} }} = \sigma_{\text{eq}}^{ - 2} N_{r} N_{s} \sum\limits_{m = 1}^{{N_{m} }} {\left( {{\text{G}}_{K}^{m} } \right)^{T} {\text{G}}_{K}^{m} } - {\text{T}}_{4} + {\text{T}}_{5} + {\text{T}}_{6} $$
    (B.27)

The matrices \( {\text{T}}_{4} \), \( {\text{T}}_{5} \) and \( {\text{T}}_{6} \), introduced in Eq. (B.27) for readability, take the forms given in Eqs. (B.28)–(B.30), respectively.

$$ {\text{T}}_{4} = {\mathbf{diag}}\left( {{\raise0.7ex\hbox{$1$} \!\mathord{\left/ {\vphantom {1 {\left( {{{\theta }}_{2} } \right)_{1}^{2} }}}\right.\kern-0pt} \!\lower0.7ex\hbox{${\left( {{{\theta }}_{2} } \right)_{1}^{2} }$}},{\raise0.7ex\hbox{$1$} \!\mathord{\left/ {\vphantom {1 {\left( {{{\theta }}_{2} } \right)_{2}^{2} }}}\right.\kern-0pt} \!\lower0.7ex\hbox{${\left( {{{\theta }}_{2} } \right)_{2}^{2} }$}}, \ldots {\raise0.7ex\hbox{$1$} \!\mathord{\left/ {\vphantom {1 {\left( {{{\theta }}_{2} } \right)_{{N_{\theta 2} }}^{2} }}}\right.\kern-0pt} \!\lower0.7ex\hbox{${\left( {{{\theta }}_{2} } \right)_{{N_{\theta 2} }}^{2} }$}}} \right) $$
(B.28)
$$ {\text{T}}_{5} = {\mathbf{diag}}\left( {{\raise0.7ex\hbox{${\left( {1 - \ln \left( {{{\theta }}_{2} } \right)_{1} } \right)}$} \!\mathord{\left/ {\vphantom {{\left( {1 - \ln \left( {{{\theta }}_{2} } \right)_{1} } \right)} {\left( {{{\theta }}_{2} } \right)_{1}^{2} \sigma_{K1}^{2} }}}\right.\kern-0pt} \!\lower0.7ex\hbox{${\left( {{{\theta }}_{2} } \right)_{1}^{2} \sigma_{K1}^{2} }$}},{\raise0.7ex\hbox{${\left( {1 - \ln \left( {{{\theta }}_{2} } \right)_{2} } \right)}$} \!\mathord{\left/ {\vphantom {{\left( {1 - \ln \left( {{{\theta }}_{2} } \right)_{2} } \right)} {\left( {{{\theta }}_{2} } \right)_{2}^{2} \sigma_{K2}^{2} }}}\right.\kern-0pt} \!\lower0.7ex\hbox{${\left( {{{\theta }}_{2} } \right)_{2}^{2} \sigma_{K2}^{2} }$}}, \ldots {\raise0.7ex\hbox{${\left( {1 - \ln \left( {{{\theta }}_{2} } \right)_{{N_{\theta 2} }} } \right)}$} \!\mathord{\left/ {\vphantom {{\left( {1 - \ln \left( {{{\theta }}_{2} } \right)_{{N_{\theta 2} }} } \right)} {\left( {{{\theta }}_{2} } \right)_{{N_{\theta 2} }}^{2} \sigma_{{KN_{\theta 2} }}^{2} }}}\right.\kern-0pt} \!\lower0.7ex\hbox{${\left( {{{\theta }}_{2} } \right)_{{N_{\theta 2} }}^{2} \sigma_{{KN_{\theta 2} }}^{2} }$}}} \right) $$
(B.29)
$$ {\text{T}}_{6} = {\mathbf{diag}}\left( {{\raise0.7ex\hbox{${\left( {{{\eta }}_{2} } \right)_{1} }$} \!\mathord{\left/ {\vphantom {{\left( {{{\eta }}_{2} } \right)_{1} } {\left( {{{\theta }}_{2} } \right)_{1}^{2} \sigma_{K1}^{2} }}}\right.\kern-0pt} \!\lower0.7ex\hbox{${\left( {{{\theta }}_{2} } \right)_{1}^{2} \sigma_{K1}^{2} }$}},{\raise0.7ex\hbox{${\left( {{{\eta }}_{2} } \right)_{2} }$} \!\mathord{\left/ {\vphantom {{\left( {{{\eta }}_{2} } \right)_{2} } {\left( {{{\theta }}_{2} } \right)_{2}^{2} \sigma_{K2}^{2} }}}\right.\kern-0pt} \!\lower0.7ex\hbox{${\left( {{{\theta }}_{2} } \right)_{2}^{2} \sigma_{K2}^{2} }$}}, \ldots \, {\raise0.7ex\hbox{${\left( {{{\eta }}_{2} } \right)_{{N_{\theta 2} }} }$} \!\mathord{\left/ {\vphantom {{\left( {{{\eta }}_{2} } \right)_{{N_{\theta 2} }} } {\left( {{{\theta }}_{2} } \right)_{{N_{\theta 2} }}^{2} \sigma_{{KN_{\theta 2} }}^{2} }}}\right.\kern-0pt} \!\lower0.7ex\hbox{${\left( {{{\theta }}_{2} } \right)_{{N_{\theta 2} }}^{2} \sigma_{{KN_{\theta 2} }}^{2} }$}}} \right) $$
(B.30)
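Once the non-zero sub-blocks of Eqs. (B.4)–(B.30) have been evaluated, the symmetric Hessian of Eq. (B.3) can be assembled block-wise. The minimal sketch below follows the block layout and zero blocks stated above; everything else (names, argument conventions) is illustrative.

```python
import numpy as np

def assemble_hessian(H11, H12, H13, H14, H15, H22, H23, H44, H45, H55):
    """Eq. (B.3): symmetric block Hessian with H24, H25, H33, H34, H35 = 0."""
    n_phi = H11.shape[0]           # N_d * N_m
    n_chi = H22.shape[0]           # N_m * N_r
    n_beta = n_chi                 # beta has the same layout as chi
    n_t1, n_t2 = H44.shape[0], H55.shape[0]
    Z = np.zeros
    H33 = Z((n_beta, n_beta))      # stated to be a zero matrix
    return np.block([
        [H11,   H12,               H13,                H14,               H15],
        [H12.T, H22,               H23,                Z((n_chi, n_t1)),  Z((n_chi, n_t2))],
        [H13.T, H23.T,             H33,                Z((n_beta, n_t1)), Z((n_beta, n_t2))],
        [H14.T, Z((n_t1, n_chi)),  Z((n_t1, n_beta)),  H44,               H45],
        [H15.T, Z((n_t2, n_chi)),  Z((n_t2, n_beta)),  H45.T,             H55],
    ])
```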

Appendix C: Relation Between the Hessian and Covariance Matrix for a Random Vector Following Combined Normal–Lognormal Probability Distribution

Consider a vector \( {{\theta }}\left( { = \left[ {\begin{array}{*{20}c} {{{\theta }}_{1} } \\ {{{\theta }}_{2} } \\ \end{array} } \right]} \right) \) following a multivariate combined normal–lognormal distribution, where \( {{\theta }}_{1} \), consisting of p variables, follows a lognormal distribution with \( \ln \left( {{{\theta }}_{1} } \right) \) having mean \( \ln \left( {{{\theta }}_{1}^{*} } \right) \), and \( {{\theta }}_{2} \), consisting of q variables, follows a normal distribution with mean \( {{\theta }}_{2}^{*} \). The total number of variables in \( {{\theta }} \) is \( n\left( { = p + q} \right) \). The joint probability density of \( {{\theta }} \) is then given by Eq. (C.1).

$$ p\left( {{\theta }} \right) = p\left( {{{\theta }}_{1} ,{{\theta }}_{2} } \right) = \left( {2\pi } \right)^{ - n/2} \left| {{\varSigma }} \right|^{ - 1/2} \prod\limits_{i = 1}^{p} {\frac{1}{{\left( {{{\theta }}_{1} } \right)_{i} }}} \exp \left( { - \frac{1}{2}\left( {\begin{array}{*{20}c} {\ln \left( {{{\theta }}_{1} } \right) - \ln \left( {{{\theta }}_{1}^{*} } \right)} \\ {{{\theta }}_{2} - {{\theta }}_{2}^{*} } \\ \end{array} } \right)^{T} {{\varSigma }}^{ - 1} \left( {\begin{array}{*{20}c} {\ln \left( {{{\theta }}_{1} } \right) - \ln \left( {{{\theta }}_{1}^{*} } \right)} \\ {{{\theta }}_{2} - {{\theta }}_{2}^{*} } \\ \end{array} } \right)} \right) $$
(C.1)

Taking the negative logarithm of the probability density function (PDF), the objective function is obtained as shown in Eq. (C.2).

$$ J\left( {{\theta }} \right) = - \ln p\left( {{\theta }} \right) = \frac{n}{2}\ln \left( {2\pi } \right) + \frac{1}{2}\ln \left| {{\varSigma }} \right| + \sum\limits_{i = 1}^{p} {\ln \left( {{{\theta }}_{1} } \right)_{i} } + \frac{1}{2}\left( {\begin{array}{*{20}c} {\ln \left( {{{\theta }}_{1} } \right) - \ln \left( {{{\theta }}_{1}^{*} } \right)} \\ {{{\theta }}_{2} - {{\theta }}_{2}^{*} } \\ \end{array} } \right)^{T} \left[ {\begin{array}{*{20}c} {{{\varSigma }}_{11}^{ - 1} } & {{{\varSigma }}_{12}^{ - 1} } \\ {{{\varSigma }}_{21}^{ - 1} } & {{{\varSigma }}_{22}^{ - 1} } \\ \end{array} } \right]\left( {\begin{array}{*{20}c} {\ln \left( {{{\theta }}_{1} } \right) - \ln \left( {{{\theta }}_{1}^{*} } \right)} \\ {{{\theta }}_{2} - {{\theta }}_{2}^{*} } \\ \end{array} } \right) $$
(C.2)

where \( {{\varSigma }}^{ - 1} = \left[ {\begin{array}{*{20}c} {\left( {{{\varSigma }}_{11}^{ - 1} } \right)_{p \times p} } & {\left( {{{\varSigma }}_{12}^{ - 1} } \right)_{p \times q} } \\ {\left( {{{\varSigma }}_{21}^{ - 1} } \right)_{q \times p} } & {\left( {{{\varSigma }}_{22}^{ - 1} } \right)_{q \times q} } \\ \end{array} } \right] \). In order to establish the relation between the covariance matrix \( {{\varSigma }} \) and the Hessian matrix H, the different components of the Hessian matrix are formulated as follows (a sketch applying these relations is given after Eq. (C.10)):

(i)

For \( 1 \le l \le p \), the component \( {\text{H}}^{{\left( {l,l} \right)}} \left( {{\theta }} \right) \) of the Hessian matrix can be expressed as shown in Eq. (C.3).

$$ {\text{H}}^{{\left( {l,l} \right)}} \left( {{\theta }} \right) = \frac{{\partial^{2} J\left( {{\theta }} \right)}}{{\partial \left( {{{\theta }}_{1} } \right)_{l}^{2} }} = \frac{{\left( {{{\varSigma }}_{11}^{ - 1} } \right)^{{\left( {l,l} \right)}} - \sum\limits_{i = 1}^{p} {\left( {\ln \left( {{{\theta }}_{1} } \right)_{i} - \ln \left( {{{\theta }}_{1}^{*} } \right)_{i} } \right)\left( {{{\varSigma }}_{11}^{ - 1} } \right)^{{\left( {l,i} \right)}} } - \sum\limits_{i = 1}^{q} {\left( {\left( {{{\theta }}_{2} } \right)_{i} - \left( {{{\theta }}_{2}^{*} } \right)_{i} } \right)\left( {{{\varSigma }}_{12}^{ - 1} } \right)^{{\left( {l,i} \right)}} } - 1}}{{\left( {{{\theta }}_{1}^{*} } \right)_{l}^{2} }} $$
(C.3)

The expression for the Hessian component shown in Eq. (C.3) at \( {{\theta }} = {{\theta }}^{*} \) can be rewritten as in Eq. (C.4).

$$ {\text{H}}^{{\left( {l,l} \right)}} \left( {{{\theta }}^{*} } \right) = \left. {\frac{{\partial^{2} J\left( {{\theta }} \right)}}{{\partial \left( {{{\theta }}_{1} } \right)_{l}^{2} }}} \right|_{{{{\theta }} = {{\theta }}^{*} }} = \frac{{\left( {{{\varSigma }}_{11}^{ - 1} } \right)^{{\left( {l,l} \right)}} - 1}}{{\left( {{{\theta }}_{1}^{*} } \right)_{l}^{2} }} $$
(C.4)

Simplifying the expression in Eq. (C.4) gives Eq. (C.5).

$$ \left( {{{\varSigma }}_{11}^{ - 1} } \right)^{{\left( {l,l} \right)}} = 1 + \left( {{{\theta }}_{1}^{*} } \right)_{l}^{2} {\text{H}}^{{\left( {l,l} \right)}} \left( {{{\theta }}^{*} } \right) $$
(C.5)
(ii)

For \( 1 \le l,l' \le p \) and \( l \ne l' \), the component \( {\text{H}}^{{\left( {l,l'} \right)}} \left( {{\theta }} \right) \) of the Hessian matrix at \( {{\theta }} = {{\theta }}^{*} \) can be expressed as shown in Eq. (C.6).

$$ {\text{H}}^{{\left( {l,l'} \right)}} \left( {{{\theta }}^{*} } \right) = \left. {\frac{{\partial^{2} J\left( {{\theta }} \right)}}{{\partial \left( {{{\theta }}_{1} } \right)_{l} \partial \left( {{{\theta }}_{1} } \right)_{l'} }}} \right|_{{{{\theta = \theta }}^{*} }} = \frac{{\left( {{{\varSigma }}_{11}^{ - 1} } \right)^{{\left( {l,l'} \right)}} }}{{\left( {{{\theta }}_{1}^{*} } \right)_{l} \left( {{{\theta }}_{1}^{*} } \right)_{l'} }} $$
(C.6)

Equivalently, Eq. (C.6) can be written as Eq. (C.7).

$$ \left( {{{\varSigma }}_{11}^{ - 1} } \right)^{{\left( {l,l'} \right)}} = \left( {{{\theta }}_{1}^{*} } \right)_{l} \left( {{{\theta }}_{1}^{*} } \right)_{l'} {\text{H}}^{{\left( {l,l'} \right)}} \left( {{{\theta }}^{*} } \right) $$
(C.7)
(iii)

For \( 1 \le l \le p \) and \( 1 \le l' \le q \), the component \( {\text{H}}^{{\left( {l,p + l'} \right)}} \left( {{\theta }} \right) \) of the Hessian matrix at \( {{\theta }} = {{\theta }}^{*} \) can be expressed as shown in Eq. (C.8).

$$ {\text{H}}^{{\left( {l,p + l'} \right)}} \left( {{{\theta }}^{*} } \right) = \left. {\frac{{\partial^{2} J\left( {{\theta }} \right)}}{{\partial \left( {{{\theta }}_{1} } \right)_{l} \partial \left( {{{\theta }}_{2} } \right)_{l'} }}} \right|_{{{{\theta = \theta }}^{*} }} = \frac{{\left( {{{\varSigma }}_{12}^{ - 1} } \right)^{{\left( {l,l'} \right)}} }}{{\left( {{{\theta }}_{1}^{*} } \right)_{l} }} $$
(C.8)

Equivalently, Eq. (C.8) can be written as Eq. (C.9).

$$ \left( {{{\varSigma }}_{12}^{ - 1} } \right)^{{\left( {l,l'} \right)}} = \left( {{{\theta }}_{1}^{*} } \right)_{l} {\text{H}}^{{\left( {l,p + l'} \right)}} \left( {{{\theta }}^{*} } \right) $$
(C.9)
(iv)

For \( 1 \le l,l' \le q \), the component \( {\text{H}}^{{\left( {p + l,p + l'} \right)}} \left( {{\theta }} \right) \) of the Hessian matrix at \( {{\theta }} = {{\theta }}^{*} \) can be expressed as shown in Eq. (C.10).

    $$ {\text{H}}^{{\left( {p + l,p + l'} \right)}} \left( {{{\theta }}^{*} } \right) = \left. {\frac{{\partial^{2} J\left( {{\theta }} \right)}}{{\partial \left( {{{\theta }}_{2} } \right)_{l} \partial \left( {{{\theta }}_{2} } \right)_{l'} }}} \right|_{{{{\theta = \theta }}^{*} }} = \left( {{{\varSigma }}_{22}^{ - 1} } \right)^{{\left( {l,l'} \right)}} $$
    (C.10)
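Equations (C.5), (C.7), (C.9) and (C.10) can be applied element-wise to convert the Hessian evaluated at \( {{\theta }}^{*} \) into the precision matrix \( {{\varSigma }}^{ - 1} \), whose inverse gives the posterior covariance matrix \( {{\varGamma }} \). A minimal sketch with illustrative names is given below.

```python
import numpy as np

def precision_from_hessian(H, theta1_star, p):
    """Map the Hessian at theta* to Sigma^-1 via Eqs. (C.5), (C.7), (C.9), (C.10).
    H           : (p+q) x (p+q) Hessian of J(theta) evaluated at theta = theta*
    theta1_star : optimal values of the p lognormally distributed parameters
    p           : number of lognormal parameters (the first p entries of theta)
    """
    t1 = np.asarray(theta1_star, dtype=float)
    Sigma_inv = np.array(H, dtype=float)
    Sigma_inv[:p, :p] = np.outer(t1, t1) * H[:p, :p]      # Eqs. (C.5) and (C.7)
    Sigma_inv[np.arange(p), np.arange(p)] += 1.0          # the "+1" on the diagonal, Eq. (C.5)
    Sigma_inv[:p, p:] = t1[:, None] * H[:p, p:]           # Eq. (C.9)
    Sigma_inv[p:, :p] = Sigma_inv[:p, p:].T               # symmetry of Sigma^-1
    Sigma_inv[p:, p:] = H[p:, p:]                         # Eq. (C.10)
    return Sigma_inv

def posterior_covariance(H, theta1_star, p):
    """Posterior covariance matrix obtained by inverting the assembled precision."""
    return np.linalg.inv(precision_from_hessian(H, theta1_star, p))
```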

Appendix D: Pseudo-Code for Implementation of the Proposed Methodology into FE Model

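The pseudo-code itself appears as a figure in the published article. As a stand-in, the sketch below shows one possible organisation of the iterative scheme of Appendices A–C, reusing the illustrative helper functions sketched above (optimal_beta_chi, optimal_mode_shape and update_theta1) and adding an analogous update_theta2 for Eq. (A.16); it should be read as an outline under those assumptions, not as the authors' implementation.

```python
import numpy as np

def update_theta2(GK_modes, VK_modes, GM_trip, VM_trip, GK_trip, theta1, theta2,
                  Sigma_ln2_inv, eta2, sigma_eq2, Nr, Ns, n_iter=20):
    """Fixed-point iteration on Eq. (A.16), analogous to update_theta1."""
    A = Nr * Ns * sum(GK.T @ GK for GK in GK_modes) / sigma_eq2
    data = (sum(GK.T @ (GM @ theta1 + VM) for GK, GM, VM in zip(GK_trip, GM_trip, VM_trip))
            - Nr * Ns * sum(GK.T @ VK for GK, VK in zip(GK_modes, VK_modes))) / sigma_eq2
    for _ in range(n_iter):
        rhs = data - 1.0 / theta2 - np.diag(1.0 / theta2) @ (Sigma_ln2_inv @ (np.log(theta2) - eta2))
        theta2 = np.linalg.solve(A, rhs)
    return theta2

def run_model_updating(K0, K_sub, M0, M_sub, L, Sigma_inv, lam_hat, Psi_hat,
                       eta1, eta2, Sigma_ln1_inv, Sigma_ln2_inv, sigma_eq2,
                       theta1, theta2, Phi, n_outer=50, tol=1e-6):
    """One possible organisation of the overall iterative scheme.
    Indexing (illustrative): L[i] is the (N_i x N_d) selection matrix of set-up i;
    Sigma_inv[m][i], lam_hat[m][i] (length N_s) and Psi_hat[m][i] (N_i x N_s) hold the
    measured data of mode m and set-up i; Phi[m] is the current system mode shape."""
    Nm, Nr = len(Phi), len(L)
    Ns = Psi_hat[0][0].shape[1]
    for _ in range(n_outer):
        theta1_old, theta2_old = theta1.copy(), theta2.copy()
        K = K0 + sum(t * Kl for t, Kl in zip(theta2, K_sub))       # K(theta_2)
        M = M0 + sum(t * Ml for t, Ml in zip(theta1, M_sub))       # M(theta_1)

        # Step 1: per-mode closed-form updates of beta, chi and Phi (Eqs. A.6-A.8)
        for m in range(Nm):
            beta_m, chi_m = optimal_beta_chi(Phi[m], L, Sigma_inv[m], Psi_hat[m], Ns)
            Phi[m] = optimal_mode_shape(K, M, lam_hat[m], L, Sigma_inv[m],
                                        Psi_hat[m], beta_m, chi_m, sigma_eq2)

        # Step 2: assemble the sub-matrices of Eqs. (A.10)-(A.13)
        GK_modes = [np.column_stack([Kl @ Phi[m] for Kl in K_sub]) for m in range(Nm)]
        VK_modes = [K0 @ Phi[m] for m in range(Nm)]
        GM_trip, VM_trip, GK_trip, VK_trip = [], [], [], []
        for m in range(Nm):
            for i in range(Nr):
                for s in range(Ns):
                    lam = lam_hat[m][i][s]
                    GM_trip.append(np.column_stack([lam * (Ml @ Phi[m]) for Ml in M_sub]))
                    VM_trip.append(lam * (M0 @ Phi[m]))
                    GK_trip.append(GK_modes[m]); VK_trip.append(VK_modes[m])

        # Step 3: parameter updates (Eqs. A.9 and A.16)
        theta1 = update_theta1(GM_trip, GK_trip, VK_trip, VM_trip, theta2, theta1,
                               Sigma_ln1_inv, eta1, sigma_eq2)
        theta2 = update_theta2(GK_modes, VK_modes, GM_trip, VM_trip, GK_trip,
                               theta1, theta2, Sigma_ln2_inv, eta2, sigma_eq2, Nr, Ns)

        # Step 4: stop when the parameter vectors no longer change appreciably
        if max(np.max(np.abs(theta1 - theta1_old)),
               np.max(np.abs(theta2 - theta2_old))) < tol:
            break
    return theta1, theta2, Phi
```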


Cite this article

Das, A., Debnath, N. Limited Sensor-Based Probabilistic Damage Detection Using Combined Normal–Lognormal Distributions. Arab J Sci Eng (2020). https://doi.org/10.1007/s13369-020-05056-7


Keywords

  • Bayesian model updating
  • Lognormal distribution
  • Multiple set-ups
  • Multiple modal data sets
  • Damage detection
  • Gibbs sampling