
A modified stochastic perturbation algorithm for closely-spaced eigenvalues problems based on surrogate model

  • RESEARCH PAPER
  • Published:
Structural and Multidisciplinary Optimization

Abstract

A modified stochastic perturbation method is proposed for uncertainty propagation and dynamic reanalysis of closely-spaced eigenvalues, with uncertainties in the design variables taken into account. Quasi-symmetric or partially symmetric structures appear frequently, and one of their primary features is closely-distributed natural frequencies. For structures with closely-spaced eigenvalues, conventional uncertainty analysis and dynamic reanalysis methods for distinct eigenvalues are no longer applicable, owing to the instability and sensitivity of such eigenvalues to changes in the design variables and to the excessive concentration of adjacent eigenvalues. Initially, the spectral decompositions of the stiffness and mass matrices are provided; by a transfer technique, the eigenproblem of closely-spaced eigenvalues is converted into that of repeated eigenvalues with two appended perturbation parts; then the perturbed closely-spaced eigenvalue is rewritten as the sum of the original closely-spaced eigenvalues’ mean value and a surrogate model that approximates the first-order perturbation term by polynomial chaos expansions. With this method, statistical quantities of the perturbed closely-spaced eigenvalues are calculated directly and accurately, which facilitates uncertainty analysis and dynamic reanalysis. Furthermore, the capability of the proposed method to deal with relatively large uncertainties and complex engineering structures is demonstrated. The accuracy and efficiency of the proposed method are verified by numerical examples.


References

  • Berveiller M, Sudret B, Lemaire M (2006) Stochastic finite element: a non-intrusive approach by regression. Eur J Comput Mech 15(1–3):81–92

  • Bettebghor D, Leroy F-H (2014) Overlapping radial basis function interpolants for spectrally accurate approximation of functions of eigenvalues with application to buckling of composite plates. Comput Math Appl 67(10):1816–1836

  • Brincker R, Lopez-Aenlle M (2015) Mode shape sensitivity of two closely spaced eigenvalues. J Sound Vib 334:377–387

  • Cameron RH, Martin WT (1947) The orthogonal development of non-linear functionals in series of Fourier-Hermite functionals. Ann Math 48(2):385–392

  • Chen SH (1991) Matrix perturbation theory in structure vibration analysis. Chongqing Publishing House, Chongqing

  • Chen SH (1992) Vibration theory of structures with random parameters. Jilin Science and Technology Press, Changchun

  • Chen SH (2007) Matrix perturbation theory in structural dynamics. Science Press, Beijing

  • Chen SH, Yang XW, Lian HD (2000) Comparison of several eigenvalue reanalysis methods for modified structures. Struct Multidiscip Optim 20:253–259

  • Chowdhury R, Adhikari S (2010) High dimensional model representation for stochastic finite element analysis. Appl Math Model 34:3917–3932

  • Chowdhury R, Rao BN (2009) Hybrid high dimensional model representation for reliability analysis. Comput Methods Appl Mech Eng 198:753–765

  • Dongbin X, Em KG (2003) Modeling uncertainty in flow simulation via generalized polynomial chaos. J Comput Phys 187:137–167

  • Elishakoff I (1983) Probabilistic methods in the theory of structures. Wiley, New York

  • Field RV Jr, Grigoriu M (2004) On the accuracy of the polynomial chaos approximation. Probab Eng Mech 19:65–80

  • Gallina A, Pichler L, Uhl T (2011) Enhanced meta-modelling technique for analysis of mode crossing, mode veering and mode coalescence in structural dynamics. Mech Syst Signal Process 25(7):2297–2312

  • Ghosh D, Ghanem R (2012) An invariant subspace-based approach to the random eigenvalue problem of systems with clustered spectrum. Int J Numer Method Eng 91:378–396

  • Isukapalli SS (1999) Uncertainty analysis of transport-transformation models. PhD thesis, The State University of New Jersey

  • Kirsch U (2003) Approximate vibration reanalysis of structures. AIAA J 41(3):504–511

  • Nagy ZK, Braatz RD (2007) Distributional uncertainty analysis using power series and polynomial chaos expansions. J Process Control 17:229–240

  • Pagnacco E, Souza de Cursi E, Sampaio R (2016) Subspace inverse power method and polynomial chaos representation for the modal frequency responses of random mechanical systems. Comput Mech 58:129–149

  • Qiu ZP, Chen SH, Elishakoff I (1995) Natural frequencies of structures with uncertain but nonrandom parameters. J Optim Theory Appl 86:669–683

  • Qiu Z, Qiu H (2014) A direct-variance-analysis method for generalized stochastic eigenvalue problem based on matrix perturbation theory. Sci China Technol Sci 57(6):1238–1248

  • Rahman S (2006) A solution of the random eigenvalue problem by a dimensional decomposition method. Int J Numer Method Eng 67:1318–1340

  • Sliva G, Brezillon A, Cadou JM, Duigou L (2010) A study of the eigenvalue sensitivity by homotopy and perturbation methods. J Comput Appl Math 234:2297–2302

  • Sudret B (2008) Global sensitivity analysis using polynomial chaos expansions. Reliab Eng Syst Saf 93:964–979

  • Tatang MA (1995) Direct incorporation of uncertainty in chemical and environmental engineering systems. PhD thesis, Massachusetts Institute of Technology

  • Villadsen J, Michelsen ML (1978) Solution of differential equation models by polynomial approximation. Prentice-Hall, Englewood Cliffs

  • Wang X, Wang L (2011) Uncertainty quantification and propagation analysis of structures based on the measurement data. Math Comput Model 54(11–12):2725–2735

  • Wang L, Wang X (2015) Dynamic loads identification in presence of unknown but bounded measurement errors. Inverse Prob Sci Eng 23(8):1–29

  • Wang C, Qiu Z, Wu D (2014) Numerical analysis of uncertain temperature field by stochastic finite difference method. Sci China Phys Mech Astron 57(4):698–707

  • Wang C, Qiu Z (2015) Modified perturbation method for eigenvalues of structure with interval parameters. Sci China Phys Mech Astron 58(1):014602

  • Wang L, Wang X, Li X (2016a) Inverse system method for dynamic loads identification via noisy measured dynamic responses. Eng Comput 33(4):1070–1094

  • Wang C, Qiu Z, Yang Y (2016b) Uncertainty propagation of heat conduction problem with multiple random inputs. Int J Heat Mass Transf 99:95–101

Download references

Acknowledgements

This work was supported by the AVIC Research Project (No. cxy2012BH07), the National Natural Science Foundation of P. R. China (No. 11372025, No. 11432002), the Defense Industrial Technology Development Program (No. A0420132101, No. A0820132001 and No. JCKY2013601B), the Aeronautical Science Foundation of China (No. 2012ZA51010) and the 111 Project (No. B07009). The authors would like to sincerely thank all the sponsors. Moreover, the authors wish to express their thanks to the reviewers for their constructive comments and suggestions for improving this paper.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Zhiping Qiu.

Appendices

Appendix

1.1 Discrimination of closely-spaced eigenvalues

The perturbation method for closely-spaced eigenvalues differs from the methods for distinct and repeated eigenvalues. Hence, before perturbation analysis is carried out on an eigenvalue and its eigenvector, the type of the eigenvalue must be identified.

Several methods exist for discriminating closely-spaced eigenvalues. Concerning two adjacent eigenvalues λ i and λ i+1, the index of intensive degree δ is defined as

$$ \delta =\left({\lambda}_{i+1}-{\lambda}_i\right)/\left({\lambda}_{i+1}+{\lambda}_i\right). $$
(1)

The smaller δ is, the closer λ i and λ i+1 are. According to (1), δ measures the gap between adjacent eigenvalues.
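As a minimal illustration of Eq. (1), the index can be computed directly from a pair of adjacent eigenvalues (a sketch; the function name is ours):

```python
def closeness_index(lam_i, lam_next):
    """Index of intensive degree, Eq. (1): the relative gap
    delta = (l_{i+1} - l_i) / (l_{i+1} + l_i) of two adjacent eigenvalues."""
    return (lam_next - lam_i) / (lam_next + lam_i)

# two adjacent eigenvalues differing by about 2 percent
delta = closeness_index(100.0, 102.0)   # a small delta suggests a close pair
```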

Nevertheless, there are limitations on the adoption of δ. For instance, if two vibration systems are investigated, with δ 1 = 0.03 and δ 2 = 0.09, respectively, one cannot conclude that the system corresponding to δ 1 = 0.03 is associated with closely-spaced eigenvalues while the system corresponding to δ 2 = 0.09 is associated with distinct eigenvalues. The index δ therefore does not reflect the essence of closely-spaced eigenvalues, which makes it difficult to apply.

In this work, the angle θ i between the original eigenvector u i0 and the eigenvector u i after perturbation is utilized to measure the intensive degree of adjacent eigenvalues, defined as

$$ {\theta}_i=1- c o{s}^2\left\langle {\boldsymbol{u}}_0^i,{\boldsymbol{u}}_i\right\rangle =1-\left|{{\boldsymbol{u}}_0^i}^{\mathrm{T}}{\boldsymbol{u}}_i\right|/{\left(\left({{\boldsymbol{u}}_0^i}^{\mathrm{T}}{\boldsymbol{u}}_0^i\right)\left({{\boldsymbol{u}}_i}^{\mathrm{T}}{\boldsymbol{u}}_i\right)\right)}^{\frac{1}{2}}, $$
(2)

When u i0 is a complex eigenvector, θ i is defined as

$$ {\theta}_i=1- c o{s}^2\left\langle {\boldsymbol{u}}_0^i,{\boldsymbol{u}}_i\right\rangle =1-\left|{{\boldsymbol{u}}_0^i}^{\mathrm{H}}{\boldsymbol{u}}_i\right|/{\left(\left({{\boldsymbol{u}}_0^i}^{\mathrm{H}}{\boldsymbol{u}}_0^i\right)\left({{\boldsymbol{u}}_i}^{\mathrm{H}}{\boldsymbol{u}}_i\right)\right)}^{\frac{1}{2}}. $$
(3)

If θ i approaches 0, the perturbations of the parametric matrices have little influence on u i0 ; if θ i approaches 1, they exert considerable influence on u i0 . As is well known, closely-spaced eigenvectors are very sensitive to parametric perturbations. So if θ i is approximately equal to 1, we can preliminarily conclude that λ i0 has a relatively small gap from its adjacent eigenvalues. Furthermore, if every element in the set (θ j , θ j + 1, ⋯, θ k ) obtained from (3) approaches 1, then (λ j , λ j + 1, ⋯, λ k ) is certainly a set of clustered eigenvalues.

In conclusion, the discrimination of closely-spaced eigenvalues is summarized as:

  1)

    Calculate every θ i in the system:

    $$ {\theta}_i=1- c o{s}^2\left\langle {\boldsymbol{u}}_0^i,{\boldsymbol{u}}_i\right\rangle =1-\left|{{\boldsymbol{u}}_0^i}^{\mathrm{T}}{\boldsymbol{u}}_i\right|/{\left(\left({{\boldsymbol{u}}_0^i}^{\mathrm{T}}{\boldsymbol{u}}_0^i\right)\left({{\boldsymbol{u}}_i}^{\mathrm{T}}{\boldsymbol{u}}_i\right)\right)}^{\frac{1}{2}}, $$
  2)

    Identify a cluster (θ j , θ j + 1, ⋯, θ k ) whose elements are all large in value within θ = {θ 1, θ 2, ⋯, θ n };

  3)

    The cluster of eigenvalues (λ j , λ j + 1, ⋯, λ k ) corresponding to (θ j , θ j + 1, ⋯, θ k ) is a set of closely-spaced eigenvalues.
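The steps above can be sketched numerically as follows (illustrative only; the code implements the right-hand side of Eq. (2) for real eigenvectors, and all names are ours):

```python
import numpy as np

def intensive_degree(u0, u):
    """theta_i of Eq. (2): 1 minus the normalized modal overlap between the
    unperturbed eigenvector u0 and the perturbed eigenvector u."""
    return 1.0 - abs(u0 @ u) / np.sqrt((u0 @ u0) * (u @ u))

# an unrotated eigenvector gives theta = 0 (insensitive, distinct eigenvalue);
# an orthogonally rotated one gives theta = 1 (candidate for a cluster)
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
theta_same = intensive_degree(e1, e1)   # 0.0
theta_rot = intensive_degree(e1, e2)    # 1.0

# steps 2)-3): collect the indices whose theta is close to 1
thetas = np.array([0.02, 0.97, 0.99, 0.05])
cluster = np.where(thetas > 0.9)[0]     # modes 1 and 2 form a cluster
```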

Difficulties in applying the distinct eigenvalue’s perturbation method to closely-spaced eigenvalues

For clarity, we start the investigation from the first-order and second-order perturbation formulae in terms of the distinct eigenvalue as follows:

$$ {\boldsymbol{K}}_0{\boldsymbol{u}}_0^i={\lambda}_0^i{\boldsymbol{M}}_0{\boldsymbol{u}}_0^i, $$
(1)
$$ {{\boldsymbol{u}}_0^i}^{\mathrm{T}}{\boldsymbol{M}}_0{\boldsymbol{u}}_0^i=1, $$
(2)

where K 0 and M 0 are n×n symmetric stiffness and mass matrices, respectively; λ i0 and u i0 represent the distinct eigenvalue and its eigenvector, respectively.

After the perturbations, K 0 and M 0 will become K 0 + ε K 1 and M 0 + ε M 1, respectively; the eigenpairs λ i  , u i corresponding to λ i0 , u i0 are expressed as

$$ {\lambda}_i={\lambda}_0^i+\varepsilon {\lambda}_1^i+{\varepsilon}^2{\lambda}_2^i+ O\left({\varepsilon}^3\right), $$
(3)
$$ {\boldsymbol{u}}_i={\boldsymbol{u}}_0^i+\varepsilon {\boldsymbol{u}}_1^i+{\varepsilon}^2{\boldsymbol{u}}_2^i+ O\left({\varepsilon}^3\right), $$
(4)

in which ε denotes a small parameter.

In light of the matrix perturbation method, and for brevity of expression, the first-order and second-order perturbations of the eigenvalue λ i are obtained as

$$ \left\{\begin{array}{l}{\lambda}_1^i={{\boldsymbol{u}}_0^i}^{\mathrm{T}}\left({\boldsymbol{K}}_1-{\lambda}_0^i{\boldsymbol{M}}_1\right){\boldsymbol{u}}_0^i\\ {}{\lambda}_2^i={{\boldsymbol{u}}_0^i}^{\mathrm{T}}\left({\boldsymbol{K}}_1-{\lambda}_0^i{\boldsymbol{M}}_1\right){\boldsymbol{u}}_1^i-{\lambda}_1^i{{\boldsymbol{u}}_0^i}^{\mathrm{T}}{\boldsymbol{M}}_0{\boldsymbol{u}}_1^i-{\lambda}_1^i{{\boldsymbol{u}}_0^i}^{\mathrm{T}}{\boldsymbol{M}}_1{\boldsymbol{u}}_0^i\end{array}\right., $$
(5)

On the basis of the modal superposition method, we have

$$ \begin{array}{ll}{\boldsymbol{u}}_1^i={\displaystyle \sum_{j=1}^n{C}_{i j}{\boldsymbol{u}}_0^j}\kern0.5em ,\hfill & {\boldsymbol{u}}_2^i={\displaystyle \sum_{j=1}^n{D}_{i j}{\boldsymbol{u}}_0^j}\hfill \end{array}, $$
(6)

in conjunction with

$$ \begin{array}{l}{C}_{i j}=\left[\begin{array}{ll}-\frac{1}{2}{{\boldsymbol{u}}_0^i}^{\mathrm{T}}{\boldsymbol{M}}_1{\boldsymbol{u}}_0^i\hfill & \left( i= j\right)\hfill \\ {}\frac{1}{\lambda_0^i-{\lambda}_0^j}{{\boldsymbol{u}}_0^j}^{\mathrm{T}}\left({\boldsymbol{K}}_1-{\lambda}_0^i{\boldsymbol{M}}_1\right){\boldsymbol{u}}_0^i\hfill & \left( i\ne j\right)\hfill \end{array}\right.,\hfill \\ {}{D}_{i j}=\left[\begin{array}{ll}-\frac{1}{2}{{\boldsymbol{u}}_1^i}^{\mathrm{T}}\left({\boldsymbol{M}}_0{\boldsymbol{u}}_1^i+{\boldsymbol{M}}_0{\boldsymbol{u}}_0^i+{\boldsymbol{M}}_1{\boldsymbol{u}}_0^i\right)\hfill & \kern0.2em \left( i= j\right)\hfill \\ {}\frac{1}{\lambda_0^i-{\lambda}_0^j}\left[{{\boldsymbol{u}}_0^j}^{\mathrm{T}}\left({\boldsymbol{K}}_1-{\lambda}_1^i{\boldsymbol{M}}_1\right){\boldsymbol{u}}_0^i-\frac{\lambda_1^j}{\lambda_0^i-{\lambda}_0^j}{{\boldsymbol{u}}_0^j}^{\mathrm{T}}\left({\boldsymbol{M}}_0{\boldsymbol{u}}_1^i-{\boldsymbol{M}}_1{\boldsymbol{u}}_0^i\right)\right]\hfill & \left( i\ne j\right)\hfill \end{array}\right..\hfill \end{array} $$
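Under the simplifying assumption M 0 = I (so that numpy’s `eigh` returns M 0-orthonormal eigenvectors), the first-order terms of Eq. (5) can be sketched as follows; the near-singular denominator of C ij is exactly what fails for close eigenvalues. All names here are ours:

```python
import numpy as np

def first_order_terms(K0, K1, M1):
    """First-order eigenvalue perturbations of Eq. (5),
    lambda_1^i = u0_i^T (K1 - lambda_0^i M1) u0_i, assuming M0 = I."""
    lam0, U0 = np.linalg.eigh(K0)
    lam1 = np.array([U0[:, i] @ (K1 - lam0[i] * M1) @ U0[:, i]
                     for i in range(len(lam0))])
    return lam0, lam1

# well-separated toy system with diagonal matrices, so the exact
# first-order shifts are simply the diagonal of K1
K0 = np.diag([1.0, 4.0, 9.0])
K1 = np.diag([0.5, 0.5, 0.5])
M1 = np.zeros((3, 3))
lam0, lam1 = first_order_terms(K0, K1, M1)   # lam1 == [0.5, 0.5, 0.5]
# for C_ij (i != j) the factor 1/(lam0[i] - lam0[j]) appears; when two
# values of lam0 nearly coincide this factor blows up
```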
  (1)

    Truncation error in the perturbation expansions

    Note that in perturbations (5)-(6), when clustered eigenvalues occur, (λ i0  − λ j0 ) becomes extremely small, so C ij is no longer a small coefficient. Thus u i1 is not small compared with u i0 ; and since u i1 enters the expression for λ i2 , λ i2 is no longer small either.

    Additionally, 1/(λ i0  − λ j0 ) and u i1 appear in D ij , so u i2 is also not small compared with u i1 . That is, when clustered natural frequencies arise, the existing perturbation method for distinct eigenvalues, whether first-order or second-order, is probably invalid. Even when the perturbations of the design variables are very small, so that convergence of the perturbation expansions can be guaranteed, the computed result lacks reliability because of the truncation error.

  (2)

    Difficulty in determining the individual eigenvectors corresponding to clustered eigenvalues

    Each eigenvector belonging to a set of clustered eigenvalues is hard to determine precisely, but the eigen-subspace spanned by all of those eigenvectors can be determined precisely. According to algebraic eigenvalue theory, a distinct eigenvalue of a real symmetric matrix is well-conditioned; its eigenvector, however, may be ill-conditioned, and the degree of ill-conditioning depends on how closely spaced the natural frequencies are.

Perturbation analysis for repeated eigenvalues

Assume that λ i0 is a repeated eigenvalue of multiplicity m, belonging to a degenerate system. Accordingly, there are m pairwise orthogonal eigenvectors w i(i = 1, 2, ⋯, m) subject to

$$ {\boldsymbol{K}}_0{\boldsymbol{w}}^i={\lambda}_0^i{\boldsymbol{M}}_0{\boldsymbol{w}}^i, $$
(1)

in which K 0 and M 0 denote the original stiffness and mass matrices of the system with repeated eigenvalues, respectively. Moreover, any linear combination of the w i is also an eigenvector of the repeated eigenvalue λ i0 , which can be expressed as

$$ \begin{array}{l}{\boldsymbol{u}}_0^i={\alpha}_1{\boldsymbol{w}}^1+{\alpha}_2{\boldsymbol{w}}^2+\cdots +{\alpha}_m{\boldsymbol{w}}^m\hfill \\ {}\kern1.62em =\left[{\boldsymbol{w}}^1,{\boldsymbol{w}}^2,\cdots, {\boldsymbol{w}}^m\right]\cdot {\left[{\alpha}_1,{\alpha}_2,\cdots, {\alpha}_m\right]}^{\mathrm{T}}\hfill \\ {}\kern1.62em =\boldsymbol{W}\cdot \boldsymbol{\alpha} \hfill \end{array}, $$
(2)

where α is an undetermined vector.

The structural parameters of the degenerate system change under external perturbations, such as variations in the service environment and differences in raw materials, which finally results in the following variations of the mass and stiffness matrices:

$$ \left\{\begin{array}{l}\boldsymbol{K}\kern0.6em =\kern0.5em {\boldsymbol{K}}_0\kern0.3em +\varepsilon {\boldsymbol{K}}_1\hfill \\ {}\boldsymbol{M}={\boldsymbol{M}}_0+\varepsilon {\boldsymbol{M}}_1\hfill \end{array}\right., $$
(3)

in which ε is a small parameter, the original system being the one corresponding to ε = 0; ε M 1 and ε K 1 denote the first-order perturbation terms of M 0 and K 0, respectively.

Repeated eigenvalues no longer exist in the system after perturbation, which means there are m different eigenvalues λ 1, λ 2, ⋯, λ m corresponding to m different eigenvectors u i(i = 1, 2, ⋯, m). Similarly to the case of a distinct eigenvalue, we expand the eigenvalue λ i and eigenvector u i in ε-series form

$$ {\lambda}^i={\lambda}_0^i+\varepsilon {\lambda}_1^i+{\varepsilon}^2{\lambda}_2^i+\cdots, $$
(4)
$$ {\boldsymbol{u}}^i={\boldsymbol{u}}_0^i+\varepsilon {\boldsymbol{u}}_1^i+{\varepsilon}^2{\boldsymbol{u}}_2^i+\cdots . $$
(5)

By virtue of the perturbation method, (3), (4), and (5) are substituted into Ku = λ Mu, which is then expanded with O(ε 3) terms omitted. Comparing the coefficients of like powers of ε on both sides of the equation, we have

$$ {\boldsymbol{K}}_0{\boldsymbol{u}}_1^i+{\boldsymbol{K}}_1{\boldsymbol{u}}_0^i={\lambda}_0^i{\boldsymbol{M}}_0{\boldsymbol{u}}_1^i+{\lambda}_0^i{\boldsymbol{M}}_1{\boldsymbol{u}}_0^i+{\lambda}_1^i{\boldsymbol{M}}_0{\boldsymbol{u}}_0^i, $$
(6)

Substituting (2) into (6) yields

$$ {\boldsymbol{K}}_0{\boldsymbol{u}}_1^i+{\boldsymbol{K}}_1{\displaystyle \sum_{j=1}^m{\alpha}_j{\boldsymbol{w}}^j={\lambda}_0^i{\boldsymbol{M}}_0{\boldsymbol{u}}_1^i+{\lambda}_0^i{\boldsymbol{M}}_1{\displaystyle \sum_{j=1}^m{\alpha}_j{\boldsymbol{w}}^j+{\lambda}_1^i{\boldsymbol{M}}_0{\displaystyle \sum_{j=1}^m{\alpha}_j}{\boldsymbol{w}}^j}}, $$
(7)

Premultiplying (7) by [w k]T gives

$$ {\left[{\boldsymbol{w}}^k\right]}^{\mathrm{T}}{\boldsymbol{K}}_0{\boldsymbol{u}}_1^i+{\displaystyle \sum_{j=1}^m{\left[{\boldsymbol{w}}^k\right]}^{\mathrm{T}}{\boldsymbol{K}}_1{\boldsymbol{w}}^j{\alpha}_j={\lambda}_0^i{\left[{\boldsymbol{w}}^k\right]}^{\mathrm{T}}{\boldsymbol{M}}_0{\boldsymbol{u}}_1^i+{\lambda}_0^i{\displaystyle \sum_{j=1}^m{\left[{\boldsymbol{w}}^k\right]}^{\mathrm{T}}{\boldsymbol{M}}_1{\boldsymbol{w}}^j{\alpha}_j+{\lambda}_1^i{\displaystyle \sum_{j=1}^m{\left[{\boldsymbol{w}}^k\right]}^{\mathrm{T}}{\boldsymbol{M}}_0{\boldsymbol{w}}^j{\alpha}_j}}}, $$
(8)

taking the following conditions into consideration

$$ {\left[{\boldsymbol{w}}^k\right]}^{\mathrm{T}}{\boldsymbol{K}}_0{\boldsymbol{u}}_1^i={\left[{\boldsymbol{u}}_1^i\right]}^{\mathrm{T}}{\boldsymbol{K}}_0{\boldsymbol{w}}^k={\lambda}_0^i{\left[{\boldsymbol{u}}_1^i\right]}^{\mathrm{T}}{\boldsymbol{M}}_0{\boldsymbol{w}}^k, $$
(9)
$$ {\lambda}_0^i{\left[{\boldsymbol{w}}^k\right]}^{\mathrm{T}}{\boldsymbol{M}}_0{\boldsymbol{u}}_1^i={\lambda}_0^i{\left[{\boldsymbol{u}}_1^i\right]}^{\mathrm{T}}{\boldsymbol{M}}_0{\boldsymbol{w}}^k, $$
(10)

and the orthogonality

$$ {\left[{\boldsymbol{w}}^k\right]}^{\mathrm{T}}{\boldsymbol{M}}_0{\boldsymbol{w}}^j={\delta}_{k j}, $$
(11)

Equation (8) is rewritten as

$$ \begin{array}{ll}{\displaystyle \sum_{j=1}^m\left({\left[{\boldsymbol{w}}^k\right]}^{\mathrm{T}}{\boldsymbol{K}}_1{\boldsymbol{w}}^j-{\lambda}_0^i{\left[{\boldsymbol{w}}^k\right]}^{\mathrm{T}}{\boldsymbol{M}}_1{\boldsymbol{w}}^j\right){\alpha}_j={\lambda}_1^i{\alpha}_j}\hfill & \left( k=1,2,\cdots, m\right)\hfill \end{array}. $$
(12)

Assuming that

$$ \left\{\begin{array}{l}{a}_{k j}={\left[{\boldsymbol{w}}^k\right]}^{\mathrm{T}}{\boldsymbol{M}}_1{\boldsymbol{w}}^j\\ {}{b}_{k j}={\left[{\boldsymbol{w}}^k\right]}^{\mathrm{T}}{\boldsymbol{K}}_1{\boldsymbol{w}}^j\end{array}\right., $$
(13)

then (12) is equivalent to

$$ \begin{array}{ll}{\displaystyle \sum_{j=1}^m\left({b}_{k j}-{\lambda}_0^i{a}_{k j}\right){\alpha}_j={\lambda}_1^i{\alpha}_k,}\hfill & k=1,2,\cdots, m\hfill \end{array}, $$
(14)

the matrix form of (14) can be represented as

$$ \left(\boldsymbol{D}-{\lambda}_1^i{\boldsymbol{I}}^{m\times m}\right)\boldsymbol{\alpha} =0, $$
(15)

where the element of matrix D is subject to

$$ \begin{array}{ll}{d}_{kj}={b}_{kj}-{\lambda}_0^i{a}_{kj},\hfill & \left( k, j=1,2,\cdots, m\right)\hfill \end{array}. $$
(16)

Equation (15) is a standard eigenvalue equation of order m; by solving this eigenvalue problem we acquire the first-order perturbation terms λ j1 (j = 1, 2, ⋯, m), which correspond to the original repeated eigenvalue λ i0 , together with the undetermined coefficient vectors α j .

λ j1 (j = 1, 2, ⋯, m) are arranged in ascending order; if (15) has no repeated eigenvalues, then both λ j1 and α j are unique. For simplicity, we assume that (15) has no repeated eigenvalues.

Solving (15) is equivalent to solving the following equation

$$ \det \left[\boldsymbol{D}-{\lambda}_1^j{\boldsymbol{I}}^{m\times m}\right]=0, $$
(17)

then λ j1  (j = 1, 2, ⋯, m) are obtained; substituting them back into (15) yields the coefficients α j  (j = 1, 2, ⋯, m).

After that, according to (2), we obtain the eigenvectors corresponding to the repeated eigenvalue λ i0 as follows

$$ \begin{array}{ll}{u}_0^j=\boldsymbol{W}{\alpha}_j\hfill & \left( j= i, i+1,\cdots, i+ m-1\right)\hfill \end{array} $$
(18)
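Equations (13)-(18) amount to an m × m reduced eigenproblem. A sketch, assuming symmetric K 1 and M 1 and an M 0-orthonormal basis W (all names are ours):

```python
import numpy as np

def split_repeated_eigenvalue(W, lam0, K1, M1):
    """Reduced eigenproblem (D - lambda_1 I) alpha = 0 of Eq. (15), with
    d_kj = w_k^T K1 w_j - lam0 * w_k^T M1 w_j  (Eqs. (13), (16))."""
    D = W.T @ K1 @ W - lam0 * (W.T @ M1 @ W)
    lam1, alpha = np.linalg.eigh(D)        # ascending order, as in the text
    return lam1, W @ alpha                 # adapted eigenvectors, Eq. (18)

# lam0 = 2 is a two-fold repeated eigenvalue with eigen-subspace span{e1, e2};
# a diagonal K1 splits it into the first-order shifts 0.1 and 0.3
W = np.eye(3)[:, :2]
K1 = np.diag([0.1, 0.3, 0.0])
M1 = np.zeros((3, 3))
lam1, U0 = split_repeated_eigenvalue(W, 2.0, K1, M1)   # lam1 == [0.1, 0.3]
```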

Construction of surrogate model based on PCE method

Essentially, it is the uncertainties of the structural parameters, such as statistical variability of material properties and geometry, errors caused by manufacturing equipment, and variations in usage conditions, that lead to the perturbation M 1 of the mass matrix and the perturbation K 1 of the stiffness matrix, i.e.,

$$ \begin{array}{ll}{\boldsymbol{M}}_1={\boldsymbol{M}}_1\left(\boldsymbol{\xi} \right)={\boldsymbol{M}}_1\left({\xi}_1,{\xi}_2,\cdots, {\xi}_n\right),\hfill & {\boldsymbol{K}}_1={\boldsymbol{K}}_1\left(\boldsymbol{\xi} \right)={\boldsymbol{K}}_1\left({\xi}_1,{\xi}_2,\cdots, {\xi}_n\right)\hfill \end{array}, $$
(1)

where ξ = (ξ 1, ξ 2, ⋯, ξ n ) denotes the continuous random variables in the system. It is worth mentioning that, to avoid ambiguity in the derivation, the superscript q stands for the serial number of the closely-spaced eigenvalues’ first-order perturbation term. The original system is written as

$$ {\lambda}_1^q= f\left({\boldsymbol{M}}_1,{\boldsymbol{K}}_1\right)= f\left({\xi}_1,{\xi}_2,\cdots, {\xi}_n\right). $$
(2)

Let the probability space of the original system be adequately smooth; f(ξ 1, ξ 2, ⋯, ξ n ) cannot be expressed explicitly, but the corresponding responses of the original system can be obtained by numerical approximation.

The surrogate model of the original system is established by the PCE method hereinafter. A homogeneous Hermite polynomial expansion is employed in terms of the Gaussian random process (Dongbin and Em 2003; Field and Grigoriu 2004; Wang and Wang 2015), and the random response Y(ξ) is expressed as

$$ Y\left(\boldsymbol{\xi} \right)={c}_0+{\displaystyle \sum_{i_1=1}^n{c}_{i_1}{H}_1\left({\xi}_{i_1}\right)+{\displaystyle \sum_{i_1=1}^n{\displaystyle \sum_{i_2=1}^{i_1}{c}_{i_1{i}_2}{H}_2\left({\xi}_{i_1},{\xi}_{i_2}\right)}+{\displaystyle \sum_{i_1=1}^n{\displaystyle \sum_{i_2=1}^{i_1}{\displaystyle \sum_{i_3=1}^{i_2}{c}_{i_1{i}_2{i}_3}{H}_3\left({\xi}_{i_1},{\xi}_{i_2},{\xi}_{i_3}\right)+\cdots }}}}}, $$
(3)

in which Y(ξ) is a second-order random variable with finite variance. According to the Cameron-Martin theorem (Cameron and Martin 1947), it can approximate any functional in L 2(C) and converges in the L 2(C) sense, where C is the space of real functions which are continuous on the interval [0, 1] and vanish at 0. Here \( \boldsymbol{c}=\left({c}_0,{c}_{i_1},\cdots, {c}_{i_1{i}_2},\cdots, {c}_{i_1{i}_2{i}_3},\cdots \right) \) is a vector of undetermined coefficients, ξ = (ξ 1, ξ 2, ⋯, ξ n ) denotes the Gaussian random variables, and \( {H}_n\left({\xi}_{i_1},{\xi}_{i_2},\cdots, {\xi}_{i_n}\right) \) stands for the n th order multidimensional Hermite polynomial.

We truncate (3) to a finite number of terms; assuming the number of remaining terms is s, (3) is simplified as

$$ Y\left(\boldsymbol{\xi} \right)\approx {\displaystyle \sum_{j=0}^{s-1}{\overset{\frown }{\boldsymbol{c}}}_j{\varGamma}_j\left(\boldsymbol{\xi} \right)}, $$
(4)

where \( {\overset{\frown }{\boldsymbol{c}}}_j \) stands for the undetermined coefficients, \( {\varGamma}_j\left(\boldsymbol{\xi} \right)={\varGamma}_j\left({\xi}_{i_1},{\xi}_{i_2},\cdots, {\xi}_{i_n}\right) \) denotes the jth order generalized Wiener-Askey polynomial chaos. Simultaneously, there is a one-to-one correspondence between the functions \( {H}_n\left({\xi}_{i_1},{\xi}_{i_2},\cdots, {\xi}_{i_n}\right) \) and Γ j (ξ), and their coefficients \( {c}_0,{c}_{i_1},\cdots, {c}_{i_1{i}_2},\cdots, {c}_{i_1{i}_2{i}_3},\cdots \) and \( {\overset{\frown }{\boldsymbol{c}}}_j \).

For clarity, the n-dimensional Hermite polynomial is shown as

$$ {H}_n\left({\xi}_{i_1},\cdots, {\xi}_{i_n}\right)={e}^{1/2{\boldsymbol{\xi}}^{\mathrm{T}}\boldsymbol{\xi}}{\left(-1\right)}^n\frac{\partial^n}{\partial {\xi}_{i_1}\cdots \partial {\xi}_{i_n}}{e}^{-1/2{\boldsymbol{\xi}}^{\mathrm{T}}\boldsymbol{\xi}}, $$

and one-dimensional Hermite polynomials are demonstrated as

$$ \begin{array}{l}{H}_0=1,\kern0.4em {H}_1\left(\xi \right)=\xi, \kern0.4em {H}_2\left(\xi \right)={\xi}^2-1,\kern0.4em {H}_3\left(\xi \right)={\xi}^3-3\xi, \hfill \\ {}{H}_{i+1}\left(\xi \right)=\xi {H}_i\left(\xi \right)- i{H}_{i-1}\left(\xi \right),\cdots \hfill \end{array}. $$
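The recurrence above generates the one-dimensional Hermite polynomials used throughout; a minimal sketch (the function name is ours):

```python
def hermite(i, xi):
    """Probabilists' Hermite polynomial H_i(xi) via the recurrence
    H_{i+1} = xi*H_i - i*H_{i-1}, with H_0 = 1 and H_1 = xi."""
    if i == 0:
        return 1.0
    h_prev, h = 1.0, xi   # H_0, H_1
    for k in range(1, i):
        h_prev, h = h, xi * h - k * h_prev
    return h

# matches the closed forms listed above
assert hermite(2, 2.0) == 2.0**2 - 1          # H_2(xi) = xi^2 - 1
assert hermite(3, 2.0) == 2.0**3 - 3 * 2.0    # H_3(xi) = xi^3 - 3*xi
```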

The polynomial chaos Γ j (ξ) forms a complete orthogonal basis in the L 2 space of the Gaussian random variables, i.e.,

$$ \left\langle {\varGamma}_i,{\varGamma}_j\right\rangle =\left\langle {\varGamma}_i^2\right\rangle {\delta}_{i j}, $$

where δ ij is the Kronecker Delta operator and 〈⋅, ⋅ 〉 denotes the ensemble average which is also the inner product in the Hilbert space of Gaussian random variables ξ,

$$ \left\langle f\left(\boldsymbol{\xi} \right) g\left(\boldsymbol{\xi} \right)\right\rangle ={\displaystyle \int f\left(\boldsymbol{\xi} \right) g\left(\boldsymbol{\xi} \right) W\left(\boldsymbol{\xi} \right) d\boldsymbol{\xi}}, $$

where the weighting function is

$$ W\left(\boldsymbol{\xi} \right)=\frac{1}{\sqrt{{\left(2\pi \right)}^n}}{e}^{-1/2{\boldsymbol{\xi}}^{\mathrm{T}}\boldsymbol{\xi}}, $$

In this way, a complete, square-integrable set of orthogonal basis functions is established from the n-dimensional Hermite polynomials. Moreover, the polynomials in (3) converge in mean square.

If the surrogate model is formed by a second-order Hermite polynomial chaos expansion, then the random response Y(ξ) can be expressed as

$$ {Y}_2\left(\boldsymbol{\xi} \right)={c}_{0,2}+{\displaystyle \sum_{i=1}^n{c}_{i,2}{\xi}_i+{\displaystyle \sum_{i=1}^n{c}_{i i,2}\left({\xi}_i^2-1\right)+{\displaystyle \sum_{i=1}^{n-1}{\displaystyle \sum_{j> i}^n{c}_{i j,2}{\xi}_i{\xi}_j}}}}, $$
(5)

where n denotes the dimensionality of the random variables, and c 0,2 , c i,2 , c ii,2 , c ij,2 stand for the undetermined coefficients of the Hermite polynomial expansion. From (5), one can deduce the number of undetermined coefficients in the Hermite polynomial expansion of the approximate random response Y 2(ξ) as s = (n + p) !/(n ! p !), in which p stands for the highest order in the polynomial expansion.
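The coefficient count s = (n + p)!/(n! p!) is simply a binomial coefficient; a short check (the function name is ours):

```python
from math import comb

def pce_coefficient_count(n, p):
    """Number of undetermined coefficients s = (n + p)! / (n! p!)
    for an n-variable expansion truncated at total order p."""
    return comb(n + p, p)

# 4 Gaussian variables, second-order expansion:
# 1 constant + 4 linear + 4 squared + 6 cross terms = 15
s = pce_coefficient_count(4, 2)   # -> 15
```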

Note that in (5), the key to constructing the surrogate model formed by the PCE method is the calculation of the undetermined coefficients c 0,2 , c i,2 , c ii,2 , c ij,2, ⋯. Concerning the PCE surrogate model consisting of n-dimensional continuous random variables, the corresponding response functions are generated at specified groups of collocation points, and each collocation point group corresponds to a sample \( \tilde{\boldsymbol{\xi}}=\left({\tilde{\xi}}_1,{\tilde{\xi}}_2,\cdots, {\tilde{\xi}}_n\right) \) of the continuous random variable ξ = (ξ 1, ξ 2, ⋯, ξ n ).

As mentioned before, all the continuous random variables in this paper follow a Gaussian distribution, and Hermite polynomials are chosen as the orthogonal basis. It is routine that if the highest order of the Hermite polynomial is p, the roots of the (p + 1) th order Hermite polynomial are adopted as collocation points (Isukapalli 1999; Villadsen and Michelsen 1978; Sudret 2008; Berveiller et al. 2006; Tatang 1995). After preparing all the collocation points, one substitutes them into the original system one by one. The corresponding response values are generated sequentially, and the following system is established

$$ \left[\begin{array}{cccc}\hfill \begin{array}{c}\hfill {\varGamma}_0\left({\boldsymbol{\xi}}_0\right)\hfill \\ {}\hfill {\varGamma}_0\left({\boldsymbol{\xi}}_1\right)\hfill \\ {}\hfill \vdots \hfill \\ {}\hfill {\varGamma}_0\left({\boldsymbol{\xi}}_N\right)\hfill \end{array}\hfill & \hfill \begin{array}{c}\hfill {\varGamma}_1\left({\boldsymbol{\xi}}_0\right)\hfill \\ {}\hfill {\varGamma}_1\left({\boldsymbol{\xi}}_1\right)\hfill \\ {}\hfill \vdots \hfill \\ {}\hfill {\varGamma}_1\left({\boldsymbol{\xi}}_N\right)\hfill \end{array}\hfill & \hfill \begin{array}{c}\hfill \cdots \hfill \\ {}\hfill \cdots \hfill \\ {}\hfill \cdots \hfill \\ {}\hfill \cdots \hfill \end{array}\hfill & \hfill \begin{array}{c}\hfill {\varGamma}_{s-1}\left({\boldsymbol{\xi}}_0\right)\hfill \\ {}\hfill {\varGamma}_{s-1}\left({\boldsymbol{\xi}}_1\right)\hfill \\ {}\hfill \vdots \hfill \\ {}\hfill {\varGamma}_{s-1}\left({\boldsymbol{\xi}}_N\right)\hfill \end{array}\hfill \end{array}\right]\left[\begin{array}{c}\hfill {c}_0\hfill \\ {}\hfill {c}_1\hfill \\ {}\hfill \vdots \hfill \\ {}\hfill {c}_{s-1}\hfill \end{array}\right]=\left[\begin{array}{c}\hfill f\left({\boldsymbol{\xi}}_0\right)\hfill \\ {}\hfill f\left({\boldsymbol{\xi}}_1\right)\hfill \\ {}\hfill \vdots \hfill \\ {}\hfill f\left({\boldsymbol{\xi}}_{\mathrm{N}}\right)\hfill \end{array}\right], $$
(6)

where ξ 0, ξ 1, ⋯, ξ N denote the sampling point groups, N stands for the number of sampling point groups, and s is the number of undetermined coefficients. By means of regression analysis based on the least squares method, the undetermined coefficients of the polynomial chaos expansion can be calculated from (6).
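A minimal sketch of the regression step in Eq. (6), using a one-dimensional second-order Hermite basis (names ours); since more collocation points than coefficients are used, the system is solved in the least-squares sense:

```python
import numpy as np

def fit_pce_coefficients(Gamma, f_vals):
    """Least-squares solution of the collocation system of Eq. (6):
    Gamma[k, j] = Gamma_j(xi_k), f_vals[k] = f(xi_k)."""
    c, *_ = np.linalg.lstsq(Gamma, f_vals, rcond=None)
    return c

# 1-D example: the "original system" is quadratic in xi, so the
# second-order Hermite basis {1, xi, xi^2 - 1} reproduces it exactly
xi = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])          # 5 collocation points
Gamma = np.column_stack([np.ones_like(xi), xi, xi**2 - 1.0])
f_vals = 2.0 + 3.0 * xi + 0.5 * (xi**2 - 1.0)
c = fit_pce_coefficients(Gamma, f_vals)             # -> [2.0, 3.0, 0.5]
```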

Construction of M 1, K 1 in section 3

According to the stochastic perturbation method, which is truncated to first-order perturbation term in this paper, the system response g(x) = g(x 1, x 2, ⋯, x n ) can be expressed as

$$ g\left(\boldsymbol{x}\right)= g\left(\overline{\boldsymbol{x}}\right)+{g}_1\left(\boldsymbol{x}\right), $$
(1)

viz. g(x 1, x 2, ⋯, x n ) = g(μ 1, μ 2, ⋯, μ n ) + g 1(x 1, x 2, ⋯, x n ), where x = [x 1, x 2, ⋯, x n ] denotes the continuous random variables in the structural system, \( \overline{\boldsymbol{x}}=\left[{\mu}_1,{\mu}_2,\cdots, {\mu}_n\right] \) stands for the mean values of the continuous random variables, and g 1(x) is the first-order perturbation term of the system response.

On the other hand, in light of the high dimensional model representation (HDMR), the first-order approximation of the system response can also be written as

$$ g\left(\boldsymbol{x}\right)\equiv g\left({x}_1,{x}_2,\cdots, {x}_n\right)={\displaystyle \sum_{i=1}^n g\left({\mu}_1,\cdots, {\mu}_{i-1},{x}_i,{\mu}_{i+1},\cdots, {\mu}_n\right)-\left( n-1\right) g\left(\overline{\boldsymbol{x}}\right)}, $$
(2)

Taking (1) and (2) into consideration, the first-order perturbation term g 1(x) is obtained as

$$ \begin{array}{l}{g}_1\left(\boldsymbol{x}\right)={\displaystyle \sum_{i=1}^n g\left({\mu}_1,\cdots, {\mu}_{i-1},{x}_i,{\mu}_{i+1},\cdots, {\mu}_n\right)- ng\left(\overline{\boldsymbol{x}}\right)}\hfill \\ {}={\displaystyle \sum_{i=1}^n\left[ g\left({\mu}_1,\cdots, {\mu}_{i-1},{x}_i,{\mu}_{i+1},\cdots, {\mu}_n\right)- g\left(\overline{\boldsymbol{x}}\right)\right]}\hfill \end{array}, $$
(3)

On the basis of the preceding derivation, the first-order perturbation terms of the mass and stiffness matrices can be expressed as follows:

$$ \left\{\begin{array}{l}\kern0.1em {\boldsymbol{M}}_1={\boldsymbol{M}}_{\beta 1}={\displaystyle \sum_{i=1}^n\left[\boldsymbol{M}\left({\mu}_1,{\mu}_2,\cdots, {\mu}_{i-1},{\xi}_{\beta i},{\mu}_{i+1},\cdots {\mu}_n\right)-\boldsymbol{M}\left({\mu}_1,{\mu}_2,\cdots, {\mu}_{i-1},{\mu}_i,{\mu}_{i+1},\cdots, {\mu}_n\right)\right]\kern0.1em }\\ {}\kern0.1em {\boldsymbol{K}}_1={\boldsymbol{K}}_{\beta 1}={\displaystyle \sum_{i=1}^n\left[\boldsymbol{K}\left({\mu}_1,{\mu}_2,\cdots, {\mu}_{i-1},{\xi}_{\beta i},{\mu}_{i+1},\cdots {\mu}_n\right)-\boldsymbol{K}\left({\mu}_1,{\mu}_2,\cdots, {\mu}_{i-1},{\mu}_i,{\mu}_{i+1},\cdots, {\mu}_n\right)\right]\kern0.1em }\end{array}\right.. $$
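The first-order HDMR term of Eq. (3) only requires one-variable "cuts" through the mean point; a sketch with a hypothetical response function (all names ours):

```python
import numpy as np

def hdmr_first_order(g, mu, x):
    """First-order perturbation term g_1(x) of Eq. (3):
    sum_i [ g(mu_1,...,x_i,...,mu_n) - g(mu) ]."""
    mu = np.asarray(mu, dtype=float)
    g_mean = g(mu)
    total = 0.0
    for i, xi in enumerate(x):
        cut = mu.copy()
        cut[i] = xi          # vary only the i-th variable
        total += g(cut) - g_mean
    return total

# for an additive response the first-order HDMR decomposition is exact:
g = lambda z: z[0] + 2.0 * z[1]
g1 = hdmr_first_order(g, [1.0, 1.0], [1.5, 2.0])   # -> 0.5 + 2.0 = 2.5
# check Eq. (1): g(x) = g(mean) + g1, i.e. 5.5 = 3.0 + 2.5
```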


About this article


Cite this article

Qiu, H., Qiu, Z. A modified stochastic perturbation algorithm for closely-spaced eigenvalues problems based on surrogate model. Struct Multidisc Optim 56, 249–270 (2017). https://doi.org/10.1007/s00158-017-1660-1
