
Quadratic model updating with gyroscopic structure from partial eigendata


Abstract

The quadratic eigenvalue model updating problem, which aims to match observed spectral information subject to feasibility constraints, arises in many engineering areas. In this paper, we consider a damped gyroscopic model updating problem (GMUP) of constructing five n-by-n real matrices M, C, K, G and N such that they are closest to the given matrices and the quadratic pencil Q(λ):=λ²M+λ(C+G)+K+N possesses the measured partial eigendata. In practice, M, C and K, which represent the mass, damping and stiffness matrices, are symmetric (with M and K positive definite), while G and N, which represent the gyroscopic and circulatory matrices, are skew-symmetric. Under mild assumptions, we show that the Lagrangian dual problem of GMUP can be solved by a quadratically convergent inexact smoothing Newton method. Numerical examples are given to demonstrate the high efficiency of our method.


Notes

  1. A pencil is called regular if there is at least one value of λ such that the determinant of Q(λ) is nonzero.

  2. YALMIP is a free MATLAB® toolbox for modeling and solving optimization problems. See details at http://users.isy.liu.se/johanl/yalmip/.

References

  • Bai Z, Chu D, Sun D (2007) A dual optimization approach to inverse quadratic eigenvalue problems with partial eigenstructure. SIAM J Sci Comput 29:2531–2561

  • Baruch M (1978) Optimization procedure to correct stiffness and flexibility matrices using vibration data. AIAA J 16:1208–1210

  • Bonnans J, Shapiro A (2000) Perturbation analysis of optimization problems. Springer, New York

  • Borwein J, Lewis A (1992) Partially finite convex programming, Part II: explicit lattice models. Math Program 57:48–83

  • Brinkmeier M, Nackenhorst U (2008) An approach for large-scale gyroscopic eigenvalue problems with application to high-frequency response of rolling tires. Comput Mech 41:503–515

  • Carvalho J, Datta B, Lin W, Wang C (2006) Symmetry preserving eigenvalue embedding in finite-element model updating of vibrating structures. J Sound Vib 290:839–864

  • Chan Z, Sun D (2008) Constraint nondegeneracy, strong regularity and nonsingularity in semidefinite programming. SIAM J Optim 19:370–396

  • Chu D, Chu M, Lin W (2009) Quadratic model updating with symmetry, positive definiteness, and no spill-over. SIAM J Matrix Anal Appl 31:546–564

  • Chu M, Buono ND (2008) Total decoupling of a general quadratic pencil, Part I: theory. J Sound Vib 309:96–111

  • Facchinei F, Pang JS (2003) Finite-dimensional variational inequalities and complementarity problems, vols I and II. Springer, New York

  • Friswell M, Inman D, Pilkey D (1998) The direct updating of damping and stiffness matrices. AIAA J 36:491–493

  • Friswell M, Mottershead J (1995) Finite element model updating in structural dynamics. Solid mechanics and its applications, vol 38. Kluwer, Dordrecht

  • Gao Y, Sun D (2009) Calibrating least squares semidefinite programming with equality and inequality constraints. SIAM J Matrix Anal Appl 31:1432–1457

  • Grant M, Boyd S (2011) CVX: MATLAB software for disciplined convex programming, version 1.21. http://cvxr.com/cvx

  • Jia Z, Wei M (2011) A real-valued spectral decomposition of the undamped gyroscopic system with applications. SIAM J Matrix Anal Appl 32:584–604

  • Kuo Y, Lin W, Xu S (2006) New methods for finite element model updating problems. AIAA J 44:1310–1316

  • Lancaster P (1999) Strongly stable gyroscopic systems. Electron J Linear Algebra 5:53–66

  • Lancaster P (2008) Model-updating for self-adjoint quadratic eigenvalue problems. Linear Algebra Appl 428:2778–2790

  • Lin M, Dong B, Chu M (2010) Semi-definite programming techniques for structured quadratic inverse eigenvalue problems. Numer Algorithms 53:419–437

  • Löwner K (1934) Über monotone Matrixfunktionen. Math Z 38:177–216

  • Mottershead J, Friswell M (1993) Model updating in structural dynamics: a survey. J Sound Vib 167:347–375

  • Qian J, Lin W (2007) A numerical method for quadratic eigenvalue problems of gyroscopic systems. J Sound Vib 306:284–296

  • Rockafellar R (1974) Conjugate duality and optimization. SIAM, Philadelphia

  • Sturm J (1999) Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optim Methods Softw 11:625–633

  • Sun J, Sun D, Qi L (2004) A squared smoothing Newton method for nonsmooth matrix equations and its applications in semidefinite optimization problems. SIAM J Optim 14:783–806

  • Tisseur F, Meerbergen K (2001) The quadratic eigenvalue problem. SIAM Rev 43:235–286

  • Toh K, Tütüncü R, Todd M (2003) Solving semidefinite-quadratic-linear programs using SDPT3. Math Program 95:189–217

  • van der Vorst H (1992) Bi-CGSTAB: a fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems. SIAM J Sci Stat Comput 13:631–644

  • Wei F (1990) Mass and stiffness interaction effects in analytical model modification. AIAA J 28:1686–1688


Acknowledgement

The authors would like to thank the associate editor and the reviewers for their valuable comments.

Author information

Corresponding author

Correspondence to Xiantao Xiao.

Additional information

X. Xiao was partially supported by the TianYuan Special Funds of the National Natural Science Foundation of China (Grant No. 11026166) and the Fundamental Research Funds for the Central Universities. L. Zhang was supported in part by the National Natural Science Foundation of China (grant No. 11071029) and the Fundamental Research Funds for the Central Universities.

Appendices

Appendix A: Basic duality theory

Let \(\mathcal{X}\) and \(\mathcal{Y}\) be two real Hilbert spaces, each equipped with a scalar product 〈⋅,⋅〉 and its induced norm ∥⋅∥. Let \(\mathcal{A}:\mathcal{X}\rightarrow\mathcal{Y}\) be a linear operator. Consider the following constrained best approximation problem,

$$ \begin{array}{@{}c@{\quad}c@{}}\textrm{min}& \displaystyle\frac{1}{2}\|x-c\|^2\\[7pt]\textrm{s.t.}& \mathcal{A}x=0,\\[5pt]&x\in\mathcal{Q},\end{array} $$
(29)

where \(\mathcal{Q}\) is a closed convex cone in \(\mathcal{X}\). The Lagrangian dual (see, e.g., Borwein and Lewis 1992 or Bonnans and Shapiro 2000) of the best approximation problem (29) takes the form

$$ \begin{array} {@{}l@{\quad}l@{}} \textrm{max}& -\displaystyle\frac{1}{2}\bigl\|c+\mathcal{A}^*y\bigr\|^2 +\displaystyle \frac{1}{2}\bigl\|c+\mathcal{A}^*y-\varPi _{\mathcal{Q}}\bigl(c+\mathcal {A}^*y\bigr)\bigr\|^2+ \displaystyle\frac{1}{2}\|c\|^2\\[7pt]\textrm{s.t.}&y\in\mathcal{Y}, \end{array} $$
(30)

where \(\mathcal{A}^{*}:\mathcal{Y}\rightarrow\mathcal{X}\) is the adjoint of \(\mathcal{A}\), and for any \(x\in\mathcal{X}\), \(\varPi _{\mathcal{Q}}(x)\) is the metric projection of x onto \(\mathcal{Q}\); i.e., \(\varPi _{\mathcal{Q}}(x)\) is the unique optimal solution to

$$\begin{array}{@{}l@{\quad}l@{}}\textrm{min}& \displaystyle\frac{1}{2}\|u-x\|^2\\[7pt]\textrm{s.t.}&u\in\mathcal{Q}.\end{array} $$

Define \(\theta:\mathcal{Y}\rightarrow\mathbb{R}\) by

$$\theta(y):=\frac{1}{2}\bigl\|c+\mathcal{A}^*y\bigr\|^2 -\frac{1}{2}\bigl\|c+\mathcal{A}^*y-\varPi _{\mathcal{Q}}\bigl(c+\mathcal {A}^*y\bigr)\bigr\|^2- \frac{1}{2}\|c\|^2.$$

Since \(\mathcal{Q}\) is a closed convex cone, the function θ takes the following form,

$$ \theta(y):=\frac{1}{2}\bigl\|\varPi _{\mathcal{Q}}\bigl(c+\mathcal {A}^*y\bigr)\bigr\|^2- \frac{1}{2}\|c\|^2.$$
(31)

Note that θ(⋅) is convex (Borwein and Lewis 1992) and continuously differentiable (but not twice continuously differentiable), with

$$\nabla\theta(y)=\mathcal{A}\varPi _{\mathcal{Q}}\bigl(c+\mathcal{A}^*y\bigr),\quad y\in\mathcal{Y}.$$

Then the dual problem (30) becomes a smooth convex optimization problem with a simple constraint:

$$ \begin{array}{@{}l@{\quad}l@{}}\textrm{min}& \theta(y)\\[5pt]\textrm{s.t.}&y\in\mathcal{Y}.\end{array} $$
(32)
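
To make these objects concrete, the following Python/NumPy sketch (an illustration added here, not code from the paper) evaluates θ(y) in (31) and its gradient for a toy instance in which \(\mathcal{X}=\mathcal{S}^n\), \(\mathcal{Q}=\mathcal{S}^n_+\) and \(\mathcal{A}X=(\langle A_1,X\rangle,\ldots,\langle A_m,X\rangle)\); the matrices A_i and the data c are randomly generated purely for the example.

```python
import numpy as np

def proj_psd(X):
    """Metric projection of a symmetric matrix onto the cone S^n_+."""
    lam, P = np.linalg.eigh((X + X.T) / 2)
    return (P * np.maximum(lam, 0.0)) @ P.T

def make_instance(n=5, m=3, seed=0):
    """Random toy data: c in S^n and symmetric A_1, ..., A_m defining A."""
    rng = np.random.default_rng(seed)
    sym = lambda B: (B + B.T) / 2
    c = sym(rng.standard_normal((n, n)))
    As = [sym(rng.standard_normal((n, n))) for _ in range(m)]
    return c, As

def A_op(As, X):                      # A : S^n -> R^m
    return np.array([np.sum(Ai * X) for Ai in As])

def A_adj(As, y):                     # A^* : R^m -> S^n
    return sum(yi * Ai for yi, Ai in zip(y, As))

def theta(c, As, y):
    """theta(y) = 0.5*||Pi_Q(c + A^* y)||^2 - 0.5*||c||^2, cf. (31)."""
    W = proj_psd(c + A_adj(As, y))
    return 0.5 * np.sum(W * W) - 0.5 * np.sum(c * c)

def grad_theta(c, As, y):
    """grad theta(y) = A Pi_Q(c + A^* y)."""
    return A_op(As, proj_psd(c + A_adj(As, y)))
```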

In order to apply a dual based optimization approach to solve problem (29), we need the following generalized Slater condition to hold:

$$ \left \{ \begin{array}{@{}l@{}}\mathcal{A}:\mathcal{X}\rightarrow\mathcal{Y}\textrm{ is onto},\\\exists\bar{x}\in\mathcal{X}\textrm{ such that }\mathcal{A}\bar {x}=0,\bar{x}\in\mathrm{int}(\mathcal{Q}),\end{array} \right .$$
(33)

where “int” denotes the topological interior of a given set. The following proposition is a well known result in the conventional duality theory for convex programming (Rockafellar 1974, Theorems 17 and 18).

Proposition 2

Under the generalized Slater condition (33), there exists at least one \(y^{*}\in\mathcal{Y}\) that solves the dual problem (30), and the unique solution to the original problem (29) is given by

$$x^*=\varPi _{\mathcal{Q}}\bigl(c+\mathcal{A}^*y^*\bigr).$$

Furthermore, for every real number τ, the level set \(\{y\in\mathcal {Y}:\theta(y)\leq\tau\}\) is closed, bounded, and convex.

From Proposition 2, as long as the generalized Slater condition (33) holds, one may use any gradient-based optimization method to find an optimal solution to the convex optimization problem (30) and thereby solve problem (29). Since \(\varPi _{\mathcal{Q}}(\cdot)\) is globally Lipschitz continuous but in general not continuously differentiable, ∇θ(⋅) is globally Lipschitz continuous yet may fail to be differentiable, so the classical Newton method cannot be applied to (30) directly. However, the semismooth Newton method or the smoothing Newton method may be applicable if the function θ(⋅) is semismoothly differentiable.
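
Continuing the toy instance above, here is a minimal sketch of this dual approach: plain gradient descent on (32) with a conservative step size 1/L, where L is a crude upper bound on the Lipschitz constant of ∇θ, followed by the primal recovery \(x^*=\varPi _{\mathcal{Q}}(c+\mathcal{A}^*y^*)\) of Proposition 2. This is only a baseline illustration under the assumption that the random data satisfy (33); it is not the inexact smoothing Newton method developed in the paper.

```python
def solve_dual_by_gradient(c, As, tol=1e-6, max_iter=50000):
    """Plain gradient descent on the smooth convex dual problem (32)."""
    L = sum(np.sum(Ai * Ai) for Ai in As)   # upper bound on ||A||^2, hence on the Lipschitz constant of grad(theta)
    y = np.zeros(len(As))
    for _ in range(max_iter):
        g = grad_theta(c, As, y)
        if np.linalg.norm(g) <= tol:
            break
        y -= g / L
    return y

c, As = make_instance()
y_star = solve_dual_by_gradient(c, As)
x_star = proj_psd(c + A_adj(As, y_star))          # primal recovery (Proposition 2)
print("feasibility residual ||A(x*)||:", np.linalg.norm(A_op(As, x_star)))
print("smallest eigenvalue of x*:", np.linalg.eigvalsh(x_star).min())
```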

Appendix B: Huber smoothing function

Suppose that the matrix \(X\in\mathcal{S}^{n}\) has the spectral decomposition

$$ X=P\textrm{diag}(\lambda_1,\ldots, \lambda_n)P^T,$$
(34)

where \(\lambda_1\geq\cdots\geq\lambda_n\) are the eigenvalues of X and P is a corresponding orthogonal matrix of eigenvectors. Then,

$$\varPi _{\mathcal{S}_+^n}(X)=P\textrm{diag}\bigl(\textrm{max}(0,\lambda_1), \ldots ,\textrm{max}(0,\lambda_n)\bigr)P^T.$$

Define three index sets

$$\alpha:=\{i|\lambda_i>0\},\qquad\beta:=\{i|\lambda_i=0\},\qquad\gamma:=\{ i|\lambda_i<0\}.$$

Write \(P=[P_{\alpha}\ P_{\beta}\ P_{\gamma}]\) with \(P_{\alpha}\), \(P_{\beta}\) and \(P_{\gamma}\) containing the columns of P indexed by α, β and γ, respectively. Let ϕ:ℝ×ℝ→ℝ be the following Huber smoothing function:

$$\phi(\varepsilon,t)=\left \{ \begin{array}{@{}l@{\quad}l@{}}t,&\textrm{if }t\geq\frac{|\varepsilon|}{2},\\[4pt]\frac{1}{2|\varepsilon|}(t+\frac{|\varepsilon|}{2})^2,&\textrm{if }|t|<\frac{|\varepsilon|}{2},\\[4pt]0,&\textrm{if }t\leq-\frac{|\varepsilon|}{2}.\end{array} \right .\quad(\varepsilon,t)\in\mathbb{R}\times\mathbb{R}.$$
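
As a concrete reference, the Huber smoothing function ϕ transcribes directly into code; the short Python sketch below (added for illustration) also recovers ϕ(0,t)=max(0,t), since the middle branch is never reached when ε=0.

```python
def phi(eps, t):
    """Huber smoothing of t -> max(0, t); phi(0, t) equals max(0, t)."""
    a = abs(eps)
    if t >= a / 2:
        return float(t)
    if t <= -a / 2:
        return 0.0
    return (t + a / 2) ** 2 / (2 * a)   # quadratic transition, only reached when a > 0
```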

For any ε∈ℝ, let

$$ \varPhi (\varepsilon,X):=P\textrm{diag}\bigl(\phi(\varepsilon,\lambda_1),\ldots ,\phi(\varepsilon,\lambda_n)\bigr)P^T.$$
(35)
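
Its matrix counterpart Φ(ε,X) in (35) simply applies ϕ(ε,⋅) to the eigenvalues of X. A short sketch (reusing phi and numpy from the snippets above, with a randomly generated X for illustration), together with the sanity check \(\varPhi (0,X)=\varPi _{\mathcal{S}_{+}^{n}}(X)\) noted below:

```python
def Phi(eps, X):
    """Phi(eps, X) = P diag(phi(eps, lam_1), ..., phi(eps, lam_n)) P^T, cf. (35)."""
    lam, P = np.linalg.eigh((X + X.T) / 2)
    return (P * np.array([phi(eps, t) for t in lam])) @ P.T

# sanity check: Phi(0, X) coincides with the projection onto S^n_+
rng = np.random.default_rng(1)
X = rng.standard_normal((4, 4)); X = (X + X.T) / 2
lam, P = np.linalg.eigh(X)
assert np.allclose(Phi(0.0, X), (P * np.maximum(lam, 0.0)) @ P.T)
```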

Note that ϕ(0,t)=max(0,t), and similarly \(\varPhi (0,X)=\varPi _{\mathcal{S}_{+}^{n}}(X)\). From Löwner (1934), we have that if β=∅ or ε≠0,

$$\varPhi '_{\varepsilon}(\varepsilon,X)=P\textrm{diag}\bigl(\phi_{\varepsilon }'(\varepsilon,\lambda_1),\ldots,\phi'_{\varepsilon}(\varepsilon,\lambda_n)\bigr)P^T,$$

and

$$\varPhi '_{X}(\varepsilon,X) (H)=P\bigl[\varOmega (\varepsilon,\lambda)\circ \bigl(P^THP\bigr)\bigr]P^T,\quad\forall H\in \mathcal{S}^n,$$

where “∘” denotes the Hadamard product, \(\lambda=(\lambda_1,\ldots,\lambda_n)^T\), and the matrix Ω(ε,λ) is given by

$$\bigl[\varOmega (\varepsilon,\lambda)\bigr]_{ij}=\left \{ \begin{array}{@{}l@{\quad}l@{}}\frac{\phi(\varepsilon,\lambda_i)-\phi(\varepsilon,\lambda_j)}{\lambda_i-\lambda_j}\in[0,1],\quad &\textrm{if }\lambda_i\neq\lambda_j,\\\phi_{\lambda_i}'(\varepsilon,\lambda_i)\in[0,1],&\textrm{if }\lambda_i=\lambda_j,\end{array} \right .\quad i,j=1,\ldots,n.$$

Thus if β=∅ or ε≠0, Φ(⋅,⋅) is continuously differentiable around \((\varepsilon,X)\in\mathbb{R}\times\mathcal {S}^{n}\). Furthermore, Φ(⋅,⋅) is globally Lipschitz continuous and strongly semismooth at any \((0,X)\in\mathbb{R}\times\mathcal{S}^{n}\), which can be easily proved as in Sun et al. (2004).
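
These derivative formulas also transcribe directly. The sketch below (again reusing phi, Phi and numpy from the earlier snippets, and meaningful when ε≠0 or β=∅) assembles Ω(ε,λ), applies \(\varPhi '_{X}(\varepsilon,X)\) to a direction H, and compares the result with a central finite difference as a quick consistency check; the test matrices are random and purely illustrative.

```python
def dphi_dt(eps, t):
    """Derivative of phi(eps, .) with respect to its second argument."""
    a = abs(eps)
    if t >= a / 2:
        return 1.0
    if t <= -a / 2:
        return 0.0
    return (t + a / 2) / a

def Omega(eps, lam):
    """Matrix of first divided differences of phi(eps, .) at the eigenvalues lam."""
    n = len(lam)
    W = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            if lam[i] != lam[j]:
                W[i, j] = (phi(eps, lam[i]) - phi(eps, lam[j])) / (lam[i] - lam[j])
            else:
                W[i, j] = dphi_dt(eps, lam[i])
    return W

def dPhi_dX(eps, X, H):
    """Directional derivative Phi'_X(eps, X)(H) = P [Omega(eps, lam) o (P^T H P)] P^T."""
    lam, P = np.linalg.eigh((X + X.T) / 2)
    return P @ (Omega(eps, lam) * (P.T @ H @ P)) @ P.T

# quick finite-difference check (eps != 0, so Phi is continuously differentiable at X)
rng = np.random.default_rng(2)
X = rng.standard_normal((4, 4)); X = (X + X.T) / 2
H = rng.standard_normal((4, 4)); H = (H + H.T) / 2
eps, h = 0.3, 1e-6
fd = (Phi(eps, X + h * H) - Phi(eps, X - h * H)) / (2 * h)
print("max deviation from finite differences:", np.max(np.abs(fd - dPhi_dX(eps, X, H))))
```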

Appendix C: Proof of Proposition 1

Proof

It is easy to see that if (Δε,ΔY,ΔZ)=0, then E′(ε,Y,Z)(Δε,ΔY,ΔZ)=0. Thus we only need to show that E′(ε,Y,Z)(Δε,ΔY,ΔZ)=0 implies (Δε,ΔY,ΔZ)=0.

From the definition of E, E′(ε,Y,Z)(Δε,ΔY,ΔZ)=0 is equivalent to

$$\left \{ \begin{array}{@{}l@{}}\varDelta \varepsilon=0,\\\varPsi '(\varepsilon,Y,Z)(\varDelta \varepsilon,\varDelta Y,\varDelta Z)=0,\end{array} \right .$$

which implies

$$\mathcal{A}\bigl(\varPhi _{X}'(\varepsilon,\hat{M})(M_{\varDelta }), C_{\varDelta },\varPhi _{X}'(\varepsilon,\hat{K}) (K_{\varDelta }),G_{\varDelta },N_{\varDelta }\bigr)=0,$$

where

$$\begin{cases}\hat{M}:=\varPhi (\varepsilon,M_0+\mathcal{A}_1^*(Y,Z)),\\[4pt]\hat{K}:=\varPhi (\varepsilon,K_0+\mathcal{A}_3^*(Y,Z)),\\[4pt]M_{\varDelta }:=\mathcal{A}_1^*(\varDelta Y, \varDelta Z),\\[4pt]C_{\varDelta }:=\mathcal{A}_2^*(\varDelta Y, \varDelta Z),\\[4pt]K_{\varDelta }:=\mathcal{A}_3^*(\varDelta Y, \varDelta Z),\\[4pt]G_{\varDelta }:=\mathcal{A}_4^*(\varDelta Y, \varDelta Z),\\[4pt]N_{\varDelta }:=\mathcal{A}_5^*(\varDelta Y, \varDelta Z).\end{cases} $$

Next, taking the inner product of this identity with (ΔY,ΔZ) and using \(\mathcal{A}^*(\varDelta Y,\varDelta Z)=(M_{\varDelta },C_{\varDelta },K_{\varDelta },G_{\varDelta },N_{\varDelta })\), we obtain

$$0=\bigl\langle M_{\varDelta },\varPhi _{X}'(\varepsilon,\hat{M}) (M_{\varDelta })\bigr\rangle+\langle C_{\varDelta },C_{\varDelta }\rangle+\bigl\langle K_{\varDelta },\varPhi _{X}'(\varepsilon,\hat{K}) (K_{\varDelta })\bigr\rangle+\langle G_{\varDelta },G_{\varDelta }\rangle+\langle N_{\varDelta },N_{\varDelta }\rangle.$$

Since for any \(H\in\mathcal{S}^{n}\), both 〈H,H〉 and \(\langle H,\varPhi _{X}'(\varepsilon,X)(H)\rangle\) are nonnegative, we obtain

$$\left \{ \begin{array}{@{}l@{}}\langle M_{\varDelta },\varPhi _{X}'(\varepsilon,\hat{M})(M_{\varDelta })\rangle =0,\\[4pt]\langle K_{\varDelta },\varPhi _{X}'(\varepsilon,\hat{K})(K_{\varDelta })\rangle =0,\\[4pt]C_{\varDelta }=G_{\varDelta }=N_{\varDelta }=0.\end{array} \right .$$

From \(N_{\varDelta }=0\), we get

$$\mathcal{A}_{5}^*(\varDelta Y,\varDelta Z)=\frac{1}{2} \left[\begin{array}{@{}c@{\quad}c@{}}\varDelta Y-\varDelta Y^T&-\varDelta Z\\\varDelta Z^T&0\end{array} \right]=0.$$

Thus ΔZ=0 and \(\varDelta Y=\varDelta Y^{T}\). From \(C_{\varDelta }=0\), we get

$$\mathcal{A}_{2}^*(\varDelta Y,\varDelta Z)=\frac{1}{2} \left[\begin{array}{@{}c@{\quad}c@{}}\varDelta YS^T+S\varDelta Y^T&S\varDelta Z\\\varDelta Z^TS^T&0\end{array} \right]=0.$$

Similarly, from \(G_{\varDelta }=0\), we have

$$\mathcal{A}_{4}^*(\varDelta Y,\varDelta Z)=\frac{1}{2} \left[\begin{array}{@{}c@{\quad}c@{}}\varDelta YS^T-S\varDelta Y^T&-S\varDelta Z\\\varDelta Z^TS^T&0\end{array} \right]=0.$$

Subtracting the last equation from the previous one gives \(S\varDelta Y^{T}=0\); since \(\varDelta Y=\varDelta Y^{T}\), it follows that SΔY=0. Together with the fact that \(S=R\varLambda R^{-1}\) is nonsingular, we obtain ΔY=0. The proof is complete. □


About this article

Cite this article

Xiao, X., Gu, J. & Zhang, L. Quadratic model updating with gyroscopic structure from partial eigendata. Optim Eng 14, 431–455 (2013). https://doi.org/10.1007/s11081-012-9188-0

