
Gaussian mixture model for robust design optimization of planar steel frames

  • Research Paper
  • Published in: Structural and Multidisciplinary Optimization

A Correction to this article was published on 04 December 2020


Abstract

A new method is presented for an application of the Gaussian mixture model (GMM) to a multi-objective robust design optimization (RDO) of planar steel frame structures under aleatory (stochastic) uncertainty in material properties, external loads, and discrete design variables. Uncertainty in the discrete design variables is modeled in the wide range between the smallest and largest values in the catalog of the cross-sectional areas. A weighted sum of Gaussians is statistically trained based on the sampled training data to capture an underlying joint probability distribution function (PDF) of random input variables and the corresponding structural response. A simple regression function for predicting the structural response can be found by extracting the information from a conditional PDF, which is directly derived from the captured joint PDF. A multi-objective RDO problem is formulated with three objective functions, namely, the total mass of the structure, and the mean and variance values of the maximum inter-story drift under some constraints on design strength and serviceability requirements. The optimization problem is solved using a multi-objective genetic algorithm utilizing the trained GMM for calculating the statistical values of objective and constraint functions to obtain Pareto-optimal solutions. Since the three objective functions are highly conflicting, the best trade-off solution is desired and found from the obtained Pareto-optimal solutions by performing fuzzy-based compromise programming. The robustness and feasibility of the proposed method for finding the RDO of planar steel frame structures with discrete variables are demonstrated through two design examples.



Change history

  • 04 December 2020: A Correction to this paper has been published at https://doi.org/10.1007/s00158-020-02789-9


Acknowledgments

The authors are thankful for the fruitful comments from Prof. Makoto Yamakawa at the Tokyo University of Science. Financial support from the Japan International Cooperation Agency (JICA) and JSPS KAKENHI No. JP19H02286 is fully acknowledged.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Bach Do.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Replication of results

The main steps for applying the proposed method to the RDO problem of planar steel frames are presented in detail in Section 4.5. The training data generated for constructing the GMMs in Sections 5.1 and 5.2 are available online at https://bit.ly/gmm_rdosteelframes. The models and source code used in this study are available from the corresponding author on request, for non-commercial purposes only.

Additional information

Responsible Editor: Xiaoping Du

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix. Gradient and Hessian of GMMs


Let x = [x1, x2, …, xD]T ∈ ℝD be a D-dimensional vector generated from a D-variate Gaussian \( {p}_{\mathbf{X}}\left(\mathbf{x}\right)\sim {\mathcal{N}}_D\left(\boldsymbol{\upmu}, \boldsymbol{\Sigma} \right) \), such that

$$ {p}_{\mathbf{X}}\left(\mathbf{x}\right)=\frac{1}{{\left(2\pi \right)}^{D/2}{\left|\boldsymbol{\Sigma} \right|}^{1/2}}\exp \left[-\frac{1}{2}{\left(\mathbf{x}-\boldsymbol{\upmu} \right)}^T{\boldsymbol{\Sigma}}^{-1}\left(\mathbf{x}-\boldsymbol{\upmu} \right)\right] $$
(45)

Let Σ = UΛUT be the singular value decomposition of the covariance matrix Σ, where U is an orthogonal matrix and Λ is a non-singular diagonal matrix whose dth diagonal element is λd. Let y = UT(x − μ) ∈ ℝD, so that ∂ye/∂xd = ude, where e ∈ {1, 2, …, D} and ude is the (d, e)th element of U. Taking the first derivative of pX(x), we have

$$ {\displaystyle \begin{array}{c}\frac{\partial p}{\partial {x}_d}={p}_{\mathbf{x}}\left(\mathbf{x}\right)\frac{\partial }{\partial {x}_d}\left[-\frac{1}{2}{\left(\mathbf{x}-\boldsymbol{\upmu} \right)}^T{\boldsymbol{\Sigma}}^{-1}\left(\mathbf{x}-\boldsymbol{\upmu} \right)\right]={p}_{\mathbf{x}}\left(\mathbf{x}\right)\frac{\partial }{\partial {x}_d}\left(-\frac{1}{2}{\mathbf{y}}^T{\boldsymbol{\Lambda}}^{-1}\mathbf{y}\right)\\ {}\kern1.2em ={p}_{\mathbf{x}}\left(\mathbf{x}\right)\sum \limits_{e=1}^D\frac{\partial }{\partial {y}_e}\left(-\frac{1}{2}\sum \limits_{d=1}^D\frac{y_d^2}{\lambda_d}\right)\frac{\partial {y}_e}{\partial {x}_d}=-{p}_{\mathbf{x}}\left(\mathbf{x}\right)\sum \limits_{e=1}^D\frac{y_e}{\lambda_e}{u}_{de}\end{array}} $$
(46)

which is the dth element of the vector \( -{p}_{\mathbf{X}}\left(\mathbf{x}\right)\mathbf{U}{\boldsymbol{\Lambda}}^{-1}\mathbf{y} \). This yields the gradient of pX(x) as

$$ \mathbf{g}=\nabla {p}_{\mathbf{X}}\left(\mathbf{x}\right)={p}_{\mathbf{X}}\left(\mathbf{x}\right){\boldsymbol{\Sigma}}^{-1}\left(\boldsymbol{\upmu} -\mathbf{x}\right) $$
(47)
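As a numerical sanity check (not part of the original derivation), the gradient formula (47) can be verified against central finite differences. The helper `gauss_pdf` implements the density (45); the 2-D mean, covariance, and evaluation point below are made-up illustrative values:

```python
import numpy as np

def gauss_pdf(x, mu, Sigma):
    """D-variate Gaussian density of Eq. (45)."""
    d = x - mu
    D = mu.size
    norm = np.sqrt((2 * np.pi) ** D * np.linalg.det(Sigma))
    return np.exp(-0.5 * d @ np.linalg.solve(Sigma, d)) / norm

# Made-up 2-D mean, covariance, and evaluation point for illustration.
mu = np.array([1.0, -0.5])
Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])
x = np.array([0.4, 0.2])

# Eq. (47): grad p_X(x) = p_X(x) * Sigma^{-1} (mu - x).
g_analytic = gauss_pdf(x, mu, Sigma) * np.linalg.solve(Sigma, mu - x)

# Central finite differences as an independent reference.
eps = 1e-6
g_fd = np.array([
    (gauss_pdf(x + eps * e, mu, Sigma) - gauss_pdf(x - eps * e, mu, Sigma)) / (2 * eps)
    for e in np.eye(2)
])
assert np.allclose(g_analytic, g_fd, atol=1e-8)
```

Using `np.linalg.solve` rather than forming Σ−1 explicitly is the numerically preferable way to evaluate the Σ−1(μ − x) term.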

Let c ∈ {1, 2, …, D}. Taking the second derivatives of pX(x), we obtain

$$ {\displaystyle \begin{array}{c}\frac{\partial }{\partial {x}_c}\left(\frac{\partial p}{\partial {x}_d}\right)=\frac{\partial }{\partial {x}_c}\left(-{p}_{\mathbf{X}}\left(\mathbf{x}\right)\sum \limits_{e=1}^D\frac{y_e}{\lambda_e}{u}_{de}\right)=-{p}_{\mathbf{X}}\left(\mathbf{x}\right)\frac{\partial }{\partial {x}_c}\left(\sum \limits_{e=1}^D\frac{y_e}{\lambda_e}{u}_{de}\right)-\frac{\partial {p}_{\mathbf{X}}\left(\mathbf{x}\right)}{\partial {x}_c}\sum \limits_{e=1}^D\frac{y_e}{\lambda_e}{u}_{de}\\ {}=-{p}_{\mathbf{X}}\left(\mathbf{x}\right)\sum \limits_{e=1}^D\frac{u_{de}{u}_{ce}}{\lambda_e}+{p}_{\mathbf{X}}\left(\mathbf{x}\right)\left(\sum \limits_{e=1}^D\frac{y_e}{\lambda_e}{u}_{ce}\right)\left(\sum \limits_{e=1}^D\frac{y_e}{\lambda_e}{u}_{de}\right)\end{array}} $$
(48)

which is the (c, d)th element of the matrix \( -{p}_{\mathbf{X}}\left(\mathbf{x}\right)\mathbf{U}{\boldsymbol{\Lambda}}^{-1}{\mathbf{U}}^T+{\left[{p}_{\mathbf{X}}\left(\mathbf{x}\right)\right]}^{-1}{\mathbf{gg}}^T \) (Carreira-Perpinan 2000). Since UΛ−1UT = Σ−1, the Hessian of pX(x) is

$$ \mathbf{H}=\left(\nabla {\nabla}^T\right){p}_{\mathbf{X}}\left(\mathbf{x}\right)=-{p}_{\mathbf{X}}\left(\mathbf{x}\right){\boldsymbol{\Sigma}}^{-1}+{\left[{p}_{\mathbf{X}}\left(\mathbf{x}\right)\right]}^{-1}{\mathbf{gg}}^T={p}_{\mathbf{X}}\left(\mathbf{x}\right){\boldsymbol{\Sigma}}^{-1}\left[-\boldsymbol{\Sigma} +\left(\mathbf{x}-\boldsymbol{\upmu} \right){\left(\mathbf{x}-\boldsymbol{\upmu} \right)}^T\right]{\boldsymbol{\Sigma}}^{-1} $$
(49)
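The Hessian expression (49) admits the same kind of check (again, not part of the original paper) via second-order central differences; `gauss_pdf` implements (45), and the mean, covariance, and evaluation point are made-up values:

```python
import numpy as np

def gauss_pdf(x, mu, Sigma):
    """D-variate Gaussian density of Eq. (45)."""
    d = x - mu
    D = mu.size
    norm = np.sqrt((2 * np.pi) ** D * np.linalg.det(Sigma))
    return np.exp(-0.5 * d @ np.linalg.solve(Sigma, d)) / norm

# Made-up 2-D parameters for illustration.
mu = np.array([1.0, -0.5])
Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])
x = np.array([0.4, 0.2])

# Eq. (49): H = p_X(x) * Sigma^{-1} [-Sigma + (x-mu)(x-mu)^T] Sigma^{-1}.
Sinv = np.linalg.inv(Sigma)
d = x - mu
H_analytic = gauss_pdf(x, mu, Sigma) * Sinv @ (-Sigma + np.outer(d, d)) @ Sinv

# Second-order central differences as an independent reference.
eps = 1e-4
I = np.eye(2)
H_fd = np.empty((2, 2))
for i in range(2):
    for j in range(2):
        H_fd[i, j] = (
            gauss_pdf(x + eps * (I[i] + I[j]), mu, Sigma)
            - gauss_pdf(x + eps * (I[i] - I[j]), mu, Sigma)
            - gauss_pdf(x - eps * (I[i] - I[j]), mu, Sigma)
            + gauss_pdf(x - eps * (I[i] + I[j]), mu, Sigma)
        ) / (4 * eps**2)
assert np.allclose(H_analytic, H_fd, atol=1e-6)
```

The analytic Hessian is symmetric by construction, which the finite-difference stencil reproduces up to truncation error.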

Results in (47) and (49) lead to the gradient and Hessian of the GMM \( \sum \limits_{k=1}^K{\pi}_k\phi \left(\mathbf{x};{\boldsymbol{\upmu}}_{\mathbf{X},k},{\boldsymbol{\Sigma}}_{\mathbf{X},k}\right) \) as

$$ {\mathbf{g}}_m=\nabla \left[\sum \limits_{k=1}^K{\pi}_k\phi \left(\mathbf{x};{\boldsymbol{\upmu}}_{\mathbf{X},k},{\boldsymbol{\Sigma}}_{\mathbf{X},k}\right)\right]=\sum \limits_{k=1}^K{\pi}_k\phi \left(\mathbf{x};{\boldsymbol{\upmu}}_{\mathbf{X},k},{\boldsymbol{\Sigma}}_{\mathbf{X},k}\right){\boldsymbol{\Sigma}}_{\mathbf{X},k}^{-1}\left({\boldsymbol{\upmu}}_{\mathbf{X},k}-\mathbf{x}\right) $$
(50)
$$ {\mathbf{H}}_m\kern0.5em {\displaystyle \begin{array}{c}=\left(\nabla {\nabla}^T\right)\left[\sum \limits_{k=1}^K{\pi}_k\phi \left(\mathbf{x};{\boldsymbol{\upmu}}_{\mathbf{X},k},{\boldsymbol{\Sigma}}_{\mathbf{X},k}\right)\right]\\ {}=\sum \limits_{k=1}^K{\pi}_k\phi \left(\mathbf{x};{\boldsymbol{\upmu}}_{\mathbf{X},k},{\boldsymbol{\Sigma}}_{\mathbf{X},k}\right){\boldsymbol{\Sigma}}_{\mathbf{X},k}^{-1}\left[-{\boldsymbol{\Sigma}}_{\mathbf{X},k}+\left(\mathbf{x}-{\boldsymbol{\upmu}}_{\mathbf{X},k}\right){\left(\mathbf{x}-{\boldsymbol{\upmu}}_{\mathbf{X},k}\right)}^T\right]{\boldsymbol{\Sigma}}_{\mathbf{X},k}^{-1}\end{array}} $$
(51)
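The mixture-level formulas (50) and (51) are just component-wise sums of the single-Gaussian results. A minimal sketch, assuming a made-up two-component 2-D mixture (the weights, means, and covariances below are illustrative, not from the paper's examples), with the gradient cross-checked by finite differences:

```python
import numpy as np

def gauss_pdf(x, mu, Sigma):
    """D-variate Gaussian density of Eq. (45)."""
    d = x - mu
    D = mu.size
    norm = np.sqrt((2 * np.pi) ** D * np.linalg.det(Sigma))
    return np.exp(-0.5 * d @ np.linalg.solve(Sigma, d)) / norm

def gmm_grad_hess(x, pis, mus, Sigmas):
    """Gradient (Eq. 50) and Hessian (Eq. 51) of sum_k pi_k N(x; mu_k, Sigma_k)."""
    D = x.size
    g = np.zeros(D)
    H = np.zeros((D, D))
    for pi_k, mu_k, S_k in zip(pis, mus, Sigmas):
        phi = gauss_pdf(x, mu_k, S_k)
        Sinv = np.linalg.inv(S_k)
        d = x - mu_k
        g += pi_k * phi * (Sinv @ (mu_k - x))                    # Eq. (50)
        H += pi_k * phi * Sinv @ (-S_k + np.outer(d, d)) @ Sinv  # Eq. (51)
    return g, H

# Made-up two-component mixture for illustration.
pis = [0.6, 0.4]
mus = [np.array([0.0, 0.0]), np.array([2.0, 1.0])]
Sigmas = [np.eye(2), np.array([[1.5, 0.2], [0.2, 0.8]])]
x = np.array([0.7, 0.3])

g_m, H_m = gmm_grad_hess(x, pis, mus, Sigmas)

# Finite-difference cross-check of the gradient.
mix = lambda z: sum(p * gauss_pdf(z, m, S) for p, m, S in zip(pis, mus, Sigmas))
eps = 1e-6
g_fd = np.array([(mix(x + eps * e) - mix(x - eps * e)) / (2 * eps)
                 for e in np.eye(2)])
assert np.allclose(g_m, g_fd, atol=1e-8)
```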

Since the mixing weight wk(x) of the regression function in (13) is a ratio of two GMMs, the following quotient rule for the derivatives of f(x) = g(x)/h(x) can be applied to obtain the gradient and Hessian of the regression function in (15) and (16):

$$ \frac{\partial f}{\partial x}=\frac{h\partial g/\partial x-g\partial h/\partial x}{h^2} $$
(52)
$$ \frac{\partial^2f}{\partial {x}^2}=\frac{\partial^2}{\partial {x}^2}\left(\frac{g}{h}\right)=\frac{\partial^2g/\partial {x}^2-2\left[\partial f/\partial x\right]{\left[\partial h/\partial x\right]}^T-f\left[{\partial}^2h/\partial {x}^2\right]}{h} $$
(53)
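As an illustrative check of the quotient rule (52), not tied to the paper's trained models, the gradient of a ratio of two Gaussian densities (standing in for a mixing-weight-like fraction of GMMs) can be compared with finite differences; all parameter values below are made up:

```python
import numpy as np

def gauss_pdf(x, mu, Sigma):
    """D-variate Gaussian density of Eq. (45)."""
    d = x - mu
    D = mu.size
    norm = np.sqrt((2 * np.pi) ** D * np.linalg.det(Sigma))
    return np.exp(-0.5 * d @ np.linalg.solve(Sigma, d)) / norm

# Made-up parameters for the numerator g and denominator h.
mu_g, S_g = np.array([0.0, 0.0]), np.eye(2)
mu_h, S_h = np.array([1.0, 0.5]), 2.0 * np.eye(2)
x = np.array([0.3, -0.2])

gx, hx = gauss_pdf(x, mu_g, S_g), gauss_pdf(x, mu_h, S_h)

# Gradients of g and h from Eq. (47).
grad_g = gx * np.linalg.solve(S_g, mu_g - x)
grad_h = hx * np.linalg.solve(S_h, mu_h - x)

# Eq. (52): grad f = (h * grad g - g * grad h) / h^2, for f = g/h.
grad_f = (hx * grad_g - gx * grad_h) / hx**2

# Central-difference reference for f = g/h.
f = lambda z: gauss_pdf(z, mu_g, S_g) / gauss_pdf(z, mu_h, S_h)
eps = 1e-6
grad_f_fd = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                      for e in np.eye(2)])
assert np.allclose(grad_f, grad_f_fd, rtol=1e-5)
```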


About this article


Cite this article

Do, B., Ohsaki, M. Gaussian mixture model for robust design optimization of planar steel frames. Struct Multidisc Optim 63, 137–160 (2021). https://doi.org/10.1007/s00158-020-02676-3

