
A machine learning-assisted structural optimization scheme for fast-tracking topology optimization

  • Research Paper
  • Published:
Structural and Multidisciplinary Optimization

Abstract

In this work, we propose a machine learning-assisted structural optimization (MLaSO) scheme to accelerate structural optimization. A new machine learning model is integrated within the structural optimization loop on a single mesh and trained online using selected historical results of a chosen optimization quantity from previous iterations. At selected iterations, the trained neural network predicts the update of the chosen optimization quantity, so that the solution can be advanced without conducting finite element analysis or sensitivity analysis. The proposed MLaSO scheme can be readily integrated into different structural optimization methods and applied to many design problems without preparing additional training datasets. As a demonstration, MLaSO is integrated within the solid isotropic material with penalization (SIMP) topology optimization algorithm to solve four 2D design problems. The performance and benefits of MLaSO, in terms of prediction accuracy and computational efficiency, are demonstrated by the present numerical results.
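The alternating structure described in the abstract can be illustrated with a minimal sketch. All function names, the iteration schedule (`k_start`, `dk`), and the box-bounded density update below are our assumptions for illustration, not the paper's implementation:

```python
import numpy as np

# Minimal sketch of the MLaSO idea (illustrative only, not the paper's code):
# "routine" iterations run the full FEA + sensitivity analysis; at selected
# "prediction" iterations a model trained on the history of a chosen
# optimization quantity supplies the update instead.

def mlaso_loop(x0, routine_step, train, predict, k_start=10, dk=2, max_iter=100):
    """Alternate routine and prediction iterations.

    routine_step(x) -> (x_new, phi)  one full iteration; phi is the chosen quantity
    train(history)                   fit the model on collected (iteration, phi) pairs
    predict(k) -> phi_hat            model's prediction of phi at iteration k
    """
    x, history = x0, []
    for k in range(max_iter):
        if k < k_start or k % dk == 0:
            # Routine iteration: exact FEA and sensitivity analysis.
            x, phi = routine_step(x)
            history.append((k, phi))
            train(history)
        else:
            # Prediction iteration: skip FEA, apply the predicted update
            # with box bounds on the element densities.
            phi_hat = predict(k)
            x = np.clip(x + phi_hat, 0.0, 1.0)
    return x
```

The computational saving comes from every prediction iteration replacing one FEA solve and one sensitivity analysis with a cheap network evaluation.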


Abbreviations

\({\varvec{x}}\) :

Design variable vector

\(\overline{{\varvec{x}} }\) :

Processed design variable vector

\({\varvec{K}}\) :

Global stiffness matrix

\({\varvec{U}}\) :

Global displacement vector

\({\mathbf{k}}_{0}\) :

Stiffness matrix for a solid element

\(\mathbf{u}\) :

Element displacement vector

\({\varvec{F}}\) :

Global load vector

\({\varvec{\Phi}}\) :

Neural network input vector

\({\varvec{\Psi}}\) :

Neural network output vector

\(\widehat{{\varvec{\Psi}}}\) :

Neural network prediction output

\({\widehat{{\varvec{\Psi}}}}_{0}\) :

Initial guess

\(\overline{{\varvec{w}} }\) :

Radial basis weight matrix

\({\varvec{w}}\) :

Matrix of weights

\({\varvec{b}}\) :

Vector of bias

\({\varvec{\omega}}\) :

Collective representation of \({\varvec{w}}\) and \({\varvec{b}}\)

\({\varvec{h}}\) :

Hidden layer

\({\varvec{d}}\) :

Search direction vector

\(\overline{{\varvec{d}} }\) :

Processed search direction vector

\(\boldsymbol{\alpha }\) :

Move step size vector

\({\varvec{e}}{\varvec{r}}\) :

Accumulated error vector

\({\varvec{\varepsilon}}\) :

Prediction error vector

\(\sigma\) :

Activation function

\({\beta }_{e}\) :

Dynamic smoothing factor for element e

\(T\) :

Machine learning model

\({f}_{e}\) :

Error function

\(g\) :

Constraint function

\(f\) :

Objective function

\({E}_{e}\) :

Elastic modulus of element e

\({x}_{e}\) :

Density of element e

\(\gamma\) :

Dynamic weightage

\({r}_{l}\) :

Learning rate

\({r}_{u}\) :

Upper bound of initial weights and bias

\(\Delta \gamma\) :

The increment of dynamic weightage

\(n\) :

Sample size

\({v}_{e}\) :

Element volume

\({V}_{f}\) :

Volume limit

\(V\) :

Design domain volume

\(p\) :

Material penalty factor

\(R\) :

Element centroid distance

\(k\) :

Iteration number in MLaSO

\({k}_{\text{so}}\) :

Iteration number in structural optimization

\({k}_{\text{p}}\) :

Prediction iteration number

\({k}_{\text{r}}\) :

Routine iteration number

\(\Delta k\) :

Increment of iteration number

\({k}_{\text{s}}\) :

Entry point

\({k}_{\text{c}}\) :

Total iteration number at convergence

\({k}_{\text{rc}}\) :

Total routine iteration number at convergence

\(l\) :

Step number for convergence criterion

\({n}_{\text{k}}\) :

Sample size for convergence criterion

\({\varepsilon }_{\text{C}}\) :

Compliance difference

\({k}_{\text{pc}}\) :

Total prediction iteration number at convergence

\({\varepsilon }_{{k}_{\text{rc}}}\) :

Difference in the total number of routine iterations at convergence

\({\epsilon }_{\text{m}}\) :

Relative difference in the prediction and its exact value

\({\epsilon }_{\text{c}}\) :

Relative difference in objective function

\(M\) :

Total number of training samples collected

\(q\) :

Penalty in the error function

\({\tau }_{\text{c}}\) :

Structural optimization converging criterion

\({\tau }_{\text{t}}\) :

Training converging criterion

\(C\) :

Structure compliance

\({L}_{2}\) :

Norm-2 distance

\({\varvec{\delta}}\) :

Collective representation of φ and \(\widehat{\boldsymbol{\varphi }}\)

\({N}_{e}\) :

Number of elements

\({N}_{m}\) :

Number of neurons in mth hidden layer

\({N}_{\text{h}}\) :

Number of hidden layers

\({t}_{\text{train}}\) :

Training time

\({t}_{\text{Train}}\) :

Total training time

\({t}_{\text{pred}}\) :

Prediction time

\({t}_{\text{Pred}}\) :

Total prediction time

\({N}_{\text{z}}\) :

Number of nonzero elements in the global stiffness matrix

\(\boldsymbol{\varphi }\) :

Chosen optimization quantity obtained in routine iteration

\(\overline{\boldsymbol{\varphi } }\) :

Exponentially averaged chosen optimization quantity

\(\widehat{\boldsymbol{\varphi }}\) :

Chosen optimization quantity obtained by neural network prediction

\({N}_{\text{in}}\) :

Number of training data points for the training input

\({N}_{\text{out}}\) :

Number of training data points for the training output

\({t}_{\text{opt}}\) :

Total computational time spent on solving a structural optimization problem

\({t}_{\text{Total}}\) :

Total computational time spent by MLaSO

\({t}_{\text{FEA}}\) :

Computational time of one finite element analysis

\({t}_{\text{der}}\) :

Computational time of one sensitivity analysis

\({t}_{\text{up}}\) :

Computational time of one design variable update

\({t}_{\text{d}}\) :

Total computational time difference

\({t}_{\text{s}}\) :

Total time-saving

\(i,j\) :

Element indices i and j

\(\text{min}\) :

Minimum value

\(\text{max}\) :

Maximum value

\(\text{ref}\) :

Results obtained using top88

\(e\) :

Element e

\(m\) :

mth hidden layer

\(k\) :

Iteration number

\({k}_{\text{n}}\) :

Epoch number in a training loop

\({k}_{\text{r}}\) :

Routine iteration number


Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Liyong Tong.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Replication of results

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.

Additional information

Responsible Editor: Jianbin Du

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix: Algorithms for MLaSO-\({\varvec{d}}\)

(Algorithms a, b, and c for the MLaSO-\({\varvec{d}}\) scheme are provided as figures in the published article.)
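The published algorithms are given only as figures. As an illustrative stand-in for the trained predictor in MLaSO-\({\varvec{d}}\) (not the paper's neural network; here a per-element least-squares extrapolation of the search-direction history, with the same collect/predict interface a network would expose):

```python
import numpy as np

# Illustrative stand-in for the MLaSO-d predictor: collect the search
# direction d from routine iterations, then extrapolate it at a prediction
# iteration with a per-element linear least-squares fit. The paper trains a
# neural network instead; only the interface is sketched here.

class DirectionPredictor:
    def __init__(self):
        self.ks, self.ds = [], []

    def collect(self, k, d):
        # Store (iteration number, search-direction vector) from a routine iteration.
        self.ks.append(float(k))
        self.ds.append(np.asarray(d, dtype=float))

    def predict(self, k):
        # Fit d_e(k) ~ a_e * k + b_e for all elements e at once, then evaluate at k.
        A = np.vstack([self.ks, np.ones(len(self.ks))]).T   # shape (n_samples, 2)
        coeffs, *_ = np.linalg.lstsq(A, np.vstack(self.ds), rcond=None)
        a, b = coeffs                                       # each of shape (n_elements,)
        return a * k + b
```

A predictor of this shape slots directly into a prediction iteration: `collect` is called after every routine iteration, and `predict` replaces the FEA-based computation of \({\varvec{d}}\) at prediction iterations.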

Rights and permissions

Reprints and permissions

About this article


Cite this article

Xing, Y., Tong, L. A machine learning-assisted structural optimization scheme for fast-tracking topology optimization. Struct Multidisc Optim 65, 105 (2022). https://doi.org/10.1007/s00158-022-03181-5

