Abstract
Concurrent multiscale structural optimization is concerned with the improvement of macroscale structural performance through the design of microscale architectures. The multiscale design space must consider variables at both scales, so design restrictions are often necessary for feasible optimization. This work targets such design restrictions, aiming to increase microstructure complexity through deep learning models. The deep neural network (DNN) is implemented as a model for both microscale structural properties and material shape derivatives (shape sensitivity). The DNN’s profound advantage is its capacity to distill complex, multidimensional functions into explicit, efficient, and differentiable models. When compared to traditional methods for parameterized optimization, the DNN achieves sufficient accuracy and stability in a structural optimization framework. Through comparison with interface-aware finite element methods, it is shown that sufficiently accurate DNNs converge to produce a stable approximation of shape sensitivity through back propagation. A variety of optimization problems are considered to directly compare the DNN-based microscale design with that of the Interface-enriched Generalized Finite Element Method (IGFEM). Using these developments, DNNs are trained to learn numerical homogenization of microstructures in two and three dimensions with up to 30 geometric parameters. The accelerated performance of the DNN affords an increased design complexity that is used to design bio-inspired microarchitectures in 3D structural optimization. With numerous benchmark design examples, the presented framework is shown to be an effective surrogate for numerical homogenization in structural optimization, addressing the gap between pure material design and structural optimization.
References
Allaire G, Brizzi R (2005) A multiscale finite element method for numerical homogenization. Multiscale Model Simul 4(3):790–812. https://doi.org/10.1137/040611239
Allaire G, Bonnetier E, Francfort G, Jouve F (1997) Shape optimization by the homogenization method. Numer Math 76:27–68. https://doi.org/10.1007/s002110050253
Andreassen E, Andreasen CS (2014) How to determine composite material properties using numerical homogenization. Comput Mater Sci 83:488–495. https://doi.org/10.1016/j.commatsci.2013.09.006
Andreasen CS, Sigmund O (2012) Multiscale modeling and topology optimization of poroelastic actuators. Smart Mater Struct 21(6):065005. https://doi.org/10.1088/0964-1726/21/6/065005
Baker N, Alexander F, Bremer T, Hagberg A, Kevrekidis Y, Najm H, Parashar M, Patra A, Sethian J, Wild S, Willcox K, Lee S (2019) Workshop Report on Basic Research Needs for Scientific Machine Learning: Core Technologies for Artificial Intelligence. Technical report. USDOE Office of Science (SC), Washington, DC. https://doi.org/10.2172/1478744
Bendsøe MP, Kikuchi N (1988) Generating optimal topologies in structural design using a homogenization method. Comput Methods Appl Mech Eng 71(2):197–224. https://doi.org/10.1016/0045-7825(88)90086-2
Bian W, Chen X (2012) Smoothing neural network for constrained non-Lipschitz optimization with applications. IEEE Trans Neural Netw Learn Syst 23(3):399–411. https://doi.org/10.1109/TNNLS.2011.2181867
Bourdin B (2001) Filters in topology optimization. Int J Numer Methods Eng 50(9):2143–2158. https://doi.org/10.1002/nme.116
Brandyberry DR, Najafi AR, Geubelle PH (2020) Multiscale design of three-dimensional nonlinear composites using an interface-enriched generalized finite element method. Int J Numer Methods Eng 121(12):2806–2825. https://doi.org/10.1002/nme.6333
Bruns TE, Tortorelli DA (2001) Topology optimization of non-linear elastic structures and compliant mechanisms. Comput Methods Appl Mech Eng 190(26):3443–3459. https://doi.org/10.1016/S0045-7825(00)00278-4
Chan YC, Da D, Wang L, Chen W (2022) Remixing functionally graded structures: data-driven topology optimization with multiclass shape blending. Struct Multidisc Optim 65(5):135. https://doi.org/10.1007/s00158-022-03224-x
Cheng L, Bai J, To AC (2019) Functionally graded lattice structure topology optimization for the design of additive manufactured components with stress constraints. Comput Methods Appl Mech Eng 344:334–359. https://doi.org/10.1016/j.cma.2018.10.010
Fazlyab M, Robey A, Hassani H, Morari M, Pappas G (2019) Efficient and accurate estimation of Lipschitz constants for deep neural networks. In: Advances in neural information processing systems, vol 32. Curran Associates, Inc. https://proceedings.neurips.cc/paper/2019/hash/95e1533eb1b20a97777749fb94fdb944-Abstract.html. Accessed 26 Dec 2022
Gallant A, White H (1992) On learning the derivatives of an unknown mapping with multilayer feedforward networks. Neural Netw. https://doi.org/10.1016/S0893-6080(05)80011-5
Garner E, Kolken HMA, Wang CCL, Zadpoor AA, Wu J (2019) Compatibility in microstructural optimization for additive manufacturing. Addit Manuf 26:65–75. https://doi.org/10.1016/j.addma.2018.12.007
Glorot X, Bengio Y (2010) Understanding the difficulty of training deep feedforward neural networks. In: Proceedings of the thirteenth international conference on artificial intelligence and statistics. JMLR workshop and conference proceedings, pp 249–256. https://proceedings.mlr.press/v9/glorot10a.html. Accessed 26 Dec 2022
Goodfellow I, Bengio Y, Courville A (2016) Deep learning. MIT Press, Cambridge
Gouk H, Frank E, Pfahringer B, Cree MJ (2021) Regularisation of neural networks by enforcing Lipschitz continuity. Mach Learn 110(2):393–416. https://doi.org/10.1007/s10994-020-05929-w
Groen JP, Sigmund O (2018) Homogenization-based topology optimization for high-resolution manufacturable microstructures. Int J Numer Methods Eng 113(8):1148–1163. https://doi.org/10.1002/nme.5575
Groen JP, Wu J, Sigmund O (2019) Homogenization-based stiffness optimization and projection of 2D coated structures with orthotropic infill. Comput Methods Appl Mech Eng 349:722–742. https://doi.org/10.1016/j.cma.2019.02.031
Guedes J, Kikuchi N (1990) Preprocessing and postprocessing for materials based on the homogenization method with adaptive finite element methods. Comput Methods Appl Mech Eng 83(2):143–198. https://doi.org/10.1016/0045-7825(90)90148-F
Guedes J, Rodrigues H, Bendsøe M (2003) A material optimization model to approximate energy bounds for cellular materials under multiload conditions. Struct Multidisc Optim 25(5):446–452. https://doi.org/10.1007/s00158-003-0305-8
Hassani B, Hinton E (1998) A review of homogenization and topology optimization I—homogenization theory for media with periodic structure. Comput Struct 69(6):707–717. https://doi.org/10.1016/S0045-7949(98)00131-X
Hornik K, Stinchcombe M, White H (1989) Multilayer feedforward networks are universal approximators. Neural Netw 2(5):359–366. https://doi.org/10.1016/0893-6080(89)90020-8
Imediegwu C, Murphy R, Hewson R, Santer M (2019) Multiscale structural optimization towards three-dimensional printable structures. Struct Multidisc Optim 60(2):513–525. https://doi.org/10.1007/s00158-019-02220-y
Kazemi H, Norato JA (2022) Topology optimization of programmable lattices with geometric primitives. Struct Multidisc Optim 65(1):33. https://doi.org/10.1007/s00158-021-03094-9
Kim C, Lee J, Yoo J (2021) Machine learning-combined topology optimization for functionary graded composite structure design. Comput Methods Appl Mech Eng 387:114158. https://doi.org/10.1016/j.cma.2021.114158
Kingma DP, Ba J (2017) Adam: a method for stochastic optimization. arXiv:1412.6980
Kollmann HT, Abueidda DW, Koric S, Guleryuz E, Sobh NH (2020) Deep learning for topology optimization of 2D metamaterials. Mater Des 196:109098. https://doi.org/10.1016/j.matdes.2020.109098
Lee YJ, Misra S, Chen WH, Koditschek DE, Sung C, Yang S (2022) Tendon-driven auxetic tubular springs for resilient hopping robots. Adv Intell Syst 4(4):2100152. https://doi.org/10.1002/aisy.202100152
Logarzo HJ, Capuano G, Rimoli JJ (2021) Smart constitutive laws: inelastic homogenization through machine learning. Comput Methods Appl Mech Eng 373:113482. https://doi.org/10.1016/j.cma.2020.113482
Murphy R, Imediegwu C, Hewson R, Santer M (2021) Multiscale structural optimization with concurrent coupling between scales. Struct Multidisc Optim 63(4):1721–1741. https://doi.org/10.1007/s00158-020-02773-3
Najafi AR, Safdari M, Tortorelli DA, Geubelle PH (2015) A gradient-based shape optimization scheme using an interface-enriched generalized FEM. Comput Methods Appl Mech Eng 296:1–17. https://doi.org/10.1016/j.cma.2015.07.024
Najafi AR, Safdari M, Tortorelli DA, Geubelle PH (2017) Shape optimization using a NURBS-based interface-enriched generalized FEM. Int J Numer Methods Eng 111(10):927–954. https://doi.org/10.1002/nme.5482
Najafi AR, Safdari M, Tortorelli DA, Geubelle PH (2021) Multiscale design of nonlinear materials using a Eulerian shape optimization scheme. Int J Numer Methods Eng 122(12):2981–3014. https://doi.org/10.1002/nme.6650
Nguyen-Thien T, Tran-Cong T (1999) Approximation of functions and their derivatives: a neural network implementation with applications. Appl Math Model 23(9):687–704. https://doi.org/10.1016/S0307-904X(99)00006-2
Nikolakakis KE, Haddadpour F, Karbasi A, Kalogerias DS (2022) Beyond Lipschitz: sharp generalization and excess risk bounds for full-batch GD. arXiv:2204.12446
Novak R, Bahri Y, Abolafia DA, Pennington J, Sohl-Dickstein J (2018) Sensitivity and generalization in neural networks: an empirical study. arXiv:1802.08760
Pantz O, Trabelsi K (2008) A post-treatment of the homogenization method for shape optimization. SIAM J Control Optim 47(3):1380–1398. https://doi.org/10.1137/070688900
Rozvany GIN, Zhou M, Birker T (1992) Generalized shape optimization without homogenization. Struct Optim 4(3):250–252. https://doi.org/10.1007/BF01742754
Safdari M, Najafi AR, Sottos NR, Geubelle PH (2015) A NURBS-based interface-enriched generalized finite element method for problems with complex discontinuous gradient fields. Int J Numer Methods Eng 101(12):950–964. https://doi.org/10.1002/nme.4852
Safdari M, Najafi AR, Sottos NR, Geubelle PH (2016) A NURBS-based generalized finite element scheme for 3D simulation of heterogeneous materials. J Comput Phys 318:373–390. https://doi.org/10.1016/j.jcp.2016.05.004
Sigmund O (1994) Materials with prescribed constitutive parameters: an inverse homogenization problem. Int J Solids Struct 31(17):2313–2329. https://doi.org/10.1016/0020-7683(94)90154-6
Sigmund O, Aage N, Andreassen E (2016) On the (non-)optimality of Michell structures. Struct Multidisc Optim 54(2):361–373. https://doi.org/10.1007/s00158-016-1420-7
Soghrati S, Aragón AM, Armando Duarte C, Geubelle PH (2012) An interface-enriched generalized FEM for problems with discontinuous gradient fields. Int J Numer Methods Eng 89(8):991–1008. https://doi.org/10.1002/nme.3273
Svanberg K (1987) The method of moving asymptotes—a new method for structural optimization. Int J Numer Methods Eng 24(2):359–373. https://doi.org/10.1002/nme.1620240207
Torquato S (2010) Optimal design of heterogeneous materials. Annu Rev Mater Res 40(1):101–129. https://doi.org/10.1146/annurev-matsci-070909-104517
Torquato S, Haslach H (2002) Random heterogeneous materials: microstructure and macroscopic properties. Appl Mech Rev 55(4):B62–B63. https://doi.org/10.1115/1.1483342
Vilardell AM, Takezawa A, du Plessis A, Takata N, Krakhmalev P, Kobashi M, Yadroitsava I, Yadroitsev I (2019) Topology optimization and characterization of Ti6Al4V ELI cellular lattice structures by laser powder bed fusion for biomedical applications. Mater Sci Eng A 766:138330. https://doi.org/10.1016/j.msea.2019.138330
Wang F, Sigmund O (2021) 3D architected isotropic materials with tunable stiffness and buckling strength. J Mech Phys Solids 152:104415. https://doi.org/10.1016/j.jmps.2021.104415
Wang F, Sigmund O, Jensen JS (2014) Design of materials with prescribed nonlinear properties. J Mech Phys Solids 69:156–174. https://doi.org/10.1016/j.jmps.2014.05.003
Wang L, Chan YC, Ahmed F, Liu Z, Zhu P, Chen W (2020) Deep generative modeling for mechanistic-based learning and design of metamaterial systems. Comput Methods Appl Mech Eng 372:113377. https://doi.org/10.1016/j.cma.2020.113377
Watts S, Tortorelli DA (2017) A geometric projection method for designing three-dimensional open lattices with inverse homogenization. Int J Numer Methods Eng 112(11):1564–1588. https://doi.org/10.1002/nme.5569
Watts S, Arrighi W, Kudo J, Tortorelli DA, White DA (2019) Simple, accurate surrogate models of the elastic response of three-dimensional open truss micro-architectures with applications to multiscale topology design. Struct Multidisc Optim 60(5):1887–1920. https://doi.org/10.1007/s00158-019-02297-5
White DA, Arrighi WJ, Kudo J, Watts SE (2019) Multiscale topology optimization using neural network surrogate models. Comput Methods Appl Mech Eng 346:1118–1135. https://doi.org/10.1016/j.cma.2018.09.007
Wu J, Sigmund O, Groen JP (2021a) Topology optimization of multi-scale structures: a review. Struct Multidisc Optim 63(3):1455–1480. https://doi.org/10.1007/s00158-021-02881-8
Wu J, Wang W, Gao X (2021b) Design and optimization of conforming lattice structures. IEEE Trans Vis Comput Graph 27(1):43–56. https://doi.org/10.1109/TVCG.2019.2938946
Yu X, Zhou J, Liang H, Jiang Z, Wu L (2018) Mechanical metamaterials associated with stiffness, rigidity and compressibility: a brief review. Prog Mater Sci 94:114–173. https://doi.org/10.1016/j.pmatsci.2017.12.003
Zheng L, Kumar S, Kochmann DM (2021) Data-driven topology optimization of spinodoid metamaterials with seamlessly tunable anisotropy. Comput Methods Appl Mech Eng 383:113894. https://doi.org/10.1016/j.cma.2021.113894
Zhou M, Rozvany GIN (1991) The COC algorithm, Part II: topological, geometrical and generalized shape optimization. Comput Methods Appl Mech Eng 89(1):309–336. https://doi.org/10.1016/0045-7825(91)90046-9
Zhou H, Zhu J, Wang C, Zhang Y, Wang J, Zhang W (2022) Hierarchical structure optimization with parameterized lattice and multiscale finite element method. Struct Multidisc Optim 65(1):39. https://doi.org/10.1007/s00158-021-03149-x
Zhu B, Skouras M, Chen D, Matusik W (2017) Two-scale topology optimization with microstructures. ACM Trans Graph 36(5):164:1-164:16. https://doi.org/10.1145/3095815
Acknowledgements
The authors would like to acknowledge support from Drexel University. N. Black is grateful for support from the GAANN Grant (Grant Number P200A190036). The work is also supported by the NSF CAREER Award CMMI-2143422. N. Black would also like to acknowledge the fundamental contributions of Dr. Daniel A. Tortorelli to the development of computational algorithms for solid mechanics and Dr. Matthew Burlick to the development of deep learning algorithms. The authors would also like to thank Reza Pejman for insightful discussions related to this work.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Replication of results
Comprehensive implementation details are provided, and the authors are confident that the work is reproducible. For further details and access to the training datasets used in this work, readers are encouraged to contact the authors.
Additional information
Responsible Editor: Ramin Bostanabad
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix 1: IGFEM shape sensitivity of the homogenized elasticity tensor
This section introduces the relevant IGFEM sensitivity analysis for the homogenized elasticity tensor in relation to material shape parameters. We begin with the energy-based expression of the homogenized elasticity tensor:
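The expression itself is elided in this excerpt. As a hedged reconstruction, the standard energy-based form used in numerical homogenization (cf. Andreassen and Andreasen 2014) reads

```latex
\bar{C}_{ijkl} = \frac{1}{|V_{\mu}|} \int_{V_{\mu}}
\left( \varvec{\varepsilon}^{0(ij)} - \varvec{\varepsilon}^{(ij)} \right)^{\text{T}}
\varvec{C}
\left( \varvec{\varepsilon}^{0(kl)} - \varvec{\varepsilon}^{(kl)} \right)
\, {\text{d}} V
```

where \(\varvec{\varepsilon}^{0(ij)}\) are the prescribed unit test strains and \(\varvec{\varepsilon}^{(ij)}\) the corresponding solution strains over the microscale domain \(V_{\mu}\); the paper's exact symbol conventions may differ.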
If we simplify the expression to a single component of the homogenized tensor and omit the subscripts used to indicate microscale element quantities, we can write an element’s contribution to the homogenized tensor as
We remark that in this expression, only the strain \(\varvec{\varepsilon }\) is a function of the shape parameters: the strain \(\varvec{\varepsilon }^{0}\) is prescribed, and the element-wise constitutive relation \(\varvec{C}_{e_{\mu} }\) is not a function of the design variables in the IGFEM shape optimization formulation.
The strain \(\varvec{\varepsilon }\) can be represented as the function \(\varvec{\varepsilon }\left( \varvec{X}(\varvec{x}), \varvec{x} \right) = \mathbb {B}\left( \varvec{X}(\varvec{x}), \varvec{x} \right) \mathbb {U}\left( \varvec{X}(\varvec{x}), \varvec{x} \right)\) of the shape parameters \(\varvec{x}\). Hereafter, we consider a single shape parameter \(x_{i}\) and introduce the notation \(\frac{\partial \mathbb {B}}{\partial x_{i}}\) for the shape derivative of \(\mathbb {B}\). The defining feature of IGFEM is that the strain–displacement matrix \(\mathbb {B}\left( \varvec{X}(\varvec{x}), \varvec{x} \right)\) is itself a function of the shape parameters; for more information on the IGFEM implementation of \(\frac{\partial \mathbb {B}}{\partial x_{i}}\), see Najafi et al. (2015, 2017, 2021) and Brandyberry et al. (2020). The shape material derivative of \(\mathbb {U}\) is denoted \(\mathbb {U}_{i}^{*}\). Following these definitions, the shape derivative of \(\varvec{\varepsilon }\) is expressed element-wise as
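The element-wise expression is elided in this excerpt, but its form can be inferred from the term \(\frac{\partial \mathbb {B}}{\partial x_{i}}\mathbb {U}_{e} + \mathbb {B}\mathbb {U}_{ei}^{*}\) carried through the later steps of the derivation; under the definitions above it reads

```latex
\frac{{\text{d}} \varvec{\varepsilon}}{{\text{d}} x_{i}}
= \frac{\partial \mathbb{B}}{\partial x_{i}}\, \mathbb{U}_{e}
+ \mathbb{B}\, \mathbb{U}_{ei}^{*}
```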
while the shape derivative \(\mathbb {U}_{i}^{*}\) is evaluated through the following pseudo-problem:
The pseudo-problem is assembled from the element quantities \(\frac{\partial \varvec{K}_{e}}{\partial x_{i}}\) and \(\frac{\partial \varvec{F}_{e}}{\partial x_{i}}\). We assert that the material derivative \(\frac{\partial \varvec{C}_{e_{\mu} }}{\partial x_{i}}\) is zero, so the element stiffness derivative follows:
where we note that \(\frac{\partial \mathbb {B}}{\partial x_{i}}^{\text {T}} \varvec{C}_{e_{\mu} } \mathbb {B}\) is symmetric and \({\text {div}}(\mathbb {V}_{i})\) follows from the shape velocity term (Najafi et al. 2015). For the homogenization case where \(\frac{{\text {d}} \varvec{\varepsilon }_0}{{\text {d}} x_{i}} = 0\), the element force derivative is
where
Recalling that only \(\varvec{\varepsilon }\) is a function of the design parameter with its shape sensitivity in (38), then the sensitivity expression of the homogenized elasticity tensor can be defined similar to (40):
Next we target the term \(\frac{\partial \mathbb {B}}{\partial x_{i}}\mathbb {U}_{e} + \mathbb {B}\mathbb {U}_{ei}^{*}\). If we combine the expression for the pseudo-element with the relation \(\mathbb {K}_{e} = \mathbb {B}^{\text {T}}\varvec{C}_{e_{\mu} }\mathbb {B}\), the pseudo-element can be used to eliminate \(\mathbb {B}\mathbb {U}_{ei}^{*}\) in (43) using the element-wise pseudo-force of (39):
Applying the symmetry of \(\frac{\partial \mathbb {B}}{\partial x_{i}}^{\text {T}} \varvec{C}_{e_{\mu} }\mathbb {B}\) in (44), we conclude
Using (45) in the expression for constitutive sensitivity (43), we produce
Using this element contribution to the shape sensitivity of the constitutive parameters, we recover the form in (29).
Appendix 2: Practical considerations
The appropriate DNN architecture and training procedure heavily depend on the application (that is, it depends on the function space to be emulated). For multiscale optimization problems employing homogenization, including applications in structural, thermal, and acoustic simulations that employ parameterized microstructures, this section may be used to generally guide the DNN training process. This section reviews some of the key issues associated with DNN training including vanishing/exploding gradients, batch size, and training dataset generation.
1.1 Gradient propagation
As the DNN trains, its weights are iteratively updated to improve some objective function. The back propagation procedure [cf. (9)] is used to update the weights and biases of the DNN. The convergence of these model parameters is not guaranteed; some combinations of model initialization and training procedures will produce unstable gradients, often referred to as vanishing or exploding gradients (Glorot and Bengio 2010; Goodfellow et al. 2016).
In this work, vanishing gradients were observed and are reported in Table 1 as the number of DNN hidden layers was increased past \(L=3\). Figure 20 illustrates the propagation of the DNN’s Jacobian for a collection of architectures, all trained with 667 IGFEM elliptical training examples with an initial learning rate of \(10^{-4}\) over \(10^{5}\) iterations of full-batch gradient descent. As the number of hidden layers increases, as in Fig. 20e, f, the Jacobian tends toward zero, which caused the training failures reported in Table 1. In all examples, the vanishing gradient phenomenon manifested during training and resulted in blatantly poor models. For the relatively small models in this work, if the training process was stable, then the DNN’s Jacobian was adequate for applications in multiscale optimization.
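The layer-by-layer Jacobian propagation discussed above can be sketched in a few lines. This minimal NumPy example chains local derivatives exactly as back propagation does; the architecture, tanh activation, and Glorot-style initialization are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(layer_sizes):
    """Glorot-style uniform initialization (Glorot and Bengio 2010)."""
    params = []
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        limit = np.sqrt(6.0 / (n_in + n_out))
        params.append((rng.uniform(-limit, limit, (n_out, n_in)),
                       np.zeros(n_out)))
    return params

def forward_with_jacobian(params, x):
    """Forward pass that also chains local derivatives layer by layer,
    yielding the network Jacobian dy/dx alongside the output y."""
    a, J = x, np.eye(x.size)
    for i, (W, b) in enumerate(params):
        z = W @ a + b
        if i < len(params) - 1:                    # tanh hidden layers
            a = np.tanh(z)
            J = (1.0 - a ** 2)[:, None] * (W @ J)  # chain rule through tanh
        else:                                      # linear output layer
            a, J = z, W @ J
    return a, J
```

Each tanh layer multiplies the Jacobian by derivative factors of magnitude at most one, so monitoring the norm of `J` while stacking more hidden layers reproduces the qualitative shrink-toward-zero behavior discussed above.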
1.2 Batch size
The batch size in a DNN training procedure refers to the number of training examples used to calculate the model’s sensitivity for a given training iteration (Goodfellow et al. 2016). Full-batch training was implemented in this work because the training datasets are relatively small (100s to 1000s of examples) and an efficient training procedure was desirable. Training in mini-batches, generally samples of 4–32 training examples, may improve generalization and robustness (Nikolakakis et al. 2022; Novak et al. 2018). Table 2 compares the objective values for two identical DNN architectures trained via full-batch gradient descent and small-batch gradient descent (batch size \(=32\)). Small-batch training did improve the DNN’s performance as the parameterization increased, but the sensitivity, shown in Fig. 21, was improved only inconsistently. Based on this evidence, the gains achieved through small-batch training do not significantly outweigh the additional training cost. For more complicated systems that require highly parameterized models, however, small-batch training may be necessary to build accurate surrogate models (Fig. 22).
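The two update schemes can be sketched on a toy linear regression standing in for the homogenization data; the dataset, learning rate, and epoch count here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression stand-in for the homogenization dataset: inputs play the
# role of geometric parameters, the output a single stiffness entry
# (all values here are illustrative, not from the paper).
w_true = np.array([0.7, -0.2, 0.5])
X = rng.uniform(size=(600, 3))
y = X @ w_true + 0.05 * rng.normal(size=600)

def train(X, y, batch_size=None, lr=0.1, epochs=200):
    """Least-squares fit by gradient descent; batch_size=None gives
    full-batch updates, otherwise shuffled mini-batches."""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(epochs):
        idx = rng.permutation(n)
        step = n if batch_size is None else batch_size
        for s in range(0, n, step):
            b = idx[s:s + step]
            w -= lr * 2.0 * X[b].T @ (X[b] @ w - y[b]) / len(b)
    return w

w_full = train(X, y)                 # full-batch, as used in this work
w_mini = train(X, y, batch_size=32)  # small-batch alternative
```

The only difference between the two regimes is how many examples enter each gradient evaluation; the mini-batch run takes many more (noisier, cheaper) steps per epoch.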
1.3 Training dataset size
A training dataset is necessary to construct a viable DNN surrogate model for engineering applications. The ideal training dataset captures the depth and complexity of the target function so that the DNN may learn a general and robust map within the function space. Whether due to excessively costly data generation or incalculable complexity, the ideal training dataset is not always feasible.
Parameterized homogenization is apt for building effective training datasets. Input parameters are bounded by geometric limits, and output parameters are bounded by the constitutive limits of the material. Given these conditions, it is possible to create a representative dataset of 100s to 1000s of examples that may be used to build a relatively small yet general surrogate model for homogenization. Figure 23 illustrates the correlation between model accuracy and training dataset size for a DNN of \(L=3\) and \(n=32\). For more complicated geometric parameterizations and/or nonlinear physics, it is likely that more data are needed to capture the complexity of the feature space.
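A minimal sketch of how such a bounded dataset can be sampled follows; the Latin hypercube stratification is an assumption here (any space-filling design would do), and the parameter bounds are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_design_space(bounds, n_samples):
    """Latin hypercube sampling of bounded geometric parameters: each
    parameter range is split into n_samples strata, one point is drawn
    per stratum, and the strata are shuffled independently per axis."""
    bounds = np.asarray(bounds, dtype=float)    # shape (n_params, 2)
    n_params = len(bounds)
    strata = np.tile(np.arange(n_samples), (n_params, 1))
    strata = rng.permuted(strata, axis=1).T     # shape (n_samples, n_params)
    u = (strata + rng.uniform(size=(n_samples, n_params))) / n_samples
    lo, hi = bounds[:, 0], bounds[:, 1]
    return lo + u * (hi - lo)

# Hypothetical bounds for three geometric parameters of a unit cell.
samples = sample_design_space([[0.0, 1.0], [0.1, 0.9], [-1.0, 1.0]], 500)
```

Each sampled parameter vector would then be passed to the homogenization solver (here, IGFEM) to generate the corresponding constitutive labels.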
1.4 Homogenization in multiscale optimization
Homogenization assumes a significant separation of scales, approximating the local effects of a periodically varying microstructure (cf. Sect. 2). The examples presented in this work have largely focused on the numerical behavior of DNN surrogate models for homogenization in a selection of optimization exercises. Continued work through full-scale simulation and physical experimentation is necessary to judge the effects of homogenization on multiscale structures. This appendix presents a short illustrative study of the limits of homogenization-based multiscale design.
The test case for experimental validation is derived from a prescribed deformation problem characterized by
which targets the displacement of a zero Poisson’s ratio structure given the boundary conditions shown in Fig. 24a. Design optimization was performed using the FEM-informed DNN model for the 3D BioTruss, producing the \(10\times 10\times 1\) structure shown in Fig. 24a after 100 iterations. Designs are compared using the measured Poisson’s ratio of the macroscale structure
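The measurement equation is elided in this excerpt; assuming the conventional definition of the effective Poisson’s ratio, it reads

```latex
\nu^{*} = - \frac{\varepsilon_{\text{lat}}}{\varepsilon_{\text{long}}}
```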
where the strains \(\varepsilon _{\text {lat}}\) and \(\varepsilon _{\text {long}}\) are the lateral and longitudinal strains measured along the specimen’s centroidal axes. The initial uniform specimen [\(\varvec{\beta }^{(i)} =\{0.5, 0.5\}_{i = 1:12}\); \({\zeta }^{(i)} =0.5_{i = 1:6}\)] has a Poisson’s ratio of 0.33 as evaluated by FEM-based homogenization. After 200 iterations of design optimization (\(V_{x}=0.2\)), the BioTruss design converged to a Poisson’s ratio of 0.00 (as evaluated by FEM-based homogenization). Because the design space reached the parameter limits imposed by the BioTruss geometry (Fig. 25), this specific microarchitecture formulation is likely unable to produce a negative Poisson’s ratio.
The design produced via DNN-driven multiscale optimization was manufactured by 3D printing TPU 95a filament (\(E_{1}=49\) MPa, \(\nu = 0.32\); Lee et al. 2022) via fused deposition modeling (Fig. 24b, c). The properties of TPU 95a differ from those of the simulated fictitious material (\(E_{1}=1\) Pa, \(\nu = 0.30\)), but because the structural deformation is displacement controlled, the deformation of the two materials is sufficiently similar for comparison. Indeed, both the fictitious material and TPU 95a produce an initial Poisson’s ratio of 0.33 for the uniform specimen and 0.00 for the optimized structure, as evaluated by FEM-based homogenization.
The optimized design of TPU 95a microarchitectures was analyzed in the displacement-controlled compression fixture shown in Fig. 26. The Poisson’s ratio was measured experimentally using \(\varepsilon _{\text {lat}}\) and \(\varepsilon _{\text {long}}\) measured along the specimen’s respective centroidal axes. At \(\varepsilon _{\text {long}} = -0.10\), the calculated Poisson’s ratio was \(-0.06\), and at \(\varepsilon _{\text {long}} = -0.20\), it was \(-0.02\). The variation between the modeled (\(\nu = 0.00\)) and experimental Poisson’s ratios is attributed to localized buckling near the compression plates. A full exploration of the observed nonlinear behavior is well outside the scope of this work; we simply conclude that the optimized design did indeed approach the targeted displacement within the limits of its parameterized geometry, given the DNN’s evaluations and shape sensitivities. Beyond navigating the design space, a thorough validation of the final design would require full-scale simulation and experimentation as in Cheng et al. (2019).
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Black, N., Najafi, A.R. Deep neural networks for parameterized homogenization in concurrent multiscale structural optimization. Struct Multidisc Optim 66, 20 (2023). https://doi.org/10.1007/s00158-022-03471-y