1 Introduction

In the past three decades, topology optimization has become an active research field seeking new, often counterintuitive, optimal designs in a wide range of problems governed by different physics, e.g., solid mechanics [1,2,3,4,5,6], fluid dynamics [7,8,9], thermal dynamics [10,11,12], acoustics [13,14,15,16,17,18] and electromagnetism [19,20,21], among others. Furthermore, topology optimization of coupled multiphysics problems has been addressed in recent works, combining structural-thermal interaction [22,23,24], structural-fluid interaction [25,26,27,28] or even thermal-fluid interaction [29,30,31,32]. As a result of this substantial effort, the optimal design obtained from the solution of a given topology optimization problem can be used by engineers as a first approximation in the development of new products in a wide range of applications.

All up-to-date approaches exhibit certain strengths and weaknesses. As a first approximation,Footnote 1 optimization techniques can be grouped into two main blocks: (I) methods based on trial-and-error schemes, e.g., Genetic Algorithms or Ant Colony Algorithms [39,40,41,42,43,44], and (II) methods relying on gradient computation [3, 45,46,47,48,49,50]. The main disadvantage of the former group is its extremely high computational cost as the number of unknowns increases. This cost may become prohibitive for current computational systems, since thousands of different layouts must be tested to find the optimal configuration. Consequently, the algorithms in the second set are the most widespread, e.g., (a) topology optimization within homogenization theory [51], (b) density-based optimization (SIMP) techniques [45, 52, 53], (c) evolutionary methodologies (ESO)Footnote 2 [54, 55], (d) Level-set approaches [2, 3, 56], (e) the Topological Derivative method [57], (f) the Phase field approach [49, 58, 59], and (g) the Variational Topology Optimization (VARTOP) method [60, 61], among others.

Starting with the seminal paper of Bendsøe and Kikuchi [51], numerical methods for topology optimization have been extensively developed. In particular, this article was the basis for a stream of density-based approaches, such as the Solid Isotropic Material with Penalization (SIMP) method [45, 52, 53], nowadays one of the most widely used topology optimization methods. Simple continuous element-wise design variables are used to formulate and solve the topology optimization problem in a fixed design domain, in which Young’s modulus is defined as a polynomial function of the element-wise density, \(\rho _e\in \left( 0,1\right) \). The design variable must first be penalizedFootnote 3 (normally with a penalty exponent \(p\ge 3\)) and later regularized; this yields almost black-and-white solutions, although the remaining semi-dense elements do not ensure manufacturability. In addition, a large number of regularization schemes have been suggested for topology optimization, including: (1) filtering, via the classical sensitivity [65,66,67] or density [4, 68] filters, projection techniques [69, 70], morphology-based filters [71, 72] or Helmholtz-type filters [73, 74], among others, and (2) geometric constraint techniques, e.g., perimeter constraints [75, 76] or gradient constraints [77]. These mathematical techniques alleviate the significant numerical instabilities [67, 78] resulting from the ill-posedness of unconstrained topology optimization problems for continuous structures, namely gray areas (semi-dense intermediate elements), checkerboard patterns, and mesh-dependency issues. Finally, the solution to the topology optimization problem is obtained by updating the design variable via the classical optimality criteria (OC) method [51, 79],Footnote 4 the method of moving asymptotes (MMA) [81] or other mathematical programming-based optimization algorithms, among others.
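The SIMP ingredients just described (power-law material interpolation plus an OC update with a bisected Lagrange multiplier for the volume constraint) can be sketched as follows. This is a minimal, illustrative Python/NumPy sketch in the spirit of the classical 99-line educational codes, not the implementation compared in this paper; the function names, the modified-SIMP lower bound `Emin`, the move limit and the tolerances are assumptions.

```python
import numpy as np

def simp_modulus(rho, E0=1.0, Emin=1e-9, p=3):
    """Modified SIMP interpolation: E(rho) = Emin + rho^p (E0 - Emin),
    with penalty exponent p >= 3 to push densities towards 0/1."""
    return Emin + rho**p * (E0 - Emin)

def oc_update(rho, dc, dv, vol_frac, move=0.2):
    """Heuristic optimality-criteria update: bisection on the Lagrange
    multiplier of the volume constraint, with a move limit and box
    constraints 0 <= rho <= 1 (dc: compliance sensitivities, dv: volume
    sensitivities)."""
    l1, l2 = 1e-9, 1e9
    while (l2 - l1) / (l1 + l2) > 1e-4:
        lmid = 0.5 * (l1 + l2)
        # fixed-point update rho * sqrt(-dc / (lmid * dv)), clipped to move limits
        rho_new = np.clip(rho * np.sqrt(-dc / (lmid * dv)),
                          np.maximum(rho - move, 0.0),
                          np.minimum(rho + move, 1.0))
        if rho_new.mean() > vol_frac:
            l1 = lmid  # too much material: increase the multiplier
        else:
            l2 = lmid
    return rho_new
```

With uniform sensitivities the update simply drives the mean density to the prescribed volume fraction, which makes the bisection easy to verify in isolation.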

Besides the SIMP method, the Evolutionary Structural Optimization (ESO) method, first introduced by Xie and Steven [46, 54] although similar ideas were presented earlier [82, 83], is also one of the most used in industrial applications. In recent years, the ESO approach has gained widespread popularity due to its simplicity and ease of implementation in commercial FE codes. ESO, considered a hard-kill method,Footnote 5 relies on a simple heuristic criterion to gradually remove inefficient material. Starting from a fully stiff design domain, elements with a low rejection-criterion value are gradually removed, thus evolving towards an optimum. Contrary to SIMP, a discrete element design variable, \(\chi \in \{0,1\}\), is used to define the topology layouts, which are free of gray elements. This choice of design variable results in convergence issues and a high dependency on the initial configuration, thus leading to local optimal solutions [84]. Despite these numerical issues, ESO has been applied to a large range of problems, from the well-known structural problems [85,86,87,88], including non-linear problems [89, 90], to thermal problems [10, 91, 92] and contact problems [93, 94]. In addition, the above-mentioned issues are mitigated in its later version, the Bi-directional Evolutionary Structural Optimization (BESO) method [55, 95, 96], which extends the approach so that new elements can be added while inefficient elements are simultaneously removed. The new material is added either at locations near elements with a high criterion function value [95, 96] or in void areas with higher criterion function values, computed via a linear interpolation of the displacement field [55]. However, these extrapolation techniques are not consistent with those used for the solid elements.
As in any other density-based method, the sensitivity is commonly regularized via a checkerboard suppression filter [97], a mesh-independent filter [98] or a perimeter control [99], similar to those used in SIMP, in order to reduce mesh-dependency. Nevertheless, such hard-kill BESO methods fail to obtain convergent solutions. For that reason, in later revisions, Huang and Xie [98] proposed a modified hard-kill BESO to improve convergence via a stabilization algorithm that uses historical information, in addition to a new mesh-independent filter and nodal sensitivities. As an alternative to hard-kill ESO/BESO methods, soft-kill approaches retain the void elements as very soft elements, thus allowing the computation of the criterion function over the entire domain. Zhu et al. [100] and Huang and Xie [101] proposed independent approaches using a penalized density variable. In particular, Huang and Xie [101] combined the modified BESO method with the SIMP material interpolation scheme, improving the numerical stability and the potential of the methodology. An excellent overview of recent developments in topology optimization using ESO is provided in [102].
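Two of the BESO building blocks mentioned above, a mesh-independent sensitivity filter and the hard-kill add/remove step driven by a ranking of element sensitivities, can be illustrated with a short sketch. This is a generic Python/NumPy illustration under stated assumptions (linearly decaying filter weights of radius `rmin`, element ranking by filtered sensitivity), not the specific scheme of [98]:

```python
import numpy as np

def filter_sensitivities(centers, alpha, rmin):
    """Mesh-independent filter: weighted average of element sensitivities
    alpha with linearly decaying weights w_ij = max(0, rmin - dist(i, j))."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    w = np.maximum(0.0, rmin - d)
    return (w @ alpha) / w.sum(axis=1)

def beso_threshold_update(chi, alpha, target_vol):
    """Hard-kill BESO-style step: keep the elements with the highest
    (filtered) sensitivity until the target volume fraction is met."""
    n_keep = int(round(target_vol * chi.size))
    order = np.argsort(alpha)[::-1]   # most efficient elements first
    chi_new = np.zeros_like(chi)
    chi_new[order[:n_keep]] = 1
    return chi_new
```

In practice the target volume is decreased gradually by an evolutionary ratio rather than imposed in a single step; the single-step form above is kept only for clarity.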

The last major stream is constituted by Level-set-based methods. In contrast to the previous topology optimization approaches, the optimal layout is implicitly defined by the sign of a scalar function \(\phi \). Additionally, the structural boundary of the design, \(\varGamma \), is represented by the zero-level iso-contour (or iso-surface) of the level-set function (LSF) [103,104,105]. As a result, optimal designs with sharp and smooth edges are obtained, thus avoiding semi-dense (gray) elements like those observed in density-based methods. Many formulations of Level-set-based approaches have been proposed over the years since Haber and Bendsøe [106] suggested their application to topology optimization techniques, e.g., [2, 48, 56, 107]. The most important ones relying on a level-set function are the Level-set (based on the shape derivative), the Topological Derivative, and the Phase-field methods.

Regarding the original Level-set method, Osher and Santosa [47], Allaire et al. [3, 56] and Wang et al. [2] took up the idea of using the level-set function for topology optimization and combined it with a shape-derivative-based optimization framework. Hence, only material boundaries are altered via shape-sensitivity analysisFootnote 6 to seek the optimal design. As a consequence, this set of techniques cannot nucleate new holes in the interior of the domain, and the resulting optimal solutions are heavily dependent on the initial layout, which must contain many small holes evenly distributed throughout the domain [3]. This initial configuration ensures the merging and cancellation of holes via the propagation of the boundaries, while avoiding local optimal solutions. To overcome this limitation, a hole nucleation algorithm [109, 110], referred to as the bubble technique, was introduced into the topology optimization approach, allowing the creation of new holes in the material domain. Instead of the density variable used in density-based methods, the level-set function defines the topology design, the material domain being those points where the LSF is positive. The geometry in a fixed mesh is typically mapped to a mechanical model using either immersed boundary techniques (e.g., via X-FEM [111]) or density-based mapping (e.g., via relaxed Heaviside functions [2]) [112]. When the second mapping is used, the stiffness coefficients are expressed in terms of the level-set function via approximated Heaviside functions, i.e., the ersatz material approximation [113, 114], in which the void material is replaced with a soft material. In addition, the LSF is commonly updated using a Hamilton–Jacobi (HJ) equation [2, 3, 47],Footnote 7 thus requiring the solution of a pseudo-time PDE.
Although this updating scheme tends to converge to smooth topologies, it may require a huge number of iterations,Footnote 8 as well as the regular application of reinitialization algorithms to restore a signed-distance function. These reinitializations must be applied whenever significant shape changes occur or after a hole nucleation process [2, 3], thus reducing the efficiency of the approach. Nevertheless, the method has been extensively used for a broad range of design problems, including structural problems [119], vibration problems [47, 119] and thermal problems [120], among others. As an alternative to HJ equations, updating procedures based on mathematical programming are used, e.g., taking the parameters of the discretized level-set function as optimization variables (for instance, radial basis functions (RBF) and spectral methods) [115, 117]. Finally, as described for density-based approaches, the topology optimization problem must be regularized to ensure mesh-independence and improve convergence. This can be achieved either by a filtering procedure or by a constraint equation, e.g., a perimeter constraint [3, 121].
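The Hamilton–Jacobi update of the LSF, \(\phi _t + V \,\vert \varvec{\nabla }\phi \vert = 0\) with a design-velocity field \(V\), can be made concrete with a one-dimensional explicit step. The following is a minimal Python/NumPy sketch assuming a periodic 1-D grid and the standard Godunov upwind discretization; it is illustrative only and not the 2-D/3-D scheme of the cited works:

```python
import numpy as np

def hj_step(phi, V, dx, dt):
    """One explicit upwind (Godunov) step of the Hamilton-Jacobi equation
    phi_t + V |grad(phi)| = 0 on a periodic 1-D grid."""
    # one-sided (backward/forward) difference approximations
    dminus = (phi - np.roll(phi, 1)) / dx
    dplus = (np.roll(phi, -1) - phi) / dx
    # Godunov approximations of |grad(phi)| for outward/inward motion
    grad_pos = np.sqrt(np.maximum(dminus, 0)**2 + np.minimum(dplus, 0)**2)
    grad_neg = np.sqrt(np.minimum(dminus, 0)**2 + np.maximum(dplus, 0)**2)
    return phi - dt * (np.maximum(V, 0) * grad_pos + np.minimum(V, 0) * grad_neg)
```

Explicit steps of this kind are subject to a CFL restriction on `dt`, which is one reason the HJ update may need many iterations, as noted above.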

As an alternative to Level-set methods within the shape-derivative framework, and after the mathematical development of topological derivatives [57, 122, 123], some researchers incorporated the concept of the topological derivative into a shape-sensitivity-based Level-set method, leading to the Topological Derivative approach [48, 124]. The algorithm is similar to that of the classical Level-set method. However, in contrast to those prior methods, this topology optimization technique can nucleate holes in the interior of the material domain by using the topological gradient or topological derivative. This sensitivity is defined as the variation of the objective function due to the insertion of an infinitesimal spherical void at any point \({\mathbf{x}}\) in the design domain \(\varOmega \), thus avoiding stagnation in local optimal solutions. Nevertheless, the topological gradient must be analytically derived through rather complex mathematics for each topology optimization problem, e.g., structural linear optimization [123, 125, 126], thermal orthotropic optimization [127, 128] and microstructure topology optimization [129, 130], among others. This mathematical operator represents an extra step required to proceed with the optimization, limiting the method's appeal compared to techniques with sensitivities that are easier to compute. The topological derivative was first incorporated in conjunction with the shape derivative as a way to systematically nucleate holes [48, 124, 131, 132], similar to the bubble technique in Level-set methods. In later revisions, the topological derivative was used exclusively to update the level-set function [122, 133]. Until then, only stiff material could be removed from the material domain, making it impossible to add new material. It was not until Amstutz and Andrä [134] and He et al. [135] that fully bi-directional Topological Derivative approaches were introduced.
In these techniques, the optimal layout, expressed in terms of the LSF, is defined as a function of the topological gradient. To improve stability, reaction and diffusive terms can be added to the classical HJ-equation, leading to the so-called Generalized HJ-equation. The diffusive term smooths out the design and suppresses sharp corners, avoiding the ill-posedness of the topology optimization problem.

The last group of interest is the Phase field topology optimization approach, in which the theory of phase transitions is adapted to the resolution of topology optimization problems [136,137,138]. The design variable corresponds to the density, as in other density-based approaches, but, in this case, a linear material interpolation is considered, without any exponent factor. In addition, an extra term that controls the interface thickness while penalizing intermediate values is added to the objective function, thus solving one of the main disadvantages of SIMP-like approaches. The optimal solutions present smooth, almost black-and-white material domains separated by thin interfaces of finite thickness. The modified functional is minimized based on the Cahn–Hilliard equation, leading to the resolution of two coupled second-order equations without requiring an explicit volume constraint, i.e., the volume stays constant throughout the optimization procedure. However, some researchers have solved the modified topology optimization problem directly with the inclusion of a volume constraint [49, 50, 58, 139], resembling SIMP with an explicit penalization of the density and a gradient regularization. The gradient regularization produces a smoothing similar to that obtained by the inclusion of diffusive terms in level-set-based methods.Footnote 9 Connected with this concept, Yamada et al. [140] suggested a Phase field approach based on a level-set function, used as the design variable, and a topological derivative incorporating a fictitious interface energy. This last mathematical technique allows control over the complexity of the optimal layout. Although it has been applied to other problems [12, 141], it still resorts to a Hamilton–Jacobi equation to update the topology design, which may entail high computational resources to achieve convergence.

As an alternative to all these well-established techniques, the Variational Topology Optimization (VARTOP)Footnote 10 approach [60] combines the mathematical simplicity of SIMP-based methods with the use of the characteristic function \(\chi \) as the design variable, so that a binary (black-and-white) configuration is obtained. The domain, and thus the characteristic function, is implicitly represented through a 0-level-set function, termed the discrimination function, as in level-set-based methods. Nevertheless, the topology design is updated neither via a Hamilton–Jacobi equation nor via a Reaction–Diffusion equation, but via a fixed-point, non-linear, closed-form algebraic system resulting from the derivation of the topology optimization problem. In addition, an approximated topological derivative, in contrast to the exact Topological Derivative methods, is used in the formulation within an ersatz material approach, greatly reducing the mathematical complexity regardless of the tackled problem. The topology optimization problem is subjected to a volume constraint expressed in terms of a pseudo-time variable. This constraint is iteratively tightened until the desired volume is achieved, thus obtaining converged topologies for intermediate volumes. By means of this procedure, referred to as the time-advancing scheme, the corresponding Pareto frontier is obtained. For each time-step, the closed-form optimality criterion must be solved to compute both the Lagrange multiplier that fulfills the volume constraint and the optimal characteristic function. As for the regularization, a Laplacian regularization, similar to those used in SIMP and Phase-field approaches [49, 73, 74, 140], is applied to the discrimination function, providing not only smoothness in the optimal design but also mesh-size control.
The technique has been already applied to linear static structural [60] and steady-state thermal [142] applications, considering the volume constraint as a single constraint equation, with promising results.

In the literature, there are plenty of articles that either theoretically compare several topology optimization methods such as the ones presented above [33,34,35,36,37,38, 143], or compare the results obtained with the authors' proposed approach with those computed using a recognized topology optimization approach. However, few articles compare, in a practical way, a set of cases across a wide range of techniques under the same convergence criteria. This is one of the most relevant aspects of this work, since the results of a set of widely used techniques are compared with each other: (1) SIMP (briefly described in Sect. 3.1), (2) BESO using a soft-kill criterion (detailed in Sect. 3.2), (3) VARTOP using a pure variational topological approach (presented in Sect. 3.3), and (4) Level-set (detailed in Sect. 3.4). These four well-known methods have been selected among all existing ones due to their wide use at both the professional and research level, as well as for their convenience of implementation and their documentation, thus facilitating their verification and ensuring a fair comparison. Following the implementations proposed by the respective methods' authors, the studied topology optimization techniques have been implemented with as few modifications as possible in order to match the original approaches. The modifications are detailed throughout the document for each of the addressed methods. Although the chosen topology optimization techniques have been applied to a wide spectrum of different applications, the comparison in this article focuses on the static structural problem. The comparison of the results is addressed through a set of well-known benchmark cases, whose optimal layouts are easily recognized. Specifically, minimum mean compliance, multi-load mean compliance, and compliant mechanism topology optimization problems are carried out.

The remainder of this paper is organized as follows. The structural static problem as well as the three addressed topology optimization problems are defined in Sect. 2, while in Sect. 3, the considered approaches are reviewed in terms of their formulation and algorithms. In addition, specific comments are provided to address the studied topology optimization problems with each of the techniques. These techniques are compared with each other in terms of the objective function, the quality of topology design, and the computational cost via a set of benchmark cases detailed in Sect. 4 and analyzed in Sect. 5.

2 Theoretical Aspects

2.1 Domain Definition

Let the design domain, \(\varOmega \), denote a fixed smooth open domain of \({\mathbb {R}}^n\) for \(n=\{2,3\}\), composed of two smooth subdomains \(\varOmega ^+,\varOmega ^-\subset \varOmega \), with \({\overline{\varOmega }}^+\cup {\overline{\varOmega }}^-={\overline{\varOmega }}\) and \(\varOmega ^+\cap \varOmega ^-=\emptyset \), as displayed in Fig. 1a.Footnote 11 The boundary of the design domain, termed \(\partial \varOmega \), is likewise composed of the boundaries of the two subdomains, \(\partial \varOmega ^+\) and \(\partial \varOmega ^-\), satisfying \(\partial \varOmega ^+\cap \partial \varOmega ^-=\varGamma \). The material domain, \(\varOmega ^+\), consists of a stiff material with a high Young’s modulus, while the second subdomain, \(\varOmega ^-\), is formed by a soft material with a low Young’s modulus. The stiffness ratio between both materials is given by the contrast factor, \(\alpha \ll 1\).

Fig. 1

Domain representation: a representation of the design domain, \(\varOmega \), comprising two disjoint sub-domains \(\varOmega ^+\) and \(\varOmega ^-\) and b implicit representation via the level-set function \(\phi \) or the discrimination function \(\psi \)

The topology layout of the design domain can be defined via a characteristic function \({\overline{\chi }}{({\mathbf{x}})}:\varOmega \rightarrow \{0,1\}\) as

$$\begin{aligned} \left\{ \begin{aligned}&\varOmega ^{+}{:}{=}\{{\mathbf {x}}\in \varOmega \;/\; {\overline{\chi }}{({\mathbf{x}})}=1\} \\&\varOmega ^{-}{:}{=}\{{\mathbf {x}}\in \varOmega \;/\; {\overline{\chi }}{({\mathbf{x}})}=0\} \end{aligned} \right. { \, , }\end{aligned}$$
(1)

where \({\overline{\chi }}\) corresponds to the Heaviside function of \((\rho -{\overline{\rho }})\) in density-based approaches (SIMP), the Heaviside function of the level-set function \(\phi \) in Level-set-based methods, and the characteristic function \(\chi \) itself in VARTOP. Notice that the term \({\overline{\rho }}\) must be computed in density-based methods so that the constraint equation (normally the volume constraint) is satisfied, thus obtaining a black-and-white design.

In particular, for Level-set-based and VARTOP approaches, the subdomains can be defined through a continuous function, \(\varphi {({\mathbf{x}})}: \varOmega \rightarrow {\mathbb {R}}, \varphi \in H^1(\varOmega )\) (a level-set function \(\phi \) or a discrimination function \(\psi \), respectively) such that

$$\begin{aligned} \left\{ \begin{aligned}&\varOmega ^{+}{:}{=}\{{\mathbf {x}}\in \varOmega \;/\; \varphi {({\mathbf{x}})}>0\} \\&\varOmega ^{-}{:}{=}\{{\mathbf {x}}\in \varOmega \;/\; \varphi {({\mathbf{x}})}<0\} \end{aligned} \right. { \, , }\end{aligned}$$
(2)

as illustrated in Fig. 1b. The characteristic function \({\overline{\chi }}\) can be then obtained as \({\overline{\chi }}{({\mathbf{x}})}={{{\mathcal {H}}}}(\varphi {({\mathbf{x}})})\), where \({{{\mathcal {H}}}}(\cdot )\) stands for the Heaviside function.
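The implicit representation of Eqs. (1)–(2), together with the contrast factor \(\alpha \) introduced in the domain definition, can be expressed in a few lines. This is a minimal Python/NumPy sketch under stated assumptions (pointwise evaluation on sampled values of \(\varphi \), hypothetical function names, a sign convention that assigns \(\varphi \le 0\) to \(\varOmega ^-\)):

```python
import numpy as np

def characteristic_function(varphi):
    """chi = H(varphi): 1 on Omega+ (varphi > 0), 0 on Omega- (varphi < 0),
    following Eq. (2)."""
    return (varphi > 0).astype(float)

def ersatz_modulus(chi, E0=1.0, contrast=1e-3):
    """Ersatz material: stiff Young's modulus E0 on Omega+, soft modulus
    alpha * E0 on Omega-, with contrast factor alpha << 1."""
    return np.where(chi == 1.0, E0, contrast * E0)
```

The same mapping underlies both the Level-set and VARTOP descriptions: only the meaning of \(\varphi \) (level-set function \(\phi \) or discrimination function \(\psi \)) changes.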

2.2 The Topology Optimization Problem: Contextual Introduction

Topology optimization methods look for the optimal material distribution that minimizes a given objective function, \({{\mathcal {J}}}\), subjected to one or more constraints \({{{\mathcal {C}}}}_k\) (e.g., a volume constraint, \({{{\mathcal {C}}}}_0\le 0\), and possibly N other design variable constraints, \({{{\mathcal {C}}}}_k\le 0, \; k:1\dots N\)) and governed by a linear or non-linear state equation. The material distribution is described by the density variable \(\rho {({\mathbf{x}})}\), the characteristic function \(\chi {({\mathbf{x}})}\) or the level-set function \(\phi {({\mathbf{x}})}\), depending on the topology approach used to carry out the optimization. To keep the definition of the topology optimization problem as general as possible, let us define \({{\zeta }}{({\mathbf{x}})}\) as the design variable at point \({\mathbf{x}}\), which will be taken as \(\rho {({\mathbf{x}})}\), \(\chi {({\mathbf{x}})}\) or \(\phi {({\mathbf{x}})}\) for the density-based, VARTOP, and level-set-based methods, respectively. Based on this concept, the classical mathematical formulation of the corresponding topology optimization problem is given by

$$\begin{aligned} \left[ \begin{aligned}&\underset{{{\zeta }}\in {{\mathscr {U}}}_{ad}}{{\text {min}}}\;{{{\mathcal {J}}}}\left( {{\mathbf {u}}}({{\zeta }}),{{\zeta }}\right) \equiv \int _{\varOmega }{j({{\mathbf {u}}}({{\zeta }}),{{\zeta }},{\mathbf{x}})}{ \, d\varOmega }&(a) \\&\text {subject to:}&\\&\quad {{{\mathcal {C}}}}_0({{\zeta }})\equiv \int _{\varOmega }{c_0({{\zeta }},{\mathbf{x}}){ \, d\varOmega }}\le 0&(b-1) \\&\quad {{{\mathcal {C}}}}_k({{\zeta }})\le 0, \;\; k:1\dots N&(b-2) \\&\text {governed by:}&\\&\quad {\text {State equation} }&(c) \end{aligned}\right. { \, , }\end{aligned}$$
(3)

where the objective function \({{\mathcal {J}}}\) can be expressed as a volume integral of a local function \(j({{\mathbf {u}}}({{\zeta }}),{{\zeta }},{\mathbf{x}})\) over the entire domain, and the constraint functional \({{{\mathcal {C}}}}_0\) represents the volume constraint in terms of the design variable \({{\zeta }}\). Additional constraint equations \({{{\mathcal {C}}}}_k({{\zeta }})\) can be incorporated into the topology optimization problem to explicitly include constraints to the design variables, particular to each approach.

Fig. 2

Elastic problem sketch: a fixed analysis domain \(\varOmega \) with boundary conditions (in which the displacement \(\overline{{\mathbf {u}}}{({\mathbf{x}})}\) or the normal traction \(\varvec{t}_n{({\mathbf{x}})}\) can be prescribed at \(\partial _{u}\varOmega \) and \(\partial _\sigma \varOmega \), respectively) and b Stiff and soft material domains, \(\varOmega ^+\) and \(\varOmega ^-\), respectively, with the same boundary conditions

The state equation yields the unknown field \({\mathbf {u}}({{\zeta }})\) for a specific design \({{{\zeta }}}\) within the admissible set of solutions, \({{\mathscr {U}}}_{ad}\). This unknown field must satisfy the boundary conditions applied to the design domain. In particular, the linear elasticity equilibrium problem, formulated as

$$\begin{aligned} \left[ \begin{aligned}&\text {Find }{{\mathbf {u}}}({{\zeta }},{\mathbf{x}}) \, \text {such that} \\&\left\{ \begin{aligned}&\varvec{\nabla }\cdot \varvec{\sigma }({{\zeta }},{\mathbf{x}})+{\mathbf {b}}({{\zeta }},{\mathbf{x}})= {\mathbf {0}}&in \; \varOmega \\&\varvec{\sigma }({{\zeta }},{\mathbf{x}})\cdot {\mathbf {n}}={{\varvec{t}}_n}{({\mathbf{x}})}&on \; \partial _{\sigma }\varOmega \\&{\mathbf {u}}({{\zeta }},{\mathbf{x}})=\overline{{\mathbf {u}}}{({\mathbf{x}})}&on \; \partial _{u}\varOmega \\ \end{aligned} \right.&{\qquad}\qquad(4)\\ \end{aligned}\right. \end{aligned}$$

is considered as the state equation for all the topology optimization problems addressed in this paper. In the preceding equation, \(\varvec{\sigma }({{\zeta }},{\mathbf{x}})\) and \({\mathbf {b}}({{\zeta }},{\mathbf{x}})\) stand for the second-order stress tensor field and the volumetric force, respectively, both of which depend on the topology layout. Additionally, \({{\varvec{t}}_n}{({\mathbf{x}})}\) and \(\overline{{\mathbf {u}}}{({\mathbf{x}})}\) are respectively the boundary tractions applied on \(\partial _{\sigma }\varOmega \subset \partial \varOmega \) and the displacements prescribed on \(\partial _{u}\varOmega \subset \partial \varOmega \), and \({\mathbf {n}}\) corresponds to the unit outward normal. As for the material behavior, the elastic material is governed by Hooke's law, i.e., \(\pmb {\sigma }={\mathbb {C}}_{{\zeta }}:\pmb {\varepsilon }\), with \(\pmb {\varepsilon }\) being the strain tensor (\(\pmb {\varepsilon } = {\varvec{\nabla }^S {\mathbf {u}}}_{{{\zeta }}}{({\mathbf{x}})}\)) and \({\mathbb {C}}_{{\zeta }}\) the fourth-order elastic constitutive tensor. The constitutive tensor depends on the design variable \({{\zeta }}\) via the corresponding material interpolation of each topology optimization approach.

As depicted in Fig. 2, the boundary \(\partial \varOmega \) of the analysis domain \(\varOmega \) is made of two mutually disjoint subsets, \(\partial _u\varOmega \) and \(\partial _\sigma \varOmega \), where the displacements and tractions are prescribed, respectively, as detailed in the previous equation.

Alternatively, the variational form of the linear elasticity problem (4) becomes

$$\begin{aligned} \left[ \begin{aligned}&\text {Find the displacement field } {{\mathbf {u}}}_{{\zeta }}\in {{{\mathcal {U}}}}(\varOmega ) \text { such that} \\&\begin{array}{lll} &{}\quad a({\mathbf {w}},{\mathbf {u}}_{{\zeta }}) = l({\mathbf {w}}) \quad \forall {\mathbf {w}}\in {{{\mathcal {V}}}}(\varOmega ) &{\qquad}\qquad(5)\\ &{}\text {where} \\ &{}\quad \begin{aligned} a({\mathbf {w}},{\mathbf {u}}_{\zeta }) = {\int _{\varOmega }{\varvec{\nabla }^S {\mathbf {w}}{({\mathbf{x}})}}:{\mathbb {C}}_{{{\zeta }}}{({\mathbf{x}})}:\varvec{\nabla }^S{\mathbf {u}}_{{\zeta }}{({\mathbf{x}})}}{ \, d\varOmega }{ \, , } \end {aligned} &{\qquad} \qquad(6)\\ &{}\quad \begin{aligned} l({\mathbf {w}}) = &{}\int _{\partial _{\sigma }\varOmega }{{\mathbf {w}}{({\mathbf{x}})}\cdot {\varvec{t}_n}{({\mathbf{x}})}}{ \, d\varGamma }\\ &{}+ \int _{\varOmega }{{\mathbf {w}}{({\mathbf{x}})}\cdot {\mathbf {b}}_{{\zeta }}{({\mathbf{x}})}}{ \, d\varOmega }{ \, , }\end{aligned} &{\qquad} \qquad(7) \end{array} \end{aligned}\right. \end{aligned}$$

with \({\mathbf {u}}_{{\zeta }}\) and \({\mathbf {w}}\) being the displacement field and the virtual displacement field, respectively.

The linear elasticity problem (Eqs. (5) to (7)), discretized using the standard finite element method, reads

$$\begin{aligned} \left[ \begin{aligned}&{{\mathbb {K}}}_{{{\zeta }}}\hat{{\mathbf {u}}}_{{{\zeta }}}={\mathbf {f}}_{{\zeta }}&\qquad \qquad (8)\\&\text {with} \\&{{\mathbb {K}}}_{{{\zeta }}}=\int _{\varOmega }{\mathbf {B}}^{\text {T}}{({\mathbf{x}})}\ {{\mathbb {C}}}_{{{\zeta }}}{({\mathbf{x}})}\ {\mathbf {B}}{({\mathbf{x}})}{ \, d\varOmega }{ \, , }&\qquad \qquad(9)\\&\begin{aligned} {\mathbf {f}}_{{\zeta }}=&\int _{\partial _{\sigma }\varOmega }\mathbf {N_{u}}^{\text {T}}{({\mathbf{x}})}{\varvec{t}_n}{({\mathbf{x}})}{ \, d\varGamma }\\&+ \int _{\varOmega } \mathbf {N_{u}}^{\text {T}}{({\mathbf{x}})}{\mathbf {b}}_{{{\zeta }}}{({\mathbf{x}})}{ \, d\varOmega }{ \, , }\end{aligned}&\qquad \qquad(10)\\ \end{aligned}\right. \end{aligned}$$

where the stiffness matrix and the external force vector are denoted by \({{\mathbb {K}}}_{{{\zeta }}}\) and \({\mathbf {f}}_{{\zeta }}\), respectively. For the sake of clarity, the dependence of the force vector on the design variable will be neglected, considering a constant force vector \({\mathbf {f}}\), independent of the topology layout, with null volumetric forces. The displacement field, \({\mathbf {u}}_{{{\zeta }}}{({\mathbf{x}})}\), and its gradient, \({\varvec{\nabla }^S {\mathbf {u}}}_{{{\zeta }}}\), are approximated as

$$\begin{aligned}&{\mathbf {u}}_{{{\zeta }}}{({\mathbf{x}})}\equiv {\mathbf {N}}_{u}{({\mathbf{x}})}\hat{{\mathbf {u}}}_{{\zeta }} \end{aligned}$$
(11)
$$\begin{aligned}&{\varvec{\nabla }^S {\mathbf {u}}}_{{{\zeta }}}{({\mathbf{x}})}\equiv {\mathbf {B}}{({\mathbf{x}})}\hat{{\mathbf {u}}}_{{\zeta }} \end{aligned}$$
(12)

where \({\mathbf {N}}_{u}{({\mathbf{x}})}\) and \({\mathbf {B}}{({\mathbf{x}})}\) stand for the displacement shape-function matrix and the strain-displacement matrix, respectively, and \(\hat{{\mathbf {u}}}_{{\zeta }}\) corresponds to the nodal displacement vector. Notice that the dependence on the design variable \({{\zeta }}\) is highlighted by the subscript \((\cdot )_{{\zeta }}\).

2.2.1 Minimum Mean Compliance

The minimum mean compliance topology optimization problem seeks the optimal topology layout that maximizes the global stiffness of the structure or, equivalently, minimizes the work done by the external loads on the structure. The objective function \({{{\mathcal {J}}}}^{(I)}({\mathbf {u}}_{{\zeta }},{{\zeta }})\), in variational form, is given by

$$\begin{aligned} {{{\mathcal {J}}}}^{(I)}\left( {{\mathbf {u}}}({{\zeta }}),{{\zeta }}\right) \equiv l({\mathbf {u}}_{{{\zeta }}}) { \, , }\end{aligned}$$
(13)

and the corresponding discretized topology optimization problem can be written as

$$\begin{aligned} \left[ \begin{aligned}&\underset{{{\zeta }}\in {{\mathscr {U}}}_{ad}}{{\text {min}}}\;{{{\mathcal {J}}}}^{(I)}\left( {{\mathbf {u}}}({{\zeta }}),{{\zeta }}\right) \equiv \int _{\varOmega } \hat{{\mathbf {u}}}_{{{\zeta }}}^{\text {T}} {\mathbf {B}}^{\text {T}}{({\mathbf{x}})}\ {{\mathbb {C}}}_{{{\zeta }}}{({\mathbf{x}})}\ {\mathbf {B}}{({\mathbf{x}})}\hat{{\mathbf {u}}}_{{{\zeta }}} { \, d\varOmega }\quad&(a) \\&\text {subject to:} \\&\quad {{{\mathcal {C}}}}_0({{\zeta }})\equiv \int _{\varOmega }{c_0({{\zeta }},{\mathbf{x}}){ \, d\varOmega }}\le 0&(b-1) \\&\quad {{{\mathcal {C}}}}_k({{\zeta }})\le 0, \quad k:1\dots N&(b-2) \\&\text {governed by:} \\&\quad {{\mathbb {K}}}_{{{\zeta }}}\hat{{\mathbf {u}}}_{{{\zeta }}}={\mathbf {f}} \text {, with } \ {{\mathbb {K}}}_{{{\zeta }}}=\int _{\varOmega }{\mathbf {B}}^{\text {T}}{({\mathbf{x}})}\ {{\mathbb {C}}}_{{{\zeta }}}{({\mathbf{x}})}\ {\mathbf {B}}{({\mathbf{x}})}{ \, d\varOmega }\text { and } {\mathbf {f}}=\int _{\partial _{\sigma }\varOmega }\mathbf {N_{u}}^{\text {T}}{({\mathbf{x}})}{\varvec{t}_n}{({\mathbf{x}})}{ \, d\varGamma }&(c) \end{aligned}\right. { \, , }\end{aligned}$$
(14)

with \(j({{\mathbf {u}}}({{\zeta }}),{{\zeta }},{\mathbf{x}})\) being \({\varvec{\nabla }^S{\mathbf {u}}_{{{\zeta }}}{({\mathbf{x}})}:{{\mathbb {C}}}_{{{\zeta }}}{({\mathbf{x}})}:\varvec{\nabla }^S{\mathbf {u}}_{{{\zeta }}}}{({\mathbf{x}})}\). When nodal variables are used, the mean compliance can also be expressed as \({\mathbf {f}}^{\text {T}}\hat{{\mathbf {u}}}_{{{\zeta }}}\).
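The equivalence between the discretized integral form \(\hat{{\mathbf {u}}}^{\text {T}}{{\mathbb {K}}}\hat{{\mathbf {u}}}\) and the external-work form \({\mathbf {f}}^{\text {T}}\hat{{\mathbf {u}}}\) can be checked numerically on any symmetric positive-definite system; the sketch below uses an arbitrary SPD matrix and vector as stand-ins for \({{\mathbb {K}}}_{{{\zeta }}}\) and \({\mathbf {f}}\) (not an actual finite element assembly):

```python
import numpy as np

# Stand-in stiffness system: any symmetric positive-definite matrix plays the
# role of K_zeta, and a random vector plays the role of the force vector f.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
K = A @ A.T + 6.0 * np.eye(6)
f = rng.standard_normal(6)

u = np.linalg.solve(K, f)        # state equation (8): K u = f

compliance_energy = u @ K @ u    # discretized integral form u^T K u
compliance_work = f @ u          # external-work form f^T u
assert np.isclose(compliance_energy, compliance_work)
```

The two values coincide because \({{\mathbb {K}}}\hat{{\mathbf {u}}}={\mathbf {f}}\) holds at the solution of the state equation.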

2.2.2 Multi-Load Mean Compliance

Multi-load compliance topology optimization problems are a subfamily of minimum mean compliance problems (Sect. 2.2.1), in which a set of elastic problems with different loading conditions is solved independently, instead of a single problem with all the external loads applied at the same time. As a result, the topology optimization procedure seeks a trade-off between the optimal topology layouts for each specific loading state. Hence, the objective function (3-a) is computed as the weighted sum of all the individual compliances, i.e.,

$$\begin{aligned} \begin{aligned}&{{{\mathcal {J}}}}^{(II)}\left( {{\mathbf {u}}}({{\zeta }}),{{\zeta }}\right) \equiv \sum _{i=1}^{n_l} l\left( {\mathbf {u}}_{{{\zeta }}}^{(i)}\right) \\&\quad \equiv \sum _{i=1}^{n_l} \int _{\varOmega } \varvec{\nabla }^S{\mathbf {u}}_{{{\zeta }}}^{(i)} {({\mathbf{x}})}:{{\mathbb {C}}}_{{{\zeta }}}{({\mathbf{x}})}:\varvec{\nabla }^S{\mathbf {u}}_{{{\zeta }}}^{(i)}{({\mathbf{x}})}{ \, d\varOmega }{ \, , }\end{aligned} \end{aligned}$$
(15)

where i and \(n_l\) correspond to the index of the i-th loading stateFootnote 12 and the total number of loading states, respectively. Consequently, the topology optimization problem (14) becomes

$$\begin{aligned} \left[ \begin{aligned}&\underset{{{\zeta }}\in {{\mathscr {U}}}_{ad}}{{\text {min}}}\;{{{\mathcal {J}}}}^{(II)}\left( {{\mathbf {u}}}({{\zeta }}),{{\zeta }}\right) \equiv \sum _{i=1}^{n_l} \int _{\varOmega }{ \hat{{\mathbf {u}}}_{{{\zeta }}}^{{(i)}\text {T}} {\mathbf {B}}^{\text {T}}{({\mathbf{x}})}\ {{\mathbb {C}}}_{{{\zeta }}}{({\mathbf{x}})}\ {\mathbf {B}}{({\mathbf{x}})}\hat{{\mathbf {u}}}_{{{\zeta }}}^{(i)} }{ \, d\varOmega }= \sum _{i=1}^{n_l} {\mathbf {f}}^{{(i)}\text {T}}\hat{{\mathbf {u}}}_{{{\zeta }}}^{(i)}\quad&(a) \\&\text {subject to:} \\&\quad {{{\mathcal {C}}}}_0({{\zeta }})\equiv \int _{\varOmega }{c_0({{\zeta }},{\mathbf{x}}){ \, d\varOmega }}\le 0&(b-1) \\&\quad {{{\mathcal {C}}}}_k({{\zeta }})\le 0, \quad k:1\dots N&(b-2) \\&\text {governed by:} \\&\quad {{\mathbb {K}}}_{{{\zeta }}}\hat{{\mathbf {u}}}_{{{\zeta }}}^{(i)}={\mathbf {f}}^{(i)} \text {, with } \begin{aligned} {{\mathbb {K}}}_{{{\zeta }}}&=\int _{\varOmega }{\mathbf {B}}^{\text {T}}{({\mathbf{x}})}\ {{\mathbb {C}}}_{{{\zeta }}}{({\mathbf{x}})}\ {\mathbf {B}}{({\mathbf{x}})}{ \, d\varOmega }\\ {\mathbf {f}}^{(i)}&=\int _{\partial _{\sigma }\varOmega ^{(i)}}{\mathbf {N_{u}}^{\text {T}}{({\mathbf{x}})}{\varvec{t}_n}^{(i)}{({\mathbf{x}})}{ \, d\varGamma }}, \quad i:1\dots n_l \end{aligned}&(c) \end{aligned}\right. { \, . }\end{aligned}$$
(16)

Notice that \(n_l\) independent linear elastic problems must be solved in order to obtain the displacement field \(\hat{{\mathbf {u}}}_{{{\zeta }}}^{(i)}\) for each of the loading states.
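Since the stiffness matrix \({{\mathbb {K}}}_{{{\zeta }}}\) is shared by all loading states, the \(n_l\) systems can be solved at once by stacking the load vectors as columns; a small NumPy sketch with arbitrary stand-in data (not a finite element assembly):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8))
K = A @ A.T + 8.0 * np.eye(8)     # shared (SPD) stand-in stiffness matrix
F = rng.standard_normal((8, 3))   # n_l = 3 load cases f^(i) as columns

U = np.linalg.solve(K, F)         # all n_l displacement fields in one call

# objective (16)-a: sum_i f^(i)T u^(i)
J_multi = np.einsum('ij,ij->', F, U)

# same value solving each loading state separately
J_loop = sum(F[:, i] @ np.linalg.solve(K, F[:, i]) for i in range(3))
assert np.isclose(J_multi, J_loop)
```

Passing all right-hand sides to a single solve reuses the factorization of \({{\mathbb {K}}}\), which is the main cost of each optimization iteration.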

2.2.3 Compliant Mechanism Synthesis

Contrary to the two previous sections, where the main goal was to maximize the mean stiffness of the structure, the objective now is to design a flexible structure capable of transferring an action (force or displacement) from the input port to the output port, obtaining a desired force or displacement at the latter. The corresponding objective function \({{{\mathcal {J}}}}^{(III)}\left( {{\mathbf {u}}}({{\zeta }}),{{\zeta }}\right) \) can be expressed as

$$\begin{aligned} {{{\mathcal {J}}}}^{(III)}\left( {{\mathbf {u}}}({{\zeta }}),{{\zeta }}\right) \equiv -l_2\left( {\mathbf {u}}_{{{\zeta }}}^{(1)}\right) { \, , }\end{aligned}$$
(17)

where \(l_2\left( {\mathbf {u}}_{{{\zeta }}}^{(1)}\right) \) corresponds to the rhs term (7) of the elastic problem (5) for \({{\mathbf {w}}} = {\mathbf {u}}_{{{\zeta }}}^{(1)}\) with the boundary traction \({\varvec{t}_n^{(2)}}{({\mathbf{x}})}\) being a dummy constant force value applied only at the output port in the desired direction.

Mimicking the procedure detailed for the other two topology optimization problems in Sects. 2.2.1 and 2.2.2, the new mathematical problem is given by

$$\begin{aligned} \left[ \begin{aligned}&\underset{{{\zeta }}\in {{\mathscr {U}}}_{ad}}{{\text {min}}}\;{{{\mathcal {J}}}}^{(III)}\left( {{\mathbf {u}}}({{\zeta }}),{{\zeta }}\right) \equiv -\int _{\varOmega }{ \hat{{\mathbf {u}}}_{{{\zeta }}}^{{(1)}\text {T}} {\mathbf {B}}^{\text {T}}{({\mathbf{x}})}\ {{\mathbb {C}}}_{{{\zeta }}}{({\mathbf{x}})}\ {\mathbf {B}}{({\mathbf{x}})}\hat{{\mathbf {u}}}_{{{\zeta }}}^{(2)}}{ \, d\varOmega }= - {\mathbf {1}}^{\text {T}}\hat{{\mathbf {u}}}_{{{\zeta }}}^{(1)}\quad&(a) \\&\text {subject to:} \\&\quad {{{\mathcal {C}}}}_0({{\zeta }})\equiv \int _{\varOmega }{c_0({{\zeta }},{\mathbf{x}}){ \, d\varOmega }}\le 0&(b-1) \\&\quad {{{\mathcal {C}}}}_k({{\zeta }})\le 0, \quad k:1\dots N&(b-2) \\&\text {governed by:} \\&\quad \begin{aligned} {{\mathbb {K}}}_{{{\zeta }}}\hat{{\mathbf {u}}}_{{{\zeta }}}^{(1)}={\mathbf {f}}^{(1)}&\text {, with } {\mathbf {f}}^{(1)}=\int _{\partial _{\sigma }\varOmega ^{(1)}}\mathbf {N_{u}}^{\text {T}}{({\mathbf{x}})}{\varvec{t}_n}^{(1)}{({\mathbf{x}})}{ \, d\varGamma }\\ {{\mathbb {K}}}_{{{\zeta }}}\hat{{\mathbf {u}}}_{{{\zeta }}}^{(2)}={\mathbf {f}}^{(2)}&\text {, with } {\mathbf {f}}^{(2)}= {\mathbf {1}} = \int _{\partial _{\sigma }\varOmega ^{(2)}}\mathbf {N_{u}}^{\text {T}}{({\mathbf{x}})}{\varvec{t}_n}^{(2)}{({\mathbf{x}})}{ \, d\varGamma }\end{aligned}&(c) \end{aligned}\right. { \, . }\end{aligned}$$
(18)

As anticipated in expression (17), the compliant mechanism problem (18) is not self-adjoint, so an auxiliary state problem must be solved in addition to the original state problem. The additional system presents the same stiffness matrix \({{\mathbb {K}}}_{{{\zeta }}}\) but a different force vector \({{\mathbf {f}}}^{(2)}\), consisting of a dummy constant force at the output port, whose solution is denoted by \({\mathbf {u}}_{{{\zeta }}}^{(2)}\). Additional springs, denoted by \(K_{in}\) and \(K_{out}\), must be considered at the input and output ports, respectively, to ensure convergence.
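The role of the auxiliary problem can be illustrated with the reciprocity relation behind (18)-a: since \({{\mathbb {K}}}_{{{\zeta }}}\) is symmetric, the output displacement \(-{\mathbf {f}}^{(2)\text {T}}\hat{{\mathbf {u}}}^{(1)}\) equals \(-\hat{{\mathbf {u}}}^{(2)\text {T}}{{\mathbb {K}}}\hat{{\mathbf {u}}}^{(1)}\). A toy NumPy check with stand-in matrices (no actual ports or springs):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))
K = A @ A.T + 6.0 * np.eye(6)    # symmetric stand-in stiffness matrix
f1 = rng.standard_normal(6)      # input load f^(1)
f2 = np.zeros(6)
f2[3] = 1.0                      # dummy unit load at the "output" dof

u1 = np.linalg.solve(K, f1)      # state problem
u2 = np.linalg.solve(K, f2)      # auxiliary problem (same K, different rhs)

# objective: minus the displacement at the output dof
J = -f2 @ u1
assert np.isclose(J, -u2 @ K @ u1)   # reciprocity: f2^T u1 = u2^T K u1
assert np.isclose(f2 @ u1, f1 @ u2)  # equivalently f2^T u1 = f1^T u2
```

The second identity is what makes \({\mathbf {u}}^{(2)}\) usable as an adjoint field in the sensitivity expressions that follow.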

2.3 General Algorithm

The flowchart of the general algorithm, used to obtain the optimal topology layouts, is illustrated in Fig. 3. Note that each technique will present variations of this optimization algorithm, which will be specified in the corresponding sections (Sects. 3.1–3.4). Nevertheless, the methods addressed in this paper exhibit a similar updating scheme.

The main part of the algorithm consists of solving the state equation (8) to obtain the unknown field \({\mathbf {u}}({{\zeta }})\), and computing the corresponding sensitivities along with the objective function value (Eqs. (14)-a, (16)-a or (18)-a). After computing the topological derivatives of the objective function, some type of regularization must be applied to them (e.g., sensitivity filtering and/or temporal regularization) to improve numerical stability and ensure convergence. The topology \({{\zeta }}\) is then updated according to each approach, following an optimality criterion so that the objective function is minimized. For a given volume constraint, convergence in topology and objective function is sought, thus obtaining the optimal topology design.

Depending on the topology optimization method, a set of intermediate optimal topologies is obtained when using time-advancing schemes, while only a single optimal design (the last one) is obtained for single-time-step methods (see Sect. 1). For the first family of approaches, the algorithm previously described must be repeated for each time-step until the convergence criteria are met, thus obtaining a set of converged solutions over the Pareto frontier between the objective function and the volume constraint.
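The loop just described can be condensed into an illustrative skeleton; every function name below is hypothetical, and the toy usage replaces the elastic problem with a scalar quadratic so the skeleton runs end-to-end:

```python
def optimize(zeta, solve_state, sensitivities, regularize, update, converged,
             max_iter=500):
    """Sketch of the general flowchart: solve the state equation, evaluate
    sensitivities, regularize them, update the design, check convergence."""
    for _ in range(max_iter):
        u = solve_state(zeta)              # state equation (8)
        J, dJ = sensitivities(zeta, u)     # objective value and sensitivities
        dJ = regularize(dJ)                # e.g., sensitivity filtering
        zeta_new = update(zeta, dJ)        # optimality-criterion-type update
        if converged(zeta, zeta_new):
            return zeta_new
        zeta = zeta_new
    return zeta

# toy run: minimize (zeta - 0.3)^2 with a plain gradient step
zeta_opt = optimize(
    zeta=0.9,
    solve_state=lambda z: None,
    sensitivities=lambda z, u: ((z - 0.3) ** 2, 2.0 * (z - 0.3)),
    regularize=lambda g: g,
    update=lambda z, g: z - 0.1 * g,
    converged=lambda z_old, z_new: abs(z_new - z_old) < 1e-9,
)
assert abs(zeta_opt - 0.3) < 1e-6
```

Each method in Sect. 3 specializes the `sensitivities`, `regularize` and `update` slots while keeping this overall structure.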

Fig. 3 The general flowchart for topology optimization approaches

3 Topology Optimization Methods

In the following subsections, the specific details of each considered topology optimization method are described, focusing on the differences among them.

3.1 SIMP Method

As aforementioned, the SIMP approach employs element-wise density variables \(\rho _e\) as design variables to describe the topology layout. Therefore, the design domain \(\varOmega \) is discretized into cells or voxelsFootnote 13 and each element e is assigned a density \(\rho _e\). Although \(\rho _e\) would ideally be equal to 1 for material and 0 for void, the design variable is here relaxed by allowing intermediate values \(0\le \rho _e \le 1\). As a consequence, additional constraints (3-b-2) must be added to the original topology optimization problem subject to the volume constraint. Furthermore, the material interpolation of the constitutive tensor \({\mathbb {C}}_\rho \) is defined as

$$\begin{aligned} {\mathbb {C}}_\rho (\rho _e) = {\mathbb {C}}^- + \rho _e^p \left( {\mathbb {C}}^+ - {\mathbb {C}}^- \right) , \quad \rho _e \in [0,1] { \, , }\end{aligned}$$
(19)

where \({\mathbb {C}}^+\) and \({\mathbb {C}}^-\) correspond to the constitutive tensor of the stiff and soft materials, respectively. In addition, the parameter p stands for the penalization factor (typically \(p\ge 3\)). For a constant Poisson ratio, Eq. (19) can be written in terms of the Young’s modulus E as

$$\begin{aligned} {\mathbb {C}}_\rho (\rho _e) = \left( E^- + \rho _e^p \left( E^+ - E^- \right) \right) {\mathbb {C}}_{(1)} { \, , }\end{aligned}$$
(20)

where \(E^+\) and \(E^-\) represent the Young’s modulus of the stiff and soft materials, respectively, and \({\mathbb {C}}_{(1)}\) corresponds to the constitutive tensor with unit Young’s modulus. Assuming that \(E^-\) is defined as proportional to the high Young’s modulus, \(E^+\), via the contrast factor \(\alpha \), the resultant constitutive tensor \({\mathbb {C}}_\rho (\rho _e)\) is defined as \({\mathbb {C}}^+\) multiplied by a coefficient that depends on the design variable of the element, i.e., 

$$\begin{aligned} {\mathbb {C}}_\rho (\rho _e) = \left( \alpha + \rho _e^p \left( 1 - \alpha \right) \right) {\mathbb {C}}^+ { \, . }\end{aligned}$$
(21)

The global stiffness matrix \({{\mathbb {K}}}_{\rho }\) is obtained via the assembly of element stiffness matrices defined as

$$\begin{aligned} \begin{aligned} {{\mathbb {K}}}_e (\rho _e)&= \int _{\varOmega _e}{\mathbf {B}}^{\text {T}}{({\mathbf{x}})}\ {{\mathbb {C}}}_{\rho }{({\mathbf{x}})}\ {\mathbf {B}}{({\mathbf{x}})}{ \, d\varOmega }\\&= \left( \alpha + \rho _e^p \left( 1 - \alpha \right) \right) \int _{\varOmega _e}{\mathbf {B}}^{\text {T}}{({\mathbf{x}})}\ {\mathbb {C}}^+\ {\mathbf {B}}{({\mathbf{x}})}{ \, d\varOmega }\\&= \left( \alpha + \rho _e^p \left( 1 - \alpha \right) \right) {{\mathbb {K}}}_e^+ { \, , }\end{aligned} \end{aligned}$$
(22)

with \({{\mathbb {K}}}_e^+ \) being the element stiffness matrix considering stiff material for element e.
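Since the integrals in (22) involve only the fixed matrices \({\mathbf {B}}\) and \({\mathbb {C}}^+\), the density merely rescales a precomputed element matrix \({{\mathbb {K}}}_e^+\); a minimal sketch (the identity matrix below is an arbitrary stand-in for a real element matrix):

```python
import numpy as np

def simp_element_stiffness(Ke_plus, rho_e, p=3.0, alpha=1e-9):
    """Eq. (22): K_e(rho_e) = (alpha + rho_e^p (1 - alpha)) * K_e^+."""
    return (alpha + rho_e**p * (1.0 - alpha)) * Ke_plus

Ke_plus = np.eye(4)  # stand-in for the precomputed stiff-material matrix
Ke_solid = simp_element_stiffness(Ke_plus, rho_e=1.0)  # recovers K_e^+
Ke_void = simp_element_stiffness(Ke_plus, rho_e=0.0)   # keeps the contrast alpha
```

At \(\rho _e=1\) the stiff-material matrix is recovered exactly, while at \(\rho _e=0\) the residual factor \(\alpha \) keeps the global stiffness matrix non-singular.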

Taking into account these two characteristics, the topology optimization problem (3) becomes

$$\begin{aligned} \left[ \begin{aligned}&\underset{\rho \in {{\mathscr {U}}}_{ad}}{{\text {min}}}\;{{{\mathcal {J}}}}\left( {{\mathbf {u}}}(\pmb {\rho }),\pmb {\rho }\right) \equiv \int _{\varOmega }{j({{\mathbf {u}}}(\pmb {\rho }),\pmb {\rho },{\mathbf{x}})}{ \, d\varOmega }&(a) \\&\text {subject to:} \\&\quad {{{\mathcal {C}}}}_0(\pmb {\rho })\equiv \frac{\vert \varOmega ({\pmb {\rho }}) \vert }{\vert \varOmega \vert } - f \le 0&(b-1) \\&\quad {{{\mathcal {C}}}}_e(\rho _e)\le 0 \rightarrow 0\le \rho _e\le 1, \; e:1\dots N_e&(b-2) \\&\text {governed by:} \\& \quad {{\mathbb {K}}}_{\rho }\hat{{\mathbf {u}}}_{\rho }^{(i)}={\mathbf {f}}^{(i)}&(c) \end{aligned}\right. { \, , }\end{aligned}$$
(23)

where \(\vert \varOmega ({\pmb {\rho }}) \vert \) and \(\vert \varOmega \vert \) are respectively the stiff material volume and the design domain volume, and f is the prescribed volume fraction.Footnote 14

The sensitivity of the objective function (23-a), \({\partial {{{\mathcal {J}}}}\left( {{\mathbf {u}}}(\pmb {\rho }),\pmb {\rho }\right) }/{\partial \rho _e}\), is obtained via the adjoint technique, so that the derivative of the unknown field \({{\mathbf {u}}}_\rho \) with respect to the density \(\rho _e\), \({\partial {{\mathbf {u}}}(\pmb {\rho })}/{\partial \rho _e}\), is not required to be computed. Additionally, the sensitivity of the volume constraint (23-b-1) with respect to the density of the element e is equal to \({\vert \varOmega _e \vert } / {\vert \varOmega \vert }\). According to [53], the sensitivity of the objective function for the three topology optimization problems addressed in this work are given by

$$\begin{aligned}&\frac{\partial {{{\mathcal {J}}}}^{(I)}\left( {{\mathbf {u}}}(\pmb {\rho }),\pmb {\rho }\right) }{\partial \rho _e} = - \omega _e (\rho _e)\ \hat{{\mathbf {u}}}_e^{\text {T}} (\pmb {\rho }) {{\mathbb {K}}}_e^+ \hat{{\mathbf {u}}}_e (\pmb {\rho }) \end{aligned}$$
(24)
$$\begin{aligned}&\frac{\partial {{{\mathcal {J}}}}^{(II)}\left( {{\mathbf {u}}}(\pmb {\rho }),\pmb {\rho }\right) }{\partial \rho _e} = - \omega _e (\rho _e) \sum _{i=1}^{n_l}{ \hat{{\mathbf {u}}}_e^{{(i)}\text {T}} (\pmb {\rho }) {{\mathbb {K}}}_e^+ \hat{{\mathbf {u}}}_e^{(i)} (\pmb {\rho }) } \end{aligned}$$
(25)
$$\begin{aligned}&\frac{\partial {{{\mathcal {J}}}}^{(III)}\left( {{\mathbf {u}}}(\pmb {\rho }),\pmb {\rho }\right) }{\partial \rho _e} = \omega _e (\rho _e)\ \hat{{\mathbf {u}}}_e^{{(1)}\text {T}} (\pmb {\rho }) {{\mathbb {K}}}_e^+ \hat{{\mathbf {u}}}_e^{(2)} (\pmb {\rho }) \end{aligned}$$
(26)

where \(\omega _e (\rho _e) = p \rho _e^{p-1} \left( 1 - \alpha \right) \), and \(\hat{{\mathbf {u}}}_e^{(i)} (\pmb {\rho })\) denotes the nodal displacement vector of element e for the i-th state equation.

As mentioned in Sect. 1, a regularization technique must be applied to the original topology optimization problem to ensure the existence of a solution and avoid the formation of the so-called checkerboard patterns. Different filtering techniques modifying the element sensitivities are studied for the comparison, including a radial sensitivity filter computed using a convolution function (see Sect. 3.1.3) and a filter based on a Helmholtz-type differential equation (Sect. 3.1.1). The filtered quantity is denoted by \(\widetilde{(\cdot )}\) and replaces the non-filtered sensitivity.

Considering the sensitivities of the objective function and volume constraint, the corresponding topology optimization problem can be solved by means of the Optimality Criteria (OC) method. The OC method seeks the fulfillment of the Karush-Kuhn-Tucker (KKT) conditions

$$\begin{aligned} \frac{\widetilde{\partial {{{\mathcal {J}}}}}}{\partial \rho _e} + \lambda \frac{\partial {{{\mathcal {C}}}}_0}{\partial \rho _e} = 0, \; e:1\dots N_e { \, , }\end{aligned}$$
(27)

where \(\lambda \) is the Lagrange multiplier associated with the volume constraint \({{{\mathcal {C}}}}_0(\pmb {\rho })\), which must be computed via a root-finding algorithm (e.g., a bisection method) so that the volume constraint is met. Note that the element density \(\rho _e\) must remain in the range of 0 to 1. The optimality conditions can be expressed as \(B_e=1\), where

$$\begin{aligned} B_e = - \frac{\widetilde{\partial {{{\mathcal {J}}}}}}{\partial \rho _e} \left( \lambda \frac{\partial {{{\mathcal {C}}}}_0}{\partial \rho _e} \right) ^{-1} { \, . }\end{aligned}$$
(28)

A heuristic updating scheme, proposed by Bendsøe and Kikuchi [51], is used to update the design variables and achieve convergence. For minimum mean compliance, the scheme is defined as

$$\begin{aligned} \rho _e^{(k+1)} = \left\{ \begin{aligned}&\max \left( 0,\rho _e^{(k)}-m\right)&\text {if } \rho _e^{(k)} B_e^\eta \le \max \left( 0,\rho _e^{(k)}-m\right) \\&\min \left( 1,\rho _e^{(k)}+m\right)&\text {if } \rho _e^{(k)} B_e^\eta \ge \min \left( 1,\rho _e^{(k)}+m\right) \\&\rho _e^{(k)} B_e^\eta&\text {otherwise} \end{aligned}\right. { \, , }\end{aligned}$$
(29)

where m is a positive move limit, \(\eta \) is a numerical damping coefficient and k represents the iteration counter. These two numerical parameters are typically set to 0.2 and 0.5, respectively, for minimum mean compliance. Eq. (28) is modified for compliant mechanism optimization problems to account only for positive sensitivities as

$$\begin{aligned} B_e = \max \left( \epsilon ,- \frac{\widetilde{\partial {{{\mathcal {J}}}}}}{\partial \rho _e} \right) \left( \lambda \frac{\partial {{{\mathcal {C}}}}_0}{\partial \rho _e} \right) ^{-1} { \, , }\end{aligned}$$
(30)

with \(\epsilon \) being a small positive value. Equivalently, the updating scheme (29) can be expressed as

$$\begin{aligned} \rho _e^{(k+1)} = \left\{ \begin{aligned}&\max \left( 0,\rho _e^{(k)}-m\right)&\text {if } \rho _e^{(k)} B_e^\eta \le \max \left( \epsilon ,\rho _e^{(k)}-m\right) \\&\min \left( 1,\rho _e^{(k)}+m\right)&\text {if } \rho _e^{(k)} B_e^\eta \ge \min \left( 1,\rho _e^{(k)}+m\right) \\&\rho _e^{(k)} B_e^\eta&\text {otherwise} \end{aligned}\right. { \, , }\end{aligned}$$
(31)

with \(m=0.1\) and \(\eta =0.3\).
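In practice, the update (29) is nested inside a bisection on the Lagrange multiplier \(\lambda \) until the volume constraint is satisfied. The sketch below assumes a uniform mesh (so the volume fraction equals the mean density) and negative compliance sensitivities; the toy data at the end are illustrative only:

```python
import numpy as np

def oc_update(rho, dJ, dC0, vol_frac, m=0.2, eta=0.5):
    """Heuristic OC update (29); lambda is found by bisection so that the
    updated design meets the volume fraction (uniform elements assumed)."""
    lo, hi = 1e-9, 1e9
    while (hi - lo) / (hi + lo) > 1e-6:
        lam = 0.5 * (lo + hi)
        Be = np.maximum(1e-10, -dJ / (lam * dC0))   # ratio (28), kept positive
        rho_new = np.clip(rho * Be**eta,
                          np.maximum(0.0, rho - m),   # move limit below
                          np.minimum(1.0, rho + m))   # move limit above
        if rho_new.mean() > vol_frac:
            lo = lam        # too much material: increase lambda
        else:
            hi = lam
    return rho_new

rng = np.random.default_rng(0)
rho = np.full(100, 0.5)
dJ = -np.abs(rng.standard_normal(100)) - 0.1   # compliance sensitivities < 0
dC0 = np.full(100, 0.01)                       # |Omega_e| / |Omega|
rho_new = oc_update(rho, dJ, dC0, vol_frac=0.4)
```

The bisection works because the updated density decreases monotonically with \(\lambda \), so the volume fraction can be driven to the target value.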

In the following subsections, three variations of the SIMP approach are introduced, differing mainly in the filter used to regularize the sensitivities (a distance filter computed via a convolution function or a Helmholtz-type filter) or in the use of a time-advancing strategy (with multiple steps).

3.1.1 SIMP Method Using PDE-Like Filter: \(\hbox {SIMP}^{(I)}\)

In this case, the sensitivities are regularized via a Helmholtz-type PDE equation with homogeneous Neumann boundary conditions, as detailed in [73, 144]. The regularized sensitivities \({\widetilde{\partial {{{\mathcal {J}}}}}}/{\partial \rho _e}\) correspond to the solution of

$$\begin{aligned} \left\{ \begin{aligned}&{\widetilde{\zeta }}-R_{min}^2\varDelta _{\mathbf{x}}{\widetilde{\zeta }}=\zeta&\quad \text {in} \; \varOmega \\&\nabla _{\mathbf{x}}{\widetilde{\zeta }}\cdot {\mathbf {n}}={0}&\quad \text {on} \; \partial \varOmega \end{aligned} \right. { \, , }\end{aligned}$$
(32)

where \(\zeta \) and \({\widetilde{\zeta }}\) are \(\rho _e \, {\partial {{{\mathcal {J}}}}}/{\partial \rho _e}\) and \(\rho _e \, {\widetilde{\partial {{{\mathcal {J}}}}}}/{\partial \rho _e}\), respectively, and \(\varDelta _{\mathbf{x}}({\mathbf{x}},\cdot )\) and \(\nabla _{\mathbf{x}}({\mathbf{x}},\cdot )\) are the Laplacian and gradient operators. The filter radius \(R_{min}\) is equal to \(r_{min}/(2\sqrt{3})\), with \(r_{min}\) being the filter radius of distance-based filters (see Sect. 3.1.3).

As reported by Lazarov and Sigmund [73], this type of filtering technique provides computational advantages over classical filtering procedures when regularizing the sensitivities on complex non-uniform meshes, both in memory storage and in computational complexity. However, for structured meshes, as in the cases addressed in this paper, this performance improvement may not be observed.
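A sketch of the filter (32) on a uniform 1D grid: discretizing the Laplacian with finite differences and mirroring the end neighbors for the zero-flux boundary yields a tridiagonal system whose rows sum to one, so the filter preserves the total amount of the filtered field. This 1D version is illustrative only; real implementations assemble the Helmholtz operator on the analysis mesh.

```python
import numpy as np

def helmholtz_filter_1d(zeta, r_min, h=1.0):
    """Solve (I - R^2 Lap) zeta_f = zeta with homogeneous Neumann BCs on a
    uniform 1D grid of spacing h, with R = r_min / (2 sqrt(3))."""
    n = zeta.size
    R2 = (r_min / (2.0 * np.sqrt(3.0))) ** 2 / h**2
    A = np.diag(np.full(n, 1.0 + 2.0 * R2))
    A += np.diag(np.full(n - 1, -R2), 1)
    A += np.diag(np.full(n - 1, -R2), -1)
    A[0, 0] -= R2    # mirror the missing neighbor (zero-flux boundary)
    A[-1, -1] -= R2
    return np.linalg.solve(A, zeta)

rng = np.random.default_rng(0)
zeta = rng.standard_normal(64)
zeta_f = helmholtz_filter_1d(zeta, r_min=6.0)
```

For noisy input the filtered field is smoother (lower variance) while its sum is conserved, mirroring the volume-preserving behavior mentioned in Sect. 3.1.2.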

3.1.2 SIMP Method Using a Time-Advancing Scheme: \(\hbox {SIMP}^{(II)}\)

An incremental time-advancing scheme can be implemented on top of \(\hbox {SIMP}^{(I)}\). The reference volume fraction of the volume constraint is iteratively updated, thus obtaining a set of intermediate converged solutions. Once convergence is achieved, the reference volume fraction f in Eq. (23-b-1) is decreased and the topology optimization procedure is repeated for the new volume constraint. At the first iteration of each time-step, the volume constraint must be fulfilled, so that the Helmholtz-type PDE filter keeps the volume constant.

3.1.3 SIMP Method Using Convolution Filter: \(\hbox {SIMP}^{(III)}\)

The filter in this approach modifies the sensitivities \({\partial {{{\mathcal {J}}}}}/{\partial \rho _e}\) by means of a standard distance filter as follows

$$\begin{aligned} \frac{\widetilde{\partial {{{\mathcal {J}}}}}}{\partial \rho _e} = \frac{1}{\max {(\gamma ,\rho _e)} \sum _{i\in N_{ei}}^{}{H_{ei}} } \sum _{i\in N_{ei}}^{}{H_{ei}\rho _i \frac{\partial {{{\mathcal {J}}}}}{\partial \rho _i}} { \, , }\end{aligned}$$
(33)

where \(\gamma \) is a small positive number to avoid division by zero and \(N_{ei}\) denotes the set of elements i for which the center-to-center distance, \({\text {dist}}(e,i)\), of element i to element e is smaller than a filter radius \(r_{min}\), defined by the user, i.e., \(N_{ei}=\{i\in N_e \;/\; {\text {dist}}(e,i) \le r_{min} \}\). The function \(H_{ei}\) corresponds to the weight factor function (of a linearly decaying filter kernel) given by

$$\begin{aligned} H_{ei} = \max {( 0, r_{min} - {\text {dist}}(e,i) )} { \, . }\end{aligned}$$
(34)

This sensitivity filter can be mathematically written using a convolution product of the filter function \(H({\mathbf {x}}-{\mathbf {y}})\) and the sensitivity of the objective function \({\partial {{{\mathcal {J}}}}}/{\partial \rho ({\mathbf {x}})}\) as

$$\begin{aligned} \begin{aligned}&\frac{\widetilde{\partial {{{\mathcal {J}}}}}}{\partial \rho } {({\mathbf{x}})}= \frac{1}{{\hat{\rho }}{({\mathbf{x}})}}\left( H*\left( \rho \frac{\partial {{{\mathcal {J}}}}}{\partial \rho }\right) \right) {({\mathbf{x}})}\\&\quad = \frac{1}{{\hat{\rho }}{({\mathbf{x}})}\int _{{\mathbb {B}}_r}{ H({\mathbf {x}}-{\mathbf {y}}) \ d{\mathbf {y}}}} \int _{{\mathbb {B}}_r}{ H ({\mathbf {x}}-{\mathbf {y}}) \rho ({\mathbf {y}}) \frac{\partial {{{\mathcal {J}}}}}{\partial \rho } ({\mathbf {y}}) \ d{\mathbf {y}}} { \, , }\end{aligned} \end{aligned}$$
(35)

where \({\mathbb {B}}_r\) is a sphere in 3D and a circle in 2D with center at \({\mathbf{x}}\) and radius \(r_{min}\), and \({\hat{\rho }}\) is equal to \(\max {(\gamma ,\rho {({\mathbf{x}})})}\).
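A direct (dense) implementation of (33)–(34) loops over elements and averages neighboring sensitivities with the linearly decaying kernel; the element coordinates and values below are toy data, and production codes precompute the sparse neighbor sets \(N_{ei}\) once:

```python
import numpy as np

def sensitivity_filter(rho, dJ, r_min, coords, gamma=1e-3):
    """Distance filter (33)-(34): weighted average of rho_i * dJ_i with
    linearly decaying weights H_ei = max(0, r_min - dist(e, i))."""
    dJ_f = np.empty_like(dJ)
    for e in range(rho.size):
        dist = np.linalg.norm(coords - coords[e], axis=1)
        H = np.maximum(0.0, r_min - dist)
        dJ_f[e] = (H * rho * dJ).sum() / (max(gamma, rho[e]) * H.sum())
    return dJ_f

# toy 5x5 grid of element centers with uniform density and sensitivity:
xs, ys = np.meshgrid(np.arange(5.0), np.arange(5.0))
coords = np.column_stack([xs.ravel(), ys.ravel()])
rho = np.ones(25)
dJ = np.full(25, -2.0)
dJ_f = sensitivity_filter(rho, dJ, r_min=1.5, coords=coords)
# a uniform field is unchanged by the filter
assert np.allclose(dJ_f, dJ)
```

The invariance of a uniform field is a quick sanity check: the weights cancel in numerator and denominator, so only non-uniform sensitivities are actually smoothed.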

3.2 SOFTBESO Method

The original ESO and BESO methods use a discrete density variable \(\chi _e=\{0,1\}\) as the design variable in a hard-kill topology optimization procedure, where elements with a low rejection criterion are removed from the topology layout. In particular, in BESO, elements can also be added in specific areas by using extrapolation techniques. However, as mentioned in Sect. 1, this set of hard-kill approaches suffers from numerical instabilities, failing in some circumstances to obtain convergent solutions. For that reason, this paper focuses on soft-kill evolutionary techniques and, in particular, on the bi-directional evolutionary (BESO) approach proposed by Huang and Xie [101].

In this context, the design variable, now termed the element-wise density variable \(\rho _e=\{\rho _{min},1\}\), is defined using the original SIMP material interpolation, thus relaxing the design variable with a penalization factor p. Therefore, the material interpolation of the constitutive tensor is given by

$$\begin{aligned} {\mathbb {C}}_\rho (\rho _e) = \rho _e^p {\mathbb {C}}^+ = \rho _e^p E^+ {\mathbb {C}}_{(1)}, \; \rho _e =\{\rho _{min},1\} { \, . }\end{aligned}$$
(36)

Assuming that the Young’s modulus of the void material, \(E^-\), can be expressed, in terms of \(\alpha \), as \(\alpha \) times the Young’s modulus of the stiff material \(E^+\), then \(\rho _{min}\) must be equal to \(\root p \of {\alpha }\). A similar procedure as the one defined for SIMP-based approaches is used here to compute the element stiffness matrix, i.e.,

$$\begin{aligned} \begin{aligned} {{\mathbb {K}}}_e (\rho _e)&= \int _{\varOmega _e}{\mathbf {B}}^{\text {T}}{({\mathbf{x}})}\ {{\mathbb {C}}}_{\rho }{({\mathbf{x}})}\ {\mathbf {B}}{({\mathbf{x}})}{ \, d\varOmega }\\&= \rho _e^p \int _{\varOmega _e}{\mathbf {B}}^{\text {T}}{({\mathbf{x}})}\ {\mathbb {C}}^+\ {\mathbf {B}}{({\mathbf{x}})}{ \, d\varOmega }= \rho _e^p {{\mathbb {K}}}_e^+ { \, , }\end{aligned} \end{aligned}$$
(37)

with \({{\mathbb {K}}}_e^+\) being the element stiffness matrix considering stiff material for element e.

The corresponding topology optimization problem (3) for BESO can be written as

$$\begin{aligned} \left[ \begin{aligned}&\underset{\rho \in {{\mathscr {U}}}_{ad}}{{\text {min}}}\;{{{\mathcal {J}}}}\left( {{\mathbf {u}}}(\pmb {\rho }),\pmb {\rho }\right) \equiv \int _{\varOmega }{j({{\mathbf {u}}}(\pmb {\rho }),\pmb {\rho },{\mathbf{x}})}{ \, d\varOmega }&(a) \\&\text {subject to:} \\&\quad {{{\mathcal {C}}}}_0(\pmb {\rho })\equiv \frac{\vert \varOmega (\pmb {\rho }) \vert }{\vert \varOmega \vert } - f \le 0&(b-1) \\&\quad {{{\mathcal {C}}}}_e(\rho _e)\le 0 \rightarrow \rho _e=\{\rho _{min},1\}, \; e:1\dots N_e&(b-2) \\&\text {governed by:} \\&\quad {{\mathbb {K}}}_{\rho }\hat{{\mathbf {u}}}_{\rho }^{(i)}={\mathbf {f}}^{(i)}&(c) \end{aligned}\right. { \, , }\end{aligned}$$
(38)

where the volume fraction f is decreased at each iteration, following the exponential expression \(f_{k+1}=\max {({\overline{f}},(1-ER)f_{k})}\), until the desired final fraction \({\overline{f}}\). The evolutionary volume ratio \(ER\ll 1\) corresponds to the maximum volume-fraction decrease per iteration. Notice that convergence in topology and objective function cannot be met until the desired final fraction \({\overline{f}}\) is reached.
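The volume-fraction evolution is a simple clamped geometric decay; a small sketch (the starting fraction, target, and ratio below are illustrative values):

```python
def volume_schedule(f0, f_final, ER):
    """BESO volume evolution: f_{k+1} = max(f_final, (1 - ER) * f_k)."""
    fractions = [f0]
    while fractions[-1] > f_final:
        fractions.append(max(f_final, (1.0 - ER) * fractions[-1]))
    return fractions

steps = volume_schedule(1.0, 0.5, ER=0.05)
# monotonically decreasing and clamped at the final fraction
assert steps[0] == 1.0 and steps[-1] == 0.5
assert all(a >= b for a, b in zip(steps, steps[1:]))
```

The clamp guarantees termination at \({\overline{f}}\) regardless of how \(ER\) divides the remaining volume.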

As detailed in Sect. 3.1, the sensitivities of the objective function (38-a) \({\partial {{{\mathcal {J}}}}}/{\partial \rho _e}\) for the considered topology optimization problems are defined as

$$\begin{aligned}&\frac{\partial {{{\mathcal {J}}}}^{(I)}\left( {{\mathbf {u}}}(\pmb {\rho }),\pmb {\rho }\right) }{\partial \rho _e} = - \omega _e (\rho _e) \ \hat{{\mathbf {u}}}_e^{\text {T}} (\pmb {\rho }) {{\mathbb {K}}}_e^+ \hat{{\mathbf {u}}}_e (\pmb {\rho }) { \, , }\end{aligned}$$
(39)
$$\begin{aligned}&\frac{\partial {{{\mathcal {J}}}}^{(II)}\left( {{\mathbf {u}}}(\pmb {\rho }),\pmb {\rho }\right) }{\partial \rho _e} = - \omega _e (\rho _e) \ \sum _{i=1}^{n_l}{ \hat{{\mathbf {u}}}_e^{{(i)}\text {T}} (\pmb {\rho }) {{\mathbb {K}}}_e^+ \hat{{\mathbf {u}}}_e^{(i)} (\pmb {\rho }) } { \, , }\end{aligned}$$
(40)
$$\begin{aligned}&\frac{\partial {{{\mathcal {J}}}}^{(III)}\left( {{\mathbf {u}}}(\pmb {\rho }),\pmb {\rho }\right) }{\partial \rho _e} = \omega _e (\rho _e) \ \hat{{\mathbf {u}}}_e^{{(1)}\text {T}} (\pmb {\rho }) {{\mathbb {K}}}_e^+ \hat{{\mathbf {u}}}_e^{(2)} (\pmb {\rho }) { \, , }\end{aligned}$$
(41)

with \(\omega _e(\rho _e)\) being equal to \(p \rho _e^{p-1}\).

For the regularization procedure, a linear distance-based filter, similar to the one used in SIMP\(^{(III)}\), is applied to the sensitivities \({\partial {{{\mathcal {J}}}}}/{\partial \rho _e}\). The filtered sensitivities are obtained as

$$\begin{aligned} \frac{\widetilde{\partial {\mathcal {J}}}}{\partial \rho _e} = \frac{1}{ \sum _{i\in N_{ei}}^{}{H_{ei}} } \sum _{i\in N_{ei}}^{}{H_{ei} \frac{\partial {{{\mathcal {J}}}}}{\partial \rho _i}} { \, , }\end{aligned}$$
(42)

which can also be computed using the convolution function of \(H_{ei}\) (34) and the non-regularized sensitivities \({\partial {{{\mathcal {J}}}}}/{\partial \rho _e}\). In addition to the spatial filtering (used to address the mesh-dependency problem), a temporal filtering is also applied by averaging the sensitivity numbers with historical information, thus improving convergence. The temporal filter can be expressed as

$$\begin{aligned} \left. \frac{\widetilde{\partial {{{\mathcal {J}}}}}}{\partial \rho _e} \right| _{e,k} = \frac{1}{2} \left( \left. \frac{\widetilde{\partial {{{\mathcal {J}}}}}}{\partial \rho _e} \right| _{e,k} + \left. \frac{\widetilde{\partial {{{\mathcal {J}}}}}}{\partial \rho _e} \right| _{e,k-1} \right) { \, , }\end{aligned}$$
(43)

where k corresponds to the iteration number. Notice that this temporally filtered sensitivity replaces the spatially filtered one in the optimality criterion (44).

The optimality criterion for the topology optimization problem (38) can easily be derived if no restriction is imposed on the design variable, i.e.,

$$\begin{aligned} \frac{\widetilde{\partial {{{\mathcal {J}}}}}}{\partial \rho _e} + \lambda \frac{\partial {{{\mathcal {C}}}}_0}{\partial \rho _e} = 0, \; e:1\dots N_e { \, , }\end{aligned}$$
(44)

where the Lagrange multiplier \(\lambda \) must be computed so that the volume constraint \({{{\mathcal {C}}}}_0(\pmb {\rho })\) is fulfilled. For minimum mean compliance problem, the corresponding updating scheme can be expressed as

$$\begin{aligned} \rho _e^{k+1}= \left\{ \begin{aligned}&1&\text {if } \left( -\tfrac{\widetilde{\partial {{{\mathcal {J}}}}}}{\partial \rho _e} / \tfrac{\partial {{{\mathcal {C}}}}_0}{\partial \rho _e} - \lambda \right) \ge 0 \\&\rho _{min}&\text {if } \left( -\tfrac{\widetilde{\partial {{{\mathcal {J}}}}}}{\partial \rho _e} / \tfrac{\partial {{{\mathcal {C}}}}_0}{\partial \rho _e} - \lambda \right) < 0 \end{aligned} \right. { \, , }\end{aligned}$$
(45)

although the number of elements changing from the void domain to the material domain (or, equivalently, a volume fraction) is limited by a factor \(AR_{max}\). Consequently, if the volume fraction of elements changing from void to material exceeds the maximum volume addition ratio, only the \(AR_{max}\) elements from the void domain with the highest sensitivity are added to the material domain. In the material domain, the \(AR_{max}+ER\) elements with the lowest sensitivity are removed and replaced with void material, thus satisfying the volume constraint. This procedure ensures that not too many elements are added in a single iteration, which could cause the structure to lose its integrity.
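Ignoring the addition limit \(AR_{max}\) for brevity, the threshold \(\lambda \) in (45) can be obtained by sorting the sensitivity ratios and keeping the best elements up to the target volume; a toy sketch with illustrative data:

```python
import numpy as np

def beso_update(dJ_f, dC0, target_frac, rho_min=1e-3):
    """Sketch of the discrete update (45): elements whose ratio -dJ/dC0
    reaches the threshold lambda become solid; lambda is the value that
    keeps the target number of solid elements (AR_max limit omitted)."""
    ratio = -dJ_f / dC0
    n_solid = int(round(target_frac * ratio.size))
    lam = np.sort(ratio)[ratio.size - n_solid]   # threshold sensitivity ratio
    return np.where(ratio >= lam, 1.0, rho_min)

dJ_f = -np.arange(1.0, 11.0)           # toy filtered sensitivities
rho = beso_update(dJ_f, dC0=np.ones(10), target_frac=0.4)
# the four elements with the largest |sensitivity| become solid
assert (rho == 1.0).sum() == 4
```

Choosing \(\lambda \) as an order statistic of the ratios makes the volume constraint hold by construction at every iteration.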

For compliant mechanism synthesis [102, 145], the updating equation (45) is relaxed to

$$\begin{aligned} \rho _e^{k+1}= \left\{ \begin{aligned}&\min {\left( \rho _e^{(k)}+m,1\right) }&\text {if } \left( -\tfrac{\widetilde{\partial {{{\mathcal {J}}}}}}{\partial \rho _e} / \tfrac{\partial {{{\mathcal {C}}}}_0}{\partial \rho _e} - \lambda \right) \ge 0 \\&\max {\left( \rho _e^{(k)}-m,\rho _{min}\right) }&\text {if } \left( -\tfrac{\widetilde{\partial {{{\mathcal {J}}}}}}{\partial \rho _e} / \tfrac{\partial {{{\mathcal {C}}}}_0}{\partial \rho _e} - \lambda \right) < 0 \end{aligned} \right. \end{aligned}$$
(46)

with \(m=0.1\), where the design variable \(\rho _e\) can now take intermediate values.

3.3 VARTOP Method

As in the Level-set method, the zero level of the level-set function is used to precisely define the boundaries of the material domain, although no updating equation is defined in terms of the discrimination function \(\psi \). Instead, the characteristic function \({\overline{\chi }}{({\mathbf{x}})}=\{\beta ,1\}\) is employed as the design variable, which is computed from the discrimination function at each iteration via the Heaviside function \({{\mathcal {H}}}_\beta (\psi {({\mathbf{x}})})\)Footnote 15 (see Fig. 1). As a result, the material interpolation of the constitutive tensor \({\mathbb {C}}_\chi \) is given by

$$\begin{aligned} {\mathbb {C}}_\chi {({\mathbf{x}})}= \chi ^m{({\mathbf{x}})}{\mathbb {C}}^+, \; \chi {({\mathbf{x}})}\in [\beta ,1] { \, , }\end{aligned}$$
(47)

with \(\beta \) being the relaxation factor. It is important to stress that, once the domain is discretized by finite elements, the characteristic function takes the value 1 or \(\beta \) in most of the domain, and only takes intermediate values, ranging from \(\beta \) to 1, in the elements bisected by the material boundary \(\varGamma \). The relaxation factor \(\beta \) is defined such that the Young's modulus of the void material is \(\alpha \) times that of the stiff material, i.e.,

$$\begin{aligned} \begin{aligned} {\mathbb {C}}_\chi ^- {({\mathbf{x}})}= \alpha {\mathbb {C}}^+ = \beta ^m {\mathbb {C}}^+ = \beta ^m E^+ {\mathbb {C}}_{(1)} \quad \quad \quad \text {for } {\mathbf{x}}\in \varOmega ^- { \, , }\end{aligned} \end{aligned}$$
(48)

with \(\beta \) being

$$\begin{aligned} \beta = \root m \of {\alpha } { \, , }\end{aligned}$$
(49)

where m is an exponential factor defined by the user and \(\alpha \) is the contrast factor for the Young’s modulus. The element stiffness matrix \({\mathbb {K}}_e(\chi {({\mathbf{x}})},{\mathbf{x}})\) is obtained as

$$\begin{aligned} \begin{aligned} {\mathbb {K}}_e(\chi {({\mathbf{x}})},{\mathbf{x}})&= \int _{\varOmega _e}{\mathbf {B}}^{\text {T}}{({\mathbf{x}})}\ {{\mathbb {C}}}_{\chi }{({\mathbf{x}})}\ {\mathbf {B}}{({\mathbf{x}})}{ \, d\varOmega }\\&= \int _{\varOmega _e}{\mathbf {B}}^{\text {T}}{({\mathbf{x}})}\ \chi ^m{({\mathbf{x}})}{\mathbb {C}}^+\ {\mathbf {B}}{({\mathbf{x}})}{ \, d\varOmega }{ \, . }\end{aligned} \end{aligned}$$
(50)
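For a non-bisected element, Eqs. (47)-(50) reduce to a simple scaling of the reference stiffness. A minimal Python sketch (illustrative names, not the paper's Matlab implementation):

```python
import numpy as np

def relaxation_factor(alpha, m):
    """Eq. (49): beta = alpha**(1/m), so that beta**m * C+ = alpha * C+
    reproduces the prescribed stiffness contrast for the void material."""
    return alpha ** (1.0 / m)

def interpolate_stiffness(Ke_plus, chi, m):
    """Eq. (50) for a non-bisected element: the element stiffness matrix
    is the stiff-material matrix scaled by chi**m, with chi in [beta, 1]."""
    return (chi ** m) * Ke_plus
```

For instance, with \(\alpha =10^{-6}\) and \(m=3\), the relaxation factor is \(\beta =10^{-2}\), and a fully void element is \(10^{-6}\) times as stiff as a fully solid one.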

The topology optimization problem (3) is now written as

$$\begin{aligned} \left[ \begin{aligned}&\underset{\chi \in {{\mathscr {U}}}_{ad}}{{\text {min}}}\;{{{\mathcal {J}}}}\left( {{\mathbf {u}}}(\chi ),\chi \right) \equiv \int _{\varOmega }{j({{\mathbf {u}}}(\chi ),\chi ,{\mathbf{x}})}{ \, d\varOmega }&(a) \\&\text {subject to:} \\&\quad \begin{aligned} {{{\mathcal {C}}}}_0(\chi )&\equiv \frac{\vert \overline{\varOmega ^-} \vert }{\vert \varOmega \vert } - \frac{1}{\vert \varOmega \vert } \int _\varOmega { \frac{1-\chi {({\mathbf{x}})}}{1-\beta } { \, d\varOmega }} \\&= t - \frac{\vert \varOmega ^- ({\chi }) \vert }{\vert \varOmega \vert } = 0 \end{aligned}&(b-1) \\&\text {governed by:} \\&\quad {{\mathbb {K}}}_{\chi }\hat{{\mathbf {u}}}_{\chi }^{(i)}={\mathbf {f}}^{(i)}&(c) \end{aligned}\right. { \, , }\end{aligned}$$
(51)

where the volume constraint \({{{\mathcal {C}}}}_0(\chi )\) has been expressed in terms of the soft material fraction, in contrast to Eqs. (23) and (38). The term t stands for the pseudo-time variable, used as a time-advancing parameter.

The relaxed topological derivative (RTD), used as an approximation to the exact geometric topological derivative, for a functional \({{{\mathcal {F}}}}{(\chi )}:L^2(\varOmega )\rightarrow {\mathbb {R}}\) is defined as

$$\begin{aligned} \begin{aligned}&\dfrac{\delta {{{\mathcal {F}}}}(\chi )}{\delta \chi }{(\hat{{\mathbf{x}}})}= \left[ \dfrac{\partial f({{\mathbf {u}}}(\chi ),{\chi },{\mathbf{x}})}{\partial {\chi }}\right] _{{\mathbf{x}}=\hat{{\mathbf{x}}}}\varDelta \chi {(\hat{{\mathbf{x}}})}\text {, } \\&\text { with } {\varDelta \chi }{(\hat{{\mathbf{x}}})}= \left\{ \begin{aligned} -(&1-\beta )<0 \;\;\;\;\textit{for} \;\;{\hat{{\mathbf{x}}}}\in \varOmega ^{+} \\ (&1-\beta )>0 \;\;\;\;\textit{for} \;\;{\hat{{\mathbf{x}}}}\in \varOmega ^{-} \end{aligned} \right. { \, , }\end{aligned} \end{aligned}$$
(52)

where \({\varDelta \chi }{(\hat{{\mathbf{x}}})}\) is termed the exchange function and stands for the signed variation of \(\chi {(\hat{{\mathbf{x}}})}\) due to the material exchange at that point. The sensitivity of the volume constraint (51-b-1) with respect to the characteristic function is equal to \(\text {sgn}(\varDelta \chi {(\hat{{\mathbf{x}}})})/{\vert \varOmega \vert }\), where \(\text {sgn}(\cdot )\) denotes the sign function. The optimality condition for the constrained topology optimization problem can be written as

$$\begin{aligned} \left( \dfrac{\partial j \left( {{\mathbf {u}}}(\chi ),\chi ,\hat{{\mathbf {x}}} \right) }{\partial \chi }{\varDelta \chi }{(\hat{{\mathbf{x}}})}+\lambda \,\frac{\text {sgn}(\varDelta \chi {(\hat{{\mathbf{x}}})})}{\vert \varOmega \vert }\right) >{0}\;\; \forall \hat{{\mathbf {x}}}\in \varOmega { \, , }\end{aligned}$$
(53)

where \(\lambda \) stands for a Lagrange multiplier enforcing volume constraint \({{{\mathcal {C}}}}_0{(\chi )}=0\). Therefore, the closed-form non-linear solution for the topology optimization problem (51), termed as cutting&bisection algorithm, can be expressed as

$$\begin{aligned} \left\{ \begin{aligned}&\psi _{\chi }({\mathbf{x}},\lambda ){:}{=}{\xi }\left( {{\mathbf {u}}}(\chi ),\chi ,{{\mathbf {x}}} \right) - {\lambda }/{\vert \varOmega \vert }\\&\chi ({\mathbf{x}},\lambda )={{{\mathcal {H}}}}_{\beta }\left[ \psi _{\chi }({\mathbf{x}},\lambda ) \right] \\&{{{\mathcal {C}}}}_0(\chi ({\mathbf{x}},\lambda ))=0 \end{aligned}\right. { \, , }\end{aligned}$$
(54)

where \({\xi }\left( {{\mathbf {u}}}(\chi ),\chi ,{{\mathbf {x}}} \right) \) is termed the pseudo-energy and depends on the specific objective function. The Lagrange multiplier \(\lambda \) is computed using an efficient bisection algorithm and its value fulfils the volume constraint, as in the previous approaches.
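The cutting-and-bisection step of Eq. (54) can be sketched element-wise as follows (a simplified Python illustration with hypothetical names; the domain measure is normalized to one and the pseudo-energy vector `xi` is assumed given):

```python
import numpy as np

def heaviside_beta(psi, beta):
    """Relaxed Heaviside: 1 where psi >= 0, beta elsewhere."""
    return np.where(psi >= 0.0, 1.0, beta)

def solve_lambda_bisection(xi, t, beta, lam_lo, lam_hi, tol=1e-6, max_it=100):
    """Find the Lagrange multiplier such that the void fraction of
    chi = H_beta(xi - lam) matches the pseudo-time t (Eq. (54)),
    assuming uniform unit element volumes."""
    def void_fraction(lam):
        chi = heaviside_beta(xi - lam, beta)
        return np.mean(chi < 1.0)  # fraction of elements with chi = beta

    for _ in range(max_it):
        lam = 0.5 * (lam_lo + lam_hi)
        c0 = t - void_fraction(lam)
        if abs(c0) < tol or lam_hi - lam_lo < tol:
            break
        if c0 > 0.0:      # too little void: raise lambda to cut more
            lam_lo = lam
        else:
            lam_hi = lam
    return lam, heaviside_beta(xi - lam, beta)
```

Raising \(\lambda \) cuts deeper into the pseudo-energy field and converts more elements to void, so the void fraction is monotone in \(\lambda \) and the bisection is well defined.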

For the three topology optimization problems, the pseudo-energies at point \({\hat{{\mathbf{x}}}}\) are given by

$$\begin{aligned}&{\xi }^{(I)}\left( {{\mathbf {u}}}(\chi ),\chi ,{\hat{{\mathbf{x}}}}\right) = \omega {(\hat{{\mathbf{x}}})}\ \hat{{\mathbf {u}}} (\chi ,{\hat{{\mathbf{x}}}}) {{\mathbb {K}}}_e^+ \hat{{\mathbf {u}}} (\chi ,{\hat{{\mathbf{x}}}}) { \, , } \end{aligned}$$
(55)
$$\begin{aligned}&{\xi }^{(II)}\left( {{\mathbf {u}}}(\chi ),\chi ,{\hat{{\mathbf{x}}}}\right) = \omega {(\hat{{\mathbf{x}}})}\ \sum _{i=1}^{n_l}{ \hat{{\mathbf {u}}}^{(i)} (\chi ,{\hat{{\mathbf{x}}}}) {{\mathbb {K}}}_e^+ \hat{{\mathbf {u}}}^{(i)} (\chi ,{\hat{{\mathbf{x}}}}) } { \, , } \end{aligned}$$
(56)
$$\begin{aligned}&{\xi }^{(III)}\left( {{\mathbf {u}}}(\chi ),\chi ,{\hat{{\mathbf{x}}}}\right) = -\omega {(\hat{{\mathbf{x}}})}\ \hat{{\mathbf {u}}}^{(1)} (\chi ,{\hat{{\mathbf{x}}}}) {{\mathbb {K}}}_e^+ \hat{{\mathbf {u}}}^{(2)} (\chi ,{\hat{{\mathbf{x}}}}) { \, , } \end{aligned}$$
(57)

for non-bisected elements, with \(\omega {(\hat{{\mathbf{x}}})}\) being equal to \(m \chi ^{m-1}{(\hat{{\mathbf{x}}})}(1-\beta )\).Footnote 16 However, the pseudo-energy must be first shifted and normalized, according to

$$\begin{aligned} {\hat{\xi }}\left( {{\mathbf {u}}}(\chi ),\chi ,{\hat{{\mathbf{x}}}}\right) =\dfrac{\xi \left( {{\mathbf {u}}}(\chi ),\chi ,{\hat{{\mathbf{x}}}}\right) -\chi {(\hat{{\mathbf{x}}})}\varDelta _{shift}}{\varDelta _{norm}} { \, , }\end{aligned}$$
(58)

thus obtaining a modified energy density \({\hat{\xi }}\left( {{\mathbf {u}}}(\chi ),\chi ,{\hat{{\mathbf{x}}}}\right) \). The terms \(\varDelta _{shift}\) and \(\varDelta _{norm}\) correspond, respectively, to the shifting and normalization parameters defined at the first iteration.Footnote 17 This variable must later be regularized using a Laplacian regularization (similar to the one described in Sect. 3.1.1 for \(\hbox {SIMP}^{(I)}\)). The pseudo-energy density actually used in the closed-form solution (54) results from the resolution of

$$\begin{aligned} \left\{ \begin{aligned}&{\hat{\xi }}_{\tau }-(\tau h_e)^2\varDelta _{\mathbf{x}}{\hat{\xi }}_{\tau }={\hat{\xi }}&\quad in \; \varOmega \\&\nabla _{\mathbf{x}}{\hat{\xi }}_{\tau }\cdot {\mathbf {n}}={0}&\quad on \; \partial \varOmega \end{aligned} \right. { \, , }\end{aligned}$$
(59)

where \(\varDelta _{\mathbf{x}}({\mathbf{x}},\cdot )\) and \(\nabla _{\mathbf{x}}({\mathbf{x}},\cdot )\) are the Laplacian and gradient operators, respectively, and \({\mathbf {n}}\) is the outward normal to the boundary of the design domain, \(\partial \varOmega \). The parameters \(\tau \) and \(h_e\) stand for the dimensionless regularization parameter and the typical size of the finite element mesh, respectively.
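A one-dimensional finite-difference sketch of the regularization problem (59) with zero-flux boundary conditions is given below; this is a simplified illustration (the reference codes solve the full 3D problem with finite elements), and the function name is hypothetical:

```python
import numpy as np

def laplacian_regularization_1d(xi, tau):
    """Helmholtz-type smoothing of Eq. (59) on a uniform 1D grid with
    zero-flux (Neumann) boundaries. With mesh size h_e, the coefficient
    (tau*h_e)^2 divided by the h_e^2 of the finite-difference Laplacian
    leaves the dimensionless factor tau^2."""
    n = xi.size
    eps = tau ** 2
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 1.0 + 2.0 * eps
        if i > 0:
            A[i, i - 1] = -eps
        if i < n - 1:
            A[i, i + 1] = -eps
    # zero-flux boundaries: the mirrored ghost node removes one neighbor
    A[0, 0] = 1.0 + eps
    A[-1, -1] = 1.0 + eps
    return np.linalg.solve(A, xi)
```

Note that, with these boundary conditions, the column sums of the discrete operator equal one, so the total amount of the smoothed field is preserved, consistent with the pure-Neumann problem.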

Contrary to the common SIMP implementations (\(\hbox {SIMP}^{(I)}\) and \(\hbox {SIMP}^{(III)}\)) or the BESO method, VARTOP is formulated in a time-advancing framework, where the pseudo-time t in Eq. (51-b-1) is iteratively increased, thus obtaining intermediate converged solutions, which are local minima and provide a Pareto frontier in terms of the volume fraction. Notice that, unlike in the BESO algorithm, convergence is achieved at every time-step.

3.4 Level-Set Method via a Hamilton–Jacobi Equation

As previously mentioned in the introduction, Level-set methods use a level-set function (LSF) to implicitly represent the optimal material domain via Eq. (2), and the material boundary via the 0-level of the level-set function. It is important to note that originally Level-set approaches only updated the material boundary based on a differential equation, preventing them from creating new voids. This drawback was first overcome by inserting new voids every few iterations. However, Yamada et al. [140], among other researchers, suggested an approach in which a Level-set method is used to update not only the boundary of the material domain, but also the material domain itself, thus allowing new voids to nucleate. This specific approach will be taken as the reference in this article, in conjunction with the relaxed topological derivative (RTD), defined in Eq. (52), to obtain the sensitivity function.

In contrast to VARTOP, the LSF is updated via a time-dependent updating equation, typically using a Hamilton–Jacobi equation, although other updating schemes can be used. From the level-set function, a characteristic function \({\overline{\chi }}\) can be defined as

$$\begin{aligned} {\overline{\chi }} ({\mathbf{x}},\phi {({\mathbf{x}})}) = {{{\mathcal {H}}}}_\beta (\phi {({\mathbf{x}})}) = \left\{ \begin{aligned}&1&\quad \text {for } \phi {({\mathbf{x}})}\ge 0\\&\beta&\quad \text {for } \phi {({\mathbf{x}})}<0 \end{aligned} \right. \end{aligned}$$
(60)

with \(\beta \) being the relaxation factor. Based on the preceding definition of the characteristic function, the corresponding material interpolations (Eqs. (47)–(50)) are defined. The topology optimization problem (3), in terms of the level-set function, becomes

$$\begin{aligned} \left[ \begin{aligned}&\underset{\phi \in {{\mathscr {U}}}_{ad}}{{\text {min}}}{{{\mathcal {J}}}}\left( {{\mathbf {u}}}(\phi ),{\overline{\chi }} (\phi )\right) \equiv \int _{\varOmega } j({{\mathbf {u}}}(\phi ),{\overline{\chi }} (\phi ),{\mathbf{x}}){ \, d\varOmega }&(a) \\&\text {subject to:}&\\&\quad {{{\mathcal {C}}}}_0(\phi )\equiv t - \frac{\vert \varOmega ^- ({\overline{\chi }} (\phi )) \vert }{\vert \varOmega \vert } =0&(b-1) \\&\quad {{{\mathcal {C}}}}_n({\hat{\phi }}_n)\le 0 \rightarrow -1\le {\hat{\phi }}_n\le 1, \; n:1\dots N_n&(b-2) \\&\text {governed by:}&\\&\quad {{\mathbb {K}}}_{\phi }\hat{{\mathbf {u}}}_{\phi }^{(i)}={\mathbf {f}}^{(i)}&(c) \end{aligned}\right. { \, , }\end{aligned}$$
(61)

where the nodal level-set function \({\hat{\phi }}_n\) must be bounded between \(-1\) and 1, for convergence reasons. From Eq. (61), the Lagrangian function can be expressed as

$$\begin{aligned} \begin{aligned}&{{{\mathcal {L}}}}({{\mathbf {u}}}({\overline{\chi }} (\phi )),{\overline{\chi }} (\phi ),\lambda ) = {{{{\mathcal {J}}}}}({{\mathbf {u}}}({\overline{\chi }} (\phi )),{\overline{\chi }} (\phi )) \\&\quad + \left( \lambda + \tfrac{1}{2} s \, {{{\mathcal {C}}}}_0({\overline{\chi }} (\phi ))\right) {{{\mathcal {C}}}}_0({\overline{\chi }} (\phi )) { \, , }\end{aligned} \end{aligned}$$
(62)

when an Augmented Lagrangian method is used to fulfill the volume constraint. Therefore, the optimality condition (53) for the characteristic function \({\overline{\chi }} ({\mathbf{x}},\phi {({\mathbf{x}})})\) reads

$$\begin{aligned} \begin{aligned}&\dfrac{\delta {{{\mathcal {L}}}}({{\mathbf {u}}}({\overline{\chi }} (\phi )),{\overline{\chi }} (\phi ),\lambda )}{\delta {\overline{\chi }}}{(\hat{{\mathbf{x}}})}= \left( \dfrac{\partial j \left( {{\mathbf {u}}}({\overline{\chi }} (\phi )),{\overline{\chi }} (\phi ),\hat{{\mathbf {x}}} \right) }{\partial {\overline{\chi }} }{\varDelta {\overline{\chi }} }{(\hat{{\mathbf{x}}})}\right. \\&\quad \left. +\left( \lambda + s \, {{{\mathcal {C}}}}_0({\overline{\chi }} (\phi )) \right) \frac{\text {sgn}(\varDelta {\overline{\chi }} {(\hat{{\mathbf{x}}})})}{\vert \varOmega \vert }\right) >{0}\;\; \forall \hat{{\mathbf {x}}}\in \varOmega { \, , }\end{aligned} \end{aligned}$$
(63)

where \(\lambda \) and s stand for the Lagrange multiplier and the penalty factor of the Augmented Lagrangian method. Notice that the same pseudo-energy functions as in VARTOP (Eqs. (5557)) are obtained for the three topology optimization methods.

Instead of the closed-form non-linear solution described in Sect. 3.3, in this approach the level-set function \(\phi \) is updated at each time-step via a Hamilton–Jacobi equation, i.e.,

$$\begin{aligned} \frac{\partial \phi }{\partial t} = \kappa \dfrac{\delta {{{\mathcal {L}}}}({{\mathbf {u}}}({\overline{\chi }} (\phi )),{\overline{\chi }} (\phi ),\lambda )}{\delta {\overline{\chi }} } \quad \text {in } \varOmega { \, , }\end{aligned}$$
(64)

where the shape derivative of the Lagrangian has been replaced with the corresponding relaxed topological derivative (52), and \(\kappa \) corresponds to a coefficient of proportionality. Substituting Eq. (63) into the preceding equation gives

$$\begin{aligned} \begin{aligned} \phi _{k+1}&= \phi _{k} + \varDelta t \, \kappa \, \left( \dfrac{\partial j \left( {{\mathbf {u}}}({\overline{\chi }} (\phi _k)),{\overline{\chi }} (\phi _k),\hat{{\mathbf {x}}} \right) }{\partial {\overline{\chi }} }{\varDelta {\overline{\chi }} }{(\hat{{\mathbf{x}}})}\right. \\&\quad \left. - \frac{\lambda _k + s \, {{{\mathcal {C}}}}_0({\overline{\chi }} (\phi _k))}{\vert \varOmega \vert } \right) \end{aligned} \end{aligned}$$
(65)

with

$$\begin{aligned} \lambda _{k} = \lambda _{k-1} + s \, {{{\mathcal {C}}}}_0({\overline{\chi }} (\phi _{k})) { \, . }\end{aligned}$$
(66)

The new level-set function \(\phi _{k+1}\) is then regularized via the Laplacian regularization (59), and the corresponding characteristic function \({\overline{\chi }}_{\phi ,k+1} ({\mathbf{x}},\phi _{\tau ,k+1}{({\mathbf{x}})})\) is computed based on the regularized LSF, \(\phi _{\tau ,k+1}{({\mathbf{x}})}\), using Eq. (60).
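The update (65)-(66), together with the bound (61-b-2) on the nodal values, can be sketched as follows (illustrative Python with the domain measure normalized to one; `sens` stands for the sensitivity term \(\partial j/\partial {\overline{\chi }}\,\varDelta {\overline{\chi }}\)):

```python
import numpy as np

def hj_update(phi, sens, c0, lam, s, dt, kappa):
    """One explicit Hamilton-Jacobi step (Eqs. (65)-(66)): update the
    Augmented Lagrangian multiplier, advance the level-set function, and
    clip the nodal values to [-1, 1] as required by bound (61-b-2)."""
    lam_new = lam + s * c0                                    # Eq. (66)
    phi_new = phi + dt * kappa * (sens - (lam_new + s * c0))  # Eq. (65)
    return np.clip(phi_new, -1.0, 1.0), lam_new
```

The multiplier is first updated with the current volume-constraint value \(c_0\) and then enters the level-set advance, following the ordering of Eqs. (65)-(66).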

As defined in Eq. (61), a time-advancing framework can be formulated, thus obtaining a set of intermediate, converged optimal solutions. However, in addition to the topology and objective function criteria, the volume constraint must be checked to ensure convergence for every time-step, since it is no longer enforced at each iteration.

4 Benchmark Cases

A set of four 3D numerical examples is used to compare the six different implementations with each other. The set of benchmark cases contains two minimum compliance problems (Sect. 2.2.1), one multi-load compliance problem (Sect. 2.2.2) and one compliant mechanism synthesis (Sect. 2.2.3). In all cases and methods, eight-node hexahedral (\(Q_1\)) finite elementsFootnote 18 are used in the solution of the state Eq. (8).

This set of 3D problems has been carefully selected to be used as benchmark cases using a variety of topology optimization techniques, which have been widely used for this purpose by different researchers. All of them exhibit a high geometric complexity and represent a significant challenge for the considered methods when solving the optimization problems. They are defined in the following subsections.

Fig. 4
figure 4

Cantilever beam: topology optimization domain with boundary conditions and dimensions. A distributed vertical load F is applied on the bottom-right edge while the displacements \({\mathbf {u}}\) are prescribed to \({\mathbf {0}}\) on the left surface of the domain. The rear surface of the domain, in soft gray, represents the surface of symmetry

4.1 Cantilever Beam

This first numerical example refers to the minimization of the structural mean compliance of a cantilever beam in a prismatic domain subjected to specific Dirichlet and Neumann boundary conditions. The displacements are prescribed on the left face of the design domain and a distributed vertical load is applied on the bottom-right edge of it. The analysis domain \(\varOmega \), displayed in Fig. 4, corresponds to a prism of (relative) dimensions \(1\times 2\times 1\), with the largest dimension oriented in the y-axis. Thanks to the symmetry with respect to the \(y-z\) plane, half of the domain is discretized with \(50\times 200\times 100\) unit cubic hexahedral elements.

This benchmark will be used to check the correctness of the implementations as well as to have a first comparison of the results obtained with each technique in terms of the number of iterations, objective function and topology quality. Some optimal layouts for different volume fractions can be found in [73, 146].

4.2 L-Shaped Structure

This second example tackles the optimization of a simplified version of a hook, as shown in Fig. 5. This optimization problem, like the previous one, corresponds to the minimization of the mean compliance (14). The analysis domain is split into two regions: (1) the L-shaped structure, which corresponds to the design domain \(\varOmega \), and (2) a prismatic volume prescribed as void in the top-right area, defined by \(y\ge \frac{1}{3}\) and \(z\ge \frac{1}{3}\). A single vertical load is applied, as illustrated in Fig. 5, at the point \(x=0\), \(y=1\), \(z=\frac{1}{6}\), and the displacements are prescribed at the top side, near the left edge (i.e., \(y\le \frac{1}{3}\) and \(z=1\)). The design domain, with symmetry in the \(y-z\) plane, is discretized with a structured mesh of \(30\times 180\times 180\) hexahedral elements.

Fig. 5
figure 5

L-shaped structure: topology optimization domain with boundary conditions. A point-wise vertical force F is applied on the right-bottom surface while the displacements \({\mathbf {u}}\) are prescribed on the top-left boundary of the design domain. The rear boundary corresponds to the \(y-z\) symmetry surface. A considerable fraction of the domain (on the top-right side of the domain) is prescribed to void material

Similarly to the previous problem, this example will provide a comparison between the approaches, but now with a more complex design domain (a domain with a re-entrant edge and a point-wise load), thus showing the performance of the methods for point loads and non-rectangular design domains. The reader is referred to [147] for the reference optimal solution.

4.3 Multi-Load Cantilever Beam

A multi-load topology optimization problem with two different loading conditions, not applied simultaneously, is optimized in this example. Both the dimensions of the prismatic domain, \(\varOmega \), and the discretization mesh are the same as in the first example (see Sect. 4.1). However, in this case, the displacements of all the nodes on the left side are imposed and two loading conditions are applied on the top and bottom-right edges. In the first loading condition, a vertical distributed downward force is applied on the bottom-right edge, while in the second one, a distributed force of the same magnitude is applied upwards on the upper-right edge, as displayed in Fig. 6.

Through this numerical case, the aim is to determine whether the methods are capable of obtaining symmetric designs when two opposite forces are applied in the design domain, as well as to compare the topology quality and computational cost of the resultant optimal topologies across all the considered techniques.

Fig. 6
figure 6

Multi-load Cantilever beam: a topology optimization domain with boundary conditions. The displacements are prescribed on the left surface of the domain, and a vertical distributed downward force \(F_1\) is applied in the first loading case (b), whereas a vertical distributed upward force \(F_2\), in the second loading case (c). The rear boundary of the domain corresponds to the symmetry surface

4.4 Gripper Compliant Mechanism

In this last numerical example, a compliant mechanism is designed where the vertical displacement at the output port is maximized. The displacements are prescribed near the bottom edge at the left side of the domain (\(y=0\) and \(z\le 0.2\)). As illustrated in Fig. 7, a positive, horizontal distributed load is applied at the input port (\(y=0\) and \(z\ge 1.8\)), while a vertical upward dummy load is applied at the output port (\(z=1.8\) and \(y\ge 3.6\)). The analysis domain \(\varOmega \), whose (relative) dimensions are \(2\times 4\times 4\), is discretized with a mesh of \(100\times 200\times 200\) hexahedral elements. However, thanks to the two existing symmetries, only a quarter of the domain is analyzed, thus leading to 1,000,000 finite elements. In addition, two regions (\(\varDelta z=0.2\)) near the input and output ports are prescribed as stiff material.

Fig. 7
figure 7

Gripper (compliant mechanism): a topology optimization domain with boundary conditions. The displacements are prescribed at the bottom part of the left surface of the domain, and a positive, horizontal distributed load is applied at the top of the left surface for the state equation (b), while a positive, vertical distributed dummy load is applied at the jaws of the gripper for the additional state equation (c). The top and rear surfaces correspond to the \(x-y\) and \(x-z\) symmetries, respectively

Additionally, surface distributed springs are included in the input and output ports (in the same direction as the target displacement) to restrict the displacement amplitude at these areas and to simulate both the input work of the actuator and the elastic reaction work at the output port. The corresponding numerical values for the springs are \(K_{in}=1.5\times 10^{-1}\,{\text {N/m}}^3\) and \(K_{out}=1.5\,{\text {N/m}}^3\), while the distributed forces are \(f_1=3.81\times 10^{-3}\,{\text {N/m}}^2\) and \(f_2=3.81\times 10^{-4}\,{\text {N/m}}^2\), respectively. Note that the optimal solution will heavily depend on the ratios of these parameters; however, not all parameter combinations will ensure a convergent admissible solution. For that reason, and due to the non-semi-definite topological derivative, this last example will provide an analysis of the performance of the different techniques with respect to the design of compliant mechanisms, produced either with localized hinges or deformable bars (for optimal reference solutions refer to [59, 140, 145]).

5 Comparison of Methods

5.1 Comparison Settings

In the following subsections, the basis of the comparison is detailed, specifying the platform on which Matlab is executed, the versions of the Matlab codes of each approach, as well as the specific parameters used in each method and numerical example. In addition, it is important to define equivalent convergence criteria for the different techniques in order to guarantee a fair comparison in terms of the computational cost.

5.1.1 Computing Cluster Features

The benchmark cases are solved on a cluster, in which each node consists of two AMD EPYC 7451 processors with 24 cores (48 threads) each, running at 2.9 GHz, and 1 TB of DDR4 RAM at 2666 MHz. Each example is solved using eight cores and 99 GB of RAM to ensure enough memory for each of the numerical benchmarks. In this way, a greater number of cases can be solved at once without affecting the result of each approach. All the cases are computed using modified codes in Matlab 2018b under Scientific Linux 7.2 (based on RedHat Enterprise 7.2).

5.1.2 Matlab Codes

All the codes used in this paper are 3D extensions of the respective 2D codes, already published by their respective developers, preserving the original algorithmic structure.

Firstly, the SIMP-based method codes are based on the 2D implementation initially introduced by Sigmund [148] in the 99-line program for two-dimensional topology optimization. This program was later improved by Andreassen et al. [144], who vectorized the element loops in the assembly and filtering strategies. The Matlab code was later extended to three-dimensional topology optimization problems by Liu and Tovar [147], who provided the analytical element stiffness matrix for a cubic hexahedral element. Therefore, the SIMP method with PDE-filtering (\(\hbox {SIMP}^{(I)}\)) and the SIMP method with convolution filter (\(\hbox {SIMP}^{(III)}\)) take the basic scheme from the 82-line program (using a PDE filter) and the 71-line program (with the conv2 function) from [149], respectively, and implement the formulation for the 3D elastic problem from [147]. Both approaches use the optimality criteria (OC) method combined with sensitivity filtering (\(ft=1\)) to solve the corresponding topology optimization problem. It is worth noting that the \(L_\infty \) norm of the design variable has been replaced by an \(L_2\) norm normalized with the size of the domain. In addition, some minor changes have been made to the OC Matlab function to correctly consider active and passive elements. As mentioned, the SIMP method using a time-advancing scheme (\(\hbox {SIMP}^{(II)}\)) employs the same scheme as the SIMP method with PDE-filtering (\(\hbox {SIMP}^{(I)}\)), but this time with a time-advancing strategy, similar to the one implemented in VARTOP [60, 61].

Secondly, a Soft-kill BESO code, implemented in Matlab, has been adapted from the one presented by Huang and Xie [102] in Chapter 4 for 2D topological stiffness optimization. This code has been extended to three-dimensional problems mimicking the Matlab code for \(\hbox {SIMP}^{(III)}\), since they share most of the general scheme of the algorithm. Some modifications have been made to adapt the specific updating scheme and the sensitivity filtering, along with the corresponding temporal filtering (the so-called averaging scheme in [102]). Additional minor changes must be implemented in the sensitivity and stiffness computations, since the original SIMP material interpolation is used instead of the one implemented in [144], with only two discrete values: \(\rho =\{1,\rho _{min}\}\) for the minimum mean compliance problem (see Sect. 3.2). In this particular scenario, the minimum value \(\rho _{min}\) is imposed on elements in the void domain to avoid zero-stiffness elements. Additionally, an \(L_2\) norm of the topology, similar to the one implemented for the SIMP-based methods, is also used here to check whether the topology has converged, in addition to the existing objective-function criterion.

Thirdly, the Matlab code for VARTOP is a 3D extension of the corresponding program for 2D topology optimization problems provided in [61]. The element stiffness matrices, as a product of the strain-displacement matrix with the nominal constitutive tensor, are precomputed for the non-bisected and bisected elements, here termed mixed elements. In this way, once the type of element has been determined, the global stiffness matrix can be quickly computed and assembled. In addition, the pseudo-energy density is also easily computed from the matrices calculated with the reference element. In this case, a Laplacian smoothing, applied to the pseudo-energy density at each iteration, is precomputed at the first iteration as implemented in \(\hbox {SIMP}^{(I)}\). Finally, the Lagrange multiplier is computed using the closed-form optimality method in conjunction with a modified marching cubes method to compute the volume, explained in Oliver et al. [60]. The convergence criteria defined in [60] are replaced with the objective function criterion and the topology criterion in terms of a relaxed design variable, in addition to the volume constraint.

Finally, the Level-set method using the Relaxed Topological Derivative (RTD) corresponds to a modification of the previous code for VARTOP, where the updating scheme is changed. Instead of the original closed-form optimality criteria, a Hamilton–Jacobi equation is used to update the level-set function with a given \(\varDelta t\), as detailed in Oliver et al. [60]. An Augmented Lagrangian method is used to enforce the volume constraint \({{{\mathcal {C}}}}_0\). The stiffness matrices, as well as the different terms required to compute the pseudo-energy density, are precomputed at the initial iteration. The same Laplacian smoothing as in VARTOP is applied here to the level-set function at each iteration. In addition to the two existing convergence criteria, the volume constraint must also be considered in the outer loop.

5.1.3 Guidelines for the Comparison

The general guidelines for a fair comparison are listed below:

  1. 1.

    Benchmark cases: the same numerical benchmark cases and finite element meshes must be used for every topology optimization approach. Four benchmark cases will be carried out using dense meshes (around 1 million finite elements) in order to obtain high-quality designs.

  2. 2.

    Target volume fraction: the same ratio of the final material domain is imposed by the constraint equation in each approach, which is fulfilled through different techniques. The desired volume fraction corresponds to a small ratio of material with respect to the initial design domain, so that a large material reduction is achieved throughout the topology optimization. For three-dimensional problems using highly dense meshes, this reduction will commonly be between 80 and 95% of the design domain, \(\varOmega \), depending on user requirements. Nevertheless, it will depend on each specific numerical example and its respective boundary conditions, since connections between the different boundary condition areas must be preserved. This ensures a stiff connection between the nodes on which the loads are applied and those where the displacements are prescribed in the elastic problem.

  3. 3.

    Objective function normalization: since not all the methods start from a full material configuration, an initial iteration with this material layout is computed in all Matlab codes as a reference iteration. The objective function value at this iteration, \({{{\mathcal {J}}}}_0\), is used to normalize the subsequent iterations in each method, thus obtaining equivalent values for each numerical example, technique and volume fraction. However, the use of different design variables, nodalFootnote 19 versus elementFootnote 20 variables, produces large discrepancies in the actual objective function value, since the stiffness of semi-dense elements is underestimated [71]. For that reason, an additional final iteration is computed with an element-wise black-and-white configuration, i.e., \(\chi ,\rho =\{1,10^{-9}\}\), thus obtaining a fully equivalent objective function value. Nevertheless, it is important to point out that this configuration is not practical from a design standpoint, as the smoothness of the design is lost in the projection. The reader is referred to “Appendix 2” for further details.

  4. 4.

    Contrast factor: since each compared topology optimization approach defines a different material interpolation for the constitutive tensor \({\mathbb {C}}_{{\zeta }}{}\), it is important to ensure the same contrast factor \(\alpha \), so that the same Young’s modulus is used for the soft material when using the ersatz material approach. This parameter may strongly affect the objective function value and the convergence of the topology optimization. A preliminary study has revealed that topology convergence can be achieved for contrast factors up to \(\alpha =10^{-6}\), for minimum mean compliance problems. Thus, this value will be used as contrast factor for this type of problems.

  5. 5.

    Convergence criteria: in order to guarantee a fair comparison, the convergence criteria of each approach must be replaced with the same equivalent conditions:

    • Volume constraint: the same tolerance \(Tol_{{{{\mathcal {C}}}}_0}=10^{-3}\) in the volume constraint (3-b-1) is assumed for all the topology optimization approaches, except in Level-set. In this approach, the tolerance is slightly relaxed to \(5\times 10^{-3}\), in order to improve convergence when using the Augmented Lagrangian method.

    • Objective function criterion: a (weighted) moving mean of the relative objective function \({{\mathcal {J}}}\) is evaluated along n consecutive iterations as

      $$\begin{aligned} \varDelta {{{\mathcal {J}}}}_k = \frac{1}{n} \sum _{i=k-n}^{i=k} {\frac{ \vert {{{\mathcal {J}}}}_{i} - {{{\mathcal {J}}}}_{i-1} \vert }{ {{{\mathcal {J}}}}_0 }} { \, , }\end{aligned}$$
      (67)

      where the parameter k denotes the k-th iteration and \({{{\mathcal {J}}}}_0\) stands for the objective function at the initial iteration. The appropriate number of iterations n depends on the topology optimization technique, as the number of iterations per time-step will not be the same. Notice that, for time-advancing schemes, the \(n+1\) iterations correspond to the same time-step, thus avoiding variations in the objective function due to the change of volume constraint. For all the benchmark cases, a tolerance \(Tol_{{{{\mathcal {J}}}}}=10^{-3}\) in the objective function is prescribed.

    • Topology criterion: a \(L_2\) norm between two consecutive iterations is evaluated in a relaxed design variable \(\rho \) as

      $$\begin{aligned} \varDelta {\rho }_k = \frac{1}{\vert \varOmega _0 \vert ^{1/2}} \left( \int _\varOmega { \left( \rho _k - \rho _{k-1} \right) ^2 { \, d\varOmega }} \right) ^{1/2} { \, , }\end{aligned}$$
      (68)

      where k represents the iteration number and \(\vert \varOmega _0 \vert \) stands for the material volume at the first iteration. The design variable \(\rho _k\) corresponds to a relaxed characteristic function for discrete design variables (for instance, in the VARTOP, SOFTBESO, and Level-set methods) or to the density variable for the SIMP approaches. In “Appendix 1”, the reader can find the exact definition of this topology criterion for discrete design variables. For this criterion, the convergence tolerance is \(Tol_{{\zeta }}= 2.5\times 10^{-3}\).

    For incremental time-advancing methods, such as the VARTOP, SIMP\(^{(II)}\), and Level-set methods, a linear variation of the tolerances in cost and topology is defined, starting from a higher value for the first time-step (around one order of magnitude higher) and decreasing to the value established for the last time-step. Consequently, all approaches reach convergence with the same criteria for the last increment (i.e., for the same stiff material fraction), thus resulting in a fair comparison. It is important to stress that the objective function criterion is not a reliable indicator of convergence in compliant mechanism synthesis problems. The normalized objective function oscillates significantly more than in minimum mean compliance problems in all methods, thus hindering convergence to an optimal solution. This oscillation may be related to this specific type of problem and, in particular, to the initial value of the objective function, which may be null. As a consequence, the normalization of the objective function cannot be performed, thus invalidating this convergence criterion. For these reasons, the objective function criterion has not been considered in the last numerical benchmark of this paper. However, it can be used in the other three benchmark cases, since a proportionality between the objective and topology criteria is observed, both monotonically converging to the optimal values. In addition, the normalization problem does not arise in minimum mean compliance problems, as the external work is always different from 0. Were this criterion also omitted in those examples, no substantial change would be observed in the results, only minimal variations in the number of iterations to converge.
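The two convergence measures above can be sketched in code. The following Python fragment is a hypothetical discretization (not taken from the paper's Matlab codes): the handling of the objective history and the element-volume weighting of the integral in Eq. (68) are assumptions.

```python
import numpy as np

def objective_criterion(J_hist, J0, n):
    """Moving mean of the relative objective variation, Eq. (67),
    here taken over the n most recent differences |J_i - J_{i-1}|,
    normalized by the reference value J0."""
    diffs = np.abs(np.diff(np.asarray(J_hist[-(n + 1):], dtype=float)))
    return diffs.sum() / (n * J0)

def topology_criterion(rho_k, rho_km1, elem_vol, vol0):
    """L2 norm between two consecutive designs, Eq. (68), with the
    integral over the domain approximated by a sum over element volumes."""
    sq = (np.asarray(rho_k) - np.asarray(rho_km1)) ** 2
    return np.sqrt(np.sum(sq * elem_vol)) / np.sqrt(vol0)

# Convergence would be declared when both measures fall below their
# tolerances, e.g. Tol_J = 1e-3 and Tol_zeta = 2.5e-3 in the last time-step.
```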

5.1.4 Parameter Definition

For each topology optimization method and benchmark case, a specific set of parameters must be defined. These parameters define the material behavior (via the contrast factor or minimum Young’s modulus), the volume fraction, the convergence tolerances, the exact updating parameters for the design variable, and the ones for the regularization technique.

Whenever possible, common values for the material properties, volume fraction, and convergence tolerances are imposed across the different benchmark cases and topology optimization methods. However, the specific updating parameters depend on each approach and numerical example due to convergence issues. In particular, a consistent difference is noticed in the definition of the parameters for minimum mean compliance and compliant mechanism synthesis problems, as already noted.

Regarding the target volume fraction, a \(10\%\) volume fraction of stiff material and a contrast factor \(\alpha =10^{-6}\) are imposed for minimum mean compliance problems. As for the compliant mechanism synthesis problem, the volume constraint is applied for a \(15\%\) volume fraction, while the contrast factor is increased to \(\alpha =10^{-2}\). In both problems, a linear isotropic material with Young’s modulus \(E=1\) and Poisson’s ratio \(\nu =0.3\) is used.
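The role of these material parameters can be illustrated with a minimal sketch (hypothetical Python, not the paper's Matlab implementation; the density-based interpolation shown is only one of the schemes compared, and the 0.5 threshold in the discrete variant is an assumption):

```python
ALPHA = 1e-6   # contrast factor for minimum mean compliance problems
E0 = 1.0       # Young's modulus of the stiff material
P = 3          # penalty exponent of the density-based methods

def E_density(rho):
    """SIMP-like interpolation: recovers ALPHA*E0 at rho = 0 and E0 at
    rho = 1, so the soft phase keeps the prescribed contrast factor."""
    return (ALPHA + (1.0 - ALPHA) * rho**P) * E0

def E_ersatz(chi):
    """Discrete (characteristic-function) interpolation with the ersatz
    material approximation used by level-set-type approaches."""
    return E0 if chi >= 0.5 else ALPHA * E0
```

Both interpolations assign the same stiffness ratio `ALPHA` between the soft and stiff phases, which is the property the comparison requires.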

The updating and regularization parameters depend on each approach, and in certain cases, on the optimization problem. The exact values of these parameters are detailed in “Appendix 3”. In general, the parameters of each method are given as follows:

  • SIMP-based methods: the penalty value and the minimum radius are prescribed to \(p=3\) and \(r_{min}=3\), respectively. Sensitivity filtering (\(ft=1\)) is used for the topology optimization, as aforementioned. The updating parameters m and \(\eta \) are defined according to the optimization problem, corresponding to 0.2 and 0.5, respectively, for minimum compliance problems, and to 0.1 and 0.3 for the compliant mechanism design problem.

  • SOFTBESO: the same values for the penalty factor and minimum radius as those in SIMP are used. The evolutionary ratio ER and the maximum volume addition ratio \(AR_{max}\) are prescribed to 0.01 and 0.1, respectively.

  • VARTOP: the number of steps \(n_{steps}\), the exponential factor m and the regularization factor \(\tau \) depend on each problem. However, the same number of time-steps as in \(\hbox {SIMP}^{(II)}\) is used for each benchmark case.

  • Level-set: in contrast to \(\hbox {SIMP}^{(II)}\) and VARTOP, the optimizations are carried out with a single time-step. However, the same exponential factors as the ones in VARTOP are employed, and the regularization factor \(\tau \) is set to 1. In addition, the time-increment \(\varDelta t\) and the penalty coefficient s of the Augmented Lagrangian method also change with the optimization problem.
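To make the role of the updating parameters m (move limit) and \(\eta \) (damping exponent) concrete, a minimal optimality-criteria-style update in the spirit of the classical 88-line code can be sketched as follows (hypothetical Python; the actual Matlab implementations differ in details such as the bisection bounds and filtering):

```python
import numpy as np

def oc_update(rho, dc, dv, vol_frac, m=0.2, eta=0.5):
    """One OC update: bisection on the Lagrange multiplier of the volume
    constraint, with move limit m and damping exponent eta applied to
    the multiplicative update factor."""
    l1, l2 = 1e-9, 1e9
    while (l2 - l1) / (l1 + l2) > 1e-6:
        lmid = 0.5 * (l1 + l2)
        factor = (np.maximum(-dc, 0.0) / (dv * lmid)) ** eta
        rho_new = np.clip(rho * factor, rho - m, rho + m)  # move limit m
        rho_new = np.clip(rho_new, 1e-9, 1.0)              # box bounds
        if rho_new.mean() > vol_frac:  # too much material: raise multiplier
            l1 = lmid
        else:
            l2 = lmid
    return rho_new
```

A smaller m and \(\eta \), as used for the compliant mechanism problem, produce more conservative (smaller) design updates per iteration.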

For incremental time-advancing techniques with multiple time-steps, the volume fraction of stiff material at each step is reduced following an exponential evolution

$$\begin{aligned} f_j = f_0 + \frac{{\overline{f}}-f_0}{1-e^k}\left( 1- e^{k\frac{j}{n_{steps}}} \right) , \; j:1\dots n_{steps} \end{aligned}$$
(69)

with factor \(k=-2\).
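A minimal Python sketch of this schedule, Eq. (69), could read as follows (the function name is hypothetical); by construction, the last step recovers the target fraction \({\overline{f}}\) exactly:

```python
import math

def volume_schedule(f0, f_target, n_steps, k=-2.0):
    """Stiff volume fraction at each time-step following Eq. (69);
    with k = -2 the reduction is faster in the early steps."""
    return [f0 + (f_target - f0) * (1.0 - math.exp(k * j / n_steps))
                               / (1.0 - math.exp(k))
            for j in range(1, n_steps + 1)]
```

For instance, `volume_schedule(1.0, 0.1, 12)` would reproduce the 12-step reduction from full material to the 10% target used in the minimum compliance examples.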

5.2 Results

The results obtained from the six topology optimization approaches are now compared with each other for each of the numerical benchmarks (see Sect. 4). The comparison is carried out in terms of the optimal topology, the objective function value, and the computational cost, discussing the relative objective function values and the relative computation costs. In addition, an analysis of the convergence is also performed. Finally, an overall comparison of the different methods is made according to the results.

5.2.1 General Discussion of Results

The optimal topology layout for the required material volume is here compared for the six different approaches from a quantitative and qualitative standpoint. In particular, the quality of the optimal solution is discussed together with the minimum filament size of the resulting (linear) pieces of the final design (bars), the computational cost in terms of the iterations, and the normalized value of the objective function for the four addressed benchmark cases.

Cantilever beam The final solutions of the Cantilever beam problem for each topology optimization technique are displayed in Table 1. Although the resultant topologies are quite different, all methodologies except SIMP\(^{(II)}\) obtain a similar overall optimal design consisting of two separated webs. However, these webs present a different internal layout and topology complexity. The designs obtained from SIMP\(^{(I)}\), SIMP\(^{(III)}\), VARTOP, and Level-set illustrate a much simpler design based on bars, while SOFTBESO produces an optimal design with a thin web (or a high number of closely spaced bars) of almost constant thickness. On the other hand, SIMP\(^{(II)}\) finds a different optimal layout with a single continuous central web.

The mean bar width value \({\overline{h}}\), computed as the ratio between the stiff volume and the surface area of the solution, provides feedback on the complexity of the optimal design. For low \({\overline{h}}\) values, as in SOFTBESO, the optimal solution is made of a large number of thin bars, making it more difficult to manufacture and more likely to buckle. As \({\overline{h}}\) increases, the bars tend to become wider, thus reducing the complexity of the design, as in SIMP\(^{(I)}\), VARTOP, or Level-set. Furthermore, these topologies are less prone to buckling effects.
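As a quick sanity check of this measure (a hypothetical illustration, not from the paper): for a single slender cylindrical bar the ratio reduces to half the bar radius, confirming that \({\overline{h}}\) grows with bar thickness.

```python
import math

def mean_bar_width(stiff_volume, surface_area):
    """h_bar = V_stiff / A_surface, as defined in the text."""
    return stiff_volume / surface_area

# Slender cylinder of radius r and length L >> r:
# V = pi r^2 L and A ~ 2 pi r L (end caps neglected), so h_bar ~ r / 2.
r, L = 0.05, 10.0
h_bar = mean_bar_width(math.pi * r**2 * L, 2.0 * math.pi * r * L)
```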

The topologies can also be compared in terms of the corresponding value of the objective function. It can be noted that as the number of bars increases and/or their size decreases (tending to a single continuous web in the limit), the value of the objective function decreases, as observed in the SOFTBESO and \({SIMP}^{(II)}\) approaches. On the contrary, Level-set and VARTOP optimize the design layout using thicker bars, thus obtaining a higher compliance value.Footnote 21 Similar designs and objective function values are obtained via SIMP\(^{(I)}\) and SIMP\(^{(III)}\).

Finally, the techniques can be compared in terms of the number of iterations (i.e., a computational cost measure). As detailed in Table 1, VARTOP requires the fewest iterations (116) to achieve the optimal topology layout, while its solution is not far from the optimal topologies obtained by the other approaches. It is closely followed by SIMP\(^{(III)}\) with 124 iterations, and, a few iterations further, by SIMP\(^{(I)}\), SIMP\(^{(II)}\), and SOFTBESO with 175, 231 and 272 iterations, respectively. The Level-set method takes many more iterations (1266) to converge.

Table 1 Comparison of the results of topology optimization methods for the Cantilever beam

L-shaped structure The results obtained for the L-shaped structure are presented in Table 2. As can be noticed, the overall design of the structure is similar for all the methods. The vertical part, at the left, is almost identical in all the solutions. The most significant differences are found in the lower part of the structure, in which the topological complexity changes with the method. In this case, designs with 2 or 3 webs, which connect the vertical part of the structure with the load application point, are obtained depending on the approach. In particular, all the methods present a 2-web design except SIMP\(^{(II)}\), which displays an internal central structure. Additionally, all the designs are mainly constituted by bars, these being thinner in SIMP\(^{(II)}\) and SOFTBESO, as reflected by the mean bar width value \({\overline{h}}\).

Regarding the objective function value, all the solutions attain a compliance of around 2.40–2.60 with respect to the initial reference compliance.Footnote 22 SIMP\(^{(II)}\) achieves the topology design with the lowest objective function value, while other approaches produce solutions with a value closer to 2.5. SOFTBESO obtains the solution with the highest objective function value (2.6).

Finally, a comparison of the number of iterations shows that, again, VARTOP requires fewer iterations than the other considered topology optimization techniques. There is not a large difference in the number of iterations with SIMP\(^{(I)}\) or SIMP\(^{(III)}\) (66 and 76 versus 62 of the VARTOP), although the difference in iterations increases when compared to SIMP\(^{(II)}\), SOFTBESO or Level-set techniques, as observed in the previous example.

Table 2 Comparison of the results of topology optimization methods for the L-shaped structure

Multi-load Cantilever beam Table 3 presents the results obtained for the multi-load problem with the corresponding approaches. In contrast to the previous cases, the resultant optimal solutions are quite different from each other. Although all six methodologies find symmetrical optimal solutions (with respect to the horizontal midplane of the domain), the resulting topologies do not correspond to the symmetrized solutions obtained for the first example (see Table 1), as might be intuitively presumed.

Most of the solutions are based on bar designs except for the SOFTBESO approach, which consists of a continuous core. For this reason, its mean bar size \({\overline{h}}\) is the lowest of all techniques. Nevertheless, the solutions obtained with VARTOP, SIMP\(^{(I)}\), and SIMP\(^{(III)}\) have many similarities, the designs of these last two techniques being practically the same. Furthermore, the optimal layout achieved with SIMP\(^{(II)}\) is made of 3 webs with thinner bars, with a similar overall design. It is important to stress that the solutions obtained using VARTOP and Level-set present the highest mean bar width, thus achieving the best designs from the standpoint of manufacturing and buckling resistance. However, the solution of the Level-set method corresponds to a different local minimum than the previous ones.

The lowest objective function values are obtained by SOFTBESO and SIMP\(^{(II)}\), even though the designs are quite complex and cannot be easily manufactured. Conversely, Level-set finds the topology layout with the highest value. The other approaches (VARTOP, SIMP\(^{(III)}\), and SIMP\(^{(I)}\)) provide sufficiently manufacturable (low complexity) solutions with intermediate values. Similar to the previous cases, SIMP\(^{(I)}\), SIMP\(^{(III)}\), and VARTOP are the methods with the lowest computational cost, SIMP\(^{(I)}\) being 30% faster than the other two.

Table 3 Comparison of the results of topology optimization methods for the Multi-load cantilever beam
Table 4 Comparison of the results of topology optimization methods for the Gripper compliant mechanism

Gripper compliant mechanism The results of this last numerical example are summarized in Table 4. The optimal solutions resemble each other, all producing the desired mechanism. However, the topology layouts can be split into two groups: 3D-like designs for the SIMP and SOFTBESO methods, and almost 2D-extruded designs for VARTOP and Level-set. The considered approaches can also be split into two main groups depending on whether they generate the mechanism by creating localized hinges or deformable bars. All methods except SOFTBESO achieve a design based on localized hinges; the bar-based SOFTBESO design significantly increases the value of the objective function. In other words, the same force applied at the input port results in a smaller displacement in the target direction at the output port. For this reason, it is concluded that SOFTBESO has not fully converged to the same local minimum under the given parameters. Regarding the mean bar width, the values for all approaches range between 2.4 and 2.5, except for SIMP\(^{(I)}\), where it equals 2.7 as the design is based on a smaller number of thicker bars.

Except for SOFTBESO, the objective function values of the approaches range between \(-260\) and \(-331\), showing a larger discrepancy than in the previous benchmark cases. On the other hand, unlike the previous benchmark cases, the best topology design is obtained using the Level-set method, even though its layout almost resembles a 2D-extruded design.

Finally, the comparison of the number of iterations reveals that VARTOP requires far fewer iterations than the other methods, which require between 200 and 300 iterations. Therefore, the considered methods can be sorted by the number of iterations, in increasing order, as follows: VARTOP, Level-set, SOFTBESO, SIMP\(^{(III)}\), SIMP\(^{(II)}\) and SIMP\(^{(I)}\).

Fig. 8

Objective function value for each example and topology optimization approach normalized with the results obtained with the SIMP\(^{(I)}\) method. Legend: (1) the Cantilever case is represented with a solid black line, (2) the L-shaped case, with a dotted black line, (3) the Cantilever multi-load case, with a dash-dotted line and (4) the Gripper mechanism with a dashed line, which is interrupted for the SOFTBESO method due to lack of convergence

After analyzing all the results, it can be stated that the topologies resulting from Level-set and VARTOP have smooth and accurate interfaces, since the solution is defined via a level-set or a discrimination function. Thus, low complexity topology designs are obtained. On the contrary, SIMP-based and SOFTBESO methods produce element-wise discontinuous designs. In addition, SIMP-based approaches require special post-processing, as the design has semi-dense elements, thus requiring an extra projection procedure to determine the density value that defines the material interface. In this procedure, bars might be disconnected or broken up, yielding non-optimal topologies as solutions. Additionally, a smoothing post-processing step should be performed to achieve crisp and smooth edges for these two families of approaches.

5.2.2 Objective Function Value

In Fig. 8, the objective function values for each example and topology optimization method are illustrated. The values are normalized with respect to SIMP\(^{(I)}\). As aforementioned, the objective function for each of the numerical benchmarks does not differ much from one approach to another: the values lie within \(\pm 15\%\) of those obtained using SIMP\(^{(I)}\).

As observed in the figure, SIMP\(^{(II)}\) consistently achieves the optimal solutions with the lowest objective function value, as a consequence of the larger number of thin straight bars (high topology complexity), as detailed in Sect. 5.2.1. Nevertheless, two exceptions are observed: the first one for the multi-load cantilever problem, where SOFTBESO achieves a solution with a lower objective function, and the second one for the Gripper case, where the Level-set method obtains the lowest value.

It is important to emphasize that a greater variance is observed only in the Gripper case, because there is a greater difference in topology among the different approaches. Each technique achieves a characteristic compliant design, with the exception of SOFTBESO. This approach obtains a topology layout with an objective function value almost two times higher than the one obtained using SIMP\(^{(I)}\).

Table 5 Comparison of computational cost in terms of iterations of the considered topology optimization methods

5.2.3 CPU Computation Cost: Iterations

The computational cost is assessed in this paper according to the number of iterations instead of the computational time. In this way, it is possible to decouple the comparison from the platform used to run the topology optimization technique (i.e., OS, programming language, and hardware, among others) as well as from the solver used for the state problem. It has been observed that the selection of a specific iterative solver may significantly increase the computational time of some approaches with respect to others. Therefore, to remain as unbiased as possible, the computational cost is evaluated by the number of iterations, under the hypothesis that all methods use a direct solver with equivalent computational cost per iteration, thus obtaining a fair comparison.

Fig. 9

Relative computational cost in terms of the number of iterations. Each numerical example is normalized with the number of iterations of the SIMP\(^{(I)}\). Legend: (1) the Cantilever case is represented with a solid black line, (2) the L-shaped case, with a dotted black line, (3) the Cantilever multi-load case, with a dash-dotted line and (4) the Gripper mechanism with a dashed line

The comparison of the computational cost in terms of the number of iterations is shown in Table 5. The values of the computational cost, normalized with respect to SIMP\(^{(I)}\), are illustrated in Fig. 9. As can be seen, the relative computational cost depends on each numerical example. However, a certain tendency is maintained across the considered approaches for minimum mean compliance problems.

Regarding the Cantilever beam, VARTOP and SIMP\(^{(III)}\) are up to 1.4 times faster than SIMP\(^{(I)}\), and up to 2 times faster than SIMP\(^{(II)}\) or SOFTBESO. Level-set turns out to be 7 times more computationally expensive than SIMP\(^{(I)}\). In addition, it is important to stress that VARTOP is 7% faster than SIMP\(^{(III)}\), even though it provides not only the optimal solution but a set of solutions for different volume fractions (Pareto Frontier).

For the L-shaped structure and the multi-load cantilever beam optimizations, the relative computation costs increase with respect to the previous example, except for VARTOP. Its relative computational cost becomes almost 1 for the L-shaped structure and even 1.4 for the multi-load cantilever case. SIMP\(^{(I)}\) turns out to be the fastest approach for this latter benchmark. The advantage over the SOFTBESO, Level-set, and \({SIMP}^{(II)}\) methods is still present, although no significant improvement in computational cost is obtained with respect to SIMP\(^{(III)}\) and VARTOP.

As for the compliant mechanism example, the previously observed trend no longer applies. In this case, VARTOP is by far the fastest approach (almost an order of magnitude faster), followed by the Level-set and SOFTBESO approaches. Both methods require approximately half as many iterations as SIMP\(^{(I)}\). The SIMP\(^{(III)}\) and SIMP\(^{(II)}\) techniques are respectively \(20\%\) and \(10\%\) faster than the reference method. This change of trend in the computational cost may be caused by the change in the topology optimization problem (i.e., its non-self-adjoint character).

5.2.4 Robustness: Monotonic Convergence Degree

The convergence robustness is analyzed through the evolution of the objective function and volume fraction, together with the topology and objective function criteria, throughout the optimization. For each technique, the analysis of these variables determines the monotonic convergence degree of the method. The discussion is restricted to the first two examples, since they are representative enough to provide a complete overview of the issue of robustness.

The evolution of the objective function for the Cantilever beam is illustrated in Fig. 10. Single-time-step methods are represented in the first column while incremental time-advancing techniques (i.e., SIMP\(^{(II)}\), VARTOP, and Level-set) are depicted in the second column. Each time-step is shaded with a different color to improve its visualization. The normalized objective function value \({{{\mathcal {J}}}}/{{{\mathcal {J}}}}_0\) (solid line colored in black) is illustrated in the left y-axis, while the stiff material fraction (dash-dotted line, colored in gray) is associated with the right y-axis.

Based on the convergence, the following features can be highlighted: (1) \(\hbox {SIMP}^{(I)}\) and \(\hbox {SIMP}^{(III)}\) prescribe a constant stiff material fraction (i.e., \(f=0.1\)) from the initial iteration, and the objective function converges monotonically to a value close to 7.8,Footnote 23 (2) in SOFTBESO, the stiff material fraction is gradually reduced from the initial value 1 to the target value 0.1; consequently, the objective function increases until the target volume is achieved, (3) in SIMP\(^{(II)}\) and VARTOP, the target stiff material fraction is reduced from 1 to 0.1 in 12 time-steps, the objective function being minimized at each time-step, and (4) Level-set, although it can also be regarded as an incremental time-advancing method, exhibits a particular response, since the volume constraint is not strictly enforced at each iteration but oscillates, governed by the Augmented Lagrangian method.

As illustrated in “Appendix 4”, the order of convergence of the objective function is close to 1 for all the techniques. Therefore, all topology optimization methods have a linear convergence in the objective function.

Fig. 10

Evolution histories of the values of the objective function and volume fraction throughout the iterations of the Cantilever beam topology optimization for the six considered methods. Single-time-step approaches are illustrated in the first column, while incremental time-advancing techniques are depicted in the second column, each time-step being shaded with a different color. The normalized objective function \({{{\mathcal {J}}}}_\rho \) or \({{{\mathcal {J}}}}_\chi \) is associated with the left y-axis and represented with a solid black line. On the other side, the volume fraction (i.e., the stiff material fraction) \(\frac{\vert \varOmega ^+\vert }{\vert \varOmega \vert }\) is represented by a dash-dotted gray line in the right y-axis of each graphic

Fig. 11

Evolution histories of the criteria values in the objective function and in the topology throughout iterations of the Cantilever beam topology optimization for the six considered methods. Single-time-step approaches are illustrated in the first column, while incremental time-advancing techniques are depicted on the second column, each time-step being shaded with a different color. The criterion in objective function, associated with the left y-axis, is represented with a solid black line, while the criterion in the topology is represented by a dash-dotted gray line in the right y-axis of each graphic. In addition, the corresponding maximum tolerances \(Tol_{{{\mathcal {J}}}}\) and \(Tol_{{\zeta }}\) allowed in the last time-step (or in the entire optimization for single-time-step methods) are also displayed in every graphic as horizontal lines with the same properties

The convergence curves of the objective function and topology criteria are depicted in Fig. 11. The objective function criterion (solid black line) is represented on the left y-axis, and the topology criterion (gray dash-dotted line) on the right y-axis. As in Fig. 10, the previous four groups can be distinguished, but now in terms of the convergence criteria. Both SIMP\(^{(II)}\) and VARTOP show strictly monotonic convergence within each time-step, with only some small oscillations in the second-to-last time-step, where a change in topology has taken place. As for the other methods, SIMP\(^{(I)}\) and SIMP\(^{(III)}\) present monotonic convergence with small-amplitude oscillations, while some important variations are noticed in SOFTBESO once the final stiff material fraction is achieved. Finally, the convergence criterion of the Level-set method mimics the trend detected in the objective function and volume fraction, with small-amplitude oscillations. As a global comment, it can be stated that the objective function criterion is the most restrictive criterion in all topology optimization approaches, except in the Level-set method, in which the volume constraint is the limiting one.

The convergence behavior of the other examples has also been analyzed in detail. The corresponding graphics do not present any significant difference with respect to those already analyzed for the Cantilever beam. However, for completeness, the graphics of the second example are depicted in “Appendix 5”.

5.2.5 Overall Performance

In this last subsection, instead of comparing the different methods in a quantitative and analytical way, a more qualitative comparison is presented according to the following aspects: (1) Surface smoothness, (2) Topology complexity, (3) Objective function, and (4) Computational cost.

The first aspect refers to the surface smoothness required by several manufacturing techniques. In these technologies, sharp edges and noisy shells (i.e., abrupt continuous changes) must be avoided in the boundary of the optimal solution. The second criterion takes into account the complexity of the optimal design obtained by each technique, considering other mechanical properties not directly included in the optimization. For instance, designs based on thick bars will have a better structural behavior in buckling or fatigue compared to designs with a greater number of thin bars. These two aspects will also have an impact on the manufacturing challenges, which decrease as the design becomes smoother and less complex. The third one considers the value of the objective function, or equivalently, the efficiency of each method at finding a better local minimum. This criterion gathers the information shown in Fig. 8 regarding the relative objective function values. The last point of comparison globally assesses the computational cost of each method to perform the optimization. Analogous to the previous aspect, this criterion gathers the information represented in Fig. 9 with respect to the relative computational cost.

Fig. 12

Qualitative comparison of the studied methods regarding the smoothness of the design (dotted column), the topology complexity (right-inclined lines pattern), the value of the objective function (left-inclined lines pattern) and the computational cost in terms of iterations (column with crosshatch pattern). Each one of the areas is rated qualitatively with the levels A, B, C, or D, being A the best qualification and D the worst one

In Fig. 12, each one of the aspects is represented with a column bar rated between A and D, with A being the best qualification in that section and D being the worst one. For each approach, four bars of different colors and patterns are represented, each one corresponding to an analyzed aspect.

Fig. 13

Qualitative comparison of the studied methods, combining the topology-related properties and the computational ones in a single bar. The topology quality is represented with a light blue colored bar and a right-tilted line pattern, while the computational efficiency is represented using a dark blue bar with a left-tilted line pattern. Each of the criteria is rated qualitatively with the levels A, \(\hbox {B}^+\), B, \(\hbox {C}^+\), C, \(\hbox {D}^+\) or D, A being the best qualification and D the worst one

Regarding the surface smoothness, VARTOP and Level-set provide designs whose surfaces are smooth. On the contrary, all other approaches only achieve element-wise optimal designs, thus the boundary of the solution is defined through abrupt continuous changes. Consequently, additional post-processing procedures are required to manufacture these solutions with smooth boundaries. For this reason, VARTOP and Level-set are evaluated with an A, while the others are rated with a C grade.

Concerning the topology complexity, it was noted in Sect. 5.2.1 that the quality of the solutions is reasonably high for almost all the techniques, being slightly lower for the SIMP\(^{(II)}\) and BESO methods. In these two techniques, the complexity of the optimal topology increases, producing designs based on thinner bars (i.e., lower bar width) with lower buckling prevention. Accordingly, these two approaches are rated with a B, while the others receive an A.

In terms of the objective function, all approaches obtain a similar optimal value, although SIMP\(^{(II)}\) consistently obtains marginally lower values than the other techniques, as detailed in Sect. 5.2.2. For this reason, SIMP\(^{(II)}\) obtains an A qualification, while the other methods are left with a B. Finally, the comparison of the computational cost, discussed in Sect. 5.2.3, is represented in the last column. The computational cost is lower and of similar magnitude for SIMP\(^{(I)}\), SIMP\(^{(III)}\) and VARTOP, followed by the SIMP\(^{(II)}\) and BESO approaches, and finally by Level-set. These three groups are respectively rated with an A, B, and C.

Figure 12 can be further simplified by combining the two topology-related features into a single criterion, referred to as topology quality, and the two criteria related to the objective function and the computational cost into a single criterion, called computational efficiency. These two criteria are represented by an equivalent bar chart in Fig. 13. From this figure, it can be concluded that VARTOP, although not the best approach in every aspect considered in Fig. 12, stands as a competitive technique with respect to more conventional topology optimization approaches, such as SIMP\(^{(I)}\) and SIMP\(^{(III)}\). On the other hand, SOFTBESO and Level-set do not provide any significant advantages, exhibiting deficiencies mainly in topology complexity and computational cost, respectively, for the cases studied in this paper.
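As an illustration only (the paper does not specify the aggregation rule behind Fig. 13), the merged grades could be obtained by mapping letters to numeric scores, averaging each pair of criteria, and mapping the mean back to a level, with the intermediate "+" levels arising from half-step averages. The following minimal Python sketch, with hypothetical score tables, shows the idea:

```python
# Illustrative sketch (not from the paper): one possible way the
# per-criterion letter grades of Fig. 12 could be merged into the
# combined grades of Fig. 13. The score tables are an assumption.

GRADE_TO_SCORE = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0}
# Half-step means recover the intermediate "+" levels of Fig. 13.
SCORE_TO_GRADE = {4.0: "A", 3.5: "B+", 3.0: "B", 2.5: "C+",
                  2.0: "C", 1.5: "D+", 1.0: "D"}

def combine(grade1: str, grade2: str) -> str:
    """Average two letter grades into a single (possibly '+') grade."""
    mean = (GRADE_TO_SCORE[grade1] + GRADE_TO_SCORE[grade2]) / 2.0
    return SCORE_TO_GRADE[mean]

# Topology quality = smoothness grade + complexity grade, per the text:
print(combine("A", "A"))  # e.g., VARTOP: smooth surface (A), complexity (A)
print(combine("C", "B"))  # e.g., element-wise design (C), complexity (B)
```

Under this hypothetical rule, an A and a B merge into a \(\hbox {B}^+\), and a C and a B into a \(\hbox {C}^+\), matching the extended grade scale used in Fig. 13.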

6 Concluding Remarks

This contribution presents a thorough comparison among most of the well-established topology optimization approaches, i.e. the SIMP, Level-set, and SOFTBESO methods, and the VARTOP approach. A set of well-known 3D numerical benchmarks in the field of structural topology optimization has been addressed to analyze their performance. The corresponding results have been assessed in terms of the optimal topology, the robustness in convergence, the objective function, and the computational cost.

Regarding the topology, a quality dependence has been observed among the assessed methods, the quality being slightly lower for the SIMP\(^{(II)}\) and BESO methods. The quality and complexity of the topologies may depend on the type of design variable (continuous vs. discrete, and nodal vs. element-wise), as well as on the approach used to impose the volume constraint.

Regarding the design variable, the methods can be split into three groups:

  • Level-set and VARTOP use nodal scalar functions to precisely describe the interface, as well as a discrete characteristic function to define sharp white-and-black configurations.

  • SOFTBESO uses an element-wise discrete function to define the topology layout.Footnote 24 Although it obtains white-and-black designs, the material boundary is only defined through elements.

  • SIMP-based methods use an element-wise continuous variable. Consequently, these techniques cannot precisely define the boundary, but instead yield a blurred interface with gray semi-dense elements. By using a projection technique, white-and-black designs can be obtained, whose interface is defined through elements.

As a consequence, SOFTBESO and SIMP techniques may not obtain the best possible optimal solution and would require post-processing techniques to obtain smooth designs which could be easily manufactured. However, there is no guarantee that the resultant topologies are actually optimal layouts.

As aforementioned, the volume constraint methodology may also affect the resultant topology. In those techniques where the volume constraint is gradually imposed using element-wise variables (e.g. SOFTBESO and SIMP\(^{(II)}\)), the final topologies tend to be more complex and consist of a larger number of thin bars. As a result, these topologies have worse mechanical behavior and are more challenging and expensive to manufacture.

In terms of the topology, it can be concluded that both a combination of nodal scalar design variables with a gradual incremental volume constraint, and a combination of a continuous element-wise variable with a constant volume constraint provide optimal topologies with high quality. In the comparison, SIMP\(^{(I)}\), SIMP\(^{(III)}\), VARTOP, and Level-set achieve optimal designs that are based on thicker bars (with higher mean bar size), thus improving manufacturability with high-quality optimal designs and reducing the buckling proneness.

Concerning the robustness of each method, it has been confirmed that all the techniques exhibit linear convergence in the objective function regardless of the methodology used to impose the volume constraint. This fact supports the selection of the techniques for the comparison, as well as their independence with respect to the filtering technique (i.e., spatial or Helmholtz-type filtering) and the updating scheme of the design variable (i.e., incremental or absolute). These two differences may affect the optimal solution, but not the order of convergence.

In regard to the objective function value, a small variation of \(\pm 15\%\) is observed among the studied methods in the four numerical benchmarks, with the exception of two tests in the Gripper mechanism. However, SIMP\(^{(II)}\) systematically achieves the lowest objective function values, as the majority of its optimal designs are based on smaller, thinner bars (i.e., highly complex designs), as mentioned before. Due to this characteristic, these designs are prone to buckling.

For the first three examples, all the studied techniques can be sorted in descending number of required iterations as follows: Level-set, SOFTBESO, SIMP\(^{(II)}\), VARTOP, SIMP\(^{(III)}\), and SIMP\(^{(I)}\). The relative computational cost depends on each example and technique, but the same trend is observed. It is important to emphasize that incremental time-advancing techniques such as SIMP\(^{(II)}\) and VARTOP obtain not only the final optimal solution but also a set of intermediate converged solutions at almost the same computational cost (a Pareto frontier for the volume fraction). In this scenario, VARTOP is up to 1.5 times faster than the corresponding \({SIMP}^{(II)}\) implementation. As for the Gripper compliant mechanism, the tendency in computational cost changes completely with respect to the previous examples, with a very significant reduction for VARTOP compared to SIMP\(^{(I)}\). Contrary to the other numerical examples, SOFTBESO and Level-set also require fewer iterations than the SIMP-based implementations.

In conclusion, the VARTOP, SIMP\(^{(I)}\), and \({SIMP}^{(III)}\) approaches present topology layouts with a higher topology quality than the other methods at a lower computational cost, even though their objective function is not minimized as much as in other approaches.

The authors are aware that, despite the efforts made to ensure a fair comparison, a certain degree of subjectivity may still remain in this kind of study. Nevertheless, they believe that such studies should be presented to the structural topology optimization community, even if only to be argued and discussed, for the sake of progress in computational topology optimization.