A new variational approach for the thermodynamic topology optimization of hyperelastic structures

We present a novel approach to topology optimization based on thermodynamic extremal principles. This approach offers three advantages: (1) it is valid for arbitrary hyperelastic material formulations while avoiding artificial procedures that were necessary in our previous approaches for topology optimization based on thermodynamic principles; (2) the important constraints of bounded relative density and total structure volume are fulfilled analytically, which simplifies the numerical implementation significantly; (3) it possesses a mathematical structure that allows for a variety of numerical procedures to solve the problem of topology optimization without distinct optimization routines. We present a detailed model derivation including the chosen numerical discretization and show the validity of the approach by simulating two boundary value problems with large deformations.


Introduction
An important question in engineering is how to design a structure such that it fulfills its individual purpose in an ideal way. For instance, the purpose can be maximum stiffness for a restricted total structure volume [38]. There exist various methods to answer this question: size optimization alters the thickness of an existing design; shape optimization modulates the form; topology optimization searches for the best design without any preexisting requirements, see [15]. The direct applicability of topology optimization to various problems has rendered this strategy quite popular. Review papers can be found, e.g., in [16,33,41]. Additionally, the progress in rapid prototyping now makes it feasible to realize even very complex topologies, cf. the review paper [28].
(Correspondence: Philipp Junker, philipp.junker@rub.de, Chair of Continuum Mechanics, Ruhr University Bochum, Bochum, Germany)

To clearly identify the geometric boundaries of optimized structures, a so-called black-white solution has to be found, where black indicates the (locally) full material and white the void material; the intermediate gray state has to be restricted to a minimum. One possibility to interpret the black-white solution is given by phase fractions, such that the optimization problem can be reformulated as a phase transformation problem. This approach is pursued, e.g., by phase field methods for topology optimization, cf. [9,39,47]. Here, the mixture energy between the phases "full" and "void" is a double-well function such that the gray intermediate states are energetically less favorable. A comparable approach is used by continuum mechanical strategies for topology optimization such as the popular solid isotropic material with penalization (SIMP) method: instead of using a convex mixture energy of the form Ψ = ρ Ψ₀, a non-convex energy of the form Ψ = ρ³ Ψ₀ has become the standard. Here, ρ ∈ ]0, 1] denotes the relative density that identifies the full (ρ = 1) and void (ρ ≈ 0) state; thus, ρ can be interpreted as a volume fraction. As for phase field approaches, a distinct black-white solution is obtained. However, the usage of the non-convex energy has a drastic impact on the mathematical problem: the loss of ellipticity excludes the existence of a (unique) solution of the optimization problem. Any numerical simulation finds a solution that strongly depends on the finite spatial discretization, e.g., the finite element mesh. Consequently, strong mesh dependencies are present, and numerical instabilities manifest in the form of alternating black/white distributions which are referred to as checkerboarding. Topology optimization therefore has to be equipped with regularization strategies.
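As an aside, the effect of this non-convex scaling can be sketched in a few lines of Python (illustrative only; `psi0` is a plain number standing in for the full-material strain energy density Ψ₀):

```python
# Why the SIMP-type scaling rho**3 penalizes intermediate ("gray") densities:
# a gray element keeps its full volume cost but loses most of its stiffness.

def energy_convex(rho, psi0=1.0):
    # convex mixture energy: Psi = rho * Psi_0
    return rho * psi0

def energy_simp(rho, psi0=1.0, p=3):
    # non-convex SIMP-type energy: Psi = rho**p * Psi_0
    return rho**p * psi0

# A gray element with rho = 0.5 "costs" 50% volume in both cases,
# but contributes only 12.5% stiffness under SIMP instead of 50%:
assert energy_convex(0.5) == 0.5
assert energy_simp(0.5) == 0.125
```

Since gray material is energetically inefficient under the penalized law, the optimizer is driven toward the black-white extremes.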
A prominent class of such regularizations are filter techniques, of which a wide variety exists. Examples can be found, e.g., in [4,26,37,44]. A different approach is gradient enhancement for topology optimization, cf. e.g. [17], as also used in damage modeling, see [21].
A prominent subproblem of topology optimization is the treatment of hyperelastic structures, i.e., the optimization of elastic structures undergoing large deformations. Works dealing with snap-through problems are given in [5] and [27], for instance; shape optimization including r-adaption is presented in [40]; finite deformations in combination with a homogenization design method are investigated in [30]; a fictitious domain technique for large deformations is employed in [46]. An important problem for topology optimization at finite deformations remains the treatment of buckling, for which various approaches exist. Examples can be found, e.g., in [7,25,29]. Further examples are given, e.g., in [22] for hyperelastic bodies with prescribed, non-zero displacements, in [12] for geometrically non-linear structures, and in [8], which presents a method to optimize structures at finite deformations using the finite element program ANSYS.
In a series of papers, we contributed to the topology optimization problem by making use of thermodynamic extremal principles, see e.g., [17]. The resultant equations transform the optimization problem into an evolutionary problem, cf. also [23,24]. A comparison of the thermodynamic-principle-based topology optimization to SIMP is given in [19]. An advantage of this approach is its origin in material modeling, which directly makes it possible to include material non-linearities in the optimization problem, as for fiber-reinforced materials in [18] and tension/compression hybrid material systems such as steel/concrete in [11]. There, efficient regularization by gradient enhancement has been employed in combination with a fast numerical treatment, see [17].
A disadvantage of the topology optimization approaches based on thermodynamic principles presented so far in the literature is that an extension to hyperelastic structures is not yet available. For generalization, we now present a more fundamental approach to thermodynamic topology optimization which is indeed also valid for large deformations. We therefore start with some remarks on the previous formulation of the thermodynamic topology optimization and identify the related problems. We then present a generalized approach in combination with a novel representation of the relative density. This allows us to compute the Lagrange parameter controlling the structure volume analytically. Afterwards, we discuss the numerical treatment and apply our model to Neo-Hookean structures. We demonstrate the validity of our model based on the Messerschmitt-Bölkow-Blohm (MBB) beam problem and investigate the problem of buckling structures. To demonstrate the robustness of our model, we employ an arc-length control which allows topology optimization of the MBB beam for even unrealistically large deformations. Here, remarkably different structures are found to be optimal compared to results obtained for small deformations. Furthermore, we investigate a complex three-dimensional cantilever problem which indicates that strong tension/compression asymmetries are inherently included when optimizing hyperelastic structures.

Preliminaries to thermodynamic topology optimization
In a series of previous papers, we derived a novel approach to topology optimization. It was based on variational strategies that are usually applied to material modeling [17], such as Hamilton's principle, cf. [13,14]. Since the respective strategies are well known from material modeling and trace back to the thermodynamics and mechanics of materials, we referred to these approaches as thermodynamic topology optimization. Note that this does not imply that the evolution of the optimization process itself follows a real physical, thermodynamic process. It is rather the thermodynamic principles which are exploited to derive an efficient framework for topology optimization. A detailed comparison of these strategies with the SIMP approach has been presented in [19].
Hamilton's principle states that the sum of the Gibbs energy G and the dissipation-related work D_y should become stationary. This can be recast as the variational principle

δG[u, y](δu, δy) + δD_y[y](δy) = 0  ∀ δu, δy   (1)

with the Gibbs energy

G[u, y] := ∫_Ω Ψ dV − ∫_Ω b · u dV − ∫_∂Ω t · u dA .   (2)

Herein, δu and δy are the test functions associated with the primal variables, which are the displacements u and the internal variables y. Whereas the displacements are sought in the standard Sobolev space H¹, the space for the internal variables depends on the specific choice of the functionals. Here, we assume y to be a vectorial quantity. In case y has other dimensions, i.e., is scalar- or matrix-valued, the related scalar products have to be adapted. For instance, isotropic damage models usually make use of a scalar damage variable as internal variable; plasticity models, on the other hand, consider the second-order tensor of plastic strains as internal variable. The quantity Ψ denotes the strain energy density defined per reference unit volume. The external forces can be split into the body forces b and the traction vector t. The volume of the body is denoted by Ω with its surface ∂Ω. The rate is indicated by dy/dt =: ẏ. The work due to dissipative processes is defined as

D_y := ∫_Ω ∫_t p_y^diss · ẏ dt̃ dV   (3)

with the non-conservative thermodynamic forces p_y^diss, which are assumed to be invariant with respect to variations of y, i.e., δp_y^diss = 0, such that

δD_y[y](δy) = ∫_Ω p_y^diss · δy dV .   (4)

The non-conservative thermodynamic forces can be derived from a dissipation function Δ^diss_y via the relation p_y^diss = ∂Δ^diss_y/∂ẏ. It is also referred to as dissipation functional or dissipation distance when discretized in time, cf. [6,31,32]. The sole constraint on formulating a dissipation potential is Δ^diss_y ≥ 0, which is fulfilled, e.g., by functions that are homogeneous of order one or two.
Consequently, for Δ^diss_y = r |ẏ|, rate-independent processes can be modeled, with r being the yield criterion that indicates a threshold value separating elastic and inelastic processes, e.g., the yield stress in plasticity (r = σ_y). A function that is homogeneous of order two, i.e., Δ^diss_y = η |ẏ|²/2, can be used for modeling rate-dependent processes with the viscosity η. Then, Hamilton's principle reads

δG[u, y](δu, δy) + ∫_Ω p_y^diss · δy dV = 0  ∀ δu, δy .   (5)

Examples of how this principle can be applied to material modeling, where internal variables might be plastic strains, damage parameters or volume fractions of crystallographic phases, are given, e.g., in [21,45].
The key idea of thermodynamic topology optimization was to use the very same variational statement (5) for structural optimization as has been done for material modeling. To this end, the strain energy density was replaced by Ψ = ρ³ Ψ₀, where the exponent of 3 for the relative density ρ ∈ ]0, 1] was adopted from classical optimization approaches, i.e., SIMP. Consequently, maximum stiffness is obtained for ρ = 1. Here, the density ρ ← y is interpreted as an internal variable whose adaption to external conditions is described in terms of an evolution equation. The strain energy density of the full material is denoted by Ψ₀. The evolution of the distribution of the relative density was accounted for by a dissipation function Δ^diss := η ρ̇²/2 with some (numerical) viscosity η. Application of Hamilton's principle then yields an evolution equation for the relative density of the form

ρ̇ = −(1/η) ∂Ψ/∂ρ = −(3/η) ρ² Ψ₀ .   (6)

Inspection of this equation revealed that the relative density is reduced most at those spatial coordinates with high loading, i.e., large values of Ψ₀, which, in turn, reduces the global stiffness. This is a typical characteristic of damage models and, obviously, stands in contrast to the aim of topology optimization: instead of a model for topology optimization, a damage model results from the direct application of Hamilton's principle. Therefore, a reformulation of the variational equations was necessary in order to obtain evolution equations of the appropriate form. With y → ρ, it reads as follows:

δ_u G = 0  ∀ δu ,   η ρ̇ = −∂Ψ(σ, ρ)/∂ρ .   (7)

The strain energy density was used in the form proposed also by SIMP, reading Ψ(ε, ρ) = ρ³ Ψ₀(ε), for the derivation of the balance law of linear momentum, (7)₁; however, for the evolution equation of the relative density in (7)₂, Hooke's law σ = ρ³ E₀ · ε had been inserted with the elastic constants of the full material E₀.
This results in the strain energy density Ψ(σ, ρ) = ρ⁻³ Ψ₀*(σ), with Ψ₀*(σ) denoting the stress-based energy of the full material, which fulfills the purpose of topology optimization: the relative density is locally increased most at spatial coordinates with high values of the strain energy density. The benefit of this thermodynamically motivated procedure was the inclusion of material non-linearities, of which examples can be found in [11,18]. Furthermore, the classical optimization problem is reformulated as an evolutionary problem. Therefore, no (additional) optimization procedures are necessary, and implementations in any finite element code via user elements or user materials are possible. The differential equation itself serves as the update scheme. Although this modeling principle is sound in the context of small deformations, a direct application to hyperelastic materials, i.e., in the context of large deformations, is not possible since an analytic inversion of the constitutive relation between stress and strain measures is not available there. We therefore propose a novel variational model for topology optimization, inspired by (1).
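The opposite tendencies of the two small-strain formulations discussed above can be checked with a toy computation (made-up numbers; `psi0` stands in for the respective full-material energy, `eta` is the numerical viscosity):

```python
eta = 1.0
psi0 = 2.0  # stand-in value for the full-material energy at one material point

def rho_dot_strain_based(rho):
    # direct application of Hamilton's principle: eta * rho_dot = -3 rho^2 psi0
    return -3.0 * rho**2 * psi0 / eta

def rho_dot_stress_based(rho):
    # reformulated, stress-based energy rho^-3 psi0: eta * rho_dot = 3 rho^-4 psi0
    return 3.0 * rho**(-4) * psi0 / eta

# At a highly loaded point, the first law removes material (damage-like
# behavior), whereas the second one adds material (optimization-like):
assert rho_dot_strain_based(0.5) < 0.0
assert rho_dot_stress_based(0.5) > 0.0
```

This sign flip is exactly the "artificial procedure" that the generalized formulation below renders unnecessary.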

A generalized approach to variational topology optimization
The previous formulation already indicated central benefits of a variational setting: the optimization problem can be reformulated as an evolutionary problem for which no optimization routines are required, and the evolutionary character of the resultant model for topology optimization allows for an immediate consideration of material microstructure evolution without using an external mathematical optimizer. Thus, a generalized approach to thermodynamic topology optimization shall be formulated that can also be applied to hyperelastic structures undergoing large deformations. An appropriate model for topology optimization is supposed to add mass where the local mechanical loads are high and, in turn, remove mass where the local mechanical loads are low. The intensity of the mechanical loads is measured by the strain energy density, which enters the Gibbs energy serving as global measure. Furthermore, the rearrangement of relative density in terms of local and convective rates contributes to the energetic balance. Since the rearrangement partly consumes the stored energy, it is the difference between the Gibbs energy plus dissipation-related work on the one hand and the work related to rearrangement on the other hand that is assumed to be stationary. Thereby, the goal of coupling the local rearrangement of relative density to the intensity of the mechanical loads is achieved. Constraints are included by the functional C. These considerations can be recast in the Hamilton functional for both microstructure evolution and topology optimization as

H[u, y] := G[u, y] + D_α[α] − ( D_χ[χ] + F[χ] ) − C[χ] → stat   (8)

where the set of internal variables y = α ∪ χ is decomposed into two parts: variables that account for the physical behavior, i.e., that are related to physical microstructure evolution, are indicated by α; the variable that is used for the topology optimization is indicated by χ and referred to as density variable. It is worth mentioning that we do not formulate the variational approach directly in terms of the relative density ρ; instead, we use the density variable χ.
This enables us to choose a suitable mapping ρ = ρ(χ) such that constraints on the relative density can be fulfilled by construction of ρ. A specification is provided in Sect. 3.2.
A measure of the dissipation-related work due to local rearrangement is given by the part of D_y that is related to the topology, i.e., D_χ. The energy due to convective rearrangement is measured by the functional F. Thus, the total energy due to rearrangement is given by D_χ + F, and stationarity of the Hamilton functional yields the variational balance condition

δG[u, y](δu, δα, δχ) + δD_α[α](δα) − δD_χ[χ](δχ) − δF[χ](δχ) − δC[χ](δχ) = 0  ∀ δu, δα, δχ   (11)

with δD_α[α](δα) = ∫_Ω p_α^diss · δα dV, cf. (4). This variational balance condition comprises microstructure evolution and topology optimization. Thus, the stationarity conditions enable us to derive the balance of linear momentum and the evolution equations for dissipative materials along with the variational condition for topology optimization:

δ_u G = 0  ∀ δu ,
δ_α G + ∫_Ω p_α^diss · δα dV = 0  ∀ δα ,   (12)
δ_χ G − δ_χ D_χ − δ_χ F − δ_χ C = 0  ∀ δχ .

Here, we used p_α^diss = ∂Δ^diss_α(α̇)/∂α̇. The displacements are sought in H¹; the spaces for the internal variables, i.e., α and χ, still depend on the specific choice of the functionals. Again, dissipative effects of the material are modeled by the internal variable α. This formulation is more general than the previous equations used for thermodynamic topology optimization. However, for the case of small deformations, the novel formulation yields evolution equations identical to the classical form. Examples of thermodynamic topology optimization including dissipative material behavior are given for tension/compression hybrid structures in [11] and for anisotropic materials in [18]. In the current work, we restrict ourselves to non-linear elastic materials, such that (12)₂ is not considered here.

Strain energy density and balance of linear momentum
Let us begin with the evaluation of (12)₁, for which we need to define the strain energy density in (2) to specify the Gibbs energy. In analogy to the SIMP approach in topology optimization, we use a multiplicative split of the energy into its elastic contribution Ψ₀ and a non-linear scaling. It thus reads

Ψ(C, χ) = ρ(χ)³ Ψ₀(C)   (13)

with the right Cauchy-Green tensor C = Fᵀ F and the deformation gradient

F = ∂x/∂X .   (14)

Here, the spatial coordinates in the current configuration are indicated by x while those of the reference configuration are given by x(t = t₀) = x₀ =: X. The balance of linear momentum results from the condition δ_u G = 0:

∫_Ω ∂Ψ/∂C : δC dV − ∫_Ω b · δu dV − ∫_∂Ω t · δu dA = 0 .   (15)

Note that we evaluate the integrals with respect to the reference configuration. Following the method of Coleman and Noll, we identify the 2nd Piola-Kirchhoff stress tensor S = 2 ∂Ψ/∂C. The weak form of the balance of linear momentum (15) thus reads

∫_Ω (1/2) S : δC dV = ∫_Ω b · δu dV + ∫_∂Ω t · δu dA .   (16)

Representation of the relative density
Before we evaluate (12)₃, let us introduce a representation of the relative density ρ. Several constraints have to be fulfilled by models for topology optimization. For instance, the relative density has to be bounded from above and below, reading

ρ_min ≤ ρ ≤ 1   (17)

with the minimal relative density ρ_min > 0, which is introduced for numerical reasons. Usual values are ρ_min < 10⁻⁶. The bounds can be introduced by means of Kuhn-Tucker parameters into the constraint functional C. However, they drastically complicate the numerical treatment since the constraint has to be fulfilled for each discretization point individually. Therefore, the goal is to avoid the complex constraint of bounds for the relative density. To this end, we make use of a mapping ρ = ρ(χ) which fulfills (17) directly. This mapping represents the relative density and is chosen as the sigmoid function

ρ(χ) = ρ_min + (1 − ρ_min) / (1 + e^(−χ))   (18)

where the quantity χ ∈ ]−∞, +∞[ is termed density variable. The sigmoid function fulfills by construction ρ(χ) ∈ ]0, 1[, with lim_{χ→−∞} ρ(χ) = ρ_min and lim_{χ→+∞} ρ(χ) = 1.

Generalized approach for thermodynamic topology optimization
We make use of the variational balance condition in (12)₃ and specify the individual functionals as follows: the dissipation-related work due to local rearrangement is measured by

D_χ := ∫_Ω ∫_t p^diss χ̇ dt̃ dV   (19)

with the variation

δD_χ[χ](δχ) = ∫_Ω p^diss δχ dV ,   (20)

and the energy due to convective rearrangement is measured by

F := (β/2) ∫_Ω ∇ρ · ∇ρ dV   (21)

with the variation

δF[χ](δχ) = β ∫_Ω ∇ρ · ∇( (∂ρ/∂χ) δχ ) dV .   (22)

It is worth mentioning already that an appropriate choice of the parameter β in the term for convective rearrangement F controls the member size, i.e., the minimum truss thickness. Thus, the topology optimization problem is regularized, and for β = ch² with some constant c > 0, the checkerboard phenomenon is not present; here, the finite element mesh size is given by h. The same choices for the respective terms have been made for thermodynamic topology optimization in the context of the linearized theory in [17].
Thanks to the representation of ρ in Sect. 3.2, the only constraint that remains to be considered by the model for topology optimization is mass conservation, which is given by

∫_Ω ρ(χ) dV = ρ̄ Ω   (23)

with the prescribed relative structure density ρ̄. The initial condition for the density function is then ρ₀ = ρ(χ₀) = ρ̄. The constraint is taken into account by introducing a Lagrange parameter λ. Thus, the constraint functional reads

C := λ ( ∫_Ω ρ(χ) dV − ρ̄ Ω ) .   (24)

The variational balance condition in analogy to Hamilton's principle according to (12)₃ hence reads

δ_χ G − ∫_Ω p^diss δχ dV − δ_χ F − δ_χ C = 0  ∀ δχ   (25)

where we used the relation p^diss = ∂Δ^diss/∂χ̇ and δ_χ p^diss = 0, cf. (4). Computing the Gâteaux derivatives results in

∫_Ω ( ∂Ψ/∂χ − η χ̇ − λ ∂ρ/∂χ ) δχ dV − β ∫_Ω ∇ρ · ∇( (∂ρ/∂χ) δχ ) dV = 0  ∀ δχ   (26)

when Δ^diss = η χ̇²/2 has been chosen. Integration by parts of the gradient term and consideration of the arbitrariness of δχ gives

η χ̇ = ∂Ψ/∂χ + β (∂ρ/∂χ) ∆ρ − λ ∂ρ/∂χ  ∀ x ∈ Ω ,
n₀ · ∇ρ = 0  ∀ x ∈ ∂Ω ,   (27)

where (27)₂ implies zero mass fluxes across the boundaries. It is worth mentioning that (27)₂ only holds on parts of the boundary where no Dirichlet boundary conditions are employed, i.e., ∂Ω_ρ = ∅. In case of a prescribed density at the boundary, e.g., due to a connection to an existing construction part, the Neumann condition (27)₂ transforms to a Dirichlet condition ρ = ρ̂ ∀ x ∈ ∂Ω_ρ, such that n₀ · ∇ρ = 0 does not apply on x ∈ ∂Ω_ρ. In (27)₁, the quantity p := ∂Ψ/∂χ is defined in analogy to thermodynamic driving forces and, due to the difference structure of the Hamilton functional, serves as source for the density evolution. It reads

p = 3 ρ(χ)² (∂ρ/∂χ) Ψ₀(C)   (28)

with Ψ = ρ(χ)³ Ψ₀(C). For computing the Lagrange parameter λ, we multiply (27)₁ by ∂ρ/∂χ and integrate over the volume Ω. This gives

η ∫_Ω (∂ρ/∂χ) χ̇ dV = ∫_Ω (∂ρ/∂χ) ( p + β (∂ρ/∂χ) ∆ρ ) dV − λ ∫_Ω (∂ρ/∂χ)² dV   (29)

whose left-hand side equals η d/dt ∫_Ω ρ dV and thus vanishes due to mass conservation (23), from which we find

λ = ∫_Ω (∂ρ/∂χ) ( p + β (∂ρ/∂χ) ∆ρ ) dV / ∫_Ω (∂ρ/∂χ)² dV .   (30)

Concluding, the field equations (16) and (27) in combination with (30) describe the evolution of the displacements u and the density variable χ in time and space. The relative density ρ can be computed according to (18). In the following, we present a possible numerical treatment.
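The structure of this analytic Lagrange parameter can be sketched in discrete form as follows (an assumed per-element rate η χ̇ = p + g − λ d, with driving force p, regularization term g, and d = ∂ρ/∂χ; equal element volumes for brevity, all input arrays hypothetical):

```python
def lagrange_parameter(p, g, d):
    # lambda chosen such that the total mass rate sum_e d_e * chi_dot_e vanishes
    num = sum(p_e * d_e + g_e * d_e for p_e, g_e, d_e in zip(p, g, d))
    den = sum(d_e**2 for d_e in d)
    return num / den

p = [1.0, 2.0, 0.5]   # driving forces per element
g = [0.1, -0.2, 0.0]  # regularization contributions per element
d = [0.2, 0.25, 0.1]  # d rho / d chi per element

lam = lagrange_parameter(p, g, d)
mass_rate = sum(d_e * (p_e + g_e - lam * d_e) for p_e, g_e, d_e in zip(p, g, d))
assert abs(mass_rate) < 1e-12  # volume constraint preserved by construction
```

No iterative search for λ is needed: a single weighted average enforces the volume constraint exactly, which is the simplification the analytic treatment buys.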

Numerical treatment
Summarizing, the following set of equations has to be solved for the displacement field u and the density variable χ:

∫_Ω (1/2) S : δC dV − ∫_Ω b · δu dV − ∫_∂Ω t · δu dA = 0  ∀ δu ,
η χ̇ = p + β (∂ρ/∂χ) ∆ρ − λ ∂ρ/∂χ  ∀ x ∈ Ω ,   (31)

with the Dirichlet and Neumann boundary conditions u = ū ∀ x ∈ ∂Ω_u and F S n₀ = t̄ ∀ x ∈ ∂Ω_t, respectively, and ∂Ω = ∂Ω_u ∪ ∂Ω_t. The quantity n₀ is the normal vector on ∂Ω in the reference configuration. The evolution of the topology is given in terms of (31)₂, which is a partial differential equation in space and time. The initial condition is chosen as χ = χ₀ ∀ x ∈ Ω. This naive choice corresponds to the usual homogeneously gray initial distribution, which is also used by other methods for topology optimization. However, we show by numerical investigations that remarkably different optimal topologies evolve only when strong inhomogeneities are chosen as the initial distribution, see Sect. 5. According to (30), the equations are complemented by the Lagrange multiplier

λ = ∫_Ω (∂ρ/∂χ) ( p + β (∂ρ/∂χ) ∆ρ ) dV / ∫_Ω (∂ρ/∂χ)² dV .   (32)

The field equations in (31) constitute a coupled system of partial differential equations (PDEs). For standard systems of coupled PDEs, the finite element method can be employed for the numerical solution. However, the Lagrange multiplier prevents a direct application of such a strategy in the present case: since the evaluation of λ demands knowledge of the global fields of u and χ, a staggered solution scheme appears advantageous. In a first step, the updated displacement field is computed from (31)₁ while keeping the density variable fixed; in a second step, the density variable is updated using (31)₂. This procedure is repeated until convergence. For the weak form of the balance of linear momentum (31)₁, the finite element method can be employed. For the field equation of the density variable (31)₂, we make use of a modified finite difference scheme that is also valid for irregular and unstructured grids and has been referred to as the neighbored element method in, e.g., [17,21,42]. Details of the numerical treatment are given in the following subsections.
Of course, a different numerical treatment could be employed. For instance, using the weak form of (31)₂, as presented in (26), would enable us to solve it along with (31)₁ in a monolithic manner, in which no operator split is necessary. A similar procedure has been presented in [20]. Unfortunately, the numerical effort in terms of computation time is then higher by approximately one order of magnitude. Consequently, we prefer the staggered update scheme which, as shown later, is quite fast, since the extra effort for the optimization is small compared to the effort needed to compute the displacement field.

Balance of linear momentum
The weak form of the balance of linear momentum (16) is, as usual, numerically solved by means of the finite element method. To this end, the displacements are approximated by

u^(k) ≈ Σ_i N_i û_i^(k)   (33)

where the shape functions are denoted by N and the nodal displacements in the spatial direction k are indicated by û^(k). Then, the weak form of the balance of linear momentum (16) can be reformulated in terms of the approximations as δû · r = 0 with the global column matrix of nodal virtual displacements δû and the element residual

r_e = ∫_{Ω_e} B_eᵀ s dV .   (34)

Herein, r is the global residual column matrix that results from assembling, and r_e is the residual column matrix of each finite element (e). The quantity s denotes the matrix notation of the 2nd Piola-Kirchhoff stress tensor and B_e an operator matrix including the derivatives of the shape functions. When the linear terms including body forces and the traction vector are not taken into account and Dirichlet boundary conditions are included in the variation, the nodal displacements are found from

r(û) = 0 .   (35)

Details of the derivation are deferred to Appendix A. Note that both B_e and s depend on the nodal displacements, such that (35) constitutes a system of nonlinear algebraic equations for the global nodal displacements û. Due to this nonlinearity, the Newton method is employed to solve (35). To this end, the tangent in each element is mandatory. Here, û_e denotes the column matrix of all nodal displacements in element (e). The tangent is given as

k_e = ∂r_e/∂û_e = ∫_{Ω_e} ( B_eᵀ E_e B_e + G_e ) dV   (37)

with the material tangent matrix E_e = 2 ∂s/∂c and the matrix form c of the right Cauchy-Green tensor C. The geometrical part of the derivative is defined as G_e := (∂B_eᵀ/∂û_e) s, all in element (e). Details are given in Appendices B and C.
Note that both the residual r e and its derivative in (37) have to be assembled to obtain the residual and its tangent for an entire finite element analysis.
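The Newton iteration itself is standard; a scalar stand-in may clarify the update loop (in the FE setting, the residual and tangent are the assembled matrix-valued quantities described above):

```python
def newton(r, k, u0, tol=1e-10, max_iter=50):
    # solve r(u) = 0 with tangent k(u) = dr/du
    u = u0
    for _ in range(max_iter):
        res = r(u)
        if abs(res) < tol:
            return u
        u -= res / k(u)  # scalar analogue of solving k * du = -r
    raise RuntimeError("Newton did not converge")

# toy nonlinear "residual" r(u) = u^3 - 8 with tangent k(u) = 3 u^2, root u = 2:
u = newton(lambda u: u**3 - 8.0, lambda u: 3.0 * u**2, u0=1.0)
assert abs(u - 2.0) < 1e-8
```

In the staggered scheme, this inner Newton loop runs with the density variable frozen, so the displacement solve is structurally identical to a plain hyperelastic analysis.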

Field equation for the relative density
We approximate the relative density field with one value per finite element (e), i.e., only one value for the relative density is used for all integration points of each finite element. Thus, the relative density is approximated by element-wise constant shape functions. For the solution of (31)₂, we make use of the recently introduced neighbored element method (NEM). It was developed to discretize the Laplace operator in a model for topology optimization in [17]. However, it could also be applied successfully to a damage model in [21], and a generalized formulation was presented in [42]. For convenience, we recall the principal ideas of the NEM here.
The Laplace operator can be computed on regular and structured meshes by the finite difference method (FDM). Unfortunately, the standard FDM fails to compute sufficiently accurate values for the Laplace operator on meshes with even small irregularities. Therefore, the NEM makes use of a Taylor series expansion of order two to approximate the relative density between the midpoints of neighbored finite elements. The set of n^(e) neighbored elements of element (e) is given by N^(e), such that N^(e)_k returns the global element number of the kth neighbor. For elements that are located at the boundary of the design space, mirror ("ghost") elements are introduced to extend N^(e) accordingly. In analogy to the FDM, we set ρ_mirrored = ρ^(e), which fulfills (27)₂ identically. The Taylor expansion for all neighbored elements (k) with the expansion point at the midpoint of the investigated element (e) can be written as

ρ^(N^(e)_k) = ρ^(e) + (∂ρ/∂x_i)|_(e) Δx_i^(k) + (1/2) (∂²ρ/∂x_i ∂x_j)|_(e) Δx_i^(k) Δx_j^(k)   (38)

with summation over the spatial indices i, j and the distance vector Δx^(k) between the element midpoints. Herein, ρ^(N^(e)_k) denotes the relative density at the midpoint of a neighboring element (k) with the global element number N^(e)_k; ρ^(e) is the relative density at the midpoint of element (e).
The derivative terms of the Taylor series are collected in the vector

ρ_∂^(e) := [ ∂ρ/∂x₁, ∂ρ/∂x₂, ∂ρ/∂x₃, ∂²ρ/∂x₁∂x₂, ∂²ρ/∂x₁∂x₃, ∂²ρ/∂x₂∂x₃, ∂²ρ/∂x₁², ∂²ρ/∂x₂², ∂²ρ/∂x₃² ]ᵀ .   (39)

Finally, the matrix D^(e) ∈ R^(n×d) stores the distances between the midpoint of the element of interest (e) and those of all neighbored elements k ∈ N^(e) with varying powers. The value of d depends on the spatial dimension; for 3D we find d = 9 (in 2D, d = 5). The kth row of D^(e) is given by

D^(e)_k = [ Δx₁, Δx₂, Δx₃, Δx₁Δx₂, Δx₁Δx₃, Δx₂Δx₃, Δx₁²/2, Δx₂²/2, Δx₃²/2 ] .   (40)

Similarly to the definition of the distances, let us introduce the column matrix of density differences Δρ^(e) ∈ Rⁿ for element (e), whose kth component reads

Δρ^(e)_k := ρ^(N^(e)_k) − ρ^(e) .   (41)

Then, the Taylor series expansion up to order two can shortly be represented by

Δρ^(e) = D^(e) ρ_∂^(e) .   (42)

Due to the known values of the density variable in all elements, the differences Δρ^(e) are known. Furthermore, the distance matrix D^(e) is fixed for fixed grids. Consequently, we can solve (42) for the vector ρ_∂^(e) that stores the different derivatives according to (39). The value of the Laplace operator can subsequently be determined from ρ_∂^(e) by summation of the last three components. To obtain a convenient notation, we introduce

a := [ 0, 0, 0, 0, 0, 0, 1, 1, 1 ]ᵀ   (43)

such that ∆ρ^(e) = a · ρ_∂^(e). A necessary condition for the solvability of (42) is n ≥ d, i.e., the number of neighbored elements has to be at least as large as the number of derivative terms d. The set of neighbored elements N^(e) can be chosen in two different ways. One possibility is to select all neighbored elements that share the same vertices (the stencil for structured, regular finite element meshes) plus three diagonal elements (two in 2D). For sufficiently regular meshes, D^(e) has full rank and is regular, such that a solution to (42) exists. For this choice of N^(e), n = d holds and we can invert (42) directly. We then obtain for the Laplace operator

∆ρ^(e) = a · ( D^(e) )⁻¹ Δρ^(e) .   (44)

The other option is to simply collect all neighbored elements, i.e., all elements with the same vertices and all diagonal elements. Then, no special selection of the surrounding elements must be performed.
For this general case, n > d holds, but it was shown in [42] that the pseudo-inverse can be taken, such that the Laplace operator reads

∆ρ^(e) = a · ( D^(e)ᵀ D^(e) )⁻¹ D^(e)ᵀ Δρ^(e) .   (45)

Here, we assumed full rank of all D^(e), which is fulfilled for sufficiently undistorted finite elements. Note that we evaluate all equations in the reference configuration. Consequently, all D^(e) maintain their rank throughout the entire computation. For a unified notation, we introduce

∆ρ^(e) = l^(e) · Δρ^(e)   (46)

with

l^(e) := aᵀ ( D^(e)ᵀ D^(e) )⁻¹ D^(e)ᵀ ,   (47)

which reduces to aᵀ (D^(e))⁻¹ for the first choice of neighbors. We emphasize that for fixed finite element meshes, i.e., when no adaptive finite element mesh is used, the values of each l^(e) can be computed once in advance of the respective simulation.
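A minimal 2D sketch of this construction (the row layout of D is assumed here as [Δx, Δy, ΔxΔy, Δx²/2, Δy²/2]; a least-squares solve plays the role of the pseudo-inverse):

```python
import numpy as np

def nem_laplacian(x0, neighbors, rho):
    # recover the Laplacian at the midpoint x0 from midpoint values of the
    # neighbors via a second-order Taylor expansion (least-squares fit);
    # rho maps a midpoint (tuple) to the density value there
    D = np.array([[dx, dy, dx * dy, 0.5 * dx**2, 0.5 * dy**2]
                  for (dx, dy) in ((xn[0] - x0[0], xn[1] - x0[1])
                                   for xn in neighbors)])
    drho = np.array([rho(xn) - rho(x0) for xn in neighbors])
    coeffs, *_ = np.linalg.lstsq(D, drho, rcond=None)
    return coeffs[3] + coeffs[4]  # Laplacian = rho_xx + rho_yy

# verification on rho = x^2 + y^2 (exact Laplacian 4) with a slightly
# irregular 8-neighbor stencil around the origin:
rho = lambda p: p[0]**2 + p[1]**2
nbrs = [(1.0, 0.0), (-1.1, 0.0), (0.0, 0.9), (0.0, -1.0),
        (1.0, 1.0), (-1.0, 1.1), (1.0, -1.0), (-0.9, -1.0)]
assert abs(nem_laplacian((0.0, 0.0), nbrs, rho) - 4.0) < 1e-9
```

Since the second-order Taylor expansion is exact for quadratic fields, the scheme recovers the Laplacian exactly in this test, irregular stencil included; for general fields it is a second-order approximation.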

Time integration
The evolution equation and the Lagrange parameter in (31)₂ and (32), respectively, need to be discretized for the numerical investigation. For simplicity, we perform an explicit Euler time integration along with the neighbored element method for the approximation of the Laplace operator ∆ρ. The time increment is given by Δt := t_{n+1} − t_n, where n + 1 indicates the current time step and n the previous one. Of course, an implicit time integration scheme would be numerically beneficial. However, our intention is to provide an optimization approach that can be programmed into an existing finite element code with a minimum of extra effort and without external libraries, e.g., for the solution of a system of coupled nonlinear equations. We therefore make use of the most simple integration method and accept the related numerical disadvantages. An appropriate choice of the viscosity allows for a smooth numerical behavior, which was observed in all numerical experiments we performed.

The target volume ρ̄ is conserved by use of the Lagrange parameter λ_n. However, the representation of the relative density ρ_{n+1} = ρ(χ_{n+1}) possesses only a small sensitivity with respect to changes of χ_{n+1} when ρ_{n+1} ≈ 0 or ρ_{n+1} ≈ 1. Therefore, the evolutionary update of the density variable χ, and thus of the relative density ρ, might be influenced by truncation errors that accumulate during the iteration process. We thus make use of the scaling presented in Algorithm 1 to reduce the impact of truncation errors. The unscaled density variable is termed χ̃ and obtained from the discretized form of (31)₂, reading

χ̃^(e)_{n+1} = χ^(e)_n + (Δt/η) ( p̃^(e)_n + β (∂ρ/∂χ)^(e)_n ∆ρ^(e)_n − λ_n (∂ρ/∂χ)^(e)_n )   (49)

with the discretized form of (32)

λ_n = Σ_e (∂ρ/∂χ)^(e)_n ( p̃^(e)_n + β (∂ρ/∂χ)^(e)_n ∆ρ^(e)_n ) / Σ_e ( (∂ρ/∂χ)^(e)_n )² .   (50)

Note that the precise numerical value of the driving forces is of minor importance; it is their relative fraction that determines whether the relative density in a specific finite element (e) will increase or decrease.
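The explicit Euler update of the density variable can be sketched as follows (hypothetical per-element arrays: p the scaled driving forces, g the regularization contributions, d = ∂ρ/∂χ):

```python
def update_chi(chi, p, g, d, lam, dt=0.1, eta=1.0):
    # explicit Euler step: chi_{n+1} = chi_n + dt/eta * (p + g - lam * d),
    # evaluated element-wise
    return [c + dt / eta * (p_e + g_e - lam * d_e)
            for c, p_e, g_e, d_e in zip(chi, p, g, d)]

chi_new = update_chi(chi=[0.0, 0.0], p=[1.0, -1.0], g=[0.0, 0.0],
                     d=[0.25, 0.25], lam=0.0)
assert chi_new == [0.1, -0.1]
```

No nonlinear solve is needed for the density update, which is why the extra cost of the optimization step is small compared to the displacement solve.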
It is thus convenient to scale the driving forces by introducing p̃ and to use these to update the density variable as presented in (49). The metric based on ρ(e)_n ensures that only the driving forces of the "active" gray states, i.e., ρ ∈ ]0, 1[, are considered for scaling. This procedure reduces strong local effects, e.g., close to boundary conditions, cf. [17].
As mentioned, truncation errors might accumulate over the time steps if χ̃ is used directly to compute the updated value of the relative density. Consequently, the unscaled values χ̃(e) are taken to compute the unscaled relative density ρ̃(e), which might violate the volume constraint, i.e., the unscaled relative density does not cover the prescribed total structure volume. We therefore compute a "trial" density ρ_t which fulfills the constraint exactly. However, this procedure might result in local values of the trial density that exceed the admissible interval, i.e., it might be that ρ(k)_t > 1 for some elements (k). We therefore perform a corrector step and set for each element (e) the bounded ρ(e)_b and unbounded ρ(e)_u relative densities accordingly, where tol = 0.999 is chosen as the maximum relative density for the numerical calculations presented later.
In a final step, the scaled relative density for the current time step is found by combining the bounded values with the unbounded values, whereas the latter are weighted with respect to the structure volume that remains after subtracting the volume of all elements that locally contain the maximum value of the relative density. This procedure ensures that both constraints, prescribed total structure volume and admissible bounds, are fulfilled by the scaled relative density ρ(e)_{n+1}. A summary of the scaling process is given in Algorithm 1.
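The scaling and corrector steps described above can be sketched as follows. This is a minimal sketch, assuming a sigmoid mapping ρ(χ) and a single bounding pass; the exact metric and mapping used in the paper are not reproduced, and the function name and interface are hypothetical.

```python
import numpy as np

def scaled_density_update(chi_tilde, vol, rho_bar, tol=0.999):
    """Sketch of the scaling/corrector step of Algorithm 1 as described.

    chi_tilde : unscaled density variable per element (hypothetical values)
    vol       : element volumes
    rho_bar   : prescribed mean relative density
    A sigmoid rho(chi) is assumed; the paper's exact mapping may differ.
    """
    rho_tilde = 1.0 / (1.0 + np.exp(-chi_tilde))  # assumed sigmoid rho(chi)
    V = vol.sum()
    # "trial" density: rescale so the total-volume constraint holds exactly
    rho_t = rho_tilde * rho_bar * V / (rho_tilde * vol).sum()
    # corrector: bound densities that exceed the admissible maximum
    bounded = rho_t > tol
    rho = np.where(bounded, tol, rho_t)
    # redistribute the remaining structure volume over the unbounded elements
    V_rest = rho_bar * V - (rho[bounded] * vol[bounded]).sum()
    rho[~bounded] *= V_rest / (rho[~bounded] * vol[~bounded]).sum()
    return rho
```

Both constraints are then fulfilled: the total structure volume is matched exactly and no element exceeds tol. A repeated corrector pass would be needed if the redistribution itself pushed further elements above the bound.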
Before closing this section on the time integration and update scheme of the density variable, let us present the discretized form of the driving forces, which are evaluated locally for each element (e). Note that only one value of the density variable is used per finite element. Therefore, we make use of an element-averaged value of the free energy density. Then, Eq. (49) can be written in short form in terms of the total driving force.

Remark The choice of the time increment Δt in combination with the viscosity η controls the convergence speed. The time increment, on the other hand, also controls the load increment. Due to the inherent non-linearities in the context of finite deformations, the maximum load has to be applied within several time steps whose increments have to be limited. Therefore, the time increment cannot be chosen arbitrarily.

Remark The optimal topology has been found when no further update is performed from one iteration step to the next. This can be expressed by χ̇ = 0 ∀x ∈ Ω. Inserting this relation directly into (31)₂ removes the transient term and gives a partial differential equation that only includes spatial derivatives. A solution of this reduced equation would yield the optimal topology without the need for an update scheme in time. However, numerical investigations show that attempts to solve the reduced equation directly fail due to severe numerical instabilities. As a consequence, the time-dependent term adds a viscous contribution and allows for an evolutionary solution scheme which converges smoothly. An analysis, preferably a mathematical one, of why a direct solution is not obtainable would be highly appreciated.

Combined update of the displacement and density field
The update of the density field is given in terms of a transient partial differential equation. Thus, each update step corresponds to a time step during which the boundary conditions of the boundary value problem may change. Due to the explicit Euler integration for the density field, the displacement field can be solved in the standard manner, i.e., no additional nodal unknowns enter the finite element procedure. Furthermore, the stress for the residual and the material derivative for the tangent are evaluated with the elementwise discretized relative density; then, the system of non-linear algebraic equations for the displacement field can be solved without taking adaptions of the density field into account. The entire procedure can be interpreted, when the boundary conditions are fixed, as a staggered approach to solve both field equations for the displacements and the relative density. The algorithmic procedure can be equipped with an exit criterion based on the energy S := r · û stored at the boundary with prescribed displacements, with the reaction forces r at the nodes with prescribed non-zero displacements û: if the increment of stored energy between two time steps is below some tolerance, the solution is converged. The amount of stored energy S is, as usual, identified as a measure for the stiffness of the structure. A flowchart of the numerical treatment is presented in Fig. 1.
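The staggered procedure with the energy-based exit criterion can be sketched as follows. The solver interfaces are hypothetical stand-ins for the FE displacement solve and the density update, not FEAP's API.

```python
def optimize(solve_displacements, update_density, rho0, n_steps=500, tol=1e-8):
    """Staggered scheme as described in the text (sketch).

    solve_displacements(rho) -> (u, reactions, u_hat): user-supplied FE
    solve for fixed density (hypothetical interface).
    update_density(rho, u)   -> rho: one explicit Euler density update.
    Exit criterion: increment of the stored energy S := r . u_hat between
    two time steps below a tolerance.
    """
    rho, S_old = rho0, None
    S = 0.0
    for n in range(n_steps):
        u, r, u_hat = solve_displacements(rho)
        rho = update_density(rho, u)
        S = sum(ri * ui for ri, ui in zip(r, u_hat))  # stored energy S := r . u_hat
        if S_old is not None and abs(S - S_old) < tol:
            break  # topology converged
        S_old = S
    return rho, S
```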

Specification of the considered material model
The method presented here can be used for arbitrary hyperelastic structures, i.e., arbitrary ansatzes for the strain energy density can be used within the presented generalized approach to thermodynamic topology optimization. However, in order to ensure minimizers of the potential energy and to automatically guarantee material stability [35], polyconvex strain energy densities in the sense of [2] should be considered. Extensions to anisotropic materials are found, e.g., in [3,34]. For the numerical examples, we employ a Neo-Hookean strain energy density, where I₁ = tr[C], J = det[F], λ = Eν(1 − ν − 2ν²)⁻¹, and μ = E(2 + 2ν)⁻¹ with Young's modulus E and Poisson's ratio ν, cf. e.g., [48]. The 2nd Piola-Kirchhoff stress tensor S then follows in tensor form and can be transformed, by aid of Jacobi's formula and the identity tensor I, to an explicit expression. The derivative of the 2nd Piola-Kirchhoff stress tensor is then given in component form with k₁ = ρ³λJ² and k₂ = ρ³[λ(J² − 1) − 2μ]. The associated matrix representation of the material tangent E is given in Appendix C.
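A stress evaluation consistent with the stated material parameters can be sketched as follows. The closed-form stress S = ρ³[μ(I − C⁻¹) + (λ/2)(J² − 1)C⁻¹] used below is our reconstruction from the given tangent parameters k₁ = ρ³λJ² and k₂ = ρ³[λ(J² − 1) − 2μ], not an expression quoted from the paper.

```python
import numpy as np

def pk2_stress(F, rho, E=1.0, nu=0.45):
    """2nd Piola-Kirchhoff stress for a density-scaled Neo-Hookean model.

    Assumes S = rho^3 [ mu (I - C^{-1}) + (lam/2)(J^2 - 1) C^{-1} ],
    which differentiates to the stated tangent parameters
    k1 = rho^3 lam J^2 and k2 = rho^3 [lam (J^2 - 1) - 2 mu]
    (a reconstruction, labeled as an assumption).
    """
    lam = E * nu / (1.0 - nu - 2.0 * nu**2)  # lambda = E nu (1 - nu - 2 nu^2)^{-1}
    mu = E / (2.0 + 2.0 * nu)                # mu = E (2 + 2 nu)^{-1}
    C = F.T @ F                               # right Cauchy-Green tensor
    J = np.linalg.det(F)
    Cinv = np.linalg.inv(C)
    I = np.eye(3)
    return rho**3 * (mu * (I - Cinv) + 0.5 * lam * (J**2 - 1.0) * Cinv)

# sanity check: the reference configuration (F = I) is stress-free
print(np.allclose(pk2_stress(np.eye(3), rho=1.0), 0.0))  # True
```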

Preliminaries to the simulation results
We present numerical results for two different boundary value problems: the classical benchmark problem given by the Messerschmitt-Bölkow-Blohm (MBB) beam and a complex cantilever problem that demonstrates the behavior of the optimization model for a truly three-dimensional problem. The material parameters, e.g., Young's modulus and Poisson's ratio of the full material, are the same for all computations. The energy scales proportionally with Young's modulus, for which we can follow the usual choice in topology optimization and set E = 1 MPa. Since we are interested in optimizing structures that behave hyperelastically, e.g., polymers or rubber, we set ν = 0.45. All simulations are obtained by an implementation of the model into FEAP [10], where we use tri-linear hexahedral finite elements. We emphasize that no amplification of the deformed configuration of the structures has been performed, i.e., the shown deformed structures represent the true deformation state. The structures are all obtained by using the iso-volume filter for the relative density ρ in Paraview [1], where the minimum threshold is set to 0.5 unless stated differently; this is the average value between void and full material in a structured finite element mesh. This implies that full material is assumed everywhere the threshold is exceeded and void material is obtained otherwise. Furthermore, all simulations, except for the arc-length controlled ones, are performed with prescribed displacements, which facilitates the generation of comparative numerical results.

Messerschmitt-Bölkow-Blohm (MBB) beam
All necessary data regarding mesh information and model parameters are given for the MBB beam in Table 1. The dimensions, boundary conditions and symmetry plane are plotted in Fig. 2. All simulations are performed as thick, three-dimensional computations; the thickness is given in Table 2.

Displacement-controlled simulations
As a first example, we compute the MBB beam with a maximum displacement of u_max = 0.25 mm, prescribed at the top center point of the beam. The structure resulting from the topology optimization is depicted in its deformed state at the top of Fig. 3. The resultant structure is quite similar to classical optimization results for linear elastic structures: there are no dramatic differences between the results obtained from the linearized and the finite theory. Although the deflection is already quite large (25% of the height), the deformations are not large enough to trigger a distinct tension-compression asymmetry of the optimized structures. However, the chosen Neo-Hookean energy for the hyperelastic material includes non-quadratic terms. It thus might be expected that a different result for the MBB beam is obtained when the displacement points in the opposite direction, which would differ from the results of the linearized strain framework. This difference results from a tension-compression asymmetry in the considered material response. We therefore present the result for u_max = −0.25 mm at the bottom of Fig. 3. However, the differences between the two loading states are limited. This observation is also supported by the center picture in Fig. 3: here, the optimized result in the reference configuration for u_max = −0.25 mm is given at the left-hand side; on the right-hand side, the result for u_max = 0.25 mm is given. Only small deviations are present. As already indicated in the theoretical derivation, cf. (20), the gradient enhancement ensures that the model is well-posed if the parameter β is sufficiently large for a specific element size, i.e., no checkerboarding should be present and numerical results should be independent of the respective finite element mesh. To investigate this point, the result for the MBB beam is presented for two different meshes in Fig.
4: at the left-hand side, a coarse mesh with 2562 nodes forming 1200 elements is used; at the right-hand side, a fine mesh with 15,402 nodes forming 7500 elements is employed; both the side view and an isometric perspective are presented. Here, cubic elements are used, i.e., the element length is identical in all three spatial directions. However, for both quasi-2D results, only one row of finite elements is used; therefore, the boundary value problems differ in thickness. As can be seen from Fig. 4, the optimal structure is nevertheless identical. This shows that (1) the thickness does not influence the result, i.e., the MBB beam is indeed a two-dimensional problem, and (2) the results are mesh-independent. In Fig. 5, the influence of the regularization parameter β is investigated for the MBB beam under tension loading with u_max = 0.25 mm. The values are β = 0.01 MPa mm² (left) and β = 0.08 MPa mm² (right), respectively. The fine finite element mesh with 7500 elements has been used. It is obvious that the choice of β influences two aspects: (1) the minimal member size, i.e., the minimum truss thickness, and (2) the width of the transition zone between full and void material.

We also investigate the influence of the initial distribution of the relative density. To this end, we present in Fig. 6 results for four inhomogeneous initial distributions of the relative density: the initial value is set to ρ₀ = ρ₀,inhom ∀x|y ≤ 0.5 mm for the lower vertical half of the design space and to ρ₀ = ρ̄ − ρ₀,inhom ∀x|y > 0.5 mm for the upper vertical half such that the prescribed total structure volume is maintained. The homogeneous initial condition corresponds to ρ₀,inhom = 0.5ρ̄. The numerical experiments reveal that the impact of an inhomogeneous initial distribution is small even for distinctly inhomogeneous initial distributions (ρ₀,inhom = 0.825ρ̄). However, for even larger values, i.e., ρ₀,inhom = 0.850ρ̄, remarkably different structures are obtained, see the lower rows in Fig. 6.
The maximum stiffnesses after convergence of the respective optimized structures are S_max(ρ₀,inhom = 0.800ρ̄) = 1.56 × 10⁻⁵ Nmm, S_max(ρ₀,inhom = 0.825ρ̄) = 1.53 × 10⁻⁵ Nmm, S_max(ρ₀,inhom = 0.850ρ̄) = 1.27 × 10⁻⁵ Nmm, and S_max(ρ₀,inhom = 0.900ρ̄) = 0.91 × 10⁻⁵ Nmm. These results are computed using the coarse finite element mesh with 1200 elements, for which the stiffness of the optimized topology with homogeneous initial condition is S_max(ρ₀ = ρ̄) = 1.60 × 10⁻⁵ Nmm. This indicates that the homogeneously gray initial distribution of the relative density provides, at least for this example, the best results, since here S_max reaches its highest value. This can be explained by the evolutionary character of the topology update, which does not ensure that the global minimum is found; in contrast, once stuck in a local minimum, the method is not capable of finding lower minima, which is demonstrated by the results with remarkably different topology in Fig. 6.

Performance analysis
Let us investigate the performance of the novel variational approach to topology optimization in this paragraph. To be more precise, we present here the numerical effort and the stiffness measured in terms of the stored energy. All simulations were performed on a system consisting of 40 Intel Xeon CPUs with 2.20 GHz and 64 GB memory. Minimal parallelization was only applied for solving the non-linear system of algebraic equations for the displacements using 4 cores; assembling and updating the density field was performed without any parallelization. On purpose, we did not use more cores or apply parallel assembly in order to show the efficiency of the method, enabling it to be used on standard computers. Exemplarily, we present the computation times for the MBB beam under compression. Here, the simulation of 300 time steps with n_loop = 8 needed approximately 700 s for completion. This number includes the computation of the matrices l(e) for all finite elements prior to the optimization process. Furthermore, for each time step, the results were written to the hard disk drive as a vtk file, each of which has a size of 4.4 MB; this effort is also included in the 700 s. The computation time consumed by the optimization algorithm is negligible compared to the time needed for convergence of the displacement field: doubling the number of updates of the density field by setting n_loop = 16 increased the total simulation time to 709 s, which is an increase of less than 1.3%. Keeping in mind that parallelization was not used excessively, the computational effort demonstrates that our novel approach to topology optimization is indeed quite fast. This impression is confirmed by comparing our results to other approaches: [22] report 3-5 finite element iteration steps for convergence.
In all numerical examples we present, 3 iteration steps were necessary during loading; this number reduced to 2 when the maximum load was kept constant. Furthermore, we can conclude from the presented computation times that a purely elastic computation without optimization would need around 690 s. Thus, the additional time effort for the optimization (between 10 and 20 s in total) is very limited, and the finite element computation of the displacement field is the expensive part. The same observation has been made in [22]. Summarizing, our presented approach for the topology optimization of hyperelastic structures can compete with other approaches. A similar statement can be made for the linear version of thermodynamic topology optimization, cf. [19].
The number of iterations varied between three, when the prescribed load continues to increase, and two, when the boundary conditions do not change anymore. This shows empirically that the novel approach to topology optimization does not alter the number of Newton iterations.

Fig. 6 (caption) Investigation of the influence of the initial distribution of the relative density ρ₀ from which the initial χ₀ is computed. In all cases, we assign ρ₀ = ρ₀,inhom ∀x|y ≤ 0.5 mm and ρ₀ = ρ̄ − ρ₀,inhom ∀x|y > 0.5 mm, respectively.

Fig. 7 (caption) Evolution of the stiffness for the MBB beam problem using the fine finite element mesh. Left: compression loading; right: tension loading. The plots are scaled to the maximum value, which is 5.38 × 10⁻⁶ Nmm (left) and 6.14 × 10⁻⁶ Nmm (right).

As can be seen from Fig. 7, the stored energy evolves quite similarly for both loading cases: at the left-hand side, the stored energy of the MBB beam for compression loading is plotted over time; at the right-hand side, the stored energy for tension loading is depicted. The final values differ to some extent since the structure under tension stores approximately 14.1% more energy. Furthermore, a distinct kink in the graph can be observed at time step n = 50: this results from the load being linearly ramped up within the first 50 time steps. After 100 time steps, i.e., 50 time steps after the maximum load has been applied, no remarkable increase in stiffness is present. Consequently, it can be concluded that basically 50 iteration steps are needed for convergence of the pure topology optimization algorithm, which is comparable to the computational effort for the MBB beam for both thermodynamic topology optimization and SIMP approaches for linear elastic structures [19]. Figure 7 furthermore reveals that the stored energy, which also serves as a measure of the structure's stiffness, increases monotonically.
This is of particular interest since we did not formulate an appropriate goal function (directly): this result empirically shows that the novel approach to thermodynamic topology optimization as given in (12) appears to indeed give structures with optimized stiffness.

Arc-length-controlled simulations for the MBB beam
Although a prescribed deformation of |u_max| = 0.25 mm already yields results with large deflections which lie outside the validity of the linearized theory, our goal is to study the behavior of the model for even larger deformations. We thus increase the maximum displacement further. In this case, however, it might happen that the Newton iteration for computing the displacement field is terminated because the determinant of the deformation gradient becomes negative. This is, of course, an unphysical result whose origin can be inferred from Fig. 8.
At the left-hand side of Fig. 8, the deformed mesh including the spatial distribution of ρ is presented, and the extracted structure is given at the right-hand side. This result corresponds to the last time step that converged; during the next time step, the Newton scheme was terminated due to det[F] < 0. Inspection of Fig. 8 reveals that the physical reason for the termination of the Newton scheme is the buckling of the outer lower bar. Due to its horizontal orientation, mainly compression loads are present, which at some point exceed the critical load for buckling. Consequently, a snap-through problem is present, which is known to constitute an ill-posed problem when no numerical control schemes are applied, see also the review paper [28].
In principle, it cannot be estimated in advance in which element the error of a negative determinant of F will occur. To be more precise, the problem can arise in elements with either full or void material. In the problem shown exemplarily in Fig. 8, a void element is deformed in an unphysical way. This is more obvious in the isometric zoom of the deformed mesh in Fig. 9: the deformation forces the elements to become over-distorted, which is mathematically expressed by det[F] < 0. Since buckling phenomena are accompanied by localization effects, it is desirable to resolve these spatial areas with higher accuracy, i.e., to remove possible mesh-sensitivities. Unfortunately, the spatial regions with buckling cannot be anticipated in problems of topology optimization (simply because the topology, which includes severe jumps of coefficients due to the distribution of relative density, is unknown in advance). This issue could be resolved in an elegant and efficient manner if mesh adaptations were performed during the computation as presented, e.g., in [43] for thermodynamic topology optimization. This aspect, however, is beyond the scope of the current work. To still analyze the model behavior for extreme deformations, we apply the arc-length control, which is a built-in function of FEAP. Here, neither force nor displacement can be controlled directly; only the increments of both can be prescribed. In FEAP, the increment is given in terms of the time increment Δt, which hence loses its physical interpretation. However, to avoid modifications in the numerical implementation of the model, we still use the same parameter Δt and adapt the viscosity η. Since we are using a different finite element mesh, the value of n_loop is also adapted, see Tables 1 and 2.
The arc-length control continues to increase the increments of forces and displacements, i.e., the loading increases throughout the entire simulation process. Then, of course, we cannot expect the model to converge in the sense that the topology no longer changes. However, this problem is also present in comparable form, e.g., for classical SIMP approaches, where it manifests itself in trusses that continue to slowly move and slightly adapt their angles for the MBB beam.
The result for the MBB beam under tension loading with arc-length control is given in Fig. 10 for the time steps n = {400, 2000, 6000, 9000} with respective maximum deflections of u_max = {0.007, 0.087, 0.588, 0.923} mm. Note that here, again, the deformed structures are depicted. Since the arc-length scheme in FEAP uses the time increment Δt as control parameter, very small values of Δt have to be chosen even for a purely linear elastic computation without any optimization. Therefore, the computation requires many more time steps. A future implementation of our model into a finite element program with combined arc-length and time step control would hence be worthwhile. We see that after 400 load steps the maximum deflection is still quite small and the "classical" optimal MBB structure has developed. The corresponding results for compression loading are shown in Fig. 11 (top left). As for the tension loading, buckling of the topology is present at larger deformation states, however, at different locations, cf. time step n = 1000 in Fig. 11 (top right). The buckling, which also increases with increasing time steps, triggers remarkable remodeling. After the minor modifications that are already present for n = 1000, cf. Fig. 11 (bottom left), the topology continues to adapt to the severe deformations as presented for time step n = 8000 in Fig. 11 (bottom right). As for the case of positive prescribed displacements, severe buckling is observed, which acts like beads that affect the stiffness of the structure.
Concluding, the presented optimization model possesses a high level of numerical stability, which allows computing and remodeling structures under severe finite deformations, even when undergoing buckling.

Displacement-controlled simulations for the 3D cantilever
After analyzing the model for the classical MBB beam benchmark problem, the model is now applied to the complex boundary value problem of a cantilever that is truly three-dimensional. Again, we present results for displacement-controlled boundary conditions, which simplifies the investigation of the model for different maximum loads. The maximum displacement is applied within the first 50 time steps of modeling time for all results discussed here. All model parameters are collected in Table 3. The dimensions, boundary conditions and symmetry plane are presented in Fig. 12. The mesh consists of 32,307 nodes and 28,160 trilinear, hexahedral elements. As a first example, the prescribed displacement is positive, thus inducing "tension" loading. Figure 13 shows the resulting structures in the reference configuration for various time steps and for different maximum displacements.
To be more precise, the maximum displacement takes the values u_max = {0.4, 0.8, 2.0} mm, cf. Fig. 13. The principal structures develop already within the first 200 time steps (first row in Fig. 13). However, the structures continue to evolve in time and the final design is reached approximately at n = 1200, cf. the second row in Fig. 13. Only minor modifications are observed as time continues, as can be seen in the last row of Fig. 13. The deformed configurations of the topologies at n = 2000 are given in side view and in isometric perspective in Fig. 14. In this example, the effect of the anisotropic character of the log term in the strain energy density is clearly visible: for moderate loading, i.e., u_max = 0.4 mm, strong bending moments are present in the deformed state, see Fig. 14 at the left-hand side. To compensate these, the arc-like structure is established. For higher loading, i.e., u_max = 0.8 mm, smaller bending moments are present in the deformed configuration, cf. the center column in Fig. 14. Thus, the arc-like structure is thinner but occupies larger dimensions. When the maximum load is increased to five times the reference maximum displacement, no bending moments at all are present. Consequently, the optimal structure is a "tension sheet" as can be seen in Fig. 14 at the right-hand side.

Fig. 13 (caption) 3D cantilever problem, "tension" load case: evolving optimal structures at discrete time steps (n = {200, 1200, 2000}, respectively) and for varying maximum prescribed displacements.
Remark Optimizing hyperelastic structures undergoing large deformations may, for sufficiently large loads, result in structures whose deformed configuration exceeds the design space Ω, cf. Fig. 14. This would be in line with topology optimization problems where the limitations regarding the design space are governed by the manufacturing process rather than the operational state. Enforcing optimal structures that remain within Ω also in the deformed configuration demands more sophisticated constraints than the ones used here. Furthermore, in this case the prescribed deformations have to be limited such that the constraint can be fulfilled at all, i.e., a deformation of u_max = 2.0 mm as employed in Fig. 14 already violates the constraint. The maximum admissible displacement that fulfills this constraint would be u_max = 1.0 mm, cf. the geometric dimensions of the boundary value problem depicted in Fig. 12.
The stored energies over time are plotted in Fig. 15. Interestingly, the convergence rate increases for higher loading. Due to the nonlinearities in the energy, the final stored energies scale as 1 : 25.28 : 238.62 for the maximum displacements of u_max = {0.4, 0.8, 2.0} mm, respectively.
As a final example, we present in Fig. 16 the evolution of the optimal topology when the maximum displacement points downwards, thus implying a "compression" load case. The character of the evolution of the structure shares some similarities with that of the tension load case: only minor deviations are observable at n = 200, cf. the left-hand side of Fig. 16. However, the optimal structure that has evolved after 1200 time steps (center of Fig. 16) possesses a remarkably different shape than the structures for the tension load case and is reminiscent of optimal structures for linear elasticity. Almost no modifications are visible as time continues, cf. n = 2000 at the right-hand side of Fig. 16. Finally, the deformed configurations are presented in Fig. 17 and the evolution of the stored energy in Fig. 18.

Arc-length control simulations for the 3D cantilever problem
Finally, we also apply the arc-length control to the 3D cantilever. The model parameters are given in Table 3. The results are given in the undeformed configuration in Fig. 19; those in the deformed configuration are given for the "tension" load case in Fig. 20 and for the "compression" load case in Fig. 21, respectively. Here, again, the boundary conditions have to be provided as traction forces to apply the arc-length control in FEAP. Accordingly, displacements in all three spatial directions are possible. These modifications of the boundary conditions lead to different optimal structures than in the examples presented above. The results represent the last load steps that converged, shown with a minimum value of 0.33 for the iso-volume filter. Interestingly, a much faster evolution can be observed here than for the MBB beam: the maximum possible deformations are reached for n = 771 and n = 1024, respectively. As can be seen from Figs. 20 and 21, strong bending and buckling are present. This is, of course, associated with severe mesh distortion such that the error causing termination is, again, a negative determinant of the deformation gradient. To avoid this, remeshing would be a suitable strategy. Additionally, possible self-contact would have to be taken into account. However, for the current investigation, we emphasize that the model for thermodynamic topology optimization is robust enough to compute structures with extreme deformations including buckling.

Conclusions
We presented a novel approach to thermodynamic topology optimization that is valid for hyperelastic structures, but in principle also applicable to linear elastic structures. This approach does not need to make use of the strain energy density expressed in stress measures; in contrast, the usual definition in terms of strains or the right Cauchy-Green tensor can be applied. Regularization is provided by a gradient enhancement of the energy, which yields mesh-independent finite element results; no checkerboarding is present. Furthermore, including a sigmoid function for the description of the relative density allows for an analytical computation of the Lagrange parameter that accounts for mass conservation. An algorithmic treatment is presented which enables a straightforward numerical implementation. For the case of Neo-Hookean structures, numerical results for two boundary value problems are investigated. Moderate loading of the MBB beam gives structures that resemble those obtained for linear elastic structures. Severe loading causes buckling effects that terminate the standard simulations. However, the model proves robust enough to be evaluated also when arc-length control is applied. In this case, the "standard" solution for the MBB beam is obtained for small loads. For higher loads, however, buckling is present, which causes strong remodeling such that the later optimal structures possess a unique topology. It is emphasized that the approach for topology optimization itself needs only a negligible amount of computational power; the total simulation time is dominated by the time needed for the computation of the displacements, i.e., for the finite element method to converge. Furthermore, we compute thick, three-dimensional problems and show that the effect of tension/compression asymmetry is obvious in this case. Finally, we also see that optimizing for the deformed configuration leads to significantly different topologies, and arc-length controlled simulations give access to optimized structures for complex and severe buckling in various spatial directions. It is thus inevitable to make use of optimization in the context of finite deformations if the loading of the final structure is not "small" enough for the linearized theory to be applied.

Fig. 19 (caption) 3D cantilever problem, arc-length control, undeformed configurations: "tension" load case (left) and "compression" load case (right) in different perspectives; minimum value for the iso-volume filter: 0.33

Fig. 20 (caption) 3D cantilever problem, "tension" load case, arc-length control: deformed configuration of the optimal structure for n = 771 in different perspectives; minimum value for the iso-volume filter: 0.33

Fig. 21 (caption) 3D cantilever problem, "compression" load case, arc-length control: deformed configuration of the optimal structure for n = 1024 in different perspectives; minimum value for the iso-volume filter: 0.33
Funding Open Access funding enabled and organized by Projekt DEAL.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

A Derivation of the numerical discretization of weak form
For the derivation of the numerical discretization of the weak form of the balance of linear momentum, we start from (16). Let us introduce the column matrix notation s for the 2nd Piola-Kirchhoff stress tensor, cf. (65). The matrix form of the deformation gradient F can be defined by

F = [1 + u_{1,1}, 1 + u_{2,2}, 1 + u_{3,3}, u_{1,2}, u_{2,3}, u_{1,3}, u_{2,1}, u_{3,2}, u_{3,1}]^T . (66)

Then, (63) is given in matrix notation for each finite element (e) by (67), where we introduced the well-known B_e operator matrix; the vector δû_e in (67) stores all three δû(k) introduced in (33) and (34), see also [48].

B Tangent for the Newton scheme
We start with the investigation of the first integral, which constitutes the material tangent. Here, we introduce the material stiffness E := 2∂s/∂c = 2∂s/∂C ·· ∂C/∂c. The second integral is termed the geometric tangent. Keeping in mind that s remains fixed for this derivative, the geometric tangent for element (e) is obtained by assembling the elementwise contributions.

C Component of the material stiffness
The components of the symmetric material stiffness E are computed with the previously introduced parameters k₁ = ρ³λJ² and k₂ = ρ³[λ(J² − 1) − 2μ].