1 Introduction

The degradation of the mechanical properties of materials, the development of cavities on the microscopic and mesoscopic scale within the material, and the macroscopic process of fracture are called damage; see [34, Sec. 1.1] and references therein. The evolution of microscopic cavities upon loading leads to reduced cross-sectional areas and, thus, to a weakening of the material. Therefore, microscopic damage manifests itself at the macroscale as stress softening, in terms of a reduction of material stiffness with increasing stress, and strain softening, in the sense of a reduction of stress with increasing strain. These physical phenomena are of fundamental engineering interest, and there exist several approaches to describe damage. Typically, the computational description either considers the damage on the micro- and/or mesoscopic length scale directly, or suitable damage variables represent the microstructure in a phenomenological approach. The direct simulation of micromechanics is still an elaborate task, especially due to localization in the presence of softening behavior. Gitman et al. [21, Fig. 8] showed that hardly any representative volume element (RVE) exists in the case of softening. More recently, Liang et al. [29] modeled quasi-brittle damage based on direct micromechanical computations and observed the same RVE size dependency; a so-called scaled damage law was proposed there in order to overcome this issue. In the present paper, we focus on the phenomenological approach. We consider a scalar damage variable rather than a tensor-valued one. The damage variable, typically called D, is then associated with an effective cross-sectional area reduction by \((1-D)\). We refer to [26] for the origin of this approach and to [34, Sec. 2.1.1] for a detailed historical overview. In the large strain setting, the \((1-D)\) model was derived in [31] and [41] and has turned out to be particularly important in the context of soft biological tissues, cf., e.g., [7].

The \((1-D)\) ansatz leads to a non-convex incremental stress potential. Even in the absence of damage, non-convexity is a major challenge mathematically and computationally. In the elastic regime, the notion of convexity is too restrictive and violates physical requirements and constraints, see [30, Sec. 6.4, p.376]. Ball [3] shows that in the finite elasticity case, polyconvexity is the decisive notion of semiconvexity for the energy density, leading to well-posed and physically meaningful models. For anisotropy, a polyconvex basis has been introduced in [39] and successfully applied to soft biological tissues in [5]. For a more detailed mathematical discussion of convexity conditions, we refer to Dacorogna [18, Sec. 3.5] and Pedregal [37, Sec. 3.4]. However, continuum damage mechanics involves dissipative processes. In this context, the framework of incremental variational formulations introduced by a series of papers [16, 24, 33, 35, 36] successfully leads to thermodynamically consistent pseudo-elastic potentials per incremental time step, which can be treated with the mathematical tools of finite elasticity. The incremental stress potentials, which are often non-convex, are relaxed, see [18, Sec. 3.6], i.e., the original non-convex energy density is replaced by a suitable semiconvex envelope, e.g., the poly-, quasi- or rank-one convex envelope. For polyconvex and special cases of quasiconvex envelopes, the existence of minimizers is guaranteed, while rank-one convex hulls are directly connected to the sound physical requirement of material stability, see [30, Sec. 6.3], which corresponds to the Legendre–Hadamard condition. Despite these favorable mathematical and mechanical properties, the numerical computation of semiconvex hulls is expensive in the multidimensional case, as the curse of dimensionality sets in. The reduction to one spatial dimension via the microsphere approach offers a way out that allows for computational efficiency. Its natural combination with an adaptive discretization makes the overall approach computationally tractable for practical applications.

In the context of continuum damage mechanics, a convexified (relaxed) formulation has been constructed in the small strain regime in [23] and in the large strain case in [4]. The latter was based on a one-dimensional convexified model for fiber damage, which has been extended in [38] to a microsphere approach for the description of dispersed fiber orientations in soft biological tissues. Recently, in Schwarz et al. [40], instead of directly constructing a convex or semiconvex hull of a small strain incremental continuum damage mechanics potential, the underlying microstructures, which are usually obtained as a byproduct of the convex or semiconvex hull, respectively, are approximated (emulated), and a convex envelope is formulated based on them. This strategy is conceptually different from other convex or semiconvex hull approximation schemes, which are discussed in more detail in Sect. 3. In [40], strain softening was observed due to the evolutionary behavior of the emulated microstructures. However, the microstructure construction therein is limited by significant simplifications. Another recent contribution by Görthofer et al. [22] introduced a completely different modeling approach, which utilizes a compliance tensor to model stress softening. The novelty of this approach is the a priori convex modeling of the problem.

To the authors’ best knowledge, previous convexified models did not allow for the geometrically nonlinear modeling of strain softening. To overcome this, the present paper proposes a strain softening procedure building upon the microsphere scheme, see Bažant and Oh [12, 32] and Miehe et al. [20]. In this procedure, one-dimensional material laws associated with individual fiber orientations are convexified, failure of individual fiber orientations is incorporated, and the result is afterward integrated over the unit sphere to construct a three-dimensional material response. For its practical realization, this paper introduces new algorithms for the efficient construction of the convex envelope of one-dimensional functions, thus providing useful tools and benchmarks for the continuum mechanics relaxation community.

The article is structured as follows. Section 2 recalls the theory of incremental continuum damage mechanics in the large strain case as derived in [4]. Section 3 discusses and compares state-of-the-art convexification algorithms. The novel strain softening microsphere approach is then introduced in Sect. 4.2 along with a suitable quadrature scheme on the sphere. Section 5 demonstrates the mesh independence of the new scheme before the paper is concluded in Sect. 6.

2 Mathematical and continuum mechanical theory

This section briefly recapitulates the incremental variational framework and recalls the concept of relaxation as proposed in [4, 23].

2.1 Incremental variational formulation

The formulation of the incremental variational problem requires basic concepts and notation of continuum mechanics, which are introduced first. The reference configuration of the physical body of interest is described by the bounded subset \({\mathcal {B}}\) of \({\mathbb {R}}^3\) with elements \({\varvec{X}}\in {\mathcal {B}}\). The deformation of the body relative to this reference configuration is described by the nonlinear deformation map \(\varvec{\varphi }_t : {\mathcal {B}} \rightarrow {\mathcal {B}}_t :=\varvec{\varphi }_t({\mathcal {B}})\) for times \(t\in {\mathbb {R}}^+\). Here, \({\mathcal {B}}_t \subset {\mathbb {R}}^3\) denotes the current configuration of the body at time t with deformed positions \({\varvec{x}}=\varvec{\varphi }_t({\varvec{X}}) \in {\mathcal {B}}_t\) for \({\varvec{X}}\in {\mathcal {B}}\). The deformation gradient is then given by \({\varvec{F}}({\varvec{X}}) = \text {Grad} \ \varvec{\varphi }_t({\varvec{X}})\), and the right Cauchy–Green tensor \({\varvec{C}}= {\varvec{F}}^T{\varvec{F}}\) serves as a suitable deformation measure that automatically ensures objective constitutive models. The first, second and third principal invariants are denoted as

$$\begin{aligned} I_1 = \text {tr}\ {\varvec{C}}, \qquad I_2 = \text {tr} \left[ \text {Cof} \ {\varvec{C}}\right] , \qquad I_3 = \det {\varvec{C}}\end{aligned}$$
(1)

and form the basis for the construction of isotropic, hyperelastic strain energy densities \(\psi :=\psi ({\varvec{C}}) = \psi (I_1,I_2,I_3)\). Within continuum damage mechanics, the strain energy density can be modeled as

$$\begin{aligned} \psi ({\varvec{C}}, \beta ) = (1-D(\beta )) \, \psi ^0({\varvec{C}}) \end{aligned}$$
(2)

where \(\psi ^0\) is a material-dependent, virtually undamaged (effective) strain energy density and \(D \in [0,1)\) is a monotonically increasing damage function of the internal variable \(\beta \). Throughout the paper, the compressible Neo-Hookean effective strain energy density

$$\begin{aligned} \psi _{\mathrm{NH}}^0({\varvec{C}}) :=\frac{\mu }{2} (I_1 - 3) - \mu \ln (J) + \frac{\lambda }{2} \ln (J)^2 \end{aligned}$$
(3)

is used with \(J = \sqrt{I_3}\) and the Lamé parameters \(\mu \) and \(\lambda \). We employ an exponential damage function of the form

$$\begin{aligned} D(\beta ) = D_{\infty } \left[ 1-\exp \left( \frac{-\beta }{D_0}\right) \right] , \end{aligned}$$
(4)

where \(D_{\infty } \in (0,1)\) and \(D_0\) are material-dependent parameters. This type of damage function was introduced in [31]. There, \(D_{\infty }\) is referred to as the maximum possible damage and \(D_0\) as a damage saturation parameter; the reciprocal of the latter controls how fast the asymptotic limit \(D_{\infty }\) is reached. Note that the approach proposed in the following sections is not restricted to this specific choice of strain energy density and damage function. For example, damage may also be considered to start to evolve only once a certain threshold is exceeded. Such an initial damage threshold may be especially important in soft biological tissues, where no damage should be assumed within the physiological range of loadings, cf. [6]. The internal variable \(\beta \) is modeled according to the discontinuous damage approach of [31], i.e.,

$$\begin{aligned} \beta _t :=\max _{s\le t} \left[ \psi ^0({\varvec{C}}_s)\right] \quad \text {for } s,t \in {\mathbb {R}}^+. \end{aligned}$$
(5)
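For concreteness, the damage function (4) and the discontinuous evolution (5) can be stated in a few lines of Julia; this is our own illustration, using the parameter values \(D_{\infty }=0.99\) and \(D_0=0.5\) that appear in the numerical examples later on.

```julia
# Damage function (4); Dinf and D0 as in the examples of Sects. 3 and 5.
D(β; Dinf = 0.99, D0 = 0.5) = Dinf * (1 - exp(-β / D0))

# Discontinuous damage evolution (5): β is the running maximum of the
# effective strain energy density ψ⁰ over the load history.
update_beta(β_prev, ψ0_current) = max(β_prev, ψ0_current)
```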

Considering a discretization of time into a number of incremental time steps \(\Delta t_{k+1} := t_{k+1} - t_k\), the generalized work done in the body within an incremental time step

$$\begin{aligned} {\mathcal {W}}({\varvec{F}}_{k+1},\beta _{k+1}) :=\int _{t_k}^{t_{k+1}} \dot{\psi } + \phi \ \text {d}t \end{aligned}$$
(6)

is composed of two energetic quantities: the time derivative of the strain energy density, \(\dot{\psi }\), and the dissipation potential \(\phi \). Following the incremental variational framework, the minimization of \({\mathcal {W}}\) with respect to the internal variable \(\beta \) yields the incremental stress potential

$$\begin{aligned} W(\varvec{F}_{k+1}) = {\underset{\beta _{k+1}}{\inf }} \left[ {\mathcal {W}}({\varvec{F}}_{k+1},\beta _{k+1})\right] \end{aligned}$$
(7)

which solely depends on the current deformation gradient \({\varvec{F}}_{k+1}\). The dissipation potential \(\phi \) can be obtained by inserting the time derivative of the strain energy density \(\hat{\psi }({\varvec{F}}, D) = (1-D)\psi ^0({\varvec{F}})\) into the Clausius–Duhem inequality

$$\begin{aligned} ({\varvec{P}}- (1-D)\partial _{{\varvec{F}}} \psi ^0) : \dot{{\varvec{F}}} + \underbrace{\dot{D}\psi ^0}_{=:\phi } \ge 0. \end{aligned}$$
(8)

With this dissipation potential, by analytically minimizing (7), the incremental stress potential can be obtained as

$$\begin{aligned} W(\varvec{F}) = \psi ({\varvec{F}},D) - \psi ({\varvec{F}}_k,D_k)+\beta D -\beta _k D_k - \overline{D} + \overline{D}_k, \end{aligned}$$
(9)

where the shorthand notations \(D:=D(\beta )\), \(D_k :=D(\beta _k)\), \(\overline{D} :=\overline{D}(\beta )\), and \(\overline{D}_k :=\overline{D}(\beta _k)\) denote evaluations of the functions either at the minimizer \(\beta \) of the current time step or at the internal variable value \(\beta _k\) of the last time step. A detailed derivation can be found in [4, Sec. 3.1]. Here, \(\overline{(\cdot )}\) corresponds to the antiderivative, and the index of the current incremental step \((\cdot )_{k+1}\) was dropped. At this point, the principle of minimum potential energy is employed:

$$\begin{aligned} \inf _{\varvec{\varphi }} \left\{ \Pi (\varvec{\varphi })\;|\; \varvec{\varphi }=\hat{\varvec{\varphi }} \text { on } \partial {\mathcal {B}}_{\varphi }, \varvec{t}=\hat{\varvec{t}} \text { on } \partial {\mathcal {B}}_{\sigma }\right\} , \qquad \Pi (\varvec{\varphi }) :=\int _{{\mathcal {B}}} W({\varvec{F}}(\varvec{\varphi })) \ \text {d}V + \Pi ^{\mathrm{ext}} (\hat{\varvec{t}},\varvec{\varphi }). \end{aligned}$$
(10)

Here, \(\Pi ^{\mathrm{ext}}\) denotes the potential energy of external forces \(\hat{\varvec{t}}\) applied at the Neumann boundary. Considering the displacement field \({\varvec{u}}= {\varvec{x}}- {\varvec{X}}\), taking the Gâteaux derivative and setting it equal to zero yields the weak form in the total Lagrangian setting

$$\begin{aligned} \int _{{\mathcal {B}}} {\varvec{P}}: \text {Grad} \delta {\varvec{u}}\text { d}V -\int _{\partial {\mathcal {B}}_{\sigma }} \hat{{\varvec{t}}} \cdot \delta {\varvec{u}}\text { d}S = 0, \end{aligned}$$
(11)

subject to Dirichlet boundary conditions \({\varvec{u}}= \hat{{\varvec{u}}}\) on \(\partial {\mathcal {B}}_{\varvec{\varphi }}\), see, e.g., [25, Sec. 8.3] for a detailed derivation. Here, the first Piola–Kirchhoff stress tensor \({\varvec{P}}\) was introduced; it and its derivative, the nominal tangent moduli \({\mathbb {A}}\), are given by

$$\begin{aligned} {\varvec{P}}= \frac{\partial W({\varvec{F}})}{\partial {\varvec{F}}}, \qquad {\mathbb {A}} = \frac{\partial {\varvec{P}}}{\partial {\varvec{F}}} = \frac{\partial ^2 W({\varvec{F}})}{\partial {\varvec{F}}\partial {\varvec{F}}}. \end{aligned}$$
(12)

Equation (11) is solved computationally by a finite element discretization combined with a Newton–Raphson scheme for the resulting system of nonlinear algebraic equations. The Newton–Raphson scheme requires the residual of (11) and its Jacobian. Thus, the first Piola–Kirchhoff stress \({\varvec{P}}\) and its derivative \({\mathbb {A}}\) are required in each integration point of the finite element discretization. Unfortunately, due to the non-convex structure of \(W({\varvec{F}})\), this problem is ill-posed.

Relaxation theory replaces \(W({\varvec{F}})\) by its convex envelope in 1D, or by a suitable convex or semiconvex envelope in the multi-dimensional case, and hence turns the problem into a well-posed one. Note that if \(W({\varvec{F}})\) is replaced, the requirements for solving (11) remain the same, i.e., relaxation only alters the response at the material point (integration point) level.

2.2 One-dimensional relaxation

The direct multidimensional relaxation is computationally demanding. In the finite strain setting of continuum damage mechanics, it has not been accomplished yet. This is why this paper promotes the reduction to one-dimensional relaxation as part of a microsphere approach. In one dimension, all notions of semiconvexity coincide with the classical notion of convexity, which is briefly recalled below.

A function \(W:{\mathbb {R}}\rightarrow {\mathbb {R}}\) is said to be convex if

$$\begin{aligned} W(\xi F^+ + (1-\xi )F^-) \le \xi W(F^+) + (1-\xi ) W(F^-) \end{aligned}$$
(13)

holds for all \(\xi \in (0,1)\) and \(F^+, F^- \in {\mathbb {R}}\). Alternatively, if W is sufficiently regular, it is convex if and only if its first derivative is monotonically increasing, and the convexity condition can be reformulated as

$$\begin{aligned} \left( W'(F^+) - W'(F^-)\right) (F^+ - F^-) \ge 0, \end{aligned}$$
(14)

which will be of great interest later. Within the relaxed continuum mechanics framework, the non-convex one-dimensional potential W is replaced by its convex envelope. The one-dimensional deformation gradient F can thus be described as the convex combination

$$\begin{aligned} F = \xi F^+ + (1-\xi ) F^-, \end{aligned}$$
(15)

if F lies within a convexified (relaxed) regime, i.e., a maximal interval \((F^-,F^+)\) on which W is greater than its convex envelope, the latter being simply given by the line \(\xi \mapsto \xi W(F^+) + (1-\xi ) W (F^-)\). This formulation allows for the interpretation of \(F^+\) and \(F^-\) as the supporting points of the convex hull at the point F. The points \(F^+\) and \(F^-\) give rise to a certain measure, namely a gradient Young measure, see, e.g., [10, Section 2.3], and describe points between which the gradient of the solution oscillates. Further, they are minimizers of the original non-convex problem, and it is known from rigorous mathematical theory, see, e.g., [37, Chapter 4], [9, Chapter 9], that they describe bifurcated microphases of the original problem. Therefore, relaxation can be understood as the homogenization of a microstructure which first bifurcates into a strongly and a weakly damaged phase. In the convexified regime, the evolution of damage then takes place in terms of an increasing volume fraction of the strongly damaged phase. This is actually in line with the physics of damage if the strongly damaged phase is interpreted as broken fibrils appearing within the fiber. The material state F is then described as a mixture of the phases \(F^+\) and \(F^-\). The convex coefficients \(\xi , (1-\xi ) \in (0,1)\) represent the corresponding volume fractions. The deformation gradients in the strongly and weakly damaged phase can be expressed in terms of F, \(\xi \) and a relative distance \(d=(F^+ - F^-)/F \in {\mathbb {R}}^+\), which corresponds to a microbifurcation intensity (cf. [4, Sec. 3.2.1]), as

$$\begin{aligned} F^+ = F(1+(1-\xi )d), \qquad F^- = F(1-\xi d). \end{aligned}$$
(16)

The convex envelope \(W^c\) of W, i.e., the greatest convex function below W, can now be characterized in a pointwise manner in terms of the volume fraction \(\xi \) and the microbifurcation intensity d, i.e.,

$$\begin{aligned} W^c(F)=\inf _{\xi ,d}\left[ \overline{W}(F,d,\xi )\right] \quad \text { with } \overline{W} = \xi W(F^+) + (1-\xi )W(F^-). \end{aligned}$$
(17)

Both characterizations (14) and (17) will be of great interest later in the algorithmic computation of an approximation of the convex hull.
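To make these objects concrete, the following Julia sketch (an illustration under stated assumptions, not code from the paper) evaluates a one-dimensional instance of the incremental stress potential (9): \(\psi ^0\) is taken as a one-dimensional Neo-Hooke-type fiber energy, \(\overline{D}\) is the antiderivative of the exponential damage function (4), and the minimizer of (7) for this model class is \(\beta = \max (\beta _k, \psi ^0(F))\), cf. [4]. Functions of this kind are the non-convex inputs to the convexification algorithms of Sect. 3.

```julia
# One-dimensional sketch of the incremental stress potential (9).
# Assumptions: 1D Neo-Hooke-type fiber energy (λ = 0); exponential damage (4).
ψ0(F; μ = 0.5) = μ/2 * (F^2 - 1) - μ * log(F)
D(β; Dinf = 0.99, D0 = 0.5)    = Dinf * (1 - exp(-β / D0))
Dbar(β; Dinf = 0.99, D0 = 0.5) = Dinf * (β + D0 * exp(-β / D0))  # antiderivative of D

function W_incr(F, Fk, βk)
    β  = max(βk, ψ0(F))          # discontinuous damage update (5)
    ψ  = (1 - D(β))  * ψ0(F)     # current strain energy density
    ψk = (1 - D(βk)) * ψ0(Fk)    # strain energy density of the last time step
    return ψ - ψk + β * D(β) - βk * D(βk) - Dbar(β) + Dbar(βk)
end
```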

3 Convexification algorithms

There are various approaches for constructing the convex hull \(W^c\) for a given function W. Since we consider one-dimensional damage models, we restrict ourselves to the convexification of functions in one spatial dimension. We will focus on approaches which can be motivated through the different characterizations of convexity as mentioned in the previous section.

Motivated by (17), one can construct a convex hull via the solution of a constrained continuous optimization problem. In the context of relaxed continuum damage mechanics, [23, Section 5.1.4] used a scheme where the two supporting points of the one-dimensional convex hull are determined beforehand and used throughout the computation. In [4], a different strategy is applied, where the two supporting points are computed while the simulation runs, by solving a continuous non-convex and multi-modal optimization problem. Due to the nature of this minimization problem, a simple Newton optimization scheme is not sufficient for the computation of a minimizer. Hence, a multistart strategy needs to be applied, as used in [4] and further extended by an evolutionary search strategy for the multistart points in [38]. In order to overcome the drawbacks of the optimization problem formulation, e.g., the computationally expensive solution process, a discrete approach motivated by the higher-dimensional rank-one convexification of Dolzmann and Walkington [8, 19] seems worth considering. This approach is further motivated by the examination of the convexity condition (14) for a discrete grid function.

3.1 Discrete convexification

In order to approximate the convex hull of the energy density W, we first choose a (not necessarily equidistant) grid \(F_{j = 0,\dots , N}\) as a discretization for the deformation gradient variable F on a finite interval. We then approximate the given function W by a piecewise linear function, which is determined by the values \(W_{j} = W(F_{j})\) for \(j =0, \dots , N\). Afterward, the convex envelope of this grid function is computed. A possible procedure to realize this convexification in complexity \({\mathcal {O}}(N)\) is presented in [9, Sec. 9.3.4] and recalled in Algorithm 1. The procedure can be interpreted as the application of Graham’s scan [17, Sec. 33.3] to the epigraph of the discretized function, which is characterized by the planar points \((F_0, W_0), \dots , (F_N, W_N)\). This algorithm is known for its linear complexity in case the given points are sorted along one axis, as is the case for our discretization \(F_{j = 0,\dots ,N}\).

Algorithm 1: Convexification of a one-dimensional grid function (pseudocode figure)

The algorithm successively checks the convexity condition (14) at the grid points by iterating from left to right through the discretized interval. Points located above the convex hull are identified through the while loop (lines 6 to 8) and discarded. Points that support the convex hull are saved as the grid points \(F^c_{j = 0,\dots ,n}\) and the corresponding function values as \(W^c_{j = 0,\dots ,n}\), where \(n \le N\). From an implementation perspective, the arrays \(W^c\) and \(F^c\) should be initialized as zero vectors of length N and cut down to length n afterward. The piecewise linear interpolation of these points gives an approximation to the analytical convex hull of W. A test problem can be seen in Fig. 1. This test problem exhibits multiple minima and is not related to continuum mechanics; it is chosen to visualize the generality of the convexification algorithm and can be used as a unit test for a convexification algorithm implementation. The figure shows a non-convex function, drawn as a blue line. The blue circles correspond to the function values of W at the grid points of the algorithm, i.e., \(F_{j=0,\dots , N}\). The application of Algorithm 1 on the interval [0, 4.25] gives the approximation of the convex hull of W drawn in red (crosses). One may notice that the analytical convex hull of W lies slightly lower due to the local minimum at about 4.0, which needs to be resolved sufficiently well by the convexification mesh (compare Fig. 3). The array \(W^c_{j=0,\dots ,n}\) consists of five points at the left-hand side and three points at the right-hand side of the interval [0, 4.25], since only these eight points (\(n = 7\)) support the convex hull of the piecewise linear function described by \(W_{j = 0,\dots ,N}\) (with \(N = 33\)). For a given deformation gradient F, the supporting points \(F^+\) and \(F^-\) of the convex hull can be determined by searching for the first larger and first smaller element of \(F^c_{j=0,\dots ,n}\) with respect to the given point F.
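Since the pseudocode of Algorithm 1 is not reproduced here, the following Julia sketch mirrors it (a minimal illustration under our own naming, not the paper's implementation): a stack-based scan that pops supporting-point candidates while the discrete form of the convexity condition (14), i.e., nondecreasing slopes, is violated.

```julia
# Graham-scan-like construction of the (lower) convex envelope of a grid
# function; the pairs (F[j], W[j]) are assumed sorted by F, so the scan is O(N).
function convexify(F::AbstractVector, W::AbstractVector)
    Fc = eltype(F)[]; Wc = eltype(W)[]
    for j in eachindex(F)
        # Pop stored points while the slopes fail to increase, i.e., while
        # the discrete convexity condition (14) is violated at the stack top.
        while length(Fc) ≥ 2 &&
              (W[j] - Wc[end]) * (Fc[end] - Fc[end-1]) ≤
              (Wc[end] - Wc[end-1]) * (F[j] - Fc[end])
            pop!(Fc); pop!(Wc)
        end
        push!(Fc, F[j]); push!(Wc, W[j])
    end
    return Fc, Wc   # supporting points of the convex envelope
end
```

The piecewise linear interpolation of the returned points approximates the convex hull, and the supporting points \(F^{\pm }\) for a given F are then found by the neighbor search described above.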

Fig. 1

Test problem for the convexification algorithm. The non-convex polynomial benchmark function \(W(F)=(F - 1)^2 (42 + 77 F + 15 F^2 - 102.5 F^3 + 58.89 F^4 - 12.89 F^5 + F^6)\) is drawn in blue and the result of the algorithm in red (color figure online)

3.2 Adaptivity for one-dimensional discrete convexification

The algorithm described in Sect. 3.1 requires expert knowledge in terms of the one-dimensional grid of trial deformation gradients. Thus, we present a scheme to obtain a computationally tailored one-dimensional discretization. The core idea is to first apply a variation of Algorithm 1 on a coarse grid and then utilize second-order derivative information of the function W in order to detect intervals where the supporting points of the convex hull are located. In Algorithm 2, this procedure is given as pseudocode. The algorithm requires three different inputs: the coarse grid \(F_{j=0,\dots ,n}\), the function values \(W_{j=0,\dots ,n}\), and second-derivative information at the coarse grid points \(W''_{j=0,\dots ,n} = W''(F_{j})\). The adaptive grid array \(\hat{F}_{j=0,\dots ,a}\) as well as the Boolean array \(F^{\mathrm{mask}}_{j=0,\dots ,n}\) can be pre-allocated and statically sized. Here, \(F^{\mathrm{mask}}_{j=0,\dots ,n}\) is initialized to ones. Points that lie above the convex hull are identified by the algorithm and marked with zeros. As a result, one obtains the supporting points of the intervals where the piecewise linear approximation to W is non-convex. Here, n denotes the coarse grid length and \(n \ll N\), where N is the number of points in the fine, equidistant discretization used in Algorithm 1. The goal is to have an implementation at hand which splits up the work into two small portions, namely constructing an adaptive grid and convexifying the function W on it; the total workload should be smaller than constructing the equidistant fine grid and convexifying on it with complexity \({\mathcal {O}}(N)\).

The first while loop in Algorithm 2 is responsible for finding the supporting points of a non-convex regime. Here, the procedure searches from the left, with corresponding index i, and from the right, with index k, at the same time in order to find \(F^+\) and \(F^-\). A nested while loop begins at line 8 if a non-convex regime starts and stops as soon as the right and left bounds of the interval are convex again. In this context, interval denotes the points from i to k. In lines 9 to 22, the inner loop increments the right and left point of the current interval until two subsequent points are convex again. The increment is done with the function iterator, which searches, based on a given index, e.g., i, in the “\(+\)” case for the last true value index of \(F^{\mathrm{mask}}_{0,\dots ,i}\) and in the “−” case for the first true value index of \(F^{\mathrm{mask}}_{i,\dots ,n}\). After the search for the supporting points of non-convex regions, an iteration over the second derivative array \(W''_{j=0,\dots ,n} = W''(F_{j})\) is started. Here, the second derivative information is obtained by automatic differentiation; however, closed forms are also available in [4]. Lines 30 to 34 realize a search for local minima of the second derivative array. The identified indices with respect to the coarse grid are saved in the array \(F^{**}\). Afterward, the values of interest \(F^*\) and \(F^{**}\) are combined; they contain grid points of the coarse mesh which capture the qualitatively relevant information concerning convexity of the underlying function W. In a final step, the intervals bounded by the points of interest are treated individually in order to insert the grid points. The total number of grid points is bounded by the number a.

The construction of the adaptive one-dimensional grid is now performed by distributing grid points within the range of the detected points of interest in the previous step (given by the arrays \(F^*\) and \(F^{**}\)). For this purpose, a distribution polynomial \(dis \in C^1({\mathbb {R}})\) is defined on each detected interval \([{F}^- , {F}^+]\), where W is non-convex, with \({F}^-, {F}^+ \in (F^{*} \cup F^{**}) \subset F_{j=0,\dots ,n}.\)

Algorithm 2: Adaptive grid construction (pseudocode figure)

The distribution polynomial dis on the interval \([{F}^- , {F}^+]\) of length \(\Delta F = F^+ - F^-\) is of the form:

$$\begin{aligned} {dis}: {\mathbb {R}}&\longrightarrow {\mathbb {R}} \\ t&\longmapsto \left\{ \begin{array}{ll} b \, t^p + c \, t + F^-, &{} \text {if } t< s^{-}_{r}, \\ e \, t + f, &{} \text {if } s^{-}_{r} \le t \le s^{+}_{r}, \\ - \left( b \,(j_{\max } - t)^p + c \,(j_{\max } - t) \right) + F^+, &{} \text {if } s^{+}_{r} < t. \end{array} \right. \end{aligned}$$

Given the left and right end points \({F}^-\) and \({F}^+\) of the interval, the number of points \(j_{\max }\) within the interval, a radius \(r \in (0, \tfrac{\Delta F}{2})\) that characterizes the neighborhood for the refinement toward the end points, the minimal grid size \(h_{\min }\), and the polynomial degree p, the remaining parameters b, c, e, f, and \(s^{\pm }_{r}\) are defined as follows:

$$\begin{aligned} c&= \, h_{\min }, \\ e&= \, \frac{1}{2 \, j_{\max }} \, \Bigg (\sqrt{(\Delta F - (p - 1) \, (c \, j_{\max } -2 r))^2 + 4 c \, j_{\max } \, (p-1) \, (\Delta F - 2 r)} \\&\quad - c \, j_{\max } \, (p + 1) + \Delta F + 2 r \, (p - 1)\Bigg ), \\ s^{-}_{r}&= \, \frac{j_{\max }}{2} - \frac{\Delta F / 2 - r}{e}, \\ s^{+}_{r}&= \, j_{\max } - s^{-}_{r}, \\ f&= F^- + \frac{\Delta F}{2} - e \, \frac{j_{\max }}{2}, \\ b&= \frac{e-c}{p \, (s_{r}^{-})^{p-1}}. \end{aligned}$$

The choice is such that dis is of class \(C^1\) (in particular, the function and its first derivative are continuous at the points \(s^-_{r}\) and \(s^+_{r}\)) and the following conditions are met. The first index is mapped to \(F^-\) (\({dis}(0) = F^-\)), and the last index \(j_{\max }\) is mapped to \(F^+\) (\({dis}(j_{\max }) = F^+\)). The value \(\tfrac{j_{\max } }{2} \in {\mathbb {R}}\) is mapped to the midpoint \(F^- + \tfrac{\Delta F}{2}\) of the interval \([F^-, F^+]\). The values \(s^{\pm }_{r}\) are mapped exactly to the points where the change from the linear to the degree p polynomial occurs and vice versa, i.e.,

$$\begin{aligned} {dis}(s^-_r)&= F^- + r, \qquad {dis}(s^+_r) = F^+ - r. \end{aligned}$$

Note that even for an even polynomial degree p, the function dis is monotonic by construction. Finally, we use the distribution function to map the vector indices \(j = 0, \dots , j_{\max } \in {\mathbb {N}}_0\) of the grid points to deformation gradient values, cf. Fig. 2. This leads to a symmetric distribution of the grid points on \([F^-, F^+]\) that is refined toward the boundary with a grading which depends on the polynomial degree p. After the distribution of the grid points on all subintervals \([F^-, F^+]\), these are glued together to obtain \(\hat{F}_{j=0,\dots ,a}\) as a discretization of the whole interval \([F_0, F_n]\) of interest. Clearly, the choice of polynomial and the set of equations is flexible and can be adapted to the specific problem at hand. The proposed choice appears to be sufficiently general to capture regular multi-well functions efficiently (cf. Fig. 3). In Fig. 2 (left), the influence of the choice of r is plotted. It can be seen that the larger r is chosen, the steeper the curve becomes in the middle of the interval and the more the distributed grid points localize around the beginning and end of the interval [0, 1]. In this part of the figure, the radius takes the values \(r\in \{0.0,0.025,0.05,0.1,0.2,0.5\}\), where the steepest curve with respect to the center of the interval corresponds to \(r=0.5\), the curve appearing linear corresponds to \(r=0.0\), and the function highlighted in red to \(r=0.1\). For all radius variations, the polynomial degree is fixed to \(p=5\). In the right-hand panel of the figure, the polynomial degree is varied from one to six, where the steepest curve corresponds to \(p=6\) and the linearly behaving function to \(p=1\). The curve highlighted in red shows \(p=3\). Within this plot, the radius is fixed to \(r=0.2\) for all polynomial degrees.
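For illustration, the distribution function can be implemented directly from the formulas above; the following Julia sketch, including its default parameter values, is our own hypothetical realization.

```julia
# Grid-point distribution function dis on [F⁻, F⁺]; see the formulas above.
# Returns a map t ↦ dis(t) for the indices t ∈ [0, jmax].
function make_dis(Fm, Fp, jmax; r = 0.1 * (Fp - Fm), hmin = 1e-3, p = 5)
    ΔF = Fp - Fm
    c  = hmin
    e  = (sqrt((ΔF - (p - 1) * (c * jmax - 2r))^2 +
               4 * c * jmax * (p - 1) * (ΔF - 2r)) -
          c * jmax * (p + 1) + ΔF + 2r * (p - 1)) / (2 * jmax)
    sm = jmax / 2 - (ΔF / 2 - r) / e       # s⁻ᵣ
    sp = jmax - sm                         # s⁺ᵣ
    f  = Fm + ΔF / 2 - e * jmax / 2
    b  = (e - c) / (p * sm^(p - 1))
    return t -> t < sm  ? b * t^p + c * t + Fm :
                t <= sp ? e * t + f :
                          -(b * (jmax - t)^p + c * (jmax - t)) + Fp
end

dis  = make_dis(0.0, 1.0, 40)   # unit interval as in Fig. 2
grid = dis.(0:40)               # graded, symmetric grid points
```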

Fig. 2

Parameterized polynomial used to distribute the one-dimensional grid points within the unit interval [0, 1]. All shown functions can be scaled and translated by an affine transformation to the actual non-convex regime detected by Algorithm 2

In Figs. 3 and 4, the benchmark function from Fig. 1 and a Neo-Hooke effective energy, respectively, were used as tests for the procedures of Algorithms 1 and 2. In these figures, the coarse grid is visualized at the very bottom; the short black lines correspond to the grid points. Further, the function drawn in black, denoted as W(F) on \(F_{j=0,\dots ,n}\), shows the information passed to the adaptive algorithm. The adaptive grid is visualized as short black lines above the coarse grid. It is remarkable that a coarse grid consisting of fewer than 50 points already allows the adaptive scheme to construct a grid which accurately resolves the shape of the incremental stress potential (drawn as a red curve) as well as the start and end of the non-convex intervals.

Fig. 3

Adaptive procedure applied to multi-well benchmark function from Fig. 1

Fig. 4

Convexified incremental stress potential where a Neo-Hooke energy \(\psi ^0\) was used. The black curve and the second derivative, obtained by automatic differentiation at the same points, are the information entering Algorithm 2 to construct the adaptive grid, which is shown as black lines at the bottom of the figure. The first row of black bars corresponds to the adaptive grid, while the second row visualizes the coarse grid used to construct it

To study this further, a compressible Neo-Hooke model with \(\mu =0.5\) and \(\lambda =0.1\), the Yeoh model with \(c_1=6.0\), \(c_2=1.0\), and \(c_3=1.0\), and the St. Venant–Kirchhoff model (St.V–K) with \(\mu =0.5\), \(\lambda =0.0\), are considered for the effective energy densities \(\psi ^0\). The corresponding strain energy density functions are given by (3) and

$$\begin{aligned}&\psi _{\mathrm{St.V-K}}^0({\varvec{C}}) :=\frac{\lambda }{8} \left( I_1 - 3\right) ^2 + \frac{\mu }{4}\left( I_1^2 - 2I_1 - 2 I_2 + 3\right) , \end{aligned}$$
(18)
$$\begin{aligned}&\psi _{\mathrm{Yeoh}}^0({\varvec{C}}) :=c_1 \left( I_1 I_3^{-1/3}-3\right) + c_2 \left( I_1 I_3^{-1/3}-3\right) ^2 + c_3 \left( I_1 I_3^{-1/3}-3\right) ^3. \end{aligned}$$
(19)

The damage parameters \(D_{\infty }\) and \(D_0\) are fixed at 0.99 and 0.5, respectively. In Table 1, the results of the associated convexifications are compared with regard to the convex hull supporting points \(F^+\) and \(F^-\) for the adaptive and the equidistant procedure. For an objective estimation of the difference, the relative distance \(\Delta (\cdot ) :=[(\cdot )^{\mathrm{adapt}}-(\cdot )^{\mathrm{equiv}}]/(\cdot )^{\mathrm{equiv}}\) is considered and given in Table 1. Herein, \((\cdot )^{\mathrm{adapt}}\) and \((\cdot )^{\mathrm{equiv}}\) denote quantities of interest computed from the adaptive and the equidistant distribution of grid points, respectively. All considered incremental stress potential functions (with different effective strain energy densities) have exactly two intervals of non-convexity: one interval \([F^{-,c}, F^{+, c}]\) in the compression regime and the other interval \([F^{-, t}, F^{+, t}]\) in the tension regime. Note that the hyper-parameters of the adaptive scheme, \(h_{\mathrm{min}}\), p, and r, have been chosen rather arbitrarily; they have not been tuned to provide an optimal setting for the adaptive scheme. The non-convex regimes of the Yeoh and the St. Venant–Kirchhoff model are approximately of the same size and thus require a similar number of equidistant points to determine the convex hull supporting points appropriately. As can be seen from Table 1, a significantly smaller number a of grid points in the adaptive scheme is sufficient to obtain comparable results (small relative deviations) instead of the N points of the equidistant grid.

Table 1 Comparison of the convex hull supporting points \(F^+\) and \(F^-\) in terms of relative deviations between the adaptive and the equidistant grid in both the tension and the compression zone. Here, superscripts \((\cdot )^{\pm ,c}\) and \((\cdot )^{\pm ,t}\) denote the compression and tension damage regime, respectively

The adaptive scheme is now benchmarked in terms of computing time against an equidistant grid for the Neo-Hooke effective strain energy density. The reference timing consists of constructing the incremental stress potential values W and convexifying them with the procedure presented in Algorithm 1. The benchmark considered 3000 samples on a laptop with an Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz in order to average out fluctuations. The benchmark software BenchmarkTools.jl was used for this purpose. Figure 5 shows the obtained measurements. Here, the adaptive grid was limited to 250 points in total, and each non-convex interval needed to consist of at least 20 points. The construction of the adaptive grid as described in Algorithm 2 was initialized with \(n=20\) coarse grid points for each of \(F_{j=0,\dots ,n}\), \(W_{j=0,\dots ,n}\) and \(W''_{j=0,\dots ,n}\). This is the setting used for fitting the polynomials which distribute the points of the grid. The comparative equidistant grid starts from 0.001 with step size \(\delta =0.01\) and ends at \(F=20\); it therefore has 2000 points in total. Such an equidistant grid was used whenever boundary value problems with actual degrees of freedom were solved without the adaptive scheme (as in Sect. 5) and thus serves as a fair comparison. Different polynomial degrees are tested; however, as can be seen in Fig. 5, the difference is marginal. All computations based on the adaptive scheme required significantly reduced computing times.
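A self-contained sketch of such a timing measurement with BenchmarkTools.jl is shown below, using the benchmark polynomial of Fig. 1 together with the `convexify` routine sketched in Sect. 3.1; it merely illustrates the measurement setup, not the paper's exact configuration.

```julia
using BenchmarkTools

# Benchmark polynomial of Fig. 1 as a stand-in for the incremental potential.
W(F) = (F - 1)^2 * (42 + 77*F + 15*F^2 - 102.5*F^3 + 58.89*F^4 - 12.89*F^5 + F^6)

F_eq = collect(0.0:0.0021:4.25)   # fine equidistant grid, roughly 2000 points
W_eq = W.(F_eq)

# Reference timing: convexify the equidistant fine grid (Algorithm 1 sketch).
@btime convexify($F_eq, $W_eq)
```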

Fig. 5

Benchmark to compare the adaptive strategy with a typically used equidistant grid for a Neo-Hooke type base material within the continuum damage mechanics formulation. The adaptive grid was limited to 250 points, and the measurements show a successful adaptive strategy. Constructing an adaptive grid and convexifying it afterward is faster than convexifying an equidistant fine grid with 2000 points

4 Microsphere approach

The microsphere approach [12, 32] is an attractive technique to produce three-dimensional models based on one-dimensional material laws. It was originally derived for fiber- or polymer-chain-based materials. However, in [20] the procedure was generalized by introducing representative directions. Since the mathematical procedure is almost identical, we refer to both as the microsphere approach. Note that the procedure can be applied to any available one-dimensional material law. In the relaxation case, where the dimensionality of the problem is squared (W(F) with \(F \in {\mathbb {R}}^{d\times d}\)), the curse of dimensionality makes the problem hard to solve, even in two spatial dimensions. Thus, microspherical integration of one-dimensional representative fibers is used to obtain a generalized three-dimensional constitutive response, which may be particularly realistic in the case of materials reinforced with dispersed fibers. Denoting by \(F^{\alpha }\) the elongation of the \(\alpha \)-th fiber (or representative direction), the one-dimensional constitutive response can be obtained by

$$\begin{aligned} W^{\alpha }:=W(F^{\alpha }), \qquad P^{\alpha } :=\frac{\partial {W^{\alpha }}}{\partial {F^{\alpha }}}, \end{aligned}$$
(20)

where \(W^{\alpha }\) is the incremental stress potential of the \(\alpha \)-th fiber, which is modeled according to (9) with a one-dimensional effective strain energy density \(\psi ^0(F^{\alpha })\). Now, by integrating all fibers over the unit sphere, the three-dimensional first Piola–Kirchhoff stress and the nominal tangent moduli can be obtained (cf. [20]) by

$$\begin{aligned} {\varvec{P}}= \int _0^{2\pi } \int _0^{\pi } \rho \, P^{\alpha } {\varvec{A}}\otimes {\varvec{A}}\sin \vartheta \text { d}\vartheta \text { d} \varphi , \quad {\mathbb {A}} = \int _0^{2\pi } \int _0^{\pi } \rho \, \frac{\partial {P^{\alpha }}}{\partial {F^{\alpha }}} {\varvec{A}}\otimes {\varvec{A}}\otimes {\varvec{A}}\otimes {\varvec{A}}\sin \vartheta \text { d}\vartheta \text { d} \varphi . \end{aligned}$$
(21)

In these formulas, \({\varvec{A}}:={\varvec{A}}(\varphi ,\vartheta )\) denotes the direction of the fiber and depends on the spherical angles \(\varphi \) and \(\vartheta \). In this work, we restrict ourselves to a uniform orientation distribution function and thus \(\rho = 1/4\pi \), which can be pulled in front of the integral. By application of a numerical quadrature rule for the microsphere given by \(\alpha =1,\dots ,N\) directions and weights \(w^{\alpha }\), we obtain

$$\begin{aligned} {\varvec{P}}= \frac{1}{4\pi }\sum _{\alpha =1}^{N} w^{\alpha } P^{\alpha } {\varvec{A}}^{\alpha } \otimes {\varvec{A}}^{\alpha }, \qquad {\mathbb {A}} = \frac{1}{4\pi }\sum _{\alpha =1}^{N} w^{\alpha } \frac{\partial {P^{\alpha }}}{\partial {F^{\alpha }}} {\varvec{A}}^{\alpha } \otimes {\varvec{A}}^{\alpha } \otimes {\varvec{A}}^{\alpha } \otimes {\varvec{A}}^{\alpha }. \end{aligned}$$
(22)
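The discrete sum (22) for the stress can be sketched in a few lines of Julia; the names (`P1d` for the one-dimensional stress law, `dirs` and `wts` for the quadrature directions and weights) are our assumptions, and the tangent \({\mathbb {A}}\) is assembled analogously from \(\partial P^{\alpha }/\partial F^{\alpha }\) and the fourth-order dyadic of \({\varvec{A}}^{\alpha }\).

```julia
using LinearAlgebra

# Discrete microsphere stress (22): P = (1/4π) Σ_α w^α P^α A^α ⊗ A^α.
function microsphere_stress(F::AbstractMatrix, dirs, wts, P1d)
    P = zeros(3, 3)
    for (A, w) in zip(dirs, wts)
        Fα = norm(F * A)                  # fiber stretch along direction A^α
        P .+= (w * P1d(Fα)) .* (A * A')   # accumulate w^α P^α A^α ⊗ A^α
    end
    return P ./ (4π)                      # uniform orientation density ρ = 1/(4π)
end
```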

4.1 Benchmark of discrete and continuous convexification

In this section, we compare the discrete convexification against the continuous optimization procedure within the context of the microsphere. All calculations are carried out by means of the Julia programming language [13] using the finite element toolbox Ferrite.jl [15] and the tensor package Tensors.jl [14] for automatic differentiation. The term discrete convexification refers to the algorithmic scheme given in Algorithm 1, without the adaptive scheme described in Sect. 3.2, whereas continuous convexification refers to the evolutionary search multistart Newton strategy of [38]. All numbers presented compare one fiber of the microsphere inside one integration point of a specific boundary value problem. The geometry of the boundary value problem is depicted in Fig. 6b, where a unit cube is illustrated whose back side is fixed in all degrees of freedom while its front is pulled displacement-driven in the \(x_1\)-direction (indicated by the red arrow). Thus, a pseudo-time-dependent Dirichlet boundary condition, which increases linearly to the maximum value of 4.0, is applied at the front, while a homogeneous Dirichlet boundary condition in each direction is applied at the back. The displacement field is approximated by trilinear basis functions with ten hexahedral elements along the \(x_1\)-direction and two elements in each of the \(x_2\)- and \(x_3\)-directions. For the analysis of computational performance, different metrics associated with the computing time are considered, see Fig. 6a. As can be observed from the results, the discrete convexification is superior to the multistart Newton strategy for all metrics, which is to be expected due to the \({\mathcal {O}}(N)\) complexity of the discrete convexification. Note that the total runtime of the discrete convexification includes the precompilation time of Julia. Thus, the total runtimes should not be compared directly; rather, the average time required for assembling the global system of equations should be. This is also reasonable in view of the fact that the only difference between the compared approaches is the computational effort at the integration point, not the global solution of the nonlinear equilibrium equations. The assembly function was called 2000 times, and thus all fluctuations are averaged out. Nevertheless, the total assembly time shows a similar speedup as the average assembly time. It is crucial to keep in mind that these speedups depend drastically on the given optimization procedure for the continuous convexification as well as on the one-dimensional grid of the discrete convexification scheme. The parameters of the continuous convexification were chosen such that a conservative measurement is obtained, i.e., parameters tuned with expert knowledge. In contrast, the parameter of the discrete convexification was an equidistant grid with step size \(\delta =0.1\) starting from \(F=1.0\) and ending at \(F=16\). Since the complexity of the discrete convexification scheme scales linearly with the grid size, one can extrapolate the timings for smaller step sizes, i.e., halving the step size doubles the time. The total speedup is roughly a factor of 237, times the N fibers, times the total number of integration points within the finite element discretization of the boundary value problem. The comparative computations were carried out on a workstation with an Intel(R) Xeon(R) CPU E5-2630 v2 @ 2.60 GHz containing 12 physical and 12 virtual cores. Summarizing, significant speedups can be realized by the proposed discrete convexification scheme, even if no adaptive grid refinement is considered.

Fig. 6

Benchmark measurements for a one fiber and b the boundary value problem of the benchmark

4.2 Microsphere strain softening approach

The main idea of the new microsphere strain softening approach is to incorporate fiber failure on the microscopic level, i.e., on the fiber level. In Algorithm 3, a pseudocode of the procedure is given. Given a deformation gradient \(\varvec{F}\) in three spatial dimensions, the deformation gradient is projected along all fiber directions \(\alpha \) in each integration point of the finite element discretization. The projection is realized by the product of the deformation gradient and the fiber direction vector \(\varvec{A}^{\alpha }\). Afterward, the fiber stretch, denoted as \(F^\alpha \), can be computed by taking the Euclidean norm of the resulting vector. Then, a condition is checked to determine whether or not the maximal fiber stretch \(F_{\mathrm{fail}}\) is exceeded. If this is not the case, the usual microsphere procedure is carried out, i.e., the first Piola–Kirchhoff stress \(P^{\alpha }\) and the nominal tangent \({\mathbb {A}}^{\alpha }\) per fiber are obtained by evaluating the one-dimensional material law. The fiber stress and nominal tangent are then integrated by taking into account the appropriate dyadic product of the fiber direction. This procedure is repeated for each fiber independently and thus lends itself to trivial parallelization over the fibers. All the following results exploit this fact, with the parallelization happening in a shared memory context, i.e., threaded parallelism, by means of FLoops.jl [1]. In case the maximum fiber stretch is exceeded, the contribution is skipped, which is equivalent to setting it to zero. This models an abrupt failure of the fiber as soon as the maximum fiber stretch is reached. In the following section, it is analyzed whether or not this leads to a smooth stress–strain curve showing strain softening.

Algorithm 3: Microsphere integration with fiber failure (pseudocode figure)
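In code, Algorithm 3 amounts to a single additional check in the loop of the microsphere sum sketched after (22); the following hedged Julia sketch uses the threshold \(F_{\mathrm{fail}}=5.0\) employed in Sect. 5, and all names remain our assumptions.

```julia
# Sketch of Algorithm 3: microsphere stress with abrupt fiber failure.
# As microsphere_stress above, but fibers whose stretch exceeds F_fail are
# skipped, i.e., they contribute zero stress (and tangent).
function microsphere_stress_fail(F::AbstractMatrix, dirs, wts, P1d; F_fail = 5.0)
    P = zeros(3, 3)
    for (A, w) in zip(dirs, wts)
        Fα = norm(F * A)             # fiber stretch
        Fα > F_fail && continue      # abrupt fiber failure: skip contribution
        P .+= (w * P1d(Fα)) .* (A * A')
    end
    return P ./ (4π)
end
```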

4.3 Influence of the integration scheme for the microsphere

The microsphere fiber failure approach to incorporate strain softening is investigated in terms of its dependence on the chosen quadrature formula over the unit sphere. Note that the discontinuities introduced by abrupt fiber failure pose a challenging task in terms of integration and the calculation of the tangent moduli \({\mathbb {A}}\). Even the 5294-point Lebedev integration scheme used here is not sufficient to approximate the integral properly in terms of the second derivative of W. Strictly speaking, the tangent modulus of an individual fiber is not defined at the moment the fiber fails. The Jacobian of the global Newton scheme depends on \({\mathbb {A}}\) and thus on the moduli of the single fibers. Therefore, the Newton procedure may not see an appropriate search direction for finding the structural equilibrium. A potential way out is to include a soft fiber failure in the sense of a smoothed reduction of the fiber stress to zero. In addition, an adaptive integration scheme refining around the failing fibers can be applied. The accuracy of the integration scheme is important in any case, and thus the choice of the quadrature formula should be considered as part of the constitutive modeling when using the microsphere approach, as pointed out in Verron [43]. In order to do so, we test the Lebedev [28] scheme and scale the number of integration points up to 5294; these rules are tabulated in the open source Julia package Lebedev.jl [11]. Further, we compare this symmetric scheme with the unsymmetric scheme of Sloan and Womersley [42], which was also used in [43]. In Fig. 7, the resulting stress–stretch diagrams for the two different integration schemes are shown for varying numbers of integration points. As can be observed from the staircase-like response, the fiber failure introduces discontinuities in the material response owing to the abrupt failure of the fibers. However, as soon as the number of integration points is increased to a certain amount (roughly of the order of 1000), the staircase-like behavior reduces to a smooth softening curve. In particular, it should be noted that the scheme by Sloan & Womersley leads to smoother curves when compared to a Lebedev scheme with equally many or fewer points. The disadvantage of the Sloan & Womersley scheme is that it is not symmetric, which is a desirable property from a mechanics perspective. From the results presented in Fig. 7, it can be concluded that on the order of 100 integration points does not suffice to obtain a smooth softening response, and thus none of the prominent schemes in Bažant and Oh [12] would be appropriate for integrating the fiber failure, since there only schemes with up to 122 integration points have been presented.
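For reference, the tabulated Lebedev rules can be queried from Lebedev.jl; the sketch below shows one plausible way to obtain the 5294-point rule and feed it to the microsphere routines above. The weight normalization \(\sum _{\alpha } w^{\alpha } = 1\) assumed here should be checked against the \(1/4\pi \) convention of (22).

```julia
using Lebedev

# Obtain the 5294-point Lebedev rule, the largest one used in this paper.
x, y, z, w = lebedev_by_points(5294)

dirs = [[x[i], y[i], z[i]] for i in eachindex(w)]  # unit directions A^α
wts  = 4π .* w   # rescale so that Σ_α w^α = 4π, matching the convention of (22)

# P = microsphere_stress_fail(F, dirs, wts, P1d)   # cf. the sketch in Sect. 4.2
```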

5 Analysis of mesh independence

The main motivation behind constructing a convexified incremental stress potential for damage is to ensure mesh-independent numerical calculations. Note that the microsphere approach does not guarantee sufficient convexity in the space of the three-dimensional deformation gradient, even though the one-dimensional models are convex. Due to the numerical integration over the unit sphere, a finite set of fibers (representative directions) is defined along which the model is convexified. The energy along deformation gradients not in line with the fiber directions may thus still be non-convex. However, such cases hardly appear in practice and may thus be tolerated from the application point of view. Therefore, we investigate here to what extent mesh dependencies can indeed be avoided. For this purpose, the proposed microsphere strain softening scheme is tested in the two-element material perturbation test introduced in [27], where the mesh dependence of a relaxed plasticity formulation has been analyzed. This test has also been used to show mesh independence of relaxed damage formulations in [23] and [4] for small and finite strains, respectively. The test problem consists of a uniaxial tension test discretized with two elements arranged in line with the tension direction. To introduce an inhomogeneity into the structural problem, the stiffness of one of the elements is slightly perturbed by modifying the material parameter \(D_{\infty } = 0.99\) through the perturbation \(\epsilon =10^{-6}\), such that the perturbed parameter value becomes \(D_{\infty } := D_{\infty } - \epsilon \). The total length of the domain L is fixed; however, the ratio of the elements is parameterized by a parameter denoted by \(\kappa \). Thus, the first element’s length is \(\kappa L\) and that of the second one \((1-\kappa ) L\). The two-element material perturbation boundary value problem is visualized in Fig. 8. Its major advantage is that mesh dependence can be analyzed by solely changing the mesh distribution, not the number of elements. Thereby, mesh independence can be checked independently from the standard finite element convergence behavior. Note that the perturbation is chosen small enough not to modify the mechanical outcome of the problem if a suitable numerical scheme is considered. However, for mesh-dependent formulations, this physically negligible perturbation will yield significant differences between different discretizations in terms of varied \(\kappa \). The Neo-Hooke effective strain energy density \(\psi ^0\) with Lamé parameters \(\lambda = 0.0\) and \(\mu =0.5\) is used for this problem, and the critical fiber stretch is set to \(F_{\alpha ,\text {fail}}=5.0\). Due to the nature of the microsphere approach, a three-dimensional material model is obtained, and thus three-dimensional elements with trilinear basis functions are used. The nodes of the finite elements at the left-hand side of the problem in Fig. 8, drawn as blue circles, are fixed in all directions. The center nodes of the discretization, drawn as black circles, are free in the \(x_1\)-direction and held fixed in the \(x_2\)- and \(x_3\)-directions. The nodes at the right-hand side, drawn as red circles, have a prescribed linearly increasing pseudo-time-dependent Dirichlet condition in the \(x_1\)-direction, while all other directions are prescribed to zero.

Fig. 7

Study of the influence of the chosen quadrature scheme. The Lebedev (left) and the Sloan and Womersley (right) scheme are used and, in both cases, the number of integration points is increased to approximately 5000 points

Fig. 8

Two-element material perturbation boundary value problem

Fig. 9

Visualization of the structural response in terms of a force-displacement diagram (subfigure 9a). A strong mesh dependency for the unrelaxed model can be observed, while the relaxed and fiber failure models turn out to be mesh-independent. A closer look (zoom-in depicted in subfigure 9b) shows a slight mesh dependency in the fiber failure model, due to the lack of exact integration of the fibers

The structural response of this system in terms of reaction force versus displacement at the right-hand side of the problem can be seen in Fig. 9a. For the one-dimensional unrelaxed model, which is integrated over the microsphere, the well-known strongly mesh-dependent behavior can be observed. There exists some critical value after which the finite element discretization leads to a strongly intensified inhomogeneity in the distribution of the deformation gradient, which is unphysical. This unphysical behavior has already been observed in [27]. The relaxed model, consisting of the convexified fiber model without fiber failure within the microsphere approach, as in [38], shows no mesh dependence at all. However, since one-dimensional convex functions are integrated over the unit sphere, it is not necessarily the case that the resulting three-dimensional energy is convex. In that case, it could even be that compatibility and frame indifference are lost. Although this may in principle happen, it is quite unlikely if convex orientation distribution functions are considered, and so far it has not been numerically observed. However, the relaxed model from [38] does not allow for the description of strain softening, as can be seen from the flat line in the convexified regime, see Fig. 9a. For the proposed model, where fiber failure is taken into account in the individual fiber representatives, clearly significant strain softening can be described. Furthermore, the deviations of the force-displacement curves for varied \(\kappa \) are negligible when using 5294 Lebedev integration points, cf. Fig. 9b. These deviations magnify if the chosen integration scheme has fewer integration points. Therefore, it can be concluded that the remaining slight deviations mostly result from the numerical integration error rather than from the fiber failure. Thus, the results for the proposed relaxed scheme indicate mesh independence.

6 Conclusion

This contribution utilized the framework of incremental variational formulations and relaxation theory to formulate a microsphere-based strain softening scheme. To this end, the incremental variational setting of continuum damage mechanics at finite strains was described in Sect. 2. Furthermore, relaxation and its capability of restoring well-posed problems were introduced. Two topics were at the core of this contribution: efficient convexification schemes for one-dimensional functions, and a microsphere-based strain softening scheme. The former is discussed in Sect. 3, where the state-of-the-art techniques were first summarized. Thereafter, a discrete convexification scheme with approximation properties was applied for the first time in large-strain continuum damage mechanics. Here, a special adaptive strategy was presented in order to enhance and tailor the convexification scheme of [9]. The proposed adaptive scheme opens up possibilities for relaxed continuum mechanics: on the one hand, multidimensional convexification can employ the presented one-dimensional convexification Algorithms 1 and 2; on the other hand, different non-convex mechanical problems, e.g., plasticity or phase transition problems, can be treated. While optimization schemes are often used instead, the present method can be applied to sufficiently regular functions, which is in general the case in continuum mechanics formulations. Note that the regularity assumption is not only required in the adaptive discrete scheme but also in the alternative nonlinear optimization procedure. Comparing both methods, a significant speedup has been shown for the discrete convexification scheme. In Sect. 4.1, state-of-the-art convexification schemes based on multi-modal optimization were compared to the proposed discrete convexification scheme; the comparison showed the superiority of the discrete scheme. As a benchmark, a single fiber within the microsphere of a three-dimensional boundary value problem was considered. The microsphere approach is recapitulated in Sect. 4, and a novel scheme for strain softening based on microsphere fiber damage was introduced in Sect. 4.2. The latter was examined with respect to its dependence on the chosen quadrature formula for the integration over the unit sphere, where it was concluded that a certain number of integration points is required in order to smooth out the discontinuities introduced by the fiber failure. Further, an integration scheme study was carried out to test a Lebedev and a Sloan & Womersley type integration over the unit sphere. Here, the Lebedev scheme was outperformed by the Sloan & Womersley integration; however, the Lebedev scheme possesses symmetry properties desirable for mechanics-related integrations. Hence, the highest-order Lebedev scheme was chosen in order to study mesh independence in Sect. 5. The two-element material perturbation test was used, where it was shown that integrated fibers obeying a non-convex one-dimensional potential yield a strongly mesh-dependent behavior. Further, the test confirmed that fibers using the relaxed formulation of [4, 38] together with the discrete convexification scheme proposed here are mesh-independent. Lastly, the proposed microsphere-based strain softening scheme was tested in this problem as well. There, it was shown that indeed significant strain softening can be observed in numerical tests. Furthermore, only an insignificant mesh dependency appeared, which was concluded to result from the numerical integration error. This is likely to be resolved by an even higher-order and/or adaptive integration scheme. Since the Lebedev integration scheme used already consists of more than 5000 integration points, an adaptive scheme as described in, e.g., Badel and Leblond [2] could be utilized, since symmetry can be guaranteed by construction and an error estimator can easily be constructed. Around every fiber which fails, new integration points should be placed, which should lead to an optimal handling of the discontinuity introduced by the fiber failure. However, these discontinuities may also influence the consistency of the macroscopic Newton–Raphson scheme, and thus the computational effort related to the orientation integration can still be expected to be significant. Summarizing, the proposed relaxed damage formulation and its algorithmic treatment turned out to outperform previous convexified models in terms of efficiency and the capability to describe not only stress softening but also strain softening.