1 Introduction

Additive layer manufacturing, or additive manufacturing (AM) in short, comprises a collection of techniques that allow the creation of components in a layer-by-layer, additive fashion. AM techniques are developing rapidly, and processes for a wide variety of materials are commercially available (Gibson et al. 2015). Compared to conventional subtractive fabrication techniques, AM offers a much larger design freedom, and a completely different cost structure: for AM, geometrical complexity of a component is no longer a main indicator of manufacturing costs.

Specifically in metal-based AM, advances over the past decade have improved the quality and reliability of the technology to a level that is suitable for creating fully functional end products instead of prototypes. Many branches of industry are looking to benefit from the opportunities this development offers (Gao et al. 2015). However, simply taking an existing design and manufacturing it using AM is not an effective approach. Instead, parts need to be redesigned to truly benefit from AM opportunities (Atzeni and Salmi 2012). To fully exploit the large design freedom of AM, topology optimization (TO) in particular is widely recognized as a key enabling design technology (Rosen 2014; Zhu et al. 2015).

While AM technologies can create parts of unprecedented complexity, AM processes also have certain limitations. The dominant processes in metal-based AM currently use a powder bed, combined with laser or electron beams (SLM/EBM) that selectively melt metal powder to create structures. A universal limitation in these processes is that the inclination of downward-facing (overhanging) surfaces is limited to a maximum angle with respect to the build direction. This overhang angle limitation has been extensively characterized, and typically the critical angle amounts to 40–50° (Wang et al. 2013; Mertens et al. 2014; Kranz et al. 2015).

In order to successfully print parts, one solution is to find a build orientation where no overhanging section exceeds the maximum overhang angle. This is often not possible for complex parts. Common practice is therefore to support the overhanging sections using sacrificial support structures, which are removed in a post-processing step. This solution consumes extra material, energy and time, and one has to take care that the added support structures are accessible for removal. Various strategies to automate the addition of minimal support structures have been proposed, see e.g. Strano et al. (2013), Vanek et al. (2014), and Calignano (2014). Others have proposed a procedure to augment a previously optimized design with additional structures, to make it self-supporting (Leary et al. 2014). The added structures are not sacrificial but become part of the design, which however alters the mass and performance of the original part. This paper presents another alternative: by including AM design limitations in the TO process, optimized designs can be generated that do not require any support structures.

The elimination of the need for additional supports at the TO design phase has been recognized and pursued by other authors: Brackett et al. (2011) proposed an overhang angle detection procedure to be combined with TO, but no integrated results were reported. A critical overhang angle of 45° was used. Gaynor and Guest (2014) introduced a wedge-shaped spatial filter for use during TO, which should ensure the presence of sufficient material in a region underneath all parts of the design. When the density average in the wedge exceeded a set threshold, the part above was considered to be properly supported. The published results show that generated designs are indeed self-supporting to a degree, but intermediate density material can readily be used by the optimizer to support fully dense structures, which is undesirable.

This paper proposes a new method to generate self-supporting, print-ready designs. It is also based on the idea of only instantiating material that is sufficiently supported, but applies a more detailed procedure to capture the essence of the targeted SLM/EBM AM processes. Using a simplified virtual AM fabrication model, implemented as a filter applied in a layer-by-layer fashion, at every optimization iteration an ‘as-printed’ design is created from a given blueprint. This operation can be classified as a nonlinear, adaptive spatial filter (Weeks 1996). The performance of this printed design is evaluated and optimized. In this way, unprintable designs are rigorously banned from the design space.

The filter is defined such that it adds little computational effort and consistent sensitivities can be computed efficiently. While effective, it is not perfect: the employed numerical approximations allow for a small but gradual increase of density in the build direction, which sometimes shows in converged results. While it is significantly less problematic than in previous approaches, this tendency is still undesired. We show it can be controlled using the filter parameters, or by applying the AM filter in combination with established techniques to enforce crisp designs. Numerical examples are treated in Section 3, but first the following section introduces the formulation of the virtual layer-by-layer fabrication model.

2 Formulation

2.1 Fabrication model

The AM fabrication model is defined on a regular mesh, as is typically used in TO in an early design stage. For clarity we limit the discussion to the 2D case for a rectangular domain discretized by $n_i \times n_j$ elements, where the vertical direction is the printing direction. The extension to 3D is straightforward but will be discussed elsewhere (Langelaar 2016). Every element in the mesh is associated with a blueprint density variable $x_{(i,j)}$, where $i$ and $j$ denote the vertical and horizontal location of the element. The first layer on the base plate has index $i = 1$. Our aim is to express the printed densities $\xi_{(i,j)}$ in terms of the blueprint densities.

For an element at position $(i,j)$ to be printable, it needs to be sufficiently supported by printed elements in the underlying layer $i-1$, etc. By definition, all elements supported by the base plate ($i = 1$) can be printed. For subsequent layers, we define that the printed density $\xi_{(i,j)}$ of an element cannot be higher than the maximum printed density $\Xi_{(i,j)}$ present in its supporting region $S_{(i,j)}$. This supporting region is chosen to consist of the element directly below the element, and the direct neighbours thereof, see Fig. 1. This choice is motivated by the fact that the critical self-supporting overhang angle for the considered processes typically amounts to 45° (Wang et al. 2013; Mertens et al. 2014; Kranz et al. 2015). In 2D, this results in a region of $n_S = 3$ elements. In 3D, one may use a support region of 5 or 9 elements. Mathematically, for the 2D case, this is expressed as:

$$\begin{array}{@{}rcl@{}} &&\xi_{(i,j)} = \min\left( x_{(i,j)}, \Xi_{(i,j)}\right) \quad\text{with} \qquad\text{(1)} \end{array} $$
$$\begin{array}{@{}rcl@{}} &&\Xi_{(i,j)} = \max\left( \xi_{(i-1,j-1)},\xi_{(i-1,j)},\xi_{(i-1,j+1)}\right) . \qquad\text{(2)} \end{array} $$

At the domain boundaries the supporting region only consists of 2 elements, and either the left- or rightmost element is omitted from (2). This special consideration is omitted in the following for clarity. By sweeping through the domain from layer 1 to $n_i$, the printed density field can be constructed. The process is illustrated in Fig. 2.
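In pseudocode form, this exact (non-smooth) sweep amounts to the following. This is an illustrative Python sketch written for this discussion, not the paper's implementation (which is the Matlab code AMfilter.m); the function name is ours:

```python
import numpy as np

def am_filter_exact(x):
    """Exact AM filter per (1)-(2): sweep from the base layer upward.

    x[i, j] holds blueprint densities, with row i = 0 resting on the
    base plate; returns the 'as-printed' densities.
    """
    ni, nj = x.shape
    xi = np.zeros_like(x)
    xi[0] = x[0]                                     # base layer always prints
    for i in range(1, ni):
        for j in range(nj):
            lo, hi = max(0, j - 1), min(nj, j + 2)   # support region (2 or 3 elements)
            Xi = xi[i - 1, lo:hi].max()              # best support available below
            xi[i, j] = min(x[i, j], Xi)              # eq. (1)
    return xi
```

Unsupported ('floating') material is removed, while any element resting on the element below or on one of its diagonal neighbours, i.e. within the 45° overhang limit, is retained.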

Fig. 1: Definition of supporting region $S_{(i,j)}$ for element $(i,j)$

Fig. 2: Conceptual layer-by-layer fabrication process

In this form, the fabrication model is not differentiable due to the nonsmooth min and max operators in (1) and (2). As gradient information is essential in the TO process, the model is cast in a differentiable form using smooth approximations smin and smax. In this paper we opt for the following approximations:

$$ \text{smin}(x,\Xi)\equiv \frac{1}{2}\left( x+\Xi-\left( (x-\Xi)^{2}+{\varepsilon}\right)^{1/2}+\sqrt{{\varepsilon}}\right), \qquad\text{(3)} $$
$$ \text{smax}(\xi_{1},\xi_{2},\xi_{3})\equiv \left( \sum\limits_{k=1}^{n_{S}} {\xi_{k}^{P}}\right)^{1/P}. \qquad\text{(4)} $$

Here the parameters $\varepsilon$ and $P$ control the accuracy and smoothness of the approximations. For \(\varepsilon \rightarrow 0\) and \(P \rightarrow \infty\) the exact min and max operators are obtained, but smoothness is lost. For other values, deviations arise in particular situations. For $\varepsilon > 0$ the smin operator gives exact results for equal inputs, i.e. smin(a,a) = a. For dissimilar inputs, however, the true minimum is slightly overestimated. The smax operator (P-norm) gives exact results for finite $P$ for cases of the form smax(a,0,0) with $a \geq 0$, which is necessary to represent critical overhanging sections. For other inputs, the maximum is overestimated. The largest overshoot occurs for equal inputs, i.e. layers of uniform printed density, and is proportional to the density of the layer. For fully solid layers, this error is mitigated by the subsequent application of (3), combined with the fact that $x_{(i,j)} \leq 1$. However, for intermediate densities a build-up of layers of gradually increasing density could occur, which does not match the fabrication model assumptions. To counteract this without requiring extreme $P$ values, a simple and effective solution is to slightly penalize the output of smax, such that the overestimation of the maximum of intermediate density regions is reduced. Thus, we redefine smax as follows:

$$ \text{smax}(\xi_{1},\xi_{2},\xi_{3})\equiv \left( \sum\limits_{k=1}^{n_{S}} {\xi_{k}^{P}}\right)^{1/Q} \qquad\text{(5)} $$

with $Q \leq P$. Lower values of $Q$ result in stronger penalization of lower densities. Given a layer density value $0 < \xi_0 < 1$ for which zero overshoot is desired, it follows that $Q$ should be chosen as:

$$ Q=P+\frac{\log n_{S}}{\log \xi_{0}} . \qquad\text{(6)} $$

In this paper, we choose $\xi_0 = 0.5$ as default value; with $n_S = 3$ and $P = 40$, the expression above then gives $Q = 40 + \log 3/\log 0.5 \approx 38.4$. The support capability (maximum density) of uniform printed layers below this density is underestimated by (5), while for printed densities above $\xi_0$ the support capability is slightly overestimated. The effect of this penalization of the P-norm is investigated in Section 3. The reformulated, differentiable AM fabrication model now becomes:

$$\begin{array}{@{}rcl@{}} &&\xi_{(i,j)} = \text{smin}\left( x_{(i,j)}, \Xi_{(i,j)}\right) \quad\text{with} \qquad\text{(7)} \end{array} $$
$$\begin{array}{@{}rcl@{}} &&\Xi_{(i,j)} = \text{smax}\left( \xi_{(i-1,j-1)},\xi_{(i-1,j)},\xi_{(i-1,j+1)}\right), \qquad\text{(8)} \end{array} $$

with smin and smax according to (3) and (5), respectively. Note that printed densities in layer i depend on densities in all underlying layers.
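To make the smooth operators concrete, the following Python sketch implements (3), (5) and the layerwise sweep (7)-(8) with the default parameters used later in Section 3 ($\varepsilon = 10^{-4}$, $P = 40$, $\xi_0 = 0.5$). The function and variable names are ours; the paper's reference implementation is the Matlab code AMfilter.m:

```python
import numpy as np

EPS, P, XI0 = 1e-4, 40.0, 0.5
NS = 3                                    # 2D support region size
Q = P + np.log(NS) / np.log(XI0)          # zero overshoot at density XI0, eq. (6)

def smin(x, Xi):
    # smooth minimum, eq. (3); exact for equal inputs
    return 0.5 * (x + Xi - np.sqrt((x - Xi) ** 2 + EPS) + np.sqrt(EPS))

def smax(v):
    # penalized P-norm smooth maximum over a support region, eq. (5)
    return np.sum(v ** P) ** (1.0 / Q)

def am_filter_smooth(x):
    """Differentiable AM filter (7)-(8): layerwise sweep with smin/smax.
    x[i, j] are blueprint densities, row i = 0 rests on the base plate."""
    ni, nj = x.shape
    xi = np.zeros_like(x)
    xi[0] = x[0]                                     # base layer always prints
    for i in range(1, ni):
        for j in range(nj):
            lo, hi = max(0, j - 1), min(nj, j + 2)   # support region
            xi[i, j] = smin(x[i, j], smax(xi[i - 1, lo:hi]))
    return xi
```

Note that with this choice of $Q$, smax is exact (no overshoot) for a uniform support region at density $\xi_0 = 0.5$, and a fully solid column passes through the filter unchanged, while unsupported material is strongly suppressed.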

2.2 Sensitivity analysis

Optimization of printed parts involves performance criteria $f_p$ that depend on the printed geometry, which in turn depends on the blueprint design, i.e. $f_p(\boldsymbol{\xi}(\mathbf{x}))$, where bold symbols indicate the entire density fields organized in vector form. Response sensitivities with respect to $\mathbf{x}$ are given by:

$$ \frac{\partial f_{p}}{\partial \textbf{x}} = \frac{\partial f_{p}}{\partial {\boldsymbol{\xi}}}\frac{\partial {\boldsymbol{\xi}}}{\partial \textbf{x}} . \qquad\text{(9)} $$

Responses are computed using the as-printed design $\boldsymbol{\xi}$, often involving finite element analysis. The term $\partial f_p/\partial \boldsymbol{\xi}$ is then obtained by (adjoint) sensitivity analysis of the performance criterion. The term $\partial \boldsymbol{\xi}/\partial \mathbf{x}$ expresses the dependence of printed densities on blueprint densities, which can be obtained through direct differentiation of (7) and (8). However, as the printed densities in a given layer depend on blueprint densities of all underlying layers, $\partial \boldsymbol{\xi}/\partial \mathbf{x}$ is a densely populated triangular matrix. For large problems, the computational cost and required memory become prohibitive.

For a more efficient approach, an adjoint formulation is employed. Combining (7) and (8), the following relation holds between printed and blueprint densities:

$$ {\boldsymbol{\xi}}_{i}=\text{smin}\left( \textbf{x}_{i}, {\boldsymbol{\xi}}_{i-1}\right)=\breve{s}\left( \textbf{x}_{i}, {\boldsymbol{\xi}}_{i-1}\right) . \qquad\text{(10)} $$

Here \(\breve {s}\) is introduced as shorthand notation for smin, and the single subscripts are layer indices. Using (10) as constraint equations, an augmented response \(\tilde {f}_{p}\) can be defined as:

$$ \tilde{f}_{p}= f_{p}({\boldsymbol{\xi}}(\textbf{x})) + \sum\limits_{i=1}^{n_{i}}{{\boldsymbol{\lambda}}_{i}^{T}}\left( \breve{s}(\textbf{x}_{i},{\boldsymbol{\xi}}_{i-1}) - {\boldsymbol{\xi}}_{i}\right) , \qquad\text{(11)} $$

with $\boldsymbol{\lambda}_i$ as multiplier vectors. From this point, for brevity the arguments of \(\breve{s}\) are omitted, and instead the layer of its first argument is given as a subscript. At the first layer, we define \(\boldsymbol{\xi}_{1}=\breve{s}_{1}\equiv \mathbf{x}_{1}\) and thus \(\partial {\breve{s}_{1}} / \partial{\mathbf{x}_{1}}={\mathbf{I}}\). Differentiation of (11) yields:

$$\frac{\partial \tilde{f}_{p}}{\partial \textbf{x}_{j}}\!= \sum\limits_{i=1}^{n_{i}}\!\left\{\! \frac{\partial f_{p}}{\partial {\boldsymbol{\xi}}_{i}}\frac{\partial{\boldsymbol{\xi}}_{i}}{\partial \textbf{x}_{j}}\,+\,{{\boldsymbol{\lambda}}_{i}^{T}}\!\!\left( \frac{\partial\breve{s}_{i}}{\partial \textbf{x}_{j}}\delta_{ij}\,+\,\frac{\partial \breve{s}_{i}}{\partial {\boldsymbol{\xi}}_{i-1}}\frac{\partial {\boldsymbol{\xi}}_{i-1}}{\partial \textbf{x}_{j}}\,-\,\frac{\partial {\boldsymbol{\xi}}_{i}}{\partial \textbf{x}_{j}} \right) \right\} $$

where $\delta_{ij}$ denotes the Kronecker delta, and $1 \leq j \leq n_i$. Since $\partial \boldsymbol{\xi}_i/\partial \mathbf{x}_j = \mathbf{0}$ for $i < j$ (printed densities only depend on blueprint densities in underlying layers), terms in the summations with $i < j$ vanish. Taking terms with $i = j$ outside of the summations, and using \(\partial {\boldsymbol{\xi}}_{j} / \partial \textbf{x}_{j}=\partial \breve{s}_{j} / \partial \textbf{x}_{j}\), gives:

$$\begin{array}{@{}rcl@{}} \frac{\partial\tilde{f}_{p}}{\partial\textbf{x}_{j}}&=&\frac{\partial f_{p}}{\partial{\boldsymbol{\xi}}_{j}}\frac{\partial {\boldsymbol{\xi}}_{j}}{\partial \textbf{x}_{j}}\\ &&+\sum\limits_{i=j+1}^{n_{i}}\left\{ \left( \frac{\partial f_{p}}{\partial {\boldsymbol{\xi}}_{i}} - {{\boldsymbol{\lambda}}_{i}^{T}}\right)\frac{\partial {\boldsymbol\xi}_{i}}{\partial \textbf{x}_{j}} + {{\boldsymbol{\lambda}}_{i}^{T}}\frac{\partial \breve{s}_{i}}{\partial {\boldsymbol{\xi}}_{i-1}}\frac{\partial {\boldsymbol{\xi}}_{i-1}}{\partial \textbf{x}_{j}} \right\} . \end{array} $$

Next, the last term in the summation is written as a separate sum and the first term (i = j + 1) is taken out:

$$\begin{array}{@{}rcl@{}} \frac{\partial \tilde{f}_{p}}{\partial \textbf{x}_{j}}&=&\frac{\partial f_{p}}{\partial {\boldsymbol{\xi}}_{j}}\frac{\partial {\boldsymbol{\xi}}_{j}}{\partial \textbf{x}_{j}} + \sum\limits_{i=j+1}^{n_{i}}\left( \frac{\partial f_{p}}{\partial {\boldsymbol{\xi}}_{i}} - {{\boldsymbol{\lambda}}_{i}^{T}}\right)\frac{\partial {\boldsymbol{\xi}}_{i}}{\partial \textbf{x}_{j}}\\ &&+{\boldsymbol{\lambda}}_{j+1}^{T}\frac{\partial \breve{s}_{j+1}}{\partial {\boldsymbol{\xi}}_{j}}\frac{\partial {\boldsymbol{\xi}}_{j}}{\partial \textbf{x}_{j}} + \sum\limits_{i=j+2}^{n_{i}}{{\boldsymbol{\lambda}}_{i}^{T}}\frac{\partial \breve{s}_{i}}{\partial {\boldsymbol{\xi}}_{i-1}}\frac{\partial {\boldsymbol{\xi}}_{i-1}}{\partial \textbf{x}_{j}}. \end{array} $$

By reindexing, the last sum can be changed into a summation from $i = j+1$ to $n_i - 1$. By taking the last term $i = n_i$ out of the first summation, both sums regain the same limits. Again using \(\partial {\boldsymbol{\xi}}_{j} / \partial \textbf{x}_{j}=\partial \breve{s}_{j} / \partial \textbf{x}_{j}\) and recombining summations gives:

$$\begin{array}{@{}rcl@{}} \frac{\partial \tilde{f}_{p}}{\partial \textbf{x}_{j}}&=&\left( \frac{\partial f_{p}}{\partial {\boldsymbol{\xi}}_{j}} + {\boldsymbol{\lambda}}_{j+1}^{T}\frac{\partial\breve{s}_{j+1}}{\partial{\boldsymbol{\xi}}_{j}}\right)\frac{\partial\breve{s}_{j}}{\partial\textbf{x}_{j}}+\left( \frac{\partial f_{p}}{\partial{\boldsymbol{\xi}}_{n_{i}}}-{\boldsymbol{\lambda}}_{n_{i}}^{T}\right)\frac{\partial{\boldsymbol{\xi}}_{n_{i}}}{\partial\textbf{x}_{j}} \\ &&+\sum\limits_{i=j+1}^{n_{i}-1}\left( \frac{\partial f_{p}}{\partial {\boldsymbol\xi}_{i}} - {{\boldsymbol\lambda}_{i}^{T}} + {\boldsymbol{\lambda}}_{i+1}^{T}\frac{\partial \breve{s}_{i+1}}{\partial {\boldsymbol{\xi}}_{{i}}}\right)\frac{\partial {\boldsymbol{\xi}}_{i}}{\partial \textbf{x}_{j}} . \qquad\text{(12)} \end{array} $$

From this equation, which holds for $1 \leq j \leq n_i$, it follows that computation of the \(\frac{\partial {\boldsymbol{\xi}}_{i}}{\partial \textbf{x}_{j}}\) terms can be avoided by choosing the multipliers as:

$$\begin{array}{@{}rcl@{}} {{\boldsymbol{\lambda}}_{j}^{T}}&=&\frac{\partial f_{p}}{\partial {\boldsymbol{\xi}}_{j}}+{\boldsymbol{\lambda}}_{j+1}^{T}\frac{\partial \breve{s}_{j+1}}{\partial {\boldsymbol{\xi}}_{j}} \quad\text{for } 1\leq j<n_{i}, \\ {\boldsymbol{\lambda}}_{n_{i}}^{T}&=&\frac{\partial f_{p}}{\partial {\boldsymbol{\xi}}_{n_{i}}} . \qquad\text{(13)} \end{array} $$

As each multiplier depends on the one associated with the layer above, the sequence of evaluation starts at the top layer and proceeds downwards. Note the resemblance to adjoint sensitivity analysis of transient problems (Van Keulen et al. 2005), caused by the layerwise, sequential nature of the AM filter. With multipliers according to (13), sensitivities of response $f_p$ follow from (12) as:

$$\frac{\partial f_{p}}{\partial \textbf{x}_{j}}=\frac{\partial \tilde{f}_{p}}{\partial \textbf{x}_{j}}=\left( \frac{\partial f_{p}}{\partial {\boldsymbol{\xi}}_{j}} + {\boldsymbol{\lambda}}_{j+1}^{T}\frac{\partial \breve{s}_{j+1}}{\partial {\boldsymbol{\xi}}_{j}}\right)\frac{\partial \breve{s}_{j}}{\partial \textbf{x}_{j}} = {{\boldsymbol{\lambda}}_{j}^{T}}\frac{\partial \breve{s}_{j}}{\partial \textbf{x}_{j}}. $$

The derivatives of \(\breve {s}\) follow from (3), (5), (7) and (8) as:

$$\begin{array}{@{}rcl@{}} \frac{\partial \breve{s}(x,\Xi)}{\partial x} &=& \frac{1}{2}\left( 1 - (x-\Xi)\left( (x-\Xi)^{2}+\varepsilon\right)^{-1/2}\right) ,\\ \frac{\partial \breve{s}}{\partial \xi}&=&\frac{\partial \breve{s}}{\partial \Xi}\frac{\partial \Xi}{\partial \xi},\\ \frac{\partial \breve{s}(x,\Xi)}{\partial \Xi} &=& \frac{1}{2}\left( 1 + (x-\Xi)\left( (x-\Xi)^{2}+\varepsilon\right)^{-1/2}\right) , \end{array} $$


$$ \frac{\partial \Xi(\xi_{1},\xi_{2},\xi_{3})}{\partial \xi_{i}} = \frac{P\xi_{i}^{P-1}}{Q}\left( \sum\limits_{k=1}^{n_{S}} {\xi_{k}^{P}}\right)^{1/Q-1} . $$

In this last expression, only printed densities $\xi_k$ in the support region of an element in the next layer affect $\Xi$ and give a nonzero contribution to the derivative. The local support of the filter operation thus limits the number of operations needed to evaluate the multipliers in each layer. The transformation of design sensitivities $\partial f_p/\partial \boldsymbol{\xi}$ into $\partial f_p/\partial \mathbf{x}$ using this adjoint approach involves simple operations with sparse or even diagonal matrices per layer, and is inexpensive compared to e.g. the finite element analysis of the design.
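As a concrete illustration of the backward sweep, the Python sketch below implements the multiplier recursion (13) and the final sensitivity expression $\partial f_p/\partial \mathbf{x}_j = \boldsymbol{\lambda}_j^T \partial\breve{s}_j/\partial \mathbf{x}_j$. The names, structure and the choice of test response are ours, not the paper's; the result can be verified against central finite differences:

```python
import numpy as np

EPS, P, XI0, NS = 1e-4, 40.0, 0.5, 3
Q = P + np.log(NS) / np.log(XI0)        # penalized P-norm exponent, eq. (6)

def forward(x):
    """Forward layerwise sweep (7)-(8); returns printed densities xi
    and the smooth support maxima Xi (stored for reuse in the adjoint)."""
    ni, nj = x.shape
    xi, Xi = np.zeros_like(x), np.zeros_like(x)
    xi[0] = x[0]                                     # base layer: xi_1 = x_1
    for i in range(1, ni):
        for j in range(nj):
            lo, hi = max(0, j - 1), min(nj, j + 2)   # support region S_(i,j)
            Xi[i, j] = np.sum(xi[i - 1, lo:hi] ** P) ** (1.0 / Q)
            xi[i, j] = 0.5 * (x[i, j] + Xi[i, j]
                              - np.sqrt((x[i, j] - Xi[i, j]) ** 2 + EPS)
                              + np.sqrt(EPS))        # smin, eq. (3)
    return xi, Xi

def adjoint_sens(x, dfdxi):
    """Backward sweep: multipliers per (13), then df/dx_j = lam_j * ds_j/dx_j."""
    ni, nj = x.shape
    xi, Xi = forward(x)
    dfdx = np.zeros_like(x)
    lam = dfdxi[ni - 1].copy()                       # top layer multiplier
    for i in range(ni - 1, 0, -1):
        r = (x[i] - Xi[i]) / np.sqrt((x[i] - Xi[i]) ** 2 + EPS)
        dfdx[i] = lam * 0.5 * (1.0 - r)              # lam_i * ds_i/dx_i
        w = lam * 0.5 * (1.0 + r)                    # lam_i * ds_i/dXi_i
        lam_prev = dfdxi[i - 1].copy()               # recursion (13)
        for j in range(nj):                          # add lam_i * ds_i/dxi_{i-1}
            lo, hi = max(0, j - 1), min(nj, j + 2)
            S = np.sum(xi[i - 1, lo:hi] ** P)
            lam_prev[lo:hi] += w[j] * (P / Q) * xi[i - 1, lo:hi] ** (P - 1) * S ** (1.0 / Q - 1.0)
        lam = lam_prev
    dfdx[0] = lam                                    # base layer: ds_1/dx_1 = I
    return dfdx
```

For a smooth test response such as $f_p = \sum \xi^2$ (so $\partial f_p/\partial \boldsymbol{\xi} = 2\boldsymbol{\xi}$), the adjoint gradient agrees with central finite differences to high precision, confirming the consistency of the sensitivities.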

2.3 Integration in the TO process

The AM filter can be easily integrated in a conventional TO process, similar to other filters commonly used in density-based approaches (e.g. Bourdin (2001), Sigmund (2007)). A sample Matlab™ implementation, AMfilter.m, is digitally provided with this paper. It has been prepared for integration with the well-known 88-line topology optimization code by Andreassen et al. (2011). The implementation is adapted to the element numbering convention used in the 88-line code, and differs from the paper in the orientation of the vertical axis. Instructions to integrate AMfilter.m in the 88-line code are given in the Appendix.

By default, the 88-line code uses the optimality criteria (OC) optimizer. This procedure contains an inner loop in which the volume constraint is evaluated repeatedly. This constraint involves the volume of the as-printed design $\boldsymbol{\xi}$, which means that the AM filter is called multiple times within this inner loop, increasing computation time. With other optimizers, e.g. the popular MMA method (Svanberg 1987), this inconvenience does not occur and only a single AM filter evaluation per design iteration is needed. In all cases the sensitivity transformation discussed in the previous section is performed once per iteration.

The proposed AM-fabrication filter can also easily be combined with other filtering techniques. As an illustration, in a setting where density variables are associated with element centroids \(\textbf{r}_{e}\), a blueprint design field \(\tilde{\textbf{x}}\) can be defined by filtering a field of optimizer-controlled element densities \(x_{e}\) as follows (Bruns and Tortorelli 2001; Bourdin 2001; Sigmund 2007):

$$ \tilde{x}_{e} = \frac{\sum_{i} w_{e,i}\, x_{i}}{\sum_{i} w_{e,i}} . $$

Here \(\tilde{x}_{e}\) is the blueprint density at position \(\textbf{r}_{e}\), and \(w_{e,i}\) defines the relative weight of a control variable at spatial position \(\textbf{r}_{i}\), using e.g. a linearly decaying distance function: \(w_{e,i} =\max (0, R - ||\textbf{r}_{i}-\textbf{r}_{e}||)\) with filter radius \(R\). Subsequently, the printed density field \(\boldsymbol{\xi}\) is obtained by applying the AM filter to \(\tilde{\textbf{x}}\). Consistent sensitivities of response quantities \(f_{p}\) follow by the chain rule:

$$ \frac{\partial f_{p}}{\partial \textbf{x}}=\frac{\partial f_{p}}{\partial \tilde{\textbf{x}}}\frac{\partial \tilde{\textbf{x}}}{\partial \textbf{x}}=\frac{\partial f_{p}}{\partial {\boldsymbol{\xi}}}\frac{\partial {\boldsymbol{\xi}}}{\partial \tilde{\textbf{x}}}\frac{\partial \tilde{\textbf{x}}}{\partial \textbf{x}}. $$

To compute these sensitivities, first \(\partial f_{p} / \partial \tilde{\textbf{x}}\) is evaluated using the procedure discussed in Section 2.2. Next the effect of the density filtering is accounted for by multiplication with \(\partial \tilde{\textbf{x}} / \partial \textbf{x}\), which can be found in the mentioned references. Similarly, one can extend the chain of filters by adding e.g. a thresholding filter to obtain black-and-white designs. This is demonstrated in Section 3.4.4, where the AM filter is combined with density filtering and Heaviside projection techniques. Note that the AM filter should typically be the last filter in the chain; otherwise subsequent filtering operations could result in unprintable geometries, which would defeat the purpose of the filter.
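As an illustration of such a filter chain, a density filter of this form can be assembled once as a row-normalized weight matrix, so that both the filtering step and the chain-rule term $\partial\tilde{\mathbf{x}}/\partial\mathbf{x}$ reduce to multiplications with the same matrix. The Python sketch below is ours (a production code would use a sparse matrix) and assumes unit element spacing:

```python
import numpy as np

def density_filter_matrix(ni, nj, R):
    """Assemble linear 'cone' filter weights w_{e,i} = max(0, R - ||r_i - r_e||)
    on an ni-by-nj grid of element centroids (unit spacing), row-normalized so
    that x_tilde = W @ x. Since the map is linear, dx_tilde/dx is simply W."""
    coords = np.array([(i, j) for i in range(ni) for j in range(nj)], dtype=float)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    W = np.maximum(0.0, R - dist)
    return W / W.sum(axis=1, keepdims=True)

# Chain rule for a filtered design: df/dx = (df/dx_tilde) @ W,
# where df/dx_tilde comes from the AM-filter adjoint of Section 2.2.
```

Because the rows are normalized, a uniform field passes through the filter unchanged, and the transpose operation needed for the sensitivity transformation reuses the same matrix.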

3 Numerical examples

To illustrate the functionality and characteristics of the AM filter, we discuss several examples. All cases have been evaluated using the provided sample implementation, and additional examples can readily be created by the interested reader. As default parameters for the smooth min/max operators, $\varepsilon = 10^{-4}$, $P = 40$ and $\xi_0 = 0.5$ are used.

3.1 Pattern tests

Before considering optimization results, we apply the proposed AM filter to a test pattern to highlight its characteristics. Figure 3a depicts the considered blueprint test pattern, together with processed results. It features straight and zig-zag lines of different densities (indicated by grayscale value), a solid portal that is connected to the baseline only on one side, closely spaced horizontal solid lines, and a 'floating' solid block. The ideal AM-filtered result obtained with exact min/max operators is shown in Fig. 3b. The second set of vertical lines is not connected to the baseline, and thus cannot be printed. The same happens to part of the portal, the horizontal lines, and the floating region. The portal clearly shows the 45° maximum overhang angle that is implicitly enforced by the filter. The other features remain identical to the blueprint.

Fig. 3: Blueprint design used in pattern test, and ideal as-printed result. The bottom side is taken as the baseplate

Figure 4 shows the result of applying the proposed AM filter to this pattern. In Fig. 4a the filter is applied with the regular P-norm smax operator (4), i.e. $P = Q$ ($\xi_0 = 0$). In Fig. 4b, we use the modified smax version (5), where the overshoot is set to zero at uniform regions of $\xi_0 = 0.5$. Due to the applied smooth min/max approximations, neither of these filters achieves the ideal performance. However, a clear difference is observed between the two cases. Without additional overshoot suppression, densities can increase relatively quickly in subsequent layers (Fig. 4a). This is undesirable, as it allows features that are not properly supported. The case with $\xi_0 = 0.5$ shows this tendency to a far lesser extent. However, this comes at the cost of reduced propagation of intermediate density features. Thinner and less dense lines gradually fade in the vertical direction, due to the undershoot introduced by $Q < P$. Solid features are, however, represented properly. Since in most applications of topology optimization a solid/void end result is desired, it is of primary importance that the filter fully complies with AM overhang restrictions for solid parts. By increasing blueprint densities in the printing direction, the optimizer still has the option to create regions of constant intermediate density when this proves favourable. The subsequent optimization tests demonstrate how these characteristics translate to the generated designs.

Fig. 4: Pattern test results of the AM filter, using the regular and penalized P-norm formulation

3.2 Problem definition

Two optimization cases will be considered. As our focus is on the performance of the AM filter, we restrict the discussion to linear elastic compliance minimization under a volume constraint, using the SIMP material interpolation (Bendsøe 1989; Rozvany et al. 1992). Examples use Young's modulus $E = 1$, Poisson's ratio $\nu = 0.3$, and an objective scaling factor of 0.01. The first case is a problem designed to challenge the optimizer to create printable structures. It features a square domain loaded in tension at the top edge, as seen in Fig. 5. The second case is the well-known MBB beam problem. For the full problem definition the reader is referred to Sigmund (2001) and Andreassen et al. (2011). The design domain is rectangular with aspect ratio 3:1, as shown in Fig. 5. In both problems, a load of magnitude 1.0 is used, density filtering is applied and the SIMP penalization exponent is fixed at 3.0. Optimizations are initialized by distributing the available material uniformly over the domain.

Fig. 5: Problems considered: tensile case (left) and MBB beam (right). Arrows denote unit point forces

In the tensile problem, the bottom side of the domain is taken as the baseplate. For the MBB beam, all four sides of the design domain will be individually considered as baseplate for the printing process. Sides are denoted by the cardinal directions N, E, S, W. Instead of redefining the problem in different orientations, this baseplate direction is implemented in the AM filter by transforming the blueprint density field.

3.3 Tensile test case

The domain of the tensile test case is discretized with 70×70 finite elements, the maximum volume is set to 30 %, and density filtering is applied with a radius of 3.0 element widths. In this test case, as in Section 3.1, both the regular ($\xi_0 = 0$) and the modified P-norm formulation ($\xi_0 > 0$) have been applied, in order to illustrate the importance of the proposed modification. Figure 6 shows designs obtained using the OC and MMA optimizers, for two different values of $\xi_0$. All cases have been run for 250 iterations with default optimizer settings, with the exception of the OC $\xi_0 = 0.5$ case, which required a reduced move limit of 0.01 instead of 0.05 to converge.

Fig. 6: Designs obtained for the tensile test case. The baseplate location is indicated in blue. $M_{nd}$ values are 30 %, 30 %, 28 % and 28 %, respectively

All designs in Fig. 6 feature a solid bar connecting the load to the support as expected, but have also formed structures that allow manufacturing of this bar at an elevated height above the baseplate. Overhanging sections at the limit angle of 45° can also be recognized. Note that in reality a bar-shaped design could simply be translated to the baseplate to eliminate the need for support material, but the intention of this test problem is to study the ability of the AM filter to create printable solutions. Note also that there are no prescribed displacements applied at the base to provide mechanical support to the component in its load case (see Fig. 5): the creation of the supporting structures is driven purely by printability requirements.

Regarding objective values, the P-norm results ($\xi_0 = 0$) outperform the $\xi_0 = 0.5$ cases. However, both optimizers clearly exploit the possibility of gradually increasing density values in the build direction that was also seen in the pattern tests (Fig. 6a, b). This is not desirable. The modified P-norm, with suppression of overshoot at lower densities ($\xi_0 = 0.5$), results in higher compliances as more material must be invested in supports. The MMA design (Fig. 6d) features a connecting bar at the base, which adds some stiffness in the horizontal direction by connecting the two support struts. In this way, not all support material is ‘wasted’, as it now serves a dual purpose: printability and improving the objective. In the OC case (Fig. 6c), a different local optimum is found, but compliances are not far apart. As the modified P-norm results show superior printability, we will use it exclusively in the following examples.

The addition of a strongly nonlinear AM filter does not make the optimization problem easier to solve, and numerical tests indicate that the P-norm modification further increases its complexity. As a result, the OC optimizer required a tightened move limit and an increased number of iterations in the latter case. MMA proved better suited to handle the increased nonlinearity; moreover, it does not require repeated AM filter evaluations within each iteration as the OC method does. The remaining numerical examples have been optimized using MMA, with default settings.

The designs presented in Fig. 6 all contain a relatively large amount of intermediate density, as also indicated by the measures of nondiscreteness ($M_{nd}$, Sigmund (2007)) in the order of 30 %, reported in the caption. For the $\xi_0 = 0$ cases, the undesired low-density support structures contribute to this value, but the main cause is the applied density filter with a relatively large radius of 3 element widths. This enforces a gradual transition between solid and void regions, leading to intermediate density zones near all structural boundaries.

To illustrate the influence of filter radius on the design and its nondiscreteness, case d) (MMA, $\xi_0 = 0.5$) has also been optimized using a density filter of half the size, and without any density filter. Results are depicted in Fig. 7. The $M_{nd}$ values (18 % and 1.5 %, respectively) show that nondiscreteness is primarily linked to filter size. The smaller filter allows finer detail, resulting in a lower compliance compared to Fig. 6d. While the designs obtained for different filters differ in their specific layout, the general concept with the dual-purpose support structures can be recognized in all three cases.

Fig. 7: Designs obtained for the tensile test case, with a) smaller filter radius ($R = 1.5$ element widths) and b) without filtering. $M_{nd}$ values are 18 % and 1.5 %, respectively

The case without density filtering has resulted in a local optimum, as the compliance of the filtered design is better. Checkerboard patterns also appear (Fig. 7b), as these are properly supported structures according to the AM process definition and are therefore not suppressed by the AM filter. However, apart from their misleading artificial stiffness, checkerboard patterns are also undesired for manufacturability reasons. Hence, use of a density filter in combination with the proposed AM filter is recommended, and all following examples use this combination.

3.4 MBB beam

3.4.1 Influence of part orientation

In this study, the MBB beam domain is discretized using 180×60 finite elements, and prior to the AM filter a density filter with a radius of 2.0 element widths is applied to create a filtered blueprint design. Optimization is performed for 4 different part orientations (N, E, S, W) using 300 MMA iterations. In addition, the problem is solved without the AM filter. Figure 8 shows the resulting designs.
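The density filter used to create the filtered blueprint is the standard distance-weighted averaging filter of density-based TO. A brute-force Python sketch for illustration only, assuming square unit-width elements:

```python
import numpy as np

def density_filter(x, R):
    """Linear density filter: each element becomes a weighted average of
    neighbours within radius R, with weights w = max(0, R - distance)."""
    x = np.asarray(x, dtype=float)
    ny, nx = x.shape
    out = np.zeros_like(x)
    r = int(np.ceil(R))
    for i in range(ny):
        for j in range(nx):
            wsum, vsum = 0.0, 0.0
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < ny and 0 <= jj < nx:
                        w = max(0.0, R - np.hypot(di, dj))
                        wsum += w
                        vsum += w * x[ii, jj]
            out[i, j] = vsum / wsum
    return out
```

A uniform field passes through unchanged, while isolated features are smeared over the filter radius, which produces the intermediate-density boundary zones discussed earlier.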

Fig. 8

Designs obtained for 180×60 discretizations of the MBB beam problem, for 4 baseplate orientations (indicated in blue) and a reference case without AM filter

The different orientations clearly have resulted in different topologies. All designs fully comply with the stated AM fabrication rules, and the influence of the 45° overhang angle limitation can be recognized by various members that reach exactly this angle. Interestingly, patterns similar to those suggested by Leary et al. (2014) for making existing designs printable by adding auxiliary structures emerge directly in these optimized designs, due to the inclusion of the AM filter in the TO process. This is seen in particular in the S and N orientations.

Relative to the reference MBB beam optimized without AM filter, the N/E/S/W designs achieved compliances of 111 %, 101 %, 106 % and 100.0 %, respectively. Clearly the W orientation is preferred in this case, achieving a printable design without loss of performance. The reference design without AM filter cannot be printed in any of the considered orientations. The N case shows the largest compliance increase compared to the reference design, as a relatively large amount of material must be invested to support the long horizontal member at the bottom of the structure. The same holds for the S case, to a lesser extent. The E and W cases only require modest design changes, resulting in marginal compliance increases.

3.4.2 Influence of AM filter parameters

Using the most challenging case of the MBB beam, in the N orientation, we study the influence of parameters P and ε used in the definition of the smooth max and min operators, respectively. The aim is to give an impression of the effect of these parameters, and the sensitivity of designs, for this particular example. Results for various combinations of P and ε are shown in Fig. 9. As these parameters affect the accuracy of the smooth min and max operators, they also affect the overshoot of densities in uniform regions. This can result in printed density values ξ > 1, which affects the obtained compliance. To make an unbiased comparison in this parameter study, all densities have been capped exactly at 1 in a post-processing step, after which the compliance of the design was evaluated. This compliance, normalized by that of the reference design without AM filtering, is reported in Fig. 9 together with the maximum ξ-value found in the optimized design.

Fig. 9

Results of a parameter sweep in P and ε, for the 180×60 MBB case in N orientation, with the baseplate indicated in blue. The numbers below the designs indicate the compliance relative to the reference case, \(M_{nd}\), and the maximum density, respectively

While the designs in Fig. 9 show a wide variety of topologies, a first observation is that the normalized compliances are not far apart: the best and worst values differ by only 7.5 %. The obtained performances are thus not very sensitive to the values of P and ε for this example, and the selection of these values is therefore not critical.

Considering ε, which controls the accuracy of the smooth min operator, the overshoot in density increases for larger ε values, as expected. At the same time, the general trend is that better compliances are obtained for larger ε. We suspect that reducing ε, while improving accuracy, increases the nonlinearity and nonconvexity of the problem, making it harder for the optimizer to converge to superior (local) optima. The value ε = 10⁻⁴ seems a good compromise, with the maximum density overshoot remaining below 0.5 %.

The P-norm parameter P has a more subtle effect on the results. Density overshoot increases slightly with decreasing P, as the P-norm increasingly overestimates printed densities in fully supported regions. This effect is most clearly seen for high ε, e.g. ε = 10⁻³. Although the trend is not universal, the majority of cases indicate a slight improvement of compliance for increasing P. The main conclusion of this parameter study, however, is that there are no clear ‘best’ values and that performance is fairly insensitive to the AM filter parameters.
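The P-norm overestimation in uniform, fully supported regions can be quantified directly: for n equal supporting densities ξ, a plain P-norm evaluates to ξ·n^(1/P), giving a relative overshoot of n^(1/P) - 1 that grows as P decreases. A quick check, assuming a plain P-norm smooth maximum over three supporting elements:

```python
# Relative overshoot of a P-norm 'smooth max' over n equal supports:
# ||(x, ..., x)||_P = x * n**(1/P), so the overshoot factor is n**(1/P) - 1.
for P in (20.0, 40.0, 80.0):
    overshoot = 3.0 ** (1.0 / P) - 1.0
    print(f"P = {P:4.0f}: overshoot = {overshoot * 100:.2f} %")
```

For three supports the overshoot drops from roughly 5.6 % at P = 20 to about 1.4 % at P = 80, consistent with the trend observed in the parameter sweep.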

3.4.3 Influence of mesh refinement

The aim of this section is to illustrate the effect of mesh refinement. The MBB beam in S orientation is considered as a test problem. As the discretization is refined, the filter radius (defined in element widths) is increased by the same factor, such that the absolute size of the filter remains unchanged. The obtained designs are depicted in Fig. 10a–d.

Fig. 10

Results obtained for various discretizations of the MBB beam problem with the baseplate in S orientation (indicated in blue). The filter radius R is given in element units, and compliances C are normalized with respect to the unrestricted 180×60 reference design from Fig. 8e

Different topologies are obtained for each resolution, so in spite of using a density filter, mesh independence is not achieved. This is caused by the mesh-dependent nature of the AM filter, which defines relations between densities in adjacent elements. When the mesh is refined, the AM fabrication conditions are enforced on a finer length scale than on coarser meshes. In terms of performance, no significant changes are observed in this case in final compliance and nondiscreteness. The \(M_{nd}\) values increase slightly with mesh refinement; presumably this is linked to the number of members and boundaries present in the designs.

A limitation of the proposed AM filter can be observed in the 360×120 case (Fig. 10d): at the location indicated by the red arrow, a member is present that is not fully dense, but shows increasing density in the build direction. Mesh refinement has led to an increased number of layers in the build direction, and the numerical errors due to the smooth min/max operators can accumulate to noticeable positive density gradients in the build direction. As this member primarily has a support function for manufacturability, rather than a structural function, achieving full density to maximize its stiffness apparently is not optimal. Instead, the optimizer exploits the fact that the AM filter allows a slight gradual build-up of density at finite P and ε values, and renders this member with a density gradient such that it can contribute, with minimal material usage, to the support of the structurally important horizontal top beam.

This issue could be dealt with in post-processing, as a clear design interpretation is still possible. However, it is preferable to address it within the optimization itself. One solution is to reduce the approximation errors by using stricter settings for P and ε. Another effective approach is to apply continuation to the \(\xi_0\) parameter, which controls the density level at which the approximations result in zero error for regions of uniform density. Starting the optimization with the default value of \(\xi_0 = 0.5\), and increasing it by a factor of 1.15 at iterations 150, 225 and 300, resulted after 500 iterations in the design depicted in Fig. 10e. The problematic member has disappeared, as the stricter AM filter after continuation increasingly suppresses the option for the optimizer to build structures with gradually increasing density. However, the AM filter allows intermediate-density members with constant or decreasing density, as long as they are properly supported. This design also features a layer of intermediate density (indicated by the arrow in Fig. 10e), which is properly supported. In addition, all boundaries not affected by the AM filter show intermediate densities due to density filtering. An approach to enforce fully dense results is discussed in the next section.

3.4.4 Combination with Heaviside projection

As seen in the previous examples, areas with intermediate densities may occur in obtained designs, as long as they are supported according to the assumed AM fabrication rules. However, in most cases a crisp black/white result is preferred, ready for printing without additional interpretation. To achieve this, the proposed AM filter can be combined with existing measures for suppression of intermediate densities. To demonstrate this, here we apply a volume-preserving Heaviside projection scheme (Xu et al. 2010) to the MBB problem on a mesh of 360×120 elements, again with a filter radius of 4 element widths. The \(\tanh \)-based formulation proposed by Wang et al. (2011) is applied, defined as:

$$ x_{i} = \frac{\tanh(\beta\eta)+\tanh(\beta(\tilde{x}_{i}-\eta))}{\tanh(\beta\eta)+\tanh(\beta(1-\eta))} . $$

Here \(\tilde{x}_{i}\) denotes a density variable obtained after applying density filtering to the design field, and \(x_{i}\) is the blueprint density.
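The projection above can be transcribed directly; the volume-preserving adjustment of η is sketched here as a simple bisection on the volume fraction, which is an illustrative assumption rather than the paper's exact procedure:

```python
import numpy as np

def heaviside_project(x_tilde, beta, eta):
    """tanh-based Heaviside projection (Wang et al. 2011): a smoothed step
    centered at eta with sharpness beta, mapping [0, 1] onto [0, 1]."""
    return (np.tanh(beta * eta) + np.tanh(beta * (x_tilde - eta))) / \
           (np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta)))

def eta_for_volume(x_tilde, beta, vol, tol=1e-6):
    """Bisect eta so that the projected field retains the target volume
    fraction 'vol' (the projected mean decreases as eta increases)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        eta = 0.5 * (lo + hi)
        if heaviside_project(x_tilde, beta, eta).mean() > vol:
            lo = eta
        else:
            hi = eta
    return 0.5 * (lo + hi)
```

With η = 0.5 the projection leaves 0.5 fixed and pushes densities on either side toward 0 or 1, increasingly sharply as β grows under continuation.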

Continuation is performed by doubling the β parameter every 125 iterations, starting from 2.0. The η parameter, initially 0.5, is simultaneously adjusted to satisfy the volume constraint, making the parameter change less disruptive for the optimizer. In addition, continuation is applied to the AM filter parameter \(\xi_0\) following the scheme introduced in the previous section.
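For concreteness, the two continuation schedules just described can be written down compactly (a direct reading of the stated schedules; the function names are illustrative):

```python
def beta_schedule(it, beta0=2.0, period=125):
    """Heaviside sharpness: doubled every 'period' iterations."""
    return beta0 * 2.0 ** (it // period)

def xi0_schedule(it, xi0=0.5, factor=1.15, milestones=(150, 225, 300)):
    """AM filter reference density: raised by 'factor' at each milestone."""
    for m in milestones:
        if it >= m:
            xi0 *= factor
    return xi0

# Over a 500-iteration run, beta ends at 16 and xi0 at about 0.76:
print(beta_schedule(499), round(xi0_schedule(499), 3))
```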

The results obtained after 500 iterations are shown in Fig. 11. All designs are crisp and feature low \(M_{nd}\) values, which shows that the AM filter can effectively be combined with Heaviside projection. It can be observed that fine features of the design are smaller than the length scale imposed by the filter radius. Neither the AM filter nor the Heaviside projection preserves length scale, as discussed in Section 3.4.3 and in Wang et al. (2011), respectively.

Fig. 11

Designs obtained for 360×120 discretizations of the MBB beam problem, combining the AM filter with Heaviside projection. The baseplate location is indicated in blue

The N/E/S/W designs achieved compliances of 106 %, 102 %, 102 % and 99.9 %, relative to the reference MBB beam optimized without the AM filter (Fig. 11e). As before, the changes in performance caused by enforcing printability are relatively minor. In fact, the W orientation result performs slightly better than the unrestricted case, while the topologies are quite similar. Either the reference case is a local optimum, or the sharp edges caused by the 45° overhang limit imposed by the AM filter provide an advantage over the smoothed Heaviside projection in this case (\(\beta_{final} = 16\)). The lower \(M_{nd}\) value of the W design could be an indication of this.

The layout of the S orientation result (Fig. 11c) shows conceptual similarities to that obtained without Heaviside projection (Fig. 10e), but the geometries and the level of crispness and detail are clearly different. In terms of performance, Heaviside projection resulted in a 3.75 % lower compliance compared to the case only using density filtering. We speculate that this is caused by the fact that Heaviside projection allows for fully solid designs, while density filtering restricts solutions to structures with intermediate density boundaries that are less efficient in minimizing compliance.

4 Discussion

The examples demonstrate that the proposed AM filter is effective at generating designs that can be printed without additional supports. Not only does this save material, printing time and post-processing costs, it also makes modification of optimized designs for printability unnecessary, so that optimal performance is not compromised. It was also shown that in some cases the optimizer can take advantage of the employed smooth approximations by creating support structures with slight density gradients in the build direction. This phenomenon could be counteracted effectively by a \(\xi_0\)-continuation strategy.

There are, however, some limitations that must be mentioned. Firstly, the filter is not process-specific and only provides an approximation of the fabrication envelope of a particular process. The overhang angle is fixed at 45°, although this can be adjusted by changing the element aspect ratios. Secondly, the presented filter is defined for a regular mesh where the part is oriented in a principal direction. This limitation can be overcome by a mapping or transformation of the domain, at the cost of an increase in complexity. Thirdly, this AM filter primarily targets a particular geometrical limitation of many AM processes: the critical overhang angle. This does not exclude other undesired geometrical features, such as enclosed cavities, from appearing in the solution. Possibly the present AM filter can be combined with approaches to suppress cavities, e.g. Liu et al. (2015). Other important aspects such as part deformation, overheating during processing and residual stresses are not included and require far more sophisticated and computationally demanding process models.

Given these properties, this AM filter is expected to be most useful in an early design stage, where the use of regular meshes is not an important limitation and the chosen process model is sufficient. Nonetheless, decisions made in early stages strongly impact final performance and costs, and therefore including even approximate AM restrictions from the beginning can be crucial.

5 Conclusion

A new filter for density-based topology optimization is proposed that mimics a typical powder bed based Additive Manufacturing (AM) process. In a layerwise process, it transforms a given blueprint design to an ‘as-printed’ design for performance evaluation. In this way, unprintable designs with e.g. infeasible overhanging sections are rigorously banned from the design space. The computational cost of both the filter and its sensitivity analysis is small compared to the finite element analysis, and the filter can be successfully combined with other techniques commonly applied in density-based topology optimization, e.g. Heaviside projection for black/white designs.

The proposed AM filter only targets fundamental geometrical printability aspects, and does not include more sophisticated criteria related to internal stresses, distortion, enclosed cavities, etc. Also, the critical overhang angle and part orientations are directly linked to the applied discretization in the described formulation. These limitations are to be addressed in future investigations.

With this paper, a sample Matlab™ implementation is provided for use with popular topology optimization scripts. An implementation in 3D has also been developed and will be presented elsewhere. Given its ease of use and low computational burden it is expected that the present virtual fabrication filter can be of considerable practical value as a first-order approximation.