1 Introduction

The unprecedented design freedom offered by additive manufacturing (AM) techniques makes them a promising option for fabricating highly complex and performant components. However, AM processes suffer from specific limitations and, if overlooked during the design stage, these limitations can cause various defects. Both these factors, i.e., increased design freedom and the need to address AM limitations during the design stage, make the design process for AM highly challenging. Topology optimization (TO) allows for computational exploration of the design space while considering pre-defined constraints (Bendsøe and Sigmund 2003). Hence, it has been widely recognized as an ideal tool for designing AM parts (Leach and Carmignato 2020). There has been a significant research effort to integrate AM limitations within TO schemes, with a strong emphasis on controlling overhanging features (Liu et al. 2018). However, an important AM limitation, which is not yet explicitly addressed in the context of TO, is that of local overheating or heat accumulation during the manufacturing process. Recent experimental observations and a better understanding of AM process physics reveal that overheating is not uniquely associated with overhangs, and dedicated analysis of the local thermal history is needed to characterize overheating (Adam and Zimmer 2014; Ranjan et al. 2020). The effect is observed in both polymer and metal manufacturing. However, it is especially relevant for metal precision parts, as operating temperatures are higher and overheating adversely impacts the part quality. For this reason, we focus on Laser Powder Bed Fusion (LPBF), the most prevalent metal AM technique, and discuss local overheating in more detail below.

The LPBF process involves selective melting of powder layers using laser beams as a heat source. This means that heat flows from the newly deposited topmost layer toward the baseplate. It is observed that whenever incident thermal energy is not transmitted quickly enough to the baseplate, local overheating or heat accumulation occurs (Sames et al. 2016; Mertens et al. 2014). In the in situ monitoring study conducted by Craeghs et al. (2012), local overheating is characterized by an enlarged melt-pool observed near regions which obstruct heat flow. Overheating leads to defects such as balling and dross formation, which compromise the surface quality of manufactured parts (Craeghs et al. 2010). Moreover, local overheating can adversely affect the micro-structural evolution, which has a significant impact on the resulting physical properties (Leary et al. 2014). Kastsian and Reznik (2017) highlight that local overheating can lead to undesired deformations, which can cause re-coater jamming and, consequently, build failure. Lastly, Parry et al. (2019) reported that local overheating contributes significantly to residual stresses, resulting in part distortions upon removal from the substrate. The issue becomes even more relevant for precision components with tight geometric tolerances (Leach and Carmignato 2020). Hence, considerations should be made for avoiding local overheating at the design and process planning stage.

The factors causing local overheating can be grouped into three broad categories. The first group is associated with the AM process parameters, e.g., scanning strategy, scan velocity, laser power, etc. As the input energy density depends on the process parameters, they have a significant impact on the local thermal history of the part (Thijs et al. 2010). The second group is related to the thermal properties of the material used. For example, a material with high thermal diffusivity facilitates faster heat evacuation than one with lower diffusivity. Finally, the third group is associated with the part design. Geometric features which do not allow sufficiently fast evacuation of heat cause local overheating (Craeghs et al. 2010).

In this research, the main focus is on the aspects directly controlled by the part design, i.e., the relationship between part layout and its thermal behavior during the printing process. In other words, we study the design-related factors that influence local overheating while assuming a constant set of process parameters and material properties.

The most common example of design features which cause local overheating is down-facing or overhanging surfaces. In the LPBF process, a down-facing surface is scanned with loose powder beneath it, instead of solid material. Due to the lower (and non-uniform) conductivity of loose powder as compared to the bulk material, the applied laser energy is less effectively conducted toward the baseplate than in non-overhanging regions, causing local overheating near the melt zone (Craeghs et al. 2010; Mertens et al. 2014). Therefore, design guidelines related to overhang angles have been recommended, i.e., the angle measured between the part surface and the baseplate should not be less than a critical value \(\theta ^{\text {cr}}\), which typically amounts to 40\(^{\circ }\)–50\(^{\circ }\) (Cloots et al. 2013; Wang et al. 2013). However, a number of studies suggest that the thermal behavior of an overhanging feature is not uniquely determined by the overhang angle. As a consequence, geometric overhang control does not necessarily guarantee overheating control. For example, Adam and Zimmer (2014) fabricated a Y-shaped specimen, for which discoloration, which is an indicator of overheating, was observed near the top region of an overhanging design feature. Although the feature had a constant overhang angle, the lower part of the overhang was free from overheating. A similar observation was presented by Patel et al. (2019), showing dross formation even when acute overhangs were avoided. Finally, Ranjan et al. (2020) presented LPBF thermal models and showed that the same degree of overhang can result in different thermal behaviour, depending on the heat evacuation capacity of other features in the vicinity. Hence, the geometrical approach of using a single critical overhang angle throughout the domain can be insufficient for preventing overheating in some regions, while being over-restrictive in others: nearby features can facilitate heat conduction toward the baseplate, in which case a more acute overhang angle can be allowed. For example, it is well known that for overhangs of limited length, more acute overhang angles can be tolerated (Mertens et al. 2014).

In the context of TO, multiple researchers have successfully integrated a geometrical overhang constraint within TO procedures, for example, Gaynor and Guest (2016); Langelaar (2016, 2017); Van de Ven et al. (2018). These TO formulations tackle the issue as a purely geometric problem and prevent overhanging features with an angle less than a prescribed critical value. However, a TO method which could address the issue of overheating by directly taking into account the thermal evolution during the AM process would provide important advantages over existing geometric approaches.

Integration of a detailed AM simulation with TO is challenging as the computational cost associated with detailed AM models is extremely high (see, for example, Denlinger et al. (2014); Keller and Ploshikhin (2014)). Therefore, there has been research interest in developing simplified AM models that capture the essential AM-related aspects and make it possible to address them in a TO framework. For example, Wildman and Gaynor (2017) coupled a simplified thermo-mechanical AM model with density-based TO for reducing deformations. For approximating the thermal history, a constant temperature drop was assumed for each time step, and therefore, the relationship between part layout and its thermal behaviour was not captured. Next, Allaire and Jakabcin (2018) also integrated a thermo-mechanical AM model with the level-set TO method in order to minimize thermal stresses and deformations. However, it was reported that the associated computational cost was very high. More recently, Boissier et al. (2020) coupled a simplified thermal model with a 2D level-set TO where scanning path optimization is performed. However, it is expected that the computational cost of such a model remains high.

Alternatively, there is another category of AM-oriented TO methods where the part design is considered fixed and supports are optimized considering structural and/or thermal aspects (Allaire and Bogosel 2018; Zhou et al. 2019; Kuo et al. 2018). Among these methods, Zhou et al. (2019) is most relevant for our purpose as it integrates a transient thermal AM simulation with density-based TO. As a simplification, a slow laser velocity of 1 mm/s and thick layers of 1 mm were assumed. Still, the computational cost remained high (5 min per iteration for \(10^4\) finite elements in Matlab). Therefore, to the best of our knowledge, a TO method which focuses on local overheating of AM parts and delivers optimized part designs within a practical time frame is still lacking.

In order to address overheating within the context of TO, first an adequate AM process model is required which can quickly identify design features that lead to overheating. In our previous study (Ranjan et al. 2020), a series of simplifications in thermal modeling of the LPBF process were investigated along with their implications in the context of detecting overheating. The most simplified model employs a steady-state thermal response in a local domain close to the heat source. It was demonstrated that this model can accurately capture overheating tendencies while providing very high computational gains. Therefore, in this paper, the computationally inexpensive steady-state process model presented by Ranjan et al. (2020) is coupled with density-based TO. The robust TO method presented by Wang et al. (2011) is used and compliance minimization is considered. Throughout this paper, identified zones of local overheating are referred to as ‘hotspots’ and hence, the simplified thermal model is referred to as the ‘hotspot detection’ method. By including the hotspot information as a constraint, optimized designs can be found with reduced overheating risks.

The remainder of the article is organized as follows. For self-containment, the concept of hotspot detection following Ranjan et al. (2020) is summarized in Sect. 2. Formulation of the novel hotspot constraint and a finite element (FE)-based numerical implementation is presented in Sect. 3. A quantitative relationship between overhang angles and hotspot temperatures is established in Sect. 4, which is used to calibrate the overheating constraint. Problem definition, integration of overheating constraint with topology optimization, and preliminary results are presented in Sect. 5. Further results obtained by investigating the effect of several key parameters are presented in Sect. 6. A comparative study is presented in Sect. 7 where the novel TO method is compared with an existing geometry-based TO approach. The primary aim of this paper is to introduce the novel TO method while thoroughly investigating the behavior of the optimization problem. For this purpose, we choose to discuss the idea in a 2D setting for clarity and perform experiments for characterizing the influence of different parameters. However, the formulation can be directly extended to the 3D setting which is shown by a 3D numerical example presented in Sect. 8. Finally, conclusions and future directions are given in Sect. 9.

2 Simplified AM model and modifications for TO integration

2.1 Hotspot detection

Fig. 1

Detecting heat accumulation using slab-based analysis. a The geometry under consideration, b–e subsections of the geometry with the contour levels of temperature attained with a steady-state thermal analysis. For each slab, a heat flux is applied at its top, while its bottom acts as a heat sink. Part–powder interfaces \(\Gamma \) are insulated and shown in magenta. The maximum temperature for each material point is recorded and shown in f, which is referred to as the ‘hotspot map’ \(T_{\text {HS}}\). (Color figure online)

The 2D geometry shown in Fig. 1a is used to explain the hotspot detection method. It is purposefully designed to include overhanging features along with relatively thin sections, since these features are the most commonly known sources of overheating (Leary et al. 2014; Wang et al. 2013; Craeghs et al. 2010). Note that all overhanging regions have an identical overhang angle (\(\theta =45^{\circ }\)) so that any variation in their thermal response due to the local conductivity of nearby features can be observed. Figure 1b–e represent different stages of the AM process when the part is manufactured with a vertical build direction. It was shown in Ranjan et al. (2020) that a computationally fast slab-based steady-state thermal analysis can capture hotspots, provided two considerations are taken into account: the analysis yields a qualitative temperature field representing overheating risks rather than actual temperatures, and it must be performed on local domains (slabs) instead of the entire part. A brief description of these considerations is given below. For an in-depth discussion along with validation using higher-fidelity AM process models, the reader is referred to Ranjan et al. (2020).

The first consideration associated with the use of steady-state thermal analysis for hotspot detection is that the resulting temperature field no longer represents a quantitative prediction of the actual temperature transients. Instead, it provides a representation of the overheating risks associated with design features (Ranjan et al. 2020). For integration with TO, an overheating constraint needs to be formulated. Later, in Sect. 2.2, a normalization scheme is introduced which facilitates the formulation of the overheating constraint.

The second consideration for using steady-state analysis is that a relevant local computational domain must be considered, instead of the entire part. Steady-state analysis provides information about the overall conductance of the entire domain that is considered. However, heat flow during the AM process is a transient phenomenon where only features in the vicinity of the top layers are relevant for overheating. In order to address this, we consider only a subset of the geometry near the topmost layer of the intermediate build, as shown in Fig. 1b–e. We refer to this subset as a slab, with slab thickness s. These slabs are defined such that subsequent slabs largely overlap, see Fig. 1c–e. The physical significance of slabs and the motivation behind the slab overlap are provided later in Sect. 2.2.

A steady-state thermal analysis is performed on every slab with a heat flux applied at the topmost surface, while the bottom surface acts as a heat sink. These boundary conditions (BC) are inspired by the AM process, where the thermal energy is applied at the topmost layer while the previously deposited layers and the thick baseplate act as a heat sink. Note that the temperature BC for the slab’s bottom surface is a choice made in this study; other options, e.g., flux-based BCs, could also be investigated. Apart from the most significant simplification of using a localized steady-state analysis, several additional simplifications are used. Instead of simulating the actual laser scanning, we assume the entire top layer is simultaneously exposed to the incident heat flux. The interfaces between the solid and the powder, represented by \(\Gamma \) in Fig. 1, are assumed to be thermally insulated as conduction through the powder is neglected. Also, convection and radiation heat losses from the top surface are neglected. Furthermore, we do not consider phase transformations, and material properties are assumed to be temperature-independent. These additional simplifications are commonly used in part-scale modeling of AM processes in order to reduce the computational burden (Zaeh and Branner 2010; Peng et al. 2018; Zeng et al. 2012; Yang et al. 2018). A detailed discussion of the implications associated with these simplifications can be found in Ranjan et al. (2020).

Under these assumptions, the 2D steady-state heat equation for each slab is given as

$$\begin{aligned}\frac{\partial ^2 T}{\partial x^2}+\frac{\partial ^2 T}{\partial y^2}=0, \end{aligned}$$
(1)

while the heat flux, insulated and sink boundary conditions are given as

$$\begin{aligned} -k_0\frac{\partial T}{\partial y}\bigg \vert _{y=s}=q_0, \end{aligned}$$
(2)
$$\begin{aligned} \frac{\partial T}{\partial x} v_x+ \frac{\partial T}{\partial y} v_y =0 \;\;\;\; \text {on} \;\;\; \Gamma , \end{aligned}$$
(3)
$$\begin{aligned} T(x,0)=T_0, \end{aligned}$$
(4)

respectively. Here, \(T(x,y)\) is the temperature field, \(T_0\) is the sink temperature, x and y represent spatial coordinates within the slab with the origin located at the bottom left, \(v_x\) and \(v_y\) are the x and y components of the outward unit normal vectors on \(\Gamma \), and \(k_0\) and \(q_0\) are the thermal conductivity and the input heat flux, respectively. The boundary value problem given by Eqs. (1)–(4) is solved numerically using finite element analysis (FEA) and the temperature field \(T(x,y)\) is obtained for each slab, as shown in Fig. 1b–e. Details on the FEA implementation are given in the next section.

Subsequent slabs may overlap to a large extent. Consequently, every material point is associated with multiple slabs. As a final step, the maximum temperature is obtained for each material point from all slabs it is associated with. This temperature field is referred to as ‘hotspot map’ denoted by \({T}_{\text {HS}}\), and is plotted in Fig. 1f. It can be seen that relatively higher temperatures are found near the thin sections, at the overhanging boundaries. This shows that the simplified model for overheating prediction is in agreement with experimental observations (Adam and Zimmer 2014; Toor 2014; Patel et al. 2019). It is noteworthy that although the considered geometry has a single overhang angle of \(45^\circ \), the thermal response varies based on the local conductivity of the features in the vicinity of the topmost layer of an intermediate build. This demonstrates that a computationally inexpensive thermal model can be used for detecting overheating.
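As an illustration of this max-over-slabs bookkeeping, the following Python/NumPy sketch combines precomputed per-slab nodal temperature fields into a hotspot map. It is not the authors' Matlab implementation; the function name `hotspot_map`, the slab indexing (each slab shifted up by one node row, mirroring the FE scheme of Sect. 3), and the random stand-in slab fields are illustrative assumptions.

```python
import numpy as np

def hotspot_map(slab_temps, n_nodes_y, n_nodes_x, n_s_nodes):
    """Combine per-slab nodal temperature fields into a hotspot map (Fig. 1f).

    slab_temps[J] is a (n_s_nodes, n_nodes_x) array holding the steady-state
    temperatures of slab J, which is assumed to cover node rows
    J .. J + n_s_nodes - 1 of the global grid (row 0 at the bottom).
    """
    T_hs = np.full((n_nodes_y, n_nodes_x), -np.inf)
    for J, T_slab in enumerate(slab_temps):
        rows = slice(J, J + n_s_nodes)
        # every material point keeps the maximum over all slabs containing it
        T_hs[rows, :] = np.maximum(T_hs[rows, :], T_slab)
    return T_hs

# toy usage with random stand-ins for the per-slab steady-state solutions
rng = np.random.default_rng(0)
slabs = [rng.random((4, 10)) for _ in range(7)]        # 7 overlapping slabs, 4 node rows each
print(hotspot_map(slabs, n_nodes_y=10, n_nodes_x=10, n_s_nodes=4).shape)
```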

2.2 Adaptation for TO integration: normalization

The hotspot detection method is based on the physics of the AM process, unlike the widely used purely geometrical overhang constraints. However, as discussed in Sect. 2.1, the predicted temperatures are only a qualitative representation of the overheating risks associated with design features. Therefore, we propose a normalization step which facilitates the formulation of an overheating thermal constraint. For this purpose, the steady-state thermal response of each slab’s geometry is compared with that of a fully solid rectangular slab of the same material and height, subjected to the same boundary conditions. An example of such a slab is shown in Fig. 2a. The solid slab is subjected to a heat flux \(q_0\) at the top, while the bottom acts as a heat sink. The rectangular geometry and the boundary conditions allow for a 1D analysis. Using Fourier’s law of heat conduction, the temperature difference between the top and bottom of this fully solid slab at steady state is \(N_c = q_0 s / k_0\). The normalization is done as \({\hat{T}}=T/N_c\), where \({\hat{T}}\) and \(N_c\) are the normalized temperature and the normalization constant, respectively. Note that a rectangular slab with no void represents the best-case scenario of unobstructed heat flow. This essentially means that, for any given geometry, \({\hat{T}}\) values close to 1 indicate thermal behavior similar to a bulk solid with no void, while higher values indicate overheating with increasing severity. Figure 2b gives the normalized hotspot map \({\hat{T}}_{\text {HS}}\) for the geometry considered in Fig. 1.
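As a concrete illustration of this normalization step, the short sketch below applies \({\hat{T}}=T/N_c\) to a dummy temperature field. The numerical values of \(q_0\), \(k_0\), s and the temperatures are arbitrary assumptions, not the settings used in this paper.

```python
import numpy as np

q0, k0, s = 1.0e5, 20.0, 0.012      # assumed flux [W/m^2], conductivity [W/(m K)], slab thickness [m]
N_c = q0 * s / k0                   # temperature drop across a fully solid slab (Fourier's law)

T = np.array([[60.0, 75.0],         # dummy slab temperatures, measured relative to the sink
              [120.0, 300.0]])
T_hat = T / N_c                     # ~1: bulk-like heat evacuation, >>1: increasing overheating risk
print(N_c, T_hat)
```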

Fig. 2

a A fully solid rectangular slab with the same thermal conductivity and boundary conditions as the slabs during the hotspot detection. b The normalized hotspot map for the geometry shown in Fig. 1. Magenta boundaries indicate fully insulated part-powder interface. (Color figure online)

Apart from facilitating TO integration, there is another benefit associated with the proposed normalization step: the normalized hotspot map becomes invariant with respect to \(q_0\), \(k_0\) and \(T_0\). However, the value of the slab thickness \(s\) does influence the hotspot temperatures. The selected slab thickness dictates which subset of the geometry is included within the slab, and this has a direct influence on the normalized temperatures. It essentially represents the thermal interaction length \(\kappa \) up to which features significantly influence the heat flow at the newly deposited layer. In the case of LPBF, this distance \(\kappa \) is significantly larger than the thickness of a single layer, and hence, subsequent slabs are defined with large overlaps. In Ranjan et al. (2020), the appropriate slab thickness is taken to be the characteristic lengthFootnote 1, given as \(\sqrt{\alpha t_h}\), where \(\alpha \) is the thermal diffusivity and \(t_h\) is the heating time for the layer. The heating time further depends on process conditions, e.g., layer area, number of lasers, number of parts and their relative position in the build chamber, etc. In the context of TO, the design is not known beforehand and hence, it is difficult to pre-determine the heating time. Thus, in this paper, we consider the slab thickness as a constant parameter for simplicity and discuss the implications of this choice in detail in Sect. 6.1.2.
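For illustration, the characteristic-length estimate can be evaluated as follows. The diffusivity and heating time below are assumed, order-of-magnitude values (not taken from the paper), chosen such that the result matches the default slab thickness of 12 mm used later.

```python
from math import sqrt

# Characteristic-length estimate kappa = sqrt(alpha * t_h) of the slab thickness.
alpha = 4.0e-6   # thermal diffusivity [m^2/s]; assumed, order of magnitude for steel-type alloys
t_h   = 36.0     # layer heating time [s]; assumed, depends on layer area, lasers, parts, ...
kappa = sqrt(alpha * t_h)
print(f"thermal interaction length ~ {kappa * 1e3:.0f} mm")   # 12 mm for these assumed values
```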

3 Numerical implementation

In this section, a 2D finite element (FE) implementation of the hotspot detection method is presented, which is subsequently used for formulating the hotspot constraint. The presented finite element implementation can be applied to any geometry; here, we use the geometry already considered in Fig. 1 to explain the numerical implementation. As a first step, an embedding domain is discretized with a structured mesh of bi-linear four-node square elements, as shown in Fig. 3. Next, an extra slab is added beneath the part to emulate the thermal influence of the baseplate (shown in red in Fig. 3). We choose to keep the baseplate only as thick as a slab for simplicity. The number of elements used to discretize the part in the x-direction and y-direction is represented by \(n_x\) and \(n_y\), respectively. The number of elements required to discretize a slab in the y-direction is \(n_s\). In Fig. 3, \(n_s\) is arbitrarily chosen as 2 as an example. A slab numbering scheme, starting from the baseplate slab, is introduced in Fig. 3. The second slab is defined by shifting the first slab by one element in the build direction (indicated by the y-axis). The process continues until the topmost, i.e., the \(m{\text{th}}\) slab, where \(m=n_y+1\). It is evident from the choice of boundary conditions that, for the steady-state thermal analysis, the maximum temperature is always attained at the topmost nodes of any given slab. Consequently, this procedure of defining subsequent slabs ensures the detection of any hotspots for the given mesh resolution, since every node in the part geometry is at the top of a slab.

Fig. 3

Discretization of an example geometry along with the baseplate into finite elements. A set of overlapping slabs is defined such that the slab numbering starts from the bottom baseplate. Each slab is subjected to a thermal loading; for example, the loading for the topmost slab is indicated by vertical arrows. (Color figure online)

We aim to integrate the hotspot detection method with a density-based TO approach (Bendsøe and Sigmund 2003). A density variable \({\tilde{\rho }}\) ranging between 0 and 1 is defined for each element in order to describe the layout of a design. As per the AM process, heat should only be applied to the top surface where material is present. Therefore, following the classical approach, we use a SIMP-inspired relationship (Bendsøe and Sigmund 2003) for scaling the elemental conductivity and heat flux with the density as

$$\begin{aligned} k_e=k_{\text {min}}+(k_0-k_{\text {min}}){\tilde{\rho }}_e^{r}, \end{aligned}$$
(5)

and

$$\begin{aligned} q_e=q_0{\tilde{\rho }}_e^{r}, \end{aligned}$$
(6)

respectively. Here, \(k_e\) and \(q_e\) are the thermal conductivity and heat flux for element e, respectively. The exponent \(r\) penalizes intermediate densities and \(k_{\text {min}}\) is introduced to avoid singularityFootnote 2. Using the elemental values for conductivity and surface flux, the global conductivity matrix \(\mathbf{G} \) and thermal load vector \(\mathbf{Q }\) are assembled for each slab, following standard FE procedures (Cook et al. 2001). Next, the set of discretized steady-state heat equations given by

$$\begin{aligned} \mathbf{G ^{(J)}}\mathbf{T ^{(J)}}=\mathbf{Q ^{(J)}} \quad \forall \quad J \in [1,m] \end{aligned}$$
(7)

is numerically solved and nodal temperatures \(\mathbf{T} ^{(J)}\) are obtained for Slab J. Next, slab temperatures are normalized with \(N_c\) i.e.,

$$\begin{aligned} \hat{\mathbf {T}}^{(J)}=\frac{\mathbf {T}^{(J)}}{N_c}. \end{aligned}$$
(8)

It is noteworthy that Eq. (7) can be solved independently for each slab \(J=1\ldots m\) and hence, temperature fields associated with all the slabs can be computed in parallel.
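To make the above procedure concrete, a minimal, self-contained Python/NumPy sketch of the per-slab assembly and solve of Eqs. (5)–(8) is given below. It is not the authors' Matlab implementation: the function `solve_slab`, the unit element size, the toy density field, all parameter values, and the omission of the baseplate slab of Fig. 3 are simplifying assumptions made here for brevity.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Conductivity matrix of a unit, bilinear, four-node square element
# (counter-clockwise node ordering); scaled by the element conductivity k_e.
KE = np.array([[ 4, -1, -2, -1],
               [-1,  4, -1, -2],
               [-2, -1,  4, -1],
               [-1, -2, -1,  4]]) / 6.0

def solve_slab(rho_slab, k0=1.0, kmin=1e-4, q0=1.0, r=3):
    """Steady-state thermal solve of one slab, cf. Eqs. (5)-(8).

    rho_slab : (n_s, nelx) array of filtered densities, row 0 at the slab bottom.
    Returns the normalized nodal temperatures of the slab's top row.
    """
    n_s, nelx = rho_slab.shape
    nn = (n_s + 1) * (nelx + 1)                       # number of nodes
    nid = lambda i, j: i * (nelx + 1) + j             # node index, i = node row from bottom

    rows, cols, vals = [], [], []
    Q = np.zeros(nn)
    for i in range(n_s):
        for j in range(nelx):
            ke = kmin + (k0 - kmin) * rho_slab[i, j] ** r              # Eq. (5)
            dofs = [nid(i, j), nid(i, j + 1), nid(i + 1, j + 1), nid(i + 1, j)]
            for a in range(4):
                for b in range(4):
                    rows.append(dofs[a]); cols.append(dofs[b]); vals.append(ke * KE[a, b])
            if i == n_s - 1:                                           # top-row element: apply flux
                qe = q0 * rho_slab[i, j] ** r                          # Eq. (6)
                Q[dofs[2]] += 0.5 * qe                                 # consistent nodal loads on the
                Q[dofs[3]] += 0.5 * qe                                 # two top nodes (unit elements)
    G = sp.coo_matrix((vals, (rows, cols)), shape=(nn, nn)).tocsr()

    free = np.arange(nelx + 1, nn)                    # bottom node row is the heat sink (T = 0)
    T = np.zeros(nn)
    T[free] = spla.spsolve(G[free, :][:, free].tocsc(), Q[free])

    N_c = q0 * n_s / k0                               # solid-slab temperature drop (Sect. 2.2)
    return T[nid(n_s, 0):nid(n_s, nelx) + 1] / N_c    # normalized top-row temperatures, Eq. (8)

# The slabs are independent and could be solved in parallel; a plain loop is used here.
# Toy density field: a solid base, a column, and an overhanging arm resting on it.
rho = np.zeros((10, 20))
rho[:3, :] = 1.0; rho[3:, 10:14] = 1.0; rho[7:, 4:14] = 1.0
n_s = 4
T_hat_domain = [solve_slab(rho[J:J + n_s, :]) for J in range(rho.shape[0] - n_s + 1)]
print(np.max(np.vstack(T_hat_domain)))                # largest normalized hotspot temperature
```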

Recall that due to the considered boundary conditions and steady-state analysis, the maximum temperature is attained only at the topmost nodes of any given slab. Therefore, as the next step, the normalized temperatures of these nodes are collected in an array \(\hat{\mathbf{T }}^{\Omega }\), where \(\Omega \) represents the design domain. Note that the array \(\hat{\mathbf{T }}^{\Omega }\) essentially represents the hotspot map information. Finally, if the maximum temperature in the hotspot map is less than a critical value, i.e., \(\text {max}(\hat{\mathbf{T }}^{\Omega }) \le T^{\text {cr}}\), the part layout is considered not prone to overheating during AM. The determination of \(T^{\text {cr}}\) is discussed in the next section.

The max operator is non-differentiable, whereas a smooth operation is required for calculating the sensitivities needed in TO. Therefore, a P-mean aggregation scheme is used over \(\hat{\mathbf{T }}^{\Omega }\) for specifying the constraint as

$$\begin{aligned} f=\frac{1}{{T^{\text {cr}}}}\Bigg [\frac{1}{n}\sum _{i=1}^{n}{({\hat{T}}^{\Omega }_i)^P}\Bigg ]^\frac{1}{P}-1 \le 0. \end{aligned}$$
(9)

Here, \({\hat{T}}^{\Omega }_i\) is the \(i{\text {th}}\) member of the array \(\hat{\mathbf{T }}^{\Omega }\), P is the exponent used for defining the P-mean, and n is the total number of nodes.Footnote 3
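A direct transcription of Eq. (9) could look as follows; this is an illustrative sketch, and the function name and example values, including the assumed \(T^{\text {cr}}=2.5\), are not taken from the paper. The example also shows how the P-mean underestimates the true maximum, which motivates the adaptive scaling discussed later in Sect. 5.1.

```python
import numpy as np

def hotspot_constraint(T_hat_domain, T_cr, P=15):
    """P-mean aggregated hotspot constraint of Eq. (9); f <= 0 means the constraint is met."""
    T = np.asarray(T_hat_domain, dtype=float).ravel()
    p_mean = np.mean(T ** P) ** (1.0 / P)
    return p_mean / T_cr - 1.0

# Example with an assumed critical value T_cr = 2.5: the true maximum (2.6) exceeds T_cr,
# yet f < 0 because the P-mean underestimates the maximum (cf. the scaling in Sect. 5.1).
print(hotspot_constraint([1.0, 1.2, 2.6, 1.8], T_cr=2.5))
```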

4 Defining critical temperature using a geometry–temperature relationship

It remains to determine a critical temperature \(T^{\text {cr}}\) for the hotspot constraint. For this purpose, we propose a calibration procedure in which features known to cause unacceptable overheating are first analyzed using the hotspot detection method. Next, the associated hotspot temperatures are used for setting \(T^{\text {cr}}\). Here, we choose to use overhanging geometries as they are the most commonly identified cause of overheating (Mertens et al. 2014). It should be noted that this is not the only option; with the advancing capabilities of capturing in situ experimental data, an empirical calibration could also be performed. Finally, it is important to note that by using overhangs for calibration purposes, our aim is not to propose an overhang avoidance scheme. Instead, the method evaluates the thermal behaviour of designs and avoids local overheating, which is not necessarily linked with an overhang. This distinction is further elaborated in Sects. 6 and 7.

Typically, a limiting overhang value \(\theta ^{\text {cr}}\) for an AM system is experimentally determined using benchmark geometries, see, for example, Cloots et al. (2013). Here, an AM system refers to a combination of material and process parameters. This implicitly means that the thermal conditions while fabricating overhangs with \(\theta < \theta ^{\text {cr}}\) can lead to overheating. We use a similar idea for calibrating the hotspot constraint. For this purpose, geometries with overhang angles \(\theta \) ranging from \(30^{\circ }\) to \(60^{\circ }\) in intervals of \(5^{\circ }\) are constructed and subjected to thermal loading, using the slab-based analysis discussed in Sect. 3. Figure 4 shows one such geometry, with a height equal to the slab thickness s and a thickness b. This mimics the situation in which an overhanging geometric feature is encountered within one of the slabs. The aspect ratio associated with this geometry is defined as \(a=b/s\). The temperature field normalized using \(N_c\) is shown in Fig. 4 and referred to as \({\hat{T}}_C\), where the subscript C denotes its calibration purpose. Note that \({\hat{T}}_C\) differs from a hotspot map \({\hat{T}}_\text {HS}\), as the latter is found by combining \({\hat{T}}_C\) from multiple slabs. The maximum normalized temperature \({\hat{T}}_c^{\text {max}}\) occurs at the top left vertex of the wedge, as shown in Fig. 4. The minimum feature size, typically controlled in TO using filtering techniques, gives a lower bound for the thickness b, while a constant value of the slab thickness s is selected before starting the optimizationFootnote 4. In practice, the minimum feature size is determined based on the resolution of the manufacturing process that is used to realize the TO design.

Fig. 4

Normalized temperature field \({\hat{T}}_C\) obtained by subjecting an overhanging geometry to heat flux on the top surface with the bottom as heat sink. (Color figure online)

Fig. 5

The variation of maximum normalized temperatures \({\hat{T}}_c^{\text {max}}\) with respect to overhang angles for selected values of aspect ratios \(a=b/s\). (Color figure online)

In Fig. 5, \({\hat{T}}_c^{\text {max}}\) is plotted as a function of \(\theta \) for selected aspect ratios a. A first observation is that \({\hat{T}}_c^{\text {max}}\) decreases with increasing overhang angle \(\theta \) for a constant value of a. This signifies higher overheating for more acute overhangs. Next, for a constant \(\theta \), \({\hat{T}}_c^{\text {max}}\) increases with the aspect ratio, from \(a=0.1\) until it saturates near \(a=4\). Note that the slab thickness s remains constant during the optimization, while the thickness b varies between features and over design iterations. This implies that the range of a from 0.1 to 4 corresponds to increasing b. Also note that the width of the top surface, which is subjected to the heat flux, increases with a higher b value. Hence, the increase in \({\hat{T}}_c^{\text {max}}\) with a is an artefact caused by the assumption that an entire layer is exposed to heat simultaneously, whereas a concentrated heat load is used in the real process. The true heat load will depend on the scanning strategy, and the chosen model constitutes a worst-case situation, hence guaranteeing overheating prevention. Therefore, \(T^{\text {cr}}\) is set as the \({\hat{T}}_c^{\text {max}}\) obtained for \(\theta ^{\text {cr}}\) and the minimum aspect ratio \(a=b/s\), defined using the selected minimum feature size and slab thickness.

5 Integration with TO

The 88-line topology optimization Matlab code by Andreassen et al. (2011) has been extended to incorporate hotspot detection. The method of moving asymptotes (MMA) (Svanberg 1987) has been used for optimization, with default MMA parameters for all the results. An investigation into the influence of MMA parameters is considered outside the scope of this study. The problem definition along with the default TO parameters is given in Sect. 5.1. In the remainder, the baseplate is located underneath the domain, defining the print direction, with the exception of Sect. 6.2, where various other printing directions are studied.

5.1 Problem definition

The primary focus of this section is to show the usability of the novel TO method with the hotspot constraint. Consequently, we restrict our discussion to linear elastic compliance minimization with a volume constraint, using the SIMP interpolation scheme (Bendsøe and Sigmund 2003). An additional thermal constraint, described by Eq. (9), is included to suppress design features associated with overheating during the AM process. The complete problem is stated as

$$\begin{aligned}&\min _{\varvec{\rho }}:\ C(\varvec{\rho })=\mathbf{u} ^T\mathbf{K} {} \mathbf{u} , \end{aligned}$$
(10a)
$$\begin{aligned}&\text {subject to} \end{aligned}$$
(10b)
$$\begin{aligned}&\frac{V(\varvec{\rho })}{V_0}-f_o \le 0, \end{aligned}$$
(10c)
$$\begin{aligned}&\frac{1}{{T^{\text {cr}}}}\Bigg [\frac{1}{n}\sum _{i=1}^{n}{({\hat{T}}^{\Omega }_i)^P}\Bigg ]^\frac{1}{P}-1 \le 0, \end{aligned}$$
(10d)
$$\begin{aligned}&\mathbf{G} ^{(J)}\mathbf{T ^{(J)}} =\mathbf{Q ^{(J)}} \ \forall \quad J \in [1,m], \end{aligned}$$
(10e)
$$\begin{aligned}&\mathbf{K} {} \mathbf{u} =\mathbf{f} , \end{aligned}$$
(10f)
$$\begin{aligned}&\mathbf{0} \le \varvec{\rho } \le \mathbf{1} . \end{aligned}$$
(10g)

Here, C is the compliance, \(\mathbf{u} \) and \(\mathbf{f} \) are the arrays containing the global displacements and nodal forces, respectively, \(\mathbf{K} \) is the global stiffness matrix, \(\varvec{\rho }\) is the array of design variables, \(V(\varvec{\rho })\) and \(V_0\) are the total material volume and design domain volume, respectively, and \(f_o\) is the prescribed volume fraction. The meshing scheme as described in Sect. 3 is used. The density filtering scheme described by Bruns and Tortorelli (2001) has been used to impose a length-scale and avoid checkerboarding. It gives the relation between design variables and element densities as

$$\begin{aligned} {\tilde{\rho }}_e=\frac{\sum {w_{\text {e,i}}\rho _i}}{\sum {w_{\text {e,i}}}}, \end{aligned}$$
(11)

where \({\tilde{\rho }}_e\) is the density of element e centered at position \(\mathbf{r} _e\), and \(w_{\text {e,i}}\) is the weight factor associated with design variable \(\rho _i\) at position \(\mathbf{r} _i\). The weight factor is defined using a linearly decaying distance function: \(w_{\text {e,i}} = \text {max}(0, R-||\mathbf{r} _i - \mathbf{r} _e||)\), with filter radius R. The sensitivity derivation for the novel thermal constraint, see Eq. (9) or Eq. (10d), is given in the Appendix.
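For reference, Eq. (11) with the linearly decaying weights can be implemented on a regular grid as sketched below. This is an illustrative Python/NumPy/SciPy reimplementation, not the authors' code, with the filter radius R expressed in element lengths.

```python
import numpy as np
from scipy.ndimage import convolve

def density_filter(rho, R):
    """Linear 'cone' density filter of Eq. (11) on a regular grid of unit-sized elements."""
    r = int(np.ceil(R)) - 1
    dy, dx = np.meshgrid(np.arange(-r, r + 1), np.arange(-r, r + 1), indexing="ij")
    w = np.maximum(0.0, R - np.sqrt(dx ** 2 + dy ** 2))               # linearly decaying weights
    num = convolve(rho, w, mode="constant", cval=0.0)                 # sum_i w_ei * rho_i
    den = convolve(np.ones_like(rho), w, mode="constant", cval=0.0)   # sum_i w_ei
    return num / den

rho = np.random.default_rng(1).random((60, 180))                      # toy design-variable field
print(density_filter(rho, R=2.0).shape)
```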

A cantilever design case is considered here for demonstrating the performance of the proposed hotspot-based TO method. The design domain measuring \(180\ \text {mm} \times 60\ \text {mm}\) is shown in Fig. 6, with a concentrated load acting on the lower right vertex, while the left edge is fixed. The structural problem assumes plane stress conditions for solving the 2D problem. For the thermal analysis, the out-of-plane thickness has no influence on the hotspot map, since the input heat flux is defined per unit area and a layer-by-layer heat deposition is assumed. However, an out-of-plane thickness of 50 mm is assumed for the theoretical calculation of the slab thickness value, as discussed later in Sect. 6.1.2. The optimization problem given by Eq. (10) is initiated with a uniform density of \(\varvec{\rho }=f_o\) and a filter radius \(R=2\) mm is used. Finite elements of \(1\ \text {mm} \times 1\ \text {mm}\) are used. The build orientation is indicated with \(\mathbf {b}\) in Fig. 6 and the default values of the parameters are listed in Table 1.

Table 1 Default parameters
Fig. 6

The cantilever test case with a concentrated load on the lower right vertex and fixed left edge. Build orientation is considered in the direction of \(\mathbf{b} \) which is normal to the baseplate, indicated by the green rectangle underneath the domain. (Color figure online)

As explained in Sect. 3, a P-mean is used for estimating the maximum temperature. A P-mean typically underestimates the true maximum, and the error decreases with increasing P value. However, in our case, a correct estimate of the maximum temperature is important, since an undetected overshoot beyond \(T^{\text {cr}}\) indicates a risk of overheating, which defeats the purpose of the constraint. Hence, the adaptive scheme suggested by Le et al. (2010) is applied, which corrects the maximum found by the P-mean by scaling it with the true maximum. This means that a scaling factor \(\Psi \) is incorporated in the hotspot constraint given by Eq. (9) as

$$\begin{aligned} f=\frac{\Psi }{{T^{\text {cr}}}}\Bigg [\frac{1}{n}\sum _{i=1}^{n}{({\hat{T}}^{\Omega }_i)^P}\Bigg ]^\frac{1}{P}-1 \le 0, \end{aligned}$$
(12)

where \(\Psi \) is defined as the ratio of the true and P-mean maximum from the previous iteration, i.e.,

$$\begin{aligned} \Psi ^{(I)}=\Bigg [\frac{\text {max}(\hat{\mathbf{T }}^{\Omega })}{\Big (\frac{1}{n}\sum \limits _{i=1}^{n}{({\hat{T}}^{\Omega }_i)^P}\Big )^{\frac{1}{P}}}\Bigg ]^{(I-1)}, \end{aligned}$$
(13)

where I represents the iteration number. The scaling factor \(\Psi \) is duly accounted for in the sensitivity calculation. As discussed in Le et al. (2010), this scheme can cause convergence difficulties as \(\Psi \) changes in a discontinuous manner. Hence, the scaling factor \(\Psi \) is adjusted only once every 25 iterations, out of a total of 400 iterations permitted for the optimization. With this continuation scheme, numerical investigation reveals that \(P=15\) is suitable for calculating the P-mean, while still achieving the desired accuracy in predicting the maximum values for the temperature constraint. Note that since the scaling scheme compensates for the prediction error, a P-norm, which overpredicts the true maximum temperature, could also be used instead of the P-mean.
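The sketch below illustrates one way of organizing the adaptive correction of Eqs. (12)–(13); the function names, the random stand-in temperature fields, and the assumed \(T^{\text {cr}}\) are illustrative and not taken from the authors' implementation.

```python
import numpy as np

def p_mean(T, P=15):
    T = np.asarray(T, dtype=float).ravel()
    return np.mean(T ** P) ** (1.0 / P)

def corrected_constraint(T_hat_domain, T_cr, Psi, P=15):
    """Scaled hotspot constraint of Eq. (12)."""
    return Psi * p_mean(T_hat_domain, P) / T_cr - 1.0

# Continuation sketch: Psi is refreshed only every 25 iterations, using the previous design,
# following Eq. (13); the random temperature fields stand in for the per-iteration FE results.
Psi, T_cr, P = 1.0, 2.5, 15
T_hat_prev = None
for it in range(1, 401):
    if it % 25 == 0 and T_hat_prev is not None:
        Psi = np.max(T_hat_prev) / p_mean(T_hat_prev, P)   # Eq. (13), evaluated at iteration I-1
    T_hat = np.abs(np.random.default_rng(it).normal(1.5, 0.4, 500))
    f = corrected_constraint(T_hat, T_cr, Psi, P)
    # ... pass f and its (Psi-scaled) sensitivities to MMA and update the design ...
    T_hat_prev = T_hat
```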

5.2 Topology optimization

The minimum compliance design without the hotspot constraint is shown in Fig. 7a; it is referred to as the reference design, with compliance \(C_\text {ref1}\). The design obtained with the hotspot constraint is shown in Fig. 7b. It is observed that, due to the hotspot constraint, there is a tendency to avoid the long overhangs present in the reference design, which are expected to cause overheating. However, intermediate densities are heavily utilized to artificially satisfy the hotspot constraint, which is met for the design shown. The measure of non-discreteness \(M_\text {nd}\), as introduced by Sigmund (2007a), is used to quantify this effect. A fairly high value of \(M_\text {nd}=27.4\)% is found for the design shown in Fig. 7b, compared to \(M_{\text {nd}}=14.1\)% for the reference design shown in Fig. 7a. The results presented here are for \(T^{\text {cr}}\) calculated using \(\theta ^\text {cr}=45^{\circ }\). It was also observed that the tendency to use intermediate densities becomes more pronounced for cases with a lower \(T^{\text {cr}}\), i.e., a higher \(\theta ^\text {cr}\). This is due to the fact that the constraint becomes stricter and presumably pushes the optimizer toward intermediate densities. Lastly, note that the compliance of the hotspot-constrained design is 1.13 times that of the design without the hotspot constraint. This compromise in compliance performance results from activating the hotspot constraint and thereby reducing the design freedom.

Fig. 7

a TO without the hotspot constraint, b TO with the hotspot constraint, c TO with the hotspot constraint using a higher thermal penalization exponent \(r=9\). The green rectangle at the bottom signifies the baseplate. Default parameters listed in Table 1 are used; any deviations are reported explicitly. (Color figure online)

The default value of \(r=3\) is used for generating the design shown in Fig. 7b. Increasing it to \(r=9\) only marginally improves the discreteness of the result, shown in Fig. 7c, with \(M_\text {nd}=25.1\)%. This is due to the fact that steady-state temperatures are proportional to the ratio of heat flux and conductivity. Recall that the conductivity \(k_e\) and flux \(q_e\) are equally penalized for intermediate densities in Eqs. (5) and (6), respectively. This implies that intermediate densities are not explicitly penalized in the current formulation, as they do not significantly influence the resulting temperatures.
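This cancellation can be verified with a few lines: for a column of uniform density, the steady-state temperature rise scales with \(q_e/k_e\), and with the equal penalization of Eqs. (5)–(6) this ratio never exceeds \(q_0/k_0\). The parameter values below are illustrative.

```python
# Steady-state temperature rise of a column of uniform density scales with q_e / k_e.
k0, kmin, q0 = 1.0, 1e-4, 1.0
for r in (3, 9):
    for rho in (1.0, 0.5, 0.25):
        ke = kmin + (k0 - kmin) * rho ** r     # Eq. (5)
        qe = q0 * rho ** r                     # Eq. (6)
        # The ratio never exceeds q0/k0 = 1, so intermediate densities do not raise the
        # predicted temperatures and are therefore not discouraged by the hotspot constraint.
        print(f"r={r}, rho={rho}: q_e/k_e = {qe / ke:.3f}")
```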

The high utilization of intermediate densities is a serious problem for fabrication. Typically, a thresholding operation is performed to convert a density-based TO result into an STL file for printing. When converted to 0/1 using a threshold, a design with high non-discreteness might result in an STL file which does not meet the hotspot constraint and exhibits overheating, defeating the purpose of the proposed TO method. Adding a Heaviside filter proved ineffective in lowering the non-discreteness. Hence, in order to solve this issue, we use the robust TO formulation, which is discussed in the next section.

5.3 Robust topology optimization

Fig. 8

a Robust TO without the hotspot constraint, b robust TO with the hotspot constraint on the eroded, intermediate, and dilated designs, c robust TO with the hotspot constraint only on the intermediate design. (Color figure online)

In order to prevent the aforementioned problem of intermediate densities, the robust formulation (Wang et al. 2011) is employed. It uses dilated, intermediate, and eroded designs obtained with three projection thresholds \(\eta =0.25\), \(\eta =0.5,\) and \(\eta =0.75\), respectively. The Heaviside thresholding operation is given as

$$\begin{aligned} \tilde{{\tilde{\rho }}}=\frac{\tanh (\beta \eta )+\tanh (\beta ({\tilde{\rho }}-\eta ))}{\tanh (\beta \eta )+\tanh (\beta (1-\eta ))}, \end{aligned}$$
(14)

where \(\tilde{{\tilde{\rho }}}\) is the projected density and \({\tilde{\rho }}\) is the filtered density obtained using Eq. (11). For all the results using robust TO, \(\tilde{{\tilde{\rho }}}\) represents the physical density and \(R=6\) mm is used. \(\beta \) is a parameter which controls the sharpness of the Heaviside projection. In this study, \(\beta \) is initialized as \(\beta =1\) and then doubled every 50 iterations until \(\beta _{\text {max}}=64\) is reached. For details about this method, readers are referred to Wang et al. (2011); Sigmund (2007b).
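The projection of Eq. (14) and the \(\beta \) continuation described above can be sketched as follows; this is an illustrative Python outline, in which the toy density field and the loop structure are assumptions rather than the authors' implementation.

```python
import numpy as np

def heaviside_project(rho_tilde, beta, eta):
    """Smoothed Heaviside projection of Eq. (14)."""
    return (np.tanh(beta * eta) + np.tanh(beta * (rho_tilde - eta))) / \
           (np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta)))

rho_tilde = np.random.default_rng(2).random((60, 180))   # toy filtered density field
beta, beta_max = 1.0, 64.0
for it in range(1, 401):
    eroded       = heaviside_project(rho_tilde, beta, eta=0.75)
    intermediate = heaviside_project(rho_tilde, beta, eta=0.50)
    dilated      = heaviside_project(rho_tilde, beta, eta=0.25)
    # ... evaluate the objective and constraints on the projected fields, as discussed below ...
    if it % 50 == 0:
        beta = min(2.0 * beta, beta_max)                 # continuation: beta doubled every 50 iterations
```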

Typically, the robust optimization problem is formulated as a min-max optimization problem where the objective is calculated for all three projected designs. However, for the case of compliance minimization, Lazarov et al. (2016) showed that it is sufficient to consider only the eroded design, which reduces the computational cost of evaluating the objective. Hence, we use the eroded design for calculating the compliance, while the hotspot constraint, given by Eq. (10d), is initially implemented on all three projected designs. The intermediate design found using the robust TO without the hotspot constraint is shown in Fig. 8a, while that with the hotspot constraint is presented in Fig. 8b. The compliance of the reference design is referred to as \(C_{\text {ref}}\). Once again, we present results for the commonly used \(\theta ^\text {cr}=45^{\circ }\) for the TO with the hotspot constraint. Hotspot maps superimposed on the optimized designs are normalized to a common scale ranging from 0 to the maximum temperature obtained for the reference design shown in Fig. 8a. The long overhang in the top region of the design shown in Fig. 8a is identified as a source of severe overheating. It is observed that, with the hotspot constraint, material is redistributed such that this long overhang is avoided. Also, the robust TO design is almost black and white, with \(M_\text {nd}=0.35\)%. Again, the compliance of the hotspot-constrained intermediate design is 1.26 times that of the design without the hotspot constraint. Note that the design shown in Fig. 8a differs significantly from that shown in Fig. 7a due to the length-scale considerations associated with the robust formulation.

The robust method is generally used to provide robustness against the uncertainties of the manufacturing process, where the part boundaries might shift during fabrication. However, the targeted LPBF process offers high precision, and STL files can generally be printed with high accuracy. Hence, in the remainder, we choose to apply the hotspot constraint only on the intermediate design, which is seen as the final result, while the eroded and dilated designs are used for evaluating the compliance and applying the volume constraint, respectively. This offers another computational gain as the hotspot constraint has to be evaluated only once instead of three times. The result of this lean robust formulation is shown in Fig. 8c, where the topology is very similar to that shown in Fig. 8b. Imposing the hotspot constraint only on the intermediate design allows relatively more design freedom and hence reduces the compromise in performance caused by the hotspot constraint. This is evident from the reduced compliance value (\(=1.2C_{\text {ref}}\)) for the case where the hotspot constraint is imposed only on the intermediate design. This formulation is found to be able to generate crisp designs with the desired overheating control. Hence, it is used for creating all the results presented in subsequent sections.

5.4 Evaluation using high-fidelity transient simulation

In order to further investigate the susceptibility of the optimized designs to overheating, they are subjected to a high-fidelity transient LPBF simulation. The high-fidelity simulation performs a time-integrated FE analysis of the heat equation and is detailed in Ranjan et al. (2020). Consequently, the time evolution of a design’s thermal response is determined for layer-by-layer material deposition with temperature-dependent thermal properties, while convective and radiative thermal losses are also accounted for. The maximum temperature for each FE node is recorded across the entire history of the simulation and used for creating the corresponding hotspot map. The hotspot maps constructed from high-fidelity simulations for the designs obtained without (Fig. 8a) and with (Fig. 8c) the hotspot constraint are shown in Fig. 9a and b, respectively. For a better comparison, a common temperature scale ranging from the sink temperature of 180 \(^\circ \)C to the maximum temperature found within both of the designs is used. Note that no normalization is performed, and the actual maximum temperatures are reported. It is evident that the long overhang in Fig. 9a causes severe overheating, leading to a peak temperature of 6708 \(^\circ \)C. On the other hand, the maximum temperature for the design shown in Fig. 9b remains at a much lower value of 1678 \(^\circ \)C. This high-fidelity transient simulation, which better mimics the LPBF process physics, shows that the design found using hotspot TO is indeed far less susceptible to overheating than that found using TO without any hotspot constraint. It is noteworthy that the evaluation performed here simply compares the thermal behaviour of both designs using a high-fidelity model and should not be seen as a validation of the actual temperatures found by the steady-state model. For the latter, readers are referred to Ranjan et al. (2020), where a quantitative comparison of the simplified model with high-fidelity simulations is presented.

Fig. 9

Hotspot map obtained from high-fidelity LPBF transient simulation for designs found using a robust TO without the hotspot constraint, b robust TO with the hotspot constraint. The high-fidelity transient simulation considers temperature-dependent thermal properties, convection, and radiation and all simulation parameters are taken from Ranjan et al. (2020). (Color figure online)

6 Parameter study

6.1 Influence of hotspot analysis parameters

In this section, we analyze the effect of the various parameters associated with the hotspot analysis on the optimization results. A total of six parameters are introduced in this formulation, i.e., the critical temperature \(T^{\text {cr}}\), slab thickness s, thermal penalization exponent r, input heat flux \(q_0,\) and the thermal conductivities of the solid and void regions, \(k_0\) and \(k_\text {min}\), respectively. Recall that the temperatures are normalized using \({N_c=q_0s/k_0}\), hence the material property \(k_0\) and input flux \(q_0\) have no effect on the optimization process. Also recall that, due to this normalization scheme, temperatures are reported relative to those obtained for a solid slab. Next, it was found that the relative value of \(k_\text {min}\) with respect to \(k_0\) affects the optimization process. An extremely low void conductivity, such as \(k_\text {min}\approx k_0\times 10^{-10}\), leads to very high hotspot temperatures in intermediate slabs where material is disconnected from the bottom heat sink. This causes multiple thermal constraint violations, leading to slow convergence. For this reason, \(k_\text {min}=k_0\times 10^{-4}\) is used for all the examples. This is also consistent with the fact that the surrounding powder has a lower but finite thermal conductivity (Rombouts et al. 2004). For the robust TO, it is found that the thermal penalization exponent r has a negligible effect on the optimized designs. This is because the physical density \(\tilde{{\tilde{\rho }}}\) is driven toward 0/1 as the optimization progresses. The influence of the remaining two parameters is discussed in detail, as they significantly affect the resulting optimized designs.

6.1.1 Influence of critical temperature

Fig. 10

Designs and hotspot fields \({\hat{T}}_\text {HS}\) obtained for the cantilever beam problem with critical temperatures \(T^{\text {cr}}\) calculated using a \(\theta ^{\text {cr}}=30^{\circ }\), b \(\theta ^{\text {cr}}=40^{\circ }\) and c \(\theta ^{\text {cr}}=50^{\circ }\). All designs are obtained after 400 MMA iterations and \(\text {max}({\hat{T}}_\text {HS})\) denotes the maximum hotspot temperature of the corresponding sub-figure. The thermal constraints were met for all hotspot implementations. The relative compliances of the hotspot designs and reference case shown in Fig. 8a are given by \(C/C_{\text {ref}}\). (Color figure online)

Fig. 11

The variation of compliance with respect to the overhang angles used for defining the critical hotspot temperatures. The compliances are normalized against that for the reference design shown in Fig. 8a. (Color figure online)

Recall that in Sect. 4 a numerical calibration step is used for determining \(T^{\text {cr}}\) based on a critical overhang value \(\theta ^{\text {cr}}\) and aspect ratio a, i.e., \(T^{\text {cr}}={\hat{T}}_C(\theta ^{\text {cr}},a)\). Here, a is the ratio of the minimum feature size b to the slab thickness s. A relationship between b, \(\eta \) and R is presented graphically in Qian and Sigmund (2013) which, for our implementation with \(\eta =0.25\) and \(R=6\) mm, leads to \(b=6\) mm. This further gives \(a=b/s=0.5\) for the default value of \(s=12\) mm. This implies that the green curve in Fig. 5 is used for determining the critical hotspot temperatures. The results for \(\theta ^{\text {cr}}=30^\circ ,40^\circ \) and \(50^\circ \) are presented in Fig. 10a–c, respectively. The critical temperatures found using the described calibration process are \({T^{\text {cr}}(30^\circ )=4.1}\), \(T^{\text {cr}}(40^\circ )=2.5,\) and \(T^{\text {cr}}(50^\circ )=1.8\) for the designs shown in Fig. 10a–c, respectively. The temperature constraints are met in all hotspot TO implementations, as the maximum hotspot temperatures remain lower than the respective \(T^{\text {cr}}\). It can be seen that the hotspot occurring due to the long horizontal overhang in the reference case, shown in Fig. 8a, is avoided. Consequently, the maximum hotspot temperatures for the cases shown in Fig. 10 remain much lower than that obtained in the reference case, \(T_{\text {ref}}=39.1\). Also, as \(\theta ^{\text {cr}}\) increases, i.e., \(T^{\text {cr}}\) decreases, different designs are found such that the maximum hotspot temperature is further reduced in accordance with the constraint. A green-colored baseplate is added at the bottom of the designs to indicate the build direction.

Fig. 12

Convergence behaviour for the cantilever case with hotspot TO implementations for \(\theta ^{\text {cr}}=30^\circ \), \(40^\circ, \) and \(50^\circ \). The snapshots of intermediate designs are given at iterations 50, 150, 250, and 400 for \(\theta ^\mathrm{cr}=50^\circ \). (Color figure online)

Fig. 13

Density fields of designs optimized for the cantilever beam problem using a Robust TO without hotspot constraint bd hotspot TO formulation with critical temperatures set using different \(\theta ^{\text {cr}}\). All designs are obtained after 400 MMA iterations. Overhang angles are geometrically determined and printed near the respective overhanging feature. (Color figure online)

The variation of the compliance for different critical overhang values is shown in Fig. 11. It is observed that as \(\theta ^{\text {cr}}\) increases, i.e., \(T^{\text {cr}}\) reduces, the constraint becomes stricter and the design freedom is reduced. Consequently, the compliance of the corresponding designs increases, as more material is dedicated to managing the temperature and less freedom remains to improve the structural performance. This is also highlighted by the hotspot fields presented in Fig. 10. For example, in the strictest case of \(\theta ^{\text {cr}}=50^\circ \), the hotspot temperature remains close to the critical value for most of the features. This shows that the optimizer has to prioritize meeting the hotspot constraint over improving the compliance.

The optimization with hotspot constraint converges relatively smoothly for a problem involving constraint aggregation, as shown in Fig. 12. However, convergence requires more iterations when the constraint becomes more strict. Snapshots of density fields are shown in Fig. 12 for iteration number 50, 100, 150, and 300. Also, there is an observable jump at iterations where \(\beta \) is doubled which disrupts the convergence.

Lastly, in order to examine the influence of the critical temperature on overhanging features, the density fields of the optimized designs are presented in Fig. 13 and the actual overhang anglesFootnote 5 are superimposed on the designs. It can be observed that as \(\theta ^{\text {cr}}\) increases, fewer features with acute overhang angles tend to appear. Also, note that most overhang angles are larger than \(\theta ^{\text {cr}}\), which suggests that hotspot-constrained TO leads toward conservative designs. This is due to the consideration of the worst-case scenario of simultaneous layer heating, which leads to higher temperatures for thicker geometries, as demonstrated in Sect. 4. As a consequence, the optimizer prefers \(\theta >\theta ^{\text {cr}}\) in order to meet the hotspot constraint for features which are thicker than the minimum feature size b. As a downside, this could lead to over-restrictive designs that compromise performance while ensuring overheating avoidance. It is expected that a more detailed AM model can address this issue.

Next, it is also observed that a few overhangs with an overhang angle less than \(\theta ^{\text {cr}}\) are permitted in the final designs. These are marked in magenta (Fig. 13) and referred to as ‘benign’ overhangs. Recall that all presented designs meet the thermal constraint, which implies that for all benign overhangs, the hotspot temperatures do not exceed the respective \(T^{\text {cr}}\). The presence of geometric features which facilitate effective heat flow in the proximity of the benign overhangs is identified as the reason for maintaining acceptable temperatures. This allows the benign overhangs to exist without violating the thermal constraint. Another observation is that even short horizontal overhangs are allowed, as indicated by the red arrows in Fig. 13b–d. It is known that the difficulties associated with manufacturing flat overhangs increase with increasing length (Mertens et al. 2014), and hence, small flat overhangs can be thermally benign. The hotspot-based approach naturally recovers this phenomenon without any explicit geometrical rule, which is one of the advantages of a physics-based manufacturing constraint.

6.1.2 Influence of slab thickness

Fig. 14

Hotspot TO designs obtained using a \(s=6\) mm, b \(s=12\) mm, and c \(s=20\) mm. The magenta box represents the slab used for finding the design. The thermal influence of different neighboring features increases with increasing slab thickness, which can either reduce or increase the design freedom. The former is explained by analyzing the thin funnel-shaped feature marked with a red circle in a, while the latter is explained by analyzing the presence of a hole, marked by the orange star in b and c. The red arrow next to the hole indicates a converging feature leading to overheating, while the green arrow denotes a diverging feature. (Color figure online)

Until now, \(s=12\) mm was used for all presented results. In order to understand the influence of s, results for \(s=6\) mm and 20 mm are presented in Fig. 14a and c, respectively, with the slabs marked as magenta boxes. For comparison, the design for \(s=12\) mm is also shown in Fig. 14b. All designs are crisp and satisfy the respective hotspot constraint. A close comparison reveals that the slab thickness governs how the hotspot constraint influences the design freedom during optimization, as explained in the subsequent paragraphs.

First, recall that \(T^{\text {cr}}\) decreases as the aspect ratio \(a=b/s\) decreases, as shown in Fig. 5. This implies that \(T^{\text {cr}}\) slightly decreases with increasing slab thickness s, making the hotspot constraint stricter and hence reducing the design freedom. However, there is another, more dominant effect that defines the influence of the slab thickness on the hotspot constraint: neighboring features influencing the local overheating. In general, a larger slab thickness is more likely to include nearby geometric features that influence the heat flow at the top of the slab, whereas for small slab thickness values the thermal analysis domain remains small and the effect of neighboring features on each other's thermal behavior diminishes. It is found that the hotspot constraint can either increase or decrease the design freedom with increasing slab thickness, depending on the heat evacuation or obstruction capacity of nearby design features. Examples of both are discussed below.

First, consider a nearby geometric feature that aids heat evacuation. A larger slab is more likely to encapsulate it, reducing the hotspot temperatures at the top of the slab. This results in a less strict hotspot constraint, thereby increasing the design freedom. Conversely, a smaller slab excludes such a feature and thus faces a stricter hotspot constraint. An example of this phenomenon is the small hole marked by the orange stars in Fig. 14b and c. As the entire hole is included in a single slab, the diverging feature near the lower half of the hole (marked by green arrows) helps dissipate the heat that would otherwise accumulate in the converging feature near the upper half (marked by red arrows). In order to verify this, the design optimized with \(s=20\) mm (Fig. 14c) is subjected to a post-optimization hotspot analysis with \(s=6\) mm. The resulting hotspot map is shown in Fig. 15a, and it can be seen that the small hole indeed violates the hotspot constraint, signifying that a similar hole is less likely to appear for the case of \(s=6\) mm.

Next, consider a nearby geometric feature that acts as a thermal bottleneck. For example, the funnel-like shape in Fig. 14a (marked by a red circle) is a thermal bottleneck, yet it does not violate the hotspot constraint with \(s=6\) mm. On the other hand, when this design is subjected to a post-optimization hotspot analysis using \(s=20\) mm, the funnel-like feature violates the hotspot constraint, as shown in Fig. 15b. Hence, in this case, increasing the slab thickness would force the optimizer to avoid such a thin funnel-like feature, even though it might be beneficial for reducing compliance. Consequently, designs with larger slab thicknesses show a tendency toward thicker members, since thin members can cause hotspots. An upper bound on member sizes can be imposed if thicker members are not desired (Lazarov and Wang 2017). This second example shows that the design freedom can also decrease with increasing slab thickness. This varying influence on the design freedom is also responsible for the non-monotonic behavior of the design performances observed in Fig. 14.

These findings suggest that overheating avoidance cannot be guaranteed if the slab thickness is much larger or smaller than the thermal interaction length \(\kappa \) applicable for the given set of process and material parameters. Therefore, an accurate estimate of \(\kappa \) is crucial for obtaining feasible designs. For example, for aluminum parts a larger \(\kappa \) is appropriate than for Ti-6Al-4V parts, as the thermal diffusivity of aluminum is higher. Similarly, a slower laser speed allows longer time spans for thermal interaction, thus calling for a higher value of \(\kappa \). This implies that \(\kappa \) needs to be determined on a case-by-case basis. In this context, the recent studies by Moran et al. (2021) and Roy and Wodo (2020) present methodologies for estimating \(\kappa \) using high-fidelity transient thermal models. Ranjan et al. (2020) used the analytical solution of the 1D heat equation and showed that \(\kappa \) is characterized by \(\sqrt{\alpha t_h}\), where \(\alpha \) is the thermal diffusivity and \(t_h\) is the layer heating time. The heating time is further estimated as \(t_h=A/(hv)\), where A, h, and v are the layer area, laser hatch spacing, and scanning velocity, respectively. Using this, a slab thickness of \(s=12\) mm is estimated assuming relevant process parameters and Ti-6Al-4V material; this value is used as the default in the remainder of this paper.
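As a back-of-the-envelope illustration of this estimate, the short Matlab snippet below evaluates \(s=\sqrt{\alpha t_h}\) with \(t_h=A/(hv)\). The input values are assumed, order-of-magnitude numbers for a Ti-6Al-4V build and are not the calibrated parameters of this study; they merely show that the resulting interaction length is on the order of the 12 mm used here.

```matlab
% Illustrative estimate of the thermal interaction length (slab thickness)
% s = sqrt(alpha*t_h), with t_h = A/(h*v). All values below are assumed,
% order-of-magnitude inputs, not the calibrated parameters of this study.
alpha = 3.0e-6;        % thermal diffusivity of Ti-6Al-4V [m^2/s] (approx.)
A     = 5.0e-3;        % layer cross-sectional area [m^2] (roughly 70 x 70 mm)
h     = 100e-6;        % hatch spacing [m]
v     = 1.0;           % scanning velocity [m/s]

t_h = A/(h*v);         % layer heating time [s]
s   = sqrt(alpha*t_h); % thermal interaction length [m]

fprintf('t_h = %.1f s, s = %.1f mm\n', t_h, 1e3*s);
% For these inputs: t_h = 50 s and s is about 12 mm, comparable to the
% default slab thickness used in this work.
```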

Fig. 15 a Hotspot map for the design optimized using \(s=20\) mm but analyzed using \(s=6\) mm as a post-optimization step; the hotspot temperature at point A above the circular hole violates the hotspot constraint. b Hotspot map for the design optimized using \(s=6\) mm but analyzed using \(s=20\) mm as a post-optimization step; the hotspot temperature at point B inside the funnel shape violates the hotspot constraint. (Color figure online)

6.2 Influence of part orientation

In order to demonstrate the versatility of the method, four different build orientations are considered. These orientations are referred to as South/North/East/West based on the relative position of the baseplate with respect to the part. The results are shown in Fig. 16, where the build directions are also marked. In our implementation, the structural boundary conditions remain the same, whereas the thermal loading direction and boundary conditions change according to the build orientation. The problem definition and all other optimization parameters remain the same as in Sect. 5.3; therefore, the robust TO result shown in Fig. 8a is used as the reference for comparing compliance values. Results are shown after 400 MMA iterations, and the thermal constraint is met for all presented designs.
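To make the orientation handling concrete, the sketch below shows one possible way of selecting the baseplate (prescribed-temperature) boundary according to the build direction on a structured grid, while the structural supports remain untouched. The node-numbering convention and the mapping of orientation labels to edges are illustrative assumptions, not the actual implementation.

```matlab
% Minimal sketch (illustrative): switching the thermal baseplate boundary
% with the build orientation on a structured nelx-by-nely grid. Rows are
% assumed to run from y = ymax (row 1) down to y = 0 (last row).
nelx = 100; nely = 50;
nodeIds = reshape(1:(nely+1)*(nelx+1), nely+1, nelx+1);  % column-wise numbering

orientation = 'North';   % one of 'South','North','East','West'
switch orientation
    case 'South'   % baseplate below the part, build direction +y (assumed mapping)
        basePlate = nodeIds(end, :);   b = [0;  1];
    case 'North'   % baseplate above the part, build direction -y
        basePlate = nodeIds(1, :);     b = [0; -1];
    case 'West'    % baseplate left of the part, build direction +x
        basePlate = nodeIds(:, 1).';   b = [ 1; 0];
    case 'East'    % baseplate right of the part, build direction -x
        basePlate = nodeIds(:, end).'; b = [-1; 0];
end
% The slabs for the localized thermal analyses are then stacked along b,
% whereas the structural supports and the applied load remain unchanged.
```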

Fig. 16 Results for four different build orientations where vector \(\mathbf{b} \) denotes build direction. These cases are named as South/North/East/West based on the relative position of the baseplate (shown in green) with respect to the part. Compliances relative to that of the standard TO design (shown in Fig. 8a) are reported. (Color figure online)

The results in Fig. 16c and d are only marginally different from the standard TO result, which is reflected in \(C/C_{\text {ref}}\) values close to \(100\%\). On the other hand, the designs shown in Fig. 16a and b differ significantly from the reference design. In particular, in the North orientation, a considerable amount of material must be placed near the bottom to avoid the long overhang. This makes the optimization problem rather restrictive, and a high value of \(C/C_{\text {ref}}\) is found.

7 Comparison with geometry-based TO method

Another interesting observation is made by comparing the hotspot TO design with one obtained using a geometry-based AM-TO method. For this purpose, the cantilever problem is optimized using the overhang control method proposed by Langelaar (2017) with the same set of applicable parameters as in Sect. 5.1. This method efficiently prohibits overhanging features with \(\theta <45^{\circ }\) in the optimized design. The obtained self-supporting design is subjected to a post-optimization hotspot analysis with \(s=12\) mm; the design and its hotspot field are shown in Fig. 17a. The same optimization problem is solved using hotspot TO with \(\theta ^{\text {cr}}=45^{\circ }\) and \(s=12\) mm, and the resulting design with its hotspot field is shown in Fig. 17b. Both fields are normalized to a common scale for comparison, using \(\text {max}({\hat{T}}_\text {HS})\) of the hotspot field in Fig. 17a as the normalization factor. It is observed that the design in Fig. 17a contains several funnel-shaped features which are identified as severe hotspots, with \(\text {max}({\hat{T}}_\text {HS})=4.5\), significantly greater than \(T^{\text {cr}}(\theta ^{\text {cr}}=45^{\circ })=2.1\). Note that all features satisfy the geometric overhang design rule, resembling the situations reported in the literature where overheating is observed even when the overhang criterion is followed (Adam and Zimmer 2014; Patel et al. 2019). In contrast, the hotspot-based TO redistributes material in such a way that these hotspots are avoided. Lastly, the compliance of the hotspot TO design is found to be slightly higher than that of the geometry-based TO design.

Fig. 17 Comparison of hotspot fields for designs found by optimizing the cantilever problem using a AM filter-based TO (Langelaar 2017), b hotspot-based robust TO. Discretization and other TO parameters are as mentioned in Sect. 5.1. (Color figure online)

Figure 17 demonstrates that geometry-based TO is insufficient for preventing local overheating, as overhang avoidance does not necessarily ensure overheating avoidance. Moreover, it is demonstrated in Fig. 13 and discussed in Sect. 6.1.2 that, by virtue of neighboring features which facilitate heat evacuation, the hotspot-constrained TO method allows short acute overhangs without violating the hotspot constraint. This is also in line with experience from LPBF practice (Mertens et al. 2014). These advantages establish the superiority of the proposed hotspot avoidance scheme over TO methods that prohibit overhangs on a purely geometric basis.

Recent advancements in LPBF machines allow for in situ control of the input laser energy based on the part geometry, with the aim of reducing the possibility of overheating. However, such control algorithms are presumably based on geometric information, which might not be sufficient to guarantee overheating avoidance. Moreover, such in situ control is currently in the development stage and is not a standard feature of every LPBF machine. Hence, designs that are inherently less prone to overheating remain highly desirable.

8 Extension to 3D

Although the main focus of this paper is a thorough investigation of hotspot-based TO in a 2D setting, the extension to 3D also deserves attention and is, in fact, straightforward. The formulation of the simplified LPBF model remains the same, while the FE implementation is slightly altered for the 3D case. For this purpose, the 3D TO implementation of Liu and Tovar (2014) is used, and the robust formulation of Wang et al. (2011), the simplified LPBF thermal model, and the hotspot constraint are integrated into it.

In general, the critical overhang angle for a given LPBF system is experimentally determined by printing wedge-shaped geometries. These parts are simply extrusions of 2D shapes and provide information about the degree of overheating for the chosen overhang angle; see, for example, Cloots et al. (2013) and Kranz et al. (2015). Using the same logic, the procedure described in Sect. 4 is directly applied to 3D cases, where a critical temperature \(T^{\text {cr}}\) corresponding to a given \(\theta ^{\text {cr}}\) is determined.

For demonstration purposes, a 3D cantilever beam problem is considered, as shown in Fig. 18a and b, where the surface shaded in green is fixed, while a uniformly distributed load is applied on the bottom right edge. The problem is solved using both standard and hotspot TO. The domain is discretized using isoparametric cubic (side = 1 mm) 8-noded finite elements with trilinear shape functions, resulting in 100, 50, and 50 elements in the x, y, and z directions, respectively.
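As a quick check on the problem size implied by this mesh, the snippet below counts the nodes and the mechanical degrees of freedom, assuming three displacement degrees of freedom per node; the result is consistent with the approximately 0.8 million degrees of freedom mentioned below.

```matlab
% Node and dof count for the 100 x 50 x 50 hexahedral mesh, assuming
% trilinear elements and three displacement dofs per node.
nelx = 100; nely = 50; nelz = 50;
nNodes = (nelx+1)*(nely+1)*(nelz+1);   % = 262,701 nodes
nDofs  = 3*nNodes;                     % = 788,103, i.e., ~0.8 million dofs
fprintf('%d nodes, %d structural dofs\n', nNodes, nDofs);
```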

The optimized designs found using standard TO and hotspot TO are presented in Fig. 18a and b, respectively. Both designs are obtained after 150 MMA iterations using a volume constraint of \(f_o=0.3\), while the latter additionally accounts for \(T^{\text {cr}}(45^{\circ })=2.1\) with a slab thickness of \(s = 8\) mm. We note in passing that the slab thickness is again calculated in accordance with the characteristic length \(\sqrt{\alpha t_h}\), where the heating time \(t_h\) is now computed using the layer area of the design domain. Similar to the observations in Sect. 7, some compliance is sacrificed to ensure manufacturability. To compare the thermal performance, the hotspot fields associated with both designs are presented in Fig. 18c and d, which show that the long, almost horizontal overhang in the standard TO design leads to severe overheating with \(\text {max}({\hat{T}}_\text {HS})=16.8\). The section view in Fig. 18e reveals the location of the maximum temperature. In contrast, the design obtained from hotspot-based TO maintains much lower temperatures everywhere, in accordance with the hotspot constraint. Figure 18f depicts this hotspot field again, using its full range of temperature values to clearly show the \({\hat{T}}_{\text {HS}}\) distribution.

Regarding computational times, the scalability of the simplified LPBF model has already been demonstrated in our previous work (Ranjan et al. 2020), where the wall-clock time for a full-size 3D part with 2.2 million nodes was reported to be on the order of a few minutes. For the 3D TO example presented in this paper, with approximately 0.8 million degrees of freedom, each TO iteration takes approximately 3.5 min on an HPC cluster. The implementation is written in Matlab R2020b and has not been heavily optimized or parallelized. Nevertheless, this shows that the inexpensive steady-state analysis keeps the 3D TO problem computationally tractable.

Fig. 18 Result of a cantilever problem obtained using a standard TO and b hotspot-based TO implemented in 3D. Both problems are solved for a volume constraint of \(f_o=0.3\) and results are reported after 150 MMA iterations. The hotspot-based TO is carried out using \(T^{\text {cr}}(45^{\circ })=2.1\) and a slab thickness of \(s = 8\) mm. The hotspot fields are presented for c the standard TO design and d the hotspot TO design. A section view at \(y=25\) mm, presented in e, reveals the location of the maximum temperature. As c–e use a common temperature scale, the temperature distribution associated with the hotspot TO design is not clearly visible in d; for this purpose, the hotspot field of d is presented again in f with the scale ranging between 0 and \(\text {max}({\hat{T}}_\text {HS})\). (Color figure online)

9 Conclusions

This paper presents a novel TO scheme which addresses the issue of local overheating in AM parts. A computationally inexpensive AM thermal model that uses localized steady-state analyses for detecting hotspots is adopted from the literature. It is demonstrated that this simplified modeling approach captures the influence of the local geometry on heat evacuation during the AM process. The model is then integrated with density-based TO, and a thermal constraint is formulated using temperatures relative to those of a fully solid slab. It is shown through several numerical examples that the proposed method, combined with a robust TO formulation, delivers designs that outperform standard TO designs when analyzed for local overheating behavior during fabrication. The use of steady-state analysis offers a significant computational gain, which makes it possible to find optimal topologies within practical time frames.

It is observed that geometry-based design rules do not ensure overheating avoidance. Moreover, the proposed method allows localized, benign horizontal and acute overhangs in optimized designs, enhancing the design freedom. As physics-based approaches capture the relevant local conditions in a more realistic manner, their integration with TO offers promising advantages. However, the model employed here involves several simplifications and remains an approximate representation of the real process. Efficient integration of TO with more detailed models, e.g., transient thermal simulations that account for laser movement, remains a challenge for future research, with the main difficulty being the associated high computational cost. Another important aspect is the development of a framework for estimating the slab thickness for a given set of material and process parameters, instead of relying on (empirical) calibration.

A major AM issue is that of residual stresses and deformations which develop during part fabrication. Since there is a strong relationship between the process thermal history and the resulting mechanical behavior, it is foreseen that hotspot maps can also be used to identify features that are problematic from the point of view of deformations. However, this idea needs thorough investigation and is seen as an avenue for future research. Lastly, it was shown here that the extension of the hotspot constraint to a 3D setting is straightforward. This is also exemplified by Sinico et al. (2019), where the method was applied to the TO of an industrial injection mold. Experimental validation of 3D designs using an optical tomography-based in situ monitoring technique is currently under investigation.