1 Introduction

Laser powder bed fusion (L-PBF), also referred to as Selective Laser Melting (SLM), is one of the most promising additive manufacturing (AM) technologies for creating metal parts directly from CAD models, especially for the aerospace, automotive and medical industries [1]. Compared to conventional subtractive and formative manufacturing technologies, one of the most significant benefits that L-PBF offers is the flexibility in part design [2]. The layer-wise approach allows the creation of highly complex geometries. This enables a complete change of the way parts are designed, from shape-oriented design to function-oriented design. However, the uncertain quality of the final products is one of the most serious hurdles to the industrial adoption of additive manufacturing [3]. To meet industry standards, the process stability and reproducibility of L-PBF need to be improved. In [4], Rehme used the Ishikawa diagram method to identify 157 factors influencing the process, covering aspects ranging from environmental conditions to material quality. He emphasized the strong impact of the complex part geometry on its temperature evolution during the process. Considering only the melting process, parameters such as laser power, scan speed and laser spot size control the process and influence both the build rate and the stability of the process, and hence the quality of the finished parts [5]. Without knowing the quantitative correlation between these inputs (parameters and boundary conditions) and outputs (quality features such as relative density or surface roughness), full-factorial experimental investigations remain one of the most common approaches for determining optimal process parameter combinations, which is enormously time and resource consuming. To improve the efficiency of process parameter development and the understanding of the underlying physics of the melting process at the micro scale, different methods have been developed to numerically [6,7,8] and analytically [9] investigate the thermo-mechanical process.

Parameter development nowadays generally aims at finding one global parameter combination that yields maximum density and surface quality [10] while simultaneously providing a process that is robust and stable against perturbations. During the entire process, the parameters are kept constant. However, the initial boundary conditions of each exposure differ from those of the previous layer due to the varying heat dissipation conditions throughout the manufacturing process. Constant process parameters therefore neglect the influence of the local geometry on the melt pool dimensions.

Previous studies show that feedback control systems can improve the quality of downfacing surfaces by adjusting the laser power [10,11,12]. Craeghs et al. [10] demonstrated the feasibility of using both a photodiode and a high-speed CMOS camera to obtain the radiation information of the melting zone, which is fed into a closed loop to control the laser power output in real time. Renken et al. [12] investigated the feasibility of such a control system incorporating a thermal camera, a color (RGB) sensor and a topography sensor. The sensor system was able to acquire the process information needed for satisfactory control. For laser cladding, a similar approach has been presented, in which the molten pool temperature was stabilized in combination with a predictive controller [11].

In contrast to feedback control systems, feedforward control uses information generated by process simulations, mathematical models and process knowledge to optimize the process parameters in advance [13,14,15,16]. Illies et al. [13] investigated the ability of a thermal camera to obtain temperature data, which were used to validate their simulation results; the simulation results were then used to develop adaptive process parameters for critical regions. Druzgalski et al. [14] used a computational approach with feature extraction to identify critical scan vectors and applied results from simulation-based feedforward control models to generate optimized scan strategies. Yeung et al. [15] scaled the laser power with a geometric conductance factor, based on the relative ratio of solid to powder material near the melt pool, in order to compensate for the overheating caused by overhang geometries. That way, they avoided the melt pool instabilities created by the less conductive powder [17,18,19,20], which slows down the heat dissipation from the melt pool when nearing overhangs or filigree features.

In this study, a simulation model is set up to predict the temperature development of the part surfaces right before the following layer of powder is deposited, as the thermal condition at this point significantly influences the subsequent laser–material interaction. The simulation results are validated by contactless temperature measurements with an infrared camera in a reference build with constant energy input and two builds with decreasing energy input over the build height. The ability of the simulation to react to energy input changes (caused either by power variation or by scan speed variation) is a key element for simulation-based feedforward control. The experiments demonstrate the suitability of this approach to reduce overheating of parts with overhang structures as well as to increase the energy efficiency and speed of the process.

2 Materials and methods

2.1 Experimental setup

An off-axis contactless monitoring system using an infrared (IR) camera (Equus 81kM, IRCAM, Erlangen, Germany) was installed on a commercial L-PBF machine (SLM 500HL, SLM Solutions, Lübeck, Germany) to measure the surface radiation intensity during the process. The IR camera was mounted on top of the build chamber, and the IR radiation was detected through a protective window as shown in Fig. 1. The protective window is made of zinc sulfide (ZnS) that is transparent for radiation in the wavelength range of 0.4 to 15 µm, according to the manufacturer. The inclination angle between the optical axis of the IR camera and the build platform was set to 60°.

Fig. 1 Schematic illustration of the experimental setup. Right: side-view, left: top-view

The wavelengths detectable by the IR camera range from λIR = 3 to 5 µm, hence not interfering with the laser wavelength of λ = 1070 nm. A lens with a focal length of 50 mm was used. The resulting working distance between the lens and the object plane of the setup was 633.4 mm. Due to the inclination angle, the camera covers a distorted field of view (FOV) on the powder bed. In the center, the FOV has a length of x = 96 mm and a width of y = 88.68 mm as shown in Fig. 1. With a detector resolution of 320 × 256 pixels, a pixel on the center axis represents 300 µm in the x- and 346 µm in the y-direction. The sampling rate used for the investigations was 300 frames per second with an integration time of 0.1 ms.
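As a quick plausibility check of these resolution figures, the following sketch (a hypothetical helper, not part of the original evaluation software) recomputes the nominal pixel footprint at the image center from the stated FOV and detector resolution:

```python
# Minimal plausibility check (hypothetical helper, not part of the original
# evaluation software): nominal pixel footprint at the image center, computed
# from the stated field of view and detector resolution.
fov_x_mm, fov_y_mm = 96.0, 88.68          # FOV at the center of the image
detector_px_x, detector_px_y = 320, 256   # detector resolution

pixel_x_um = fov_x_mm / detector_px_x * 1000.0   # -> 300 µm per pixel in x
pixel_y_um = fov_y_mm / detector_px_y * 1000.0   # -> ~346 µm per pixel in y

print(f"pixel footprint: {pixel_x_um:.0f} µm (x) x {pixel_y_um:.0f} µm (y)")
```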

As material, Ti–6Al–4V Grade 23 (Tekna Advanced Materials Inc., Sherbrooke, Canada) with a particle size distribution of 15–45 µm (according to manufacturer) was used throughout the experiments.

2.2 Calibration of the IR camera

Since temperature cannot be measured directly with the IR camera, it is necessary to convert the camera signal, which correlates with the intensity of the surface radiation, into absolute temperatures. Such a temperature conversion often requires the assumption of a constant emissivity in a certain temperature range and a homogeneous surface finish [21, 22]. By assuming that the workpiece surfaces act as diffuse radiators whose spectral intensities are independent of direction, which is usually the case when observing the radiation of real parts, Lambert’s cosine law can be applied to express the emissive power arriving at a sensor that is not located in the direction normal to the surface [23]. This is expressed as

$$I\left(\beta ,T\right)={I}_{n}\left(T\right) \; \mathrm{cos} \beta ,$$
(1)

where \({I}_{n}\left(T\right)\) is the directional emissive power in the direction normal to the surface (β = 0). Because of this directional emission, the inclination angle of the IR camera was kept at 60° for the calibration as well.

The calibration function was determined experimentally, for both powder and solid material, in a separate setup outside of the L-PBF machine. The samples were heated in a heat treatment oven (N 41/H, Nabertherm GmbH, Lilienthal, Germany), and the oven temperature was recorded by two thermocouples of type K. To keep the setup of the ex-situ calibration experiment as close as possible to the setup inside the L-PBF machine, the same protective window made of zinc sulfide was installed in front of the IR camera lens, hence taking the transmission coefficient into account. As the goal of the experimental investigation is to validate the incremental temperature increase after each layer depending on geometry, the signal of the thermal camera is evaluated only in the cool-down phase here (also cf. Figure 5, measurement point). Thus, the calibration needs to be valid in a temperature range of 200–300 °C, with 200 °C being the build platform preheating temperature.

It is known from the literature [24] that Ti–6Al–4V experiences accelerated oxidation only at temperatures above 480 °C, which results in a change of emissivity that would lead to errors in the thermal measurement. Thus, it can be assumed that the surface does not change in the temperature range used in the investigations presented here, and the application of Lambert’s cosine law stays valid. However, to ensure that the surfaces of the small powder particles do not exhibit locally different behavior during the calibration, which could not be conducted in an inert atmosphere because the IR camera required optical access to the oven, sequential heating up to 480 °C was performed: the specimens were heated from room temperature to a defined temperature T1 = 50 °C, followed by a cooling phase down to room temperature. In the next cycle, the specimens were heated to T2 = 75 °C and cooled down to T1 = 50 °C. The procedure continued with heating up to T3 = 100 °C before cooling down to 75 °C, and so on. IR measurements were taken when the target temperatures were reached. That way, two data points for the radiation intensity, expressed in so-called counts, are available for a specific oven temperature, originating from the same specimen but taken at different temperature histories: one measurement at the maximum temperature of one cycle (heating phase), the other at the minimum temperature of the consecutive cycle (cooling phase). This sequence of heating and cooling phases was repeated until the surface of the specimens started to oxidize, indicated by an emerging difference between the measured emissive powers at a specific temperature due to changes in the surface structure. The temperatures recorded by the thermocouples and the corresponding radiation intensities (black dots) are plotted in Fig. 2. Note that at temperatures of around 300 °C, different output values of the IR sensor are present at the respective temperatures, marking the beginning of the surface oxidation. The conversion curve created in this manner stays valid for the measurements during the real process, even though the part’s surface exceeds the melting temperature before cooling down to the valid range of the calibration, because the process is conducted in an inert gas atmosphere, which prevents oxidation.

Fig. 2 Polynomial regression (red line, serves as conversion curve) of measured temperatures with thermocouples (black dots) up to 305 °C and directional emissive power (counts) during calibration in an oxygen-containing environment. The green line represents the extrapolation of the polynomial regression

The red line in Fig. 2 represents the polynomial regression that was conducted to form a function converting the measured intensity into temperatures. The curve fits very well at lower temperatures up to around 300 °C, with an error of approximately ± 10 °C in the range from 200 to 300 °C, which is the temperature range expected right before recoating. This conversion curve is expected to be valid for the experimental measurements, because the experiments take place in an inert gas atmosphere, which prevents oxidation in the first place. For higher temperatures (above 305 °C), extrapolation can be applied at the cost of some accuracy, as indicated by the green line. The combination of the assumptions made above regarding emissivity, surface finish and observation angle leads to uncertainties in the determination of the absolute temperatures. When the real emissivity is underestimated, the converted temperature will be larger than the real temperature, and vice versa. For this study, this error remains constant as all experiments were conducted under identical boundary conditions.
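A minimal sketch of such a counts-to-temperature conversion is given below. The calibration data arrays are placeholders (the measured calibration points from Fig. 2 are not reproduced here), and the polynomial degree is an assumption:

```python
import numpy as np

# Minimal sketch of the counts-to-temperature conversion described above.
# The calibration arrays below are placeholders, NOT the measured data from
# Fig. 2, and the polynomial degree is an assumption.
counts  = np.array([1200., 1500., 1900., 2400., 3000., 3700.])  # IR sensor counts
temps_c = np.array([  50.,   75.,  100.,  150.,  200.,  305.])  # oven temperature in °C

coeffs = np.polyfit(counts, temps_c, deg=3)   # polynomial regression (red line)
counts_to_temp = np.poly1d(coeffs)

def convert(signal_counts):
    """Convert raw IR counts to temperature in °C.

    Above the calibrated range (~305 °C) the conversion relies on extrapolation
    (green line in Fig. 2) and carries reduced accuracy.
    """
    return counts_to_temp(np.asarray(signal_counts, dtype=float))

print(convert([2000.0, 2600.0]))
```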

Because of the nature of layer-wise manufacturing, the processed layer represents the new boundary condition for the following exposure. Hence, knowing the temperature of the cross-section that the next exposure will scan helps to calculate the right amount of energy input in order to make the process more resource efficient or faster. The temperature of the exposed part surface immediately after a new layer of powder is spread is expected to range from 200 to 300 °C due to the high cooling rate of the melt pool [25]. For this purpose, the calibrated range from room temperature to 300 °C is sufficient. In addition, the simulation calculates the average temperature of a whole layer, which is expected to lie within this temperature range.

2.3 Experimental procedure for the validation of the numerical modeling

For the experimental validation of the numerical model, the same manufacturing process is examined numerically and experimentally. Two combinations of process parameters are used to perform the core and contour exposures, respectively. Table 1 shows the process parameters used for manufacturing the specimens.

Table 1 Process parameters used for manufacturing the specimens

Figure 3 shows a schematic illustration of the scan strategy. The green arrows are scan vectors along which the core area of a part’s cross-section is melted, whereas the red arrows indicate the scan vectors for the contour of the parts. The process parameters that control the energy input are listed in Table 1. The length of the scan vectors lvector for the core exposure is 5 mm, and the "stripes" strategy was selected as the hatching pattern. Using this strategy, the part’s cross-section is divided into sections with a maximum width equal to the vector length. The scan vectors follow a meander pattern perpendicular to the feed direction with a hatch spacing Δy of 0.1 mm. The feed direction of the stripes rotates clockwise by 67° per layer. This rotation is intended to reduce the thermally and mechanically induced residual stresses. The build platform pre-heating was set to 200 °C.

Fig. 3 Schematic illustration of the "stripe" strategy that was used in the experiment. The strategy consists of both core (green) and contour (red) exposures
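As an illustration of the stripe hatching described above, the following simplified sketch generates core scan vectors for one layer of a square cross-section. It is not the machine's path planner: clipping to the actual part contour, the contour exposure and the alternating (meander) direction of consecutive vectors are omitted, and all names are illustrative.

```python
import numpy as np

# Simplified illustration of the "stripes" core hatching described above; this is
# NOT the machine's path planner. Clipping to the actual part contour, the contour
# exposure and the alternating (meander) direction of consecutive vectors are omitted.
HATCH_MM  = 0.1    # hatch spacing Δy
STRIPE_MM = 5.0    # stripe width = maximum scan vector length l_vector
ROT_DEG   = 67.0   # clockwise rotation of the feed direction per layer

def stripe_vectors(layer, half_width_mm=5.0):
    """Return scan-vector endpoints, shape (N, 2, 2), for one layer of a square section."""
    theta = np.deg2rad(-ROT_DEG * layer)               # clockwise rotation
    feed = np.array([np.cos(theta), np.sin(theta)])    # stripe feed direction
    perp = np.array([-feed[1], feed[0]])               # scan vector direction
    # scan vector positions along the feed direction (one vector every 0.1 mm)
    s = np.arange(-half_width_mm, half_width_mm + 1e-9, HATCH_MM)
    # stripe boundaries along the scan-vector direction (5 mm wide sections)
    edges = np.arange(-half_width_mm, half_width_mm, STRIPE_MM)
    vectors = []
    for lo in edges:
        hi = min(lo + STRIPE_MM, half_width_mm)
        for si in s:
            vectors.append((si * feed + lo * perp, si * feed + hi * perp))
    return np.array(vectors)

print(stripe_vectors(layer=0).shape)   # (202, 2, 2) for the 10 x 10 mm bounding box
```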

For each experiment, four geometries with different overhang angles (cf. Fig. 4) were manufactured and the process was recorded with the IR camera. All specimens have a square cross-section of 10 × 10 mm2 in the xy-plane, indicated in Fig. 1, and are 10 mm high.

Fig. 4 Schematic illustration of the specimens’ geometry and the location of the region of interest (ROI)

The overhang angle φ is defined as the angle between the xy-plane (build platform) and the connecting line of both top and bottom square’s centers, as shown in Fig. 4. The overhang angle φ has been chosen as 90°, 60°, 45° and 30°. For each experiment, two specimens of each geometry were manufactured simultaneously with two laser beams. To obtain the radiation intensity, the top cross section of each specimen was selected as the region of interest (ROI). Inside the ROI, the detected intensity was averaged over all pixels.

Figure 5 shows a typical output signal detected with the IR camera for one ROI while processing one layer, together with four snapshots of the recorded footage corresponding to the signal peaks. The plummeting intensity after cooling is due to a new layer of powder being spread within the ROI by the recoater, which allows a clear identification of the processing of one full layer (tlayer). During t1, a 60 µm thick layer of powder is completed for the whole build platform, and the surface keeps cooling down until the exposure begins. The first three consecutive local maxima (a., b. and c.) indicate the exposure of the core with the stripe strategy, and the fourth local maximum (d.) represents the contour exposure. The absolute (average) temperature increases from peak to peak because the heat dissipation capability of the exposed part is smaller than the energy input of the current exposure. After the exposure, cooling takes place and the intensity drops rapidly.

Fig. 5 Output signal of a defined region of interest for one arbitrary layer of specimens with 30° overhang

For the validation of the simulation, a measurement point was defined 0.5 s prior to the recoating (red dot at the end of t2). The duration of t2 varies from specimen to specimen; it was determined experimentally and is considered in the simulations, because the last peak of the measured values fluctuates due to spatter and sensor noise. The cooling time averaged over three consecutive layers was determined and fed to the simulation. On the one hand, the measurement point is chosen because the layer state at the end of the cooling phase represents the initial state of the next layer; as discussed above, the initial boundary conditions significantly influence the following melting process and the process parameters. On the other hand, by averaging the temperature over the whole cross-section, a homogeneous temperature distribution over the cross-section is assumed. Because the scan strategy divides the cross-section into stripes that are irradiated consecutively, this assumption is not strictly true. Nevertheless, during the cooling process the inhomogeneity in temperature induced by the scan strategy diminishes, and the measurement point is chosen to be at the end of the cooling process for maximum agreement between reality and assumption. As a consequence of this choice, the real maximum local temperatures are always slightly higher than the measured average ones.
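A hedged sketch of how such a measurement point could be extracted from the ROI-averaged signal is shown below; the recoating event is detected as the sharp drop in intensity and the sample 0.5 s earlier is taken as the layer value. The function, its threshold parameter and the detection logic are assumptions for illustration, not the evaluation routine actually used:

```python
import numpy as np

# Hedged sketch of extracting the measurement point from the ROI-averaged signal:
# the recoating event is detected as a sharp drop in intensity and the sample
# 0.5 s earlier is taken as the layer value. Names, threshold and detection logic
# are illustrative assumptions, not the evaluation routine used in the paper.
FRAME_RATE_HZ = 300.0   # IR camera sampling rate
LEAD_TIME_S   = 0.5     # measurement point 0.5 s prior to recoating

def layer_measurement_points(roi_counts, drop_threshold):
    """Return sample indices 0.5 s before each recoating-induced signal drop."""
    roi_counts = np.asarray(roi_counts, dtype=float)
    drops = np.where(np.diff(roi_counts) < -drop_threshold)[0]   # sharp plunges
    if drops.size == 0:
        return drops
    # keep only the first sample of each plunge
    drops = drops[np.insert(np.diff(drops) > 1, 0, True)]
    lead = int(LEAD_TIME_S * FRAME_RATE_HZ)
    return np.clip(drops - lead, 0, roi_counts.size - 1)
```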

3 Numerical simulation

The multiphysics finite element code Diablo [26], developed at Lawrence Livermore National Laboratory, was used to perform thermal simulations of the parts being built. Diablo is an implicit, Lagrangian code with distributed memory parallelism. Linear hex elements were used to solve the thermal balance of energy using a Broyden [27] (quasi-Newton) non-linear solution scheme. The balance of energy, solved throughout the domain \(\Omega ,\) is given by

$$\rho {c}_{p}\dot{T}=\nabla \cdot \left(k\nabla T\right)+{r}_{\mathrm{ext}},\mathrm{ in }\Omega ,$$
(2)

where \(\rho\) is the density, \({c}_{p}\) is the constant pressure specific heat capacity, T is temperature, k is the isotropic thermal conductivity, and \({r}_{\mathrm{ext}}\) represents the volumetric heat input from external sources such as the laser (see Eq. 4). Boundary conditions are prescribed over the surfaces \({\Gamma }_{D}\) and \({\Gamma }_{N}\), which represent the portion of the surface with prescribed Dirichlet and Neumann boundary conditions, respectively. They are expressed as

$$T\left({\varvec{x}},t\right)={T}_{0},\mathrm{ on }\;{\varvec{x}}\in {\Gamma }_{D},$$
$$q\left({\varvec{x}},t\right)={\varvec{q}}\cdot {\varvec{n}}=h\left(T-{T}_{\infty }\right)+{\sigma }_{SB}\varepsilon \left({T}^{4}-{T}_{\infty }^{4}\right),\mathrm{ on }\;{\varvec{x}}\in {\Gamma }_{N},$$
(3)

Here \({\Gamma }_{D}\) is defined as the bottom face of the computational baseplate and \({T}_{0}\) is the build platform preheat temperature. \({\Gamma }_{N}\) is defined as the top free surface, where the heat flux \(q\) is determined by heat transfer through convection and radiation to the external environment at temperature \({T}_{\infty }\). This is governed by the convection coefficient \(h\), the emissivity \(\varepsilon\), and the Stefan-Boltzmann constant \({\sigma }_{SB}\). Values for these parameters are listed in Table 2 and an illustration of the location of \({\Gamma }_{D}\) and \({\Gamma }_{N}\) is shown in Fig. 6. Note in Table 2 that the bottom of the build platform is fixed at 160 °C as that appears to be the initial temperature of the surface from the experiments, even though the heater was set to 200 °C. The difference between the heater set point temperature and actual temperature is attributed to heat loss and thermal contact resistance between the heater and build platform.
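As a minimal illustration of the Neumann condition in Eq. 3, the following sketch evaluates the combined convective and radiative heat flux; the function name is an assumption, and the parameter values themselves are those listed in Table 2 and are not repeated here:

```python
# Minimal sketch of the Neumann boundary condition in Eq. 3 (convection plus
# radiation to the environment). The function name is an assumption; h, ε and
# the ambient temperature are those listed in Table 2 and are not repeated here.
SIGMA_SB = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def surface_heat_flux(T, T_inf, h, emissivity):
    """Heat flux q·n [W/m^2] leaving the top free surface Γ_N (temperatures in K)."""
    return h * (T - T_inf) + SIGMA_SB * emissivity * (T**4 - T_inf**4)
```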

Table 2 Boundary condition values

Fig. 6 Cross section of initial mesh for 30° overhang

A voxel mesh was created for each geometry using 0.5 × 0.5 × 0.5 mm3 elements. The voxel meshing was necessary to ensure a constant element height in the build direction so that elements can be activated in a layer-by-layer manner. At least 5 mm of surrounding powder was included, as heat transfer into the powder was found to be an important mechanism in the simulations, especially for the parts with larger overhangs. The part and powder were attached to a 15 mm build platform. An image of the initial mesh generated for the φ = 30° overhang is shown in Fig. 6.

To perform simulations as close to the physical layer size as possible, h-type adaptive mesh refinement (AMR) has been implemented in Diablo. This refinement is programmed to occur in a layer-wise manner, where all refinement/de-refinement occurs at the activation of each new layer. Three levels of isotropic pre-refinement were performed on the part and surrounding powder. As the element size is halved in each direction during each refinement step, the resulting element size is 0.0625 × 0.0625 × 0.0625 mm3. This allows for the activation of 62.5 µm computational layers, nearly identical to the physical layer size of 60 µm. As each new layer of elements is added, the elements lying a pre-set number of layers below the top surface are de-refined. This is done to minimize the total number of degrees of freedom in the problem, as illustrated in Fig. 7.

Fig. 7 Illustration of mesh showing three levels of refinement. Finest elements on the top surface are 62.5 µm in edge length

Even when using AMR, it remains too computationally expensive to simulate each individual laser pass. Thus, we speed up the simulations by applying heat over an entire layer at once. The amount of heat applied is calculated such that the total amount of energy deposited is equal to that supplied in the physical process. The volumetric power input is given by:

$${r}_{\mathrm{ext}}=\frac{{P}_{a}}{A{d}_{a}}=\frac{1}{{t}_{\mathrm{flash}}}\times \frac{\alpha {P}_{p}}{{\Delta y}_{p}{v}_{p}{d}_{p}}.$$
(4)

In this equation, variables with subscript a refer to agglomerated or computational values, and variables with subscript p refer to physical process values. P is power, Δy is hatch spacing, v is scan speed, d is layer thickness, and A refers to the cross-sectional area of the part that is exposed to the laser, which in this case is 100 mm2. The time for which the power is applied, \({t}_{\mathrm{flash}}\), is set to 0.0125 s. The effective absorptivity value, \(\alpha\), is equal to 0.7 as determined from experimental measurements [6].

The choice of the free parameter \({t}_{\mathrm{flash}}\) is worthy of some discussion. For a large \({t}_{\mathrm{flash}}\) (and consequently lower \({P}_{a}\)), the input heat will be conducted away before the powder material reaches the melt temperature. Conversely, for very small values of \({t}_{\mathrm{flash}}\) (and high \({P}_{a}\)), the time step required to resolve the heating becomes prohibitively small due to the steep thermal gradients, and the peak temperature reached by the material can become unphysically high. In Patil et al. [28], a study was performed to identify the effect of the layer heating time (\({t}_{\mathrm{flash}}\)) on peak temperature, temperature after interlayer cooling, and residual stress. It was found that the influence of this parameter on interlayer temperature and residual stress was minimal provided that complete melting occurred. Thus, a value of 0.0125 s is chosen for this work, which is small enough to produce complete melting of the powder layer, but without producing unphysically high peak temperatures and unnecessarily small time steps.

With the value of \({t}_{\mathrm{flash}}\) set, and knowing all the physical process values on the RHS of Eq. 4, it is simple to solve for the agglomerated power, \({P}_{a}\), to be applied in the simulation.
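A minimal sketch of this calculation is given below. The process parameter values in the example call are placeholders standing in for the Table 1 entries, which are not reproduced here; only t_flash, the absorptivity and the exposed area follow from the text above:

```python
# Sketch of Eq. 4: agglomerated volumetric heat input r_ext and agglomerated
# power P_a for one computational layer. The process parameters passed in the
# example call are placeholders, NOT the Table 1 values; only t_flash, the
# absorptivity and the exposed area follow from the text.
ALPHA     = 0.7       # effective absorptivity [6]
T_FLASH_S = 0.0125    # layer heating time t_flash
AREA_MM2  = 100.0     # exposed cross-sectional area A (10 x 10 mm)
D_A_MM    = 0.0625    # computational layer thickness d_a

def agglomerated_heat(P_p_w, dy_p_mm, v_p_mm_s, d_p_mm):
    """Return (r_ext in W/mm^3, P_a in W) from the physical process parameters."""
    r_ext = (ALPHA * P_p_w) / (dy_p_mm * v_p_mm_s * d_p_mm) / T_FLASH_S
    P_a = r_ext * AREA_MM2 * D_A_MM
    return r_ext, P_a

# Example with placeholder parameters (power in W, lengths in mm, speed in mm/s):
print(agglomerated_heat(P_p_w=275.0, dy_p_mm=0.1, v_p_mm_s=1100.0, d_p_mm=0.06))
```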

Layers are initially added as powder material, then converted to bulk Ti–6Al–4V after receiving enough heat to reach the melt temperature and overcome the latent heat of melting. The powder thermal conductivity is obtained from experiments and simulations conducted in Refs. [17, 29] and is given in Table 3. The powder specific heat was determined to be half that of the bulk material due to the reduced density. Bulk Ti–6Al–4V material properties are the same as those provided in [30]. Further details regarding the numerical simulation strategies for AM employed by Diablo are provided in [8, 30].

Table 3 Thermal Conductivity of Ti–6Al–4V powder, extracted from [16, 25]

Detailed information was obtained from the experimental builds regarding the recoat time for each layer and the time between recoating and the beginning of the laser scan. To account for the small difference between the simulated (62.5 µm) and physical (60 µm) layer sizes, a scaling factor was applied to the recoat time and to the time after recoating but before laser scanning. This scaling factor was equal to the ratio of computational to physical layer size, \(\mathrm{factor}=\frac{62.5}{60}=1.042\). The scale factor was needed to keep the total build time consistent between the experiments and simulations. Temperature comparisons with the experiments were performed 0.5 s prior to recoating after every 0.5 mm of build. Note that the simulation predictions were performed prior to viewing the experimental measurements, thus offering a blind prediction of the temperatures using only the build settings as input.
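The time scaling amounts to the following simple sketch (variable names are illustrative):

```python
# Sketch of the layer-size time scaling described above (variable names are
# illustrative): measured inter-layer times are stretched by 62.5/60 so that the
# total build time of the simulation matches the experiment.
LAYER_SIM_UM, LAYER_PHYS_UM = 62.5, 60.0
SCALE = LAYER_SIM_UM / LAYER_PHYS_UM   # = 1.042

def scaled_interlayer_times(recoat_s, dwell_before_scan_s):
    """Scale the measured recoat and pre-scan dwell times for the coarser simulated layers."""
    return recoat_s * SCALE, dwell_before_scan_s * SCALE
```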

4 Experimental validation of the simulation using constant process parameters

All simulations were performed on commodity HPC clusters at Lawrence Livermore National Laboratory. The simulations utilized 72 processors for the 90° parts and up to 144 processors for the 30° parts, with more processors being used for larger overhangs due to the additional volume in the domain. The total run-time ranged from approximately 24 h (90° parts) to 70 h (30° overhang). Figure 8 shows the measured and simulated temperatures obtained during the reference build, where constant process parameters (Table 1) were used for manufacturing the specimens. The temperatures were determined in a discrete manner for every 0.5 mm of build height, approximately 20 s after the exposure of the whole layer and immediately before a new layer was added. The measured temperatures are within the expected range of room temperature to 300 °C. In addition, oxidation is expected to be avoided due to the inert gas atmosphere. Hence, the conversion curve obtained in the ex-situ calibration can be applied to the measurements.

Fig. 8 Left: simulated (dashed lines) and measured (solid lines) temperatures over build height of all specimens manufactured with constant process parameters (reference build). Right: the results of the first 4 mm

Because the simulation and the build were performed independently, the initial temperature of 160 °C and the top surface convection coefficient h used in the simulation were based on previous experience with 90° specimens. The initial start temperature varied for some of the experiments. In order to compare the heat development of experiments and simulations quantitatively, the graphs were shifted to similar initial values of approximately 159 to 160 °C.

The dependency of the temperature increase on the overhang angle was confirmed both experimentally and numerically. As observed, the specimens with no overhang (90°) were heated the least, approximately 15 °C above the initial temperature throughout the build. The 60°, 45° and 30° specimens were heated by 25 °C, 30 °C and 45 °C, respectively. The local slopes of the simulation results are very similar to those of the measured ones, which indicates a correctly balanced energy equation. The only exception are the 30° specimens. The reason for this is assumed to be that for smaller overhang angles, the area of powder below the part increases. Hence, the uncertainty in the thermal conductivity of the powder material becomes more important in the heat transfer equation, which decreases the accuracy of the prediction.

The observations show that in both the simulated and measured cases, the temperatures of all four geometries increase steadily during the process with a similar slope until z = 3 mm. Starting from a distance of about 3–4 mm from the build platform, the temperature increase rates of the different geometries start to differ from each other due to the overhang angles. The flatter the overhang angle, the faster the temperature increases. This observation confirms the expectation that the powder underneath the part decreases the heat dissipation. The geometry-induced change of the boundary conditions of the melting process leads to excessive energy input when constant process parameters are used. Further experiments shall investigate the feasibility of using locally varying parameters.

Figure 9 shows the differences between the measured and simulated results, which are given by the equation:

Fig. 9 Differences between experimental results and simulation results for all geometries in the reference build

$$\Delta T = {T}_{\mathrm{exp}}- {T}_{\mathrm{sim}}.$$
(5)

Most of the simulation results under-predict the experimental measurements, except for the largest overhang angle (30° specimens). The difference between simulation results and measured temperatures ranges from − 12 to + 6 °C. The simulation results for the 90°, 60° and 45° specimens show steady deviations (ranging from − 3 to + 6 °C) throughout the entire build height, while the deviation of the prediction for the 30° specimens increases after 4 mm build height. These temperature deviations remain within the error range estimated from the ex-situ calibration. Furthermore, the magnitude of the differences increases with decreasing overhang angle. This indicates the increasing impact of the overhang structure, where the thermal properties of the powder become more significant in the balance of energy for the part. The main error sources in the model are attributed to uncertainty in the material properties, especially those of the powder. The parameter values used for the boundary conditions, such as emissivity and convection coefficient, were based on the range of values reported in the literature, which are not always consistent.

In order to estimate the impact of heat transfer to the powder material on the heat build-up within the specimens, the powder surrounding the specimens was neglected in a second set of simulations. Figure 10 shows the results: the steeper the angle, the lower the impact of the powder. For the 30° specimens, the powder made a difference of 80 °C; for the vertical specimens with no overhang, it made a difference of approximately 20 °C.

Fig. 10 Comparison between the temperatures simulated without powder and with powder

Another possible error source in the comparison lies within the experimental setup. Because of the overhang angle, the cross-section being exposed moves in the y-direction over the build height. This leads to changes in the angle between the specimen surface and the IR camera. The surface travels 17 mm in the y-direction, which corresponds to an inclination angle difference of approximately 4°. However, the resulting change in the intensity detected by the IR camera with respect to the inclination angle, described by Eq. 1, was negligibly small (∆I ≈ 0.2%).

5 Experimental validation of the simulation using locally adaptive process parameters

To compensate for the heat accumulation induced by the overhang structure, one possible approach is to use adaptive process parameters that reduce the energy input by locally increasing the scan speed or decreasing the laser power. Hence, validation experiments with varied process parameters were conducted to determine the ability of the numerical model to predict the temperatures with varied energy input. Two experiments, separately varying laser power and scan speed, were conducted. The energy input in both experiments was linearly decreased over the build height from 100% (reference input) to 80%, which is approximately the amount of energy input used for the contour exposure. Because of the high heat dissipation close to the build platform, the first 4 mm of the specimens were manufactured with the reference parameters in order to ensure the attachment of the specimens to the build platform. Without a proper connection to the build platform, parts tend to lift due to thermally induced residual stress, which could lead to a collision with the recoater.
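A minimal sketch of such a ramp is given below, assuming that the linear decrease starts at 4 mm and reaches 80% at the 10 mm specimen top; the exact shape of the ramp used in the builds is not specified beyond the description above, so the function and its names are illustrative:

```python
# Sketch of the linear energy-input ramp, assuming the decrease starts at 4 mm
# build height and reaches 80 % at the 10 mm specimen top; the exact ramp used in
# the builds is not specified beyond the description above, so names and bounds
# are illustrative.
RAMP_START_MM  = 4.0
PART_HEIGHT_MM = 10.0
MIN_FACTOR     = 0.8

def energy_factor(z_mm):
    """Relative energy input (1.0 = reference parameters) at build height z."""
    if z_mm <= RAMP_START_MM:
        return 1.0
    frac = min((z_mm - RAMP_START_MM) / (PART_HEIGHT_MM - RAMP_START_MM), 1.0)
    return 1.0 - (1.0 - MIN_FACTOR) * frac

# The factor can be applied to the laser power (P = factor * P_ref) or, inversely,
# to the scan speed (v = v_ref / factor) to reduce the volumetric energy input.
```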

Figure 11 shows the temperature differences between measured and simulated processes of each specimen \(\Delta T = {T}_{\mathrm{exp}}- {T}_{\mathrm{sim}}\), in which linearly decreasing laser power was used starting at 4 mm build height. For all specimens, the simulation is able to predict the temperature with an accuracy of within ± 3 °C for constant laser power in the first 4 mm.

Fig. 11 The differences between measurements and simulations in the varying laser power build

For the specimens with less overhang (90°, 60° and 45°), the simulation tends to under-predict the temperature when the laser power is varied; the difference increases up to 10 °C at the end of the build, which is still within the approximate calibration error range mentioned previously. For the largest overhang (30°), the simulation over-predicts the temperature by up to 14 °C, as was also observed in the reference build.

Figure 12 displays the differences between the measured and simulated temperatures of the specimens in the varying scan speed build. As in the varying power build, the simulation results reach an accuracy within 5 °C when predicting the temperatures of the 45° and 60° overhang structures. The temperature of the largest overhang (30°) is over-predicted by around 15 °C after 10 mm build height.

Fig. 12 The differences between measurements and simulations in the varying scan speed build

Figure 13 shows the simulated absolute temperatures of all three builds for each overhang geometry. In all cases, the temperature decrease caused by the decreasing energy input above 4 mm build height can be observed. Overall, the change in scan speed has a slightly higher impact on the temperature. In addition, the temperature increase in the first 4 mm under constant energy input can also be observed, which correlates with the different overhang angles. Differences between the simulation results of the individual build jobs below 4 mm can also be observed. The reason for these differences is the slightly different cooling times in each build job, which were experimentally determined and fed to the simulations.

Fig. 13 Simulated temperatures over build height of top left: 30° specimens, top right: 45° specimens, bottom left: 60° specimens, bottom right: 90° specimens in reference build (‘ref’) with constant energy input and in both decreasing laser power build (‘varyingP’) and increasing scan speed build (‘varyingV’)

All in all, the accuracy of the simulation in predicting the part’s temperature decreases with increasing build height, especially as the overhang angle decreases from 90° towards 30°. However, despite the uncertainties both in the simulation and in the measurements, the model is able to predict the change in temperatures resulting from varying energy input. This ability can further be exploited when developing geometry-dependent adaptive scan strategies, in order to narrow down the trial-and-error approach of parameter development.

6 Feasibility of locally varied process parameters to build overhang structures

To ensure the feasibility of the applied process parameter variations, the resulting porosity of each specimen was analyzed. Figure 14 shows the optical microscope images of the 30°, 45°, 60° and 90° specimens manufactured using both varied laser power and varied scan speed over the build height. The reduced energy input did not significantly affect the relative density of the specimens. The fact that no defects caused by lack of fusion were found in the upper area of the specimens confirms the assumption that the process parameter combination used to form a solid connection between the parts and the build platform exceeds the energy required for full melting of a layer at elevated heights, and that a lower energy input could be used to build the part.

Fig. 14 Microscope image of the cross-section parallel to the building direction (z-axis) of 30°, 45°, 60° and 90° specimens manufactured with both decreasing laser power and increasing scan speed over the height

Despite the small deviations between the absolute temperatures obtained by measurement and simulation, the respective changes of the process parameters were reproduced sufficiently well by the simulation. This important first step of obtaining the quantitative correlation between process signatures (temperature) and process parameters will support the development of a control system for the L-PBF process. In general, the experiments showed that the heat accumulation induced by overhang geometry and constant process parameters can be compensated by reducing the laser power and/or increasing the scanning speed locally. The total amount of overheating was reduced by approximately 20 °C compared with the reference builds. The total exposure time decreased by 7.5%, and the laser power used decreased by 7% compared to the reference builds with constant process parameters. It is notable that the chosen variation of the process parameters was based on experience; hence, the experiments only show the feasibility of using varying parameters and do not represent optimized solutions. In further work, a target temperature for each layer could be set as an input in order to calculate the corresponding material- and geometry-specific process parameters.

7 Conclusion and outlook

In this study, the Diablo FEA code was used to model heat build-up during the L-PBF process and was experimentally validated. An infrared thermal camera was installed in order to obtain the surface temperature of the exposure plane. The geometry-induced heat accumulation, i.e. the temperature increase, could be both measured and calculated at constant and at varying energy input. The suitability of adapting the energy input to decrease the overall part temperature was demonstrated. The deviations between simulation and measurement remain in the single-digit range for smaller overhang structures (90°, 60° and 45°). For large overhang structures (30°), the simulation tends to over-predict the temperatures by up to 15 °C.

The optical microscopy analysis of the specimens confirmed that adjusting the laser power or scan speed as a function of build height can reduce the total energy consumption or manufacturing duration without leading to increased porosity. In order to build a real-time feedforward control system able to guide the process in-situ with adjustable energy input, the quantitative correlation between the process parameters and the process signatures has to be determined. Undoubtedly, experimentally validated physics-based modeling will play a significant role in this, as the effort of purely experimental investigations is unreasonably high.

Further experiments with varying process parameters should be conducted the other way around, using the model to calculate the energy input required for each layer to obtain previously defined temperatures. Being able to predict the thermal condition of the part with consideration of the impact of the complex geometry not only leads to faster process parameter optimization but also contributes to a closed-loop feedforward control system that could make the process much more stable and robust. The presented work focused only on one isolated geometry feature: the overhang. In real applications, multiple geometry features are potentially combined asymmetrically and influence each other, which increases the complexity of the temperature evolution within the parts. Thus, more complex parts should be investigated in order to further improve the simulation model’s ability to predict the thermal evolution of real parts. However, micro-scale simulations that consider the melt pool formation are computationally expensive and often not practical in an industrial environment. On the road to the industrialization of the technology, a deep understanding of the physics will help create correctly simplified solutions.