1 Introduction

Additive manufacturing (AM) is used in a variety of industrial areas where high-value-added parts with complex geometries are needed, such as aerospace [1]. Since AM enables the creation of complex geometries that are difficult or impossible to produce with conventional milling or casting processes, novel design approaches such as topology optimization and cellular structures can be applied in the design stage, which has given rise to the research field of design for additive manufacturing (DfAM) [2, 3]. At the part level, conventional DfAM usually aims at improving the functionality and manufacturability of AM parts [4]. To achieve these objectives, a number of previous works have focused on the development of innovative DfAM methods for structural topology optimization or lattice design of AM parts [4]. For example, in terms of material layout optimization, a multi-agent algorithm has been used in a method for designing, evaluating, and optimizing the manufacturability of AM parts [5]. Furthermore, a method based on constructive solid geometry has been developed to generate multiple design variants and select the optimum design using a genetic algorithm [6]. In terms of cellular structure design, a method based on the method of moving asymptotes has been proposed [7]. In this area, another method for gradient lattice design using bidirectional evolutionary structural optimization has been developed [8], and a method based on the optimality criteria algorithm has been proposed for functionally graded lattice design [9]. In terms of support structure design, a method based on a genetic algorithm has been developed to create and optimize tree-like support structures for AM parts [10].

Nevertheless, in assessing these works, it is seen that they do not consider objectives beyond geometry and manufacturability, such as the energy performance discussed in this paper. Therefore, conventional DfAM methods do not ensure an improvement of the environmental impact of AM in the product design stage. Environmental issues, however, have been another important topic in AM alongside DfAM [11]. While the technology was emerging, AM was often considered to be generally more environmentally friendly than conventional manufacturing processes. For example, Huang et al. summarized the environmental benefits of AM: the absence of cutting tools or dies reduces resource usage, the limited scrap during the build process reduces waste, and the improved engineering performance of lightweight AM parts yields environmental benefits during their use phase [12]. However, the latest findings also point out that the environmental benefits of AM can only be ensured if they are considered during the design stage [11, 13]. Otherwise, the environmental benefit of AM (i.e. that AM is more environmentally friendly than conventional manufacturing) may be just an illusion [11]. Furthermore, recent studies have shown that energy consumption is the main contributor to the overall environmental impact of the AM-based production phase [14], and therefore, the improvement of energy performance is currently an emerging research topic [15].

In existing methods, energy performance is usually studied during process planning or process chain planning for AM and is rarely considered in the design phase. Thus, opportunities to improve the energy performance of AM by varying product-related features are not fully exploited [16].

Against this background, this paper introduces a new framework that considers energy performance as the optimization objective during DfAM. The proposed framework consists of three key parts: (1) structural topology optimization, (2) tool-path length assessment, and (3) a multi-player competition algorithm. These three parts are combined in a holistic computational procedure, which enables the exploration of possible design variants from a given domain with the aim of finding the design variant with the highest energy performance. To validate the framework, two use cases are presented to illustrate the feasibility of the developed methods and tools.

2 Research Background

2.1 Energy Performance Issues in Additive Manufacturing

Energy performance is a key issue for ensuring the environmental sustainability of AM, considering that energy use may cause 67% to 75% of the environmental impacts in the production phase [14]. In the past, it was frequently argued that AM has more environmental benefits than conventional manufacturing because AM requires no cutting tools or dies, results in limited scrap, and enables design benefits that lead to environmental gains during the lifetime of AM products [11,12,13]. Nevertheless, the latest findings have shown that these benefits can only be ensured if they have been carefully examined and validated during the design stage [11]. Thus, the energy performance of AM should be analyzed and assessed before the build process of AM products starts. With respect to energy conversion [17], current studies on the energy issues of AM can be divided into primary energy, electricity (use energy), and thermal energy (final energy) issues.

The assessment and improvement of the primary energy use of AM are usually discussed when determining a design solution related to a manufacturing process chain, supply chain, production network, or entire product life-cycle with AM. For example, primary energy demand can be used as an indicator to compare AM with conventional manufacturing in order to validate the benefits of AM (i.e. the AM-based scenario has a lower primary energy demand than the conventional manufacturing-based scenario providing the same function) [18,19,20,21]. In these works, the primary energy demand is usually quantified using the life-cycle assessment (LCA) or the cumulative energy demand (CED) method (e.g. [22]).

For electricity use, energy performance quantification and improvement are performed when defining the parameters for a build process or a process chain. For example, the estimated electricity usage can be used as an indicator to compare design solutions with different process parameters or different manufacturing process chains based on different AM processes. For the electricity demand estimation, prediction software tools or analytical models are used [23,24,25,26]. For the improvement of energy performance in AM, optimization algorithms (e.g. genetic algorithms) can be used to optimize the process parameters [27]. Moreover, electricity issues are also widely analyzed using experiments, in which process parameters (e.g. layer thickness and laser power) are varied and the corresponding electricity consumption is measured using power meters. The relation between process parameters and electricity consumption can be analyzed using statistical methods (e.g. regression models), which can further contribute to the design of AM build tasks [28,29,30].

For thermal issues, energy performance is improved when defining materials or energy-input-related process parameters. For example, mixing copper powders with additive nanoparticles or defining a higher laser power will increase the energy absorption during laser processing in laser powder bed fusion [31, 32].

In assessing the above works, only limited studies have addressed DfAM and energy issues at the same time, e.g. [20, 21, 23, 33,34,35]. The key issues of these approaches are listed in Table 1. However, in these works, the authors treat the design activities (i.e. DfAM) separately from the energy performance evaluation activities, as depicted in Fig. 1. First, the product design (e.g. DfAM activities) is performed. Then, an AM build process or a manufacturing process chain based on AM is modeled to represent an AM-based production scenario. After that, the energy performance of the AM-based production scenario is quantified and assessed, and decisions for product improvement are made. In this way, DfAM and energy performance remain two separate topics, so energy performance improvement is not integrated into the DfAM activities of current approaches. Consequently, the time and effort for design are high due to the repeated DfAM and evaluation activities required by the improvement loop.

In our proposed work, DfAM and energy performance assessment are integrated, as depicted in Fig. 1. The benefit of our approach is that it does not require repeated individual design and evaluation activities, which saves time and cost in the design phase of AM. Moreover, if the time of a design cycle is reduced, AM designers are able to explore more design possibilities, which also implies more chances to improve the functionality and utilization performance of AM products. To outline the difference between our approach and the existing approaches, the key issues of our approach are described in italics and listed in Table 1.

Fig. 1.
figure 1

Difference between the existing approaches and the approach in this work.

Table 1. List of the existing approaches and our approach.

2.2 Research Target and Tasks for This Work

Based on the evaluation of the research background introduced in Sect. 2.1, it is clear that the literature still lacks a methodology in which the energy performance evaluation is integrated with DfAM. Therefore, this work addresses this research gap and contributes a DfAM computational framework in which the energy performance evaluation and DfAM are iteratively executed in one algorithm. To this end, we have defined three research tasks. First, a method is proposed to describe the energy performance of AM during DfAM. This is important since the description and quantification of the energy performance is the prerequisite for its optimization. Second, the improvement of the energy performance of AM in DfAM needs to be formulated as an optimization problem, together with a computational technique to solve it. The third task is to implement and validate the feasibility of the proposed computational framework. In the remainder of the paper, the results of these three research tasks are discussed.

3 Framework of Energy Performance Improvement in DfAM

3.1 Overview of the Framework

The concept of the proposed computational framework developed in this work is illustrated in Fig. 2 and described in the following subsection. The computational framework includes three core parts.

  • The first part is the structural topology optimization (TO) method, which generates the material layout with the best mechanical performance for a given design space and boundary conditions. In this framework, we use the method of “smooth-edged material distribution for optimizing topology (SEMDOT)”, a state-of-the-art TO algorithm suitable for AM [36]. In comparison to other TO algorithms, SEMDOT creates smooth geometric boundaries, which ensures the manufacturability of the TO results for AM processes [36]; this is also the reason why this framework chooses SEMDOT as the TO tool.

  • The second core part is the “tool-path length assessment” method, in which the tool-path length needed to create the geometry with the AM process is estimated. The reason for this method is that in DfAM, calculating the energy consumption directly is difficult because energy is a time-dependent process characteristic (i.e. energy is the time integral of power) rather than a product characteristic [16]. Therefore, it is necessary to define a parameter that is relevant to both the product and the energy, so that the evaluation of the energy can be replaced by the assessment of this parameter. In this work, the AM tool-path length is regarded as such a parameter, since it is proportional to the energy consumption (i.e. a longer tool-path leads to a higher energy demand) and related to the part geometry (i.e. more internal features lead to a longer tool-path).

  • The third part of the framework is the “multi-player competition algorithm”, an iterative optimization technique to compare and select the optimum geometry variant with the minimum AM tool-path length (i.e. the highest energy performance in this work) [37]. The multi-player competition algorithm iteratively executes SEMDOT and the tool-path length assessment in one computational procedure. The reason for choosing this algorithm is that it balances computational efficiency and search quality for our problem.

The details of these three parts will be explained in the next subsections.

Fig. 2.
figure 2

Overview of the proposed framework.

3.2 Structural Topology Optimization

In general, the design space for TO can be regarded as a discrete domain with a number of elements, as shown in Fig. 3. Thus, defining the material layout for this design space under certain boundary conditions can be considered as defining the relative material density Xe (a dimensionless value, not the density in g/cm3) for each element in that discrete domain. Given a minimum relative material density ρmin, the density Xe of each element should be a value between ρmin and 1. For example, as depicted in Fig. 3, if Xe is 1, the element is filled with material (i.e. a black element), while if Xe is ρmin, the element contains no material (i.e. a white element). Thus, the objective of a TO problem based on varying the material density of elements can be described by Eq. 1 [36], where C represents the compliance; K, u, and f are the global stiffness matrix, the displacement vector, and the force vector, respectively; Ve and V* represent the volume of the elements and the pre-defined target volume, respectively; and M is the total number of elements.

$$ \begin{array}{*{20}c} {\begin{array}{*{20}c} {minimize{:}} & {C\left( {X_{e} } \right) = f^{T} u} \\ {\begin{array}{*{20}c} {subject\,to{:}} \\ {\begin{array}{*{20}c} {} \\ {} \\ \end{array} } \\ {} \\ \end{array} } & {\begin{array}{*{20}c} {K\left( {X_{e} } \right)u = f} \\ {\frac{{\mathop \sum \nolimits_{1}^{M} X_{e} V_{e} }}{{\mathop \sum \nolimits_{1}^{M} V_{e} }} - V^{*} \le 0} \\ {0 \le \rho_{min} \le X_{e} \le 1;e = 1, 2, \ldots M} \\ \end{array} } \\ \end{array} } \\ \end{array} $$
(1)
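The two core quantities in Eq. 1 can be sketched numerically. The following is a minimal illustration (not the SEMDOT implementation itself) of evaluating the compliance C = fᵀu, with u solved from Ku = f, and the volume-fraction constraint; the stiffness matrix, force vector, and density values are hypothetical toy data:

```python
import numpy as np

def compliance(K, f):
    """Compliance C = f^T u, with u obtained by solving K u = f (Eq. 1)."""
    u = np.linalg.solve(K, f)
    return float(f.T @ u)

def volume_constraint(Xe, Ve, V_star):
    """Volume constraint of Eq. 1: sum(Xe*Ve)/sum(Ve) - V* <= 0 when feasible."""
    return float(np.sum(Xe * Ve) / np.sum(Ve) - V_star)

# Toy 2-DOF stiffness system (illustrative values only)
K = np.array([[4.0, -1.0], [-1.0, 3.0]])
f = np.array([1.0, 2.0])
C = compliance(K, f)                    # = 23/11 for this toy system

Xe = np.array([1.0, 0.001, 0.5, 1.0])   # relative element densities
Ve = np.ones(4)                         # equal element volumes
g = volume_constraint(Xe, Ve, 0.7)      # negative => constraint satisfied
```

The same two routines are what a TO iteration evaluates repeatedly while updating the density field Xe.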

However, an element-based TO result does not have smooth edges, which does not meet the manufacturability requirements for AM. The SEMDOT method inserts grid points within the elements and then uses a level-set function to represent the smooth edges of the geometry [36]. For each grid point, a grid density ρe,g is defined, and the relation between Xe and ρe,g can be expressed by Eq. 2 [36], where N represents the total number of grid points of an element.

$$ \begin{array}{*{20}c} {X_{e} = \frac{1}{N}\mathop \sum \limits_{g = 1}^{N} \rho_{e,g} } \\ \end{array} $$
(2)

Finally, by using the Heaviside function, the smooth edges of the geometry can be generated [36]. In SEMDOT, the formation of the geometry is controlled by two pre-defined parameters: the radius of a circular filtering domain defined by elements (rfilter) and the heuristic radius of a circular filtering domain defined by grid nodes (Yfilter) [36]. Different combinations of these two parameters result in different geometries, as shown by the examples in Fig. 3. In general, rfilter should be varied between 1 and 3.5, and Yfilter between 1 and 3. Thus, in this work, the parameters (rfilter, Yfilter) are varied to generate different variants in the TO part.

Fig. 3.
figure 3

Computational procedure and results of SEMDOT algorithms (own illustration based on [36]).

3.3 Tool-Path Length Assessment

In general, the term ‘energy performance’ is a metric expressing the energy efficiency, energy use, or energy consumption of systems or processes [38]. Therefore, the prerequisite for improving the energy performance of AM is to quantify the energy required to perform a build task. In general, the quantification of the energy demand requires power and time information, since energy is the time integral of power. However, this power and time information depends on process parameters (e.g. laser power, laser speed, build orientation, and layer thickness), which directly affect the build time and power demand of AM machines. In DfAM, process parameters are not considered because they are mainly defined during the process planning stage, in which the product design has already been completed. Consequently, the missing process parameters make it difficult to quantify the energy demand of a build task during DfAM. To overcome this challenge, we use an equivalent evaluation indicator that satisfies two requirements. First, the indicator should be related to the geometrical features of a product so that it can be described and investigated at the DfAM stage. Second, the indicator should be positively or inversely proportional to the energy consumption of AM so that minimizing the energy consumption can be realized by minimizing or maximizing the indicator.

In this work, the AM tool-path length is regarded as the equivalent indicator since it is related to both the geometrical features and the energy consumption of AM. In general, an AM tool can be regarded as the means for processing the material in AM, and different AM processes have different AM tools. For example, in laser powder bed fusion, the AM tool is a laser beam, while in fused deposition modeling, the AM tool is a nozzle with a heating core. For each layer, the path of an AM tool can generally be distinguished between ‘contour’ and ‘hatching’, as depicted in Fig. 4 on the left.

Fig. 4.
figure 4

Illustration of tool-path and its relation to the geometry variants.

The contour describes the path of the AM tool traveling along the edge of the geometry, while the hatching refers to the path of the AM tool traversing the cross-sectional area of that geometry. Thus, the total tool-path length (ltp) of the AM tool is the sum of the path lengths for the contour (lc) and the hatching (lh), as expressed in Eq. 3, where LC and LH indicate the lengths of individual contour lines and hatching lines, respectively, and Ncontour and Mhatching represent the total numbers of contour and hatching lines, respectively.

$$ \begin{array}{*{20}c} {l_{tp} = \overbrace {{\mathop \sum \limits_{n = 1}^{{N_{contour} }} L_{C\left( n \right)} }}^{{l_{c} }} + \overbrace {{\mathop \sum \limits_{m = 1}^{{M_{hatching} }} L_{H\left( m \right)} }}^{{l_{h} }}} \\ \end{array} $$
(3)
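Eq. 3 amounts to summing all individual contour and hatching segment lengths. A minimal sketch, with purely hypothetical segment lengths in mm:

```python
def total_toolpath_length(contour_lengths, hatching_lengths):
    """Eq. 3: l_tp = l_c + l_h, where l_c sums the contour segments L_C(n)
    and l_h sums the hatching segments L_H(m)."""
    l_c = sum(contour_lengths)
    l_h = sum(hatching_lengths)
    return l_c + l_h

# Hypothetical example: two contour loops plus 40 hatching lines of 10 mm each
l_tp = total_toolpath_length([120.0, 35.5], [10.0] * 40)  # 155.5 + 400.0
```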

Given a geometric design (G0) with a specific area (A0), the tool-path length to create this geometry can be denoted as ltp0. If this geometry is modified to another shape (Gk) with the same area A0, the tool-path length for the new geometry will differ, since there may be more or fewer internal holes and more or less complex curves in the new geometry. Assuming a constant power of the AM tool during material processing, the energy demand of the AM tool to create a geometry is related to the tool-path because the tool-path length determines the processing time. If the AM tool-path lengths of different geometries with the same area A0 are different, the energy required to produce them will also differ, as shown in Fig. 4 on the right. Thus, the problem of finding the design variant with the least energy consumption is equivalent to finding the design variant with the shortest ltp, as expressed in Eq. 4, where G represents a geometry variant generated by the SEMDOT method, SG represents the set of all geometries for a given population K, and G* represents the optimum geometry to be found. This method is denoted as the tool-path length assessment (TLA) in this work.

$$ \begin{array}{*{20}c} {G^{*} = \mathop {{\text{argmin}}}\limits_{{G_{k} \in S_{G} }} l_{tp} \left( {G_{k} } \right),\quad k \in \left[ {1,K} \right]} \\ \end{array} $$
(4)
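The selection in Eq. 4 is a plain argmin over the geometry set. A minimal sketch with hypothetical ltp values (in mm) for three variants:

```python
def select_optimum(geometries, ltp_of):
    """Eq. 4: return the geometry G* in S_G with the minimum tool-path
    length l_tp, i.e. the argmin over the population."""
    return min(geometries, key=ltp_of)

# Hypothetical l_tp values for three SEMDOT variants
ltp_values = {"G1": 3400.2, "G2": 3338.8, "G3": 3512.0}
best = select_optimum(ltp_values, ltp_values.get)  # the variant with min l_tp
```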

Finally, to implement the TLA, we use an image processing technique, as shown in Fig. 5 and explained in the following. First, the result of the SEMDOT method is an optimized geometry, which can be exported in a standard image file format (e.g. JPG). We then use the Python library OpenCV to detect and extract the contour of the geometry. At the same time, a hatching template is prepared in the form of another image. In the example shown in Fig. 5, the distance between any two neighboring hatching lines is defined to be 1 mm. The extracted contour is then infilled with the hatching line template. The result is the tool-path, including the contour and hatching, for the given geometry. Finally, the tool-path length in the image can be conveniently detected and calculated using OpenCV.
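The paper implements this step with OpenCV; as a dependency-free sketch of the same counting idea, the snippet below approximates the contour length by counting boundary pixels of a binary mask (1 = material, 0 = void) and the hatching length by counting filled pixels on every hatch row. The pixel-to-mm scale and hatch spacing are assumptions for illustration:

```python
import numpy as np

def toolpath_length_from_mask(mask, hatch_spacing_px=1, mm_per_px=1.0):
    """Simplified TLA on a binary image. Contour: pixels with at least one
    void or out-of-image neighbour. Hatching: filled pixels on hatch rows."""
    padded = np.pad(mask, 1, mode="constant")           # zero border
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &  # all 4 neighbours
                padded[1:-1, :-2] & padded[1:-1, 2:])   # filled => interior
    contour_px = int(mask.sum() - (mask & interior).sum())
    hatch_px = int(mask[::hatch_spacing_px].sum())      # hatch every n rows
    return (contour_px + hatch_px) * mm_per_px

# Toy geometry: a 4x6-pixel solid rectangle inside an 8x10 image
mask = np.zeros((8, 10), dtype=np.uint8)
mask[2:6, 2:8] = 1
l_tp = toolpath_length_from_mask(mask, hatch_spacing_px=2)
```

In the real pipeline, `cv2.findContours` on the exported geometry image would replace the boundary-pixel count, but the accounting of contour plus hatching is the same.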

Fig. 5.
figure 5

Image processing technique to implement TLA.

3.4 Multi-player Competition Algorithm

To solve the problem formulated in Eq. 4, we propose the multi-player competition algorithm (MPCA), which is inspired by sports games in which multiple players compete over multiple rounds until only one player wins (e.g. table tennis and running). Considering our problem as a game, the scenario is that multiple players select geometry variants from a design domain, and the one who picks the geometry with the shortest ltp wins the game.

For a better understanding of the MPCA, the computational procedure is illustrated as pseudo code (depicted in Fig. 6) and explained in the following. First, it is assumed that there is a certain number of players (K), expressed as a set \(S_{player} = \left\{ {1, 2, 3, \ldots , K} \right\}\), and that the game is repeated for at most nmax rounds. In the first round, each player picks a parameter combination (rfilter, Yfilter) stochastically from the respective value ranges (i.e. [rmin, rmax] and [Ymin, Ymax]). All parameter combinations are summarized in a new set \(S_{rY} = \left\{ {\left( {r_{filter}^{\left( 1 \right)} , Y_{filter}^{\left( 1 \right)} } \right), \ldots \left( {r_{filter}^{\left( K \right)} , Y_{filter}^{\left( K \right)} } \right)} \right\}\). After that, SEMDOT is applied to each parameter combination in the set SrY, yielding a set of geometry variants, denoted as \(S_{G} = \left\{ {G^{\left( 1 \right)} , G^{\left( 2 \right)} , \ldots , G^{\left( K \right)} } \right\}\). By applying the TLA to each geometry variant in the set SG, a new set containing the tool-path lengths of all geometry variants is obtained: \(S_{ltp} = \left\{ {l_{tp}^{\left( 1 \right)} , l_{tp}^{\left( 2 \right)} , \ldots , l_{tp}^{\left( K \right)} } \right\}\). Finally, all ltp values from the set Sltp are ranked in ascending order, and the player with the shortest ltp is regarded as the winner of this round. For the next round, the total number of players K is updated by \(K = K\left( {1 - \eta_{eli} } \right)\), where \(\eta_{eli}\) is a parameter between 0 and 1 indicating the percentage of players to be eliminated from the game. This is intended to enable the convergence of the computational procedure. Note that only the Kηeli players with the lowest ranking positions (i.e. players with longer ltp) are eliminated from the game, whereas the surviving players continue to the next round. As an example, Fig. 6 shows a case with 10 players in the game. Assuming an \(\eta_{eli}\) of 0.3, three players are eliminated from the game (i.e. the players marked in red).
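The ranking-and-elimination step above can be sketched in a few lines. The player labels and ltp values below are hypothetical:

```python
def eliminate_players(player_ltp, eta_eli):
    """One MPCA elimination step: rank players by l_tp (ascending) and drop
    the worst fraction eta_eli; the survivors continue to the next round."""
    ranked = sorted(player_ltp.items(), key=lambda kv: kv[1])
    n_out = int(len(ranked) * eta_eli)          # K * eta_eli players eliminated
    return dict(ranked[:len(ranked) - n_out])

# 10 players with hypothetical l_tp values (mm); eta_eli = 0.3 eliminates 3
players = {f"P{i}": ltp for i, ltp in enumerate(
    [3400, 3350, 3600, 3380, 3700, 3340, 3550, 3420, 3500, 3338], 1)}
survivors = eliminate_players(players, 0.3)     # 7 players remain
```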

Moreover, to enable convergence, the pick-up domain of (rfilter, Yfilter) is scaled down in each round following the strategy of “area selection by minimum mean ltp (MeanToPaL)”. The MeanToPaL strategy is illustrated in Fig. 6 and explained in the following. First, the value ranges for (rfilter, Yfilter) are initially defined as \(r_{filter} \in \left[ {1, 3.5} \right]\) and \(Y_{filter} \in \left[ {1,3} \right]\). This domain can be regarded as a square playground, in which each player picks a point indicating a combination of (rfilter, Yfilter). Assuming 10 initial players in the game and that player P9 wins Round 0, the playground is split into four regions: top-left (TL), top-right (TR), bottom-left (BL), and bottom-right (BR). One of these four regions is selected for the next competition round. In the MeanToPaL strategy, the mean ltp of the players present in each region is calculated, and the region with the lowest mean ltp is selected for the next round. In the example shown in Fig. 6, player P9 wins Round 0, and the region TR has the minimum mean ltp; therefore, the other three regions are excluded in the next round (i.e. Round 1). In the following two rounds, the region keeps narrowing down, and finally, player P9 wins the game.
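The MeanToPaL region update can be sketched as follows; the player picks (r, Y, ltp) are hypothetical, and ties on quadrant boundaries are ignored for simplicity:

```python
def mean_topal_select(points, bounds):
    """MeanToPaL: split the (r_filter, Y_filter) playground into four
    quadrants and keep the quadrant with the lowest mean l_tp."""
    (r_lo, r_hi), (y_lo, y_hi) = bounds
    r_mid, y_mid = (r_lo + r_hi) / 2, (y_lo + y_hi) / 2
    quads = {"TL": ((r_lo, r_mid), (y_mid, y_hi)),
             "TR": ((r_mid, r_hi), (y_mid, y_hi)),
             "BL": ((r_lo, r_mid), (y_lo, y_mid)),
             "BR": ((r_mid, r_hi), (y_lo, y_mid))}

    def mean_ltp(quad):
        (ra, rb), (ya, yb) = quad
        vals = [l for (r, y, l) in points if ra <= r <= rb and ya <= y <= yb]
        return sum(vals) / len(vals) if vals else float("inf")  # empty = skip

    name = min(quads, key=lambda q: mean_ltp(quads[q]))
    return name, quads[name]

# Hypothetical player picks: (r_filter, Y_filter, l_tp)
points = [(1.2, 1.3, 3350), (3.0, 2.8, 3600), (1.5, 2.5, 3700), (2.0, 1.2, 3400)]
name, new_bounds = mean_topal_select(points, ((1.0, 3.5), (1.0, 3.0)))
```

The returned bounds then become the pick-up domain for the next competition round.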

By iteratively using TO and TLA in multiple competition rounds, the winner of the last round is regarded as the final winner of the game.

Fig. 6.
figure 6

Pseudo code and update logic for (rfilter, Yfilter) to explain MPCA.

4 Use Cases

4.1 Use Case 1: 2D Optimization Problem

4.1.1 Description of the Scenario and Implementation Procedure

In this work, the computational framework is first implemented for a 2D optimization problem, in which a simply supported beam is studied with a force acting on the lower side in the middle, as shown in Fig. 7. This problem has been chosen for the use case because it is a classical benchmark geometry in the TO research field; studying it in this use case therefore allows us to compare our results with those in the existing literature. The optimization problem is to reduce the volume by 70% while generating multiple geometries and comparing them to find the variant with the minimum ltp.

Figure 7 shows the flowchart of the MPCA implementation for this use case, in which the initial population K is set to 30, ηeli is set to 0.2, and the maximum number of rounds nmax is set to 5. A loop then iteratively runs the TO and TLA methods until only one player survives or the maximum number of rounds is reached. Moreover, to enable a comparison with the method without MPCA, we have also performed a baseline study, in which the rfilter and Yfilter values are varied from 1 to 3.5 and from 1 to 3, respectively, with a step of 0.1. For each variation, SEMDOT and the TLA are performed, and the ltp is calculated. In total, 546 variations (i.e. 21 ⋅ 26) are considered. Since this method explores every combination of (rfilter, Yfilter) in the full search space, it can be regarded as an exact solution approach; the geometry variant with the shortest ltp is denoted the global optimum, and the variant with the longest ltp the worst variant. Finally, the winner of the MPCA is compared with the global optimum and the worst variant for the 2D problem.
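The baseline grid of the full exploration can be enumerated directly; the sketch below only reproduces the parameter grid (26 rfilter values times 21 Yfilter values), not the SEMDOT/TLA evaluation of each variant:

```python
def grid_combinations(r_range=(1.0, 3.5), y_range=(1.0, 3.0), step=0.1):
    """Baseline full exploration: every (r_filter, Y_filter) pair on a
    0.1-step grid over the full search space."""
    n_r = round((r_range[1] - r_range[0]) / step) + 1   # 26 r_filter values
    n_y = round((y_range[1] - y_range[0]) / step) + 1   # 21 Y_filter values
    return [(round(r_range[0] + i * step, 1), round(y_range[0] + j * step, 1))
            for i in range(n_r) for j in range(n_y)]

combos = grid_combinations()   # 546 variants, matching the 21 * 26 in the text
```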

Fig. 7.
figure 7

Description and implementation of the computational framework.

4.1.2 Results of the Computational Framework

Figure 7 shows the results for the winner of the MPCA, the global optimum, and the worst variant. The corresponding ltp values are listed in Table 2. Comparing the geometry of the global optimum with that of the winner of the MPCA, it is observed that they look very similar. This is also supported by their ltp values, as shown in Table 2: the ltp of the global optimum and the winner of the MPCA are 3338.8 mm and 3340.2 mm, respectively, a difference of 1.4 mm. Thus, although the MPCA fails to capture the global optimum, the difference is negligible. Comparing the geometry of the winner of the MPCA with the worst variant, it is observed that the worst variant contains five large internal contours, which enlarge the ltp. To verify the effectiveness of the energy consumption reduction by the MPCA, the geometries of the global optimum, the winner of the MPCA, and the worst variant are printed with a material extrusion printer (an Ultimaker 3). The energy consumption is measured using a power meter, and the results are also listed in Table 2. Comparing the results, a reduction rate of approximately 6% is observed for this case, which verifies the feasibility of the MPCA for improving the energy performance during DfAM in a 2D problem.

Table 2. List of the ltp and energy required to produce the geometries.

Moreover, for a better understanding of the functional logic of the MPCA, Fig. 8 shows the convergence process of the MPCA approach. Note that the 3D contour surface is plotted based on the results of the full exploration (i.e. the ltp of 546 design variants). The markers on the 3D contour surface describe the pick-up positions of the players in different rounds. In assessing the 3D contour surface, two areas are highlighted, described as “mountains” and “plains”, where the deepest position indicates the global optimum, as seen in Fig. 8. In Round 0, the markers are almost evenly distributed over the entire playground and can be seen in both the mountain and plain regions. In the next round (i.e. Round 1), the markers are distributed only in the plain region. In Rounds 2 and 3, the markers move toward the region where the global optimum is located. In the final round, although the players do not capture the global optimum, the difference between the final winner and the global optimum is not significant. The movement of the markers on the 3D contour surface clearly shows the convergence of the MPCA. Moreover, the violin chart shows the ltp of the players in each round, where it is seen that the height of the violins narrows to smaller value ranges. This is further proof of the convergence and computational capability of the MPCA.

Fig. 8.
figure 8

Convergence process of the approach.

4.2 Use Case 2: 3D Optimization Problem

4.2.1 Description of the Scenario and Implementation Procedure

To verify the capability of our approach for 3D optimization problems, we have implemented our MPCA approach for a beam use case, as shown in Fig. 9. Similar to the simply supported beam in the first use case, this case has been chosen because it is a classic design problem in the TO research field, which allows for benchmarking and comparison of different TO methods. In this case, the beam is fixed on two sides with a load in the center. The length, width, and height are 60 mm, 10 mm, and 10 mm, respectively. Unlike the 2D optimization problem, in which the result of SEMDOT is a 2D contour directly suitable for image processing, the result of 3D SEMDOT is a 3D mesh, which cannot be used for image processing directly. Thus, in this 3D optimization problem, the mesh is first sliced into 50 layers, and each layer is then infilled with the hatching patterns. To evaluate the ltp of a 3D mesh, the sum of the ltp values over all 50 layers is calculated and compared. Apart from the slicing step, the remaining steps are the same as in the computational procedure of the 2D use case, as shown in Fig. 7. Moreover, the full exploration has also been carried out as the baseline study.
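The 3D evaluation reduces to summing per-layer tool-path lengths over the sliced model. A minimal sketch, with a toy voxel model and a placeholder per-layer ltp function standing in for the 2D TLA:

```python
import numpy as np

def total_ltp_3d(voxels, layer_ltp):
    """3D case: slice the voxel model along the build direction (axis 0)
    and sum the per-layer tool-path lengths returned by `layer_ltp`."""
    return sum(layer_ltp(voxels[z]) for z in range(voxels.shape[0]))

# Toy voxel beam: 50 layers of a 10x60 cross-section, fully solid.
# The lambda is a stand-in for the real per-layer TLA (hypothetical model).
voxels = np.ones((50, 10, 60), dtype=np.uint8)
l_tp = total_ltp_3d(voxels, layer_ltp=lambda layer: float(layer.sum()))
```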

4.2.2 Results of the Computational Framework

Figure 9 shows examples of the hatching image of one layer for the three meshes. Table 3 summarizes the ltp values for the global optimum, the final winner of the MPCA, and the worst variant. The mesh of the winner of the MPCA is finally converted into an STL file, which can be used directly for AM.

In assessing the point and contour charts in Fig. 9, it is observed that the final winner of the MPCA is not the global optimum. Nevertheless, the difference between the global optimum and the MPCA winner is negligible. The (rfilter, Yfilter) values of the global optimum, the MPCA winner, and the worst variant are (1.5, 1.1), (1.2, 1.7), and (3, 1.1), respectively. Moreover, the contour chart shows that the pick-up points of the players converge to the plain region, where the global optimum is located. Likewise, the violin chart depicting the ltp of the players shows that the distribution of the ltp values picked up by the players is successively narrowed into a smaller range. Based on these observations, it is concluded that our MPCA has converged.

Furthermore, to verify the energy savings of the MPCA, the STL files are used to print the respective geometries using the same material extrusion printer as in the 2D problem. The measured energy consumption is summarized in Table 3. In comparing the energy consumption values, it is observed that the reduction rates for the global optimum and the MPCA winner are 1.8% and 1.5%, respectively. This implies that the MPCA is also suitable for the 3D problem.

Fig. 9.

Design problem and results for the MBB-beam case.

Table 3. List of the ltp and energy required to produce the geometries in 3D problems.

5 Discussion

Based on the use cases, three issues should be further discussed. First, in comparing the 2D design problem with the 3D design problem, it is observed that the reduction rate of the 2D problem (6%) is higher than that of the 3D problem (1.5%). This implies that the energy reduction performance of our MPCA is case-specific. Nevertheless, in terms of the effectiveness of the energy consumption reduction, it is still concluded that the MPCA enables energy savings in 3D design problems for AM.

Second, in terms of the result quality of the MPCA, it is seen that the global optimum is not achieved in either use case. This is reflected by the contour charts in Fig. 8 and Fig. 9, in which none of the pick-up positions overlaps with the position of the global optimum. The reason is that the MPCA approach is not an exact algorithm that explores every possible pick-up position in the playground. Instead, it is an approximation method that searches the playground according to several pre-defined rules (e.g. MeanToPal for updating the search domain), which means that missing the optimal solution is always possible. In contrast, the full exploration approach, which serves as the baseline for both use cases, is a typical exact solution that guarantees the capture of the global optimum. However, ensuring the result quality comes at the cost of computational time. For example, in the 2D problem, the computational time for the full exploration is approximately 37 h, while the time for the MPCA approach is only approximately 11 h on a laptop with an i7 CPU and 32 GB of RAM. This means that the MPCA approach has saved almost 70% of the computational time. Based on this observation, it is still concluded that the MPCA is suitable for the energy saving of AM processes in the DfAM stage.
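The roughly 70% time saving can also be understood in terms of objective evaluations: the full exploration must evaluate every design variant, while the MPCA only evaluates the players picked in each round. A back-of-the-envelope sketch, where the player and round counts are assumed for illustration (the paper's exact settings may differ):

```python
# Full exploration: every design variant of the 2D use case is
# sliced, hatched, and evaluated once.
full_evals = 546

# MPCA: only the sampled pick-up positions are evaluated.
# 20 players over 8 rounds is an assumed configuration.
mpca_evals = 20 * 8

saving = 1 - mpca_evals / full_evals
print(f"evaluations saved: {saving:.0%}")  # -> evaluations saved: 71%
```

Since each evaluation involves the same slicing and hatching work, the fraction of evaluations saved tracks the observed wall-clock saving (37 h vs. 11 h) reasonably well.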

Third, the MPCA approach is newly proposed in this work and has not yet been compared with other existing optimization methods, such as the genetic algorithm and particle swarm optimization. Therefore, although the effectiveness of the MPCA approach has been confirmed in this work, it still needs to be compared with conventional optimization methods in future work.

6 Conclusion and Outlook

In summary, this paper introduces the development and validation of a computational framework in which the energy performance of AM is evaluated and improved during the DfAM stage. Based on the use cases, three conclusions are drawn. First, it is confirmed that the AM tool-path length can be used as an equivalent indicator during the DfAM stage to approximate the energy performance, since the process parameters required for a precise quantification of energy consumption are not yet available at this stage. Second, in the 2D problem, the proposed MPCA enables an energy consumption reduction rate of approximately 6%, while in the 3D problem, the reduction rate is approximately 2%. Third, the MPCA approach is a suitable approximation approach to balance the result quality with the computational time in the proposed optimization problem.

In terms of future work, the following three topics are suggested. The first is to further improve the computational efficiency, considering that the current computational time of the MPCA can be up to several hours. The second is to compare the MPCA with other optimization techniques. Finally, future work should consider multi-objective optimization scenarios that include additional objectives (e.g. mechanical performance and manufacturing cost), whereas this work considers only energy performance improvement as a single-objective problem.