Introduction

Metal lattice structures produced by additive manufacturing (AM) have attracted extensive attention owing to their light weight, geometric complexity, and integration of structure and function [1,2,3]. AM builds geometrically complex parts by joining material layer by layer according to 3D model data generated by computer-aided design (CAD) [4]. Selective laser melting (SLM), which selectively melts metal powder by scanning it with a laser [4], is widely used to fabricate metal lattice structures owing to its short preparation cycle and wide application range [5]. Kagome, cone, octagonal truss, body-centred cubic (BCC), and composite lattice structures are commonly studied [6, 7], and AM can fabricate them in stainless steel [8], titanium [9], and aluminium [10]. However, inherent defects arise during AM fabrication [11]; the three main types in metal lattice structures are oversizing, undersizing, and waviness [12,13,14,15]. Such defects alter the mechanical properties and failure response and can render certain intended functions unavailable [12, 16,17,18]. Understanding the relationship between inherent defects and the mechanical properties of lattice structures is therefore crucial.

Most studies of lattice structure defects and mechanical properties use computed tomography (CT) scanning to classify defects and then combine the results with a finite element (FE) model for further analysis. Campoli et al. [18] proposed a method to transform CT-reconstructed struts into an FE model. Lei et al. [7] incorporated equivalent inherent defects (oversizing and waviness) into an FE model and simulated its mechanical performance; they concluded that the FE model with defects agreed more closely with the actual mechanical experiment. Liu et al. [12] investigated the elastic and failure responses caused by inherent defects using nonlinear FE analysis of detailed imperfect models of regular octet and rhombicuboctahedron structures, identifying strut oversizing as an important parameter affecting the lattice structure. They also quantified defects in a two-dimensional honeycomb structure and verified the approach in a three-dimensional metal lattice structure, providing a basis for future research. Baxter et al. [6] investigated the mechanical properties of lattice structures with hybrid topologies using an FE model. Most studies of the mechanical properties of lattice structures thus rest on FE models, which indicate that defects influence structural quality. However, building a corresponding FE model becomes harder, and its accuracy degrades, as the types or number of defects increase [19, 20]. Consequently, the influence of the various defects on performance is still not fully characterised, and in-depth theoretical and methodological research is needed.

Performance prediction for lattice structures has mainly focused on fatigue life [21, 22]; few studies address the prediction of maximum stress, even though the maximum stress is a good indicator of a structure's load-bearing capacity. Because defects are random and diverse, data-driven prediction methodologies that account for the distribution patterns and varied characteristics of defects warrant in-depth study. Machine learning (ML) offers a new approach to lattice structure performance prediction. In this study, a fast prediction model was therefore established from measured sample data using ML, and the maximum stress of the lattice structure was predicted with full consideration of the random distribution and diverse characteristics of defects. Compared with an FE model, this approach is closer to the actual condition of the samples, better suited to the randomness and diversity of lattice structure defects, and generalises more strongly.

ML is a type of artificial intelligence that relies on algorithms that learn from data without explicit programming [23]. Jin et al. [24] surveyed the applications of machine learning in AM in detail; the main applications are geometric design, process parameter configuration, and in-situ anomaly detection [25,26,27]. Studies predicting the maximum stress of metal lattice structures remain scarce, although the maximum stress reflects the load-bearing capacity of the structure well and, combined with machine learning, can simplify the prediction model. Malakar et al. [28] designed a hierarchical feature selection (HFS) model based on a genetic algorithm to optimise the local and global features extracted from handwritten images. Bacanin et al. [29] proposed an improved version of the firefly algorithm that corrects the shortcomings of the original through explicit exploration mechanisms and a chaotic local search strategy, verified in the deep-learning subdomain of convolutional neural networks; they evaluated it on five standard image-processing benchmark datasets: MNIST, Fashion-MNIST, Semeion, USPS, and CIFAR-10. These studies belong to a novel and promising research field, the hybridisation of meta-heuristics and machine learning, which successfully combines machine learning with swarm intelligence methods and has obtained excellent results in different fields.

Building on the aforementioned research, this study proposes a data-driven XGBoost-BGP maximum stress prediction model. A Bayesian hyperparameter optimisation method is used to tune the hyperparameters and further improve the prediction and generalisation abilities of the model, and the relationship between the four input parameters and the maximum stress is analysed. The main contributions of this study are as follows:

  • We proposed a data-driven prediction model of the AM lattice structure with inherent defects based on improved XGBoost.

  • We used a Gaussian process to optimise the hyperparameters of the XGBoost model instead of selecting them based on experience.

  • We presented the relationship between the four types of input parameters of the model and the maximum stress prediction results.

The remainder of this paper is organised as follows: “Related works” introduces machine learning research on the prediction of lattice structures. “Methodology” introduces the prediction model used in this study. The analysis and findings of the experiment are presented in “Experimental results and analysis”. The discussions are presented in “Discussion”. Conclusions and future work are discussed in “Conclusion and future work”.

Related works

Machine learning has advanced and is now used in various sectors [30]. Nasiri and Khosravani [31] focused on the use of ML to forecast the fracture and mechanical behaviour of AM parts; they investigated, reviewed, and discussed the use of ML in the characterisation of polymers and AM components, highlighted the constraints, difficulties, and potential of commercial ML applications in AM, and considered the advantages of ML for predicting mechanical qualities, improving AM parameters, and assessing 3D-printed objects. Jin et al. [32] presented a three-layer neural-network structure to predict the mechanical characteristics of an ultra-fine-grained Fe-C alloy. Experiments on metal AM samples often need to be repeated, which is time-consuming and expensive, while the simplifying assumptions made in simulation may diverge from the experimental defects because of the complex inherent defect characteristics of lattice structures. Predicting the mechanical properties of AM metal lattice structures is therefore challenging. Many studies have demonstrated the feasibility of ML methods for the aforementioned metal AM process optimisation challenges and have introduced ML methods to solve process optimisation problems in metal additive manufacturing. However, most existing work concerns ML-based fatigue life prediction of metal lattice structures; prediction studies on the maximum stress remain scarce.

Zhang et al. [21] proposed a fatigue process assessment method based on an adaptive network fuzzy inference system and successfully predicted high-cycle fatigue life using fatigue data from 139 SS316L components produced on the same SLM machine. However, when they evaluated the model on data from the published literature, the results were less than ideal; the composition of the training data is therefore important for the generalisation ability of the model. Chen and Liu [22] proposed a probabilistic physics-guided neural network to study the effects of different parameters on probabilistic fatigue life; however, the selection of the initial model parameters was not addressed.

Taking the inherent defects of AM-fabricated metal BCC lattice structures as the research object, this study proposes a data-driven XGBoost-BGP maximum stress prediction model. The model takes four parameters (number of layers, thick-dominated struts, thin-dominated struts, and bend-dominated struts) as input and one structural property (maximum stress) as output. A Bayesian hyperparameter optimisation method was used to optimise the model's hyperparameters and further improve its prediction and generalisation abilities. The experimental results show that the proposed model accurately predicts the maximum stress of structural samples containing defects, and the analysis of the relationship between the four input parameters and the maximum stress shows that the maximum stress of the lattice specimens is most affected by the thick-dominated struts.

Methodology

The maximum stress prediction model proposed in this paper is mainly divided into two parts: the construction of the dataset and the training of the XGBoost-BGP model, as shown in Fig. 1.

Fig. 1 Frame diagram of the XGBoost-BGP maximum stress prediction model

In the first part, defects were counted and the dataset was constructed, as illustrated in Fig. 2 and Table 1. Creating the dataset involves four steps: scanning the samples, segmenting the scanned images using an edge detection method, detecting defects, and compiling defect statistics. The details are presented in “Dataset”.

Fig. 2 Dataset preparation flowchart

The second part is the training of the XGBoost-BGP maximum stress model, which is divided into three steps.

Table 1 Metrics chosen as features
Table 2 Data statistics

The first step is to initialise the parameters of the XGBoost model, which is described in detail in “XGBoost model”. The second step is to define the Bayesian optimisation function and determine the hyperparameter search space, as described in “Bayesian hyperparameter optimization method”. The third step is to maximise the acquisition function of the Bayesian optimisation using a Gaussian process, with 5-fold cross-validation in this process. The Gaussian process method is described in “Gaussian process”, the acquisition function in “Acquisition function”, and the definition of the root mean square error (RMSE) in “Evaluation measures”.

Dataset

The data used in this study consist of 115 FE models. The type and number of inherent defects in each model were counted from models created from the CT sections; the statistics comprise the number of layers, thick-dominated struts, thin-dominated struts, and bend-dominated struts. A stepwise flowchart of the automated metrology and analysis is shown in Fig. 2. In step 1, the part's CT scan data are noise-filtered and thresholded to obtain a binary image of the lattice structure. In step 2, strut edges are identified with the Canny edge detection function, and an image segment is extracted for each strut. In step 3, the extracted image segment of each strut is compared with the standard strut image to determine its defect type. In step 4, the number of defects of each type is counted according to the classification of step 3.
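As a rough illustration of steps 1–3, the following Python sketch uses OpenCV. The file handling, thresholds, and the width-based classification heuristic are illustrative assumptions rather than the exact procedure used in this study; in particular, waviness detection is simplified away.

```python
import cv2

def binarise_ct_slice(path, blur_ksize=5, thresh=128):
    """Step 1: denoise a CT slice and threshold it to a binary image."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.medianBlur(img, blur_ksize)            # noise filtering
    _, binary = cv2.threshold(img, thresh, 255, cv2.THRESH_BINARY)
    return binary

def extract_strut_segments(binary):
    """Step 2: find strut edges with Canny and cut out a patch per strut."""
    edges = cv2.Canny(binary, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [binary[y:y + h, x:x + w]
            for x, y, w, h in map(cv2.boundingRect, contours)]

def classify_strut(segment, nominal_width_px, tol=0.1):
    """Step 3 (simplified): compare the measured strut width with the
    nominal strut. Waviness (bend-dominated struts) would additionally
    require centreline analysis, which is omitted here."""
    width = segment.sum(axis=1).mean() / 255.0       # mean width in pixels
    if width > nominal_width_px * (1 + tol):
        return "thick-dominated"                     # oversizing
    if width < nominal_width_px * (1 - tol):
        return "thin-dominated"                      # undersizing
    return "within-tolerance"
```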

FE models were established for the mechanical performance simulation based on the above statistics, and the maximum stresses were taken as the label of the dataset. The metrics chosen as features are listed in Table 1.

Table 2 lists the data statistics. Figure 3 shows a histogram of the frequency distribution and a quantile-quantile (Q-Q) plot of the maximum stress. As the figure shows, the maximum stress can be treated as approximately normally distributed, which is conducive to training machine learning models. Figure 4 shows box-and-whisker diagrams of the features. The dataset contains outliers; however, they were retained during training to improve the robustness of the model. In addition, Fig. 5 shows the feature correlation matrix diagram.
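These checks can be reproduced with a few lines of pandas/SciPy; the CSV file and column name below are hypothetical stand-ins for the actual dataset.

```python
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats

df = pd.read_csv("lattice_defects.csv")   # hypothetical file/column names

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.hist(df["max_stress"], bins=20)       # frequency distribution histogram
ax1.set(xlabel="Maximum stress (MPa)", ylabel="Frequency")
stats.probplot(df["max_stress"], dist="norm", plot=ax2)  # Q-Q plot vs normal

df.plot.box(subplots=True, layout=(1, 5), figsize=(12, 3))  # box-and-whisker
print(df.corr())                          # feature correlation matrix
plt.show()
```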

Fig. 3 The frequency distribution histogram and Q-Q plots of maximum stress

Fig. 4 The box and whisker diagram of features

Fig. 5 The feature correlation matrix diagram

XGBoost model

Chen and Guestrin [33] proposed extreme gradient boosting (XGBoost), an improved machine learning method based on tree boosting with strong learning ability [34,35,36]. XGBoost employs the second derivative (Hessian) to determine the direction and magnitude of steepest descent more accurately than simply following the gradient [37, 38]. XGBoost also supports a range of regularisation techniques that improve model generalisation [39].

The dataset (D) in this paper consists of n samples, \(D = \{(F_i, s_i)\}_{i=1}^{n}\), where \(F_i\) is the defect feature vector of the i-th sample and \(s_i\) is its maximum stress.

In this paper, XGBoost is a model consisting of K regression trees and \(\hat{s_i}\) is the sum of all scores predicted by K trees. The formula is described as follows:

$$\begin{aligned} \hat{s_i} = \phi {(F_i)} = \sum _{k=1}^{K}f_k(F_i),\quad f_k\in H, \end{aligned}$$
(1)

where \(f_k(F_i)\) is the prediction score of the k-th regression tree and H is the hypothesis space of \(f_k\):

$$\begin{aligned} H = \{f(F) = \nu _{I(F)}\}, \end{aligned}$$
(2)

where \(\nu \) is the leaf score, one of the parameters used to measure the complexity of the model, and \(I(F)\) maps sample F to its leaf node. Equation (3) gives the prediction after the t-th iteration.

$$\begin{aligned} \hat{s}_i^t = \hat{s}_i^{t-1} + f_t{(F_i)}. \end{aligned}$$
(3)
Table 3 The range of hyperparameters

In this paper, the objective function of the maximum stress prediction model is defined as follows:

$$\begin{aligned} J(f_t) = \sum _{i=1}^{n}L\left( s_i, \hat{s}_i^{t-1} + f_t(F_i)\right) + \varOmega (f_t), \end{aligned}$$
(4)

where L is the loss function and \(\varOmega (f_t)\) is the model’s complexity.

$$\begin{aligned} \varOmega (f_t) = \gamma T + \frac{1}{2} \lambda \sum _{j=1}^T\nu _j^2, \end{aligned}$$
(5)

where \(\gamma \) and \(\lambda \) are hyperparameters, also known as the coefficients of the penalty term. T represents the total number of leaf nodes.

To make Eq. (4) tractable without specifying the particular form of the loss function, it is simplified using a second-order Taylor expansion:

$$\begin{aligned} J(f_t)= & {} \sum _{i=1}^{n}\left[ L(s_i, \hat{s}_i^{t-1}) + g_if_t(F_i) + \frac{1}{2}h_if_t^2(F_i)\right] + \varOmega (f_t) \end{aligned}$$
(6)
$$\begin{aligned} g_i= & {} \frac{\partial {L}{(s_i,\hat{s_i}^{t-1})}}{\partial {\hat{s_i}^{t-1}}} \end{aligned}$$
(7)
$$\begin{aligned} h_i= & {} \frac{\partial ^2{L}{(s_i,\hat{s}_i^{t-1})}}{\partial {(\hat{s}_i^{t-1})^2}}. \end{aligned}$$
(8)

Dropping the term \(L(s_i, \hat{s}_i^{t-1})\), which is constant with respect to \(f_t\), and substituting Eqs. (2) and (5), the objective function of the prediction model is finally obtained:

$$\begin{aligned} J(f_t) = \sum _{i=1}^{n}\left[ g_i\nu _{I(F_i)} + \frac{1}{2}h_i\nu _{I(F_i)}^2\right] + \gamma T + \frac{1}{2} \lambda \sum _{j=1}^T\nu _j^2. \end{aligned}$$
(9)

The prediction results and generalisation ability of the XGBoost prediction model are affected by the model hyperparameters [33]. Table 3 shows the ranges of the hyperparameters optimised for the XGBoost-BGP prediction model.
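For concreteness, the sketch below shows how such a model can be instantiated with the XGBoost Python API on stand-in data. The parameter values are placeholders, not the tuned values reported later; the comments note how each argument maps onto Eqs. (4)–(5).

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)                        # dummy stand-in data:
F = rng.integers(0, 20, size=(115, 4)).astype(float)  # 115 samples, 4 features
s = rng.normal(200.0, 20.0, size=115)                 # maximum stress labels

model = xgb.XGBRegressor(
    objective="reg:squarederror",  # squared-error loss L in Eq. (4)
    n_estimators=100,              # K, the number of regression trees
    max_depth=6,                   # tree complexity
    learning_rate=0.1,             # shrinkage applied to each f_t
    gamma=0.0,                     # per-leaf penalty, the gamma*T term in Eq. (5)
    reg_lambda=1.0,                # L2 penalty on leaf scores nu_j in Eq. (5)
)
model.fit(F, s)
print(model.predict(F[:3]))        # predicted maximum stresses
```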

Bayesian hyperparameter optimization method

The hyperparameter-tuning problem of the XGBoost prediction model cannot be solved using traditional optimisation methods [40]. However, Bayesian optimisation methods can effectively solve these problems. The Bayesian optimisation process is illustrated in Fig. 6.

Fig. 6 Bayesian optimization process

Equation (10) states the goal of Bayesian optimisation: to find the point in the search space at which the objective function attains its maximum.

$$\begin{aligned} \textbf{z}^* = \arg \max _{\textbf{z}\in M}f(\textbf{z}), \end{aligned}$$
(10)

where M represents the search space of \(\textbf{z}\).

The principle of Bayesian optimisation (BO) is to infer the posterior distribution of the objective function from its prior distribution and the observed sample points; the optimum is then sought using this posterior information together with a selection criterion, known as the acquisition function (AC). Because the hyperparameter search space in this study is continuous, a Gaussian process (GP) is used to obtain the posterior information. A compact end-to-end illustration of this loop is sketched below.
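The loop of Fig. 6 can be reproduced on a toy one-dimensional function with scikit-optimize's gp_minimize. Note that this library choice is ours, purely for illustration; the implementation in this study uses the Bayesian Optimisation package listed in “Training the prediction model”. Since gp_minimize minimises, maximising f in Eq. (10) corresponds to minimising \(-f\).

```python
from skopt import gp_minimize

def objective(z):
    return (z[0] - 2.0) ** 2          # unknown function to be optimised

result = gp_minimize(
    objective,
    dimensions=[(-5.0, 5.0)],         # search space M
    acq_func="EI",                    # acquisition function, Eq. (14)
    n_initial_points=6,               # six initial GP sample points
    n_calls=36,                       # 6 initial points + 30 BO iterations
    random_state=0,
)
print(result.x, result.fun)           # best point found and its value
```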

Gaussian process

The prior distribution of the Bayesian optimisation in this study is GP. The GP function is expressed as follows:

$$\begin{aligned} f(\textbf{z}) \sim \mathcal{G}\mathcal{P}(\mu (\textbf{z}), c(\textbf{z}, \textbf{z}')). \end{aligned}$$
(11)

Here \(\mu (\textbf{z})=0\) and \(c(\textbf{z}, \textbf{z}') = \exp \left( -\frac{1}{2\theta }\Vert \textbf{z} - \textbf{z}'\Vert ^2\right) \), where \(\theta \) is the kernel width parameter and \(\textbf{z}\), \(\textbf{z}'\) are samples.

Computing the posterior distribution of \(f(\textbf{z})\) involves two steps. The first step is to form the training set \(S_{1:i-1} = \{\textbf{z}_j, f(\textbf{z}_j)\}^{i-1}_{j=1}\), which consists of \(i-1\) observations.

The value of f follows a multivariate normal distribution \(f \sim \mathcal {N}(\textbf{0}, \textbf{C})\), where

$$\begin{aligned} C = \begin{bmatrix} c(\mathbf {z_1}, \mathbf {z_1}) &{} c(\mathbf {z_1}, \mathbf {z_2}) &{} \cdots &{} c(\mathbf {z_1}, \mathbf {z_{i-1}}) \\ c(\mathbf {z_2}, \mathbf {z_1}) &{} c(\mathbf {z_2}, \mathbf {z_2}) &{} \cdots &{} c(\mathbf {z_2}, \mathbf {z_{i-1}}) \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ c(\mathbf {z_{i-1}}, \mathbf {z_1}) &{} c(\mathbf {z_{i-1}}, \mathbf {z_2}) &{} \cdots &{} c(\mathbf {z_{i-1}}, \mathbf {z_{i-1}}) \\ \end{bmatrix} . \end{aligned}$$
(12)

The second step is to find the function value at the new sampling point \(\mathbf {z_i}\) in accordance with f. Under the Gaussian process assumption, \([\textbf{f}_{1:i-1}\ f_{i}]^\textrm{T}\) still follows an i-dimensional normal distribution:

$$\begin{aligned} \begin{bmatrix} \textbf{f}_{1:i-1} \\ f_{i} \\ \end{bmatrix} \sim \mathcal {N} \left( \textbf{0}, \begin{bmatrix} \textbf{C} &{} \textbf{c} \\ \textbf{c}^ \textrm{T} &{} c(\mathbf {z_i}, \mathbf {z_i}) \\ \end{bmatrix}\right) , \end{aligned}$$
(13)

where \(\textbf{f}_{1:i-1} = [f_1, f_2, \cdots , f_{i-1}]^ \textrm{T}\) and \(\textbf{c} = [c(\mathbf {z_i}, \mathbf {z_1}), c(\mathbf {z_i}, \mathbf {z_2}), \cdots , c(\mathbf {z_i}, \mathbf {z_{i-1}})]^ \textrm{T}\). Thus \(f_i\) follows a one-dimensional normal distribution, i.e. \(f_i \sim \mathcal {N}(\mu _{i}, \sigma _i^2)\). From the properties of the joint normal distribution, \(\mu _i(\mathbf {z_i})=\textbf{c}^ \textrm{T}C^{-1}\textbf{f}_{1:i-1}\) and \(\sigma _i^2(\mathbf {z_i})=c(\mathbf {z_i}, \mathbf {z_i})-\textbf{c}^ \textrm{T}C^{-1}\textbf{c}\).
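These posterior formulas translate directly into NumPy. The minimal sketch below computes \(\mu _i\) and \(\sigma _i^2\) for a new point; the small jitter term added to C is a numerical-stability detail, not part of the derivation, and the observation values are illustrative.

```python
import numpy as np

def gp_posterior(Z, f_vals, z_new, theta=1.0):
    """Posterior mean and variance at z_new, following Eqs. (12)-(13)."""
    def c(a, b):                                   # squared-exponential kernel
        return np.exp(-0.5 * np.sum((a - b) ** 2) / theta)

    n = len(Z)
    C = np.array([[c(Z[p], Z[q]) for q in range(n)] for p in range(n)])
    c_vec = np.array([c(z_new, Z[p]) for p in range(n)])
    C_inv = np.linalg.inv(C + 1e-10 * np.eye(n))   # jitter for stability
    mu = c_vec @ C_inv @ f_vals                    # mu_i = c^T C^-1 f_{1:i-1}
    var = c(z_new, z_new) - c_vec @ C_inv @ c_vec  # sigma_i^2
    return mu, var

# e.g. three observations of a 1-D function (illustrative values)
Z = [np.array([0.0]), np.array([1.0]), np.array([2.0])]
print(gp_posterior(Z, np.array([0.0, 1.0, 0.5]), np.array([1.5])))
```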

Acquisition function

As mentioned above, the acquisition function is an epistemic measure computed from the GP posterior that selects the next point at which to evaluate the function [41]. The expected improvement (EI) acquisition function was used in this study because of its utility and simplicity. EI belongs to the family of improvement-based acquisition functions: it estimates how much improvement over the current best value can be expected at a candidate point. The EI formula is as follows:

$$\begin{aligned} EI(\textbf{y}) = (\mu (\textbf{y}) - f(\textbf{y}^+))\varPhi (Q) + \sigma (\textbf{y})\varphi (Q), \end{aligned}$$
(14)

where \(Q = \frac{\mu (\textbf{y}) - f(\textbf{y}^+)}{\sigma (\textbf{y})}\), \(\mu (\textbf{y})\) and \(\sigma ^2(\textbf{y})\) are the mean and variance of the GP posterior, \(f(\textbf{y}^+)\) is the best value observed so far, and \(\varPhi \) and \(\varphi \) are the cumulative distribution and probability density functions of the standard normal distribution, respectively.
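Given the GP posterior of the previous subsection, Eq. (14) is straightforward to implement; a minimal sketch with illustrative inputs:

```python
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """EI at a candidate point, following Eq. (14)."""
    if sigma == 0.0:
        return 0.0                    # no uncertainty, no expected gain
    q = (mu - f_best) / sigma
    return (mu - f_best) * norm.cdf(q) + sigma * norm.pdf(q)

print(expected_improvement(mu=1.2, sigma=0.3, f_best=1.0))
```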

Evaluation measures

In this study, the model assessment indicators were the coefficient of determination (\(R^2\)) and the RMSE. Let \(s_1, s_2, \ldots , s_n\) be the actual values, \(\hat{s}_1, \hat{s}_2, \ldots , \hat{s}_n\) the predicted values, and \(\bar{s}\) the mean of the \(s_i\); these indices are calculated as

$$\begin{aligned} R^2= & {} 1-\frac{\sum _{i=1}^n(\hat{s}_i-s_i)^2}{\sum _{i=1}^n(s_i-\bar{s})^2} \end{aligned}$$
(15)
$$\begin{aligned} \textrm{RMSE}= & {} \sqrt{\frac{1}{n}\sum _{i=1}^n(\hat{s}_i-s_i)^2}. \end{aligned}$$
(16)

Experimental results and analysis

Training the prediction model

The environment configured in this study was Ubuntu 16.04 LTS, Anaconda 3, XGBoost 0.4.1, and Bayesian Optimisation 1.2.0.

The training and test sets were split in a ratio of 7:3. The number of initial GP points was six, and the number of optimisation iterations was 30. The number of training iterations was 100, and 5-fold cross-validation was used to record the optimal results. The optimal results were obtained after multiple training runs, and the corresponding hyperparameters were recorded.
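A condensed sketch of this training setup follows, assuming the Bayesian Optimisation package named above together with xgboost.cv. The data are dummy stand-ins and the search bounds are illustrative; the actual ranges are those of Table 3.

```python
import numpy as np
import xgboost as xgb
from bayes_opt import BayesianOptimization
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)                        # dummy stand-in data
F = rng.integers(0, 20, size=(115, 4)).astype(float)
s = rng.normal(200.0, 20.0, size=115)

F_train, F_test, s_train, s_test = train_test_split(F, s, test_size=0.3,
                                                    random_state=0)  # 7:3
dtrain = xgb.DMatrix(F_train, label=s_train)

def cv_rmse(max_depth, learning_rate, gamma, reg_lambda):
    """5-fold CV RMSE of XGBoost, negated because bayes_opt maximises."""
    params = {"objective": "reg:squarederror", "max_depth": int(max_depth),
              "learning_rate": learning_rate, "gamma": gamma,
              "reg_lambda": reg_lambda}
    cv = xgb.cv(params, dtrain, num_boost_round=100, nfold=5,
                metrics="rmse", seed=0)
    return -cv["test-rmse-mean"].iloc[-1]

optimiser = BayesianOptimization(
    f=cv_rmse,
    pbounds={"max_depth": (3, 10), "learning_rate": (0.01, 0.3),
             "gamma": (0.0, 5.0), "reg_lambda": (0.1, 10.0)},  # illustrative
    random_state=0)
optimiser.maximize(init_points=6, n_iter=30)  # 6 initial GP points, 30 iters
print(optimiser.max)                          # best score and hyperparameters
```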

Fig. 7 The mean RMSE values of the default hyperparameters and the hyperparameters optimized by the three hyperparameter optimization methods on the training and validation sets

Fig. 8 The RMSE differences of the four groups of hyperparameters

Table 4 \(R^2\) of typical methods

Training results and prediction results

The mean RMSE values on the training and test sets for the default hyperparameters and for the grid search (GS), Harris hawks optimisation (HHO), and BO-GP methods are shown in Fig. 7. The smaller and more stable the difference between the two, the more stable the prediction model is assumed to be. The RMSE differences of the three hyperparameter optimisation methods are shown in Fig. 8. The XGBoost-BGP model generalises best: the difference between the training and validation sets is approximately 12.33, and it achieves an \(R^2\) of 0.82 and an RMSE of 17.40. Before tuning, the model trained with XGBoost's default hyperparameters achieves an \(R^2\) of only 0.78 and an RMSE of 19.19.

Table 4 compares the prediction results of other typical methods; the method proposed in this study yields the best results. The same hyperparameter optimisation method was applied to every method in the comparison.

Hyperparameter optimization comparison

To demonstrate that the BO-GP method is better suited to the prediction model proposed in this study, the GS hyperparameter optimisation method and HHO were used for comparison; GS is widely used in hyperparameter optimisation, and HHO is a state-of-the-art (SOTA) baseline. The experiments show that, for the data volume and parameter dimensionality of this study, BO is more effective than GS. The proposed BO-GP method is slightly better than HHO in accuracy but inferior to it in convergence speed. The comparison results are presented in Table 5. The final model was built using the hyperparameters obtained by BO-GP; after numerous optimisation iterations, with the RMSE as the observation basis, the chosen hyperparameters can be regarded as optimal. The final hyperparameters are listed in Table 6.

Table 5 Hyperparameter optimization comparison
Table 6 Hyperparameters
Table 7 The sign test results

The sign test results in Table 7 further illustrate that the BO-GP hyperparameter optimisation algorithm used in this study is well suited to the XGBoost model. The sign test counts the total number of winning cases, and the number of wins follows a binomial distribution. Figure 9 compares the deviations of the three XGBoost-based optimisation methods. According to the test criterion, if the number of wins exceeds \(\frac{n}{2}+\sqrt{n}\), the algorithm is better, with \(p<0.05\) [46]. By this criterion, the hyperparameter optimisation algorithm used in this study is slightly better than the other two methods.
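The decision rule is simple to encode; the counts in the example below are illustrative, not the results of Table 7.

```python
import math

def sign_test_better(wins, n):
    """Decision rule from the text: the algorithm is judged better when
    the number of wins exceeds n/2 + sqrt(n) (approximately p < 0.05) [46]."""
    return wins > n / 2 + math.sqrt(n)

# e.g. 23 wins in 30 paired comparisons (illustrative numbers):
print(sign_test_better(23, 30))   # True, since 23 > 15 + 5.48
```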

Fig. 9 Comparisons of deviation

Analysis of prediction results

To better demonstrate the prediction performance of the XGBoost-BGP model, scatterplots of actual versus predicted values on the test set were generated, as shown in Fig. 10; Fig. 10a–c shows the deviation of the predicted values from the actual values. The overall deviations are acceptable, and the fit of the XGBoost-BGP model is good.

To demonstrate the generality of the model, we also used the data in Ref. [15], from which three sets of data containing only typical defects were equivalently derived. The predicted results are shown in Table 8 and indicate that the XGBoost-BGP model generalises well.

Analysis of factors affecting mechanical properties of structures

XGBoost can reveal the relationship between input variables and the output [47]; this study considered the relationship between the four input variables and the maximum stress. The principle is to count how often each input feature appears across all trees: the more frequently a feature appears, the stronger its effect on the maximum stress. The individual input feature scores are shown in Fig. 11. Oversizing (the thick-dominated struts) has the greatest effect on the maximum stress, followed by undersizing (the thin-dominated struts). These two features influence the maximum stress mainly through the residual stress concentration caused by irregular strut size, which reveals the relationship between strut geometry and maximum stress. Waviness (the bend-dominated struts) is also one of the main factors affecting the maximum stress, but its influence carries some uncertainty because of its complex geometric characteristics. The number of layers has the least effect of the four features, though it cannot be ignored. Moreover, by adapting the input features to different structures, the maximum stress of other structures can still be predicted.
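In the XGBoost API, this frequency count corresponds to the "weight" importance type. A hedged sketch on stand-in data follows; the feature names are ours, chosen to mirror the four features above.

```python
import numpy as np
import xgboost as xgb
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)                        # stand-in data
F = rng.integers(0, 20, size=(115, 4)).astype(float)  # defect counts
s = rng.normal(200.0, 20.0, size=115)                 # maximum stress

model = xgb.XGBRegressor(objective="reg:squarederror", n_estimators=100)
model.fit(F, s)

# "weight" counts how often each feature is used to split across all
# trees, i.e. the frequency-based importance described above.
booster = model.get_booster()
booster.feature_names = ["layers", "oversizing", "undersizing", "waviness"]
print(booster.get_score(importance_type="weight"))
xgb.plot_importance(booster, importance_type="weight")
plt.show()
```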

Fig. 10 Comparison of predicted and actual values

Table 8 Prediction with other data
Fig. 11 The importance of each variable

Discussion

The XGBoost-BGP prediction model proposed in this study can accurately predict the maximum stress of SLM-fabricated BCC structural samples. Because of the complex representation of inherent defects in lattice structures and the limited available data, only the most important features were selected as training features. Given sufficient data, the training features could be extended to include the powder type, powder diameter, powder composition, standard strut radius, angle between struts, and angle between the struts and the ground.

The proposed prediction model has some limitations. The main limiting factor is the set of lattice-structure features: the features of a dataset generally bound the attainable accuracy of a model. This study selected four typical features based on the actual samples. If further features are added, the accuracy of the model can be improved by re-tuning the hyperparameters.

Conclusion and future work

In this study, we proposed an XGBoost model combined with Gaussian-process-based Bayesian hyperparameter optimisation to predict the maximum stress of SLM-fabricated BCC structures. The model takes four parameters (number of layers, thick-dominated struts, thin-dominated struts, and bend-dominated struts) as input and one structural property (maximum stress) as output. The datasets were derived from actual prototypes and simulations. According to the experimental results, the proposed XGBoost-BGP maximum stress prediction model achieves an \(R^2\) of 0.82 and an RMSE of 17.40. In addition, we discussed the relationship between the four input parameters and the maximum stress; the thick-dominated struts had the greatest influence on the maximum stress of the lattice structure samples.

Future work will focus on three areas. The first is to consider samples with more complex structural features to raise the upper limit of the model. The second is to improve XGBoost hyperparameter optimisation using metaheuristic optimisation methods. The third is to analyse other performance parameters to reveal the influence of defects on them.