An integrated approach for the modelling of the laser milling process of silicon carbide components

The paper deals with the characterisation and modelling of the laser milling process on silicon carbide hard ceramic. To this end, a Yb:YAG pulsed fiber laser was adopted to mill silicon carbide bars. Square pockets, 5×5 mm² in plane dimension, were machined at the maximum nominal average power (30 W) under different laser process parameters: pulse frequency, scan speed, hatching distance, repetitions and scanning strategy. After machining, the achieved depth and the roughness parameters were measured by way of digital microscopy and 3D surface profiling, respectively. In addition, the material removal rate was calculated as the ratio between the removed volume and the process time. Analysis of variance (ANOVA) was adopted to assess the effect of the process parameters on the achieved depth, the material removal rate (MRR) and the roughness parameters, while response surface methodology (RSM) and artificial neural networks (ANNs) were adopted to model the process behaviour. Results show that both RSM and ANNs fail in modelling the MRR and the RSm roughness parameter. Thus, an integrated approach was developed to overcome the issue; the approach is based on the use of the RSM model to obtain a hybrid Training dataset for the ANNs. The results show that this approach improves the model accuracy.


Introduction
Silicon carbide (SiC) is a high-performance advanced (or technical) ceramic; it has attracted increasing attention thanks to its superior properties in terms of specific strength, specific modulus, hardness, wear resistance, thermal and chemical resistance and thermal stability. As a consequence, SiC finds application in several industrial fields, such as aerospace, automotive, mechanical, chemical, electronics, defence, textile and biomedical.
Even if advanced ceramic components are produced in near-net shapes by way of powder sintering, further machining is generally required to achieve the required tolerances and finish or to produce complex shapes and micro-sized features. Conventional machining of ceramic materials is associated with a low material removal rate (MRR), high tool wear and high manufacturing costs [1-3]. Thus, it is mainly adopted for the shaping of green bodies (i.e. before sintering). In recent years, different non-conventional techniques have been proposed to machine advanced ceramic components, such as electrical discharge machining (EDM and WEDM), abrasive water jet (AWJ) and laser machining (LM). However, each of these shows some limitations. EDM and WEDM are applicable only to electrically conductive materials or require a complex process to machine insulating ones [1,2,4-6]. In addition, EDM is characterised by low MRR and tool wear, as well as by the presence of a heat-affected zone (HAZ) [1,2]. AWJ does not suffer from problems like low MRR (generally higher than EDM) and tool wear and can machine any material, including insulators [1-3]. However, also in this case, damage (micro-cracks and network cracking) can occur, due to the high compressive load resulting from the impact of the abrasive particles [7]. Furthermore, in order to obtain precise, smooth and small features, complex systems (5-axis machines) and highly focussing AWJ equipment (i.e. a small jet diameter) are required [8,9].
Laser milling (LM) is a competitive process to machine advanced ceramics [2,10-15]. In this technique, the laser beam removes material layer by layer following predetermined patterns. Since a single removed layer has a small depth (1-10 μm), a deep profile can be obtained by repeating the process several times, until the required depth is achieved. During the process, the machined area can be varied to obtain 2½D shapes.
Compared to other processes, LM offers several advantages: no mechanical contact, no tool and thus no tool wear, no need for complex fixing equipment, and easy automation. In addition, since the laser beam is a few microns in diameter, it is possible to obtain complex and fine shapes with good accuracy. It is also environmentally friendly, since it does not require any additional pollutant components (such as the dielectrics used in EDM). Therefore, there is great interest in applying this technology to the machining of micro-sized features on ceramic materials [16-18]. However, laser milling also shows some drawbacks, such as low MRR and the presence of thermal damage (recast layer, microfractures, HAZ), and it requires process parameter optimisation. As a matter of fact, the laser interaction involves several removal mechanisms: melting and vaporisation [19,20], thermal degradation [21-23], thermal stress [24,25] and a mechanical effect (the molten material is pulled out by the vaporisation-induced recoil force [19,26,27]). The contribution of each mechanism depends on the laser characteristics (wavelength, spot diameter, pulse length, average power and pulse frequency), on the investigated material and on the adopted process parameters. Consequently, researchers have spent many efforts to identify the effect of the process parameters on the removal mechanisms, the achieved geometry and the surface quality, as well as to provide models able to forecast the geometry and the surface quality once the process parameters are assigned.
In the literature, several methods have been proposed to model process behaviour, such as finite element analysis (FEM), statistical methodologies (SM) and cognitive systems (CS). However, each method shows advantages and limitations. FEM is a generalised method, but it requires deep knowledge of the interaction mechanisms [28] and long computational times, depending on the complexity of the phenomenon and the degree of resolution.
Statistical models have often been used because they require a limited number of trials for process analysis; they are implemented in commercial software, which simplifies model building and reduces the computational time to a few minutes. In addition, statistical methodologies allow obtaining a robust model that can also be adopted for process optimisation [16,20,29-34]. However, statistical models impose an existing model (i.e. an equation) on the given experimental data; in other words, the relationship between input and output is obtained as the best-fitting solution. This introduces a systematic error, since there is no guarantee that the adopted equation is the best one for the specific problem [35-37]. Conversely, cognitive systems, and in particular artificial neural networks (ANNs), extract the relationship directly from the data as input/output (domain/range) pairs, imposing an a priori architecture (the NN) rather than an existing model. The advantage of this approach is that, under appropriate conditions, ANNs are able to model complicated relationships, adapting to different problems [38,39] and solving engineering issues in industrial applications, such as in-process control [40,41] and optimisation procedures [42-44], as well as non-destructive evaluation [45]. Moreover, ANNs can provide robust models even in the presence of a high noise ratio [38,46,47]. However, ANNs require an adequate number of experimental trials to train the network (i.e. a large experimental dataset), an appropriate selection of the network configuration [38,46,48,49] and a training phase that can be significantly long (several hours).
There are several studies on the laser milling of advanced ceramics [11-14, 16-19, 24, 25, 30]; however, to the authors' knowledge, there is a lack of papers specifically focussed on the laser milling of SiC.
The present paper deals with the laser milling of SiC, with the aim of detecting which process parameters affect the laser beam-material interaction and how, and of explaining the effect of the process parameters on the achieved depth, the MRR and the surface quality. To this end, a 30-W Yb:YAG pulsed fiber laser was adopted to machine silicon carbide bars, changing the following laser process parameters: pulse frequency, scan speed, hatching distance and repetitions. The achieved depth and the roughness parameters were then measured by way of digital microscopy and 3D surface profiling, respectively. The material removal rate was calculated as the ratio between the removed volume and the process time. Analysis of variance was adopted to assess the effect of the process parameters on depth, MRR and roughness parameters. Response surface methodology (RSM) and artificial neural networks were adopted to model the process behaviour. Results show that, for the adopted laser and under the tested conditions, it is possible to machine SiC obtaining a depth of up to 240 μm with a moderate surface roughness (Ra in the range 0.6-3.3 μm).


Materials, equipment and procedures

Materials
The investigated material was a sintered silicon carbide (SiC) in the form of bars, 25×5×5 mm³ in dimension, supplied by TKC-Technische Keramik GmbH. SiC is a technical ceramic characterised by exceptional hardness, high strength, low density (compared to steel), high elastic modulus, high thermal shock resistance, chemical inertness, high thermal conductivity (compared to other ceramic materials) and low thermal expansion. Thanks to these properties, SiC has found extensive application in aerospace and automotive components (brake discs, bearings or as an oil additive), electronics components (for the production of LEDs, diodes and semiconductors), hydraulic valve bodies and seal rings, missile parts, munitions, pistons, watch components, orthopaedic equipment and, of course, as an abrasive (carborundum).
In Table 1, the chemical composition and the main properties of the SiC ceramic are reported as declared by the producer, while in Fig. 1, images of the specimen and its surface are reported.

Figure 2 shows a schematic of the laser milling process (LMP). As aforementioned, in this technique, the laser beam is adopted to remove the material layer by layer. First, the contour of the geometric pattern is defined in a CAD environment; then, the inner area delimited by the contour is machined from side to side by hatching: the laser beam moves line after line until it completely covers the area to be machined. The distance between the lines is called the hatching distance (Hd). Since the machined layer is thin, the process is repeated several times, up to the required depth. The number of times the area is hatched is called repetitions (R).

Laser system
The experimental tests were performed adopting a laser system (LASIT Fly30 fiber) equipped with a Q-switched pulsed Ytterbium fiber laser (IPG, model: YLP-RA30-1-50-20-20) working at the fundamental wavelength λ = 1064 nm. The system is equipped with a galvanometric scanning head, a "flat field lens" with a 160 mm focal length, a suction system and a monitoring camera, and it is controlled via a personal computer. The latter allows the generation of the geometric patterns (contour and hatching distance) and the setting of the process parameters: average power (Pa), scan speed (Ss), pulse frequency (f) and repetitions (R). Since the laser source works at a fixed pulse duration (D = 50 ns), the pulse energy (Ep) and the pulse power (Pp) can be calculated by way of the well-known equations [50,51]:

Ep = Pa / f (1)

Pp = Ep / D (2)

Moreover, knowing the beam footprint, or beam diameter (ds), and the hatching distance (Hd), it is possible to calculate the overlapping factors along the directions parallel (Of%) and orthogonal (Ho%) to the laser beam travel, which represent the superposition between two consecutive pulses (pulse overlapping) and between two consecutive lines (hatching overlap), respectively:

Of% = (1 - Ss / (f × ds)) × 100 (3)

Ho% = (1 - Hd / ds) × 100 (4)

This kind of laser source was chosen for the good absorption of SiC at its wavelength, the possibility to release a high pulse power (up to 20 kW) and the high conversion efficiency (about 25%), as well as for its flexibility, which makes it suitable to perform different operations, like marking, texturing, micromachining and surface treatments, on several materials, such as metals [20,43,52,53], composite materials [29,54] and, of course, ceramics [13,19,55]. In Table 2, the detailed characteristics of the laser system are reported, while in Fig. 3, the laser equipment and an example of a SiC milled specimen are shown.
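The relations of Eqs. (1)-(4) can be sketched as a few lines of code; the numerical values below use the declared 30 W average power and 50 ns pulse duration, while the spot diameter ds = 80 μm is an assumption for illustration, not a value stated in this excerpt.

```python
# Sketch of the pulse-energy and overlap relations (Eqs. (1)-(4)).
# ds = 0.080 mm is an ASSUMED spot diameter, used only for illustration.

def pulse_energy(Pa, f):
    """Ep = Pa / f : average power [W] over pulse frequency [Hz] -> energy [J]."""
    return Pa / f

def pulse_power(Ep, D):
    """Pp = Ep / D : pulse energy [J] over pulse duration [s] -> peak power [W]."""
    return Ep / D

def pulse_overlap(Ss, f, ds):
    """Of% = (1 - Ss/(f*ds)) * 100 : overlap between two consecutive pulses."""
    return (1.0 - Ss / (f * ds)) * 100.0

def hatch_overlap(Hd, ds):
    """Ho% = (1 - Hd/ds) * 100 : overlap between two consecutive scan lines."""
    return (1.0 - Hd / ds) * 100.0

Pa, D, ds = 30.0, 50e-9, 0.080           # W, s, mm (ds assumed)
Ep = pulse_energy(Pa, 30e3)              # 1 mJ at f = 30 kHz
Pp = pulse_power(Ep, D)                  # 20 kW, the declared maximum pulse power
Of = pulse_overlap(500.0, 60e3, ds)      # Ss in mm/s -> about 90 % overlap
Ho = hatch_overlap(0.040, ds)            # Hd = 40 um -> 50 % overlap
```

Note that, at f = 30 kHz, Ep = 1 mJ and Pp = 20 kW, which matches the declared maximum pulse power of the source.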

Experimental procedures
In order to study the process behaviour, a systematic approach to the design of experiments (DOE) was adopted. This approach involves different tasks: problem definition; collection of information about the process by way of bibliographic analysis and relevant background; definition of the control factors (i.e. the process parameters) and their levels; identification of the response variables (measured quantities); experimental planning; test execution; and data analysis [56-58]. As aforementioned, laser machining involves different mechanisms: melting and vaporisation [19,20], thermal degradation [21-23] and a mechanical effect (the molten material is pulled out by the vaporisation-induced recoil force [19,26,27]). The presence of each mechanism depends on the laser characteristics (wavelength, spot diameter, pulse length, average power and pulse frequency), on the investigated material and on the adopted process parameters (scan speed, hatching distance, pulse frequency, repetitions, …).
The selection of the control factors and their values is a critical issue in experiment planning. Based on the relevant bibliography [11,14,15,30,32-34,48], previous studies [13,19,29,59,60] and preliminary tests, a 3³×2² full factorial plan was developed according to the design of experiments (DOE) [56,57,61]. It was chosen to fix the average power at the maximum level (30 W) and to vary the scan speed, the pulse frequency and the number of repetitions over 3 levels. The hatching distance and the scanning strategy (Strategy) were selected on 2 levels (the scanning strategy represents the hatching mode). Table 3 summarises the factors and the levels adopted in the experimentation.
In detail, the average power was fixed at the maximum value to assure a high material removal rate. The scan speed and the pulse frequency were selected based on preliminary tests. During these tests, it was noted that too high a scan speed or pulse frequency (which correspond to a low pulse energy or pulse power according to Eqs. (1)-(2)) does not allow machining, while too low a scan speed (<500 mm/s) allows the formation of a greater amount of molten material; this material solidifies on the machined surface, producing its protrusion from the surface and the presence of micro-fractures, consistent with what is reported in [19]. The hatching distance controls the lateral superposition (hatching overlap) between two parallel scans. Too large a value does not assure a sufficient overlap, especially when small surfaces are machined, while too narrow an Hd may involve local overheating, micro-fracture formation and debris accumulation. Thus, to assure an adequate hatching overlap, Hd was selected as a fraction of the declared beam spot at the focusing point (about ¼ and ½ of the declared beam spot diameter, see also Fig. 2). It is worth noting that, while the pulse overlapping percentage varies in the range 60-90% (Eq. (3), for Ss=1000 mm/s and f=30 kHz and for Ss=500 mm/s and f=60 kHz, respectively), the hatching overlap varies between 25 and 50% (Eq. (4), for Hd=60 μm and Hd=40 μm, respectively). The difference in overlapping may result in texture formation on the surface (due to the formation of burr rows at the groove sides), with a consequent increase in roughness. Therefore, in order to reduce this phenomenon, it was decided to adopt a pattern made of lines placed along four different directions, 0°, 90° and ±45° (CROSS strategy), or along two directions, 0° and 90° (NET strategy), as reported in Fig. 4, where each angle is carried out during a single repetition.
The number of repetitions was selected considering the need to use a multiple of four (since the CROSS strategy requires a minimum of four repetitions) and to obtain a depth that could be measured with a low error and that, in any case, remained less than the depth of field of the laser system (about ±0.5 mm with respect to the focusing point), to avoid excessive defocusing. To reduce the effect of any noise factor, the order of the trials was fully randomised. No test replication was performed; in other words, one test was performed for each control factor combination (treatment). This choice may introduce a limitation in the determination of higher-order interactions (as a matter of fact, no high-order interaction can be checked), as well as in terms of model robustness. In addition, the adoption of replications is not useful for ANN modelling, since it introduces errors during the training phase and prevents the fitting [46,62]. However, also considering the number of tests carried out (108 tests or treatments), this choice is an acceptable compromise between the resolution of the statistical model and the number of tests to perform.

To quantify the process behaviour, the following response variables were selected: machined depth, material removal rate (MRR), calculated as the ratio between the machined volume and the process time, and the surface roughness parameters. The depth was measured by way of digital microscopy (Hirox, mod. KH-8700) as the average value of three profiles, 10 mm in length, acquired over the whole cavity. The surface roughness was measured by way of a 3D surface profiling system (Taylor Hobson, mod. Talysurf CLI 2000) equipped with an inductive gauge with a 2 μm radius diamond stylus. For each specimen, 4 profiles, 4 mm in length, were acquired directly in the cavity, adopting an axis resolution of 0.5 μm, a lateral resolution of 0.5 μm and a vertical resolution of 40 nm. Before measuring, in order to avoid debris retention, each specimen was cleaned with acetone in an ultrasonic bath for 5 min.
Surface analysis software (TalyMap Universal, release 3.1) was adopted to measure the roughness parameters Ra (arithmetic mean deviation of the assessed profile), Rz (maximum height of the profile), Rt (total height of the profile) and RSm (mean spacing of profile elements), according to the UNI EN ISO 4287:2009 standard [63]. For each treatment, the average value of the four profiles was adopted as the roughness parameter. Once the depth was measured by digital microscopy, the material removal rate (MRR) was calculated as the ratio between the machined volume and the treatment time:

MRR = V / t_i = (Depth × L²) / t_i

where t_i is the machining time and L is the size of the square pocket side (5 mm).
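A minimal numerical sketch of the MRR calculation follows; the machining-time expression is an assumption (a single-direction raster of R repetitions of L/Hd lines, each of length L travelled at speed Ss, neglecting jump and turnaround delays), since the paper does not detail the time model.

```python
# Illustrative MRR calculation. The process-time model below is an ASSUMPTION
# (ideal raster scan, no jump delays), not the paper's exact timing.

def process_time(L, Hd, Ss, R):
    """Approximate machining time [s]: R repetitions of (L/Hd) lines of length L."""
    return R * (L / Hd) * (L / Ss)

def mrr(depth, L, t):
    """MRR = removed volume / time = (depth * L^2) / t  [mm^3/s]."""
    return depth * L**2 / t

t = process_time(L=5.0, Hd=0.04, Ss=500.0, R=20)  # 20 * 125 lines * 0.01 s = 25 s
m = mrr(depth=0.1, L=5.0, t=t)                    # 2.5 mm^3 over 25 s = 0.1 mm^3/s
```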
In addition, electronic microscopy (ZEISS mod. LeoSUPRA 35) was adopted to study the surface morphology.

Statistical analysis and model development
In order to detect the presence of relationships between the control factors and the response variables, analysis of variance (ANOVA) was adopted. ANOVA tests for significant differences between the means of the response variables by partitioning the total variation into different sources (error, experimental group membership, …) and comparing the variance due to the between-group (or treatment) variability with that due to the within-group (i.e. same treatment) variability. The analysis was carried out by way of the software Minitab® 18. A confidence level of 95% (α = 0.05) was adopted during the analysis. Before the analysis, the ANOVA assumptions were successfully checked by way of the analysis of residuals, according to what is reported in [56]. Once the significant parameters were determined, it was possible to develop the statistical model by way of the response surface methodology (RSM). For each response variable and, since the Strategy is a categorical factor, for each Strategy level, RSM provides a regression equation (a second-degree polynomial) to model the process behaviour. Each equation relates the control factors (Ss, f, R, Hd) and their products (Ss², f², R², Ss×f, Ss×R, Ss×Hd, f×R, f×Hd, R×Hd) to a response variable (Depth, MRR and the roughness parameters Ra, Rz, Rt, RSm) by way of 14 constants (K1, K2, …, K14). The basic equation is the following:

Source = K1 + K2×Ss + K3×f + K4×R + K5×Hd + K6×Ss² + K7×f² + K8×R² + K9×Ss×f + K10×Ss×R + K11×Ss×Hd + K12×f×R + K13×f×Hd + K14×R×Hd

where the term "Source" indicates the response variable. This methodology is often adopted for multiobjective optimisation [15,20,29,59,60,64]. In addition, like ANOVA, RSM provides information about the statistical significance of the control factors and their combinations inside the model, as well as about the errors performed in the estimation.
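The structure of the second-degree RSM regression can be sketched as a least-squares fit over the 14 basis terms (intercept, 4 linear terms, Ss², f², R², and the 6 two-factor products); the data below are synthetic and purely illustrative, not the experimental dataset.

```python
# Minimal sketch of the 14-term response-surface regression (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
n = 54                                    # one Strategy level of the 3^3 x 2^2 plan
Ss = rng.choice([500., 750., 1000.], n)   # factor levels are illustrative
f  = rng.choice([30., 45., 60.], n)
R  = rng.choice([20., 40., 60.], n)
Hd = rng.choice([0.04, 0.06], n)

def basis(Ss, f, R, Hd):
    """Columns multiplying K1..K14 in the response-surface equation."""
    return np.column_stack([np.ones_like(Ss), Ss, f, R, Hd,
                            Ss**2, f**2, R**2,
                            Ss*f, Ss*R, Ss*Hd, f*R, f*Hd, R*Hd])

X = basis(Ss, f, R, Hd)
y = 100 + 0.1 * R - 0.05 * Ss + rng.normal(0, 1, n)  # synthetic "Depth" response
K, *_ = np.linalg.lstsq(X, y, rcond=None)            # K1..K14 by least squares
pred = X @ K
```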

Artificial neural network configuration set-up
One of the issues to take into account in the development of neural networks is the choice and implementation of the network architecture. More specifically, the type of network (back propagation, …) and the number of nodes in the input, hidden and output layers strongly influence the predictive response. Based on bibliographic analysis [42,48,49,65,66] and relevant background [46,62,67], it was chosen to adopt feed-forward back-propagation neural networks (FFBPNN), since the latter are particularly suitable to capture functional relationships between given inputs and outputs. In an FFBPNN, a cascade network with two or more layers can learn by example any finite input-output relationship arbitrarily well, given enough hidden neurons.
Since, as can be inferred from Section 3.1, all the control factors are involved in the definition of all the response variables (directly or by way of an interaction), according to [46,48,62], the number of input nodes was selected equal to the number of control factors, 5 (i.e. each input node represents a control factor). Conversely, according to previous studies [46,62], the size of the output layer was fixed at 1. Thus, each response variable was elaborated separately. It is worth noting that a restricted number of tests with a multi-output configuration (i.e. adopting many response variables at the same time) was also carried out during the pre-test phase. However, these tests confirmed the results of the previous studies [46,62], i.e. a lack of ability in the depth prediction. Therefore, these results are not reported here for the sake of brevity.
For the determination of the nodes in the hidden layer, the "Intelligent Problem Solver" (IPS) function of the software Statistica® R7 was adopted. This function automatically develops and trains several network architectures and computes predictions from each, adopting advanced algorithms to determine the selection of the inputs (i.e. the control factors to be adopted in the input layer), the number of hidden units and other key factors involved in the network design. The algorithms can search indefinitely, although, after some a priori unknown period, it is unlikely that further progress will be made. During this phase, the designer must guard against over-learning, using techniques such as early stopping. In addition, several techniques, such as regularisation and sensitivity analysis, are adopted to help in the design process. The algorithms can be stopped either when a certain number of networks has been analysed or after a fixed time (here, a searching time of 2 h was selected, corresponding to no fewer than 1400 analysed networks for each response variable). Then, the best ANN is selected adopting the "Balance Error Against Diversity" function. Although the IPS may require significant computing times (several hours), it can test a large number of different types of networks and then propose the best solution/solutions or an ensemble of solutions in the analysed domain. Clearly, some of these networks are inadequate to model the process, since they use too few input variables. The main advantage of this technique is that it is fast: doing the same procedure manually, the computational times would be tremendously greater.
The software allows the setting of the number of nodes in the hidden layers (min and max) and the number of hidden layers (1 or 2). Here, it was chosen to vary the number of nodes in the range 5-50 and to adopt both the single- and double-layer architectures. However, all the networks with two layers, for the same processing time, produced higher errors. Therefore, for the sake of brevity, only the results obtained with single-layer networks are discussed below. In Fig. 5, the adopted network architecture is reported.

In order to build the FFBPNN, the Training-Validation-Testing (T-V-T) method was adopted. By this method, the experimental dataset is divided into three separate subsets: Training, Validation and Testing. The ANN model is initially fit on a set of examples (the Training dataset), used to adjust the weights of the connections between the neurons in the network. Successively, the model is used to predict the responses for the observations, comparing the model data to the experimental data contained in the Validation dataset. This subset is not used to train the network but acts as an independent check during the training phase. Moreover, it is adopted to stop the training process when the error on the Validation dataset increases because of over-fitting problems (i.e. the network tends to model the noise present in the Training dataset rather than the overall phenomenon; thus, the Training error is reduced, but the Validation and Testing errors increase). For this purpose, the sum-squared function (the sum of the squared differences between the target and actual output values on each output unit) is adopted. Finally, the Testing dataset is used to provide a further unbiased evaluation of the adequacy of the final model. This aspect is often neglected, since the Validation dataset is, generally, also adopted for the error measurement, as this avoids subtracting useful data from the network training. However, the adoption of the Testing dataset allows a more impartial evaluation.
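The single-hidden-layer architecture with validation-based early stopping can be illustrated as follows; the proprietary Statistica IPS tool is not reproduced here, so scikit-learn's MLPRegressor is used purely as a stand-in, on synthetic data, under the same 5-input/1-output configuration described above.

```python
# Hedged sketch of a single-hidden-layer feed-forward network with early
# stopping. MLPRegressor is a STAND-IN for the Statistica networks; the data
# are synthetic, for illustration only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (108, 5))     # 5 inputs: Ss, f, R, Hd, Strategy (scaled)
y = X @ np.array([1.0, -0.5, 2.0, -1.0, 0.5]) + 0.01 * rng.normal(size=108)

net = MLPRegressor(hidden_layer_sizes=(12,),  # one hidden layer in the 5-50 range
                   early_stopping=True,       # stop when validation error rises
                   validation_fraction=0.23,  # ~23 % held out, as in the paper
                   max_iter=5000, random_state=0)
net.fit(X, y)
pred = net.predict(X[:10])
```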
The selection of the dimensions of the three subsets (T-V-T) is a critical issue in ANN development, since it affects the network response. A small number of points in the Training dataset can affect the convergence of the model. At the same time, a reduced size of the Validation or Testing dataset can prevent a correct stop of the learning phase or invalidate the network validation.
Here, it was chosen to adopt 20 treatments for the Testing dataset and to divide the remaining values between the Training dataset (68 treatments, corresponding to about 77% of the remaining data) and the Validation dataset (20 treatments, corresponding to about 23% of the remaining data). The selection of the three datasets was fully randomised, assuring that an equal amount of data for the two strategies was adopted in each dataset. In addition, a second group of ANNs was developed adopting, for the Training dataset, both the experimental data and the data obtained from the RSM model. Conversely, for the Validation and Testing datasets, only experimental data were adopted. These ANNs are described in detail in Section 4.
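The 68/20/20 randomised split, balanced over the two strategies, and the hybrid Training set built from experimental plus RSM-predicted data can be sketched as follows; all names (and the `rsm_predict` stand-in) are illustrative, not the paper's actual implementation.

```python
# Sketch of the randomised T-V-T split (68/20/20) with equal amounts of the two
# strategies per subset. rsm_predict is a STAND-IN for the fitted RSM equation.
import random

random.seed(42)
trials = [{"id": i, "strategy": s} for s in ("NET", "CROSS") for i in range(54)]

def split_by_strategy(trials, n_test=10, n_val=10):
    """Per strategy: n_test to Testing, n_val to Validation, the rest to Training."""
    train, val, test = [], [], []
    for s in ("NET", "CROSS"):
        group = [t for t in trials if t["strategy"] == s]
        random.shuffle(group)
        test += group[:n_test]
        val += group[n_test:n_test + n_val]
        train += group[n_test + n_val:]
    return train, val, test

train, val, test = split_by_strategy(trials)     # 68 / 20 / 20 treatments overall

# Hybrid Training set: experimental points plus synthetic points generated by
# the RSM model (here a dummy function returning 0.0).
rsm_predict = lambda t: 0.0
hybrid_train = train + [{"id": f"rsm_{i}", "strategy": t["strategy"],
                         "y": rsm_predict(t)} for i, t in enumerate(train)]
```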

ANOVA results and effect of process parameters
ANOVA assumes that the observations are normally and independently distributed with the same variance for each treatment or factor level. Before the analysis, the ANOVA assumptions were successfully checked by way of the analysis of residuals [56]. However, for the sake of brevity, this analysis is not reported here. In Table 4, the results of the ANOVA are summarised for all the response variables in terms of p-value and F-value. The analysis was carried out adopting a 95% confidence level (α = 0.05). Thus, a control factor is statistically significant (i.e. its variation affects the response variable) if the p-value is less than 0.05, while the presence of a significant interaction (p-value < 0.05) indicates that the effect of a control factor on the response variable depends on the level (value) of the other control factor. The F-value indicates the weight of the effect: the greater the F-value, the greater the variation of the response variable as the control factors change. It is worth noting that, in all the cases where one or more control factors are not significant, there is at least one strong interaction (with a high F-value). As shown below, this indicates the presence of anti-synergic interactions that can mask the presence of a main effect.
Once the significance of the control factors and their interactions was verified by the ANOVA table, the main effect plots and the interaction plots were produced to study their effect on the response variables. However, for the sake of brevity, only the factors and interactions with greater weight (high F-value) are discussed in the following section.

Interaction mechanisms and effect of process parameters
The possibility to effectively machine very brittle materials, such as SiC, without degradation or fracture production depends on the interaction mechanisms (cold ablation, vaporisation, melting, mechanical effect, cracking, …) that determine the quantity of molten material produced and how heat is accumulated in the machining area during the process itself.
For the adopted source (see Table 2), the power density (Pd > 1.99×10¹² W/m²) is high enough to produce, during the laser pulse action, material melting and vaporisation, as reported in [10,19]. The vaporised material forms a hot plasma from the leading edge of the pulse, and the plasma is sustained during the rest of the pulse. Therefore, a significant part of the removal mechanism is due to the vaporisation-induced recoil force (the so-called recoil pressure [26]). According to the described mechanisms, the molten material is partially ejected from the surface by the gas vapour and plasma pressure, and only a limited part of the material is deposited on the surface or in its neighbourhood. The latter quickly solidifies on the surface of the material, leading to the formation of a recast zone (recast layer). Moreover, although SiC melts at about 1900 °C, in the presence of oxygen above 1100 °C, SiC decomposition occurs according to the following reaction [68]:

SiC + 2O2 → SiO2 + CO2

The SiC decomposition contributes to the gas formation and molten material expulsion, as confirmed by the presence of a white porous deposit (formed by incoherent SiO2) on the pocket edges (see Fig. 6), small filaments on the surfaces (Fig. 7) and light deposits. However, the contribution of each removal mechanism (melting, vaporisation, degradation, recoil pressure), and then the overall mechanism, depends on the process parameters. In detail, a low pulse frequency corresponds to a high pulse energy and pulse power (see Table 3); thus, the plasma formation and the mechanical effect are favoured: this allows obtaining a smooth and regular surface characterised by a texture formed during the last scan, a very limited number of droplets and some voids due to droplet or SiC particle detachment, as visible in Fig. 8, where the surfaces obtained at Ss=500 mm/s, f=30 kHz and R=60 are reported.
Conversely, at high frequency, the reduction of the pulse energy and the increase of the pulse overlapping result in a reduction of the mechanical effect and, at the same time, an increase in local overheating. Consequently, the molten material increases, and the surface is characterised by a large number of droplets, as can be inferred from the analysis of Fig. 9, where images of the surface obtained at 1000 mm/s, 60 kHz and 20 repetitions are reported. From the analysis of the various surfaces, it was possible to note that the increase in the number of droplets seems to be linked neither to the scan speed nor to the number of repetitions. Conversely, the adoption of a larger hatching distance results in the formation of droplets that are larger in size, as visible by comparing the images on the right and left sides of Fig. 9. However, excluding the presence of pores, no evident damage (fractures) is visible on the surface, even when a magnification of 2000× is adopted (for instance, in Fig. 7). This is due in part to the selection, during the pre-tests, of the values to be adopted in the experimentation and in part to the fact that, even in the worst conditions, the molten material forms only a thin recast layer or droplets; consequently, the contraction due to the phase transition is not such as to produce fractures.
In Fig. 10, the main effect plots for the different response variables are reported. In the figure, the continuous lines indicate the significant factors, while the dashed ones indicate the non-significant factors.
From Fig. 10a, the depth decreases as the scan speed, hatching distance and pulse frequency increase, while it increases with the number of repetitions and when passing from the CROSS to the NET strategy. In addition, the interaction Hd×Strategy shows an F-value higher than that of the frequency f.
The effects of Ss, R and Hd are expected, since these determine the total energy released per unit area (Es). This can be calculated as:

Es = P · t_i / S (9)

where t_i is the machining time and S is the machined area (L²). Obviously, the higher the Es (i.e. the lower Hd and Ss), the higher the depth, as also confirmed by Fig. 11, where the depth is reported against Es as a function of the strategy. This is consistent with previous studies performed on different materials [19,59,60].
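The dependence of Es on the scanning parameters can be sketched numerically. The helper below estimates the machining time of a square pocket from a simple raster-scan geometry; the function name and the neglect of beam jump/turnaround times are illustrative assumptions, not the authors' implementation.

```python
def specific_energy(P, L, Ss, Hd, R):
    """Energy released per unit area, Es = P * t_i / S.

    Assumes a plain raster scan: each repetition traces (L / Hd)
    lines of length L at speed Ss; jump times are neglected.
    P in W, L in mm, Ss in mm/s, Hd in mm -> Es in J/mm^2.
    """
    t_i = R * (L / Hd) * L / Ss   # total machining time [s]
    S = L * L                     # machined area [mm^2]
    return P * t_i / S            # simplifies to P * R / (Ss * Hd)

# Example: 30 W, 5x5 mm pocket, Ss = 500 mm/s, Hd = 40 um, R = 60
es = specific_energy(30.0, 5.0, 500.0, 0.04, 60)   # -> 90 J/mm^2
```

Under this geometry Es reduces to P·R/(Ss·Hd), which makes explicit why the depth grows with R and shrinks with Ss and Hd, while f and the strategy do not enter the expression.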
It is worth noting that the effects of f and the Strategy are not justified by the Es dependence (Eq. (9)). However, they can be related to the pulse power and to the temperature reached by the material during the machining, respectively. More in detail, the lower the frequency, the higher the pulse energy and the pulse power (Eqs. (1) and (2)). Consequently, at low frequencies, the plasma developed during the ablation is also more intense; this results in a greater contribution of the mechanical effect. However, compared to the other control factors, the effect of frequency is quite limited, as confirmed by the low F-value. On the other hand, for a small machined area, the Strategy influences the local temperature, since the time required to reposition the laser on the same point changes with the Strategy (the time doubles passing from NET to CROSS). Clearly, a higher substrate temperature allows a higher material removal, since the energy required for material machining is reduced. This effect is confirmed by the Hd×Strategy interaction (Fig. 12a): when the narrow step (which allows the local heating of the material) and the NET strategy are adopted, the reached depth increases. The previous behaviours are confirmed by the MRR diagrams: as a matter of fact, the MRR increases passing from the CROSS to the NET strategy (Fig. 10b). The effect of Hd on MRR is explained considering that, although a low Hd promotes local heating, it results in a longer process time. Since the MRR is obtained as the ratio between the removed volume and the time, the effect of the latter prevails over the heating effect.
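The inverse dependence of pulse energy and pulse power on frequency, referred to above as Eqs. (1) and (2), is usually written Ep = P/f and Pp = Ep/τ (with τ the pulse duration). The sketch below assumes these standard forms, since the equations themselves are not reproduced in this excerpt.

```python
def pulse_energy(P_avg, f):
    """Ep = P_avg / f: energy carried by a single pulse [J]."""
    return P_avg / f

def pulse_peak_power(P_avg, f, tau):
    """Pp = Ep / tau: peak power for a pulse of duration tau [W]."""
    return pulse_energy(P_avg, f) / tau

# At the maximum average power (30 W): halving f doubles Ep
Ep_low  = pulse_energy(30.0, 30e3)   # 1.0 mJ at 30 kHz
Ep_high = pulse_energy(30.0, 60e3)   # 0.5 mJ at 60 kHz
```

This is the quantitative reason why low frequencies favour plasma formation and the mechanical (recoil) effect: the same average power is concentrated into fewer, more energetic pulses.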
The roughness parameter Ra varies in the range 0.6–3.3 μm which, considering the adopted laser source (nanosecond pulse duration), is a good result. Ra decreases as Ss increases and increases with f and R (Fig. 10c), while it seems insensitive to both Hd and Strategy. The increase of Ss has two effects: a decrease of both the pulse overlap and the local temperature. Therefore, the single laser pulse produces a shallower crater at a wider distance from the previous and the following craters, resulting in a smoother surface. Conversely, as aforementioned, the increase of the frequency results in overheating phenomena: the mechanical effect is reduced and droplets are produced; therefore, the roughness increases. The roughness increase due to the increase in repetitions is a consequence of the surface reworking; each repetition reworks the surface, so the surface roughness can only worsen, and this is consistent with the literature [13,19,60].
As for the effects of Hd and Strategy, it is worth noting that they are hidden by the presence of an anti-synergic interaction, as visible in the interaction plots (Fig. 12b). As expected, the roughness parameters Rz and Rt follow the Ra behaviour, with the difference that the first is weakly influenced by the Strategy while the second is weakly influenced by both Hd and Strategy (Fig. 10d and Fig. 10e). It is reasonable to assume that this is a consequence of the droplet size increase occurring under these conditions. However, the F-values of Hd and Strategy, when compared with those of Ss, f and R, are one or more orders of magnitude lower (Table 4).
The RSm parameter varies in the range 0.03–0.13 mm. These values suggest that the RSm may depend on the presence of burrs and drops of recast material produced at the edges of the laser path, on the material structure (since the latter is obtained by powder sintering), as well as on the direction of the laser beam during the last surface scan, as visible in Fig. 8 and Fig. 9. From Fig. 10f, the effects of Ss and R are similar to those described for the other roughness parameters.
The effect of Hd and Strategy is more complicated to describe, since their interaction is significant (from Table 4, p-value=0.000) and the F-value of the interaction is of the same order as those of the single parameters (Table 4). Generally speaking, an increase in Hd involves an increase in the distance between the channels produced by the laser during a single pass (see Fig. 2). This holds for the CROSS strategy, as inferred from Fig. 12c, where the interaction plot Hd×Strategy is reported for the RSm. Conversely, for the NET strategy, a change in the Hd does not affect the RSm (as confirmed by the ANOVA test performed on the NET data only).
About the Strategy, it is worth noting that for the CROSS strategy the last beam travel is placed along the 45° direction while the roughness was acquired along the 0° direction; thus, the distance between two parallel lines is higher than the hatching distance (0.057 mm and 0.085 mm for Hd=40 μm and Hd=60 μm, respectively). For the NET strategy, the roughness was acquired along one of the scanning directions. In both cases (CROSS and NET), the RSm average values are higher than the Hd. Consequently, the RSm is not only a function of the distance between the grooves formed during the laser machining (i.e. Hd), but it also depends on the way in which the droplets and the burrs are produced on the surface. Adopting the CROSS strategy, the distance between the groove edges is larger, and each surface scan partially removes the topography produced in the previous one; consequently, the surface is smoother and more regular, the droplets have no preferential organisation, and the measurement is more sensitive to the actual Hd. Conversely, the surface produced adopting the NET strategy is the superposition of several passes in the same direction, resulting in a different organisation of droplets and ribs, as visible by comparing the surfaces reported in Fig. 9. Consequently, for the NET strategy, the effect of the change in Hd is partially hidden by the presence of the debris.

Response surface model
Due to the complexity of the phenomena occurring during the laser milling operation, the response surface methodology (RSM) was adopted to model the process. As aforementioned, RSM provides a regression model (a second-degree polynomial) able to describe the relationship between the control factors and the response variables. Generally speaking, the model does not require a large number of points, so it can be built from a limited number of tests, provided the experimental plan is properly designed for the analysis (for instance, the quadratic terms, necessary to model the presence of curvature, are available only for control factors with at least three levels). On the other hand, since the commercial software uses predefined laws (here a second-degree polynomial), it is not certain that these are the most appropriate equations to model the physical phenomenon.

Fig. 9 Images of surfaces obtained at Ss=1000 mm/s, f=30 kHz, R=60: a Hd=40 μm, strategy=CROSS; b Hd=60 μm, strategy=CROSS; c Hd=40 μm, strategy=NET; d Hd=60 μm, strategy=NET
Before the model construction, the analysis of the statistical significance of the model terms was carried out for each response variable. Table 5 summarises the results together with the errors obtained in the estimation. As for the ANOVA table, this test assesses the significance of the control factors in the model and provides an estimation of their weights (the F-value). From Table 5, the significant terms are almost the same as those of the ANOVA (Table 4); thus, the model is sufficiently sensitive in identifying the significant factors and interactions. The table also provides the error estimators: Error [%], R-sq [%], R-sq (adj) [%] and R-sq (model) [%]. The first three have the same meaning described for the ANOVA table; the last one is a measure of how well the model predicts the response for new observations. The analysis of the errors shows a good correlation for the Depth, Ra and Rz (high R-sq values), a barely sufficient correlation for the Rt parameter and a low correlation for the MRR and RSm parameters (low R-sq values). In Table 6, the regression equations in uncoded units for the different response variables, separated for the two strategies, are reported. In Fig. 13, the model data, calculated by means of the equations of Table 6, are plotted against the experimental ones. In the figure, the dotted line at 45° represents the reference line (model response = experimental data). Figure 13 clearly shows that the RSM model is able to follow the general trend of all the control factors, since in any case the points align along the reference line. Depth, Rz and Rt show a better fitting, since their points are closer to the reference line. The Ra parameter has a singular behaviour: the trend follows the reference line and the points are close enough; however, the model overestimates the experimental values (there is an upward shift). Conversely, both MRR and RSm present some points far from the reference line (i.e. they show a certain data dispersion).
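As an illustration of what the RSM regression does, the following pure-Python sketch fits a full second-degree polynomial (linear, quadratic and interaction terms) in two factors by ordinary least squares through the normal equations. The two-factor restriction and the function names are choices made here for brevity; the paper's models involve five control factors and were built with commercial software.

```python
def _solve(M, v):
    """Solve M x = v by Gaussian elimination with partial pivoting."""
    n = len(v)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            k = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= k * A[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

def fit_rsm(points, y):
    """Least-squares coefficients [b0, b1, b2, b11, b22, b12] of
    y ~ b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    rows = [[1.0, x1, x2, x1 * x1, x2 * x2, x1 * x2] for x1, x2 in points]
    n = len(rows[0])
    # Normal equations: (A^T A) b = A^T y
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    Aty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(n)]
    return _solve(AtA, Aty)
```

A 3×3 grid of factor levels is the smallest design that identifies all six coefficients, which echoes the remark above that the quadratic terms require control factors with at least three levels.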

ANN
In Table 7, the architecture and the Error [%] obtained on the Training, Validation and Testing datasets of the best fitting ANNs are reported. From the table, the number of nodes in the hidden layer varies within the selected range (minimum 8, maximum 32 nodes, compared to the 5–50 nodes adopted in the network development), confirming the validity of the adopted choices. On the other hand, the Error [%] values obtained on the Training, Validation and Testing datasets are different, as clearly visible in Fig. 14, where the three errors are reported for the different response variables. The latter indicates that the network is not reliable enough to guarantee a limited error on the entire dataset domain (or, similarly, over a wide range of process conditions). However, even with this limitation, neural networks offer a forecasting capability comparable to that obtained with the RSM model. This is easily noticed by comparing Fig. 13 to Fig. 15, where, in the latter, the data obtained by the ANNs, calculated for the Validation and Testing datasets, are plotted against the experimental ones. More in detail, a good agreement between the model and the experimental data is observed for the Depth, Rz and Rt, since the data fall straddling and close to the 45° line (model response = experimental data), while some scattered points are still visible for MRR, RSm and, to a lesser extent, for the Ra parameter.
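The dataset handling described above can be sketched as follows. The Error [%] metric is read here as a mean absolute percentage error, which is an assumption, since the paper does not restate its definition in this excerpt; the split helper is likewise illustrative rather than the authors' code.

```python
import random

def split_dataset(data, n_train, n_val, seed=0):
    """Randomly partition the experiments into Training, Validation
    and Testing subsets (Testing takes whatever remains)."""
    rng = random.Random(seed)
    idx = list(range(len(data)))
    rng.shuffle(idx)
    train = [data[i] for i in idx[:n_train]]
    val = [data[i] for i in idx[n_train:n_train + n_val]]
    test = [data[i] for i in idx[n_train + n_val:]]
    return train, val, test

def error_pct(y_true, y_pred):
    """Mean absolute percentage error over a dataset (assumed Error [%])."""
    return 100.0 * sum(abs(t - p) / abs(t)
                       for t, p in zip(y_true, y_pred)) / len(y_true)
```

Comparing error_pct across the three subsets, as in Fig. 14, is what reveals whether the network generalises beyond its Training conditions or has converged only on a sub-domain of the hypervolume.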

Development of a hybrid artificial neural network (H.ANN)
As aforementioned, both RSM and ANN show a good ability in modelling some parameters with low error (Depth, Ra, Rz, Rt), while they seem inadequate in modelling either the MRR or the RSm (Error [%] = 59 and 70, respectively). In general, the results provided by the models are affected by different sources of uncertainty, related to the sample dimension, to the process variability and to the simplifications introduced by the model itself. The first source of uncertainty can be overcome by increasing the experimental levels, while the process variability is usually reduced by introducing test replications. However, both these solutions require an increase in the experimental effort. The third cause (the simplification adopted in the model) is a systematic error for which statistical methodologies do not provide a useful tool, especially when the interpolation law cannot be selected, while cognitive methods can be used fruitfully [35][36][37].
On the other hand, one of the problems affecting the effectiveness of ANNs is the number of points adopted in the training phase. ANNs need substantial data to train on, especially when the model involves complex relations (as for the MRR and RSm parameters). If the data are too few, the network may not achieve convergence, or may achieve it only in a sub-domain of the overall hypervolume, corresponding to the process conditions of the Training subset (this also happens when over-fitting occurs). However, this may also depend on the complexity of the relationships between dependent and independent variables. As a matter of fact, neural networks can model the depth, since the latter has a "simple relationship" with the control factors (see Fig. 11), while the MRR and RSm involve more complex relationships. A possible solution is to increase the number of points in the Training dataset, decreasing the numbers for the Validation and Testing ones; however, this solution would result in a less effective model performance assessment. In other words, even with a reduced error in the training phase, it would not be possible to verify the model. Anyhow, further ANNs were also developed by increasing the number of points in the Training dataset to 78 and reducing the Testing dataset to 10. However, also for these ANNs, errors and behaviours comparable with those just described were observed. Therefore, in order to investigate the possibility of forecasting the MRR and RSm parameters by way of ANNs, and taking into account the ability of neural networks to work in the presence of noise [38,46,47], it was decided to develop a new series of networks combining both experimental data and the RSM model to obtain a new Training dataset. This new approach has the advantage of not requiring further tests.
This operation requires a careful selection of the data in terms of number and process conditions: an excess of RSM data could lead the network to converge towards the RSM model itself, while adding data on the borderline or outside of the experimental process conditions could add excessive noise, since the RSM model is valid only within the tested conditions.
Hence, a new group of ANNs, namely hybrid ANNs (H.ANN), was developed adopting, for the Training dataset, part of the experimental data (58 treatments) and data from the RSM model (96 treatments). The latter were calculated by Eq. (8) at the middle range of the control factor levels. Table 8 shows the selected levels.
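The construction of the hybrid Training dataset can be sketched as below. The rsm_predict stand-in and the factor levels are purely hypothetical (the actual regression coefficients are in Table 6 and are not reproduced here); what matters is the pattern: synthetic points generated by the RSM at intermediate factor levels are appended to the experimental records.

```python
from itertools import product

def rsm_predict(ss, f, hd, r):
    """Hypothetical stand-in for the paper's RSM regression (Table 6)."""
    return 0.1 * r / (ss * hd)   # placeholder response, not real coefficients

def hybrid_training_set(experimental, mid_levels):
    """Training data = experimental records plus RSM-generated records
    evaluated on the grid of intermediate control-factor levels."""
    synthetic = [(ss, f, hd, r, rsm_predict(ss, f, hd, r))
                 for ss, f, hd, r in product(*mid_levels)]
    return experimental + synthetic

# e.g. two intermediate levels per factor -> 2*2*2*2 = 16 synthetic points
levels = ([625.0, 875.0], [37.5e3, 52.5e3], [0.045, 0.055], [30, 50])
train = hybrid_training_set([(500.0, 30e3, 0.04, 60, 1.8)], levels)
```

Keeping the synthetic points inside the tested factor ranges reflects the caution expressed above: the RSM model is valid only within the experimental conditions, so extrapolated points would inject noise rather than information.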
Furthermore, to force the ANN convergence towards the experimental data (i.e. the stop of the training phase) and to validate on an actual dataset, 40 and 20 experimental data points, taken in equal parts from the CROSS and NET strategies, were adopted for the Validation and Testing datasets, respectively. In addition, to cover the entire hypervolume, all the experimental data were randomly selected, making sure that their distribution was such as to overlap that of the Training dataset. Thus, before the analysis, the normal distributions of the adopted Training, Validation and Testing datasets were compared and, since a good superposition of the curves was observed, the new datasets were accepted. All the other settings of the H.ANN were unchanged, except for the searching time of the MRR and RSm parameters, which was extended to 4 and 8 h, respectively, since these responses are difficult to model. Table 9 shows the architecture and the Error [%] obtained on the Training, Validation and Testing datasets for the best fitting H.ANNs. The same data are also compared in Fig. 16. From the results, the errors obtained on the Training, Validation and Testing datasets are closer to one another than those of the ANNs (see Table 7 and Fig. 14). Thus, the H.ANNs are more robust in forecasting over the overall hypervolume. On the other hand, the error obtained by the H.ANNs on the Testing dataset is lower, while the error measured on the Training and Validation datasets appears higher than the one measured for the ANNs. Therefore, it can be deduced that, although to a limited extent, the ANNs are affected by the introduction of the RSM model data. However, the most significant result is the strong reduction of the error measured on the Testing dataset for both MRR and RSm. This is confirmed also by the analysis of Fig. 17, where the H.ANN data are plotted against the experimental data. From the latter, in addition, the absence of scattered points far from the reference line can be observed.
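The check that the random split actually covers the hypervolume, i.e. the comparison of the fitted normal distributions mentioned above, can be approximated with a simple moment-based test. The 0.25-pooled-sigma acceptance threshold is an arbitrary choice made here for illustration, not the authors' criterion.

```python
import statistics

def distributions_overlap(a, b, tol=0.25):
    """Crude proxy for the visual comparison of two normal fits:
    accept the split if the sample means differ by less than
    tol pooled standard deviations."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    sa, sb = statistics.stdev(a), statistics.stdev(b)
    pooled = ((sa * sa + sb * sb) / 2.0) ** 0.5
    return abs(ma - mb) < tol * pooled
```

In practice one would run such a check on each control factor of the Training versus Validation/Testing subsets and reshuffle the split until all factors pass.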

Discussion
To compare the performances of the three methods (RSM, ANN and H.ANN), in Fig. 18 the overall errors obtained by the ANN models on both the Validation and Testing datasets (Table 7 and Table 9, column "V+T") are compared to the RSM Error [%] (Table 5). It is worth noting that, compared to the ANN, in the case of the Depth and Rt, the adoption of the H.ANN results in an error increase. A possible reason is that the adoption of the H.ANN is useful mainly when the response of the RSM is better than that of the ANN (i.e. when the ANN shows the higher error). In this condition, the addition of RSM data allows a better convergence of the results (e.g. for MRR, Ra, Rz and RSm), since it increases the Training dataset dimension. Conversely, when the ANN already shows a response better than the RSM, the adoption of additional data (affected by a certain error) can only worsen the response. These observations, on the one hand, confirm that, compared to the RSM, the ANNs have a greater ability in modelling complex phenomena; on the other hand, they need a sufficiently large training dataset.
Either way, the results appear encouraging; moreover, the hybrid approach appears to be a useful method especially when the regression model (i.e. the RSM) is not fully adequate to predict the response variable (as in the case of MRR and RSm) or when the number of repetitions is such as not to constitute a sufficiently robust database. In these cases, the integration of the two methods into a hybrid one can lead to an improvement in the response of the predictive model itself.

Conclusions
Laser milling of SiC advanced ceramic was performed by way of a 30-W Q-switched Yb:YAG fiber laser, changing the scan speed, the pulse frequency, the hatching distance, the repetitions and the strategy. ANOVA was performed to analyse the effect of the process parameters. Response surface methodology (RSM) and artificial neural networks (ANN) were adopted to model the process. From the results, within the experimental conditions adopted in this work, the main conclusions are the following:
- The adopted fiber laser can mill the SiC ceramic, allowing a MRR up to about 5 mm³/min and a smooth surface (Ra varies in the range of 0.6–3.3 μm).
- Moreover, the integrated approach appears to be a useful method especially when the regression model is not fully adequate to predict the response variable or when the number of repetitions is such as not to constitute a sufficiently robust database.
- Apart from the results obtained for the most critical parameters (MRR and RSm), comparing the H.ANN and RSM techniques, it must be highlighted that the latter has the advantage of requiring shorter processing times. Moreover, RSM also provides a set of equations immediately available for process optimisation.

Declarations
Ethical approval The manuscript has not been submitted to more than one journal for simultaneous consideration. The manuscript has not been published previously.
The study was not split up into several parts. The data have not been fabricated or manipulated to support our conclusions.
No data, text or theories by others are presented as if they were the authors' own.
Consent to submit has been received from all co-authors and responsible authorities at the institute/organisation where the work has been carried out before the work is submitted.
Authors whose names appear on the submission have contributed sufficiently to the scientific work and therefore share collective responsibility and accountability for the results.

Consent to participate Not applicable.
Consent to publish Not applicable.

Competing interests The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.