Annealing of Monel 400 Alloy Using Principal Component Analysis, Hyper-parameter Optimization, Machine Learning Techniques, and Multi-objective Particle Swarm Optimization

The purpose of this paper is to investigate the effect of annealing at 1000 °C on machining parameters using contemporary techniques: principal component analysis (PCA), hyper-parameter optimization with Optuna, multi-objective particle swarm optimization, and theoretical validation with machine learning methods. After annealing, surface roughness decreased by 19.61% and tool wear by 6.3%, while the metal removal rate increased by 14.98%. The PCA results show that feed is more significant than depth of cut and speed. The highest value of the composite primary component identifies the optimal factors as a speed of 80, a feed of 0.2, and a depth of cut of 0.3, with accountability proportions of the principal components of surface roughness (Ψ1 = 64.5), tool wear (Ψ2 = 22.3), and metal removal rate (Ψ3 = 13.2). Hyper-parameter optimization indicates that speed is directly proportional to roughness, tool wear, and metal removal rate, while feed and depth of cut are inversely proportional. The optimization history plot is steady, and the prediction accuracy is 96.97%. Machine learning techniques were implemented in the Python language using Google Colab; the surface roughness values estimated by the decision tree method and the tool wear values predicted by the AdaBoost algorithm match the actual values well. As per MOPSO (multi-objective particle swarm optimization), the predicted responses (value, speed, feed, depth of cut) are: surface roughness (2.5 μm, 100, 0.2, 0.45), tool wear (0.31 mm, 40, 0.40, 0.60), and MRR (material removal rate) (5145 mm³/min, 100, 0.4, 0.15). Experimental validation showed only small deviations: surface roughness varied by 1.56%, tool wear by 6.8%, and MRR by 2.57%.

Resistance heating with gas burners, oven heating with electric arcs, and other methods such as induction, radiation, and laser assistance can all be used [4]. A few researchers have proposed heating techniques for difficult-to-machine materials, namely plasma-assisted cutting [5], induction heating [6], flame hardening [7,8], laser heating [9], cryogenic machining [10], and hot ultrasonic machining of the Al3Cr3Sn alloy [11]. Sun et al. [12] reviewed thermally enhanced machining processes, focusing on laser and plasma theory for materials such as MMCs and ceramics, covering temperature distribution, material removal mechanisms, and tool wear. Sukumaran et al. [13] presented different heat-treatment cycles ranging from 625 to 1025 °C for the 625 alloy to obtain optimum mechanical properties: at room temperature there was a decrease in strength and hardness, and at higher temperatures ductility increased. Parida et al. [14] studied chip formation, tool life, and tool wear in hot turning using flame heating at room temperature and at hot conditions of 300-600 °C, observing a significant reduction in the response parameters for all three nickel-based alloys under heating conditions.
Machine learning and deep learning play a critical role in developing intelligent systems for descriptive, diagnostic, and predictive analytics for machine tools and process-health monitoring. Data-driven models are effective and have spread rapidly in the industrial sector thanks to available computing power, high reliability, and easier data acquisition. This has motivated the authors to pursue synergistic applications of data-driven models, developing a system that provides a fast and stable response while optimizing parameters by multi-objective techniques at lower time and cost. Such a system is of particular significance for optimizing parameters to improve the machining performance of superalloys. Predicting machining parameters has become necessary in view of the need for increased production rate, reduced production cost, and sustained quality. However, given the nonlinear nature of the machining process, identifying and applying a suitable and adequate technique is crucial to achieving good machining performance.
A few researchers have studied the microstructure, mechanical and physical properties, and MRR (material removal rate) using non-traditional cutting processes, but there are no studies on modeling and optimization of the annealed M 400 alloy using PCA (principal component analysis), Optuna, machine learning techniques, and MOPSO (multi-objective particle swarm optimization), which creates a research gap. The novelty of this work lies in the fact that, to the best of the authors' knowledge, few if any researchers have comprehensively studied the machining of annealed superalloys, which have tremendous potential in the aerospace, marine, chemical, and automobile industries. Among the heating methods, annealing is simple in design, inexpensive, time-saving, requires minimal and readily available equipment, and is easy to control. In this study, hyper-parameter optimization with Optuna is used for the first time to predict significant cutting parameters; the most significant parameters are identified through multivariate data analysis by computing principal components with PCA, and multi-objective optimization is performed with MOPSO. The experimental results are validated with machine learning techniques, namely decision tree, random forest, and the AdaBoost algorithm, implemented in the Python language using Google Colab. Therefore, the primary objectives of the present study are to analyze the effects of the cutting factors and heating aspects on metal removal rate, roughness, and tool wear, and to compare the results at room and annealed conditions. To satisfy the requirements of manufacturing in Industry 4.0, expert systems, machine learning-based prediction models, and optimum cutting parameters are fundamental requirements.
This work thus attempts to improve the optimization and machining behavior of high-strength metals in industry and to identify and introduce the most significant parameters.

Literature Review
Some researchers have conducted experimental work with coated and uncoated cutting tools to estimate machining indicators for different materials. Lajis et al. [15] demonstrated experimental results for AISI D2 steel in hot end milling with a coated carbide tool, examining the effect of temperature, feed, and cutting speed on improving surface integrity and micro-hardness. Srithar et al. [16] conducted an experimental analysis of AISI D2 steel using carbide and CBN (cubic boron nitride) tools to improve MRR and surface roughness based on process factors, showing that feed rate is more important than speed and depth of cut. Kumar et al. [17] used various dielectric media, such as paraffin, kerosene, and Servotherm oil, in EDM to investigate electrode wear and MRR for Monel 400 based on an L18 orthogonal array using the Taguchi technique. Pervaiz et al. [18] proposed machinability studies and the challenges involved in determining future research directions for productivity improvement for nickel and titanium alloys, covering tooling techniques, tool materials, wear mechanisms, surface integrity defects, microstructural alterations, and residual stresses. Childs et al. [19] conducted an experimental analysis of Cu-Ni alloys using single-crystal diamond tools in micro-machining to analyze tool wear and heat conduction, with finite element chip models established to relate cutting conditions and temperature.
Statistical methods have been adopted by a few researchers for the theoretical prediction of response parameters for Monel 400 and other alloys. Sonawane et al. [20] presented ANOVA and principal component analysis for optimizing machining factors to estimate the response features overcut, surface finish, and metal removal rate in the EDM process for the Nimonic-75 alloy. Mathan Kumar et al. [21] developed a copper-titanium diboride electrode for Monel 400 using four input parameters, including pulse current, titanium diboride percentage, and flushing pressure, to evaluate metal removal rate and tool wear by a regression model and desirability-based multi-objective optimization. Rajamani et al. [22] presented experimental work on Monel 400, analyzing micro-hardness and kerf width using the Box-Behnken design of RSM (response surface methodology). Quadratic models were developed for speed, gas pressure, arc current, and standoff distance, and their performance was assessed using ANOVA (analysis of variance). Palanisamy et al. [23] examined the turning behavior of the 800 H superalloy based on an L27 orthogonal array using the Taguchi-Grey approach, analyzing temperature and surface roughness and predicting cutting force, tool wear, surface roughness, and specific cutting pressure as functions of the cutting parameters. Arun Kumar et al. [24] applied RSM and ANOVA to analyze the response factors MRR and surface roughness in dry WEDM (wire-EDM) of Monel 400 alloy based on the input factors. Jayakumar et al. [25] used a Taguchi L9 orthogonal array for drilling Monel K-500, with cutting fluids, speed, tool materials, and feed as process factors, to analyze surface finish and MRR.
Machine learning techniques are finding applications in several fields, and various methods are evolving [26][27][28][29][30][31]. Optuna is an automatic hyper-parameter optimization software framework that finds optimal hyper-parameter values using different samplers, such as grid search, random search, Bayesian, and evolutionary algorithms. Hyper-parameters are critical: they directly control the training algorithm and significantly affect the performance of the model being trained. Aaron Klein et al. [32] suggested a Bayesian optimization tool for the hyper-parameters of machine learning algorithms such as SVR (support vector regression) and deep neural networks; to accelerate hyper-parameter optimization, they constructed a model that treats training time as a function of dataset size and automatically trades off information gain. Takuya Akiba et al. [33] proposed a new design criterion for next-generation hyper-parameter optimization software, with design techniques demonstrated through experimental results and real-world applications. Bharathi Raja et al. [34] reviewed various optimization methods with respect to cutting factors. Li Yang et al. [35] presented a review of state-of-the-art machine learning algorithms for hyper-parameter optimization techniques and configurations, covering their strengths, limitations, and challenges for real-time applications. Machine learning techniques suited to smart manufacturing have been adopted by some researchers for the theoretical prediction of response parameters for different materials. Abu-Mahfouz et al. [36] introduced KNN (k-nearest neighbors), SVR, and decision trees to predict surface roughness. Barrios et al. [37] compared three models using a decision tree algorithm to analyze the surface roughness of 3D-manufactured glycol parts. Yaman Hamed et al. [38] proposed a new hybrid model using SVR, KNN, and a suggested model for calibrating process behavior in pipeline corrosion measurements.
PSO (particle swarm optimization) is a global, population-based optimization method suitable for solving various optimization problems. Some researchers have proposed PSO and combinations of multi-performance optimization for different materials. Yusupi et al. [39] gave an overview of PSO, GA (genetic algorithm), and SA (simulated annealing) for optimizing process parameters in traditional and modern machining based on cutting parameters such as speed, rake angle, feed, and depth of cut to minimize or maximize machining performance. Bharathi Raja et al. [40] developed a PSO to predict optimum cutting factors for materials such as brass, copper, aluminum, and mild steel in turning, minimizing cutting time and surface roughness. Marko et al. [41] suggested a particle swarm optimization technique to obtain optimum turning parameters for predicting tool life, cutting force, and surface roughness. Alaimi et al. [42] presented an ANFIS-quantitative PSO machine learning approach for dry turning to optimize surface roughness for AISI 304 steel. Babuna et al. [43] studied an adaptive particle swarm optimization algorithm, an ANN approach, and an electromagnetism optimization algorithm to minimize surface roughness based on cutting parameters. Surinder Kumar et al. [44] used particle swarm optimization in the turning of unidirectional glass fiber reinforced plastic composites to predict MRR based on a Taguchi L18 orthogonal array under dry, wet, and dry-wet cutting environments. Manay et al. [45] proposed PSO for optimizing cutting force and tool life in turning AISI 4340 steel with CVD-coated (chemical vapor deposition-coated) multi-layer carbide tools. This paper is organized as follows.
The materials and methods section is presented first, including the workpiece material, its mechanical and physical properties and chemical composition, a schematic representation of the experimental setup, the levels and factors, and the equipment used. The methodology then covers PCA, Optuna, machine learning, and MOPSO. Finally, the paper concludes with results, discussion, conclusions, and directions for further research.

Materials and Methods
The Monel 400 material is highly resistant and can withstand temperatures above 1000 °C. Sandvik Coromant TiN-coated inserts (SNMG 120404), fitted to a square-shank PSBNR 2525 M12 tool holder, were used with a clearance angle of 6°, a back rake angle of 6°, and a cutting edge angle of 75°. An optical microscope was used to measure tool wear, and a maximum flank wear of VB = 0.3 mm per ISO 3685 was taken as the wear criterion. The average surface roughness was measured with a Taylor Hobson tester at four different locations. The workpiece was a 70 mm diameter round bar, 300 mm long, and trial runs were conducted on a Jessery Major center lathe at room temperature and in the annealed condition.
The alloy's physical and mechanical properties and chemical composition are given in Tables 1 and 2, respectively. The levels and control factors are based on the L16 array, as shown in Table 3. Figures 1 and 2 show schematic representations of the experimental setup and the equipment used in this work. Annealing is a method of externally heating the material before machining; the softened material can be machined more easily, increasing the metal removal rate and decreasing surface roughness and tool wear. A Nabertherm P330 (model P33) oven was used to heat the workpiece. The Monel 400 alloy was placed in the furnace, the temperature was raised from room temperature to 500 °C in 30 min, held for 50 min, and then raised to 1000 °C in 55 min, for a 9 h heating cycle overall, as shown in Fig. 3. The experimental setup and measured data are shown in Fig. 4 and Table 4.

Principal Component Analysis
Principal component analysis reduces data complexity by transforming many correlated variables into a few uncorrelated, independent principal components. It converts multiple linked response data into a set of uncorrelated quality features that constitute the main components [20][21][22].
Step 1: The measured multiple performance attributes during machining are arranged as represented in Eq. 1,
where k is the response index and i is the experiment number.
Step 2: The array of multiple quality features was normalized in this step. As tool wear and surface roughness are smaller-the-better performance aspects, their original sequences were normalized by Eq. 2.
The metal removal rate is a higher-the-better performance feature and was normalized by Eq. 3,
where Xi(k) is the normalized data for the kth response, min Zi(k) and max Zi(k) are the smallest and largest values of Zi(k) for the kth response, and X is the normalized array represented by Eq. 4; the output values of step 2 are listed in Table 5. Step 3: The variance-covariance matrix M is calculated from the normalized data by Eqs. 5 and 6.
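The bodies of Eqs. 2 and 3 are not reproduced above; assuming the standard min-max forms used in such studies (smaller-the-better for roughness and tool wear, larger-the-better for MRR), a minimal sketch is:

```python
import numpy as np

def normalize_smaller_better(z):
    """Smaller-the-better form (assumed for Eq. 2): best = smallest maps to 1."""
    z = np.asarray(z, dtype=float)
    return (z.max() - z) / (z.max() - z.min())

def normalize_larger_better(z):
    """Larger-the-better form (assumed for Eq. 3): best = largest maps to 1."""
    z = np.asarray(z, dtype=float)
    return (z - z.min()) / (z.max() - z.min())

# Illustrative values only -- not the measured data of Table 4
ra = normalize_smaller_better([2.5, 3.1, 1.8, 2.2])     # surface roughness
mrr = normalize_larger_better([4200, 5100, 3900, 4700])  # metal removal rate
```

In both cases the best observation maps to 1 and the worst to 0, so all responses become comparable on a common [0, 1] scale.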
Step 4: The correlation coefficient matrix was analyzed, represented by the eigenvalues λj and eigenvectors Vij; the output values of step 4 are given in Table 6.
Step 5: The principal components ψj were evaluated by Eq. 7. The eigenvector Vij gives the weighting of the performance features in the jth principal component.
Each component ψj accounts for a proportion of the variation in the performance features (the accountability proportion), as shown in Table 7.
Step 6: For each experiment, the CPC (composite primary component) was estimated using Eq. 8. It is an index of composite quality for multiple performance features, formed by combining the principal components weighted by their individual eigenvalues.
Corresponding S/N ratios were also calculated for the obtained composite primary component values using Eqs. 9 and 10,
where n is the number of repetitions and Yij is the ith experiment at the jth test.
The results of the confirmatory experiments and the composite primary components with S/N ratios are given in Tables 8 and 9, respectively.
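The bodies of Eqs. 9 and 10 are likewise not reproduced; assuming the usual Taguchi S/N definitions (higher-the-better for the CPC, smaller-the-better for roughness and tool wear), a small sketch:

```python
import math

def sn_higher_the_better(y):
    """Taguchi S/N ratio for higher-the-better responses, as applied to CPC values."""
    return -10.0 * math.log10(sum(1.0 / (v * v) for v in y) / len(y))

def sn_smaller_the_better(y):
    """Taguchi S/N ratio for smaller-the-better responses (roughness, wear)."""
    return -10.0 * math.log10(sum(v * v for v in y) / len(y))

# Single observation per trial, using the CPC value 1.4889 of the 12th experiment
sn = sn_higher_the_better([1.4889])
```

With a single repetition the higher-the-better ratio reduces to 20·log10(y), so larger CPC values yield larger S/N ratios.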
The optimum level of parameters is identified from the experiment with the maximum CPC value Ψ. The highest CPC value is 1.4889, obtained in the 12th experiment. Hence, the optimal parameters are A3, B4, C2, i.e., a speed of 80, a feed of 0.2, and a depth of cut of 0.3.
Step 7: The predicted value of Ψ was calculated by regressing the actual parameters against the CPC (Ψ) values. The maximum predicted value is 1.27, and the S/N ratio calculated for this highest predicted value is 2.076.
Step 8: The optimal process parameter settings were determined, and ANOVA on the CPC values was used to identify the significant factors influencing the performance characteristics; the output values of step 8 are shown in Table 10.
The percentage contribution of the factors to the CPC means, the response table for means and S/N ratios, and the main effect plots are presented in Fig. 5, Table 11, and Fig. 6, respectively.
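Steps 3-6 above can be sketched end-to-end. The following is a hedged illustration using random stand-in data in place of the normalized L16 responses, with the CPC formed as the eigenvalue-weighted sum of the principal component scores described in Step 6:

```python
import numpy as np

def pca_cpc(X):
    """Eigen-decompose the variance-covariance matrix of the normalized
    response array X (trials x responses) [Steps 3-4], compute the
    principal-component scores psi_j [Step 5], and form the composite
    primary component per trial as the accountability-weighted sum [Step 6]."""
    M = np.cov(X, rowvar=False)             # variance-covariance matrix
    eigvals, eigvecs = np.linalg.eigh(M)    # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]       # re-sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    scores = X @ eigvecs                    # psi_j score for each trial
    weights = eigvals / eigvals.sum()       # accountability proportions
    cpc = scores @ weights                  # composite primary component
    return eigvals, scores, cpc

rng = np.random.default_rng(0)
X = rng.random((16, 3))                     # stand-in for the normalized L16 data
eigvals, scores, cpc = pca_cpc(X)
best_trial = int(np.argmax(cpc)) + 1        # trial with the highest CPC
```

On the study's actual normalized data, `best_trial` would pick out the 12th experiment with CPC = 1.4889 as reported above.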

Hyper-parameter Optimization Using Optuna
Optuna is a hyper-parameter optimization framework, developed by Preferred Networks, that uses a Bayesian approach to explore the hyper-parameter space.
It is open-source software for automatic hyper-parameter optimization, used to analyze the behavior of machine learning algorithms, and it automatically searches for optimal hyper-parameter values using Python and a trial-based approach. Optuna supports search spaces with integer, floating-point, continuous, discrete, and categorical parameters, and its optimization methods include the random, grid, TPE, and CMA-ES samplers [26][27][28]. Model efficiency was assessed with statistical measures such as mean square error (MSE), root mean square error (RMSE), and mean absolute error (MAE). The MSE measures how close a fitted line is to the data points, with lower values indicating a better model. The RMSE is the square root of the MSE; assuming the errors follow a normal distribution, it predicts the data relatively accurately and is used in performance analysis. The MAE is the average of the absolute errors between forecasted and measured values. Both MAE and RMSE range from 0 to ∞, and lower values of MSE, MAE, and RMSE imply higher model accuracy, as indicated in Table 12. The correlations between the variables and the response parameters, and the optimization history plot, are presented in Table 13 and Fig. 10, indicating a high prediction accuracy of 96.97%. The steps involved in this hyper-parameter optimization are:
a) Define the search space and the objective function.
b) Create a study object for optimizing the objective function.
c) Set the number of trials and begin the optimization process.
d) Set the study direction to maximize (or minimize) the objective function.
e) Choose the visualizations and plots.

Observations
i) Speed is the predominant variable influencing surface roughness and tool wear.
ii) Feed has a negative impact on surface roughness and tool wear.
iii) Depth of cut is not a significant factor for surface roughness and tool wear.
iv) The metal removal rate is not strongly affected by the input parameters.

Machine Learning Methods
These models offer a complementary approach to prediction, combining expert knowledge with measured data to build a model. Each machine learning method is unique, tailored to a specific application based on the input and output data, and used to solve classification and regression problems. Commonly used machine learning methods are random forest (RF), the AdaBoost algorithm, logistic regression, support vector machine (SVM), KNN (k-nearest neighbors), and decision trees (DT). Implementing these approaches follows several steps, namely data collection, data preprocessing, model training, model performance evaluation, and final model prediction. The use of machine learning in manufacturing can significantly increase production efficiency and help create new product development strategies. To analyze the influence of the machining parameters on minimizing surface roughness and tool wear, three machine learning models, namely DT, RF, and AdaBoost, were implemented in the Python language using Google Colab, a convenient platform for deep learning [29][30][31][32]. The data set for this study contains 361 samples, with 80% used for training and 20% used for testing. Fig. 11 Decision tree architecture

Decision Tree Regression
Decision trees are supervised learning algorithms that analyze data for classification or regression, including cases where the data is not linearly separable. They are a multidimensional learning technique commonly used for problems involving categorical or nominal data. Decision trees follow the CART (Classification and Regression Trees) framework: training starts at the root node, and the data is progressively split into smaller and smaller subsets. The resulting model is a tree-like structure composed of if-then rules, with each node posing a question about one or more features and making a decision. The algorithm produces interpretable results, and strong correlations between certain features and a class can sometimes be observed directly. The C4.5 algorithm, developed by Quinlan (1996), remains among the most widely used tree construction approaches and produces decision trees that can be built with WEKA (Waikato Environment for Knowledge Analysis). The decision tree architecture and the resulting decision tree are shown in Figs. 11 and 12, respectively. Decision trees handle both classification and regression situations, are frequently accurate, and are widely preferred in machining studies.
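A minimal decision tree regression sketch in scikit-learn, using synthetic stand-in data and the 80/20 split mentioned above (the features and target are illustrative, not the study's measurements):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.random((361, 3))   # stand-in for [speed, feed, depth of cut]
y = 3.0 - X[:, 0] + 2.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.05, 361)

# 80/20 train/test split, as used for the 361-sample data set described above
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)   # coefficient of determination on held-out data
```

The fitted tree is the if-then structure described above; `sklearn.tree.plot_tree(model)` renders it in the form of Fig. 12.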

Random Forests
Random forest is an ensemble learning method in which the same algorithm is applied multiple times and the results are combined into a model more powerful than any single instance. Predictions depend on the weighted sum of several trees, which is more accurate than a single tree, and the algorithm is more robust to changes in the dataset, since such changes can affect one tree but rarely the whole forest. Figure 13 depicts the architecture of the random forest. It consists of a five-step process:
1. From the training set, select K random data points.
2. Construct a decision tree using these K data points.
3. Repeat steps 1 and 2 until N trees have been built.
4. Let each of the N trees estimate the value of Y at a new data point, and take the mean of all predicted Y values.
5. This averaging makes random forest predictions better than those of a single decision tree and reduces overfitting.
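The five steps correspond to what scikit-learn's `RandomForestRegressor` does internally: each tree is fit on a bootstrap sample, and the forest prediction is the mean of the per-tree predictions. A hedged sketch on synthetic stand-in data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
X = rng.random((361, 3))   # synthetic stand-in features
y = 3.0 - X[:, 0] + 2.0 * X[:, 1] + rng.normal(0, 0.05, 361)

# Steps 1-3: each of the N = 100 trees is fit on a bootstrap sample of the data
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Steps 4-5: the forest prediction is the mean of the per-tree predictions
point = np.array([[0.5, 0.2, 0.3]])
per_tree = np.array([tree.predict(point)[0] for tree in forest.estimators_])
forest_pred = forest.predict(point)[0]
```

Averaging over the individual trees in `forest.estimators_` reproduces `forest.predict` exactly, which is the variance-reduction mechanism described in step 5.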

AdaBoost Algorithm
Boosting is a practical algorithm based on a long-standing theoretical concept, and adaptive boosting (AdaBoost) is a successful realization of it. Rooted in statistical learning theory, it enhances decision tree performance, originally for binary classification, using decision trees of depth one as base models. The algorithm addresses both classification and regression problems. Weak learners (one-level decision trees) are added sequentially, with each subsequent model correcting the estimations made by the previous models: training points with higher prediction errors are assigned higher weights. Estimates are then computed as a weighted average of the weak classifiers. In binary classification, a weak learner outputs either +1.0 or −1.0; if the weighted sum is positive, the first class is predicted, and if it is negative, the second class is predicted. The AdaBoost algorithm was implemented in the Python programming language in the Google Colab IDE using the AdaBoostRegressor function from the scikit-learn library. The base estimator was a decision tree of depth one, and a total of 100 estimators (weak learners) were used before boosting was stopped. A linear loss function was used to update the weights after every iteration, and the best final model after hyper-parameter tuning was chosen based on the R² score.
The results of analyzing the influence of the machining parameters on minimizing surface roughness and tool wear using the three machine learning models, namely decision tree, random forest, and AdaBoost, implemented in the Python language using Google Colab, are presented in Table 14.

Multi-objective Particle Swarm Optimization (MOPSO)
PSO is a stochastic technique valued for its easy implementation and good convergence speed. It is a heuristic swarm intelligence algorithm that solves optimization problems by imitating the swarm behavior of birds. Because of its many advantages, such as robustness, efficiency, and simplicity, PSO has become popular, and it has been found to require less processing effort than other stochastic algorithms. Each particle is equivalent to a bird in the population, with its own position and velocity. Each particle represents a solution; particles exchange information to increase the quality of the swarm and move through the solution space toward the global optimum. Optimization starts with initialization of the swarm, followed by evaluation and density assessment of the solutions. In this algorithm the selection of gbest is essential, as it directly affects the capability and convergence speed. The size of the particle population is fixed, and the positions of the particles must be adjusted to update pbest and gbest. The pbest update compares each particle's current position with its former pbest position, while the gbest update involves the non-dominated solutions obtained in the previous phase. The terms used in this algorithm are as follows:
• Swarm: the set of particles.
• Particle: one individual in the swarm, representing a possible solution to the problem at hand.
• Velocity: the direction in which a particle moves; it describes the change of the particle's position from one iteration to the next.
• Leader: a particle that guides another particle toward better regions of the search space.
• pbest: the best position the particle itself has achieved.
• gbest: the best position achieved by any particle.
The velocity of the particles in this algorithm is given by Eq. 12,
where Vi_pre is the present velocity of the ith particle, W is the inertia weight, C1 and C2 are learning parameters, and Ra1 and Ra2 are random values between 0 and 1. The particle position is updated by Eq. 13,
where XiN is the new position of the ith particle, XiP is the current position of the ith particle, and Vnew is the new velocity of the ith particle.
The inertia weight is calculated by Eq. 14, where Wmax is the maximum weight, Wmin the minimum weight, ITRp the current iteration, and ITRt the total number of iterations. The MOPSO framework is shown in Table 15.
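Eqs. 12-14 as described translate directly into code. This single-objective sketch shows one velocity/position update with a linearly decreasing inertia weight; the full MOPSO additionally maintains an archive of non-dominated solutions for gbest selection:

```python
import random

def inertia_weight(w_max, w_min, itr_p, itr_t):
    """Linearly decreasing inertia weight (Eq. 14)."""
    return w_max - (w_max - w_min) * itr_p / itr_t

def update_particle(x, v, pbest, gbest, w, c1=2.0, c2=2.0):
    """Velocity update (Eq. 12) followed by position update (Eq. 13)."""
    ra1, ra2 = random.random(), random.random()
    v_new = [w * vi + c1 * ra1 * (pi - xi) + c2 * ra2 * (gi - xi)
             for vi, xi, pi, gi in zip(v, x, pbest, gbest)]
    x_new = [xi + vi for xi, vi in zip(x, v_new)]
    return x_new, v_new

# One update step for a particle in (speed, feed, depth-of-cut) space;
# the numeric values are illustrative, not taken from Table 16
random.seed(0)
w = inertia_weight(0.9, 0.4, itr_p=10, itr_t=100)
x, v = [60.0, 0.25, 0.3], [0.0, 0.0, 0.0]
x_new, v_new = update_particle(x, v, pbest=[80.0, 0.2, 0.3],
                               gbest=[100.0, 0.2, 0.45], w=w)
```

The inertia weight shrinks from Wmax toward Wmin over the iterations, shifting the search from exploration to exploitation.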
The parameters set in the program for the given application are shown in Table 16, and the convergence graphs of fitness value versus iteration are presented in Fig. 14. Based on the convergence graphs for surface roughness, tool wear, and MRR, the estimated and measured values of the responses are given in Table 17.

Effect of Input Parameters on Surface Roughness, Tool Wear and Metal Removal Rate
The effects of the input parameters on the response factors surface roughness, tool wear, and metal removal rate are shown in Figs. 15 and 16, respectively. At higher speeds, more heat is generated in the cutting zone, softening the material, easing machining, and reducing built-up edge formation. Surface roughness is a main factor of product quality and directly affects component performance. As speed increases, the roughness value decreases because of the short chip-tool contact length and the higher cutting zone temperature, which reduce built-up edge formation. At higher speeds and feeds, tool wear and surface roughness decrease due to the higher cutting forces. Hard particles in the alloy exert pressure on the cutting tool, reducing tool life. At higher speed and feed, the metal removal rate increases due to the larger contact area and higher cutting force.

Experimental Results and Discussions
• The turning of M 400 alloy under different cutting conditions based on the L16 array has been studied using multi-response, hyper-parameter optimization, machine learning techniques, and multi-objective particle swarm optimization, and the influence of annealing temperature at 1000 0 C on response parameters, namely surface roughness, tool wear, and metal removal rate. • According to experimental Table 4, the response parameters at room and annealed temperatures show that surface roughness and tool wear values have been reduced, and the metal removal rate has increased. Flank wear is a significant factor during machining at room temperature. It will decrease at annealed temperature, causing a minimal change in the shear strength of the work piece due to the application of heat on the surface as compared to room temperature. • The results of confirmatory experiments, CPC with S/N ratio, and regression are as shown in Tables 8 and 9. The higher value of composite primary components is 1.489, which results from the 12 th experiment. Hence, the opti-mum parameters are X 3 Y 2 Z 2 , which translates into speed of 80, feed of 0.2, and depth of cut of 0.3, respectively. The Anova of CPC is represented in Table 10, and Fig. 5 indicates that feed will be a more significant factor than the depth of cut and speed. As per the response Table of means and S/N ratio presented in Table 11 and the main effect plot for means, the S/N ratio represented by Fig. 6 indicates that feed will be a more significant factor than the depth of cut and speed. As per  Table 12. Table 13 shows the correlation analysis, and Fig. 10 represents the optimization history plot. • The decision tree, architecture, and random forest architecture are represented by Figs. 11, 12, and 13, respec-  Table 14. • Multi-objective particle swarm optimization for the minimization of surface roughness, tool wear, and maximization of MRR using MATLAB. 
• The MOPSO framework and the PSO parameters are presented in Tables 15 and 16, respectively. Based on the convergence graphs for surface roughness, tool wear, and MRR presented in Fig. 15, the estimated and actual values of the responses are presented in Table 17.
• The effects of the input parameters on the response factors, namely surface roughness, tool wear, and metal removal rate, are represented in Figs. 15 and 16, respectively.
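The composite principal component (CPC) ranking summarized above can be sketched numerically: standardize the S/N ratios of the responses, extract principal components from their correlation matrix, and weight the component scores by their accountability proportions. The snippet below is an illustrative sketch on made-up S/N values (four runs, three responses), not the paper's Table 8/9 data.

```python
import numpy as np

# Hypothetical S/N ratios for 4 runs x 3 responses (Ra, tool wear, MRR);
# illustrative values only, not the measured experimental data.
sn = np.array([
    [-4.2, 10.1, 70.3],
    [-3.8,  9.7, 71.2],
    [-4.5, 10.4, 69.8],
    [-3.6,  9.5, 72.0],
])

# Standardize, then eigen-decompose the correlation matrix (classical PCA).
z = (sn - sn.mean(axis=0)) / sn.std(axis=0)
corr = np.corrcoef(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]            # eigh returns ascending order
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

weights = eigvals / eigvals.sum()            # accountability proportions
scores = z @ eigvecs                         # principal component scores
cpc = scores @ weights                       # composite principal component
best_run = int(np.argmax(cpc))               # run with the highest CPC
```

The run with the highest CPC plays the role of the 12th experiment in the study, from which the optimum factor levels are read off.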

Conclusions
Based on the research work, the following conclusions are drawn:
• The annealing process was carried out with air cooling, which has a significant impact on improving the crystal structure and mechanical properties.
• Heating the Monel 400 work piece at 1000 °C has a direct influence on minimizing tool wear (6.3%) and surface roughness (19.61%) and increasing MRR (14.98%) in relative percentage terms. Heating the work piece above its recrystallization temperature reduces its shear strength and hardness; the material becomes soft and can be machined easily with minimum cutting force.
• The influence of the significant parameters was studied using an optimization method for each response, and different cutting parameters were found to be influential for each response.
• The PCA results show that feed (57.38%) is more significant than depth of cut (25.1%) and speed (17.34%).
The accountability proportions of the three principal components are surface roughness (Ψ1 = 64.5), tool wear (Ψ2 = 22.3), and metal removal rate (Ψ3 = 13.2). The response tables for means and S/N ratio in Table 11 and the main effect plots in Fig. 6 indicate that feed is the dominant factor. PCA can predict the optimum factors for the performance of the cutting process and may be generalized to any other machining process.
• In the hyper-parameter optimization, speed is directly proportional to tool wear and surface roughness, whereas feed and depth of cut are inversely proportional to the metal removal rate, surface roughness, and tool wear. The approach was used successfully for the machining parameters in this study and can be generalized to other manufacturing operations. Lower values of MSE, MAE, and RMSE indicate a well-fitted model, and the optimization history plot is steady and constant, showing a prediction accuracy of 96.66%.
• The influence of the machining parameters and the reliability and robustness of three machine learning models were analyzed by programming in the Python language using Google Colab, which is a convenient platform for deep learning. The values predicted by the decision tree regression method are found to be better for surface roughness than those from AdaBoost and random forest. For tool wear, the values predicted by AdaBoost are found to be better than those from the random forest and decision tree algorithms based on relative error.
• For this study, machine learning algorithms such as decision trees, random forests, and AdaBoost can be effectively used for the validation of machining responses, giving better results. Data-driven models have demonstrated high potential to predict machining parameters more accurately than analytical and statistical models.
• As per PSO, the optimum predicted responses are surface roughness (2.5 μm at speed 100, feed 0.2, depth of cut 0.45), tool wear (0.31 mm at 40, 0.40, 0.60), and MRR (5145 mm³/min at 100, 0.4, 0.15). These PSO-predicted values show small variations of 1.55% for surface roughness, 6.8% for tool wear, and 2.56% for the metal removal rate, as validated by experimentation.
• PSO is an effective optimization tool for calculating the optimal factors that minimize surface roughness and tool wear and maximize the metal removal rate. The PSO algorithm is simple, easy to implement, and computationally efficient; it has proven better in terms of run time and convergence speed than PCA.
• At higher cutting speeds (80-100), surface roughness and tool wear increase owing to the higher cutting force, as the hard particles present in the alloy generate pressure on the cutting tool. At a higher feed (0.3-0.4 mm/rev), the metal removal rate increases owing to the larger contact area and higher temperatures in the cutting zone.
• In future work, further metaheuristic and decision-making approaches can be used, namely desirability function analysis for the optimization of multi-response factors, multi-criteria decision-making methods such as combinative distance-based assessment, and a hybrid neuro-fuzzy inference system with multi-objective optimization.
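The PSO procedure referenced in the conclusions can be sketched for a single response. The surrogate roughness function below is hypothetical (the study fits its own regression models in MATLAB, which are not reproduced here); the swarm update, however, is the standard inertia-weight PSO with velocity clamping to the parameter bounds of this study (speed 40-100, feed 0.1-0.4, depth of cut 0.15-0.6).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical surrogate for surface roughness as a function of
# (speed, feed, depth of cut); not the paper's fitted regression model.
def roughness(x):
    v, f, d = x
    return 0.5 + 8.0 * f + 2.0 * d - 0.01 * v

bounds = np.array([[40.0, 100.0], [0.1, 0.4], [0.15, 0.6]])  # v, f, d
n, iters, w, c1, c2 = 20, 60, 0.7, 1.5, 1.5                  # swarm settings

pos = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n, 3))
vel = np.zeros((n, 3))
pbest, pbest_val = pos.copy(), np.array([roughness(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 3)), rng.random((n, 3))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, bounds[:, 0], bounds[:, 1])
    vals = np.array([roughness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()
```

For this surrogate the minimizer sits at a corner of the search box (high speed, low feed, low depth of cut), which the swarm converges to within a few dozen iterations; the full study instead runs a multi-objective variant over the three competing responses.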
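The machine-learning comparison in the conclusions (decision tree vs. AdaBoost vs. random forest, ranked by relative error) can likewise be sketched with scikit-learn. The data below are synthetic stand-ins for the L16 experimental table, generated from an assumed roughness relation, so the numbers carry no experimental meaning; only the fitting-and-comparison pattern mirrors the study.

```python
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)

# Synthetic (speed, feed, depth-of-cut) -> roughness data standing in for
# the L16 experimental table, which is not reproduced here.
X = rng.uniform([40, 0.1, 0.15], [100, 0.4, 0.6], size=(16, 3))
y = 0.5 + 8.0 * X[:, 1] + 2.0 * X[:, 2] - 0.01 * X[:, 0] + rng.normal(0, 0.05, 16)

tree = DecisionTreeRegressor(max_depth=3).fit(X, y)
boost = AdaBoostRegressor(DecisionTreeRegressor(max_depth=3),
                          n_estimators=50).fit(X, y)

# Mean relative error, mirroring the comparison metric used in the study.
def rel_err(model):
    return float(np.mean(np.abs(model.predict(X) - y) / np.abs(y)))
```

In the study this comparison is done against held-out experimental runs per response; with a proper train/test split the same `rel_err` metric decides which model is reported as best for each response.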

Availability of Data and Material
All available data is present in the manuscript. All data, models, and code generated or used during the study appear in the submitted article.

Conflict of Interest
The authors declare that they have no known competing financial interests that could have appeared to influence the work reported in this paper.
Ethical Approval Not applicable.
Consent to Participate Not applicable.

Consent to Publish Yes.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.