The Colony Predation Algorithm

This paper proposes a new stochastic optimizer called the Colony Predation Algorithm (CPA) based on the cooperative predation of animals in nature. CPA uses a mathematical mapping of the strategies used by animal hunting groups, such as dispersing prey, encircling prey, supporting the most likely successful hunter, and seeking another target. Moreover, the proposed CPA introduces a unique mathematical model that uses a success rate to adjust the strategy and simulate hunting animals' selective abandonment behavior. This paper also presents a new way to handle cross-border situations, whereby the value of the current optimal position replaces the out-of-bounds value, improving the algorithm's exploitation ability. The proposed CPA was compared with state-of-the-art metaheuristics on a comprehensive set of benchmark functions for performance verification and on five classical engineering design problems to evaluate its efficacy on engineering problems. The results show that the proposed algorithm exhibits competitive, often superior performance across different search landscapes compared to the other algorithms. The source code of the CPA will be made publicly available after publication.


Introduction
Optimization methods are not limited to single-objective formulations: any single-objective idea can be extended to classes of problems with two or more objective functions. Common optimization approaches include fuzzy optimization [1], large-scale problem solving [2], memetic and hybrid approaches [3], multi-objective optimization (an extension of single-objective methods) [4], robust optimization, and many-objective optimization [5]. Swarm-based stochastic methods take many mathematical forms and draw on various inspirations. In recent years, metaheuristic algorithms (MAs) [4] have attracted much attention and have been extensively used in numerous fields [6][7][8][9][10][11][12][13][14][15][16][17]. Such popularity is attributed to the ability of MAs to handle the complex feature spaces of practical problems in neural network-based control [18,19], formation control [20], deep learning models and feature understanding [21,22], adaptive control [23,24], machine learning-based applications [25,26], and artificial intelligence [27]. Regardless of a problem's continuity, discreteness, or constraints, MAs can avoid local optima, remain simple to implement, and provide satisfactory solutions to complex problems without the need for gradient information [28,29].
Nowadays, metaheuristic algorithms have gained momentum in new engineering and technical problems [30]. Over the years, researchers have developed numerous MAs motivated by the behavior of biological and physical systems in nature. Metaheuristics can be divided into four categories [31]: Evolutionary Algorithms (EAs), physics-based algorithms, human-based algorithms, and Swarm Intelligence (SI) algorithms. Specifically, Holland proposed the Genetic Algorithm (GA), one of the most popular EAs [32], based on Darwin's theory of biological evolution. GA simulates the process of biological evolution and then searches for the optimal solution in a solution space. The Differential Evolution (DE) algorithm [33] is another popular EA that simulates the cooperative relationship between individuals within a group and the swarm intelligence of competitive reproduction to guide the direction of an optimization search. The stochastic components [34] contribute more variety to the searching patterns of DE. Other established EAs include Genetic Programming (GP) [35], Evolution Strategy (ES) [36], and Evolutionary Programming (EP) [37]. Physics-based algorithms are inspired by physical laws, such as Simulated Annealing (SA) [38], which simulates the annealing process of metals in searching for the optimal solution of a problem. Central Force Optimization (CFO) and the Gravitational Search Algorithm (GSA) [39] are other physics-based metaheuristics. Among the SI algorithms, Grey Wolf Optimizer (GWO) [40], Bat-inspired algorithm (BA) [41], Cuckoo Search (CS) [42], Artificial Bee Colony (ABC) [43], Slime Mould Algorithm (SMA) [44], Harris Hawks Optimization (HHO) [45], Hunger Games Search (HGS) [46], Krill Herd (KH) [47], Moth Search Algorithm (MSA) [48], Monarch Butterfly Optimization (MBO) [49], Moth Flame Optimization (MFO) [50], Marine Predators Algorithm (MPA) [51], and Whale Optimization Algorithm (WOA) [52] are widely used.
Some human-based optimizers include Tabu Search (TS) [53] and Teaching Learning-based Optimization (TLBO) [54,55] . While metaheuristic algorithms have their own advantages and disadvantages compared to alternative solvers, they all provide the benefits of simplicity and relatively fast running time.
Although MAs fall into four categories, they all share two principal characteristics: exploration and exploitation. In brief, randomness in the search is essential for covering the search space as thoroughly as possible, so exploration reflects the algorithm's capacity to diversify through randomness. The exploitation stage then focuses local search on a promising area identified during the exploration phase. Striking a good balance between these two stages is one of the most challenging problems in metaheuristics and relates directly to an algorithm's performance. Well-established algorithms such as DE and ABC are often cited as maintaining this balance effectively.
Even though numerous excellent algorithms have been proposed, the No Free Lunch (NFL) theorem [56] indicates that none of these methods is a universally best technique for solving every existing or future problem. In other words, each algorithm can solve only one problem or one class of optimization problems well. Therefore, this work develops an effective metaheuristic algorithm, the Colony Predation Algorithm (CPA), based on the coexistence of animals. Specifically, CPA mimics the supportive behavior of social animals and the predation strategy of hunting animals. To demonstrate its practical value, we applied CPA to engineering problems and achieved good results.
The paper is structured as follows. Section 2 presents the background and inspiration for CPA, including its formulas, pseudocode, and time complexity. Section 3 discusses the results of CPA on different benchmark problems and the selection of CPA coefficients. Section 4 describes the experiments performed on engineering design problems using the developed algorithm. Section 5 summarizes and concludes the paper. CPA is also inspired by survival of the fittest, where the loser is replaced with the winner; more specifically, the leaders of lion and wolf packs are those who win a fight with the previous leader.

Mathematical model
The mathematical model of the algorithm's position updating is given here. Fig. 1 displays the search process of groups and individuals in two and three dimensions, where a predator at position (X, Y) updates its position according to the target's position (X_best, Y_best). Fig. 2 shows how a search agent updates its position based on the predator leader and other predators in the 2D search space. The final position is a random position within the circle defined by the positions of the predator leader and the other predators in the search space. The gray circle represents the final direction of the updated position.

Communication and collaboration
Animals that hunt in groups increase their predation success rate through communication and cooperation. The following formula represents cooperative communication and food-searching behavior among individuals:

X_i^j(t+1) = X_i^j(t) + r × ((X_r1^j(t) + X_r2^j(t)) / 2 − X_i^j(t)),   (1)

where r is a random number in the range [0, 1]; X_i^j(t) is the individual looking for food; X_r1^j and X_r2^j are the two positions closest to the prey in the j-th dimension, j = 1, 2, ..., dim; and X_i^j(t+1) is the latest updated position of the individual.
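As a rough illustration, the communication step can be sketched as follows (Python; the function name and the move-toward-midpoint form are our own reading of Eq. (1), not the paper's code):

```python
import numpy as np

def communicate(X, i, r1, r2, rng):
    """Move individual i toward the midpoint of the two positions
    closest to the prey (indices r1 and r2), per Eq. (1)."""
    r = rng.random(X.shape[1])              # random factor in [0, 1] per dimension
    midpoint = (X[r1] + X[r2]) / 2.0
    return X[i] + r * (midpoint - X[i])
```

Because r lies in [0, 1], the updated position always lands between the individual's current position and the midpoint of its two best-informed neighbors.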

Disperse food
The first strategy in colony predation drives the prey in different directions, separating individual prey from the group. The dispersal behavior of searching individuals is simulated mathematically as follows:

X_r(t+1) = X_best_r − S × (r1 × (ub − lb) + lb),   (2)

where X_r(t+1) is the updated position of an individual; X_best_r is the position of the food; S represents the strength of the prey, whose absolute value decreases from a to 0 with the number of evaluations; and r1 is a random number in [0, 1]. S is computed (Eq. (3)) from S0, which decreases from a to 0 with the number of evaluations; the current number of evaluations t; the number of individuals N; and a random number r2 in [0, 1]. The adaptive weight a (Eq. (4)) also decreases with the number of evaluations and is controlled by the coefficient w = 9.
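A minimal sketch of the dispersal step follows. Since the exact schedules of Eqs. (3) and (4) are not reproduced in the text, a linear decay of the weight a and a simple random fraction of a for S are assumed here purely for illustration:

```python
import numpy as np

def adaptive_a(t, max_fes, w=9):
    # Adaptive weight a, shrinking toward 0 as the evaluation budget is
    # spent; a linear schedule with coefficient w is ASSUMED here, since
    # the exact form of Eq. (4) is not reproduced in the text.
    return w * (1.0 - t / max_fes)

def disperse(x_best, t, max_fes, ub, lb, rng):
    # Eq. (2): scatter around the food position with a strength S whose
    # magnitude decays from a to 0 (the precise Eq. (3) is ASSUMED to be
    # a random fraction of a for this sketch).
    a = adaptive_a(t, max_fes)
    S = a * rng.random()
    r1 = rng.random(x_best.shape[0])
    return x_best - S * (r1 * (ub - lb) + lb)
```

Early in the run S is large and the step scatters individuals widely; as evaluations accumulate, S shrinks and the step collapses onto the food position.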

Encircle food
The hunting group uses the second strategy to surround and approach the prey. This stage can be represented mathematically as:

X_r(t+1) = X_best_r − tan(πl/4) × D,   (5)

where D is the distance between the current individual and the prey, which differs between individuals; l is a random number in [0, 1]; and tan(πl/4) describes the encirclement curve of the hunter.
The formula of D is as follows:

D = |X_best_r − X_r(t)|,   (6)

where X_r(t) represents the position of the current hunter. The execution probabilities of these two predatory strategies are given by Eqs. (7) and (8).
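The encircling step can be sketched as follows (a reconstruction from the definitions around Eqs. (5) and (6); treat the exact form as an assumption):

```python
import numpy as np

def encircle(x, x_best, rng):
    # Eqs. (5)-(6): approach the food along the encirclement curve
    # tan(pi*l/4); with l in [0, 1] the factor lies in [0, 1], so the
    # update lands between the current position and the food.
    l = rng.random()
    D = np.abs(x_best - x)                  # Eq. (6): distance to the food
    return x_best - np.tan(np.pi * l / 4.0) * D
```

When l is near 0 the hunter jumps onto the food; when l is near 1 the factor approaches tan(π/4) = 1 and the hunter stays close to its current position, tracing the curved approach path.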

Supporting closest individual
Considering that the group may encounter difficulties in hunting the prey, the nearest individual is called on for peer support, which can be expressed as follows:

X_r(t+1) = P_nearest_r,   (9)

where P_nearest_r is the location of the nearest predator in the population.

Searching for the food
The remaining individuals will look for an alternative food source if no prey is found nearby or the food is too far away. This behavior is expressed by Eqs. (10)–(12), where D1 denotes the distance of random group movement, r4 is a random number in [0, 1], and X_rand_r is a new individual position formed randomly.
The formula of X_rand_r is as follows:

X_rand_r = r5 × (ub − lb) + lb,   (13)

where ub and lb are the upper and lower bounds of the function, respectively; dim is the dimension of the problem; and r5 is a random vector in [0, 1]. The probability of supporting the closest individual versus searching for new food is determined by r6 (Eq. (14)), a random number in [−2, 2] that reflects the relation between the group and the prey. Eq. (10) and Eq. (12) introduce the randomness that lets the search traverse the solution space. Specifically, Eq. (10) uses the current solution as a reference and randomly generates multiple candidate solutions, replacing the current solution with the best of them to escape the local optimum. In Eq. (12), the current solution is replaced by a completely random solution, which is equivalent to searching randomly for fresh prey when the current prey cannot be captured.
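The random restart of Eq. (13) is a simple uniform draw inside the bounds (sketch; the function name is ours):

```python
import numpy as np

def random_position(ub, lb, dim, rng):
    # Eq. (13): a completely random position inside the bounds, used
    # when the current prey cannot be captured and a fresh target is
    # sought.
    r5 = rng.random(dim)
    return r5 * (ub - lb) + lb
```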
When a predator's position exceeds the upper or lower limit, we apply the rule of survival of the fittest in nature and replace the out-of-bounds component with the corresponding component of the current optimal position. The specific formula is:

X_r^j(t+1) = X_best^j(t), if X_r^j(t+1) > ub or X_r^j(t+1) < lb,   (15)

where X_r^j is the position component beyond the boundary and X_best^j is the optimal position, j = 1, 2, ..., dim. The CPA proposed in this paper imitates the process of colony predation. We keep the algorithm as simple as possible to maximize its scalability. Algorithm 1 shows the pseudocode of CPA, where MaxFEs represents the maximum number of evaluations, value indicates the fitness of each evaluation, N denotes the population size, and dim is the problem's dimension.
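In code, this boundary rule is a one-line mask operation: out-of-bounds components are copied from the best position rather than clipped to the boundary (sketch; names are ours):

```python
import numpy as np

def repair_bounds(x, x_best, ub, lb):
    # Replace each out-of-bounds component with the corresponding
    # component of the current best position (survival of the fittest),
    # instead of clipping to the boundary as most optimizers do.
    x = x.copy()
    out = (x < lb) | (x > ub)
    x[out] = x_best[out]
    return x
```

Clipping piles violating individuals up on the boundary; copying from the best position instead pulls them back into the most promising region, which is the exploitation gain the paper describes.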
As a unique optimizer with stable performance, CPA has exhibited high potential for solving optimization cases, which is attributed to the following reasons: (1) The idea of selective abandonment adopted in the algorithm breaks the boundary between exploration and exploitation and increases the exploratory tendency even in the middle and later stages of exploitation, which further helps prevent the search from dropping into a Local Optimum (LO).
(2) The value of the current optimal solution is used rather than the value of transboundary individuals, which avoids the problem of excessive spatial dispersion.
(3) The introduction of the adaptive weight a ensures that the algorithm can quickly transition from exploration to exploitation in the early stage, allowing more time for exploitation, while S provides enough perturbation that the algorithm does not fall into the LO prematurely.
(4) Innovative use of the communication and cooperation mechanism increases diversity in the early stage and strengthens exploration of the local solution space in the later stage. Fig. 4 further demonstrates these ideas. We chose test function F1 to verify point 2 and F12 to verify the influence of points 1, 3, and 4 on CPA's exploration ability. In the figure, CPA is shown in red, CPA with a linear parameter a in blue, CPA with the communication and cooperation mechanism removed in green, and CPA with the abandonment mechanism removed in pink. Comparing subfigures (b), (c), and (d) shows that using the current optimal value effectively reduces the spatial dispersion of points and improves performance. From (f), (g), and (h), it can be seen that the CPA variants fall into the LO earlier than CPA itself, confirming that points 1, 3, and 4 are correct.

Computational complexity analysis
In the proposed CPA, initialization, fitness evaluation, communication and collaboration, parameter updating, and location updating are performed. In the respective expressions, N is the number of individuals in the population, D is the dimension of the problem, and MaxFEs indicates the maximum number of evaluations.
O(N) is the computational complexity of initialization, fitness evaluation, parameter updating, and communication and collaboration, while the computational complexity of location updating is O(N × D). Since each iteration consumes N evaluations and costs O(N × D), the complexity of the whole algorithm is on the order of O(MaxFEs × D).

Experiments and results
This section compares the proposed CPA with a number of conventional and recent optimizers in the field. All experiments were conducted on Windows Server 2008 R2 operating system with Intel (R) Xeon (R) CPU E5-2650 v4 (2.20 GHz) and 128 GB of RAM. We coded all algorithms for comparison using MATLAB R2014b.

Benchmark function validation
The algorithm was tested on 53 functions, among which F1–F23 are classical benchmark functions (Table A1) and F24–F53 are CEC2014 functions (Table A2). These functions cover both unimodal and multimodal problems. In the respective tables, Dim represents the dimension of the function, Range refers to the definition domain, and f_min is the optimal value of the function.
All experiments were conducted under the same conditions to ensure the fairness of the experimental results [62][63][64]. Under the evaluation framework, the population size was set to 30, and the number of evaluations and the dimension were set to 1000 and 30, respectively. This ensures that there is no partiality in the setup, as per the literature [26,[65][66][67][68][69][70][71][72]. At the same time, we ran each algorithm 30 times to exclude the influence of random factors. We applied the Friedman test and the Wilcoxon signed-rank test to compare the performances of the algorithms. The Friedman test is a non-parametric statistical procedure that allows further analysis of the algorithms' average performance rankings. The Wilcoxon test is a common statistical test in which a p-value less than 0.05 indicates that the performance of CPA is significantly better than that of its competitor. Fig. 5 presents the results of a qualitative analysis of CPA on benchmark functions F4, F5, F7, and F9 (from Table A1), selected to analyze its exploration and exploitation abilities: two unimodal and two multimodal functions serve as the evaluation criteria. The results consist of four major components: 1) the search history shows the location and distribution of individuals over the evaluations; 2) the trajectory of the first individual reveals its motion throughout the evaluation process; 3) the average fitness of all individuals monitors how the fitness of the entire population changes during the optimization process; and 4) the convergence behavior reveals the changing trend of the optimal fitness.
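The Friedman-style average ranking used throughout these comparisons can be computed as in the following minimal sketch (ties are ignored here; tied values would need mid-ranks, and the function name and layout are our own):

```python
import numpy as np

def average_ranks(results):
    # results: array of shape (n_functions, n_algorithms) holding the
    # mean error per function (lower is better). Each row is ranked
    # from best (rank 1) to worst, and ranks are averaged per algorithm.
    ranks = np.argsort(np.argsort(results, axis=1), axis=1) + 1
    return ranks.mean(axis=0)
```

An algorithm with a smaller average rank (such as CPA's 3.20 in Table 1) is the one that sits closer to first place across the whole function suite.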

Qualitative analysis
We can see that an individual explored most of the solution space according to its historical positions, revealing that the algorithm has a strong search ability and can avoid falling into locally optimal solutions on complex multimodal functions. At the same time, most of the search locations are around the optimal solution, which suggests that the algorithm exploits the target area accurately. Moreover, convergence is quick on all functions except F4, where CPA still reaches the optimal value by the middle of the run.
The trajectory diagram shows that individuals fluctuate sharply in the initial and middle stages of the search. The fluctuation coverage exceeds 50% of the solution space, indicating that CPA has a powerful search ability. The algorithm finds the optimal solution quickly and thus performs very well on both simple unimodal and complex multimodal functions.
Monitoring the overall average fitness shows that the algorithm tends to converge quickly in the early stages of evaluation. CPA obtained the best average fitness on all functions except F15 and F23 in a short time across repetitions. Although the average fitness fluctuated at times, its gradual decrease reflects the algorithm's strong search and exploration ability. The convergence curves further reveal the algorithm's fast convergence speed.
The data in Table 1 compare CPA with other traditional MAs based on average results, with the Friedman-test ranking in the last column. In the table, "+" indicates that CPA performs better than the corresponding algorithm, "−" that it performs worse, and "=" that CPA and the corresponding algorithm perform comparably. The average (AVG) and standard deviation (STD) values were obtained over the repeated runs, and the Friedman values correspond to each algorithm's average ranking. Table 1 shows that CPA ranks first and that it was challenging for the competing algorithms to defeat CPA on most of the 53 functions. In addition, the Friedman rank of CPA is only 3.20, much smaller than those of the other algorithms; compared with the second-ranked DE, CPA's average rank is about 0.9 lower. CPA also ranks in the top three on F6, F13, F21, F22, F23, F25, F26, F27, F29, F32, F34, and F38. Moreover, because the STD of CPA reached 0 on F1, F2, F3, F4, F9, F17, F47, and F48, we can conclude that CPA performs consistently. Table A4 shows the results of the Wilcoxon test: most of the p-values are less than 0.05, accounting for 88% of all data. Even against SCA, only two p-values are higher than 0.05. CPA is far superior to DE and ABC on most functions, although their p-values higher than 0.05 account for 13% and 19% of the totals, respectively. This further shows that CPA's advantage is statistically significant. Fig. 6 reveals that the convergence speed of CPA is very fast, which provides an index for judging the algorithm's performance. CPA found the optimal value in the early stage on F1 and F4, so the CPA curve does not extend across those images. For F1, F4, F7, F15, F46, F50, and F53, CPA also converged fastest among all algorithms, while some other algorithms converged slowly and fell into local optima.
Functions F46, F50, and F53 demonstrate that CPA solves problems with high accuracy and can quickly find the global optimum at the beginning of the evaluation. While the convergence speed of some other algorithms was also very fast, their solution accuracy was not as high as CPA's, and the solution found by CPA was better. Even though CPA's convergence speed was slower on F24, it still found the global optimum before the other algorithms, some of which fell into a local optimum at the beginning of the evaluation. The performance on F15 and F35 shows that CPA has a strong ability for global exploration, finds the target area early, and fully balances exploration and exploitation, further confirming its superior performance.
The data in Table 2 reveal that CPA performs excellently on both unimodal and multimodal functions, especially the fixed-dimension multimodal functions. The average Friedman rank of CPA is 2.88, much smaller than the averages of the other algorithms; for instance, it is about one-fifth of the average of MWOA. Among m_SCA, OBSCA, IWOA, CDLOBA, CBA, and CGPSO, the strongest algorithm defeated CPA on only 11 of the 53 functions, and MWOA did not beat CPA on any function. This shows that CPA has a strong optimization ability. Table A5 lists the p-values of CPA against the advanced algorithms on all test functions and shows that all values for MWOA are less than 0.05, while ALCPSO has the largest number of p-values above 0.05, approaching 14. These test results further confirm that CPA is superior to the other competitive algorithms.
F1, F4, F10, and F20 belong to the 23 benchmark functions, while F26, F28, F35, F39, F43, F50, and F52 come from CEC2014. Fig. 7 shows that CPA found the optimal solution on F1 and F4 early in the evaluation, when other algorithms were just beginning to converge, suggesting that CPA is faster and more accurate than its competitors. Although CPA fell into a local optimum on F26, it still ranked as the second-best algorithm. It can be observed from F10 and F20 that CPA found the optimal solution very quickly in the early evaluation period, while some of the comparative algorithms fell into local optima. For F28, F39, and F43, CPA was still the first algorithm to find the optimal solution, although its convergence rate was slower. We can infer from these results that CPA's exploration and exploitation abilities are powerful and that the two stages are well balanced.

Parameter sensitivity analysis
Since an algorithm's parameters affect its convergence speed and accuracy, we analyzed the following parameters of CPA: the population size (N), the maximum number of evaluations (MaxFEs), the coefficient w, and the upper limit of the abandonment probability. When testing the abandonment probability upper limit, w was fixed at 9 and a total of eleven values of the limit (expressed as multiples of a) were tested. Similarly, when analyzing w, the upper limit of the probability was fixed at 2/3a and w was set to 1, 2, 3, 4, 5, 6, 7, 8, 9, and 10, respectively. N and MaxFEs were kept at 30 and 1000, respectively, throughout these tests. Each configuration was tested on the 53 functions 30 times. Table 3 compares the different values of the abandonment probability upper limit, which has a significant impact on performance: the best results were achieved when the upper limit was 2/3a, and the maximum difference between the average ranks reached 3.7. Table 4 shows that the influence of the coefficient w is relatively small, with a maximum difference between average ranks of only about 1.2.
We used benchmark function F7 to test the influence of N and MaxFEs on CPA, where N was set to 5, 10, 30, 50, and 100, and MaxFEs to 50, 100, 200, 500, and 1000. According to the test results in Fig. 8, increasing N and MaxFEs improved the accuracy of CPA, but beyond a certain level the influence becomes minimal. It is therefore suggested to set these values according to the specific needs of an experiment: if the values are too large, convergence takes a long time, and if they are too small, the results will not be ideal.

Comparative performance of CPA on engineering design problems
To further evaluate its efficiency, CPA was tested on various engineering design problems. Common constraint-handling methods include the death penalty as well as annealing, static, dynamic, co-evolutionary, and adaptive penalties. We selected the death penalty for the comparisons in this work. This method assigns a large objective function value to search individuals that violate any constraint, automatically eliminating infeasible solutions during the optimization, so it is unnecessary to quantify a solution's degree of infeasibility. The most prominent advantages of the death penalty are its simplicity and low time consumption.
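The death penalty can be implemented as a thin wrapper around any objective; this is a generic sketch (the function names and the constraint convention g(x) ≤ 0 are our own):

```python
def death_penalty(objective, constraints, penalty=1e10):
    # Wrap an objective so that any solution violating a constraint
    # g(x) <= 0 receives a huge fitness value and is automatically
    # discarded by a minimizing optimizer; no infeasibility measure
    # is computed.
    def wrapped(x):
        if any(g(x) > 0 for g in constraints):
            return penalty
        return objective(x)
    return wrapped
```

The optimizer itself needs no modification: it simply never selects a solution carrying the penalty value, which is exactly the "automatic elimination" described above.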
CPA was tested on the following five constrained engineering design problems: tension-compression spring, welded beam, pressure vessel, I-beam, and multiple disk clutch brake.

1 Tension-compression spring problem
The Tension/Compression Spring (TCS) design problem aims to minimize the weight of the spring. It requires the optimization of three design variables: the wire diameter (d), the mean coil diameter (D), and the number of active coils (N). Fig. 9 illustrates this problem.
The mathematical model of this problem minimizes the spring weight subject to constraints on deflection, shear stress, and surge frequency. The model has been solved by both mathematical optimization tools and metaheuristic methods. For instance, Coello et al. solved this problem using Genetic Algorithms (GA) and obtained a fitness value of 0.0127048. Experimental results from many algorithms show that the optimal weight can reach 0.0126650. Table 5 reveals that the weight obtained by CPA is smaller than the current optimal weight, which demonstrates the superiority of CPA on the TCS design problem.
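For reference, the widely used textbook form of this problem's objective and one of its classical constraints can be coded as follows (this is the standard formulation from the constrained-optimization literature, assumed here since the paper's equations did not survive extraction):

```python
def tcs_weight(d, D, N):
    # Standard tension/compression spring objective: the spring weight
    # is proportional to (N + 2) * D * d^2.
    return (N + 2) * D * d * d

def tcs_g1(d, D, N):
    # Classical minimum-deflection constraint:
    # g1 = 1 - D^3 * N / (71785 * d^4) <= 0 for a feasible design.
    return 1.0 - (D ** 3) * N / (71785.0 * d ** 4)
```

Combined with a death-penalty wrapper, any metaheuristic can then minimize `tcs_weight` directly over (d, D, N).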

2 Welded beam design problem
The purpose of this problem is to find the lowest cost of a welded beam under four constraints: shear stress (τ), bending stress (σ), buckling load (P_c), and end deflection (δ). The four design variables are the weld thickness (h), the welded joint length (l), the beam width (t), and the beam thickness (b). Fig. 10 illustrates the elements of this problem.
The performance of CPA was compared with those of RO, SSA, CDE [88], GWO, WOA, and GSA on this problem. The current optimal weight is 1.72452. We can find from Table 6 that SSA has the best performance, achieving a weight of 1.723916.

3 Pressure vessel design problem
This problem is a well-studied structural optimization test case that aims to minimize the production cost of a cylindrical pressure vessel. One end of the container is capped and the other end is hemispherical. Four variables determine the total manufacturing cost: the thickness of the shell (T_s), the thickness of the head (T_h), the inner radius (R), and the length of the cylindrical section (L). Fig. 11 illustrates this problem.
The cost function is as follows:

f(T_s, T_h, R, L) = 0.6224 T_s R L + 1.7781 T_h R² + 3.1661 T_s² L + 19.84 T_s² R

Fig. 11 The structure of the pressure vessel.
Fig. 12 The structure of the I-beam.
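The widely used form of this cost function (assumed here; the isolated "19.84" fragment in the extracted text matches the coefficient of its last term) can be coded directly:

```python
def vessel_cost(Ts, Th, R, L):
    # Standard pressure vessel manufacturing-cost function over the
    # shell thickness Ts, head thickness Th, inner radius R, and
    # cylindrical length L.
    return (0.6224 * Ts * R * L
            + 1.7781 * Th * R * R
            + 3.1661 * Ts * Ts * L
            + 19.84 * Ts * Ts * R)
```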
For this engineering problem, this work compared the performance of CPA with those of PSO, MFO, GWO, WOA, BA, CPSO [89], HPSO [90], the Lagrangian multiplier method [91], and branch-and-bound [92]. Table 7 shows that the optimal value of CPA was about 1.2 higher than those of MFO, BA, and HPSO; these three MAs achieved the minimum cost, but still exceeded the current optimal value of 6059.7143.

4 I-beam design problem
The purpose of this problem is to minimize the vertical deflection of an I-beam. The related parameters include the length, the height, and two thicknesses. Fig. 12 shows the configuration of this problem. Table 8 compares CPA with ARSM [93], IARSM [93], CS, and SOS [94] on the I-beam problem. We can see from Table 8 that CPA minimizes the vertical deflection of the I-beam at least as well as the other four algorithms, matching the current optimal value of 0.006626, which indicates that CPA is well suited to engineering problems.
Fig. 13 The structure of the multiple disk clutch brake.

5 Multiple disk clutch brake problem
The objective of this discrete minimization problem is to use five discrete design variables to minimize the mass of a multiple disk clutch brake. The five variables are the actuating force, the inner and outer radii, the number of friction surfaces, and the thickness of the discs. Fig. 13 shows the configuration of this problem.
This work compared CPA with WCA [95], PVS [96], and TLBO in minimizing the mass of the multiple disk clutch brake. Table 9 shows that the mass found by CPA, 0.313656, is equal to or less than those of the other algorithms. This indicates that the proposed algorithm has a stronger optimization ability and can find higher-quality solutions.

Conclusions and future work
This paper proposes a metaheuristic algorithm, the Colony Predation Algorithm (CPA), inspired by the characteristics of social animals, to solve optimization problems. The algorithm achieves a sound balance between exploitation and exploration and can converge quickly in the early and middle stages without falling into a Local Optimum (LO). Motivated by the common foraging characteristics of social animals, the algorithm follows a mechanism based on the idea of selective abandonment to simulate the impact of this strategy on individual activities.
This study qualitatively analyzed the algorithm according to four indices: search history, first-dimension trajectory, average fitness, and convergence curve. The algorithm's performance was evaluated on the benchmark and CEC2014 functions, and the Friedman test and Wilcoxon test were used for statistical evaluation. The experimental results show that the algorithm has a strong search ability, finding the target solution space more quickly and subsequently performing exploitation more effectively than the other algorithms. CPA can also identify the optimal solution faster and more reliably on complex multimodal functions and exhibits a stronger exploitation ability on complex functions.
Simultaneously, to verify its applicability to practical problems, CPA was applied to the tension-compression spring, welded beam, pressure vessel, and other engineering problems. The experimental results show that CPA's average rank is about 0.9 lower than that of the second-ranked DE and about 0.84 lower than that of the improved DECLS. Thus, CPA can optimize production engineering problems and significantly reduce manufacturing costs.
This work also streamlined CPA to make it easier to add new mechanisms. In future research, CPA can be applied to parameter optimization, binary feature selection, and image segmentation in machine learning algorithms, and combined with machine learning for disease prediction.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.