CS-PSO: chaotic particle swarm optimization algorithm for solving combinatorial optimization problems

Combinatorial optimization problems are typically NP-hard due to their intrinsic complexity. In this paper, we propose a novel chaotic particle swarm optimization algorithm (CS-PSO), which combines the chaos search method with the particle swarm optimization algorithm (PSO) for solving combinatorial optimization problems. In particular, in the initialization phase, prior knowledge of the combinatorial optimization problem is used to optimize the initial particles. According to the properties of the problem, suitable classification algorithms group similar items into categories, thus reducing the number of combinations. This enables a more efficient enumeration of all combination schemes and optimizes the overall approach. On the other hand, in the chaos perturbing phase, a brand-new set of rules is presented to perturb the velocities and positions of particles so as to achieve the desired global search capability and adaptability, effectively avoiding the premature convergence problem found frequently in the traditional PSO algorithm. In the above two stages, we control the number of selected items in each category to ensure the diversity of the final combination scheme. The fitness function of CS-PSO introduces the concepts of personalized constraints and general constraints to obtain a personalized interface, which is used to solve personalized combinatorial optimization problems. As part of our evaluation, we define a personalized dietary recommendation system, called Friend, where CS-PSO is applied to address a healthy diet combination optimization problem. Based on Friend, we implemented a series of experiments to test the performance of CS-PSO. The experimental results show that, compared with the typical HLR-PSO, CS-PSO can recommend dietary schemes more efficiently, obtains the global optimum with fewer iterations, and exhibits better global ergodicity.


Introduction
Particle swarm optimization (PSO) is a heuristic optimization technique, presented by Kennedy and Eberhart (1995), which mimics the swarm behavior of bird flocks performing their tasks in order to discover an optimal solution based on an objective function (Kennedy and Eberhart 1995; Eberhart and Kennedy 1995; Chang et al. 2014). With fewer parameters, the PSO algorithm can achieve faster convergence, while being simpler and easier to implement (Xu et al. 2012). PSO has already been applied to many fields, such as electric power systems, job scheduling of workshops, wireless sensor networks, route planning, and robotics (Lei 2014; Kumari and Jha 2014; Yao et al. 2012; Liao et al. 2012; Lee and Kim 2013). However, the performance of PSO still has room for improvement. For example, due to its fast convergence, PSO easily falls into local optima when solving multimodal optimization problems, potentially leading to the premature convergence of particle swarms. In the initialization and updating phases, the stochastic strategy of PSO generates a group of particles and finds the optimal solution through multiple iterations. During the iterations, the positions and velocities of particles are randomly updated, resulting in low computational efficiency. There are mainly two ways to improve the performance of PSO: the first adjusts the parameters and procedure of PSO, such as dynamically adjusting the search step length and optimizing the update strategy of the particles (Chi et al. 2011; Zhao et al. 2014); the other combines PSO with other intelligent optimization algorithms, such as the genetic algorithm (GA) and the simulated annealing algorithm (Sharma and Singhal 2015; Nancharaiah and Mohan 2013). Most related research on the improvement of PSO (Guo et al. 2014; Zhu et al. 2014; Sorkunlu et al. 2013; Shi and Eberhart 1998; Elbedwehy et al. 2012) now focuses mainly on continuous optimization problems, while combinatorial optimization problems (e.g., the combination of integer programming and the 0/1 knapsack) have not attracted enough attention, and the current research results are usually suited to particular scenarios, thus lacking generality.
In order to solve combinatorial optimization problems more efficiently, we propose a novel chaotic particle swarm optimization algorithm (CS-PSO). The main contributions of this paper are as follows. First of all, the chaos initialization and the chaos perturbing of the chaos search method are introduced into PSO in place of random initialization and random perturbing. The ergodicity, regularity, and randomness of the chaos search method help address the issues of PSO, including the local optimum and poor search efficiency. In the initialization phase, prior knowledge of the combinatorial optimization problem is used to optimize the initial particles, so that both the quality of the particles and the search efficiency of the algorithm are improved. In the chaos perturbing phase, a brand-new set of perturbing rules is presented to perturb the velocities and positions of particles sufficiently to realize the desired global search capability and adaptability, effectively solving the premature convergence problem of particles. Subsequently, we designed the fitness function of CS-PSO, which utilizes the concepts of personalized constraints and general constraints to produce a personalized interface, used to solve personalized combinatorial optimization problems. Finally, we built a personalized dietary recommendation system, Friend, which is based on CS-PSO to address a healthy diet combination optimization problem. Friend is able to recommend more reasonable dietary schemes, which shows that CS-PSO outperforms other improved PSO algorithms, such as the typical PSO for generating healthy lifestyle recommendations (HLR-PSO) (Pop et al. 2013).
The rest of the paper is organized as follows: Sect. 2 presents the related works, Sect. 3 discusses CS-PSO in detail, and Sect. 4 describes the prototype personalized dietary recommendation system called Friend built on CS-PSO. Experiments and performance analysis are presented in Sect. 5, and finally, Sect. 6 concludes the paper by summarizing its main contributions and commenting on future directions of our work.

Related work
PSO is a bio-inspired optimization meta-heuristic inspired by the foraging behavior of birds. It is based on the assumption that a flock of birds is randomly distributed in an area with only one piece of food, as shown in Fig. 1a. The dot on the tree represents the available food; its position is unknown to each bird, although every bird knows its distance from it. Furthermore, the bird nearest to the food can notify the other birds to fly toward it. The food is assumed to be the optimal value, as shown in Fig. 1b, where each bird is seen as a particle and the distance between a bird and the food is a value of the objective function. Therefore, the flock's foraging process can be defined as a function optimization process. In Fig. 1b, X_i is the particle closest to the goal, and it is set as the current global optimal particle. Its distance from the goal is Nbest_i, which is the global optimal value (Kennedy and Eberhart 1995; Eberhart and Kennedy 1995). The main idea of PSO (Pop et al. 2013) is that, in a set of particles, each particle is defined with a position and a velocity and searches for the global optimum of an NP-hard problem. The particles iteratively update their positions according to their individual local optimal position and the global optimal position visited so far. The new position of a particle (e.g., particle i) is defined as

X_i(t + 1) = X_i(t) + V_i(t + 1),

where t is the current (temporal) status, t + 1 is the status post-updating, X_i(t) is the current position of the particle, and V_i(t + 1) is the new velocity of the particle. Note that the time difference Δt = (t + 1) − t is indeed one time unit. The velocity of particle i is defined as

V_i(t + 1) = ω V_i(t) + c_1 r_1 (X_i^p − X_i(t)) + c_2 r_2 (X_i^g − X_i(t)),

where V_i(t) is the current velocity of the particle, X_i^p is the best position so far visited by the particle (i.e., the local best position), X_i^g is the global best position, ω is the inertia weight, c_1 and c_2 are acceleration coefficients, and r_1 and r_2 are random numbers in [0, 1].

There is much research focusing on improving the performance of the original PSO. In Wen et al.
(2013), the authors propose a new modified particle swarm optimization algorithm based on sub-particle circular orbit and zero-value inertial weight (MDPSO). MDPSO utilizes a trigonometric function based on nonlinear dynamic learning factors and a prediction method for population premature convergence, which achieves a better balance between the local exploration ability and the global convergence ability of particles (Wen et al. 2013). However, MDPSO is mainly suitable for solving the composition optimization problem of Web services, and it is therefore not universal. In Gao et al. (2005), the authors propose a general particle swarm optimization model (GPSO), which can be naturally extended to solve discrete and combinatorial optimization problems. GPSO uses a genetic updating operator, further improving the quality of the solution and the stability of convergence while significantly saving computational cost (Gao et al. 2005). However, the genetic updating operator introduces randomness into GPSO, which cannot guarantee the diversity of the final solution.
In Guo et al. (2011), the authors propose a hybrid particle swarm optimization algorithm with the Fiduccia-Mattheyses algorithm (FM), inspired by GA, utilizing the position regeneration mechanism of discrete particle swarm optimization (DPSO). In particular, it uses genetic operations to update the position of the particle, defined as two-point crossover and random two-point exchange mutation operators, to avoid generating infeasible solutions. To improve the ability of local exploration, FM is applied to update the position. A mutation strategy is also built into the proposed algorithm to achieve better diversity and break away from local optima (Guo et al. 2011). However, similar to Wen et al. (2013), the algorithm is not universal and cannot solve multi-objective optimization problems. In Ibrahim et al. (2012), the authors propose a novel multistate particle swarm optimization algorithm (MSPSO) to solve discrete combinatorial optimization problems, which differs from the binary particle swarm optimization algorithm (BinPSO). In MSPSO, each dimension variable of each particle can attain various states, and the algorithm has been applied to two benchmark instances of the traveling salesman problem (TSP). The experimental results show that MSPSO outperforms BinPSO in solving discrete combinatorial optimization problems (Ibrahim et al. 2012). However, MSPSO utilizes the concept of multistate, leading to exponentially growing requirements of storage space and computation time. Therefore, the efficiency is affected when MSPSO is applied to high-dimensional combinatorial optimization problems. In Gao and Xiei (2004), the authors apply the chaos search method to PSO, using its ergodicity, regularity, and randomness to search around the current global best particle in a chaotic way, replacing a stochastically selected individual from the current "population." The performance of PSO is improved with the chaos search method, which motivates our work: the evolution process is quickened, and the ability to seek the global optimum, the convergence speed, and the accuracy are all improved (Gao and Xiei 2004). In Wang and Wu (2011) and Yang et al. (2015), improved PSO algorithms with the chaos search method are presented and applied to the optimization of logistics distribution routes and to the vehicle routing problem with time windows, respectively. However, these results all simply adopt the chaos search method without further improving the mechanisms of chaos initialization and chaos perturbing or providing a personalized interface. Therefore, the diversity of the final solution cannot be guaranteed, and the search efficiency is still unsatisfactory (Wang and Wu 2011; Yang et al. 2015). In Sfrent and Florin Pop (2015), the authors introduce a simulation infrastructure for building and analyzing different types of scenarios, which allows the extraction of scheduling metrics for three different algorithms, namely an asymptotically optimal one, FCFS, and a traditional GA-based algorithm. These are combined into a single hybrid algorithm addressing asymptotic scheduling for a variety of tasks related to big data processing platforms. A distributed and efficient method for optimizing task assignment is introduced in Iordache et al. (2006), which utilizes a combination of genetic algorithms and lookup services. In Bessis et al. (2012), an algorithm based on a variety of e-infrastructure nodes exchanging simple messages with linking nodes is discussed, with the aim of improving the energy efficiency of the network.
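The canonical PSO position and velocity updates recalled at the start of this section can be sketched as follows. This is an illustrative continuous-case implementation; the coefficient values `w`, `c1`, and `c2` are common defaults and not taken from this paper.

```python
import random

def pso_update(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO update step for a single particle (continuous case).

    x, v           -- current position and velocity (lists of floats)
    p_best, g_best -- local best and global best positions
    w, c1, c2      -- illustrative inertia weight and acceleration coefficients
    """
    new_x, new_v = [], []
    for j in range(len(x)):
        r1, r2 = random.random(), random.random()
        # V_i(t+1) = w*V_i(t) + c1*r1*(X_i^p - X_i(t)) + c2*r2*(X_i^g - X_i(t))
        vj = w * v[j] + c1 * r1 * (p_best[j] - x[j]) + c2 * r2 * (g_best[j] - x[j])
        new_v.append(vj)
        # X_i(t+1) = X_i(t) + V_i(t+1)
        new_x.append(x[j] + vj)
    return new_x, new_v
```

When a particle sits exactly at both its local and global best with zero velocity, the update leaves it in place, which is the expected stationary behavior of these equations.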

Basic idea
The current PSO algorithms designed for solving combinatorial optimization problems generally exhibit the following issues:

- Most PSO algorithms are only suitable for one particular scenario; they are not universal.
- Most PSO algorithms are not multi-objective and do not provide a personalized interface, so they cannot effectively solve discrete, multi-objective, and personalized combinatorial optimization problems.
- As the particle dimension increases, the requirements of storage space and computation time grow exponentially, which lowers the efficiency when solving high-dimensional combinatorial optimization problems.
The CS-PSO proposed here adopts the chaos search method (Lorenz 2005). The chaos initialization and the chaos perturbation of the chaos search method are used instead of random initialization and random perturbing. In the initialization phase, CS-PSO optimizes the initial particles according to the characteristics of combinatorial optimization problems. Via item classification, similar items are grouped into the same category, thus reducing the number of combinations. Therefore, it becomes possible to enumerate all combination schemes and improve the search efficiency. In the chaos perturbing phase, a new set of perturbing rules is designed to perturb the velocities and positions of particles sufficiently, so that CS-PSO has good global search capability and adaptability, and the premature convergence problem of particles is effectively solved. In the above two phases, CS-PSO controls the number of selected items in each category to ensure the diversity of the final combination scheme. The fitness function of CS-PSO utilizes the concepts of personalized constraints and general constraints to obtain a personalized interface, which can be used to solve the corresponding personalized combinatorial optimization problem.

Chaos search method
Definition 1 (Chaos search) Chaos search is a random movement exhibiting pseudorandomness, ergodicity, and regularity, determined by a deterministic equation (Lorenz 2005).
Through chaos iteration, a set of random sequences with ergodicity and pseudorandomness is generated. Usually, the logistic mapping equation (Dong et al. 2013) is used to generate pseudorandom sequences:

Z_{n+1} = μ Z_n (1 − Z_n), n = 0, 1, 2, …,

where Z is a chaotic variable, corresponding to α_n, and μ is the control parameter. If μ = 4, the logistic map shows entirely chaotic dynamics, and the trajectory of the chaotic variable is dense over the whole search space. We assume that the initial value of Z, namely α_0, is not equal to 0, 0.25, 0.5, 0.75, or 1; otherwise, the sequence would eventually be periodic.
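The logistic map above can be sketched as a simple sequence generator. The function name and interface are illustrative; the map itself is exactly the equation Z_{n+1} = μ Z_n (1 − Z_n) with μ = 4 by default.

```python
def logistic_sequence(alpha0, n, mu=4.0):
    """Generate n chaotic values with the logistic map Z_{k+1} = mu*Z_k*(1-Z_k).

    For mu = 4, alpha0 must avoid the eventually periodic initial values
    {0, 0.25, 0.5, 0.75, 1}; otherwise the orbit collapses to a fixed cycle.
    """
    assert alpha0 not in (0.0, 0.25, 0.5, 0.75, 1.0), "eventually periodic initial value"
    seq, z = [], alpha0
    for _ in range(n):
        z = mu * z * (1.0 - z)  # one chaos iteration
        seq.append(z)
    return seq
```

Starting from, say, α_0 = 0.1, the first iterate is 4 × 0.1 × 0.9 = 0.36, and subsequent values wander densely over (0, 1).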

Model of combinatorial optimization problem
Definition 2 (Combinatorial optimization) Combinatorial optimization refers to the process of optimizing an object via the combination of a finite set of components.
A typical case of combinatorial optimization is the "quality-cost" model of manufactured products, where a specific product consists of m components, and each of them can be chosen from a variety of options. The parameters of each optional component include a weight representing its quality and an index of cost, with the constraint that the total expenditure for the product does not exceed the available budget.
There are a variety of examples where combinatorial optimization plays a crucial role. These include, for example, assembling the different parts of a car, such as the engine, chassis, tires, transmission, and electrical equipment, while optimizing quality versus cost. The "quality-cost" model of the combinatorial optimization problem is defined as

max Σ_{i=1}^{m} Σ_{j=1}^{n_i} w_{i,j} x_{i,j}

s.t. Σ_{i=1}^{m} Σ_{j=1}^{n_i} c_{i,j} x_{i,j} ≤ O, Σ_{j=1}^{n_i} x_{i,j} = 1, x_{i,j} ∈ {0, 1}, ∀i,

where

- i is the index of the category and j is the index of the item,
- m is the total number of categories and n_i is the total number of items in the i-th category,
- w_{i,j} is the weight of the j-th item in the i-th category and c_{i,j} is the cost of the j-th item in the i-th category,
- O is the object cost, meaning that the manufacturing cost shall not exceed O while the quality of the product shall be optimal, and
- x_{i,j} ∈ {0, 1} is the mapping value of the item.

Note that if the j-th item in the i-th category is selected, then x_{i,j} = 1; otherwise, x_{i,j} = 0. The constraint Σ_{j=1}^{n_i} x_{i,j} = 1 implies that exactly one item is selected from each category.
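For very small instances, the "quality-cost" model above can be solved by exhaustive enumeration, which makes the structure of the problem concrete. This is an illustrative brute-force sketch (not the CS-PSO algorithm); it is only practical for tiny instances, since the problem is NP-hard in general.

```python
from itertools import product

def best_combination(weights, costs, budget):
    """Pick exactly one item per category, maximizing total weight (quality)
    subject to the total cost not exceeding the budget O.

    weights[i][j], costs[i][j] -- weight/cost of item j in category i.
    Returns (best_weight, chosen_indices), or (None, None) if infeasible.
    """
    best_w, best_choice = None, None
    # Enumerate every combination that selects one item index per category.
    for choice in product(*[range(len(cat)) for cat in weights]):
        total_c = sum(costs[i][j] for i, j in enumerate(choice))
        if total_c > budget:  # violates the cost constraint
            continue
        total_w = sum(weights[i][j] for i, j in enumerate(choice))
        if best_w is None or total_w > best_w:
            best_w, best_choice = total_w, choice
    return best_w, best_choice
```

With two categories of two items each and a tight budget, only the cheap pairing survives the constraint, and the enumerator returns it directly.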

Chaos initialization
Chaos initialization refers to the process in which a chaotic variable of the logistic map randomly identifies a value as the initial value of a particle.
The parameters of chaos initialization are as follows:

1. All items are divided into m categories, defined as vectors B_i, for i = 0, 1, …, m − 1.
2. The total number of items in category i is defined as N_i, for i = 0, 1, …, m − 1, so that B_i = (x_{i,0}, x_{i,1}, …, x_{i,N_i−1}).
3. According to the above points, the position of particle i can be obtained, defined as the vector X_i = (B_0, B_1, …, B_{m−1}).

Suppose that only one item is selected from each category. Then m random values are sequentially generated within the interval [0, 1], and each of them is mapped onto an item of each category. Via these m random values, the position of the first particle can be obtained. Taking category B_0 as an example, the chaos initialization process is as follows:

1. Supposing there are N_0 items in B_0, the chaos search space [0, 1] is divided into N_0 subspaces.
2. The random function is used to generate a random number between 0 and 1, denoted k_{0,0}, which is assigned to the chaotic variable as the initial value for category B_0.
3. The parameter k_{0,0} is then assessed to identify which subspace it belongs to. Supposing that k_{0,0} belongs to the μ-th subspace, x_{0,μ} = 1 and the other variables of B_0 are all initialized to 0. This means that the μ-th item is selected: B_0 = (0, 0, …, 1, …, 0).

By repeating the above procedure m times, m random values k_{0,0}, k_{0,1}, …, k_{0,m−1} are generated sequentially, and each random value is mapped to a corresponding item of each category. The initializations of the other categories B_1, …, B_{m−1} are completed in the same way, and B_0, B_1, …, B_{m−1} are combined to obtain the vector X_i. Supposing that there are n particles, the n × m matrix K of initialization chaotic variables can be defined as K = [k_{i,h}], for i = 0, 1, …, n − 1 and h = 0, 1, …, m − 1. In the initialization phase, the velocity V_i and the local best position P_i of particle i are both equal to X_i, that is, V_i = X_i and P_i = X_i.
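The subspace mapping at the heart of chaos initialization can be sketched as follows: each chaotic value in [0, 1] selects one item per category by the subspace it falls into. The function name and interface are illustrative.

```python
def chaos_initialize(n_items, chaotic_values):
    """Map one chaotic value in [0, 1] to a one-hot item choice per category.

    n_items[i]        -- number of items N_i in category B_i
    chaotic_values[i] -- chaotic value k_i in [0, 1) for category i
    The unit interval is split into N_i equal subspaces; the subspace that
    the chaotic value falls into selects the item of that category.
    """
    position = []
    for n_i, k in zip(n_items, chaotic_values):
        idx = min(int(k * n_i), n_i - 1)  # index of the subspace k belongs to
        category = [0] * n_i
        category[idx] = 1  # one-hot: the idx-th item is selected
        position.append(category)
    return position
```

For example, with categories of sizes 4 and 3 and chaotic values 0.30 and 0.90, the values fall into subspaces 1 and 2 respectively, yielding the one-hot vectors (0, 1, 0, 0) and (0, 0, 1).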

Chaos perturbing
Definition 3 (Chaos perturbation) In the updating process of particles, their velocities and positions are perturbed sufficiently so that the search space is traversed as thoroughly as possible.
Definition 4 (Fitness value) A fitness value is a value obtained through a fitness function; it is a quantitative indicator used to evaluate the advantages and disadvantages of an individual.
Take particle i as an example. The parameters of chaos perturbing are as follows:

1. The local best fitness value is defined as f_i (i = 0, 1, …, n − 1).
2. The global best fitness value is defined as F.
3. The local best position is defined as P_i (i = 0, 1, …, n − 1).
4. The global best position is defined as G.
Subsequently, the positions of the n particles are obtained, and their fitness values are initialized to 0. In particular, a higher fitness value has a positive impact on the position of particles. During the updating process of f_i and P_i (i = 0, 1, …, n − 1), F and G can be obtained.
We define two velocity vectors V_i^p and V_i^g, where

V_i^p = X_i^p − X_i(t), V_i^g = X_i^g − X_i(t).

Consequently, the new velocity of particle i is updated with

V_i(t + 1) = V_i^p + V_i^g.

The subtraction between two positions, for example between X_i^p and X_i, is defined component-wise with a random factor,

(X_i^p − X_i)_j = Rand(1) · (x_{i,j}^p − x_{i,j}),

where Rand(1) is used to randomly generate either 0 or 1.
According to the chaos initialization, we know that just one item (or a few items) is selected in every category; in fact, most of the variables are equal to 0. As a consequence, the updating rule of velocity is redefined component-wise: a velocity component takes the value 1 when the corresponding best-position variable is selected, and −1 otherwise.
The addition between a position X_i(t) and a velocity V_i(t + 1) is also redefined as

X_i(t + 1) = X_i(t) ⊕ V_i(t + 1),

where

- i is the index of the particle and j is the index within position X_i(t),
- x_{i,j} is the variable with index j of the i-th current position,
- x_{i,j}^p is the variable with index j of the i-th local best position,
- x_{i,j}^g is the variable with index j of the global best position,
- C(x_{i,j}) is a perturbing function of x_{i,j}, and
- J(x_{i,j}^p) and J(x_{i,j}^g) are simple assessments based on the initialization process.
The detailed perturbing process of the function C is defined as follows:

1. First, according to the parameter j, we determine the item and the corresponding category of x_{i,j}; in particular, x_{i,j} is associated with the chaotic variable k_{i,h}.
2. The logistic map is used to iterate k_{i,h} once and generate a new chaotic variable k'_{i,h}.
3. The new value k'_{i,h} is then assessed to determine which subspace it belongs to. Suppose that k'_{i,h} belongs to the p-th subspace; then x_{i,p} = 1 and the other variables of category B_h are equal to 0. If j − Σ_{i=0}^{s} N_i = p (s ≤ m), then x_{i,j}(t + 1) = 1; otherwise, x_{i,j}(t + 1) = 0, and B_h = (0, 0, …, 1, …, 0).
In order to ensure the suitability of the final solution, the following rules are assumed (without loss of generality, consider the variable J(x_{i,j}^p)):

1. If x_{i,j}^p = 1, then x_{i,j}(t + 1) = 1, and the other variables of the corresponding category are assigned 0.
2. If x_{i,j}^p = 0 and x_{i,j} = 0, then the only action carried out is to assign 0 to x_{i,j}(t + 1).
3. If x_{i,j}^p = 0 and x_{i,j} = 1, then x_{i,j}(t + 1) = C(x_{i,j}).
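The perturbing rules above can be sketched at the category level: a selection present in the best position is copied (rule 1), while a currently selected item whose best counterpart is unselected is re-drawn chaotically through C (rules 2 and 3). This is a simplified illustrative sketch, not the paper's exact per-variable procedure, and the function names are assumptions.

```python
def perturb_category(category, best_category, k, mu=4.0):
    """Perturb one category of a particle following the stated rules.

    category      -- current one-hot selection, e.g. [0, 1, 0]
    best_category -- the corresponding local-best (or global-best) selection
    k             -- chaotic variable k_{i,h} attached to this category
    Returns (new_category, new_k).
    """
    n = len(category)
    if 1 in best_category:
        # Rule 1: the item selected in the best position is kept,
        # and all other variables of the category are zeroed.
        new = [0] * n
        new[best_category.index(1)] = 1
        return new, k
    # Rules 2/3: no guidance from the best position, so re-select
    # chaotically via one logistic-map iteration (the function C).
    k = mu * k * (1.0 - k)
    new = [0] * n
    new[min(int(k * n), n - 1)] = 1
    return new, k
```

When the best position already selects an item, the particle follows it deterministically; otherwise, the logistic map supplies an ergodic re-selection, which is what keeps the swarm from stagnating.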

Design of the fitness function
The fitness function is used to evaluate the performance of a combination scheme under certain constraints. Therefore, the properties of the fitness function directly affect the combinatorial optimization results. More specifically, most combinatorial optimization problems are multi-constraint based.
In this article, we use both personalized constraints and general constraints, where the former are used to design the fitness function and the latter are used as its constraints. We also combine the satisfaction of the personalized constraints into a score model, and the average of the scores is identified with the fitness value. The larger the fitness value, the higher the degree of satisfaction of the personalized constraints, and the better the position of the particle is considered to be. Suppose a combinatorial optimization problem with constraints A, B, and C, where A and B are personalized constraints and C is a general constraint. The fitness value is calculated as

fitness = (S(A) + S(B)) / 2, if constraint C is satisfied,

where S(A) is the score of A and S(B) is the score of B. The algorithm can be applied to different scenarios, with different constraints and fitness functions.
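This scoring scheme can be sketched directly: the fitness is the average of the personalized-constraint scores, and is only meaningful when every general constraint holds. Treating an infeasible scheme as fitness 0 is one simple convention assumed here, not prescribed by the paper.

```python
def fitness(scores, general_ok):
    """Fitness of a combination scheme.

    scores     -- scores S(A), S(B), ... of the personalized constraints
    general_ok -- True iff all general constraints (e.g., C) are satisfied
    Returns the average score, or 0.0 for an infeasible scheme.
    """
    if not general_ok or not scores:
        return 0.0
    return sum(scores) / len(scores)
```

For instance, a scheme scoring 80 and 90 on two personalized constraints while satisfying C gets fitness 85, whereas the same scheme violating C is discarded with fitness 0.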

Pseudocodes of CS-PSO
Algorithm 1 shows the pseudocode of CS-PSO. In particular, N is the number of particles, M is the total number of categories, N[] is the number of items in each category, and K is the matrix of chaotic variables.

A CS-PSO application: a healthy diet scheme
As a case study, we consider a healthy diet scheme, which includes a balance of nutrients and an appropriate variety of different types of food. Clearly, this can be viewed as a typical combinatorial optimization problem. The main nutrients include water, protein, carbohydrates, lipids, dietary fiber, vitamins, and minerals. The main categories of food include staple food, vegetables, fruits, eggs, seafood, and milk. In order to ensure the diversity of the diet and satisfy users, the healthy diet scheme should recommend a food item from each category that users prefer. CS-PSO can be utilized in this context, and it involves the following elements:

1. For m categories of food items, every category is defined as a vector B_i, for i = 0, …, m − 1.
2. The total number of food items in each category is defined as N_i, for i = 0, …, m − 1.
3. The vector of the diet particle is defined as X_i = (B_0, …, B_{m−1}) with B_i = (x_{i,0}, …, x_{i,N_i−1}), and the dimension of a particle is D = Σ_{i=0}^{m−1} N_i.

The chaos initialization and the chaos perturbing are the same as above, while the fitness function needs to be redesigned and further developed. We consider three constraints, namely calories, nutrients, and costs. Users' preferences are assumed to be a general constraint, which has to be satisfied prior to the initialization of the process. On the other hand, the constraints of calories, nutrients, and costs are used as the personalized constraints. The fitness function is defined as

fitness = (S_c + S_n + S_p) / 3,

where S_c, S_n, and S_p are the scores of the three personalized constraints, R is the recommended value corresponding to each constraint, and S is the standard value corresponding to each constraint. Figure 2a shows the score model of the constraint of calorie intake. If the calorie value of the recommended food is equal to S, then the score is 100; otherwise, the score is less than 100, and the larger the distance from S, the smaller the score. Figure 2b shows that the score model of the constraint of nutrient intake is similar to the former.
Figure 2c shows the score model of the constraint of cost. If the cost of the recommended food is less than or equal to S, then the score is 100; otherwise, the score is less than 100. The workflow of the diet recommendation system with CS-PSO includes the following steps, as shown in Fig. 3:

Step 1 The chaos initialization generates n diet particles.
Based on the user's preferences, a food item of each category is initialized as the position of diet particle.
Step 2 According to the user's basic background, including height, weight, gender, age, and activity level, the amounts of required calories and nutrients are calculated, and the constraint of cost is provided by the user. These values are used as the standard values corresponding to each constraint.
Step 3 The fitness values of all diet particles are calculated and assessed. According to the analysis of the above three constraints, the score of each constraint is calculated, and the fitness value equals the average of the three scores. The greater the fitness value, the better the diet particle. In this way, the global optimal particle position and the corresponding fitness value can be obtained.
Step 4 The fitness value of the global best particle is assessed to evaluate whether it is optimal. If so, the process ends. Otherwise, assess whether the maximum number of iterations has been reached. If it has, the process ends; otherwise, go to Step 5.
Step 5 The chaos perturbing component is used to update diet particles, and then, go to Step 3.
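The three score models and the fitness computation of Step 3 can be sketched as follows. The linear shape of the penalty is an illustrative assumption; the paper only specifies that the score is 100 at the standard value (or under budget) and decreases with the distance from it.

```python
def score_target(recommended, standard):
    """Score model of Fig. 2a/b: 100 at the standard value S, decreasing
    with the relative distance from it (linear shape assumed here)."""
    return max(0.0, 100.0 * (1.0 - abs(recommended - standard) / standard))

def score_cost(cost, budget):
    """Score model of Fig. 2c: full score at or under the budget,
    decreasing with the relative overrun otherwise."""
    if cost <= budget:
        return 100.0
    return max(0.0, 100.0 * (1.0 - (cost - budget) / budget))

def diet_fitness(calories, nutrients, cost, std_cal, std_nut, budget):
    """Step 3: fitness is the average of the three personalized scores."""
    s_c = score_target(calories, std_cal)
    s_n = score_target(nutrients, std_nut)
    s_p = score_cost(cost, budget)
    return (s_c + s_n + s_p) / 3.0
```

A scheme that exactly meets the calorie and nutrient standards while staying under budget scores 100; deviations in any constraint pull the average down proportionally.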

Prototype of the system
In this section, we introduce a personalized dietary recommendation system, called Friend, where CS-PSO is used to address the healthy diet combination optimization problem. The system provides an interface for users to input their personal physiological data, which is used to calculate their body mass index (BMI), their personal standard values of calories, and standard values of nutrients. The BMI is calculated as

BMI = w / h²,

where w is the weight of a person and h is the height of a person. Table 1 shows the BMI categories for Asian adults. As shown in Fig. 4, Friend is composed of the following classes:

- MainActivity is the main interface of Friend for users to input their personal physiological data.
- StandardInfo and DBManager select the appropriate personalized standard values of calories and nutrients from the database.
- RecommActivity is an activity that receives the personalized data from the interface of MainActivity; the recommended diet scheme is shown in this activity.
- BF_PSO is a class mainly used for the initialization of the diet particles.
- Agent is a class mainly used for updating the diet particles.
- Finally, FoodInfo and DBManager are responsible for selecting the recommended diet from the database.
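The BMI computation used by Friend is the standard formula, which can be sketched in one line (assuming weight in kilograms and height in meters, the usual units for BMI).

```python
def bmi(weight_kg, height_m):
    """Body mass index: BMI = w / h^2 (kg / m^2)."""
    return weight_kg / (height_m ** 2)
```

For example, a 70 kg person who is 1.75 m tall has a BMI of about 22.9, within the normal range for Asian adults in Table 1.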
CS-PSO is realized via RecommActivity, BF_PSO, Agent, and FoodInfo, which interact with each other to provide the diet recommendation scheme. Figure 5 shows the user interfaces of Friend. Consider breakfast as an example, as shown in Fig. 5a. Friend requires users to input their personalized information, including age, gender, activity level, weight, height, budget for food, and food preference. After providing the above information, users click the recommendation button, which generates the diet recommendation scheme shown in Fig. 5b, c.

Experiments and performance analysis

HLR-PSO is a typical PSO for generating healthy lifestyle recommendations and has good performance. Therefore, we applied HLR-PSO and then CS-PSO to Friend to compare their performance in three aspects: the diversity of the recommended food items, the number of iterations needed to find the global best value, and the ergodicity of the algorithm.

Diversity
Tables 2 and 3 show the diet recommendation schemes obtained with HLR-PSO and CS-PSO, respectively. As shown there, the schemes recommended by CS-PSO are more reasonable than those recommended by HLR-PSO. Scheme 1, recommended by HLR-PSO, includes three types of dairy products, and schemes 2 and 3 each include three types of fruit, which are all inappropriate according to the standards of a healthy diet. In contrast, with CS-PSO, scheme 2 includes three types of food, such as fruit and milk, and schemes 1 and 3 each include three types of food, including cereal, fruit, and milk. As a consequence, CS-PSO can ensure the diversity of food, while HLR-PSO cannot. The reason is that CS-PSO adopts prior knowledge of breakfast and the food preferences of users.

Iteration times
We ran HLR-PSO and CS-PSO 20 times each, recording the number of iterations needed to find the global best value. The results are shown in Tables 4 and 5.
The comparison between the two algorithms in terms of iteration counts is shown in Fig. 6, where the number of iterations of CS-PSO is far smaller than that of HLR-PSO.

Ergodicity
We chose three categories of food: staple food, fruits, and milk. The index of staple food ranges from 0 to 32, and the indexes of fruits and milk each range from 0 to 30. HLR-PSO and CS-PSO were each run 20 times, and we recorded the index of each category. The experimental results are shown in Tables 6 and 7.
As shown in Fig. 7, the schemes traversed by the HLR-PSO algorithm fall into six types: schemes (15, 28, 29) and (21, 28, 30) appear once; scheme (22, 28, 30) appears twice; scheme (30, 30, 25) appears 8 times; scheme (31, 20, 30) appears 5 times; and scheme (32, 28, 29) appears 3 times. These schemes mainly concentrate on the latter three types. From the analysis of Fig. 7 and Table 7, the schemes traversed by the CS-PSO algorithm fall into 16 types: schemes (2, 29, 9), (4, 26, 6), (7, 4, 27), and (24, 12, 30) each appear twice; all other schemes appear only once. The high-frequency schemes of Table 6 appear only once in Table 7, in the 10th, 13th, and 15th values, respectively. We can conclude that the traversal results of HLR-PSO are relatively concentrated and relatively fewer than those of CS-PSO. To sum up, CS-PSO has better ergodicity than HLR-PSO.

Conclusion
The combinatorial optimization problem is a type of NP-hard problem. Traditional combinatorial optimization algorithms cannot guarantee the diversity of the final scheme, solve multi-objective optimization problems effectively, or achieve satisfactory search efficiency. In order to address such problems and further improve the performance of PSO, we have introduced a novel approach to solving combinatorial optimization problems, namely CS-PSO. Furthermore, we have discussed its use as part of the diet recommendation system Friend. The experimental results show that CS-PSO has better diversity, ergodicity, and efficiency than HLR-PSO. In addition, CS-PSO can be used not only in diet recommendation, but also in product design, exercise programming, travel planning, etc. However, CS-PSO only considers the combination of the overall scheme, without considering the logical structure of the combination.
In future research, we aim to integrate an automated construction mechanism for the logical structure with combinatorial optimization problems. This approach is expected to further enhance the performance and accuracy of the method discussed in this article, as already supported by initial evaluations.

Table 1
BMI for Asian adults

Table 2
Schemes of diet recommendation with HLR-PSO

Table 4
Iteration times of HLR-PSO

Table 5
Iteration times of CS-PSO

Table 6
Index of each category with HLR-PSO