CSPSO: chaotic particle swarm optimization algorithm for solving combinatorial optimization problems
Abstract
Combinatorial optimization problems are typically NP-hard, due to their intrinsic complexity. In this paper, we propose a novel chaotic particle swarm optimization algorithm (CSPSO), which combines the chaos search method with the particle swarm optimization algorithm (PSO) for solving combinatorial optimization problems. In particular, in the initialization phase, the a priori knowledge of the combinatorial optimization problem is used to optimize the initial particles. According to the properties of the combinatorial optimization problem, suitable classification algorithms are implemented to group similar items into categories, thus reducing the number of combinations. This enables a more efficient enumeration of all combination schemes and optimizes the overall approach. On the other hand, in the chaos perturbing phase, a brand-new set of rules is presented to perturb the velocities and positions of particles to achieve the ideal global search capability and adaptability, effectively avoiding the premature convergence problem frequently found in the traditional PSO algorithm. In the above two stages, we control the number of selected items in each category to ensure the diversity of the final combination scheme. The fitness function of CSPSO introduces the concept of personalized constraints and general constraints to provide a personalized interface, which is used to solve a personalized combinatorial optimization problem. As part of our evaluation, we define a personalized dietary recommendation system, called Friend, where CSPSO is applied to address a healthy diet combinatorial optimization problem. Based on Friend, we implemented a series of experiments to test the performance of CSPSO. The experimental results show that, compared with the typical HLRPSO, CSPSO recommends dietary schemes more efficiently, obtains the global optimum with fewer iterations, and has better global ergodicity.
Keywords
Combinatorial optimization · Particle swarm optimization · Chaos search · Personalized recommendation
1 Introduction
Particle swarm optimization (PSO) is a heuristic optimization technique, presented by Kennedy and Eberhart (1995), which mimics the swarm behavior of bird flocks performing their tasks and discovers an optimal solution based on an objective function (Kennedy and Eberhart 1995; Eberhart and Kennedy 1995; Chang et al. 2014). With fewer parameters, the PSO algorithm can achieve faster convergence while being simpler and easier to implement (Xu et al. 2012). PSO has already been applied to many fields, such as electric power systems, job-shop scheduling, wireless sensor networks, route planning, and robotics (Lei 2014; Kumari and Jha 2014; Yao et al. 2012; Liao et al. 2012; Lee and Kim 2013). However, the performance of PSO still has room for improvement. For example, due to its fast convergence, PSO easily falls into local optima when solving multimodal optimization problems, potentially leading to premature convergence of the particle swarm. In the initialization and updating phases, the stochastic strategy of PSO generates a group of particles and finds the optimal solution through multiple iterations; during these iterations, the positions and velocities of particles are updated randomly, resulting in low computational efficiency. There are two main ways to improve the performance of PSO: the first adjusts the parameters and procedure of PSO, such as dynamically adjusting the search step length and optimizing the update strategy of the particles (Chi et al. 2011; Zhao et al. 2014); the second combines PSO with other intelligent optimization algorithms, such as the genetic algorithm (GA) and simulated annealing (Sharma and Singhal 2015; Nancharaiah and Mohan 2013). Most related research on improving PSO (Guo et al. 2014; Zhu et al. 2014; Sorkunlu et al. 2013; Shi and Eberhart 1998; Elbedwehy et al. 2012) focuses on continuous optimization problems, while combinatorial optimization problems (e.g., integer programming and the 0/1 knapsack problem) have not attracted enough attention, and the existing results usually suit only certain scenarios and are not generally applicable.
In order to solve combinatorial optimization problems more efficiently, we propose a novel chaotic particle swarm optimization algorithm (CSPSO). The main contributions of this paper are as follows. First of all, the chaos initialization and the chaos perturbing of the chaos search method are introduced into PSO in place of the random initialization and the random perturbing. The ergodicity, regularity, and randomness of the chaos search method help address the PSO issues of local optima and poor search efficiency. In the initialization phase, the a priori knowledge of the combinatorial optimization problem is used to optimize the initial particles; furthermore, the quality of the particles and the search efficiency of the algorithm are improved. In the chaos perturbing phase, a brand-new set of perturbing rules is presented to perturb the velocities and positions of particles sufficiently to realize the ideal global search capability and adaptability, effectively solving the premature convergence problem of particles. Subsequently, we design the fitness function of CSPSO, which utilizes the concept of personalized constraints and general constraints to produce a personalized interface, used to solve a personalized combinatorial optimization problem. Finally, we built a personalized dietary recommendation system, Friend, which is based on CSPSO to address a healthy diet combinatorial optimization problem. Friend is able to recommend more reasonable dietary schemes, which shows that CSPSO has enhanced performance compared to other improved PSO algorithms, such as the typical PSO for generating healthy lifestyle recommendations (HLRPSO) (Pop et al. 2013).
The rest of the paper is organized as follows: Sect. 2 presents the related work, Sect. 3 discusses CSPSO in detail, and Sect. 4 describes the prototype personalized dietary recommendation system, Friend, built on CSPSO. Experiments and performance analysis are presented in Sect. 5, and finally, Sect. 6 concludes the paper by summarizing the main contributions and commenting on future directions of our work.
2 Related work
There is much research focusing on improving the performance of the original PSO. In Wen et al. (2013), the authors propose a modified particle swarm optimization algorithm based on sub-particle circular orbit and zero-value inertial weight (MDPSO). MDPSO utilizes a trigonometric function based on nonlinear dynamic learning factors and a prediction method of population premature convergence, which achieves a better balance between the local exploring ability and the global converging ability of particles (Wen et al. 2013). However, MDPSO is mainly suitable for solving the composition optimization problem of Web services and is therefore not universal. In Gao et al. (2005), the authors propose a general particle swarm optimization model (GPSO), which can be naturally extended to solve discrete and combinatorial optimization problems. GPSO uses a genetic updating operator, further improving the quality of solutions and the stability of convergence, and significantly saving computational cost (Gao et al. 2005). However, the genetic updating operator brings randomness into GPSO, which cannot guarantee the diversity of the final solution. In Guo et al. (2011), the authors propose a hybrid particle swarm optimization algorithm with the Fiduccia–Mattheyses algorithm (FM), inspired by GA, utilizing the position-regeneration mechanism of discrete particle swarm optimization (DPSO). In particular, it updates the position of a particle through genetic operations, defined as two-point crossover and random two-point exchange mutation operators, to avoid generating infeasible solutions. To improve the ability of local exploration, FM is applied to update the position. A mutation strategy is also built into the proposed algorithm to achieve better diversity and break away from local optima (Guo et al. 2011). However, similar to Wen et al. (2013), the algorithm is not universal and cannot solve multi-objective optimization problems.
In Ibrahim et al. (2012), the authors propose a novel multi-state particle swarm optimization algorithm (MSPSO) to solve discrete combinatorial optimization problems, which differs from the binary particle swarm optimization algorithm (BinPSO). In MSPSO, each dimension variable of each particle can attain various states, and it has been applied to two benchmark instances of the traveling salesman problem (TSP). The experimental results show that MSPSO outperforms BinPSO in solving the discrete combinatorial optimization problem (Ibrahim et al. 2012). However, MSPSO utilizes the concept of multiple states, so its storage-space and computation-time requirements grow exponentially; its efficiency is therefore affected when applied to high-dimensional combinatorial optimization problems. In Gao and Xiei (2004), the authors apply the chaos search method to PSO, using its ergodicity, regularity, and randomness to search around the current global best particle in a chaotic way, replacing a stochastically selected individual from the current population. The performance of PSO is improved by the chaos search method, which motivates our work: the evolution process is quickened, and the ability to seek the global optimum, the convergence speed, and the accuracy are all improved (Gao and Xiei 2004). In Wang and Wu (2011) and Yang et al. (2015), improved PSO algorithms with the chaos search method are presented and applied to the optimization of logistics distribution routes and the vehicle routing problem with time windows, respectively. However, these results all simply adopt the chaos search method, without further improving the mechanisms of chaos initialization and chaos perturbing or providing a personalized interface. Therefore, the diversity of the final solution cannot be guaranteed, and the search efficiency is still unsatisfactory (Wang and Wu 2011; Yang et al. 2015).
In Sfrent and Pop (2015), the authors introduce a simulation infrastructure for building and analyzing different types of scenarios, which allows the extraction of scheduling metrics for three different algorithms, namely an asymptotically optimal one, FCFS, and a traditional GA-based algorithm. These are combined into a single hybrid algorithm, addressing asymptotic scheduling for a variety of tasks related to big data processing platforms. A distributed and efficient method for optimizing task assignment is introduced in Iordache et al. (2006), which utilizes a combination of genetic algorithms and lookup services. In Bessis et al. (2012), an algorithm based on a variety of e-infrastructure nodes exchanging simple messages with linking nodes is discussed, with the aim of improving the energy efficiency of the network.
3 Chaotic particle swarm optimization algorithm
3.1 Basic idea

The existing improved PSO algorithms discussed above suffer from three main drawbacks:
 1.
Most PSO algorithms are suitable only for one particular scenario; they are not universal.
 2.
Most PSO algorithms are not multi-objective and do not provide a personalized interface, so they cannot effectively solve discrete, multi-objective, and personalized combinatorial optimization problems.
 3.
As the particle dimension increases, the requirements of storage space and computation time grow exponentially, which lowers the efficiency when solving high-dimensional combinatorial optimization problems.
3.2 Chaos search method
Definition 1
(Chaos search) Chaos search is the random movement with pseudorandomness, ergodicity, and regularity, which is determined by a deterministic equation (Lorenz 2005).
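The chaos search in this paper is driven by the logistic map (Sects. 3.4 and 3.5). A minimal sketch of how it generates a chaotic sequence; the parameter value \(\mu = 4\) (the fully chaotic setting) and the function names are our illustrative assumptions:

```python
def logistic_map(k, mu=4.0):
    """One iteration of the logistic map: k(t+1) = mu * k(t) * (1 - k(t))."""
    return mu * k * (1.0 - k)

def chaotic_sequence(seed, length, mu=4.0):
    """Generate a chaotic sequence from an initial value seed in (0, 1)."""
    seq = [seed]
    for _ in range(length - 1):
        seq.append(logistic_map(seq[-1], mu))
    return seq
```

With \(\mu = 4\) the iterates stay in (0, 1) and visit the interval ergodically, which is the property the chaos search exploits; degenerate seeds (e.g., 0, 0.5, 1) collapse the orbit and should be avoided.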
3.3 Model of combinatorial optimization problem
Definition 2
(Combinatorial optimization) Combinatorial optimization refers to the process of optimizing an object via the combination of a finite set of components.

i is the index of category,

j is the index of item,

m is the total number of categories,

\(n_i\) is the total number of items in the ith category,

\(w_{i,j}\) is the weight of the jth item in the ith category,

\(c_{i,j}\) is the cost of the jth item in ith category.

O is the target cost, meaning that the manufacturing cost shall not exceed O while the quality of the product is optimized; \(x_{i,j} \in \{0, 1\}\), \(\forall i, j\), indicates whether the jth item in the ith category is selected.
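The objective model itself did not survive extraction; given the symbols above, a plausible reconstruction (ours, not a verbatim quote of the paper's formulation) is the 0/1 program

```latex
\max \sum_{i=0}^{m-1}\sum_{j=0}^{n_i-1} w_{i,j}\,x_{i,j}
\quad \text{s.t.} \quad
\sum_{i=0}^{m-1}\sum_{j=0}^{n_i-1} c_{i,j}\,x_{i,j} \le O,
\qquad
\sum_{j=0}^{n_i-1} x_{i,j} = 1 \;\; (\forall i),
\qquad
x_{i,j} \in \{0,1\},
```

where the per-category equality constraint reflects that exactly one item is selected from each category, consistent with the initialization in Sect. 3.4.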
3.4 Chaos initialization
Chaos initialization uses a chaotic variable of the logistic map, whose initial value is chosen at random, to generate the initial positions of the particles.
 1.
All items are divided into m categories; category i is defined as a vector \(B_{i}\), for \(i=0, 1, \ldots , m-1\).
 2.
The total number of items in category i is defined as \(N_{i}\), for \(i=0, 1, \ldots , m-1\), which implies that \( B_{i} = (x_{i,0}, x_{i,1}, \ldots , x_{i,N_{i}-1})\).
 3.
According to the above points, the position of particle i can be obtained, which is defined as a vector \(X_{i} = (B_{0}, B_{1}, \ldots , B_{m-1})\). The dimension of particle i is \(\displaystyle {\sum \nolimits _{i=0}^{m-1} N_{i}}\).
 1.
Suppose there are \(N_0\) items in \(B_0\); the chaos search space [0, 1] is then divided into \(N_0\) subspaces.
 2.
The random function is used to generate a random number between 0 and 1, described as \(k_{0,0}\), which is assigned to the chaotic variable as the initial value of the \(B_{0}\) category.
 3.
The parameter \(k_{0,0}\) is subsequently assessed to identify which subspace it belongs to. Supposing that \(k_{0,0}\) belongs to the \(\mu \)th subspace, \(x_{0, \mu } =1\), and the other variables of \(B_{0}\) are all initialized to 0. This means that the \(\mu \)th item is selected in \(B_{0} = (0, 0, \ldots , 1, \ldots , 0)\).
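The three initialization steps above can be sketched as follows; this is a minimal sketch, and the function name and the use of Python's random module to draw the seed value are our assumptions:

```python
import random

def chaos_initialize(category_sizes):
    """Chaos-initialize one particle so that exactly one item is
    selected per category, following steps 1-3 above.

    category_sizes[i] is N_i, the number of items in category B_i.
    Returns the 0/1 position vector and the per-category chaotic
    variables (kept for the later chaos perturbing)."""
    position, chaos_vars = [], []
    for n_i in category_sizes:
        k = random.random()               # random seed value in (0, 1)
        chaos_vars.append(k)
        # [0, 1] is divided into n_i equal subspaces; k falls into one
        mu_idx = min(int(k * n_i), n_i - 1)
        category = [0] * n_i
        category[mu_idx] = 1              # the mu-th item is selected
        position.extend(category)
    return position, chaos_vars
```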
3.5 Chaos perturbing
Definition 3
(Chaos perturbation) In the updating process of particles, their velocities and positions are perturbed sufficiently, so that the search space is traversed as thoroughly as possible.
Definition 4
(Fitness value) The fitness value is the value obtained from a fitness function; it is a quantitative indicator used to evaluate the quality of an individual.
 1.
The local best fitness value is defined as \(f_{i}\,(i = 0,1, \ldots , n-1)\).
 2.
The global best fitness value is defined as F.
 3.
The local best position is defined as \(P_{i}\,(i = 0,1, \ldots , n-1)\).
 4.
The global best position is defined as G.

i is the index of the particle,

j is the index of position \(X_{i}(t)\),

\(x_{i,j}\) is the variable with index j from the ith current position,

\(x_{i,j}^\mathrm{p}\) is the variable with index j from the ith local best position,

\(x_{i,j}^\mathrm{g}\) is the variable with index j from the global best position,

\(C(x_{i,j})\) is a perturbing function of \(x_{i,j}\) with j from the position of particle i, and finally,

\(J(x_{i,j}^\mathrm{p})\) and \(J(x_{i,j}^\mathrm{g})\) are simple assessments based on the process initialization.
 1.
First, according to the parameter j, we determine the category that \(x_{i,j}\) belongs to: if \(\displaystyle {j \ge \sum _{i=0}^{h-1}N_{i} \quad \text{ and } \quad j < \sum _{i=0}^{h}N_{i}}\), then \(x_{i,j}\) belongs to category \(B_{h}\).
 2.
The logistic map is used to iterate \(k_{i,h}\) once, generating a new value of the chaotic variable \(k_{i,h}\).
 3.
Subsequently, \(k_{i,h}\) is assessed to determine which subspace it belongs to. Suppose that \(k_{i,h}\) belongs to the pth subspace; then \(x_{i,p}=1\) and the other variables of the category \(B_{h}\) are set to 0, so that \(B_{h} = (0,0, \ldots , 1, \ldots , 0)\). If \(\displaystyle {j - \sum \nolimits _{i=0}^{s} N_{i} = p \ (s \le m)} \), then \(x_{i,j}(t+1) =1\); otherwise, \(x_{i,j}(t+1) =0\).
 1.
If \(x_{i,j}^\mathrm{p} = 1\), then \(x_{i,j}(t+1) =1\) and other variables of the corresponding category are assigned to 0.
 2.
If \(x_{i,j}^\mathrm{p} = 0\), and \(x_{i,j} =0\), then the only action carried out is to assign 0 to \(x_{i,j}(t+1)\).
 3.
If \(x_{i,j}^\mathrm{p} = 0\), and \(x_{i,j} =1\), then \(x_{i,j}(t+1) = C(x_{i,j})\).
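A minimal sketch of the chaos perturbation \(C(x_{i,j})\) applied to one category; the function and variable names are our assumptions, and \(\mu = 4\) is the usual fully chaotic logistic-map parameter, which the paper does not state explicitly:

```python
def perturb_category(n_h, k_h, mu=4.0):
    """Chaos-perturb category B_h: iterate its chaotic variable once
    with the logistic map, then re-select the item whose subspace the
    new value falls into (a sketch of the perturbing steps above)."""
    k_new = mu * k_h * (1.0 - k_h)        # one logistic-map iteration
    p = min(int(k_new * n_h), n_h - 1)    # subspace index p of k_new
    category = [0] * n_h                  # clear all variables of B_h
    category[p] = 1                       # select the p-th item
    return category, k_new
```

Rules 1–3 above then decide, per variable, whether the local best value is kept or this perturbation is applied.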
3.6 Design of the fitness function
The fitness function is used to evaluate the performance of a combination scheme under certain constraints. Therefore, the properties of the fitness function directly affect the combinatorial optimization results. More specifically, most combinatorial optimization problems are based on multiple constraints.
3.7 Pseudocodes of CSPSO
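The pseudocode itself did not survive extraction, so the loop below is our Python sketch of CSPSO assembled from Sects. 3.4 and 3.5; all parameter defaults and helper names are assumptions, and the per-variable local-best rules of Sect. 3.5 are simplified here to tracking the global best:

```python
import random

def cspso(category_sizes, fitness, n_particles=30, max_iter=100, mu=4.0):
    """CSPSO sketch: chaos initialization, fitness evaluation, and
    chaos perturbing. fitness(pos) -> float, larger is better;
    exactly one item is kept per category."""
    def encode(ks):
        # map the per-category chaotic variables to a 0/1 position
        pos = []
        for k, n_i in zip(ks, category_sizes):
            cat = [0] * n_i
            cat[min(int(k * n_i), n_i - 1)] = 1
            pos.extend(cat)
        return pos

    # chaos initialization: one chaotic variable per category per particle
    chaos = [[random.random() for _ in category_sizes]
             for _ in range(n_particles)]
    best_pos, best_fit = None, float("-inf")
    for _ in range(max_iter):
        for i, ks in enumerate(chaos):
            pos = encode(ks)
            f = fitness(pos)
            if f > best_fit:                        # track the global best
                best_fit, best_pos = f, pos
            # chaos perturbing: iterate each chaotic variable once
            chaos[i] = [mu * k * (1.0 - k) for k in ks]
    return best_pos, best_fit
```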
4 A CSPSO application: a healthy diet scheme
As a case study, we consider a healthy diet scheme, which includes a balance of nutrients and an appropriate variety of different types of food. Clearly, this can be viewed as a typical combinatorial optimization problem. The main nutrients include water, protein, carbohydrate, lipids, dietary fiber, vitamins, and minerals. The main categories of food include staple food, vegetables, fruits, eggs, seafood, and milk. In order to ensure the diversity of the diet and satisfy users, the healthy diet scheme should recommend one food item from each category, taking the user's preferences into account.
 1.
For the m categories of food items, each category is defined as a vector \(B_{i}\), for \(i = 0,\ldots , m-1\).
 2.
The total number of food items in each category is defined as \(N_{i}\), for \(i = 0,\ldots , m-1\).
 3.
The vector of the diet particle is defined as \(X_{i} = (B_{0}, \ldots , B_{m-1})\) with \(B_{i} = (x_{i,0}, \ldots , x_{i, N_{i}-1})\), and the dimension of a particle is \(\displaystyle {\sum \nolimits _{i=0}^{m-1}N_i}\).
 Step 1

The chaos initialization generates n diet particles. Based on the user’s preferences, a food item of each category is initialized as the position of diet particle.
 Step 2

According to the user's basic background, including height, weight, gender, age, and activity level, the amounts of required calories and nutrients are calculated, and the cost constraint can be provided by the user. These values are used as the standard values for the corresponding constraints.
 Step 3

The fitness values of all diet particles are calculated and assessed. According to the analysis of the above three constraints, the score of each constraint is calculated, and the fitness value equals the average of the three scores. The greater the fitness value, the better the diet particle. The global best particle position and the corresponding fitness value can thus be obtained.
 Step 4

The fitness value of the global best particle is assessed to determine whether it is optimal. If so, the process ends. Otherwise, check whether the maximum number of iterations has been reached; if it has, the process ends. Otherwise, go to Step 5.
 Step 5

The chaos perturbing component is used to update diet particles, and then, go to Step 3.
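Step 3 above can be sketched as follows; the paper specifies only that the fitness is the average of the three constraint scores, so the particular closeness-scoring formula below is our assumption:

```python
def constraint_score(actual, standard):
    """Score in [0, 1]: 1.0 at an exact match with the standard value,
    decreasing linearly with the relative deviation (our assumption)."""
    return max(0.0, 1.0 - abs(actual - standard) / standard)

def diet_fitness(calories, nutrients, cost, std_calories, std_nutrients, max_cost):
    """Fitness of a diet particle = average of the three constraint
    scores (calories, nutrition, cost); larger is better, per Step 3."""
    s_cal = constraint_score(calories, std_calories)
    s_nut = constraint_score(nutrients, std_nutrients)
    # the cost constraint is an upper bound, so any cost within budget scores 1
    s_cost = 1.0 if cost <= max_cost else constraint_score(cost, max_cost)
    return (s_cal + s_nut + s_cost) / 3.0
```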
4.1 Prototype of the system
BMI for Asian adults
Classification  Standard  Related disease risk 

Thinness  \({<}18.5\)  Risk of developing problems such as nutritional deficiency and osteoporosis 
Regular  18.5–22.9  Low risk (healthy range) 
Overweight (at risk)  23–24.9  Moderate risk of developing heart disease, high blood pressure, stroke, diabetes 
Obesity—class I  25–29.9  High risk of developing heart disease, high blood pressure, stroke, diabetes 
Obesity—class II  \({\ge } 30\)  High risk of developing heart disease, high blood pressure, stroke, diabetes 
Obesity—class III  \({\ge } 40\)  High risk of developing heart disease, high blood pressure, stroke, diabetes 
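The BMI that drives this table is computed from the user's weight and height; a small sketch follows, where the function names are ours and the cut-offs are taken from the table above:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def bmi_category_asian(b):
    """Classify a BMI value using the Asian-adult cut-offs above."""
    if b < 18.5:
        return "Thinness"
    if b < 23.0:
        return "Regular"
    if b < 25.0:
        return "Overweight"
    if b < 30.0:
        return "Obesity - class I"
    if b < 40.0:
        return "Obesity - class II"
    return "Obesity - class III"
```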

MainActivity is the main interface of Friend for users to input their personal physiological data,

StandardInfo and DBManager select the appropriate personalized standard values of calories and nutrients from the database,

RecommActivity is an activity that receives the personalized data from the MainActivity interface; the recommended diet scheme is shown in this activity,

BF_PSO is a class, which is mainly used for the initialization of the diet particles,

Agent is a class, which is mainly used for updating the diet particles,

Finally, FoodInfo and DBManager are responsible for selecting the recommended diet from the database.
5 Experiments and performance analysis
HLRPSO is a typical PSO for generating healthy lifestyle recommendations and performs well. We therefore applied both HLRPSO and CSPSO to Friend and compared their performance in three aspects: the diversity of the recommended food items, the number of iterations required to find the global best value, and the ergodicity of the algorithm.
5.1 Diversity
Schemes of diet recommendation with HLRPSO
Scheme  Food items  Amount of food  Calories (kcal) 

1  Watermelon  250.0 g  35 
Yoghurt (brand A)  100.0 g  63  
Pure milk (brand A)  460 ml  258  
Yoghurt (brand B)  200.0 g  184  
2  Sweet potato  300.0 g  267 
Cucumber  130.0 g  18  
Orange  182.0 g  61  
Grapefruit  139.0 g  44  
Strawberry  500.0 g  150  
3  Dumpling  100 g  250 
Watermelon  250.0 g  35  
Orange  200.0 g  70  
Grape  500.0 g  185 
Schemes of diet recommendation with CSPSO
Scheme  Food items  Amount of food  Calories (kcal) 

1  Noodle  100 g  284 
Peach  200.0 g  83  
Milk (brand B)  250.0 ml  173  
2  Dumpling  100 g  253 
Cherry  500.0 g  200  
Yoghurt (brand B)  100.0 ml  87  
3  Chinese style baked roll  80 g  234 
Grape  500.0 g  185  
Milk (brand C)  200.0 ml  173 
As shown in Tables 2 and 3, the diet schemes recommended with CSPSO are more reasonable than those recommended with HLRPSO. Scheme 1 recommended with HLRPSO includes three types of dairy products, and schemes 2 and 3 each include three types of fruit, none of which is appropriate according to the standards of a healthy diet. In contrast, each scheme recommended with CSPSO includes three different types of food, covering cereal, fruit, and milk. As a consequence, CSPSO can ensure the diversity of food, while HLRPSO cannot. The reason is that CSPSO adopts prior knowledge about breakfast composition and users' food preferences.
5.2 Iteration times
Iteration times of HLRPSO
HLRPSO  Times  Times  Times  Times  Times  Average 

1–5  34  17  12  13  26  15.1 
6–10  16  12  23  9  10  
11–15  22  12  10  16  10  
16–20  11  10  18  9  12 
Iteration times of CSPSO
CSPSO  Times  Times  Times  Times  Times  Average 

1–5  3  4  6  4  4  4.1 
6–10  4  5  6  2  4  
11–15  3  2  4  5  4  
16–20  4  4  4  5  5 
5.3 Ergodicity
Index of each category with HLRPSO
HLRPSO  1  2  3  4  5  6  7  8  9  10 

Staple  31  30  31  30  30  30  30  30  21  15 
Fruits  20  30  20  30  30  30  30  30  28  28 
Milk  30  25  30  25  25  25  25  25  30  29 
11  12  13  14  15  16  17  18  19  20  

Staple  32  31  30  32  31  22  22  30  31  32 
Fruits  28  20  30  28  20  28  28  30  20  28 
Milk  29  30  25  29  30  30  30  25  30  29 
Index of each category with CSPSO
CSPSO  1  2  3  4  5  6  7  8  9  10 

Staple  4  2  23  1  2  7  4  14  14  30 
Fruits  26  29  15  29  29  4  26  19  21  30 
Milk  6  9  25  29  9  27  10  24  26  25 
11  12  13  14  15  16  17  18  19  20  

Staple  16  25  31  5  32  24  7  4  2  24 
Fruits  7  28  20  1  28  12  4  26  7  12 
Milk  27  19  30  16  29  30  27  6  30  30 
6 Conclusion
Combinatorial optimization problems are NP-hard. Traditional combinatorial optimization algorithms cannot guarantee the diversity of the final scheme, solve multi-objective optimization problems effectively, or achieve satisfactory search efficiency. In order to address such problems and further improve the performance of PSO, we have introduced a novel approach to solving combinatorial optimization problems, namely CSPSO. Furthermore, we have discussed its use as part of the diet recommendation system Friend. The experimental results show that CSPSO has better diversity, ergodicity, and efficiency than HLRPSO. In addition, CSPSO can be used not only in diet recommendation, but also in product design, exercise programming, travel planning, etc. However, CSPSO only considers the combination of the overall scheme, without considering the logical structure of the combination.
In future research, we aim to integrate an automated construction mechanism for the logical structure with combinatorial optimization problems. This approach is expected to further enhance the performance and accuracy of the method discussed in this article, and is already supported by initial evaluations.
Acknowledgments
This work was jointly sponsored by the National Natural Science Foundation of China under Grants 61472192 and 61472193.
Compliance with ethical standards
Conflict of interest
The authors declare that none of them has any conflict of interest.
Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors.
References
 Bessis N, Sotiriadis S, Pop F, Cristea V (2012) Optimizing the energy efficiency of message exchanging for service distribution in interoperable infrastructures. In: 4th international conference on intelligent networking and collaborative systems (INCoS), pp 105–112
 Chang YC, Hsieh CH, Xu YX et al (2014) Introducing the concept of velocity into bare bones particle swarm optimization. Presented at the 2014 international conference of information science, electronics and electrical engineering, Sapporo, Japan, 26–28 April
 Chen SY, Ren L, Xin FQ (2012) Reactive power optimization based on particle swarm optimization and simulated annealing cooperative algorithm. In: Proceedings of the 31st Chinese control conference, Hefei, China, 25–27 July
 Chi YH, Sun FC, Wang WJ et al (2011) An improved particle swarm optimization algorithm with search space zoomed factor and attractor. Chin J Comput 34(1):115–130
 Dong N, Li HJ, Liu XD (2013) Chaotic particle swarm optimization algorithm parametric identification of Bouc–Wen hysteresis model for piezoelectric ceramic actuator. In: Proceedings of the 25th Chinese control and decision conference, Guiyang, China, 25–27 May
 Eberhart R, Kennedy J (1995) A new optimizer using particle swarm theory. In: Proceedings of ISOMMHS, Nagoya, Japan, pp 39–43
 Elbedwehy MN, Zawbaa HM, Ghali H et al (2012) Detection of heart disease using binary particle swarm optimization. In: Proceedings of the 2012 conference on computer science and information systems, Wroclaw, Poland, 9–12 Sept
 Gao Y, Xiei SL (2004) Chaos particle swarm optimization algorithm. J Comput Sci 31(8):13–15
 Gao HB, Zhou C, Gao L (2005) General particle swarm optimization model. Chin J Comput 28(12):1980–1987
 Guo WZ, Chen GL, Peng SJ (2011) Hybrid particle swarm optimization algorithm for VLSI circuit partitioning. J Softw 22(5):833–842
 Guo T, Lan JL, Li YF (2014) Adaptive fractional-order Darwinian particle swarm optimization algorithm. J Commun 35(4):130–140
 Ibrahim I, Yusof ZM, Nawawi SW et al (2012) A novel multi-state particle swarm optimization for discrete combinatorial optimization problems. In: Proceedings of the 4th international conference on computational intelligence, modelling and simulation, Kuantan, Malaysia, 25–27 Sept
 Iordache GV, Boboila MS, Pop F, Stratan C, Cristea V (2006) A decentralized strategy for genetic scheduling in heterogeneous environments. In: On the move to meaningful Internet systems 2006: CoopIS, DOA, GADA, and ODBASE, Montpellier, France, October 29–November 3, 2006, Proceedings, Part II. Springer, Berlin, Heidelberg, pp 1234–1251
 Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of ICNN, Perth, Australia, pp 1942–1948
 Kumari N, Jha AN (2014) Frequency control of multi-area power system network using PSO based LQR. In: Proceedings of the 6th international conference Power India, Delhi, India, 5–7 Dec
 Lee KB, Kim JH (2013) Multi-objective particle swarm optimization with preference-based sort and its application to path following footstep optimization for humanoid robots. IEEE Trans Evolut Comput 17(6):755–766
 Lei KY (2014) A highly efficient particle swarm optimizer for super high-dimensional complex functions optimization. In: Proceedings of the 5th international conference on software engineering and service science, Beijing, China, 27–29 June
 Liao YF, Yau DH, Chen CL (2012) Evolutionary algorithm to traveling salesman problems. Comput Math Appl 64(5):788–797. Available: http://www.sciencedirect.com/science/article/pii/S089812211101073X
 Lorenz EN (2005) Designing chaotic models. J Atmos Sci 62(5):1574–1587
 Nancharaiah B, Mohan BC (2013) MANET link performance using ant colony optimization and particle swarm optimization algorithms. In: Proceedings of the 2013 international conference of communications and signal processing, Melmaruvathur, Tamil Nadu, 3–5 April
 Pop CB, Chifu VR, Salomie I et al (2013) Particle swarm optimization-based method for generating healthy lifestyle recommendations. In: Proceedings of the international conference of intelligent computer communication and processing, Cluj-Napoca, Romania, 5–7 Sept
 Sfrent A, Pop F (2015) Asymptotic scheduling for many task computing in big data platforms. Inf Sci 319:71–91
 Sharma J, Singhal RS (2015) Comparative research on genetic algorithm, particle swarm optimization and hybrid GA-PSO. In: Proceedings of the 2015 conference of computing for sustainable global development, New Delhi, India, 11–13 March
 Shi YH, Eberhart RA (1998) Modified particle swarm optimizer. In: Proceedings of the IICOEC, Anchorage, AK, pp 69–73
 Sorkunlu N, Sahin U, Sahin F (2013) Block matching with particle swarm optimization for motion estimation. In: Proceedings of the 2013 international conference on systems, man, and cybernetics, Manchester, England, 13–16 Oct
 Wang TJ, Wu YC (2011) Study on optimization of logistics distribution route based on chaotic PSO. Comput Eng Appl 47(29):218–221
 Wen T, Sheng GJ, Guo Q et al (2013) Web service composition based on modified particle swarm optimization. Chin J Comput 36(5):1031–1046
 Xu XB, Zhang KG, Li D et al (2012) New chaos-particle swarm optimization algorithm. J Commun 33(1):24–30
 Yang Q, Chen Q, Li ZZ (2015) A chaos particle swarm optimization algorithm of vehicle routing problem with time windows. Comput Technol Dev 25(8):119–122
 Yao JJ, Li J, Wang LM et al (2012) Wireless sensor network localization based on improved particle swarm optimization. In: Proceedings of the 2012 international conference of computing, measurement, control and sensor network, Taiyuan, China, 7–9 July
 Zhao XC, Liu GL, Liu HQ (2014) Particle swarm optimization algorithm based on nonuniform mutation and multiple states perturbation. Chin J Comput 37(9):2058–2070
 Zhu XH, Li YG, Li N et al (2014) Improved PSO algorithm based on swarm prematurely degree and nonlinear periodic oscillating strategy. J Commun 35(2):182–189
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.