Attractive and Repulsive Fully Informed Particle Swarm Optimization based on the modified Fitness Model

Abstract

A novel Attractive and Repulsive Fully Informed Particle Swarm Optimization based on the modified Fitness Model (ARFIPSOMF) is presented. In ARFIPSOMF, a modified fitness model is used as a self-organizing mechanism for constructing the population structure. The population structure is generated gradually as the construction and optimization processes progress asynchronously. An attractive and repulsive interaction mechanism is also introduced: the cognitive and social effects on each particle are distributed according to its 'contextual fitness' value \(F\). Two kinds of experiments are conducted. Results focusing on optimization performance show that the proposed algorithm maintains stronger population diversity during the convergence process, yielding good solution quality on a wide range of test functions and faster convergence. Moreover, the results concerning the topological characteristics of the population structure indicate that (1) the final population structures developed by optimizing different test functions differ, which is important for improving ARFIPSOMF performance, and (2) the final structures developed by optimizing some test functions are approximately scale-free.

References

  • Barabási AL, Albert R (1999) Emergence of scaling in random networks. Science 286(5439):509–512

  • Bianconi G, Barabási AL (2001) Competition and multiscaling in evolving networks. Europhys Lett 54(4):436–442

  • Eguiluz VM, Chialvo DR, Cecchi GA, Baliki M, Apkarian AV (2005) Scale-free brain functional networks. Phys Rev Lett 94(1)

  • El-Abd M, Kamel M (2005) Information exchange in multiple cooperating swarms. In: Proceedings of the 2005 IEEE swarm intelligence symposium (SIS 2005), Pasadena, pp 138–142

  • Fierro R, Castillo O, Valdez F, Cervantes L (2013) Design of optimal membership functions for fuzzy controllers of the water tank and inverted pendulum with PSO variants. IFSA/NAFIPS, pp 1068–1073

  • Giacobini M, Preuss M, Tomassini M (2006) Effects of scale-free and small-world topologies on binary coded self-adaptive CEA. In: Proceedings of evolutionary computation combinatorial optimization, pp 86–98

  • Janson S, Middendorf M (2005) A hierarchical particle swarm optimizer and its adaptive variant. IEEE Trans Syst Man Cybern B 35(6):1272–1282

  • Jeong H, Mason SP, Barabási AL, Oltvai ZN (2001) Lethality and centrality in protein networks. Nature 411:41–42

  • Kennedy J, Eberhart RC (1995) Particle swarm optimization. In: Proceedings of IEEE international conference on neural networks. IEEE Service Center, Piscataway, pp 1942–1948

  • Kennedy J, Mendes R (2002) Population structure and particle swarm performance. In: Proceedings of congress evolutionary computation (CEC 2002), Hawaii, pp 1671–1676

  • Kirley M, Stewart R (2007a) An analysis of the effects of population structure on scalable multiobjective optimization problems. In: Proceedings of genetic evolutionary computation conference (GECCO07), pp 845–852

  • Kirley M, Stewart R (2007b) Multiobjective optimization on complex networks. In: Proceedings of 4th international conference on evolutionary multicriterion optimization (LNCS), pp 81–95

  • Maldonado Y, Castillo O, Melin P (2013) Particle swarm optimization of interval type-2 fuzzy systems for FPGA applications. Appl Soft Comput 13(1):496–508

  • Melin P, Olivas F, Castillo O, Valdez F, Soria J, García Valdez M (2013) Optimal design of fuzzy classification systems using PSO with dynamic parameter adaptation through fuzzy logic. Expert Syst Appl 40(8):3196–3206

  • Mendes R (2004) Population topologies and their influence in particle swarm performance. Dissertation, University of Minho, Braga

  • Mendes R, Kennedy J, Neves J (2004) The fully informed particle swarm: simpler, maybe better. IEEE Trans Evol Comput 8(3):204–210

  • Mo S, Zeng J (2012) Particle Swarm Optimization based on self-organization topology driven by fitness with different removing link strategies. Int J Innov Comput Appl 4(2):119–132

  • Niu B, Zhu YL et al (2006) An improved particle swarm optimization based on bacterial chemotaxis. In: Proceedings of the 6th world congress on intelligent control and automation, Dalian, pp 3193–3197

  • Riget J, Vesterstrøm JS (2002) A diversity-guided particle swarm optimizer: the ARPSO. Technical report, Department of Computer Science, University of Aarhus, Denmark

  • Silva A, Neves A, Costa E (2002) An empirical comparison of particle swarm and predator prey optimisation. Lecture Notes in Artificial Intelligence, pp 103–110

  • Solis F, Wets R (1981) Minimization by random search techniques. Math Oper Res 6(1):19–30

  • Spears DF, Kerr W et al (2004) An overview of physicomimetics. Lecture Notes in Computer Science-State of the Art Series, pp 84–97

  • Valdez F, Melin P, Castillo O (2014) A survey on nature-inspired optimization algorithms with fuzzy logic for dynamic parameter adaptation. Expert Syst Appl 41(14):6459–6466

  • Wang YX, Xiang QL, Mao JY (2008) Particle swarms with dynamic ring topology. In: IEEE congress on evolutionary computation, pp 419–423

  • Whitacre JM, Sarker RA, Pham QT (2008) The self-organization of interaction networks for nature-inspired optimization. IEEE Trans Evol Comput 12(2):220–230

  • Zhang C, Yi Z (2011) Scale-free fully informed particle swarm optimization algorithm. Inf Sci 181(20):4550–4568

Author information

Corresponding author

Correspondence to Weibin Xu.

Additional information

Communicated by V. Loia.

This work is supported by the Youth Foundation of Shanxi Province under Grant No. 2012021012-5 and the Research Foundation for the Doctoral Program of Taiyuan University of Science and Technology under Grant No. 20122055.

Appendix A

A.1 Convergence analysis of ARFIPSO

To analyze the convergence of ARFIPSO, the stability theory of linear systems is used in this section. For the purpose of analysis, consider the situation in which \(P_g\) remains constant over a period of time. Since particle \(i\) is chosen randomly, the results apply to all other particles as well. In addition, since each dimension is updated independently of the others, without loss of generality the one-dimensional case is considered. Equations (8) and (9) can then be transformed as:

$$\begin{aligned}&v_i (t+1)=wv_i (t)+(1-F_i (t)) \times c_i \nonumber \\&\quad \times \left( \sum \limits _{\begin{array}{c} j\in n(B(i)) \\ j\ne g,\,j\ne i \end{array}} {r_j (p_j (t)-x_i (t))} -\sum \limits _{j\in n(W(i))} {r_j (p_j (t)-x_i (t))}\right) \nonumber \\&\quad +F_i (t) \times \beta \times (r_g (p_g (t)-x_i (t))+r_i (p_i (t)-x_i (t))) \end{aligned}$$
(19)
$$\begin{aligned} x_i (t+1)=x_i (t)+v_i (t+1) \end{aligned}$$
(20)
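
For reference, a minimal Python sketch of the one-dimensional update in Eqs. (19) and (20) is given below. The function name, the argument layout, and the way the neighbour sets \(n(B(i))\) and \(n(W(i))\) are passed in are illustrative assumptions, not part of the original algorithm description.

```python
import random

def arfipso_update_1d(x_i, v_i, p_i, p_g, p_better, p_worse, F_i, w, c_i, beta):
    """One-dimensional form of Eqs. (19)-(20) (illustrative sketch).

    p_better : personal bests of the better neighbours n(B(i)),
               excluding the global best g and particle i itself.
    p_worse  : personal bests of the worse neighbours n(W(i)).
    """
    # Attractive term: pull towards the better neighbours' personal bests.
    attract = sum(random.random() * (p_j - x_i) for p_j in p_better)
    # Repulsive term: push away from the worse neighbours' personal bests.
    repel = sum(random.random() * (p_j - x_i) for p_j in p_worse)
    # Fully informed attractive/repulsive part, weighted by (1 - F_i).
    v_new = w * v_i + (1.0 - F_i) * c_i * (attract - repel)
    # Cognitive/social part towards p_g and p_i, weighted by the contextual fitness F_i.
    v_new += F_i * beta * (random.random() * (p_g - x_i)
                           + random.random() * (p_i - x_i))
    x_new = x_i + v_new  # Eq. (20)
    return x_new, v_new
```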

Then, Eq. (19) becomes:

$$\begin{aligned} v_i (t+1)&=wv_i (t)-\left( (1-F_i (t))c_i \left( \sum \limits _{j\in n(B(i))} {r_j } -\sum \limits _{j\in n(W(i))} {r_j } \right) \right. \nonumber \\&\qquad \left. +\,F_i (t)\beta r_g +F_i (t)\beta r_i \right) x_i (t) \nonumber \\&\quad +(1-F_i (t))c_i \left( \sum \limits _{j\in n(B(i))} {r_j } -\sum \limits _{j\in n(W(i))} {r_j } \right) p_j \nonumber \\&\quad +F_i (t)\beta r_g p_g +F_i (t)\beta r_i p_i \end{aligned}$$
(21)

Substituting Eq. (21) into Eq. (20) and using \(v_i (t)=x_i (t)-x_i (t-1)\), the iterative process is obtained as follows:

$$\begin{aligned}&x_i (t+1)-\left( 1+w-\left( (1-F_i (t))c_i \left( \sum \limits _{j\in n(B(i))} {r_j } -\sum \limits _{j\in n(W(i))} {r_j } \right) \right. \right. \nonumber \\&\qquad \left. \left. +\,F_i (t)\beta r_g +F_i (t)\beta r_i \right) \right) x_i (t)+wx_i (t-1) \nonumber \\&\quad =(1-F_i (t))c_i \left( \sum \limits _{j\in n(B(i))} {r_j } -\sum \limits _{j\in n(W(i))} {r_j } \right) p_j \nonumber \\&\qquad +\,F_i (t)\beta r_g p_g +F_i (t)\beta r_i p_i \end{aligned}$$
(22)

Equation (22) can be viewed as a second-order discrete system with \((1-F_i (t))c_i \left( \sum \nolimits _{j\in n(B(i))} {r_j } -\sum \nolimits _{j\in n(W(i))} {r_j } \right) p_j +F_i (t)\beta r_g p_g +F_i (t)\beta r_i p_i \) as the input.

To analyze the convergence condition of the sequence \(\{Ex_i (t)\}\), where \(Ex_i (t)\) is the expectation of the random variable \(x_i (t)\), expectations are taken on both sides of Eq. (22); since each random coefficient \(r\) has expectation \(\frac{1}{2}\), Eq. (23) is obtained:

$$\begin{aligned}&Ex_i (t+1)-\left( 1+w-\left( (1-F_i (t))c_i \left( \frac{1}{2}\left| {n(B(i))} \right| -\frac{1}{2}\left| {n(W(i))} \right| \right) \right. \right. \nonumber \\&\qquad \left. \left. +\,\frac{1}{2}F_i (t)\beta +\frac{1}{2}F_i (t)\beta \right) \right) Ex_i (t)+wEx_i (t-1) \nonumber \\&\quad =(1-F_i (t))c_i \left( \frac{1}{2}\left| {n(B(i))} \right| -\frac{1}{2}\left| {n(W(i))} \right| \right) p_j \nonumber \\&\qquad +\,\frac{1}{2}F_i (t)\beta p_g+\frac{1}{2}F_i (t)\beta p_i \end{aligned}$$
(23)

Let \(\phi =\frac{1}{2}(1-F_i (t))c_i \left( \left| {n(B(i))} \right| -\left| {n(W(i))} \right| \right) +\frac{1}{2}F_i (t)\beta +\frac{1}{2}F_i (t)\beta \).

The characteristic equation of the iterative process shown in Eq. (23) is

$$\begin{aligned} \lambda ^2-(1+w-\phi )\lambda +w=0 \end{aligned}$$
(24)

According to the stability theory of linear systems, the convergence condition for the iterative process \(\{Ex_i (t)\}\) is that the absolute values of both eigenvalues \(\lambda _{1}\) and \(\lambda _{2}\) are less than 1. That is,

$$\begin{aligned} \left| {\frac{1+w-\phi \pm \sqrt{(1+w-\phi )^2-4w} }{2}} \right| <1 \end{aligned}$$
(25)

Thus, the convergence condition of the iterative process \(\{Ex_i (t)\}\) can be written as:

$$\begin{aligned}&0 \le w < 1\quad {\text {and}}\quad 0 < (1-F_i (t))c_i \left( \left| {n(B(i))} \right| -\left| {n(W(i))} \right| \right) \nonumber \\&\quad +\,F_i (t)\beta +F_i (t)\beta < 4(1+w) \end{aligned}$$
(26)
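
As a quick numerical check of condition (25), and hence of Eq. (26), the following sketch computes the roots of the characteristic equation (24) for given \(w\) and \(\phi \) and tests whether both have modulus less than 1. The function name and the example values are assumptions chosen purely for illustration.

```python
import cmath

def expected_trajectory_converges(w, phi):
    """Check |lambda_1| < 1 and |lambda_2| < 1 for the roots of
    lambda^2 - (1 + w - phi) * lambda + w = 0   (Eq. (24))."""
    b = -(1.0 + w - phi)                 # coefficient of lambda
    disc = cmath.sqrt(b * b - 4.0 * w)   # may be complex
    lam1 = (-b + disc) / 2.0
    lam2 = (-b - disc) / 2.0
    return abs(lam1) < 1.0 and abs(lam2) < 1.0

# Illustrative values with w = 0.6, so 2 * (1 + w) = 3.2:
print(expected_trajectory_converges(0.6, 1.0))   # True  (0 < phi < 3.2)
print(expected_trajectory_converges(0.6, 4.0))   # False (phi >= 3.2)
```

The outcome agrees with Eq. (26), since the bracketed expression there equals \(2\phi \).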

Next, Eq. (26) is analyzed in detail according to the value of \(F_i \):

1. If \(F_i =1\), then \(0<F_i \beta <2(1+w)\). So, particle \(i\) is convergent when \(0<\beta <2(1+w)\).

2. If \(F_i =0\), then \(0<c_i \times (\left| {n(B(i))} \right| -\left| {n(W(i))} \right| )<4(1+w)\). Further analysis is given as follows:

   (1) If particle \(i\) satisfies \(\left| {n(B(i))} \right| -\left| {n(W(i))} \right| \le 0\), then it is divergent.

   (2) If particle \(i\) satisfies \(\left| {n(B(i))} \right| -\left| {n(W(i))} \right| >0\), then \(0<c_i <\frac{4(1+w)}{\left| {n(B(i))} \right| -\left| {n(W(i))} \right| }\). Moreover, \(0\le w<1\), \(\min (\left| {n(B(i))} \right| -\left| {n(W(i))} \right| )=1\), \(\min \,4(1+w)=4\) and \(\max \,4(1+w)=8\). So,

      [1] when \(c_i \ge 8\), \(c_i =k_i \ge 8\ge \frac{\max \,4(1+w)}{\min (\left| {n(B(i))} \right| -\left| {n(W(i))} \right| )}\); thus, particle \(i\) is divergent;

      [2] when \(c_i \le 2\), i.e., \(k_i \le 2\), then \(\max (\left| {n(B(i))} \right| -\left| {n(W(i))} \right| )=2\) and \(c_i =k_i \le 2\le \frac{\min \,4(1+w)}{\max (\left| {n(B(i))} \right| -\left| {n(W(i))} \right| )}\); thus, particle \(i\) is convergent;

      [3] when \(2<c_i =k_i <8\), particle \(i\) may be convergent or divergent.

3. If \(0<F_i <1\), then \(0<(1-F_i ) \times c_i \times (\left| {n(B(i))} \right| -\left| {n(W(i))} \right| )+2F_i \beta <4(1+w)\).

Further analysis is given as follows:

1. When \(\max (\left| {n(B(i))} \right| -\left| {n(W(i))} \right| )=2\) and \(c_i =\beta \), because \(\min \,4(1+w)=4\), then \(0<c_i \times \max (\left| {n(B(i))} \right| -\left| {n(W(i))} \right| )<4\). Thus, when \(0<c_i =k_i \le 2\), particle \(i\) is convergent.

2. When \(\min (\left| {n(B(i))} \right| -\left| {n(W(i))} \right| )=1\) and \(c_i =\beta \), because \(\max \,4(1+w)=8\), then \(0<c_i \times \min (\left| {n(B(i))} \right| -\left| {n(W(i))} \right| )<8\). Thus, when \(c_i =k_i \ge 8\), particle \(i\) is divergent.

3. When \(2<c_i =k_i <8\), particle \(i\) may be convergent or divergent.

To summarize, when the acceleration coefficient \(c\) takes a smaller value, nodes with fewer connections are more likely to converge; when it takes a larger value, nodes with more connections are more likely to diverge. Thus, the convergence analysis of ARFIPSO can be used to determine the parameter settings in the subsequent experiments.
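
Since the case analysis above sets \(c_i =k_i \), this conclusion can be checked directly against Eq. (26) for the \(F_i =0\) case. The sketch below is an illustrative check only; the function and variable names are not from the paper.

```python
def converges_when_Fi_is_zero(c_i, degree_diff, w):
    """F_i = 0 case of Eq. (26): convergence requires
    0 < c_i * (|n(B(i))| - |n(W(i))|) < 4 * (1 + w)."""
    value = c_i * degree_diff
    return 0.0 < value < 4.0 * (1.0 + w)

# Low-degree node: c_i = k_i = 2 and degree_diff at most 2 -> convergent.
print(converges_when_Fi_is_zero(2, 2, 0.5))   # True  (4 < 6)
# High-degree node: c_i = k_i = 8 and degree_diff at least 1 -> divergent.
print(converges_when_Fi_is_zero(8, 1, 0.5))   # False (8 >= 6)
```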

A.2 Global convergence analysis of ARFIPSOMF

Solis and Wets (1981) provide the conditions under which a stochastic optimization algorithm converges to the global optimum with probability 1. The major conclusions are summarized as follows:

Hypothesis 1

\(f(D(z,\xi ))\le f(z)\); and if \(\xi \in \Omega \), then \(f(D(z,\xi ))\le f(\xi )\).

Here, \(D\) is the function that generates candidate solutions, \(\xi \) is a random vector drawn from the probability space \((R^n,B,\mu _k )\), \(f\) is the objective function, \(\Omega \subseteq R^n\) is the constrained solution space of the problem, \(\mu _k \) is a probability measure on \(B\), and \(B\) is the \(\sigma \)-field of subsets of \(R^n\).

Hypothesis 2

For every Borel subset \(A\) of \(\Omega \) with \(v(A)>0\),

$$\begin{aligned} \prod \limits _{k=0}^\infty {(1-\mu _k [A])} =0 \end{aligned}$$
(27)

Here, \(v(A)\) is the \(n\)-dimensional volume of \(A\), and \(\mu _k (A)\) is the probability of \(\mu _k \) generating a point in \(A\).

Theorem 1

If \(f\) is a measurable function, \(\Omega \) is a measurable subset of \(R^n\), \(\left\{ {z_k } \right\} _0^\infty \) is the solution sequence generated by the stochastic optimization algorithm, and Hypotheses 1 and 2 are satisfied, then the following formula (28) holds:

$$\begin{aligned} \mathop {\lim }\limits _{k\rightarrow +\infty } \,P[z_k \in R_\varepsilon ]=1 \end{aligned}$$
(28)

\(R_\varepsilon \) is the global optimum set.

Function \(D\) is defined as:

$$\begin{aligned} D(P_g (t),x_i (t))=\left\{ \begin{array}{l} P_g (t),\,f(P_g (t))\le f(x_i (t)) \\ x_i (t),\,f(P_g (t))>f(x_i (t)) \\ \end{array} \right. \end{aligned}$$
(29)
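
A minimal sketch of the selection function \(D\) in Eq. (29), assuming a minimization problem and using the sphere function purely as an illustrative objective \(f\):

```python
def D(f, p_g, x_i):
    """Eq. (29): keep whichever of the current global best p_g and the
    candidate position x_i has the better (lower) objective value."""
    return p_g if f(p_g) <= f(x_i) else x_i

# Illustrative usage with the sphere function as the objective.
f = lambda x: sum(v * v for v in x)
print(D(f, [0.1, 0.2], [1.0, -1.0]))   # [0.1, 0.2]
```

By construction, \(f(D(P_g (t),x_i (t)))\le f(P_g (t))\) and \(f(D(P_g (t),x_i (t)))\le f(x_i (t))\), which is exactly the monotone improvement required by Hypothesis 1.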

It can be proved that the function \(D\) defined in Eq. (29) satisfies Hypothesis 1. Furthermore, when the ARFIPSOMF search stagnates, \(M_{i,t} =\Omega \) for a new particle \(i\), and for any other particle \(l\),

$$\begin{aligned} M_{l,t}&=x_{lk} (t)+wv_{lk} (t)+(1-F_l ) \times c_l \\&\quad \times \left( \sum \limits _{\begin{array}{c} j\in n(B(l)) \\ j\ne g,\,j\ne l \end{array}} {r_{jk} (p_{jk} (t)-x_{lk} (t))} -\sum \limits _{j\in n(W(l))} {r_{jk} (p_{jk} (t)-x_{lk} (t))} \right) \\&\quad +F_l \times \beta \times (r_{gk} (p_{gk} (t)-x_{lk} (t))+r_{lk} (p_{lk} (t)-x_{lk} (t))) \end{aligned}$$

Thus, \(\Omega \subseteq \bigcup \nolimits _{l=1,\,l\ne i}^{N(t)} {M_{l,t} } \cup M_{i,t} \).

Set \(A=M_{i,t} \), where \(A\) is a Borel subset of \(\Omega \). Thus, \(v(A)>0\) and \(\mu _t [A]=\sum \nolimits _{i=0}^{N(t)} {\mu _{i,t} [A]} =1\). Therefore, Hypothesis 2 is satisfied. Because Hypotheses 1 and 2 and the conditions of Theorem 1 are satisfied, ARFIPSOMF converges to the global optimum with probability 1.

Cite this article

Mo, S., Zeng, J. & Xu, W. Attractive and Repulsive Fully Informed Particle Swarm Optimization based on the modified Fitness Model. Soft Comput 20, 863–884 (2016). https://doi.org/10.1007/s00500-014-1546-8
