A participatory search algorithm

  • Research Paper
  • Published:
Evolutionary Intelligence

Abstract

Search is one of the most useful procedures employed in numerous situations, such as optimization, machine learning, and information processing and retrieval. This paper introduces participatory search, a population-based heuristic search algorithm built on the participatory learning paradigm. In participatory search, the search progresses by forming pools of compatible individuals, keeping the individual most compatible with the current best in the population, and introducing random individuals at each step. Recombination is a convex combination modulated by the compatibility between individuals, while mutation is an instance of differential variation modulated by the compatibility between selected and recombined individuals. The nature of the recombination and mutation operators is studied, and the convergence of the algorithm is analyzed within the framework of random search theory. The algorithm is evaluated on ten benchmark real-valued optimization problems, and its performance is compared against population-based optimization algorithms representative of the current state of the art. The participatory search algorithm is also evaluated on a suite of twenty-eight benchmark functions from a recent evolutionary real-valued optimization competition, to compare its performance against the competition winners. Computational results suggest that the participatory search algorithm performs best among the algorithms addressed in this paper.
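As a rough illustration of the loop described above, the following Python sketch runs one generation of a participatory-style search. It is an assumption-laden sketch, not the paper's exact procedure: the form of the compatibility measure, the parameter values `alpha` and `a`, the mate-pool rule, the selection step, and the clipping to \([0,1]^n\) are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def rho(z, v):
    # Compatibility: 1 minus the mean absolute difference,
    # assuming individuals are coded in [0, 1]^n.
    return 1.0 - np.abs(z - v).mean()

def psa_generation(pop, fitness, alpha=0.5, a=0.0):
    """One illustrative generation: recombine each individual with its
    most compatible pool mate, mutate by compatibility-modulated
    differential variation around the current best, keep the fittest,
    and inject one random individual."""
    best = min(pop, key=fitness)
    new_pop = []
    for s in pop:
        mate = max((v for v in pop if v is not s), key=lambda v: rho(s, v))
        gamma = alpha * rho(s, mate) ** (1.0 - a)
        p_r = (1.0 - gamma) * s + gamma * mate               # recombination
        p_m = best + rho(s, p_r) ** (1.0 - a) * (s - p_r)    # mutation
        p_m = np.clip(p_m, 0.0, 1.0)
        new_pop.append(min((s, p_r, p_m), key=fitness))      # selection
    new_pop[-1] = rng.random(len(pop[0]))                    # random individual
    return new_pop

# Minimize a simple sphere function centered at 0.5.
sphere = lambda x: float(np.sum((x - 0.5) ** 2))
pop = [rng.random(3) for _ in range(6)]
pop = psa_generation(pop, sphere)
```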

Notes

  1. The code is available upon request.


Acknowledgements

The second author is grateful to the Brazilian National Council for Scientific and Technological Development (CNPq) for grant 305906/2014-3. The authors are indebted to one of the reviewers for the constructive comments and suggestions that helped improve the paper.

Corresponding author

Correspondence to Yi Ling Liu.

Appendices

Appendix 1

This appendix shows that \(\rho\) satisfies the defining properties of a compatibility (similarity) measure: nonnegativity, \(\rho (z,v)=1\) if and only if \(z=v\), symmetry, and a triangle-type inequality. To see this, rewrite \(\rho\) as follows

$$\begin{aligned} \rho= & {} 1-\frac{1}{n}\sum _{k=1}^{n}|z_{k}-v_{k}| \end{aligned}$$
(30)
$$\begin{aligned}= & {} \frac{1}{n}\sum _{k=1}^{n}(1-|z_{k}-v_{k}|). \end{aligned}$$
(31)

Consider the term \(1-|z_{k}-v_{k}|=S_{k},\) which can be understood as a measure of similarity between the kth components of z and v; since individuals are coded with components in \([0,1],\) \(|z_{k}-v_{k}|\le 1\) and thus \(S_{k}\in [0,1].\) For property (1),

$$\begin{aligned} \rho (z,v)=\frac{1}{n}\sum _{k=1}^{n}S_{k}\ge 0 \quad \mathrm{because} \quad S_{k}\in [0,1]. \end{aligned}$$

For property (2),

$$\begin{aligned} \rho (z,v)=\frac{1}{n}\sum _{k=1}^{n}S_{k}=1 \Rightarrow S_{k}=1 \;\text { for all } k, \quad \text {thus}, \quad z=v. \end{aligned}$$

On the other hand,

$$\begin{aligned} z=v \Rightarrow S_{k}=1 \;\text { for all } k, \quad \text {thus}, \quad \frac{1}{n}\sum _{k=1}^{n}S_{k}=1=\rho (z,v). \end{aligned}$$

For the property (3),

$$\begin{aligned} \rho (z,v)= & {} 1-\frac{1}{n}\sum _{k=1}^{n}|z_{k}-v_{k}| \end{aligned}$$
(32)
$$\begin{aligned}= & {} 1-\frac{1}{n}\sum _{k=1}^{n}|v_{k}-z_{k}| \end{aligned}$$
(33)
$$\begin{aligned}= & {} \rho (v,z). \end{aligned}$$
(34)

Finally, for property (4), let \(z,v,w\in S.\) Using \(|(z_{k}-v_{k})+(v_{k}-w_{k})|\le |z_{k}-v_{k}|+|v_{k}-w_{k}|,\)

$$\begin{aligned} \rho (z,w)= & {} 1-\frac{1}{n}\sum _{k=1}^{n}|z_{k}-w_{k}| \end{aligned}$$
(35)
$$\begin{aligned}= & {} 1-\frac{1}{n}\sum _{k=1}^{n}|(z_{k}-v_{k})+(v_{k}-w_{k})| \end{aligned}$$
(36)
$$\begin{aligned}\ge & {} 1-\frac{1}{n}\sum _{k=1}^{n}|z_{k}-v_{k}| - \frac{1}{n}\sum _{k=1}^{n}|v_{k}-w_{k}| \end{aligned}$$
(37)
$$\begin{aligned}= & {} \rho (z,v) + \rho (v,w) -1, \end{aligned}$$
(38)

which is the triangle-type inequality for similarity measures: it is equivalent to \(d(z,w)\le d(z,v)+d(v,w)\) for the associated distance \(d=1-\rho .\)
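A direct Python transcription of Eqs. (30)–(31) may help; this sketch assumes individuals are real vectors with components in \([0,1]\):

```python
import numpy as np

def compatibility(z, v):
    """rho(z, v) = 1 - (1/n) * sum_k |z_k - v_k|, per Eqs. (30)-(31).
    Assumes components of z and v lie in [0, 1], so that each
    similarity term S_k = 1 - |z_k - v_k| lies in [0, 1]."""
    z = np.asarray(z, dtype=float)
    v = np.asarray(v, dtype=float)
    return 1.0 - np.abs(z - v).mean()

z = np.array([0.2, 0.8, 0.5])
v = np.array([0.3, 0.6, 0.5])
print(compatibility(z, z))  # identical individuals: rho = 1
print(compatibility(z, v))
print(compatibility(v, z))  # symmetry: same value as above
```

Note that \(\rho\) increases as the individuals get closer, which is why \(1-\rho\) (a scaled \(L_1\) distance) plays the role of a metric.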

Appendix 2

Proof

The proof proceeds as follows. Recombination of PSA uses parents s and \(s'\) to produce offspring \(p_{r}\), namely

$$\begin{aligned} p_{r} = (1-\gamma )s+ \gamma s' \end{aligned}$$

where \(\gamma =\alpha \rho ^{1-a}_{r}.\) Individuals s, \(s'\) and \(p_{r}\) play the role of \(s^{p1},\) \(s^{p2}\) and \(s^{0}\) in (19). We must show that

$$\begin{aligned} d(s,s')\ge \max (d(s,p_{r}),d(s',p_{r})). \end{aligned}$$
(41)

Because \(p_{r}\) is a convex combination of s and \(s',\) and the distance d is induced by a norm, we have that

$$\begin{aligned} d(s,p_{r})= & {} d(s, (1-\gamma )s+\gamma s')\nonumber \\\le & {} (1-\gamma )d(s,s) + \gamma d(s,s')\nonumber \\= & {} \gamma d(s,s') \le d(s,s') . \end{aligned}$$
(42)

We also have that

$$\begin{aligned} d(s',p_{r})= & {} d(s', (1-\gamma )s+\gamma s')\nonumber \\\le & {} (1-\gamma )d(s',s) + \gamma d(s',s')\nonumber \\= & {} (1-\gamma )d(s',s) \le d(s,s') . \end{aligned}$$
(43)

Because \(0\le \gamma \le 1,\) (42) and (43) yield

$$\begin{aligned} d(s,s') \ge \max (d(s,p_{r}),d(s',p_{r})). \end{aligned}$$

\(\square\)
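The contraction property just proved can be checked numerically. The sketch below implements the recombination rule \(p_{r}=(1-\gamma )s+\gamma s'\) with \(\gamma =\alpha \rho ^{1-a}_{r}\); the values of \(\alpha\) and a are illustrative assumptions, and the Euclidean norm stands in for the norm-induced distance d:

```python
import numpy as np

def recombine(s, s_prime, alpha=0.5, a=0.0):
    """PSA-style recombination: p_r = (1 - gamma) * s + gamma * s',
    with gamma = alpha * rho_r ** (1 - a) and rho_r the compatibility
    (1 minus mean absolute difference) of the parents."""
    rho_r = 1.0 - np.abs(s - s_prime).mean()
    gamma = alpha * rho_r ** (1.0 - a)
    return (1.0 - gamma) * s + gamma * s_prime

d = lambda x, y: float(np.linalg.norm(x - y))  # norm-induced distance

s = np.array([0.1, 0.9, 0.4])
s_prime = np.array([0.7, 0.2, 0.6])
p_r = recombine(s, s_prime)

# Inequality (41): the offspring is no farther from either parent
# than the parents are from each other.
print(d(s, s_prime) >= max(d(s, p_r), d(s_prime, p_r)))  # True
```

Since \(\gamma \in [0,1]\) whenever \(\alpha \in [0,1]\) and \(\rho _{r}\in [0,1]\), the offspring always lies on the segment joining its parents.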

Appendix 3

Proof

PSA mutation uses \(p_{selected}\) and \(p_{r}\) to produce \(p_{m}\) from

$$\begin{aligned} p_{m} = s^{*} + \rho ^{1-a}_{m}(p_{selected}-p_{r}). \end{aligned}$$
(44)

We have to show that

$$\begin{aligned} d(p_{selected},p_{r}) \le \max (d(p_{selected},p_{m}), d(p_{r},p_{m})). \end{aligned}$$
(45)

From (10) \(p_{selected}\) is either s or \(s'\). Assume that \(p_{selected}=s.\) We must check if

$$\begin{aligned} d(s,p_{r})\le \max (d(s,p_{m}), d(p_{r},p_{m})). \end{aligned}$$
(46)

Indeed, this is the case. Since

$$\begin{aligned} p_{r} = (1-\gamma )s+ \gamma s' \end{aligned}$$

where \(\gamma =\alpha \rho ^{1-a}_{r},\) we have

$$\begin{aligned} d(s,p_{r})= & {} d(s,(1-\gamma )s+ \gamma s') \nonumber \\\le & {} (1-\gamma ) d(s,s) + \gamma d(s,s') \nonumber \\= & {} \gamma d(s,s') \nonumber \\\le & {} d(s,s'). \end{aligned}$$
(47)

Similarly,

$$\begin{aligned} d(s,p_{m})= & {} d(s, s^{*}+\delta (s-(1-\gamma )s- \gamma s')) \nonumber \\= & {} d(s, s^{*}+\delta \gamma (s-s')) \nonumber \\\le & {} d(s,s^{*}) + d(s^{*}, s^{*}+\delta \gamma (s-s')) \nonumber \\= & {} d(s,s^{*}) + \delta \gamma d(s,s') \nonumber \\\le & {} d(s,s^{*}) + d(s,s') \end{aligned}$$
(48)

where \(\delta =\rho ^{1-a}_{m};\) the third line is the triangle inequality, the fourth uses translation invariance and homogeneity of the norm-induced distance, and the last uses \(\delta \gamma \le 1.\) Computing \(d(p_{r},p_{m})\) we obtain

$$\begin{aligned} d(p_{r},p_{m})= & {} d(p_{r}, s^{*}+\delta (s-p_{r})) \nonumber \\\le & {} d(p_{r},s^{*}) + d(s^{*}, s^{*}+\delta (s-p_{r})) \nonumber \\= & {} d(p_{r},s^{*}) + \delta d(p_{r},s) \nonumber \\\le & {} d(p_{r},s^{*}) + d(s,s'). \end{aligned}$$
(49)

From (47) and (48)

$$\begin{aligned} d(s,p_{m})-d(s,p_{r}) \le d(s,s^{*}) \end{aligned}$$
(50)

and, with \(d(s,s^{*})> 0\) for \(s\ne s^{*},\) hence \(d(s,p_{m})>d(s,p_{r})\).

From (47) and (49) we have

$$\begin{aligned} d(p_{r},p_{m})-d(s,p_{r}) \le d(p_{r},s^{*}) \end{aligned}$$
(51)

and, with \(d(p_{r},s^{*})> 0\) for \(p_{r}\ne s^{*},\) thus \(d(p_{r},p_{m})>d(s,p_{r})\). Therefore,

$$\begin{aligned} d(s,p_{r})\le \max (d(s,p_{m}), d(p_{r},p_{m})). \end{aligned}$$
(52)

Assume that \(p_{selected}=s'\). The following inequality holds

$$\begin{aligned} d(s',p_{r})\le \max (d(s',p_{m}), d(p_{r},p_{m})). \end{aligned}$$
(53)

Indeed, calculation of \(d(s',p_{r})\) gives

$$\begin{aligned} d(s',p_{r})= & {} d(s',(1-\gamma )s+ \gamma s') \nonumber \\\le & {} (1-\gamma ) d(s',s) + \gamma d(s',s') \nonumber \\= & {} (1-\gamma ) d(s',s) \nonumber \\\le & {} d(s',s) \end{aligned}$$
(54)

and calculation of \(d(s',p_{m})\) gives

$$\begin{aligned} d(s',p_{m})= & {} d(s', s^{*}+\delta (s'-(1-\gamma )s- \gamma s')) \nonumber \\= & {} d(s', s^{*}+\delta (1-\gamma )(s'-s)) \nonumber \\\le & {} d(s',s^{*}) + d(s^{*}, s^{*}+\delta (1-\gamma )(s'-s)) \nonumber \\= & {} d(s',s^{*}) + \delta (1-\gamma )d(s',s) \nonumber \\\le & {} d(s',s^{*}) + d(s',s). \end{aligned}$$
(55)

Computing \(d(p_{r},p_{m})\) we obtain

$$\begin{aligned} d(p_{r},p_{m})= & {} d(p_{r}, s^{*}+\delta (s'-p_{r})) \nonumber \\\le & {} d(p_{r},s^{*}) + d(s^{*}, s^{*}+\delta (s'-p_{r})) \nonumber \\= & {} d(p_{r},s^{*}) + \delta d(p_{r},s') \nonumber \\\le & {} d(p_{r},s^{*}) + d(s',s). \end{aligned}$$
(56)

From (54) and (55) we have

$$\begin{aligned} d(s',p_{m})-d(s',p_{r}) \le d(s',s) \end{aligned}$$
(57)

and, with \(d(s',s)> 0\) for \(s\ne s',\) hence \(d(s',p_{m})>d(s',p_{r})\).

From (54) and (56) we get

$$\begin{aligned} d(p_{r},p_{m})-d(s',p_{r}) \le d(p_{r},s^{*}) \end{aligned}$$
(58)

and, with \(d(p_{r},s^{*})> 0\) for \(p_{r}\ne s^{*},\) thus \(d(p_{r},p_{m})>d(s',p_{r})\). Therefore,

$$\begin{aligned} d(s',p_{r})\le \max (d(s',p_{m}), d(p_{r},p_{m})) \end{aligned}$$

which means that

$$\begin{aligned} d(p_{selected},p_{r}) \le \max (d(p_{selected},p_{m}), d(p_{r},p_{m})). \end{aligned}$$

\(\square\)
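The mutation rule (44) admits an equally direct transcription. In this sketch \(s^{*}\) is the current best individual, \(\rho _{m}\) is taken as the compatibility between \(p_{selected}\) and \(p_{r}\), and the value of a is an illustrative assumption:

```python
import numpy as np

def mutate(s_best, p_selected, p_r, a=0.0):
    """PSA-style mutation, Eq. (44):
    p_m = s* + rho_m ** (1 - a) * (p_selected - p_r),
    with rho_m the compatibility of p_selected and p_r."""
    rho_m = 1.0 - np.abs(p_selected - p_r).mean()
    return s_best + rho_m ** (1.0 - a) * (p_selected - p_r)

s_best = np.array([0.5, 0.5, 0.5])
p_selected = np.array([0.1, 0.9, 0.4])
p_r = np.array([0.25, 0.725, 0.45])

p_m = mutate(s_best, p_selected, p_r)
# The mutant is a differential perturbation of s*, scaled down when
# p_selected and p_r are less compatible (rho_m here is 0.875).
print(p_m)
```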

About this article


Cite this article

Liu, Y.L., Gomide, F. A participatory search algorithm. Evol. Intel. 10, 23–43 (2017). https://doi.org/10.1007/s12065-016-0151-4
