Monitor system and Gaussian perturbation teaching–learning-based optimization algorithm for continuous optimization problems

  • Original Research
  • Published:
Journal of Ambient Intelligence and Humanized Computing

Abstract

In this paper, an improved teaching–learning-based optimization (TLBO) algorithm, called the monitor system and Gaussian perturbation (GP) teaching–learning-based optimization algorithm (MG-TLBO), is proposed, building on several modified TLBO variants. TLBO consists of two phases: a "Teacher phase" and a "Learner phase." To further improve solution accuracy and efficiency, we introduce two mechanisms into the learner phase: a monitor system and self-regulated learning (SRL) theory. In the learner phase, the monitor is assumed to be the most outstanding individual in the population and to possess the self-learning ability to expand his or her own strengths. In addition, GP is used to model the SRL process. Accordingly, three versions of MG-TLBO are proposed and evaluated experimentally. The results show that all three MG-TLBO variants are more effective than the original TLBO. Finally, a comparison of the experimental results with those of other representative meta-heuristics confirms the validity of the new MG-TLBO. In particular, MG-TLBO exhibits an overwhelming advantage over TLBO, which indicates that MG-TLBO balances exploration and exploitation behavior well. All of this evidence demonstrates that MG-TLBO improves the accuracy and efficiency of the solutions obtained by the original TLBO.
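
The full text is not reproduced on this page, so the following Python sketch is only a rough reconstruction of the idea summarized in the abstract: the teacher and learner phases follow the standard TLBO update rules, while the monitor step (the current best learner refining itself through a Gaussian perturbation, standing in for self-regulated learning) is an assumed interpretation. The function name mg_tlbo_sketch, the sphere objective, the perturbation scale sigma, and all other parameters are illustrative choices, not the authors' implementation.

import numpy as np


def sphere(x):
    """Benchmark objective (minimization); any continuous function works here."""
    return float(np.sum(x ** 2))


def mg_tlbo_sketch(f, dim=10, pop_size=30, iters=200,
                   lb=-100.0, ub=100.0, sigma=0.1, seed=0):
    """Hypothetical MG-TLBO sketch: standard TLBO plus a Gaussian-perturbed
    'monitor' (best learner) inside the learner phase."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, (pop_size, dim))
    fit = np.apply_along_axis(f, 1, pop)

    for _ in range(iters):
        # Teacher phase (standard TLBO): move learners toward the teacher.
        teacher = pop[np.argmin(fit)]
        mean = pop.mean(axis=0)
        tf = rng.integers(1, 3)  # teaching factor, 1 or 2
        cand = np.clip(pop + rng.random((pop_size, dim)) * (teacher - tf * mean), lb, ub)
        cand_fit = np.apply_along_axis(f, 1, cand)
        improved = cand_fit < fit
        pop[improved], fit[improved] = cand[improved], cand_fit[improved]

        # Learner phase with monitor (assumed interpretation of the abstract).
        monitor = int(np.argmin(fit))
        for i in range(pop_size):
            if i == monitor:
                # Self-regulated learning modelled by a Gaussian perturbation.
                trial = pop[i] + rng.normal(0.0, sigma, dim) * pop[i]
            else:
                # Standard learner-phase interaction with a random peer.
                j = int(rng.integers(pop_size))
                while j == i:
                    j = int(rng.integers(pop_size))
                step = pop[i] - pop[j] if fit[i] < fit[j] else pop[j] - pop[i]
                trial = pop[i] + rng.random(dim) * step
            trial = np.clip(trial, lb, ub)
            trial_fit = f(trial)
            if trial_fit < fit[i]:
                pop[i], fit[i] = trial, trial_fit

    best = int(np.argmin(fit))
    return pop[best], fit[best]


# Example: minimize the 10-dimensional sphere function.
x_best, f_best = mg_tlbo_sketch(sphere)
print("best objective value:", f_best)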

Acknowledgements

We would like to thank Editage (http://www.editage.cn) for English language editing.

Author information

Corresponding author

Correspondence to Yang Zhang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Shih, PC., Zhang, Y. & Zhou, X. Monitor system and Gaussian perturbation teaching–learning-based optimization algorithm for continuous optimization problems. J Ambient Intell Human Comput 13, 705–720 (2022). https://doi.org/10.1007/s12652-020-02796-0
