Strengthened teaching–learning-based optimization algorithm for numerical optimization tasks

  • Research Paper
  • Published in Evolutionary Intelligence

Abstract

The teaching–learning-based optimization algorithm (TLBO) is an efficient optimizer, but it suffers from premature convergence and stagnation at local optima. In this paper, a strengthened teaching–learning-based optimization algorithm (STLBO) is proposed to enhance the exploration and exploitation of the basic TLBO by introducing three strengthening mechanisms: a linearly increasing teaching factor, an elite system composed of a new teacher and a class leader, and Cauchy mutation. Seven STLBO variants are then designed from combined deployments of these three mechanisms. Their performance is evaluated on thirteen numerical optimization tasks, comprising seven unimodal tasks (f1–f7) and six multimodal tasks (f8–f13). The results show that STLBO7 ranks first and is significantly better than the original TLBO, and the remaining six variants also outperform TLBO. Finally, STLBO7 is compared with other advanced optimization techniques, including HS, PSO, MFO, GA and HHO. The numerical results and convergence curves show that STLBO7 clearly outperforms these competitors, with stronger local-optimum avoidance, faster convergence and higher solution accuracy. Overall, the STLBO variants improve the search performance of TLBO.

Data availability: All data generated or analyzed during this study are included in this published article (and its supplementary information files).
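For readers who want to prototype the approach, the listing below is a minimal Python sketch built only from the mechanisms named in the abstract: the standard TLBO teacher and learner phases, a teaching factor that increases linearly over the iterations, the current best learner used as an elite teacher, and a Cauchy mutation of the best learner. The exact teaching-factor schedule, the composition of the elite system (new teacher and class leader), the placement of the Cauchy mutation, and all parameter values are assumptions for illustration rather than the paper's STLBO7 procedure; the demo objective is the sphere function (f1).

import numpy as np

def sphere(x):
    # Unimodal benchmark f1 from the paper's test suite.
    return float(np.sum(x ** 2))

def stlbo_sketch(obj, dim=30, pop_size=30, max_iter=500, lb=-100.0, ub=100.0, seed=0):
    # Hedged sketch of TLBO plus the three strengthening ideas named in the
    # abstract; schedules and parameter values are illustrative assumptions.
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, (pop_size, dim))
    fit = np.array([obj(x) for x in pop])

    for t in range(max_iter):
        # Assumed linearly increasing teaching factor: grows from 1 to 2 over the run.
        tf = 1.0 + t / max(max_iter - 1, 1)

        # Elite "teacher": the current best learner (the paper's full elite system
        # with a separate class leader is not specified in the abstract).
        teacher = pop[np.argmin(fit)]
        mean = pop.mean(axis=0)

        # Teacher phase: move each learner toward the teacher, away from the class mean.
        for i in range(pop_size):
            cand = np.clip(pop[i] + rng.random(dim) * (teacher - tf * mean), lb, ub)
            f_cand = obj(cand)
            if f_cand < fit[i]:
                pop[i], fit[i] = cand, f_cand

        # Learner phase: each learner interacts with a randomly chosen peer.
        for i in range(pop_size):
            j = rng.integers(pop_size)
            while j == i:
                j = rng.integers(pop_size)
            step = pop[i] - pop[j] if fit[i] < fit[j] else pop[j] - pop[i]
            cand = np.clip(pop[i] + rng.random(dim) * step, lb, ub)
            f_cand = obj(cand)
            if f_cand < fit[i]:
                pop[i], fit[i] = cand, f_cand

        # Assumed Cauchy mutation: heavy-tailed perturbation of the best learner,
        # accepted greedily, to help escape local optima.
        b = np.argmin(fit)
        mut = np.clip(pop[b] + rng.standard_cauchy(dim), lb, ub)
        f_mut = obj(mut)
        if f_mut < fit[b]:
            pop[b], fit[b] = mut, f_mut

    b = np.argmin(fit)
    return pop[b], fit[b]

if __name__ == "__main__":
    x_best, f_best = stlbo_sketch(sphere)
    print(f"best f1 value found: {f_best:.3e}")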



Acknowledgements

This work was supported by the Philosophy and Social Sciences General Project of Shanghai (No. 2022BGL010), the National Natural Science Foundation of China (No. 71840003), the Key Soft Science Project of the Science and Technology Innovation Action Plan of the Shanghai Municipal Science and Technology Commission (No. 20692104300), and the Science and Technology Development Foundation of the University of Shanghai for Science and Technology (No. 2018KJFZ043).

Author information

Corresponding author

Correspondence to Chunming Ye.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Chen, X., Ye, C., Zhang, Y. et al. Strengthened teaching–learning-based optimization algorithm for numerical optimization tasks. Evol. Intel. 17, 1463–1480 (2024). https://doi.org/10.1007/s12065-023-00839-x

