Evolutionary Algorithms

A chapter in Machine Learning and Artificial Intelligence

Abstract

This chapter introduces the concepts of evolutionary algorithms. Evolutionary algorithms are based on Darwin’s theory of evolution by natural selection, and all the algorithms described in this chapter are conceptually rooted in that idea. Each algorithm interprets the idea in a slightly different manner and proposes a different framework for solving a certain type of problem. Specifically, we discuss the following algorithms: (1) genetic algorithms, (2) simulated annealing, (3) ant colony optimization, and (4) swarm intelligence. These algorithms use biologically inspired techniques to improve the convergence of optimization when the problem is almost impossible to solve exactly using most of the techniques described in other chapters.
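
As a concrete illustration of the natural-selection idea, here is a minimal genetic-algorithm sketch in Python. It is not code from the chapter; the toy "OneMax" objective, the population size, and the mutation rate are assumptions chosen only for the example.

    # Minimal genetic algorithm sketch (illustrative; not the chapter's code).
    # Selection, crossover, and mutation act on bit strings; the toy
    # "OneMax" objective simply counts ones.
    import random

    def fitness(bits):
        return sum(bits)

    def tournament(population):
        # Selection: the fitter of two random individuals becomes a parent.
        a, b = random.sample(population, 2)
        return a if fitness(a) >= fitness(b) else b

    def evolve(pop_size=30, n_bits=20, generations=50, mutation_rate=0.02):
        population = [[random.randint(0, 1) for _ in range(n_bits)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            next_gen = []
            while len(next_gen) < pop_size:
                p1, p2 = tournament(population), tournament(population)
                cut = random.randint(1, n_bits - 1)           # single-point crossover
                child = p1[:cut] + p2[cut:]
                child = [1 - b if random.random() < mutation_rate else b
                         for b in child]                       # bit-flip mutation
                next_gen.append(child)
            population = next_gen
        return max(population, key=fitness)

    best = evolve()
    print(best, fitness(best))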

Notes

  1. In general, algorithms that rely on gradient-based search are called greedy algorithms. They use the fact from calculus that at any local optimum (minimum or maximum) the gradient is zero. To distinguish a minimum from a maximum, the second-order gradient is examined: when it is positive, the point is a minimum; when it is negative, it is a maximum. A minimal numerical sketch of this test follows these notes.

  2. This problem belongs to a class of problems called NP-hard, which stands for nondeterministic polynomial-time hard [15]. The worst-case solution time grows nearly exponentially with the problem size and quickly exceeds what current hardware can handle; the second sketch after these notes illustrates this growth for the travelling salesman problem.
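
The second-derivative test mentioned in note 1 can be checked numerically. The finite-difference sketch below is only an illustration, assuming a smooth one-dimensional function; the example function and step size are arbitrary choices.

    # Numerical sketch of the test in note 1: the gradient is ~0 at an optimum,
    # and the sign of the second derivative tells a minimum from a maximum.
    def derivative(f, x, h=1e-5):
        return (f(x + h) - f(x - h)) / (2 * h)

    def second_derivative(f, x, h=1e-5):
        return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

    f = lambda x: (x - 2.0) ** 2        # has a minimum at x = 2
    x0 = 2.0
    print(derivative(f, x0))            # approximately 0: x0 is a critical point
    print(second_derivative(f, x0))     # positive (about 2): x0 is a minimum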
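
To make the growth described in note 2 concrete, the short sketch below counts the number of distinct tours in a symmetric travelling salesman problem, which is (n - 1)!/2 for n cities; the chosen values of n are illustrative.

    # Number of distinct tours in a symmetric TSP with n cities: (n - 1)! / 2.
    import math

    for n in (5, 10, 15, 20):
        print(n, math.factorial(n - 1) // 2)
    # n = 20 already gives more than 6 * 10**16 tours, which is why exhaustive
    # search quickly becomes infeasible, as note 2 states.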

References

  1. Travelling Salesman Problem https://en.wikipedia.org/wiki/Travelling_salesman_problem

  2. Genetic Programming API reference https://gplearn.readthedocs.io/en/stable/reference.html

  3. Particle Swarm Optimization Library https://pyswarms.readthedocs.io/en/latest/

  4. S. Kirkpatrick, C.D. Gelatt Jr., M.P. Vecchi, Optimization by Simulated Annealing. Science, New Series 220(4598) (1983)

  5. C.W. Reynolds, Flocks, Herds and Schools: A Distributed Behavioral Model. Computer Graphics 21(4), 25–34 (July 1987)

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Joshi, A.V. (2023). Evolutionary Algorithms. In: Machine Learning and Artificial Intelligence. Springer, Cham. https://doi.org/10.1007/978-3-031-12282-8_11

  • DOI: https://doi.org/10.1007/978-3-031-12282-8_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-12281-1

  • Online ISBN: 978-3-031-12282-8
