Reinforcement Learning and Genetic Algorithms

Statistical Learning from a Regression Perspective

Part of the book series: Springer Texts in Statistics ((STS))

Abstract

There are a wide variety of empirical settings that do not easily fit within an optimization framework and for which results that are “good,” but not necessarily the “best,” are the only practical option. When business firms compete, for instance, a single firm can dominate a market by “just” being better than its competitors. Over the past decade, reinforcement learning has built on this perspective with considerable success. Although several features of reinforcement learning are some distance from our full regression approach, its promise motivates a brief discussion. Reinforcement learning also is sometimes included as a component of deep learning.

Notes

  1. In our gridlock example, the set of possible decisions is fixed and does not depend on the choices you make. Among the most exciting applications of tree search algorithms are those that involve playing against an opponent who reacts to your decisions and counters by changing the mix of choices you have available. The setting is adversarial. A game of checkers is a simple example: with each alternative move, the available decisions and their consequences can change. The most impressive performance to date is DeepMind's AlphaGo, which beat the best Go players in the world. A discussion of adversarial reinforcement learning is beyond the scope of this book, in part because the regression formulation is a stretch; the setting is a game. See Silver et al. (2016). It is very cool stuff.
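The adversarial flavor described here can be sketched with plain minimax search on a toy game. Everything below (the game of Nim, the function names, the move set) is a hypothetical illustration, not the tree search used by AlphaGo: with each move, the set of positions the opponent can reach changes, and each side searches the tree assuming the other plays its best reply.

```python
# A minimal adversarial-search sketch: minimax on Nim. Players alternately
# remove 1 or 2 stones from a pile; whoever takes the last stone wins.
# Each move changes the options the opponent faces -- the adversarial
# feature discussed in the note above.

def minimax(stones, maximizing):
    """Return +1 if the maximizing player can force a win, else -1."""
    if stones == 0:
        # The player who just moved took the last stone and won.
        return -1 if maximizing else 1
    moves = [m for m in (1, 2) if m <= stones]
    values = [minimax(stones - m, not maximizing) for m in moves]
    return max(values) if maximizing else min(values)

def best_move(stones):
    """Pick the move with the best forced outcome for the player to move."""
    moves = [m for m in (1, 2) if m <= stones]
    return max(moves, key=lambda m: minimax(stones - m, False))
```

With these rules, piles that are multiples of 3 are losing positions for the player to move; from any other pile, `best_move` leaves the opponent a multiple of 3.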

  2. There is some disagreement about whether genetic algorithms should be seen as reinforcement learning, and there are indeed some important differences (Sutton and Barto 2018: 8–9). This section can be productively read even if genetic algorithms are only a distant cousin of reinforcement learning.

  3. The manner in which the initial population is generated often does not matter a great deal. For example, an initial population of 500 could be composed of 500 identical network specifications. Variation in the population is introduced later.
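A minimal sketch of why identical starting candidates suffice: mutation supplies the variation in later generations, and selection then concentrates the population near good solutions. The objective function, population size, and mutation scale below are hypothetical placeholders, not anything from the GA package.

```python
import random

# Toy genetic algorithm over a single real-valued parameter. The
# (hypothetical) objective -(x - 3)^2 is maximized at x = 3.

def fitness(x):
    return -(x - 3.0) ** 2

def evolve(pop_size=500, generations=200, mutation_sd=0.5, seed=0):
    rng = random.Random(seed)
    # An initial population of 500 *identical* specifications, as in the note.
    population = [0.0] * pop_size
    for _ in range(generations):
        # Selection: keep the fitter half as parents.
        parents = sorted(population, key=fitness, reverse=True)[:pop_size // 2]
        # Mutation: refill the population with perturbed copies of parents;
        # this is where variation enters, despite the uniform start.
        population = [p + rng.gauss(0.0, mutation_sd)
                      for p in rng.choices(parents, k=pop_size)]
    return max(population, key=fitness)

best = evolve()  # ends up close to the optimum at 3.0
```

Although every candidate starts at 0.0, after a couple of hundred generations the best member of the population sits near the optimum; only the rate of early progress, not the destination, depends much on the starting point.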

  4. The new form was designed by Susan B. Sorenson in collaboration with the local police department (Berk and Sorenson 2019).

  5. GA is written by Luca Scrucca. It is rich in options and features that work well. The documentation in R is a little thin, but additional background material is available in Scrucca (2014, 2017). The package has a substantial learning curve if one wants to master its many variants.

References

  • Affenzeller, M., Winkler, S., Wagner, S., & Beham, A. (2009). Genetic algorithms and genetic programming: Modern concepts and practical applications. New York: Chapman & Hall.

  • Berk, R. A., & Bleich, J. (2013). Statistical procedures for forecasting criminal behavior: A comparative assessment. Criminology and Public Policy, 12(3), 515–544.

  • Berk, R. A., & Sorenson, S. (2019). An algorithmic approach to forecasting rare violent events: An illustration based on IPV perpetration. arXiv: 1903.00604v1.

  • Choudhary, A. (2019). Introduction to Monte Carlo tree search: The game-changing algorithm behind DeepMind’s AlphaGo. Analytics Vidhya, Jan 23. https://www.analyticsvidhya.com/blog/2019/01/monte-carlo-tree-search-introduction-algorithm-deepmind-alphago/

  • Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. Cambridge: MIT Press.

  • Lapan, M. (2018). Deep reinforcement learning hands on. Birmingham: Packt Publishing.

  • Mitchell, M. (1998). An introduction to genetic algorithms. Cambridge: MIT Press.

  • Proellochs, N. (2019). Reinforcement learning in R. https://cran.r-project.org/web/packages/ReinforcementLearning/vignettes/ReinforcementLearning.html

  • Scrucca, L. (2014). GA: A package for genetic algorithms in R. Journal of Statistical Software, 53(4), 1–37.

  • Scrucca, L. (2017). On some extensions to GA package: Hybrid optimisation, parallelisation and islands evolution. The R Journal, 9(1), 187–206.

  • Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., van den Driessche, G., et al. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(28), 484–489.

  • Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction (2nd ed.). Cambridge: A Bradford Book.

  • Umbarkar, A. J., & Sheth, P. D. (2015). Crossover operators in genetic algorithms: A review. ICTACT Journal of Soft Computing, 6(1), 1083–1092.

Copyright information

© 2020 Springer Nature Switzerland AG

Cite this chapter

Berk, R.A. (2020). Reinforcement Learning and Genetic Algorithms. In: Statistical Learning from a Regression Perspective. Springer Texts in Statistics. Springer, Cham. https://doi.org/10.1007/978-3-030-40189-4_9
