Boosting Metaheuristic Search Using Reinforcement Learning

  • Tony Wauters
  • Katja Verbeeck
  • Patrick De Causmaecker
  • Greet Vanden Berghe
Part of the Studies in Computational Intelligence book series (SCI, volume 434)


Many techniques that boost the speed or quality of metaheuristic search have been reported in the literature. The present contribution investigates the rather rare combination of reinforcement learning and metaheuristics. Reinforcement learning techniques describe how an autonomous agent can learn from experience. Previous work has shown that a network of simple reinforcement learning devices based on learning automata can generate good heuristics for (multi-)project scheduling problems. Generating heuristics, however, is only one of the ways in which reinforcement learning can strengthen metaheuristic search. Both existing and new methodologies for boosting metaheuristics with reinforcement learning are presented, together with experiments on real benchmarks.
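To make the idea concrete, the following is a minimal illustrative sketch (not the authors' exact method) of how a reinforcement learning device can boost a metaheuristic: a single learning automaton selects among low-level move heuristics inside a simple local search, updating its action probabilities with the linear reward-inaction (L_R-I) scheme. The toy objective, the two move operators, and the improvement-based reward signal are assumptions made for the example.

```python
import random

def one_max(bits):
    # Toy objective: maximise the number of 1-bits.
    return sum(bits)

def flip_one(bits, rng):
    # Move heuristic 1: flip a single random bit.
    out = list(bits)
    out[rng.randrange(len(out))] ^= 1
    return out

def flip_two(bits, rng):
    # Move heuristic 2: flip two distinct random bits.
    out = list(bits)
    for i in rng.sample(range(len(out)), 2):
        out[i] ^= 1
    return out

def la_local_search(n=30, steps=2000, alpha=0.1, seed=0):
    rng = random.Random(seed)
    actions = [flip_one, flip_two]
    probs = [1.0 / len(actions)] * len(actions)  # uniform start
    best = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        a = rng.choices(range(len(actions)), weights=probs)[0]
        cand = actions[a](best, rng)
        if one_max(cand) > one_max(best):  # reward = strict improvement
            best = cand
            # L_R-I update: shift probability mass toward the rewarded
            # action; probabilities still sum to one afterwards.
            probs = [p + alpha * (1.0 - p) if j == a else (1.0 - alpha) * p
                     for j, p in enumerate(probs)]
        # On a penalty (no improvement), L_R-I leaves probs unchanged.
    return best, probs

best, probs = la_local_search()
print(one_max(best), [round(p, 2) for p in probs])
```

The same structure scales to richer settings, e.g. one automaton per agent in a multi-project scheduling game, with the schedule's quality as the reward signal.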


Keywords: Scheduling Problems, Reinforcement Learning, Multi-Agent Systems, Learning Automata, Project Scheduling





Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  1. Tony Wauters, CODeS, KAHO Sint-Lieven, Gent, Belgium
  2. Katja Verbeeck, Vakgroep ICT, University College Katholieke Hogeschool Sint-Lieven, Ghent, Belgium
  3. Patrick De Causmaecker, Katholieke Universiteit Leuven, Kortrijk, Belgium
  4. Greet Vanden Berghe, CODeS (Combinatorial Optimisation and Decision Support), Industrial Sciences, KAHO Sint-Lieven, Belgium
