
Learning to sample initial solution for solving 0–1 discrete optimization problem by local search

Science China Mathematics · Article · AI Methods for Optimization Problems

Abstract

Local search methods are convenient alternatives for solving discrete optimization problems (DOPs). These easy-to-implement methods can find approximate optimal solutions within a tolerable time limit. It is well known that the quality of the initial solution strongly affects the quality of the approximate solution found by a local search method. In this paper, we propose to treat the initial solution as a random variable and to learn a preferable probability distribution for it. The aim is to sample a good initial solution from the learned distribution so that the local search can find a high-quality solution. We develop two different deep network models for DOPs defined on sets (the knapsack problem) and on graphs (the maximum clique problem), respectively. The deep neural network learns a representation of an optimization problem instance and transforms this representation into a probability vector. Experimental results show that, given an initial solution sampled from the learned probability distribution, a local search method acquires much better approximate solutions than with a randomly sampled initial solution on synthesized knapsack instances and on Erdős-Rényi random graph instances. Furthermore, with sampled initial solutions, a classical genetic algorithm achieves better solutions than with a randomly initialized population when solving maximum clique problems on DIMACS instances. In particular, we emphasize that the developed models generalize across dimensions and across graphs of various densities, which is an important advantage for generalizing deep-learning-based optimization algorithms.
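The sample-then-search pipeline the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's model: the probability vector `probs` here is a hypothetical value-to-weight-ratio heuristic standing in for the output of the learned deep network, and the local search is a plain 1-flip hill climber on a toy 0-1 knapsack instance.

```python
import random

def sample_initial(probs, rng):
    """Sample a 0-1 vector: bit i is set with probability probs[i]."""
    return [1 if rng.random() < p else 0 for p in probs]

def knapsack_value(x, values, weights, capacity):
    """Total value of selection x, or -inf if it exceeds the capacity."""
    if sum(w for xi, w in zip(x, weights) if xi) > capacity:
        return float("-inf")
    return sum(v for xi, v in zip(x, values) if xi)

def local_search(x, values, weights, capacity, max_iters=1000):
    """1-flip hill climbing: keep any single-bit flip that improves the value."""
    best = knapsack_value(x, values, weights, capacity)
    for _ in range(max_iters):
        improved = False
        for i in range(len(x)):
            x[i] ^= 1                        # try flipping item i
            v = knapsack_value(x, values, weights, capacity)
            if v > best:
                best, improved = v, True     # keep the improving flip
            else:
                x[i] ^= 1                    # revert the flip
        if not improved:
            break                            # local optimum reached
    return x, best

# Toy 0-1 knapsack instance (hypothetical data).
values, weights, capacity = [10, 7, 4, 9, 3], [5, 4, 2, 6, 1], 10
rng = random.Random(0)
# Stand-in for the learned distribution: bias sampling towards
# items with a high value-to-weight ratio (capped at 0.9).
probs = [min(0.9, v / w / 3.0) for v, w in zip(values, weights)]
x0 = sample_initial(probs, rng)
x, val = local_search(x0, values, weights, capacity)
```

In the paper's setting, `probs` would instead be produced by the trained deep network from a representation of the problem instance; the sketch only shows the mechanism by which a biased initial sample gives the local search a better starting point than a uniformly random one.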



Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant Nos. 11991023 and 62076197) and the Key Research and Development Project of Shaanxi Province (Grant No. 2022GXLH-01-15).

Author information

Corresponding author

Correspondence to Jianyong Sun.


About this article

Cite this article

Liu, X., Sun, J. & Xu, Z. Learning to sample initial solution for solving 0–1 discrete optimization problem by local search. Sci. China Math. (2024). https://doi.org/10.1007/s11425-023-2290-y
