Abstract
Local search methods are convenient alternatives to exact algorithms for solving discrete optimization problems (DOPs). These easy-to-implement methods can find approximate optimal solutions within a tolerable time limit. It is well known that the quality of the initial solution greatly affects the quality of the approximate solution found by a local search method. In this paper, we propose to treat the initial solution as a random variable and learn a preferable probability distribution for it. The aim is to sample a good initial solution from the learned distribution so that the local search can find a high-quality solution. We develop two different deep network models for DOPs defined on sets (the knapsack problem) and on graphs (the maximum clique problem), respectively. The deep neural network learns a representation of an optimization problem instance and maps this representation to a probability vector. Experimental results show that, given an initial solution sampled from the learned probability distribution, a local search method acquires much better approximate solutions than it does from a randomly sampled initial solution on synthesized knapsack instances and Erdős-Rényi random graph instances. Furthermore, with sampled initial solutions, a classical genetic algorithm achieves better solutions on the maximum clique problem over DIMACS instances than it does with a randomly initialized population. In particular, we emphasize that the developed models generalize across problem dimensions and across graphs of various densities, which is an important advantage for generalizing deep-learning-based optimization algorithms.
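To make the pipeline concrete, the following is a minimal sketch in Python (our own illustration, not the authors' code) of how a learned probability vector would be used on the knapsack problem: each bit of the initial solution is sampled from a Bernoulli distribution with the learned probability, repaired to feasibility, and then handed to a simple 1-flip local search. The deep network that produces the probability vector is omitted; `p_learned`, the `repair` rule, and the search routine are hypothetical placeholders chosen for illustration.

```python
import numpy as np

def sample_initial_solution(p, rng):
    """Draw x_i ~ Bernoulli(p_i) independently for each item."""
    return (rng.random(len(p)) < p).astype(int)

def repair(x, values, weights, capacity):
    """Drop the lowest value-density items until the capacity constraint holds."""
    x = x.copy()
    for i in np.argsort(values / weights):  # worst density first
        if weights @ x <= capacity:
            break
        x[i] = 0
    return x

def one_flip_local_search(x, values, weights, capacity):
    """First-improvement 1-flip local search starting from a feasible x."""
    x = x.copy()
    best = values @ x
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            x[i] ^= 1  # tentatively flip bit i
            if weights @ x <= capacity and values @ x > best:
                best = values @ x  # keep the improving flip
                improved = True
            else:
                x[i] ^= 1  # revert the flip
    return x

rng = np.random.default_rng(0)
n = 50
values = rng.uniform(1.0, 100.0, n)
weights = rng.uniform(1.0, 100.0, n)
capacity = 0.5 * weights.sum()

# p_learned stands in for the output of the learned model (not shown here);
# a uniform 0.5 vector reproduces the random-initialization baseline.
p_learned = np.full(n, 0.5)
x0 = repair(sample_initial_solution(p_learned, rng), values, weights, capacity)
x_star = one_flip_local_search(x0, values, weights, capacity)
print("knapsack value:", values @ x_star)
```

In the paper's setting, replacing the uniform `p_learned` with the model's output is what biases the sampled starting points toward high-quality regions of the search space; the same sampled solutions can likewise seed the initial population of a genetic algorithm.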
Acknowledgements
This work was supported by the National Natural Science Foundation of China (Grant Nos. 11991023 and 62076197) and the Key Research and Development Project of Shaanxi Province (Grant No. 2022GXLH-01-15).