Abstract
Reinforcement learning for solving graph optimization problems has attracted increasing attention recently. Typically, these models require extensive training over numerous graph instances to develop strategies that generalize across diverse graph types, demanding significant computational resources and time. Instead of tackling these problems one by one, we propose employing transfer learning to utilize knowledge gained from solving one graph optimization problem in solving another. Our proposed framework, dubbed State Extraction with Transfer-learning (SET), focuses on quickly adapting a model trained for a specific graph optimization task to a new but related problem by considering the distributional differences among the objective values of the graph optimization problems. We conduct a series of experimental evaluations on graphs that are both synthetically generated and sourced from real-world data. The results demonstrate that SET outperforms other algorithmic and learning-based baselines. Additionally, our analysis of knowledge transferability provides insights into the effectiveness of applying models trained on one graph optimization task to another. Our study is among the first to explore transfer learning in the context of graph optimization problems.
Notes
- 1. Notably, the efficacy of knowledge transfer extends beyond this specific pair of graph optimization problems and has been observed in several other problem combinations.
- 2. TSP finds the shortest route that visits each node exactly once and returns to the origin.
- 3. MIS involves selecting the largest set of vertices in which no two are adjacent.
- 4.
- 5. We perform sensitivity tests on those parameters and select the best-performing settings as the defaults. The sensitivity tests are omitted for the sake of space.
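For concreteness, the two problems defined in notes 2 and 3 can be sketched with simple baseline solvers: an exact brute-force tour search for TSP (tractable only for very small graphs) and a minimum-degree greedy heuristic for MIS. These functions are illustrative baselines only, not part of the SET framework described in the paper.

```python
from itertools import permutations


def tsp_brute_force(dist):
    """Length of the shortest closed tour visiting every node exactly
    once and returning to node 0 (exact, O(n!) — small graphs only)."""
    n = len(dist)
    best = float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        best = min(best, length)
    return best


def greedy_mis(adj):
    """Heuristic maximum independent set: repeatedly pick a remaining
    vertex of minimum residual degree, then remove it and its neighbors.
    `adj` maps each vertex to the set of its neighbors."""
    remaining = set(adj)
    independent = set()
    while remaining:
        # Tie-break on vertex id to make the heuristic deterministic.
        v = min(remaining, key=lambda u: (len(adj[u] & remaining), u))
        independent.add(v)
        remaining -= adj[v] | {v}
    return independent
```

For example, on a 4-node cycle with unit edge weights, `tsp_brute_force` returns the optimal tour length 4, and `greedy_mis` on the 4-node path graph returns an independent set of size 2, which is optimal for that instance.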
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Hung, HJ., Lee, WC., Shen, CY., He, F., Lei, Z. (2024). Leveraging Transfer Learning for Enhancing Graph Optimization Problem Solving. In: Yang, DN., Xie, X., Tseng, V.S., Pei, J., Huang, JW., Lin, J.CW. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2024. Lecture Notes in Computer Science(), vol 14646. Springer, Singapore. https://doi.org/10.1007/978-981-97-2253-2_27
DOI: https://doi.org/10.1007/978-981-97-2253-2_27
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-2252-5
Online ISBN: 978-981-97-2253-2
eBook Packages: Computer Science (R0)