Abstract
Graph convolutional networks (GCNs) suffer from the over-smoothing problem, which forces most current GCN models to remain shallow. A shallow GCN can exploit only a small fraction of the nodes and edges in the graph, which leads to over-fitting. In this paper, we propose a semi-supervised training method that addresses this problem and greatly improves the performance of GCNs. First, we propose an integrated data augmentation framework that performs effective augmentations on graph-structured data. We then introduce a consistency loss, an entropy minimization loss, and a graph loss to help the GCN make full use of unlabeled nodes and edges, alleviating the model's excessive dependence on labeled nodes. Extensive experiments on three widely used citation datasets demonstrate that our method achieves state-of-the-art performance on semi-supervised node classification. In particular, we obtain \(85.52\%\) accuracy on Cora with the public split.
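The unlabeled-node objectives mentioned in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's exact formulation: the function names, the sharpening temperature, and the squared-error form of the consistency term are assumptions made for clarity.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over class logits.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def consistency_loss(probs_list):
    """Penalize disagreement among predictions for S augmented views.

    Averages the per-view class probabilities, sharpens the average
    (temperature 0.5, i.e. squaring then renormalizing), and measures
    each view's squared distance to that sharpened target.
    """
    avg = np.mean(probs_list, axis=0)
    sharpened = avg ** 2 / (avg ** 2).sum(axis=-1, keepdims=True)
    return float(np.mean([(p - sharpened) ** 2 for p in probs_list]))

def entropy_min_loss(probs, eps=1e-12):
    """Encourage confident (low-entropy) predictions on unlabeled nodes."""
    return float(-np.mean(np.sum(probs * np.log(probs + eps), axis=-1)))
```

In a full training loop, these terms would be weighted and added to the supervised cross-entropy on labeled nodes; the `probs_list` entries would come from forward passes of the GCN over differently augmented versions of the same graph.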
Acknowledgement
This research is partially supported by the Ministry of Science and Technology, China (No. 2019YFB1311503) and the Committee of Science and Technology, Shanghai, China (No. 19510711200).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Tang, S., Tu, E., Yang, J. (2023). Boosting Graph Convolutional Networks with Semi-supervised Training. In: Tanveer, M., Agarwal, S., Ozawa, S., Ekbal, A., Jatowt, A. (eds) Neural Information Processing. ICONIP 2022. Lecture Notes in Computer Science, vol 13623. Springer, Cham. https://doi.org/10.1007/978-3-031-30105-6_45
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-30104-9
Online ISBN: 978-3-031-30105-6