
Optimal Node Embedding Dimension Selection Using Overall Entropy

  • Conference paper
  • In: Artificial Neural Networks and Machine Learning – ICANN 2023 (ICANN 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14262)


Abstract

Graph node embedding learning has gained significant attention with the advancement of graph neural networks (GNNs). The essential purpose of graph node embedding is to map high-dimensional graph features to a lower-dimensional space while retaining as much of the original structural information as possible. This paper focuses on selecting an appropriate node embedding dimension for hidden layers, one large enough to represent node information effectively yet small enough to prevent overfitting. We propose an algorithm based on the entropy minimization principle, called Minimum Overall Entropy (MOE), which combines the structural and attribute information of graph nodes. We take one-dimensional and multi-dimensional structural entropy (MDSE) as measures of a graph's structural entropy, and propose a novel algorithm that combines graph Shannon entropy, MDSE, and prior knowledge to accelerate convergence to the optimal MDSE. We also introduce attribute entropy, an inner product-based metric that quantifies node characteristics, and simplify its calculation. Extensive experiments on the Cora, Citeseer, and Pubmed datasets show that MOE, requiring just one computation round, surpasses baseline GNNs.

Supported by the National Key R&D Program of China (No. 2022YFF0503900) and the Key R&D Program of Shandong Province (No. 2021CXGC010104).
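To make the quantities in the abstract concrete, below is a minimal Python sketch of entropy-guided dimension selection. The one-dimensional structural entropy follows the standard definition of Li and Pan, H¹(G) = −Σᵢ (dᵢ/2m) log₂(dᵢ/2m), where dᵢ is the degree of node i and m the number of edges. The `attribute_entropy` and `select_embedding_dim` functions are illustrative assumptions only, not the paper's actual MOE algorithm: the paper's MDSE computation and its rule for combining structural and attribute entropy are more involved than this simple sum.

```python
import numpy as np

def structural_entropy_1d(adj: np.ndarray) -> float:
    """One-dimensional structural entropy of an undirected graph
    (Li and Pan, 2016): H1(G) = -sum_i (d_i / 2m) * log2(d_i / 2m)."""
    degrees = adj.sum(axis=1)
    two_m = degrees.sum()                     # equals 2m for an undirected graph
    p = degrees[degrees > 0] / two_m          # stationary degree distribution
    return float(-(p * np.log2(p)).sum())

def attribute_entropy(features: np.ndarray, dim: int, seed: int = 0) -> float:
    """Illustrative stand-in for the paper's inner product-based attribute
    entropy: randomly project features to `dim` dimensions, turn pairwise
    inner products into a probability distribution, and take its Shannon
    entropy. The paper's exact formula differs."""
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((features.shape[1], dim)) / np.sqrt(dim)
    z = features @ proj
    z /= np.linalg.norm(z, axis=1, keepdims=True) + 1e-12  # bound inner products
    sims = np.exp(z @ z.T)                    # positive scores from inner products
    p = (sims / sims.sum()).ravel()
    return float(-(p * np.log2(p)).sum())

def select_embedding_dim(adj, features, candidates=(16, 32, 64, 128, 256)):
    """Toy MOE-style selection: score each candidate dimension by a combined
    'overall entropy' and keep the minimizer. Here only the attribute term
    varies with the dimension; in the paper the structural term (MDSE) also
    enters the optimization."""
    h_struct = structural_entropy_1d(adj)
    scores = {d: h_struct + attribute_entropy(features, d) for d in candidates}
    return min(scores, key=scores.get), scores
```

A quick usage example on synthetic data (hypothetical shapes, chosen only for illustration):

```python
rng = np.random.default_rng(42)
A = (rng.random((50, 50)) < 0.1).astype(float)
A = np.triu(A, 1); A = A + A.T                # symmetric adjacency, no self-loops
X = rng.standard_normal((50, 300))            # toy node attributes
best_dim, scores = select_embedding_dim(A, X)
```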



Author information

Corresponding author: Shan Jiang.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Xu, X., Ding, Z., Wu, Y., Yan, J., Jiang, S., Cui, Q. (2023). Optimal Node Embedding Dimension Selection Using Overall Entropy. In: Iliadis, L., Papaleonidas, A., Angelov, P., Jayne, C. (eds) Artificial Neural Networks and Machine Learning – ICANN 2023. ICANN 2023. Lecture Notes in Computer Science, vol 14262. Springer, Cham. https://doi.org/10.1007/978-3-031-44201-8_10


  • DOI: https://doi.org/10.1007/978-3-031-44201-8_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44200-1

  • Online ISBN: 978-3-031-44201-8

  • eBook Packages: Computer Science, Computer Science (R0)
