Equivariant Hypergraph Neural Networks

  • Conference paper
  • In: Computer Vision – ECCV 2022 (ECCV 2022)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13681)

Abstract

Many problems in computer vision and machine learning can be cast as learning on hypergraphs that represent higher-order relations. Recent approaches for hypergraph learning extend graph neural networks based on message passing, which is simple yet fundamentally limited in expressive power and in modeling long-range dependencies. On the other hand, tensor-based equivariant neural networks enjoy maximal expressiveness, but their application to hypergraphs has been limited by heavy computation and strict assumptions on fixed-order hyperedges. We resolve these problems and present Equivariant Hypergraph Neural Network (EHNN), the first attempt to realize maximally expressive equivariant layers for general hypergraph learning. We also present two practical realizations of our framework based on hypernetworks (EHNN-MLP) and self-attention (EHNN-Transformer), which are easy to implement and theoretically more expressive than most message passing approaches. We demonstrate their capability in a range of hypergraph learning problems, including synthetic k-edge identification, semi-supervised classification, and visual keypoint matching, and report improved performance over strong message passing baselines. Our implementation is available at https://github.com/jw9730/ehnn.
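To make the construction concrete, below is a minimal sketch of an EHNN-MLP-style layer: each variable-order hyperedge pools its node features, transforms them with weights generated by a hypernetwork conditioned on the hyperedge order, and scatters the result back to its nodes. This is an illustrative sketch under our own assumptions (an explicit loop over hyperedges, sum pooling, an embedding table over orders), not the authors' implementation; see https://github.com/jw9730/ehnn for the real one.

```python
# Hypothetical sketch of an order-conditioned, permutation-equivariant layer
# in the spirit of EHNN-MLP. Not the authors' code; the class name, pooling,
# and scatter scheme here are illustrative assumptions.
import torch
import torch.nn as nn


class EHNNMLPSketch(nn.Module):
    def __init__(self, dim: int, max_order: int = 8):
        super().__init__()
        # Hypernetwork: an embedding of the hyperedge order |e| is mapped to
        # the weights of a per-order linear transform, so hyperedges of every
        # order share one parameter pool instead of fixed-order weight tensors.
        self.order_emb = nn.Embedding(max_order + 1, dim)
        self.hyper_w = nn.Linear(dim, dim * dim)
        self.node_update = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, x: torch.Tensor, edges: list) -> torch.Tensor:
        # x: [n, dim] node features; edges: list of variable-order hyperedges,
        # each a list of node indices. Sum pooling keeps the layer equivariant
        # to node permutations.
        d = x.size(1)
        out = torch.zeros_like(x)
        for e in edges:
            idx = torch.tensor(e, dtype=torch.long)
            pooled = x[idx].sum(dim=0)                    # nodes -> hyperedge
            w = self.hyper_w(self.order_emb(torch.tensor(len(e))))
            out[idx] += pooled @ w.view(d, d)             # hyperedge -> nodes
        return x + self.node_update(out)                  # residual node update
```

Generating the weights from the hyperedge order, instead of keeping a separate weight tensor per order, is what lets a single layer process hyperedges of arbitrary order; the embedding table above is a simplification, and a hypernetwork over a continuous encoding of the order could extrapolate to orders unseen at training time. EHNN-Transformer follows the same recipe with self-attention in place of plain sum pooling.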

Notes

  1. Higher-order tensors can in principle represent directed hypergraphs as well; we constrain them to be symmetric to specifically represent undirected hypergraphs (see the toy sketch after these notes).

  2. Note that we do not share the hypernetworks across different layers.
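As a toy illustration of the first note, the snippet below materializes an undirected hyperedge set as a symmetric order-3 tensor: every permutation of a hyperedge's nodes receives the same entry, so no direction is encoded. The helper is hypothetical, not taken from the paper's code; the O(n^k) memory of such dense tensors is precisely the cost that motivates EHNN's sparse formulation.

```python
# Toy illustration of note 1: undirected hyperedges as symmetric tensors.
# symmetric_adjacency is a hypothetical helper, not from the paper's code.
from itertools import permutations

import torch


def symmetric_adjacency(n: int, hyperedges, order: int) -> torch.Tensor:
    A = torch.zeros((n,) * order)
    for e in hyperedges:
        assert len(e) == order, "fixed-order tensor: all hyperedges of size `order`"
        for p in permutations(e):  # symmetrize: every node ordering gets the entry
            A[p] = 1.0
    return A


A = symmetric_adjacency(4, [(0, 1, 2)], order=3)
assert A[0, 1, 2] == A[2, 0, 1] == 1.0  # entry is invariant to node ordering
```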

Acknowledgement

This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) (No. 2021-0-00537, 2019-0-00075, and 2021-0-02068), the National Research Foundation of Korea (NRF) (No. 2021R1C1C1012540 and 2021R1A4A3032834), and the Korea Meteorological Administration Research and Development Program "Development of AI Techniques for Weather Forecasting" under Grant KMA2021-00121.

Author information

Corresponding author: Seunghoon Hong.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 381 KB)

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Kim, J., Oh, S., Cho, S., Hong, S. (2022). Equivariant Hypergraph Neural Networks. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13681. Springer, Cham. https://doi.org/10.1007/978-3-031-19803-8_6

  • DOI: https://doi.org/10.1007/978-3-031-19803-8_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19802-1

  • Online ISBN: 978-3-031-19803-8

  • eBook Packages: Computer Science (R0)
